Embedded - 229: Slinky with a Lot of Math
Episode Date: January 12, 2018
Nick Kartsioukas (@ExplodingLemur) spoke with us about information security, melting down spectres, lemurs, and sensible resolutions. Nick recommends Aumasson's Serious Cryptography (also available from NoStarch) as a good orientation. (Offline, he also recommended Schneier's Secrets and Lies.) When thinking about security, you need to develop your threat model (EFF) and not panic (Mickens). As a user of the internet, there are some getting-started guides (Motherboard, EFF, Smart Girl's Guide to Privacy) along with Nick's advice of using an antivirus program (comparison), an ad blocker (uBlock), a password manager, and two-factor authentication. Data backups are also very useful (3-2-1 rule: 3 copies, 2 separate media, 1 offsite). For a professional infosec perspective, the CIS 20 are best-practice guidelines for computer security. For Spectre and Meltdown, the best high-level explanation is on Twitter from @gsuberland, though XKCD does its usual good job as well. For more detail about speculative execution bugs, check out this GitHub readme. For the history of Stuxnet, check out Zetter's Countdown to Zero Day and the Security Now podcast episode 291. Ham radio Field Days for 2018 are June 23-24. Last but not least: depression lies, so get help, and if you want to know how to help someone else, look at MakeItOk.org.
Transcript
Welcome to Embedded.
I am Alicia White, here with Christopher White.
Our guest is Nick Kartsioukas, here to talk about information security.
Hi, Nick. Thanks for being on the show with us today.
Hello. Thanks for having me here.
Could you tell us a little bit about yourself?
Sure. So I am an information security engineer that focuses on infrastructure stuff.
My background is in Linux system administration and network engineering,
which I've been doing just about half of forever, started back in high school.
Kind of have always been interested in the security field, been going to the DEF CON security conference for many years, and then sort of fell into a security engineering role
about three years ago when I was on loan to a security team as a networking expert.
So that's my career thing.
At home, I play with electronics and microcontrollers,
ham radio stuff, and RC aircraft.
And I have a serious problem collecting single-board computers
and software-defined radio peripherals.
Yes, this is funny, because when I mentioned that the MeArm was on sale at Hackaday,
Nick got one, and I knew that he didn't really want one, but he had to,
because, gosh, they were cheap and cool.
Indeed. It's sitting on my shelf, and actually I bought two.
What kind of attack is that, where you take advantage of somebody's weaknesses?
Hoarding instincts?
Hoarding by proxy.
Yeah.
Yeah.
And you do listen to this show at least sometimes, right?
Yes.
So, you're familiar with Lightning Round?
I am.
That's too bad, because that's not what we're going to do.
Oh, no. No. We do have short questions and we do want short answers, but they're thematically different this time.
Apparently this is lemur round. This is Lightning Lemur Round. Yes.
Oh dear. Christopher, do you want to go first?
I'm afraid.
Are you afraid you don't know the answers?
I don't know the answers, but that's fine. Where do lemurs live?
Madagascar.
That is correct.
I know because of the movie.
Is the etymology of the word lemur in Latin related to monkeys, ghosts, or giant eyes?
Yes.
They're called spirits of the forest. Let me give you a hint here.
Then I guess I would go with ghosts.
Yes.
Boy oh boy. What animal was the first in the lemur taxonomy: the ring-tailed lemur, the O'Reilly book lemur (that's not an actual lemur, it's actually a tarsier), or giant sloths?
See, you should get this. This was an easy question.
I'm going to go with a ring tail.
That is true.
Which of these are not a lemur?
An aye-aye, an indri, a mouse lemur, or a silky sifaka?
Mouse lemur?
Nope.
Trick question.
They're all lemurs.
Damn it.
What do you mean?
True or false?
Their tails are longer than their bodies.
Oh, you guys are mean to me.
I don't know.
That one's true.
Okay, this is the last one.
If you were on Jeopardy
and the category was coins and the answer was mouse lemur, what would the question be?
This is impossible.
Why are you doing this?
She's always wanted to be a game show host.
This is the closest she's going to get.
Oh, no, I laughed too much about it.
I forgot what the question was.
Something about Jeopardy.
Coins and mouse lemurs.
What upon...
Oh no, how do you answer Jeopardy questions as a question?
Right. Can I have a lightning round question, please?
A tip everyone should know.
Always, no, no, never forget to check your references.
I think the kids like it when I get down verbally, don't you?
Yes, yes.
Yes, yes. All right.
See, I thought you were just going to ask questions about Limor Fried.
You know, if I'd thought of that, it totally would have happened.
What is it that you do?
I mean, what is an information security person,
what is a day in the life of Nick like?
I feel like I'm talking to the Bobs.
What would you say you do here?
Like a double rainbow.
As an aside, if you guys know the Star Trek Next Generation episode, Darmok.
Yes.
My friends and I communicate in that way, but in movie quotes.
So, anyway.
What is the usual day in the life of Nick?
Lots of emails,
lots of meetings, lots of consultation tickets. Usually what I do is my team fields questions
from other teams within the company about how to do things securely as far as shipping data back and forth, getting services to talk to each other,
authenticating users and services to each other,
connecting things to the network, that sort of thing.
We also review questions about company security policy and standards
and exception requests to those policies to see if the request makes sense from
both what they're asking for and if it's acceptable from a risk management standpoint.
So are you mostly behind the scenes then, not so much getting involved if there's an emergency,
or do they call you when there's some sort of attack in progress as well?
No, I don't work on the incident response side.
That sounds good.
And you work for a fairly large company.
I know we're not going to say which company
because you are here representing yourself, not them.
But it is a big company that has internal stuff
and external stuff and customer stuff.
And so it sounds like you mostly work with the internal teams working with each other
on how they're going to deal with customers. Is that correct?
So either the teams producing services that are customer facing or services
that are consumed by others within the company.
So we have a lot of internal company services for all kinds of different things,
just from the basic email and patching services to build services,
code storage, that sort of thing.
Okay.
I want to ask you specific questions, but I don't know.
Can you give me an example that I can ask questions about? It doesn't really have to be real.
Sure. An example would be a service that is transferring data to a third party, or how a customer would interface with a service and how their data would be handled internally,
what sort of protections would be around it,
what kind of authentication methods would be used
for the customer to authenticate to that service
to then do whatever it is they're going to do.
And then the nature of the data that's being presented to the customer,
if we need to have any particular protections around that or requirements
around customer authentication to see that data.
Okay.
And, yes, this is the sort of example I want.
Okay.
So we have customers and they have a bunch of different data and some of that data can go to third parties and some of it can't.
And you definitely should ask the customer before you do these things because that's just privacy 101.
But where does security come in here?
Is it just a matter of customers changing their password or is there some backend thing that we should know more about?
So there are a lot of backend services
that deal with storage of customer data,
whether the type of the data it is,
whether it's being stored encrypted at rest or not,
whether or not it's being encrypted in transit.
And for exchanging data with third parties, there are additional security reviews we do
of those third parties to make sure they adhere to the same security policies and standards
that we do as far as treatment of data, be it customer or corporate internal data or IP, that sort of thing.
How do you learn these things?
I mean, it's got to be a pattern because you're going to have these recommendations for the third parties.
And it's going to be things like don't store passwords in plain text, but a lot more detail than that.
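A minimal sketch of that one practice: store a salted, slow hash instead of the password itself. This assumes OpenSSL's PBKDF2 is available; the iteration count, salt size, and output length are illustrative choices, not a recommendation.

```c
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

int main(void) {
    const char *password = "correct horse battery staple";
    unsigned char salt[16], hash[32];

    /* A unique random salt per user, stored next to the hash. */
    RAND_bytes(salt, sizeof(salt));

    /* Derive the stored value with many iterations so brute force is slow.
       200000 iterations of HMAC-SHA256 is just an example figure. */
    PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                      salt, sizeof(salt),
                      200000, EVP_sha256(),
                      sizeof(hash), hash);

    for (size_t i = 0; i < sizeof(hash); i++)
        printf("%02x", hash[i]);
    printf("\n");
    return 0;
}
```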
How do you learn what the best practices are?
So there are a number of guides on what we consider to be appropriate treatment of data and, you know, kind of how we view customer data. Some of the other guides out there, there's the, what is it, the CIS 20 critical security controls, that involve things like, you know, make sure you know what's on your network, make sure you are authenticating users appropriately, those guidelines. And you can take those and sort of drill down and apply them to specific services and products and such within your own environment.
I still feel like it's a bit of a wall. What are the security best practices?
Can you just sum that up in like 10 minutes?
So it depends.
It depends on
what sort of data you're handling.
For example, if you are just handling, say, comments for a web page or a web forum or something that's a public forum, that's going to be vastly different than the kind of practices you would have in place if you're handling payment instruments, um, like credit card numbers and such.
So where you start out is classifying your data and what you're doing with it, and identifying what sort of threats you would expect to your data.
So getting together what's called a threat model,
basically figuring out what your risks are,
what your acceptance of risk is,
and those will factor into how in-depth
you want to go into security controls,
whether you want to say all the things must be encrypted all the time on
everything. And in order to access it,
you have to go through the Maxwell Smart-style eight doors,
one of which slams on you and then enter a secret key into the thing and drop
down into the cone of silence where it's whispered to you in another language versus just, you know,
it's posted on a webpage for all to see.
Yeah, you sent me a link to the EFF assessing your risks page.
And it was pretty interesting because it goes through, well, what are you protecting?
How likely is it other people will want it?
Who are you protecting it from?
Is it okay if casual people get it or guests or family members?
And then there's the, how much trouble am I willing to go through to prevent any consequences should I fail to protect it? And so it's a good way of categorizing what you're doing,
even if it isn't tactical advice.
It isn't a, you should do this with it.
It's a, why are you bothering sort of questions,
the assessing the risks part.
So just to jump in for a second,
here's one of the problems I have with security,
and it's very confusing to me.
When working on medical stuff, we used to do a risk analysis.
So it would be this big spreadsheet, and we'd have a device,
and we'd say, okay, these are the things that can go wrong.
This would be the effect on the patient.
And then we'd have another column.
This is how we're mitigating it.
And you'd have to give this to the FDA and say,
look, we've thought about these risks. Here's how severe we think they are. And here's how we are either,
here's how they're either not a problem, or here's what we're doing to make sure they're not a
problem. Stuff with security information kind of feels nebulous to me because there's all these
attacks. You might think that a customer's data is not interesting. Like, let's say it's
step data from a track, you know, a fitness tracker or something, or just some very mundane
kind of information. But I've seen a lot of attacks where people can synthesize, with enough of that data, private information.
Same thing with web surfing.
Like, oh, I don't need to see your bookmarks, but I can glean from, I don't know, something your browser's doing that seems completely unrelated and produce a way to identify you personally.
This is like when Target knows you're pregnant before you know you're pregnant?
Yeah, I don't even know if that particular case was true,
but it's this kind of, we're going to apply all kinds of inference to boring data.
So I guess my question is, where can you draw the line anymore?
So the matrix that you mentioned actually is used in information security as well for risk analysis.
So in terms of, let's say there's an announcement of an exploit against some software operating system, whatever.
So the way you would look at that is what is the likelihood of that exploit being
exploited? And what is the level of impact if it is exploited? And then based on those,
you look at your mitigations. You can say, okay, I'm going to patch that. I'm going to reduce access to that service to reduce the likelihood of exploit.
I'm going to reduce the data set that's available to that service to reduce the impact.
Or it may just be that, whatever this is, the company can say we accept the risk of this. As long as that's documented somewhere, and they have a good understanding of what it is they're accepting the risk of, that may be a valid business choice.
When you talk about inference of data from lesser data sets that gets into,
I think the likelihood of exploit category where it is possible to do those sorts of things.
The question is, how much effort does it take to do it? Is it worth it to an attacker to put in that level of effort to get that data?
And then, you know, how much is that going to be worth to them after spending that effort?
And what is going to be the exposure to any of the owners of that data after it's been, you know, de-anonymized or whatnot in that context?
Okay, that makes sense.
So you can do a real risk analysis with that
based on an assessment of the value of the attack.
Yep.
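A toy version of that likelihood-times-impact bookkeeping, just to show the shape of it. The scales, entries, and threshold below are made up for illustration; real risk registers are bigger and messier.

```c
#include <stdio.h>

/* Toy risk register: score = likelihood x impact, both on a 1-to-5 scale. */
struct risk {
    const char *name;
    int likelihood;   /* 1 = rare, 5 = near certain */
    int impact;       /* 1 = negligible, 5 = severe */
};

int main(void) {
    struct risk items[] = {
        { "Unpatched service reachable from the internet", 4, 4 },
        { "De-anonymization of boring telemetry data",     2, 3 },
        { "Lost laptop with full-disk encryption",         3, 1 },
    };
    const int act_threshold = 8;   /* above this: mitigate; below: document and accept */

    for (size_t i = 0; i < sizeof items / sizeof items[0]; i++) {
        int score = items[i].likelihood * items[i].impact;
        printf("%-46s score %2d -> %s\n", items[i].name, score,
               score >= act_threshold ? "mitigate (patch, reduce access or data)"
                                      : "document and accept");
    }
    return 0;
}
```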
Okay, so there's always been this don't roll your own crypto,
which I totally believe in.
Hey, rot 13 forever.
Twice.
But how do I even know what crypto is out there?
Do you have a good reference for figuring out what sorts of crypto will work in which scenarios?
I mean, I can't use the latest super-duper thingy,
Crypto Bob, in my embedded system,
but I don't want to just use ROT13.
There's got to be something in between.
Yep.
Embedded systems are particularly difficult with crypto.
I've seen this with little embedded network infrastructure devices
where as new versions of TLS are developed,
TLS is what SSL kind of grew up into.
There are harder and harder crypto operations
for the new ciphers that are released.
And then best practice is to disable many of the older versions of TLS
and disable the weaker ciphers because maybe there are weaknesses in them
or they're just, you know, they're not of a level of security
that is sufficient for whatever you're doing. And trying to do elliptic curve Diffie-Hellman on an eight-bit micro is just not going to happen.
I guess I wasn't really providing any info, just agreeing with you.
Do you have a book, a book, a starter book?
So there is a book that just came out that I started reading through,
which is Serious Cryptography by Jean-Philippe Aumasson.
And that is published by No Starch Press.
And it kind of goes over the fundamentals of cryptography.
And when you think about crypto, there is a lot of math involved.
Generally, if you are taking an existing implementation or some reference design,
you don't need to understand every little internal bit of the machine. As far as like,
you know, for example, AES has a bunch of internal working parts. It's an encryption algorithm that
does a bunch of stuff. I don't know all that bunch of stuff. I know the different modes that it can
operate in, such as cipher block chaining mode or different streaming modes. Knowing those operations and when they're best used is what you need to know as an implementer.
And so a book like this, Serious Cryptography, would kind of go into that and describe, you know, these are different modes of encryption. These are when you would use symmetric crypto. This is when you would use asymmetric or public-private key cryptography. This is when you would use these stream ciphers or block ciphers, that sort of thing.
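In that spirit of using an existing implementation rather than reimplementing the internals, here is a minimal sketch of authenticated symmetric encryption with AES-256-GCM through OpenSSL's EVP interface. Error checking is omitted and the key handling is purely illustrative; a real system gets its key from a KDF or key store and must never reuse a nonce with the same key.

```c
#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

int main(void) {
    unsigned char key[32], iv[12], tag[16];
    unsigned char plaintext[] = "telemetry record 42";
    unsigned char ciphertext[sizeof(plaintext)];
    int len = 0, total = 0;

    RAND_bytes(key, sizeof(key));   /* illustrative only */
    RAND_bytes(iv, sizeof(iv));     /* GCM nonce: must be unique per key */

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, ciphertext, &len, plaintext, sizeof(plaintext));
    total = len;
    EVP_EncryptFinal_ex(ctx, ciphertext + total, &len);
    total += len;
    /* GCM also produces an authentication tag; store it alongside the ciphertext. */
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, sizeof(tag), tag);
    EVP_CIPHER_CTX_free(ctx);

    printf("encrypted %d bytes, 16-byte tag attached\n", total);
    return 0;
}
```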
Okay. That is sort of what I wanted to know. Yes.
On the other hand, I feel like all of crypto and security is kind of pointless. I mean, I don't really want to say that, because I know that's a lot of your job, and I'm...
That's not good. But it's also false.
That's good.
Every time I hear about these exploits, like this Spectre and Meltdown, these processor-level exploits there's no way we're going to repair, I'm just like, God, there's nothing I can do. There's no winning. Even Alice in Wonderland and the Red Queen, running constantly but getting nowhere.
I'm still falling way behind.
Help me.
Stop reading the news.
Yes.
So, first off, don't panic.
There is a lot of fear, uncertainty, and doubt in headlines relating to security news.
Don't panic.
There are always going to be exploits discovered.
It's always going to be a cat-and-mouse game. With the Meltdown and Spectre exploits, we're kind of paying for assumptions that were made in computing a long time ago that have been carried forward. Which is: you are the only person that is going to be running code on this processor
and it will always be under your complete control.
And if you look at just a web browser,
you're running JavaScript, plugins,
et cetera.
That's all stuff fed to the browser from some third party that you have no
control over.
And the browser is just happily executing away.
So just, yeah, it sounds kind of like from the way I'm describing it,
we should be panicking, but don't.
There are things you can do to mitigate issues.
As an end user, a couple of the best things you can do are use a password manager and use two-factor authentication.
A password manager gives you randomly generated strong passwords that are different across multiple sites.
And just use one strong password to log into your password manager.
Use a two-factor authentication token or app or SMS code
to help prevent that password from being phished or otherwise captured.
And that will go a long way to mitigating end user risks, at least on your own machine at home.
I like that. One of my New Year's resolutions last year was to get a password
manager. And I did, and I became a convert. And I started talking about it and trying to
convert everybody else. I thought having a password manager would be annoying
because I'd have to change all my passwords,
which were many of them the same and really not that great.
But now I have stronger passwords and they aren't the same.
And I have this feeling that if you use the same password
between different sites, it's sort of like reusing the same Kleenex from different people, which is just horrifying.
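The randomly generated part is the easy bit to sketch. This assumes OpenSSL's RAND_bytes as the random source, and it ignores the small modulo bias from mapping bytes onto the character set, so treat it as an illustration rather than something to ship.

```c
#include <stdio.h>
#include <openssl/rand.h>

int main(void) {
    /* Character set and length are arbitrary choices for the example. */
    static const char charset[] =
        "abcdefghijklmnopqrstuvwxyz"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "0123456789!@#$%^&*-_=+";
    unsigned char raw[24];
    char password[sizeof(raw) + 1];

    RAND_bytes(raw, sizeof(raw));   /* cryptographically strong randomness, not rand() */
    for (size_t i = 0; i < sizeof(raw); i++)
        password[i] = charset[raw[i] % (sizeof(charset) - 1)];   /* -1 skips the trailing NUL */
    password[sizeof(raw)] = '\0';

    printf("%s\n", password);
    return 0;
}
```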
And I have some two-factor authentication, but then I hear that password managers are being hacked.
Wow.
Wow.
It's not like, you know, credit union or credit.
Anyway, do I have to worry about that too? Or can I just say, look, I've done a good job. I actually paid for one. So it's not like I'm just using the worst one and I didn't write my own. So I don't have to support it. Is that good enough, really? I mean, is password manager third party good enough?
For most people, I think so.
Again, it depends on your threat model.
If you are, say, Edward Snowden,
your threat model is going to be slightly different.
There's a paper written kind of about threat modeling and threat assessment, written sort of in an amusing style by James Mickens. It's called This World of Ours.
And at one point it says, well, you're either dealing with Mossad or not-Mossad. If your adversary is not Mossad, then you'll probably be fine if you pick a good password and don't respond to emails from cheapest-pain-pills at virus-basket.biz.ru. If your adversary is the Mossad, you're gonna die and there's nothing that you can do about it.
I think that's an extreme way to look at it, but it is a good way to look at it. And I think one of the things people don't do enough is analogize their normal physical security with computer security, because we have houses with locks and windows. You could have a house without windows and it would be a lot more secure. There are all kinds of exploits to get into a house, but we do say, this is the limit. Given our, like you said, our threat model, that there aren't people trying to get into our house every single day to kill us or steal things, then okay, this is as far as I need to go, to prevent casual people.
So finding the right balance is probably a little trickier with computer security.
And also on that same analogy, there are things you can do with your house where you make it a less attractive target to, say, a burglar casing your neighborhood.
You have a security camera up.
You have a dog at home that barks a lot at
anybody approaching the house. You have little signs that say this house protected by whatever
security company. Um, when somebody sees that and then they see a house down the street that
has its side window open all day long, when people go off to work,
they're going to pass up your house and go to that other easier target.
If you're using two-factor authentication,
it's going to make phishing your passwords a whole lot harder and your accounts are just going to be passed over.
Okay. Two-factor authentication. I'll do more of that.
I don't have it set up on everything. I'll use ten-factor authentication.
Should we talk about why it's called two-factor authentication
with the whole biometrics and what you have and what you are and what you know?
Yep.
Sure. Go ahead.
So, yeah, there are the various factors. There's something you know, which is usually a password. Something you have, which can be a security token, like a USB plug-in thing or an app on your phone. And then there's something you are, which is your thumbprint, a biometric scan of your eye, your face, your DNA, Christopher's face, your voice.
Hi, my name is Werner Brandes.
Again, movie references. So usually, the way I do things on my phone is I have a strong unlock PIN that I use all the time to unlock my phone.
And then I have my password manager on my phone that requires an additional verification step that requires me to use Touch ID to unlock the password manager.
So that's sort of the way you can go, having a second factor to authenticate yourself to a service. One of the issues I have with biometrics is, if your token is compromised, it's very hard to replace your fingerprints. Depending on the implementation
of the biometric verification system, it might not be storing that data in a secure way.
It might not be storing the data in a way that keeps collisions from happening. For example, at a previous job, I had used this
biometric hand scanner where you would punch in a pin,
which was basically not really the
something you know token. It was just essentially your username.
And then you would put your hand in the scanner. And so it was
one-factor biometric authentication.
And, uh, heard a talk at DEF CON several years ago, um,
of a guy that had this and was playing around with it at his house and his
neighbor came over and, uh,
the guy was setting up his neighbor to watch his plants for the weekend or
something. And his neighbor had punched in the guy's PIN instead of his own, scanned his hand, and unlocked the door, because that particular biometric scanner only registered like nine points of data for your hand geometry. So that's, you know, not really enough.
So that suggests that people should, if they use biometrics, be aware of the implementation to some degree.
Yes, definitely.
Things like the iPhone Face ID and Touch ID.
I'm not as familiar with the Face ID stuff yet.
I've looked into Touch ID,
and the way it stores the data in the secure enclave is rather nice.
Yeah, I think Face ID, it stores it in the same place.
But whether it's
less or more of a one-way kind of thing. Yeah, and the way it registers
facial geometry, that sort of thing, would be interesting to look at.
Because I have heard about collisions.
Anyway, so yeah, I can't open Christopher's phone anymore with my face. But we haven't tried it on your brother. I would be interested in seeing.
I don't think it'll work on him. But have you tried wearing a Christopher...
So that's biometrics.
The something you have, there are security tokens.
Yubico makes the YubiKey. Oh, yes.
I used to have one of those and then it would have a different number all the time.
And I'd have to type in the number before it changed.
And then after a little while, they would get out of sync and it'd be very
exciting.
Yeah.
That was a good introduction to time syncing problems.
So there are the ones that show a little number on them.
Those are time-based.
There are actually also ones that are just counter-based that have the little display on them.
Those have sync problems if you have it in your pocket and the button gets pressed a bunch of times.
It gets way ahead of what the server is expecting your next token to be.
So, yeah, those are things you have.
There are also USB ones that you plug in and they will generate one-time passwords. Or there's Universal 2nd Factor, or U2F, which actually does crypto hand-wavy stuff, because I haven't looked into the implementation details of U2F.
But it performs a secure negotiation to authenticate you to the service after that, between this little secure enclave on your USB drive and whatever you're connecting to. There's also the time-based
one-time code
apps for your phone, like Google Authenticator
and the like.
Those are pretty good.
Again,
if your threat model is
somebody putting malware on your phone
and then sniffing the memory
of that application and getting all your
OTP keys, that could be a problem.
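For the curious, those time-based codes are not magic. Roughly per RFC 6238, the app computes HMAC-SHA1 over a 30-second counter derived from the clock and truncates the result to six digits. A sketch using OpenSSL's HMAC; the hard-coded secret is the RFC test key, purely for illustration (real apps store the base32 secret the site hands you).

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated. */
static unsigned totp(const unsigned char *key, size_t key_len, time_t now, unsigned digits) {
    uint64_t counter = (uint64_t)now / 30;        /* 30-second time step */
    unsigned char msg[8];
    for (int i = 7; i >= 0; i--) {                /* big-endian counter */
        msg[i] = counter & 0xff;
        counter >>= 8;
    }

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;
    HMAC(EVP_sha1(), key, (int)key_len, msg, sizeof(msg), digest, &digest_len);

    unsigned offset = digest[digest_len - 1] & 0x0f;      /* dynamic truncation */
    uint32_t code = ((uint32_t)(digest[offset] & 0x7f) << 24) |
                    ((uint32_t)digest[offset + 1] << 16) |
                    ((uint32_t)digest[offset + 2] << 8)  |
                     (uint32_t)digest[offset + 3];

    uint32_t mod = 1;
    for (unsigned i = 0; i < digits; i++) mod *= 10;
    return code % mod;
}

int main(void) {
    const unsigned char key[] = "12345678901234567890";   /* RFC 6238 test secret */
    printf("%06u\n", totp(key, sizeof(key) - 1, time(NULL), 6));
    return 0;
}
```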
And then the...
The most popular one is SMS, right?
Yep.
I mean, that's what I get.
Everybody that wants me to do two-factor authentication
wants to send me a text whenever I log in
or whenever I log in from a new location.
Yep, that one is the most common.
That one has been seeing a lot of exploitation recently due to people taking advantage of phone company tech support. I say, I am Christopher, and my phone broke. I wish to move my number over to this other IMEI or this other SIM card ID.
And they say, okay, because we're helpful.
Okay, so there's the whole problem right there.
So it's finding the weak point and exploiting it there is, you know, once you get a human in the mix, you can, you know.
Sounds like a solution is destroy all humans.
Helpful people are the worst.
I know. It takes a lot more effort, because you need somebody to call and spend time on the phone and try and, you know, argue with somebody, or maybe call back several times, try and get somebody different that might be more helpful. So if it's just, I'm trying this for this list of 20,000 people I have, it's not as likely to happen, versus I am targeting these specific five people.
Yeah, you'd have to have something you really wanted from my account before you'd be willing
to spend an hour on the phone with my phone provider.
Yep. What seems to be the most common recently is Bitcoin exchange sites. So people that have all their Bitcoin in an exchange site wallet online that uses SMS two-factor auth, they'll go after that and then drain their Bitcoin and run off and do whatever it is you do with Bitcoin with it.
Attempt to liquidate it,
which probably takes several days or something.
Yeah.
And in the meantime, somebody fishes your Bitcoin account and, you know, the circle continues.
All right.
So that all makes sense.
But I want to go back to the processor thing that happened recently.
Okay.
Could somebody explain what happened? I mean, all I know is that suddenly my processor is vulnerable to things, and it wasn't just Intel, it was everybody.
So the Cortex-Ms, because they're the best?
No, because they don't have any fancy fun hardware that makes me go fast. The Cortex-Rs, though, apparently, are potentially vulnerable.
Well, they're A's, basically.
Oh, okay. I thought they were more similar to the M's. Never mind. Interesting. I'll have to look into those.
No, it's a pair of A's playing together.
Oh, and they're in lockstep?
Or voting. Or something.
Oh, cool. Okay.
Anyway, so as processor designers tried to find ways of making their processors faster, and they were running into limits of the clock speeds they could get on the process size at the time, they started looking into ways of, maybe instead of just making it go faster and faster and faster, utilizing all the bits of the processor at the same time if they could.
So if you have a bit of the processor that does integer math and a bit of the processor that does floating point math, and you had two operations, one doing integer math, one doing floating point math, instead of doing them, you know, one and then the next, maybe you just stick them both into their respective chunks of the processor.
And I will wave my hands and say,
I'm sorry if this sounds super confusing,
cause I am really not familiar with deep processor architecture stuff.
So you guys need to keep me honest.
To the best of our abilities. So far you're good.
For a very long instruction word microarchitecture, that's what they do. And this is why clock speeds haven't gone up but processor speeds have. I mean, clock speeds have gone up some.
But partially... it used to be that clock speeds would go up dramatically.
A lot, yeah. Double every year or something. And now they don't double that fast.
Yeah, what we're seeing more is slightly higher clock speeds, slightly lower power consumption, but still seeing some performance increase that's not proportional to just those adjustments.
And it is due partially to this parallelization and partially due to bigger caches.
Yep.
If I understand correctly.
Okay.
And then you can also do pipelining,
which I read a bunch about this morning
and promptly mostly forgot.
So you can have multiple stages of execution through your processor, where you'll have an instruction that needs to do several things. Instead of just waiting for it to get through all of its several things, as it gets through the first thing, you send a second instruction in, and you kind of fill up this pipeline of operations.
So I learned about pipelining on a TI DSP a decade and a half ago or something.
And it was really great when we were doing signal processing where I would have to do a filter.
And I worked on a multi-rate filter, so I was changing things from like 47 kilohertz
to something not equally divisible, like 9 kilohertz.
And you have to go up and then down in like one step.
And it's a very complicated filter.
It's a lot of math, but it's the same math over the signal. You just keep doing
the same thing over and over again. And the only time that you don't do this thing is when an
oddity occurs and you have to restart the filter or you have to change the filter coefficients.
And so it's very mechanical once it's in place. And sometimes there's fine tuning
on the fly. And so it isn't something you could just put in hardware, but you are just doing the
same. I mean, totally boring, exactly what processors should do. Very boring things,
not like machine learning. And so the pipelining there makes a ton of sense because, okay, I'm
going to multiply this and then I'm going to add that, and then I'm going to multiply this, and then I'm going to add that,
and it's just going to be over and over again. And the TI processor would stack up like four
samples worth of math operations. And as long as I didn't change anything, it would go as fast as
possible. But if I changed anything, then I lost all of that pipeline.
And the reason that you can't necessarily run at pipeline speeds is because RAM is much slower
and Flash is much, much, much, much, much, much, much slower than the processor itself.
So you want to get all of these things into the fastest registers you can and get them all lined up to go as though you were doing that pendulum
thing with the marbles and you start on one end and it click, click, clicks all the way through
and it jumps up to the other end. That's kind of what the pipeline is. You just want it to go all
the way through and you want it to keep going over and over again. So yeah, pipelining.
Yep.
That's it in a nutshell.
And so if you have a branching instruction.
Ifs, whiles, fors,
anything that may change the execution.
You can, what would normally happen is
you would see that and go,
okay, that is conditional upon
the result of some operation that is currently in the pipeline.
I'm going to hang on and wait.
That pipeline is going to empty.
I'm going to see the result and then continue on and refill the pipeline.
That's not very efficient.
You're losing your pipeline there.
So what you can do is say, okay, the last time this happened, this is the result I got.
And so I'm going to go on the assumption
that that is the result I'll see this time.
And it isn't always last time.
You can sometimes write your code so that, if there is an error, then do this.
It may always do that.
I mean, you have a long list of things that are happening,
and somewhere in there you have if there is an error.
And you can choose to put it if there is an error
or if there is not an error,
and then fill in the next thing however you want.
And the compiler may speculatively execute
one way or the other.
Well, we're getting ahead of it.
The compiler doesn't do any of that.
That's the chip.
Right, right, the chip.
But, I mean, this isn't new.
The speculative execution
was always part of pipeline.
Well, let's talk about what that...
You skipped to it.
Oh.
He was getting to it.
Oh, Nick, go ahead, sorry.
So...
I know all this stuff.
I'm so excited.
I didn't think I would know anything about this, but this is all old chip stuff.
But okay.
Speculative execution.
Go ahead.
Lemurs of Madagascar.
Branch prediction and speculative execution are the chip's way of making assumptions, usually based on past behavior and past results and saying,
okay, this is what I think is going to happen.
So I'm going to start doing this and just keep the pipeline going.
If that turns out to not be a valid assumption,
I can just throw that away and start over again. I'm going to lose a bit of pipeline stage process in the meantime,
which will maybe slow it down a little.
But if that's correct, then I don't have to perform that wait for the result.
It's just there. It's already done.
So instead of losing the pipeline either way, you are losing the pipeline only for one path.
Yep.
And to make things more complicated, if you have more than one pipeline.
Right.
You can stage the speculative execution somewhere else and toss out one of them, but one pipeline is unaffected.
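You can feel how much the predictor matters from plain user-space code. A small, hedged benchmark: summing the values above a threshold is typically much faster once the data is sorted and the branch becomes predictable. Exact numbers depend entirely on the CPU, and an aggressive optimizer may turn the branch into a conditional move and hide the effect, so compile with something like -O1.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 22)

/* The if() here is the branch the predictor has to guess. */
static long sum_big(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        if (a[i] >= 128)
            s += a[i];
    return s;
}

static int cmp(const void *x, const void *y) {
    return *(const int *)x - *(const int *)y;
}

int main(void) {
    int *a = malloc(N * sizeof *a);
    for (int i = 0; i < N; i++) a[i] = rand() % 256;

    clock_t t0 = clock();
    long s1 = sum_big(a, N);              /* random data: branch is unpredictable */
    clock_t t1 = clock();

    qsort(a, N, sizeof *a, cmp);          /* sorted data: branch becomes predictable */
    clock_t t2 = clock();
    long s2 = sum_big(a, N);
    clock_t t3 = clock();

    printf("unsorted: %ld in %.3fs, sorted: %ld in %.3fs\n",
           s1, (double)(t1 - t0) / CLOCKS_PER_SEC,
           s2, (double)(t3 - t2) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}
```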
Yeah. So where Meltdown and Spectre come in is taking advantage of speculative
execution and cache timing attacks.
So cache timing attacks, your cache is a place where your processor stores recent stuff.
Memory addresses it's accessed recently, et cetera, et cetera.
And cache is way faster than RAM, which is way faster than Flash.
You left some ways out there, but okay.
I said way, so it was long way. So you have your most recently accessed data there.
If you are trying to understand what some other process has done,
you can ask the processor to perform some different operations. And if it takes a while to
come back, you know, okay, this was not in the cache. If it comes back right away, oh, this was
cached. So I know that whatever something else was doing, it recently did this operation. So you basically perform a timing-based attack using
things that are in the cache or not in the cache
to see what has happened recently.
So for my filtering example, I would have recent data in the cache, and I would have whatever my filter coefficients were in the cache. And if you looked at the cache, you might be able to say, oh, the system's in this state right now.
You can't really look at the cache.
Oh, I thought that was what he was saying.
No, you can't ask questions about the cache. You can just ask the processor to do things.
Right.
And based on how long it takes, you know whether or not it's in the cache.
It's really subtle.
Wow.
Okay, so it's not that my data is in the cache.
It's that if you tried to run something else and it was fast, you know, you know, it must be that data that's in the cache.
And if it's slow,
then that data wasn't in the cache.
And,
but there's an additional,
there's an additional complication here because the important part is
privilege levels.
Yes.
So modern CPU architectures have multiple privilege levels.
Usually you see them referred to as rings.
Like an Intel x86 CPU has ring zero,
which is the most privileged.
That's where the kernel lives,
the kernel being the operating system
that gets access to all the things.
And then usually ring three is user level stuff, user space.
So my own unprivileged applications that I'm running. In hypervisor-based systems, there's a ring negative one, which is where the hypervisor lives, so that virtualized systems can operate at ring zero and so forth.
And then if you look at the Intel Management Engine and stuff,
it's like negative two or negative three,
which is a whole different ball of wax.
We can talk about later if you want.
Anyway, so with the privilege rings, a lower privilege ring is not permitted to cross a boundary into a higher privileged ring. So user space stuff is not supposed to be able to access kernel space stuff. And that is enforced within the CPU architecture.
But this isn't like when I type sudo edit something in Linux,
and it's not like when I'm in Windows and say,
yes, allow this executable to change my system.
Those are all happening still in user space.
This is at the processor level.
So this is different, right?
Yes. Well, sort of.
So this is at the kernel level.
So this is where the operating system is doing things like, let's say, on a Windows machine that has BitLocker encryption enabled.
The kernel is the one managing the disk encryption operations and the encryption key for the disk.
And as a user, you're not supposed to be able to touch that memory.
The kernel can present that data to the user through various ways.
But as just like a user performing a syscall
to read that memory location out,
you can't do that.
Okay, so I have rings, I have priorities.
If I'm in the user ring,
I can't interface directly to the kernel ring.
I have to go through the correct APIs.
Yep, like various syscalls
are usually what get you through to the kernel to request things from it.
So is this attack having to do with the cache timing running in the user ring and noting the cache that is on the kernel ring?
So the way it works is... oh, one more important thing.
If you're attempting to access ring zero things from ring three, you will get an exception
thrown by the processor.
It will say, no, you can't do that.
Does the exception cause a blue screen or a reset? Or is it handled by the kernel, or is it handled by the user ring?
My guess is you'd probably just... your program would crash.
Yeah, it would be denied. It would be like when I said printf and I'm printfing to the wrong thing and it gives me an error and I just ignore it.
Yeah, well. Somewhere in there. The system wouldn't crash now. Okay.
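A hedged illustration of that enforcement, for Linux on x86-64: from ring three you simply cannot touch a supervisor-only mapping. The address below is a made-up, typical kernel-space address, just to show the shape of it; the CPU raises a fault and the kernel turns it into a SIGSEGV that kills the process.

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical kernel-space address (x86-64 kernels live in the upper half).
       Reading it from user space makes the CPU raise a page fault because the
       supervisor check fails; the kernel delivers SIGSEGV and the program dies here. */
    volatile unsigned char *kernel_ptr = (volatile unsigned char *)0xffffffff81000000UL;

    unsigned char value = *kernel_ptr;   /* never succeeds from ring three */
    printf("read %u\n", value);          /* not reached */
    return 0;
}
```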
Okay.
Well, I mean, because if you were a Snoopy user snooping into the kernel and it crashed,
then we wouldn't have this problem.
Okay.
But then you'd be giving people easy ways to crash people's computers.
Yep.
Is that better or worse?
No, let's finish this.
Okay. Okay, so what you can do is craft a program that will, at a high level, it allocates some memory and then says, okay, now do some things and then copy this somewhere else, and then it performs some bitwise operation during the copy
using a bit of kernel memory space.
Right.
Branch history buffer.
Hey, I'm looking this up. I'm looking up.
So if you write it correctly, when that gets optimized through in the processor, and when it gets set up for its execution path to get pipelined, and then the speculative execution stuff does its thing, the check to see if you are allowed to read that kernel space doesn't happen until after the operations that you have requested are completed.
Right.
But the data doesn't get returned to you because then the check happens and the exception gets thrown. However, at that time, the operation that you requested to happen
has already been processed through and is present in the cache.
So then when you go and pull... oh no, no. So you check two different memory addresses. You ask for an allocation in two different spaces. Basically, you request a memory allocation,
and the address is dependent on one bit of a memory location of kernel space.
Okay.
And so then you go and request those two memory locations that you have potentially allocated.
And if the response comes back faster, that's the bit you know was in the chunk of memory
that you requested from the kernel. And so you perform this over and over and over,
iterating through and retrieve one bit at a time of a chunk of kernel memory.
And if you know what you're doing.
If you know within the kernel where something is stored,
you can go after a password, a key, whatever you want,
and you can go and grab it.
Now, modern kernels use address space layout randomization, which is supposed to
randomize where things are within kernel memory. So you can't predict where things happen.
Related to this paper is another paper about bypassing kernel address space layout randomization. So if you can predict where things in the kernel are and then exploit this particular cache timing attack, that lets you target things within the kernel and get the stuff you want.
Is it my imagination or is this really complicated?
It's extremely complicated.
And it's one of the reasons that I'm not able to be angry.
I mean, everybody's saying, oh, Intel, these morons.
Linus came out and had this whole Linus-y rant about how terrible Intel is.
Linus? Linux Linus.
Yeah.
Okay.
And I'm looking at this and having read some of the paper and the explanations, I'm like, okay.
So, yeah, they came up with something 15, 20 years ago, and some researchers have been thinking really, really hard, with unbounded resources, for the last two decades on how maybe to work their way through this. There's no way to win. Nobody was going to sit down with their processor microarchitecture and come up with this attack while they're designing a processor.
Maybe I'm wrong, but I think we're being unfair.
It didn't hit the risk assessment, did it?
No.
No.
Because, again, at the time that these processors were being designed, also the assumption was, okay, this is a processor that you're going to run in a desktop or something.
And you're the user running things,
you're the user owning the code, all that,
and the environment has changed.
I just read something this morning about the Itanium architecture.
Oh, God.
Where it exposed the pipelines to the underlying OS,
and it was up to the OS to implement
its own speculative execution,
which is kind of interesting,
which would allow you to do some interesting things
on the OS side to figure out
how you want to do speculative execution
across security boundaries or not, that sort of thing.
AMD is not vulnerable in general to the Meltdown vulnerability
because it doesn't perform speculative execution across security boundaries,
is my understanding.
I could be wrong.
Well, one of the things I saw about Intel is it's one of the special caches, and there's no distinction between user space and kernel space things within it. So some of the memory mappings, it just shares them.
And I think that's the root of the problem with the Intel stuff, is that in order to make things go really fast, they didn't make separate page tables. And I'm not sure that AMD does that.
But Spectre sounds like the same thing.
So everybody's vulnerable to this.
Anything that uses speculative execution, yeah.
Anything modern.
Except the Raspberry Pi.
Which isn't modern.
I'm raising my hand and saying no, no, no.
Anything not Cortex-M.
No, I'm saying the TI DSPs had this speculative execution and pipelining,
but it didn't have the kernel and ring and priority and many people using it.
This is a problem because there are cloud computers, right?
No, this is a problem because you have privilege levels.
No, if I was running on my computer...
Yeah, it wouldn't matter.
Oh. Oh, but browsers run code.
Yes. I really think this was sort of a cloud problem, because that's where you have lots of users on one computer.
It's more of a problem for them, but it's still a problem.
I mean, that assumes that I leave a browser application running for long enough for it to identify everything about my kernel.
The browser ones... so Spectre is more... yeah, it's more general, general speculative execution based. It allows a process to divulge information about memory that it has.
So that's kind of a user space to user space thing.
Like one browser tab could ask about the contents of another browser tab.
Or what else I'm running, which is something I really don't want.
I don't want them to look at my Skype or email programs.
So Firefox just rolled out an update that has a mitigation for the issue.
As far as the general internal mechanics of Spectre versus Meltdown, those I'm not too sure of.
Yeah, it's fuzzy to me.
But, I mean, it sounds like, for the most part, Meltdown has already been patched.
Apple's already released a patch before anybody, before the exploit had been publicized.
But Intel's CEO is too busy selling stock.
Oh.
Oh, that was mean.
Intel's not going to release a patch.
It's Microsoft.
Intel, I think, is releasing a microcode patch also to help.
A BIOS.
To kind of help the issue. I think that one... I don't know if that's going to be for Meltdown, Spectre, or both. Of course, the fun with microcode patches is you're altering your processor.
Everything's fine.
Yeah. And also, you're not just altering your processor. The way to apply the patch... usually microcode patches are applied through system BIOS updates.
Do you know anyone that updates their BIOS?
Yeah.
No.
I mean, I update my BIOS on my laptop a lot trying to get the screen to stop flickering.
But once it stopped flickering enough, I stopped paying attention to BIOS.
Yeah. And then even more than that, do you know any companies that release BIOS updates for any systems that are more than two, three years old?
No.
Why would they?
Yeah, so this is the problem.
Because the advice from security people often is, oh, we'll just keep your system up to date and apply patches,
which is fine if they're available.
And even if they're available, that advice doesn't apply in all environments.
If you are a hospital, first off, you have devices that are life safety critical that
you can't necessarily just reboot every Tuesday. Second, you have vendors that need to go through all that fun validation stuff in
order to release an update.
And that is a ton of work on them,
which they're not just going to do every Patch Tuesday, go off to the FDA and say, oh, hey, by the way.
You have industrial control systems where you have to really vet the patches for what it's going to do to your system, industrial and scientific stuff.
Yeah, all of these environments where, you know, you hear about these compromises and everyone just says, oh, you should have patched.
Well, you're not familiar with their environment.
They couldn't have necessarily just patched.
On the other hand, a lot of those things shouldn't be connected to networks. That is true. There are a lot of issues with how networks are set up together,
how you handle trust boundaries and what is permitted to connect to each.
But even with, you know, isolated networks, look at... Oh, jeez, I forgot the name of it.
What was that crazy industrial control targeting one?
The SCADA one?
Yeah, that targeted the Iranian...
Yeah.
...refinery plant.
Yes, Stuxnet.
Stuxnet, yes. So all of those systems were completely isolated.
Somebody basically, like, dropped a USB key in a parking lot and it got plugged in.
Somebody who shall not be... who shall remain named CIA or NSA.
Once again, if the Mossad wants your data.
If you look into the way that Stuxnet worked,
it becomes really obvious just how targeted it was and the intel available to those that created it
in order to target it thusly.
It's an interesting read if you read up on it.
I think that some of the takeaway
from some of the security discussions is
if you're a very prized, unique, singular target,
there's all kinds of things that are going to be done to try
to get to you that won't be
applied to the general public.
So things like hospitals
and industrial controls stuff,
yeah. Network isolation,
set up least privilege accounts,
you know, having every machine
on the network logged in as admin is not necessarily the
best thing.
There are lots of things that can be done. Defense in depth is what we always try for: have as many layers as you can to reduce impact and reduce likelihood of something happening.
All right.
So I missed the ending.
Is somebody coming at me through my browser?
Should I deal with this, or can I put my head back in the sand?
It's super comfortable.
I think Microsoft's already got a patch for Meltdown,
and there are already patches for browsers either out or coming in the next week.
Yeah. So if you have your system at home, update your software, patch your system. If you have a system running in a hospital, don't browse the internet from anything on any life-safety-critical device.
That's a permanent thing, not right now.
I mean, just for right now.
Just in general.
To watch Netflix on the MRI.
Yeah.
I was thinking somebody loading
Yahoo News on their IV drip machine.
Why not?
Yeah, apply patches.
One thing that I found is unfortunately almost necessary these days is an ad blocker. I use uBlock Origin because ads are proven to be, again and again, a terrible vector for malware infection.
Antivirus, anti-malware. There's a site
av-comparatives.org that has
basically comparisons of different antivirus products tested against
various malware samples.
What else can you do? Password manager, set up two-factor authentication.
Not necessarily, well, not necessarily what you would think of as security related,
but part of information security is availability. Keep backups. Backups are very important,
especially as you see a lot more prevalence of cryptolocker-type malware, which will take your data and just encrypt it and then ransom it for some sum of money before they give you the decryption key.
Keep backups. With backups, there's the 3-2-1 rule: three copies of your data, on two different types of media, and at least one of
them offsite. So have your computer with either a local network or attached disk to do your
backups locally,
and then find some cloud-based backup provider that does encryption of your
data,
preferably for whom you can provide your own encryption key. Zero knowledge is the
catchphrase to look for. And that means that I provide my own encryption key and they can't
look inside my data. Right. All they get is a blob that they store, and none of your data is presented to them. If you set up two-factor auth, if you're doing a U2F token, get two, register them both, and stick one in a fire safe.
Uh, if you're using an app on your phone, also put the app on a tablet and have both of them
set up for the same sites.
So if your phone dies, you can still get into all your services.
They also usually have one-time use codes that you can print out
and just kind of stick those in a safe.
Yeah.
Cool.
Don't panic.
It's hard because you do hear all this stuff
and on one hand it is like home security.
Like, I'm not going to make myself an attractive target.
That's a goal.
On the other hand, it's as though one person could case a million homes an hour.
And so it just is, the automation in it is hard.
It makes it a little scarier than home security to me.
If they're casing a million homes an hour, though, then they're going after the easy ones. Say it takes, like, 10 seconds for most targets.
But some of the targets take five minutes.
That's a big difference when you multiply it across a million.
And you want yours to be at least 12 seconds.
If everybody else is 10, you need to just,
you don't need to run faster than the bear.
You just need to run faster than the person next to you.
Yep. Yeah.
So the low-hanging fruit is the easy stuff to go after,
which is the password manager and two-factor auth.
And those are easy to get into the practice of using.
And then browser ad-blocking plugins,
just add one of those in,
and then you pretty much just leave it and forget about it.
I don't do that enough.
I feel a little guilty sometimes
because I do want the sites I go to to be supported,
but they are also a little scummy.
Yeah, that's the problem.
So it's tough.
And then I end up not going to those places anyway because...
Have you ever clicked on any of the ads that support these sites?
Then you're not supporting them anyway.
Oh, one actually. Q Overload used to have an ad that I liked.
But they already went out of business, so.
That's true.
Okay, let's switch topics entirely for a little while.
I think I can't stand any more security today.
Okay.
What do you do with ham radio?
I'm one of those hams that doesn't really like talking to people.
So I have, right now my rack is being rebuilt, but normally I have a radio dedicated to APRS, which is just an automatic packet system that reports telemetry and position data.
So I have a station that gates that to the internet and then will gate messages from
other users to each other, either from the air to the internet or vice versa.
I have an HF radio that I do some digital stuff on.
There are a bunch of digital text modes that have
very low communications rates,
but really good resistance
to noise.
Basically, some of them are below
the noise floor. For example, the
Whisper system that
Christopher, you've talked about.
It takes
two minutes to send
13 bytes.
Yeah.
And with 200 milliwatts, you can be heard around the world.
I have a thing up on my tower that listens to local aircraft position beacons so I can see who's flying around me.
How big of a tower do you have?
It's a 30-foot tower.
Oh, my.
So I live in a sort of a semi-rural area down on the central coast.
So I have a, you know, not a massive backyard, but it's big enough.
And I was able to drop a huge chunk of concrete into the ground and build this thing up in the air.
You're going to make me feel guilty that I keep suggesting Christopher just tie his antenna to a bolt
and throw it in a tree.
That was my idea.
It's true.
I don't have any trees, though.
My antenna doesn't work very well.
Antennas are the big mystery
for ham radio for me.
I can do all the other stuff, and then I get to antennas and, like,
I don't understand anything people are talking about.
I don't understand how to make this work.
I can never make the SWR do what it's supposed to.
There's vertical, there's horizontal, there's all these impedances,
inductors and coils and transformers.
And it's like, you know, this is the worst part of electronics.
Don't lick the high power antenna.
Just think of it
as wiggling a slinky. Yeah,
but it's a slinky with a lot
of math.
Yeah.
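For the curious, the SWR number being fought with above is just a function of how far the antenna's impedance is from the feed line's; a minimal sketch, assuming a lossless line and a 50-ohm system:

# Minimal sketch of where SWR comes from: the reflection coefficient of a
# load impedance Z_load against a feed line impedance Z0 (50 ohms here).
# Assumes a lossless line; complex (reactive) loads work the same way.

def swr(z_load, z0=50 + 0j):
    gamma = abs((z_load - z0) / (z_load + z0))   # reflection coefficient magnitude
    return (1 + gamma) / (1 - gamma)

for z in (50 + 0j, 75 + 0j, 25 + 0j, 50 + 50j):
    print(f"Z_load = {z}: SWR = {swr(z):.2f}")

# A perfect 50-ohm match gives SWR 1.0; 75-ohm and 25-ohm loads give 1.5 and 2.0;
# adding reactance makes it worse, which is what the coils and transformers
# mentioned above are partly there to cancel out.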
So, yeah, ham radio is
really, for me, it's just another part
of tinkering with electronics.
And, yeah, I just like RF stuff as well as the others.
Yeah, cool.
But do you ever, like, actually talk to the ham radio people?
I mean, do you do the whole...
The ham radio people.
CQ, CQ, this is Nick, the exploding lemur.
Occasionally.
So I've done some contesting, usually when we do Field Day,
which is the big nationwide ham radio, quote, preparedness exercise,
which is mostly an excuse for people to grab their
radios, go out into a park and look weird in front of everyone.
I've done some talking on that, which can be fun.
I kind of like the contesting better than the rag chewing because you just, you make
your contact and move on.
You don't have to have any social aspect.
Yeah. Yeah.
I joke that I don't have enough health problems to really rag chew on HF.
And saying that's going to get a lot of people mad at me, but I'm sorry.
I've made comments. I'm in favor of these preparedness days and I like them,
but then I go hang out with the emergency people,
and they have better contests for me because they're trying to optimize where
people should go, which fire trucks should go here versus there, and how many
people will be saved.
And I'm just like, well, yeah,
I can make contacts, or I can just optimize this and do all the paperwork for you,
and you guys can all just go drink coffee.
That's great.
It's a different skill set.
There's a set of classes that I think FEMA has available.
Oh, the community emergency response classes.
Yeah.
Those are Community Emergency Response Team training, CERT.
Yes.
CERT is one.
So CERT is, like, the local stuff.
But FEMA also runs a bunch of different stuff about emergency management.
Yes.
And they have a big bunch of online classes too.
Yeah. Those are the ones, I can't remember what they're called.
Oh, it's in my CERT packet, I'm sure.
Because I've taken a couple.
Yeah.
It's all about incident response and incident management systems.
Basically, how you organize an incident response with different agencies, that sort of thing.
Yes.
And who gets to be in charge.
Has to be in charge. Who handles buying the food and who handles paying for the fuel for all the fire trucks, et cetera, et cetera.
And you know you're on the team if they feed you packaged food. If it's actually an emergency, eat the packaged food. That's one of the things I learned in CERT class.
Yep. All right. I think maybe we should go back to our weekends.
I desperately want to put my head back into the sand
as soon as I install an ad blocker.
You don't have an ad blocker already?
I have one on my iPad.
I don't think I have one on my...
I mean, I run Chrome.
I think it has an ad blocker.
I have no idea.
I don't see ads very often because I only go to the same five websites.
And two of them are yours.
Well, and one of them is Wikipedia.
Chris, do you have any other questions you want to ask Nick?
Okay.
Nick, do you have any thoughts you'd like to leave us with?
So I do have one thought that might sound like kind of a downer.
It's kind of serious and different topic, but something that I think is important to talk about.
So a colleague of mine a few months ago took her own life rather suddenly. And I just want to make sure to kind of have a reminder for all those listening that there are services available to help.
There are people to listen. And there's always somebody that you can reach out to.
And for those that may have somebody in their life that is struggling and you think may need some help,
there are resources for you as well to look into to see how you can help
and maybe get them some help or be there to listen for them.
I think it was on the last show with Chris Gamow where I said one of my goals for 2018
is better mental health because, yeah, depression lies.
Depression tells you you're not good enough.
It tells you it will never get better.
It tells you there's no hope.
And those are lies that are very easy to believe.
And if you are believing those, yeah, there's help.
Get help.
It does get better.
All right.
Now that we're all riled up, Nick, I'm going to say thank you.
No, it's a tough topic and we shouldn't ignore it.
Yeah.
And I know it's odd, but yeah, we don't talk about it. In information security, there is a large population of those afflicted with
depression and other issues.
There was a talk, actually at DEF CON several years ago, specifically
about mental health and information security as there have been
a number of suicides and
it is something that affects a large number of people.
And just ignoring it is not going to help anyone.
And I will throw out that there's this site called makeitok.org. It's a site
that is trying to make it so we can talk about mental health the same way we would
talk about having a broken leg. It's not an embarrassing thing. It's not good, but it's
something you can deal with. And so if you know somebody, maybe look there for resources.
They're not the only place. There are a lot of them.
But don't think I'll do it next week. Do it today.
Our guest has been Nick Kartsioukas, security engineer and exploding lemur.
Thank you for being with us, Nick. This was very helpful.
Thank you guys so much for having me on.
Thank you to Christopher for producing and co-hosting.
And of course, thank you for listening.
I have a quote to leave you with.
I had something more serious, but now we're going to go for funny from John Cleese in his autobiography, in which he explains that swearing was one of the
things he was famous for. And then he goes on to say, this incidentally is one of my three
claims to fame. The others are that I have a species of lemur named after me,
and that I was once French kissed on camera by Tim Curry.
Embedded is an independently produced radio show that focuses on the many aspects of engineering.
It is a production of Logical Elegance, an embedded software consulting company in California.
If there are advertisements in the show, we did not put them there and do not receive money from them.
At this time, our sponsors are
Logical Elegance and listeners like you.