Programming Throwdown - Tamper Protection
Episode Date: December 21, 2015
This show covers Tamper Protection: how hackers are able to tamper with compiled programs, and the programs that prevent tampering.
Transcript
Programming Throwdown, episode 49: Tamper Protection. Take it away, Jason.
Hey everyone, this is a double episode, a December double episode, pretty excited about that. We have an amazing interview coming up, and we want to give a lot of time for that, so we'll skip a lot of what we'd usually cover in the beginning of the show and save our tool of the show and book of the show that we had lined up for next month. But something I did want to talk about is what do
things cost, and why do things cost what they do? So a co-worker sent me a YouTube video called Most Expensivest... Stuff. It's not actually "stuff," it's the S-word, but you can figure it out. It's a show where they drive the most expensive car, they try on a two-million-dollar pair of shoes, things like that.
What? Yeah, it's crazy. It's really funny, actually. And it made me think: okay, these are extreme things, but in general, why do things cost what they cost? Who kind of figures that out? We all took economics in high school, and you talk about the supply curve and the demand curve and all of that, but it doesn't really explain why those curves are what they are for different things.
I did a bunch of research on this, poking around on Wikipedia and reading some papers and things like that.
Basically, what I learned is there's a lot of different theories and they're all
true, you know, to different degrees for different products.
So I'll just kind of go through them.
The first one is called the labor theory of value.
And it's very simple.
It just says things are worth whatever it costs to produce them.
So, you know, if something's really hard to produce, such as, you know, way back in the day, salt was used as currency because it was so valuable.
And it was valuable because it was very difficult to mine salt back then.
Now it's very easy to mine, so it wouldn't make a good currency, obviously.
But so that's the idea. But then that kind of breaks down. Just because something is difficult to get doesn't necessarily make it valuable. Panda bears are rare, they might be difficult to find, but panda bear feces isn't worth a ton of money.
That was a stretch, I know. But it can't just be how hard something is to get; that by itself doesn't work. And so then they came up with this idea called the subjective theory of value.
And this handles cases like, take diamonds, for example.
Diamonds are more expensive than emeralds, but they're about the same in terms of how hard it is to find them and things like that.
But diamonds are more expensive just because it's part of our culture to give diamonds.
So it's sort of like we've just propagated this value for diamonds for no reason.
There's nothing inherently about diamonds that make them more expensive than emeralds.
And so the subjective theory of value is kind of weak, right?
Because it's just like saying, well, things cost more because they cost more. It doesn't really help. But the reality is that there's a psychological, sort of group-mentality, mob-thinking part of this, and the theory just reflects that.
Well, this is like: diamonds are supposed to be colorless, but if your diamond has a tint of color to it, then it's a colored diamond, and now those become more valuable.
Exactly. Or even more ridiculous: you can get a synthetic diamond that is chemically exactly the same as a mined diamond, but it's almost worthless, relatively worthless.
Right. Well, no, they charge a lot, and I don't know if they need to or they just do, or if it's a conspiracy, but I think they only charge like a 10% difference or something.
Oh, really? I thought it was like a tenth of the price.
No. I mean, cubic zirconia or something is like a tenth, but those are chemically different.
Right, right. Okay, well, yeah. But the point is, they're chemically the same diamond.
But the fact that it is different at all.
Yeah, exactly. Exactly.
And so there's that to think about.
So you have this labor theory.
You have this subjective theory.
There's also one called marginalism, or the theory of marginal utility. The idea there is that the value of something goes down the easier it is to add another one to the ecosystem. In other words, look at a loaf of bread. You might have only one, or maybe no loaves of bread at your house, so you have to go to the store, and for you it's very important to get some bread right now. But if you look at the whole ecosystem, for Safeway or Kroger or Publix or whatever supermarket you have near you, adding a loaf of bread to their network is very simple. The bread manufacturers are probably stepping on bread all the time by accident because there's just so much bread that they're producing. And so the marginal utility of bread is tiny.
The same isn't true of diamonds.
I mean, you don't go to a diamond factory and just see a bunch of discarded diamonds
everywhere because each one is too hard to come by.
And so marginalism is kind of connected to the labor theory, but not entirely, because, for example, if you're out in the desert and you don't have access to any stores or anything, water becomes extremely valuable. And nothing made water more difficult to produce; it's just that you're in an environment where you don't have access to a lot of water. And so the marginal utility goes up.
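That desert example can be sketched numerically. This is just an illustrative toy: the square-root utility curve below is an arbitrary assumption, not anything from the discussion.

```python
import math

def marginal_utility(units_owned, total_utility=math.sqrt):
    """Extra value gained from one more unit, given a diminishing
    total-utility curve (square root is an arbitrary illustrative choice)."""
    return total_utility(units_owned + 1) - total_utility(units_owned)

# The first loaf of bread matters far more than the ten-thousandth:
print(marginal_utility(0))       # 1.0
print(marginal_utility(10_000))  # ~0.005
```

The same function covers the desert case: scarcity moves you back toward the steep part of the curve without the good itself becoming any harder to produce.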
So think of marginalism as sort of replacing the labor theory of value, kind of. So basically there are all of these different theories, they're all true in different circumstances, and there's no way to tease apart what portion of a price comes from which one. You can't say, oh, for a diamond, this much of the value is subjective and 10% is this. It's very hard to do that.
So how do you break down something like an iPhone?
An iPhone costs, what, $600?
I don't know, unlocked, a new iPhone.
I don't know.
I think like $600, but yeah, sure.
Sure, $700, $600, whatever.
And then you always see whenever the new iPhone comes out,
they say, here's how much the actual components cost. Someone tries to do a really good job of estimating it, and it comes to like $120. So you have $120 of materials cost, then there's some labor to assemble iPhones, and some labor to develop iPhones and the software that goes on them. But then there's all this extra at the end, like the profit margin. And is there a theory to predict how much Apple should make that profit margin?
Yeah, I mean, it's super hard.
So I'll talk about, so iPhones are a restricted commodity in the sense that, you know, Apple just sets the price, right?
But if you look at open commodities, like, say, oil: with something like oil, it turns out they can't really do a good job of teasing this apart, and there's no science to it, because only Apple makes iPhones, but everyone who makes oil makes the same oil.
Right, right.
And so they can't control those dynamics, because it's just too complicated, and that's why they have futures. If you want to buy any commodity in bulk, I mean, if you buy a loaf of bread at Safeway that's different, but if you're buying 10,000 tons of bread, or a ton of oil, anything at that level...
Barrels of oil.
Right, barrels of oil. If you do any of that, there's a commodities market, and you have to say in advance, three months from now I want these barrels of oil. And then, in the case of oil, there's OPEC, who handle this socially. In other words, if something weird happens and everyone starts asking for oil, they, through phone calls and things like that, just sort it out manually. They decide they need to do an auction, or they call people and tell them, hey, your order got canceled because crazy stuff happened in the market. And basically, the free market isn't really that free, is what I learned.
Well, OPEC specifically is not that,
like they control the market, they manipulate it.
Yeah, well, that's even worse, right?
But even if you look at grain, you know, almost any kind of utility, it's all just very highly regulated.
So I'll give you another example.
So in the Bay Area or in all of California, we have a drought, right?
And so when there's a drought, they don't just raise the price of water until the demand goes down. That's what a free market would do, right? But no, they impose restrictions. They say people can only water their lawns on Mondays, or something like that. And basically it's because they know that if they try to do some kind of free market economics, it's going to end badly, because their models are wrong.
It's hard to decompose the price,
and it's just going to end badly.
So it's kind of amazing.
I mean, I'm not trying to be glib or rude to economists or anything.
I think that's amazing.
I wish if I could go back to school,
I would study economics.
And I have a lot of, yeah, I give them a lot of credit.
But the reality is, is these systems
are just not modeled well.
Whether they can be is debatable,
but right now they're not modeled well.
And so there's a lot of social, you know, a lot of elbow grease that goes into making this work.
The thing I remember about economics is a lot of it comes down to predicting what a rational person should do, but then there are many edge cases where that breaks down, because people often aren't rational.
Oh, that's even worse. Yeah, that's a whole other thing. You're right. I mean, it turns out even if people are rational, the system is completely unstable.
There's something called the bullwhip effect, which you should look into.
And basically, it's similar to the butterfly effect.
It just says that if you make small changes, like you just barely change the price of, say, an iPhone or a barrel of oil, it causes massive change.
You change the price of a barrel of oil by a cent and that causes an oil field to get
shut down because there's this weird chaos theory going in there.
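The bullwhip effect is easy to see in a toy simulation. The four tiers and the 1.5x overreaction factor below are arbitrary illustrative assumptions, not figures from the episode.

```python
def bullwhip(consumer_demand, tiers=4, overreaction=1.5):
    """Propagate orders up a supply chain in which each tier
    overreacts to deviations from its baseline demand."""
    orders = list(consumer_demand)
    history = [orders]
    for _ in range(tiers):
        baseline = orders[0]
        # Each tier amplifies the swings it observes in incoming orders.
        orders = [baseline + (o - baseline) * overreaction for o in orders]
        history.append(orders)
    return history

h = bullwhip([100, 101, 100, 100])
print(h[0])   # [100, 101, 100, 100] -- a 1-unit blip at retail
print(h[-1])  # the blip has grown to a ~5-unit swing four tiers upstream
```

Even with perfectly "rational" rules at every tier, a tiny retail fluctuation compounds into a large upstream swing, which is the instability being described here.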
And these are all assuming everyone's rational.
They're just using mathematical models.
As soon as you get into real human beings, it's even more broken.
Yeah, it's unbelievable. So anyways, that's my pseudo-rant, but that's what I learned from Most Expensivest Stuff, or, I guess, from the queries that derived from it. Anyways, if you want to check that show out, it is actually really funny. They don't, obviously, talk about why things cost what they do, but they have fun putting on a two-million-dollar pair of shoes.
All right, so without any further ado, we're going to talk to some folks at Intel.
These guys are superstars at, you know, software tampering and tamper protection. They've done definitely their fair share of, I guess you'd call it, gray hat and white hat hacking, and it's going to be a lot of fun to talk to them about this.
Well, welcome. We're here with the team from Intel, the Tamper Protection division, I guess, or team. We'll let them introduce themselves in a minute.
We've got Thaddeus, Mark, Steve, and Norman.
So we're just going to go around the room,
the proverbial virtual room,
and have everyone introduce themselves.
So I guess, Thaddeus, you're up first.
Hey, I'm Thaddeus Lettness. Call me Thad.
I work over at Intel. I've been there about 10 years.
I used to work in the digital home group doing things like the Google TV and stuff like that.
A year or two ago, I moved over to Steve and Mark's team and been working on tamper protection ever since.
Mark?
Yeah, so I'm Mark Valley.
I've been at Intel for almost 17 years now.
I started out at the University of Florida as a mechanical engineer in my undergrad and did a master's in computer engineering and never thought I'd come west of the Mississippi,
but they decided to fly me out here on their dime, and I really like what they offer.
And, you know, free flights and nice warm weather, just like Florida, will do that for me.
Go Gators.
Nice.
Yeah, absolutely.
You know, I don't know how we got the record we got so far this year.
That's all right.
Hey, at least I think you won a game.
What?
I went to University of Central Florida.
We went 0-12.
University of Florida won many games this year, I think.
No, UCF.
I know, I'm saying UCF went 0-12.
Yeah, but UCF is up and coming. They've got a lot of the Florida talent, so I think they're going to do all right in a few years.
Yep.
Hi, I'm
Steve Price. I'm a product manager
at Intel, and I
started my career
as a software developer.
If any of you have some age, you'll remember WordPerfect.
I was a developer of WordPerfect for Windows, or one of the many developers.
And then I had the opportunity about 18 years ago to work for Intel.
And Intel, you can do amazing things.
It's not just on the commercial.
It's a real great place to work.
And so I product manage a brand new tool called Intel Tamper Protection Toolkit.
And we are excited to tell you a little bit more about that in this podcast.
Very nice.
Cool.
I actually didn't know for the longest time what product manager was.
And then I read one of the books of the show.
I don't remember which episode,
but it was The Hard Thing About Hard
Things by Ben Horowitz.
And actually, he explains
he developed this
training class for his product
managers, and he called it Good Product Manager,
Bad Product Manager. And he used
sort of that dichotomy to explain
what a product manager does, and it
totally made sense after that.
So if you guys wanted to know
what product managers do, definitely check
out that book.
I felt like it really kind of explained it.
The summary is, you're
CEO of the product.
That's the summary. That's not a bad one.
I would say my description is very
close to that. It is,
it's the guy that does the stuff that the coders hate doing.
That's my job.
All right, Norman.
Thanks, guys.
My name is Norm Chow.
Oh, can you hear me?
Yep.
We can hear you.
Yep.
Yeah, my name is Norm Chow.
I am the marketing person for Intel Tamper Protection.
I've been with Intel for five years.
I love every minute of it.
Before that, I was actually a pure Windows coder for five years,
so I bring a lot of technical experience.
And, you know, I really appreciate the engineering team coming
on this conference call and kind of describing the product to everyone.
So I think we'll have Steve, or whoever you think is best, give us a little bit of a review of what's going on, and then we'll go over to Jason to get some questions going for the team.
Perfect.
So when you think of Intel, a lot of times you'll think of the bong tone that you hear on the commercials and, of course, the chips that we make for CPUs.
But Intel produces a lot of software to make that hardware fly.
And my team comes from the software side.
And being a part of Intel, we're in the position to see a lot of the general needs
in the software development community.
And what we've come to understand is that coders, whether they want to or not, and most of them don't want to, have to put secrets in their software.
And coders will do a lot of interesting things to hide or mask those secrets, but at the
end of the day, you'll have to put in either an account name and password or a crypto key
or your secret sauce that makes your code better than anyone else's.
And so while they're not comfortable in doing that, they still have to write and ship their
code.
Another thing that we found is that developers want to do wonderful things with their code
and they want to achieve personal goals and help other people.
But while doing that, they want some say in how their code is used.
And so we at Intel, understanding these needs in the marketplace, with a lot of the really smart people we have, have developed technology that helps you hide your secrets in your code and that lets you respond when people try to modify your code without your permission.
And that collection of capabilities and technology
we call Intel Tamper Protection Toolkit.
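As a rough illustration of the secrets problem Steve describes, here is the simplest possible mitigation sketch: XOR-splitting a key so it never sits in the binary or in memory in one contiguous piece. This is a toy, not the toolkit's actual technique, which isn't detailed here, and the key itself is made up.

```python
import secrets

def split_secret(key: bytes):
    """Split a key into two shares; neither share alone reveals
    anything about the key (the mask is uniformly random)."""
    mask = secrets.token_bytes(len(key))
    share = bytes(k ^ m for k, m in zip(key, mask))
    return mask, share

def recombine(mask: bytes, share: bytes) -> bytes:
    # The real key exists only transiently, at the moment of use.
    return bytes(m ^ s for m, s in zip(mask, share))

mask, share = split_secret(b"hypothetical-api-key")
assert recombine(mask, share) == b"hypothetical-api-key"
```

A real product layers far more on top, white-box cryptography, obfuscation, integrity checks, but the principle, never store the secret contiguously where a reverse engineer can grep for it, is the same.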
Cool. Thank you. That was an awesome intro.
So just to kind of recap, so I think a lot of people,
you hear these terms like reverse engineering or code tampering,
and especially you see it in the press,
like someone reverse engineered some project,
they were able to get in,
or hackers reverse engineered some client,
and now they could get into Target or what have you.
But what does that actually mean?
Like if someone tampers with a binary
or if someone reverse engineers some code,
what are they actually doing?
So reverse engineering is kind of the process of pulling apart binary code, machine code, usually,
although in this day and age,
we have all sorts of scripts and stuff as well,
and trying to piece back together what the original logic was.
It's going to be things like disassembling the code,
going back from machine code to assembly,
trying to reconstruct the flow, the structure of the code,
and understand the idioms that the compilers and stuff were putting in
with the ultimate goal of being able to say, you know,
here's a function that does exactly what the original programmer wrote.
Gotcha. Is that the same as tampering or is tampering something different?
Tampering usually starts with reverse engineering.
You usually have to have some idea what you want to do.
But tampering then takes it another step forward
and says, I'm going to go and take your DLL
or your code loaded into RAM
and make some change.
Maybe tweak a variable
so that it's always a particular value
or change an if statement
so it just unconditionally passes
so you can, you know, maybe bypass a license check. I mean, a great example is the old Game Genie, right? Any code you put in the Game Genie was basically code tampering. It said: here's an address, here's a value. Whenever somebody wants that address, give them that value.
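That address/value mechanism can be sketched in a few lines: a toy "memory bus" that applies Game Genie-style overrides on every read. The memory layout and values here are made up for illustration.

```python
class PatchedMemory:
    """A toy memory bus that applies (address -> value) overrides,
    the way a Game Genie sits between the console and the cartridge."""

    def __init__(self, rom: bytearray, patches: dict):
        self.rom = rom
        self.patches = patches

    def read(self, addr: int) -> int:
        # Whenever somebody asks for a patched address, hand back the
        # forced value instead of what the cartridge actually holds.
        return self.patches.get(addr, self.rom[addr])

rom = bytearray([3, 0, 0, 0])       # pretend byte 0 is "number of lives"
mem = PatchedMemory(rom, {0: 255})  # the "infinite lives" cheat
print(mem.read(0))  # 255, not 3
print(mem.read(1))  # 0, untouched
```

The program never sees the original byte, so from its point of view the lives counter simply never runs out.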
Oh, I actually never knew that.
So for people who maybe are younger in the audience,
Game Genie was this amazing magical thing that you plugged into your Nintendo.
And it plugged into the Nintendo, but then it offered a plug out. So you could plug a cartridge into the Game Genie, which is plugged into the Nintendo.
So think of it as sort of a connector that sat in between.
And then when you started the Nintendo, it just presented you with this crazy screen
like you would see in some sort of cheesy CSI episode.
And you would punch in these alphanumeric entries that were completely cryptic. You would get them from some kind of reference manual. But when you were done, you just had infinite lives, or Mario could fly, or something like that.
And so I guess that's under the hood.
That's actually code tampering that's making that happen.
And in order to find out exactly what values to punch in
and how to build that thing,
they had to reverse engineer both the Nintendo and the software first.
Oh, I see.
So in this case, do you think they were working with the software company or were they actually reverse engineering it blind?
Well, in the Nintendo world, I thought they had to have like permissions and stuff.
But I mean, certainly you can think of other platforms where, I mean, back in the Commodore 64 days and stuff like that, there were things like this as well, right?
Oh, I see. That's right.
Game Genie, probably not, because there were licensing issues with that back in the day.
So Nintendo would probably jump on you.
Cool.
So obviously they didn't have the source code for any of Mario or anything like that,
so they had to completely figure out by sort of poking around
what they could modify to get behaviors that they want instead of just crashing it.
Gotcha.
There's actually a lot of these sort of bootleg
cartridges. Like there's this
Noah's Ark game.
It's actually a set of three
it's called Bible
Games, I think, or Bible Study.
It is an NES cartridge that was
not licensed in any way by Nintendo
but they were able to
sort of handshake with
the Nintendo and convince Nintendo they were a licensed cartridge.
And it's probably by doing exactly what you said,
by reverse engineering the input-output of the Nintendo.
Yeah, in fact, there was a whole...
I forget where I read this,
but there was a whole series of games by one manufacturer
that actually did reverse engineer the handshake chip
and started making these third-party unlicensed games.
The big deal with that was that Nintendo's revenue model
was that they got a cut from everybody's game.
So there was some incentive for these guys to bypass that
so that they can get a cut themselves.
Oh, I see, so they didn't have to pay that share. Exactly.
They could just sell it to the consumer and any additional money over the top
they got or they could lower the price of their cartridge.
Oh, interesting. That starts to show
the battle between developers
and commercial entities.
You're always wanting to
get paid for your products and so you
put in code and then somebody
reverse engineers it and finds a way around it,
and so then you do something more to lock it down even more.
And there's a lot of benefit sometimes.
The new Nintendo stuff and the emulators and stuff
were able to unlock a lot of games having reverse engineered
a lot of the old Nintendo equipment.
Now you can play games that aren't manufactured anymore.
The hardware is slowly going out of date,
but we can still play them thanks to people diving into the software
and reverse engineering how they worked.
Right.
Yeah, one thing to understand about that whole ecosystem is that we can reverse engineer them now, but it took time. I mean, ultimately, given enough time and resources, you can pretty much reverse engineer everything.
It just may be really expensive or
may take a really long time. So your goal as
a developer may not be that they never can
break it because that's almost impossible, but to
make it hard enough so that only
someone who really is super motivated
would do it, and hopefully there's
not going to be that many motivated people.
I mean, if you're storing the keys to Fort Knox in there, there's going to be more incentive than if you're just storing some game secrets or something.
Gotcha.
And I think that Moore's Law plays a huge role in this. I know this is more on the cryptography side, but if you had the Amazon Web Services of today, not S3, but Elastic Compute Cloud, and you went back to the 1990s, you could actually break the HTTPS of the time, and you could just liquidate people's bank accounts; you would just be a god on the internet. And so, just furthering your point that given enough resources,
nothing is completely secure.
But as you said, it's this constant arms race
between, in this case, just going back to Nintendo,
between Nintendo who's trying to preserve
their intellectual property
and various people who are trying to do the reverse engineering to increase their bottom line.
And that goes the other direction too. When you get to the 2030s or so, don't expect
your PGP encrypted stuff to work with quantum computers around.
Right, exactly.
Yeah, you know, and what we found too is that there's tight, Fort Knox security, I think Mark brought up Fort Knox, but then there's good-enough security.
You just need to be a little bit better than the other guy so that when they come to you and their easy tricks don't work,
then they might go looking for better targets of opportunity.
And so you don't have to be super, super secure, but just more than you were before.
And that buys you a lot of value.
Yeah.
And that goes with understanding your threat model.
In other words, what attacks are you expecting?
I mean, as security engineers, that's something that we often think about when we're
architecting solutions.
I mean, obviously, we can't protect against, you know, the nation-state attack necessarily, but you have to ask: what level of resources are you considering?
Gotcha. It's very similar to guarding anything,
guarding your home or guarding your locker in school or any of that.
You know, sure, someone can come by with wire cutters
and cut the lock of your locker.
But that's a very high...
I mean, someone's going to be seen walking around campus with lock cutters, and that's
already introducing a lot of risk for them.
And unless they just really hate you for some reason, it's just not worth it.
So by putting the lock, you haven't sort of eliminated completely the idea of someone breaking into your locker, but you've made it prohibitively difficult.
So, we talked about Nintendo and about Bible Games, but obviously that was, you know, I guess 20 years ago.
So today, what are the big reasons why people tamper with binaries?
And so who are the people nowadays who are tampering with binaries?
Well, there's a bunch of reasons.
I mean, one of the big ones is curiosity.
A lot of the people who poke around in binaries are students, or hackers in the good sense of hackers: people who just enjoy peeking into how stuff works. People want to fix bugs. If there's a driver for an old printer and it doesn't work with Windows 10, somebody can go in, figure out how that driver works, and fix it, and usually that's to everybody's benefit, including the printer manufacturer.
Could be to add functionality in a similar way with printers, maybe to add a new format
for sending messages.
Or new cheat mode.
Yep, or cheat mode. Getting into some of the more nefarious reasons, bypassing authorization checks is a big one.
This might be for things like DRM or license keys for software.
If you're downloading warez, you find the cracked version of the program, right?
Spreading malware. A lot of the way that malware
spreads is by infecting other binaries. And so
there you're diving into the binary and finding somewhere to
stuff your malware. And then generally
just to maliciously change behavior. If you want to
sneak a man-in-the-middle attack in
to sniff somebody's financial information
as they're doing transactions online and that sort of stuff,
if you can hop into the browser,
that's a good place to do it, right?
Oh, I see.
So you look for some code that handles the,
you know, when you type in your password
and it turns it into little asterisks in the browser,
like you would inject something right there
and know that you're getting, hopefully,
hopefully for the attacker, you're getting passwords and things like that.
Okay, that makes sense.
Are there, as far as the, so you mentioned hobbyists.
Are there also, I mean, is this mostly individuals?
Are there, you know, like, is there like organized crime?
Like, are there massive entities that are doing this?
Oh, heck yes.
Or is it mainly people in their basement or what?
It's getting more and more organized all the time. There are international gangs now where it's not drugs anymore; it's all about hacking stuff. Very well organized, pretty scary groups that do all sorts of things of a very financial nature. They're very much organized as businesses, and they're not the lone hacker in the basement that you'd think of.
Yeah. So I was at Black Hat this last summer, and they were showing the nature of what you'd call the malware, or black market, industry.
And you can go and you find the right places, and you can find a hacked kiosk, let's say, on the market that whoever hacked it is selling it now for someone else to exploit or to get gain from it.
So there is a lot of activity, a lot of it very organized with very, very skilled people out there
just wanting to benefit from others.
So it's a very vibrant market, if you could say.
Gotcha.
It's not totally along the line of reverse engineering,
but I was trying to get my Windows product key
before I upgraded to Windows 10 just in case something went wrong,
and I was extremely careless and ran this program
that gave me my product key and also installed something terrible.
And before I could remove it, it had gone into my browser kind of local settings and
pulled out my credit card.
And sure enough, the next day, someone went with my credit card to Home Depot and bought
a bunch of restricted chemicals, which is actually quite terrifying.
I mean, they bought things
that you can only buy in certain amounts,
but because they were masquerading as me,
they were able to buy more
than probably they would have by themselves.
And I mean, yeah, that whole thing is just frightening.
It screams of like a huge organization, right?
Cool.
So I think we've kind of, you know, sort of touched on this,
but what are the consequences of not protecting against tampering? If someone just runs regular old GCC, maybe GCC with debugging symbols and no optimization, and starts distributing the binary, you know, what are the consequences of that?
The big one is usually the money motive, right?
Lost revenue if people are able to install your software without a license, possibly
legal liability if you're supposed to be protecting, say, somebody's financial information or content
that you're licensing and you let people bypass your DRM,
you could actually be liable for that.
A big one that we think about is reputation.
Just somebody being able to get in and monkey with your code
and then make you look bad.
Bad press, yeah.
Yeah.
Within Intel, we look at things like loss of intellectual property,
like trade secrets, stuff that isn't actually patented,
but shows how stuff works.
Usually you don't want that.
Like algorithms?
Yeah, generally.
If you've got an algorithm that is really that much better in your product,
then, you know, you might want to protect that
so that somebody else can't get at it.
Possibly loss of customers.
If your mobile payments get hacked,
then people are going to stop using you.
Take a look at Chipotle
if you want to see how fickle people are nowadays, right?
And it can even be life and death.
You know, more and more we've got
things like traffic lights and power
controlled by computers, even airplanes
and cars, right, with the
computer
controlled cars nowadays.
Yeah, so if somebody can
hack in and gain control
and get their software running
where yours should be,
you know, somebody could lose their life over it.
Yeah, I guess that's a good point.
I mean, I think a lot of times hacking is this vague term that people use. And, you know, initially it's like, oh, you're going to talk to some web service and try to understand its behavior, and then you're going to try to send it some code, some exploit, whatever. Some kind of black box attack.
Yeah, that's one way of doing it. But the other way is, like you're saying, if you're running a server, that's a program, and someone can get that program and look at it and figure out, by looking at the source code, oh, hey, look, there's a, you know, non-bounds-checked string print here. Oh, well, now I know exactly what to send it to hack it. Then
that's much worse. Which is exactly what stuff like Heartbleed was, right? I mean, somebody
sets up an overflow and all of a sudden they can start whatever code they want running on your
server. There's not much we can do about that from the tamper protection side. But if their code
starts to interfere and interact with protected code, we can at
least identify that and stop it at that point.
So does the tamper protection work at all with the trusted security modules that are
in some computers, or is it that kind of unrelated?
So actually, tamper protection is a software sort of trusted execution environment.
So there's no special hardware required other than IA.
So in a perfect world, everyone would have the very latest SGX technology
that's in the latest Skylake processors.
And they'd use that sort of hardware protected execution environment.
The problem is that very few people have those right now.
And let's say you're a Netflix and you want to deploy something.
I mean, you'd love everyone to have Skylakes,
but realistically you want to sell to a lot of people
and not very many people have them yet.
So until those are widely deployed everywhere,
you're going to need some stopgap until you have hardware support.
Right, and I think if you're in the business of tampering the binary,
you would just get some older hardware to do the tampering,
I think, right?
Presumably.
If you waited in time, once everyone has
a protected hardware execution environment,
then it's going to be much harder to tamper
in the same ways.
That actually might be a good topic.
I brought it up, but you want to go ahead and explain
maybe what hardware modules,
what a trusted execution environment, like what does that mean?
So what that typically means is that general code running on the computer
can't see into what's going on.
Imagine it like a sort of a little box somewhere.
So a great example of this is you can imagine that if you look at an older PC, there's this thing called the ME, the
manageability engine, if you have an Intel processor.
And the manageability engine is actually a separate processor that is sort of an embedded
device.
And the main CPU cannot access its memory, cannot talk to it in any significant way,
so it can't really inspect what's going on.
So you can think of the thing that that processor is running as an environment
that's protected from the CPU. So even the OS, if you somehow compromise the OS, you can't see
what's running on those things. And similarly, with the latest Intel processor, the Skylake
models, you in fact can do that on the main CPU in such a way that the rest of the system
can't actually see what's going on, either in the process itself
or even in memory, because the memory is encrypted
from the other parts of the CPU. So even the
OS doesn't have the power to go in there and see what's going on.
Oh, I see. So that protects you
against
sort of attacking
the OS or attacking other binaries,
getting in through exploits and things like that, I guess.
Right.
But if you attack it, you have to have a program.
So let's say you have some hacking tool that can write to memory.
Well, the problem is that the memory that you're writing to
is essentially not accessible to even that tool.
So there's no other program that can reach into that memory
other than the program that owns it.
Oh, now I get it.
It makes the discovery of the hack very difficult.
It's literally impossible.
Yeah, and also, you know, let's say that you're a programmer and you're writing code, you
know, either in your basement or you're part of a small company.
To get a piece of software on the manageability engine is a difficult proposition, right,
because it's a secure environment.
And very few people write to that level of security or to, you know, that have access to get their software on the engine.
And it's, as Mark said, it's a separate computer, almost within a computer, intended to run securely.
But what a lot of developers want to do is they just want to write their code.
And they want to be able to write their code in such a way that people don't mess with them.
And so what we provide is a software trusted execution environment. As we said, it's not
impervious, but it's better than nothing. And it's a way that gives you a little bit more comfort and
security to know that when you write your code and you have to put your secrets in, or you want to be able to detect when
someone has changed something to get a different behavior, then you have
some avenue of recourse as a developer to either minimize the impact or
at least slow them down to the point where they'll think it's not
worth the trouble and go on and mess with someone else's code.
Gotcha.
So that is a good segue to the next question,
which is how does this actually work?
In other words, are we talking about something that scrambles your code
before it compiles it?
Is it its own compiler?
Is it a virtual machine?
What is actually the tamper protection product?
Well, I'll just take a product manager's crack, and then the real answer will come from the
geniuses. But really, if you look at the tamper protection product, it's basically two main
components. One of them is a command line executable. We call it iProt, Intel Protector for long, and it's a
command line tool. You type in some input parameters, and one of those parameters is the
input file. And this input file is your binary. It could either be a Windows binary or an Android
binary, in DLL or SO form. And then it spits out another executable that will run on any Intel processor,
And so the difference between the input binary you give it and the output binary it spits out
is the same identical functionality, but as it runs, it will be encrypting itself,
unencrypting itself, modifying the code
to make it very difficult for someone
to figure out what's going on.
So that's one component,
and that's a command line executable.
Another one is a library called CodeVerify,
and how that's used is you weave the CodeVerify
into your code, and what it allows you to do
is reach out from this trusted execution environment.
So just going back to iProt, the binary it spits out, we call a software trusted execution
environment. And from within that base, then you can reach out to other unprotected pieces of code
and check it. You know, are you still the same piece of code that I expect you to be? And if it
isn't, then we can flag it. And because it's
in a TEE, it becomes difficult for an attacker, let's say, to change the signal from zero to one,
you know, from a yes to a no, for example. So that's pretty much that. It's very simple.
We have some other ancillary pieces that help developers do their work,
like what we call CodeBind, that lets the
verification piece check a predefined version of the file so you know when
it's been changed. But pretty much it's a security compiler, you can call it, and a
library that helps you go out and check and verify the integrity of your code.
Yeah, and speak a little bit to sort of like the workflow of how you'd work.
I mean, if you're using something like GCC or something like that,
essentially, you know, you would take your program
and separate out what you think the critical sensitive functionality is.
I mean, you don't want to necessarily put the whole program in this
because it has performance consequences.
And then that turns into a shared library,
either an SO or a DLL,
depending on what platform you're going into.
And then that is your sort of root of trust,
your thing to start with.
And so you compile it through GCC,
you get an SO or a DLL out,
then you run that SO or DLL through our tool,
and then you have a new one
with the same entry points, same functionality, but
it's not going to be very easy to
tamper with or reverse engineer.
And that's for IProt.
Yeah. And then for
the code verify technology,
you know, you take
a binary (EXE, I believe we can even take EXEs for that, I know DLLs and SOs),
and you generate basically a cryptographic hash signature of that file using a tool called CodeBind that I think Steve mentioned.
And then from that, you can check all or even just parts of your executable file against that.
And that's kind of the lightweight, relatively fast option, compared to the
iProt output, which is the heavy one because it's self-modifying, and in fact it keeps very little of the
executable code in plain text at any given time. So as you're running, it's
constantly trying to pull stuff down and decrypt it for the next phase of
execution. So there's a lot of heavy-duty processing going on there.
So that's the heavyweight thing.
You want to be in it as little as possible,
only for your most secure code,
and then jump out to this much faster engine
for more of the general,
is my code still untampered?
Could you maybe walk us through the flow,
like at a function level, like what kind of function?
So you said you wouldn't want it all to be there.
So your main is running, you're doing something, you want to log a user in or whatever.
Like what would be the kind of thing you'd want to do where you would jump to the environment
and then come back?
And how does a person not just bypass the trusted environment?
Like how do you prevent them from not going there?
Okay, great question.
So the first way you do that is you've got to put something of important value in the
trusted environment.
In other words, something that if they were to just rip out and skip it, then whatever
functionality that your code performs would no longer work.
So one example we could have here is let's just say we're doing a video decoder, for
instance.
And then the video decoder is decrypting some video stream. And then it has a decrypt functionality.
And then it passes that video stream on to some other buffer in decrypted form.
And let's say you don't really care so much.
Once the bytes are decrypted, they're uncompressed.
So there's lots of, you know, the size of the video is huge.
So you're not going to, no one wants the raw bits.
But you want to protect the encrypted bits from easily being transferred around.
So, you know, one attack, you know, would be, hey, I'm just going to call your,
well, the first attack would be I'm just going to look at the key,
the decryption keys inside your sensitive application.
So that's an example of something that you might want to protect with iProt.
So you'd typically put, you know, the actual decryption stage in there. And the issue you
have though, so let's say anybody can call this. So I'm just going to keep your thing as is. I'm
not going to modify it. I'm not going to hack it. I'm just going to call your API and then
I'm just going to grab the decrypted bits and bam, I'm good. And then I can write them to disk and
do whatever. So that's one place where you can use code verify from there.
So what you can do is when you're ready to pass the bits off
to some other module in your system that does the next thing with those bits,
you can make sure that the one you're handing it off to
is not some rogue application or rogue program or modified program
that's going to write it off to disk or something to a file,
but it's going to be the expected module that is the display driver or something like that.
And that's where you would use code verify
to sort of inspect it before you hand the bits off
every cycle of the decrypt.
So, you know, an attacker to bypass that
would, you know, have to extract the key,
which is more difficult,
or they would have to find some way
to modify something in the chain
to be able to write it to disk.
So those are the things you want to avoid.
Yeah, yeah.
I was just going to say, you know,
Mark in the past has described that scenario as a decrypting fool.
And so, you know, someone hands you, you know,
they isolate the module that does the decryption,
and they say, oh, okay, I just hand it some encrypted bits and it's going to spit me out the good ones.
And if the thing that does the decryption can't identify who's calling him and enforce that it's the one he expects, then that's the decrypting fool scenario.
Exactly.
So you can use CodeVerify.
In addition to checking your downstream who you deliver to, you can check your callers to make sure that they're
authorized to actually use you.
So if you do something sensitive, like
you say, you know, if you do something like
return passwords given a username,
you can make sure that only someone who's authorized
to ask for
a password back from
a username or whatever, or
you're a safe, and you're opening the safe,
you only open the safe to authorized, you know, components. So then
people can't just pick up your code and use it wherever they want, without modifying it.
Exactly. And that's where the iProt comes in, is
that in order for them to be able to modify it in such a way as to bypass these
checks, that's what makes it hard. So you need the iProt as sort of somewhere safe
to stand
before you go out and do all this checking.
Because otherwise, if they just change the checks,
then it's all for naught.
I mean, like if you were to write this without iProt,
as a reverse engineer, what I would do is I'd find out
where the if check is, the if-this-module-is-trusted check,
and I'd just change it to unconditional:
go ahead and assume it's trusted.
So you need to make sure that that attack is hard to do, so that
the check isn't a waste of time.
So
can you kind of walk me through this.
There's this machine code,
and there's these decompilers, I think IDA is one of them,
that you can download off the internet.
And correct me if I'm wrong, but I believe you end up with, you know, C code with some assembly woven in where it
couldn't do anything better. And the C code, you end up with ifs and a
little bit of logic. Of course, all the variable names are A, B, C, D, because those aren't preserved, right?
And from there, that's how you would do things, is what you're saying.
You'd find this if.
Maybe you'd step through this C code that you've generated through the decompiler.
You'd recompile it and walk through it and see, oh, this line right here is where it's checking for the CD key.
And I'm just going to delete that code or make it always true or something.
So if you tried to do the same thing, you get one of these decompilers off the internet and you run it on a binary.
And just assume that the entire binary is
under tamper protection.
So what does that look like?
Is it just like a complete mess?
I ran IDA Pro out of curiosity earlier this week, actually, on one of our binaries.
And that is a very expensive tool.
Yes, that is a very expensive tool.
It's more for disassembly, but it takes the disassembly apart
and creates a program flow structure
like you're talking about.
And when I did it against the unobfuscated code,
it showed me a really great flow,
mapped really well to the original structure.
When I looked at the encrypted code,
I saw a couple of assembly instructions and a couple of blocks of stuff
going off writing to random memory, and then it just ended in a call to an undefined function,
which is actually, you know, knowing how it works, that was what I expected to see.
But really, it can't go beyond that because most of the time, the functions that are being called are either ones that we write ourselves as we go or ones that we have decrypted along the way as we've been mutating the assembly code.
Oh, I see.
Now I think it's kind of clicked. So what your program does is it creates a binary
that just cannot be represented as a source file because of its self-modifying nature. Like,
there's no C syntax that could exist that could explain what's happening.
You would actually have to step through every possible phase of execution,
track down every path that it traces
in order to figure out
what instructions would be executed
because at any given static point in time,
most of them are going to be
not just encrypted,
but have been encrypted multiple times
and have to be decrypted multiple times in order to be visible.
Yeah, what these tools typically do, the IDA pros and things,
that's what we call static analysis.
In other words, they take a look at the binary state on disk
and they try to reverse engineer it as best it can from that state.
The issue is when you're having things that are constantly being encrypted and decrypted,
at every stage, ideally, you would want to run it while it's running, not just simply at rest.
Because there's nothing it can tell you about the encrypted code until the code became unencrypted.
And the way our tool works is that only a very small portion of the code at any point in time is ever in the clear.
So it's much more work to be able to do that sort of analysis. And the other advantage of that is we actually use that code
as part of the key to encrypt.
The code at one state is part of the key to encrypt to the next state.
So if you make a change, it starts to become this cascading reaction
that ultimately generally results in a protection
exception of some sort
that will cause the processor to throw back to the OS and kill the process.
Exactly.
That's the primary way we protect against tampering in this environment
is that by modifying the code, you're modifying the decryption key,
which results in decrypting perfectly legal instructions
that are total garbage that are bound to lead you to a crash.
Cool. Yeah, that's awesome. Yeah. Thank you so much for explaining that. Really,
it took me from a complete Luddite of reverse engineering and tamper protection to now I think
I could explain to somebody how it works. So that's totally awesome.
So for other people like Patrick, I don't know about Patrick,
but for other people like me who know absolutely nothing about reverse engineering
and tamper protection, but want to get into this line of work. And, you know, they really, they think
it's, it's very exciting and they want to learn more and they want to, to be doing sort of what
you're doing. What kind of, you know, maybe they're still in school or they're thinking about going back to school.
What kind of background is useful for this work?
So not Java.
So please, you know, I mean, you know,
one of the things we're actually hiring right now,
we're trying to hire people,
and it's very challenging to find people nowadays
who understand how the machine works.
I mean, not just, you know, what happens in the stack, what happens, you know,
how stacks are laid out, you know, how things, what happens when a function call happens,
you know, how pointers work, just, you know, that layer of understanding of how the machine works.
So an interest in not just at the top level, like, quote, unquote, coding an application,
but understanding when you say something in C, what are you really telling the machine to do you know so things that you might
do in school i mean like you know anything that talks with assembly or sort of microcontrollers
you know back in the day we had like the motorola 68ks and stuff like that but understanding what
your c code does even writing an inline assembly on your own,
this is one thing you can do,
is go ahead and write yourself a very small C program
and then try and look at its disassembly
and see if you can crack open the manual
and sort of figure out what's going on.
Yeah, like Mark says,
understanding assembly languages, instruction sets.
I remember one of the early things I did was I wrote a hash function.
I started out in Pascal and then rewrote it multiple times in assembly,
just understanding how every instruction worked
and how to maximize performance on that.
Honestly, one of the things I do for fun is I read the C and C++ standards
for understanding how compilers work.
And, I mean, I couldn't write a compiler, honestly.
It's something I would like to do at some point.
And I've written parsers for C.
And obviously, I know a bit about the assembly side of things.
But it doesn't mean you have to be able to write a compiler. But understanding, you know, what's happening at a lower level
in the language, you know, what happens in memory when you have an object and it, you
know, inherits from another object, and, you know, what is a virtual function table, and,
you know, what is happening on the stack when you call a virtual function, that sort of
stuff is, you know,
kind of the problems we look at day in and day out.
Yeah. I'd echo what Thad said. And, you know, even more,
that brings up the point.
One thing you can get from school that's very useful is compiler classes.
That is a really good place, sort of where the interface between the language and the
machine meet.
And I think people who understand how the machine works are going to be much better
programmers than people who are, you know, really far away.
You can still be a, you know, a decent programmer at a high level for big picture things, but
it gives you a little extra edge when you're wondering what's going on with
some bug or some crash, if you have some inkling about how the machine actually works.
And I would highly recommend it to anyone who wants to be a programmer.
And that could be the most fun, too.
I mean, you might spend, you know, 90% of your time
integrating two systems,
and it's just a lot of reading documentation
and writing APIs that work together and things like that.
But then you encounter some kind of performance issue
where maybe you have to rewrite something in C
or you have to understand, as Mark said,
what's really happening under the hood.
Why is this code so slow?
And that is often the most fun part of the job.
Well, to be fair, it's somewhat of an advantage,
because I have some of that experience,
to be one of the few people on a team that knows how to do that. To, oh yeah, look at the assembly, go into GDB and say, give me the assembly instructions.
Why is this taking too long? Because it's a rare skill, so people value it.
I would say if you're a normal software engineer not doing something
specifically related to that, you may not need it that much. But when it comes time and you can pull out the big guns and do that, people get really happy very
fast.
Yeah, I mean, you know, that's a skill set that is shrinking more and more, we see
out there, and the people who, you know, know stuff like that are gold to us. But they're also
gold to whatever sort of software team they're on, just to echo what Patrick said.
So I have a question about self-modifying code.
How does that not get flagged by the antivirus?
There's a couple.
Great question.
Or is the antivirus just that crappy?
Because if I was going to write an antivirus, that'd be the first thing I would just flag
as a virus.
So not all self-modifying code will get flagged to start with,
but you do run into a lot more risk of it.
And Mark has looked into that more than I have,
so I'll let him talk about it.
Right, and so that's sort of the double-edged sword
about all these trusted execution environments.
On one hand, you would like to have your code somewhere protected
so that no one else knows what's going on in the container with it.
But that's also something malware authors, virus writers, would love, so that
legitimate processes on the machine can't inspect and see what the code is going to do before
it's executed.
And that's a tug of war in all these sorts of TE environments.
I mean, the way it's often solved is that there's some sort of signature that's done
so that you at least say where this came from, so that if it maybe fails or does something malicious,
afterwards, you know, you can trace back to, you know, it came from Microsoft, or it came from
a non-trusted party. So maybe next time, I won't trust them. So you can decide sort of what level
of trust you give to each one. If it's one that has no trust, then you'd better run it,
you know, on a clean machine, isolated.
you know, on a clean machine isolated. But if it's something that came from someone who you trust and you say,
okay, I'll let it run and I'll trust it. It's going to do what I expect.
Yeah, that makes sense. Yeah, I had this when I was cleaning off this virus off my machine,
I noticed that I was kind of expecting it to be more malicious than it was. I went through all my services and sure enough, there was a service that was,
I don't remember the name of it, but it was definitely, it is a name that stood out.
And, as we said in the beginning, for most cases,
you don't have to be Fort Knox. You shouldn't expect someone from CSI or...
Who's the guy
from Mission Impossible? I'm totally drawing a blank.
The Scientologist?
Tom Cruise.
You don't expect Tom Cruise to
get into your PC.
It's Ving Rhames who does that, right?
Yeah, right.
But you can actually stop, you know, 99%
of the problems. And for
average people out there
who don't have that much
at stake who aren't
holding Fort Knox in their basement
they get you know a phenomenal amount
of protection so very cool
Fort Knox hardware is the way to go.
I don't know. Is Fort Knox even still significant? Because I heard the whole gold thing.
Well, obviously gold doesn't match to dollars anymore. That went away a long time ago.
But I think even, I wonder if, I don't know if Fort Knox even holds that much.
They claim they do. But if you trust the conspiracy theories, they haven't done an audit in a while so it may all be gone and spent
oh yeah it's just holding UFOs
they just spent it and didn't tell anyone
oh okay
I want the keys to that
center of the internet that they visited
the Avengers
that's where I want
Oh yeah, and like somewhere in Norway
or something?
Yeah.
Yeah, it's always in some frigid environment.
Even the Watchmen, right?
The main base.
I don't want to spoil the movie,
but there's one part where they go to an evil lair,
and it's always in some glacier somewhere.
So do you guys find yourself,
so if we want to say the cat and the mouse,
the good guys and the bad guys, we'll just all pretend that we trust you, as we talked about,
and that you're the good guys.
Do you have to put on your bad guy hat and
do stuff to keep up
on the latest and greatest of
disassembly techniques and that kind of stuff?
Do you find yourself...
Somebody mentioned going to Black Hat,
that kind of stuff.
How much would you say is that?
Yeah, go ahead.
So to be a good security guy, you've got to be able to think like the bad guys.
The only difference between us and the bad guys should be what side we're on.
If you're not thinking about the possible attacks and the possible ways to exploit systems,
then you're going to be blind to everything. So, I mean, in everything. Even, you know, for most of us, it's not just in what we do
now, but in other systems. Like, you know, someone is setting up a system
to, you know, manage lines for a fair or something like that,
and we can see the potential exploits there.
I mean, you've got to be able to look at everything as a potential attack.
Every time I get on an airplane, that's the way I think.
Yeah, that doesn't go well for me
because I don't typically start talking about it.
I shouldn't talk about this.
You're listening.
So how much would you consider yourself,
yeah, so you guys mentioned that,
like you think in a security mind,
you even say like as security,
I don't know what you say,
engineer or security researchers or security,
like how much of it would you say is that
versus people who just say like, oh, I'm a programmer and don't assign that kind of label. Like,
is there a real difference between programmers who program with security minded and the rest?
There is, yeah. I mean, if I would say one, the big difference is, you know, there's lots of
skills that you pick up over time, as far as idioms and things like that, that are, you know,
takes lots of training. But the most important skill is being able to sort of understand the threats, the potential
threats, the potential attackers and the potential risks and what you're willing to risk.
I mean, you know, there's a consequence to security, right?
I mean, you could have max security and then your users can't do anything and you don't
want that, right?
So you've got to sort of at least understand what
you want, what your users want, and what the right balance is.
Gotcha. It'd be like having, you know, 12 locks on your door. I mean, sure, but it just would take you a long time to get in.
Exactly.
this might get a little meta but when you're writing the code to do the tamper protection
these processes you said that encrypt an executable or encrypt a library and that kind of stuff.
When you're writing that code, do you guys have like a pretty set strict set of guidelines you follow to make sure that code itself isn't filled with flaws?
Or is it something that you kind of like have an intuitive sense as an individual when you guys do code reviews?
Do you have extra things you check to make sure you're not introducing problems?
Well, there's a couple layers of validation, right?
I mean, there's one thing where you, you know, when you're doing the initial design for, is this thing going to work?
How are you going to exploit it just from a design point of view?
And then there's the actual implementation stuff where that's where the code reviews come in.
Does it conform to, you know, the design?
And in every design, no design is perfect. There's always, you know,
trade-offs you make, decisions on, you know,
what you protect more, what less,
what potential threats are you willing to give up
for additional benefits.
And even with our tool, the toolkit that we're providing,
we're giving the developers the ability to make that trade-off.
There's a couple of like really simple options
you can give on the command line
that provide sort of a trade-off
between the level of security
and the level of performance you get.
You can give a command line,
like how frequently do you mutate?
How often do you go through the code
and do we change stuff?
If you do it very frequently,
it'll be much more secure,
but at the same time, it'll be a lot slower.
If you do it less frequently,
less secure, but your code will run faster.
And this is for the application level sort of stuff,
and there's similar trade-offs that we make all the time.
So we come together in meetings and talk about the trade-offs,
talk about the threats, and then based on those,
we make some of our best calls based on what we perceive to be
the best interest of the users,
or we can provide a knob that the user can tune and say,
hey, it'll be up to the user.
We'll give them these tools, and we'll warn them how to use them
and put it in their hands.
But in the toolkit code that you guys write itself,
is that such that it's, oh, I guess kind of like public key encryption.
You can know the methodology.
It doesn't really help you.
Or is it that you have to take special steps in writing those tools
so that if someone got a hold of them,
they wouldn't be able to now exploit everyone
who's ever used your toolkit before.
I think there's a little bit of both.
There are definitely, you know,
part of the way the thing works is that, you know,
there are, you know, obviously some secrets
that are stored in this thing that's encrypting and decrypting.
So knowing how that behaves can give some insights to attackers.
But in addition, that's not going to help them with
breaking an existing thing, or at least tampering with something;
they're still going to have to
extract all the code and reverse-engineer it.
So there's a little bit of both.
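The public-key analogy raised here is essentially Kerckhoffs's principle: a sound design stays secure even when everything except the key is public. A minimal illustration of that principle, unrelated to the toolkit's actual internals, using Python's standard `hmac` module:

```python
import hmac
import hashlib

# The algorithm (HMAC-SHA256) is completely public; only the key is secret.
SECRET_KEY = b"only-the-defender-knows-this"

def sign(message: bytes, key: bytes) -> bytes:
    """Produce an authentication tag for a message under a key."""
    return hmac.new(key, message, hashlib.sha256).digest()

msg = b"launch code: 0000"
tag = sign(msg, SECRET_KEY)

# An attacker who knows the full methodology but guesses the key still fails:
forged = sign(msg, b"attacker-guess")
print(hmac.compare_digest(tag, forged))                 # forgery rejected
print(hmac.compare_digest(tag, sign(msg, SECRET_KEY)))  # real key verifies
```

Knowing exactly how `sign` works gives the attacker no shortcut; the secret lives in the key, not the method.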
Yeah.
We take some effort in our release process.
We'll make sure we scrub out a lot of debug messages,
any symbols that really say exactly how we're doing something,
anything that would let us trace a specific, say, a given instruction all the way through from input until we know where it's encrypted in the final output.
We'll make sure we scrub that sort of stuff out.
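Scrubbing matters because leftover text is often the first thing an attacker looks for, typically with the Unix `strings` utility, which takes only a few lines to imitate. The fake binary and its debug message below are made up for illustration:

```python
import re

def strings(blob: bytes, min_len: int = 6) -> list:
    """Mimic the Unix `strings` tool: pull printable ASCII runs out of a binary."""
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, blob)]

# A pretend release binary with a leftover debug message baked in.
binary = (b"\x7fELF\x02\x01" + b"\x00" * 16 +
          b"DEBUG: encrypting block 3 with round key at 0x4f20" + b"\x90" * 8)

print(strings(binary))  # the debug string pops right out
```

One forgotten `DEBUG:` line like that can hand an attacker a map of exactly where and how a block gets encrypted, which is why a release process scrubs it.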
But, you know, it's kind of that defense in depth, right?
How deep do we want to defend it?
And we don't encrypt the entire program with its own methodology because that would have a huge performance impact.
Is the toolkit protected by the toolkit?
The code verification library, by its very nature, has to be able to be encrypted by iProt.
But we don't generally encrypt much else of the toolkit with itself.
Because of the nature of what it is, what the tool set is, the risk analysis just hasn't taken us there at this point.
Although I'm sure Steve hears this, he's like,
hmm, that's something I can put on the backlog.
Yeah, right.
To do inception system.
You just keep encrypting yourself and just watch the binary get bigger and bigger.
And it does get bigger.
And you don't have the answer.
But you mentioned earlier, if you're running on an Intel processor, you get this stuff.
What happens if you run the code signed by this on a non-Intel processor,
like an AMD processor?
Okay, Intel instruction set.
Okay, sorry.
AMD will work.
We don't support ARM today.
It's something we've definitely looked at.
Oh, well, how does the Android work then?
Well, there's Intel Android, right?
Oh, I didn't know that.
Okay.
That explains that.
For everybody listening, Intel does support Android.
In fact, Intel is one of the biggest contributors to the kernel.
I did not know that.
I mean, I know iPhones are all ARM, and so I just assumed that Androids were all ARM, too.
I did not know at all.
Learned a lot of things today. Obviously, we're hoping to change that. We're hoping, you know,
we can convince, you know, Apple
we've got a good enough product that, you know, they can put it in
their phones and stuff like that. You know, that's
what Intel is trying to do. But
in order to do that, we've got to make sure that it runs really
well. You know, it runs really well in IA.
So, that's one of the things we're doing. Well, not for the iPhone case, but...
Yeah, but there are actually Intel phones. Of course,
I think the first one just started shipping
in the US a short while ago, but I think most of them are like
in markets like China and India and stuff like
that. I actually saw
it. Oh, cool. I actually saw an ad
for one. Where was I? I think it was in
Russia when I was
traveling there this past summer.
I saw an ad for a ZenFone in Russia.
An Intel-powered ZenFone.
Oh, very cool. And so ZenFone is
owned by Intel then? No, no, no.
So Intel does not sell
phones, right? We sell processors.
Oh, I see. So we sell it to...
Got it.
So when you do
tamper protection for an Android thing, are there different heuristics that you use? Because obviously battery life is more of a concern, right?
It does have a big thing to do with it, but again, we put that in the developer's hands. It's the same fundamental trade-offs even on an
Intel device and a laptop, which a lot of the
form factors are. So again, if you crank
it up a lot, you'll get more security,
but more processing means more
battery drain. So again, we put that
in the developer's hands, but we don't
specifically right now have automated
tuning is what we call it when you
adjust the
security versus performance trade-offs.
But that's something that we're
planning to add in future versions.
Cool. Very cool.
And so just to explain
the deployment from your side.
So you sell
a piece of software,
someone buys, installs it on their machine,
and then they have those
tools that they can integrate into Visual Studio or into their make files or what have you,
Gradle, what have you. Yes. Actually, you can get it for free right now. The beta
you can download from our website and try it out without paying for this version.
Oh, that is awesome.
Wow.
Yeah, I'm so glad you said that. Yeah, definitely.
If you're interested in this stuff, you should definitely do that.
Everyone at home should check that out.
You should build a Hello World and then run this on it and see what happens.
We are very interested.
Anything users want to tell us about it,
the beta is there because we really want to know
what people want to do with it.
So if you try it, give us feedback.
We want feedback.
We are most hungry for feedback.
Yeah, I mean, if you build something that does
some kind of Monte Carlo estimation or whatever,
I mean, tic-tac-toe or whatever,
and you run binary tamper protection on it and
your tic-tac-toe game doesn't work anymore,
let them know. I'm sure they would
love to talk to you.
A few more things.
The URL for that is software.intel.com/tamper-protection.
Norm, do we have the
Hello World video up there?
I recently just did
a quick sort of, you know,
in Visual Studio, just a quick, you know,
like a six-minute video that basically shows you how
to do a really dumb, you know,
Hello World-style program
and run it through iProt. And I'm not sure
if it's been published yet. Norm, do you know?
Or maybe it will have had been published
by the time this gets published.
Oh, nice use of the, what is it,
the past participle or something?
Will have had been published.
Nice.
I think it's the future past participle.
Future past perfect, I think.
Something like that, yeah.
Yeah, Thad went to a good school.
Oh, I was on a vacation with my family, and there was a juggler.
And there weren't a lot of people in the audience.
And so he said, who wants to go on the stage?
And my wife yelled, this guy does.
And so I ended up on the stage.
And he said, do you know... and he mentions, I honestly don't remember the lady's name, but it's some lady who could shoot
a bow and arrow over her shoulder, and I guess she's some famous lady of that time,
the 1300s or 1400s, I don't remember the name. He goes, do you know,
do you know Annie Oakley? And I said... and I knew he was
going to shoot a bow and arrow. He hit a Coke
can off my head. I said, did she shoot
a bow blindfolded? And he
goes, you went to public school, didn't you?
It's like
all sad trombone played in the background.
I don't know if we got an answer about whether the video was posted.
I'm waiting on pins and needles.
We could actually go to the website and see.
I think we may have lost Norm.
I recommend going to the website and seeing if it's there.
If it's not, go back again.
Yeah, the marketing video is up there.
I know that.
We are actively getting collateral up there based on...
I think Mark was at DEF CON
this last week
and so he's learned
some more about what we want to put up on that site.
Actually, I just remastered
it like two nights ago, so I don't
know if they managed to get it through all the
appropriate legal channels.
You're like a media mogul.
Actually, no, it was my second
time using Camtasia.
The first time was with the first try.
But I've learned so much.
So if you like the sound of the voices on the podcast,
tune into the video.
That's right.
So do you find at an environment like Black Hat
where I typically...
I know it's both researchers and people trying to explain.
It's kind of a mix of people.
But are they generally receptive to this kind of thing?
Or are they kind of like, oh, that's not going to be fun when that comes out?
No, you know, yeah.
So there's all kinds of people that go there.
You know, there's a lot of, you know, IT enterprise types.
There's coder types.
There's, you know, government types that want to protect. And so
this kind of technology is of high interest. And it's not the same thing as like network
firewalls and things. So, you'll see, defense in depth is a phrase you'll hear a lot
in the security space. And so there's a lot of people at certain layers, and
the people that are
focused and interested in binary security,
definitely this is
right down their alley.
I always thought it'd be cool to go to DEF CON.
I'd always be terrified to bring any electronics, but I've never
been able to come up with a legitimate work reason to go.
Just don't connect to the network.
Yeah.
I know, I heard what Snowden said.
I think I'm not taking the phone with me.
Yeah, I'll be that guy who goes
to DEF CON and leaves with
no personal information.
I know, it's already
happened to me once.
Yeah, at DEF CON, there's a
big video screen
flashing up
IP addresses and MAC
addresses of devices
that have been scanned at
DEF CON and so it's
kind of a wall of shame so definitely keep
your devices off
yeah
don't bring them there to test them
no
and then melt them as soon as you walk out
yes
sell it to somebody else
oh that's nice of you
we can start a whole enterprise around that
a business model
I think the Russian mafia already has a speed dial
oh
alright well did you guys have any last things you wanted to leave us with?
Anything we didn't bring up? Any questions we should have asked you but didn't?
Please encourage your audience to study more low-level stuff so we have more
people to hire with the right skills.
Yeah. I mean, there's so much to do. And low-level is, I think,
going to become more and more important.
I mean, we had basically 10, 15 years of cloud.
And now there's Internet of Things, there's drones, there's autonomous vehicles, there's tamper protection.
There's so many different things now that are requiring low-level firmware or low-level software engineers,
that it definitely would be a great field.
If you're interested in that, you should definitely look at the future.
Don't look at right now or look at five years ago where cloud was so big.
Look at sort of what you want to do, and if you're passionate about that,
you should definitely go for it.
There'll be tons of opportunity.
Exactly.
I mean, if you look at where things are going with IoT, the Internet of Things, I mean, pretty
much everything is gonna have some kind of microcontroller in it. And it's
both a wonderful and a very scary thing that your light bulbs could
potentially be, you know, compromised by someone. Mm-hmm. Or worse, stop lights, your
car. This is scary stuff, so we need smart people to help solve these problems.
Yeah, you know, in the security developer space,
there's a very big demand and not a whole lot of supply.
So if someone is looking to diversify
or just to create a niche for themselves,
and they like coding,
security space is really a great place to be. It's like doing puzzles. If you
like to think, if you like to think through hard problems, it's very rewarding and you'll find a
lot of opportunity. And so, you know, get good on tools like, you know, Intel Tamper Protection
Toolkit and get good with disassemblers.
And you'll find yourself, as you spend time doing that,
you're the go-to guy for anything security.
And just by getting familiar with some of those tools
and some of the thinking patterns that Mark mentioned,
you'll find your career growing in a wonderful way.
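One zero-cost way to act on that advice: before picking up native tools like objdump or a commercial disassembler, you can get a feel for reading disassembly with Python's built-in `dis` module. The hard-coded license check below is a made-up example, but the exposure it demonstrates is real:

```python
import dis

def check_license(key: str) -> bool:
    """A naive license check with the secret compiled right in."""
    return key == "SECRET-1234"

# Disassemble the function the same way an attacker reads native code with
# objdump or a debugger: the comparison and the constant are in plain sight.
dis.dis(check_license)
```

The `LOAD_CONST` of the secret string and the `COMPARE_OP` show up directly in the listing, which is exactly the kind of thing tamper protection and obfuscation tools exist to hide.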
Yeah.
Start by trying to break stuff and then worry about securing it later.
That's my recommendation there.
But be careful, it'll keep you up at night.
I don't know.
I read like, I remember back in school, I would read like Kevin Mitnick's book about
social engineering or whatever and be like, oh, oh man, this is not good.
Yeah.
Everything in the world is just vulnerable.
Yeah. Or go play EVE Online.
Wait, what? Go play EVE Online? What's the connection? That'll keep you up? Oh, I see.
Yeah, that will. You'll stay up all night playing EVE, managing the security
and trust and things like that.
Okay, oh, I see. Oh, I see the social network there.
Okay, that makes sense.
I've heard crazy stories of 20,000 hours of work
getting just destroyed in some battle.
It's unbelievable.
Actually, even worse than that,
20,000 hours of work gets destroyed by someone who you trusted
who ended up turning on you and stealing your stuff.
And that's supposed to be fun? It sounds like real life.
Ha! For people who think that way, it can be interesting.
How do you protect it? It sounds awesome. How do you protect...
You know, I know somebody who doesn't even play that much, but through his, I guess, social graces,
is able to just have incredible wealth on EVE Online
just because he's on the top of kind of, I guess,
I don't want to say a pyramid scheme,
but he's built like a multi-level marketing.
Exactly.
It's a great model of the real world.
Right.
Cool.
Thank you guys so much for coming on the show
I'll post a link to
your email
addresses if people want to reach out to you
you know if they
want to come
and work with you guys
or if they just have questions about
tamper protection or if their
trial expired and they want to beg you guys
for another 30 days or something, so they
have a way to contact you guys.
This has been absolutely fascinating.
Thanks again for coming on the show.
It's totally awesome.
Thanks, guys.
Pleasure.