Command Line Heroes - Lurking Logic Bombs
Episode Date: March 22, 2022

Logic bombs rarely have warning sounds. The victims mostly don't know to expect one. And even when a logic bomb is discovered before it's triggered, there isn't always enough time to defuse it. But there are ways to stop them in time. Paul Ducklin recounts the race to defuse the CIH logic bomb—and the horrible realization of how widespread it was. Costin Raiu explains how logic bombs get planted, and all the different kinds of damage they can do. And Manuel Egele shares some strategies for detecting logic bombs before their conditions are met.

If you want to read up on some of our research on logic bombs, you can check out all our bonus material over at redhat.com/commandlineheroes. Follow along with the episode transcript.
Transcript
Hello?
Hi. Who's this?
Um, who's this?
I think you have the wrong number.
Do I?
I'm hanging up. I don't have time for this.
No, no, don't. I want to talk.
Look, I don't know who you are. I've got to go.
I'm late to meet a friend.
Are you?
Then why did you just make popcorn?
Okay.
You've probably seen this one, right? An innocent girl is home alone one night when she gets a deeply disturbing phone call.
She runs around the house, peering out windows, locking the doors.
She's stuck on the idea that this creepy caller could get in.
But what if the call is coming from inside the house?
Just like that classic horror scene, digital security breaches can come from inside the house.
They've already gained access to all the computers they're going to attack before anybody even knows they exist.
I'm talking about one of the most terrifying kinds of malware out there, the logic bomb.
I'm Saron Yitbarek, and this is Command Line Heroes,
an original podcast from Red Hat. All season long, we're learning about the malware,
cybercrimes, and bad actors that security teams guard against. In our first two episodes,
we looked at vectors of attack, the way worms, viruses, and Trojan horses infect our devices.
Now, with logic bombs, we're looking at a dangerous kind of payload.
Logic bombs bide their time and go off at a particular moment, often as a punishment or retribution.
They're a powerful way to set off a massive, coordinated ambush.
We often think of logic bombs as inside jobs.
Maybe a disgruntled sysadmin plants one and it just sits there on the server for years until the day that sysadmin gets fired.
Then, boom.
All logic bombs attack from inside the house.
But not all logic bombs are actually planted by insiders.
We're starting this episode with an example of a truly dangerous logic bomb
that was created by a lone student.
Someone who didn't seem to have any special access at all.
You know those little rust spots you sometimes see on an older car and you think,
oh, I'll just scratch the rust off and you start scratching and pretty soon the whole thing's,
oh no, I wish I hadn't started.
Paul Ducklin is a principal research scientist at Sophos, the security company. He was working there back in the late 90s when a strange new virus called CIH appeared.
It was spreading around the world fast
on any computer that ran Windows.
And in the late 90s, that was a lot of computers.
Ducklin remembers a call he got on a Friday afternoon.
He was getting ready for the weekend.
It was a rare and beautiful spring day
in England, and his Italian motorcycle was waiting outside. But the voice on the phone was terrified.
We've just suddenly realized it is all over the business.
The client needed immediate help removing a virus off their computers. Why the rush?
Because the virus was a logic bomb.
It had a detonation date, which was visible from its source code.
Think of a time bomb with that glowing red countdown.
Once CIH was detected, it was a race against the clock.
As anyone who was around then will remember,
the kind of D-Day, the big bad day for this CIH virus was the 26th of April, 1999.
The 26th of April also happened to be the anniversary of the Chernobyl disaster.
Nobody knew at the time if that was significant, but some people started calling it the Chernobyl virus, an allusion to both the date and the scale of this impending nightmare.
Whatever you call it, Chernobyl, CIH, its deadline was days away.
And Ducklin knew he had to help whoever he could.
I just said to this guy, why don't we just form a team of two?
I'll come around and help you.
And when there's nobody in the office,
we will close all the doors, put on all the lights, build some cleanup gear,
and we'll just go around and make sure every computer is okay.
Later at home, Ducklin felt pretty sure they'd managed to save that company.
He didn't hear back from them on April 27th. But that last-minute effort to save
their computers was just one of the countless emergencies taking place all around the world.
By the time CIH was ready to detonate, 60 million computers had the virus in waiting.
My name is Costin Raiu, and I'm the head of the global research and analysis team at
Kaspersky. Costin Raiu is based in Bucharest, where he's devoted his life to fighting computer
viruses. He remembers the year leading up to CIH's blow. The warning signs were there.
We started seeing software. It was mostly pirate software that was being
distributed through different internet websites. Sometimes we even received CD-ROMs from magazines
which were infected with this CIH virus. In many countries in the late 90s, people didn't have
access to fast internet, so they were sharing
a lot of software via CD-ROMs. CD-ROMs were particularly tricky to deal with because you
couldn't disinfect them. Some people kept a Windows 98 CD in a drawer and just kept infecting their own
computer and others over and over again. And then, on top of the CD-ROM problem, CIH had a
bewildering ability to hide itself from detection. This malware seemed capable of slipping through
the cracks, staying out of sight from ordinary antivirus scans. Instead of copying itself at
the end of the files, it would try to find holes,
effectively holes in these executables,
and hide itself in one of these holes.
And as a result of this technique,
basically an infection would be almost invisible.
This technique that Raiu is describing
is called code caving.
Most viruses, like he said, attach their code to the end
of an infected file. A good sysadmin could memorize the size of well-known files on their system,
and if the size changed, they'd know that something was up. But CIH was able to split its code up
and insert slices of itself into those unused gaps in software utilities.
And that meant the file size would never actually change.
The result was that from the outside, it was very difficult to notice that an intruder had arrived.
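The code-caving trick Raiu describes can be reduced to a toy sketch. This is purely illustrative, assuming a simplified model where "holes" are runs of zero padding bytes; it is not CIH's actual infection routine:

```python
# Toy sketch of "code caving": find runs of padding bytes (the "holes")
# inside a binary and hide a payload there, so the file size never changes.
# Illustrative only -- real infectors parse executable formats, not raw bytes.

def find_caves(data: bytes, min_size: int = 16):
    """Return (offset, length) pairs for runs of zero bytes >= min_size."""
    caves, start = [], None
    for i, b in enumerate(data):
        if b == 0 and start is None:
            start = i
        elif b != 0 and start is not None:
            if i - start >= min_size:
                caves.append((start, i - start))
            start = None
    if start is not None and len(data) - start >= min_size:
        caves.append((start, len(data) - start))
    return caves

def hide_in_cave(data: bytes, payload: bytes) -> bytes:
    """Overwrite the first large-enough cave with the payload."""
    for offset, length in find_caves(data):
        if length >= len(payload):
            return data[:offset] + payload + data[offset + len(payload):]
    raise ValueError("no cave large enough")

# The "infected" file is byte-for-byte the same size as the original,
# which is exactly why size-watching sysadmins never noticed.
original = b"\x90" * 40 + b"\x00" * 64 + b"\x90" * 40
infected = hide_in_cave(original, b"PAYLOAD")
assert len(infected) == len(original)
```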
Many people had no clue their computers were infected. A stealth approach
like this is especially useful for a logic bomb because they rely on our ignorance. They need to
hang out undetected while they wait for that one dramatic moment when they detonate.
Stopping CIH before it went off wasn't possible. There were just too many infected
computers, too many shadowy cases where the virus remained hidden. And besides, not everybody had a
specialist like Paul Ducklin on hand to save the day. So some computers, a lot of computers actually,
were going to be infected when this particular logic bomb went off.
It was kind of a waiting game, waiting to see if this was going to be the hydrogen bomb or more like a firecracker.
Yeah, it wasn't a firecracker. On April 26, 1999, if you turned on an infected computer, CIH was turned on too.
And it got to work.
For starters, it erased the first megabyte of information on a hard drive.
This is a critical area.
It's called the partition table.
And it acts like a table of contents for your computer.
Without it, computers are lost.
Next, CIH attacked the basic input-output system, the BIOS,
which could leave the computer entirely useless.
And that was the really surprising thing about this event.
CIH was attacking computers in a very new way.
Perhaps the most significant feature was a destructive payload,
the ability to wipe the system BIOS,
to wipe it with trash, rendering computers unbootable and sometimes kind of useless.
This was the first virus that actually left computers requiring new hardware.
It would write garbage into your BIOS, so you couldn't boot your computer at all.
For the unlucky ones, this was unfixable, and you'd have to get a whole new motherboard.
By the time CIH had finished going off,
it had threatened the work of millions of people,
especially throughout Asia,
causing enormous stress and panic,
as well as a billion US dollars in damage.
And nobody knew why.
The guy who wrote this virus, he didn't do it to make money like in the current era. It was just, hey, look how clever I am.
Four days after CIH exploded, Taiwan's police arrested a 24-year-old man named Chen Ing-hau. What are those initials? C-I-H. He was a student, it turned out, at Tatung University in Taiwan,
and he'd stuck his initials inside the virus code,
like a signature at the bottom of a painting.
I guess the guy just thought he was being really clever.
Chen Ing-hau has said he wasn't trying to be malicious,
that he just wanted to point out the security flaws in antivirus software that folks were relying on back then.
Like a lot of stories we're covering this season,
the CIH logic bomb seems to have been created
by someone who didn't think very clearly about potential victims.
For people who got hit by this, it was really unhappy times.
Chen Ing-hau apparently didn't have especially evil intentions.
But logic bombs can be a lot more strategic.
And that's what makes them so dangerous.
Logic bomb means there's some specific, if you like, subcomponent of the malware
that under certain very specific circumstances,
in this case, it was just the date on the calendar is the 26th of April, but it could be something more subtle, such as
go and do a database query. Oh, look, such and such a contractor by the name Jones or Smith
or whatever got fired last week, engage the logic bomb and go into destruction mode.
This is one of the classic scenarios for logic bombs. You could call it a revenge plot.
There is, if you like, a sort of if this, then that feature in the program that is specifically designed to look for a non-obvious condition and then deliberately do something bad
that nobody would have intended to put in the program.
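Stripped to its skeleton, the if-this-then-that structure Ducklin describes is just a guarded payload. A minimal sketch, using the CIH trigger date from this episode and a harmless stub where the destructive code would sit:

```python
import datetime

# The "if this, then that" shape of a logic bomb, reduced to a sketch.
# TRIGGER is the CIH detonation date; the payload is a harmless stand-in.
TRIGGER = (4, 26)  # April 26

def is_triggered(today: datetime.date) -> bool:
    """The hidden condition: stays false until the calendar matches."""
    return (today.month, today.day) == TRIGGER

def run(today: datetime.date) -> str:
    if is_triggered(today):
        return "detonate"  # the real CIH wiped the partition table and BIOS here
    return "lie dormant"

assert run(datetime.date(1999, 4, 25)) == "lie dormant"
assert run(datetime.date(1999, 4, 26)) == "detonate"
```

The condition could just as easily be the database query Ducklin mentions; the shape stays the same, which is what makes it so hard to spot.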
Back in the 90s, the if-then scenarios were often pretty benign.
The logic bomb date would be to celebrate some political event
or it would be somebody's birthday. It
was almost like the logic bomb was part of the puzzle. Whereas I think these days, things are
quite different, right? You're putting a logic bomb there because you're the attacker, you know
the conditions that trigger it, and if nobody else has realized, it gives you a secret way back in later.
Costin Raiu told us that these days, logic bombs can be strategically placed throughout our digital infrastructure or on the computers of major companies, simply waiting for any
number of if-then scenarios to occur.
There could be a lot of logic bombs hidden in critical infrastructures around the world. And
the reason I'm saying that is because we have seen the signs of such things in some of our research.
Every now and then, we discover malware that is unlike anything else we have seen up to that day. And one very good example is a malware that has a hidden capability.
And it's actually cryptographically hidden
that you can't simply determine what it is supposed to do.
So the inner payload, if you want, the warhead is encrypted
and the attackers are essentially the only ones with the keys.
In the same way that Chen Ing-hau was able to use code caving to hide his virus inside coding gaps,
there could easily be cryptographic tricks or new undiscovered ways that bad actors are burying
logic bombs around the digital world. It reminds me of that disturbing scene
where the call is coming from inside the house.
We can guess that some threats are already inside the system.
We can assume there are logic bombs we don't know about,
hidden bombs that are set to go off
whenever certain requirements are met.
The election of a certain leader could trigger one, or maybe another when gas prices hit a certain high.
The trigger could be anything that a hacker sees as a moment to unleash chaos.
We see them deployed in different places, especially critical infrastructure and especially energy-related
companies. Of course, whenever we saw things like this, we disinfected them, we deleted them
with our products. But the fact that we are seeing such cases, it kind of makes me believe
that there are hidden warheads planted around the internet,
probably in critical points, in critical infrastructure.
The usual security responses we use against viruses and worms can't be applied if we don't even know the problem is there.
Since the CIH logic bomb, there have been plenty more cautionary tales.
Bank servers are attacked. Databases at the TSA are threatened. I would suspect that there's well-positioned, well-placed logic bombs in critical places around the world,
which is, I would say, maybe just another
dimension of cyber warfare.
We know we need to up our security game to protect against these potential logic bombs. But how far has security come since April
26, 1999, when the CIH bomb exploded? Could we prevent all that damage today?
That is a good question.
Manuel Egele is an associate professor at Boston University,
and his research focus is software security.
So the implementation, the fact that the person writing it was able to write it,
I don't think that there was a good preventative measure that would have hindered the implementation.
From a technical perspective, I don't know, even in retrospect,
of a good mechanism to say that something like this should not be possible.
So far this season, we've talked about digital hygiene,
all the common-sense practices that everyday users can employ
to keep their devices safe.
Careful what link you click on.
Check the URL for that little padlock icon.
Things like that.
But here's the thing about logic bombs.
Because they're often engineered by insiders,
and because they specialize in stealth,
sitting there and waiting in silence,
ordinary digital hygiene might not be enough.
Egele points out that institutional moves are necessary too.
Large companies, for example, could set up their systems
so software written by a particular coder
doesn't work after they leave the company. That might make it harder to plant a logic bomb. Better yet,
security pros can be constantly scanning for suspicious code.
Is there code that is potentially nefarious that only gets executed in very narrow circumstances?
That might be a warning sign.
An example that looks at what day is it today
and then only executes code if today is a specific trigger date,
that would be something that an automated analysis can very well detect.
That's a start.
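A very crude first pass at the kind of automated analysis described above might simply flag source lines that compare today's date against a hard-coded constant. A toy sketch, where the patterns are illustrative, not any real product's rules:

```python
import re

# Toy static scan: flag lines that compare the current date to a hard-coded
# value -- the "only runs on a specific trigger date" pattern described above.
# Real analyzers work on parsed code and data flow, not raw text.
SUSPICIOUS = [
    re.compile(r"\.month\s*==\s*\d+\s+and\s+\S*\.day\s*==\s*\d+"),
    re.compile(r"==\s*['\"]\d{4}-\d{2}-\d{2}['\"]"),  # == "1999-04-26"
]

def flag_trigger_dates(source: str):
    """Return (line number, line) pairs that match a suspicious pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if any(p.search(line) for p in SUSPICIOUS):
            hits.append((lineno, line.strip()))
    return hits

sample = '''
today = datetime.date.today()
if today.month == 4 and today.day == 26:
    wipe_disk()
'''
print(flag_trigger_dates(sample))  # → [(3, 'if today.month == 4 and today.day == 26:')]
```

A real bomb author would obfuscate the comparison, of course, which is why this is only a start.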
Big organizations also need to keep a kind of ongoing audit, though.
Something that lets them know who's writing what.
To be able to attribute every piece of code to a user that either authored that code or installed that code on a given system.
In addition to keeping track of code attribution, limiting access also sounds like a good idea. Operational systems should be locked down, giving access only to people who actually need it.
So if someone needs to analyze data, they need to get access to that data.
Absolutely.
Does that mean at the same time that they can schedule code to be executed sometime in the future?
Probably not.
And on top of all that, Egele always recommends cryptographic verification.
I would consider it a standard good practice nowadays.
What that means is a cryptographic checksum is attached to files.
It can be used to verify that the file hasn't been altered down the road.
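The checksum idea described above can be sketched in a few lines of Python. The function names here are illustrative, not any specific tool's API:

```python
import hashlib
import hmac

# Sketch of checksum verification: record a file's SHA-256 digest at the
# moment it is trusted, then refuse to treat the bytes as unmodified if
# the digest ever changes.
def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Digest recorded at install time, when the file is known-good.
trusted_digest = sha256_hex(b"original program bytes")

def is_unmodified(data: bytes, expected_hex: str) -> bool:
    # hmac.compare_digest does a constant-time comparison
    return hmac.compare_digest(sha256_hex(data), expected_hex)

assert is_unmodified(b"original program bytes", trusted_digest)
assert not is_unmodified(b"original program bytes\x00bomb", trusted_digest)
```

A bare checksum only helps if it's stored somewhere the attacker can't rewrite alongside the file, which is why the conversation moves on to cryptographically signed software.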
So whether we're talking about something as small as my smartphone or as big
as a company's headquarters, you can limit things so you're only executing software that's
cryptographically signed. There are lots of common sense solutions like these that can keep people
from planting a logic bomb. But why not use every security measure we've got? We need a whole arsenal of systematic
efforts to secure our digital work and lives. I think the realistic best we can hope for
is to make it more costly and more complicated for attackers to be successful.
Security is never about one perfect strategy. It's a whole attitude of vigilance.
You have to assume that attackers are using every new trick they can employ.
So we have to do the same.
We have to even assume some problems are brilliantly hidden,
just waiting to cause havoc down the road.
Logic bombs force us to investigate every nook and cranny. They remind us that the
villain could be calling from inside the house. So I know some of this stuff sounds quite scary
and can be stressful. But here's the good news. Even in a worst-case scenario, where a logic bomb does a lot of damage,
we have a chance afterward to sweep up the rubble and learn what went wrong.
We can learn from these attacks and improve security going forward.
Every horror story points out the vulnerabilities that we've got to address next.
And meanwhile, we're getting better and better at spotting those warning signs.
I'm Saron Yitbarek, and this is Command Line Heroes, an original podcast from Red Hat.
Next time, we switch from the inside job of logic bombs to a major external threat, botnets. Those hordes of zombified computers
that obey their herders' commands. Never miss an episode by following or subscribing
wherever you get your podcasts. And until next time, keep on coding.
Hi, I'm Jeff Ligon.
I'm the Director of Engineering for Edge and Automotive at Red Hat.
Even 10 years ago, the chaos of running hundreds and thousands of containers in a cluster,
it didn't feel like you could go from that to running just dozens in a car.
But these days, it's coming.
In fact, containers are a big part of the future vision of software-defined vehicles.
And look, if we can get the container revolution to work in cars,
then everything a cloud-native developer can do today can apply to cars.
This huge ecosystem of engineers can start to write applications for automotive.
We can completely change the industry.
This is why Red Hat's open-source approach to edge computing is so important.
The way we collaborate, the way we build together,
it's already making some pretty incredible things possible.
Learn more about them at redhat.com slash edge.