Blog

Malware Analysis Lesson 5; Malware obfuscation techniques

Malware, which can be any type of malicious code [4], and detectors or anti-virus tools are in a continual arms race: malware develops ever more advanced and sophisticated obfuscation techniques, while detectors research more complex detection mechanisms to identify it. This arms race has been ongoing for decades, and traditionally detectors have relied on large databases of known signatures [3], or hashes, of the malware. However, malware often uses a range of techniques to change this signature from one infection (or generation) to the next. This makes it more challenging for detectors and malware analysts to identify the behaviour of malware in a timely manner, or even to identify the code as malicious at all. We are going to look at some obfuscation techniques and describe how detection can be carried out for each. Some methods are highly complex, such as the malware being interwoven into a targeted host file, while others are simpler, like changing the packer used. In all cases they make detection more time consuming and resource intensive, as the malware signature must first be isolated and identified. To compound the woes of signature-based detectors, the method is not effective against new malware using unknown vulnerabilities (think zero-days). This relentless development of new malware variants has made signature-based detection less effective; in response, behavioural, heuristic and sandbox-based detection have been developed. To understand how all this works, we first need to understand the different types of malware and how they obfuscate themselves.


4 Categories of malware

Due to the diverse methods malware uses to obfuscate itself, it is necessary to categorize them. There are four main types of obfuscated malware: Encrypted, Oligomorphic, Polymorphic and Metamorphic. Let’s go through these now.

Encrypted malware

There are two types of encrypted malware: malware using encryption and malware using packers. With encryption, the malware uses encryption to conceal itself from detection. This type of malware is usually composed of a decryptor and its encrypted main body. [1] The method is effective for two reasons: firstly, by encrypting the malicious code it executes, the detector cannot identify the payload’s signature; secondly, by changing the encryption key it uses, the signature of the encrypted code itself changes. [4] To remain obfuscated across multiple generations, and to avoid its encrypted signature being static (and thus identifiable by signature-based detectors), the malware can generate and use a new encryption key every time it runs, keeping its signature unique. For best effect this new key should be generated in a random and unpredictable manner. However, the decryptor portion of the malware cannot be encrypted, as it needs to be executed, so it retains a static signature. Because of this, detection methods that focus on the decryptor’s signature are usually successful. [1]
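To make the key-per-generation idea concrete, here is a minimal Python sketch of the encryption half. It is purely illustrative (the payload bytes and names are invented): the encrypted body's signature changes with every key, while the small routine that reverses the XOR at runtime stays static.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    """XOR is its own inverse: the same routine encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"<malicious payload bytes>"  # placeholder payload
key = os.urandom(16)                    # fresh, unpredictable key each generation
encrypted_body = xor(payload, key)      # the body's signature differs per key

# The decryptor stub that calls xor(encrypted_body, key) at runtime cannot be
# encrypted itself, so its bytes stay static: that is what detectors target.
```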

Packers [3]

Packers are usually legitimate tools that decrease the size of an application while it is stored or transported, like compressing documents, but in a way that still lets the application be executed. Even small changes to the underlying application can drastically change the signature of the resulting packed executable. There are multiple packing applications, and there is research on which packers are most effective at evading detectors. One example: “Jon Oberheide and his colleagues at the University of Michigan wrote PolyPack, a Web-based application that supports 10 packers and 10 malware detection engines” (similar to VirusTotal) [3]. This research, and similar applications, can help malware authors identify which packer is best suited to help their malware avoid detection.

One way packed malware can be detected is by keeping a database of all possible signatures a packed malware can produce. This is very inefficient; a better option is to use what is called “entropy analysis” [3] to identify the packed malware. This can detect packed files but cannot detect the packer used, which can cause difficulties for deeper analysis. PHAD, PE-Probe and MRC all use entropy analysis. Without unpacking the file it can be difficult to know whether it is malware or a legitimate application, especially as we need to identify the right packer to unpack the file. This can be difficult, and packers are commonly used to spread malware.
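As a rough illustration of the idea behind entropy analysis: packed or encrypted data tends towards high Shannon entropy (close to 8 bits per byte), which is what tools like PHAD and PE-Probe exploit. A minimal sketch, with the file name and threshold chosen purely for illustration:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: near 8.0 for random/encrypted data, lower for plain code."""
    counts = Counter(data)
    return -sum(c / len(data) * math.log2(c / len(data)) for c in counts.values())

data = open("suspect.exe", "rb").read()  # hypothetical sample path
if shannon_entropy(data) > 7.2:          # threshold is a rough heuristic
    print("high entropy: likely packed or encrypted")
```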

Oligomorphic and Polymorphic

Malware that can mutate its decryptor from one generation to the next was designed to fix the shortcomings of purely encrypted malware. The first example of this was oligomorphic malware, which was able to change its decryptor. [1] However, oligomorphic malware was initially very limited in the maximum number of decryptor versions it could produce, allowing the signatures of all possibilities to eventually be calculated. This catalogue of signatures allowed detectors to identify all variants of the malware.

Fig. 1 Oligomorphic malware

Polymorphic malware is an encryption method that mutates its static binary code. [3] It was developed to take the ideas of oligomorphic malware further by generating an incalculable number of potential decryptor variants, so that no single signature sequence will match all possible variants of the malware. It achieves this using several very cool obfuscation methods we will talk about later, including dead code insertion, register reassignment, code transposition and instruction substitution [2]. Each time the code runs it mutates itself by using a different key. To make things even more challenging for malware analysts, there are many tools, such as the Mutation Engine, that automate the process, allowing regular, non-obfuscated malware to be converted into polymorphic malware.

To detect these types of malware, detectors make use of tools like sandboxing: the detector executes the malware in a secure emulator, waits for its constant body (the payload) to be decrypted in RAM, and then tries to match a signature. [1] This works because the polymorphic engine does not significantly change the native opcode that runs in memory. [3] Another way to detect polymorphic malware is Neural Pattern Recognition, which has shown a high detection rate, albeit based on a small sample set. [3]

Malware obfuscation is a fast-paced arms race that continuously produces more dangerous malware that is harder to detect. Malware authors attempt to counter sandboxed execution by creating malware that detects when it is running in a virtualised environment and does not decrypt its payload. Others create malware that waits for some event that does not usually occur in a sandbox before decrypting its payload. Detectors are improving all the time and are incorporating advanced techniques to defeat this type of malware. [1] The decrypted code is essentially the same in each case, so RAM/memory-based signature detection is possible. Block hashing can also be effective in identifying memory-based remnants.

Metamorphic malware

With the previous classes of malware, we discussed how the decryptor is changed with each generation to avoid detection. Metamorphic malware builds on this approach by incorporating multiple obfuscation techniques into its payload rather than, or as well as, its decryptor. This way it may not need encryption or packing at all, yet can still be difficult to detect due to its ever-changing signature. It can maintain its behaviour without ever needing to repeat the same set of native opcodes in memory. [3] To do so it needs to be able to recognize, parse and mutate its own body whenever it propagates. [1]

There are two types of metamorphic malware: open-world and closed-world. Open-world malware, as shown by the Conficker worm, leverages a command and control structure, with the malware connecting to its controlling master server to download updates and functionality after the initial infection. Closed-world malware mutates itself from one generation to the next using self-mutating code via a binary transformer, which modifies the binary code itself to avoid detection. [3] Win32/Apparition was the first example to demonstrate these techniques. [3] The methods used to achieve this level of obfuscation are discussed below.

Fig. 2 Metamorphic malware

Obfuscation techniques

Polymorphic and metamorphic malware take advantage of several techniques to obfuscate their code. We are going to go through the main methods now.

Garbage/Dead Code Insertion; Dead code insertion pads out the code with garbage to change the file’s signature. This garbage could be randomly generated strings, or new instructions that do nothing, or at least do not change the malicious operation of the code. NOP or CLC instructions can be used to fill out the code with no-operation instructions; using paired push and pop operations on registers is another way. These garbage insertions can be defeated by modern detectors, which identify the garbage, such as operations that do nothing, and delete it from the code before analysing and comparing the malware’s signature. [8]
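A toy illustration of the effect (not a working mutation engine): splice x86 NOP bytes (0x90) into a hypothetical instruction stream at an instruction boundary, and the hash, i.e. the signature, changes while the behaviour does not.

```python
import hashlib

# hypothetical instruction stream: mov eax, 1 (B8 01 00 00 00) then ret (C3)
code = bytes([0xB8, 0x01, 0x00, 0x00, 0x00, 0xC3])

# splice two NOPs in at the instruction boundary; behaviour is unchanged
mutated = code[:5] + b"\x90\x90" + code[5:]

print(hashlib.sha1(code).hexdigest())     # original signature
print(hashlib.sha1(mutated).hexdigest())  # new signature, same behaviour
```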

Register Reassignment/Swapping; In assembly, all programs work from a limited set of instructions and a limited set of storage locations for storing and fetching values, known as CPU registers. The number of registers a CPU has can vary; i386, for example, has four main general-purpose registers: EAX, EBX, ECX and EDX. Malware can take advantage of these multiple registers for obfuscation: by switching the registers used, such as swapping EAX for EBX and vice versa, the malware can change its code from generation to generation while keeping the behaviour the same. [5][1]
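To see why this changes the signature, consider the x86 encoding of mov r32, imm32, where the register number is baked into the opcode byte (0xB8 plus the register number). Swapping EAX for EBX (consistently, wherever the value is later used) yields different bytes for identical work:

```python
mov_eax_1 = bytes([0xB8, 0x01, 0x00, 0x00, 0x00])  # mov eax, 1 (eax = register 0)
mov_ebx_1 = bytes([0xBB, 0x01, 0x00, 0x00, 0x00])  # mov ebx, 1 (ebx = register 3)
print(mov_eax_1.hex(), "->", mov_ebx_1.hex())      # b801000000 -> bb01000000
```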

Changing Flow Control/Subroutine Reordering; By changing the order of the program’s subroutines, malware can produce an exponential number of potential variations. This involves changing jumps in the assembly code and reordering the call sequence by adding subroutines. [3] By changing the order of these jumps and the order in which different functions are called, combined with other obfuscation methods such as dead-code function insertion, we not only change the signature and make it difficult for automated detectors to identify the malware, we also make it harder to work out what the malware does through static analysis. Block hashing and heuristic analysis can be the best ways for detectors to identify malware of this type.

Code/Instruction Substitution; Malware, like all code, is made up of a sequence of instructions, and in most instruction sets there are multiple instructions that can carry out the same behaviour. In x86, for example, XOR can be replaced by SUB, and MOV can be replaced with PUSH. [1] The change results in a new generation of malware with its own signature, which is difficult for detectors to pick up even when they key on the instruction set used by the malware. Heuristic and behavioural detection are best placed to identify malware using this form of obfuscation. [8]

Code Transposition; Code transposition reorders the code in a way that does not impact functionality. This can be done by shuffling the order of the instructions and then calling them in the required order from the main body using unconditional branching statements or jumps [1]. The original malware can still be recovered by removing those statements and jumps. Because the resulting malware is so complex, this obfuscation can be difficult and time consuming both to create and to detect. Block hashing is one way to detect this form of malware: the detector hashes segments, or blocks, of the malicious code, and an algorithm then checks them for similarities with known malware.
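A simplified sketch of the block-hashing idea (real detectors use more robust schemes, such as rolling hashes): hash fixed-size blocks of two samples and score how many blocks they share. Code that has merely been reordered in block-sized chunks still scores high against the original.

```python
import hashlib

def block_hashes(data: bytes, block_size: int = 64) -> set:
    """Hash every fixed-size block of the input."""
    return {hashlib.md5(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)}

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard score in [0, 1]: transposed code shares most of its blocks."""
    ha, hb = block_hashes(a), block_hashes(b)
    return len(ha & hb) / max(len(ha | hb), 1)
```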

Code Integration/Insertion; This is one of the most difficult malware obfuscation techniques to implement, and also to detect or analyse. It involves the malware inserting its code within a legitimate program. It does this by decompiling the target executable into manageable objects, inserting itself between those objects, and finally reassembling the entire executable; once reassembled, we have the new generation of the malware. This changes the target program’s signature and makes the malware difficult to detect. The best way to detect this malware is by keeping a database of legitimate/white-listed applications and their corresponding baseline signatures, and treating any application that deviates from its baseline as malicious. Block hashing and heuristic detection can also be used.

Fileless Malware; A trend in malware obfuscation that has come to the fore over the past two years is fileless malware. With this technique the malware forgoes having a copy of itself stored on the target machine’s HDD or SSD entirely, and lives only in RAM. Detectors can have a hard time spotting the malicious function, especially if it is combined with armouring techniques such as waiting for external events before acting maliciously; and even when it is detected it can be difficult to analyse, as once the machine is shut down the malware is gone. A live image of the RAM is needed to analyse it.

Let’s try to obfuscate some malware!

Let’s put this theory into practice. We are taking a sample of malware from Das Malwerk (http://dasmalwerk.eu); we have chosen the file 25786c51-414b-11e8-a472-80e65024849a.file to obfuscate. This malware has a hash of 36E79238CF645F38FA9CE671A850CC3E29338B65, with a detection rate of 50/63 engines picking it up.

Initial output from VirusTotal

Here we can see 50 engines detect our malware, so we are now going to try a few ways to reduce the detection rate. From static analysis we identified that this file is written in .NET. By using a .NET packer, NetShrink, we get the new hash 87755627F18616749F257524152B1C60F036C6EF. When checking this hash in VirusTotal: success! It does not exist.

VirusTotal doesn’t have the file’s hash value

This is good, but next let’s upload the file itself to VirusTotal. For the hash to be detected it must be in the VirusTotal hash database; by uploading the malware we can check whether the file itself is flagged.

So just by changing the packer we can reduce the detection rate from 50 to 25! For the detectors that did identify the malware we can see comments like “Behaves Like”, “Heuristic” and “Suspicious”, which suggests that some form of dynamic analysis was used to identify the file as malicious. Let’s now try to play with the source code. We will decompile this .NET application with dotPeek, which gives us the source code as an exported Visual Basic project. Opening this in Visual Studio we can see the complexity of the malware we selected. First we are going to add a function that adds two numbers, then recompile, get a hash, and compare results.

Decompiling the code with dotPeek

Opening the file in Visual Studio, we see we cannot compile it again. dotPeek seems to have decompiled it with errors, such as “base.\u002Ector();” instead of “base.ctor();” – 261 different errors to fix in all. With these fixed and the project compiling successfully, we have a full understanding of what this malware, Orcus, does. Complete with partial remote code execution, setting up FTP servers, DDoS, password stealing and keystroke logging, we must proceed with the utmost caution. Like a big game hunter about to take out his first sea lion. Unfortunately, after fixing these 261 errors we get an additional 400 errors, such as “The type or namespace name ‘Shared’ does not exist in the namespace ‘Orcus’ (are you missing an assembly reference?) – using Orcus.Shared.Communication;”, which is beyond our understanding of computer programming. This could be the result of the decompiler not recovering all of the source code.

Debugging and trying to obfuscate the decompiled code in Visual Studio

Instead of a decompiler, let’s try using a debugger to walk through the assembly and see if there are changes we can make at that level. We can see the malware author has done extensive obfuscation already; we saw this when investigating the source code above, where we found functions that did nothing. Here at assembly level we see dead code insertion via NOPs, and padding at the base of the file:

NOP Dead code insertion in Orcus
End of file padding

One thing we could do for an easy demonstration is change the register used in the padding at the end from EAX to EBX, but let us try something more challenging. As I am worried about damaging the code’s functionality, I am going to replace some of the NOP commands with push EAX; pop EAX, which should serve the same function; this demonstrates both dead code insertion and instruction substitution. This was done for multiple pairs of NOP commands found. (The byte-level view of this swap is sketched below.)

Replaced NOP with Push/Pop obfuscation
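Conceptually the patch is a two-byte swap: nop; nop assembles to 0x90 0x90, while push eax; pop eax assembles to 0x50 0x58. A hedged sketch of the idea as a blind byte replace (file names invented; in practice you would patch only verified instruction boundaries, since 0x90 0x90 can also occur inside data):

```python
data = bytearray(open("orcus_dump.bin", "rb").read())  # hypothetical dump of the sample

# nop; nop (0x90 0x90) -> push eax; pop eax (0x50 0x58): same net effect, new bytes
patched = data.replace(b"\x90\x90", b"\x50\x58")

open("orcus_patched.bin", "wb").write(patched)
```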

After this small change we have a hash of 5AED9A880DB19E1EC35E8A63C09EEF45EC50A2C7; let’s see if this, before packing it, makes a difference to our detection rate. As expected, the file hash has no matches on VirusTotal. When uploading the file itself we get a 42/70 detection rate, somewhat better than the 50/68 we got initially.

After our obfuscation

If we pack this malware after the changes, we get a hash of 2AB951E7904EBBF355954C5501E6D5EE356120AF, and this hash still has no matches on VirusTotal. Interestingly, when we upload this file to VirusTotal we get 32/68 detections, which is higher than our initial packed file. This could be due to the frequency of uploads we have done and the time the detectors have had to analyse our files.

As one final test, let’s try register swapping in the end-of-file padding to see if there is any difference. We will also change one register used for an actual instruction. As this is complex code, to avoid breaking it we will change the register used at the beginning of the binary.

If we swap the EAX registers used to EBX we can then assess the results.

With this process complete and the resulting file dumped to an exe, we pack it with NetShrink again and get a hash of 5AED9A880DB19E1EC35E8A63C09EEF45EC50A2C7. The result here is unexpected, with 42 detectors identifying the malware:

When we upload the file itself we get the same result. Three conclusions we could draw from this: VirusTotal and its detectors are learning from our uploads each time we obfuscate the malware, becoming more accurate at detecting its malicious nature; the packer we used, NetShrink, could be relatively obscure, and the detectors needed time to analyse it (in this case over a two-week period); finally, the obfuscation methods we used towards the end, where we focused on changing small segments of the code in the debugger, may have been insufficient to fool the detectors – in this case it is highly possible block hashing was used. While our obfuscation efforts gave us mixed results, we were able to work through several obfuscation techniques in practice.

References and bibliography

[1] Ilsun You, Kangbin Yim. (2010) ‘Malware Obfuscation Techniques: A Brief Survey’, BWCCA 2010

[2] Philip O’Kane, Sakir Sezer, Kieran McLaughlin. (2011) ‘Obfuscation: The Hidden Malware’, IEEE Security & Privacy

[3] Jian Li, Jun Xu, Ming Xu, HengLi Zhao, Ning Zheng. (2009) ‘Malware Obfuscation Measuring via Evolutionary Similarity’, First International Conference on Future Information Networks

[4] Lysne O. (2018) ‘Static Detection of Malware’, The Huawei and Snowden Questions. Simula Springer Briefs on Computing, vol 4. Springer

[5] Kristian Iliev. (2017) ‘Top 6 Advanced Obfuscation Techniques Hiding Malware on Your Device’, https://sensorstechforum.com/advanced-obfuscation-techniques-malware/ (accessed: 13/03/2019)

[6] Dr. Amit Kumar Bindal, Navroop Kaur. (2016) ‘A complete dynamic malware analysis’, International Journal of Computer Applications

[7] Mario Luca Bernardi, Marta Cimitile, Francesco Mercaldo, Damiano Distante. (2016) ‘A constraint-driven approach for dynamic malware detection’, 14th Annual Conference on Privacy, Security and Trust

Figures 1, 2 & 3: Babak Bashari Rad, Maslin Masrom, Suhaimi Ibrahim. (2012) ‘Camouflage in Malware: From Encryption to Metamorphism’



Malware Analysis Lesson 4; Dynamic Analysis

Last week we looked at static analysis: investigating malware without running it. We looked at some problems with this, such as obfuscated source code, dynamically loaded libraries, and the multitude of methods malware authors can use to frustrate analysts. When that frustration succeeds and we have extracted all the information we can from static analysis, we can go deeper in trying to identify what the malware is doing. The way we do this is with dynamic analysis, where we run the malware in a sandbox and monitor what changes it makes to the system, what network calls are made, what it looks like in RAM, and so on. By monitoring the malware while it is running we are able to identify its functionality and behaviour. For example, the function names we found in our static analysis strings dump may not all be called when the malware runs; dynamic analysis lets us confirm what is actually going on.

By launching the malware in a controlled and monitored sandbox we can observe and document its effects on a system. This is especially useful when it is an encrypted or packed file: once the contents are loaded into memory we can read them and observe what the malware does.

What do we need for our sandbox?

First things first: read Malware Unicorn’s Reverse Engineering 101 course. It’s amazing and gives you the ISOs to set up your sandbox! The link is here. We have to run the file in this sandbox because, if the file turns out to be malicious, we can’t risk the malware infecting all the machines on our network.

A good malware analysis environment achieves 3 things:

  • It allows us to monitor as much activity from the executed program as possible.
  • It performs this monitoring in such a way that the malware does not detect it – which is important as some malware will not execute some functions if it detects it is being run in a sandbox.
  • Ideally it should be scalable so we can run many samples repeatedly in an isolated and automated way.

What kind of information are we looking for?

We should aim to capture everything the malware does so we can build up a picture of what is going on. We need;

  • All traces of function calls made
  • What files are created, deleted, downloaded or modified
  • Memory dumps, so we get all that juicy information stored in RAM (Volatility)
  • Network activity and traffic
  • Registry activity on Windows

What are some ways we can create this environment?

Air gapped

Air gapped networks are very simple but difficult to maintain. The idea is to have physical machines that are isolated from your network (possibly completely disconnected from any network). Military, power plants, avionics and malware analysts all make heavy use of air gapped networks. This physical isolation is a literal gap of air between the sandbox and the Internet. Using this type of setup relies on the malware (and our sandbox) never needing access to the internet, which is often not the case. It also presents difficulties with moving files on and off the machine safely, which makes a breach possible.

There have been a few attacks on air gapped networks. The one I’m going to chat about always gets me excited – it’s like the plot of a Bond movie: STUXNET! This was a piece of military grade malware designed by the US and Israel to infect and disrupt the Iranian nuclear program. The Iranian site was air gapped and, from what I can tell, very well protected. The malware was introduced by dropping infected USB keys where the scientists working on the site were likely to find them. Eventually the malware was able to worm its way into the centrifuges and cause a lot of disruption. Wired has an awesome article on it that I recommend: https://www.wired.com/2014/11/countdown-to-zero-day-stuxnet/

The attack propagated and infiltrated the air gapped network via the removable media used to transfer files.

Virtualisation

Beyond the lack of network access, another issue with air gapped sandboxes is the need to reinstall the operating system after every malware execution. Such a pain. Virtualisation is how we can have that isolation without the manual pain of reinstalling. By using VirtualBox or VMware we can snapshot what the sandbox looks like before executing the malware, then roll back quickly and easily to this known good state afterwards. The problem with virtualisation is that the malware can easily detect that it is being run in a VM. Some malware, once it knows it is in a VM, will not execute, or will not execute in the way it would on a regular device. This is to make it more difficult for us analysts to monitor what it does. Oh how malware authors hate us!

Ways malware can identify that it is running in a sandbox include (a sketch of the MAC address check follows this list):

  • Guest additions (VMware Tools and the like)
  • Device drivers installed
  • Well known malware monitoring tools running in the background
  • The MAC address of the VM
  • Traces in the registry on Windows machines
  • Process execution times
  • Lack of host activity like downloaded apps, temporary files, browsing history etc.
  • The presence, or lack thereof, of anti-virus applications
  • The type of system it’s run on (CPU, RAM, HDD etc.)
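As an illustration of the MAC address check, here is a sketch using a few well-known virtualisation vendor prefixes (VMware’s 00:0C:29 and 00:50:56, VirtualBox’s 08:00:27); the detection logic in real malware is more thorough:

```python
import uuid

mac = uuid.getnode()  # the host's 48-bit MAC address as an integer
oui = mac >> 24       # top three bytes: the vendor prefix (OUI)

VM_OUIS = {0x000C29, 0x005056, 0x080027}  # VMware, VMware, VirtualBox

if oui in VM_OUIS:
    print("MAC vendor prefix suggests a VM; a cautious sample might stay dormant")
```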

What are the problems with dynamic analysis?

There are a few problems (or challenges, for the optimist) we need to be aware of with dynamic analysis. When we first run the file we don’t know what it will do. It may execute all its malicious code at once, or it may not. It might wait for a trigger event, like a particular time of day, a connection to a network, or a user action. For example, if we are analysing a backdoor Trojan, it may just open a port and wait for the remote server to try to connect to it before doing anything else. We may also have to execute the program many times to find out everything it does or how it affects the system.

Dynamic Analysis Tools

File System and Registry Monitoring: Tools like Process Monitor allow us to see how processes read, write and delete registry entries and files.
Process Monitoring: Process Explorer and Process Hacker help you observe processes started by an executable, including network ports opened!
Network Monitoring: Tools like Wireshark allow us to observe network traffic for anything malicious the malware is trying to do, including DNS resolution requests, bot traffic and downloads.
Change Detection: Regshot is a small program that lets us compare the system’s state before and after the execution of a file, including registry changes. FCIV lets us compare file hashes before and after. It’s also possible that Puppet could be used for this. (A minimal do-it-yourself version of this before/after comparison is sketched after this list.)
DNS Spoofing: INetSim is a tool that simulates a network so that malware interacting with a remote host continues to run, allowing us to observe its behaviour further.
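For the change-detection idea, here is a minimal do-it-yourself sketch (paths are hypothetical; Regshot and FCIV do this far more thoroughly): hash every file under a directory before and after detonating the sample, then diff the two snapshots.

```python
import hashlib
import pathlib

def snapshot(root: str) -> dict:
    """Map every file under root to its SHA-256 hash."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in pathlib.Path(root).rglob("*") if p.is_file()}

before = snapshot(r"C:\analysis\watched")  # hypothetical directory to watch
# ... detonate the sample in the sandbox here ...
after = snapshot(r"C:\analysis\watched")

changed = {path for path in after if before.get(path) != after[path]}
removed = set(before) - set(after)
print(sorted(changed), sorted(removed))
```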

That said if anyone has some cool alternatives to these let me know! 🙂

Where possible, always establish a baseline of what your machine should look like before the malware is executed, so you can investigate changes.

What’s next?

The next step is to practice with the tools we have learned. This won’t be something I’ll document, as I find it’s always best to learn by doing.
theZoo (https://github.com/ytisf/theZoo/tree/master/malwares/Binaries) is where I will get my malware. I have already set up my sandbox from our static analysis session. The next step will be running through all the monitoring tools one by one to explore how they work. Using the snapshots built into VirtualBox, I am going to assess the tools one at a time, and after I’m finished I will roll back my snapshot and try a new tool.

Malware Analysis Lesson 3; Static Analysis

During the course of our careers we will come across artifacts left behind by malicious hackers that are easily decipherable. These may be scripts, text files or something else that is easy to open and view. But more often we, as malware analysts, come across files that are not easily identifiable as malicious. Take executable files, for example: these may be malicious or benign, so we tend to take extra precautions with them. To figure out whether an executable is safe or not, it helps to understand the structure of executable files, and in turn gain an understanding of how these files affect our systems.

How do we do this? With static analysis. We are going to focus on the PE (Portable Executable) file structure for this class. PE files are executables usually found on Windows. The reason we take the time to perform this analysis is to identify files as benign (and thus avoid the costly steps of an investigation), or to identify them as malicious to be marked for evidence. If it is malware, we can then classify whether it is a new malware or a variant strain of an existing malware. This gains us some uber street cred.

How do we get files to analyse?

If we think there is malware running on our system we can get a dump of our RAM and then parse it with tools like Volatility, a neat open source tool. This gives us some cool information on the processes running and the files open, allowing us to gain an understanding of what is going on. It is particularly useful for attacks where the hard disk may not be accessible, such as during a ransomware attack. This information allows us to locate the suspicious file, and from there we can upload it to VirusTotal or a similar site to see if we get any hits. If VirusTotal gives us ambiguous results we get to put our Sherlock Holmes hat on and investigate further.

The first step in malware analysis, after we have our suspect file, is to carry out a static analysis. We discussed this in our last talk: it allows us to gather information without actively running the malware (actively running = dynamic analysis). So let’s take this step by step.

Opening the malware to read the clear text

We can do this in several ways: using Notepad, a hex editor, or an application like Strings. It allows us to read any cleartext in the executable. There will be a lot of unrenderable garbage, but buried beneath that we may begin to form an understanding of what the code is doing. We might be able to identify error messages, help pages, function calls or similar.

1. Take a hash of the file.

We first hash the file so we can easily identify it and check for changes later. There are lots of hashing algorithms we can use and it doesn’t really matter which one; MD5 and SHA are the most common. With the hash we can quickly check antivirus scanners for matches, or use something like the NIST databases. This helps us identify whether the malware has been investigated before and what the findings were. Some places we can compare are:

  • NIST National Software Reference Library
  • Team Cymru Hash Registry

Apart from helping us identify known malware, these can also help us identify whether the file is legitimate software or an operating system file. By doing this we reduce the number of files that require a review.
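Computing the hashes themselves is a one-liner per algorithm with Python’s standard library; a quick triage sketch:

```python
import hashlib
import sys

data = open(sys.argv[1], "rb").read()  # path of the sample to triage
for algo in ("md5", "sha1", "sha256"):
    print(algo, hashlib.new(algo, data).hexdigest())
```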

In addition to the traditional hash comparison above, there are several newer types of hashing that may be more useful. Similarity hashing, fuzzy hashing, piecewise (or block) hashing and similarity digests are all tools we can use, and there are a number of algorithms that support them, such as SDHASH, SSDEEP and MinHash.

Similarity hashes compare files for similarity (shocking, I know). They calculate the hash of portions of the code, such as functions or blocks, to identify commonalities such as shared functions. Because the hash is calculated over blocks of the binary, if one piece of the code changes, the parts that are still similar will still match the original, giving us a comparison score. This helps us identify modified files, increases the difficulty for malicious actors obfuscating their code, and speeds up our analysis by removing a lot of manual work. The NSRL has hashes and similarity hashes.
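If the python-ssdeep bindings are available (an assumption; ssdeep also ships as a standalone CLI), comparing two suspected variants takes a few lines. File names here are hypothetical:

```python
import ssdeep  # python-ssdeep bindings, assumed installed

h1 = ssdeep.hash_from_file("sample_v1.bin")
h2 = ssdeep.hash_from_file("sample_v2.bin")

# score from 0 to 100; a high score suggests one file is a variant of the other
print(ssdeep.compare(h1, h2))
```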

2. Check the suspicious file with VirusTotal

Before we start a full-blown analysis, a good, quick check we can carry out is uploading the file to one of the many virus scanning sites like VirusTotal, MetaDefender, VirSCAN, Jotti and others. Generally you can upload the file itself or just check the hash. This can be great for identifying whether the file is a known piece of malware, but as you will see if you try this yourself, it can be hit or miss, especially if the file is a malware variant that hasn’t been identified and classified before. We discussed in lesson one how malware variants are being released daily and even reputable antivirus companies are not picking them all up. If we are lucky we can collect some information on the file, what it does and what malware family it belongs to, which will aid us later.

3. Investigating the inside of the file!

Strings.exe is a pretty cool tool and can be downloaded from here. It goes through the file, extracts the contents, and tries to print any ASCII or Unicode values it identifies into a file you specify (or to the command prompt if you didn’t specify a file). For malware that is not heavily obfuscated or encrypted this can be a useful tool for identifying the nature or purpose of the file.

The information we get can tell us about network activity, file activity and registry activity, along with providing the function call names the file uses. This can help us narrow down whether the file is trying to copy our clipboard, set up a server, gain persistence and more. That makes it a great tool to start with: look for odd or unique strings.
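The ASCII half of what Strings does fits in a few lines of Python; a rough sketch (Unicode/UTF-16 strings would need a second pattern):

```python
import re
import sys

data = open(sys.argv[1], "rb").read()

# runs of four or more printable ASCII bytes, roughly Strings' default output
for match in re.findall(rb"[\x20-\x7e]{4,}", data):
    print(match.decode("ascii"))
```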

If the file we are looking at is a PE (from lesson two) there are two good tools we are going to look at in detail: PEDump and PEView. PEDump can be used to extract detailed information from the header of a PE file; this information can be shown on the command prompt or stored to a file. PEView is a GUI (yuck) tool that allows us to view what makes up the PE’s headers and sections.

PEDump: http://www.wheaty.net/downloads.htm

PEView: http://wjradburn.com/software/

PE File Analysis

There are several components to a PE file that we should be familiar with before going further:

The MS-DOS header contains the ever-static “This program cannot be run in DOS mode” stub message, kept around for legacy reasons to stop you running the program on incompatible DOS operating systems. If it were omitted, a legacy machine would try, and fail, to load the file. This header occupies the first 64 bytes of the PE file.

For executable files on Windows systems, the MS-DOS header is also called the IMAGE_DOS_HEADER.

The file signature (or magic number) consists of the letters MZ, or 0x5A4D in little-endian hexadecimal format (a little-endian machine stores data little-end first: the least significant byte of a multi-byte value comes first), and is always the first two bytes of the file. Want to know what MZ stands for? It’s the initials of the guy that designed the format, Mark Zbikowski!

e_lfanew is the last 4-byte field of the DOS header and points to the location of the PE header. This matters because after the DOS header comes the stub program that MS-DOS runs when the executable is loaded; this checks the OS’s compatibility with the file.

Next comes the PE file signature, which should be identifiable by the value 0x00004550: the letters PE followed by two null bytes.
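These first checks are easy to script; a sketch using only the standard struct module (the sample name is hypothetical):

```python
import struct

data = open("suspect.exe", "rb").read()

print(data[:2] == b"MZ")                             # DOS magic number (0x5A4D)
e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]   # last field of the DOS header
print(data[e_lfanew:e_lfanew + 4] == b"PE\x00\x00")  # PE signature (0x00004550)
```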

The PE file header, IMAGE_FILE_HEADER, is contained in the 20 bytes following the PE signature and includes several things we will find useful:

TimeDateStamp can be very important for forensic investigation, as it represents the time the image was created by the linker. The value is the number of seconds since the start of January 1, 1970 in Universal Coordinated Time. This gives us the time, on the coder’s computer, when the executable was compiled, and may be a clue as to when the program was created. But as with all computer evidence, it could have been modified at some point.

Machine shows the architecture the program is designed to run on, such as x86 or x64.

The Characteristics flags give us more clues as to what the file does. For example, characteristics can include IMAGE_FILE_EXECUTABLE_IMAGE (this is an executable), IMAGE_FILE_DLL (it is a DLL) and IMAGE_FILE_SYSTEM (a system file), but there are many others.

IMAGE_OPTIONAL_HEADER gives us additional information about the structure of the file that enables the OS to execute it, such as its magic number, including:

  • SizeOfCode: The total size of all code sections in the file.
  • AddressOfEntryPoint: The address where the loader begins execution of the file.
  • MinorOperatingSystemVersion: The minimum OS version needed to run the file.
  • CheckSum: An integrity checksum of the image, validated at load time for drivers and critical system DLLs.

Sections

After this we have the actual sections of the file itself, which is what we are interested in. The most common and interesting sections of the PE file are;

  • .text: This section contains the instructions that the CPU executes. All other sections just store the data and supporting information. This should be the only section that can execute and the only section containing code.
  • .rdata: This section contains the import and export information we can see in PEView. Sometimes this is split into .edata (export) and .idata (import).
  • .data: The .data section contains global variables and data which is to be accessed from anywhere in the program. Local data is NOT stored here.
  • .rsrc: Contains resources used like icons, images, menus and strings.

Section headers are called IMAGE_SECTION_HEADER and give us information about the section structures, including the size and characteristics of each section. Alongside the sections, IMAGE_DATA_DIRECTORY structures act as a directory for import/export tables, resource directories and so on.

These sections hold the code and data of the application; libraries, DLLs, APIs and any system calls made are referenced here.
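Much of the header and section information above can be pulled out programmatically. A sketch using the third-party pefile package (assumed installed; the sample name is hypothetical):

```python
import pefile

pe = pefile.PE("suspect.exe")

print(hex(pe.FILE_HEADER.Machine))                  # target architecture
print(pe.FILE_HEADER.TimeDateStamp)                 # linker timestamp (seconds since 1970)
print(hex(pe.OPTIONAL_HEADER.AddressOfEntryPoint))  # where the loader begins execution

for section in pe.sections:
    name = section.Name.rstrip(b"\x00").decode()
    # unusually high section entropy is another hint of packed content
    print(name, hex(section.SizeOfRawData), round(section.get_entropy(), 2))
```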

Linking

Programmers can link imports so they don’t need to re-implement certain functionality in their code and between multiple programs. As analysts we can gather a lot of information on what a program does based on the functions it imports. There are three types of linking:

Static: Used on Linux/Unix; all library code is copied into the executable, which makes the executable big. It is difficult to differentiate between statically linked code and the executable’s own code.

Runtime: The executable connects to libraries only when a function is needed. The linked functions do not have to be declared in the executable’s file header, which means the program can access any function in any library on the system; we can only find out which by executing the malware (dynamic analysis).

Dynamic: This is the most common method of linking. With dynamic linking, the host OS searches for the linked libraries when the program is loaded, and when the program calls a linked library function, that function executes within the library. The PE file header stores information about every library that will be loaded and every function that will be used by the program, so we can make calculated assumptions about what the program does based on these functions.
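For dynamically linked PE files, those declared imports can be listed directly; another short pefile sketch (same assumptions as above):

```python
import pefile

pe = pefile.PE("suspect.exe")

for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
    print(entry.dll.decode())                # e.g. KERNEL32.dll
    for imp in entry.imports:
        if imp.name:                         # some imports are by ordinal only
            print("   ", imp.name.decode())  # e.g. CreateFileA
```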

Summary

Knowing how PE files are structured and built can aid us in identifying the purpose of an application. There are a variety of tools to help with this, like PEDump (which lets us locate the import data directory and parse the structures to determine the DLLs and functions the application uses), but it is ultimately a very manual process. There are countless functions and system calls that a file can import; MSDN has a great reference facility for finding out what each function does.

Business Continuity Management Lesson 1; An introduction.

Business continuity is the uninterrupted availability of all key resources supporting essential business functions. Business continuity management aims to provide the availability of processes and resources following disruption, to ensure the continued achievement of mission-critical objectives. It’s very dry. Writing this makes me question my choices in life.

Should I have a Business Continuity Plan?

A Business Continuity Plan is an essential part of good business practice for any organisation, but especially for organisations dealing with the following:

  • The requirement for availability during an extended working day, such as 365 day a year up time.
  • High dependence on certain key facilities, such as data centres or manufacturing facilities.
  • Heavy reliance on IT, data, comms and telephony, which describes most companies.
  • A high level of compliance, audit, legal or regulatory impact in the event of loss of a facility such as finance or healthcare.
  • If there is the potential for legal liability.
  • Possible loss of the confidence and support of workforce – where an incident could cause staff to look to move to new organisations which can be a concern at startups.
  • Potential loss of political and stakeholder support.

Ok, that describes me! What are the benefits?

A business continuity program will have benefits for your organisation by providing;

  • A more resilient operational infrastructure
  • Better adherence to compliance and quality requirements
  • The capability to continue to achieve your organisation’s mission
  • The capability to maintain the business’s profitability
  • The capability to maintain market share
  • Improved morale for employees
  • Protection of image, reputation and brand value

Nice! What are the impacts of business disruption?

Having Business Continuity Management in place can help with 4 main areas; Marketing, Finance, Statutory/Regulatory and Quality.

Marketing;

  • Continued operation is crucial to maintaining customer confidence; too much disruption and customer churn is a certainty!
  • Very often strong advertising and messaging can work against us, especially where we set a high bar of expectation.
  • Marketing may typically require triple its normal annual budget in the aftermath of a disaster, as our skilled team tries to restore customer confidence and maintain or recover market share.

Financial;

  • Many contracts contain damages or penalty clauses, often expressed as a percentage of the contracts value, that may be invoked in the event of a service failure.
  • Even force majeure clauses covering unforeseeable disasters are being contested in the courts, as in our modern world most disasters should be foreseeable and planned for.
  • There can be other financial losses too;
    • Loss of interest on overnight balances(?)
    • Cost of interest on cashflow – especially if you need an overdraft to cover cashflow disruptions.
    • Delays in customer accounting and payments.
    • Loss of control over debtors.
    • Loss of credit control.(?)

Statutory or compliance requirements;

  • Many organisations have to meet legal requirements including for;
    • Record keeping and audit trails,
    • Compliance requirements and industry regulations
    • Health and safety, and environmental requirements
    • Government agency requirements
    • Tax and customs requirements,
    • Import and export regulations,
    • Data protection regulations
  • Depending on your organisation, some or all of these will relate to you, and not having the capability to comply can result in severe penalties.

Quality

  • Many international standards have DRP and BCM requirements including;
    • ISO 9000 requires quality management system audits and surveillance visits
    • ISO 27001
    • ISO 22301 is for Business Continuity Management
  • All these require a business continuity plan to be created and available to protect critical business processes from disruption.
  • Loss of services aggravated by the lack of a BCP can even cause those standards bodies to review and even withdraw your accreditation.

Survival from disruption

  • We must implement a BCM to ensure our survival following a disruption.
  • Impacts can include;
    • Existing customers leaving.
    • Prospective customers looking elsewhere
    • Loss of market share.
    • Damaged image and credibility
    • Reversed cashflow
    • Costs could spiral out of control.
    • Inventory costs rise and management is difficult – especially for grocery stores and retail.
    • Share prices can drop
    • Competitors may take advantage of the disruption.
    • Key staff may leave
    • Layoffs may be necessary.

That’s interesting but what are the causes?

Far too many to count; from local to regional and all the way up to national, here are a few:

Local;

  • Systems failure
  • Data corruption
  • Garda blocking access to the locality due to an emergency.
  • Chemical contamination
  • Hacking
  • Theft of PCs with personal information – in some cases these PCs may be the only source of that information.
  • Loss of supplied services, from backups and data retention to AWS
  • Loss of power

Regional/National

  • Earthquakes.
  • Terrorism
  • Brexit
  • Volcanic eruptions
  • Snow, floods and windy weather
  • Civil unrest

These don’t sound like IT though?

That’s right, business continuity is about more than IT. It has to cover all business operations, which can include manufacturing, retail, operations, and all front and back office activities deemed critical. BCM should be encouraged for organisations of all sizes, and the US PS-PREP programme (?) has been a key driver here.

So what counts as a disaster then?

Disasters can be difficult to define; really, a disaster is any event where critical operations are impacted. Where the right controls are in place, an event need not become a disaster, as the following examples show.

One example of what would previously have been classed as a disaster, but is not now, is how organisations have migrated services to the cloud, so a single physical equipment failure does not impact the organisation.

Another would be an organisation that has 2 telecoms providers. If one provider goes down it is not a disaster as the second provider can be used.

A last example would be where a supply line fails but the organisation has built up a buffer supply.

In all these examples the critical operations of the organisation continue. Your organisation should take account of what these critical operations are to help define what a disaster is for you. This definition may prove vital for decision-making in determining when to activate the DRP.

What should my Recovery Timescale be?

When planning for business continuity, downtime should be an essential concern, and we should aim to reduce or eliminate it in the case of a disaster where possible. This is an even bigger concern in our modern world of online transactions: imagine Amazon’s lost sales for every second of downtime as frustrated customers try competitor websites. Aside from customer-facing applications, downtime can also impact other organisations, due to our interconnected and interdependent economies. A good example of this is that after Japan’s 2011 tsunami, memory chip manufacturing was disrupted and as a result the cost of memory worldwide shot up.

First objectives, and the road map for what we want to recover, always differ depending on the individual organisation’s needs. An online retailer might prioritize restoring its front end and order processing; a law firm’s priority may be to ensure record retention and access to back-end services and file shares; and a bank might prioritize restoring the services that ensure the integrity of its databases.

Likewise, some organisations might consider a full recovery of mission-critical activities essential, while others might favour a partial recovery followed by a phased restoration afterwards. There are a few measurements we use for this (http://www.bcmpedia.org/ is great for bits of info and clarification):

  • Recovery Time Objective (RTO) – The targeted time from when service is disrupted to when full operations are regained.
  • Maximum Tolerable Downtime (MTD) – The time after which the service disruption causes irreversible damage to the organisation.
  • Maximum Tolerable Period of Disruption (MTPD) – The same as MTD.
  • Recovery Point Objective (RPO) – The interval of time that might pass during a disruption before the quantity of data lost exceeds the Business Continuity Plan’s maximum allowable threshold or “tolerance”.
  • Maximum Tolerable Data Loss (MTDL) – The same as RPO: the maximum data loss you can suffer before catastrophic impact to your organisation. This can be more important than the length of time a service is down for, especially with the rise of FinTech organisations.

Generally we have seen a trend of RTO and MTD requirements decreasing. This reduced tolerance for disruption can be explained in a few ways. Many businesses are now very dependent on Enterprise Resource Planning and Customer Relationship Management tools, and downtime to these can have a paralysing impact on operations. Companies may run call centres offering 24/7/365 support, or have front ends and client integrations that are critical components their clients rely on. Any outage can be fatal for the organisation.

Given this complexity, recovery can involve many different modules, backup schedules, data points and integration points. The speed of recovery is equally as important as the protection of data and transactions.

So I kind of get it – but what is Business Continuity; is it a plan, a project or a process?

Great question, mini-me! Business continuity usually starts as a project: small scale and highly focused. Once testing starts, management generally sees the benefit and extends it. This cycle repeats until the project becomes an ongoing programme, and then, through the standards it sets and follows, a management system. While initially viewed as a temporary project, it evolves into an essential business-as-usual activity.

The end result is a plan that is maintained and regularly reviewed, and that staff are trained in, so they know what to do should a disaster occur.

Sure, but I want to have a structured plan; what should that look like?

For a proper BCP cycle there are a few different models, like the one shown above from Ebrary. It operates as a cycle for a few reasons.

At the outset it is necessary to understand, and gain executive buy-in for, certain business issues. This can involve carrying out Risk Assessments and Business Impact Assessments, which allow us to understand our specific risks and the options open to us for continuity and recovery. It also helps us understand what we need to protect, from the perspective of having contingencies in place and managing risks.

Who do I need to involve in my BCP committee?

As said before, there has been a move away from IT-focused BCPs towards business-process BCPs. Data is now located across the enterprise, and critical business processes that rely on IT are carried out throughout the organisation. Because of this, any BCP has to focus on the wider organisational units rather than just IT, and we need to involve all executives, managers and employees. It’s a big job, so it is essential we have a BC coordinator who is responsible for maintaining, updating, reviewing and distributing the BCP.

BC is growing in maturity; what are the drivers?

Typically companies start with basic incident management runbooks, or plans, covering how to respond to general hazards like fires, malware infections, bomb threats, dangerous weather and so on. These operate at the lowest maturity level of a BCP, where the goal is to cover health and safety.

As organisations mature, these strategies evolve to include disaster recovery plans and procedures for the loss of information and communications technology, equipment, applications and data. This was traditionally viewed as a siloed IT activity, with no consideration of the overall impact on business activities.

Since then, several groups have begun exerting influence to encourage a more holistic approach to DRPs, including DRII, Survive!, ACP and DRIE. At the same time, physical security has become more of a consideration, especially in Europe, where terrorist attacks have occurred more frequently.

What else do I need to know?

Regulatory

Since the 1990s, regulatory requirements have come to the fore, pushing for a more holistic approach to BCM, with the goal of covering more areas of risk. The need for this was emphasized by high-visibility scandals such as Enron (funny story: most of the Arthur Andersen employees moved to EY!), WorldCom and Parmalat. So far the FYRE Festival has not prompted DRP and BCM regulatory requirements for music festivals, but one can hope.

Some relevant legislation includes:

  • SOX – US, with similar (though less comprehensive) legislation in the EU
  • GLBA – US
  • HIPAA – US
  • GDPR – EU
  • NISD – EU
  • PSD2 – EU

Standards

There are also a number of independent and international standards that organisations can acquire to reassure clients and stakeholders that, should the worst happen, they are prepared! Initially companies rushed to standardise BCM requirements, but this led to too many potential standards, causing a lot of confusion. Large multinationals then led a push to… wait for it… standardise the standards. These included:

  • Singapore Standard SS 507
  • BS 25999-1
  • ISO 22301:2012 BC
  • Lastly, one of the oldest: in 1991 the US National Fire Protection Association (NFPA) started to develop the NFPA 1600 standard (disaster/emergency management, released in 1995), which was later extended to include business continuity.

These standards, once mature, provided numerous benefits, including:

  • Reducing supply chain disruption which had a positive impact on more organisations.
  • Reduced costs for a company relating to unexpected disruption.
  • Improving customer satisfaction by ensuring their needs were serviced as expected.
  • Reduced barriers for accessing new markets.
  • Reduced impacts to the environment.
  • Improved market share.

So let’s zoom in on supply chains for a moment.

Early on, BCM advocates saw that their organisations were disrupted if links in their supply chain suffered disasters. The earlier point about memory manufacturing in Japan after the tsunami is a good example of this, as is the hard drive shortage caused by flooding in Thailand in the same year, 2011. In both instances the disrupted manufacturing caused issues down the supply chain for PC manufacturers.

In a survey from the Aberdeen Group;

  • over 50% of respondents had suffered supply chain failure in the previous year.
  • 56% -> supplier capacity did not meet the demand
  • 49% -> suffered raw materials price increases or shortages
  • 45% -> experienced unexpected changes in customer demand
  • 39% -> experienced shipment delays/damages/misdirects
  • 35% -> suffered fuel price increases or shortages

What’s this holistic approach thing you mentioned?

Well, like my favourite word, “synergy”, holistic is mostly a buzzword consultants and academics use to pad out their work. What we mean by holistic is simply taking an end-to-end view of business continuity, encompassing all elements of the organisation. For an enterprise that can include taking into account, among others:

  • BC and DR
  • Operational Risk Management
  • Insurance aspects
  • Security compliance and breaches (information, telco, and e-commerce)
  • Regulatory compliance
  • Business trading and financial risk management
  • Asset protection
  • Project development and production risk management
  • Supply chain risk management
  • Quality tracking, defect management, maintenance, and product recall
  • Problem management and escalation from helpdesks
  • Customer complaint issues
  • Health and safety
  • Environmental risks and safety management
  • Marketing protection (image and reputation)
  • Crisis management (branch attacks, hostage and kidnap, product recall, fraud)

Operational and Business Resilience

Business resiliency is simply the ability of the organisation to adapt and react to internal and external dynamic changes. These can be opportunities or threats, disruptions or disasters. An organisation’s business resiliency is assessed by how much disruption the organisation can absorb before there is a significant impact on the business. Organisations never want to have to use their DR plan; they want to avoid the crisis altogether.

But that’s enough of DRP for this week…

Malware Analysis – Lesson 2; Types of malware

It is important for any malware analyst to understand the different categories of malware and how they try to infect our systems, in order to better check for indicators of compromise. Malware is categorized and sub-categorized based on its behaviour, purpose and infection vector. Even beyond this, many malware families, such as Zeus, have spawned slight variants, further increasing the need for categorization.

By finding the commonalities between different malware we can more easily find indicators of compromise, allowing us to more efficiently identify and isolate malware strains.

Malware Stats

Before we continue, it is interesting to note that while total malware is being churned out at an exponential rate, the amount of genuinely new malware appearing each year has remained mostly static. The primary reason for this is that creating new, effective malware requires a high level of technical skill. Many of the threat actors we encounter do not meet this skill level and so rely on purchasing malware or changing existing malware into a new variant.

This is very easy to do and can be as simple as adding padding, changing which portions of the malware are encrypted, moving functions around or even changing the functions themselves. Each of these steps (and there's more that can be done!) acts as a way of tricking antivirus engines into believing the application is different, even though the purpose and result of execution are the same. This explosion of variants can easily be seen with the banking Trojan Zeus, which has over 100,000 variants and more appearing every day.
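
To see why these trivial tweaks defeat hash-based signatures, here is a minimal Python sketch (the "payload" bytes are obviously just a stand-in) showing that appending a little padding produces a completely different SHA-256 hash:

```python
import hashlib

# Two "variants" of the same payload: the second just has junk padding appended.
original = b"PRETEND_MALWARE_PAYLOAD_v1"
padded_variant = original + b"\x00" * 16  # trivial change, identical behaviour

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(padded_variant).hexdigest())
# The two digests share nothing in common, so a hash-based signature
# written for the first variant will never match the second.
```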

AV-Test has some great statistics on this: https://www.av-test.org/en/statistics/malware/

Malware Classification

Classifying malware by behavior helps us understand what the malware's infection vector is, what its purpose is, how big a risk it poses and how we can defend against it. Knowing these things, and being able to quickly classify malware in this way, allows us to respond more quickly. We need to know what the malware has done, how the asset was compromised and when to restore it to a known good state.

Later in this post we are going to go through the classifications but until then Aman Hardikar has a nice mind map that gives a good visualization of how classification may be done. His blog may be found here.

The Computer Antivirus Research Organization Naming Scheme

CARO is a naming convention for malware variants that makes new malware names easy to understand, informative and standardized. It was created primarily for AV companies and, while not universally adopted, it is used (with variations) by many vendors.

The naming convention is split as follows:

Type – the classification of the malware (e.g. Trojan)

Platform – the operating system targeted

Family – the broader group of malware the variant belongs to

Variant Letter – an alpha identifier (aaa, aab, aac etc)

Any additional information – for example, if it is part of a modular threat

If we put these components together we get an informative name. I have included a Microsoft image to help illustrate this below and the MS article can be found here.
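
To make the structure concrete, here is a rough Python sketch that splits Microsoft-style names into their CARO components (the regex and the sample names are illustrative; individual vendors vary in the details):

```python
import re

# Rough parser for Microsoft-style CARO names, e.g. "Trojan:Win32/Zbot.A".
CARO_PATTERN = re.compile(
    r"^(?P<type>[^:]+):(?P<platform>[^/]+)/(?P<family>[^.!]+)"
    r"(?:\.(?P<variant>[^!]+))?(?:!(?P<info>.+))?$"
)

for name in ["Trojan:Win32/Zbot.A", "Ransom:Win32/WannaCrypt.A!rsm"]:
    match = CARO_PATTERN.match(name)
    print(name, "->", match.groupdict())
```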

MAEC

MAEC is a community-developed structured language for encoding and sharing information about malware. It contains information on a malware's behavior, artifacts or IoCs, and relationships between malware samples where relevant. Those relationships give MAEC an advantage over relying on simple signatures and hashes, as it generates them based on low-level functions, code, API calls and more. It can be used to describe malicious behaviors found or observed in malware at a higher level, indicating the vulnerabilities it exploits, or behavior like email address harvesting from contact lists or disabling of a security service.
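
As a rough illustration only (the field names below are loosely modelled on the MAEC 5.0 JSON schema; check the actual specification before relying on any of them), a minimal MAEC-style package might look something like this:

```python
# Illustrative sketch, not a schema-validated MAEC document.
maec_package = {
    "type": "package",
    "id": "package--00000000-0000-0000-0000-000000000001",  # placeholder UUID
    "schema_version": "5.0",
    "maec_objects": [
        {
            "type": "malware-instance",
            "id": "malware-instance--00000000-0000-0000-0000-000000000002",
            "labels": ["trojan"],
            # Capabilities/behaviors are what let MAEC relate samples by what
            # they do, rather than just by their hashes.
            "capabilities": [{"name": "exfiltration"}],
        }
    ],
}
```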

MISP

Where MAEC is a structured language like XML, MISP is an open source system for the management and sharing of IoCs. Its primary features include a centralized searchable data repository, a flexible sharing mechanism based on defined trust groups and semi-anonymized discussion boards.

An IoC is any indicator caused by a malicious action, intrusion or similar. It can be network calls, running processes, registry key creation and more. By having a central database of IoCs we are able to leverage the experience gained from cyber attacks across the world.
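
As a sketch of what this looks like in practice, the snippet below uses the PyMISP client library (pip install pymisp) to push an event with a couple of IoCs to a MISP instance; the URL, API key and indicator values are all placeholders:

```python
from pymisp import PyMISP, MISPEvent

# Placeholder instance URL and API key.
misp = PyMISP("https://misp.example.org", "YOUR_API_KEY", ssl=True)

event = MISPEvent()
event.info = "Example banker trojan campaign"
event.add_attribute("ip-dst", "203.0.113.5", comment="suspected C2 server")
event.add_attribute(
    "sha256",
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
)

misp.add_event(event)  # share the IoCs with your trust group
```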

ENISA has a pretty cool report that includes some info on MAEC and MISP: https://www.enisa.europa.eu/publications/standards-and-tools-for-exchange-and-processing-of-actionable-information/at_download/fullReport

Malware Types


Viruses

One of the oldest, and definitely the most well-known, types of malware are viruses. They can be identified by the way they copy, or inject, themselves into other programs and files. This allows them to persist on a machine even if the original file is deleted, and to spread to other devices as the "host" files or programs are distributed. There are a few types of viruses that have been identified, and this feeds into our classification taxonomy discussed earlier. Symantec have a pretty good blog on these which can be found here. There are two attributes specifically that I will talk about here:

File Injectors describe how viruses spread and the way they infect a host file. There are three types: Overwriting Infectors overwrite the host file as needed; Companion Infectors rename themselves as the target file so they are executed in its place; and Parasitic Infectors attach themselves to a host file.

Memory resident viruses are discussed under the Boot Sector and Master Boot Record virus sections of the Symantec guide, and in detail by Trend Micro here. Memory resident infectors remain in a computer's RAM after execution and try to infect target files, programs or media (like floppy disks! If they still exist… surely some bank somewhere uses them 🙂 ). The way they actually infect the target is the same as the File Injector methods.

Close up; Macro Viruses

Macro viruses are less of an issue these days as macros are disabled by default in Microsoft Word, and enabling them gives the user a pop-up warning that the macro is attempting to run. In the past they were a major issue, however, so let's give a brief run-down of them.

Macros are small scripts written in the scripting language of the application they run in, such as Visual Basic for Applications (VBA) in Microsoft Word and Excel. This OS independence means a macro run on a Windows system will also run on macOS. Once executed the macro can run a number of functions, from infecting every document of that type, to changing the document contents or even deleting them. Macro viruses are generally spread via spam emails with "invoices" attached. Given they are still prevalent in your spam folder, these macros can be a good opportunity for the budding malware analyst to analyse what a macro is doing. Just be sure to use a secure environment!
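
For instance, the oletools package (pip install oletools) can pull the macro source out of a suspicious document without ever opening it in Word. A minimal sketch, with a placeholder filename:

```python
from oletools.olevba import VBA_Parser

vba = VBA_Parser("suspicious_invoice.doc")  # placeholder sample
if vba.detect_vba_macros():
    for filename, stream_path, vba_filename, vba_code in vba.extract_macros():
        print(f"--- macro found in {stream_path} ---")
        print(vba_code)  # look for AutoOpen, Shell, URLDownloadToFile, etc.
vba.close()
```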

Worms

Worms are malware that replicate themselves with little to no interaction from the user. How WannaCry spread throughout the world using the EternalBlue exploit against an SMBv1 vulnerability is a recent example of this. Other types of worms can use browsers, email and IMs to spread.
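
On the defensive side, you can check your own network for the exposure WannaCry abused. A minimal sketch (only scan networks you are authorised to test; the address range is a placeholder):

```python
import socket

def smb_open(host: str, timeout: float = 0.5) -> bool:
    """Return True if TCP 445 (SMB) accepts a connection."""
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

for i in range(1, 255):
    host = f"192.168.1.{i}"  # placeholder subnet
    if smb_open(host):
        print(f"{host} exposes port 445 - verify MS17-010 patch level")
```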

Close up; Mass-mailers

Mass mailers are the traditional worm-type malware, spread via tantalizingly designed emails that tempt you to click on them. That invoice you forgot to pay, the secret crush who loves you, and more, are all examples of how this type of malware encourages you to open it. It's a form of social engineering that fools you into clicking the link in the email or downloading the attachment. Some advanced mass mailers can even turn your computer into an SMTP server to spread to other hosts; by compromising your address book they can email your contacts.

Other types

Other types of worms include file sharing worms, which rely on users downloading and running the applications, commonly seen in torrents. Who can resist that randomly uploaded movie on The Pirate Bay? You have been dying to see it, and anyway, who has the cash to pay for it?

Internet worms, like WannaCry, use vulnerabilities to spread across networks.

Instant messaging worms used to be common; they would take over the old IM clients (remember MSN Messenger?) and message all your contacts, trying to get them to download the worm themselves.

Trojans

The name Trojan comes from Greek myth (and the Brad Pitt movie Troy): Greece was laying siege to the city of Troy, and after a long stalemate they hid soldiers inside a giant, hollow wooden horse, pretended it was a gift, and tricked the defending Trojans into bringing the horse inside to one of the temples. When night fell the soldiers snuck out and opened the gates to the city! (The Iliad covers the siege itself; the horse episode is actually told in the Odyssey and Virgil's Aeneid.) I recommend you read the poems themselves as they're really cool! 🙂

Trojan malware is malware that pretends to be a legitimate application but does something malicious. Trojans do not replicate and tend to have a purpose that benefits from evading detection, like operating as a backdoor. In many cases the Trojan program's legitimate "cover" is fully functional so that the victim will not remove it.

Close up; Bankers

Banker Trojans are designed to steal sensitive user information, like credit card details, credentials and other high-value data. We spoke about Zeus before, and this is an example of a banker Trojan. It acquires the data and then forwards it on to a Command and Control (C2) server, which receives and stores the data to be accessed by the malware author. I wonder if this outbound traffic could be used to detect it…
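
It can: C2 implants often check in on a fixed schedule, so suspiciously regular gaps between connections to one destination are a classic hunting signal. A toy sketch with made-up timestamps (real detectors must handle jitter, sleep randomisation and noise):

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.1):
    """Flag traffic whose inter-connection gaps vary by <10% of the mean gap."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 5:  # too few samples to judge
        return False
    return pstdev(gaps) < max_jitter * mean(gaps)

# Connection times (seconds) to one destination - roughly every 60s.
conns = [0.0, 60.2, 119.8, 180.1, 240.0, 299.9, 360.3]
print(looks_like_beacon(conns))  # True -> worth investigating
```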

Close up; Keyloggers

Keyloggers continuously monitor and record keystrokes, usually storing them in a file or exfiltrating them to a command and control server, as with the bankers. In some cases the keylogger will try to identify specific information by monitoring for "trigger" events, like visiting specific websites, to try and capture credentials. This logging behavior is also seen in bankers.

Close up; Backdoors

Backdoors are a great persistence tool for an attacker. The Trojan, operating as a legitimate application, opens a port on the server and listens for a connection. The attacker can then connect to your asset through the Trojan. In less targeted attacks the malware may compromise your system and set up a backdoor for the attacker, or their command and control server, to send commands. The outcome of this can be harvesting sensitive data, using your asset as a pivot point to traverse the network, or using your asset as a "zombie" in a botnet.

Rootkits

Rootkits are scarier than anything we have talked about so far. A rootkit is not necessarily malware in and of itself, but a collection of techniques and tools coded into malware allowing for privilege escalation. The aim of the rootkit is to fully compromise the system, conceal its presence and offer persistence. The escalated privilege can be gained by direct attack or by using previously acquired login credentials, among other methods. Rootkits are difficult to find due to the elevated access they have.

Scareware

Scareware is any malicious application that uses social engineering to scare a user into buying unwanted software. By giving a sense of urgency ("BUY NOW BEFORE ITS GONE!!!") and intimidation ("YOUR LAPTOP HAS BEEN HACKED, BUY THIS TO FIX IT!") the victim may pay the demand. This can come in the form of dodgy antivirus products but can also be seen in other threats, the most prominent being ransomware. The scareware portion of ransomware tends to be the countdown timer before "your files are gone forever".

Adware

Adware can take many forms but the purpose is the same: to show you lots of advertisements. In the past the result was brightly colored, invasive banner advertisements and pop-ups during regular browsing. As this generally caused frustration for users (and subsequent adware removal), modern adware tends to be more subtle for the sake of persistence. Worse, it will monitor what you do and create user profiles to sell to third parties. Scary!

Spam

Spam is an incredibly common function of malware. Brian Krebs wrote a great book investigating primarily Russian spammers and found it was a multi-million euro business. In 2014 it was estimated that over 90% of emails were spam; that is trillions of emails per year dedicated to this wasteful activity. Spam primarily uses email but can also use blogs, IMs, advertisements and SMS. It's essentially free for the spammer, and the occasional successes mean it's unlikely to subside anytime soon. Beyond the usual risks this also concerns companies, as they are responsible for protecting their employees from the kind of abuse spam can entail.

Infection vectors

This portion of our lesson is going to discuss how malware attempts to infect a system. Beyond the standard technical aspect of how the infection vector enables the malware to infect a system, the vector can also be social engineering. The ways malware authors leverage social engineering to get people to install malware were discussed already, but here they are again:

  • Coming from a trusted source, like a worm from a friend saying "I love you".
  • Having a sense of urgency or importance, such as having to "INSTALL THIS NOW BEFORE ITS TOO LATE".
  • Arousing the interest of the victim, like the friend saying "I love you" while also offering an interesting service the victim has need of.

Email

Email far exceeds any other vector in terms of the speed and coverage with which malware can spread. Anyone with an email account can be a target, and this vector makes extensive use of the social engineering techniques mentioned previously. ILOVEYOU was one of the earliest examples of this attack, but macros and viruses embedded in documents distributed by spam campaigns are all still common.

Social networking

Social networking is a great platform for attackers due to its extensive reach. Attackers use social networks as a way to enter the lives of victims and provide them with links to malicious sites or malicious files to download. The attacker can also add friends of friends to extend their reach; people accept friend requests more often when there are mutual connections than when there are none. Making use of readily available email and password lists, attackers can gain access to existing compromised accounts or just create their own.

Attackers can also set up their own pages to spread malware. When a user likes such a page they receive updates directly to their news feed, eliminating the need for a friend request.

Portable Media

Many organisations nowadays block the USB ports on their assets, and for good reason. Malware stored on a USB drive that automatically runs is a real risk facing us today, and it can have big consequences. A famous example is Stuxnet, discovered in 2010, where (reportedly) US and Israeli intelligence services left USB keys outside an Iranian nuclear facility. These USB keys had malware on them, and when an unwitting scientist plugged one into his workstation the malware slowly worked its way through the network until it found the centrifuge SCADA systems. It then executed its main function of causing extensive damage. A lot has been written on this attack and it makes for interesting reading: https://en.wikipedia.org/wiki/Stuxnet

While Stuxnet was a highly targeted attack, portable media can also be used for opportunistic attacks.

URL Links

URL links are a special kind of infection vector as they are usually spread through other infection vectors, e.g. social networks, IMs, email etc. Examples of this vector include link-shortening services and misspelled legitimate domain names. These URLs lead to fake websites that look legitimate (or could even carry XSS payloads, in the case of URL shorteners). This tricks users into carrying out their tasks as normal, not knowing that the attacker is recording their interaction with the fake website, including collecting any credentials used. Many banks have started enforcing Multi-Factor Authentication to mitigate the risks from this (as well as other attacks like phishing).
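
One simple defensive trick here is flagging domains that sit within an edit or two of a brand you care about. A self-contained sketch (the domain lists are invented for illustration):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

protected = ["examplebank.com"]
observed = ["exarnplebank.com", "examplebank.com", "unrelated.org"]
for domain in observed:
    for brand in protected:
        d = edit_distance(domain, brand)
        if 0 < d <= 2:
            print(f"{domain} is {d} edit(s) from {brand} - possible typosquat")
```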

File Sharing

The Pirate Bay and other file sharing websites have long been host to many types of malware. Users tend not to investigate what they download, opening themselves up to compromise.

Soundwaves

A good blog on how this vector is used can be found here: https://blog.systoolsgroup.com/malware-through-soundwaves/

Bluetooth

Think BlueBorne; a good blog on how this vector is used can be found here: https://www.extremetech.com/mobile/255752-new-blueborne-bluetooth-malware-affects-billions-devices-requires-no-pairing

Always keep the law in mind when planning security.

Your legal and contractual requirements should be firm considerations when planning out and implementing your information security. If in doubt as to what your obligations are and which pieces of legislation apply to you, work with your legal team to identify them.

Security category – 18.1. Compliance with legal and contractual requirements

18.1.1. Identification of applicable legislation and contractual requirements.

All companies should adhere to their contractual and regulatory obligations, but to do so we need to know what those obligations are. Your organization should take care to go through its contracts and understand what is expected of you. You should also have specially trained staff, with knowledge of the regulations impacting your industry, at hand when drafting policies, procedures or standards. These staff can keep you informed of changing requirements so you can include them and ensure you remain compliant. Remember, if you have offices in multiple legal jurisdictions your plans should take the different legal environments into account.

18.1.2. Intellectual property rights.

You should make sure that, for any material you use such as software, you are compliant with copyright and IP laws, as well as any licensing fees that may apply. By ensuring that software on your assets has been obtained from the vendor, and that only correctly licensed versions can be installed, we can reduce our risk. Outlining employee responsibilities, such as not using pirated software, in the Acceptable Use Policy can help us be compliant, as can regular audits of software. Be prepared to hand over licensing information should a vendor wish to audit you.

18.1.3. Protection of records.

In many jurisdictions there is legislation in place specifying how record retention should be carried out. An example of this, from Irish guidance for healthcare records[1]:

“In general, medical records should be retained by practices for as long as is deemed necessary to provide treatment for the individual concerned or for the meeting of medico-legal and other professional requirements. At the very least, it is recommended that individual patient medical records be retained for a minimum of eight years from the date of last contact or for any period prescribed by law. (In the case of children’s records, the period of eight years begins from the time they reach the age of 18).”

You should have policies in place to protect records in accordance with these laws, as well as contractual and regulatory requirements. Similarly, you may wish to tailor your retention policy in a manner that benefits your organization and helps further your business needs. This can be done, but should be carried out in line with legislative, regulatory and contractual requirements. Keeping records for too long, beyond a reasonable business need, costs resources to maintain them, and we run the risk of greater loss should a breach occur. With that in mind, it is encouraged to limit the retention period of records where reasonable.
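
As a toy illustration of enforcing the eight-years-from-last-contact guideline quoted above (the record data is invented; a real policy needs legal review and exceptions, e.g. for children's records):

```python
from datetime import date, timedelta

RETENTION = timedelta(days=8 * 365)  # approx. eight years

records = [
    {"patient": "A", "last_contact": date(2009, 3, 1)},
    {"patient": "B", "last_contact": date(2017, 6, 12)},
]

today = date(2018, 1, 1)
for record in records:
    if today - record["last_contact"] > RETENTION:
        print(f"Record {record['patient']} exceeds retention - review for disposal")
```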

18.1.4. Privacy and protection of personally identifiable information.

Nearly all countries have some requirements for the reasonable protection of collected PII. In some jurisdictions, such as the European Union with the incoming GDPR, not sufficiently protecting PII can lead to fines being levied against the organization. To use the GDPR as an example, a company can be fined up to 4% of its annual global turnover (or €20 million, whichever is higher). One of the best ways to ensure compliance is to designate an employee as a Privacy Officer (under the GDPR, a Data Protection Officer) who can advise on local regulations.
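
A one-line sanity check of that cap (Article 83(5); the turnover figure below is hypothetical):

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    # Higher of EUR 20m or 4% of worldwide annual turnover (GDPR Art. 83(5)).
    return max(20_000_000, 0.04 * annual_turnover_eur)

print(max_gdpr_fine(2_000_000_000))  # EUR 2bn turnover -> up to EUR 80m
```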

18.1.5. Regulation on cryptographic controls.

In a previous control we discussed the importance of using encryption for confidentiality, integrity and non-repudiation, but in some states the use of encryption is heavily regulated and may, in some cases, require decryption keys to be provided to the authorities. It is important to understand your local laws when using encryption or incorporating encryption in your products.


[1] https://www.icgp.ie/go/in_the_practice/it_faqs/managing_information/8E2ED4EE-19B9-E185-837C1729B105E4D9.html

When things go wrong – Business Continuity and Redundancy!

Security category – 17.1. Information security continuity

17.1.1. Planning information security continuity.

Having comprehensive business continuity and disaster recovery plans can be vital for an organization's survival should a disaster occur. Such plans should include security, which is just as important, if not more so, during a crisis. If there are no such plans then the organization should strive to maintain security at its normal level during a disaster. Where possible, Business Impact Analyses should be carried out to investigate the security needs during different disasters.

17.1.2. Implementing information security continuity.

Ensuring that the security controls in any plans are carried out during a disaster is just as important as having the plans themselves. There should be documented processes and procedures in place, easily accessible to staff during such a situation. These documents should be available in both electronic and paper format, with copies stored in geographically separate locations. This allows us to maintain a command structure that includes security responsibilities, and keeps staff accountable and aware that security is still necessary. In some types of disasters our primary security controls may fail; in this case we should have separate, mitigating controls ready to be implemented.

17.1.3. Verify, review and evaluate information security continuity

This helps us ensure our plans are effective and will work as intended. In practice, it is carried out through table-top exercises, structured walkthroughs, simulation tests, parallel tests and full interruption tests.[1] The plan should be updated to reflect changes in the organization, frequently tested to ensure it works as envisioned, and everyone involved should be trained to know what to do when a disaster strikes.

Security category – 17.2. Redundancies

17.2.1. Availability of information processing facilities.

A key tenet of security is ensuring availability, and this can be better enforced through redundancy. This simply means having multiple redundant components so that if one fails, operations fail over to the remaining working components. This can be expensive, and which applications are in scope for redundancy should be decided in line with business needs.
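
A minimal failover sketch in Python, assuming a health-check URL for each redundant component (the endpoints are placeholders):

```python
import urllib.request

REPLICAS = [
    "https://app-primary.example.org/health",
    "https://app-standby.example.org/health",
]

def first_healthy(urls, timeout=2):
    """Return the first replica whose health check answers 200 OK."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # replica down - fail over to the next one
    raise RuntimeError("no healthy replica - time to invoke the DR plan")

print(first_healthy(REPLICAS))
```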


[1] https://en.wikipedia.org/wiki/Disaster_recovery_plan#Testing_the_plan