Malware Analysis Lesson 4: Dynamic Analysis

Last week we looked at static analysis: investigating malware without running it. We also looked at some of its limitations, such as obfuscated code, dynamically loaded libraries, and the multitude of other methods malware authors use to frustrate analysts. When we have extracted all the information static analysis can give us, we can go deeper and try to identify what the malware actually does. The way we do this is with dynamic analysis, where we run the malware in a sandbox and monitor what changes it makes to the system, what network calls are made, what it looks like in RAM and so on. By monitoring the malware while it is running we can identify its functionality and behavior. For example, not all of the function names we found in our static analysis strings dump may actually be called at runtime; dynamic analysis lets us confirm what is really going on.

By launching the malware in a controlled and monitored sandbox we can observe and document its effects on a system. This is especially useful when the file is encrypted or packed: once the contents are loaded into memory we can read them and observe what the malware does.

What do we need for our sandbox?

First things first: read Malware Unicorn's Reverse Engineering 101 course. It's amazing and gives you the ISOs you need to set up your sandbox! Link is here. We have to run the file in a sandbox because, if the file turns out to be malicious, we can't risk the malware infecting all the machines on our network.

A good malware analysis environment achieves three things:

  • It allows us to monitor as much activity from the executed program as possible.
  • It performs this monitoring in such a way that the malware does not detect it – which is important as some malware will not execute some functions if it detects it is being run in a sandbox.
  • Ideally it should be scalable so we can run many samples repeatedly in an isolated and automated way.

What kind of information are we looking for?

We should aim to capture everything the malware does so we can build up a picture of what is going on. We need:

  • All traces of function calls made
  • What files are created, deleted, downloaded or modified
  • Memory dumps, so we get all that juicy information stored in RAM (Volatility is great for this)
  • Network activity and traffic
  • Registry activity, for Windows machines
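To make the file-tracking point concrete: the simplest way to capture what files were created, deleted or modified is to hash everything before execution and again afterwards, then diff the two snapshots. This is the same idea tools like FCIV rely on. A minimal sketch in Python (the directory layout is whatever your sandbox uses):

```python
import hashlib
import os

def snapshot_hashes(root):
    """Walk a directory tree and record the SHA-256 hash of every file."""
    hashes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                hashes[path] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def diff_snapshots(before, after):
    """Report files created, deleted and modified between two snapshots."""
    created = sorted(set(after) - set(before))
    deleted = sorted(set(before) - set(after))
    modified = sorted(p for p in set(before) & set(after)
                      if before[p] != after[p])
    return created, deleted, modified
```

Take one snapshot before detonating the sample, one after, and the diff tells you exactly which files the malware touched.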

What are some ways we can create this environment?

Air gapped

Air-gapped networks are very simple in concept but difficult to maintain. The idea is to have physical machines that are isolated from your network (possibly completely disconnected from any network). Military, power plants, avionics and malware analysts all make heavy use of air-gapped networks. This physical isolation is a literal gap of air between the sandbox and the Internet. This type of setup relies on the malware (and our sandbox) never needing access to the Internet, which is often not the case. It also presents difficulties in moving files on and off the machine safely – and that transfer step is exactly where a breach becomes possible.

There have been a few attacks on air-gapped networks. The one I'm going to chat about always gets me excited – it's like the plot of a Bond movie! STUXNET! This was a piece of military-grade malware designed by the US and Israel to infect and disrupt the Iranian nuclear program. The Iranian site was air-gapped and, from what I can tell, very well protected. The malware was introduced by dropping infected USB keys where the scientists working on the site were likely to find them. Eventually the malware was able to worm its way into the centrifuge controllers and cause a lot of disruption. Wired has an awesome article on it I recommend: https://www.wired.com/2014/11/countdown-to-zero-day-stuxnet/

The attack propagated into the air-gapped network via the removable media used to transfer files.

Virtualisation

Other than the lack of network access, another issue with air-gapped sandboxes is the need to reinstall the operating system after every malware execution. Such a pain. Virtualisation is how we get that isolation without the manual pain of reinstalling. By using VirtualBox or VMware we can snapshot the state of the sandbox before executing the malware, then roll back quickly and easily to this known-good state afterwards. The problem with virtualisation is that malware can often detect that it is being run in a VM. Some malware, once it knows it is in a VM, will not execute, or will not execute the way it would on a regular device. This is to make it more difficult for us analysts to monitor what it does. Oh how malware authors hate us!

Ways malware can identify that it is running in a sandbox include:

  • Guest additions (VMware Tools and the like)
  • Device drivers installed
  • Well-known malware monitoring tools running in the background
  • The MAC address of the VM's network adapter
  • Traces in the registry, for Windows machines
  • Process execution times
  • Lack of host activity like downloaded apps, temporary files, browsing history etc.
  • The presence, or lack thereof, of anti-virus applications
  • The type of system it's run on (CPU, RAM, HDD etc.)
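As an illustration of the MAC address check: the first three bytes of a MAC (the OUI) identify the hardware vendor, and the default virtual adapters of common hypervisors use well-known prefixes. A hypothetical sketch of the kind of logic malware might use (the prefix table here is a small, incomplete sample, not an exhaustive list):

```python
# Well-known OUI prefixes assigned to common hypervisor vendors
# (a small sample; real checks use much longer lists).
VM_OUI_PREFIXES = {
    "08:00:27": "VirtualBox",
    "00:0c:29": "VMware",
    "00:50:56": "VMware",
    "00:15:5d": "Hyper-V",
}

def vm_vendor_from_mac(mac):
    """Return the hypervisor vendor if the MAC's OUI matches, else None."""
    oui = mac.lower().replace("-", ":")[:8]   # normalise and keep first 3 bytes
    return VM_OUI_PREFIXES.get(oui)
```

This is also why hardening a sandbox often includes randomising the VM's MAC address to a non-hypervisor prefix.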

What are the problems with dynamic analysis?

There are a few problems – or challenges, for the optimist – we need to be aware of with dynamic analysis. When we first run the file we don't know what it will do. It may execute all its malicious code at once, or it may not. It might wait for a trigger event: a particular time of day, a connection to a network, or some user action. For example, if we are analyzing a backdoor Trojan, it may just open a port and wait for the remote server to try to connect to it before doing anything else. We may also have to execute the program many times to find out everything it does and how it affects the system.
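To see why a single run can miss behavior entirely, consider a hypothetical time-of-day trigger gate (a pattern some real samples use; the window here is made up for illustration):

```python
from datetime import datetime

def should_fire(now=None):
    """Hypothetical trigger: the payload only runs between 02:00 and 04:00.

    Outside that window the sample does nothing interesting, so an analyst
    who detonates it once at 14:00 sees a seemingly benign program.
    """
    now = now or datetime.now()
    return 2 <= now.hour < 4
```

This is one reason sandboxes often fake the system clock or run the same sample repeatedly under different conditions.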

Dynamic Analysis Tools

File system and registry monitoring: Tools like Process Monitor allow us to see how processes read, write and delete registry entries and files.
Process monitoring: Process Explorer and Process Hacker help you observe processes started by an executable, including network ports opened!
Network monitoring: Tools like Wireshark allow us to observe network traffic for anything malicious the malware is trying to do, including DNS resolution requests, bot traffic and downloads.
Change detection: Regshot is a small program that lets us compare the system's state before and after the execution of a file – including registry changes. FCIV lets us compare file hashes before and after. It's also possible that Puppet could be used for this.
DNS spoofing: INetSim is a tool that simulates a network so that malware interacting with a remote host continues to run, allowing us to observe its behavior further.
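The idea behind a service simulator like INetSim can be sketched in a few lines: listen locally, accept whatever the sample sends, and always answer with a plausible canned response so the malware keeps talking instead of giving up. This toy one-shot "HTTP server" is not INetSim itself, just the core trick:

```python
import socket
import threading

# A canned response that satisfies most naive HTTP clients.
CANNED = b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"

def serve_once(server):
    """Accept one connection, discard the request, reply with the canned page."""
    conn, _addr = server.accept()
    conn.recv(4096)          # read (and ignore) whatever the client sent
    conn.sendall(CANNED)
    conn.close()

def start_fake_http():
    """Start a one-shot fake HTTP service on an ephemeral localhost port."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    threading.Thread(target=serve_once, args=(server,), daemon=True).start()
    return server.getsockname()[1]   # the port the fake service listens on
```

Combine this with DNS answers that point every domain at 127.0.0.1 and the sample's "C2 traffic" all lands in your lap for logging.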

That said if anyone has some cool alternatives to these let me know! 🙂

Where possible always establish a baseline of what your machine should look like before the malware is executed so we can investigate changes.

What's next?

The next step is to practice with the tools we have learned. This won't be something I'll document, as I find it's always best to learn by doing.
theZoo (https://github.com/ytisf/theZoo/tree/master/malwares/Binaries) is where I will get my malware. I have already set up my sandbox from our static analysis session. The next step will be running through all the monitoring tools one by one to explore how they work. Using the snapshots built into VirtualBox, I am going to assess the tools one at a time, and after I'm finished I will roll back my snapshot and try a new tool.
