Business Continuity Management – Lesson 1: An Introduction

Business continuity is the uninterrupted availability of all key resources supporting essential business functions. Business Continuity Management aims to ensure the availability of processes and resources following a disruption, so that mission-critical objectives can continue to be achieved. It's very dry. Writing this makes me question my choices in life.

Should I have a Business Continuity Plan?

A Business Continuity Plan is an essential part of good business practice for any organisation, but especially for organisations dealing with the following:

  • The requirement for availability during an extended working day, such as 365-day-a-year uptime.
  • High dependence on certain key facilities, such as data centres or manufacturing facilities.
  • Heavy reliance on IT, data, comms and telephony, which describes most companies.
  • A high level of compliance, audit, legal or regulatory impact in the event of loss of a facility such as finance or healthcare.
  • The potential for legal liability.
  • Possible loss of the confidence and support of the workforce – an incident could cause staff to move to other organisations, which can be a particular concern at startups.
  • Potential loss of political and stakeholder support.

Ok, that describes me! What are the benefits?

A business continuity program will benefit your organisation by providing:

  • A more resilient operational infrastructure
  • Better adherence to compliance and quality requirements
  • The capability to continue achieving your organisation's mission
  • The capability to sustain the business's profitability
  • Capability to maintain market share
  • Improved morale for employees
  • Protection of image, reputation and brand value

Nice! What are the impacts of business disruption?

Having Business Continuity Management in place can help with four main areas: Marketing, Finance, Statutory/Regulatory and Quality.

Marketing;
  • Continued operation is crucial to maintaining customer confidence; too much disruption and customer churn is a certainty!
  • Very often strong advertising and messaging can work against us, especially where we have set a high bar of expectation.
  • Marketing spend may need to triple its normal annual budget in the aftermath of a disaster as the team tries to restore customer confidence and maintain or recover market share.

Finance;
  • Many contracts contain damages or penalty clauses, often expressed as a percentage of the contract's value, that may be invoked in the event of a service failure.
  • Even force majeure clauses covering unforeseeable disasters are being contested in court, on the basis that in our modern world most disasters should be foreseeable and planned for.
  • There can be other financial losses too:
    • Loss of interest on overnight balances.
    • Cost of interest on cashflow – especially if you need an overdraft to cover cashflow disruptions.
    • Delays in customer accounting and payments.
    • Loss of control over debtors.
    • Loss of credit control.

Statutory or compliance requirements;

  • Many organisations have to meet legal requirements, including:
    • Record keeping and audit trails
    • Compliance requirements and industry regulations
    • Health and safety, and environmental requirements
    • Government agency requirements
    • Tax and customs requirements
    • Import and export regulations
    • Data protection regulations
  • Depending on your organisation, some or all of these will apply to you, and not having the capability to comply can result in severe penalties.

Quality;
  • Many international standards have DRP and BCM requirements, including:
    • ISO 9000, which requires quality management system audits and surveillance visits
    • ISO 27001
    • ISO 22301, which is specifically for Business Continuity Management
  • All of these require a business continuity plan to be created and available to protect critical business processes from disruption.
  • A loss of service aggravated by the lack of a BCP can even cause those standards bodies to review and withdraw your accreditation.

Survival from disruption

  • We must implement BCM to ensure our survival following a disruption.
  • Impacts can include:
    • Existing customers leaving.
    • Prospective customers looking elsewhere
    • Loss of market share.
    • Damaged image and credibility
    • Reversed cashflow
    • Costs could spiral out of control.
    • Inventory costs rise and management becomes difficult – especially for grocery stores and retail.
    • Share prices can drop
    • Competitors may take advantage of the disruption.
    • Key staff may leave
    • Layoffs may be necessary.

That’s interesting but what are the causes?

Far too many to count. From local to regional and all the way up to national, here are a few:

Local;
  • Systems failure
  • Data corruption
  • Garda blocking access to the locality due to an emergency.
  • Chemical contamination
  • Hacking
  • Theft of PCs with personal information – in some cases these PCs may be the only source of that information.
  • Loss of supplied services, from backups and data retention to AWS.
  • Loss of power

Regional and national;
  • Earthquakes.
  • Terrorism
  • Brexit
  • Volcanic eruptions
  • Snow, floods and windy weather
  • Civil unrest

These don’t sound like IT though?

That's right, Business Continuity is about more than IT. It has to cover all business operations, including manufacturing, retail, operations, and all front and back office activities deemed critical. BCM should be encouraged for organisations of all sizes, and programs such as the US PS-Prep have been key drivers here.

So what counts as a disaster then?

Disasters can be difficult to identify; really it's any event where critical operations are impacted. Where controls or redundancies are in place, an event may not rise to the level of a disaster at all, as the following examples show.

One example of something that would previously have been classed as a disaster is a physical equipment failure: organisations that have migrated services to the cloud are no longer impacted by the failure of a single piece of equipment.

Another would be an organisation that has 2 telecoms providers. If one provider goes down it is not a disaster as the second provider can be used.

A last example would be where a supply line fails but the organisation has built up a buffer supply.

In all these examples the critical operations of the organisation continue. Your organisation should take account of what these critical operations are to help define what a disaster is for you. This definition may prove vital for decision-making in determining when to activate the DRP.
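As a hedged sketch of that idea (operation names, redundancy counts and the data shape are all invented for illustration), a disaster definition can be encoded as a simple check over critical operations:

```python
# Sketch: decide whether an incident counts as a "disaster" for DRP purposes.
# Operation names and redundancy data are hypothetical examples.

CRITICAL_OPERATIONS = {
    "order_processing": {"redundant_paths": 2},   # e.g. two telecoms providers
    "manufacturing":    {"redundant_paths": 1},   # buffer stock counts as a path
}

def is_disaster(impacted: dict[str, int]) -> bool:
    """impacted maps operation name -> number of paths lost."""
    for op, lost in impacted.items():
        spec = CRITICAL_OPERATIONS.get(op)
        if spec and lost >= spec["redundant_paths"]:
            return True   # a critical operation has no remaining path
    return False

# One provider down out of two: not a disaster, service continues.
print(is_disaster({"order_processing": 1}))   # False
# Both providers down: time to activate the DRP.
print(is_disaster({"order_processing": 2}))   # True
```

The point of the sketch is that the disaster threshold is specific to your organisation's critical operations, not to the event itself.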

What should my Recovery Timescale be?

When planning for business continuity, downtime should be an essential concern, and we should aim to reduce or eliminate it in the case of a disaster where possible. This is an even bigger concern in our modern world of online transactions – imagine Amazon's lost sales for every second of downtime as frustrated customers try competitor websites. Aside from customer-facing applications, downtime can also impact other organisations due to our interconnected and interdependent economies. A good example: after Japan's 2011 tsunami, memory chip manufacturing was disrupted and the cost of memory worldwide shot up.

The first objectives and the road map for recovery always differ depending on the individual organisation's needs. An online retailer might prioritize restoring its front end and order processing; a law firm's priority may be to ensure record retention and access to backend services and file shares; a bank might prioritize restoring the services that ensure the integrity of its databases.

Likewise, some organisations might consider a full recovery of mission-critical activities essential, while others might favor a partial recovery followed by a phased restoration afterwards. There are a few measurements we use for this:

  • Recovery Time Objective (RTO) – the targeted time from when service is disrupted to when full operations are regained.
  • Maximum Tolerable Downtime (MTD) – the time after which the service disruption causes irreversible damage to the organisation.
  • Maximum Tolerable Period of Disruption (MTPD) – the same as MTD.
  • Recovery Point Objective (RPO) – the interval of time that might pass during a disruption before the quantity of data lost exceeds the Business Continuity Plan's maximum allowable threshold or "tolerance."
  • Maximum Tolerable Data Loss (MTDL) – the same as RPO: the maximum data loss you can suffer before catastrophic impact to your organisation. This can be more important than the period of time a service is down for, especially with the rise of FinTech organisations.
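A minimal sketch of how these measurements might be applied after an outage; the threshold figures are invented examples, not recommendations:

```python
# Sketch: checking an outage against the recovery objectives defined above.
# The figures are hypothetical examples for one imaginary organisation.

RTO_HOURS = 4        # target time to regain full operations
MTD_HOURS = 24       # beyond this, damage is irreversible
RPO_HOURS = 1        # maximum tolerable window of lost data

def assess_outage(downtime_hours: float, data_loss_hours: float) -> list[str]:
    """Return a list of objectives breached by this outage."""
    findings = []
    if downtime_hours > MTD_HOURS:
        findings.append("MTD exceeded: irreversible damage likely")
    elif downtime_hours > RTO_HOURS:
        findings.append("RTO missed: recovery slower than target")
    if data_loss_hours > RPO_HOURS:
        findings.append("RPO exceeded: data loss beyond tolerance")
    return findings

print(assess_outage(6, 0.5))   # ['RTO missed: recovery slower than target']
```

In practice each critical service would carry its own RTO/RPO pair, derived from the Business Impact Assessment.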

Generally we have seen a trend of RTO and MTD requirements decreasing. This reduced tolerance for disruption has a few causes. Many businesses are now very dependent on Enterprise Resource Planning and Customer Relationship Management tools, and downtime to these can have a paralyzing impact on operations. Companies may run call centers offering 24/7/365 support, or have front ends and client integrations that are critical components clients rely on. Any outage can be fatal for the organisation.

Given this complexity, recovery can involve different modules, backup schedules, data points and integration points. The speed of recovery is just as important as the protection of data and transactions.

So I kind of get it – but what is Business Continuity: is it a plan, a project or a process?

Great question, mini-me! Business continuity usually starts as a project; small-scale and highly focused. Once this work starts, management generally sees the benefit and extends it. This cycle repeats until the project becomes an ongoing program, and then, through the standards it sets and follows, a management system. While initially viewed as a temporary project, it evolves into an essential Business As Usual activity.

The end result is a plan that is maintained and regularly reviewed, and that staff are trained in so they know what to do should a disaster occur.

Sure, but I want to have a structured plan; what should that look like?

For a proper BCP lifecycle there are a few different models, like the one shown above from Ebrary. It operates as a cycle for a few reasons.

At the outset it is necessary to understand, and gain executive buy-in for, certain business issues. This can involve carrying out Risk Assessments and Business Impact Assessments. These allow us to understand our specific risks and what options are open to us for continuity and recovery. They also help us understand what we need to protect from the perspective of having contingencies in place and managing risks.

Who do I need to involve in my BCP committee?

As mentioned before, there has been a move away from IT-focused BCPs towards Business Process BCPs. Data is now located across the enterprise, and critical business processes that rely on IT are carried out throughout the organisation. Because of this, any BCP has to focus on the wider organisational units rather than just IT. We need to involve all executives, managers and employees. It's a big job, so it is essential to have a BC coordinator who is responsible for maintaining, updating, reviewing and distributing the BCP.

BC is growing in maturity; what are the drivers?

Typically companies start with basic incident management runbooks, or plans. These cover how to respond to general hazards like fires, malware infections, bomb threats and dangerous weather. They operate at the lowest maturity level for a BCP, and the goal is to cover health and safety.

As organisations mature, these strategies evolve to include disaster recovery plans and procedures for the loss of information and communications technology, equipment, applications and data. Historically this was viewed as a siloed IT activity, with no consideration of the overall impact on business activities.

Since then several groups have exerted influence to encourage a more holistic approach to DRPs, including DRII, Survive!, ACP and DRIE. At the same time physical security has become more of a consideration, especially in Europe, where terrorist attacks have occurred more frequently.

What else do I need to know?


Since the 1990s regulatory requirements have come to the fore, pushing for a more holistic approach to BCM with the goal of covering more areas of risk. The need for this was emphasized by high-visibility scandals such as Enron (funny story: most of the Arthur Andersen employees moved to EY!), WorldCom and Parmalat. So far Fyre Festival has not prompted DRP and BCM regulatory requirements for music festivals, but one can hope.

Some relevant legislation includes:

  • SOX – US but similar legislation in EU(but not as comprehensive)
  • GLBA – US
  • HIPAA – US
  • GDPR – EU
  • NISD – EU
  • PSD2 – EU


There are also a number of independent and international standards organisations can certify against to reassure clients and stakeholders that, should the worst happen, they are prepared! Initially many bodies rushed to standardise BCM requirements, but this led to too many potential standards, causing a lot of confusion. Large multinationals then led a push to… wait for it… standardise the standards. These included:

  • Singapore Standard SS 507
  • BS 25999-1
  • ISO 22301:2012 BC
  • Lastly, one of the oldest: the US National Fire Protection Association (NFPA) started developing its NFPA 1600 standard (disaster/emergency management) in 1991, released it in 1995, and later improved it to include business continuity.

Once mature, these standards provided numerous benefits, including:

  • Reducing supply chain disruption, which had a positive impact across organisations.
  • Reduced costs for a company relating to unexpected disruption.
  • Improving customer satisfaction by ensuring their needs were serviced as expected.
  • Reduced barriers for accessing new markets.
  • Reduced impacts to the environment.
  • Improved market share.

So let's zoom in on supply chains for a moment.

Early on, BCM advocates saw that their organisations were disrupted when links in their supply chain suffered disasters. The earlier example of memory manufacturing in Japan after the tsunami illustrates this, as does the hard drive shortage caused by flooding in Thailand in the same year, 2011. In both instances the disrupted manufacturing caused issues down the supply chain for PC manufacturers.

In a survey by the Aberdeen Group:

  • over 50% of respondents had suffered supply chain failure in the previous year.
  • 56% -> supplier capacity did not meet the demand
  • 49% -> suffered raw materials price increases or shortages
  • 45% -> experienced unexpected changes in customer demand
  • 39% -> experienced shipment delays/damages/misdirects
  • 35% -> suffered fuel price increases or shortages

What's this holistic approach thing you mentioned?

Well, like my favorite word "synergy", holistic is mostly a buzzword consultants and academics use to pad out their work. What we mean by holistic is just taking an end-to-end view of business continuity, encompassing all elements of the organisation. For an enterprise that can include taking into account, among others:

  • BC and DR
  • Operational Risk Management
  • Insurance aspects
  • Security compliance and breaches (information, telco, and e-commerce)
  • Regulatory compliance
  • Business trading and financial risk management
  • Asset protection
  • Project development and production risk management
  • Supply chain risk management
  • Quality tracking, defect management, maintenance, and product recall
  • Problem management and escalation from helpdesks
  • Customer complaint issues
  • Health and safety
  • Environmental risks and safety management
  • Marketing protection (image and reputation)
  • Crisis management (branch attacks, hostage and kidnap, product recall, fraud)

Operational and Business Resilience

Business resiliency is simply the ability of the organisation to adapt and react to internal and external dynamic changes, whether opportunities or threats, disruptions or disasters. An organisation's business resiliency is assessed by how much disruption it can absorb before there is a significant impact on the business. Organisations never want to have to use their DRP; they want to avoid the crisis altogether.

But that's enough DRP for this week…

Malware Analysis – Lesson 2: Types of Malware

It is important for any malware analyst to understand the different categories of malware and how they try to infect our systems, in order to better check for indicators of compromise. Malware is categorized and sub-categorized based on its behavior, purpose and infection vector. Even beyond this, many malware families have spawned slight variants, Zeus being a notable example, further driving the need for categorization.

By finding the commonalities between different malware we can more easily find indicators of compromise, allowing us to more efficiently identify and isolate malware strains.

Malware Stats

Before we continue it is interesting to note that while total malware is being churned out at an exponential rate, the amount of genuinely new malware appearing each year has mostly remained static. The primary reason is that creating new, effective malware requires a high level of technical skill. Many of the threat actors we encounter do not meet this skill level, and so rely on purchasing malware or changing existing malware into a new variant.

This is very easy to do and can be as simple as adding padding, changing which portions of the malware are encrypted, moving functions around or even changing the functions themselves. Each of these steps (and there's more that can be done!) acts as a way of tricking antivirus software into believing the application is different, even though the purpose and result of execution are the same. This explosion of variants can easily be seen with the banking Trojan Zeus, which has over 100,000 variants with more appearing every day.
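A quick sketch of why such trivial changes defeat naive full-file hash signatures (the byte strings here are harmless stand-ins, not real malware):

```python
# Sketch: appending padding changes the file's hash while leaving its
# behaviour untouched, so a signature keyed on the full-file hash misses
# the variant. The "malware" is just an inert byte string.
import hashlib

original = b"\x4d\x5a fake malware body"       # stand-in for a PE file
variant  = original + b"\x00" * 16             # padding appended, logic unchanged

h_original = hashlib.sha256(original).hexdigest()
h_variant  = hashlib.sha256(variant).hexdigest()

print(h_original == h_variant)   # False: the naive signature no longer matches
```

This is why AV vendors lean on heuristics, emulation and fuzzy/section-level hashing rather than whole-file hashes alone.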

AV-Test has some great statistics on this:

Malware Classification

Classifying malware by behavior helps us gain an understanding of what the malware’s infection vector is, what its purpose is, how big of a risk it is and how we can defend against it. Knowing these things, and being able to quickly classify malware in this way allows us to more quickly respond. We need to know what malware has done, how the asset was compromised and when to restore it to a known good state.

Later in this post we are going to go through the classifications but until then Aman Hardikar has a nice mind map that gives a good visualization of how classification may be done. His blog may be found here.

The Computer Antivirus Research Organization Naming Scheme

CARO is a naming convention for malware variants that makes new malware names easy to understand, informative and standardized. It was created primarily for AV companies and, while not universally adopted, it is used with variations by many vendors.

The naming convention is split as follows:

Type – the classification of malware (e.g. Trojan)

Platform – The operating system targeted

Family – the broader group of malware the variant belongs to.

Variant Letter – an alpha identifier (aaa, aab, aac etc)

Any additional information – for example if it is part of a modular threat.

If we put these components together we get an informative name. I have included a Microsoft image below to help illustrate this, and the MS article can be found here.
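As a small illustrative sketch (the regex and the example name are my own, not an official parser), the components above can be split out of a Microsoft-style CARO name:

```python
# Sketch: splitting a Microsoft-style CARO name (Type:Platform/Family.Variant!Info)
# into the components described above. The example name is illustrative.
import re

PATTERN = re.compile(
    r"^(?P<type>[^:]+):(?P<platform>[^/]+)/(?P<family>[^.!]+)"
    r"(?:\.(?P<variant>[^!]+))?(?:!(?P<info>.+))?$"
)

def parse_name(name: str) -> dict:
    """Return the name's components; missing optional parts come back as None."""
    m = PATTERN.match(name)
    if not m:
        raise ValueError(f"not a CARO-style name: {name}")
    return m.groupdict()

print(parse_name("Trojan:Win32/Zbot.A!cl"))
# {'type': 'Trojan', 'platform': 'Win32', 'family': 'Zbot', 'variant': 'A', 'info': 'cl'}
```

Real vendor names vary in punctuation and extra fields, so a production parser would need per-vendor rules.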

MAEC
MAEC is a community-developed structured language for encoding and sharing information about malware. It contains information on a malware's behavior, artifacts or IoCs, and the relationships between malware samples where relevant. Those relationships give MAEC an advantage over relying on simple signatures and hashes, as it generates them from low-level functions, code, API calls and more. It can be used to describe malicious behaviors found or observed in malware at a higher level, indicating the vulnerabilities it exploits, or behavior like email address harvesting from contact lists or disabling a security service.

MISP
Where MAEC is a structured language like XML, MISP is an open source system for the management and sharing of IoCs. Its primary features include a centralized searchable data repository, a flexible sharing mechanism based on defined trust groups and semi-anonymized discussion boards.

An IoC is any indicator caused by a malicious action, intrusion or similar. It can be network calls, processes running, registry key creation and more. By having a central database of IOCs we are able to leverage the experience gained from cyber attacks across the world.
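A minimal sketch of that idea, assuming a flat exported indicator set (the indicator values are invented; a real MISP deployment exposes a full API and data model):

```python
# Sketch: checking locally observed indicators against a shared IoC set,
# the kind of feed a threat-sharing platform could export.
# All values are made up; the IP is from the documentation range.

KNOWN_IOCS = {
    ("ip", "203.0.113.10"),
    ("domain", "bad-example.test"),
    ("sha256", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"),
}

def match_iocs(observed: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return the observed (type, value) indicators present in the shared set."""
    return [indicator for indicator in observed if indicator in KNOWN_IOCS]

hits = match_iocs([
    ("ip", "203.0.113.10"),
    ("domain", "good-example.test"),
])
print(hits)   # [('ip', '203.0.113.10')]
```

The value of a central repository is exactly this: indicators harvested from one attack can flag the same actor's activity elsewhere.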

ENISA has a pretty cool report that includes some info on MAEC and MISP.

Malware Types

Gollum’s current opinion of malware

Viruses
One of the oldest, and definitely the most well known, types of malware is the virus. Viruses can be identified by the way they copy, or inject, themselves into other programs and files. This allows them to persist on a machine even if the original file is deleted, and to spread to other devices as the "host" files or programs are distributed. There are a few types of viruses, and this feeds into our classification taxonomy discussed earlier. Symantec have a pretty good blog on these which can be found here. There are two attributes specifically that I will talk about here:

File Injectors describe how viruses spread and infect a host file. There are three types: Overwriting Infectors overwrite the host file; Companion Infectors rename themselves as the target file; and Parasitic Infectors attach themselves to a host file.

Memory-resident viruses are discussed under the Boot Sector and Master Boot Record virus sections of the Symantec guide, and in detail by Trend Micro here. Memory-resident infectors remain in a computer's RAM after execution, trying to infect target files, programs or media (like floppy disks! If they still exist… surely some bank somewhere uses them 🙂). The way they actually infect the target is the same as the File Injector methods.

Close up; Macro Viruses

Macro viruses are less of an issue these days, as macros are disabled by default in Microsoft Word and enabling them gives the user a pop-up warning that a macro is attempting to run. In the past they were a major issue, however, so let's give a brief rundown.

Macros are small scripts written in the language of the application they run in, such as Visual Basic for Applications in Microsoft Word or Excel. This OS independence means a macro written on a Windows system will also run on a Mac. Once executed, a macro can run a number of functions, from infecting every document of that type to changing or even deleting the document's contents. Macro viruses are generally spread via spam emails with "invoices" attached. Given they are still prevalent in your spam folder, for the budding malware analyst these macros can be a good opportunity to analyse what a macro is doing. Just be sure to use a secure environment!
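For that budding analyst, a very crude first pass is keyword triage of extracted macro source. This sketch is illustrative only (the keyword list is a tiny sample I chose, not a detection rule); real analysis belongs in a sandbox with proper tooling:

```python
# Sketch: rough first-pass triage of extracted VBA macro source, flagging
# keywords commonly abused by malicious macros. Illustrative sample list only.

SUSPICIOUS = ["AutoOpen", "Document_Open", "Shell", "CreateObject", "URLDownloadToFile"]

def triage_macro(source: str) -> list[str]:
    """Return the suspicious keywords present in the macro source."""
    return [kw for kw in SUSPICIOUS if kw in source]

macro = 'Sub AutoOpen()\n  Shell "cmd /c whoami"\nEnd Sub'
print(triage_macro(macro))   # ['AutoOpen', 'Shell']
```

A hit here means "look closer", not "malicious"; plenty of legitimate macros use Shell and CreateObject.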

Worms
Worms are malware that replicates itself with little to no interaction from the user. The way WannaCry spread throughout the world using the SMBv1 vulnerability EternalBlue is a recent example of this. Other types of worms can use browsers, email and IMs to spread.

Close up; Mass-mailers

Mass mailers are the traditional worm-type malware, spread via tantalizingly designed emails that tempt you to click on them. The invoice you forgot to pay, the secret crush who loves you and more are all examples of how this type of malware encourages you to open it. It's a form of social engineering that fools you into clicking the link in the email or downloading the attachment. Some advanced mass mailers can even turn your computer into an SMTP server to spread to other hosts; by compromising your address book they can email your contacts.

Other types

Other types of worms include file sharing worms, which rely on users downloading and running applications, commonly seen in torrents. Who can resist that randomly uploaded movie on The Pirate Bay? You have been dying to see it, and anyway, who has the cash to pay for it?

WannaCry is an example of an Internet worm. These worms use vulnerabilities to spread across networks.

Instant messaging worms used to be common; they would take over the old IM clients (remember MSN Messenger?) and message all your contacts, trying to get them to download the worm themselves.

Trojans
Trojan malware takes its name from the ancient Greek myth of the Trojan War. Greece was laying siege to the city of Troy, and after a long stalemate soldiers hid inside a giant, hollow wooden horse that the Greeks pretended was a gift, tricking the defending Trojans into bringing the horse inside the walls. When night fell, the soldiers snuck out and opened the gates to the city! I recommend you read the epics themselves (or watch the Brad Pitt movie Troy), as they're really cool! 🙂

Trojan malware pretends to be a legitimate application while doing something malicious. Trojans do not replicate, and tend to have a purpose that benefits from evading detection, like operating as a backdoor. In many cases the Trojan's legitimate "cover" is fully functional, so that the victim will not remove it.

Close up; Bankers

Banker Trojans are designed to steal sensitive user information like credit card details, credentials and other high-value data. Zeus, which we mentioned earlier, is an example of a banker Trojan. It acquires the data and forwards it to a Command and Control server, which receives and stores the data to be accessed by the malware author. I wonder if this outbound traffic could be used to detect it.
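As a hedged sketch of that detection idea (the addresses, process names and data shape are all invented for illustration; the IPs are from documentation ranges):

```python
# Sketch: flagging outbound connections to known C2 addresses, one simple way
# the exfiltration traffic mentioned above could be detected.

KNOWN_C2 = {"198.51.100.7", "203.0.113.99"}   # hypothetical threat-intel list

def suspicious_connections(netflow: list[dict]) -> list[dict]:
    """netflow entries are dicts with 'proc' and 'dst' keys."""
    return [flow for flow in netflow if flow["dst"] in KNOWN_C2]

flows = [
    {"proc": "browser.exe", "dst": "93.184.216.34"},
    {"proc": "invoice.exe", "dst": "198.51.100.7"},
]
print(suspicious_connections(flows))
# [{'proc': 'invoice.exe', 'dst': '198.51.100.7'}]
```

In practice bankers rotate infrastructure quickly, so IP blocklists are only one layer; anomaly detection on destination reputation and traffic patterns matters too.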

Close up; Keyloggers

Keyloggers continuously monitor and record keystrokes, usually storing them in a file or exfiltrating them to a command and control server, as bankers do. In some cases the keylogger will try to capture specific information by monitoring for "trigger" events, like visits to specific websites, in order to capture credentials. This logging behavior is also seen in bankers.

Close up; Backdoors

Backdoors are a great persistence tool for an attacker. A Trojan operating as a legitimate application opens a port on the host and listens for a connection; the attacker can then connect to your asset through the Trojan. In less targeted attacks the malware may compromise your system and set up a backdoor for the attacker, or their command and control server, to send commands. The outcome of this can be harvesting of sensitive data, use of your asset as a pivot point to traverse the network, or use of your asset as a "zombie" in a botnet.

Rootkits
Rootkits are scarier than what we have talked about so far. A rootkit is not necessarily malware in and of itself, but a collection of techniques and tools coded into malware that allow for privilege escalation. The aim of the rootkit is to fully compromise the system, conceal its presence and offer persistence. The escalated privilege can be gained by direct attack or by using previously acquired login credentials, among other methods. Rootkits are difficult to find due to the elevated access they have.

Scareware
Scareware is any malicious application that uses social engineering to scare a user into buying unwanted software. By giving a sense of urgency ("BUY NOW BEFORE IT'S GONE!!!") and intimidation ("YOUR LAPTOP HAS BEEN HACKED, BUY THIS TO FIX IT!"), the attacker hopes the victim will pay the demand. This can come in the form of dodgy antivirus products, but can also be seen in other threats, the most prominent being ransomware: the scareware portion of ransomware tends to be the countdown timer before "your files are gone forever".

Adware
Adware can take many forms, but the purpose is the same: to show you lots of advertisements. In the past this meant brightly colored, invasive banner advertisements and pop-ups during regular browsing. As this generally caused frustration (and subsequent adware removal), modern adware tends to be more subtle, for persistence. Worse, some adware will monitor what you do and create user profiles to sell to third parties. Scary!

Spam
Spam is an incredibly common function of malware. Brian Krebs wrote a great book investigating primarily Russian spammers and found it was a multi-million euro business. In 2014 it was estimated that over 90% of emails were spam – trillions of emails per year dedicated to this wasteful activity. Spam primarily uses email, but can also use blogs, IMs, advertisements and SMS. It's free for the spammer, and the occasional successes mean it's unlikely to subside anytime soon. Beyond the usual risks, this also concerns companies, as they are responsible for protecting their employees from the kind of abuse spam can entail.

Infection vectors

This portion of our lesson discusses how malware attempts to infect a system. There is the standard technical aspect of how the infection vector enables the malware to infect a system, but the vector can also be social engineering. The ways malware authors leverage social engineering to get people to install malware were discussed already, but here they are again:

  • Coming from a trusted source, like a worm arriving from a friend saying "I love you".
  • Having a sense of urgency or importance, such as having to "INSTALL THIS NOW BEFORE IT'S TOO LATE".
  • Arousing the interest of the victim, like the friend saying "I love you" while also offering an interesting service the victim has need of.

Email
Email far exceeds any other vector in terms of the speed and coverage with which malware can spread. Anyone with an email account can be a target, and this vector makes extensive use of the social engineering techniques mentioned previously. ILOVEYOU was one of the earliest examples of this attack, but macros and viruses embedded in documents distributed by spam campaigns are all still common.

Social networking

Social networking is a great platform for attackers due to its extensive reach. Attackers use social networks as a way to enter the lives of victims and provide them with links to malicious sites or malicious files to download. An attacker can also add friends of friends to extend their reach: people accept friend requests more often when there are mutual connections than when there are none. Making use of the email and password lists that are readily available, attackers can gain access to existing compromised accounts or simply create their own.

Setting up their own pages is another way to spread malware. When a user likes such a page they receive updates directly in their news feed, eliminating the need for a friend request.

Portable Media

Many organisations nowadays block the USB ports on their assets, and for good reason. Malware stored on a USB drive that automatically runs is a real risk, and it can have big consequences. A well-known example is the Stuxnet attack, discovered in 2010 and widely attributed to US and Israeli intelligence services, in which USB keys carrying the malware were reportedly used to get it into an Iranian nuclear facility. Once plugged into a workstation, the malware slowly worked its way through the network until it found the centrifuge SCADA systems, then executed its main function of causing extensive damage. A lot has been written on this attack and it makes for interesting reading.

While STUXNET was a highly targeted attack, portable media can also be used for opportunistic attacks.

URL Links

URL links are a special kind of infection vector, as they are usually spread through other infection vectors, e.g. social networks, IMs and email. Examples include link-shortening services and misspelled legitimate domain names. These URLs lead to fake websites that look legitimate (or could even carry XSS payloads in the case of URL shorteners). This tricks users into carrying out their tasks as normal, not knowing that the attacker is recording their interaction with the fake website, including collecting any credentials used. Many banks have started enforcing Multi-Factor Authentication to mitigate this risk (as well as other attacks like phishing).
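One illustrative way to catch misspelled look-alike domains is a simple string-similarity check. The legitimate-domain list and the 0.8 threshold here are my own assumptions for the sketch, not a production heuristic:

```python
# Sketch: flagging look-alike (typosquatted) domains with a similarity ratio.
from difflib import SequenceMatcher

LEGIT_DOMAINS = ["paypal.com", "mybank.ie"]   # hypothetical allow-list

def looks_like_typosquat(domain: str, threshold: float = 0.8) -> bool:
    for legit in LEGIT_DOMAINS:
        if domain == legit:
            return False                      # exact match is the real site
        if SequenceMatcher(None, domain, legit).ratio() >= threshold:
            return True                       # close but not equal: suspicious
    return False

print(looks_like_typosquat("paypa1.com"))   # True
print(looks_like_typosquat("paypal.com"))   # False
```

Real typosquat detection also considers homoglyphs (e.g. Cyrillic characters) and keyboard-adjacency substitutions, which plain edit similarity can miss.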

File Sharing

The Pirate Bay and other file-sharing websites have long been host to many types of malware. Users tend not to investigate what they download, opening themselves up to compromise.


A good blog on how this vector is used can be found here:


Bluetooth

Think BlueBorne; a good blog on how this vector is used can be found here:

Always keep the law in mind when planning security.

Your legal and contractual requirements should be firm considerations when planning out and implementing your information security. If in doubt as to what your obligations are and which pieces of legislation apply to you, work with your legal team to identify them.

Security category – 18.1. Compliance with legal and contractual requirements

18.1.1. Identification of applicable legislation and contractual requirements.

All companies should adhere to their contractual and regulatory obligations, but to do so we need to know what those obligations are. Your organization should take care to go through its contracts and understand what is expected of you. You should also have specially trained staff, with knowledge of the regulations impacting your industry, at hand when drafting policies, procedures or standards. These staff can keep you informed of changing requirements so you can incorporate them and remain compliant. Remember, if you have offices in multiple legal jurisdictions your plans should take the different legal environments into account.

18.1.2. Intellectual property rights.

You should make sure that, for any material you use such as software, you are compliant with copyright and IP laws, and that any applicable licensing fees are paid. Ensuring that software on your assets has been obtained from the vendor, and that only correctly licensed versions can be installed, reduces our risk. Outlining employee responsibilities, such as not using pirated software, in the Acceptable Use Policy can help us stay compliant, as can regular audits of installed software. Be prepared to hand licensing information to the vendor should they wish to audit you.

18.1.3. Protection of records.

In many jurisdictions there is legislation or guidance in place specifying how record retention should be carried out. An example of such guidance, for healthcare records[1];

“In general, medical records should be retained by practices for as long as is deemed necessary to provide treatment for the individual concerned or for the meeting of medico-legal and other professional requirements. At the very least, it is recommended that individual patient medical records be retained for a minimum of eight years from the date of last contact or for any period prescribed by law. (In the case of children’s records, the period of eight years begins from the time they reach the age of 18).”

You should have policies in place to protect records in accordance with these laws, as well as with contractual and regulatory requirements. Similarly, you may wish to tailor your retention policy in a manner that benefits your organization and furthers your business needs. This can be done, but should be carried out in line with legislative, regulatory and contractual requirements. Keeping records for too long, beyond a reasonable business need, costs resources to maintain them, and we run the risk of greater loss should a breach occur. With that in mind, it is encouraged to limit the retention period of records where reasonable.
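A retention check like the eight-year rule quoted above is easy to automate. This sketch flags records eligible for disposal; the dates are made up, and real policies need to handle edge cases (leap days, legal holds) that I'm glossing over here;

```python
from datetime import date

RETENTION_YEARS = 8  # minimum from the guidance quoted above

def past_retention(last_contact: date, today: date) -> bool:
    """True once more than eight years have passed since the last contact."""
    cutoff = date(last_contact.year + RETENTION_YEARS,
                  last_contact.month, last_contact.day)
    return today > cutoff

print(past_retention(date(2009, 5, 1), date(2018, 1, 1)))  # held > 8 years
print(past_retention(date(2015, 5, 1), date(2018, 1, 1)))  # still within retention
```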

18.1.4. Privacy and protection of personally identifiable information.

Nearly all countries have some requirements for reasonable protection of collected PII. In some jurisdictions, such as the European Union under the incoming GDPR, insufficiently protecting PII can cause fines to be levied against the organization. To use the GDPR as an example, a company can be fined up to 4% of its annual global turnover or €20 million, whichever is greater. One of the best ways to ensure compliance is to designate an employee as a Privacy Officer (under the GDPR, a Data Protection Officer) who can advise on local regulations.
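To see how that "greater of" rule scales, here's the arithmetic as a tiny sketch (turnover figures are purely illustrative);

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper tier of GDPR fines: the greater of EUR 20m or 4% of turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

print(max_gdpr_fine(100_000_000))    # 4% is 4m, so the 20m floor applies
print(max_gdpr_fine(2_000_000_000))  # 4% is 80m, which exceeds the floor
```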

18.1.5. Regulation on cryptographic controls.

In a previous control we discussed the importance of using encryption for confidentiality, integrity and non-repudiation, but in some jurisdictions the use of encryption is heavily regulated and, in some cases, decryption keys must be provided to the authorities. It is important to understand your local laws when using encryption or incorporating encryption in your products.


When things go wrong – Business Continuity and Redundancy!

Security category – 17.1. Information security continuity

17.1.1. Planning information security continuity.

Having comprehensive business continuity and disaster recovery plans can be vital to an organization's survival should a disaster occur. Such plans should be sure to include security, which is just as important, if not more so, during a crisis. If there are no such plans then the organization should strive to maintain security at its normal level during a disaster. Where possible, Business Impact Analyses should be carried out to investigate the security needs during different disasters.

17.1.2. Implementing information security continuity.

Ensuring that security controls in any plans are carried out in a disaster is just as important as having the plans themselves. There should be documented processes and procedures in place and easily accessible to staff during such a situation. These documents should be available in both electronic and paper format, with copies stored in geographically separate locations. This should allow us to maintain a command structure that includes security responsibilities, and keeps staff accountable and aware that security is still necessary. In some types of disasters our primary security controls may fail, in this case we should have separate, mitigating controls ready to be implemented.

17.1.3. Verify, review and evaluate information security continuity

This helps us ensure our plans are effective and will work as intended. In practice, it is carried out through table-top exercises, structured walkthroughs, simulation tests, parallel tests, and full interruption tests.[1] The plan should be updated to reflect changes in the organization and frequently tested to ensure it works as envisioned and that everyone involved knows what to do when a disaster strikes.

Security category – 17.2. Redundancies

17.2.1. Availability of information processing facilities.

A key tenet of security is ensuring availability, and this can be reinforced through redundancy: simply having multiple redundant components so that if one fails, operations fail over to the remaining, working components. This can be expensive, so which applications are in scope for redundancy should be decided in line with business needs.
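The fail-over idea boils down to "try each redundant component in order and use the first one that works". A minimal sketch, with made-up endpoint names and a caller-supplied health probe;

```python
from typing import Callable, Sequence

def first_available(endpoints: Sequence[str],
                    is_up: Callable[[str], bool]) -> str:
    """Return the first healthy endpoint, failing over down the list."""
    for endpoint in endpoints:
        if is_up(endpoint):
            return endpoint
    raise RuntimeError("all redundant endpoints are down")

# Simulate the primary failing so traffic fails over to the secondary.
health = {"primary.example": False, "secondary.example": True}
print(first_available(["primary.example", "secondary.example"],
                      lambda e: health[e]))
```

Real deployments push this logic into load balancers or DNS rather than application code, but the decision being made is the same.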


There has been a breach! How do we manage Incidents?

Even with a comprehensive defense-in-depth architecture, highly qualified and trained staff, the right processes and a plethora of technical security controls in place, we are all at risk of a security breach. How we react to a breach, and how we learn from it, is vital to ensuring we continually improve our posture.

Incident response is a very flexible area because how much you invest in it should, generally, be in proportion to your organisation's risk. NIST has a great, if heavy, guide on this located here – but for understanding the framework steps themselves I much preferred Rapid7's summary;

ISO27001, however, focuses on 7 controls;

Security category – 16.1. Management of information security incidents and improvements

16.1.1. Responsibilities and procedures.

Procedures should be in place for any security incident that could take place, instructing staff how to act, with responsibilities and roles clearly defined. These should cover all phases of an attack[1];

  1. Preparation
  2. Identification
  3. Containment
  4. Eradication
  5. Recovery
  6. Lessons Learned

There should be procedures covering actions at every stage; actions taken at each step should be logged and reviewed and, where necessary, it should be possible to escalate incidents. When creating procedures, drawing up a list of potential incidents should be considered.
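As a sketch of what "logged and reviewed" can look like in practice, here is a minimal incident log keyed to the six phases above. The field names and the example entry are my own invention;

```python
from datetime import datetime, timezone

PHASES = ("Preparation", "Identification", "Containment",
          "Eradication", "Recovery", "Lessons Learned")

incident_log: list[dict] = []

def log_action(phase: str, action: str, actor: str) -> None:
    """Record an action against one of the six phases so it can be reviewed."""
    if phase not in PHASES:
        raise ValueError(f"unknown phase: {phase}")
    incident_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "phase": phase,
        "action": action,
        "actor": actor,
    })

log_action("Containment", "Isolated host ws-042 from the network", "j.doe")
print(incident_log[-1]["phase"], incident_log[-1]["action"])
```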

16.1.2. Reporting information security events.

Your organization should document what constitutes a security event and should have a single point of contact that receives reports of these incidents. This point of contact can be a person but is more likely an Incident Response team. All staff should know who to contact in the event of an incident and should have a standardized process for lodging reports.

16.1.3. Reporting information security weaknesses.

Giving staff training to help them identify security weaknesses, and having an easy-to-use process to report their findings, can greatly assist your security team with identifying problems. Part of this training should discourage employees from trying to test or exploit the weaknesses they have found, as this should be done by specially trained personnel only.

16.1.4. Assessment and decision on information security events.

An information security event indicates that the security of an information system, service, or network may have been breached or compromised. It indicates that an information security policy may have been violated or a safeguard may have failed. An information security incident is made up of one or more unwanted or unexpected information security events that could very likely compromise the security of information and weaken or impair business operations.[2] Deciding whether an event constitutes an incident is an important function of the point of contact, but they may not work in isolation and the responsibility may fall on a dedicated Information Security Incident Response Team.

16.1.5. Response to information security incidents.

The intent behind the response is to prevent further compromise of the environment by containing the attacker. While the most obvious way of doing this can be shutting down the impacted servers, it should be noted that in doing so we lose evidence stored in the machines' RAM. Evidence collection should go hand in hand with the initial response: the affected assets should have an image of their hard drives taken and hashed, and a chain of custody kept of who handles the original asset's data. Any testing or investigation should be done on copied images, never the original. Documented procedures should guide your team on how to correctly respond, who is to be notified, how evidence is to be collected and what the escalation process is.
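The "image, hash, track custody" steps can be sketched in a few lines of Python. A throwaway temp file stands in for a real disk image here, and the chain-of-custody record is deliberately bare-bones;

```python
import hashlib
import tempfile

def hash_image(path: str) -> str:
    """SHA-256 the image so any working copy can be verified against it."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
            digest.update(chunk)
    return digest.hexdigest()

chain_of_custody: list[tuple[str, str, str]] = []  # (handler, action, sha256)

# Demo with a throwaway file standing in for a forensic disk image.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"raw disk image bytes")
    image_path = f.name

image_hash = hash_image(image_path)
chain_of_custody.append(("a.analyst", "imaged drive, created working copy",
                         image_hash))
print(image_hash)
```

Because the hash is taken before anyone works on the copy, any later tampering with the image is detectable by re-hashing and comparing.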

16.1.6. Learning from information security incidents.

The documentation the organization has accrued on the incident, and the experience its incident response team has gained, should be used to review how the incident was responded to, with the intent of finding ways to improve the process. This can help us speed up incident resolution in future, or avoid incidents completely. In some cases, past incidents can be used for training new incident response staff and for improving organizational awareness.

16.1.7. Collection of evidence.

Evidence collection is vital if your organization plans to pursue charges, and having specialist staff trained in how to properly collect and store evidence is vital to ensuring the evidence can be admitted in court. ISO/IEC 27037 goes into detail on evidence collection; it should be read and documented procedures written from it. Staff should then receive training on those procedures, and only trained staff should be involved with evidence collection.



Malware Analysis – Lesson 1; an Introduction

I discussed in our last post how I have returned to college and will be digitizing my notes, as I make them. This is the first post in that series. It will cover a few areas of malware analysis on a high level, focusing on definitions and a few descriptive lines on each. The areas covered will be discussed in greater detail in future blogs.

What is Malware Analysis?

Malware Analysis is an extremely interesting area of cyber security where we take a piece of malware or malicious code and put it under a microscope. We examine the code with an aim to understand it: how did it infect our system, how does it spread, what does it do and what does it aim to do (what is its intent)? The information we collect from this investigation can be used to stop the malware spreading, and even help us improve our security to prevent a similar infection in future.

WannaCry is a great example of this. Marcus Hutchins[1] famously identified the malware's kill switch using dynamic analysis – by monitoring the malware's network connections and traffic he saw multiple queries being made to an unregistered domain. When he registered the domain he unwittingly stopped the malware in its tracks. In his Malwarebytes blog post he also gives us a good insight into how malware analysis is carried out;

  1. Look for unregistered or expired C2 domains belonging to active botnets and point it to our sinkhole (a sinkhole is a server designed to capture malicious traffic and prevent control of infected computers by the criminals who infected them).
  2. Gather data on the geographical distribution and scale of the infections, including IP addresses, which can be used to notify victims that they’re infected and assist law enforcement.
  3. Reverse engineer the malware and see if there are any vulnerabilities in the code which would allow us to take-over the malware/botnet and prevent the spread or malicious use, via the domain we registered. [2]
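The WannaCry kill switch boiled down to "if this domain resolves, stop", which is why registering it halted new infections. Here's a harmless sketch of that check – the logic only, with stand-in hostnames rather than the real kill-switch domain;

```python
import socket

def kill_switch_tripped(domain: str) -> bool:
    """True if the domain resolves, i.e. someone has registered (sinkholed) it."""
    try:
        socket.gethostbyname(domain)
        return True   # resolves: the malware would halt here
    except socket.gaierror:
        return False  # unregistered: the malware would carry on spreading

print(kill_switch_tripped("localhost"))             # resolves on any machine
print(kill_switch_tripped("no-such-host.invalid"))  # .invalid never resolves
```

This is also why sinkholing works in general: once defenders control the C2 domain, the same lookup the malware relies on starts answering from the defenders' server instead.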

There are two types of malware analysis we are going to discuss next – static and dynamic analysis.

What is Static Analysis?

With static analysis we look at analysing the malware itself in the form of code reviews. By going through the code or malware structure we try to identify functions within them. This can be a very challenging task, especially once you see the methods the malicious coders deploy to prevent analysis of their code.

This kind of analysis takes place when the malware is “at rest”, that is, when it is not being run, and it can be useful as a preliminary first step. There are two types of static analysis. Basic analysis makes use of simple hash comparisons (commonly seen in older antivirus) and the extraction of strings, headers, functions and system API calls to try to build a picture of what the malware is. Advanced analysis is much cooler, hardcore stuff: disassembling the executable into assembly language and then reviewing that, jumps and all! We have several tools to help us with this, IDA Pro being the one that comes to mind.
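Two of those basic static steps – hashing the sample and pulling out printable strings, much like the Unix `strings` tool – fit in a few lines of Python. The byte blob below is a made-up stand-in for a real executable;

```python
import hashlib
import re

def basic_static_triage(data: bytes, min_len: int = 4):
    """Hash the sample and extract runs of printable ASCII of min_len or more."""
    sha256 = hashlib.sha256(data).hexdigest()
    strings = [s.decode("ascii")
               for s in re.findall(rb"[ -~]{%d,}" % min_len, data)]
    return sha256, strings

# Fabricated bytes standing in for a real binary.
sample = b"MZ\x90\x00\x03http://evil.example/payload\x00\xffGetProcAddress\x00"
digest, found = basic_static_triage(sample)
print(found)
```

Even this toy version surfaces the kind of leads an analyst chases first: embedded URLs and suspicious API names.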

What is Dynamic Analysis?

Dynamic analysis, as the name suggests, deals with malware “in process” (i.e. malware actively running on the victim's machine). When we are carrying out this kind of malware analysis we knowingly execute it in order to observe (and document!) its impact on the victim. This gives us a much better understanding and detailed view of what the malware is doing, and can be especially useful if the executable is packed or encrypted in part or in whole, because when a program is executing in memory it is decrypted and unpacked. There are two types of dynamic analysis, as with static: basic and advanced. With basic dynamic analysis the method is to run the malware and gather information on what it is doing. We can identify, for example, what changes are being made on the file system and to configuration files, what registry keys are being created and edited, what network activity is taking place, and more. For advanced dynamic analysis we thoroughly debug the malware binary and step through its execution. This involves trying to identify each instruction and its outcome, giving us a better understanding of all activities.
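The before/after comparison at the heart of basic dynamic analysis can be sketched as a directory snapshot diff. Here a harmless file write stands in for "executing the sample";

```python
import os
import tempfile

def snapshot(root: str) -> set[str]:
    """Record every file path under `root`."""
    return {os.path.join(dirpath, name)
            for dirpath, _, files in os.walk(root)
            for name in files}

victim_dir = tempfile.mkdtemp()
before = snapshot(victim_dir)

# Stand-in for executing the sample: this "malware" drops a single file.
with open(os.path.join(victim_dir, "dropped.dll"), "w") as f:
    f.write("payload")

created = snapshot(victim_dir) - before
print(sorted(created))  # files the "malware" created
```

Real tools like Process Monitor watch registry and network activity too, but the technique is the same: capture state, run, diff.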

Should we do our analysis in a virtual or physical environment?

When we decide to get practical we must figure out whether we are going to use a VM running on our laptop as a virtual environment, or get an old derelict computer for a physical environment. In general physical environments are preferred, as modern malware may attempt to detect whether it is running on a VM (and if so may refuse to perform any actions). If we are using physical machines we need to take adequate care to ensure they are sufficiently segregated from our internal network and the public internet. There are a few ways to do this, mostly by restricting traffic on your firewall, or even just not connecting the device to the network at all (the infamous air-gap method). There are also tools to help us with snapshotting and rollbacks so we don't have to reinstall after every execution, such as Ghost imaging software or technology like Deep Freeze, which resets core configuration to a known good state on every reboot.

Having a virtual environment simply means using VMware or VirtualBox on your standard workstation to run VMs of your “victim” on which to execute the malware. This can be great for speedy rollback, as there is usually snapshotting in your virtual environment, but if the malware detects the virtual nature of the environment it may not run. There may be ways to mask this but I will need to research it for a later post.
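One of the simpler VM-detection tricks malware uses is checking whether the network card's MAC prefix (OUI) belongs to a hypervisor vendor. A harmless sketch of that check, using well-known VMware and VirtualBox prefixes;

```python
import uuid

# Well-known hypervisor MAC prefixes (OUIs): VMware and VirtualBox.
VM_OUIS = {"00:05:69", "00:0c:29", "00:1c:14", "00:50:56", "08:00:27"}

def looks_like_vm() -> bool:
    """Check whether this machine's MAC address carries a hypervisor OUI."""
    mac = uuid.getnode()  # 48-bit MAC address as an integer
    oui = ":".join(f"{(mac >> shift) & 0xff:02x}" for shift in (40, 32, 24))
    return oui in VM_OUIS

print(looks_like_vm())
```

Masking, conversely, often means changing exactly these artefacts – spoofing the MAC, renaming VM guest-tools processes and so on – so the checks come back clean.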

What is automated analysis?

After reading about static and dynamic analysis, large portions look repetitive and tedious. Both of these traits indicate a good candidate for automation, and malware analysis is no exception. Automation frees up the ever more precious human analyst for more important work and reduces the ever-present risk of error to some degree. There are many automated analysis tools available. Some, such as Comodo Valkyrie and ThreatExpert, are cloud-based tools to which we upload our malware samples, while others, such as ZeroWine and Buster, are installed locally. These analysers are sandboxed so they can run malware automatically with lesser risk (because risk is never 0!) of compromising the host system. They are great for reducing the noise our analysts sift through and highlighting the most important findings for further review, but they come with several drawbacks;

  • If the target malware is VM-aware then they may identify the analyser as a virtual environment and not execute as normal;
  • If the malware requires a particular trigger to run, it may not receive this in an automated tool. Examples include;
    • Requiring human interaction;
    • Executing at a particular time;
    • Executing after a predefined action has taken place or similar – such as fileless cryptojackers waiting until the device has been idle for a period of time before starting to mine.
  • They might miss certain logical indicators that a human would not.

Despite these flaws, they are a great first step.

How does malware stop us from analysing it?

Malware has a few techniques it uses to try to prevent us from analysing it. A few have already been mentioned in the automated analysis section. If the malware detects it is running on a virtual machine it may not run normally, or at all. It might rely on triggers to avoid or delay detection of its activities until conditions are right, or it may even remain dormant for a long period of time, frustrating the analyst into believing it is benign. One very effective method malware employs to combat analysts is obfuscation.

Obfuscation uses a few techniques to make reading the code, and identifying its purpose, challenging. With encryption the malware is composed of two parts: the encrypted main body of code, and the decryptor used to recover that code. In general a different encryption key is used for each iteration, resulting in different encrypted outputs and hashes and confusing antivirus engines, but the decryptor itself tends to remain unchanged, providing a way to detect these infections.

The malware may also use encoding such as XOR or Base64 to transform its code and make it less readable by humans – this is especially true for malware that uses custom encoding. This means that even once an analyst unpacks the code there is an extra layer of defense they must navigate. Defense in depth, it seems, is used by both sides in this computational war.
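Both encodings mentioned above are easy to undo once spotted, which is why they only slow an analyst down. This sketch brute-forces a single-byte XOR key (the hidden string is my own example) and decodes a Base64 blob with the standard library;

```python
import base64

def xor_decode(data: bytes, key: int) -> bytes:
    """Single-byte XOR is its own inverse: encode and decode are the same op."""
    return bytes(b ^ key for b in data)

# An attacker hides a string; the analyst brute-forces all 256 possible keys
# and keeps whichever output looks like readable ASCII.
encoded = xor_decode(b"connect to c2.example", 0x5A)
for key in range(256):
    plain = xor_decode(encoded, key)
    if plain.isascii() and b"c2" in plain:
        print(key, plain)
        break

# Base64 is even easier: the standard library decodes it directly.
print(base64.b64decode(b"bWFsd2FyZQ==").decode())
```

Custom encodings are harder precisely because there's no off-the-shelf decoder; the analyst first has to reverse the encoding scheme itself.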

Packing is another way malware can try to obfuscate itself. Packing is used by many applications – malicious and legitimate – to compress code for distribution. This can help malware avoid detection and analysis; although most packing tools are common and detectable, the analyst must still unpack the malware prior to analysing it.

Had your fill of obfuscation yet? Or are you still finding it incredibly interesting? Code obfuscation is a common source of frustration for analysts worldwide. Malware authors play with their code to make it as confusing as possible. They re-order their code so it does not flow in a logical fashion; they might insert code that leads analysts down a dead end (called dead code) by including functions and calls that do not do anything; they might substitute common instructions for lesser-known equivalents; and in some cases they may alter the assembly jump instructions to further cause confusion.


For a first class, this was jam-packed with exciting information, especially around code obfuscation – and if you thought that was skimmed over, don't worry, I'll be writing a dedicated post on it in the future! There should be one new blog post per week on malware analysis, so bookmark us so you never miss a chapter! If you can't wait that long to go further on your malware hunting journey, I recommend Malware Unicorn's Reverse Engineering 101 for a cool course on both static and dynamic analysis;

It is complete with some amazing graphics that put my text heavy blog to shame. 🙂

Until next time.



Security in your supply chain matters!

It's often said that you are only as secure as your weakest link. In most cases this weak link is described as your end users, but an often forgotten risk is the weak link in your supply chain. Third-party vendors and providers must be reviewed as part of your security management strategy.

A good example of this: one of the first lessons of a web application penetration tester (or a malicious hacker) is to identify how many websites are hosted on the same server as their target. With that list, they can go through each site to identify the one with the weakest security and use it to attempt to gain access to the hosting server.

Other third parties may have a VPN tunnel established with your corporate network; without adequate consideration and controls in place to manage this access, any compromise of a third party's network also compromises your network. Similarly, any of a third party's staff, without appropriate controls in place, could damage your organisation.

The solution is to never assume security when dealing with third parties. Where possible several steps should be taken;

  • Security requirements should be detailed in contracts and compliance monitored.
  • Access to the organisation's network should be managed, segmented and monitored to ensure only authorized actions are taking place.
  • Only reputable third parties should be contracted.
  • At a minimum, all internal security policies, processes, guidelines and standards should be applied to all third parties.

What does ISO27001 say?

Security category – 15.1. Information security in supplier relationships

15.1.1. Information security policy for supplier relationships.

Rules should be in place that govern what a vendor can access and how they should access it, as well as specifying other security requirements. These should specify the security a vendor must have on their own network, how incidents should be reported and any other requirements your organization deems necessary, depending on the value of what the vendor will have access to. Having a policy outlining what is expected can guide us when we are considering vendor relationships.

15.1.2. Addressing security within supplier agreements.

The rules we set out in our Information Security Policy for Supplier Agreements should be included in all contracts with vendors and they should commit to upholding these requirements. Periodic auditing can be considered to ensure compliance.

15.1.3. Information and communication technology supply chain.

It stands to reason that if access is allowed between your network and your vendor's network, then any party with access to your vendor's network potentially has access to your organization – such as your vendor's own suppliers. There should be policies in place to ensure access between you and your vendor is restricted, with controls to protect against unauthorized access. Ensuring both your organization and your vendor keep an audit and log trail to track access and requests provides accountability, and requiring your vendor to screen their suppliers can also reduce this risk.

Security category – 15.2. Supplier service delivery management

15.2.1. Monitoring and review of supplier services.

This provides us with confidence that our suppliers are adhering to the security requirements of their contract. Reviewing a vendor's audit trail, conducting vulnerability assessments on their network and engaging in regular meetings to ensure the vendor understands their obligations can all prove helpful.

15.2.2. Managing changes to supplier services.

Vendors should not be able to make any ad-hoc changes to their service. This can include patching, upgrades and improvements. Any changes should be managed to limit disruption and ensure service continuity in the event of problems occurring. This also gives us a chance to review our security posture and introduce new controls as required to ensure the changes do not weaken our security position.