Blog

Always keep the law in mind when planning security.

Your legal and contractual requirements should be firm considerations when planning out and implementing your information security. If in doubt as to what your obligations are and which pieces of legislation apply to you, work with your legal team to identify them.

Security category – 18.1. Compliance with legal and contractual requirements

18.1.1. Identification of applicable legislation and contractual requirements.

All companies should adhere to their contractual and regulatory obligations, but to do so we first need to know what those obligations are. Your organization should take care to go through its contracts and understand what is expected of it. You should also have specially trained staff, knowledgeable about the regulations impacting your industry, on hand when drafting policies, procedures or standards. These staff can keep you informed of changing requirements so you can be sure to include them and remain compliant. Remember, if you have offices in multiple legal jurisdictions your plans should take the different legal environments into account.

18.1.2. Intellectual property rights.

You should make sure that, for any material you use such as software, you are compliant with copyright and IP laws, and that any applicable licensing fees are paid. Ensuring that software on your assets has been obtained from the vendor, and that only correctly licensed versions can be installed, reduces our risk. Outlining employee responsibilities, such as not using pirated software, in the Acceptable Use Policy can help us stay compliant, as can regular software audits. Be prepared to hand licensing information to the vendor should they wish to audit you.

18.1.3. Protection of records.

In many jurisdictions there is legislation in place to specify how record retention should be carried out. An example of this, from Irish guidance on healthcare records[1];

“In general, medical records should be retained by practices for as long as is deemed necessary to provide treatment for the individual concerned or for the meeting of medico-legal and other professional requirements. At the very least, it is recommended that individual patient medical records be retained for a minimum of eight years from the date of last contact or for any period prescribed by law. (In the case of children’s records, the period of eight years begins from the time they reach the age of 18).”

You should have policies in place to protect records in accordance with these laws, as well as with contractual and regulatory requirements. Similarly, you may wish to tailor your retention policy in a manner that benefits your organization and furthers your business needs. This can be done, but should be carried out in line with legislative, regulatory and contractual requirements. Keeping records for too long, beyond any reasonable business need, costs resources to maintain them and increases the potential loss should a breach occur. With that in mind, it is encouraged to limit the retention period of records where reasonable.
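As an illustration of limiting retention, a sketch like the following could flag records past a minimum retention window. The eight-year rule mirrors the healthcare guidance quoted above; the field names and date handling are assumptions for the example, not a compliance implementation.

```python
from datetime import date

# Illustrative retention rule: eight years from the date of last contact,
# as in the healthcare guidance quoted above. Field names are assumptions.
RETENTION_YEARS = 8

def retention_expired(last_contact: date, today: date) -> bool:
    """Return True if the record is past its minimum retention period."""
    try:
        cutoff = last_contact.replace(year=last_contact.year + RETENTION_YEARS)
    except ValueError:  # 29 Feb landing on a non-leap year
        cutoff = last_contact.replace(month=3, day=1,
                                      year=last_contact.year + RETENTION_YEARS)
    return today >= cutoff

def records_to_review(records, today: date):
    """Flag records whose retention window has lapsed, for review/disposal."""
    return [r for r in records if retention_expired(r["last_contact"], today)]
```

A real policy would, of course, also cover legal holds and the longer periods prescribed for children's records.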

18.1.4. Privacy and protection of personally identifiable information.

Nearly all countries have some requirements for reasonable protection of collected PII. In some jurisdictions, such as the European Union under the incoming GDPR, failing to sufficiently protect PII can lead to fines being levied against the organization; under the GDPR a company can be fined up to 4% of its annual global turnover. One of the best ways to ensure compliance is to designate an employee as a Privacy Officer (under the GDPR, a Data Protection Officer) who can advise on local regulations.

18.1.5. Regulation on cryptographic controls.

In a previous control we discussed the importance of using encryption for confidentiality, integrity and non-repudiation, but in some jurisdictions the use of encryption is heavily regulated and may even require decryption keys to be provided to the authorities. It is important to understand your local laws when using encryption or incorporating it into your products.


[1] https://www.icgp.ie/go/in_the_practice/it_faqs/managing_information/8E2ED4EE-19B9-E185-837C1729B105E4D9.html

When things go wrong – Business Continuity and Redundancy!

Security category – 17.1. Information security continuity

17.1.1. Planning information security continuity.

Having comprehensive business continuity and disaster recovery plans can be vital for an organization’s survival should a disaster occur. Such plans should include security, which is still important, if not more so, during a crisis. If there are no such plans, the organization should strive to maintain security at its normal level during a disaster. Where possible, Business Impact Analyses should be carried out to investigate the security needs during different disasters.

17.1.2. Implementing information security continuity.

Ensuring that the security controls in any plans are carried out during a disaster is just as important as having the plans themselves. There should be documented processes and procedures in place, easily accessible to staff during such a situation. These documents should be available in both electronic and paper format, with copies stored in geographically separate locations. This allows us to maintain a command structure that includes security responsibilities, and keeps staff accountable and aware that security is still necessary. In some types of disaster our primary security controls may fail; in that case we should have separate, mitigating controls ready to be implemented.

17.1.3. Verify, review and evaluate information security continuity

This helps us ensure our plans are effective and will work as intended. In practice, it is carried out through table-top exercises, structured walkthroughs, simulation tests, parallel tests, and full interruption tests.[1] The plan should be updated to reflect changes in the organization and frequently tested to ensure it works as envisioned and that everyone involved knows what to do when a disaster strikes.

Security category – 17.2. Redundancies

17.2.1. Availability of information processing facilities.

A key tenet of security is ensuring availability, and this can be reinforced by using redundancy: having multiple redundant components so that if one fails, operations fail over to the remaining, working components. This can be expensive, so which applications are in scope for redundancy should be decided in line with business needs.
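As a rough sketch of the fail-over idea, under the assumption that each redundant component can be modelled as a callable:

```python
# Minimal fail-over sketch: try each redundant component in priority order
# and fall back to the next on failure. The callable interface is an
# illustrative assumption, not a production pattern.
def call_with_failover(components, request):
    """components: list of callables; return the first successful result."""
    last_error = None
    for component in components:
        try:
            return component(request)
        except Exception as err:  # in practice, catch specific failure types
            last_error = err      # log and fall through to the next replica
    raise RuntimeError("all redundant components failed") from last_error
```

In real deployments this logic usually lives in load balancers, clustering or replication layers rather than application code, but the principle is the same: no single component failure should remove availability.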


[1] https://en.wikipedia.org/wiki/Disaster_recovery_plan#Testing_the_plan

There has been a breach! How do we manage Incidents?

Even with a comprehensive defense-in-depth architecture, highly qualified and trained staff, the right processes and a plethora of technical security controls in place, we are all at risk of a security breach. How we react to this breach and how we learn from it is vital to ensuring we continually improve our security posture.

Incident response is a very flexible area because how much you invest in it should, generally, be in proportion to your organisation’s risk. NIST has a great, if heavy, guide on this located here – https://csrc.nist.gov/publications/detail/sp/800-61/rev-2/final but for understanding the framework steps themselves I much preferred Rapid7’s summary; https://blog.rapid7.com/2017/01/11/introduction-to-incident-response-life-cycle-of-nist-sp-800-61/

ISO27001, however, focuses on seven controls;

Security category – 16.1. Management of information security incidents and improvements

16.1.1. Responsibilities and procedures.

There should be procedures in place for any security incident that could take place, instructing staff how to act, with responsibilities and roles clearly defined. These should cover all phases of an attack[1];

  1. Preparation
  2. Identification
  3. Containment
  4. Eradication
  5. Recovery
  6. Lessons Learned

There should be procedures covering the actions at every stage; actions taken at each step should be logged and reviewed and, where necessary, it should be possible to escalate incidents. When creating these procedures, consider drawing up a list of potential incidents.

16.1.2. Reporting information security events.

Your organization should document what constitutes a security event and should have a single point of contact to receive reports of these incidents. This point of contact can be a person but is more likely an Incident Response team. All staff should know who to contact in the event of an incident and should have a standardized process for lodging reports.

16.1.3. Reporting information security weaknesses.

Giving staff training to help them identify security weaknesses, and having an easy-to-use process for reporting their findings, can greatly assist your security team in identifying problems. Part of this training should discourage employees from trying to test or exploit any weakness they have found, as this should be done by specially trained personnel only.

16.1.4. Assessment and decision on information security events.

An information security event indicates that the security of an information system, service, or network may have been breached or compromised; it indicates that an information security policy may have been violated or a safeguard may have failed. An information security incident is made up of one or more unwanted or unexpected information security events that could very likely compromise the security of information and weaken or impair business operations.[2] Deciding whether an event constitutes an incident is an important function of the point of contact, but they may not work in isolation; the responsibility may fall on a dedicated Information Security Incident Response Team.

16.1.5. Response to information security incidents.

The intent behind the response is to prevent further compromise of the environment by containing the attacker. While the most obvious way of doing this can be shutting down the impacted servers, note that in doing so we lose evidence stored in the machine’s RAM. Evidence collection should go hand in hand with the initial response: the affected assets should have an image of their hard drive taken and hashed, and a chain of custody kept of who handles the original asset’s data. Any testing or investigation should be done on copied images, never the original. Documented procedures should guide your team on how to respond correctly, who is to be notified, how evidence is to be collected and what the escalation process is.
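The hashing and chain-of-custody step can be pictured with a minimal sketch like the one below. Real acquisitions use dedicated forensic tooling and write blockers; the record fields here are illustrative assumptions.

```python
import hashlib
from datetime import datetime, timezone

# Illustrative sketch only: hash a forensic image and record a custody entry.
# Real investigations use purpose-built acquisition tools; the entry fields
# below are assumptions for the example.
def sha256_of(path, chunk_size=1 << 20):
    """Stream the image through SHA-256 so large files don't fill memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def custody_entry(path, handler):
    """Record who handled the image and the hash proving it is unaltered."""
    return {
        "image": path,
        "sha256": sha256_of(path),
        "handler": handler,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

Re-hashing the working copy later and comparing against the recorded digest demonstrates the evidence has not been altered since acquisition.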

16.1.6. Learning from information security incidents.

The documentation the organization has accrued on the incident, and the experience its incident response team has gained, should be used to review how the incident was handled, with the intent of finding ways to improve the process. This can help us speed up incident resolution in future, or avoid incidents completely. In some cases, past incidents can be used for training new incident response staff and for improving organizational awareness.

16.1.7. Collection of evidence.

Evidence collection is vital if your organization plans to pursue charges, and having specialist staff trained in how to properly collect and store evidence is essential to ensuring the evidence is admissible in court. ISO/IEC 27037 goes into detail on evidence collection; it should be read and documented procedures written from it. Staff should then receive training on those procedures, and only those trained staff should be involved with evidence collection.


[1] http://blog.securitymetrics.com/2017/03/6-phases-incident-response-plan.html

[2] http://www.praxiom.com/iso-27001-definitions.htm

Malware Analysis – Lesson 1; an Introduction

I discussed in our last post how I have returned to college and will be digitizing my notes, as I make them. This is the first post in that series. It will cover a few areas of malware analysis on a high level, focusing on definitions and a few descriptive lines on each. The areas covered will be discussed in greater detail in future blogs.

What is Malware Analysis?

Malware Analysis is an extremely interesting area of cyber security where we take a piece of malware or malicious code and put it under a microscope. We examine the code with an aim to understand it: how did it infect our system, how does it spread, what does it do and what does it aim to do (what is its intent)? The information we collect from this investigation can be used to stop the malware spreading, and even help us improve our security to prevent a similar infection in future.

WannaCry is a great example of this. Marcus Hutchins[1] famously identified the malware’s kill switch using dynamic analysis – by monitoring the malware’s network connections and traffic he saw multiple queries being made to an unregistered domain. When he registered the domain he unwittingly stopped the malware in its tracks. In his MalwareTech blog post he also gives us a good insight into how malware analysis is carried out;

  1. Look for unregistered or expired C2 domains belonging to active botnets and point it to our sinkhole (a sinkhole is a server designed to capture malicious traffic and prevent control of infected computers by the criminals who infected them).
  2. Gather data on the geographical distribution and scale of the infections, including IP addresses, which can be used to notify victims that they’re infected and assist law enforcement.
  3. Reverse engineer the malware and see if there are any vulnerabilities in the code which would allow us to take-over the malware/botnet and prevent the spread or malicious use, via the domain we registered. [2]
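The kill-switch behaviour Hutchins tripped boils down to a simple check: if a specific domain resolves, the malware stops. A rough sketch of that pattern, with a placeholder domain (not the real one) and an injectable resolver so the logic can be exercised offline:

```python
import socket

# Sketch of the kill-switch pattern described above. The domain is a
# placeholder and the resolver is injectable for offline demonstration.
KILL_SWITCH_DOMAIN = "example-killswitch.invalid"  # placeholder only

def should_continue(domain=KILL_SWITCH_DOMAIN, resolve=socket.gethostbyname):
    """Mimic the check an analyst finds: run only if the domain does NOT resolve."""
    try:
        resolve(domain)
        return False  # domain registered/sinkholed -> malware halts
    except OSError:
        return True   # lookup fails -> malware proceeds
```

Registering the domain flipped every infected machine's check from "proceed" to "halt", which is why sinkholing unregistered C2 domains is such a powerful defensive move.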

There are two types of malware analysis we are going to discuss next – static and dynamic analysis.

What is Static Analysis?

With static analysis we look at analysing the malware itself in the form of code reviews. By going through the code or malware structure we try to identify functions within them. This can be a very challenging task, especially once you see the methods the malicious coders deploy to prevent analysis of their code.

This kind of analysis takes place when the malware is “at rest”, that is, it is not being run, and it can be useful as a preliminary first step. There are two types of static analysis. Basic analysis makes use of simple hash comparisons (commonly seen in older antivirus) and the extraction of strings, headers, functions and system API calls to try to build a picture of what the malware is. Advanced analysis is much cooler, hardcore stuff: disassembling the executable into assembly language and then reviewing that, jumps and all! We have several tools to help us with this, IDA Pro being the one that comes to mind.
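A minimal sketch of the basic techniques just mentioned, hashing a sample for comparison and pulling printable strings out of it (roughly what the `strings` utility does; the length threshold is an illustrative choice):

```python
import hashlib
import re

# Basic static analysis sketch: fingerprint a sample and extract printable
# ASCII strings. Thresholds and the regex are illustrative choices.
def fingerprint(data: bytes) -> dict:
    """Hashes used for lookups against known-malware databases."""
    return {
        "md5": hashlib.md5(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }

def extract_strings(data: bytes, min_len: int = 4):
    """Return runs of printable ASCII at least min_len characters long."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]
```

Strings alone can reveal a lot: API names, URLs, file paths and registry keys often survive in the binary even when the author made no effort to hide them.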

What is Dynamic Analysis?

Dynamic analysis, as the name suggests, deals with malware “in process” (i.e. malware actively running on the victim’s machine). When we are carrying out this kind of analysis we knowingly execute the malware in order to observe (and document!) its impact on the victim. This gives us a much better understanding and a more detailed view of what the malware is doing, and can be especially useful if the executable is packed or encrypted in part or in whole, because when a program is executed and in memory it is decrypted and unpacked. As with static analysis, there are two types of dynamic analysis: basic and advanced. With basic dynamic analysis we run the malware and gather information on what it is doing. We can identify, for example, what changes are being made on the file system and to configuration files, what registry keys are being created and edited, what network activity is taking place, and more. For advanced dynamic analysis we thoroughly debug the malware binary and step through its execution. This involves trying to identify each instruction and its outcome, giving us a better understanding of all activities.
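One way to picture basic dynamic analysis is a before/after snapshot of the file system: snapshot a directory, run the sample, snapshot again, and diff. Real labs use purpose-built monitors (Procmon, for instance), so this is only an illustrative sketch:

```python
import hashlib
import os

# Illustrative basic-dynamic-analysis technique: diff two directory
# snapshots to see which files a sample created, deleted or modified.
def snapshot(root):
    """Map each file path under root to a hash of its contents."""
    state = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def diff_snapshots(before, after):
    """Classify file-system changes between the two snapshots."""
    return {
        "created": sorted(set(after) - set(before)),
        "deleted": sorted(set(before) - set(after)),
        "modified": sorted(p for p in before
                           if p in after and before[p] != after[p]),
    }
```

The same snapshot-and-diff idea extends to registry keys, services and scheduled tasks, which is why dedicated monitoring tools capture all of them at once.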

Should we do our analysis in a virtual or physical environment?

When we decide to get practical we must figure out whether to use a VM running on our laptop as a virtual environment, or an old derelict computer as a physical environment. In general, physical environments are preferred, as modern malware may attempt to detect if it is running on a VM (and if so will refuse to perform any actions). If we are using physical machines we need to take adequate care to ensure they are sufficiently segregated from our internal network and the public internet. There are a few ways to do this, mostly by restricting traffic on your firewall, or even just not connecting the device to the network at all (the infamous air-gap method). There are also tools to help with snapshotting and rollbacks so we don’t have to reinstall after every execution, such as Ghost imaging software or technology like Deep Freeze, which resets core configurations on every reboot to a known good state.

Having a virtual environment simply means using VMware or VirtualBox on your standard workstation to run VMs of your “victim” on which to execute the malware. This can be great for speedy rollback, as snapshotting is usually built into your virtual environment, but if the malware detects the virtual nature of the environment it may not run. There may be ways to mask this, but I will need to research that for a later post.

What is automated analysis?

After reading about static and dynamic analysis, large portions look like they might be repetitive and tedious. Both of these traits indicate a good candidate for automation, and malware analysis is no exception. Automation frees up the ever more precious human analyst for more important work and reduces the ever-present risk of error to some degree. There are many automated analysis tools available. Some, such as Comodo Valkyrie and Threat Expert, are cloud-based tools to which we upload our malware samples, while others, such as ZeroWine and Buster, are locally installed. These analysers are sandboxed so they can run malware automatically with a lower risk (because risk is never 0!) of compromising the host system. They are great for reducing the noise our analysts sift through and highlighting the most important findings for further review, but they come with several drawbacks;

  • If the target malware is VM-aware then they may identify the analyser as a virtual environment and not execute as normal;
  • If the malware requires a particular trigger to run, it may not receive it in an automated tool; for example
    • Requiring human interaction;
    • Executing at a particular time;
    • Executing after a predefined action has taken place or similar – such as fileless cryptojackers waiting until the device has been idle for a period of time before starting to mine.
  • They might miss certain logical indicators that a human would not.

Despite these flaws, they are a great first step.

How does malware stop us from analysing it?

Malware has a few techniques it uses to try to prevent us from analysing it. A few have already been mentioned in the automated analysis section. If the malware detects it is running on a virtual machine it may not run normally, or at all. It might rely on triggers to avoid or delay detection of its activities until conditions are right, or it may even remain dormant for a long period, frustrating the analyst into believing it is benign. One very effective method malware employs to combat analysts is the use of obfuscation techniques.

Obfuscation makes use of a few techniques to make it challenging to read the code and identify its purpose. With encryption, the malware is composed of two parts: the encrypted main body of code, and the decryptor used to recover that code. In general a different encryption key is used for each iteration, resulting in different encrypted outputs and hashes that confuse antivirus software, but the decryptor itself tends to remain unchanged, providing a way to detect these infections.

The malware may also use encoding such as XOR or Base64 to transform its code and make it less readable by humans – this is especially true for malware that uses custom encoding. This means that even once an analyst unpacks the code there is an extra layer of defense to navigate. Defense in depth, it seems, is used by both sides in this computational war.
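To see why XOR encoding only slows an analyst down, consider a single-byte XOR: the same operation both encodes and decodes, so an analyst can simply brute-force all 256 keys looking for a known plaintext "crib". The key and strings below are made up for the demonstration.

```python
# Single-byte XOR demonstration: XOR is its own inverse, so applying the
# same key twice recovers the plaintext. Key and strings are made up.
def xor_bytes(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

def brute_force_single_xor(blob: bytes, crib: bytes):
    """Return candidate keys whose decoding contains a known substring."""
    return [k for k in range(256) if crib in xor_bytes(blob, k)]
```

Multi-byte keys and custom encodings raise the effort required, but the principle that encoding is reversible without any secret remains the analyst's advantage.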

Packing is another way malware can try to obfuscate itself. Packing is used by many applications, malicious and legitimate alike: by packing the code we compress it for distribution. This can help malware avoid detection and analysis. Though most packing tools are common and detectable, the analyst must still unpack the malware prior to analysing it.

Had your fill of obfuscation yet? Or are you still finding it incredibly interesting? Code obfuscation is a common source of frustration for analysts worldwide. Malware authors play with their code to make it as confusing as possible. They do this by re-ordering their code so it does not flow in a logical fashion; they might insert code that leads analysts down a dead end (called dead code) by including functions and calls that do not do anything; they might substitute common instructions for lesser-known equivalents; and in some cases they may alter assembly jump instructions to cause further confusion.

Summary

For a first class, this was jam-packed with exciting information, especially around code obfuscation, which if you thought was skimmed over, don’t worry: I’ll be writing a dedicated post on it in the future! There should be one new blog post per week on malware analysis, so bookmark us so you never miss a chapter! If you can’t wait that long to go further on your malware-hunting journey, I recommend Malware Unicorn’s Reverse Engineering 101 for a cool course on both static and dynamic analysis; https://securedorg.github.io/RE101/

It is complete with some amazing graphics that put my text heavy blog to shame. 🙂

Until next time.


[1] https://twitter.com/MalwareTechBlog

[2] https://www.malwaretech.com/2017/05/how-to-accidentally-stop-a-global-cyber-attacks.html

Security in your supply chain matters!

It’s often said that you are only as secure as your weakest link. In most cases this weak link is described as your end users, but an often forgotten risk is the weak link in your supply chain. Third-party vendors and providers must be reviewed as part of your security management strategy.

The best example of this: one of the first lessons for a web application penetration tester (or a malicious hacker) is to identify how many websites are being hosted on the same server as their target. With this list they can go through each site to identify the one with the weakest security and use it to attempt to gain access to the hosting server.

Consider third parties who have a VPN tunnel established with your corporate network: without adequate consideration and controls in place to manage this access, any compromise of your third party’s network also compromises your network. Similarly, any of your third party’s staff, without appropriate controls in place, could damage your organisation.

The solution is to never assume security when dealing with third parties. Where possible several steps should be taken;

  • Security requirements should be detailed in contracts and compliance monitored.
  • Access to the organisations network should be managed, segmented and monitored to ensure only authorized actions are taking place.
  • Only reputable third parties should be contracted.
  • At a minimum, all internal security policies, processes, guidelines and standards should be applied to all third parties.

What does ISO27001 say?

Security category – 15.1. Information security in supplier relationships

15.1.1. Information security policy for supplier relationships.

Rules should be in place that govern what a vendor can access and how they should access it, as well as specifying other security requirements. These should specify the security a vendor must maintain on their own network, how incidents should be reported, and any other requirements your organization deems necessary, depending on the value of what the vendor will have access to. Having a policy outlining what is expected can guide us when we are considering vendor relationships.

15.1.2. Addressing security within supplier agreements.

The rules we set out in our Information Security Policy for Supplier Agreements should be included in all contracts with vendors and they should commit to upholding these requirements. Periodic auditing can be considered to ensure compliance.

15.1.3. Information and communication technology supply chain.

It stands to reason that if access is allowed between your network and your vendor’s network, then any party with access to your vendor’s network potentially has access to your organization, including your vendor’s own suppliers. There should be policies in place to ensure access between you and your vendor is restricted, and controls to protect against unauthorized access. Ensuring your organization and your vendor keep an audit and log trail to track access and requests can provide accountability, and requiring your vendor to screen their suppliers can also reduce this risk.

Security category – 15.2. Supplier service delivery management

15.2.1. Monitoring and review of supplier services.

This provides us with confidence that our suppliers are adhering to the security requirements of their contract. Reviewing a vendor’s audit trail, conducting vulnerability assessments on their network and engaging in regular meetings to ensure the vendor understands their obligations can all prove helpful.

15.2.2. Managing changes to supplier services.

Vendors should not be able to make any ad-hoc changes to their service. This can include patching, upgrades and improvements. Any changes should be managed to limit disruption and ensure service continuity in the event of problems occurring. This also gives us a chance to review our security posture and introduce new controls as required to ensure the changes do not weaken our security position.

The importance of masking test data

We discussed in a previous blog post that you should maintain one or more test environments for your applications. This allows us to fully test an application for functionality and security without impacting the production environment. However, if these test environments use production data that has not been sanitised or anonymised, which can include personally identifiable information, there is an inherent risk of a data breach.

Organisations should maintain test data which maps to the production data in fields, data types and syntax but does not contain actual information on your customers. If you do decide to use production data you must ensure it is sanitised: replace any PII in the data, such as names, credit card information or similar, with tokens, placeholders or randomised segments. There is a good PDF on data masking here; https://www.informatica.com/downloads/6993_data_sec_sap_wp_web.pdf
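A sketch of the tokenisation approach might look like the following. The field names, salt and card-number handling are made-up assumptions for the example; deterministic tokens are used so that relationships (joins) across tables survive masking.

```python
import hashlib
import re

# Illustrative masking sketch: replace PII-looking fields with deterministic
# tokens so test data keeps its shape without the real values. The salt,
# field names and card regex are assumptions for the example.
SALT = b"rotate-me-per-environment"

def tokenize(value: str, prefix: str) -> str:
    """Same input -> same token, so relationships across tables survive."""
    digest = hashlib.sha256(SALT + value.encode()).hexdigest()[:12]
    return prefix + "_" + digest

def mask_record(record: dict) -> dict:
    masked = dict(record)
    if "name" in masked:
        masked["name"] = tokenize(masked["name"], "name")
    if "card_number" in masked:
        # keep only the last four digits, as receipts do
        digits = re.sub(r"\D", "", masked["card_number"])
        masked["card_number"] = "**** **** **** " + digits[-4:]
    return masked
```

Note that a salted hash is masking, not anonymisation in the legal sense; whether tokens can be linked back to individuals still needs to be assessed against your obligations.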

So what does ISO27001 say;

Security category – 14.3. Test data

14.3.1. Protection of test data.

Any data used for testing purposes should be taken from a carefully selected sample pool and given adequate protection. Avoiding the use of PII and sensitive data can help reduce the risks of disclosure. Standard best practices include mimicking the production environment’s access controls in the testing environment, having a procedure for gaining authorization to use production data, maintaining an audit trail and ensuring the destruction of data when no longer needed.

What is needed to ensure a secure software development framework?

Most developers will understand at least one development model, from the Software Development Life Cycle to the Waterfall model and many more besides, yet it is only recently that these models have been reviewed to ensure security requirements are taken into account at the earliest possible stage. This is an important update, as new software development, and updates to existing applications, can introduce new and ever nastier vulnerabilities into our network. So how do we reduce our risk in this area? By taking a comprehensive view of security at all stages and taking all aspects of new software development into account!

Security category – 14.2. Security in development and support processes

14.2.1. Secure development policy.

A policy should outline the security requirements for information systems development. It should include controls to protect the development environment, guidelines on how to create secure code, and requirements for testing code to verify it is secure. There are many programming methodologies that can be incorporated into this control, such as the Secure Software Development Lifecycle.

14.2.2. System change control procedures.

This control ties into having a change management process and requires a similar process for deploying code during the development stage of new applications. It should include a risk analysis of impact, a roll-back plan, and testing and approval requirements. Documentation should be updated to accommodate changes, and a version control record and audit log maintained. Note that this control is software focused.

14.2.3. Technical review of applications after operating platform changes.

Change always makes information systems more vulnerable; this is why we have a test environment. But even with that, whenever we make changes to applications we should run through application use cases and conduct tests to ensure the change hasn’t negatively impacted usability or functionality.

14.2.4. Restrictions on changes to software packages.

Where possible we should avoid making changes to software packages. Changes can introduce new vulnerabilities, break functionality and may prove a resource-intensive exercise. In some circumstances it is justifiable to take this risk for a business need. In these cases, we should make sure we have the vendor’s written consent to the changes, document whether the vendor will still support the modified software, and run comprehensive vulnerability tests to help us assess the risk. It is also best practice to maintain a version repository of any changes made, including a copy of the original.

14.2.5. Secure system engineering principles.

Principles, such as those described in NIST SP 800-160, should be formally implemented into the organization’s methodology and processes, documented, and maintained to allow for secure software engineering throughout the software development lifecycle. These principles should also be reviewed annually to ensure they still represent current best practices.

14.2.6. Secure development environment.

The people, technology and development process should all be protected, with consideration given to the classification of the data to be processed, the business criticality, legal requirements, access control and backup requirements.

14.2.7. Outsourced development.

With globalization resulting in more and more companies partnering with outsourcing firms for their software development, it becomes ever more important that the organization’s security requirements are adhered to by the outsourcing partner. To ensure this is the case, the company should actively work with and monitor the outsourced team. Agreements should be in place to ensure that acceptance testing requirements include security concerns and that periodic auditing of the partner’s environment is allowed.

14.2.8. System security testing.

As security should be a concern at every stage of the project, security testing should be conducted throughout development, with the extent of testing dependent on the business criticality of the application.

14.2.9. System acceptance testing.

As previously stated, “If it doesn’t work securely, it doesn’t work”. There should be clearly documented acceptance criteria for new applications, upgrades and patches that must be met before these changes are accepted and rolled out. These criteria should include security concerns, and any issues found during testing should be remediated.