Secure information at all times, especially when transmitting it.

Information is rarely static; it seldom stays in one place throughout its lifecycle. More commonly, information is processed, disseminated and dispersed between applications and between people. Processes and procedures need to be created to govern this transfer in a secure and responsible way; likewise, personnel (including third parties receiving information) must be trained to treat any information they have access to appropriately.

Security category – 13.2. Information transfer

13.2.1. Information transfer policies and procedures.

The process that employees need to follow should be explicitly stated, including an acceptable use policy that employees must sign before they can transmit information. These policies should include measures to prevent staff from forwarding malicious mail or engaging in harassment, and should detail the retention period and disposal procedure for emails, when encryption should be used, and steps to protect against information disclosure, such as being overheard having confidential conversations in public places.

13.2.2. Agreements on information transfer.

Where feasible, contracts agreed with vendors, government bodies and other parties should explicitly state what level of security is required for correspondence. This should include non-repudiation, responsibilities in the event of disclosure, technical requirements and data classification.

13.2.3. Electronic messaging.

As more and more companies use electronic messaging, such as email, Lync and Slack, in their day-to-day communications repertoire, and sometimes as a complete replacement for letters and calls, it is important to have policies in place detailing how these tools can be used and what controls are in place to protect us, such as keeping a record of messages exchanged, ensuring encryption in transit and at rest, and having mechanisms for non-repudiation in place.

13.2.4. Confidentiality or non-disclosure agreements.

Staff, contractors and other third parties working in your organization may all have access to confidential information. This needs to be protected, and one of the best ways to do this is to require all parties with access to sign confidentiality or non-disclosure agreements (NDAs). These agreements should specify what is to be kept confidential, for how long, what the penalties are for disclosure, what the protected information can be used for, how that information should be protected, and how disclosures should be reported.

A short post following a long break.

Apologies everyone for the long delay between posts, but I hope you enjoyed the last two on network security, and especially Vulnerability Management (our most popular post to date!). Now that the holidays are over and I have settled into my new role, let's continue running through the second half of the ISO 27001:2013 controls.

This post will be shorter than the previous ones as we are only dealing with one control:

Security category – 12.7. Information systems audit considerations

12.7.1. Information systems audit considerations.

Any audit of information systems should be carefully planned, with coverage agreed in advance and the goal of minimizing disruption to business operations. While audits are an excellent tool for finding gaps and weaknesses in our security posture, auditors are observers: the audit should not change any of the information stored on the assets being reviewed, and the auditors' access should be monitored and logged.

Ideally the auditor should have read only access and should only run their audit scripts outside of business hours to minimize disruption.

Network security

Your internal network has to be a prominent consideration when planning your information security. It carries sensitive data and is the primary route an attacker uses to navigate between your assets. Because of this it should receive special consideration, and security should be designed into it from the beginning. This blog series is aimed at covering the 27001 standard, but network security has its own separate standard (ISO 27033), which I would highly recommend reading for details on how to build controls around your network security. Segmentation, access controls, encryption and detection are just some of the tools we use.

Security category – 13.1. Network security management

13.1.1. Network controls.

Your network is how your staff and users access your information systems. That same network, if not adequately protected, can allow a malicious user to try to compromise your environment, or, if an asset is already compromised, to move around more easily. There are many ways to protect your network, and the level of protection should depend on your environment. Access should be restricted and controlled to protect against misuse and abuse. ISO 27033 discusses network security in detail and should be reviewed in relation to this control category.

13.1.2. Security of network services

In addition to segmenting your network you should include additional controls, such as firewalls to control access, NAC and filtering to restrict access to approved users/devices, and NIDS and NIPS to monitor for abuse. Other tools can be used, and any service contracts your company enters into should include requirements for protection levels.

13.1.3. Segregation in networks.

All organizations can benefit from network segmentation. This could be something as simple as separating a guest network from the internal corporate network.

Your network should be divided into subsections based on the users, application type and classification of data held in each environment. There are many ways to segment a network, such as using VLANs or firewalls, and time should be spent planning out this segregation to ensure optimal protection and separation.

8 essential best practice lessons for Vulnerability Management.

This is a post I have been thinking about for a long time. I have been working in Threat and Vulnerability Management with a lot of emphasis on continuous improvement. Together with my team, I was responsible for the successful vulnerability scanning and remediation reporting of over 12,000 assets, a large number that presents its own challenges. Since we began this project there have been multiple cycles of change and improvement. As part of these improvements I try to find best practices and advice on what to do differently, but too often the advice I read, or the videos I watch, are sales pitches with no real take-home lessons. The lessons I have collected here should help with managing vulnerabilities in environments of all sizes, from enterprise scale to SMEs.

In this blog post I am going to go through the 8 most important lessons I have learned that can, and should, be applied to any organization's Vulnerability Management project:

Lesson 1; Continuously map your network.

We can't protect assets if we don't know they are there. Most modern security frameworks support using automated tools to scan and keep an inventory of all the assets in your network. From enterprise tools like BMC's ADDM to a homemade script using nmap, there is a way to make sure all your subnets are frequently mapped and inventoried, and that any anomalous devices found are highlighted for investigation.
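As a rough illustration of the homemade-script approach, the sketch below compares the hosts an nmap ping sweep reports as up against a known-asset inventory and flags anything unexpected. The inventory, the scan output and the addresses are all made-up examples; a real script would read the output of something like `nmap -sn 10.0.0.0/24 -oG scan.txt` from disk.

```python
# Hypothetical known-asset inventory, normally pulled from a CMDB.
KNOWN_ASSETS = {"10.0.0.1", "10.0.0.10", "10.0.0.20"}

# Example greppable nmap output (normally read from the scan file).
scan_output = """\
Host: 10.0.0.1 ()\tStatus: Up
Host: 10.0.0.10 ()\tStatus: Up
Host: 10.0.0.99 ()\tStatus: Up
"""

def live_hosts(grepable: str) -> set[str]:
    """Extract the IPs of hosts nmap reported as up."""
    hosts = set()
    for line in grepable.splitlines():
        if line.startswith("Host:") and "Status: Up" in line:
            hosts.add(line.split()[1])
    return hosts

# Anything live on the network but missing from the inventory is
# exactly the kind of anomalous device that needs investigating.
anomalies = live_hosts(scan_output) - KNOWN_ASSETS
for ip in sorted(anomalies):
    print(f"Unknown device found: {ip}")
```

Run on a schedule, a diff like this turns the inventory from a snapshot into a continuously verified map.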

Lesson 2; Every asset has to have an owner, and every owner has to understand their responsibilities.

Once we have this inventory of all assets on our network, we next need to assign owners and track who has access to each machine. In many cases this will be the system administrator, but in more complex organizations ownership can be split between the infrastructure owner, the application owner and the business owner, or some combination of the three. This presents issues when assigning remediation tickets. Having the correct people for ticket assignment agreed and decided upon before there is a need, and having this documented, can alleviate challenges before they occur.
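To make the ownership record concrete, here is a minimal sketch of the kind of lookup a ticketing workflow could consult. The asset names, owner fields and escalation text are all assumptions for illustration, not a standard schema.

```python
# Documented ownership record, agreed before tickets ever need raising.
OWNERS = {
    "web-01":   {"infrastructure": "ops-team", "application": "web-team"},
    "db-01":    {"infrastructure": "ops-team", "application": "dba-team"},
    "legacy-7": {},  # discovered asset with no documented owner
}

def ticket_assignee(asset: str) -> str:
    """Pick who gets the remediation ticket, preferring the app owner."""
    owners = OWNERS.get(asset, {})
    if not owners:
        # An unowned asset is itself a finding worth escalating.
        return "UNASSIGNED - escalate to asset management"
    return owners.get("application", owners.get("infrastructure"))

print(ticket_assignee("web-01"))
print(ticket_assignee("legacy-7"))
```

The useful property is the explicit "unassigned" path: gaps in ownership surface immediately instead of when a remediation deadline is missed.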

Lesson 3; Authentication is key to understanding your threat landscape.

Some vulnerabilities can be identified remotely, but most can only be identified through authenticated scanning. Having authenticated scanning set up is the only way we can get a holistic and informed view of the risks we face. This has benefits in reducing false positives, increasing confirmed findings and ensuring we know what vulnerabilities an attacker could leverage after initially gaining access to a machine.

Lesson 4; Scan frequently.

In most cases we can't scan constantly; the overhead makes such a task prohibitive. At the very least we should aim to scan monthly, and more frequently for specific high-risk vulnerabilities as they are disclosed. Scanning as often as we can ensures we have up-to-date information on what has been fixed and what is outstanding.

Lesson 5; Communicate with your blue team; know what controls are in place to mitigate your risk.

If your organization is big enough, keep an open flow of communication with your blue team members; this allows you to keep your understanding of firewalls, IPS/IDS, antivirus, SIEM and other detective and mitigating controls up to date. While we always strive to remediate every vulnerability, in the next lesson we will start prioritizing, and understanding the defences in place will help us decide what is most urgent to fix.

Lesson 6; Prioritize your findings; where the assets are, what they do and the information they contain.

Now, by collecting as much information as we can, including but not limited to:

  • Network location of asset
  • Sensitivity of information stored
  • Criticality of application running on the asset
  • Mitigating controls in place

We can start identifying which vulnerabilities on which servers are the highest risk, and thus should be remediated first. By following this approach, including looking at what could decrease the risk of compromise, we can have a truly accurate understanding of where our sysadmins need to spend their time. It ensures true high-risk vulnerabilities are remediated first, and lower-risk or mitigated ones as soon as possible afterwards.
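One way to apply the factors in the list above is a simple weighted score per finding. The weights, field names and sample data below are illustrative assumptions, not a standard formula; the point is that severity alone (the raw CVSS score) no longer decides the queue order once asset context and mitigating controls are factored in.

```python
# Sample findings combining severity with the asset context collected
# in lessons 1-5. All values here are made up for illustration.
findings = [
    {"host": "dmz-web",  "cvss": 9.8, "exposed": True,  "data": "public",    "mitigated": False},
    {"host": "hr-db",    "cvss": 7.5, "exposed": False, "data": "sensitive", "mitigated": False},
    {"host": "intranet", "cvss": 9.8, "exposed": False, "data": "internal",  "mitigated": True},
]

def risk_score(f: dict) -> float:
    """Weight CVSS by network location, data value and mitigations."""
    score = f["cvss"]
    if f["exposed"]:
        score *= 1.5   # internet-facing assets get attacked first
    if f["data"] == "sensitive":
        score *= 1.3   # protect the crown jewels
    if f["mitigated"]:
        score *= 0.5   # an IPS rule or similar buys remediation time
    return round(score, 2)

for f in sorted(findings, key=risk_score, reverse=True):
    print(f["host"], risk_score(f))
```

Note how the mitigated critical on the internal host drops below the unmitigated high on the sensitive database, which is exactly the reordering the lesson argues for.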

Lesson 7; Build a narrative, connecting your findings to the organization's wider security posture.

Prioritizing vulnerabilities, as in lesson 6, helps us manage resourcing, but senior managers are often more business focused and don't want to invest resources beyond their risk appetite. Learning how to build a narrative that gains traction with senior management is a very important skill that I'm still learning. Some tools used here:

  • Building a visual dashboard for tracking progress
  • Highlighting how assets map to applications
  • Highlighting where legal requirements may come into play (SOX, GDPR etc.)

Lesson 8; Don’t neglect your sysadmins. Build your relationships and reap the rewards.

Building a strong rapport with your organization's systems administrators can go a long way toward maintaining a secure environment. By staying in touch and highlighting how good security benefits them in the long run, through reduced incidents and downtime, we can maintain good relations and reduce tension. Prioritizing, and using authenticated scanning to reduce false positives, also makes sure their time is not wasted and that they are not overworked. Finally, by talking with the sysadmins formally and informally you can gain a better understanding of your organization's infrastructure, which can help you make better decisions regarding risk.

ISO 27001:2013 Security category – 12.6. Technical vulnerability management

These best practices also flow into the Technical Vulnerability Management category of ISO 27001; I am including this here for completeness with the ISO blog series.

12.6.1. Management of technical vulnerabilities.

Having a vulnerability management program in place is very important for learning about individual vulnerabilities and the risks surrounding them. This proactive measure allows your team to respond more quickly to new threats and to put mitigating steps in place to reduce the risk when remediation is not immediately possible. Using vulnerability scanners is an important step towards comprehensive coverage, and with tools such as Nessus, Nexpose and Qualys available, organizations have plenty to choose from.

12.6.2. Restrictions on software installation.

Similar to requiring trained staff for software installations, there should be rules and restrictions in place on what software can be requested, used and installed. Restrictions should ensure staff can only install software they need to do their job, and this should cover all levels of the organization. This will drastically reduce the risk of malware being introduced to your environment. There are two ways to go about this: blacklisting explicitly states what software is not allowed in your environment and can prevent known trojan horse applications and spyware from being installed; whitelisting explicitly states what software can be installed and is the more restrictive option, since only software that has been specifically tested and approved can be used.

Monitor and assess the software your staff use

In many small organizations staff have, by default, full control over installing new applications on their workstations. This presents a huge amount of risk, as this software can be malicious in nature or cause performance and compatibility issues. We should always look to have some kind of software deployment framework in place. We should:

  • Ensure we have trained staff in place to deal with new software deployments, including installation.
  • Make sure we test any new software we want to bring into our organization.
  • Create a secure repository of approved software programs (and maintain a record of the expected hash value) that users can use.
  • Maintain a record of all software installed on assets with a risk assessment of each.
  • Ensure the new software is covered in backup and update processes.
  • As always ensure privileged user account use is restricted; in this case so users can't arbitrarily install their own software.
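The "secure repository with expected hash values" point above can be sketched in a few lines: record a SHA-256 digest for each approved installer, and refuse anything that is either off the whitelist or whose hash no longer matches. File names, contents and the record format here are illustrative stand-ins.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

# Record kept alongside the approved-software repository.
# In reality these digests are computed once at approval time.
APPROVED = {
    "goodtool-1.2.exe": sha256_of(b"goodtool binary contents"),
}

def verify(name: str, data: bytes) -> bool:
    """True only if the file is whitelisted and unmodified."""
    expected = APPROVED.get(name)
    return expected is not None and sha256_of(data) == expected

print(verify("goodtool-1.2.exe", b"goodtool binary contents"))  # approved
print(verify("goodtool-1.2.exe", b"tampered contents"))         # hash mismatch
print(verify("unknown.exe", b"anything"))                       # not whitelisted
```

The hash check is what makes the repository trustworthy: without it, a whitelist only verifies a file's name, not its contents.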

12.5.1. Installation of software on operational systems.

Installing, or allowing the installation of, unknown or untested software can introduce system instability, malware and other risks. Any new software installation should follow a standard procedure to be approved and installed. The installation should be carried out by the organization's IT team; untrained staff should never perform this function, and the team performing installations should first test the new software for compatibility issues, vulnerabilities and similar problems.

Well-maintained logging is essential!

All organizations with information systems need to know what's happening on, and between, those systems. This is where a comprehensive logging setup is very beneficial: it shows you what has happened and when. Protecting these logs from being altered or destroyed, and monitoring admin access to them, should also be considerations. Finally, making sure your time is synchronized across your estate ensures you can build an accurate timeline of events when things go wrong.


Security category – 12.4. Logging and monitoring

12.4.1. Event logging.

On any system that processes information we should ensure we have auditing and logging in place. We must also ensure that the logs cannot be tampered with by the users of that system.

One way to accomplish this goal is to use rsyslog to store logs remotely, away from the clutches of a compromised local system. This control is important for any event that requires investigating and can help us find the cause of problems quickly and accurately.

The level of logging should be tailored to what is useful, and what is useful depends on the type of information and the purpose of the server. Too high a logging level will lead to important log entries being overlooked in the "noise" of excessive logging, or even to the server's hard disk filling up, causing a crash or older log entries to be overwritten. Too low a logging level can leave important event information unrecorded. Logs should be reviewed regularly and kept according to the retention period your organization deems necessary for investigations.
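The "tailored level" idea is easy to see with Python's standard `logging` module: with the threshold set to WARNING, routine informational noise is dropped while events worth investigating are kept. The logger name and messages are arbitrary examples.

```python
import io
import logging

# Capture log output in memory so the effect of the level is visible.
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

log = logging.getLogger("app-server")
log.addHandler(handler)
log.propagate = False           # keep output in our handler only
log.setLevel(logging.WARNING)   # the tailored threshold

log.info("user page viewed")            # dropped as routine noise
log.warning("repeated failed logins")   # recorded
log.error("disk 95% full")              # recorded

print(buffer.getvalue())
```

The same trade-off applies whatever the logging stack: the threshold is a policy decision per system, not a global default.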


12.4.2. Protection of log information.

Logs are only as useful as they are accurate. Steps should be taken to ensure users cannot alter log entries, either maliciously or accidentally. Enough storage space should be available to reduce the risk of excessive log files overwriting previous, important, entries. In addition, only authorized staff should be able to view logs. Ways to ensure logs have not been tampered with include storing them remotely and verifying their integrity using file hashes.
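As a minimal sketch of the file-hash approach: record a SHA-256 digest when a log is archived (stored separately, ideally off-host), then recompute it later to detect tampering. The log contents below are made up for illustration.

```python
import hashlib

def digest(log_bytes: bytes) -> str:
    """SHA-256 hex digest of an archived log."""
    return hashlib.sha256(log_bytes).hexdigest()

archived = b"2024-01-01 10:00 login ok\n2024-01-01 10:05 logout\n"
recorded = digest(archived)  # stored separately at rotation time

# Later: the archive verifies only if nothing has been altered.
print(digest(archived) == recorded)                      # intact
print(digest(archived + b"forged entry\n") == recorded)  # tampered
```

This only proves integrity from the moment the digest was recorded, which is why it pairs with remote storage: the attacker must compromise both locations to hide their tracks.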


12.4.3. Administrator and operator logs.

Who watches the watcher? The age-old question can give security teams sleepless nights. System owners and administrators often have root or administrator privileges. To protect against abuse, any use of these privileges should be recorded and reviewed. Likewise, the audit trail should be stored in a way that the administrator cannot tamper with.


12.4.4. Clock synchronization.

As important as logs are, they can simply add to the confusion if an organization's logs don't follow a standardized date/time format and time zone across the various time zones the company operates in. While this may not be an issue for organizations based in a single time zone, best practice dictates that the organization decides on a time zone and format to follow and enforces it on all its assets and logs. This is known as your reference time; in many cases organizations settle on UTC.
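In code, adopting UTC as the reference time is a one-liner: stamp every entry with an ISO 8601 timestamp in UTC, regardless of where the server sits. This is an illustrative helper, not part of any particular logging product.

```python
from datetime import datetime, timezone

def log_timestamp() -> str:
    """Current time in UTC, ISO 8601, second precision."""
    return datetime.now(timezone.utc).isoformat(timespec="seconds")

stamp = log_timestamp()
print(stamp)  # e.g. 2024-05-01T12:34:56+00:00
```

Because every asset emits the same format in the same zone, events from different regions sort into a single correct timeline with no conversion step.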

Short reminder to backup!

Do it now! While you have time. 🙂

Security category – 12.3. Backup

12.3.1. Information backup.

No matter how secure we are, risk can never be completely eliminated, and we should prepare for the day when we need to recover from an incident that has separated us from our valuable data. This can take the form of lost information or lost configuration; having a strong backup policy protects us in these situations. By having important data regularly backed up, and those backups tested to ensure we can restore from them, we can limit the impact of an attack. When designing a backup policy, it is important to ensure that storage requirements and the value of the data stored are considered during planning. Best practices include keeping separate copies of the backups in geographically separate locations, minimizing the time between backups in line with how much data loss the company can accept, and ensuring staff are trained to restore data from backups when required.
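The "back up, then prove you can restore" cycle can be sketched with nothing but the standard library. Temporary directories stand in for real backup storage here; the restore-and-compare step at the end is the part that turns a backup from a hope into a safeguard.

```python
import tarfile
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)
    data = tmp / "data"
    data.mkdir()
    (data / "config.txt").write_text("important settings\n")

    # Back up the data directory into a compressed archive.
    archive = tmp / "backup.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(data, arcname="data")

    # Restore into a separate location and verify the contents match.
    restore = tmp / "restore"
    with tarfile.open(archive) as tar:
        tar.extractall(restore)
    restored = (restore / "data" / "config.txt").read_text()
    print(restored == "important settings\n")
```

A real policy would restore to a different machine and on a schedule, but the principle is the same: a backup is only verified once something has actually been read back out of it.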