Microsoft 365 Certification - Sample Evidence Guide

Overview

This guide provides ISVs with examples of the type of evidence and the level of detail required for each of the Microsoft 365 Certification controls. The examples shared in this document don't represent the only evidence that can be used to demonstrate that a control is met; they act only as a guideline for the type of evidence required.

Note: The actual interfaces, screenshots, and documentation used to satisfy the requirements will vary depending on product use, system setup, and internal processes. In addition, where policy or procedure documentation is required, the ISV must send the ACTUAL documents and not screenshots, as may be shown in some of the examples.

There are two sections in the certification that require submissions:

  1. The Initial Document Submission: a small set of high-level documents required for scoping your assessment.
  2. The Evidence Submission: the full set of evidence required for each control in-scope for your certification assessment.

Tip

Try the App Compliance Automation Tool for Microsoft 365 (ACAT) for an accelerated path to Microsoft 365 Certification; it automates evidence collection and control validation. Learn more about which controls are fully automated by ACAT.

Structure

This document maps directly to the controls you'll be presented with during your certification in Partner Center. The guidance within this document is structured as follows:

  • Security Domain: The three security domains that all controls are grouped into: Application Security, Operational Security, and Data Security and Privacy.
  • Control(s): The control(s) and associated number (No.), taken directly from the Microsoft 365 Certification Checklist, together with the assessment activity description.
  • Intent: Why the security control is included in the program and the specific risk it's aimed at mitigating. This reasoning should help ISVs understand the types of evidence that need to be collected and what they must pay attention to, and have awareness and understanding of, when producing their evidence.
  • Example Evidence Guidelines: Guidance for the Evidence Collection Tasks on the Microsoft 365 Certification Checklist spreadsheet. It shows examples of the type of evidence a Certification Analyst can use to make a confident determination that a control is in place and maintained; it's by no means exhaustive.
  • Evidence Example: Example screenshots and images of potential evidence captured against each of the controls within the Microsoft 365 Certification Checklist spreadsheet, specifically for the Operational Security and Data Security and Privacy security domains (tabs within the spreadsheet). Red arrows and boxes within the examples are there to further aid your understanding of the requirements necessary to meet the control.

Security Domain: Application Security

Control 1 - Control 16:

The Application Security domain controls can be satisfied with a penetration test report issued within the last 12 months showing that your app has no outstanding vulnerabilities. The only required submission is a clean report by a reputable independent company.

Security Domain: Operational Security / Secure Development

The ‘Operational Security / Secure Development’ security domain is designed to ensure ISVs implement strong security mitigation techniques against the myriad of threats posed by activity groups, protecting both the operating environment and the software development processes used to build it.

Malware Protection - Anti-Virus

Control #1: Provide policy documentation that governs anti-virus practices and procedures.

  • Intent: The intent of this control is to assess an ISV’s understanding of the issues they face when considering the threat from computer viruses. By establishing an anti-virus policy and processes based on industry best practices, an ISV provides a resource tailored to their organization’s ability to mitigate the risks posed by malware, lists best practices in virus detection and elimination, and demonstrates that the documented policy provides security guidance for the organization and its employees. Documenting a policy and procedure for how the ISV deploys anti-malware defenses ensures consistent rollout and maintenance of this technology, reducing the risk of malware in the environment.
  • Example Evidence Guidelines: Provide a copy of your anti-virus/anti-malware policy detailing the processes and procedures implemented within your infrastructure to promote anti-virus/anti-malware best practices.
  • Example Evidence:

Antivirus and Malware Policy screenshot

Note: This screenshot shows a policy/process document; the expectation is for ISVs to share the actual supporting policy/procedure documentation and not simply provide a screenshot.

Control #2: Provide demonstrable evidence that anti-virus software is running across all sampled system components.

  • Intent: It's important to have Anti-Virus (AV) (or anti-malware defenses) running in your environment to protect against cyber security risks that you may or may not be aware of, as potentially damaging attacks are increasing both in sophistication and in number. Having AV deployed to all system components that support its use will help mitigate some of the risks of malware being introduced into the environment. It only takes a single unprotected endpoint to provide a vector of attack for an activity group to gain a foothold into the environment. AV should therefore be used as one of several layers of defense to protect against this type of threat.
  • Example Evidence Guidelines: To prove that an active instance of AV is running in the assessed environment, provide a screenshot for every device in the sample that supports the use of anti-virus, showing the anti-virus process running and the anti-virus software active; alternatively, if you have a centralized management console for anti-virus, you may be able to demonstrate this from that management console. If using the management console, be sure to evidence in a screenshot that the sampled devices are connected and working. A scripted check, sketched after the examples below, can also help capture this status consistently.
  • Evidence Example 1: The below screenshot has been taken from Azure Security Center; it shows that an anti-malware extension has been deployed on the VM named "MSPGPRODAZUR01".

Screenshot of Azure Security Center; it shows that an Antimalware extension has been deployed on the VM

  • Evidence Example 2

The below screenshot has been taken from a Windows 10 device, showing that "Real-time protection" is switched on for the host name "CLARANET-SBU-WM".

Screenshot of Windows 10 devices, showing that "Real-time protection" is switched on
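
In addition to screenshots, a small script can help collect this status consistently across sampled devices. The sketch below is a minimal example, assuming a Windows host running Microsoft Defender with its PowerShell module (the Get-MpComputerStatus cmdlet) available; the output file name is illustrative, and other anti-virus products or a central management console export would need a different command.

```python
# Minimal sketch: capture Microsoft Defender status on a sampled Windows host as evidence.
# Assumes Microsoft Defender and its PowerShell module (Get-MpComputerStatus) are present;
# adapt the command for other anti-virus products or rely on your management console export.
import socket
import subprocess
from datetime import datetime, timezone

def capture_defender_status(output_path: str) -> None:
    """Run Get-MpComputerStatus and save the relevant fields with the host name and a timestamp."""
    command = (
        "Get-MpComputerStatus | "
        "Select-Object AMServiceEnabled, AntivirusEnabled, RealTimeProtectionEnabled, "
        "AntivirusSignatureLastUpdated | Format-List"
    )
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    with open(output_path, "w", encoding="utf-8") as evidence:
        evidence.write(f"Host: {socket.gethostname()}\n")
        evidence.write(f"Collected (UTC): {datetime.now(timezone.utc).isoformat()}\n")
        evidence.write(result.stdout)

if __name__ == "__main__":
    capture_defender_status("defender-status-evidence.txt")
```

Including the host name and collection timestamp in the output makes it straightforward to tie the evidence back to each sampled device.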

Control #3: Provide demonstrable evidence that anti-virus signatures are up-to-date across all environments (within 1 day).

  • Intent: Hundreds of thousands of new malware and potentially unwanted applications (PUA) are identified every day. To provide adequate protection against newly released malware, AV signatures need to be updated regularly to account for newly released malware.
  • This control exists to ensure that the ISV has taken into consideration the security of the environment and the effect that outdated AV can have on security.
  • Example Evidence Guidelines: Provide anti-virus log files from each sampled device, showing that updates are applied daily.
  • Example Evidence: The following screenshot shows Microsoft Defender updating at least daily, as indicated by 'Event 2000, Windows Defender' entries (the definition update event). The hostname confirms that this was taken from the in-scope system "CLARANET-SBU-WM".

screenshot shows Microsoft Defender updating at least daily by showing 'Event 2000, Windows Defender'

Note: The evidence provided would need to include an export of the logs to show daily updates over a greater time period. Some anti-virus products generate update log files, which should be supplied; otherwise, export the logs from Event Viewer. A script sketch for sanity-checking such an export follows.
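
If you export the update events from Event Viewer (or your anti-virus product's own log) to CSV, a short script can confirm the export really does show daily updates before you submit it. This is a minimal sketch; the column header and timestamp format are assumptions and need to match your export.

```python
# Minimal sketch: check that an exported anti-virus update log shows at least daily updates.
# Assumes a CSV export (for example, from Event Viewer filtered to Event ID 2000) with a
# timestamp column; the column name and date format below are assumptions - adjust them to
# match your export.
import csv
from datetime import datetime, timedelta

TIMESTAMP_COLUMN = "Date and Time"          # assumed column header
TIMESTAMP_FORMAT = "%d/%m/%Y %H:%M:%S"      # assumed export format
MAX_GAP = timedelta(days=1)

def check_daily_updates(csv_path: str) -> None:
    with open(csv_path, newline="", encoding="utf-8-sig") as handle:
        timestamps = sorted(
            datetime.strptime(row[TIMESTAMP_COLUMN], TIMESTAMP_FORMAT)
            for row in csv.DictReader(handle)
        )
    gaps = [
        (earlier, later)
        for earlier, later in zip(timestamps, timestamps[1:])
        if later - earlier > MAX_GAP
    ]
    if gaps:
        for earlier, later in gaps:
            print(f"Gap over one day: {earlier} -> {later}")
    else:
        print(f"{len(timestamps)} updates found, all within one day of the previous update.")

if __name__ == "__main__":
    check_daily_updates("defender-update-events.csv")
```

Submit the exported log itself as the evidence; the script output simply helps you confirm it covers the required period without gaps.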

Control #4: Provide demonstrable evidence that anti-virus is configured to perform on-access scanning or periodic scanning across all sampled system components.

Note: If on-access scanning isn't enabled, then a minimum of daily scanning and alerting MUST be enabled.

  • Intent: The intention of this control is to ensure that malware is quickly identified to minimize the effect it may have on the environment. Where on-access scanning is carried out and coupled with automatically blocking malware, this will help stop malware infections that are known to the anti-virus software. Where on-access scanning isn't desirable due to risks of false positives causing service outages, suitable daily (or more frequent) scanning and alerting mechanisms need to be implemented to ensure a timely response to malware infections and to minimize damage.
  • Example Evidence Guidelines: Provide a screenshot for every device in the sample that supports anti-virus, showing that anti-virus is running on the device and is configured for on-access (real-time scanning) scanning, OR provide a screenshot showing that periodic scanning is enabled for daily scanning, alerting is configured and the last scan date for every device in the sample.
  • Example Evidence: The following screenshot shows Real-time protection is enabled for the host, "CLARANET-SBU-WM".

Screenshot shows Real-time protection is enabled for the host

Control #5: Provide demonstrable evidence that anti-virus is configured to automatically block malware or quarantine and alert across all sampled system components.

  • Intent: Malware is constantly evolving in sophistication, along with the varying degrees of devastation it can bring. The intent of this control is either to stop malware from executing, and therefore from delivering its potentially devastating payload, or, if automatic blocking isn't an option, to limit the amount of time malware can wreak havoc by alerting on and immediately responding to the potential infection.

  • Example Evidence Guidelines: Provide a screenshot for every device in the sample that supports anti-virus, showing that anti-virus is running on the machine and is configured to automatically block malware, alert or to quarantine and alert.

  • Example Evidence 1: The following screenshot shows the host "CLARANET-SBU-WM" is configured with real-time protection on for Microsoft Defender Antivirus. As the setting says, this locates and stops malware from installing or running on the device.

screenshot shows the host "CLARANET-SBU-WM" is configured with real-time protection on for Microsoft Defender Antivirus.

Control #6: Provide demonstrable evidence that applications are approved prior to being deployed.

  • Intent: With application control, the organization will approve each application/process that is permitted to run on the operating system. The intent of this control is to ensure that an approval process is in place to authorize which applications/processes can run.

  • Example Evidence Guidelines: Evidence can be provided showing that the approval process is being followed. This may be provided with signed documents, tracking within change control systems or using something like Azure DevOps or JIRA to track these requests and authorization.

  • Example Evidence: The following screenshot demonstrates an approval by management that each application permitted to run within the environment follows an approval process. This is a paper-based process at Contoso; however, other mechanisms may be used.

screenshot demonstrate an approval by management that each application permitted to run within the environment follows an approval process.

Control #7: Provide demonstrable evidence that a complete list of approved applications with business justification exists and is maintained.

  • Intent: It's important that organizations maintain a list of all applications that have been approved, along with information on why the application/process has been approved. This will help ensure the configuration stays current and can be reviewed against a baseline to ensure unauthorized applications/processes aren't configured.

  • Example Evidence Guidelines: Supply the documented list of approved applications/processes along with the business justification.

  • Example Evidence: The following screenshot lists the approved applications with business justification.

screenshot lists the approved applications with business justification.

Note: This screenshot shows a document; the expectation is for ISVs to share the actual supporting document and not simply provide a screenshot.

Control #8: Provide supporting documentation detailing that application control software is configured to meet specific application control mechanisms.

  • Intent: The configuration of the application control technology should be documented along with a process of how to maintain the technology, that is, add and delete applications/processes. As part of this documentation, the type of mechanism used should be detailed for each application/process. This will feed into the next control to ensure the technology is configured as documented.

  • Example Evidence Guidelines: Provide supporting documentation detailing how application control has been set up and how each application/process has been configured within the technology.

  • Example Evidence: The following screenshot lists the control mechanism used to implement the application control. You can see below that one app uses certificate-based controls and the others use the file path.

screenshot lists the control mechanism used to implement the application control.

Note: This screenshot shows a document; the expectation is for ISVs to share the actual supporting document and not simply provide a screenshot.

Control #9: Provide demonstrable evidence that application control is configured as documented from all sampled system components.

  • Intent: The intent of this control is to validate that application control is configured across the sample as per the documentation.

  • Example Evidence Guidelines: Provide a screenshot for every device in the sample to show that it has application controls configured and activated. This should show machine names, the groups they belong to, and the application control policies applied to those groups and machines.

  • Evidence Example: The following screenshot shows a Group Policy object with Software Restriction Policies enabled.

screenshot shows a Group Policy object with Software Restriction Policies enabled.

This next screenshot shows the configuration in line with the control above.

screenshot shows the configuration in line with the control above.

This next screenshot shows the Microsoft 365 Environment and the Computers included within the scope being applied to this GPO Object 'Domain Computer Settings'.

screenshot shows the M365 Environment and the Computers included within the scope being applied to this GPO Object 'Domain Computer Settings'.

This final screenshot shows the in-scope server "DBServer1" being within the OU shown in the screenshot above.

screenshot shows the in-scope server "DBServer1" being within the OU within the screenshot above.

Patch Management – Risk Ranking

The swift identification and remediation of security vulnerabilities helps to minimize the risks of an activity group compromising the environment or application. Patch management is split into two sections: risk ranking and patching. These three controls cover the identification of security vulnerabilities and ranking them according to the risk they pose.

This security control group is in scope for Platform-as-a-Service (PaaS) hosting environments since the application/add-in third-party software libraries and code base must be patched based upon the risk ranking.

Control #10: Supply policy documentation that governs how new security vulnerabilities are identified and assigned a risk score.

  • Intent: The intent of this control is to have supporting documentation to ensure security vulnerabilities are identified quickly to reduce the window of opportunity that activity groups have to exploit these vulnerabilities. A robust mechanism needs to be in place to identify vulnerabilities covering all the system components in use by the organizations; for example, operating systems (Windows Server, Ubuntu, etc.), applications (Tomcat, MS Exchange, SolarWinds, etc.), code dependencies (AngularJS, jQuery, etc.). Organizations need to not only ensure the timely identification of vulnerabilities within the estate, but also rank any vulnerabilities accordingly to ensure remediation is carried out within a suitable timeframe based on the risk the vulnerability presents.

Note: Even if you're running within a purely Platform-as-a-Service environment, you still have a responsibility to identify vulnerabilities within your code base, that is, in third-party libraries.

  • Example Evidence Guidelines: Supply the supporting documentation (not screenshots).

  • Example Evidence: This screenshot shows a snippet of a risk ranking policy.

screenshot shows a snippet of a risk ranking policy.

Note: This screenshot shows a policy/process document; the expectation is for ISVs to share the actual supporting policy/procedure documentation and not provide a screenshot.

Control #11: Provide evidence of how new security vulnerabilities are identified.

  • Intent: The intent of this control is to ensure the process is being followed and it's robust enough to identify new security vulnerabilities across the environment. This may not just be the Operating Systems; it may include applications running within the environment and any code dependencies.

  • Example Evidence Guidelines: Evidence may be provided by showing subscriptions to mailing lists, manual reviews of security sources for newly released vulnerabilities (which would need to be adequately tracked with timestamps of the activities, for example, in JIRA or Azure DevOps), or tooling that finds out-of-date software (for example, Snyk for out-of-date software libraries, or Nessus authenticated scans that identify out-of-date software). A sketch of one possible supplementary automated check follows the example evidence below.

Note: If using Nessus, this would need to be run regularly to identify vulnerabilities quickly; we recommend at least weekly.

  • Example Evidence: This screenshot demonstrates that a mailing group is being used to be notified of security vulnerabilities.

screenshot demonstrates that a mailing group is being used to be notified of security vulnerabilities.

screenshot also demonstrates that a mailing group is being used to be notified of security vulnerabilities.
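
Alongside mailing lists and scanning tools, some ISVs add a lightweight automated check against a public vulnerability feed. The sketch below is an illustration only, querying the NVD API 2.0 for CVEs that mention products from a hypothetical inventory list; the response field names reflect that API at the time of writing and should be confirmed against the current schema.

```python
# Minimal sketch of one supplementary identification mechanism: query the NVD API 2.0 for
# published CVEs that mention products from your software inventory. This illustration
# doesn't replace vendor advisories, mailing lists, or scanning tools.
import json
import urllib.parse
import urllib.request

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"
TRACKED_PRODUCTS = ["Apache Tomcat", "OpenSSL"]   # example inventory entries

def recent_cves_for(product: str, limit: int = 5) -> list[dict]:
    """Return CVE id and published date for entries whose description mentions the product."""
    query = urllib.parse.urlencode({"keywordSearch": product, "resultsPerPage": limit})
    with urllib.request.urlopen(f"{NVD_API}?{query}", timeout=30) as response:
        payload = json.load(response)
    return [
        {"id": item["cve"]["id"], "published": item["cve"]["published"]}
        for item in payload.get("vulnerabilities", [])
    ]

if __name__ == "__main__":
    for product in TRACKED_PRODUCTS:
        for cve in recent_cves_for(product):
            print(f"{product}: {cve['id']} published {cve['published']}")
```

Whatever mechanism you use, the evidence needs to show it runs regularly and that the findings feed into the risk-ranking process covered by the next control.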

Control #12: Provide evidence demonstrating that all vulnerabilities are assigned a risk ranking once identified.

  • Intent: Patching needs to be based upon risk: the riskier the vulnerability, the quicker it needs to be remediated. Risk ranking of identified vulnerabilities is an integral part of this process. The intent of this control is to ensure there's a documented risk ranking process which is being followed so that all identified vulnerabilities are suitably ranked based upon risk. Organizations usually utilize the CVSS (Common Vulnerability Scoring System) rating provided by vendors or security researchers. It's recommended that, if organizations rely on CVSS, a re-ranking mechanism is included within the process to allow the organization to change the ranking based upon an internal risk assessment. Sometimes the vulnerability may not be applicable due to the way the application has been deployed within the environment; for example, a Java vulnerability may be released which impacts a specific library that isn't used by the organization.

  • Example Evidence Guidelines: Provide evidence by way of screenshot or other means, for example, DevOps/Jira, which demonstrates that vulnerabilities are going through the risk ranking process and being assigned an appropriate risk ranking by the organization.

  • Example Evidence: This screenshot shows risk ranking occurring within column D and re-ranking in columns F and G, should the organization perform a risk assessment and determine that the risk can be downgraded. Evidence of re-ranking risk assessments would need to be supplied as supporting evidence. A severity-banding sketch follows the example below.

Evidence of re-ranking risk assessments would need to be supplied as supporting evidence
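
As a worked example of the ranking step, the sketch below maps a CVSS v3.1 base score to the standard qualitative severity bands and records an optional internal re-ranking with its justification. The record fields and the example CVE are illustrative; capture whatever your documented process requires.

```python
# Minimal sketch: assign an initial risk ranking from a CVSS v3.1 base score using the
# standard qualitative severity bands, with room for a documented internal re-ranking.
def cvss_to_severity(score: float) -> str:
    """Map a CVSS v3.1 base score to its qualitative severity rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

def rank_vulnerability(cve_id: str, cvss_score: float,
                       re_ranked_to: str | None = None,
                       justification: str | None = None) -> dict:
    """Build a ranking record; re-ranking fields are optional and must be justified."""
    return {
        "cve": cve_id,
        "cvss_score": cvss_score,
        "initial_ranking": cvss_to_severity(cvss_score),
        "final_ranking": re_ranked_to or cvss_to_severity(cvss_score),
        "re_ranking_justification": justification,
    }

if __name__ == "__main__":
    # Example: a High-rated library CVE downgraded because the vulnerable code path isn't used.
    print(rank_vulnerability("CVE-2021-0000", 7.5,
                             re_ranked_to="Medium",
                             justification="Vulnerable code path not reachable in our deployment"))
```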

Patch Management – Patching

The below controls are for the patching element of Patch Management. To maintain a secure operating environment, applications/add-ons and supporting systems must be suitably patched. A suitable timeframe between identification (or public release) and patching needs to be managed to reduce the window of opportunity for a vulnerability to be exploited by an activity group. The Microsoft 365 Certification doesn't stipulate a 'Patching Window'; however, Certification Analysts will reject timeframes that aren't reasonable.

This security control group is in scope for Platform-as-a-Service (PaaS) hosting environments since the application/add-in third-party software libraries and code base must be patched based upon the risk ranking.

Control #13: Provide policy documentation for patching of in-scope system components that includes suitable minimal patching timeframe for critical, high, and medium risk vulnerabilities; and decommissioning of any unsupported operating systems and software.

  • Intent: Patch management is required by many security compliance frameworks, for example, PCI DSS, ISO 27001, and NIST SP 800-53. The importance of good patch management can't be overstressed, as it can correct security and functionality problems in software and firmware and mitigate vulnerabilities, which helps to reduce the opportunities for exploitation. The intent of this control is to minimize the window of opportunity an activity group has to exploit vulnerabilities that may exist within the in-scope environment.

  • Example Evidence Guidelines: Provide a copy of all policies and procedures detailing the process for patch management. This should include a section on a minimal patching window, and that unsupported operating systems and software must not be used within the environment.

  • Example Evidence: Below is an example policy document.

Screenshot of a copy of all policies and procedures detailing the process for patch management.

Note: This screenshot shows a policy/process document; the expectation is for ISVs to share the actual supporting policy/procedure documentation and not simply provide a screenshot.

Control #14: Provide demonstrable evidence that all sampled system components are being patched.

Note: Include any software/third-party libraries.

  • Intent: Patching vulnerabilities ensures that the differing modules that form part of the information technology infrastructure (hardware, software, and services) are kept up to date and free from known vulnerabilities. Patching needs to be carried out as soon as possible to minimize the potential for a security incident between the release of vulnerability details and patching. This is even more critical where exploitation of a vulnerability is known to be occurring in the wild.

  • Example Evidence Guidelines: Provide a screenshot for every device in the sample and supporting software components showing that patches are installed in line with the documented patching process.

  • Example Evidence: The following screenshot shows that the in-scope system component "CLARANET-SBU-WM" is carrying out Windows updates in line with the patching policy.

screenshot shows that the in scope system component "CLARANET-SBU-WM" is carrying out Windows updates in line with the patching policy.

Note: Patching of all the in-scope system components needs to be evidenced. This includes OS updates, application/component updates (for example, Apache Tomcat, OpenSSL), and software dependencies (for example, jQuery, AngularJS).

Control #15: Provide demonstrable evidence that any unsupported operating systems and software components aren't used within the environment.

  • Intent: Software that isn't being maintained by vendors will, over time, suffer from known vulnerabilities that aren't fixed. Therefore, unsupported operating systems and software components must not be used within production environments.

  • Example Evidence Guidelines: Provide a screenshot for every device in the sample showing the version of OS running (including the server's name in the screenshot). In addition to this, provide evidence that software components running within the environment are running supported versions. This may be done by providing the output of internal vulnerability scan reports (providing authenticated scanning is included) and/or the output of tools which check third-party libraries, such as Snyk, Trivy, or npm audit. If running in PaaS only, only third-party library patching needs to be covered by the patching control groups. A small collection-script sketch for the OS details follows the example evidence below.

  • Example Evidence: The following evidence shows that the in-scope system component THOR is running software that is supported by the vendor since Nessus hasn't flagged any issues.

evidence shows that the in-scope system component THOR is running software that is supported by the vendor since Nessus hasn't flagged any issues.

Note: The complete report must be shared with the Certification Analysts.

  • Example Evidence 2

This screenshot shows that the in-scope system component "CLARANET-SBU-WM" is running on a supported Windows version.

screenshot shows that the in-scope system component "CLARANET-SBU-WM" is running on a supported Windows version.

  • Example Evidence 3

The following screenshot is of the Trivy output; the complete report doesn't list any unsupported applications.

screenshot is of the Trivy output, which the complete report doesn't list any unsupported applications.

Note: The complete report must be shared with the Certification Analysts.
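
For the OS portion of this evidence, a short script can record each sampled host's operating system details alongside its host name, which can then be checked against the vendor's support lifecycle. This is a minimal sketch; the output file name is illustrative, and third-party libraries still need a dependency scanner such as the Snyk or Trivy reports shown above.

```python
# Minimal sketch: record the operating system name, version, and host name of a sampled
# device so it can be checked against the vendor's support lifecycle.
import json
import platform
import socket
from datetime import datetime, timezone

def os_evidence() -> dict:
    return {
        "hostname": socket.gethostname(),
        "collected_utc": datetime.now(timezone.utc).isoformat(),
        "system": platform.system(),        # e.g. "Windows" or "Linux"
        "release": platform.release(),      # e.g. "10" or the kernel release
        "version": platform.version(),      # detailed build/version string
    }

if __name__ == "__main__":
    evidence = os_evidence()
    with open("os-version-evidence.json", "w", encoding="utf-8") as handle:
        json.dump(evidence, handle, indent=2)
    print(json.dumps(evidence, indent=2))
```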

Vulnerability Scanning

By introducing regular vulnerability assessments, organizations can detect weaknesses and insecurities within their environments which may provide an entry point for a malicious actor to compromise the environment. Vulnerability scanning can help to identify missing patches or misconfigurations within the environment. By regularly conducting these scans, an organization can provide appropriate remediation to minimize the risk of a compromise due to issues that are commonly picked up by these vulnerability scanning tools.

Control #16: Provide the quarterly infrastructure and web application vulnerability scanning reports. Scanning needs to be carried out against the entire public footprint (IP addresses and URLs) and internal IP ranges.

Note: This MUST include the full scope of the environment.

  • Intent: Vulnerability scanning looks for possible weaknesses in an organization's computer systems, networks, and web applications to identify holes which could potentially lead to security breaches and the exposure of sensitive data. Vulnerability scanning is often required by industry standards and government regulations, for example, the PCI DSS (Payment Card Industry Data Security Standard).

  • A report by Security Metrics entitled "2020 Security Metrics Guide to PCI DSS Compliance" states that 'on average it took 166 days from the time an organization was seen to have vulnerabilities for an attacker to compromise the system. Once compromised, attackers had access to sensitive data for an average of 127 days'. This control is therefore aimed at identifying potential security weaknesses within the in-scope environment.

  • Example Evidence Guidelines: Provide the full scan report(s) for each quarter's vulnerability scans that have been carried out over the past 12 months. The reports should clearly state the targets to validate that the full public footprint is included, and where applicable, each internal subnet. Provide ALL scan reports for EVERY quarter.

  • Example Evidence: Supply the scan reports from the scanning tool being used. Each quarter's scanning reports should be supplied for review. Scanning needs to include the entire environment's system components, that is, every internal subnet and every public IP address/URL exposed by the environment.

Control #17: Provide demonstrable evidence that vulnerabilities identified during vulnerability scanning are remediated in line with your documented patching timeframe.

  • Intent: Failure to identify, manage, and remediate vulnerabilities and misconfigurations quickly can increase an organization's risk of a compromise, leading to potential data breaches. Correctly identifying and remediating issues is important for an organization's overall security posture and is in line with the best practices of various security frameworks, for example, ISO 27001 and the PCI DSS.

  • Example Evidence Guidelines: Provide suitable artifacts (that is, screenshots) showing that a sample of discovered vulnerabilities from the vulnerability scanning are remediated in line with the patching windows already supplied in Control 13 above. A timeframe-check sketch follows the example evidence below.

  • Example Evidence: The following screenshot shows a Nessus scan of the in-scope environment (a single machine in this example named "THOR") showing vulnerabilities on the 2nd August 2021.

screenshot shows a Nessus scan of the in-scope environment (a single machine in this example named "THOR") showing vulnerabilities on the 2nd August 2021.

The following screenshot shows that the issues were resolved two days later, which is within the patching window defined within the patching policy.

screenshot shows that the issues were resolved, 2 days later which is within the patching window defined within the patching policy.

Note: For this control, Certification Analysts need to see vulnerability scan reports and remediation for each quarter over the past twelve months.
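
To show remediation timeframes at a glance, findings from each quarter's scans can be tabulated and checked against the patching windows from your policy. The sketch below is a minimal example; the severity windows and sample findings are assumptions for illustration and must be replaced with the timeframes from your own documented policy (Control 13) and your real scan data.

```python
# Minimal sketch: confirm that sampled findings from quarterly scans were remediated within
# the timeframes defined in your patching policy.
from datetime import date

# Example policy windows in days (assumed for illustration - use your own documented values).
PATCHING_WINDOWS = {"Critical": 14, "High": 30, "Medium": 60}

def within_window(severity: str, identified: date, remediated: date) -> bool:
    return (remediated - identified).days <= PATCHING_WINDOWS[severity]

if __name__ == "__main__":
    findings = [
        # (finding, severity, identified, remediated) - illustrative sample entries
        ("OpenSSL update missing", "High", date(2021, 8, 2), date(2021, 8, 4)),
        ("Outdated jQuery library", "Medium", date(2021, 8, 2), date(2021, 10, 15)),
    ]
    for name, severity, identified, remediated in findings:
        days = (remediated - identified).days
        status = "within" if within_window(severity, identified, remediated) else "OUTSIDE"
        print(f"{name}: {severity}, remediated in {days} days ({status} the policy window)")
```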

Firewalls

Firewalls often provide a security boundary between trusted (internal network), untrusted (Internet), and semi-trusted (DMZ) environments. They will usually be the first line of defense within an organization's defense-in-depth security strategy, designed to control ingress and egress traffic flows and to block unwanted traffic. These devices must be tightly controlled to ensure they operate effectively and are free from misconfiguration that could put the environment at risk.

Control #18: Provide policy documentation that governs firewall management practices and procedures.

  • Intent: Firewalls are an important first line of defense in a layered security (defense-in-depth) strategy, protecting environments against less trusted network zones. Firewalls will typically control traffic flows based upon IP addresses and protocols/ports; more feature-rich firewalls can also provide additional "application layer" defenses by inspecting application traffic to safeguard against misuse, vulnerabilities, and threats based upon the applications being accessed. These protections are only as good as the configuration of the firewall, therefore strong firewall policies and support procedures need to be in place to ensure they're configured to provide adequate protection of internal assets. For example, a firewall with a rule to allow ALL traffic from ANY source to ANY destination is just acting as a router.

  • Example Evidence Guidelines: Supply your full firewall policy/procedure supporting documentation. This document should cover all the points below and any additional best practices applicable to your environment.

  • Example Evidence: Below is an example of the kind of firewall policy document we require (this is a demo and may not be complete).

example of the kind of firewall policy document we require

example of the kind of firewall policy document we require 2

example of the kind of firewall policy document we require 3

Control #19: Provide demonstrable evidence that any default administrative credentials are changed prior to installation into production environments.

  • Intent: Organizations need to be mindful of vendor-provided default administrative credentials which are configured during the setup of a device or piece of software. Default credentials are often made publicly available by vendors and can provide an external activity group with an opportunity to compromise an environment. For example, a simple Internet search for the default iDrac (Integrated Dell Remote Access Controller) credentials will highlight root::calvin as the default username and password, giving someone remote access to server management. The intent of this control is to ensure environments aren't susceptible to attack through default vendor credentials that haven't been changed during device/application hardening.

  • Example Evidence Guidelines

  • This can be evidenced over a screensharing session where the Certification Analyst can try to authenticate to the in-scope devices using default credentials. A probe sketch, for interfaces that support it, follows the example evidence below.

  • Example Evidence

The below screenshot shows what the Certification Analyst would see from an invalid username / password from a WatchGuard Firewall.

screenshot shows what the Certification Analyst would see from an invalid username / password from a WatchGuard Firewall.
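
Where an administrative interface uses HTTP Basic authentication, a simple probe can supplement the screenshare by showing that a vendor default credential pair is rejected. The sketch below is a hedged example: the URL, port, and credential pair are placeholders, and many firewall UIs (including WatchGuard's) use form- or API-based logins, so the failed login may be better demonstrated live as described above.

```python
# Minimal sketch: confirm that a vendor default credential pair no longer works against an
# administrative interface. Assumes the interface uses HTTP Basic authentication; the host,
# port, and credential pair are placeholders.
import base64
import ssl
import urllib.error
import urllib.request

ADMIN_URL = "https://192.0.2.1:8080/"                  # placeholder admin interface
DEFAULT_USER, DEFAULT_PASSWORD = "admin", "readwrite"  # example vendor default pair

def default_credentials_rejected(url: str, user: str, password: str) -> bool:
    """Return True if the interface refuses the supplied credentials."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
    # Verification is disabled only because appliance admin UIs often use self-signed
    # certificates; this probe checks the credentials, not the certificate chain.
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    try:
        urllib.request.urlopen(request, timeout=10, context=context)
    except urllib.error.HTTPError as error:
        return error.code in (401, 403)   # rejected: the default credentials don't work
    return False                          # request succeeded: default credentials still work

if __name__ == "__main__":
    if default_credentials_rejected(ADMIN_URL, DEFAULT_USER, DEFAULT_PASSWORD):
        print("Default credentials rejected.")
    else:
        print("WARNING: default credentials were accepted - change them before go-live.")
```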

Control 20: Provide demonstrable evidence that firewalls are installed on the boundary of the in-scope environment, and installed between the perimeter network (also known as DMZ, demilitarized zone, and screened subnet) and internal trusted networks.

  • Intent: Firewalls provide the ability to control traffic between different network zones of different security levels. Since all environments are Internet connected, firewalls need to be installed on the boundary, that is, between the Internet and the in-scope environment. Additionally, firewalls need to be installed between the less trusted DMZ (De-Militarized Zone) networks and internal trusted networks. DMZs are typically used to serve traffic from the Internet and are therefore a target of attack. By implementing a DMZ and using a firewall to control traffic flows, a compromise of the DMZ won't necessarily mean a compromise of the internal trusted networks and corporate/customer data. Adequate logging and alerting should be in place to help organizations quickly identify a compromise to minimize the opportunity for the activity group to further compromise the internal trusted networks. The intent of this control is to ensure there's adequate control between trusted and less trusted networks.

  • Example Evidence Guidelines: Evidence should be provided by way of firewall configuration files or screenshots demonstrating that a DMZ is in place. This should match the supplied architectural diagrams demonstrating the different networks supporting the environment. A screenshot of the network interfaces on the firewall, coupled with the network diagram already supplied as part of the Initial Document Submission should provide this evidence.

  • Example Evidence: Below is a screenshot of a WatchGuard firewall demonstrating two DMZs: one for the inbound services (named DMZ), the other serving the jump box (Bastion Host).

screenshot of a WatchGuard firewall demonstrating two DMZs, one is for the inbound services (named DMZ), the other is serving the jump box (Bastion Host).

Control 21: Provide demonstrable evidence that all public access terminates in the demilitarized zone (DMZ).

  • Intent: Resources that are publicly accessible are open to a myriad of attacks. As already discussed above, the intent of a DMZ is to segment less trusted networks from trusted internal networks which may contain sensitive data. A DMZ is deemed less trusted since there's a much greater risk of publicly accessible hosts being compromised by external activity groups. Public access should always terminate in these less trusted networks, which are adequately segmented by the firewall to help protect internal resources and data. The intent of this control is to ensure all public access terminates within these less trusted DMZs; if resources on the trusted internal networks were public facing, a compromise of those resources would provide an activity group with a foothold into the network where sensitive data is being held.

  • Example Evidence Guidelines

  • Evidence provided for this could be firewall configurations which show the inbound rules and where these rules are terminating, either by routing public IP Addresses to the resources, or by providing the NAT (Network Address Translation) of the inbound traffic.

  • Example Evidence

In the screenshot below, there are three incoming rules, each showing the NAT to the 10.0.3.x and 10.0.4.x subnets, which are the DMZ subnets.

screenshot of three incoming rules, each showing the NAT to the 10.0.3.x and 10.0.4.x subnets, which are the DMZ subnets

Control 22: Provide demonstrable evidence that all traffic permitted through the firewall goes through an approval process.

  • Intent: Since firewalls are a defensive barrier between untrusted traffic and internal resources, and between networks of different trust levels, they need to be securely configured to ensure that only traffic necessary for business operations is enabled. Allowing an unnecessary or overly permissive traffic flow can introduce weaknesses in the defenses at the boundary of these various network zones. By establishing a robust approval process for all firewall changes, the risk of introducing a rule which poses a significant risk to the environment is reduced. Verizon's 2020 Data Breach Investigations Report highlights that 'Errors', which include misconfigurations, are the only action type that is consistently increasing year over year.

  • Example Evidence Guidelines: Evidence can be in the form of documentation showing a firewall change request being authorized, which may be minutes from a CAB (Change Advisory Board) meeting or a change control system tracking all changes.

  • Example Evidence: The following screenshot shows a firewall rule change being requested and authorized using a paper-based process. This could be achieved through something like DevOps or Jira, for example.

screenshot shows a firewall rule change being requested and authorized using a paper-based process

Control 23: Provide demonstrable evidence that the firewall rule base is configured to drop traffic not explicitly defined.

  • Intent: Most firewalls will process the rules in a top-down approach to try to find a matching rule. If a rule matches, the action of that rule will be applied, and all further processing of the rules will stop. If no matching rule is found, by default the traffic is denied. The intent of this control is to ensure that, if the firewall doesn't default to dropping traffic when no matching rule is found, the rule base includes a "Deny All" rule at the end of ALL firewall rule lists. This is to ensure that firewalls don't fall into a default-allow state when processing the rules, thus allowing traffic that hasn't been explicitly defined.

  • Example Evidence Guidelines: Evidence can be provided by way of the firewall configuration, or by screenshots of all the firewall rules showing a "Deny All" rule at the end; or, if the firewall drops unmatched traffic by default, supply a screenshot of all the firewall rules and a link to the vendor's administrative guides highlighting that by default the firewall will drop all traffic not matched. A rule-base review sketch follows the example evidence below.

  • Example Evidence: Below is a screenshot of the WatchGuard firewall rule base which demonstrates that no rules are configured to permit all traffic. There's no deny rule at the end because the WatchGuard will drop traffic that doesn't match any rule by default.

screenshot of the WatchGuard firewall rule base

The following WatchGuard Help Center link, https://www.watchguard.com/help/docs/help-center/en-US/Content/en-US/Fireware/policies/policies_about_c.html, includes the following information:

Screenshot of the watchguard help center link which includes the language " The firebox denies all packets that aren't specifically allowed"
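
If your firewall can export its rule base to CSV, a short review script can flag any allow-all rules and, for platforms that don't deny unmatched traffic by default, confirm that an explicit deny-all rule closes the list. This is a minimal sketch; the column names and file name are assumptions and must be mapped to your firewall's export format.

```python
# Minimal sketch: review an exported firewall rule base (CSV) to confirm no rule allows all
# traffic and, where the platform doesn't deny by default, that a deny-all rule closes the
# list. Column names ("source", "destination", "service", "action") are assumptions.
import csv

def review_rule_base(csv_path: str, denies_by_default: bool) -> None:
    with open(csv_path, newline="", encoding="utf-8") as handle:
        rules = list(csv.DictReader(handle))

    for index, rule in enumerate(rules, start=1):
        is_any_any_allow = (
            rule["action"].strip().lower() == "allow"
            and rule["source"].strip().lower() == "any"
            and rule["destination"].strip().lower() == "any"
            and rule["service"].strip().lower() == "any"
        )
        if is_any_any_allow:
            print(f"Rule {index} permits all traffic and needs review: {rule}")

    if not denies_by_default:
        last_action = rules[-1]["action"].strip().lower() if rules else ""
        if last_action != "deny":
            print("No explicit deny-all rule found at the end of the rule base.")

if __name__ == "__main__":
    # WatchGuard (per the vendor documentation above) denies unmatched traffic by default.
    review_rule_base("firewall-rules-export.csv", denies_by_default=True)
```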

Control 24: Provide demonstrable evidence that the firewall supports only strong cryptography on all non-console administrative interfaces.

  • Intent: To mitigate man-in-the-middle attacks against administrative traffic, all non-console administrative interfaces should support only strong cryptography. The main intent of this control is to protect the administrative credentials as the non-console connection is set up. Additionally, this can also help to protect against eavesdropping on the connection, replaying administrative functions to reconfigure the device, or reconnaissance.

  • Example Evidence Guidelines: Provide the firewall configuration if it includes the cryptographic settings of the non-console administrative interfaces (not all devices expose this as a configurable option). If this isn't within the configuration, you may be able to issue commands on the device to display what is configured for these connections. Some vendors publish this information in articles, so this may also be a way to evidence it. Finally, you may need to run tools that output which encryption is supported; a simple probe sketch follows the example evidence below.

  • Example Evidence: The below screenshot shows the output of SSLScan against the Web Admin interface of the WatchGuard firewall on TCP port 8080. This shows TLS 1.2 or above with a minimum encryption cipher of AES 128-bit.

screenshot shows the output of SSLScan against the Web Admin interface of the WatchGuard firewall on TCP port 8080.

Note: The WatchGuard firewalls also support administrative functions using SSH (TCP Port 4118) and WatchGuard System Manager (TCP Ports 4105 & 4117). Evidence of these non-console administrative interfaces would also need to be provided.
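
As a quick supplementary check, the sketch below records the TLS protocol and cipher negotiated with a non-console administrative interface. The host and port are placeholders, and this only shows what a modern client negotiates; a full enumeration such as the SSLScan output above remains the stronger evidence.

```python
# Minimal sketch: record the negotiated TLS protocol and cipher for a non-console
# administrative interface. Certificate verification is disabled only because appliance
# admin interfaces frequently use self-signed certificates; this probe is about the
# negotiated protocol and cipher, not the certificate chain.
import socket
import ssl

ADMIN_HOST, ADMIN_PORT = "192.0.2.1", 8080   # placeholder admin interface

def negotiated_tls(host: str, port: int) -> tuple[str, str]:
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls_sock:
            cipher_name, _protocol, _bits = tls_sock.cipher()
            return tls_sock.version(), cipher_name

if __name__ == "__main__":
    version, cipher = negotiated_tls(ADMIN_HOST, ADMIN_PORT)
    print(f"{ADMIN_HOST}:{ADMIN_PORT} negotiated {version} with cipher {cipher}")
```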

Control 25: Provide demonstrable evidence that you're performing firewall rule reviews at least every 6 months.

  • Intent: Over time, there's a risk of configuration creep in system components within the in-scope environment. This can often introduce insecurities or misconfigurations that increase the risk of compromise to the environment. Configuration creep can be introduced for numerous reasons, such as temporary changes to aid troubleshooting, ad-hoc functional changes, or quick fixes which can end up being overly permissive due to the pressure of resolving an issue quickly. As an example, you may introduce a temporary "Allow All" firewall rule to overcome an urgent issue. The intent of this control is twofold: firstly, to identify misconfigurations which may introduce insecurities, and secondly, to help identify firewall rules which are no longer needed and can therefore be removed, for example, where a service has been retired but the firewall rule has been left behind.

  • Example Evidence Guidelines: Evidence needs to demonstrate that the review meetings have been occurring. This can be done by sharing meeting minutes of the firewall review and any additional change control evidence that shows actions taken from the review. Ensure the dates are present, as we'd need to see a minimum of two of these meetings (that is, one every six months).

  • Example Evidence: The following screenshot shows evidence of a Firewall review taking place in Jan 2021.

screenshot shows evidence of a Firewall review taking place in Jan 2021.

The following screenshot shows evidence of a Firewall review taking place in July 2021.

screenshot shows evidence of a Firewall review taking place in July 2021.

Firewalls – WAFs

It's optional to deploy a Web Application Firewall (WAF) as part of the solution. If a WAF is used, this counts as extra credit in the scoring matrix within the 'Operational Security' security domain. WAFs can inspect web traffic, filtering and monitoring traffic between the Internet and published web applications to identify web application-specific attacks. Web applications can suffer from many attacks specific to them, such as SQL Injection (SQLi), Cross-Site Scripting (XSS), and Cross-Site Request Forgery (CSRF/XSRF), and WAFs are designed to protect against these types of malicious payloads to help protect web applications from attack and potential compromise.

Control 26: Provide demonstrable evidence that the Web Application Firewall (WAF) is configured to actively monitor, alert, and block malicious traffic.

  • Intent: This control is in place to confirm that the WAF is in place for all incoming web connections, and that it's configured to either block or alert to malicious traffic. To provide an additional layer of defense for web traffic, WAFs need to be configured for all incoming web connections, otherwise, external activity groups could bypass the WAFs designed to provide this additional layer of protection. If the WAF isn't configured to actively block malicious traffic, the WAF needs to be able to provide an immediate alert to staff who can quickly react to the potential malicious traffic to help maintain the security of the environment and stop the attacks.

  • Example Evidence Guidelines: Provide configuration output from the WAF which highlights the incoming web connections being served and that the configuration actively blocks malicious traffic or is monitoring and alerting. Alternatively, screenshots of the specific settings can be shared to demonstrate an organization is meeting this control.

  • Example Evidence: The following screenshot shows that the Contoso Production Azure Application Gateway WAF policy is enabled and that it's configured for 'Prevention' mode, which will actively drop malicious traffic.

screenshots shows the Contoso Production Azure Application Gateway WAF policy is enabled and that it's configured for 'Prevention' mode

The below screenshot shows the Frontend IP configuration

screenshot shows the Frontend IP configuration

Note: Evidence should demonstrate all public IPs used by the environment to ensure all ingress points are covered which is why this screenshot is also included.

The below screenshot shows the incoming web connections using this WAF.

screenshot shows the incoming web connections using this WAF

The following screenshot shows the Contoso_AppGW_CoreRules showing that this is for the api.contoso.com service.

screenshot shows the Contoso_AppGW_CoreRules showing that this is for the api.contoso.com service

Control 27: Provide demonstrable evidence that the WAF supports SSL offloading.

  • Intent: The ability for the WAF to be configured to support SSL Offloading is important, otherwise the WAF will be unable to inspect HTTPS traffic. Since these environments need to support HTTPS traffic, this is a critical function for the WAF to ensure malicious payloads within HTTPS traffic can be identified and stopped.

  • Example Evidence Guidelines: Provide configuration evidence via a configuration export or screenshots which shows that SSL Offloading is supported and configured.

  • Example Evidence: Within Azure Application Gateway, configuring an SSL listener enables SSL offloading; see the 'Overview of TLS termination and end to end TLS with Application Gateway' page on Microsoft Docs. The following screenshot shows this configured for the Contoso Production Azure Application Gateway.

screenshot shows this configured for the Contoso Production Azure Application Gateway.

Control 28: Provide demonstrable evidence that the WAF protects against some, or all, of the following classes of vulnerabilities as per the OWASP Core Rule Set (3.0 or 3.1):

  • protocol and encoding issues,

  • header injection, request smuggling, and response splitting,

  • file and path traversal attacks,

  • remote file inclusion (RFI) attacks,

  • remote code execution attacks,

  • PHP-injection attacks,

  • cross-site scripting attacks,

  • SQL-injection attacks,

  • session-fixation attacks.

  • Intent: WAFs need to be configured to identify attack payloads for common classes of vulnerabilities. This control intends to ensure that adequate detection of vulnerability classes is covered by leveraging the OWASP Core Rule Set.

  • Example Evidence Guidelines: Provide configuration evidence via a configuration export or screenshots demonstrating that most of the vulnerability classes identified above are covered by the scanning.

  • Example Evidence: The below screenshot shows that the Contoso Production Azure Application Gateway WAF policy is configured to scan against the OWASP Core Rule Set Version 3.2.

screenshot shows that the Contoso Production Azure Application Gateway WAF policy is configured to scan against the OWASP Core Rule Set Version 3.2.

Change Control

An established and understood change control process is essential to ensure that all changes go through a structured, repeatable process. By ensuring all changes follow this process, organizations can make sure changes are effectively managed, peer reviewed, and adequately tested before being signed off. This not only helps to minimize the risk of system outages, but also helps to minimize the risk of security incidents being introduced through improper changes.

Control 29: Provide policy documentation that governs change control processes.

  • Intent: To maintain a secure environment and secure application, a robust change control process must be established to ensure all infrastructure and code changes are carried out with strong oversight and defined processes. This ensures that changes are documented and that their security implications are properly considered. The intent is to ensure the change control process is documented so that a secure and consistent approach is taken to all changes within both the environment and application development practices.

  • Example Evidence Guidelines: The documented change control policies/procedures should be shared with the Certification Analysts.

  • Example Evidence: Below shows the start of an example change management policy. Please supply your full policies and procedures as part of the assessment.

the start of an example change management policy.

Note: This screenshot shows a policy/process document; the expectation is for ISVs to share the actual supporting policy/procedure documentation and not simply provide a screenshot.

Control 30: Provide demonstrable evidence that development and test environments enforce separation of duties from the production environment.

  • Intent: Most organizations' development/test environments aren't configured with the same rigor as the production environments and are therefore less secure. Additionally, testing shouldn't be carried out within the production environment as this can introduce security issues or be detrimental to service delivery for customers. By maintaining separate environments which enforce a separation of duties, organizations can ensure changes are applied to the correct environments, thereby reducing the risk of changes intended for the development/test environment being applied to production by mistake.

  • Example Evidence Guidelines: Screenshots could be provided which demonstrate different environments being used for development / test environments and production environments. Typically, you would have different people / teams with access to each environment, or where this isn't possible, the environments would utilize different authorization services to ensure users can't mistakenly log into the wrong environment to apply changes.

  • Example Evidence: The following screenshot shows an Azure subscription for Contoso's TEST environment.

screenshot shows an Azure subscription for Contoso's TEST environment.

This next screenshot shows a separate Azure subscription for Contoso's 'PRODUCTION' environment.

screenshot shows a separate Azure subscription for Contoso's 'PRODUCTION' environment.

Control 31: Provide demonstrable evidence that sensitive production data isn't used within the development or test environments.

  • Intent: As already discussed above, organizations typically don't implement security measures in a development/test environment with the same rigor as in the production environment. Therefore, using sensitive production data in these development/test environments increases the risk of a compromise, and you must avoid using live/sensitive data within them.

Note: You can use live data in development/test environments, provided the development/test environment is included within the scope of the assessment so its security can be assessed against the Microsoft 365 Certification controls.

  • Example Evidence Guidelines: Evidence can be provided by sharing screenshots of the output of the same SQL query run against the production database (redact any sensitive information) and the development/test database; the output of the same query should produce different data sets. Where files are stored, viewing the contents of the folders within both environments should also demonstrate different data sets. A comparison-script sketch follows the example evidence below.

  • Example Evidence: The following screenshot shows the top 3 records (for evidence submission, please provide top 20) from the Production Database.

screenshot shows the top 3 records from the Production Database.

The next screenshot shows the same query from the Development Database, showing different records.

screenshot shows the same query from the Development Database, showing different records.

This demonstrates that the data sets are different.
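
Where you prefer not to expose even redacted records, a comparison script can demonstrate the data sets differ by fingerprinting the result of the same query in each environment. This is a minimal sketch; the connection handling, driver, and query are placeholders for whatever DB-API driver (for example, pyodbc or psycopg2) and schema you actually use.

```python
# Minimal sketch: show that the same query returns different data sets in production and
# development/test by comparing fingerprints, so raw values never leave either environment.
import hashlib

QUERY = "SELECT TOP 20 * FROM Customers ORDER BY CustomerId"   # placeholder query and table

def result_fingerprint(connection, query: str) -> tuple[int, str]:
    """Return the row count and a hash of the result set for the supplied DB-API connection."""
    cursor = connection.cursor()
    cursor.execute(query)
    rows = cursor.fetchall()
    digest = hashlib.sha256(repr(rows).encode("utf-8")).hexdigest()
    return len(rows), digest

def compare_environments(prod_connection, test_connection) -> None:
    prod_count, prod_hash = result_fingerprint(prod_connection, QUERY)
    test_count, test_hash = result_fingerprint(test_connection, QUERY)
    print(f"Production: {prod_count} rows, fingerprint {prod_hash[:16]}")
    print(f"Dev/test:   {test_count} rows, fingerprint {test_hash[:16]}")
    print("Data sets differ." if prod_hash != test_hash else "WARNING: identical result sets.")

# Usage (placeholder): open one DB-API connection per environment with your own driver, then
#   compare_environments(prod_conn, test_conn)
```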

Control 32: Provide demonstrable evidence that documented change requests contain the impact of the change, details of back-out procedures, and details of testing to be carried out.

  • Intent: The intent of this control is to ensure thought has gone into the change being requested. The impact the change has on the security of the system/environment needs to be considered and clearly documented, any back-out procedures need to be documented to aid recovery should something go wrong, and, finally, details of the testing needed to validate that the change has been successful also need to be considered and documented.

  • Example Evidence Guidelines: Evidence can be provided by exporting a sample of change requests, providing paper change requests, or providing screenshots of the change requests showing these three details held within the change request.

  • Example Evidence: The image below shows a new Cross-Site Scripting (XSS) vulnerability being assigned and documented as a change request.

a new Cross Site Scripting Vulnerability (XSS) being assigned and document for change request.

The below tickets show the information that has been set or added to the ticket on its journey to being resolved.

Shows an informative description added to the ticket. Shows that the ticket has been approved.

The two tickets below show the impact of the change to the system and any back-out procedures which may be needed in the event of an issue. You can see that the impact of the change and the back-out procedures have gone through an approval process and have been approved for testing.

On the left of the screen, you can see that testing the changes has been approved; on the right, you can see that the changes have now been approved and tested.

Throughout the process note that the person doing the job, the person reporting on it and the person approving the work to be done are different people.

Example of one person doing the work Example of another person approving the work

The ticket above shows that the changes have now been approved for implementation to the production environment. The right-hand box shows that the test was successful and that the changes have now been implemented in the production environment.

Control 33: Provide demonstrable evidence that change requests undergo an authorization and sign-off process.

  • Intent: A process must be implemented which forbids changes from being carried out without proper authorization and sign-off. The change needs to be authorized before being implemented and signed off once complete. This ensures that change requests have been properly reviewed and that someone in authority has signed off the change.

  • Example Evidence Guidelines: Evidence can be provided by exporting a sample of change requests, providing paper change requests, or providing screenshots of the change requests showing that the change has been authorized prior to implementation and signed off after completion.

  • Example Evidence: The screenshot below shows an example Jira ticket demonstrating that the change needs to be authorized before being implemented and approved by someone other than the developer/requester. You can see the changes here are approved by someone with authority; on the right, the change has been signed off by DP once complete.

screenshot shows an example Jira ticket showing that the change needs to be authorized before being implemented and approved by someone other than the developer/requester.

In the ticket below, you can see that the change has been signed off once complete and that the job has been completed and closed.

Screenshot of completed ticket

Screenshot of completed ticket 2

Secure Software Development/Deployment

Organizations involved in software development activities are often faced with competing priorities between security and TTM (Time to Market) pressures; however, implementing security-related activities throughout the software development lifecycle (SDLC) can not only save money but can also save time. When security is left as an afterthought, issues are usually only identified during the test phase of the SDLC, which can often be more time consuming and costly to fix. The intent of this security section is to ensure secure software development practices are followed to reduce the risk of coding flaws being introduced into the software which is developed. Additionally, this section looks to include some controls to aid in secure deployment of software.

Control 34: Provide policies and procedures that support secure software development and deployment, including secure coding best practice guidance against common vulnerability classes such as, OWASP Top 10 or SANS Top 25 CWE.

  • Intent: Organizations need to do everything in their power to ensure software is securely developed and free from vulnerabilities. In a best effort to achieve this, a robust secure software development lifecycle (SDLC) and secure coding best practices should be established to promote secure coding techniques and secure development throughout the whole software development process. The intent is to reduce the number and severity of vulnerabilities in the software.

  • Example Evidence Guidelines: Supply the documented SDLC and/or support documentation which demonstrates that a secure development life cycle is in use and that guidance is provided for all developers to promote secure coding best practice. Take a look at OWASP in SDLC and the OWASP Software Assurance Maturity Model (SAMM).

  • Example Evidence: The following is an extract from Contoso's Secure Software Development Procedure, which demonstrates secure development and coding practices.

Screenshot of an extract from Contoso's Secure Software Development Procedure

Screenshot of an extract from Contoso's Secure Software Development Procedure 2

Screenshot of an extract from Contoso's Secure Software Development Procedure 3

Screenshot of an extract from Contoso's Secure Software Development Procedure 4

Note: These screenshots show the secure software development document, the expectation is for ISVs to share the actual supporting documentation and not simply provide a screenshot.

Control 35: Provide demonstratable evidence that code changes undergo a review and authorization process by a second reviewer.

  • Intent: The intent of this control is to have code reviewed by another developer to help identify any coding mistakes which could introduce a vulnerability into the software. An authorization step should be established to ensure code reviews are carried out, testing is done, etc. prior to deployment. The authorization step can validate that the correct processes have been followed, which underpins the SDLC defined above.

  • Example Evidence Guidelines: Provide evidence that code undergoes a peer review and must be authorized before it can be applied to the production environment. This evidence may be via an export of change tickets, demonstrating that code reviews have been carried out and the changes authorized, or it could be through code review software such as Crucible (https://www.atlassian.com/software/crucible).

  • Example Evidence:

Change Ticket example

Below is a ticket that shows code changes undergo a review and authorization process by someone other than the original developer. It shows that a code review has been requested by the assignee and will be assigned to someone else for the code review.

The image below shows that the code review was assigned to someone other than the original developer as shown by the highlighted section on the right-hand side of the image below. On the left-hand side you can see that the code has been reviewed and given a 'PASSED CODE REVIEW' status by the code reviewer.

The ticket must now get approval by a manager before the changes can be put onto live production systems.

code review was assigned to someone other than the original developer

ticket must now get approval by a manager before the changes can be put onto live production systems.

The image above shows that the reviewed code has been given approval to be implemented on the live production systems.

Example of final approval

Once the code changes have been completed, the final job is signed off, as shown in the image above.

Note that throughout the process there are three people involved: the original developer of the code, the code reviewer, and a manager to give approval and sign-off. To meet the criteria for this control, your tickets are expected to follow this process, with a minimum of three people involved in the change control process for your code reviews.

Control 36: Provide demonstratable evidence that developers undergo secure software development training annually.

  • Intent: Coding best practices and techniques exist for all programming languages to ensure code is securely developed. There are external training courses that are designed to teach developers the different classes of software vulnerabilities and the coding techniques that can be used to stop introducing these vulnerabilities into the software. The intention of this control is to ensure all developers are taught these techniques and, by carrying the training out on a yearly basis, that these techniques aren't forgotten and newer techniques are learned.

  • Example Evidence Guidelines: Provide evidence by way of certificates if carried out by an external training company, or by providing screenshots of the training diaries or other artifacts which demonstrates that developers have attended training. If this training is carried out via internal resources, provide evidence of the training material also.

  • Example Evidence: Below is the email requesting that staff in the DevOps team be enrolled in annual OWASP Top Ten training.

email requesting that staff in the DevOps team be enrolled in annual OWASP Top Ten training.

The below shows that training has been requested with business justification and approval. This is then followed by screenshots taken from the training and a completion record showing that the person has finished the annual training.

training has been requested with business justification and approval.

Screenshot of training needed

Control 37: Provide demonstratable evidence that code repositories are secured with multi-factor authentication (MFA).

  • Intent: If an activity group can access and modify a software's code base, they could introduce vulnerabilities, backdoors, or malicious code into the code base and therefore into the application. There have been several instances of this already, with probably the most publicized being the NotPetya ransomware attack, which reportedly spread through a compromised update to the Ukrainian tax software M.E.Doc (see What is NotPetya).

  • Example Evidence Guidelines: Provide evidence by way of screenshots from the code repository that ALL users have MFA enabled.

  • Example Evidence: The following screenshot shows that MFA is enabled on all 8 GitLab users.

screenshot shows that MFA is enabled on all 8 GitLab users.
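
For a self-managed GitLab instance, the same information can also be gathered as text through the GitLab REST API. This is a sketch only: it assumes an administrator personal access token (the two_factor_enabled attribute is only returned to administrator callers) and a hypothetical instance URL.

```python
# Sketch only: list GitLab users and their MFA status via the REST API.
import requests

GITLAB_URL = "https://gitlab.example.com"  # hypothetical instance
TOKEN = "<admin personal access token>"    # placeholder

users, page = [], 1
while True:
    resp = requests.get(
        f"{GITLAB_URL}/api/v4/users",
        headers={"PRIVATE-TOKEN": TOKEN},
        params={"per_page": 100, "page": page},
        timeout=30,
    )
    resp.raise_for_status()
    batch = resp.json()
    if not batch:
        break
    users.extend(batch)
    page += 1

# 'two_factor_enabled' is included in the response for administrator callers.
for user in users:
    print(f"{user['username']}: MFA enabled = {user.get('two_factor_enabled')}")
```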

Control 38: Provide demonstratable evidence that access controls are in place to secure code repositories.

  • Intent: Leading on from the previous control, access controls should be implemented to limit access to only individual users who are working on particular projects. By limiting access, you're limiting the risk of unauthorized changes being carried out and thereby introducing insecure code changes. A least privileged approach should be taken to protect the code repository.

  • Example Evidence Guidelines: Provide evidence by way of screenshots from the code repository that access is restricted to individuals needed, including different privileges.

  • Example Evidence: The following screenshot shows members of the "Customers" project in GitLab which is the Contoso "Customer Portal". As can be seen in the screenshot, users have different "Roles" to limit access to the project.

screenshot shows members of the "Customers" project in GitLab which is the Contoso "Customer Portal"

Account Management

Secure account management practices are important as user accounts form the basis of allowing access to information systems, system environments and data. User accounts need to be properly secured as a compromise of the user's credentials can provide not only a foothold into the environment and access to sensitive data but may also provide administrative control over the entire environment or key systems if the user's credentials have administrative privileges.

Control 39: Provide policy documentation that governs account management practices and procedures.

  • Intent: User accounts continue to be targeted by activity groups and will often be the source of a data compromise. By configuring overly permissive accounts, organizations not only increase the pool of 'privileged' accounts that can be used by an activity group to perform a data breach but also increase the risk of the successful exploitation of a vulnerability that would require specific privileges to succeed.

  • BeyondTrust produces a "Microsoft Vulnerabilities Report" each year which analyzes Microsoft security vulnerabilities for the previous year and details percentages of these vulnerabilities that rely on the user account having admin rights. In a recent blog post "New Microsoft Vulnerabilities Report Reveals a 48% YoY Increase in Vulnerabilities & How They Could Be Mitigated with Least Privilege", 90% of Critical vulnerabilities in Internet Explorer, 85% of Critical vulnerabilities in Microsoft Edge and 100% of Critical vulnerabilities in Microsoft Outlook would have been mitigated by removing admin rights. To support secure account management, organizations need to ensure supporting policies and procedures which promote security best practices are in place and followed to mitigate these threats.

  • Example Evidence Guidelines: Supply the documented policies and procedure documents which cover your account management practices. At a minimum, the topics covered should align to the controls within the Microsoft 365 Certification.

  • Example Evidence: The following screenshot shows an example Account Management Policy for Contoso.

screenshot shows an example Account Management Policy for Contoso.

screenshot shows more details of an Account Management Policy for Contoso.

Note: This screenshot shows a policy/process document, the expectation is for ISVs to share the actual supporting policy/procedure documentation and not simply provide a screenshot.

Control 40: Provide demonstratable evidence that default credentials are either disabled, removed, or changed across the sampled system components.

  • Intent: Although this is becoming less popular, there are still instances where activity groups can leverage default and well documented user credentials to compromise production system components. A popular example of this is with Dell iDRAC (Integrated Dell Remote Access Controller). This system can be used to remotely manage a Dell Server, which could be used by an activity group to gain control over the Server's operating system. The default credential of root::calvin is documented and can often be used by activity groups to gain access to systems used by organizations. The intent of this control is to ensure these default credentials are either disabled or removed.

  • Example Evidence Guidelines: There are various ways in which evidence can be collected to support this control. Screenshots of configured users across all system components can help; for example, screenshots of the Linux /etc/shadow and /etc/passwd files will help to demonstrate whether accounts have been disabled. Note that the /etc/shadow file is needed to demonstrate accounts are truly disabled, by observing that the password hash starts with an invalid character such as '!' indicating that the password is unusable (the advice would be to show only a few characters of the password hash and redact the rest; see the sketch following the evidence examples below). Another option would be a screen-sharing session where the assessor can manually try default credentials; for example, in the above discussion of Dell iDRAC, the assessor would need to try to authenticate against all Dell iDRAC interfaces using the default credentials.

  • Example Evidence: The following screenshot shows user accounts configured for the in-scope system component "CLARANET-SBU-WM". The screenshot shows several default accounts (Administrator, DefaultAccount, and Guest); however, the following screenshots show that these accounts are disabled.

The following screenshot shows user accounts configured for the in-scope system component "CLARANET-SBU-WM".

This next screenshot shows the Administrator account is disabled on the in-scope system component "CLARANET-SBU-WM".

Screenshot shows the disabled Administrator account.

This next screenshot shows the Guest account is disabled on the in-scope system component "CLARANET-SBU-WM".

Screenshot shows the disabled Guest account.

This next screenshot shows that the DefaultAccount is disabled on the in-scope system component "CLARANET-SBU-WM".

Screenshot shows the disabled Default account.
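
For Linux system components, the /etc/shadow check described in the guidelines above could be captured with a short script such as the sketch below. Run it as root on the sampled host and redact the output as advised before submitting it as evidence.

```python
# Sketch only: flag local Linux accounts whose password field in /etc/shadow
# indicates the account is locked or unusable ('!' or '*' prefix).

def shadow_status(path="/etc/shadow"):
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            name, pw_field = line.split(":")[:2]
            if pw_field.startswith(("!", "*")):
                state = "locked/disabled"
            elif pw_field == "":
                state = "NO PASSWORD SET - review"
            else:
                state = "active (redact the hash before submission)"
            print(f"{name}: {state}")

if __name__ == "__main__":
    shadow_status()
```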

Control 41: Provide demonstratable evidence that account creation, modification and deletion goes through an established approval process.

  • Intent: The intent is to have an established process to ensure all account management activities are approved, that account privileges maintain least privilege principles, and that account management activities can be properly reviewed and tracked.

  • Example Evidence Guidelines: Evidence would typically be in the form of change request tickets, ITSM (IT Service Management) requests or paperwork showing requests for accounts to be created, modified, or deleted have gone through an approval process.

  • Example Evidence: The images below show account creation for a new starter in the DevOps team, who requires role-based access control settings based on the production environment permissions, with no access to the development environment and standard non-privileged access to everything else.

The account creation has gone through the approval process and the sign-off process; once the account was created, the ticket was closed.

Example of account creation

Sign off process in the ticket

Closed ticket example

Control 42: Provide demonstratable evidence that a process is in place to either disable or delete accounts not used within 3 months.

  • Intent: Inactive accounts can sometimes become compromised, either because they're targeted in brute force attacks which may not be flagged as the user won't be trying to log into the accounts, or by way of a password database breach where a user's password has been reused and is available within a username/password dump on the Internet. Unused accounts should be disabled/removed to reduce the attack surface an activity group has to carry out account compromise activities. These accounts may exist because a leavers process wasn't carried out properly, or because a staff member is on long-term sickness or maternity/paternity leave. By implementing a quarterly process to identify these accounts, organizations can minimize the attack surface.

  • Example Evidence Guidelines: Evidence should be two-fold. Firstly, a screenshot or file export showing the "last sign in" of all user accounts within the in-scope environment. This may be local accounts as well as accounts within a centralized directory service, such as Microsoft Entra ID. This will demonstrate that no enabled accounts have been unused for more than 3 months (see the sketch at the end of this control). Secondly, evidence of the quarterly review process, which may be documentary evidence of the task being completed within ADO (Azure DevOps) or JIRA tickets, or through paper records which should be signed off.

  • Example Evidence: This first screenshot shows the output of the script which is executed quarterly to view the last sign in attribute for users within Microsoft Entra ID.

screenshot shows the output of the script which is executed quarterly to view the last logon attribute for users within Microsoft Entra ID.

As can be seen in the above screenshot, two users are showing as not having signed in for some time. The following two screenshots show that these two users are disabled.

Example of user being disabled

Another example of a user being disabled
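
If the quarterly check is scripted against Microsoft Graph, it might look something like the sketch below. It assumes an access token with the AuditLog.Read.All permission (needed for the signInActivity property) has already been obtained; the token shown is a placeholder.

```python
# Sketch only: list enabled Microsoft Entra ID users whose last sign-in is
# older than 90 days (or who have never signed in) using Microsoft Graph.
from datetime import datetime, timedelta, timezone
import requests

TOKEN = "<access token>"  # placeholder; obtain via your usual auth flow
URL = ("https://graph.microsoft.com/v1.0/users"
       "?$select=userPrincipalName,accountEnabled,signInActivity")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

url = URL
while url:
    resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    for user in data.get("value", []):
        last = (user.get("signInActivity") or {}).get("lastSignInDateTime")
        last_dt = datetime.fromisoformat(last.replace("Z", "+00:00")) if last else None
        if user.get("accountEnabled") and (last_dt is None or last_dt < cutoff):
            print(f"REVIEW: {user['userPrincipalName']} last sign-in: {last}")
    url = data.get("@odata.nextLink")
```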

Control 43: Provide demonstratable evidence that a strong password policy or other suitable mitigations to protect user credentials are in place. The following should be used as a minimum guideline:

  • Minimum password length of 8 characters

  • Account lockout threshold of no more than 10 attempts

  • Password history of a minimum of 5 passwords

  • Enforcement of the use of strong passwords

  • Intent: As already discussed, user credentials are often the target of attack by activity groups attempting to gain access to an organization's environment. The intent of a strong password policy is to try to force users into picking strong passwords to mitigate the chances of activity groups being able to brute force them. The intention of adding the "or other suitable mitigations" is to recognize that organizations may implement other security measures to help protect user credentials based on industry developments such as "NIST Special Publication 800-63B".

  • Example Evidence Guidelines: Evidence to demonstrate a strong password policy may be in the form of a screenshot of an organization's Group Policy Object or Local Security Policy "Account Policies → Password Policy" and "Account Policies → Account Lockout Policy" settings. The evidence depends on the technologies being used; that is, for Linux it could be the /etc/pam.d/common-password config file (see the sketch at the end of this control), for BitBucket the "Authentication Policies" section within the Admin Portal (https://support.atlassian.com/security-and-access-policies/docs/manage-your-password-policy/), etc.

  • Example Evidence: The evidence below shows the password policy configured within the "Local Security Policy" of the in-scope system component "CLARANET-SBU-WM".

the password policy configured within the "Local Security Policy" of the in-scope system component "CLARANET-SBU-WM".

Another example of the password policy configured within the "Local Security Policy" of the in-scope system component "CLARANET-SBU-WM".

The screenshot below shows Account Lockout settings for a WatchGuard Firewall.

Account Lockout settings for a WatchGuard Firewall

Below is an example of a minimum passphrase length for the WatchGuard Firewall.

minimum passphrase length for the WatchGuard Firewall.
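
Where Linux hosts are in scope, the PAM-based settings mentioned in the guidelines above could be gathered with a small script. This is a sketch only: it assumes pwquality and faillock are in use, and the file paths and setting names vary by distribution, so treat it as a starting point rather than a definitive check.

```python
# Sketch only: surface Linux password-policy settings (minimum length,
# lockout threshold, password history) from their common locations.
import re
from pathlib import Path

FILES = {
    "/etc/security/pwquality.conf": r"^\s*minlen\s*=\s*(\d+)",  # minimum length
    "/etc/security/faillock.conf":  r"^\s*deny\s*=\s*(\d+)",    # lockout threshold
    "/etc/pam.d/common-password":   r"remember\s*=\s*(\d+)",    # password history
}

for path, pattern in FILES.items():
    p = Path(path)
    if not p.exists():
        print(f"{path}: not present on this host")
        continue
    matches = re.findall(pattern, p.read_text(), flags=re.MULTILINE)
    print(f"{path}: {matches or 'setting not found'}")
```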

Control 44: Provide demonstratable evidence that unique user accounts are issued to all users.

  • Intent: The intent of this control is accountability. By issuing users with their own unique user accounts, users will be accountable for their actions as user activity can be tracked to an individual user.

  • Example Evidence Guidelines: Evidence would be by way of screenshots showing configured user accounts across the in-scope system components which may include servers, code repositories, cloud management platforms, Active Directory, Firewalls, etc.

  • Example Evidence: The following screenshot shows user accounts configured for the in-scope system component "CLARANET-SBU-WM".

screenshot shows user accounts configured for the in-scope system component "CLARANET-SBU-WM".

This next screenshot shows the Administrator account is disabled on the in-scope system component "CLARANET-SBU-WM".

Screenshot shows the Administrator account is disabled.

This next screenshot shows the Guest account is disabled on the in-scope system component "CLARANET-SBU-WM".

Screenshot shows the Guest account is disabled.

This next screenshot shows that the DefaultAccount is disabled on the in-scope system component "CLARANET-SBU-WM".

Screenshot shows that the DefaultAccount is disabled.

Control 45: Provide demonstratable evidence that least privilege principles are being followed within the environment.

  • Intent: Users should only be provided with the privileges necessary to fulfill their job function. This is to limit the risk of a user intentionally or unintentionally accessing data they shouldn't or carrying out a malicious act. By following this principle, it also reduces the potential attack surface (that is, privileged accounts) that can be targeted by a malicious activity group.

  • Example Evidence Guidelines: Most organizations will utilize groups to assign privileges based upon teams within the organization. Evidence could be screenshots showing the various privileged groups and only user accounts from the teams that require these privileges. Usually, this would be backed up with supporting policies/processes defining each defined group with the privileges required and business justification and a hierarchy of team members to validate group membership is configured correctly.

  • For example: Within Azure, the Owners group should be limited, so this should be documented and should have a limited number of people assigned to that group. Another example could be a limited number of staff with the ability to make code changes; a group may be set up with this privilege, containing only the members of staff deemed to need this permission. This should be documented so the certification analyst can cross reference the document with the configured groups, etc.

  • Example Evidence: The following screenshot shows that the environment is configured with groups assigned according to job function.

screenshot shows that the environment is configured with groups assigned according to job function.

The following screenshot shows that users are allocated to groups based upon their job function.

screenshot shows that users are allocated to groups based upon their job function.

Control 46: Provide demonstratable evidence that a process is in place to secure or harden service accounts and the process is being followed.

  • Intent: Service accounts will often be targeted by activity groups because they're often configured with elevated privileges. These accounts may not follow the standard password policies because expiration of service account passwords often breaks functionality. Therefore, they may be configured with weak passwords or passwords that are reused within the organization. Another potential issue, particularly within a Windows environment, may be that the operating system caches the password hash. This can be a big problem because either: the service account is configured within a directory service, in which case the account can be used to access multiple systems with the level of privileges configured; or the service account is local, in which case the likelihood is that the same account/password will be reused across multiple systems within the environment. The above problems can lead to an activity group gaining access to more systems within the environment and can lead to a further elevation of privilege and/or lateral movement. The intent therefore is to ensure that service accounts are properly hardened and secured to help protect them from being taken over by an activity group, or to limit the risk should one of these service accounts be compromised.

  • Example Evidence Guidelines: There are many guides on the Internet to help harden service accounts. Evidence can be in the form of screenshots which demonstrate how the organization has implemented secure hardening of the account. A few examples (the expectation is that multiple techniques would be used) include:

  • Restricting the accounts to a set of computers within Active Directory,

  • Setting the account so interactive sign in isn't permitted,

  • Setting an extremely complex password,

  • For Active Directory, enable the "Account is sensitive and can't be delegated" flag. These techniques are discussed in the following article "Segmentation and Shared Active Directory for a Cardholder Data Environment".

  • Example Evidence: There are multiple ways to harden a service account, and the appropriate mechanisms will depend on each individual environment. The mechanisms used in your environment should be documented within the Account Management policy/procedure document discussed earlier, which will help when reviewing this evidence. Below are some of the mechanisms that may be employed:

The following screenshot shows the 'Account is sensitive and cannot be delegated' option is selected on the service account "_Prod SQL Service Account".

screenshot shows the 'Account is sensitive and cannot be delegated' option is selected on the service account "_Prod SQL Service Account".

This next screenshot shows that the service account "_Prod SQL Service Account" is locked down to the SQL Server and can only sign in to that server.

screenshot shows that the service account "_Prod SQL Service Account" is locked down to the SQL Server and can only log onto that server.

This next screenshot shows that the service account "_Prod SQL Service Account" is only allowed to sign in as a service.

screenshot shows that the service account "_Prod SQL Service Account" is only allowed to logon as a service.
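
If service accounts live in Active Directory, the flags shown above can also be collected programmatically. The following is a sketch only, using the ldap3 library; the domain controller, credentials, and account name are hypothetical, and the bit masks are the documented userAccountControl values.

```python
# Sketch only: read hardening-related attributes of a service account from
# Active Directory. Server, credentials, and account name are hypothetical.
from ldap3 import ALL, Connection, Server

NOT_DELEGATED = 0x100000          # "Account is sensitive and cannot be delegated"
DONT_EXPIRE_PASSWORD = 0x10000    # "Password never expires"

server = Server("dc01.contoso.local", get_info=ALL)
conn = Connection(server, user="CONTOSO\\auditor", password="<redacted>", auto_bind=True)

conn.search(
    "DC=contoso,DC=local",
    "(sAMAccountName=svc_prodsql)",   # hypothetical service account
    attributes=["userAccountControl", "userWorkstations"],
)
attrs = conn.entries[0].entry_attributes_as_dict
uac = int(attrs["userAccountControl"][0])

print("Not delegated:", bool(uac & NOT_DELEGATED))
print("Password never expires:", bool(uac & DONT_EXPIRE_PASSWORD))
print("Restricted to workstations:", attrs.get("userWorkstations") or "not restricted")
```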

Control 47: Provide demonstratable evidence that MFA is configured for all remote access connections and all non-console administrative interfaces.

Terms defined as:

  • Remote Access – Typically, this refers to technologies used to access the supporting environment. For example, Remote Access IPSec VPN, SSL VPN or a Jumpbox/Bastion Host.

  • Non-console Administrative Interfaces – Typically, this refers to over the network administrative connections to system components. This could be over Remote Desktop, SSH or a web interface.

  • Intent: The intent of this control is to provide mitigations against brute forcing privileged accounts and accounts with secure access into the environment. With multi-factor authentication (MFA) in place, a compromised password alone shouldn't allow a successful sign-in, as the MFA mechanism should still be secure. This helps to ensure all access and administrative actions are only carried out by authorized and trusted staff members.

  • Example Evidence Guidelines: Evidence needs to show MFA is enabled on all technologies that fit in the above categories. This may be through a screenshot showing that MFA is enabled at the system level. By system level, we mean evidence that it's enabled for all users and not just an example of an account with MFA enabled. Where the technology is backed off to an MFA solution, we need the evidence to demonstrate that it's enabled and in use. What is meant by this is: where the technology is set up for RADIUS authentication pointing to an MFA provider, you also need to evidence that the RADIUS server it's pointing to is an MFA solution and that accounts are configured to utilize it.

  • Example Evidence 1: The following screenshot shows the authentication realms configured on Pulse Secure, which is used for remote access into the environment. Authentication is backed off to the Duo SaaS Service for MFA support.

screenshots shows the authentication realms configured on Pulse Secure which is used for remote access into the environment.

This screenshot demonstrates that an additional authentication server is enabled which is pointing to "Duo-LDAP" for the 'Duo - Default Route' authentication realm.

screenshot demonstrates that an additional authentication server is enabled which is pointing to "Duo-LDAP" for the 'Duo - Default Route' authentication realm.

This final screenshot shows the configuration for the Duo-LDAP authentication server which demonstrates that this is pointing to the Duo SaaS service for MFA.

screenshot shows the configuration for the Duo-LDAP authentication server which demonstrates that this is pointing to the Duo SaaS service for MFA.

Example Evidence 2: The following screenshots show that all Azure users have MFA enabled.

show that all Azure users have MFA enabled.

Note: you'll need to provide evidence for all non-console connections to demonstrate that MFA is enabled for them. So, for example, if you RDP or SSH to servers or other system components (that is, Firewalls), evidence must show that MFA is enabled for those connections too.

Control 48: Provide demonstratable evidence that strong encryption is configured for all remote access connections and all non-console administrative interfaces, including access to any code repositories and cloud management interfaces.

Terms defined as:

  • Code Repositories – The code base of the app needs to be protected against malicious modification which could introduce malware into the app. MFA needs to be configured on the code repository.

  • Cloud Management Interfaces – Where some or all the environment is hosted within the Cloud Service Provider (CSP), the administrative interface for cloud management is included here.

  • Intent: The intention of this control is to ensure that all administrative traffic is suitably encrypted to protect against man-in-the-middle attacks.

  • Example Evidence Guidelines: Evidence could be provided by screenshots showing encryption settings for remote access technologies, RDP, SSH and Web Admin interfaces. For Web Admin interfaces, Qualys SSL Labs scanner (if publicly accessible, that is, cloud management interfaces, SaaS code repositories or SSL VPN connections) could be used.

  • Example Evidence: The evidence below shows the RDP encryption level on "Webserver01" being configured with a setting of 'High Level'. As the help text shows, this is using strong 128-bit encryption (which is the highest level for Microsoft Windows RDP).

The evidence below shows the RDP encryption level on "Webserver01" being configured with a setting of 'High Level".

The below evidence also shows that the RDP transport security is configured to use TLS 1.0 on "Webserver01" (which is the highest for Windows Server).

shows that the RDP transport security is configured to use TLS 1.0 on "Webserver01"
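
For administrative interfaces that the Qualys SSL Labs scanner can't reach, the negotiated TLS protocol version can be recorded with a short script such as the sketch below; the host name is a hypothetical placeholder.

```python
# Sketch only: record the TLS version and cipher negotiated by a non-console
# administrative interface.
import socket
import ssl

HOST, PORT = "admin.contoso.com", 443  # hypothetical admin interface

context = ssl.create_default_context()
with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print(f"{HOST}:{PORT} negotiated {tls.version()} with cipher {tls.cipher()[0]}")
```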

Control 49: Provide demonstratable evidence that MFA is used to protect the admin portal that you use to manage and maintain all public domain name service (DNS) records.

  • Intent: If a malicious activity group can gain access to Public DNS records, there's a risk that they could modify URLs used by the app, or where the manifest file points, to introduce malicious code or to direct user traffic to an endpoint under the actor's control. This could result in a loss of user data or in malware/ransomware infections across the user base of the app.

  • Example Evidence Guidelines: Provide evidence which demonstrates Public DNS administrative portals are protected by MFA. Even if Public DNS is hosted on servers within the in-scope environment (that is, controlled and operated by the organization), there may still be an Admin Portal somewhere where the Domain Name was registered, and the DNS Records were 'Managed' to point the DNS Servers to your own infrastructure. Where this is the case, MFA should be enabled on the domain registrar administrative interface if the domain's DNS records can be modified. A screenshot should be provided showing the administrative interface is enabled for MFA at the system level (that is, all privileged accounts).

  • Example Evidence: The following screenshot shows the contoso.com DNS is managed within Microsoft Azure for Contoso Corporation.

screenshot show the contoso.com DNS is managed within Microsoft Azure for Contoso Corporation.

Note: The IP addresses are private RFC 1918 addresses and not publicly routed. This is just for demonstration purposes only.

The following screenshots show that all Azure users have MFA enabled.

screenshots show that all Azure users have MFA enabled.

Intrusion Detection and Prevention (optional)

Intrusion Detection and Prevention Systems (IDPS) at the gateway can provide an additional layer of protection against a myriad of Internet based and internal threats. These systems can help to prevent these threats from succeeding and can provide crucial alerting capabilities to alert organizations to live compromise attempts to allow organizations to implement additional defensive strategies to further protect the environment against these active threats.

This section is for extra credit and is therefore optional. It's not a requirement; however, if you do complete it, your assessment will show a more complete picture of your environment and the controls and standards that you have put in place.

Control 50: Provide demonstratable evidence that Intrusion Detection and Prevention Systems (IDPS) is deployed at the perimeter of the in-scope environments.

  • Intent: Although some sources describe insider threats as now surpassing threats by external activity groups, insider threats also include negligence, with human error increasing in percentage year on year. The intent of installing IDPS on the perimeter of the in-scope environment(s) is that external threats can often be detected through IDPS mechanisms due to the nature and techniques used by these types of threats.

  • Example Evidence Guidelines: Evidence should be provided which demonstrates that IDPS is installed at the perimeter. This could be directly on the Firewall if running a NextGen Firewall, or it could be by deploying IDPS Sensors configured on mirrored switch ports to ensure all traffic is seen by the deployed sensors. If IDPS Sensors are being used, additional evidence may need to be provided to demonstrate that the sensors are able to see all external traffic flows.

  • Example Evidence: The below screenshot shows the IDPS functionality is enabled on the WatchGuard Firewall.

screenshot shows the IDPS functionality is enabled on the WatchGuard Firewall.

The additional screenshot below demonstrates that IDPS is enabled on all the rules within the WatchGuard Firewall's config.

screenshot demonstrates that IDPS is enabled on all the rules within the WatchGuard Firewall's config.

Control 51: Provide demonstratable evidence that IDPS signatures are kept current (within 24 hours).

  • Intent: There are multiple modes of operation for IDPS, the most common is using signatures to identify attack traffic. As attacks evolve and newer vulnerabilities are identified, it's important that IDPS signatures are up to date to provide adequate protection. The intent of this control is to ensure IDPS are being maintained.

  • Example Evidence Guidelines: Evidence will likely be with a screenshot showing that the IDPS is configured to update signatures at least daily and showing the last update.

  • Example Evidence: Although this screenshot doesn't show that the IDPS signatures have been updated within the past 24 hours, it does demonstrate that the latest version is installed, which was from a week earlier (evidence collected on the 18th May). This, combined with the screenshot that follows, shows that signatures will be up to date within a 24-hour period.

Screenshot demonstrates that the latest version of IDPS is installed

Shows that the signatures will be updated in a 24 hr period

Control 52: Provide demonstratable evidence that IDPS is configured to support TLS inspection of all incoming web traffic.

  • Intent: Since IDPS relies on signatures, it needs to be able to inspect all traffic flows to identify the attack traffic. TLS traffic is encrypted and therefore IDPS would be unable to properly inspect the traffic. This is critical for HTTPS traffic, since there's a myriad of threats that are common to web services. The intent of this control is to ensure that encrypted traffic flows can also be inspected for IDPS.

  • Example Evidence Guidelines: Evidence should be provided by way of screenshots, demonstrating that encrypted TLS traffic is also being inspected by the IDPS solution.

  • Example Evidence: This screenshot shows the HTTPS rules on the Firewall

screenshot shows the HTTPS rules on the Firewall

This next screenshot shows that IDPS is enabled on these rules.

screenshot shows that IDPS is enabled on these rules.

The following screenshot shows a "Proxy Action" is applied to the 'Inbound_Bot_Traffic' rule, which is used to turn on content inspection.

screenshot shows a "Proxy Action" is applied to the 'Inbound_Bot_Traffic' rule, which is used to turn on content inspection.

The following screenshot shows content inspection is enabled.

following screenshot shows content inspection is enabled

Control 53: Provide demonstratable evidence that IDPS is configured to monitor all inbound traffic flows.

  • Intent: As already discussed, it's important that all inbound traffic flows are monitored by IDPS to identify any form of attack traffic.

  • Example Evidence Guidelines: Evidence by way of screenshots should be provided to demonstrate that all inbound traffic flows are monitored. This can be using the NextGen firewall, showing that all incoming rules are enabled for IDPS, or it can be by way of using IDPS Sensors and demonstrating that all traffic is configured to reach the IDPS Sensor.

  • Example Evidence: This screenshot shows that IDPS is configured on all the WatchGuard Firewall's rules (policies).

IDPS is configured on all the WatchGuard Firewall's rules.

Control 54: Provide demonstratable evidence that IDPS is configured to monitor all outbound traffic flows.

  • Intent: As already discussed, it's important that all outbound traffic flows are monitored by IDPS to identify any form of attack traffic. Some IDPS systems can also identify potential internal breaches by monitoring all outbound traffic. This can be done by identifying traffic destined for 'Command and Control' endpoints.

  • Example Evidence Guidelines: Evidence by way of screenshots should be provided to demonstrate that all outbound traffic flows are monitored. This can be using the NextGen firewall, showing that all outgoing rules are enabled for IDPS, or it can be by way of using IDPS Sensors and demonstrating that all traffic is configured to reach the IDPS Sensor.

  • Example Evidence: This screenshot shows that IDPS is configured on all the WatchGuard Firewall's rules (policies).

Shows that IDPS is configured on all the WatchGuard Firewall's rules.

  • Example Evidence 2: Azure offers IDPS through third party apps. In the example below, Network Watcher packet capture has been used to capture packets, which are then analyzed with Suricata, an open-source IDS tool.

Netwatcher packet capture has been used to capture packets and used along with Suricata which is an Open-Source IDS tool.

Combining packet capture provided by Network Watcher and open-source IDS tools such as Suricata, you can perform network intrusion detection for a wide range of threats. The image below shows the Suricata interface.

image shows the Suricata interface.

Signatures are used to trigger alerts, and these can be installed and updated easily. The image below shows a snapshot of some of the signatures.

image shows a snapshot of some of signatures.

The image below shows how you would monitor your IDPS setup of Network Watcher and Suricata third-party software using Sentinel SIEM/SOAR.

image shows how you would monitor your IDPS set up of Netwatcher and Suricata third party software using Sentinel SIEM/SOAR.

image shows more details on how you would monitor your IDPS set up of Netwatcher and Suricata third party software using Sentinel SIEM/SOAR.

  • Example Evidence 3: The image below shows how to add an intrusion signature override or a bypass rule for intrusion detection using the CLI.

image shows how to add override intrusion signature or a bypass rule for intrusion detection using CLI

The image below shows how to list all intrusion detection configuration using CLI

image shows how to list all intrusion detection configuration using CLI

  • Example Evidence 4: Azure recently started to offer IDPS through Azure Firewall Premium, which allows configuration of TLS inspection, Threat Intelligence, and IDPS through policies. However, please note that you'll still need to use Front Door or Application Gateway for SSL offloading of inbound traffic, as Azure Firewall Premium doesn't support IDPS on inbound SSL connections.

In the example below, the default Premium settings have been used for configuration of the policy rules, and TLS inspection, IDPS mode, and Threat Intelligence have all been enabled, together with protection of the VNet.

Screenshot IDPS mode enabled

Screenshot of IDPS alerts turned on

Screenshot of TLS inspection enabled

Screenshot of threat intelligence enabled

Screenshot of DLS enabled

Screenshot of secured virtual networks

Security Event logging

Security event logging is an integral part of an organization's security program. Adequate logging of security events coupled with tuned alerting and review processes helps organizations to identify breaches or attempted breaches, which can be used by the organization to enhance security and defensive security strategies. Additionally, adequate logging will be instrumental to an organization's incident response capability, which can feed into other activities such as being able to accurately identify what and whose data has been compromised and the period of compromise, and to provide detailed analysis reports to government agencies, etc.

Control 55: Provide policy documentation for best practices and procedures that governs security event logging.

  • Intent: Security event logging is an important function of any organization's security program. Policies and procedures must be in place to provide clarity and consistency to help ensure organizations implement logging controls in line with vendor and industry recommended practices. This will help to ensure relevant and detailed logs are consumed which aren't only useful in identifying potential or actual security events, but they can also help an incident response activity identify the extent of a security breach.

  • Example Evidence Guidelines: Supply the organization's documented policy and procedure documents covering security event logging best practice.

  • Example Evidence: Below is an extract from the logging policy/procedure.

extract from the logging policy/procedure.

Note: This screenshot shows a policy/process document, the expectation is for ISVs to share the actual supporting policy/procedure documentation and not simply provide a screenshot.

Control 56: Provide demonstratable evidence that shows security event logging is set up across all sampled system components to log the following events:

  • User access to system components and the application

  • All actions taken by a high-privileged user

  • Invalid logical access attempts

  • Privileged account creation or modification

  • Event log tampering

  • Disabling of security tools, such as anti-malware or event logging

  • Anti-malware logging, such as updates, malware detection, and scan failures

  • IDPS and WAF events, if configured

  • Intent: To identify attempted and actual breaches, it's important that adequate security event logs are being collected by all systems that make up the environment. The intent of this control is to ensure the correct types of security events are being captured which can then feed into review and alerting processes to help identify and respond to these events.

  • Example Evidence Guidelines: Evidence by way of screenshots or configuration settings should be provided across all the sampled devices and any system components of relevance to demonstrate how logging is configured to provide assurance that these types of security events are captured.

  • Example Evidence 1: The following screenshot shows the configuration settings from one of the sampled devices called "VICTIM1-WINDOWS". The settings show various auditing settings enabled within the 'Local Security Policy → Local Policies → Audit Policy' settings.

The following screenshot shows the configuration settings from one of the sampled devices called "VICTIM1-WINDOWS".

This next screenshot shows an event where a user has cleared an event log from one of the sampled devices called "VICTIM1-WINDOWS".

screenshot shows an event where a user has cleared an event log from one of the sampled devices called "VICTIM1-WINDOWS".

This final screenshot shows the log message appear within the centralized logging solution.

screenshot shows the log message appear within the centralized logging solution.

Note: Screenshots are required across all the sampled system components AND MUST evidence all the security events detailed above.
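
Where the Windows audit policy settings shown above need to be exported from several sampled hosts, a script along the lines of the sketch below could capture them as text. It assumes the built-in auditpol utility and an elevated prompt.

```python
# Sketch only: export the effective Windows audit policy on a sampled host
# so the settings can be submitted as text evidence alongside screenshots.
import subprocess

result = subprocess.run(
    ["auditpol", "/get", "/category:*"],
    capture_output=True,
    text=True,
    check=True,
)

with open("audit_policy_evidence.txt", "w") as f:
    f.write(result.stdout)

print(result.stdout)
```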

Control 57: Provide demonstratable evidence that logged security events contain the following minimum information:

  • User

  • Type of event

  • Date and time

  • Success or failure indicators

  • Label that identifies the affected system

  • Intent: Logged security events need to provide enough information to help determine if attack traffic has been successful, what information has been accessed, to what level, who was responsible, where it originated, etc.

  • Example Evidence Guidelines: Evidence should show samples of logs from all system components showing these types of security events. The logs should include all the information listed above.

  • Example Evidence: The following screenshot shows the information from the security events within Windows Event Viewer from the in-scope system component "SEGSVR02".

The following screenshot shows the information from the security events within Windows Event Viewer from the in-scope system component "SEGSVR02".

Note: Screenshots are required across all the sampled system components AND MUST evidence all the security events detailed in the control above. It's likely that the evidence collected for the control above will also satisfy this control, provided adequate detail of the logging information was included.

Control 58: Provide demonstratable evidence that all sampled system components are time-synchronized to the same primary and secondary servers.

  • Intent: A critical component of logging is ensuring logs across all systems have system clocks that are all in sync. This is important when an investigation is needed to track a compromise and/or data breach. Tracking events through various systems can become near impossible if the logs have inconsistent timestamps, as important logs could be missed and correlation becomes difficult.

  • Example Evidence Guidelines: Ideally, a time synchronization topology should be maintained which shows how time is synchronized across the estate. Evidence can then be provided by way of screenshots of time synchronization settings across the sampled system components. This should show that all time synchronization is to the same primary (or if in place secondary) server.

  • Example Evidence: This diagram shows the time synchronization topology in use.

diagram shows the time synchronization topology in use.

The next screenshot shows the WatchGuard configured as an NTP Server and pointing to time.windows.com as its time source.

screenshot shows the WatchGuard configured as an NTP Server and pointing to time.windows.com as it's time source.

This final screenshot shows the in-scope system component, "CLARANET-SBU-WM" is configured for NTP to point to the primary server which is the WatchGuard Firewall (10.0.1.1).

screenshot shows the in-scope system component, "CLARANET-SBU-WM" is configured for NTP to point to the primary server which is the WatchGuard Firewall (10.0.1.1).
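
On Windows system components, the configured time source shown above can also be captured as text using the built-in w32tm utility; the following is a sketch only.

```python
# Sketch only: capture Windows time-synchronization status and configured
# peers on a sampled host to evidence the NTP topology.
import subprocess

for args in (["w32tm", "/query", "/status"], ["w32tm", "/query", "/peers"]):
    result = subprocess.run(args, capture_output=True, text=True)
    print(f"$ {' '.join(args)}\n{result.stdout or result.stderr}")
```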

Control 59: Provide demonstratable evidence when public facing systems are in use that security event logs are being sent to a centralized logging solution not within the perimeter network.

  • Intent: The intent with this control is to ensure a logical or physical separation between the DMZ and the logging endpoint. With the DMZ being public facing, this is exposed to external activity groups and therefore at more risk than other components within the environment. Should a DMZ component be compromised, the integrity of the logging data needs to be maintained to not only stop the activity group from tampering with the logs to hide the compromise but also to aid in any forensic investigation work that may be required. By logging to systems outside of the DMZ, security controls employed to restrict traffic from the DMZ to these security systems should help to protect them from malicious activities and tampering attempts.

  • Example Evidence Guidelines: Evidence should be provided with screenshots, or configuration settings, demonstrating that logs are configured to be immediately (or close to immediately) sent to a centralized logging solution that is outside of the DMZ. We're looking for almost immediate shipping of logs because the longer it takes for logs to be shipped to the centralized logging solution, the more time a threat actor would have to tamper with the local logs before shipping occurs.

  • Example Evidence: The Contoso DMZ systems utilize NXLog for shipping of log files. The following screenshot shows the 'nxlog' service running on the "DESKTOP-7S65PN" DMZ jumpbox which is used to manage all the DMZ servers.

shows the 'nxlog' service running on the "DESKTOP-7S65PN" DMZ jumpbox which is used to manage all the DMZ servers.

The following screenshot shows an extract from the nxlog.conf file, showing that the destination is an internal log collector within the Application Subnet on 10.0.1.250 which is used to ship to AlienVault.

screenshot shows an extract from the nxlog.conf file

The following URL for NXLog (https://nxlog.co/documentation/nxlog-user-guide/modes.html) shows that log shipping is in real-time through the following extract:

Screenshot of offline log processing
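
If it helps to evidence the configured destination as text rather than a screenshot, the relevant directives could be pulled from the NXLog configuration with a short script. This is a sketch only: the path is hypothetical and the directive names assume typical om_tcp/om_udp output blocks, so adjust it to your own configuration.

```python
# Sketch only: extract output module, host, and port directives from an
# NXLog configuration file to evidence where logs are shipped.
import re
from pathlib import Path

CONF = Path("/etc/nxlog.conf")  # hypothetical path; differs on Windows hosts

text = CONF.read_text()
for directive in ("Module", "Host", "Port"):
    values = re.findall(rf"^\s*{directive}\s+(\S+)", text, flags=re.MULTILINE)
    print(f"{directive}: {values}")
```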

Control 60: Provide demonstrable evidence to show that the centralized logging solution is protected against unauthorized tampering of logging data.

  • Intent: Although logical / physical separation is often in place between logging devices and the centralized logging solution, there's still a risk that someone could try to tamper with the logs to hide their activities. The intent of this control is to ensure adequate authorization mechanisms are in place to limit the number of users that can perform administrative actions against the centralized logging solution.

  • Example Evidence Guidelines: Evidence would usually be with screenshots showing the authorization and authentication configuration of the centralized logging solution, demonstrating that users are limited to those which are required for their job role/function.

  • Example Evidence: The Contoso outsourced SOC utilizes AlienVault as the centralized SIEM tooling. AlienVault was bought out by AT&T in 2018 and now goes by USM Anywhere. The following web page (https://cybersecurity.att.com/documentation/usm-anywhere/deployment-guide/admin/usm-anywhere-data-security.htm) discusses how USM Anywhere protects the data against unauthorized tampering. The following link (https://cybersecurity.att.com/documentation/usm-appliance/raw-logs/raw-log-management.htm) highlights how the USM Anywhere product also ensures the integrity of archived logs.

Note: If the SIEM is internal, evidence will need to be provided to demonstrate that access to the logging data is restricted to a select number of users based upon their job need and that the platform itself is protected against tampering (most solutions will build this into the functionality of the logging solution).

Control 61: Provide demonstrable evidence that a minimum of 30 days of security event logging data is immediately available, with 90 days of security event logs being retained.

  • Intent: Sometimes, there's a time difference between a compromise or security event and an organization identifying it. The intent of this control is to ensure the organization has access to historic event data to help with the incident response and any forensic investigation work that may be required.

  • Example Evidence Guidelines: Evidence will usually be screenshots of the centralized logging solution's configuration settings showing how long data is kept. 30 days' worth of security event logging data needs to be immediately available within the solution; where data is archived, the evidence needs to demonstrate that 90 days' worth is available. This could be by showing archive folders with dates of exported data.

  • Example Evidence 1: The following screenshot shows that 30 days' worth of logs are available within AlienVault.

screenshots shows that 30days worth of logs are available within AlienVault.

Note: Since this is a public facing document, the firewall serial number has been redacted; however, we wouldn't expect ISVs to submit redacted screenshots, unless they contain Personally Identifiable Information.

This next screenshot shows that logs are available by showing a log extract going back 5 months.

screenshot shows that logs are available by showing a log extract going back 5 months.

Note: Since this is a public facing document, the Public IP Addresses have been redacted; however, we wouldn't expect ISVs to submit redacted screenshots, unless they contain Personally Identifiable Information.

  • Example Evidence 2: The following screenshot shows that log events are kept for 30 days available live and 90 days in cold storage within Azure.

screenshot shows that log events are kept for 30 days available live and 90 days in cold storage within Azure.

Security Event Log Review

Reviewing security logs is an important function in helping organizations identify security events that may be indicative of a security breach or reconnaissance activities that may be an indication of something to come. This can either be done through a manual process on a daily basis, or by using a SIEM (Security Information and Event Management) solution which helps by analyzing audit logs, looking for correlations and anomalies which can be flagged for a manual inspection.

Control 62: Provide policy documentation that governs log review practices and procedures.

  • Intent: A report by IBM entitled "Cost of a Data Breach Report 2020" highlights that the average time to identify and contain a data breach can be 280 days; this is greater where the breach is by a malicious activity group, reported as 315 days. With the average cost of a data breach being reported to be in the millions of dollars, it's critical that this data breach lifecycle is reduced, not only to minimize the exposure window of data, but also to reduce the timeframe an activity group has to exfiltrate data from the environment. By reducing this window, organizations can reduce the overall cost of a data breach.

  • By implementing a robust reviewing and alerting process, organizations are better equipped to identify breaches sooner in the data breach lifecycle to minimize its impact to the organization. Additionally, a strong process may help to identify breach attempts, allowing organizations to bolster security defensive mechanisms to mitigate this increased threat to further reduce the chances of a compromise by the attack campaign.

  • Example Evidence Guidelines: Supply the organization's documented policy and procedure documents covering log review best practice.

  • Example Evidence: Below is an extract from the log review policy/procedure.

an extract from the log review policy/procedure.

Note: This screenshot shows a policy/process document, the expectation is for ISVs to share the actual supporting policy/procedure documentation and not simply provide a screenshot.

Control 63: Provide demonstrable evidence that logs are reviewed on a daily basis by a human or automated tooling to identify potential security events.

  • Intent: The intent of this control is to ensure that daily log reviews are being carried out. This is important to identify any anomalies that may not be picked up by the alerting scripts/queries that are configured to provide security event alerts.

  • Example Evidence Guidelines: Evidence would usually be provided by screenshot or a screenshare, demonstrating that log reviews are being conducted. This may be by way of forms which are completed each day, or by way of a JIRA or DevOps ticket with relevant comments being posted to show this is carried out daily. For example, a weekly JIRA ticket may be created, "Daily Log Review W/C 26th June 2021", and each day someone posts the results of the daily log review. If any anomalies are flagged, these can be documented within this same ticket to demonstrate the next control, all in a single JIRA ticket.

  • If automated tooling is being used, then screenshot evidence can be provided to demonstrate the configured automation, together with additional evidence showing that the automation is running and that someone is reviewing the automated output (a minimal automation sketch follows the evidence examples below).

  • Example Evidence: Contoso utilizes a third-party SOC provider, Claranet Cyber Security, for log correlation and reviews. AlienVault is used by the SOC provider which has the capabilities of providing automated log analysis for anomalous logs and chained events that could highlight a potential security event. The following three screenshots show correlation rules within AlienVault.

This first screenshot identifies where a user has been added to the 'Domain Admins' group.


This next screenshot identifies where multiple failed sign in attempts are then followed by a successful sign in which may highlight a successful brute force attack.


This final screenshot identifies where a password policy change has occurred that sets account passwords to never expire.


This next screenshot shows that a ticket is automatically raised within the SOC's ServiceNow tool when the rule above is triggered.

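If daily reviews are tracked in a ticketing tool, the daily entry itself can be partially automated. The sketch below shows one hypothetical way to append the result of a daily log review as a comment on a weekly JIRA ticket; the Jira URL, ticket key, credentials, and REST endpoint version are assumptions, so confirm them against your own Jira instance's documentation before relying on this.

```python
"""Minimal sketch: append the daily log review result as a comment on a weekly
JIRA ticket. All identifiers below (instance URL, ticket key, service account)
are hypothetical placeholders."""
from datetime import date
import requests

JIRA_BASE = "https://contoso.atlassian.net"              # hypothetical Jira instance
TICKET = "SEC-123"                                       # hypothetical weekly review ticket
AUTH = ("svc-logreview@contoso.com", "<api-token>")      # hypothetical service account

summary = (
    f"Daily log review {date.today():%d %b %Y}: "
    "SIEM dashboards and correlation alerts reviewed, no anomalies identified."
)

# Jira Cloud REST API v2 accepts a plain-text comment body
resp = requests.post(
    f"{JIRA_BASE}/rest/api/2/issue/{TICKET}/comment",
    json={"body": summary},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()
print("Comment added with id", resp.json().get("id"))
```

The resulting stream of dated comments on the ticket is exactly the kind of daily evidence described above.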

Control 64: Provide demonstrable evidence that potential security events and anomalies are investigated and remediated.

  • Intent: The intention is that any anomalies that are identified during the daily log review process are investigated, and appropriate remediation or action is carried out. This will usually involve a triage process to identify if the anomalies require action and then may need to invoke the incident response process.

  • Example Evidence Guidelines: Evidence should be provided with screenshots which demonstrate that anomalies identified as part of the daily log review are followed up on. As already discussed above, this may be through JIRA tickets showing an anomaly being flagged and then detailing what activities were carried out afterwards. This may prompt a specific JIRA ticket being raised to track all activities being carried out, or it may just be documented within the daily log review ticket. If an incident response action is required, then this should be documented as part of the incident response process and evidence should be provided to demonstrate this.

  • Example Evidence: The following screenshot example shows a security alert being tracked within ServiceNow by the Claranet Cyber Security MDR (Managed Detection and Response) SOC.


This next screenshot shows confirmation that this has been resolved by David Ashton @ Contoso through an update within the ServiceNow customer portal.


Security Event Alerting

Critical security events need to be immediately investigated to minimize the impact to the data and operational environment. Alerting helps to immediately highlight potential security breaches to staff to ensure a timely response so the organization can contain the security event as quickly as possible. By ensuring alerting is working effectively, organizations can minimize the impact of a security breach, thus reducing the chance of a serious breach which could damage the organization's brand and impose financial losses through fines and reputational damage.

Control 65: Provide policy documentation that governs security event alerting practices and procedures.

  • Intent: Alerting should be used for key security events which require an immediate response from an organization, as there's the potential of the event being indicative of an environment breach and/or a data breach. A strong, documented process around alerting should be in place to ensure this is carried out in a consistent and repeatable way. This will hopefully help to reduce the "data breach lifecycle" timeline.

  • Example Evidence Guidelines: Supply the organization's documented policy and procedure documents covering security event alerting best practice.

  • Example Evidence: Below is an extract from the security event alerting policy/procedure. Please supply the full policy and procedure documents to support your assessment.

An extract from the security event alerting policy/procedure.

An expanded extract from the security event alerting policy/procedure.

Note: This screenshot shows a policy/process document, the expectation is for ISVs to share the actual supporting policy/procedure documentation and not simply provide a screenshot.

Control 66: Provide demonstrable evidence that alerts are triggered for immediate triage for the following types of security events:

  • Privileged account creation or modifications

  • Virus or malware events

  • Event log tampering

  • IDPS or WAF events, if configured

  • Intent: Above is a list of some types of security events which could indicate that a security incident has occurred and which may point to an environment breach and/or data breach.

  • Example Evidence Guidelines: Evidence should be provided with screenshots of the alerting configuration AND evidence of the alerts being received. The configuration screenshots should show the logic that is triggering the alerts and how the alerts are sent. Alerts can be sent via SMS, Email, Teams channels, Slack channels, etc. (a minimal alerting sketch follows the evidence examples below).

  • Example Evidence: Contoso utilizes a third-party SOC provided by Claranet Cyber Security. The following example shows that alerting within AlienVault, utilized by the SOC, is configured to send an alert to a member of the SOC Team, Dan Turner at Claranet Cyber Security.

This next screenshot shows an alert being received by Dan.
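Where alerting is built in-house rather than provided by a SIEM or SOC, a common pattern is to push matching log events straight into a monitored channel. The sketch below is illustrative only: the log file path, the matched phrases, and the incoming-webhook URL are hypothetical and should be replaced with the event types and alert channel defined in your own alerting policy.

```python
"""Minimal sketch: raise an immediate alert when a monitored log line indicates
one of the security events listed in Control 66. Paths, patterns, and the
webhook URL are hypothetical placeholders."""
import requests

WEBHOOK_URL = "https://hooks.example.com/alerts/soc-channel"   # hypothetical incoming webhook
WATCHED_EVENTS = (
    "added to group 'domain admins'",   # privileged account modification
    "malware detected",                 # virus or malware event
    "audit log cleared",                # event log tampering
)

def review_line(line: str) -> None:
    """Post the line to the SOC alert channel if it matches a watched pattern."""
    lowered = line.lower()
    if any(pattern in lowered for pattern in WATCHED_EVENTS):
        requests.post(WEBHOOK_URL, json={"text": f"SECURITY ALERT: {line.strip()}"}, timeout=10)

with open("/var/log/security-events.log", encoding="utf-8") as log:   # hypothetical log source
    for line in log:
        review_line(line)
```

Screenshots of both this configuration and the alert arriving in the monitored channel would then satisfy the "configuration AND receipt" expectation above.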

Control 67: Provide demonstrable evidence showing that staff are always available, all day, every day, to respond to security alerts.

  • Intent: It's important that security alerts are triaged as soon as possible to limit exposure to the environment and/or data. Staff must always be available to respond to alerts and provide critical investigative work if a breach is identified. The quicker this process starts, the quicker the security incident can be contained to protect the data or to limit the impact of the breach.
  • Example Evidence Guidelines: Evidence should be provided which demonstrates members of staff are available 24 hours a day to respond to security alerts. This may be with an on-call rota.
  • Example Evidence: The following screenshot shows an on-call rota for December 2020 for Contoso. The Claranet Cyber Security SOC team would alert members of the Contoso on-call team.


Information Security Risk Management

Information Security Risk Management is an important activity that all organizations should carry out at least annually. Organizations must understand their threats and risks to effectively mitigate these threats. Without effective risk management, organizations may implement security best practices in areas which they perceive to be important, and therefore invest resources, time, and money in these areas, when other threats are much more likely and therefore should be mitigated first. Effective risk management will help organizations to focus on risks that pose the most threat to the business. This should be carried out annually, as the security landscape is ever changing and therefore threats and risks can change over time. A good example of this can be seen with COVID-19, which saw a massive increase in phishing attacks and the mass (and fast) rollout of remote working for hundreds or thousands of workers.

Control 68: Provide demonstrable evidence that a formal information security risk management process is established.

  • Intent: As we've discussed above, a robust information security risk management process is important to help organizations manage risks effectively. This will help organizations plan effective mitigations against threats to the environment.

It's important that the risk assessment includes Information Security Risk and not just general "business" risks.

  • Example Evidence Guidelines: The formally documented risk assessment management process should be supplied.
  • Example Evidence: The following evidence is a screenshot of part of Contoso's risk assessment process.

The following evidence is a screenshot of the expanded part of Contoso's risk assessment process.

Note: This screenshot shows a policy/process document, the expectation is for ISVs to share the actual supporting policy/procedure documentation and not simply provide a screenshot.

Control 69: Provide demonstrable evidence that a formal risk assessment occurs annually, at a minimum.

  • Intent: Security threats are constantly changing based on changes to the environment, changes to the services offered, external influences, evolution of the security threat landscape, etc. Organizations need to go through this process at least annually. It's recommended that this process is also carried out upon significant changes, as threats can change.

  • Example Evidence Guidelines: Evidence may be by way of version tracking or dated evidence. Evidence should be provided which shows the output of the information security risk assessment and NOT dates on the information security risk assessment process itself.

  • Example Evidence: This screenshot shows a risk assessment meeting being scheduled for every six months.

These two screenshots show the meeting minutes from two risk assessment meetings.

Screenshot shows the meeting minutes from one of the risk assessment meetings.

Screenshot shows additional meeting minutes from the risk assessment meetings.

Control 70: Provide demonstrable evidence that the information security risk assessment includes threats, vulnerabilities, or the equivalent.

  • Intent: Information security risk assessments should be carried out against threats against the environment and data, and against possible vulnerabilities which may be present. This will help organizations identify the myriad of threats/vulnerabilities which can pose a significant risk.
  • Example Evidence Guidelines: Evidence should be provided by way of not only the information security risk assessment process already supplied, but also the output of the risk assessment (by way of a risk register / risk treatment plan) which should include risks and vulnerabilities.
  • Example Evidence: The following screenshot shows the risk register which demonstrates threats and vulnerabilities are included.


Note: The full risk assessment documentation should be provided instead of a screenshot.

Control 71: Provide demonstrable evidence that the information security risk assessment includes impact, likelihood risk matrix, or the equivalent.

  • Intent: Information security risk assessments should document impact and likelihood ratings. These matrices will usually be used to help identify a risk value which can be used by the organization to prioritize the risk treatment to help reduce the risk value.
  • Example Evidence Guidelines: Evidence should be provided by way of not only the information security risk assessment process already supplied, but also the output of the risk assessment (by way of a risk register / risk treatment plan) which should include impact and likelihood ratings (a minimal scoring sketch follows the evidence example below).
  • Example Evidence: The following screenshot shows the risk register which demonstrates impact and likelihoods are included.

A risk register.

Note: The full risk assessment documentation should be provided instead of a screenshot.
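To illustrate how impact and likelihood ratings typically combine into a risk value, the sketch below uses a simple 5x5 matrix. The scales, thresholds, and treatment bands are illustrative assumptions only; use the values defined in your own documented risk management process.

```python
"""Minimal sketch: a 5x5 impact/likelihood matrix of the kind commonly used in
a risk register. Scales and thresholds are illustrative only."""

def risk_score(likelihood: int, impact: int) -> int:
    """Both ratings run from 1 (very low) to 5 (very high)."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be between 1 and 5")
    return likelihood * impact

def risk_band(score: int) -> str:
    """Map a raw score onto a treatment priority band (illustrative thresholds)."""
    if score >= 15:
        return "High - treat/mitigate immediately"
    if score >= 8:
        return "Medium - plan treatment"
    return "Low - may be accepted/tolerated"

# Example: phishing leading to credential theft, rated likelihood 4 and impact 4
score = risk_score(likelihood=4, impact=4)
print(score, "->", risk_band(score))   # 16 -> High - treat/mitigate immediately
```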

Control 72: Provide demonstrable evidence that the information security risk assessment includes a risk register and treatment plan.

  • Intent: Organizations need to manage risks effectively. Risks need to be properly tracked to provide a record of which of the four risk treatments has been applied. The risk treatments are:
  • Avoid/Terminate: The business may determine that the cost of dealing with the risk is more than the revenue generated from the service. The business may therefore choose to stop performing the service.
  • Transfer/Share: The business may choose to transfer the risk by moving processing to a third party.
  • Accept/Tolerate/Retain: The business may decide the risk is acceptable. This depends very much on the business's risk appetite and can vary by organization.
  • Treat/Mitigate/Modify: The business decides to implement mitigating controls to reduce the risk to an acceptable level.
  • The intent of this control is to gain assurance that the organization is performing the risk assessment and acting upon this accordingly.
  • Example Evidence Guidelines: The risk treatment plan / risk register (or something equivalent) should be provided to demonstrate that the risk assessment process is being carried out properly.
  • Example Evidence: Below is a risk register for Contoso.


Note: The full risk assessment documentation should be provided instead of a screenshot.

The following screenshot demonstrates a risk treatment plan.


Security Incident Response

A Security Incident Response is important for all organizations since it can reduce the time an organization spends containing a security incident, limiting the organization's level of exposure to data exfiltration. By developing a comprehensive and detailed security incident response plan, this exposure can be reduced from the time of identification to the time of containment.

A report by IBM entitled "Cost of a data breach Report 2020" highlights that, on average, the time taken to contain a breach was 73 days. Additionally, the same report identifies that the biggest cost saver for organizations that suffered a breach was incident response preparedness, providing an average cost saving of $2,000,000.

Organizations should be following best practices for security compliance using industry standard frameworks such as ISO 27001, NIST, SOC 2, PCI DSS etc.

Control 73: Provide the security incident response plan (IRP).

  • Intent: As already discussed, the intent of this control is to require a formally documented incident response plan. This will help to manage a security incident response more efficiently, which can ultimately limit the organization's data loss exposure and reduce the costs of the compromise.
  • Example Evidence Guidelines: Provide the full version of the incident response plan/procedure. This should include a documented communications process which is covered in the next control.
  • Example Evidence: The below screenshot shows the start of Contoso's incident response plan. As part of your evidence submission, you must supply the entire incident response plan.


Note: This screenshot shows a policy/process document, the expectation is for ISVs to share the actual supporting policy/procedure documentation and not simply provide a screenshot.

Control 74: Provide demonstrable evidence that the security IRP includes a documented communication process to ensure timely notification to key stakeholders, such as payment brands and acquirers, regulatory bodies, supervisory authorities, directors, and customers.

  • Intent: Organizations may have breach notification obligations based upon the country/countries they operate in (for example, the General Data Protection Regulation; GDPR), or based upon functionality being offered (for example, PCI DSS if payment data is handled). Failure of timely notification can carry serious ramifications, therefore, to ensure notification obligations are met, incident response plans should include a communication process including communication with all stakeholders, media communication processes and who can and can't speak to the media.
  • Example Evidence Guidelines: Provide the full version of the incident response plan/procedure which should include a section covering the communication process.
  • Example Evidence: The following screenshot shows an extract from the incident response plan showing the communication process.


Control 75: Provide demonstrable evidence that all members of the incident response team have completed annual training or a tabletop exercise.

  • Intent: As already discussed earlier, the longer it takes for an organization to contain a compromise, the greater the risk of data exfiltration, potentially resulting in a larger volume of exfiltrated data and a greater overall cost of the compromise. It's important that an organization's incident response team is equipped to respond to security incidents in a timely manner. Undertaking regular training and carrying out tabletop exercises equips the team to handle security incidents quickly and efficiently.

  • The recommendation is to carry out both internal incident response training for the incident response team AND to carry out regular tabletop exercises, which should link to the information security risk assessment to identify the security incidents that are most likely to occur. This way, the team will know what steps to take to contain and investigate the most likely security incidents, quickly.

  • Example Evidence Guidelines: Evidence should be provided which demonstrates that training has been carried out, by sharing the training content and records showing who attended (which should include all of the incident response team). Alternatively, or in addition, provide records showing that a tabletop exercise has been carried out. All of this must have been completed within the 12 months prior to the evidence being submitted.

  • Example Evidence: Contoso carried out an incident response tabletop exercise using an external security company called Claranet Cyber Security. Below is a sample of the report generated as part of the consultancy.

screenshot shows an extract from the incident response report generated by Claranet for Contoso1

screenshot shows an extract from the incident response report generated by Claranet for Contoso2

screenshot shows an extract from the incident response report generated by Claranet for Contoso3

Note: The full report would need to be shared. This exercise could also be carried out internally as there's no Microsoft 365 requirement for this to be carried out by a third-party company.

Control 76: Provide demonstrable evidence to show the security IRP is updated based on lessons learned or organizational changes.

  • Intent: Over time, the incident response plan (IRP) should evolve based upon organizational changes or based upon lessons learned when enacting the IRP. Changes to the operating environment may require changes to the IRP as the threats may change, or regulatory requirements may change. Additionally, as tabletop exercises and actual security incidents responses are carried out, this can often identify areas of the IRP that can be improved. This needs to be built into the plan and the intent of this control is to ensure that this process is included within the IRP.

  • Example Evidence Guidelines: This will often be evidenced by reviewing the results of security incidents or tabletop exercises where lessons learned have been identified and resulted within an update to the IRP. The IRP should maintain a changelog, which should also reference changes which were implemented based on lessons learned, or organizational changes.

  • Example Evidence: The following screenshots are from the supplied IRP which includes a section on updating the IRP based on lessons learned and/or organization changes.

screenshots are from the supplied IRP based on lessons learned and/or organization changes1

screenshots are from the supplied IRP based on lessons learned and/or organization changes2

The IRP change log shows an update being made on the back of the tabletop exercise carried out in July of 2021.

screenshots are from the supplied IRP based on lessons learned and/or organization changes3

Security Domain: Data Handling Security and Privacy

This security domain is included to ensure that any data consumed from Microsoft 365 is adequately protected both in transit and at rest. This domain also ensures consumers' (data subjects') privacy concerns are being met by the ISV, in line with the General Data Protection Regulation (GDPR), which is concerned with the privacy of EU citizens.

Data in Transit

Due to the connectivity requirements of Microsoft 365 developed apps/add-ins, communication will occur over public networks, namely the Internet. For this reason, data in transit needs to be suitably protected. This section covers the protection of data communications over the Internet.

Control 1: Provide demonstrable evidence that TLS configuration meets or exceeds the encryption requirements within the TLS Profile Configuration Requirements.

  • Intent: The intention of this control is to ensure that Microsoft 365 Data that is being consumed by your organization is transmitted securely. The TLS Profile Configuration defines TLS specific requirements to help ensure traffic is secure against man-in-the-middle attacks.

  • Example Evidence Guidelines: The easiest way to evidence this is to run the Qualys SSL Server Test tool against ALL web listeners, including any that run on nonstandard ports.

  • Remember to tick the "Do not show the results on the boards" option, which stops the URL from being added to the website.

  • You can also provide evidence to demonstrate the individual checks within the TLS Profile Configuration Requirements. Configuration settings can be used, along with scripts and software tools, to help provide evidence of some of the specific settings, that is, that TLS compression is disabled (a minimal TLS check sketch follows the evidence examples below).

  • Example Evidence: The below screenshot shows the results for the www.clara.net:443 web listener.

Screenshot shows the results for the www.clara.net web listener (1 of 2).

Screenshot shows the results for the www.clara.net web listener (2 of 2).

Note: The Certification Analysts will review the full output to confirm all requirements of the TLS Profile Configuration Requirements are met (please provide screenshots of the full scan output). Depending on what evidence has been provided, the analysts may run their own Qualys scan.

  • Example Evidence 2: The following screenshot shows that TLS 1.2 is configured on the storage.

screenshot shows that TLS 1.2 is configured on the storage1

Note: This screenshot alone wouldn't be able to satisfy this requirement.

  • Example Evidence 3: The following screenshots show that only TLS 1.3 is enabled on the server.


This example uses the Registry Keys to disable or enable a protocol by adjusting the values as follows:

Binary: 0 - off 1 - on

Hexadecimal: 0x00000000 - off 0xffffffff - on

Please note: Don't use this methodology if you don't understand it, as we (Microsoft) aren't responsible for you using or following this example or any effects its use may have on your systems. It's here merely to illustrate another way to show whether TLS is enabled or disabled.

Screenshot to display another way to show whether TLS is enabled or disabled1

Screenshot to display another way to show whether TLS is enabled or disabled2

Note: These screenshots alone wouldn't be able to satisfy this requirement.
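As a supplement to a full Qualys SSL Labs scan, the negotiated protocol, cipher, and compression status of a listener can also be captured with a short script. The following is a minimal sketch using Python's standard library; the hostname is simply the example listener shown above, and this output supports rather than replaces the full scan evidence.

```python
"""Minimal sketch: record the negotiated TLS protocol, cipher, and compression
status for a web listener. The hostname is an example only."""
import socket
import ssl

HOST, PORT = "www.clara.net", 443   # example listener from the evidence above

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse anything older than TLS 1.2

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated protocol:", tls.version())       # e.g. 'TLSv1.3'
        print("Negotiated cipher:  ", tls.cipher())
        print("TLS compression:    ", tls.compression())   # None means compression is disabled (see Control 2)
```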

Control 2: Provide demonstrable evidence that TLS compression is disabled across all public-facing services that handle web requests.

  • Intent: There's a specific TLS vulnerability, CRIME (CVE-2012-4929), which affects TLS compression. For this reason, industry recommendations are to turn this functionality off.

  • Example Evidence Guidelines: This can be evidenced through the Qualys SSL Labs tool.

  • Example Evidence: The following screenshot shows this through the Qualys SSL Labs tool.


Control 3: Provide demonstrable evidence that TLS HTTP Strict Transport Security is enabled and configured to >= 15552000 across all sites.

  • Intent: HTTP Strict Transport Security (HSTS) is a security mechanism designed to protect websites against man-in-the-middle attacks by forcing TLS connections by way of a HTTPS response header field named "Strict-Transport-Security".

  • Example Evidence Guidelines: This can be evidenced through the Qualys SSL Labs tool, or through other tools and web browser add-ins (a minimal header check sketch follows the evidence example below).

  • Example Evidence: The following screenshot shows this through a web browser add-in called 'HTTP Header Spy' for the www.microsoft.com website.

Screenshot shows the Strict-Transport-Security response header displayed by the 'HTTP Header Spy' web browser add-in.
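The same header can also be checked programmatically. The sketch below is a minimal, illustrative check of the Strict-Transport-Security max-age value; the target URL is just the example site used above, and all of your own sites would need to be checked.

```python
"""Minimal sketch: verify that the Strict-Transport-Security header is present
with a max-age of at least 15552000 seconds. The URL is an example only."""
import re
import requests

URL = "https://www.microsoft.com"   # example site from the evidence above
MIN_MAX_AGE = 15_552_000            # minimum value required by the control

hsts = requests.get(URL, timeout=10).headers.get("Strict-Transport-Security", "")
match = re.search(r"max-age=(\d+)", hsts)
max_age = int(match.group(1)) if match else 0

print("Header value:", hsts or "<missing>")
print("Requirement met" if max_age >= MIN_MAX_AGE else "Requirement NOT met")
```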

Data at Rest

When data consumed from the Microsoft 365 platform is stored by ISVs, data needs to be suitably protected. This section covers protection requirements of data stored within databases and file stores.

Control 4: Provide demonstrable evidence that data at rest is encrypted in line with the encryption profile requirements, using encryption algorithms such as AES, Blowfish, and TDES, and encryption key sizes of 128-bit and 256-bit.

  • Intent: Some older encryption algorithms are known to contain some cryptographic weaknesses which increases the chances of an activity group being able to decrypt the data without knowledge of the key. For this reason, the intent of this control is to ensure only industry accepted encryption algorithms are used to protect stored Microsoft 365 data.

  • Example Evidence Guidelines: Evidence can be provided by way of screenshots showing the encryption being employed to protect Microsoft 365 data within databases and other storage locations. The evidence should demonstrate that the encryption configuration is in line with the Encryption Profile Configuration Requirements of the Microsoft 365 Certification (a minimal application-level encryption sketch follows the evidence examples below).

  • Example Evidence: The following screenshot shows that TDE (Transparent Data Encryption) is enabled on the Contoso Database. The second screenshot shows the Microsoft doc page 'Transparent data encryption for SQL Database, SQL Managed Instance, and Azure Synapse Analytics' showing that AES 256 encryption is used for Azure TDE.

screenshot shows that TDE (Transparent Data Encryption) is enabled on the Contoso Database

screenshot shows AES 256 encryption is used for Azure TDE

  • Example Evidence 2: The following screenshot shows Azure Storage configured with encryption for blobs and files. The following screenshot shows the Microsoft Docs page "Azure Storage encryption for data at rest" showing that Azure Storage uses AES-256 for encryption.

Screenshot shows Azure Storage configured with encryption for blobs and files

screenshot shows that Azure Storage uses AES-256 for encryption
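Managed services such as Azure TDE and Azure Storage encryption handle encryption at rest once enabled, as the screenshots above show. Where an ISV encrypts data itself, the sketch below shows one illustrative way to apply an industry-accepted algorithm and key size (AES-256-GCM) using the third-party 'cryptography' Python package; it's a hedged sketch, not a prescribed implementation, and key management (for example, a key vault) is deliberately out of scope here.

```python
"""Minimal sketch: application-level encryption of a record with AES-256-GCM,
using the third-party 'cryptography' package (pip install cryptography).
Key storage/management is deliberately omitted."""
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key, in line with the encryption profile
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # unique 96-bit nonce per message
plaintext = b"example Microsoft 365 data record"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext
print(f"Encrypted {len(plaintext)} bytes into {len(ciphertext)} bytes (includes 16-byte auth tag)")
```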

Control 5: Provide demonstrable evidence that a hash function or message authentication code (HMAC-SHA1) is only used to protect data at rest in line with the encryption profile requirements.

  • Intent: As with encryption algorithms, some hash functions and message authentication algorithms are based upon algorithms with cryptographic weaknesses. The intent of this control is to ensure that Microsoft 365 data is protected with strong hash functions if hashing is being used as a data protection mechanism. If this isn't used by the environment and/or application, evidence needs to be provided which can corroborate this.

  • Example Evidence Guidelines: Evidence may be in the form of screenshots showing snippets of code where the hashing function is used (a minimal sketch of such a snippet follows the evidence example below).

  • Example Evidence: Contoso utilizes hashing functionality within its application. The below screenshot demonstrates that SHA256 is utilized as part of the hashing function.

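For illustration, a code snippet of the kind that could be screenshotted as evidence might look like the sketch below. It uses SHA-256 and HMAC-SHA256 from the Python standard library; the record value and the key source are hypothetical.

```python
"""Minimal sketch: hashing a stored field with SHA-256 and HMAC-SHA256.
The record value and key source are illustrative placeholders."""
import hashlib
import hmac

record = b"user@contoso.com"

# Plain SHA-256 digest of a stored field
digest = hashlib.sha256(record).hexdigest()

# HMAC-SHA256, keyed with a secret held outside the data store (hypothetical key)
mac = hmac.new(key=b"<secret-from-key-vault>", msg=record, digestmod=hashlib.sha256).hexdigest()

print("SHA-256:    ", digest)
print("HMAC-SHA256:", mac)
```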

Control 6: Provide an inventory of all stored data, including storage location and encryption used to protect the data.

  • Intent: To properly protect data, organizations need to be aware of what data their environment / systems are consuming and where the data is being stored. Once this is fully understood and documented, organizations are able not only to implement adequate data protection, but also to consolidate where the data is located to implement protection more effectively. Additionally, when data is consolidated into as few places as possible, it's easier to implement adequate role-based access control (RBAC) to limit access to as few employees as necessary.

  • Example Evidence Guidelines: Evidence should be provided by way of a document or export from an internal system, that is, SharePoint or Confluence, detailing all the data consumed, all storage locations, and what level of encryption is implemented (a minimal machine-readable sketch follows the evidence example below).

  • Example Evidence: The following screenshot shows an example of what a document showing data types could look like.

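A data inventory can equally be maintained in machine-readable form and exported as evidence. The sketch below shows one illustrative structure; the data types, storage locations, and encryption entries are examples only and should mirror your own documented inventory.

```python
"""Minimal sketch: a machine-readable data inventory covering data type,
storage location, and encryption at rest. All entries are illustrative."""
import csv
import io

INVENTORY = [
    # (data type,           storage location,                      encryption at rest)
    ("User profile data",   "Azure SQL - contoso-prod-db",         "TDE (AES-256)"),
    ("Uploaded documents",  "Azure Blob - contosodocs container",  "Storage Service Encryption (AES-256)"),
    ("Application logs",    "Log Analytics workspace",             "Microsoft-managed keys (AES-256)"),
]

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Data type", "Storage location", "Encryption at rest"])
writer.writerows(INVENTORY)
print(buffer.getvalue())
```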

Data Retention and Disposal

Where ISVs consume and store Microsoft 365 data, this will be at risk of a data compromise should an activity group compromise the ISV environment. To minimize this risk, organizations should only be keeping data they need to deliver services, and not data that "may" be of use in the future. Additionally, data should only be kept for as long as is needed to provide the services the data was captured for. Data retention periods should be defined and communicated to users. Once data exceeds the defined retention period, it must be securely deleted so the data can't be reconstructed or recovered.

Control 7: Provide demonstrable evidence that an approved and documented data retention period is formally established.

  • Intent: A documented and followed retention policy is important not only to meet some legal obligations, for example data privacy legislation such as, but not limited to, the General Data Protection Regulation (EU GDPR) and the Data Protection Act (UK DPA 2018), but also to limit an organization's risk. By understanding the organization's data requirements and how long data is needed for the business to perform its functions, organizations can ensure that data is properly disposed of once its usefulness expires. By reducing the volume of data stored, organizations reduce the amount of data that would be exposed should a data compromise occur. This will limit the overall impact.

  • Often organizations will store data simply because it's "nice to have just in case"; however, if the organization doesn't need the data to perform its service or business function, then the data shouldn't be stored, as this increases the organization's risk unnecessarily.

  • Example Evidence Guidelines: Supply the full data retention policy which clearly details how long data (must cover all data types) should be kept for so the business can perform its business functions.

  • Example Evidence: The screenshot below shows Contoso's data retention policy.

screenshot below shows Contoso's data retention policy1

screenshot below shows Contoso's data retention policy2

Note: This screenshot shows a policy/process document, the expectation is for ISVs to share the actual supporting policy/procedure documentation and not simply provide a screenshot.

Control 8: Provide demonstrable evidence that retained data matches the defined retention period.

  • Intent: The intent of this control is to simply validate that the defined data retention periods are being met. As already discussed, organizations may have a legal obligation to meet this, but keeping only the data that is necessary, and only for as long as is necessary, also helps to reduce the risk to the organization should a data breach occur.

  • Example Evidence Guidelines: Provide screenshot evidence (or via screenshare) showing that stored data (in all the various data locations, that is, databases, file shares, archives, etc.) doesn't exceed the defined data retention policy. Examples could be screenshots of database records with a date field, searched in oldest record order, and/or file storage locations showing timestamps that are within the retention period.

Note: Any personal/sensitive customer data should be redacted within the screenshot.

  • Example Evidence: The following evidence shows a SQL query displaying the contents of the database table ordered in ascending order on the 'DATE_TRANSACTION' field to show the oldest records within the database. This shows the oldest data is two months old, which doesn't exceed the defined retention period.


Note: This is a test database, therefore there isn't a lot of historical data within it.

Control 9: Provide demonstrable evidence that processes are in place to securely delete data after its retention period.

  • Intent: The intent of this control is to ensure that the mechanism used to delete data which exceeds the retention period is doing so securely. Deleted data can sometimes be recovered; therefore, the deletion process needs to be robust enough to ensure data can't be recovered once deleted.

  • Example Evidence Guidelines: If the deletion process is done programmatically, then provide a screenshot of the script that is used to perform this. If it's executed on a schedule, provide a screenshot showing the schedule. For example, a script to delete files within a file share may be configured as a CRON job, screenshot the CRON job showing the schedule and script which is executed and provide the script showing the command used.

  • Example Evidence 1: This is a simple script which could be used to delete all retained data records based on date. The WHERE clause uses DateAdd with -30 days, which purges all retained records older than 30 days past the selected data retention date. Note that we'll need the script, but also evidence of the job being run and its results (a minimal purge-job sketch follows the evidence examples below).

Screenshot of the script which could be used to delete all data records retained based on date

  • Example Evidence 2: The below has been taken from the Contoso Data Retention Plan referenced in Control 7 – this shows the procedures used for data destruction.

Screenshot of the Contoso Data Retention Plan from Control 7.

Note: This screenshot shows a policy/process document, the expectation is for ISVs to share the actual supporting policy/procedure documentation and not simply provide a screenshot.

  • Example Evidence 3: In this example, a Runbook and a corresponding schedule have been created in Azure to securely delete records 30 days after the expiry of the data record retention period. This job is set to run every month on the last day of the month.

Screenshot of Data Retention Runbook

The window below shows that the Runbook has been edited to find records; the delete commands are not in view, unlike the script. Note that the full URL and username must be in view for these screenshots, and ISVs will be required to show a screenshot of the record count before deletion and a screenshot of the record count after deletion. These screenshots are purely examples of the different ways this can be approached.

Screenshot shows the Runbook has been edited to find records and has delete commands not in view like the script.

Screenshot shows the pane in Runbook where properties can be edited.
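To illustrate the kind of scheduled purge job, and the before/after record counts this control asks for, the sketch below uses Python's built-in sqlite3 module. The database, table, and column names are hypothetical; the same pattern applies to a SQL Agent job, a CRON job, or an Azure Automation Runbook.

```python
"""Minimal sketch: a scheduled purge that deletes records more than 30 days
past their retention date and prints before/after counts as run evidence.
Database, table, and column names are hypothetical."""
import sqlite3
from datetime import datetime, timedelta

GRACE_DAYS = 30                 # purge records 30 days past the retention date
DB_PATH = "contoso.db"          # hypothetical database

cutoff = datetime.utcnow() - timedelta(days=GRACE_DAYS)

with sqlite3.connect(DB_PATH) as conn:
    before = conn.execute("SELECT COUNT(*) FROM transactions").fetchone()[0]
    deleted = conn.execute(
        "DELETE FROM transactions WHERE retention_date < ?", (cutoff.isoformat(),)
    ).rowcount
    after = conn.execute("SELECT COUNT(*) FROM transactions").fetchone()[0]

# The printed counts provide the before/after deletion evidence described above
print(f"{datetime.utcnow():%Y-%m-%d %H:%M} purge run: before={before} deleted={deleted} after={after}")
```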

Data Access Management

Data access needs to be limited to as few people as required to reduce the chances of data being either maliciously or accidentally compromised. Access to data and encryption keys should be limited to users with a legitimate business need for access to fulfill their job role. This should be well documented, and a well-established process to request access should be implemented. Access to data and encryption keys should follow the least privilege principle.

Control 10: Provide a list of all individuals with access to data or encryption keys, including the business justification.

  • Intent: Organizations should limit access to data and encryption keys to as few employees as possible. The intent of this control is to ensure employee access to data and/or encryption keys are restricted to employees with a clear business need for said access.

  • Example Evidence Guidelines: Documentation or screenshots of internal systems which document all employees with access to data and/or encryption keys along with the business justification of why these individuals have access should be provided. This list will be used by the certification analyst to sample users for the next controls.

  • Example Evidence: The following document shows the documented list of users with access to data and the business justification.

A sample list of individuals with roles and required access privileges.

Control 11: Provide demonstrable evidence that the sampled individuals who have access to data or encryption keys were formally approved, detailing the privileges required for their job function.

  • Intent: The process for granting access to data and/or encryption keys needs to include approval, ensuring that an individual's access is required for their job function. This ensures that employees without a genuine need for access aren't given unnecessary access.

  • Example Evidence Guidelines: Typically, the evidence provided for the previous control can help to support this control. If there isn't a formal approval on the supplied documentation, then evidence may consist of a change request being raised and approved for the access within a tool such as Azure DevOps or Jira.

  • Example Evidence: This set of images shows Jira Tickets created and approved for the above list in Control 10 to grant or deny access to sensitive data and/or encryption keys.

This image demonstrates that a request has been created in Jira to get approval for Sam Daily's access to encryption keys on the system's backend environment. This is done as the next step to Control 10 above, where written authorization has been gained.

Screenshot demonstrating that a request has been created in Jira for Sam Daily's approval for encryption keys on the system's backend environment (1 of 2).

Screenshot demonstrating that a request has been created in Jira for Sam Daily's approval for encryption keys on the system's backend environment (2 of 2).

This shows that the request to give Sam Daily access has been approved by Jon Smith, a member of management, who can be seen in Control 10. (Please note that approval must come from someone with sufficient authority to allow the change request; it can't be another developer.)

Screenshot shows that the request to give Sam Daily access has been approved by Jon Smith a person from management

The above shows a workflow in Jira for this process. Note that nothing can be marked as Done unless it has been through the approval process, which is automated and therefore can't be bypassed.

Screenshot shows a workflow in Jira

The Project board above is now showing that approval has been given for Sam Daily's access to encryption keys. Below, the backlog shows Sam Daily's request approval and the person assigned to do the work.

Screenshot showing that approval has been given for Sam Daily

To meet the requirements of this control, you must provide all of these screenshots (or similar) with an explanation demonstrating how the control requirement has been met.

  • Example Evidence 2: In the example below, admin access and full control permissions have been requested for a user to the production DB. The request has been sent for approval as can be seen on the right of the image and this has been approved as you can see on the left.

Approval process1

Approval process2

Approval process3

Above you can see that the access has been approved and signed off as done.

Control 12: Provide demonstrable evidence that the sampled individuals who have access to data or encryption keys only have the privileges included in the approval.

  • Intent: The intent of this control is to confirm that data and/or encryption key access is configured as documented.

  • Example Evidence Guidelines: Evidence could be provided by way of screenshot which shows the data and/or encryption key access privileges granted to the sampled individuals. Evidence must cover all data locations.

  • Example Evidence: This screenshot shows the permissions granted to the user "John Smith" which would be cross referenced against the approval request for this same user as per evidence for the previous control.

Screenshot shows the permissions granted to the user

Control 13: Provide a list of all third-parties that customer data is shared with.

  • Intent: Where third parties are used for storage or processing of Microsoft 365 data, these entities can pose a significant risk. Organizations should develop a good third-party due diligence and management process to ensure these third parties are storing/processing data securely and to ensure they'll honor any legal obligations that they may have, for example as a data processor under GDPR.

  • Organizations should maintain a list of all third parties with which they share data, capturing some or all of the following:

  • what service(s) is(are) being provided,

  • what data is shared,

  • why the data is shared,

  • key contact information (that is, primary contact, breach notification contact, DPO, etc.),

  • contract renewal/expiry

  • legal/compliance obligations (that is, GDPR, HIPAA, PCI DSS, FedRAMP, etc.)

  • Example Evidence Guidelines: Supply documentation detailing ALL third parties with which Microsoft 365 data is shared.

Note: If third parties aren't in use, this will need to be confirmed in writing (email) by a member of the senior leadership team.

  • Example Evidence 1

Example Email1

Example Email2

  • Example Evidence 2: This screenshot shows an email example of a member of the senior leadership team confirming no third parties are used to process Microsoft 365 data.

Example Email3

Control 14: Provide demonstrable evidence that all third-parties consuming customer data have sharing agreements in place.

  • Intent: Where Microsoft 365 data is shared with third parties, it's important that data is being handled appropriately and securely. Data sharing agreements must be in place to ensure that third parties are processing data only as needed and that they understand their security obligations. An organization's security is only as strong as its weakest link. The intent of this control is to ensure that third parties don't become an organization's weak link.

  • Example Evidence Guidelines: Share the data sharing agreements that are in place with the third parties.

  • Example Evidence: The following screenshot shows a simplistic example data sharing agreement.

screenshot shows a simplistic example data sharing agreement1

screenshot shows a simplistic example data sharing agreement2

Note: The full agreement should be shared and not a screenshot.

GDPR

Most organizations will be processing data that is potentially a European citizen's (data subject's) data. Where the data of ANY data subject is processed, organizations will need to meet the General Data Protection Regulation (GDPR). This applies to both Data Controllers (you're directly capturing said data) and Data Processors (you're processing this data on behalf of a Data Controller). Although this section doesn't cover the entire regulation, it addresses some of the key elements of GDPR to help obtain some assurance that the organization is taking GDPR seriously.

Control 15: Provide a documented subject access request (SAR) process and provide evidence demonstrating that data subjects are able to raise SARs.

  • Intent: The GDPR includes specific obligations that must be met by organizations processing data subjects' data. The obligation for organizations to manage Subject Access Requests (SARs) is included within Article 12 which, under Article 12.3, gives a data controller one month from receipt of the SAR to respond to the request. An extension of a further two months is permitted where necessary. Even if your organization is acting as a Data Processor, this will still be needed to help your customers (the Data Controllers) fulfill their SAR obligations.

  • Example Evidence Guidelines: Supply the documented process for handling SARs.

  • Example Evidence: The following example shows a documented process for handling of SARs.

Screenshot shows a documented process for handling of SARs

Note: This screenshot shows a policy/process document, the expectation is for ISVs to share the actual supporting policy/procedure documentation and not simply provide a screenshot.

Control 16: Provide demonstrable evidence that you're able to identify all locations of data subjects' data when responding to a SAR.

  • Intent: The intent of this control is to ensure the organization has a robust mechanism in place to identify all data subjects' data. This may be a manual process because all data storage is well documented, or other tooling may be used to ensure all data is located as part of the SARs process.

  • Example Evidence Guidelines: Evidence may be provided by way of a list of all data locations and a documented process to search all data locations for data. This would include any necessary commands to search for data, that is, if SQL locations are included, then specific SQL statements would be detailed to ensure data is found properly (a minimal search sketch follows the evidence examples below).

  • Example Evidence: The following screenshot is a snippet from the above SAR's procedure which shows how data will be found.


The four images below show how the ISV data locations were queried, and then how Storage Explorer was used to drill down to the files or blob that needed to be removed from storage to comply with the SAR request.

Screenshot shows how the ISV data locations were queried (1 of 4).

Screenshot shows how the ISV data locations were queried (2 of 4).

This query confirms the storage accounts in use. You can query and remove storage, blobs, and/or files using Resource Graph Explorer (Kusto) or PowerShell (see below).

Screenshot shows how the ISV data locations were queried (3 of 4).

Screenshot shows how the ISV data locations were queried (4 of 4).

The above image shows the data that has been found within the blob container for the client, which needs to be removed, and below shows the action to delete or soft delete the information in the blob.
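When responding to a SAR, the documented data locations can be searched with the queries recorded in the SAR procedure. The sketch below is a minimal illustration of that idea: the store names, connection details, and queries are hypothetical and should be replaced with the data locations documented for Control 6.

```python
"""Minimal sketch: search the documented data locations for a data subject's
identifier when responding to a SAR. Store names and queries are hypothetical."""
import sqlite3

SUBJECT_EMAIL = "jane.doe@example.com"   # the data subject raising the SAR

# Documented data locations and the query used for each (illustrative only)
SQL_LOCATIONS = {
    "customers.db": "SELECT id, email FROM customers WHERE email = ?",
    "support.db":   "SELECT ticket_id, requester FROM tickets WHERE requester = ?",
}

findings = []
for db_path, query in SQL_LOCATIONS.items():
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(query, (SUBJECT_EMAIL,)).fetchall()
        findings.append((db_path, len(rows)))

# The output forms part of the SAR response record: where data was found and how much
for location, count in findings:
    print(f"{location}: {count} record(s) for {SUBJECT_EMAIL}")
```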

Control 17: Supply a link to the privacy notice which should contain all the required elements as follows:

  • Company details (name, address, etc.).

  • Details the types of personal data being processed.

  • Details the lawfulness of processing personal data.

  • Details data subject's rights:

    • Right to be informed,
    • Right of access by the data subject,
    • Right to erasure,
    • Right to restriction of processing,
    • Right to data portability,
    • Right to object,
    • Rights in relation to automated decision-making, including profiling.
  • Details how long personal data will be kept for.

  • Intent: Article 13 of the GDPR includes specific information that must be provided to data subjects at the time personal data is obtained. The intent of this control is to ensure the organization's data privacy notice provides data subjects with some of the key information included within Article 13.

  • Example Evidence Guidelines: This would usually be provided by supplying the data privacy notice. Certification analysts will review this to ensure all the information provided within the control is included within the data privacy notice.

  • Example Evidence

Screenshot of Contoso Privacy Notice 1

Screenshot of Contoso Privacy Notice 2

The images of a Privacy Notice above and adjacent show an example of an online privacy policy with Article 13 of GDPR included.

Screenshot of Contoso Privacy Notice 3

Below is a Data Protection Policy which can be used in conjunction with the privacy notice shown previously.

Screenshot of Data Protection Policy which can be used in conjunction with the privacy notice shown previously1

Screenshot of Data Protection Policy which can be used in conjunction with the privacy notice shown previously2

Screenshot of Data Protection Policy which can be used in conjunction with the privacy notice shown previously3

The above image of Azure shows how Azure has been configured to meet the compliance requirements of GDPR for data stored in a backend environment. The policy (which can be custom made or built from Azure Blueprints) allows the ISV to ensure that clients' data is stored correctly and is accessible only as permitted; metrics and alerts are set to ensure compliance and will show non-compliant data or user access on the Compliance Manager dashboard.


References

Images taken from Microsoft Documents