Security Control: Data protection
Data Protection covers controls for protecting data at rest, in transit, and via authorized access mechanisms, including discovering, classifying, protecting, and monitoring sensitive data assets using access control, encryption, key management, and certificate management.
DP-1: Discover, classify, and label sensitive data
CIS Controls v8 ID(s) | NIST SP 800-53 r4 ID(s) | PCI-DSS ID(s) v3.2.1 |
---|---|---|
3.2, 3.7, 3.13 | RA-2, SC-28 | A3.2 |
Security principle: Establish and maintain an inventory of the sensitive data, based on the defined sensitive data scope. Use tools to discover, classify, and label the in-scope sensitive data.
Azure guidance: Use tools such as Microsoft Purview, which combines the former Azure Purview and Microsoft 365 compliance solutions, and Azure SQL Data Discovery and Classification to centrally scan, classify, and label the sensitive data that resides in Azure, on-premises, Microsoft 365, and other locations.
Azure implementation and additional context:
- Data classification overview
- Labeling in the Microsoft Purview Data Map
- Tag sensitive information using Azure Information Protection
- How to implement Azure SQL Data Discovery
- Azure Purview data sources
AWS guidance: Replicate your data from various sources to an S3 storage bucket and use AWS Macie to scan, classify, and label the sensitive data stored in the bucket. AWS Macie can detect sensitive data such as security credentials, financial information, PHI and PII data, or other data patterns based on custom data identifier rules.
You may also use the Microsoft Purview multi-cloud scanning connector to scan, classify, and label the sensitive data residing in an S3 storage bucket.
Note: You can also use third-party enterprise solutions from the AWS Marketplace for data discovery, classification, and labeling.
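Macie custom data identifiers are regular expressions, so a pattern can be sanity-checked locally before registering it (for example, via the boto3 `create_custom_data_identifier` call). The employee-ID format below is a hypothetical example, not a Macie built-in:

```python
import re

# Hypothetical custom data identifier: an internal employee ID of the form
# "EMP-" followed by six digits. Macie custom data identifiers are regular
# expressions of this kind; the format here is an assumption for illustration.
EMPLOYEE_ID_PATTERN = r"\bEMP-\d{6}\b"

def find_sensitive_matches(text: str, pattern: str = EMPLOYEE_ID_PATTERN) -> list:
    """Return all substrings that match the custom data identifier pattern."""
    return re.findall(pattern, text)

sample = "Ticket assigned to EMP-104233; contact EMP-990021 for escalation."
print(find_sensitive_matches(sample))  # ['EMP-104233', 'EMP-990021']
```

Testing the expression against known-sensitive and known-benign samples before deployment helps avoid noisy or incomplete findings.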
AWS implementation and additional context:
GCP guidance: Use tools such as Google Cloud Data Loss Prevention to centrally scan, classify, and label the sensitive data that resides in GCP and on-premises environments.
In addition, use Google Cloud Data Catalog to consume the results of a Cloud Data Loss Prevention (DLP) scan and identify sensitive data using defined tag templates.
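A Cloud DLP inspection job is driven by a JSON request body; the minimal sketch below shows its shape using two built-in infoTypes (`EMAIL_ADDRESS` and `CREDIT_CARD_NUMBER`). The sample text is illustrative only:

```python
import json

# Minimal Cloud DLP inspect configuration sketch. EMAIL_ADDRESS and
# CREDIT_CARD_NUMBER are built-in DLP infoTypes; minLikelihood filters out
# low-confidence findings, and includeQuote returns the matched text.
inspect_config = {
    "infoTypes": [
        {"name": "EMAIL_ADDRESS"},
        {"name": "CREDIT_CARD_NUMBER"},
    ],
    "minLikelihood": "LIKELY",
    "includeQuote": True,
}

request_body = {
    "inspectConfig": inspect_config,
    "item": {"value": "Contact: jane@example.com"},  # sample payload
}
print(json.dumps(request_body, indent=2))
```

In practice this body would be sent through the DLP API (for example, the `content.inspect` method) against the data sources in scope.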
GCP implementation and additional context:
Customer security stakeholders (Learn more):
DP-2: Monitor anomalies and threats targeting sensitive data
CIS Controls v8 ID(s) | NIST SP 800-53 r4 ID(s) | PCI-DSS ID(s) v3.2.1 |
---|---|---|
3.13 | AC-4, SI-4 | A3.2 |
Security principle: Monitor for anomalies around sensitive data, such as unauthorized transfer of data to locations outside of enterprise visibility and control. This typically involves monitoring for anomalous activities (large or unusual transfers) that could indicate unauthorized data exfiltration.
Azure guidance: Use Azure Information Protection (AIP) to monitor the data that has been classified and labeled.
Use Microsoft Defender for Storage, Microsoft Defender for SQL, Microsoft Defender for open-source relational databases, and Microsoft Defender for Azure Cosmos DB to alert on anomalous transfers of information that might indicate unauthorized transfers of sensitive data.
Note: If required for data loss prevention (DLP) compliance, you can use a host-based DLP solution from Azure Marketplace or a Microsoft 365 DLP solution to enforce detective and/or preventative controls against data exfiltration.
Azure implementation and additional context:
- Enable Azure Defender for SQL
- Enable Azure Defender for Storage
- Enable Microsoft Defender for Azure Cosmos DB
- Enable Microsoft Defender for open-source relational databases and respond to alerts
AWS guidance: Use AWS Macie to monitor the data that has been classified and labeled, and use GuardDuty to detect anomalous activities on certain resources (S3, EC2, Kubernetes, or IAM resources). Findings and alerts can be triaged, analyzed, and tracked using EventBridge and forwarded to Microsoft Sentinel or AWS Security Hub for incident aggregation and tracking.
You may also connect your AWS accounts to Microsoft Defender for Cloud for compliance checks, container security, and endpoint security capabilities.
Note: If required for data loss prevention (DLP) compliance, you can use a host-based DLP solution from the AWS Marketplace.
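Routing GuardDuty findings through EventBridge relies on an event pattern; a minimal sketch for matching high-severity findings (GuardDuty severity is numeric, with 7.0 and above considered High) looks like this:

```python
import json

# Sketch of an EventBridge event pattern that matches high-severity GuardDuty
# findings so they can be forwarded to an incident-tracking target. The
# numeric-range filter below uses standard EventBridge content filtering.
event_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {
        "severity": [{"numeric": [">=", 7]}],
    },
}

# This JSON string is what you would pass as the --event-pattern argument
# to `aws events put-rule`.
print(json.dumps(event_pattern))
```

The rule's target (an SNS topic, Lambda function, or SIEM ingestion endpoint) then determines where the findings land.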
AWS implementation and additional context:
GCP guidance: Use Google Cloud Security Command Center/Event Threat Detection/Anomaly Detection to alert on anomalous transfers of information that might indicate unauthorized transfers of sensitive data.
You may also connect your GCP accounts to Microsoft Defender for Cloud for compliance checks, container security, and endpoint security capabilities.
GCP implementation and additional context:
- Overview of Event Threat Detection
- Anomaly detection using streaming analytics & AI
- Anomaly Detection
Customer security stakeholders (Learn more):
DP-3: Encrypt sensitive data in transit
CIS Controls v8 ID(s) | NIST SP 800-53 r4 ID(s) | PCI-DSS ID(s) v3.2.1 |
---|---|---|
3.10 | SC-8 | 3.5, 3.6, 4.1 |
Security principle: Protect the data in transit against 'out of band' attacks (such as traffic capture) using encryption to ensure that attackers cannot easily read or modify the data.
Set the network boundary and service scope where data in transit encryption is mandatory inside and outside of the network. While this is optional for traffic on private networks, this is critical for traffic on external and public networks.
Azure guidance: Enforce secure transfer in services such as Azure Storage, where a native data in transit encryption feature is built in.
Enforce HTTPS for web application workloads and services by ensuring that any clients connecting to your Azure resources use transport layer security (TLS) v1.2 or later.
For remote management of Azure virtual machines, use SSH (for Linux) or RDP/TLS (for Windows) instead of an unencrypted protocol. For secure file transfer, use the SFTP/FTPS service in Azure Storage Blob, App Service apps, and Function apps, instead of the regular FTP service.
Note: Data in transit encryption is enabled for all Azure traffic traveling between Azure datacenters. TLS v1.2 or later is enabled on most Azure services by default. Some services, such as Azure Storage and Application Gateway, can enforce TLS v1.2 or later on the server side.
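On the client side, the "TLS v1.2 or later" requirement can be enforced directly. A minimal sketch using Python's standard library:

```python
import ssl

# Build an SSL context that refuses any protocol version below TLS 1.2,
# matching the "TLS v1.2 or later" requirement above.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Certificate validation and hostname checking remain on (the secure
# defaults of create_default_context).
print(context.minimum_version, context.check_hostname, context.verify_mode)
```

Passing this context to an HTTPS client (for example, `http.client.HTTPSConnection(host, context=context)`) ensures connections to your services never negotiate a weaker protocol.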
Azure implementation and additional context:
- Double encryption for Azure data in transit
- Understand encryption in transit with Azure
- Information on TLS Security
- Enforce secure transfer in Azure storage
AWS guidance: Enforce secure transfer in services such as Amazon S3, RDS and CloudFront, where a native data in transit encryption feature is built in.
Enforce HTTPS (such as in AWS Elastic Load Balancer) for web application workloads and services (on the server side, the client side, or both) by ensuring that any clients connecting to your AWS resources use TLS v1.2 or later.
For remote management of EC2 instances, use SSH (for Linux) or RDP/TLS (for Windows) instead of an unencrypted protocol. For secure file transfer, use AWS Transfer SFTP or FTPS service instead of a regular FTP service.
Note: All network traffic between AWS data centers is transparently encrypted at the physical layer. All traffic within a VPC and between peered VPCs across regions is transparently encrypted at the network layer when using supported Amazon EC2 instance types. TLS v1.2 or later is enabled on most AWS services by default. Some services, such as AWS Load Balancer, can enforce TLS v1.2 or later on the server side.
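For S3 specifically, encryption in transit can be enforced server-side with a bucket policy that denies any request made without TLS, using the standard `aws:SecureTransport` condition key. A sketch of such a policy (the bucket name is a placeholder):

```python
import json

# Sketch of an S3 bucket policy that denies non-TLS requests via the
# aws:SecureTransport condition key. "example-bucket" is a placeholder.
bucket = "example-bucket"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
print(json.dumps(policy, indent=2))
```

This JSON document is what you would attach with `aws s3api put-bucket-policy --policy`.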
AWS implementation and additional context:
GCP guidance: Enforce secure transfer in services such as Google Cloud Storage, where a native data in transit encryption feature is built in.
Enforce HTTPS for web application workloads and services by ensuring that any clients connecting to your GCP resources use transport layer security (TLS) v1.2 or later.
For remote management of Google Cloud Compute Engine instances, use SSH (for Linux) or RDP/TLS (for Windows) instead of an unencrypted protocol. For secure file transfer, use the SFTP/FTPS service in services such as Google Cloud BigQuery or Cloud App Engine instead of a regular FTP service.
GCP implementation and additional context:
Customer security stakeholders (Learn more):
- Security architecture
- Infrastructure and endpoint security
- Application Security and DevOps
- Data Security
DP-4: Enable data at rest encryption by default
CIS Controls v8 ID(s) | NIST SP 800-53 r4 ID(s) | PCI-DSS ID(s) v3.2.1 |
---|---|---|
3.11 | SC-28 | 3.4, 3.5 |
Security principle: To complement access controls, data at rest should be protected against 'out of band' attacks (such as accessing underlying storage) using encryption. This helps ensure that attackers cannot easily read or modify the data.
Azure guidance: Many Azure services have data at rest encryption enabled by default at the infrastructure layer using a service-managed key. These service-managed keys are generated on the customer’s behalf and automatically rotated every two years.
Where technically feasible and not enabled by default, you can enable data at rest encryption in the Azure services, or in your VMs at the storage level, file level, or database level.
Azure implementation and additional context:
- Understand encryption at rest in Azure
- Data at rest double encryption in Azure
- Encryption model and key management table
AWS guidance: Many AWS services have data at rest encryption enabled by default at the infrastructure/platform layer using an AWS-managed customer master key. These AWS-managed customer master keys are generated on the customer's behalf and rotated automatically every three years.
Where technically feasible and not enabled by default, you can enable data at rest encryption in the AWS services, or in your VMs at the storage level, file level, or database level.
AWS implementation and additional context:
GCP guidance: Many Google Cloud products and services have data at rest encryption enabled by default at the infrastructure layer using a service-managed key. These service-managed keys are generated on the customer's behalf and automatically rotated.
Where technically feasible and not enabled by default, you can enable data at rest encryption in the GCP services, or in your VMs at the storage level, file level, or database level.
Note: Refer to the document "Granularity of encryption for Google Cloud services" for additional detail.
GCP implementation and additional context:
- Default encryption at rest
- Data encryption options
- Granularity of encryption for Google Cloud services
Customer security stakeholders (Learn more):
DP-5: Use customer-managed key option in data at rest encryption when required
CIS Controls v8 ID(s) | NIST SP 800-53 r4 ID(s) | PCI-DSS ID(s) v3.2.1 |
---|---|---|
3.11 | SC-12, SC-28 | 3.4, 3.5, 3.6 |
Security principle: If required for regulatory compliance, define the use case and service scope where customer-managed key option is needed. Enable and implement data at rest encryption using customer-managed keys in services.
Azure guidance: Azure also provides an encryption option using keys managed by yourself (customer-managed keys) for most services.
Azure Key Vault Standard, Premium, and Managed HSM are natively integrated with many Azure Services for customer-managed key use cases. You may use Azure Key Vault to generate your key or bring your own keys.
However, using the customer-managed key option requires additional operational effort to manage the key lifecycle. This may include encryption key generation, rotation, revocation, and access control.
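As one concrete example of wiring a service to a customer-managed key, the `encryption` block of an Azure Storage account (an ARM template fragment) points the account at a Key Vault key. The vault URI and key name below are placeholders:

```python
import json

# Sketch of the encryption property of a Microsoft.Storage/storageAccounts
# resource configured for a customer-managed key in Key Vault. The vault URI
# and key name are placeholders.
encryption = {
    "keySource": "Microsoft.Keyvault",
    "keyvaultproperties": {
        "keyvaulturi": "https://contoso-vault.vault.azure.net",  # placeholder
        "keyname": "storage-cmk",                                # placeholder
    },
    "services": {
        "blob": {"enabled": True},
        "file": {"enabled": True},
    },
}
print(json.dumps(encryption, indent=2))
```

The storage account also needs permission to use the key, typically via a managed identity granted key wrap/unwrap rights on the vault.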
Azure implementation and additional context:
- Encryption model and key management table
- Services that support encryption using customer-managed key
- How to configure customer managed encryption keys in Azure Storage
AWS guidance: AWS also provides an encryption option using keys managed by yourself (customer-managed customer master key stored in AWS Key Management Service) for certain services.
AWS Key Management Service (KMS) is natively integrated with many AWS services for customer-managed customer master key use cases. You may either use AWS Key Management Service (KMS) to generate your master keys or bring your own keys.
However, using the customer-managed key option requires additional operational effort to manage the key lifecycle. This may include encryption key generation, rotation, revocation, and access control.
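As a concrete example, S3 default encryption can be pointed at a customer-managed KMS key. A sketch of the configuration document (the key ARN is a placeholder):

```python
import json

# Sketch of an S3 bucket default-encryption configuration using a
# customer-managed KMS key. The key ARN is a placeholder; BucketKeyEnabled
# reduces KMS request costs by using an S3 Bucket Key.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE",  # placeholder
            },
            "BucketKeyEnabled": True,
        }
    ]
}
# This structure is what you would pass to
# `aws s3api put-bucket-encryption --server-side-encryption-configuration`.
print(json.dumps(encryption_config, indent=2))
```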
AWS implementation and additional context:
GCP guidance: Google Cloud provides an encryption option using keys managed by yourself (customer-managed keys) for most services.
Google Cloud Key Management Service (Cloud KMS) is natively integrated with many GCP services for customer-managed encryption keys. These keys can be created and managed using Cloud KMS, and you store the keys as software keys, in an HSM cluster, or externally. You may use Cloud KMS to generate your key or supply your own keys (customer-supplied encryption keys).
However, using the customer-managed key option requires additional operational effort to manage the key lifecycle. This may include encryption key generation, rotation, revocation, and access control.
GCP implementation and additional context:
- Default encryption at rest
- Data encryption options
- Customer-managed encryption keys
- Customer-supplied encryption key
Customer security stakeholders (Learn more):
- Security architecture
- Infrastructure and endpoint security
- Application Security and DevOps
- Data Security
DP-6: Use a secure key management process
CIS Controls v8 ID(s) | NIST SP 800-53 r4 ID(s) | PCI-DSS ID(s) v3.2.1 |
---|---|---|
N/A | IA-5, SC-12, SC-28 | 3.6 |
Security principle: Document and implement an enterprise cryptographic key management standard, processes, and procedures to control your key lifecycle. When there is a need to use customer-managed keys in the services, use a secure key vault service for key generation, distribution, and storage. Rotate and revoke your keys based on the defined schedule and when there is a key retirement or compromise.
Azure guidance: Use Azure Key Vault to create and control your encryption keys life cycle, including key generation, distribution, and storage. Rotate and revoke your keys in Azure Key Vault and your service based on the defined schedule and when there is a key retirement or compromise. Require a certain cryptographic type and minimum key size when generating keys.
When there is a need to use customer-managed key (CMK) in the workload services or applications, ensure you follow the best practices:
- Use a key hierarchy to generate a separate data encryption key (DEK) with your key encryption key (KEK) in your key vault.
- Ensure keys are registered with Azure Key Vault and implemented via key IDs in each service or application.
To maximize the key material lifetime and portability, bring your own key (BYOK) to the services (i.e., importing HSM-protected keys from your on-premises HSMs into Azure Key Vault). Follow the recommended guideline to perform the key generation and key transfer.
Note: Refer to the below for the FIPS 140-2 level for Azure Key Vault types and FIPS compliance/validation level.
- Software-protected keys in vaults (Premium & Standard SKUs): FIPS 140-2 Level 1
- HSM-protected keys in vaults (Premium SKU): FIPS 140-2 Level 2
- HSM-protected keys in Managed HSM: FIPS 140-2 Level 3
Azure Key Vault Premium uses a shared HSM infrastructure in the backend. Azure Key Vault Managed HSM uses dedicated, confidential service endpoints with a dedicated HSM for when you need a higher level of key security.
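The scheduled rotation described above can be codified as a Key Vault key rotation policy. A sketch of the policy document (the 90-day rotation and two-year expiry are example values, not recommendations):

```python
import json

# Sketch of an Azure Key Vault key rotation policy: rotate 90 days after
# creation, notify 30 days before expiry, and expire keys after two years.
# The periods are ISO 8601 durations and are example values only.
rotation_policy = {
    "lifetimeActions": [
        {"trigger": {"timeAfterCreate": "P90D"}, "action": {"type": "Rotate"}},
        {"trigger": {"timeBeforeExpiry": "P30D"}, "action": {"type": "Notify"}},
    ],
    "attributes": {"expiryTime": "P2Y"},
}
print(json.dumps(rotation_policy, indent=2))
```

A document of this shape is what `az keyvault key rotation-policy update --value` consumes.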
Azure implementation and additional context:
- Azure Key Vault overview
- Azure data encryption at rest--Key Hierarchy
- BYOK (Bring Your Own Key) specification
AWS guidance: Use AWS Key Management Service (KMS) to create and control your encryption keys life cycle, including key generation, distribution, and storage. Rotate and revoke your keys in KMS and your service based on the defined schedule and when there is a key retirement or compromise.
When there is a need to use customer-managed customer master key in the workload services or applications, ensure you follow the best practices:
- Use a key hierarchy to generate a separate data encryption key (DEK) with your key encryption key (KEK) in your KMS.
- Ensure keys are registered with KMS and implemented via IAM policies in each service or application.
To maximize the key material lifetime and portability, bring your own key (BYOK) to the services (i.e., importing HSM-protected keys from your on-premises HSMs into KMS or Cloud HSM). Follow the recommended guideline to perform the key generation and key transfer.
Note: AWS KMS uses shared HSM infrastructure in the backend. Use AWS KMS Custom Key Store backed by AWS CloudHSM when you need to manage your own key store and dedicated HSMs (e.g. regulatory compliance requirement for higher level of key security) to generate and store your encryption keys.
Note: Refer to the below for the FIPS 140-2 level for FIPS compliance level in AWS KMS and CloudHSM:
- AWS KMS default: FIPS 140-2 Level 2 validated
- AWS KMS using CloudHSM: FIPS 140-2 Level 3 (for certain services) validated
- AWS CloudHSM: FIPS 140-2 Level 3 validated
Note: For secrets management (credentials, passwords, API keys, etc.), use AWS Secrets Manager.
AWS implementation and additional context:
- AWS-managed and Customer-managed CMKs
- Importing key material in AWS KMS keys
- Secure transfer of keys into CloudHSM
- Creating a custom key store backed by CloudHSM
GCP guidance: Use Cloud Key Management Service (Cloud KMS) to create and manage encryption key lifecycles in compatible Google Cloud services and in your workload applications. Rotate and revoke your keys in Cloud KMS and your service based on the defined schedule and when there is a key retirement or compromise.
Use Google's Cloud HSM service to provide hardware-backed keys to Cloud KMS (Key Management Service). It gives you the ability to manage and use your own cryptographic keys while they are protected by fully managed hardware security modules (HSMs).
The Cloud HSM service uses HSMs that are FIPS 140-2 Level 3 validated and always run in FIPS mode. The FIPS standard specifies the cryptographic algorithms and random number generation used by the HSMs.
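Creating an HSM-protected, auto-rotating key in Cloud KMS comes down to a CryptoKey resource body. A sketch following the CryptoKey schema (the 90-day rotation period is an example value):

```python
import json

# Sketch of a Cloud KMS CryptoKey resource body: a symmetric encryption key
# protected by Cloud HSM, rotated every 90 days (7,776,000 seconds). The
# rotation period is an example value only.
crypto_key = {
    "purpose": "ENCRYPT_DECRYPT",
    "versionTemplate": {
        "protectionLevel": "HSM",
        "algorithm": "GOOGLE_SYMMETRIC_ENCRYPTION",
    },
    "rotationPeriod": "7776000s",  # 90 days
}
print(json.dumps(crypto_key, indent=2))
```

The equivalent gcloud invocation would pass `--protection-level hsm` and a `--rotation-period` to `gcloud kms keys create`.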
GCP implementation and additional context:
Customer security stakeholders (Learn more):
DP-7: Use a secure certificate management process
CIS Controls v8 ID(s) | NIST SP 800-53 r4 ID(s) | PCI-DSS ID(s) v3.2.1 |
---|---|---|
N/A | IA-5, SC-12, SC-17 | 3.6 |
Security principle: Document and implement an enterprise certificate management standard, processes and procedures which includes the certificate lifecycle control, and certificate policies (if a public key infrastructure is needed).
Ensure certificates used by the critical services in your organization are inventoried, tracked, monitored, and renewed in a timely manner using an automated mechanism to avoid service disruption.
Azure guidance: Use Azure Key Vault to create and control the certificate lifecycle, including the creation/import, rotation, revocation, storage, and purge of the certificate. Ensure the certificate generation follows the defined standard without using any insecure properties, such as insufficient key size, overly long validity period, insecure cryptography, and so on. Set up automatic rotation of the certificate in Azure Key Vault and supported Azure services based on the defined schedule and when a certificate expires. If automatic rotation is not supported in the frontend application, use manual rotation in Azure Key Vault.
Avoid using a self-signed certificate and wildcard certificate in your critical services due to the limited security assurance. Instead, you can create publicly signed certificates in Azure Key Vault. The following certificate authorities (CAs) are the partnered providers currently integrated with Azure Key Vault.
- DigiCert: Azure Key Vault offers OV TLS/SSL certificates with DigiCert.
- GlobalSign: Azure Key Vault offers OV TLS/SSL certificates with GlobalSign.
Note: Use only approved CAs and ensure that known bad root/intermediate certificates issued by these CAs are disabled.
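The generation standard and auto-renewal behavior above can be expressed as a Key Vault certificate policy. A sketch (subject name is a placeholder; the issuer name refers to an integrated CA configured in the vault):

```python
import json

# Sketch of an Azure Key Vault certificate policy: request a certificate from
# an integrated issuer, enforce a 2048-bit RSA key, cap validity at 12 months,
# and auto-renew 30 days before expiry. The subject is a placeholder.
cert_policy = {
    "issuerParameters": {"name": "DigiCert"},
    "keyProperties": {
        "exportable": False,
        "keySize": 2048,
        "keyType": "RSA",
        "reuseKey": False,
    },
    "x509CertificateProperties": {
        "subject": "CN=www.contoso.com",  # placeholder
        "validityInMonths": 12,
    },
    "lifetimeActions": [
        {"trigger": {"daysBeforeExpiry": 30},
         "action": {"actionType": "AutoRenew"}}
    ],
}
print(json.dumps(cert_policy, indent=2))
```

A policy document of this shape is what `az keyvault certificate create --policy` consumes.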
Azure implementation and additional context:
AWS guidance: Use AWS Certificate Manager (ACM) to create and control the certificate lifecycle, including creation/import, rotation, revocation, storage, and purge of the certificate. Ensure the certificate generation follows the defined standard without using any insecure properties, such as insufficient key size, overly long validity period, insecure cryptography, and so on. Set up automatic rotation of the certificate in ACM and supported AWS services based on the defined schedule and when a certificate expires. If automatic rotation is not supported in the frontend application, use manual rotation in ACM. In the meantime, you should always track your certificate renewal status to ensure certificate validity.
Avoid using a self-signed certificate and wildcard certificate in your critical services due to the limited security assurance. Instead, create publicly signed certificates (signed by the Amazon certificate authority) in ACM and deploy them programmatically in services such as CloudFront, Load Balancers, API Gateway, etc. You can also use ACM to establish your private certificate authority (CA) to sign private certificates.
Note: Use only an approved CA and ensure that known bad root/intermediate certificates issued by these CAs are disabled.
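Tracking renewal status, as recommended above, can be as simple as checking how long each deployed certificate has left. A small stdlib sketch that works on the `notAfter` field returned by `ssl.getpeercert()`:

```python
import ssl
from datetime import datetime, timezone

# Given a certificate's notAfter field (as returned by ssl.getpeercert()),
# report how many days remain before expiry. Negative means already expired.
def days_until_expiry(not_after: str) -> float:
    expiry_ts = ssl.cert_time_to_seconds(not_after)
    now_ts = datetime.now(timezone.utc).timestamp()
    return (expiry_ts - now_ts) / 86400

# Example with a fixed future date (format matches ssl.getpeercert() output).
remaining = days_until_expiry("Jan 1 00:00:00 2031 GMT")
print(f"{remaining:.0f} days until expiry")
```

Running such a check across your certificate inventory and alerting when the remaining window drops below the renewal lead time (say, 30 days) catches certificates that automation failed to renew.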
AWS implementation and additional context:
GCP guidance: Use Google Cloud Certificate Manager to create and control the certificate lifecycle, including creation/import, rotation, revocation, storage, and purge of the certificate. Ensure the certificate generation follows the defined standard without using any insecure properties, such as insufficient key size, overly long validity period, insecure cryptography, and so on. Set up automatic rotation of the certificate in Certificate Manager and supported GCP services based on the defined schedule and when a certificate expires. If automatic rotation is not supported in the frontend application, use manual rotation in Certificate Manager. In the meantime, you should always track your certificate renewal status to ensure certificate validity.
Avoid using a self-signed certificate and wildcard certificate in your critical services due to the limited security assurance. Instead, you can create signed public certificates in Certificate Manager and deploy them programmatically in services such as Load Balancer, Cloud DNS, etc. You can also use Certificate Authority Service to establish your private certificate authority (CA) to sign private certificates.
Note: You can also use Google Cloud Secret Manager to store TLS certificates.
GCP implementation and additional context:
Customer security stakeholders (Learn more):
DP-8: Ensure security of key and certificate repository
CIS Controls v8 ID(s) | NIST SP 800-53 r4 ID(s) | PCI-DSS ID(s) v3.2.1 |
---|---|---|
N/A | IA-5, SC-12, SC-17 | 3.6 |
Security principle: Ensure the security of the key vault service used for the cryptographic key and certificate lifecycle management. Harden your key vault service through access control, network security, logging and monitoring and backup to ensure keys and certificates are always protected using the maximum security.
Azure guidance: Secure your cryptographic keys and certificates by hardening your Azure Key Vault service through the following controls:
- Implement access control using RBAC policies in Azure Key Vault Managed HSM at the key level to ensure the least privilege and separation of duties principles are followed. For example, ensure separation of duties is in place for users who manage encryption keys so they do not have the ability to access encrypted data, and vice versa. For Azure Key Vault Standard and Premium, create unique vaults for different applications to ensure the least privilege and separation of duties principles are followed.
- Turn on Azure Key Vault logging to ensure critical management plane and data plane activities are logged.
- Secure the Azure Key Vault using Private Link and Azure Firewall to ensure minimal exposure of the service.
- Use managed identity to access keys stored in Azure Key Vault in your workload applications.
- When purging data, ensure your keys are not deleted before the actual data, backups and archives are purged.
- Backup your keys and certificates using Azure Key Vault. Enable soft delete and purge protection to avoid accidental deletion of keys. When keys need to be deleted, consider disabling keys instead of deleting them to avoid accidental deletion of keys and cryptographic erasure of data.
- For bring your own key (BYOK) use cases, generate keys in an on-premises HSM and import them to maximize the lifetime and portability of the keys.
- Never store keys in plaintext format outside of the Azure Key Vault. Keys in all key vault services are not exportable by default.
- Use HSM-backed key types (RSA-HSM) in Azure Key Vault Premium and Azure Managed HSM for the hardware protection and the strongest FIPS levels.
Enable Microsoft Defender for Key Vault for Azure-native, advanced threat protection for Azure Key Vault, providing an additional layer of security intelligence.
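The network hardening called out above (Private Link plus minimal exposure) maps to the `networkAcls` property of the Key Vault resource. A sketch of the ARM template fragment:

```python
import json

# Sketch of the networkAcls block of a Microsoft.KeyVault/vaults resource:
# deny public network access by default while allowing trusted Azure
# services, consistent with the Private Link / Azure Firewall guidance.
network_acls = {
    "defaultAction": "Deny",
    "bypass": "AzureServices",
    "ipRules": [],             # add explicitly allow-listed IPs if needed
    "virtualNetworkRules": [],  # or allow specific subnets
}
print(json.dumps(network_acls, indent=2))
```

With `defaultAction` set to `Deny`, traffic reaches the vault only through the private endpoint, listed IP rules, or listed virtual network rules.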
Azure implementation and additional context:
- Azure Key Vault overview
- Azure Key Vault security best practices
- Use managed identity to access Azure Key Vault
- Overview of Microsoft Defender for Key Vault
AWS guidance: For cryptographic key security, secure your keys by hardening your AWS Key Management Service (KMS) through the following controls:
- Implement access control using key policies (key-level access control) in conjunction with IAM policies (identity-based access control) to ensure the least privilege and separation of duties principles are followed. For example, ensure separation of duties is in place for users who manage encryption keys so they do not have the ability to access encrypted data, and vice versa.
- Use detective controls such as CloudTrail to log and track the usage of keys in KMS and alert you on critical actions.
- Never store keys in plaintext format outside of KMS.
- When keys need to be deleted, consider disabling keys in KMS instead of deleting them to avoid accidental deletion of keys and cryptographic erasure of data.
- When purging data, ensure your keys are not deleted before the actual data, backups and archives are purged.
- For bring your own key (BYOK) use cases, generate keys in an on-premises HSM and import them to maximize the lifetime and portability of the keys.
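The separation-of-duties control above is typically expressed in the KMS key policy itself: one statement grants administrative actions (without decrypt rights), another grants cryptographic use. A sketch with placeholder account ID and role names:

```python
import json

# Sketch of a KMS key policy separating key administration from key use.
# The account ID and role names are placeholders. Note the admin statement
# grants management actions but NOT kms:Decrypt.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "KeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/KeyAdmin"},
            "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*",
                       "kms:Disable*", "kms:Put*", "kms:ScheduleKeyDeletion"],
            "Resource": "*",
        },
        {
            "Sid": "KeyUsage",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/AppRole"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
    ],
}
admin_actions = key_policy["Statement"][0]["Action"]
assert "kms:Decrypt" not in admin_actions  # admins cannot read protected data
print(json.dumps(key_policy, indent=2))
```

The same split applies in reverse: the usage role should not receive `kms:ScheduleKeyDeletion` or policy-editing actions.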
For certificates security, secure your certificates by hardening your AWS Certificate Manager (ACM) service through the following controls:
- Implement access control using resource-level policies in conjunction with IAM policies (identity-based access control) to ensure the least privilege and separation of duties principles are followed. For example, ensure separation of duties is in place for user accounts: user accounts who generate certificates are separate from the user accounts who only require read-only access to certificates.
- Use detective controls such as CloudTrail to log and track the usage of the certificates in ACM, and alert you on critical actions.
- Follow the KMS security guidance to secure your private key (generated for certificate request) used for service certificate integration.
AWS implementation and additional context:
GCP guidance: For cryptographic key security, secure your keys by hardening Cloud Key Management Service (Cloud KMS) through the following controls:
- Implement access control using IAM roles to ensure the least privilege and separation of duties principles are followed. For example, ensure separation of duties is in place for users who manage encryption keys so they do not have the ability to access encrypted data, and vice versa.
- Create a separate key ring for each project, which allows you to easily manage and control access to the keys following least-privilege best practices. It also makes it easier to audit who has access to which keys and when.
- Enable automatic rotation of keys to ensure keys are regularly updated and refreshed. This helps to protect against potential security threats such as brute force attacks or malicious actors attempting to gain access to sensitive information.
- Set up an audit log sink to track all the activities that occur within your GCP KMS environment.
For certificates security, secure your certificates by hardening your GCP Certificate Manager and Certificate Authority Service through the following controls:
- Implement access control using resource-level policies in conjunction with IAM policies (identity-based access control) to ensure the least privilege and separation of duties principles are followed. For example, ensure separation of duties is in place for user accounts: user accounts who generate certificates are separate from the user accounts who only require read-only access to certificates.
- Use detective controls such as Cloud Audit Logs to log and track the usage of the certificates in Certificate Manager, and alert you on critical actions.
- Secret Manager also supports storage of TLS certificates. Follow similar security practices to implement the security controls in Secret Manager.
GCP implementation and additional context:
Customer security stakeholders (Learn more):