This article describes minimum hardware requirements for Storage Spaces Direct. For hardware requirements on Azure Local, our operating system designed for hyperconverged deployments with a connection to the cloud, see Before you deploy Azure Local: Determine hardware requirements.
For production, Microsoft recommends purchasing a validated hardware/software solution from our partners, which include deployment tools and procedures. These solutions are designed, assembled, and validated against our reference architecture to ensure compatibility and reliability, so you get up and running quickly. For hardware solutions, visit the Azure Local solutions website.
Tip
Want to evaluate Storage Spaces Direct but don't have hardware? Use Hyper-V or Azure virtual machines as described in Using Storage Spaces Direct in guest virtual machine clusters.
Important
Across cluster nodes, network adapters, drivers, and firmware must match exactly for Switch Embedded Teaming (SET) to function properly.
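As a minimal sketch of a SET configuration, the following PowerShell creates a virtual switch with embedded teaming; the switch and adapter names are placeholders for your own environment, and the teamed adapters must be identical in make, model, driver, and firmware:

```powershell
# Sketch: create a virtual switch with Switch Embedded Teaming (SET).
# "S2DSwitch", "NIC1", and "NIC2" are placeholder names.
New-VMSwitch -Name "S2DSwitch" -NetAdapterName "NIC1", "NIC2" -EnableEmbeddedTeaming $true

# Verify the team exists and which adapters it includes.
Get-VMSwitchTeam -Name "S2DSwitch"
```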
Systems, components, devices, and drivers must be certified for the operating system you're using in the Windows Server Catalog. In addition, we recommend that servers and network adapters have the Software-Defined Data Center (SDDC) Standard and/or Software-Defined Data Center (SDDC) Premium additional qualifications (AQs). There are over 1,000 components with the SDDC AQs.
The fully configured cluster (servers, networking, and storage) must pass all cluster validation tests, either through the wizard in Failover Cluster Manager or with the Test-Cluster cmdlet in PowerShell.
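A minimal validation run with the Test-Cluster cmdlet might look like the following; the server names are placeholders for your own nodes:

```powershell
# Sketch: validate prospective nodes before enabling Storage Spaces Direct.
# "Server1".."Server4" are placeholder node names.
Test-Cluster -Node "Server1", "Server2", "Server3", "Server4" `
    -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
```

The cmdlet produces an HTML validation report; review any warnings or failures before proceeding.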
In addition, the following requirements apply:
Storage Spaces Direct requires a reliable, high-bandwidth, low-latency network connection between each node.

- Minimum interconnect for small scale (2-3 nodes)
- Recommended interconnect for high performance, at scale, or deployments of 4+ nodes
- Switched or switchless node interconnects
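To check the interconnect on each node, you can inspect adapter link speed and RDMA capability; a sketch using standard networking cmdlets:

```powershell
# Sketch: check link speed and status of each network adapter.
Get-NetAdapter | Format-Table Name, LinkSpeed, Status

# Check which adapters are RDMA-capable and whether RDMA is enabled.
Get-NetAdapterRdma | Format-Table Name, Enabled
```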
Storage Spaces Direct works with direct-attached SATA, SAS, NVMe, or persistent memory (PMem) drives that are physically attached to just one server each. For more help choosing drives, see the Choosing drives and Understand and deploy persistent memory articles.
Note
When using all flash drives for storage capacity, the benefits of storage pool caching will be limited. Learn more about the storage pool cache.
Here's how drives can be connected for Storage Spaces Direct:
- Drives can be internal to the server, or in an external enclosure that is connected to just one server. SCSI Enclosure Services (SES) is required for slot mapping and identification. Each external enclosure must present a unique identifier (Unique ID).
- The minimum number of capacity drives you require varies with your deployment scenario. If you're planning to use the storage pool cache, there must be at least 2 cache devices per server.
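To see which drives are eligible to be claimed by Storage Spaces Direct, you can list physical disks that are available for pooling; a minimal sketch:

```powershell
# Sketch: list drives eligible for the storage pool.
# Eligible drives are unpartitioned and not in use (CanPool = True).
Get-PhysicalDisk | Where-Object CanPool -eq $true |
    Sort-Object Model |
    Format-Table FriendlyName, Model, MediaType, BusType, Size -AutoSize
```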
You can deploy Storage Spaces Direct on a cluster of physical servers or on virtual machine (VM) guest clusters. You can configure your Storage Spaces Direct design for performance, capacity, or balanced scenarios based on your selection of physical or virtual storage devices. Virtualized deployments take advantage of the underlying storage performance and resilience of the private or public cloud. Storage Spaces Direct deployed on VM guest clusters lets you use high-availability solutions within a virtual environment.
The following sections describe the minimum drive requirements for physical and virtual deployments.
This table shows the minimum number of capacity drives by type for hardware deployments such as Azure Local version 21H2 or later, and Windows Server.
| Drive type present (capacity only) | Minimum drives required (Windows Server) | Minimum drives required (Azure Local) |
|---|---|---|
| All persistent memory (same model) | 4 persistent memory | 2 persistent memory |
| All NVMe (same model) | 4 NVMe | 2 NVMe |
| All SSD (same model) | 4 SSD | 2 SSD |
If you're using the storage pool cache, there must be at least 2 more drives configured for the cache. The following table shows the minimum number of drives required for both Windows Server and Azure Local deployments using 2 or more nodes.
| Drive type present | Minimum drives required |
|---|---|
| Persistent memory + NVMe or SSD | 2 persistent memory + 4 NVMe or SSD |
| NVMe + SSD | 2 NVMe + 4 SSD |
| NVMe + HDD | 2 NVMe + 4 HDD |
| SSD + HDD | 2 SSD + 4 HDD |
Important
The storage pool cache cannot be used with Azure Local in a single node deployment.
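The storage pool cache behavior is set when Storage Spaces Direct is enabled; as a sketch, the CacheState parameter controls whether faster media is claimed for caching (for example, disabling it in an all-flash or single-node deployment):

```powershell
# Sketch: enable Storage Spaces Direct with the storage pool cache disabled,
# e.g. for an all-flash deployment or a single-node Azure Local system.
Enable-ClusterStorageSpacesDirect -CacheState Disabled

# Default behavior (cache enabled; faster media is claimed automatically):
# Enable-ClusterStorageSpacesDirect
```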
This table shows the minimum number of drives by type for virtual deployments such as Windows Server guest VMs or Windows Server Azure Edition.
| Drive type present (capacity only) | Minimum drives required |
|---|---|
| Virtual Hard Disk | 2 |
Tip
To boost the performance for guest VMs when running on Azure Local or Windows Server, consider using the CSV in-memory read cache to cache unbuffered read operations.
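The CSV in-memory read cache is controlled by the cluster's BlockCacheSize property (in MB); a minimal sketch, where the 2048 MB value is illustrative and should be sized against available host memory:

```powershell
# Sketch: set the CSV in-memory read cache size in MB.
# 2048 is an illustrative value; size it to your available host memory.
(Get-Cluster).BlockCacheSize = 2048

# Check the current setting:
(Get-Cluster).BlockCacheSize
```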
If you're using Storage Spaces Direct in a virtual environment, there are additional considerations to take into account.
Learn more about deploying Storage Spaces Direct using virtual machines and virtualized storage.
| Maximums | Windows Server 2019 or later | Windows Server 2016 |
|---|---|---|
| Raw capacity per server | 400 TB | 100 TB |
| Pool capacity | 4 PB (4,000 TB) | 1 PB |