Hello TomaszW-0873, I am Henry, and I'd like to share some insight on this issue.
First, let me confirm that your proposed scenario is not only possible, it is a well-supported configuration.
The confusion you're experiencing stems from mixing up two different, but similarly named, technologies:
- Storage Spaces (the standalone version, on a single server)
- Storage Spaces Direct (the clustered version for Hyper-V, which you want to use)
You are correct that Storage Bus Cache (SBC) cannot be used in a failover cluster. However, SBC is not used by Storage Spaces Direct.
- Storage Bus Cache (SBC): This is a feature for a single, standalone server running Storage Spaces. It allows you to use a few fast SSDs as a cache for slower HDDs within that one server. It is not cluster-aware and is therefore incompatible with Failover Clustering.
- Storage Spaces Direct (S2D) Built-in Cache: S2D has its own, much more sophisticated caching mechanism built directly into the technology. When S2D detects a mix of fast drives (SSD/NVMe) and slower drives (HDD) in a cluster, it automatically configures a caching tier. You do not need to enable SBC or anything similar.
So, to be clear: You will not be using Storage Bus Cache. S2D will handle the caching for you automatically.
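If you want to see this for yourself once the cluster is up, the standard S2D cmdlets can show the state of the built-in cache. A minimal check, run from any cluster node (property values will vary with your Windows Server version and media mix):

```powershell
# Confirms S2D is enabled and shows how the built-in cache is configured.
# With SSD cache in front of HDD capacity, CacheModeHDD defaults to ReadWrite.
Get-ClusterStorageSpacesDirect | Format-List State, CacheState, CacheModeHDD, CacheModeSSD
```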
In your setup with 3 SSDs and 3 HDDs per server, Storage Spaces Direct will do the following:
- Automatic Tiering: It will identify all the SSDs across both nodes and claim them for the "cache tier," and claim all the HDDs for the "capacity tier." This happens automatically when you enable S2D (you can verify the reported media types up front; see the check after this list).
- Write Caching: When a virtual machine writes data, the data is first written to the fast SSD cache on both nodes to ensure redundancy. The write is then acknowledged back to the VM, making it very fast.
- Read Caching: Frequently accessed data is kept in the SSD cache for fast read performance.
- Destaging: In the background, as the SSD cache fills, S2D intelligently moves "cold" (less frequently accessed) data from the SSD cache down to the HDD capacity tier, freeing up space in the cache for new writes.
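Because the automatic cache/capacity split is driven entirely by the MediaType each drive reports, it is worth confirming that each node sees the drives the way you expect before enabling S2D. A minimal sketch of that check (run on each node; no S2D-specific setup required):

```powershell
# Lists drives that are eligible for pooling, grouped by media type.
# In your setup you should see 3 x SSD (future cache) and 3 x HDD
# (future capacity) per node; drives with existing partitions show CanPool = False.
Get-PhysicalDisk |
    Where-Object CanPool -eq $true |
    Sort-Object MediaType |
    Format-Table FriendlyName, MediaType, CanPool,
        @{ Name = 'SizeGB'; Expression = { [math]::Round($_.Size / 1GB) } }
```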
Your plan for a 2-node cluster with a third server as a witness is correct.
- 2-Node Cluster: Valid for S2D. For resiliency, you will typically use two-way mirroring, where a full copy of the data is stored on each node. For even better protection against a drive failure during a node outage, you can look into nested resiliency (Windows Server 2019 and later), which keeps extra data copies within each node; there is an example in the sketch after this list.
- Witness/Quorum: With an even number of nodes (like two), a witness is mandatory to prevent a "split-brain" scenario where the nodes can't determine which one should be online if the network between them fails.
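To make the whole flow concrete, here is a minimal sketch of the build. The node names, cluster name, and witness share path are placeholders you would replace with your own, and the volume sizes are just examples:

```powershell
# 1. Validate the nodes for S2D before creating anything.
Test-Cluster -Node HV-NODE1, HV-NODE2 `
    -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# 2. Create the cluster with no storage, then let S2D claim the drives:
#    the SSDs become the cache, the HDDs become the capacity tier.
New-Cluster -Name S2D-CLUSTER -Node HV-NODE1, HV-NODE2 -NoStorage
Enable-ClusterStorageSpacesDirect

# 3. Point the quorum at a file share hosted on your third (witness) server.
Set-ClusterQuorum -FileShareWitness \\WITNESS-SRV\S2DWitness

# 4. A two-way mirrored volume for your VMs (one full data copy per node).
New-Volume -FriendlyName "Volume1" -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName "S2D*" -Size 1TB -ResiliencySettingName Mirror

# Optional: nested two-way mirror (Windows Server 2019+) keeps four data
# copies, so it survives a node outage plus a drive failure on the
# surviving node, at roughly 25% capacity efficiency.
New-StorageTier -StoragePoolFriendlyName "S2D on S2D-CLUSTER" `
    -FriendlyName NestedMirror -ResiliencySettingName Mirror `
    -MediaType HDD -NumberOfDataCopies 4
New-Volume -StoragePoolFriendlyName "S2D on S2D-CLUSTER" -FriendlyName Volume2 `
    -StorageTierFriendlyNames NestedMirror -StorageTierSizes 500GB
```

Note that Enable-ClusterStorageSpacesDirect will prompt for confirmation and can take a few minutes while it claims the drives and builds the storage pool.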
I hope this information and these keywords help point you in the right direction for your research.