Provision a scale-out file server (SOFS) cluster from bare metal computers in the VMM fabric
In addition to adding existing file servers to an SOFS cluster in the System Center Virtual Machine Manager (VMM) fabric, VMM can discover and provision bare metal machines as SOFS cluster nodes. This article describes the steps for setting up a bare metal SOFS cluster in VMM.
Here's what you need for the deployment:
- Physical computers to deploy as SOFS cluster nodes. These computers must meet the prerequisites described in the table below. They can be running no operating system, or an operating system that will be overwritten during the deployment process.
- A virtual hard disk with an appropriate operating system, located on a VMM library share. To create the virtual hard disk, you can create a virtual machine, install the guest operating system, and then run Sysprep with the /generalize and /oobe options (see the sketch after this list). The operating system on the virtual hard disk that you deploy on the cluster nodes must support the boot from virtual hard disk (VHD) option.
- A PXE server configured with Windows Deployment Services, which is needed for bare metal deployment.
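For example, after you install the guest operating system in a virtual machine, you might generalize the installation and then convert the resulting disk to a fixed VHDX before copying it to the library. This is a minimal sketch, not part of the original procedure; the paths are hypothetical, and the Convert-VHD step (Hyper-V module) is an optional assumption.

```powershell
# Inside the guest VM: generalize the installation so it can be deployed to new hardware.
# /oobe makes the image boot into the Out-of-Box Experience on first start; /shutdown powers the VM off.
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown

# On the Hyper-V host, after the VM shuts down: optionally convert the disk to a fixed VHDX,
# which is recommended for production deployments. Paths shown are hypothetical examples.
Convert-VHD -Path 'D:\Build\ws-image.vhdx' `
    -DestinationPath '\\vmmlibrary\MSSCVMMLibrary\VHDs\ws-image-fixed.vhdx' `
    -VHDType Fixed
```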
Prerequisite | Details |
---|---|
BMC | Each physical computer must have a baseboard management controller (BMC) installed that enables out-of-band management by VMM. Through a BMC, you can access the computer remotely, independent of the operating system, and control system functions such as the ability to turn the computer off or on. The BMCs must use one of the supported out-of-band management protocols, and the management protocol must be enabled in the BMC settings. Supported protocols: Intelligent Platform Management Interface (IPMI) versions 1.5 or 2.0; Data Center Management Interface (DCMI) version 1.0; System Management Architecture for Server Hardware (SMASH) version 1.0 over WS-Management (WS-Man); custom protocols such as Integrated Lights-Out (iLO). The BMCs must use the latest version of firmware for the BMC model. The BMCs must be configured with sign-in credentials and must use either static IP addressing or DHCP. If you use DHCP, we recommend that you configure DHCP to assign a constant IP address to each BMC, for example, by using DHCP reservations. The VMM management server must be able to access the network segment on which the BMCs are configured. |
Operating system | Physical computers must be running Windows Server 2012 R2, Windows Server 2016, or Windows Server 2019 or later, depending on your VMM version. |
Accounts | You'll need two Run As accounts: one for joining computers to the domain, and one for access to the BMC on each computer (see the sketch after this table). |
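If you prefer to prepare these ahead of time from the VMM PowerShell module, the following is a minimal sketch; the account names are hypothetical.

```powershell
# Run from a VMM PowerShell session (VirtualMachineManager module).
# Run As account used to join the provisioned computers to the domain (hypothetical name).
New-SCRunAsAccount -Name "DomainJoinAccount" -Credential (Get-Credential -Message "Domain join credentials")

# Run As account that holds the BMC sign-in credentials used for out-of-band discovery (hypothetical name).
New-SCRunAsAccount -Name "BMCAdministrator" -Credential (Get-Credential -Message "BMC credentials")
```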
Prerequisite | Details |
---|---|
Deployment requirements | You must have a PXE server configured with Windows Deployment Services. If you have an existing PXE server in your environment configured with Windows Deployment Services, you can add that server to VMM and use it for provisioning (VMM will recognize only the servers it provisions; all other requests continue to be handled by the PXE server according to how it's configured). If you don't have an existing PXE server, you can deploy the Windows Deployment Services role on a server running an operating system supported by your VMM version (Windows Server 2012 R2, Windows Server 2016, or Windows Server 2019 or later). |
Location | The PXE server must be in the same subnet as the physical computers that you want to provision. |
Windows Deployment Services installation | When you install Windows Deployment Services, you must install both the Deployment server and Transport server options (see the sketch after this table). You don't need to add images; during host deployment, VMM uses a virtual hard disk that you've created and stored in the library. You also don't need to configure settings on the PXE response tab, because VMM provides its own PXE provider. |
Permissions | When you add a PXE server, you must specify credentials for an account that has local administrator permissions on the PXE server. You can enter a user name and password or specify a Run As account. You can create the Run As account before you begin or during deployment. |
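Assuming the PXE server runs Windows Server, both role services can be installed in one step with the ServerManager cmdlets. This is a sketch; the remote server name is hypothetical.

```powershell
# Install Windows Deployment Services with both the Deployment server and Transport server role services.
Install-WindowsFeature -Name WDS-Deployment, WDS-Transport -IncludeManagementTools

# Or target a remote server (hypothetical name) from a management workstation.
Install-WindowsFeature -Name WDS-Deployment, WDS-Transport -IncludeManagementTools -ComputerName "PXE01"
```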
Prerequisite | Details |
---|---|
Virtual hard disk | Ensure that you have a generalized virtual hard disk in a VMM library share. It must be running Windows Server 2012 R2, Windows Server 2016, or Windows Server 2019 or later, depending on your VMM version. We recommend that for production servers, you use a fixed disk (.vhd or .vhdx file format) to increase performance and to help protect user data. |
Dynamic disk | When you create a physical computer profile, VMM converts a dynamic disk to a fixed disk. |
Custom drivers | If you plan to assign custom drivers to a physical computer profile, add them to a VMM library share in one or more folders with a .CR (custom resources) extension so that VMM recognizes them as custom resources. |
Answer file | Like custom resources, if you want a physical computer profile to include references to an answer file (Unattend.xml file), create the answer file and add it to a VMM library share before you start deployment. For example, you might want to create an answer file to enable Remote Desktop Services and place it on a library share. Then you can select that file when you configure a physical computer profile. |
RDS | If you use Remote Desktop Services (RDS) to manage servers, we recommend that you enable RDS connections in the image. You can also enable RDS using an answer file in the physical computer profile. |
Logical networks | If you've already configured logical networks or logical switches in VMM, you can include those configurations in the physical computer profile. To include static IP addressing controlled through a logical network in a physical computer profile, configure the logical network first. The logical network must include at least one network site and static IP address pool, and the network site must be available to the host group (or a parent host group) where you want to assign the hosts that you'll be creating from bare metal (see the sketch after this table). |
Logical switch | To use a logical switch, install all the necessary virtual switch extensions and extension providers, and create the switch before you create the physical computer profile. In the logical switch, as a best practice, include one or more port classifications for the virtual ports. To apply a logical switch to physical adapters in a physical computer profile, ensure that you've installed the intended number of NICs on the physical computer. |
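If you plan to use static IP addressing from a logical network, you can confirm from PowerShell that the logical network has at least one network site and a static IP address pool before you build the profile. This is a hedged sketch: the logical network name is hypothetical, and parameter sets can vary by VMM version.

```powershell
# Logical network expected to supply static addresses to the provisioned nodes (hypothetical name).
$logicalNetwork = Get-SCLogicalNetwork -Name "Management"

# Network sites (logical network definitions) defined for the logical network, and their static IP pools.
$sites = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNetwork
foreach ($site in $sites) {
    Get-SCStaticIPAddressPool -LogicalNetworkDefinition $site
}
```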
- Before you start: Verify the prerequisites described in the tables above.
- Prepare physical computer: Set up the BIOS on each physical computer to support virtualization.
- Prepare the PXE server environment: Add the PXE server to the VMM fabric.
- Add driver files: Add driver files to the VMM library if you want to use custom drivers.
- Create profile: Create a profile for the physical computers.
- Create the cluster: Run the Create Clustered File Server Wizard to discover the physical computers, configure the cluster, and start the cluster deployment. The physical computers boot from a customized Windows PE image on the PXE server. The Failover Cluster and File Server roles are enabled. After the cluster is created, the Scale-Out File Server role is enabled. The computer is then restarted.
- Add nodes to SOFS cluster: After you've provisioned the nodes, you can create a new cluster with them or add them to an existing one.
Prepare each computer to support virtualization, as follows:
- Set the BIOS boot order to boot from a Pre-Boot Execution Environment (PXE)-enabled network adapter as the first device.
- Configure the sign-in credentials and IP address settings for the BMC on each computer.
- If your environment has multiple DNS servers, where replication can take some time, we strongly recommend that you create DNS entries for the computer names that will be assigned to the physical computers, and allow time for DNS replication to occur. Otherwise, the deployment of the computers can fail.
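In a multi-DNS-server environment, one way to pre-create those records is with the DnsServer module, as in the sketch below; the zone, node names, and addresses are hypothetical.

```powershell
# Pre-create A records for the computer names that will be assigned to the SOFS nodes
# (hypothetical zone, names, and addresses), then allow time for DNS replication.
Add-DnsServerResourceRecordA -ZoneName "contoso.com" -Name "SOFS-Node1" -IPv4Address "10.10.0.21"
Add-DnsServerResourceRecordA -ZoneName "contoso.com" -Name "SOFS-Node2" -IPv4Address "10.10.0.22"
```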
- Select Fabric > Servers > Home > Add > Add Resources > PXE Server.
- In Computer name, specify the PXE server name.
- Add the credentials for an account that has local administrator permissions on the PXE server. You can specify an existing Run As account, create a new one, or enter user credentials manually in the format domain_name\user_name. Then select Add.
- In Jobs, verify that the job status is Completed, and close the dialog. The job sets up the new PXE server, installs the VMM agent on the PXE server, imports a new Windows Preinstallation Environment (Windows PE) image, and adds the machine account to VMM for the PXE server.
- Verify that the PXE server is added in Fabric > Servers > PXE Servers. The agent status must be Responding.
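The same step can be scripted from the VMM PowerShell module. A minimal sketch, assuming the PXE server name and Run As account name shown (both hypothetical):

```powershell
# Run As account with local administrator permissions on the PXE server (hypothetical name).
$pxeRunAs = Get-SCRunAsAccount -Name "PXEAdministrator"

# Add the WDS/PXE server to the VMM fabric; VMM installs its agent and PXE provider on it.
Add-SCPXEServer -ComputerName "PXE01.contoso.com" -Credential $pxeRunAs

# Confirm the server was added and check its agent status.
Get-SCPXEServer
```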
If you plan to assign custom drivers, the driver files must exist in the library. You can tag the drivers in the library so that you can later filter them by tag. After the files are added, when you configure a physical computer profile, you can specify the driver files. VMM installs the specified drivers when it installs the operating system on a physical computer.
In the physical computer profile, you can select to filter the drivers by tags, or you can select to filter drivers with matching Plug and Play (PnP) IDs on the physical computer. If you select to filter the drivers by tags, VMM determines the drivers to apply by matching the tags that you assign to the drivers in the library to the tags that you assign in the profile. If you select to filter drivers with matching PnP IDs, you don't need to assign custom tags.
- Locate a driver package that you want to add to the library.
- In the library share that is located on the library server associated with the host group where you want to deploy the physical computers, create a folder to store the drivers and copy the driver package to the folder.
- We strongly recommend that you create a separate folder for each driver package, and that you don't mix resources in the driver folders. If you include other library resources such as .iso images, .vhd files, or scripts with an .inf file name extension in the same folder, the VMM library server won't discover those resources. Also, when you delete an .inf driver package from the library, VMM deletes the entire folder where the driver .inf file resides.
- In the VMM console > Library > Library Servers, expand the library server where the share is located, right-click the share, and select Refresh. After the library refreshes, the folder appears.
- Assign tags if required. In Library, expand the folder that you created to store the drivers, and select the folder that contains the driver package.
- In the Physical Library Objects, right-click the driver .inf file and select Properties.
- In the Driver File Name Properties > Custom tags, enter custom tags separated by a semicolon, or select Select to assign available tags, or to create and assign new ones. If you select Select and then select New Tag, you can change the name of the tag after you select OK. For example, if you added a network adapter driver file, you could create a tag that is named ServerModel NetworkAdapterModel, where ServerModel is the server model and NetworkAdapterModel is the network adapter model.
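Tag assignment can also be done from PowerShell after the share is refreshed. A minimal sketch; the share name, folder, and tag values are hypothetical, and the Set-SCDriverPackage parameters shown should be verified against your VMM version with Get-Help.

```powershell
# Refresh the library share so VMM discovers the newly copied driver package (hypothetical share name).
$share = Get-SCLibraryShare | Where-Object { $_.Name -eq "MSSCVMMLibrary" }
Read-SCLibraryShare -LibraryShare $share

# Find the imported driver .inf package and assign tags used later for driver filtering in the profile.
$driverPackage = Get-SCDriverPackage | Where-Object { $_.SharePath -like "*\Drivers\ContosoNIC*" }
Set-SCDriverPackage -DriverPackage $driverPackage -Tag @("ContosoServer", "ContosoNIC")
```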
Before you start, determine whether the physical computers use Extensible Firmware Interface (EFI) or BIOS. If you've both, create a separate profile for each type.
- Select Library > Home > Create > Physical Computer Profile.
- In the New Physical Computer Profiles Wizard > Profile Description, enter a name and description and select VM host.
- In OS Image > Virtual hard disk file > Browse, select the generalized virtual hard disk that you added to the library share. By default, if the disk is dynamic, VMM converts it to a fixed disk during host deployment. We recommend that for production servers, you use a fixed disk to increase performance and help protect user data.
- In Hardware Configuration > Management NIC, select the network adapter you'll use to communicate with VMM and whether to use DHCP or a static address. If you want to use Consistent Device Naming (CDN) for the adapter or configure logical switches and ports, select Physical Properties. Select Add to add the adapter.
- In Disk, specify the partitioning scheme for the first disk. You can use GPT if the physical computer profile is EFI. In Partition Information, select the volume label, whether to use all the remaining free space or a specific size, and whether to designate the partition as the boot partition. You can also add a new disk or partition. During deployment, VMM will copy the virtual hard disk file to the boot partition and automatically create a system partition on the same disk.
- In Driver filter, filter the drivers that will be applied to the operating system during host deployment. You can filter by Plug and Play ID or by specific tags. If you select to filter drivers with matching tags, ensure that you've added driver files to the library and assigned the corresponding tags.
- In OS Configuration, specify the domain that the Hyper-V host or cluster must join, and specify the local admin credentials and identity information. Add the product key for installation, and set the time zone. In GUIRunOnce, you can specify one or more commands that will run when the user signs in to the Hyper-V host for the first time.
- In Host Settings, specify the path on the host for storing the files associated with virtual machines placed on the host. Don't specify drive C because it's not available for placement. If you don't specify a path, VMM placement will determine the most suitable location.
- In Summary, verify the settings. Wait until Jobs shows a status of Completed, and verify the profile in Library > Profiles > Physical Computer Profiles.
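A physical computer profile can also be built from PowerShell. The sketch below is only an outline under stated assumptions: the VHD name, domain, Run As account names, and time-zone index are hypothetical, and the exact parameter sets of New-SCPhysicalComputerNetworkAdapterProfile and New-SCPhysicalComputerProfile vary between VMM versions, so check Get-Help before using it.

```powershell
# Generalized virtual hard disk already stored in the library (hypothetical name).
$vhd = Get-SCVirtualHardDisk -Name "ws-image-fixed.vhdx"

# Management NIC definition for the profile; this one uses DHCP (parameter names are assumptions).
$mgmtNic = New-SCPhysicalComputerNetworkAdapterProfile -SetAsManagementNIC -SetAsPhysicalNetworkAdapter -UseDhcpForIPConfiguration

# Run As accounts for domain join and local administrator (hypothetical names).
$domainJoin = Get-SCRunAsAccount -Name "DomainJoinAccount"
$localAdmin = Get-SCRunAsAccount -Name "LocalAdmin"

# Minimal profile; additional parameters (disk layout, product key, answer file, driver tags) are omitted here.
New-SCPhysicalComputerProfile -Name "SOFSNodeProfile" -Description "Bare-metal SOFS node" `
    -VirtualHardDisk $vhd -Domain "contoso.com" -DomainJoinRunAsAccount $domainJoin `
    -LocalAdministratorCredential $localAdmin -TimeZone 35 `
    -PhysicalComputerNetworkAdapterProfile $mgmtNic
```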
The Create Clustered File Server Wizard does the following:
- Discovers the physical computers through out-of-band management.
- Deploys the Windows Server operating system image on the computers using the physical computer profile (if configured to do so).
- Enables the file server role on the computers.
- Enables the Scale-Out File Server role on the cluster.
- Adds the provisioned computers as a Scale-Out File Server cluster under VMM management.
Run the wizard:
Select Fabric > Servers > Home > Create > File Server Cluster.
In the Create Clustered File Server Wizard > General, enter a cluster name, file server name, and cluster IP addresses if needed.
In Resource Type, select the option to provision bare-metal computers. Select the physical computer profile and select Next.
In Credentials and Protocols, select Browse next to the Run As account and choose the account with permissions to access the BMC. In the Protocol list, select the out-of-band management protocol you want to use for discovery. If you want to use DCMI, select Intelligent Platform Management Interface (IPMI). DCMI 1.0 isn't listed, but it's supported. Ensure that you use the latest version of firmware for the BMC model.
In Discovery Scope, specify the IP address scope that includes the IP addresses of the BMCs. You can add a single IP address, an IP subnet, or an IP address range.
In Target Resources, select the computers you want to provision, allow time for deep discovery, and select items to review and modify information.
Note
If the number of physical network adapters doesn't match the number of physical adapters defined in the computer profile, you'll need to add the missing information. If you don't want to deploy a computer immediately, you can select its BMC IP address and select Remove.
In Deployment Customization, configure the settings and when there are no more warnings about missing information, select Next.
- DHCP: If your physical computer profile uses DHCP, select a BMC IP address and enter a computer name. Decide whether to skip the AD check. If you do the check, deployment will continue if the computer account exists. Select the entry for each BMC IP address.
- Static: If the profile uses static IP addresses, then for each BMC IP address, enter the MAC address of the computer's network adapter that's used to communicate with VMM. Select the logical network you want to use; the default is the one indicated in the profile. Select the IP subnet you want to use. The subnet list is scoped to what's defined for the logical network in the associated network sites. You must select the IP subnet that corresponds to the physical location in which you're deploying the server and the network to which the adapter is connected. You can automatically assign an IP address or assign a specific address.
In Summary, confirm the settings and select Finish. To confirm the cluster was added, select Fabric > Storage > File Servers.
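The wizard's work can also be approximated from PowerShell with New-SCPhysicalComputerConfig and New-SCStorageFileServer. The sketch below is a rough outline only: the names, addresses, and several parameter names are assumptions that may not match your VMM version, so validate them against Get-Help New-SCStorageFileServer before relying on it.

```powershell
# BMC Run As account and the physical computer profile created earlier (hypothetical names).
$bmcRunAs    = Get-SCRunAsAccount -Name "BMCAdministrator"
$nodeProfile = Get-SCPhysicalComputerProfile -Name "SOFSNodeProfile"

# Per-node bare-metal configuration: one entry per BMC address (parameter names are assumptions).
$node1 = New-SCPhysicalComputerConfig -BMCAddress "10.10.0.101" -BMCRunAsAccount $bmcRunAs -BMCProtocol "IPMI" `
    -ComputerName "SOFS-Node1" -PhysicalComputerProfile $nodeProfile
$node2 = New-SCPhysicalComputerConfig -BMCAddress "10.10.0.102" -BMCRunAsAccount $bmcRunAs -BMCProtocol "IPMI" `
    -ComputerName "SOFS-Node2" -PhysicalComputerProfile $nodeProfile

# Provision the nodes and create the scale-out file server cluster under VMM management.
New-SCStorageFileServer -ClusterName "SOFSCluster01" -ScaleOutFileServerName "SOFS01" `
    -PhysicalComputerConfig @($node1, $node2) `
    -RunAsAccount (Get-SCRunAsAccount -Name "DomainJoinAccount") `
    -ClusterIPAddress @("10.10.0.50")
```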