Deployment – Installing SQL clusters with PDT

In the introductory post on PDT, I mentioned that PDT can deploy a highly available configuration. This is the first of three posts explaining how to configure PDT for high availability: this post covers SQL clusters, and the next two will cover VMM clusters and scale-out roles.

System Center 2012 SP1 supports traditional shared-storage SQL clusters as well as SQL AlwaysOn. PDT can install traditional shared-storage SQL clusters with SQL Server 2012, and the shared storage can be either SMB or SAN based.

The cluster to which SQL Server will be installed must be pre-created; PDT does not do that for you. All nodes must be joined to a Windows failover cluster, and the storage to be used must be configured. In the case of SMB, that just means the shares must be created – preferably on a Windows Server 2012 continuously available scale-out file server – and the SQL service account must have full control permissions on those shares. For SAN storage, the storage must be added as available storage to the cluster.
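If you would rather script these prerequisites than click through them, a minimal sketch follows – the cluster, node, file server, and account names are the hypothetical ones used in the examples below, so adjust them to your environment.

Import-Module FailoverClusters

# Create the Windows failover cluster that SQL Server will be installed to.
# The static address here is an example value, distinct from the SQL cluster IP.
New-Cluster -Name CLDB1 -Node CLDB1A,CLDB1B,CLDB1C -StaticAddress 192.168.1.225

# For SMB storage: create the share on the file server and grant the SQL
# service account full control on both the share and the NTFS folder.
Invoke-Command -ComputerName FS01 -ScriptBlock {
    New-Item -Path D:\Shares\CLDB1ORDB -ItemType Directory | Out-Null
    New-SmbShare -Name CLDB1ORDB -Path D:\Shares\CLDB1ORDB -FullAccess 'CONTOSO\sql'
    icacls D:\Shares\CLDB1ORDB /grant 'CONTOSO\sql:(OI)(CI)F'
}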

Once your cluster is created, you can enter its details in variable.xml. The first example below shows the information needed for a SQL cluster using SMB. Note that in the <Role> entry for the database server, we specify SQLCluster="True". You can install multiple instances on the same cluster; just remember that instance names must be unique within the cluster. Next, in the <SQL> section, instead of specifying an <Instance> you specify a <Cluster> and provide all required information as variables. The variables in the example specify the SQL service account, the SQL Agent service account, the UNC locations for database files, log files, and tempDB files, the IP address and cluster network information, and finally the nodes in the cluster and which of them should be preferred owners of the SQL cluster group being installed. You can specify as many nodes as Windows and SQL Server clustering allow.

<Roles>
  <Role Name="System Center 2012 SP1 Orchestrator Database Server" Server="ORDB.contoso.com" Instance="MSSQLSERVER" SQLCluster="True"></Role>
</Roles>
<SQL>
  <Cluster Cluster="ORDB.contoso.com" Version="SQL Server 2012">
    <Variable Name="SQLAdmins" Value="CONTOSO\SQL Admins" />
    <Variable Name="SQLAgtServiceAccount" Value="CONTOSO\sql" />
    <Variable Name="SQLAgtServiceAccountPassword" Value="password" />
    <Variable Name="SQLServiceAccount" Value="CONTOSO\sql" />
    <Variable Name="SQLServiceAccountPassword" Value="password" />
    <Variable Name="SQLInstallSQLDataDir" Value="\\FS01\CLDB1ORDB" />
    <Variable Name="SQLUserDBDir" Value="\\FS01\CLDB1ORDB\MSSQL11.$Instance\MSSQL\DATA" />
    <Variable Name="SQLUserDBLogDir" Value="\\FS01\CLDB2ORDB\MSSQL11.$Instance\MSSQL\DATA" />
    <Variable Name="SQLTempDBDir" Value="\\FS01\CLDB3ORDB\MSSQL11.$Instance\MSSQL\DATA" />
    <Variable Name="SQLTempDBLogDir" Value="\\FS01\CLDB3ORDB\MSSQL11.$Instance\MSSQL\DATA" />
    <Variable Name="SQLClusterIPAddress" Value="192.168.1.226" />
    <Variable Name="SQLClusterNetwork" Value="Cluster Network 1" />
    <Variable Name="SQLClusterIPSubnet" Value="255.255.255.0" />
    <Node Server="CLDB1A.contoso.com" Preferred="1"></Node>
    <Node Server="CLDB1C.contoso.com" Preferred="2"></Node>
    <Node Server="CLDB1B.contoso.com"></Node>
  </Cluster>
</SQL>

Our next example is for SQL clusters using SAN storage. The difference from the SMB example above is that instead of UNC paths we specify drive paths, and we also need to specify which cluster disks those drives correspond to.

<Roles>
  <Role Name="System Center 2012 SP1 Orchestrator Database Server" Server="ORDB.contoso.com" Instance="MSSQLSERVER" SQLCluster="True"></Role>
</Roles>
<SQL>
  <Cluster Cluster="ORDB.contoso.com" Version="SQL Server 2012">
    <Variable Name="SQLAdmins" Value="CONTOSO\SQL Admins" />
    <Variable Name="SQLAgtServiceAccount" Value="CONTOSO\sql" />
    <Variable Name="SQLAgtServiceAccountPassword" Value="password" />
    <Variable Name="SQLServiceAccount" Value="CONTOSO\sql" />
    <Variable Name="SQLServiceAccountPassword" Value="password" />
    <Variable Name="SQLInstallSQLDataDir" Value="H:\MSSQL11.$Instance\MSSQL\DATA" />
    <Variable Name="SQLUserDBDir" Value="H:\MSSQL11.$Instance\MSSQL\DATA" />
    <Variable Name="SQLUserDBLogDir" Value="I:\MSSQL11.$Instance\MSSQL\DATA" />
    <Variable Name="SQLTempDBDir" Value="J:\MSSQL11.$Instance\MSSQL\DATA" />
    <Variable Name="SQLTempDBLogDir" Value="J:\MSSQL11.$Instance\MSSQL\DATA" />
    <Variable Name="SQLClusterIPAddress" Value="192.168.1.226" />
    <Variable Name="SQLClusterNetwork" Value="Cluster Network 1" />
    <Variable Name="SQLClusterIPSubnet" Value="255.255.255.0" />
    <Variable Name="SQLClusterDisks">
      <Value>`"Cluster Disk 1`" `"Cluster Disk 2`" `"Cluster Disk 3`"</Value>
    </Variable>
    <Node Server="CLDB1A.contoso.com" Preferred="1"></Node>
    <Node Server="CLDB1C.contoso.com" Preferred="2"></Node>
    <Node Server="CLDB1B.contoso.com"></Node>
  </Cluster>
</SQL>
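One thing to verify in the SAN case: the names in SQLClusterDisks must match the cluster disk resource names exactly. Assuming the hypothetical CLDB1 cluster from above, a quick check looks like this:

Import-Module FailoverClusters

# Add any disks the cluster can see to available storage.
Get-ClusterAvailableDisk -Cluster CLDB1 | Add-ClusterDisk

# List the disk resource names - these are the exact strings that must
# appear in the SQLClusterDisks variable (for example "Cluster Disk 1").
Get-ClusterResource -Cluster CLDB1 |
    Where-Object ResourceType -eq 'Physical Disk' |
    Select-Object Name, State, OwnerGroup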

PDT uses the advanced cluster setup option of SQL Server, meaning that setup runs on all nodes simultaneously and then one node is used to complete the cluster configuration. In my lab, when installing multiple clustered SQL instances, the first instance takes about 30 minutes and each additional instance about 15 minutes.
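For reference, the two phases of the advanced cluster option look roughly like the following when run by hand – this is an illustration using the hypothetical values from the examples above, not the exact command lines PDT generates.

# Phase 1: prepare, run on every node in the cluster.
.\setup.exe /QS /ACTION=PrepareFailoverCluster /IACCEPTSQLSERVERLICENSETERMS `
    /FEATURES=SQLEngine /INSTANCENAME=MSSQLSERVER `
    /SQLSVCACCOUNT="CONTOSO\sql" /SQLSVCPASSWORD="password" `
    /AGTSVCACCOUNT="CONTOSO\sql" /AGTSVCPASSWORD="password"

# Phase 2: complete, run on one node only to bring the clustered instance online.
.\setup.exe /QS /ACTION=CompleteFailoverCluster /INSTANCENAME=MSSQLSERVER `
    /FAILOVERCLUSTERNETWORKNAME="ORDB" `
    /FAILOVERCLUSTERIPADDRESSES="IPv4;192.168.1.226;Cluster Network 1;255.255.255.0" `
    /SQLSYSADMINACCOUNTS="CONTOSO\SQL Admins" `
    /INSTALLSQLDATADIR="\\FS01\CLDB1ORDB"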

Clustering can be complex, mainly because of the network and storage configuration involved. However, once that is configured correctly, PDT makes it simple to deploy a SQL cluster as part of a highly available configuration of System Center 2012.

A couple of notes here, specific to PDT deployment of SQL and SQL clusters:

  • In PDT, Configuration Manager, App Controller, and Service Provider Foundation must use the default SQL instance (MSSQLSERVER).
  • The Data Protection Manager database cannot be clustered.
  • PDT validates that all nodes are members of the same cluster before deployment can begin – you can run the same check yourself with the snippet after this list.
  • Other information provided for a cluster is not validated before installation – if it is entered incorrectly, the SQL Server install will fail. However, as with other PDT failures, the information can be corrected and the deployment restarted.
  • Be sure to use the latest version of PDT – currently 2.4 – since clusters failed operating system version validation in previous versions.
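Since node membership is the only cluster information PDT validates before installation, it is worth double-checking the rest by eye. The membership check itself is easy to reproduce (again using the hypothetical cluster name from the examples):

Import-Module FailoverClusters

# Confirm that every node listed in variable.xml belongs to the same cluster.
Get-ClusterNode -Cluster CLDB1 | Select-Object Name, State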