Configure disaster recovery in SharePoint server farms by using SQL Server log shipping

Updated: 2009-05-21

This article describes how to use Microsoft SQL Server 2005 or Microsoft SQL Server 2008 log shipping to create a disaster recovery farm in a geographically distributed data center for Microsoft Office SharePoint Server 2007 with Service Pack 2 (SP2). By using this configuration, you can maintain a disaster recovery site that provides current search results when a failover occurs. The article assumes that you are familiar with the concepts and terms presented in Plan for availability (Office SharePoint Server).

It often takes many teams or roles in an organization to create and configure a secondary data center and farm. In order to configure and test the secondary environment, you need to confer with the administrators of the authentication providers, the SQL Server database administrators, and all affected SharePoint farm administrators. This article is intended primarily for SharePoint farm administrators to help them do the following:

  • Understand the requirements for creating log-shipped disaster recovery farms

  • Set up trial log-shipped environments

  • Communicate with the SQL Server database administrators who will configure log shipping for the production environments.

This article includes the following sections:

  • Introduction to log shipping

  • Overview of Office SharePoint Server and log shipping

  • Requirements for the secondary data center and farm

  • Configuring the log shipping environment

  • Failing over

  • Considerations when testing failover

  • Reconfigure log shipping or fail back

  • Summary

Introduction to log shipping

Log shipping enables you to configure SQL Server to continually send transaction log backups from a primary database on a primary server instance to one or more secondary databases on separate secondary server instances. The transaction log backups are applied to each secondary database individually. Continually backing up the transaction logs from a primary database and then copying and restoring them to a secondary database keeps the secondary database almost synchronized with the primary database. Log shipping can also include an optional third server instance, known as the monitor server, which records the history and status of backup and restore operations and raises alerts if these operations do not occur as scheduled.

Log shipping consists of three jobs. Each job performs one of the following operations:

  1. Backs up the transaction log at the primary server instance

  2. Copies the transaction log file to the secondary server instance

  3. Restores the log backup on the secondary server instance

The following diagram describes log shipping.

Overview of the log shipping process

For more information, see the SQL Server Books Online article Log shipping (https://go.microsoft.com/fwlink/?LinkId=151252).

Overview of Office SharePoint Server and log shipping

SQL Server log shipping can be used to send content databases, including My Sites databases, and single sign-on (SSO) databases from one farm that is running Office SharePoint Server 2007 with SP2 to one or more geographically dispersed secondary farms.

Important:

Although you can configure log shipping on a version other than Office SharePoint Server 2007 with SP2, we recommend that you use Office SharePoint Server 2007 with SP2 because it provides the following benefits:

  • When a content database is put in read-only mode, the site collections associated with that database are also put in read-only mode, and the user interface is changed to remove activities that require database changes.

  • Search treats content databases that have been detached and reattached to the same Web application in Office SharePoint Server 2007 with SP2 as known data sources, and performs incremental crawls instead of full crawls. This is important because, in log-shipped environments, we recommend that you frequently detach and reattach content databases on the secondary farm to update the configuration database on the secondary farm so that it can recognize new or removed site collections. The ability to perform incremental crawls after reattaching a database significantly reduces crawl time and increases search currency.

Uses of secondary farms

We assume that the main purpose of a secondary farm is disaster recovery. However, if you create a secondary farm that is running Office SharePoint Server 2007 with SP2, you can expose sites on the log-shipped secondary farm to users. You can either distribute a hosts file that points to the sites on the secondary farm, or define a dedicated alternate access mapping for each Web application on the secondary farm that you want to expose with a secondary namespace; for example, http://secondary.contoso.com or http://readonly.contoso.com. The sites that you expose will not offer write functionality to users. This article assumes that you are running Office SharePoint Server 2007 with SP2. For more information, see Run a farm that uses read-only databases (Office SharePoint Server).

Note:

If you create a secondary farm that is not running Office SharePoint Server 2007 with SP2, we recommend that you not expose any sites to users. Log-shipped farms that do not have Office SharePoint Server 2007 with SP2 installed are read-only, but provide no clear warnings for users who attempt to write data to a site. For more information about the issues in using Office SharePoint Server with a read-only content database, see the Knowledge Base article Using Microsoft Windows SharePoint Services with a content database that is configured as read-only in Microsoft SQL Server (https://go.microsoft.com/fwlink/?LinkID=117362).

Log shipping topology

The following diagram describes a scenario with two data centers and two farms configured to use log shipping. In this scenario, the disaster recovery data center hosts a read-only secondary farm.

Log-shipped farms before failover

There are two logical farms, one in each data center. Each farm is a separate installation, with separate configuration and Central Administration content databases, and separate Shared Services Providers (SSPs). Only the content databases and SSO database are log shipped from the primary data center to the secondary data center. SSP A provides search in the primary farm, and SSP B provides search in the secondary farm. A configuration database refresh script (C) is run on the secondary farm. As the diagram shows, it is important to coordinate the timing of three processes in the secondary environment so that they do not overlap:

  1. Processing log shipped databases

  2. Search crawling

  3. Configuration database refresh script

This topology can be repeated across many data centers, if you configure SQL Server log shipping to one or more additional data centers.

General considerations when log shipping with Office SharePoint Server

This section describes limitations on the use of log shipping with Office SharePoint Server 2007 with SP2.

  • By default, the failover process for log shipping is manual. You can create scripts to automate failover.

  • In the event of an unplanned failover, some data loss is possible, depending on the frequency of log shipping and the time of failure. It is possible to lose data from the last log shipping interval before a failure.

  • The configuration database cannot be log shipped to another farm because it contains computer-specific information. You must maintain the same customizations and configuration settings on both farms manually.

  • The search database cannot be log shipped successfully to the secondary farm because the search database, index file, and SSP database must be synchronized. To ensure search availability on a failover farm with log-shipped databases, you can use either of the following solutions:

    • Configure and run an SSP that is configured to provide search within the failover farm. This solution can provide search availability immediately after the secondary farm is up and running, and is suitable for large corpuses. This article describes how to configure and run a search SSP within the failover farm.

    • Restore the SSP from the primary farm to the failover farm by using SharePoint built-in backup and recovery. This solution provides search availability after the SSP has been restored and search has crawled the content again. If the interval required for recovering the SSP is within the recovery time objective of the farm, you might want to consider this solution. This solution is not described in detail in this article. For more information about how to back up and restore the search SSP, see Back up and restore Shared Services Providers (Office SharePoint Server 2007).

  • If you are running the profile service in the primary farm, we recommend that you configure the SSP in the secondary farm to run the profile service. To keep the profiles on all SSPs synchronized, you must use the User Profile Replication Engine that is included in the 32-bit version of the Microsoft SharePoint Administration Toolkit x86 (https://go.microsoft.com/fwlink/?LinkId=151962) or the 64-bit version of the Microsoft SharePoint Administration Toolkit x64 (https://go.microsoft.com/fwlink/?LinkID=142035). For more information, see User Profile Replication Engine tool (Office SharePoint Server).

  • We do not recommend log shipping databases other than content databases and SSO databases; for example, Microsoft Office Project Server 2007 databases. For databases not previously mentioned, we recommend backing up and restoring to the failover farm.

  • Site collections added to the primary farm are not automatically added to the configuration database on the secondary farm — they must be added by using either Stsadm operations or a script. For a sample script, see Create a script to refresh the list of sites in the secondary farm configuration database (refresh script).

Updates for Office SharePoint Server must be applied to the binaries in both the primary and secondary farms, but can be applied to the databases in the primary farm and then log shipped to the secondary farm. This article does not cover patching in detail, but an overview of the process is as follows:

  1. Pause log shipping.

  2. Detach the content databases from the Web application in the secondary farm through Central Administration or a script.

  3. Update both farms, starting with the primary farm.

    Important:

    Make sure that the update process has fully completed and is successful on both the primary farm and the secondary farm. The databases on the secondary farm are not updated by the upgrade process — they are updated by log shipping.

  4. Start log shipping.

  5. Because attempts to attach non-updated databases to the secondary farm fail and can leave the farm in an unsupported state, ensure that one or two cycles of log shipping are complete before you attach the log-shipped content databases in the secondary farm.

    Optional. You can also use the following query to determine whether the database schema from the primary farm was fully replicated to the secondary farm before you attach the databases.

    USE <contentdb>

    GO

    SELECT * FROM Versions

    The query returns version numbers in the following format.

    00000000-0000-0000-0000-000000000000

    The last version in the list is the most-recently installed version of Office SharePoint Server 2007.

    Important:

    Microsoft generally does not support running queries against the databases that are used by SharePoint Products and Technologies. The previous query is an allowed exception because it deals with metadata about the database. Direct queries can adversely affect system performance and reliability. For more information about the effects of direct changes to databases, see the Knowledge Base article Support for changes to the databases that are used by Office server products and by Windows SharePoint Services (https://go.microsoft.com/fwlink/?LinkID=105589).

  6. Attach the log-shipped databases to the secondary farm.

Performance considerations when log shipping with Office SharePoint Server

Analyze the amount of data that is being log shipped so that you correctly set the backup, copy, and restore job intervals for log shipping. The amount of data being log shipped is determined by the daily amount of change in the content databases. In our experience, a typical farm undergoes change of 2 percent to 4 percent per day, but with maintenance changes, the level of change can reach 5 percent to 7 percent at peak times. To determine the amount of change in the content databases in the system, for each content database that you log ship, calculate the total size of the transaction log backups over a given time interval, and express that total as a percentage of the size of the primary database.
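
For example, the following Transact-SQL is a minimal sketch of one way to estimate the change rate from the backup history that SQL Server keeps in the msdb database. The database name WSS_Content_01 and the 24-hour window are placeholders; substitute your own content database names and measurement interval.

-- Total size, in MB, of the transaction log backups taken for one content database
-- during the last 24 hours.
SELECT SUM(bs.backup_size) / 1048576.0 AS LogBackupMBLast24Hours
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = 'WSS_Content_01'
  AND bs.type = 'L'
  AND bs.backup_start_date > DATEADD(hour, -24, GETDATE());
GO

-- Size, in MB, of the data files of the primary database, for the percentage calculation.
USE WSS_Content_01
GO
SELECT SUM(size) * 8 / 1024 AS DataFileSizeMB
FROM sys.database_files
WHERE type_desc = 'ROWS';
GO

Dividing the log backup total by the data file size gives an approximate daily change percentage for that database.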

We have found that it is better to back up and copy many small transaction logs instead of a few large transaction logs. We recommend that you schedule log backups and copying at frequent intervals. You can restore the transaction logs at less-frequent intervals. You might want to start by using backup and copy intervals of 5 minutes, and a restore interval of 15 minutes. SQL Server 2008 includes the capability to use log shipping intervals of less than a minute. For more information, see Scheduling Sub-Minute Log Shipping in SQL Server 2008 (https://go.microsoft.com/fwlink/?LinkId=151253).

You might encounter performance issues if the time that the system takes to ship logs consistently exceeds the time that is required to create new logs; that is, if you are always falling behind the log shipping schedule. This kind of problem can be caused by throughput or latency issues. If you have such issues, we recommend that you consider using Windows Distributed File System Replication (DFSR) with the Active Directory directory service running on Windows Server 2003 R2, or with Active Directory Domain Services (AD DS) running on Windows Server 2008, to replace the log-shipping copy job. For more information about how to use DFSR, see Overview of the Distributed File System Solution in Microsoft Windows Server 2003 R2 (https://go.microsoft.com/fwlink/?LinkId=150764) and DFS Step-by-Step Guide for Windows Server 2008 (https://go.microsoft.com/fwlink/?LinkId=150765).

The following graph compares the throughput provided by various replication technologies that can be used to copy log-shipped transaction logs.

Replication throughput graph

SQL Server 2008 also includes the capability to compress backups to reduce the size of the files that you are log shipping. For more information, see Tuning the Performance of Backup Compression in SQL Server 2008, part 1 (https://go.microsoft.com/fwlink/?LinkId=151254) and Tuning Backup Compression Part 2 (https://go.microsoft.com/fwlink/?LinkId=151255).

Security considerations when log shipping with Office SharePoint Server

For SQL Server log shipping with Office SharePoint Server 2007 with SP2, members of the team must have the following permissions:

  • To configure Office SharePoint Server 2007 with SP2 with log shipping and perform the procedures in this article, a team member must be a member of the Farm Administrators SharePoint group.

  • To configure SQL Server log shipping and perform the procedures in this article, a team member must be a member of the sysadmin fixed server role on each server instance.

When a database administrator sets up a log-shipped database, the SQL Server logins and permissions that the database needs to be used with a SharePoint farm are not automatically configured in the master and msdb databases on the log-shipped server. Instead, you must configure the permissions for the required logins. These permissions include, but are not limited to, the following:

  • The Central Administration application pool account must be a member of the dbcreator and securityadmin fixed server roles.

  • All application pool accounts and the search services and default content access accounts must have SQL Server logins, although these accounts are not assigned to SQL Server fixed server roles or fixed database roles.

  • Members of the Farm Administrators SharePoint group must also have SQL Server logins and must be members of the same roles as those of the Central Administration application pool account.

We recommend that you transfer the logins and permissions from the principal server to the mirror server by running a script. An example script is available in Knowledge Base article 918992 How to transfer the logins and the passwords between instances of SQL Server 2005 (https://go.microsoft.com/fwlink/?LinkId=122053). For more general information about how to transfer SQL Server metadata between instances, see the SQL Server Books Online article Managing Metadata When Making a Database Available on Another Server Instance (https://go.microsoft.com/fwlink/?LinkId=122055) and Knowledge Base article 321247 How to configure security for SQL Server log shipping (https://go.microsoft.com/fwlink/?LinkId=150830).
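
The following Transact-SQL is a minimal sketch, with placeholder account names, of how the required logins and role memberships might be created directly on the secondary server instance. The script in Knowledge Base article 918992 remains the recommended way to copy logins together with their original security identifiers (SIDs) and passwords.

-- Central Administration application pool account (placeholder name): needs a login
-- plus membership in the dbcreator and securityadmin fixed server roles.
CREATE LOGIN [CONTOSO\CentralAdminAppPool] FROM WINDOWS;
EXEC sp_addsrvrolemember @loginame = 'CONTOSO\CentralAdminAppPool', @rolename = 'dbcreator';
EXEC sp_addsrvrolemember @loginame = 'CONTOSO\CentralAdminAppPool', @rolename = 'securityadmin';

-- Other application pool, search service, and default content access accounts
-- (placeholder names): need logins but no fixed server or database roles.
CREATE LOGIN [CONTOSO\ContentAppPool] FROM WINDOWS;
CREATE LOGIN [CONTOSO\OSearchService] FROM WINDOWS;
CREATE LOGIN [CONTOSO\ContentAccess] FROM WINDOWS;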

The backup and restore directories in the log shipping configuration must adhere to the following requirements:

  • For the backup job to be successful, the SQL Server service account on the primary server instance and the proxy account of the backup job (by default, the SQL Server Agent account on the primary server instance) must have read/write permissions to the backup directory.

  • For the copy job to be successful, the proxy account of the copy job (by default, the SQL Server Agent account on the secondary server instance) must have read permissions to the backup directory and write permissions to the copy directory.

  • For the restore job to be successful, the SQL Server service account on the secondary server instance and the proxy account of the restore job (by default, the SQL Server Agent account on the secondary server instance) must have read/write permission to the copy directory.

Requirements for the secondary data center and farm

We make the following assumptions about the environment in the secondary data center.

The failover farm must have the following characteristics:

  • A separate configuration database and a separate Central Administration content database must be installed and maintained on the failover farm, which means that all configuration changes in the primary farm must be replicated manually in the failover farm.

    Information stored in the configuration database includes the following.

    Activated features

    Diagnostic logging settings

    Administrator-deployed form templates

    E-mail settings

    Alternate access mapping settings

    External service connection settings

    Antivirus settings

    Farm-level search settings

    Application pool settings, including service accounts (all accounts that run as Web applications, including the crawler account and the search account)

    HTML viewer settings

    Blocked file types

    Recycle Bin settings and other Web application general settings

    Content deployment settings

    Timer job settings

    Crawler impact rules

    Usage analysis processing settings

    Database names and locations

    Web application names and databases. Be sure to document the content database names associated with each Web application.

    Default quota templates

    Workflow management settings

    Note:

    If you have configured alternate access mapping for the primary farm, it is very important to configure it identically on the secondary farm at the point of failover. To document alternate access mapping settings, export them to a text file by using the command stsadm -o enumalternatedomains.

  • All customizations, such as features, solutions, site templates, and third-party solutions such as IFilters, must be deployed on both farms. We recommend that you package all customizations as solutions to enable rapid deployment. For more information, see Deploy customizations.

  • Content databases must be set to use the full recovery model, as shown in the Transact-SQL sketch after this list. For information about how to set the recovery model for a database, see How to: View or Change the Recovery Model of a Database (SQL Server Management Studio) (https://go.microsoft.com/fwlink/?LinkId=151701).

  • The primary server and secondary servers must be running the same edition of SQL Server 2005 or SQL Server 2008. Log shipping is available in the Standard, Developer, and Enterprise editions.

  • If you plan to expose the log-shipped secondary farm to users, configure alternate access mapping with a secondary namespace for the secondary farm; for example, http://secondary.contoso.com or http://readonly.contoso.com. For more information, see Configure alternate access mapping. At failover, you replace this alternate access mapping with a mapping that is identical to that of the primary farm.
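
The following Transact-SQL is a minimal sketch, using a placeholder database name, of how to check and, if necessary, set the full recovery model mentioned in the list above.

-- Check the current recovery model of a content database.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'WSS_Content_01';

-- Switch the content database to the full recovery model.
ALTER DATABASE [WSS_Content_01] SET RECOVERY FULL;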

Configuring the log shipping environment

This section provides detailed procedures for configuring log shipping.

The procedures in this section assume that the organization understands the following prerequisites:

  • How to deploy Office SharePoint Server

  • How to set application pool identities

  • How to stop and start the search service

  • How to configure a Domain Name System (DNS) to stop and start accepting traffic

  • How to use the hosts file to enable and disable local sites

The configuration phase is composed of the following procedures:

  • Prepare the primary farm

  • Prepare the secondary farm

  • Configure log shipping

  • Attach the log-shipped databases to the secondary SharePoint farm

  • Configure search and profiles for the secondary farm

  • Create a script to refresh the list of sites in the secondary farm configuration database (refresh script)

  • Coordinate timing of log shipping jobs, search crawls, and the refresh script

  • Optional. Maintain SSO on the secondary farm

  • Optional. Provide users with access to the read-only farm

Prepare the primary farm

The steps to prepare the primary farm include the following:

  1. Set the application pool identity on the Web applications to a domain account that is available on both farms. For more information, see Change the application pool identity for a Web application (Office SharePoint Server).

  2. Document all of the configuration settings so that they can be applied to the secondary farm. For more information, see Prepare to back up and restore a farm (Office SharePoint Server 2007). In particular, make sure to document the alternate access mapping settings by exporting them to a text file; use the command stsadm -o enumalternatedomains to export the settings.

  3. Document all of the customizations. The customizations are easiest to reapply to the secondary farm if they are packaged as solutions. For more information, see Deploy customizations.

Prepare the secondary farm

  1. Install and configure Office SharePoint Server on the secondary farm. For more information, see Deploy Office SharePoint Server 2007 in a server farm environment.

    If you have enough equipment, we recommend that you configure the same number of front-end Web servers and databases as in the primary farm. If you have less equipment, you can use fewer servers in the secondary farm, but it might not be able to handle the same load as the primary farm.

    Ensure that the version number and patching level is the same on both the primary and secondary farms. For more information, see Updates Resource Center for SharePoint Products and Technologies (https://go.microsoft.com/fwlink/?LinkID=106182).

  2. Apply all configurations and customizations that you have made to the primary farm. For more information, see Deploy customizations.

  3. Create duplicates of all Web applications that exist in the primary farm. Be sure to use the same application pool identity as the Web applications in the primary farm. For more information, see Create and manage Web applications (Office SharePoint Server).

  4. Disable the following timer jobs. For more information, see Manage SharePoint timer jobs (Office SharePoint Server).

    Bulk workflow task processing

    Profile Synchronization

    Site Collection: Delete

    Change Log

    Quick Profile Synchronization

    Usage Analysis

    Database Statistics

    Records Center Processing

    Variations Propagate Page Job Definition

    Dead Site Delete

    Recycle Bin

    Variations Propagate Site Job Definition

    Disk Quota Warning

    Scheduled Approval

    Windows SharePoint Services Watson Policy Update

    Expiration policy

    Scheduled Page Review

    Workflow

    Hold Processing and Reporting

    Scheduled Unpublish

    Workflow Auto Cleanup

    Immediate Alerts

    Search and Process

    Workflow Failover

    Information management policy

    Shared Services Provider Synchronizing Job

Configure log shipping

You can configure log shipping by using either SQL Server Management Studio or Transact-SQL. This article describes how to use Management Studio.

Configure log shipping on the primary server

  1. Open Management Studio on a database server in the primary farm.

  2. In the Object Explorer navigation pane, right-click the content database for the Web application, point to Tasks, and then click Ship Transaction Logs.

    The Database Properties dialog box appears.

  3. Select Enable this as a primary database in a log shipping configuration.

  4. Click Backup settings.

    The Transaction Log Backup Settings dialog box appears.

    1. In Network path to backup folder, enter the path of the backup folder in the primary farm.

    2. Enter values for Delete files older than and Alert if no backup occurs within.

    3. Review the schedule listed in the Backup job section. If you need to customize the schedule, click Schedule.

      Record when the log shipping jobs are scheduled to run so that you can schedule search crawls and other batch jobs around them.

    4. Optional. Review the setting in the Compression section if you want to use backup compression.

    5. Click OK.

  5. In the Database Properties dialog box, in the Secondary databases section, click Add.

    The Secondary Database Settings dialog box appears.

    • Click Connect and connect to the instance of SQL Server that you want to use as the secondary server. By default, the secondary database has the same name as the database on the primary server.

    • On the Initialize Secondary Database tab, select Yes, generate a full backup of the primary database and restore it into the secondary database (and create the secondary database if it doesn’t exist).

    • On the Copy Files tab, in the Destination folder for copied files box, type the path of the folder on the secondary server into which you want the transaction log backups copied.

    • On the Restore Transaction Log tab, in the Database state when restoring backups section, select Standby mode and clear Disconnect users in the database when restoring backups.

    • Click OK.

    • We recommend that you save settings in a script. In the Database Properties dialog box, click Script Configuration, and then click Script configuration to file.

      A Save As dialog box appears. Enter the folder where you want to save the file, and then click OK.

    • Click OK.

      All jobs will be executed one time to initialize log shipping, and will report success or failure.

  6. Repeat these steps for all databases that you plan to log ship. For more information, see How to: Enable Log Shipping (SQL Server Management Studio) (https://go.microsoft.com/fwlink/?LinkId=151644).

Optional. Replace the log shipping copy task with DFS Replication

  1. Enable and configure DFS Replication (DFSR) for the environment. For more information, see Replication (https://go.microsoft.com/fwlink/?LinkId=151670). For an example of configuring DFS Replication, see DFS Step-by-Step Guide for Windows Server 2008 (https://go.microsoft.com/fwlink/?LinkID=150765).

  2. Because DFSR will be used for transport, you must disable the log shipping copy job for each database that participates in the log shipping configuration. For more information, see How to: Disable or Enable a Job (SQL Server Management Studio) (https://go.microsoft.com/fwlink/?LinkId=151673).

Validate that log shipping was successful

  1. Start Management Studio on the database server in the secondary farm.

  2. In the Object Explorer navigation pane, verify that all log-shipped content databases have Standby or Read-Only status.

  3. Determine how long average log shipping jobs take to run on the secondary farm by running the jobs and timing their duration, as shown in the example query after this procedure. For more information, see Monitoring Log Shipping (https://go.microsoft.com/fwlink/?LinkId=151682).
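
As a supplement to the Object Explorer check, you can query the log shipping monitor tables in the msdb database on the secondary server instance. The following query is a sketch; it assumes the default monitoring tables that SQL Server 2005 and SQL Server 2008 populate on the secondary server.

-- Show, for each secondary database, the most recently copied and restored
-- transaction log backup files and their timestamps.
SELECT secondary_database,
       last_copied_file,
       last_copied_date,
       last_restored_file,
       last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;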

Attach the log-shipped databases to the secondary SharePoint farm

  1. On the SharePoint Central Administration Web site, on the Quick Launch, in the Central Administration section, click Application Management. The Application Management page opens.

    In the SharePoint Web Application Management section, click Content databases.

    The Manage Content Databases page opens.

  2. In the Database Name column, click the content database that you want to remove. The Manage Content Database Settings page opens.

  3. In the Remove Content Database section, select the Remove content database check box, and then click OK.

  4. On the Manage Content Databases page, click Add a content database. The Add Content Database page opens.

  5. Enter the appropriate database server and database name of the log-shipped content database, and then click OK.

  6. Repeat this procedure for all databases that you are log shipping.

    At this point, you can browse to the content on the secondary farm.

Configure search and profiles for the secondary farm

Configure search in the secondary farm to meet the business objectives for the disaster recovery scenario. Initially, you may want to search the same databases, with the same crawl settings and crawl rules, as the primary farm. If you determine that you cannot schedule crawls and log shipping so that they avoid overlapping, we recommend that you adjust the content that you include in crawls. For example, before failover, you might crawl only those databases that contain content of high business impact, and then crawl other content only after failover. For more information, see Restrict or increase the quantity of content that is crawled (Office SharePoint Server).

  1. Stop the SQL Server Agent job on the secondary farm to disable log shipping while you configure search.

  2. Configure search on the secondary farm.

    Determine how long search crawls take on the secondary farm. You can use data collected from the primary farm to estimate the time required on the secondary farm.

    Important:

    Be sure to schedule search crawls for times when log shipping jobs are not running. For more information, see Coordinate timing of log shipping jobs, search crawls, and the refresh script.

  3. Start the SQL Server Agent job on the secondary farm to enable log shipping.

  4. If you are using profiles, the profiles on the failover SSPs are not synchronized with the profiles on the primary SSPs; they remain in the state that they were in when first imported. To keep the profiles on all SSPs synchronized, use the User Profile Replication Engine that is included in the 32-bit version of the Microsoft SharePoint Administration Toolkit x86 (https://go.microsoft.com/fwlink/?LinkId=151962) or in the 64-bit version of the Microsoft SharePoint Administration Toolkit x64 (https://go.microsoft.com/fwlink/?LinkID=142035). For more information, see User Profile Replication Engine tool (Office SharePoint Server).

Create a script to refresh the list of sites in the secondary farm configuration database (refresh script)

Use the following example as a model to create a refresh script that you can run on the secondary farm when site collections have been added or deleted on the primary farm.

In the example script, replace <db_name1> and <db_name2> with the names of the log-shipped databases, <URL> with the URL of the Web application to which each database is attached, and <SQL_Server> with the name of the database server.

Add detach and attach sections to the script for each of the log-shipped databases.

@echo off
rem Refresh the site map on the secondary farm by detaching and reattaching each
rem log-shipped content database while the search and SQL Server Agent services are stopped.

SET PATH=C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\BIN;%PATH%

echo %time% Shutting down the Osearch service...
SC config Osearch start= disabled
SC stop Osearch

echo %time% Shutting down the SQL Server Agent service...
SC \\<SQL_Server> config SQLSERVERAGENT start= disabled
SC \\<SQL_Server> stop SQLSERVERAGENT

echo %time% About to refresh the site map...

echo %time% About to detach db <db_name1>
stsadm.exe -o deletecontentdb -url <URL> -databasename <db_name1> -databaseserver <SQL_Server>
echo %time% About to attach db <db_name1>
stsadm.exe -o addcontentdb -url <URL> -databasename <db_name1> -databaseserver <SQL_Server>

echo %time% About to detach db <db_name2>
stsadm.exe -o deletecontentdb -url <URL> -databasename <db_name2> -databaseserver <SQL_Server>
echo %time% About to attach db <db_name2>
stsadm.exe -o addcontentdb -url <URL> -databasename <db_name2> -databaseserver <SQL_Server>

rem --:: repeat for all databases ::--

echo %time% Restarting the Osearch service...
SC config Osearch start= demand
SC start Osearch

echo %time% Restarting the SQL Server Agent service...
SC \\<SQL_Server> config SQLSERVERAGENT start= demand
SC \\<SQL_Server> start SQLSERVERAGENT

echo on

Coordinate timing of log shipping jobs, search crawls, and the refresh script

  1. Determine how long average log shipping jobs take to run on the secondary farm, and when you want to schedule the jobs to run.

  2. Determine how long incremental crawls take on the secondary farm, and when you want to schedule crawls. You may be able to use crawl data from the primary farm to determine how long incremental crawls take to complete. For more information about scheduling an incremental crawl, see Schedule an incremental crawl (Office SharePoint Server 2007).

  3. If possible, schedule log shipping jobs and search crawls so that they do not overlap.

  4. If you cannot schedule log shipping and incremental search crawls so that they avoid overlapping, choose from the following options:

    • Run both log shipping jobs and search crawls manually, suspending one set of jobs while running the other.

    • Allow search crawls to take precedence over processing logs, and create a script to automatically start log shipping when the crawler process is not running.

    • If the only active process in the database is the crawler process, configure log shipping to wait until the database is not in use, and then process the shipped logs.

    If none of these options work, and you cannot schedule the amount of data that you are log shipping and the crawl times so that they do not overlap, consider reducing the scope of one process to make the system work; for example, if you cannot complete both log shipping and search crawls in the time available, crawl only high-business-impact content databases before failover, and start crawling other content after failover.

  5. Schedule the refresh script to run. If no site collections have been added to or deleted from the primary farm, you do not have to run the refresh script. When site collections are added or deleted, the refresh script must be run periodically by using the Windows Task Scheduler. When the refresh script runs, it suspends both the crawler and the log shipping processes. For more information about scheduling tasks, see Scheduling tasks (https://go.microsoft.com/fwlink/?LinkId=151894).

    If the refresh script is canceled while it is running, we recommend that you run the script manually to make sure that all the databases are reattached and that all the services are turned back on.

Optional. Maintain SSO on the secondary farm

  • Back up the encryption key after you initially set up SSO, and then back up the key again each time it is regenerated. For more information, see Back up SSO (Office SharePoint Server 2007).

    Be aware of the following restrictions on backing up the encryption key:

    • You must be a member of the SSO administrator account to back up the encryption key.

    • The encryption key cannot be backed up remotely. You must be logged on to the encryption-key server locally to back up the encryption key.

    • You must physically transfer the removable storage device that contains the SSO encryption key to the secondary farm, and restore it.

Optional. Provide users with access to the read-only farm

  1. If possible, provide users with an updated hosts file that points to the Web applications on the secondary farm that you want to expose.

  2. If you cannot distribute a hosts file, define a dedicated alternate access mapping for each Web application that you want to expose, such as http://readonly.contoso.com or http://secondary.contoso.com, and configure the mapping in the DNS.

    Note:

    If no alternate access mapping zones remain available for a particular Web application, this dedicated-mapping option is not possible.

Failing over

Failover can be performed manually, or can be scripted. This article describes manual failover only.

The following diagram shows a multifarm environment that has failed over. Log shipping is broken, and the farm administrators have performed the following actions:

  • Set the DNS to stop accepting traffic to the primary farm.

  • Applied unapplied transaction logs to the databases on the secondary server.

  • Switched the content databases in the secondary farm to read/write.

  • Set the DNS to accept traffic in the secondary farm.

Log-shipped farms after failover

Note:

This section describes how to perform a full (nontest) failover. For information on how to test failover, see Considerations when testing failover.

The failover phase is composed of the following procedures.

  • Disable all log shipping jobs in the primary farm

  • Stop accepting traffic to the primary farm

  • Back up the transaction logs on the primary server

  • Restore the latest transaction logs to the secondary server

  • Set content databases to read/write

  • Optional. Restore the SSO encryption key

  • Direct traffic to the secondary farm

  • Finish configuring the secondary environment

Disable all log shipping jobs in the primary farm

  1. If the primary farm is still available, and log shipping has not already stopped, disable all log shipping jobs on the database servers in the primary farm.

  2. If you cannot access the databases on the servers, execute the following Transact-SQL statement for each database, and skip to the procedure Set content databases to read/write.

    RESTORE DATABASE content_db WITH RECOVERY
    

Stop accepting traffic to the primary farm

  • Follow the recommended procedures for the DNS to stop accepting traffic to the primary farm. Do not forward traffic to the secondary farm yet.

    This article does not describe how to use or configure DNS. For more information about DNS, see DNS Technical Reference (https://go.microsoft.com/fwlink/?LinkId=151820).

    Note:

    Users will experience errors while the primary farm is unavailable.

Back up the transaction logs on the primary server

  1. Determine whether the primary farm is still available, and whether the network shared folder in which the backups are stored can be reached by both server farms. If either of these conditions is not met, go to the procedure Set content databases to read/write.

  2. In Management Studio, in the Object Explorer navigation pane, right-click a content database, point to Tasks, and click Backup. The Back Up Database dialog box appears.

  3. Click the Backup type drop-down list and select Transaction Log.

  4. In the Select a page pane, click Options.

  5. In the Transaction log section, select Back up the tail of the log and leave the database in the restoring state, and then click OK.

  6. Repeat this procedure for all log-shipped databases. A Transact-SQL equivalent of the tail-log backup is sketched after this procedure.
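
The following Transact-SQL is the approximate equivalent of the tail-log backup step above; the database name and backup path are placeholders.

-- Back up the tail of the log and leave the database in the restoring state.
BACKUP LOG [WSS_Content_01]
TO DISK = N'\\FileServer\LogShipping\WSS_Content_01_tail.trn'
WITH NORECOVERY;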

Restore the latest transaction logs to the secondary server

  1. This procedure applies only if the primary farm is still available and the network share on which the backups are stored can be reached by both server farms. If either of these conditions is not met, go to the procedure Set content databases to read/write.

    In Management Studio on the secondary server, right-click the content database, point to Tasks, click Restore, and then click Transaction Log. The Restore Transaction Log dialog box appears.

  2. On the General tab, select From File and Tape and enter the path to the backup file you created on the primary server.

  3. In the Recovery state section, select Leave the database ready for use by rolling back the uncommitted transactions. Additional transaction logs cannot be restored. (RESTORE WITH RECOVERY), and then click OK.

  4. Repeat this procedure for all log-shipped databases. A Transact-SQL equivalent of this restore is sketched after this procedure.
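
A Transact-SQL equivalent of this restore, with placeholder names, looks like the following.

-- Apply the tail-log backup and bring the secondary database online.
RESTORE LOG [WSS_Content_01]
FROM DISK = N'\\FileServer\LogShipping\WSS_Content_01_tail.trn'
WITH RECOVERY;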

Set content databases to read/write

After you switch the databases in the secondary farm to read/write, you will need to re-establish log shipping from a new backup file on the secondary server that is then copied to the primary server.

  1. In Management Studio, right-click the content database that you want to switch to read/write, and then click Properties. The Database Properties dialog box appears.

  2. In the Select a page pane, click Options, and, in the Other options list, scroll to the State section.

  3. On the Database Read-Only entry, click the arrow next to True, select False, and then click OK.

  4. Repeat for all other content databases. A Transact-SQL equivalent of this change is shown after this procedure.
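
Setting the Database Read-Only property to False in Management Studio corresponds to the following Transact-SQL statement, shown here with a placeholder database name.

-- Make the content database writable on the secondary farm.
ALTER DATABASE [WSS_Content_01] SET READ_WRITE;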

Optional. Restore the SSO encryption key

  1. On the secondary farm, restart the SSO service.

  2. Configure SSO on the secondary server.

  3. On the secondary farm, restart the SSO service.

  4. Restore the key from the removable drive.

  5. Create an application definition to verify that new application definitions can be created.

  6. Verify that you can get the credentials for various applications by using the GetCredentials method. For more information, see ISsoProvider.GetCredentials Method (Microsoft.SharePoint.Portal.SingleSignon) (https://go.microsoft.com/fwlink/?LinkId=151824).

Direct traffic to the secondary farm

  1. Verify that the alternate access mapping settings on the secondary farm match the settings from the primary farm.

  2. Follow the recommended procedures for the DNS to direct traffic to the secondary farm.

    NoteRemarque :

    After you divert traffic on the DNS to the secondary farm, users might need to close and reopen their browsers for the redirect operation to take effect.

Finish configuring the secondary environment

  1. Establish common maintenance processes.

    • Set up monitoring.

    • Implement production level backup processes.

  2. Begin restoring the former primary environment.

Considerations when testing failover

When you are testing failover, be clear about the level of failover testing that the service level agreements require you to perform. The following are some common examples of failover testing.

Verification that the secondary site is live, and is being crawled   For this type of failover test, you can provide the users with a hosts file or an alternate access mapping path to the secondary farm, so that they can verify that the farm is live and up to date. No additional steps are required.

Farm failover   In this type of test, the primary farm is taken down for a short announced interval, but the secondary farm is not switched to read/write status. For this type of test, follow the procedures in the Failing over section, with the following differences:

  1. To start the failover test, on the secondary farm, stop the SQL Server Agent job so that no logs are being processed. (Perform)

  2. Disable all log shipping jobs in the primary farm. (Do not perform)

  3. Stop accepting traffic to the primary farm. (Perform)

  4. Back up the transaction logs on the primary server. (Perform)

  5. Restore the latest transaction logs to the secondary server. (Perform)

  6. Set content databases to read/write. (Do not perform)

  7. Optional. Restore the SSO encryption key. (Do not perform)

  8. Direct traffic to the secondary farm. (Perform)

  9. Finish configuring the secondary environment. (Do not perform)

Planned data center failover with additional precautions   In this type of test, the primary data center is taken down for an announced interval. The secondary farm is switched to read/write status. For this type of test, follow the procedures in the Failing over section, with the following differences:

  1. To start the failover test, on the secondary farm, stop the SQL Server Agent job so that no logs are being processed. (Perform)

  2. Disable all log shipping jobs in the primary farm. (Perform)

  3. Stop accepting traffic to the primary farm. (Perform)

  4. Back up the transaction logs on the primary server. (Perform)

  5. Restore the latest transaction logs to the secondary server. (Perform)

  6. Set content databases to read/write. (Perform)

  7. Optional. Restore the SSO encryption key. (Perform)

  8. Direct traffic to the secondary farm. (Perform)

  9. Finish configuring the secondary environment. (Do not perform)

  10. Keep all of the log-shipped backups on the secondary farm so that you can use the database backup from the secondary farm to restart log shipping. (New step)

Planned data center failover without additional precautions   In this type of test, the primary data center is taken down for an announced interval to determine how long a true recovery will take. Some data loss is possible. The secondary farm is switched to read/write status. For this type of test, follow the procedures in the Failing over section, with the following differences:

  1. Before you begin, back up the log-shipped databases on the primary farm, so that you have a current backup to use to restart log shipping. (Perform)

  2. Disable all log shipping jobs in the primary farm. (Perform)

  3. Stop accepting traffic to the primary farm. (Perform)

  4. Back up the transaction logs on the primary server. (Perform)

  5. Restore the latest transaction logs to the secondary server. (Perform)

  6. Set content databases to read/write. (Perform)

  7. Optional. Restore the SSO encryption key. (Perform)

  8. Direct traffic to the secondary farm. (Perform)

  9. Finish configuring the secondary environment. (Do not perform)

Reconfigure log shipping or fail back

When the secondary farm is functional, the original primary database is accessible, and the problem with the primary farm has been investigated and resolved, you can either make the former primary database into the new secondary database, or deliberately fail over from the secondary farm to the former primary farm and then reconfigure log shipping as you initially had it structured.

  1. Configure log shipping between the secondary farm and the primary farm. Establish a log shipping relationship between the SQL Server instance on the secondary farm and the corresponding instance on the primary farm. For details, see the section Configuring the log shipping environment.

  2. On the primary farm, apply any unapplied transaction log backups to each database.

  3. Use the DNS to stop accepting traffic on the secondary farm.

  4. Fail over from the secondary farm to the original primary farm. For details, see the section Failing over, and then reconfigure log shipping.

  5. Optional. Restore SSO by using the local copy of the encryption key on media.

  6. Bring up the primary farm again, verify that all is working as expected, and then change the DNS to direct incoming traffic to the primary farm.

  7. Reconfigure log shipping from the primary farm to the secondary farm.

Summary

The use of log shipping to provide a disaster recovery farm in a secondary data center is complex. Make sure that you establish clear service level agreements with users, and test the environment on a regular basis.

Acknowledgements

The Microsoft Office SharePoint Server Content Publishing team thanks the following contributors and technical reviewers:

  • Doron Bar-Caspi, Senior Program Manager, SharePoint Customer Advisory Team

  • Lindsay Allen, Principal Program Manager Lead, SQL Server Customer Programs

  • Sanjay Mishra, Senior Program Manager, SQL Server Customer Programs

  • Burzin Patel, Senior Program Manager, SQL Server Customer Programs

  • Bill Baer, Technology Architect, Microsoft SharePoint Online

  • Cory Burns, Operations Engineer, Microsoft SharePoint Online

  • Steve Peschka, Senior Architect

  • JP Poissant, Senior Consultant II, Microsoft Consulting Services, Canada