Considerations for Merge Replication

There are a number of considerations for merge replication:

  • The addition of a uniqueidentifier column to published tables.

  • timestamp columns.

  • Large object (LOB) data types.

  • Bulk inserting data into published tables.

  • Publication compatibility level. For more information, see the "Compatibility Level for Merge Publications" section of Using Multiple Versions of SQL Server in a Replication Topology.

Addition of uniqueidentifier Column

Merge replication uses a globally unique identifier (GUID) column to identify each row during the merge replication process. If a published table does not have a uniqueidentifier column with the ROWGUIDCOL property and a unique index, replication adds one. Because publishing a table can therefore change its schema, ensure that any SELECT and INSERT statements that reference published tables use column lists. If the table is no longer published and replication added the column, the column is removed; if the column already existed, it is not removed.
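
For example, positional inserts and SELECT * can break after the rowguid column is added. A minimal sketch, assuming a hypothetical published table dbo.Customers:

    -- dbo.Customers is a hypothetical published table; replication may have
    -- added a uniqueidentifier column, so always name columns explicitly.
    INSERT INTO dbo.Customers (CustomerID, CustomerName)
    VALUES (1, N'Contoso');

    SELECT CustomerID, CustomerName
    FROM dbo.Customers
    WHERE CustomerID = 1;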

timestamp Columns

Merge replication supports timestamp columns. The timestamp column is replicated, but the literal timestamp values are not: the values are regenerated when the snapshot is applied at the Subscriber. Because the values are regenerated at the Subscriber, timestamp columns are excluded when article validation is performed.
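
For example, a published table might define a timestamp column as follows (a sketch; dbo.Orders is a hypothetical name). The column definition is replicated, but each node generates its own values:

    CREATE TABLE dbo.Orders
    (
        OrderID     int IDENTITY(1,1) PRIMARY KEY,
        Amount      money NOT NULL,
        LastChanged timestamp  -- values are regenerated when the snapshot is applied at the Subscriber
    );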

Large Object (LOB) Data Types

If a published table includes LOB columns, we recommend that you specify a value of true for the @stream_blob_columns parameter of sp_addmergearticle (Transact-SQL). If you do not, the entire LOB must be built in memory at the Publisher, which can cause the Publisher to run out of memory if the LOB is very large.

Important

Enabling this memory optimization can degrade the performance of the Merge Agent during synchronization. Use this option only when replicating columns that contain megabytes of data.
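
The following is a minimal sketch of enabling the option when adding an article; the publication and table names are hypothetical:

    -- Add an article that contains LOB columns and stream the LOB data
    -- instead of building it entirely in memory at the Publisher.
    EXEC sp_addmergearticle
        @publication         = N'SalesPublication',  -- hypothetical publication
        @article             = N'Documents',         -- hypothetical article
        @source_owner        = N'dbo',
        @source_object       = N'Documents',
        @stream_blob_columns = N'true';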

Updates to text, ntext, and image columns are replicated only if the column is updated explicitly by an UPDATE statement: the UPDATE fires a trigger that updates tracking metadata, which ensures that the transaction is propagated to other Subscribers. WRITETEXT and UPDATETEXT operations alone do not propagate the change to other sites. If your application uses WRITETEXT or UPDATETEXT to update text or ntext columns, explicitly add a dummy UPDATE statement after the WRITETEXT or UPDATETEXT operation, within the same transaction, to fire the trigger and thereby guarantee that the change is propagated, as shown in the following example. For more information about these data types, see ntext, text, and image (Transact-SQL).
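
A minimal sketch of this pattern, assuming a hypothetical dbo.Customers table whose text column Notes already holds a non-NULL value (so that TEXTPTR returns a valid pointer):

    BEGIN TRANSACTION;

    DECLARE @ptr varbinary(16);
    SELECT @ptr = TEXTPTR(Notes)
    FROM dbo.Customers
    WHERE CustomerID = 1;

    WRITETEXT dbo.Customers.Notes @ptr 'Replacement note text';

    -- Dummy UPDATE in the same transaction: fires the merge tracking trigger
    -- so the change is recorded and propagated to other sites.
    UPDATE dbo.Customers
    SET Notes = Notes
    WHERE CustomerID = 1;

    COMMIT TRANSACTION;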

Note

We recommend that you use the varchar(max), nvarchar(max), and varbinary(max) data types instead of the text, ntext, and image data types, respectively.
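
For example, a legacy text column can often be converted in place (a sketch; the table and column names are hypothetical):

    -- Convert a deprecated text column to varchar(max).
    ALTER TABLE dbo.Customers ALTER COLUMN Notes varchar(max) NULL;

    -- Optionally rewrite existing values so they are stored in the new format;
    -- until updated, rows retain their original storage format.
    UPDATE dbo.Customers SET Notes = Notes;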

Bulk Inserting Data into Published Tables

Merge replication uses triggers and system tables to track changes to published tables. If you bulk insert data by using the bcp utility or the BULK INSERT statement, triggers do not fire by default; inserts that are not tracked are not propagated to other nodes. To ensure that changes are tracked, we recommend one of the following approaches when performing bulk inserts (both are illustrated in the example after this list):

  • Use the FIRE_TRIGGERS option with the bcp utility or the BULK INSERT statement. This option ensures that triggers fire as they would in a typical insert operation. For more information, see bcp Utility and BULK INSERT (Transact-SQL).

  • Execute the stored procedure sp_addtabletocontents after performing the bulk insert. This procedure inserts references into the merge tracking tables for any rows in the source table that are not already tracked. For more information, see sp_addtabletocontents (Transact-SQL).
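
A minimal sketch of both approaches, using a hypothetical dbo.Customers table and data file path:

    -- Option 1: fire the merge tracking triggers during the bulk load.
    -- (For the bcp utility, the equivalent is the -h "FIRE_TRIGGERS" hint.)
    BULK INSERT dbo.Customers
    FROM 'C:\data\customers.dat'   -- hypothetical path
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        FIRE_TRIGGERS
    );

    -- Option 2: bulk load without firing triggers, then register any
    -- untracked rows in the merge tracking tables.
    BULK INSERT dbo.Customers
    FROM 'C:\data\customers.dat'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    EXEC sp_addtabletocontents @table_name = N'Customers';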