Hi Tim Braes,
#$TableExportoptions.Indexes = $true
#$TableExportoptions.NonClusteredIndexes = $true
Even if you comment out these two lines, the default values are still applied at runtime. Judging from the results of your two runs, the default for both options appears to be True, so indexes are still included in the comparison.
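If you want the comparison to ignore indexes, it may be safer to set the options to $false explicitly rather than commenting the lines out. This is only a sketch, assuming $TableExportoptions is the same options object used in your script:

# Sketch only: explicitly disable the index options instead of commenting them out
$TableExportoptions.Indexes = $false
$TableExportoptions.NonClusteredIndexes = $false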
Comparing the results of the two runs, the two tables most likely differ in their indexes. The check only passes when the number of indexes, the index types (clustered, non-clustered, columnstore, etc.), the columns included in each index and their order, and the other index properties (such as uniqueness, fill factor, and filter expressions) are all identical.
You can manually check the index differences between two tables using T-SQL:
USE [TestDatabase];
SELECT i.name AS IndexName,
       OBJECT_NAME(i.object_id) AS TableName,
       COL_NAME(ic.object_id, ic.column_id) AS ColumnName,
       i.type_desc AS IndexType,
       ic.index_column_id AS IndexColumnId,
       i.is_primary_key AS IsPrimaryKey,
       i.is_unique AS IsUnique,
       i.is_unique_constraint AS IsUniqueConstraint
FROM sys.indexes AS i
INNER JOIN sys.index_columns AS ic
    ON i.object_id = ic.object_id AND i.index_id = ic.index_id
WHERE i.object_id = OBJECT_ID('[dbo].[TABLENAME]')
ORDER BY i.name, ic.index_column_id;
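To compare the two tables side by side, one option is to run the same query against both of them from PowerShell and diff the result sets. This is a minimal sketch, assuming the SqlServer module (Invoke-Sqlcmd) is installed; the server, database, and table names below are placeholders:

# Sketch only: run the index-metadata query against both tables and diff the results.
# Replace the placeholder server/database/table names with your own.
$query = @"
SELECT i.name AS IndexName, i.type_desc AS IndexType,
       COL_NAME(ic.object_id, ic.column_id) AS ColumnName,
       ic.index_column_id AS IndexColumnId,
       i.is_primary_key AS IsPrimaryKey, i.is_unique AS IsUnique
FROM sys.indexes AS i
INNER JOIN sys.index_columns AS ic
    ON i.object_id = ic.object_id AND i.index_id = ic.index_id
WHERE i.object_id = OBJECT_ID('[dbo].[TABLENAME]')
ORDER BY i.name, ic.index_column_id;
"@

$source = Invoke-Sqlcmd -ServerInstance 'SourceServer' -Database 'TestDatabase' -Query $query
$target = Invoke-Sqlcmd -ServerInstance 'TargetServer' -Database 'TestDatabase' -Query $query

# Rows appearing on only one side point to the index difference
Compare-Object -ReferenceObject $source -DifferenceObject $target `
    -Property IndexName, IndexType, ColumnName, IndexColumnId, IsPrimaryKey, IsUnique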
Alternatively, you can analyze the differences through a redirected log file:
tablediff -strict > C:\Logs\analysis.log
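Note that tablediff also needs explicit source and destination connection parameters. Redirecting its console output works, or you can use its -o switch to write the report to a file. A hedged sketch of a fuller invocation follows; the server, database, and table names are placeholders, and the path to tablediff.exe depends on your SQL Server version and installation folder:

# Sketch only - placeholder names; adjust the tablediff.exe path to your installation.
& "C:\Program Files\Microsoft SQL Server\160\COM\tablediff.exe" `
    -sourceserver "SourceServer" -sourcedatabase "TestDatabase" -sourcetable "TABLENAME" `
    -destinationserver "TargetServer" -destinationdatabase "TestDatabase" -destinationtable "TABLENAME" `
    -strict -o "C:\Logs\analysis.log"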
Best regards,
Mikey Qiao
If the answer is the right solution, please click "Accept Answer" and kindly upvote it.