SQL Data verification post SAN migration

SeHor 66 Reputation points
2021-05-18T03:48:05.833+00:00

We will be migrating a 2-node active/active SQL Server 2014 cluster from an old SAN to a new SAN. As the DBA I would like to verify the data post-migration. I will not have the old data available after the migration; the old SAN is kept as a backup in case there are issues with the data (we could then just point the VMs back at the old SAN paths). There are 3 TB of data on each node's instance, with 10,000 tables of various sizes; some of the tables are up to 60 GB. I was thinking of using BINARY_CHECKSUM to calculate and log a checksum for all tables in the instance before and after the migration (the data is fairly static), but I have columns of types that BINARY_CHECKSUM ignores (text, ntext, xml, image).
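
Something along these lines is what I had in mind (a rough sketch only; dbo.TableChecksumLog is just a name I made up for the logging table):

```sql
-- Sketch: log one aggregate checksum per table, run before and after the move,
-- then compare the two sets of results. dbo.TableChecksumLog is hypothetical.
CREATE TABLE dbo.TableChecksumLog
(
    SchemaName    sysname      NOT NULL,
    TableName     sysname      NOT NULL,
    TableChecksum int          NULL,
    CapturedAt    datetime2(0) NOT NULL DEFAULT SYSDATETIME()
);

DECLARE @sql nvarchar(max) = N'';

-- Build one INSERT ... SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) per user table.
SELECT @sql = @sql + N'
INSERT dbo.TableChecksumLog (SchemaName, TableName, TableChecksum)
SELECT ''' + s.name + N''', ''' + t.name + N''',
       CHECKSUM_AGG(BINARY_CHECKSUM(*))
FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

EXEC sys.sp_executesql @sql;
```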

Wondering if you ever needed to verify large volume of data and what would you recommend for a solution.

Thank you,


Accepted answer
  1. Tom Phillips 17,721 Reputation points
    2021-05-19T13:23:45.773+00:00

    There is really no need to do any of that. SQL Server does a very good job of detecting on its own whether the physical files are damaged.

    If the database is up and running after the copy, all the data is unchanged.

    If you really want to check, you should create a hash of the entire FILE before and after the copy at the OS level.
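
    For example, you can list the physical paths from sys.master_files and then hash those files with whatever OS-level tool you prefer (certutil -hashfile, PowerShell Get-FileHash, etc.); a small sketch:

    ```sql
    -- List the physical data and log files for every database; hash these files
    -- with an OS-level tool before and after the copy and compare the results.
    -- Note: the files must be quiesced (database offline/detached or the instance
    -- stopped) for the before/after hashes to be comparable.
    SELECT d.name       AS database_name,
           mf.type_desc AS file_type,
           mf.physical_name
    FROM sys.master_files AS mf
    JOIN sys.databases    AS d ON d.database_id = mf.database_id
    ORDER BY d.name, mf.type_desc;
    ```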


3 additional answers

  1. Olaf Helper 43,246 Reputation points
    2021-05-18T05:30:47.437+00:00

    Wondering if you have ever needed to verify a large volume of data, and what you would recommend as a solution.

    Do you expect SQL Server or the SAN to modify the data during the migration? I wouldn't.
    Just run DBCC CHECKDB to validate that the database files are intact.
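
    For example (the database name is a placeholder):

    ```sql
    -- Check the logical and physical integrity of all objects in the database.
    -- NO_INFOMSGS suppresses informational messages so only errors are reported.
    DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;
    ```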


  2. Erland Sommarskog 107.2K Reputation points
    2021-05-18T22:02:00.487+00:00

    Yeah, I agree with Olaf. Computing checksums per table seems overly ambitious. I really hope that the migration is done by BACKUP/RESTORE or possibly DETACH/ATTACH. God forbid that someone gets the idea to script the tables and do the migration in SQL.

    It is also worth noting that CHECKSUM/BINARY_CHECKSUM are weak algorithms that may fail to detect changes.
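
    If per-row verification really is wanted, a stronger hash such as HASHBYTES can also cover the column types BINARY_CHECKSUM skips; a rough sketch (the table and column names are made up, and on SQL Server 2014 HASHBYTES input is limited to 8000 bytes, so very wide rows need extra handling):

    ```sql
    -- Sketch: per-row SHA-256 hashes that also include columns BINARY_CHECKSUM
    -- ignores. dbo.Orders and its columns are purely illustrative.
    SELECT OrderID,
           HASHBYTES('SHA2_256',
                     CONCAT(CONVERT(nvarchar(max), CustomerID), N'|',
                            CONVERT(nvarchar(30), OrderDate, 121), N'|',
                            CONVERT(nvarchar(max), OrderXml))) AS row_hash
    FROM dbo.Orders;
    ```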


  3. CathyJi-MSFT 21,131 Reputation points Microsoft Vendor
    2021-05-19T09:34:58.477+00:00

    Hi @SeHor ,

    BINARY_CHECKSUM is usually used to detect changes in a row: if any value in a row has changed, the function can be used to pick that up. DBCC CHECKDB checks the logical and physical integrity of all the objects in the specified database. Since the database is not changing during the migration, I think DBCC CHECKDB is the better choice afterwards, as Olaf mentioned.


    If the response is helpful, please click "Accept Answer" and upvote it, as this could help other community members looking for similar queries.