We will be migrating a two-node active/active SQL Server 2014 cluster from an old SAN to a new SAN. As the DBA, I would like to verify the data post-migration. I will not have the old data available after the migration; the old SAN is kept as a backup in case of data issues (we can simply point the VMs back at the old SAN paths). There are 3 TB of data on each node's instance, spread across 10,000 tables of various sizes, some up to 60 GB. I was thinking of using BINARY_CHECKSUM to calculate and log a checksum for every table in the instance before and after the migration (the data is fairly static), but some of my columns are of types that BINARY_CHECKSUM ignores (text, ntext, xml, image).
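For reference, this is roughly the approach I had in mind: loop over every table and log one aggregate checksum per table into a logging table, once before and once after the migration. A minimal sketch (the dbo.MigrationChecksums logging table is hypothetical, and as noted this still silently ignores text/ntext/xml/image columns):

```sql
-- Build one INSERT per user table: CHECKSUM_AGG over BINARY_CHECKSUM(*)
-- collapses each table to a single int, logged with a timestamp.
-- Assumes a pre-created logging table:
--   CREATE TABLE dbo.MigrationChecksums
--     (TableName sysname, CapturedAt datetime2, Chksum int);
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'INSERT INTO dbo.MigrationChecksums (TableName, CapturedAt, Chksum)
SELECT N''' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N''', SYSDATETIME(),
       CHECKSUM_AGG(BINARY_CHECKSUM(*))
FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';' + CHAR(10)
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

EXEC sys.sp_executesql @sql;
```

Comparing the before and after rows by TableName would then flag any table whose aggregate checksum changed, but it tells me nothing about the LOB columns.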
I am wondering whether you have ever needed to verify a large volume of data like this, and what solution you would recommend.