Answers to each of your questions follow.

**Hi Roberto, is your issue resolved?**
Actually, we took several different actions at around the same time, so we cannot be sure which one improved performance when migrating tables to SQL Server: we added vCores and memory to the VM, and we created the production SQL Server on a Hyperscale machine type (instead of the General Purpose type used before in the Dev environment). Besides that, our DBA ran many tests increasing the batch size from 10,000 to 100,000, and we ended up using 100,000 most of the time.
Currently, after migrating 97% of the tables, we hit issues when loading a few tables from the source DB2, with the following error message: “The data value could not be converted for reasons other than sign mismatch or data overflow. For example, the data was corrupted in the data store but the row was still retrievable.” Our attempts to solve that issue were unsuccessful, which led us to use another tool, DBeaver, to complete the data migration for the remaining tables (3%) successfully.
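(For anyone hitting the same conversion error: below is a minimal sketch of the fallback idea, copying a problem table row by row over ODBC and counting the rows that still fail, which is conceptually similar to the per-row transfer a generic tool like DBeaver performs. The DSNs, credentials, and table/column names are hypothetical placeholders.)

```python
import pyodbc

# Hypothetical DSNs and names -- adjust to the real environments.
src = pyodbc.connect("DSN=DB2_SOURCE;UID=user;PWD=secret")
dst = pyodbc.connect("DSN=SQLSERVER_TARGET;UID=user;PWD=secret", autocommit=False)

src_cur = src.cursor()
dst_cur = dst.cursor()

# CAST suspect columns to VARCHAR on the Db2 side so the ODBC driver returns
# raw text instead of failing the conversion while fetching the bad rows.
src_cur.execute("SELECT COL1, CAST(COL2 AS VARCHAR(200)) FROM MYSCHEMA.MYTABLE")

failed = 0
while True:
    rows = src_cur.fetchmany(10_000)
    if not rows:
        break
    for row in rows:
        try:
            dst_cur.execute(
                "INSERT INTO dbo.MyTable (Col1, Col2) VALUES (?, ?)", row)
        except pyodbc.Error:
            failed += 1  # capture these rows separately for manual inspection
    dst.commit()

print(f"rows that failed conversion: {failed}")
```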
**What's the average row size of the tables mentioned?**
10k is the total number of tables migrated.
Five of them have around 350 million rows each.
One has 1.6 billion rows, but the business team noted that less than 5% of it was needed for historical purposes, which brought it down to only thousands of rows.
Record lengths varied widely, but we did not take them into account when setting the batch size in the SSMA tool.
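(For reference, average record length can be checked up front: the Db2 catalog keeps an average row size per table. A minimal sketch, assuming the source is Db2 for LUW, where the statistic lives in SYSCAT.TABLES.AVGROWSIZE; on Db2 for z/OS the catalog table and column names differ. The DSN is a placeholder.)

```python
import pyodbc

# Hypothetical DSN -- point it at the source Db2 database.
conn = pyodbc.connect("DSN=DB2_SOURCE;UID=user;PWD=secret")
cur = conn.cursor()

# SYSCAT.TABLES.AVGROWSIZE is the average row length in bytes (Db2 LUW);
# CARD is the row count. Both come from the last RUNSTATS, -1 if never run.
cur.execute("""
    SELECT TABSCHEMA, TABNAME, CARD, AVGROWSIZE
    FROM SYSCAT.TABLES
    WHERE TYPE = 'T' AND TABSCHEMA NOT LIKE 'SYS%'
    ORDER BY CARD DESC
""")
for schema, name, card, avg_row in cur.fetchall():
    print(f"{schema.strip()}.{name.strip()}: {card} rows, ~{avg_row} bytes/row")
```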
**Higher throughput can be achieved by increasing the batch size of the migration in the project settings, but setting it too high will cause timeouts to occur more frequently. Migration is always a timeout-vs-throughput activity, with a sweet spot in between based on row size, network configuration, and other factors.**
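To make the trade-off concrete, here is a minimal sketch of a batched copy that starts with a large batch and halves it whenever a timeout occurs. SSMA does not expose this logic; this is only an illustration, and all connection strings and table names are hypothetical.

```python
import pyodbc

BATCH = 100_000      # larger batches -> higher throughput, more timeout risk
MIN_BATCH = 10_000   # floor to fall back to when timeouts occur

# Hypothetical DSNs -- adjust to the real environments.
src = pyodbc.connect("DSN=DB2_SOURCE;UID=user;PWD=secret")
dst = pyodbc.connect("DSN=SQLSERVER_TARGET;UID=user;PWD=secret", autocommit=False)
dst.timeout = 60     # seconds; a query exceeding this raises OperationalError

src_cur = src.cursor()
dst_cur = dst.cursor()
dst_cur.fast_executemany = True  # array-binds parameters for fast bulk inserts

src_cur.execute("SELECT COL1, COL2 FROM MYSCHEMA.MYTABLE")

batch = BATCH
pending = src_cur.fetchmany(batch)
while pending:
    chunk, pending = pending[:batch], pending[batch:]
    try:
        dst_cur.executemany(
            "INSERT INTO dbo.MyTable (Col1, Col2) VALUES (?, ?)", chunk)
        dst.commit()
    except pyodbc.OperationalError:          # most often a timeout here
        dst.rollback()
        batch = max(MIN_BATCH, batch // 2)   # shrink toward the sweet spot
        pending = chunk + pending            # retry this chunk in smaller pieces
        continue                             # a real job would also cap retries
    if not pending:                          # batch done; fetch the next one
        pending = src_cur.fetchmany(batch)
```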
Yes, we agree.
**Can you also check the resources being used by the VM during the migration? Is there a bottleneck on the VM on which SSMA is running?**
It seemed that the bottleneck was the network during business hours. The network transfer rate increased significantly on weekends and outside business hours.
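(One way to confirm this kind of bottleneck is to sample the NIC counters on the VM running SSMA while the migration runs and compare business hours with off-hours. A minimal sketch using the psutil package; the sampling interval is arbitrary.)

```python
import time
import psutil

INTERVAL = 5  # seconds between samples

prev = psutil.net_io_counters()
while True:
    time.sleep(INTERVAL)
    cur = psutil.net_io_counters()
    # Convert the byte deltas to megabits per second.
    out_mbps = (cur.bytes_sent - prev.bytes_sent) * 8 / INTERVAL / 1e6
    in_mbps = (cur.bytes_recv - prev.bytes_recv) * 8 / INTERVAL / 1e6
    print(f"{time.strftime('%H:%M:%S')}  out {out_mbps:6.1f} Mbps  in {in_mbps:6.1f} Mbps")
    prev = cur
```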
**Can you also increase the batch size, setting it to 50,000? It may cause timeouts, since I do not know the average row size, but it could be worth trying.**
Yes, we ran tests using many different batch sizes and ended up using 100,000.