Thanks for the detailed steps. If the table isn't partitioned, what would have been your recommendation? Just curious to know your thoughts on this.
Short answer: it depends.
I think with partitioning in place, it is such an obvious solution that there is little reason not to use it. But since I'm not a big fan of partitioning, my prime suggestion would not be to partition the table if it had been unpartitioned.
I discussed snapshot isolation earlier, but that was based on the idea that we are only doing the archiving. If the table is not partitioned, we also have the headache of deleting the data, and this is more prone to cause problems. So I would do deletion and archiving at the same time, and I would look into doing this in batches of a fairly small size, as in the sketch below. The exact size depends on how many indexes there are on the table. If there are four non-clustered indexes on the table, I would go for a batch size of just below 1000 rows: each deleted row takes one lock in the clustered index and one in each of the four non-clustered indexes, so a batch of just under 1000 rows stays below 5000 locks, which is the threshold at which SQL Server attempts lock escalation. This ensures that the delete operation does not escalate its locks to table level, which permits the delete operation to run while the system is live. It is very, very important here that there is an index to support the deletion criteria.
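To illustrate, here is a minimal sketch of such a loop. The names dbo.Orders, dbo.OrdersArchive and OrderDate are placeholders for your actual schema; the point is the TOP clause, which caps the batch size, and the OUTPUT clause, which archives the rows in the same statement that deletes them, so a row cannot be deleted without also being archived:

```sql
-- Batched delete-and-archive. dbo.Orders, dbo.OrdersArchive and
-- OrderDate are made-up names; an index on OrderDate is assumed
-- to support the deletion criteria.
DECLARE @batchsize int = 900,   -- With 1 clustered + 4 non-clustered indexes,
                                -- ~900 rows keeps us below the 5000-lock
                                -- escalation threshold.
        @rowcnt    int = 1;

WHILE @rowcnt > 0
BEGIN
   -- Delete one batch, and archive the deleted rows in the same statement.
   DELETE TOP (@batchsize) O
   OUTPUT deleted.* INTO dbo.OrdersArchive
   FROM   dbo.Orders O
   WHERE  O.OrderDate < dateadd(YEAR, -1, sysdatetime());

   SET @rowcnt = @@rowcount;
END;
```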
I might also consider adding the command SET LOCK_TIMEOUT 100 to the operation, and then trapping the lock-timeout error and retrying after a short wait of 500 ms. The idea here is that if the DELETE operation conflicts with another process, the DELETE process is the one that should yield, to prevent deadlocks.
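Here is how that could look, building on the loop above. 1222 is the error number SQL Server raises when a lock request times out under SET LOCK_TIMEOUT; the table names are still placeholders:

```sql
SET LOCK_TIMEOUT 100;   -- Give up on any lock wait after 100 ms.

DECLARE @rowcnt int = 1;

WHILE @rowcnt > 0
BEGIN
   BEGIN TRY
      DELETE TOP (900) O
      OUTPUT deleted.* INTO dbo.OrdersArchive
      FROM   dbo.Orders O
      WHERE  O.OrderDate < dateadd(YEAR, -1, sysdatetime());

      SET @rowcnt = @@rowcount;
   END TRY
   BEGIN CATCH
      IF ERROR_NUMBER() <> 1222     -- 1222 = lock request time-out.
         THROW;                     -- Re-raise anything unexpected.

      -- We conflicted with another process: the failed statement has
      -- already been rolled back, so just back off and retry the batch.
      WAITFOR DELAY '00:00:00.500';
   END CATCH;
END;
```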
Then again, if you tell me that there is a monthly maintenance window which is long enough to run the deletion and archiving job, I would go for a much simpler solution where the archiving is done in bigger batches, as that will be faster.
As for SSIS or ADF, I would not consider them, for a very simple reason: I don't know either of these products! (But as I understand it, ADF would only be an option if your database is in the cloud; maybe that is the case?)