If you used INSERT INTO desttable SELECT ... FROM sourcetable, then that is a single implicit transaction. To see the progress you would have to use:

    SELECT COUNT(*)
    FROM desttable WITH (NOLOCK);
Hi folks, I ran into a scenario where I had to move data from one table to another with a similar structure in the same database. I created an empty destination table and ran an INSERT INTO ... SELECT statement. The source table has 1 billion rows, with 750 GB of data + index size. While the operation was running, I could not query the destination table to see how many rows had been inserted. The destination table already had 1 clustered and 2 non-clustered indexes created. The total operation took 28 hours.
My questions are,
1. Why was I not able to see the progress of the rows inserted, or the progress of the overall operation? I tried "show live execution plan" under Activity Monitor for this session, but that option was greyed out.
2. Is there any way to find the count of rows inserted into the table? Querying sys.dm_db_partition_stats didn't give me any rows either.
3. The destination table had an identity column, so I tried running the IDENT_CURRENT function on the table to get the last identity value inserted. This gave me some idea about progress; however, after around 17 hours the IDENT_CURRENT value got stuck at 1,000,500 and was not progressing. That gave me a clue that the row insertion might have finished, but why would IDENT_CURRENT show me a value 500 higher than the total number of rows in the source table?
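For reference, the IDENT_CURRENT check mentioned in question 3 looks roughly like this (`dbo.desttable` is an assumed table name). Note that IDENT_CURRENT reports the last identity value generated, not a row count; identity caching can pre-allocate values, so the value can run ahead of the rows actually inserted:

```sql
-- Last identity value generated for the table, visible across sessions.
-- This can exceed the true row count because of identity caching and gaps.
SELECT IDENT_CURRENT('dbo.desttable') AS last_identity_value;
```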
-
Tom Phillips 17,771 Reputation points
2022-04-05T20:06:18.007+00:00 -
Erland Sommarskog 121.4K Reputation points MVP Volunteer Moderator
2022-04-05T21:42:55.96+00:00 Which version of SQL Server are you using? If you are on SQL 2016 or SQL 2017, you should enable trace flag 7412. Then you can use my beta_lockinfo, and it will show you plan with the actual values so far. Note that it does not help to activate the trace flag while the query is running.
If you are on SQL 2019, you don't need the trace flag. On SQL 2014, this feature is not available.
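On versions where lightweight profiling is enabled, the live per-operator row counts can also be read straight from a DMV. A sketch, assuming the INSERT ... SELECT is running in session 54 (substitute the real session_id):

```sql
-- Enable lightweight query profiling globally (needed on SQL 2016/2017;
-- on by default from SQL 2019). Must be on BEFORE the query starts.
DBCC TRACEON (7412, -1);

-- While the INSERT ... SELECT runs, show rows processed so far
-- by each operator in its plan.
SELECT physical_operator_name, row_count, estimate_row_count
FROM sys.dm_exec_query_profiles
WHERE session_id = 54;
```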
Keep in mind that tracking the row count in the table only gives partial information about the progress. The plan may include an operator that assembles data and only releases it in batches, or when everything is ready. The process could also be held up by things like growing the transaction log.
When loading this many rows, I would consider a chunking solution, rather than inserting all at once.
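A chunked load along the clustered key might look roughly like this. This is only a sketch: `sourcetable`, `desttable`, the key column `id`, and the column list are assumed names, and the batch size would need tuning:

```sql
-- Sketch: copy rows in batches of 1M, keyed on the clustered index column.
-- Assumes desttable starts empty and id is a unique, increasing key.
DECLARE @lastid BIGINT = 0, @rows INT = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO desttable (id, col1, col2)
    SELECT TOP (1000000) id, col1, col2
    FROM sourcetable
    WHERE id > @lastid
    ORDER BY id;

    SET @rows = @@ROWCOUNT;

    SELECT @lastid = MAX(id) FROM desttable;
END;
```

Because each batch is a seek on the clustered key rather than a full scan with a filter, the WHERE clause adds little overhead, each batch commits separately so the transaction log stays small, and progress is visible between batches.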
-
YufeiShao-msft 7,146 Reputation points
2022-04-06T08:39:56.753+00:00 Hi @SQLServerBro,
Look at your transaction log? Try this script; it will return the count of all rows that have been affected by the running transaction:
    DECLARE @SPID INT = 54;

    SELECT COUNT(*) --fn_dblog.*
    FROM fn_dblog(NULL, NULL)
    WHERE operation IN ('LOP_MODIFY_ROW', 'LOP_INSERT_ROWS', 'LOP_DELETE_ROWS')
      AND context IN ('LCX_HEAP', 'LCX_CLUSTERED')
      AND [Transaction ID] = (SELECT fn_dblog.[Transaction ID]
                              FROM sys.dm_tran_session_transactions session_trans
                              JOIN fn_dblog(NULL, NULL)
                                ON fn_dblog.[Xact ID] = session_trans.transaction_id
                              WHERE session_id = @SPID);
-
Heisenberg 261 Reputation points
2022-04-06T19:40:46.617+00:00 Thank you all for the answers. @Erland Sommarskog I thought of a chunking solution too; however, to divide the load into batches I will have to use a WHERE clause on some unique column. Don't you think that filter/WHERE criteria will slow down the process too?
@YufeiShao-msft Thanks for that solution, I will try it out. I hope it won't put any kind of lock on the transaction log file. Can you clarify that, please?