We have rebuilt the table and the performance is now 1 second again. I assume the performance degradation was caused by fragmentation (free space) in the table. I'll close the question.
Azure SQL Database - Transient Errors
We have a relatively simple scenario where a Logic App executes an Azure SQL Database stored procedure (which in turn executes a number of sub-procedures). However, we have an issue where the stored procedure ends without error, which I assume is related to "transient errors". I've assumed this because there are also "time out" logs at the same time as the "disconnection". The procedure has executed successfully for years. The point at which it ends differs each time and is not at a stage that is resource-heavy/intensive.
1) We have logging for errors/time-outs, but there is no log of a disconnection/"drop out".
2) Is there a best-practice approach to managing such a scenario? Clearly the connection has been lost, so the stored procedure itself cannot handle the issue. I assume we could use the Logic App to check after a period of time and, if a condition is not met, try again? I just wanted to check whether there is an advised/official, more robust approach to handling the scenario.
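For what it's worth, the usual guidance for transient errors is exactly the pattern described in point 2: have the caller retry with an increasing back-off, and make the procedure safe to re-run (idempotent or restartable). Below is a minimal, self-contained sketch of that retry-with-backoff pattern; `flaky_proc` is a stand-in for the real stored-procedure call, and the exception types are placeholders for whatever your SQL driver raises on a dropped connection.

```python
import time

def run_with_retry(action, max_attempts=4, base_delay=1.0,
                   transient=(ConnectionError, TimeoutError)):
    """Call `action`, retrying with exponential backoff on transient errors."""
    for attempt in range(1, max_attempts + 1):
        try:
            return action()
        except transient:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...

# Demo: a fake "stored procedure call" that drops twice, then succeeds.
calls = {"n": 0}

def flaky_proc():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient drop")
    return "done"

print(run_with_retry(flaky_proc, base_delay=0.01))  # prints "done" after 3 attempts
```

In a Logic App specifically, a similar effect can be had without custom code: actions support a configurable retry policy (fixed or exponential), so the workflow itself can re-issue the call after a transient failure, provided the procedure tolerates being run again.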
We have not used ADF to run the stored procedures because of the cost: it would have to process and then wait for a significant number of procedures to execute (the process can take several hours to complete).
Hi @jase jackson USA, welcome to the Microsoft Q&A forum.
Would it be possible for you to share the time-out and disconnection logs, masking any sensitive information?
Below are the time-out logs from the time of the disconnection.
The average log write percentage is also high (97%–100%) when manipulating this table. A number of columns are dropped and then added with default value constraints; the constraints are then dropped.
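That drop/add-column churn is worth noting: in SQL Server, dropping a column is a metadata-only change, and the dropped column's bytes remain in every existing row until the table is rebuilt, so repeated cycles like this steadily bloat the pages. A sketch of the pattern with hypothetical table/column names:

```sql
-- Hypothetical names, illustrating the pattern described above.
ALTER TABLE dbo.MyStagingTable
    ADD NewCol varchar(100) NOT NULL
    CONSTRAINT DF_MyStagingTable_NewCol DEFAULT ('n/a');

ALTER TABLE dbo.MyStagingTable DROP CONSTRAINT DF_MyStagingTable_NewCol;

-- Metadata-only: the column's space in each row is NOT reclaimed here.
ALTER TABLE dbo.MyStagingTable DROP COLUMN NewCol;

-- The space left by dropped columns is only reclaimed on a rebuild:
ALTER TABLE dbo.MyStagingTable REBUILD;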
Until recently this process took a couple of seconds, whereas it now takes 5+ minutes, and the table has not grown significantly in size.
It is also worth noting that the disconnection always occurs at the same stage, so the title of this question is now misleading. Clearly it is an issue with this particular table, given that the failure always occurs at this stage. Should we rebuild and reload the table?
Is the table a heap, i.e. a table without a clustered index? SQL Server can leave a lot of free space in heaps, depending on the modification pattern. Either make it a non-heap table, or try to pinpoint what in your modification pattern causes this extremely sparse table. That is, if this was indeed the root of your problem; otherwise you are just waiting for it to happen again.
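To make the suggestion above concrete: the sparseness of a heap can be checked with `sys.dm_db_index_physical_stats`, and fixed either by rebuilding the heap or by giving it a clustered index. A sketch, assuming a hypothetical table `dbo.MyStagingTable` with an `Id` column:

```sql
-- How sparse is the heap? Low avg_page_space_used_in_percent and a high
-- forwarded_record_count are typical of a heavily modified heap.
-- ('DETAILED' mode is needed to populate forwarded_record_count.)
SELECT index_type_desc,
       page_count,
       avg_page_space_used_in_percent,
       forwarded_record_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.MyStagingTable'), NULL, NULL, 'DETAILED');

-- Option 1: compact the heap in place.
ALTER TABLE dbo.MyStagingTable REBUILD;

-- Option 2: make it a non-heap table, which also keeps it compact going forward.
CREATE CLUSTERED INDEX CIX_MyStagingTable ON dbo.MyStagingTable (Id);
```

Option 2 addresses the root cause rather than the symptom, at the price of the index maintenance overhead on the heavy insert/delete workload described above.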
The table is indeed a heap (no clustered index). It has a relatively high number of inserts/deletes, as well as drop/add-column actions, so I assume these actions cause the issue. I'll investigate whether adding a clustered index is feasible. Thanks.