Hi @Sudip Bhatt ,
The parameter sniffing problem does not occur frequently. It only occurs when the data distribution in some tables is very uneven, or when the parameter values users pass in vary widely.
For example, consider a stored procedure that queries data for a period of time. If most users only query one day of data, SQL Server caches that kind of execution plan, and the users who query a whole year of data risk hitting the parameter sniffing problem. Conversely, if most users query a year of data but SQL Server happens to cache the plan compiled for a one-day query, then most users will hit the parameter sniffing problem.
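To make the scenario concrete, here is a minimal sketch (the table, column, and procedure names are hypothetical) of such a date-range procedure. The parameter values of the first execution determine the plan that gets cached and reused for everyone:

```sql
-- Hypothetical example: a date-range report procedure.
-- The plan compiled for the FIRST caller's range is cached and reused.
CREATE PROCEDURE dbo.GetOrders
    @StartDate datetime,
    @EndDate   datetime
AS
BEGIN
    SELECT OrderID, CustomerID, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE OrderDate >= @StartDate
      AND OrderDate <  @EndDate;
END;
GO

-- If this runs first, SQL Server caches a plan optimized for a 1-day range...
EXEC dbo.GetOrders @StartDate = '20230101', @EndDate = '20230102';

-- ...and this 1-year query then reuses that same (possibly poor) plan.
EXEC dbo.GetOrders @StartDate = '20230101', @EndDate = '20240101';
```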
There are several solutions to deal with parameter sniffing as below:
- Use EXEC () to run dynamic SQL statements.
- Use local variables.
- Use a query hint in the statement to influence the execution plan.
A: WITH RECOMPILE
B: Specify the join operation ({ LOOP | MERGE | HASH } JOIN) (not recommended)
C: OPTIMIZE FOR ( @variable_name = literal_constant [ ,...n ] )
D: Plan guide
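As a rough sketch of how the first few workarounds look in T-SQL (the table and procedure names are hypothetical, reusing the date-range scenario above):

```sql
-- 1) Dynamic SQL via EXEC(): the statement is compiled at execution time,
--    so very different date ranges do not share one cached procedure plan.
DECLARE @sql nvarchar(max) =
    N'SELECT COUNT(*) FROM dbo.Orders
      WHERE OrderDate >= ''20230101'' AND OrderDate < ''20230102'';';
EXEC (@sql);

-- 2) Local variables: copying parameters into local variables hides the
--    actual values from the optimizer at compile time, so it uses average
--    density estimates instead of sniffing the first caller's values.
CREATE PROCEDURE dbo.GetOrders_LocalVars
    @StartDate datetime,
    @EndDate   datetime
AS
BEGIN
    DECLARE @s datetime = @StartDate,
            @e datetime = @EndDate;
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= @s AND OrderDate < @e;
END;
GO

-- 3) WITH RECOMPILE: a fresh plan on every call, at the cost of extra
--    compilation CPU (assumes a procedure like dbo.GetOrders exists).
EXEC dbo.GetOrders @StartDate = '20230101',
                   @EndDate   = '20240101'
     WITH RECOMPILE;
```

Each of these trades plan-cache reuse for plan quality in a different way, so the right choice depends on how often the query runs and how skewed the data is.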
OPTIMIZE FOR:
After confirming a parameter sniffing problem, you will often find that the plan compiled for some parameter values performs very unevenly (fast for some inputs, very slow for others), while the plan compiled for other values performs consistently across inputs.
For example, in a join query, when the variable @i is 50,000, the best plan uses Nested Loops and takes just over 10 milliseconds, while the Hash Join plan takes just over 300 milliseconds. But when @i is 75124, the Hash Join plan takes just over 500 milliseconds, while Nested Loops takes more than 4,000 milliseconds.
In absolute terms, users can generally tolerate waiting a few hundred milliseconds, but once a query takes several seconds you can easily start receiving complaints. So Hash Join is the relatively "safe" execution plan here: if SQL Server always compiles the plan as if @i were 75124, most queries will be safe.
```sql
OPTION (OPTIMIZE FOR (@i = 75124))
```
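The hint can be baked into the procedure like this (the table, column, and procedure names are hypothetical; the point is that the optimizer always compiles for the "safe" value 75124, regardless of what the caller actually passes in):

```sql
CREATE PROCEDURE dbo.GetJoinResult
    @i int
AS
BEGIN
    SELECT h.SalesOrderID, d.ProductID, d.LineTotal
    FROM dbo.SalesOrderHeader AS h
    JOIN dbo.SalesOrderDetail AS d
        ON d.SalesOrderID = h.SalesOrderID
    WHERE h.CustomerID = @i
    OPTION (OPTIMIZE FOR (@i = 75124));  -- always compile as if @i = 75124
END;
```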
But as mentioned above, this solution is not always the best one; it depends on the actual situation.
This solution is good when:
- The distribution of data seldom changes.
- You have development resources dedicated to tuning queries.
- You have resources available to review data distribution.
This solution is not good when:
- Data distribution can change rapidly.
- Data distribution change is not predictable.
- There are limited or no resources for tuning.
Best regards
Melissa
If the answer is helpful, please click "Accept Answer" and upvote it.
Note: Please follow the steps in our documentation to enable e-mail notifications if you want to receive the related email notification for this thread.