TokenAndPermUserStore related information on SQL Server 2012
1. We would like to limit the size of the TokenAndPermUserStore.
There are significant changes in SQL Server 2008 and SQL Server 2012 with respect to the TokenAndPermUserStore. One of the main changes is having separate SQLOS user stores for each AccessCheckResults cache (introduced in SQL Server 2008) instead of one single large cache (as in SQL Server 2005). All the hash tables in these caches now have spinlocks at the bucket level, which should avoid the concurrency and locking issues noticed in the previous version.
- Advantages of using this new model:
  - It makes sure the bucket linked lists don't get too long.
  - It allows removing cached entries (access check results) for a single login or user without affecting other logins or users.
  - It addresses the size and entry-count issues observed in the previous version.
By default, each of these stores has a quota of 4 * NumOfBuckets entries. These defaults are documented in the KB article https://support.microsoft.com/kb/955644
Two parameters can be configured using sp_configure:
1) access check cache bucket count
2) access check cache quota
You can take a look at these limits: https://support.microsoft.com/default.aspx?scid=kb;EN-US;955644
They limit only the entries present in the SecCtxtACRUserStore* and ACRUserStore* stores.
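As a minimal sketch, both options can be inspected and changed through sp_configure; the values below are purely illustrative, not recommendations:

```sql
-- Both access check cache options are advanced options,
-- so 'show advanced options' must be enabled first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Inspect the current values (0 means SQL Server uses its internal defaults).
EXEC sp_configure 'access check cache bucket count';
EXEC sp_configure 'access check cache quota';

-- Illustrative values only; do not apply without testing.
EXEC sp_configure 'access check cache bucket count', 1024;
EXEC sp_configure 'access check cache quota', 4096;
RECONFIGURE;
```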
However, it is not recommended to change the default values unless we observe contention or performance issues related to the TokenAndPermUserStore.
Any changes to the default parameters should undergo end-to-end testing and a very rigorous stress-testing routine.
Spinlock contention, CMEMTHREAD wait types (or other wait types associated with token-perm memory pressure), and thread states are some of the indicators of contention due to the token-perm cache.
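As a quick check, the CMEMTHREAD waits mentioned above can be read from sys.dm_os_wait_stats; what counts as a problematic value is workload-specific:

```sql
-- Cumulative CMEMTHREAD waits since the last restart or stats clear;
-- a steadily growing wait_time_ms is one indicator of token-perm contention.
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CMEMTHREAD';
```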
2. How does SQL Server 2012 behave if we limit the size of the TokenAndPermUserStore and there is not enough free space in that store at runtime?
We cannot directly restrict the memory occupied by the TokenAndPermUserStore; we can only control the bucket count and the cache quota parameters using sp_configure.
It is possible to have a large number of these stores, depending upon the active logins and users spread across the different databases.
Each of these stores will consume memory depending upon the number of cached entries present in it.
When there is memory pressure, they will respond appropriately and remove entries to reduce memory consumption.
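To see how much memory the store currently holds, one sketch is to query the memory clerks; the clerk name 'TokenAndPermUserStore' is documented, and on SQL Server 2012 the pages_kb column replaces the older single_pages_kb/multi_pages_kb columns:

```sql
-- Total memory consumed by the TokenAndPermUserStore, in KB.
SELECT SUM(pages_kb) AS token_perm_kb
FROM sys.dm_os_memory_clerks
WHERE name = 'TokenAndPermUserStore';
```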
3. How frequently should we run the DBCC FREESYSTEMCACHE command to relieve the size of the TokenAndPermUserStore? What are the side effects of doing this?
It is not necessary to run DBCC FREESYSTEMCACHE frequently unless we see contention associated with the TokenAndPermUserStore. We could monitor the cache entries and establish a threshold. Below are some of the indicators of contention.
- LoginTokens: Using the DMV information, check for significantly more login tokens than unique users that log in to the system. Based on past experience, issues start to appear above roughly 10,000 login tokens.
- If the number of TokenPerm entries (the permission cache, not the access check results) grows to an order of magnitude larger than the sum of LoginTokens and UserTokens (i.e., more than 10x), that suggests potential problems.
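A rough way to watch these entry counts is sys.dm_os_memory_cache_counters. The name filters below are assumptions based on the store names mentioned earlier, and the view reports totals per store rather than a per-token-type breakdown:

```sql
-- Entry counts for the token-perm user store and the per-login
-- access check result stores (name patterns are assumptions; adjust as needed).
SELECT name, type, entries_count, pages_kb
FROM sys.dm_os_memory_cache_counters
WHERE type = 'USERSTORE_TOKENPERM'
   OR name LIKE 'ACRUserStore%'
   OR name LIKE 'SecCtxtACRUserStore%';
```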
The side effect of removing entries from the cache is the increased CPU/IO cost of looking up and recalculating permissions.
There may also be greater contention on the spinlocks associated with these hash tables/caches as entries are inserted and removed, since these operations are more expensive than lookups. But I would expect the dominant cost to be the extra CPU/IO during metadata lookup and permission calculation.
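If clearing is warranted, the store can be flushed on its own rather than clearing all system caches; a minimal sketch:

```sql
-- Removes only the TokenAndPermUserStore entries; other caches
-- (such as the plan cache) are left intact.
DBCC FREESYSTEMCACHE ('TokenAndPermUserStore');
```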
4. What causes the size increase of TokenAndPermUserStore and how can we prevent it proactively?
The size increase is caused by several factors; a few of them are:
1) Too many unique logins.
2) Too many ad hoc queries per login, which generate many entries in the access check cache.
3) Ad hoc queries that are not parameterized.
Some of the workarounds are listed in the article below.
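For the non-parameterized ad hoc query case (item 3 above), one commonly cited mitigation is forced parameterization at the database level. A hedged sketch follows; the database name is a placeholder, and this setting changes plan reuse behavior, so it should be tested before use in production:

```sql
-- Forces SQL Server to parameterize ad hoc statements in this database,
-- reducing the number of distinct entries pushed into the access check caches.
-- 'YourDatabase' is a placeholder name.
ALTER DATABASE YourDatabase SET PARAMETERIZATION FORCED;
```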