slow query performance

Sam 1,476 Reputation points
2023-02-27T20:52:44.0066667+00:00

I am seeing a lot of page splits happening on this table.

A lot of INSERTs and DELETEs happen on this table; it is a very high-transaction table with many concurrent operations. Is there anything that can be done at the table level to avoid page splits?

CREATE TABLE [dbo].[Repository_locking_table](
    [ROWID_LOCK] nchar NOT NULL,
    [CREATE_DATE] datetime2 NOT NULL,
    [CREATOR] nvarchar NOT NULL,
    [LAST_UPDATE_DATE] datetime2 NULL,
    [UPDATED_BY] nvarchar NULL,
    [ROWID_TABLE] nchar NOT NULL,
    [LOCK_GROUP_STR] nvarchar NOT NULL,
    [LOCK_EXCLUSIVE_IND] [bigint] NOT NULL,
    [JOB_TYPE_STR] nchar NOT NULL,
    [LOCK_QUERY_SQL] nvarchar NULL,
    [MODULE_NAME] nvarchar NULL,
    [INTERACTION_ID] nvarchar NULL,
    CONSTRAINT [PK_Repository_locking_table] PRIMARY KEY NONCLUSTERED
    (
        [ROWID_LOCK] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)

GO

Thanks,

Sam


Accepted answer
  1. Erland Sommarskog 107.2K Reputation points
    2023-02-28T22:29:30.64+00:00

    This is a small table. There are only two rows in the table. Interestingly enough, a SELECT * on the table results in 285 logical reads. This suggests that there is a whole lot of empty space in the table.
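    For anyone who wants to reproduce this kind of check, here is a minimal sketch, using the dbo.Applied_Lock name from the posted DDL (substitute the actual table name in your database):

    SET STATISTICS IO ON;
    SELECT * FROM dbo.Applied_Lock;   -- the Messages tab reports the logical reads
    SET STATISTICS IO OFF;

    -- Page count and average page fullness per index
    -- ('DETAILED' reads every page, which is fine for a table this small)
    SELECT i.name AS index_name,
           ps.index_type_desc,
           ps.page_count,
           ps.avg_page_space_used_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Applied_Lock'),
                                        NULL, NULL, 'DETAILED') AS ps
    JOIN sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id;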

    Judging from the name, the purpose of the table is to lock things in some roll-your-own locking scheme. I would guess that rows are deleted and inserted quite regularly.

    There was a suggestion to play with the fill factor, and indeed, if you have a table where rows are inserted all over the index, setting a fill factor of 80-90 when you rebuild the index can be a good idea. However, this applies to a table where most of the rows stay around, which does not seem to be the case here. And the actual page fullness in this case is very low, below 1 percent.
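    For reference, a fill factor is applied when the index is rebuilt, along these lines (index and table names taken from the posted DDL; the value itself is something you would have to tune for your workload):

    -- Rebuild the primary-key index, leaving ~10 % free space on each leaf page
    ALTER INDEX [PK_APPLIED_LOCK] ON [dbo].[Applied_Lock]
        REBUILD WITH (FILLFACTOR = 90);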

    Given that this is a vendor database, I don't think you can do much with this table. Overall, if you have performance issues with a vendor application, there is every reason to involve the vendor. If there are performance issues with this table C_REPOS_APPLIED_LOCK, that probably requires a new way of thinking in the application.


4 additional answers

  1. Sam 1,476 Reputation points
    2023-02-27T21:09:01.9866667+00:00

    Re-posting my question as it was trimmed.

    Hi All,

    I am seeing a lot of page splits happening on this table.

    A lot of INSERTs and DELETEs happen on this table; it is a very high-transaction table with many concurrent operations. Is there anything that can be done at the table level to avoid page splits?

    CREATE TABLE [dbo].[Applied_Lock](
        [ROWID_LOCK] nchar NOT NULL,
        [CREATE_DATE] datetime2 NOT NULL,
        [CREATOR] nvarchar NOT NULL,
        [LAST_UPDATE_DATE] datetime2 NULL,
        [UPDATED_BY] nvarchar NULL,
        [ROWID_TABLE] nchar NOT NULL,
        [LOCK_GROUP_STR] nvarchar NOT NULL,
        [LOCK_EXCLUSIVE_IND] [bigint] NOT NULL,
        [JOB_TYPE_STR] nchar NOT NULL,
        [LOCK_QUERY_SQL] nvarchar NULL,
        [MODULE_NAME] nvarchar NULL,
        [INTERACTION_ID] nvarchar NULL,
        CONSTRAINT [PK_APPLIED_LOCK] PRIMARY KEY NONCLUSTERED
        (
            [ROWID_LOCK] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
    )

    GO

    Thanks,

    Sam


  2. Erland Sommarskog 107.2K Reputation points
    2023-02-27T21:42:53.7066667+00:00

    It is very difficult to answer based on this thin information. I note that the primary key is of type nchar, but it seems that you did not find it important to include the length, nor what sort of values actually go into that column. But nchar(n), that is, a fixed-length string, is a little unusual for a primary key column.

    If there is some kind of randomness to these values, like a GUID, then it could help to apply Jeff Moden's techniques, which he describes in this video: https://www.youtube.com/watch?v=jx-FuNp4fOA. Be warned that these are advanced techniques, and you need to understand what you are doing if you embrace them.
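    If you want to check what the key column is actually declared as, a quick sketch against the catalog views (table name assumed from the posted DDL):

    -- max_length is in bytes; for nchar/nvarchar divide by 2 to get the character count
    SELECT c.name, t.name AS type_name, c.max_length
    FROM sys.columns AS c
    JOIN sys.types   AS t ON t.user_type_id = c.user_type_id
    WHERE c.object_id = OBJECT_ID(N'dbo.Applied_Lock')
      AND c.name = N'ROWID_LOCK';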


  3. Guoxiong 8,201 Reputation points
    2023-02-27T23:06:45.42+00:00

    Usually, if you want to reduce page splits, you need to lower the fill factor on your indexes so that each page keeps some free space for new rows. You also might need to rebuild the indexes regularly. Any idea why you set the index "SVR1_13NC4ZI" as the clustered index? If there is no important reason to keep it, you can change that index to non-clustered and make the primary key the clustered index.
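    To see which index is clustered, what fill factor it currently has, and how much leaf-page allocation (a rough proxy for page splits) each index has accumulated since the last restart, something along these lines can be used (table name assumed from the posted DDL):

    SELECT i.name, i.type_desc, i.fill_factor,
           os.leaf_insert_count, os.leaf_delete_count,
           os.leaf_allocation_count    -- new leaf pages, which includes page splits
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_operational_stats(DB_ID(),
              OBJECT_ID(N'dbo.Applied_Lock'), NULL, NULL) AS os
        ON os.object_id = i.object_id AND os.index_id = i.index_id
    WHERE i.object_id = OBJECT_ID(N'dbo.Applied_Lock');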


  4. LiHongMSFT-4306 25,651 Reputation points
    2023-02-28T03:11:08.2966667+00:00

    Hi @Samantha r

    Fragmentation and Index Maintenance plans

    All the fill factor can do is to postpone the build-up of fragmentation. You will then need to periodically refresh the free space by doing scheduled index rebuilds or reorganizations. The trick is in finding the right fill factor to use, along with the right frequency of index maintenance.

    Regardless of the fill factor, you will probably need to defragment certain susceptible indexes at least once a week in a working database to catch fragmentation before it becomes a problem. Your database should already have an index maintenance plan. Refer to Ola Hallengren’s SQL Server Index and Statistics Maintenance solution. This will simply use the fill factor that you provide for each index; no maintenance plan can work out the right fill factor for you.
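    As a rough illustration of what such maintenance does (Ola Hallengren's scripts wrap this kind of logic with far more care), a fragmentation check followed by a reorganize or rebuild might look like this, with the table name assumed from the posted DDL:

    -- Check fragmentation, then pick REORGANIZE (light) or REBUILD (heavier, takes a fill factor)
    SELECT i.name, ps.avg_fragmentation_in_percent, ps.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Applied_Lock'),
                                        NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

    ALTER INDEX ALL ON dbo.Applied_Lock REORGANIZE;                        -- e.g. 5-30 % fragmentation
    -- ALTER INDEX ALL ON dbo.Applied_Lock REBUILD WITH (FILLFACTOR = 90); -- e.g. above 30 %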

    Finding the best Fill Factor

    The indications for the correct fill factor are as follows: if the underlying table of an index is almost read-only, only occasionally updated, then a high fill factor is best. The same is true of an index that is sorted on a column such as an identity column, sequence number or datetime, and whose rows are seldom updated after they are created. Otherwise, it may be better to experiment with a slightly lower value, making a small change at a time. A fill factor of 90 is a good place to start. The various factors that determine the best value are complicated, and on a production server it is best to make gradual adjustments over an extended period, monitoring the results carefully.
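    One way to judge whether a fill-factor change helps is to sample the cumulative page-split counter before and after a representative period of workload and compare the difference, for example:

    -- Cumulative page splits since the instance started; sample twice and diff
    SELECT cntr_value AS page_splits_total
    FROM sys.dm_os_performance_counters
    WHERE counter_name = N'Page Splits/sec'
      AND object_name LIKE N'%Access Methods%';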

    Please refer to this article for more details: When SQL Server Performance goes Bad: the Fill Factor and Excessive Fragmentation

    Best regards,

    Cosmog Hong

