Improving temp table and table variable performance using memory optimization

In-Memory OLTP is a technology introduced in SQL Server 2014 that can provide phenomenal (up to 30-fold) performance improvements for transactional workloads, and SQL Server 2016 improves the performance even further. However, adopting this technology in an existing app comes at a cost, and you need to have enough available memory for the core transactional tables.

Tempdb can be a performance bottleneck for many applications. Workloads that intensively use table-valued parameters (TVPs), table variables and temp tables can cause contention on things like metadata and page allocation, and result in a lot of IO activity that you would rather avoid.

What if TVPs and temp tables could live just in memory, in the memory space of the user database? In-Memory OLTP can help! Memory-optimized table types and SCHEMA_ONLY memory-optimized tables can be used to replace traditional table types and traditional temp tables, bypassing tempdb completely, and providing additional performance improvements through memory-optimized data structures and data access methods.

In the remainder of this post you will see how easily you can replace traditional tempdb-based table variables and temp tables with memory-optimized table variables and tables. Along the way you will get a flavor of the performance benefits you can expect from memory optimization. At the bottom of the post you will find the prerequisites for using memory-optimized tables and table types, as well as further reading, for those of you less familiar with In-Memory OLTP.

Memory-optimize your table types

It is easy to start using memory-optimized table variables and table-valued parameters: just replace your table types with memory-optimized table types.

Consider, for example, this traditional tempdb-based table type:

 CREATE TYPE dbo.test_disk AS TABLE
(c1 INT NOT NULL,
 c2 CHAR(10));

To memory-optimize this table type simply add the option ‘memory_optimized=on’, and add an index if there is none on the original type:

 CREATE TYPE dbo.test_memory AS TABLE
(c1 INT NOT NULL INDEX ix_c1,
 c2 CHAR(10))
WITH (MEMORY_OPTIMIZED=ON);

Note that memory-optimized table types are required to have at least one index.

This memory-optimized table type can be used wherever a traditional table type is used, whether in stored procedure parameter declarations or in table variable declarations within T-SQL modules and ad hoc batches.
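
For instance, a table-valued parameter based on this type might look as follows (the procedure name and body here are illustrative, not part of the scenario above); note that table-valued parameters must be declared READONLY:

 CREATE PROCEDURE dbo.sp_use_tvp
    @rows dbo.test_memory READONLY  -- memory-optimized TVP; READONLY is required
AS
BEGIN
    SELECT c1, c2 FROM @rows;
END;
GO
-- Fill a table variable of the memory-optimized type and pass it in:
DECLARE @tv dbo.test_memory;
INSERT @tv VALUES (1, 'n'), (2, 'm');
EXECUTE dbo.sp_use_tvp @rows = @tv;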

Now, what to do if you have inline table variable declarations? Memory-optimized table variables cannot be declared inline; you need to create a table type up-front.

For example, you may have a variable declaration in a stored procedure or ad hoc batch like:

 DECLARE @tv TABLE
    ( c1 INT NOT NULL ,
      c2 CHAR(10));

To memory-optimize this variable, simply create the type dbo.test_memory as shown above and change the variable declaration as follows:

 DECLARE @tv dbo.test_memory;

Let’s look at memory-optimized table variables in action. The following T-SQL batch creates a traditional table variable based on the above-mentioned table type, inserts two rows and then deletes them. When you execute the following in SQL Server Management Studio, the batch is repeated 10,000 times:

 SET NOCOUNT ON
GO
DECLARE @tv dbo.test_disk
INSERT  @tv VALUES  ( 1, 'n' )
INSERT  @tv VALUES  ( 2, 'm' )
DELETE  FROM @tv
GO 10000

This takes about 10 seconds to run on my machine. To replicate this result in Azure SQL DB, make sure to run it from a VM in the same region as the database, since the script does perform 10,000 round trips to the server.

Let’s do the same with memory-optimized table variables:

 DECLARE @tv dbo.test_memory
INSERT  @tv VALUES  ( 1, 'n' )
INSERT  @tv VALUES  ( 2, 'm' )
DELETE  FROM @tv
GO 10000

This takes 1 second to run on my machine (a 10X gain!), demonstrating some of the performance benefits you get by bypassing tempdb and memory-optimizing data storage and data access. You will see even more benefit when using memory-optimized table variables in concurrency scenarios, where many connections use these table variables at the same time.

Memory-optimize your temp tables

To improve performance of your temp tables, replace traditional global (##global_temp) and session-level (#session_temp) temp tables with SCHEMA_ONLY memory-optimized tables. The differences between the two are:

  • Traditional ##temp and #temp tables live in tempdb, while memory-optimized tables live in the memory space of the user database.
  • Memory-optimized tables are user tables, and their schema is persisted as part of the user database, while ##temp and #temp tables disappear when the last session using them is closed.
  • Memory-optimized tables should be created at deployment time, not runtime: the schema is persisted (see previous point) and table compilation can take time (typically 0.5s).
  • Memory-optimized tables must have at least one index.

Replace global ##temp tables

To memory-optimize global temp tables (##temp):

  1. Create a new SCHEMA_ONLY memory-optimized table (named temp, for example) with the same schema as the global ##temp table
    • Ensure the new table has at least one index
  2. Change all references to ##temp in your Transact-SQL statements to the new memory-optimized table temp
  3. Replace the DROP TABLE ##temp statements in your code with DELETE FROM temp, to clean up the contents
  4. Remove the CREATE TABLE ##temp statements from your code – these are now redundant

As an example, consider the following stored procedure using a global temp table:

 SET NOCOUNT ON;
GO
CREATE PROCEDURE sp_temp
AS
    BEGIN
        IF NOT EXISTS (SELECT * FROM tempdb.sys.objects WHERE name=N'##temp1')
             CREATE TABLE ##temp1
                    (
                      c1 INT NOT NULL ,
                      c2 NVARCHAR(4000)
                    );
        BEGIN TRAN
        DECLARE @i INT = 0;
        WHILE @i < 100
            BEGIN
                INSERT  ##temp1
                VALUES  ( @i, N'abc' );
                SET @i += 1;
            END;
        COMMIT
    END;
GO

Execution of this procedure using a traditional ##temp table takes 0 seconds on my machine, according to Management Studio, so it’s pretty quick. But that is only a single execution. Let’s execute this procedure using 100 concurrent connections, with each connection executing the procedure 1,000 times, for a total of 100,000 executions of the procedure. I’m using the following ostress command, which connects to the default instance, database WideWorldImporters:

 ostress -S. -dWideWorldImporters -q -n100 -r1000 -Q"sp_temp"

On my machine (24 logical cores) this takes about 1 minute.

Here is the same procedure using a memory-optimized table instead:

 CREATE TABLE dbo.temp1
(c1 INT NOT NULL INDEX ix_1 ,
 c2 NVARCHAR(4000))
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY);
GO
DROP PROCEDURE IF EXISTS sp_temp
GO
CREATE PROCEDURE sp_temp
AS
    BEGIN
        BEGIN TRAN
        DECLARE @i INT = 0;
        WHILE @i < 100
            BEGIN
                INSERT  dbo.temp1
                VALUES  ( @i, N'abc' );
                SET @i += 1;
            END;
        COMMIT
    END;
GO

Executing this procedure 100,000 times in total takes about 7 seconds on my machine. This represents an almost 9X performance gain.

Replace session-level #temp tables

To memory-optimize local temp tables (#temp):

  1. Create a new SCHEMA_ONLY memory-optimized table (named temp, for example) with the same schema as the session-level #temp table.
    • The new table should have a column session_id with a default constraint of @@SPID, such that all rows inserted into the table map to the right session
    • Ensure the new table has at least one index on session_id (for performance reasons)
    • (optionally) Add a check constraint session_id=@@SPID to ensure sessions cannot touch the data of other sessions
      • Note: this can also be achieved using a blocking security predicate, but using a check constraint results in a more performant query plan
    • Add a security predicate that filters on session_id=@@SPID, such that all SELECT/UPDATE/DELETE statements will only return and/or modify the rows for the current session
  2. Change all #temp references in your Transact-SQL statements to the new memory-optimized table temp
  3. Replace the CREATE TABLE #temp statements in your code with DELETE FROM temp, to ensure a session is not exposed to table contents inserted by a previous session with the same session_id
  4. Remove the DROP TABLE #temp statements from your code – optionally you can insert a DELETE FROM temp, in case memory size is a potential concern

As an example, consider the following stored procedure using a local temp table:

 SET NOCOUNT ON;
GO
CREATE PROCEDURE sp_temp
AS
    BEGIN
        DROP TABLE IF EXISTS #temp1
        CREATE TABLE #temp1
            (
              c1 INT NOT NULL ,
              c2 NVARCHAR(4000)
            );
        BEGIN TRAN
        DECLARE @i INT = 0;
        WHILE @i < 100
            BEGIN
                INSERT  #temp1
                VALUES  ( @i, N'abc' );
                SET @i += 1
            END;
        COMMIT
    END;

If you read the previous section of this blog, this will look familiar. Execution of this procedure actually takes a little longer than the procedure with the global temp table, since a new temp table needs to be created with each execution. Total time for 100,000 executions across 100 connections, with the same ostress command as before, is about 1 minute 3 seconds on my machine (24 logical cores).

Now, let’s consider using a SCHEMA_ONLY memory-optimized table with a filter function as a session-level temp table:

 -- a single filter function can be used for all session-level temp tables
CREATE FUNCTION dbo.fn_SessionFilter(@session_id smallint)
    RETURNS TABLE
WITH SCHEMABINDING, NATIVE_COMPILATION
AS
    RETURN SELECT 1 as fn_SessionFilter WHERE @session_id=@@spid;
GO
DROP TABLE IF EXISTS dbo.temp1
GO
CREATE TABLE dbo.temp1
    (
      c1 INT NOT NULL ,
      c2 NVARCHAR(4000) ,
      session_id SMALLINT NOT NULL DEFAULT ( @@spid ) ,
      INDEX IX_session_id ( session_id ) ,
      CONSTRAINT CHK_temp1_session_id CHECK ( session_id = @@spid )
    )
    WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY);
GO
-- add security policy to enable filtering on session_id, for each temp table
CREATE SECURITY POLICY dbo.temp1Filter
ADD FILTER PREDICATE dbo.fn_SessionFilter(session_id)
ON dbo.temp1
WITH (STATE = ON);
GO
DROP PROCEDURE IF EXISTS sp_temp
GO
CREATE PROCEDURE sp_temp
AS
    BEGIN
        DELETE FROM dbo.temp1;
        BEGIN TRAN
        DECLARE @i INT = 0;
        WHILE @i < 100
            BEGIN
                INSERT  dbo.temp1 (c1, c2)
                VALUES  ( @i, N'abc' );
                SET @i += 1;
            END;
        COMMIT
    END;
GO

Total time for 100,000 executions with 100 connections, using the same ostress command as before, is about 7 seconds on my machine (24 logical cores). This is a 9X performance improvement, similar to the test with global temp tables.

The main reason for this performance benefit is that, in concurrency situations, even though you are sharing a single table across sessions, the different connections will not get in each other’s way, due to the lock-free nature of memory-optimized tables. And because the table has the security policy, different sessions will not see or touch each other’s data.
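
As a quick sanity check of the session scoping (not part of the replacement steps above), you can observe that a plain SELECT only sees the current session’s rows:

 -- Insert a row; session_id defaults to @@SPID for the current session.
INSERT dbo.temp1 (c1, c2) VALUES (1, N'abc');
-- The security policy filters on session_id = @@SPID, so this only
-- returns rows inserted by the current session.
SELECT c1, c2, session_id FROM dbo.temp1;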

Prerequisites

The memory-optimized session-level temp table scenario requires a couple of features that were added in both SQL Server 2016 (RC0) and Azure SQL Database. The memory-optimized table variable and global temp table scenarios are supported in SQL Server 2014; however, parallel plans are not supported in 2014, so you would not see performance benefits for large table variables or large temp tables in SQL Server 2014. SQL Server 2016 and Azure SQL Database do support parallel plans with memory-optimized tables and table variables, so no concerns there.
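
If you are unsure whether a given instance or edition supports In-Memory OLTP at all, a quick check is the IsXTPSupported server property (available since SQL Server 2014):

 -- Returns 1 if the instance supports In-Memory OLTP, 0 otherwise.
SELECT SERVERPROPERTY('IsXTPSupported') AS IsXTPSupported;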

For SQL Server specifically, to use any In-Memory OLTP objects, including memory-optimized table variables and SCHEMA_ONLY tables, the database needs to have a MEMORY_OPTIMIZED_DATA filegroup with at least one container. The following script adds a filegroup with a container in the default data directory:

 ALTER DATABASE CURRENT ADD FILEGROUP [mod] CONTAINS MEMORY_OPTIMIZED_DATA
DECLARE @sql nvarchar(max) = N'ALTER DATABASE CURRENT
       ADD FILE (name=''mod'', filename=' +
       QUOTENAME(cast(SERVERPROPERTY('InstanceDefaultDataPath') as nvarchar(max)) + db_name() + N'_mod', N'''') +
             ') TO FILEGROUP [mod]'
EXECUTE sp_executesql @sql

If you would like to control the location of the container you can use the following script, which places the container in ‘c:\data\’:

 ALTER DATABASE CURRENT ADD FILEGROUP [mod] CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE CURRENT ADD FILE (name = [mod_c1], filename= 'c:\data\mod') TO FILEGROUP [mod]

Note that the container is included in database backups. You will see some IO activity in this container as database checkpoints happen, and there is some on-disk footprint, even if you’re only using SCHEMA_ONLY tables, but the actual IO activity is minimal.
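
If you want to see that footprint for yourself, one way (a sketch; the grouping is just for readability) is to query the checkpoint file DMV in SQL Server 2016:

 -- Summarize the checkpoint files backing the MEMORY_OPTIMIZED_DATA container.
SELECT file_type_desc, state_desc,
       COUNT(*) AS file_count,
       SUM(file_size_in_bytes) / 1024 / 1024 AS size_mb
FROM sys.dm_db_xtp_checkpoint_files
GROUP BY file_type_desc, state_desc;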

Finally, memory-optimized table variables and SCHEMA_ONLY tables do need to fit in memory. The following articles on MSDN and Azure.com detail how to monitor memory usage:

https://msdn.microsoft.com/library/dn465869.aspx

https://azure.microsoft.com/documentation/articles/sql-database-in-memory-oltp-monitoring/
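
As a starting point, the DMV sys.dm_db_xtp_table_memory_stats reports per-table memory usage in the current database; a minimal sketch:

 -- Memory used by each memory-optimized table (and its indexes)
-- in the current database, largest consumers first.
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_used_by_table_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
ORDER BY memory_used_by_table_kb DESC;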

As long as temp tables and table variables don’t get too big, or you can ensure there is enough available memory, there is really no reason not to memory-optimize your temp tables and table variables. It takes only a few simple steps to get started.

To get even more out of In-Memory OLTP, you can consider tuning indexes (for example, by using HASH indexes), using natively compiled stored procedures and functions, and using SCHEMA_AND_DATA memory-optimized tables for your persisted user data.
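
For example, if the dominant access pattern is point lookups on c1, the table type from earlier could use a HASH index instead of the default nonclustered index (the BUCKET_COUNT below is a placeholder; size it to roughly the number of expected unique values):

 CREATE TYPE dbo.test_memory_hash AS TABLE
(c1 INT NOT NULL INDEX ix_c1 HASH WITH (BUCKET_COUNT = 1024),
 c2 CHAR(10))
WITH (MEMORY_OPTIMIZED=ON);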

Further reading

Comments

  • Anonymous
    March 21, 2016
    Does using natively compiled stored procedures with in-memory table variables/tables give more performance?
    • Anonymous
      March 22, 2016
      Yes, using natively compiled stored procedures will further improve performance. But note that you cannot access traditional disk-based tables from native procs - you can only access memory-optimized tables.
  • Anonymous
    March 24, 2016
    I made a comment yesterday about the article above but it has not been published yet. Is something wrong with my comment and question? If I may repeat my question: "I implemented the code above in our live system but saw an unreasonable performance downgrade. What am I missing?"
    • Anonymous
      March 25, 2016
      I don't know why the comment did not come through. Must have been a fluke. I'd like to understand why you saw a perf degradation. I will follow up offline.
  • Anonymous
    June 10, 2016
    The comment has been removed
    • Anonymous
      June 20, 2016
      Apologies for the delayed response. Which of your tables are memory-optimized in this scenario? You cannot access memory-optimized tables in other databases using three-part names, so if DB1.dim.Asset is memory-optimized in this case, the statement will fail.
  • Anonymous
    August 13, 2016
    Good performance gain using memory optimized table variable !!
  • Anonymous
    April 10, 2017
    The main drawback I find replacing session-level (#temp) tables with in-memory table types is that the HK table ends up not having useful histogram information (because we always need to filter on session id). Distribution statistics (not simply cardinality) are the main reason to prefer #temp tables over table variables, so it is a shame to lose this advantage. Perhaps at some point in the future, SQL Server will offer session-scoped in-memory tables (or in-memory table variables that allow statistics).