Plan your adoption of In-Memory OLTP Features in SQL Server
Applies to: SQL Server, Azure SQL Database, Azure SQL Managed Instance
This article describes the ways in which the adoption of in-memory features in SQL Server affects other aspects of your business system.
Note
- For more information specific to in-memory data in Azure SQL Database, see Optimize performance by using in-memory technologies in Azure SQL Database and Blog: In-Memory OLTP in Azure SQL Database.
- For more information specific to in-memory data in Azure SQL Managed Instance, see Optimize performance by using in-memory technologies in Azure SQL Managed Instance.
A. Adoption of In-Memory OLTP features
The following subsections discuss factors you must consider when you plan to adopt and implement In-Memory features.
A.1 Prerequisites
One prerequisite for using the In-Memory features can involve the edition or service tier of the SQL product. For this and other prerequisites, see:
- Requirements for Using Memory-Optimized Tables
- Editions and supported features of SQL Server 2022
- SQL Database pricing tier recommendations
A.2 Forecast the amount of active memory
Does your system have enough active memory to support a new memory-optimized table?
Microsoft SQL Server
A memory-optimized table that contains 200 GB of data requires that more than 200 GB of active memory be dedicated to its support. Before you implement a memory-optimized table containing a large amount of data, you must forecast the amount of additional active memory you might need to add to your server computer. For estimation guidance, see:
Similar guidance is available for Azure SQL Managed Instance:
Azure SQL Database
For a database hosted in the Azure SQL Database cloud service, your chosen service tier affects the amount of active memory your database is allowed to consume. You should plan to monitor the memory usage of your database by using an alert. For details, see:
- Review the In-Memory OLTP Storage limits for your Pricing Tier
- Monitor In-Memory OLTP storage in Azure SQL Database
Memory-optimized table variables
A table variable that is declared to be memory-optimized is sometimes preferable to a traditional #TempTable that resides in the tempdb database. Table variables can provide performance gains without using significant amounts of active memory.
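Here is a minimal sketch of a memory-optimized table variable. It assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup, and the type name dbo.OrderIdList is hypothetical:

```sql
-- A memory-optimized table type; its table variables live in active
-- memory rather than in tempdb.
CREATE TYPE dbo.OrderIdList AS TABLE
(
    OrderId INT NOT NULL PRIMARY KEY NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Declare and use a variable of the memory-optimized type.
DECLARE @Orders dbo.OrderIdList;

INSERT INTO @Orders (OrderId) VALUES (1), (2), (3);

SELECT OrderId FROM @Orders;
```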
A.3 Table must be offline to convert to memory-optimized
Some ALTER TABLE functionality is available for memory-optimized tables, but you cannot issue an ALTER TABLE statement to convert a disk-based table into a memory-optimized table. Instead, you must use a more manual set of steps. The following are several ways to convert a disk-based table to a memory-optimized table.
Manual scripting
One way to convert your disk-based table to a memory-optimized table is to code the necessary Transact-SQL steps yourself, as in the sketch that follows these steps.
1. Suspend application activity.
2. Take a full backup.
3. Rename your disk-based table.
4. Issue a CREATE TABLE statement to create your new memory-optimized table.
5. INSERT INTO your memory-optimized table with a sub-SELECT from the disk-based table.
6. DROP your disk-based table.
7. Take another full backup.
8. Resume application activity.
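Here is a minimal sketch of steps 3 through 6. It assumes a simple disk-based table named dbo.Sales (a hypothetical name and schema) and a database that already has a MEMORY_OPTIMIZED_DATA filegroup:

```sql
-- Hypothetical illustration of steps 3-6; adjust columns and indexes to your schema.
EXEC sp_rename 'dbo.Sales', 'Sales_old';        -- step 3: rename the disk-based table
GO

CREATE TABLE dbo.Sales                          -- step 4: create the memory-optimized table
(
    SalesId INT           NOT NULL PRIMARY KEY NONCLUSTERED,
    Amount  DECIMAL(10,2) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

INSERT INTO dbo.Sales (SalesId, Amount)         -- step 5: copy the data across
    SELECT SalesId, Amount
    FROM dbo.Sales_old;
GO

DROP TABLE dbo.Sales_old;                       -- step 6: drop the disk-based table
GO
```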
Memory Optimization Advisor
The Memory Optimization Advisor tool can generate a script to help implement the conversion of a disk-based table to a memory-optimized table. The tool is installed as part of SQL Server Management Studio (SSMS).
.dacpac file
You can update your database in place by using a .dacpac file, managed by SSDT. In SSDT, you can specify changes to the schema that is encoded in the .dacpac file.
You work with .dacpac files in the context of a Visual Studio project of type Database.
- Data-tier Applications and .dacpac files
A.4 Guidance for whether In-Memory OLTP features are right for your application
For guidance on whether In-Memory OLTP features can improve the performance of your particular application, see:
B. Unsupported features
Features that are not supported in certain In-Memory OLTP scenarios are described at:
The following subsections highlight some of the more important unsupported features.
B.1 SNAPSHOT of a database
After the first time that any memory-optimized table or module is created in a given database, no SNAPSHOT of the database can ever be taken. The specific reasons are:
- The first memory-optimized item makes it impossible to ever drop the last file from the memory-optimized FILEGROUP; and
- No database that has a file in a memory-optimized FILEGROUP can support a SNAPSHOT.
Normally a SNAPSHOT can be handy for quick testing iterations.
B.2 Cross-database queries
Memory-optimized tables do not support cross-database transactions. You cannot access another database from the same transaction or the same query that also accesses a memory-optimized table.
Table variables are not transactional. Therefore, memory-optimized table variables can be used in cross-database queries.
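Here is a hedged sketch of the pattern; the names OtherDb, dbo.Orders, dbo.Customers_InMem, and the memory-optimized table type dbo.IdList are all hypothetical:

```sql
DECLARE @Ids dbo.IdList;   -- dbo.IdList: a memory-optimized table type

-- Same-database statement: copy keys out of the memory-optimized table.
INSERT INTO @Ids (Id)
    SELECT CustomerId FROM dbo.Customers_InMem;

-- Cross-database query: the table variable can participate, because table
-- variables are not transactional.
SELECT o.OrderId, o.CustomerId
FROM OtherDb.dbo.Orders AS o
INNER JOIN @Ids AS i ON i.Id = o.CustomerId;
```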
B.3 READPAST table hint
No query can apply the READPAST table hint to any memory-optimized table.
The READPAST hint is helpful in scenarios where several sessions are each accessing and modifying the same small set of rows, such as in processing a queue.
B.4 RowVersion, Sequence
No column can be tagged for RowVersion on a memory-optimized table.
A SEQUENCE cannot be used with a constraint in a memory-optimized table. For example, you cannot create a DEFAULT constraint with a NEXT VALUE FOR clause. However, a SEQUENCE can be referenced directly in INSERT and UPDATE statements.
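The following sketch shows the supported pattern; the sequence and table names are hypothetical:

```sql
CREATE SEQUENCE dbo.TicketSeq AS INT START WITH 1 INCREMENT BY 1;
GO

-- Not supported: DEFAULT (NEXT VALUE FOR dbo.TicketSeq) on a memory-optimized table.
-- Supported: referencing the sequence directly in the DML statement.
INSERT INTO dbo.Tickets_InMem (TicketId, Subject)
VALUES (NEXT VALUE FOR dbo.TicketSeq, N'Example ticket');
```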
C. Administrative maintenance
This section describes differences in database administration where memory-optimized tables are used.
C.1 Identity seed reset, increment > 1
DBCC CHECKIDENT cannot be used to reseed an IDENTITY column on a memory-optimized table.
The increment value is restricted to exactly 1 for an IDENTITY column on a memory-optimized table.
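A minimal sketch of a supported declaration, with a hypothetical table name:

```sql
-- The IDENTITY property on a memory-optimized table must use an increment
-- of exactly 1, and the column cannot be reseeded with DBCC CHECKIDENT.
CREATE TABLE dbo.Events_InMem
(
    EventId INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(400) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```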
C.2 DBCC CHECKDB cannot validate memory-optimized tables
The DBCC CHECKDB command does nothing when its target is a memory-optimized table. The following steps are a workaround:
1. Back up the files in the memory-optimized FILEGROUP to a null device. The backup process invokes a checksum validation (a Transact-SQL sketch of this step follows the list).
2. If corruption is found, proceed with the next steps.
3. Copy data from your memory-optimized tables into disk-based tables, for temporary storage.
4. Restore the files of the memory-optimized FILEGROUP.
5. INSERT INTO the memory-optimized tables the data you temporarily stored in the disk-based tables.
6. DROP the disk-based tables that temporarily held the data.
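What follows is a hedged sketch of step 1; the database name InMemDb and the filegroup name InMemDb_mofg are hypothetical:

```sql
-- Backing up to the 'NUL' device discards the output, while the CHECKSUM
-- option makes the backup validate checksums as it reads the files.
BACKUP DATABASE InMemDb
    FILEGROUP = 'InMemDb_mofg'    -- the MEMORY_OPTIMIZED_DATA filegroup
    TO DISK = 'NUL'
    WITH CHECKSUM;
```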
D. Performance
This section describes situations where the excellent performance of memory-optimized tables can fall short of its full potential.
D.1 Index considerations
All indexes on a memory-optimized table are created and managed by the table-related statements CREATE TABLE and ALTER TABLE. You cannot target a memory-optimized table with a CREATE INDEX statement.
The traditional B-tree nonclustered index is often the sensible and simple choice when you first implement a memory-optimized table. Later, after you see how your application performs, you can consider swapping in another index type; a short example follows the note below.
Note
Documentation uses the term B-tree generally in reference to indexes. In rowstore indexes, the Database Engine implements a B+ tree. This does not apply to columnstore indexes or indexes on memory-optimized tables. For more information, see the SQL Server and Azure SQL index architecture and design guide.
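For example, the following hedged sketch adds an index to an existing memory-optimized table (the names are hypothetical); contrast this with the CREATE INDEX statement you would use for a disk-based table:

```sql
-- Indexes on memory-optimized tables are added and dropped through
-- ALTER TABLE, not through CREATE INDEX or DROP INDEX.
ALTER TABLE dbo.Accounts_InMem
    ADD INDEX ix_Accounts_Balance NONCLUSTERED (Balance);
```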
Two special types of indexes need discussion in the context of a memory-optimized table: Hash indexes, and Columnstore indexes.
For an overview of indexes on memory-optimized tables, see:
Hash indexes
Hash indexes can be the fastest format for accessing one specific row by its exact primary key value by using the '=' operator.
Inexact operators such as '!=', '>', or 'BETWEEN' would harm performance if used with a hash index.
A hash index might not be the best choice if the rate of key value duplication becomes too high.
Guard against underestimating how many buckets your hash index might need, to avoid long chains within individual buckets. For details, see:
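Here is a minimal sketch, with hypothetical names and a BUCKET_COUNT sized to between one and two times the expected number of distinct key values:

```sql
-- A hash index declared inline, sized for roughly 1-2 million distinct keys.
CREATE TABLE dbo.Accounts_Hash
(
    AccountId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 2000000),
    Balance   DECIMAL(19,4) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- Equality lookups can use the hash index efficiently.
SELECT Balance FROM dbo.Accounts_Hash WHERE AccountId = 42;

-- Inexact predicates such as BETWEEN cannot seek a hash index; they would
-- need a nonclustered (range) index instead.
```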
Nonclustered columnstore indexes
Memory-optimized tables deliver high throughput for typical business transactional data, in the paradigm called online transaction processing, or OLTP. Columnstore indexes deliver high throughput for aggregations and similar analytics processing. In years past, the best approach available for satisfying the needs of both OLTP and analytics was to maintain separate tables, with heavy movement of data and some degree of data duplication. Today a simpler hybrid solution is available: a columnstore index on a memory-optimized table.
A columnstore index can be built on a disk-based table, even as the clustered index. But on a memory-optimized table a columnstore index cannot be clustered.
LOB or off-row columns for a memory-optimized table prevent the creation of a columnstore index on the table.
No ALTER TABLE statement can be executed against a memory-optimized table while a columnstore index exists on the table.
- As of August 2016, Microsoft has near-term plans to improve the performance of re-creating the columnstore index.
D.2 LOB and off-row columns
Large objects (LOBs) are columns of such types as varchar(max). Having a couple of LOB columns on a memory-optimized table probably does not harm performance enough to matter. But do avoid having more LOB columns than your data needs. The same advice applies to off-row columns. Do not define a column as nvarchar(3072) if varchar(512) will suffice.
A bit more about LOB and off-row columns is available at:
E. Limitations of native procs
Particular elements of Transact-SQL are not supported in natively compiled T-SQL modules, including stored procedures. For details about which features are supported, see:
For considerations when migrating a Transact-SQL module that uses unsupported features to be natively compiled, see:
Besides limitations on certain elements of Transact-SQL, there are also limitations on query operators supported in natively compiled T-SQL modules. Because of these limitations, natively compiled stored procedures are not suitable for analytical queries that process large data sets.
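For reference, here is a minimal skeleton of a natively compiled stored procedure; the procedure and table names are hypothetical:

```sql
CREATE PROCEDURE dbo.GetAccountBalance
    @AccountId INT
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- Only a subset of Transact-SQL is allowed inside the ATOMIC block.
    SELECT Balance
    FROM dbo.Accounts_InMem
    WHERE AccountId = @AccountId;
END;
```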
No parallel processing in a native proc
Parallel processing cannot be a part of any query plan for a native proc. Native procs are always single-threaded.
Join types
Neither hash joins nor merge joins can be part of any query plan for a native proc. Only nested loop joins are used.
No hash aggregation
When the query plan for a native proc requires an aggregation phase, only stream aggregation is available. Hash aggregation is not supported in a query plan for a native proc.
- Hash aggregation is better when data from a large number of rows must be aggregated.
F. Application design: Transactions and retry logic
A transaction involving a memory-optimized table can become dependent on another transaction that involves the same table. If the count of dependent transactions reaches the allowed maximum, all the dependent transactions fail.
In SQL Server 2016:
- The allowed maximum is eight dependent transactions. Eight is also the limit of transactions that any given transaction can be dependent on.
- The error number is 41839. (In SQL Server 2014 the error number is 41301.)
You can make your Transact-SQL scripts more robust against a possible transaction error by adding retry logic to your scripts, as in the sketch at the end of this section. Retry logic is more likely to help when UPDATE and DELETE calls are frequent, or if the memory-optimized table is referenced by a foreign key in another table. For details, see:
- Transactions with Memory-Optimized Tables
- Transaction dependency limits with memory optimized tables - Error 41839
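Here is a hedged sketch of such retry logic, assuming a hypothetical procedure dbo.UpdateSalesInMem that modifies a memory-optimized table:

```sql
DECLARE @retry INT = 5;

WHILE @retry > 0
BEGIN
    BEGIN TRY
        EXEC dbo.UpdateSalesInMem;
        SET @retry = 0;                    -- success: exit the loop
    END TRY
    BEGIN CATCH
        -- 41839 (SQL Server 2016) and 41301 (SQL Server 2014) signal
        -- dependency failures that are safe to retry.
        IF ERROR_NUMBER() IN (41839, 41301) AND @retry > 1
        BEGIN
            SET @retry = @retry - 1;
            WAITFOR DELAY '00:00:00.050';  -- brief pause before retrying
        END
        ELSE
            THROW;                         -- any other error: rethrow to the caller
    END CATCH
END;
```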