SQL Server Best Practices Article
By Sanjay Mishra
Technical Reviewers: Lubor Kollar, Tom Davidson, Prem Mehra, Kevin Cox
Project Editor: Suzanne Bonney
Published: August 2006
Applies To: SQL Server 2005
Loading bulk data into tables is one of the most common tasks in data warehouse applications. As the data volume increases, so does the complexity of the loading process. While loading bulk data, you should consider not only how much time it takes to load the data, but also the concurrency and data availability requirements, and the maintenance operations, such as creating or rebuilding indexes.

Bulk data loading can take a significant amount of time, so you should consider the options that enable fast loading of data. However, loading fast is not enough. Loading bulk data can, at times, cause escalation of locks and impact other concurrent transactions on the table. For applications requiring concurrent data access during the load, you should consider options that provide the desired concurrency. Availability of existing data while the new data is being loaded is also an important consideration in choosing the right option. Creating and rebuilding indexes is an essential part of the data loading process, but can be time-consuming for large tables. However, there are ways to create indexes on the freshly loaded data without impacting the data in the rest of the table.

SQL Server 2005 provides partitioned tables, and when loading data into a partitioned table, you have several options. Each option provides its own distinct advantages, but the option you choose depends upon the requirements of your application, such as performance, concurrency, and temporary storage. This document presents the data loading options for a partitioned table and compares them in terms of performance, concurrency, and temporary storage requirements.
Introduction
Partitioning provides some unique advantages to bulk data loading. You can load data to an intermediate staging table, and then create indexes and constraints on it. Then, by using the SWITCH clause of the ALTER TABLE statement, you can make this data part of the partitioned table, as shown in the following procedure:
1. Create a staging table that matches the logical structure of the target table. Load the bulk data into this table. This loading can be much faster (compared to loading directly into the target table) because the staging table has no indexes or constraints on it. More importantly, while the new data is being loaded, the existing data is fully available for all transactions without any impact, because the data load is taking place on a separate staging table.
2. Create constraints and indexes on the staging table that are equivalent to those that exist on the target table. While you create constraints and indexes on the staging table, the existing data is fully available for all transactions without any impact, because this is taking place on the staging table.
3. Execute the ALTER TABLE … SWITCH statement to move the data from the staging table to an empty partition of the target table. This is a metadata-only operation, and is very fast (usually under a second) when there are no long transactions on the target table. This way, all the loading and indexing happens outside the main table, and then the data quickly moves into the main table. One important thing to note here is that the ALTER TABLE … SWITCH operation requires a schema modification (Sch-M) lock on the table. Long running transactions can block the switch operation from acquiring the Sch-M lock and make it wait.
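The following T-SQL sketch illustrates the three steps. The table, column, filegroup, and file names (FactSales, FactSales_Staging, SalesDate, FG_Week4, and the data file path) and the boundary dates are hypothetical assumptions, not objects from the test environment. Note that the staging table must reside on the same filegroup as the target partition for the switch to succeed.

```sql
-- Step 1: create and bulk load a staging table that matches the target's
-- logical structure, on the same filegroup as the target partition.
CREATE TABLE dbo.FactSales_Staging
(
    SalesDate DATETIME NOT NULL,
    ProductID INT      NOT NULL,
    Quantity  INT      NOT NULL,
    Amount    MONEY    NOT NULL
) ON [FG_Week4];

BULK INSERT dbo.FactSales_Staging
    FROM 'C:\load\day1.txt'
    WITH (TABLOCK);  -- the staging table is private, so a table lock costs nothing

-- Step 2: build the same indexes as the target, and add a CHECK constraint
-- proving that every staged row falls within the target partition's boundaries.
CREATE CLUSTERED INDEX CIX_FactSales_Staging
    ON dbo.FactSales_Staging (SalesDate) ON [FG_Week4];

ALTER TABLE dbo.FactSales_Staging
    ADD CONSTRAINT CK_FactSales_Staging_SalesDate
    CHECK (SalesDate >= '20060122' AND SalesDate < '20060129');

-- Step 3: metadata-only switch of the staged rows into the empty partition.
ALTER TABLE dbo.FactSales_Staging
    SWITCH TO dbo.FactSales PARTITION 4;
```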
In data warehouse applications, it is very common to partition data on the date dimension. This helps in periodically archiving or deleting the old data as new data comes in, with minimal effect on performance and availability. Depending on data volume, database designers pick yearly, quarterly, monthly, weekly, daily or even hourly partitions.
One way to partition a table is to create one partition for each incremental load. For example, if you load data on a daily basis, create one partition for each day. By using this partitioning approach, you can use the technique described earlier in this paper (load and index data in a staging table, and then use ALTER TABLE … SWITCH) very efficiently. One partition is added for every incremental load, and the number of partitions increases over time.
However, there is one limitation to this approach. SQL Server 2005 allows a maximum of 1,000 partitions in a table. If you need to maintain data for more than 1,000 increments (in this example, more than 1,000 days), then you need alternative approaches to load and maintain data in the table without going beyond the 1,000-partition limit. In that case, more than one incremental load of data must go into a single partition. For example, you can partition a table so that it has one partition per week, while you load data into the table on a daily basis.
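For illustration, a weekly partitioning scheme of this kind might be defined as in the following sketch. The partition function, scheme, table, column, and filegroup names and the boundary dates are hypothetical assumptions for this example.

```sql
-- Hypothetical weekly partitioning on SalesDate. RANGE RIGHT means each
-- boundary value (the first day of a week) belongs to the partition on its right.
CREATE PARTITION FUNCTION pfWeekly (DATETIME)
AS RANGE RIGHT FOR VALUES ('20060101', '20060108', '20060115', '20060122');

-- Four boundary values define five partitions, so the scheme names five filegroups.
CREATE PARTITION SCHEME psWeekly
AS PARTITION pfWeekly
TO ([FG_Old], [FG_Week1], [FG_Week2], [FG_Week3], [FG_Week4]);

CREATE TABLE dbo.FactSales
(
    SalesDate DATETIME NOT NULL,
    ProductID INT      NOT NULL,
    Quantity  INT      NOT NULL,
    Amount    MONEY    NOT NULL
) ON psWeekly (SalesDate);
```

Daily loads then go into whichever weekly partition covers the current date.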
When loading data into a partitioned table (and storing multiple incremental loads in one partition), you can choose from the following three options.
Option 1: Incrementally load into one partition. All partitions are weekly, and every day you load data into an existing partition. After the first day of the week, you are loading data into an existing partition that already contains some data. Therefore, you cannot use the partition switching technique.
Option 2: Add one smaller partition per incremental load, and periodically merge the partitions. The current week’s data is partitioned into daily partitions. Every day, you load data into a new empty partition. Because one data load corresponds to one partition, you can use the SWITCH partition technique described earlier in this paper. At the end of the week, when all seven partitions for seven days are loaded, the partitions are merged to form a weekly partition.
Option 3: Reload the full partition each time, with increasingly more data. All partitions are weekly. As the week progresses, the cumulative data volume for the week is loaded each day. For example, on the first day of the week, one day’s worth of data is loaded. On the second day, two days’ worth of data is loaded, and so on. However, because you are reloading the entire partition every day, you can load the data into a temporary staging table and use the partition switching technique.
Test Objective
The objective of this exercise is to determine the following:
Which of these options provides best performance?
Which option provides the best concurrency?
Which option requires the least amount of temporary space for the intermediate staging table?
Test Methodology
A table is partitioned on a weekly basis – one week’s data goes into one partition. 100 million rows of data (text data file size = 4.5 GB) are loaded each day. For the test setup, a time window of four weeks is considered, and the data for the first three weeks is already loaded into three partitions. We are now into the fourth week.
Option 1: Use one partition for the fourth week. Each day, data is directly loaded into the partitioned table. There is no need for reindexing, switching partitions, or merging partitions. Because data is loaded into an indexed partition, the time to load each day’s data gradually increases as the week progresses; therefore, the seventh day takes the most time.
Option 2: Use seven daily partitions for the fourth week. Each day, data is loaded into a temporary staging table, and then the staging table is indexed and a constraint is created. Then, by using ALTER TABLE … SWITCH PARTITION, the data is moved from the staging table to a partition. Because each daily partition is of the same size (one incremental load), it takes about the same amount of time to load data each day of the week. At the end of the week, the seven daily partitions are merged to create one weekly partition. The tasks on the seventh day of the week take the most amount of time, because they involve loading data into the seventh daily partition, and then merging the seven daily partitions into one weekly partition.
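Under the assumptions of the earlier sketches (the hypothetical pfWeekly function, psWeekly scheme, and FG_Week4 filegroup), option 2 adds and later removes daily boundaries roughly as follows:

```sql
-- Before each daily load, add a boundary so the new day gets its own empty partition.
ALTER PARTITION SCHEME psWeekly NEXT USED [FG_Week4];
ALTER PARTITION FUNCTION pfWeekly() SPLIT RANGE ('20060123');

-- ...load the day's data into a staging table, index it, and switch it in
-- (as shown in the earlier sketch)...

-- At the end of the week, remove the interior daily boundaries so that the
-- seven daily partitions collapse into a single weekly partition.
ALTER PARTITION FUNCTION pfWeekly() MERGE RANGE ('20060123');
ALTER PARTITION FUNCTION pfWeekly() MERGE RANGE ('20060124');
-- ...repeat MERGE RANGE for the remaining daily boundary values...
```

Merging non-empty partitions physically moves rows, which is why the merge step dominates the seventh day in the results below.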
Option 3: Use one partition for the fourth week. Each day, a temporary staging table is loaded with cumulative data for the week up to that day, and then the staging table is indexed and a constraint is created. Then, by using ALTER TABLE … SWITCH PARTITION, the data is moved from the staging table to the fourth partition. Before the data is moved from the staging table to the fourth partition, the existing data in the fourth partition is removed by switching it out to another staging table. The tasks on the seventh day of the week take the most amount of time, because they involve loading seven days’ worth of data, indexing the staging table, and then switching it into the partition. The tasks on the seventh day also involve switching out the data for the previous six days of the week to another temporary table. While loading data into the staging table, the bulk load can be parallelized by using one stream for each day’s data.
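A rough T-SQL sketch of the daily switch-out/switch-in sequence in option 3, again using the hypothetical FactSales objects, might look like this:

```sql
-- Empty the fourth (current week) partition by switching its rows out to a
-- holding table; like the switch in, this is a metadata-only operation.
ALTER TABLE dbo.FactSales
    SWITCH PARTITION 4 TO dbo.FactSales_SwitchOut;

-- Switch in the staging table that holds the cumulative data for the week so
-- far (loaded, indexed, and constrained as in the earlier sketch).
ALTER TABLE dbo.FactSales_Staging
    SWITCH TO dbo.FactSales PARTITION 4;

-- The switched-out rows are a subset of what was just switched in, so the
-- holding table can simply be truncated.
TRUNCATE TABLE dbo.FactSales_SwitchOut;
```

Both dbo.FactSales_SwitchOut and dbo.FactSales_Staging must match the structure and indexes of dbo.FactSales and must reside on the filegroup of partition 4.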
Test Results and Observations
Results
Table 1 shows the results of the test, based on the scenario defined in the test methodology.
Table 1 Test Results
| Option | Data load time (hh:mm) | Indexing, constraint creation, and partition switch (hh:mm) | Merging partitions (hh:mm) | Total worst-case time (hh:mm) | Development/administrative complexity | Impact on concurrency | Space required for the staging table |
|---|---|---|---|---|---|---|---|
| 1 | 02:37 (longest time, on the seventh day of the week) | N/A | N/A | 02:37 | Least | Lock escalation | None |
| 2 | 00:34 (data load takes the same amount of time each day) | 00:07 | 13:10 (merging partitions on the seventh day) | 13:51 | Most | Merging partitions is offline | Low (only one day’s data) |
| 3 | 01:25 | 00:41 | N/A | 02:06 | Medium | Ongoing transactions can delay the partition switch. For more information, see “Switching Partitions and Concurrency (Option 2 and Option 3)” later in this paper. | High (seven days’ worth of data requires the most space) |
Observations
Option 1 does not require any post-loading tasks, such as indexing, switching, or merging of partitions. It doesn’t require any temporary staging space. Therefore, it is the easiest to implement. However, the possibility of lock escalation needs to be taken into account. For more information, see the “Bulk Data Load and Lock Escalation” section later in this paper.
Option 2 is the fastest for getting the data into the partitioned table. However, merging partitions takes a lot of time. Also, merging partitions is an offline activity, which can be prohibitive to some applications. Merging partitions periodically also requires extra programming and administrative overhead.
Option 3 tries to capture the best parts of option 1 and option 2. Among the three options, option 3 takes the least amount of time for the seventh day (tasks on the seventh day are the most time consuming in each of the three options). Because the data is first loaded into a staging table, lock escalation during data load does not impact the ongoing transactions. However, the ongoing transactions on the table can impact the switch partition task, even if the transactions exclude the partition being switched in or out. Also, this option requires the maximum amount of temporary storage of all the options. For best results, you need two staging tables – one to switch out the existing data in the fourth partition, and another to load data prior to switch in. On the seventh day of the week, the staging tables need to be big enough to hold a combined 13 days’ worth of data. The staging table to hold the data being switched out has to temporarily hold the previous six days’ worth of data, and the staging table holding the data being switched in holds seven days’ worth of data.
Bulk Data Load and Lock Escalation (Option 1)
When you load bulk data into a table, you must consider the possibility of lock escalation. Lock escalation is most relevant in option 1, because data is directly loaded into the table serving database transactions. In the other options, bulk data is loaded into the staging table.
One way to prevent lock escalation is to use a smaller BATCHSIZE option for the BULK INSERT statement. With BATCHSIZE < 5000, each batch commits and its row locks are released before the total number of locks becomes greater than 5,000, so locks are not escalated. The downside of using a smaller batch size is that it takes longer to complete the load. The bigger the batch size, the less time it takes to complete the load, but bigger batch sizes might cause lock escalation, which reduces the concurrency of the applications that are being run at the time of the bulk load.
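For example, a direct load into the partitioned table with a batch size below the escalation threshold might look like the following sketch (the table name and file path are hypothetical):

```sql
-- Each 4,000-row batch commits and releases its row locks before the
-- roughly 5,000-lock escalation threshold is reached.
BULK INSERT dbo.FactSales
    FROM 'C:\load\day1.txt'
    WITH (BATCHSIZE = 4000);
```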
Another way to prevent lock escalation is to use trace flag 1211, which disables escalation of locks. Disabling lock escalation can improve the concurrency of the applications that are being run at the time of the bulk load. However, using trace flag 1211 can cause memory pressure, because a much larger number of locks are then held in memory. This might be a concern for systems with limited memory. The perfmon counter Page Life Expectancy in the SQL Server: Buffer Manager object (the number of seconds a page will stay in the buffer pool) can provide evidence of memory pressure if it drops during the bulk load operation. Another trace flag, 1224, disables lock escalation based on the number of locks held, but memory pressure can still cause lock escalation. If both trace flags are set, 1211 takes precedence over 1224. For more information about lock escalation, see the Lock Escalation (Database Engine) topic in SQL Server Books Online (https://go.microsoft.com/fwlink/?LinkId=70947). For more information about trace flags, see the Trace Flags (Transact-SQL) topic in SQL Server Books Online (https://go.microsoft.com/fwlink/?LinkId=70948).
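A sketch of enabling a trace flag for the duration of the load follows; the -1 argument applies the flag to the whole instance rather than only to the current connection.

```sql
-- Disable lock escalation instance-wide for the duration of the bulk load.
DBCC TRACEON (1211, -1);

-- ...run the BULK INSERT...

-- Restore normal lock escalation behavior after the load completes.
DBCC TRACEOFF (1211, -1);
```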
Time to load each day’s data (Option 1)
The time it takes to load each day’s data over a week is illustrated in Figure 1.
Figure 1 Time to load each day’s data by using Option 1
Figure 1 shows that, for each day, the time to load the data is less for a bigger batch size. Another important thing to note here is that, because we are loading data into an indexed partition, each day it takes longer to load the data than the previous day. This is because the existing index must be maintained while loading new rows.
Switching Partitions and Concurrency (Option 2 and Option 3)
ALTER TABLE ... SWITCH ... PARTITION is a very convenient way to move bulk data in and out of a partitioned table. However, the locking requirements of the ALTER TABLE statement might cause problems in situations where there are ongoing queries or DML statements on the table. Table 2 summarizes the current behavior:
Table 2 Concurrency Impact of Ongoing Transactions on Switching Partitions
| Scenario | Description | Impact on ALTER TABLE … SWITCH |
|---|---|---|
| Case 1 | No query/DML running on the table | No impact |
| Case 2 | Query/DML running on the table, but it does not involve the partition being switched in or out (partition elimination) | ALTER TABLE … SWITCH waits for the query/DML to complete |
| Case 3 | Query/DML running on the table, and it involves the partition being switched in or out | ALTER TABLE … SWITCH waits for the query/DML to complete |
If the ongoing transactions are small, there is no significant impact on the time it takes to switch partitions. However, in the presence of long running transactions, the wait for the switch to take place can be longer. The ALTER TABLE … SWITCH operation requires a schema modification (Sch-M) lock on the table. Long running transactions can block the switch from acquiring the Sch-M lock and make it wait longer. The stored procedure sp_lock can be used to determine which locks various sessions are waiting for.
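As a quick check, the following sketch shows two ways to see which lock requests are waiting (for example, the Sch-M lock requested by the switch); the sys.dm_tran_locks dynamic management view is available in SQL Server 2005.

```sql
-- Lock requests that are still waiting, with the sessions that own them.
SELECT resource_type, request_mode, request_status, request_session_id
FROM sys.dm_tran_locks
WHERE request_status = 'WAIT';

-- The older sp_lock procedure provides a similar, less detailed view.
EXEC sp_lock;
```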
Recommendations
If development or administrative complexity is of prime concern, then option 1 is the best choice. The only thing you need is a BULK INSERT statement. Compared to other options, this is much easier to develop and implement in an enterprise environment.
If storage for temporary staging is of prime concern, then option 1 is also the best choice. It doesn’t require any temporary staging space as data is directly loaded into the target table.
If you can have a long enough time window for bulk data loading when no online access to the data is required, then option 1 is a very good choice. You can choose a high batch size to get better data load performance, and use the TABLOCK option of the BULK INSERT statement so that the load takes a table-level lock up front and lock escalation is no longer a concern.
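For such an offline load window, the statement might be as simple as the following sketch (table name and file path hypothetical):

```sql
-- TABLOCK takes a table-level lock for the duration of the load, so row-lock
-- escalation never comes into play; omitting BATCHSIZE loads the whole file
-- as a single batch for the best throughput.
BULK INSERT dbo.FactSales
    FROM 'C:\load\day1.txt'
    WITH (TABLOCK);
```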
If such a long time window is not available and you still want to use option 1, you need to choose a small batch size to avoid lock escalation. This will cause the load to take longer. To help avoid this problem, instead of a smaller batch size, you can set trace flag 1224 to prevent lock escalation based on the number of locks. If you choose trace flag 1211 to prevent lock escalation completely, then you should consider the memory requirements of holding a large number of locks.
If the frequency of data loading matches the partitioning scheme (for example, you load data once every week, and your table is partitioned once per week), then option 2 is the best option, because no time is wasted in merging partitions.
If the storage for staging tables is not a concern, option 3 provides the fastest way to load data. The only concern is the impact of ongoing transactions on the switch operation. The switch typically takes less than one second when there are no transactions, or there are some short transactions on the table. If you can have a very small data loading window, when you can get exclusive access to the table, option 3 is an excellent choice. However, if your application contains long transactions that need continuous access to the table, the switch operation might have to wait until it gains exclusive access to the table.
Conclusion
Partitioned tables provide several options for bulk data loading. Each option has its own advantages. One important criterion is the frequency of data loads and the number of load increments that you need to maintain in the table. If you need to keep 1,000 load increments or fewer, creating one partition per incremental load might be a good option. SQL Server 2005 allows a maximum of 1,000 partitions per table. Therefore, if you need to keep more than 1,000 incremental loads in the table, the data loading options described in this document will be very useful. Take into account the performance, concurrency, and data availability requirements of the application when you choose one of the options. You should also consider the storage requirements for intermediate staging tables.