Processing Options and Settings

When you process objects in Microsoft SQL Server Analysis Services, you can select a processing option to control the type of processing that occurs for each object, or you can let Analysis Services determine the appropriate type of processing. Processing methods differ from one object to another and are based on the type of object and on the changes that have occurred to the object since it was last processed. If you let Analysis Services automatically select a processing method, it uses the method that returns the object to a fully processed state in the least amount of time.

Processing settings let you control the objects that are processed, and the methods that are used to process those objects. Some processing settings are primarily used for batch processing jobs. For more information about batch processing, see Batch Processing in Analysis Services.

Processing Options

The following list describes the processing options that are available in Analysis Services, and identifies the objects for which each option is supported. A brief, illustrative sketch of submitting one of these options appears after the list.

  • Process Default
    Detects the process state of an object, and performs the processing necessary to bring unprocessed or partially processed objects to a fully processed state. This processing option is supported for cubes, databases, dimensions, measure groups, mining models, mining structures, and partitions.

  • Process Full
    Processes an Analysis Services object and all the objects that it contains. When Process Full is executed against an object that has already been processed, Analysis Services drops all data in the object, and then processes the object. This kind of processing is required when a structural change has been made to an object, for example, when an attribute hierarchy is added, deleted, or renamed. This processing option is supported for cubes, databases, dimensions, measure groups, mining models, mining structures, and partitions.

  • Process Incremental
    Adds newly available fact data and processes only the relevant partitions. This processing option is supported for measure groups and partitions.

  • Process Update
    Forces a re-read of data and an update of dimension attributes. Flexible aggregations and indexes on related partitions will be dropped. For example, this processing option can add new members to a dimension and force a complete re-read of the data to update object attributes. This processing option is supported for dimensions.

  • Process Index
    Creates or rebuilds indexes and aggregations for all processed partitions. This option causes an error on unprocessed objects. This processing option is supported for cubes, dimensions, measure groups, and partitions.

  • Process Data
    Processes data only without building aggregations or indexes. If there is data in the partitions, it is dropped before the partition is repopulated with source data. This processing option is supported for dimensions, cubes, measure groups, and partitions.

  • Unprocess
    Drops the data in the object specified and any lower-level constituent objects. After the data is dropped, it is not reloaded. This processing option is supported for cubes, databases, dimensions, measure groups, mining models, mining structures, and partitions.

  • Process Structure
    If the cube is unprocessed, Analysis Services first processes the cube's dimensions, if necessary. After that, Analysis Services creates only cube definitions. If this option is applied to a mining structure, it populates the mining structure with source data. The difference between this option and the Process Full option is that this option does not iterate the processing down to the mining models themselves. This processing option is supported for cubes and mining structures.

  • Process Clear Structure
    Removes all training data from a mining structure. This processing option is supported for mining structures only.

  • Process Script Cache
    This feature will be removed in the next version of Microsoft SQL Server. Do not use this feature in new development work, and modify applications that currently use this feature as soon as possible.
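
For illustration only, each of these options corresponds to a process type that can be submitted to the server as an XMLA Process command, for example from the XMLA query window in SQL Server Management Studio. The minimal Python sketch below builds such a command; the database and dimension IDs and the helper function are hypothetical, and option names map to XMLA type names (for example, Process Incremental is typically expressed as ProcessAdd and Unprocess as ProcessClear).

    # Minimal sketch: build an XMLA Process command for a single dimension.
    # Database and dimension IDs are placeholders; substitute your own object IDs.
    PROCESS_TEMPLATE = """\
    <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Object>
        <DatabaseID>{database_id}</DatabaseID>
        <DimensionID>{dimension_id}</DimensionID>
      </Object>
      <Type>{process_type}</Type>
    </Process>"""

    def build_dimension_process(database_id: str, dimension_id: str, process_type: str) -> str:
        """Return an XMLA Process command for a dimension.

        process_type is an XMLA process type name such as ProcessFull,
        ProcessDefault, ProcessData, ProcessIndexes, or ProcessUpdate.
        """
        return PROCESS_TEMPLATE.format(
            database_id=database_id,
            dimension_id=dimension_id,
            process_type=process_type,
        )

    # Example: request a Process Update of a dimension.
    print(build_dimension_process("Adventure Works DW", "Dim Customer", "ProcessUpdate"))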

Processing Settings

The following processing settings are available for use when you create a process operation.

Parallel

Used for batch processing. This setting causes Analysis Services to run processing tasks in parallel inside a single transaction. If there is a failure, the result is a rollback of all changes. You can set the maximum number of parallel tasks explicitly, or let the server decide the optimal distribution. The Parallel option is useful for speeding up processing.
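
As an illustrative sketch, the Parallel setting corresponds to the Parallel element of an XMLA Batch command; its MaxParallel attribute sets the maximum number of concurrent tasks, and a value of 0 typically lets the server decide. The Python snippet below uses placeholder object IDs.

    # Minimal sketch: process two partitions in parallel inside a single batch.
    # MaxParallel caps concurrency; the object IDs are placeholders.
    batch = """\
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Parallel MaxParallel="4">
        <Process>
          <Object>
            <DatabaseID>Adventure Works DW</DatabaseID>
            <CubeID>Adventure Works</CubeID>
            <MeasureGroupID>Internet Sales</MeasureGroupID>
            <PartitionID>Internet_Sales_2023</PartitionID>
          </Object>
          <Type>ProcessData</Type>
        </Process>
        <Process>
          <Object>
            <DatabaseID>Adventure Works DW</DatabaseID>
            <CubeID>Adventure Works</CubeID>
            <MeasureGroupID>Internet Sales</MeasureGroupID>
            <PartitionID>Internet_Sales_2024</PartitionID>
          </Object>
          <Type>ProcessData</Type>
        </Process>
      </Parallel>
    </Batch>"""
    print(batch)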

Sequential (Transaction Mode)

Controls the execution behavior of the processing job. Two options are available:

  • One Transaction. The processing job runs as a transaction. If all processes inside the processing job succeed, all changes by the processing job are committed. If one process fails, all changes by the processing job are rolled back. One Transaction is the default value.

  • Separate Transactions. Each process in the processing job runs as a stand-alone job. If one process fails, only that process is rolled back and the processing job continues. Each job commits all process changes at the end of the job.

When you process using One Transaction, all changes are committed after the processing job succeeds. This means that all Analysis Services objects affected by a particular processing job remain available for queries until the commit process, which makes the objects temporarily unavailable. Using Separate Transactions causes all objects that are affected by a process in the processing job to become unavailable for queries as soon as that process succeeds.
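
As a sketch, this choice is commonly expressed through the Transaction attribute of an XMLA Batch command: true runs the whole batch as one transaction, and false runs each process in its own transaction. The Python snippet below uses placeholder object IDs.

    # Minimal sketch: run each Process in its own transaction (Separate Transactions).
    # Setting Transaction="true" would instead run the whole batch as one transaction.
    batch = """\
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
           Transaction="false">
      <Process>
        <Object>
          <DatabaseID>Adventure Works DW</DatabaseID>
          <DimensionID>Dim Date</DimensionID>
        </Object>
        <Type>ProcessDefault</Type>
      </Process>
      <Process>
        <Object>
          <DatabaseID>Adventure Works DW</DatabaseID>
          <DimensionID>Dim Customer</DimensionID>
        </Object>
        <Type>ProcessDefault</Type>
      </Process>
    </Batch>"""
    print(batch)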

Writeback Table Option

Controls how writeback tables are handled during processing. This setting applies to writeback partitions in a cube and provides the following options:

  • Use Existing. Uses the existing writeback table. This is the default value.

  • Create. Creates a new writeback table and causes the process to fail if one already exists.

  • Create Always. Creates a new writeback table even if one already exists. An existing table is deleted and replaced.
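
A minimal sketch follows, assuming this setting is expressed through the WritebackTableCreation element of an XMLA Process command with the values UseExisting, Create, and CreateAlways; the object IDs are placeholders.

    # Minimal sketch: fully process a writeback partition and recreate its writeback table.
    # The WritebackTableCreation element and all object IDs are assumptions to adapt.
    process = """\
    <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Object>
        <DatabaseID>Adventure Works DW</DatabaseID>
        <CubeID>Adventure Works</CubeID>
        <MeasureGroupID>Sales Targets</MeasureGroupID>
        <PartitionID>Sales_Targets_Writeback</PartitionID>
      </Object>
      <Type>ProcessFull</Type>
      <WritebackTableCreation>CreateAlways</WritebackTableCreation>
    </Process>"""
    print(process)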

Process Affected Objects

Controls the object scope of the processing job. An affected object is defined by object dependency. For example, partitions are dependent on the dimensions that determine aggregation, but dimensions are not dependent on partitions. You can use the following options:

  • False. The job processes the objects explicitly named in the job and all dependent objects. For example, if the processing job contains only dimensions, Analysis Services processes just those objects explicitly identified in the job. If the processing job contains partitions, partition processing automatically invokes processing of affected dimensions. False is the default setting.

  • True. The job processes the objects explicitly named in the job, all dependent objects, and all objects affected by the objects being processed without changing the state of the affected objects. For example, if the processing job contains only dimensions, Analysis Services also processes all partitions affected by the dimension processing for partitions that are currently in a processed state. Affected partitions that are currently in an unprocessed state are not processed. However, because partitions are dependent on dimensions, if the processing job contains only partitions, partition processing automatically invokes processing of affected dimensions, even when the dimension is currently in an unprocessed state.
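
As an illustration, this setting corresponds to the ProcessAffectedObjects attribute of an XMLA Batch command in batch processing. The Python snippet below is a sketch with placeholder object IDs.

    # Minimal sketch: update a dimension and ask the server to also process
    # objects affected by that update (for example, partitions whose flexible
    # aggregations were dropped). Object IDs are placeholders.
    batch = """\
    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
           ProcessAffectedObjects="true">
      <Process>
        <Object>
          <DatabaseID>Adventure Works DW</DatabaseID>
          <DimensionID>Dim Product</DimensionID>
        </Object>
        <Type>ProcessUpdate</Type>
      </Process>
    </Batch>"""
    print(batch)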

Dimension Key Errors

Determines the action taken by Analysis Services when errors occur during processing. When you select Use custom error configuration, you can select values for the following actions to control error-handling behavior:

  • Key error action. Determines which of the following actions is taken when a key error occurs for a record:

    • Convert to unknown. The key is interpreted as an unknown member. This is the default setting.

    • Discard record. The record is discarded.

  • Processing error limit. Controls the number of errors processed by selecting one of these options:

    • Ignore errors count. This enables processing to continue regardless of the number of errors.

    • Stop on error. With this option, you control two additional settings. Number of errors lets you limit processing to the occurrence of a specific number of errors. On error action lets you determine the action when Number of errors is reached. You can select Stop processing, which causes the processing job to fail and roll back any changes, or Stop logging, which enables processing to continue without logging errors. Stop on error is the default setting with Number of errors set to 0 and On error action set to Stop processing.

  • Specific error conditions. You can set the following options to control specific error-handling behavior:

    • Key not found. Occurs when a key value exists in a partition but does not exist in the corresponding dimension. The default setting is Report and continue. Other settings are Ignore error and Report and stop.

    • Duplicate key. Occurs when more than one key value exists in a dimension. The default setting is Ignore error. Other settings are Report and continue and Report and stop.

    • Null key converted to unknown. Occurs when a key value is null and the Key error action is set to Convert to unknown. The default setting is Ignore error. Other settings are Report and continue and Report and stop.

    • Null key not allowed. Occurs when Key error action is set to Discard record. The default setting is Report and continue. Other settings are Ignore error and Report and stop.

When you select Use default error configuration, Analysis Services uses the error configuration that is set for each object being processed. If an object is set to use default configuration settings, Analysis Services uses the default settings that are listed for each option.
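
For illustration, the dimension key error choices described above correspond to an ErrorConfiguration element that can accompany an XMLA Process or Batch command. The sketch below is a hypothetical custom configuration; the element values and object IDs are assumptions to adapt for a specific environment.

    # Minimal sketch: a custom error configuration attached to a Process command.
    # The element names mirror the options above: the key error action, the error
    # limit and its action, and the four specific error conditions.
    process = """\
    <Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <Object>
        <DatabaseID>Adventure Works DW</DatabaseID>
        <DimensionID>Dim Customer</DimensionID>
      </Object>
      <Type>ProcessUpdate</Type>
      <ErrorConfiguration>
        <KeyErrorAction>ConvertToUnknown</KeyErrorAction>
        <KeyErrorLimit>100</KeyErrorLimit>
        <KeyErrorLimitAction>StopProcessing</KeyErrorLimitAction>
        <KeyNotFound>ReportAndContinue</KeyNotFound>
        <KeyDuplicate>IgnoreError</KeyDuplicate>
        <NullKeyConvertedToUnknown>IgnoreError</NullKeyConvertedToUnknown>
        <NullKeyNotAllowed>ReportAndContinue</NullKeyNotAllowed>
      </ErrorConfiguration>
    </Process>"""
    print(process)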