Monitor status and availability of entities

A data entity is a combination of one or more tables that are related to each other, and it abstracts the internal communication between those tables behind a single view. Data entities play an important part in the integration process. For example, you can run a synchronous integration through OData endpoints by using a data entity.
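
To make the synchronous OData path concrete, the following sketch reads a few records from a data entity's OData endpoint. The environment URL, the access token, and the CustomersV3 entity and field names are assumptions for illustration; substitute values from your own environment.

```python
# Minimal sketch of a synchronous OData read against a data entity.
# BASE_URL, the entity name, and the token are placeholders (assumptions);
# acquire a real Azure AD access token for your environment's resource.
import requests

BASE_URL = "https://yourenvironment.operations.dynamics.com"  # hypothetical
ENTITY = "CustomersV3"  # public collection name of a data entity

response = requests.get(
    f"{BASE_URL}/data/{ENTITY}",
    headers={
        "Authorization": "Bearer <access-token>",  # Azure AD OAuth token
        "Accept": "application/json",
    },
    params={"$top": "10", "$select": "CustomerAccount,OrganizationName"},
    timeout=30,
)
response.raise_for_status()
for record in response.json()["value"]:
    print(record["CustomerAccount"], record.get("OrganizationName"))
```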

The Data management framework can run asynchronous data transfers by using a data entity. Make sure that you monitor the run status, availability, and performance of a data entity while large amounts of data are being transmitted in the integration process.

Configuration key

The availability of a data entity depends on the configuration key that's assigned to the following artifacts:

  • Data entity
  • Table that's used in the data entity
  • Table fields
  • Data entity fields

The following scenarios describe the configuration-based availability status of the data entity (a sketch of this decision logic follows the list):

  • If the configuration key status is disabled in the data entity, then the data entity won’t be available for any purpose. The configuration key value on other artifacts doesn't matter.
  • If the configuration key of the data entity is enabled, and the configuration key of the primary table of the data entity is disabled, then the data entity won’t be available for any purpose.
  • If the configuration key of the data entity is enabled, and the configuration key of any child table of the data entity is disabled, then that table and all its child tables won’t be available in the data entity. However, the overall data entity is available for use.
  • If the configuration key of the data entity and the table are enabled, but some fields of the data entity or table are disabled, then those fields won't be available in the data entity. However, the overall data entity is available for use.
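
The rules above can be modeled as plain logic. The following sketch is illustrative only; the entity and table names are hypothetical, and finance and operations apps evaluate these rules internally rather than through any such code.

```python
# Sketch of the configuration key availability rules described above.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Table:
    name: str
    key_enabled: bool
    children: list[Table] = field(default_factory=list)

@dataclass
class DataEntity:
    name: str
    key_enabled: bool
    primary_table: Table

def entity_available(entity: DataEntity) -> bool:
    # Rule 1: a disabled key on the entity itself blocks all use.
    if not entity.key_enabled:
        return False
    # Rule 2: a disabled key on the primary table also blocks all use.
    if not entity.primary_table.key_enabled:
        return False
    # Rules 3 and 4: disabled child tables or fields only drop those
    # tables/fields; the entity as a whole stays available.
    return True

def available_tables(table: Table) -> list[str]:
    # Rule 3: a disabled child table drops itself and all its children.
    if not table.key_enabled:
        return []
    names = [table.name]
    for child in table.children:
        names.extend(available_tables(child))
    return names

# Example: entity and primary table enabled, one child table disabled.
child = Table("SalesLine", key_enabled=False)
primary = Table("SalesTable", key_enabled=True, children=[child])
entity = DataEntity("SalesOrders", key_enabled=True, primary_table=primary)
print(entity_available(entity))   # True  -> entity is still usable
print(available_tables(primary))  # ['SalesTable'] -> disabled child dropped
```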

Entity list refresh

You can view the entity list from the Data entity tile of the Data management workspace. When you refresh the data entity list, the system rebuilds the configuration key metadata so that it's up to date. We recommend that you wait for the entity list refresh to complete before you use jobs and entities in the Data management framework. Otherwise, the configuration key metadata might not be up to date, which could result in unexpected outcomes.

When you change a configuration key, refresh the entity list. Until the entity list is refreshed, a warning message appears: "The entity list must be refreshed from the framework parameters form to fetch configuration key information for the data entities." After the configuration key changes take effect, validate the existing data projects and jobs to ensure that they still function as expected.

Parallel processing

If an entity supports parallel imports, you can accelerate the import of a file by enabling parallel processing for that entity (a worked example follows these steps):

  • In the Framework parameters of the Data management workspace, open Configure entity execution parameters on the Entity settings tab.
  • Select the entity that requires parallel processing for better performance.
  • In the Import threshold record count field, enter the threshold record count that determines how many records each thread processes.
  • In the Import task count field, enter the number of import tasks. This count can't exceed the max batch threads that are allocated for batch processing in the Server configuration of the System administration module.
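
As a rough illustration, assume (this split heuristic is an assumption, not the documented algorithm) that the framework creates one task per threshold-sized chunk of records, capped by both the import task count and the max batch threads:

```python
import math

def effective_import_tasks(total_records: int,
                           threshold_record_count: int,
                           import_task_count: int,
                           max_batch_threads: int) -> int:
    # One task per threshold-sized chunk of records in the file...
    chunks = math.ceil(total_records / threshold_record_count)
    # ...capped by the configured task count, which in turn can't exceed
    # the max batch threads allocated in Server configuration.
    return min(chunks, import_task_count, max_batch_threads)

# A 100,000-record file with a 10,000-record threshold yields 10 chunks;
# with Import task count = 8 and 16 batch threads, 8 tasks run in parallel.
print(effective_import_tasks(100_000, 10_000, 8, 16))  # -> 8
```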

Job history cleanup

Use the job history cleanup functionality in Data management to schedule a periodic cleanup of the run history. This functionality replaces the previous staging table cleanup functionality, which is now deprecated.

You can use the system batch jobs to clean up the history. With system batch jobs, finance and operations apps automatically schedule and run the cleanup batch job when the system is ready, so manual scheduling of the batch job isn't required.

The cleanup job archives the deleted records to the blob storage that the Data management framework uses for regular integrations. The archived file is in the Data management framework package format and is available in the blob for seven days, during which time you can download it.

Monitor run status

Every data import or export job has an execution status. A data import/export job can have the following values in the Execution status field:

  • Not run - The initial value when the data job is created.
  • Executing - The status when the job is running.
  • Succeeded - Indication that the job ran successfully.
  • Failed - Indication that an error occurred. If you receive this status, select View execution log to determine why the job failed. You can also open the Infolog for more information and review View staging data, which provides a complete list of issues.

Status and errors appear under Job history in the Data management framework, where you can view execution logs and staging data. From the staging data view, you can validate the data and, after you fix any issues, copy it to the target. For unattended integrations, you can also poll a job's status programmatically, as the following sketch shows.
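
The sketch below polls the run status over the Data management package REST API by using the GetExecutionSummaryStatus action. The environment URL, access token, and execution ID are placeholders; treat the exact action path and returned status strings as assumptions to verify against your environment.

```python
# Poll a data job's execution status via the Data management package API.
# BASE_URL, the token, and the execution ID are placeholders (assumptions).
import time
import requests

BASE_URL = "https://yourenvironment.operations.dynamics.com"  # hypothetical
ACTION = ("/data/DataManagementDefinitionGroups"
          "/Microsoft.Dynamics.DataEntities.GetExecutionSummaryStatus")

def wait_for_job(execution_id: str, token: str, poll_seconds: int = 15) -> str:
    while True:
        response = requests.post(
            BASE_URL + ACTION,
            headers={"Authorization": f"Bearer {token}"},
            json={"executionId": execution_id},
            timeout=30,
        )
        response.raise_for_status()
        status = response.json()["value"]  # e.g. "NotRun", "Executing", ...
        if status not in ("NotRun", "Executing"):
            return status  # e.g. "Succeeded" or "Failed"
        time.sleep(poll_seconds)
```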