Run data import
Data import runs directly on the Microsoft Dataverse server. To run a data import, submit asynchronous jobs that run in the background and do the following, in this order:
- Parse source data that is contained in the import file.
- Transform parsed data by using the data map.
- Upload transformed data into Dataverse.
All Dataverse users who have appropriate permissions can run data import.
Parse source data
Parsing source data includes parsing all import files associated with a particular import (data import).
Parsed data is stored in temporary parse tables that are created for each import file. The name of the parse table is stored in the ImportFile.ParsedTableName column. The source file column headings are specified in the ImportFile.HeaderRow column. If the source file does not include a first row that contains the column headings, this column specifies the system-generated default column headings.
Save parsed data in the parse table by using the ParseImport message. Retrieve data from the parse table by using the GetDistinctValuesImportFile message and the RetrieveParsedDataImportFile message.
The following table lists the messages that you can use to parse the import files and retrieve the parsed data from the parse tables.
| Message | SDK Class | Web API Action or Function | Description |
|---|---|---|---|
| ParseImport | ParseImportRequest Class | ParseImport Action | Submits an asynchronous job that parses all import files associated with the specified import (data import). Pass the ID of the associated import (data import) in the ImportId property of this request. The ID of the asynchronous job that runs in the background and performs parsing of data is returned in the AsyncOperationId property of the response. |
| GetDistinctValuesImportFile | GetDistinctValuesImportFileRequest Class | GetDistinctValuesImportFile Function | Returns distinct values for a column in the source file that contains list values. Pass the ID of the associated import file in the ImportFileId property of this request. The distinct values are returned in an array of strings in the Values property of the response. Use this message only after you have created a parse table by using the ParseImport message. Important: Do not use this message after you use the ImportRecordsImport message. You cannot access the parse table after the import job submitted by the ImportRecordsImport message has finished running. |
| RetrieveParsedDataImportFile | RetrieveParsedDataImportFileRequest Class | RetrieveParsedDataImportFile Function | Retrieves the data from the parse table. Pass the ID of the associated import file in the ImportFileId property of this request. The parsed data is returned in a two-dimensional array of strings in the Values property of the response. The data is returned in the same column order as the source file. Use this message only after you have created a parse table by using the ParseImport message. Important: Do not use this message after you use the ImportRecordsImport message. You cannot access the parse table after the import job submitted by the ImportRecordsImport message has finished running. |
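The following C# sketch shows how the parse job might be submitted and monitored with the SDK. It assumes an existing IOrganizationService connection (service) and the ID of an import (data import) record (importId); the WaitForAsyncJob helper is a hypothetical polling routine written for this example, not part of the SDK.

```csharp
using System;
using System.Threading;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class ParseImportExample
{
    // Submits the job that parses all import files associated with the import (data import).
    public static Guid SubmitParse(IOrganizationService service, Guid importId)
    {
        var request = new ParseImportRequest { ImportId = importId };
        var response = (ParseImportResponse)service.Execute(request);

        // ID of the asynchronous job that performs the parsing.
        return response.AsyncOperationId;
    }

    // Hypothetical helper: polls the asyncoperation record until the job completes.
    public static void WaitForAsyncJob(IOrganizationService service, Guid asyncOperationId)
    {
        const int completedState = 3; // statecode: Completed

        while (true)
        {
            Entity job = service.Retrieve("asyncoperation", asyncOperationId,
                new ColumnSet("statecode", "statuscode"));

            int state = job.GetAttributeValue<OptionSetValue>("statecode").Value;
            if (state == completedState)
            {
                int status = job.GetAttributeValue<OptionSetValue>("statuscode").Value;
                Console.WriteLine($"Job finished with status code {status}."); // 30 = Succeeded
                return;
            }

            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}
```

A production implementation would also add a timeout and read the asyncoperation record's message column when the job does not succeed.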
Transform parsed data
During transformation, you change parsed data by applying all available data mappings and transformations that are associated with a particular import (data import) to the data.
Use the TransformImport message (TransformImportRequest Class or TransformImport Action) to submit an asynchronous job that transforms the parsed data. Pass the unique identifier of the associated import (data import) in the ImportId parameter of the request. The unique identifier of the asynchronous job that runs in the background and performs the transformation is returned in the AsyncOperationId property of the message response.
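The transformation step follows the same pattern as parsing. A minimal sketch, again assuming an IOrganizationService connection (service) and an existing import record ID (importId):

```csharp
using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

public static class TransformImportExample
{
    // Submits the asynchronous job that applies the data map to the parsed data.
    public static Guid SubmitTransform(IOrganizationService service, Guid importId)
    {
        var request = new TransformImportRequest { ImportId = importId };
        var response = (TransformImportResponse)service.Execute(request);

        // ID of the background job performing the transformation.
        return response.AsyncOperationId;
    }
}
```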
Upload transformed data to the target server
After you successfully complete the transformation, the data is ready to be uploaded into the Dataverse server.
Use the ImportRecordsImport
message using the ImportRecordsImportRequest Class or ImportRecordsImport Action to submit an asynchronous job to upload the transformed data into Dataverse. The unique identifier of the associated import (data import) must be specified in the ImportId
parameter of the request. A unique identifier of the asynchronous job that runs in the background and uploads the data into Dataverse is returned in the AsyncOperationId
property of the response. All import files that are associated with the specified import (data import) are imported.
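The upload step is submitted the same way. A minimal sketch under the same assumptions (an IOrganizationService connection and an existing import record ID):

```csharp
using System;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;

public static class ImportRecordsExample
{
    // Submits the asynchronous job that uploads the transformed data into Dataverse.
    public static Guid SubmitImportRecords(IOrganizationService service, Guid importId)
    {
        var request = new ImportRecordsImportRequest { ImportId = importId };
        var response = (ImportRecordsImportResponse)service.Execute(request);

        return response.AsyncOperationId;
    }
}
```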
Each import job has a unique sequence number that it stores in the ImportSequenceNumber column of the records it creates. The Organization.CurrentImportSequenceNumber column contains the unique sequence number of the last import job that ran in the system. You can use these unique sequence numbers to track records that belong to one import job.
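As an illustration of using the sequence number, the following sketch queries contact records created by a particular import job; the contact table and the columns selected are assumptions chosen for the example.

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class ImportSequenceExample
{
    // Retrieves contact records created by the import job with the given sequence number.
    public static EntityCollection GetImportedContacts(IOrganizationService service, int importSequenceNumber)
    {
        var query = new QueryExpression("contact")
        {
            ColumnSet = new ColumnSet("fullname", "importsequencenumber")
        };
        query.Criteria.AddCondition("importsequencenumber", ConditionOperator.Equal, importSequenceNumber);

        return service.RetrieveMultiple(query);
    }
}
```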
Log failures
A failure to import a record can occur during parsing, transformation, or uploading of data. The failure reasons and other detailed information about the records that failed to import are captured in the import log (ImportLog) table.
To find out how many records failed to import, retrieve the ImportFile.FailureCount column of the record. To verify how many records had partial failures during import, retrieve the ImportData.HasError column. If the HasError column is true, a partial failure occurred; if it is false, the record imported successfully.
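A sketch of how these columns might be read with the SDK, assuming importFileId identifies an import file record and that the standard logical names importfile, failurecount, importdata, importfileid, and haserror apply:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class ImportFailureExample
{
    // Reads the number of records that failed to import for a given import file.
    public static int GetFailureCount(IOrganizationService service, Guid importFileId)
    {
        Entity importFile = service.Retrieve("importfile", importFileId, new ColumnSet("failurecount"));
        return importFile.GetAttributeValue<int?>("failurecount") ?? 0;
    }

    // Retrieves the import data rows that had partial failures.
    public static EntityCollection GetPartiallyFailedRows(IOrganizationService service, Guid importFileId)
    {
        var query = new QueryExpression("importdata")
        {
            ColumnSet = new ColumnSet("haserror")
        };
        query.Criteria.AddCondition("importfileid", ConditionOperator.Equal, importFileId);
        query.Criteria.AddCondition("haserror", ConditionOperator.Equal, true);

        return service.RetrieveMultiple(query);
    }
}
```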
Import auditing data
The Dataverse tables have four default columns that are used to track the date and time when a record was created and last modified, and the person who created and modified it.
The createdon
column specifies the date and time that the record was created. To import data in the createdon
column, map the source column that contains this data to the overriddencreatedon
column. During import, the record's createdon
column is updated with the value that was mapped to the overriddencreatedon
column and the overriddencreatedon
column is set to the date and time that the data was imported. If no source value is mapped to the overriddencreatedon
column, the createdon
column is set to the date and time that the data was imported and the overriddencreatedon
column is not set to any value.
Note
To override the value in the createdon
column during import, you need the prvOverrideCreatedOnCreatedBy
privilege. Note that the privilege name implies that you can also override the createdby
column during import. However, this capability is not currently supported.
You cannot import data into the modifiedon
, createdby
, and modifiedby
columns. If you have to store data related to who created and modified the data and when the data was modified, you can create custom columns in Dataverse and map the source columns to the new custom columns.
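As an illustration of the mapping described above, the following sketch creates a column mapping that routes a source column to the overriddencreatedon column of the contact table. The source table and column names, the import map ID, and the process code value are assumptions for this example, and the running user needs the prvOverrideCreatedOnCreatedBy privilege for the override to take effect during import.

```csharp
using System;
using Microsoft.Xrm.Sdk;

public static class OverriddenCreatedOnMappingExample
{
    // Creates a column mapping that targets the overriddencreatedon column,
    // so the source date ends up in createdon after the import runs.
    public static Guid CreateMapping(IOrganizationService service, Guid importMapId)
    {
        var columnMapping = new Entity("columnmapping")
        {
            ["importmapid"] = new EntityReference("importmap", importMapId),
            ["sourceentityname"] = "Contact_1",          // source table name in the data map (assumed)
            ["sourceattributename"] = "CreatedDate",     // source column heading (assumed)
            ["targetentityname"] = "contact",
            ["targetattributename"] = "overriddencreatedon",
            ["processcode"] = new OptionSetValue(1)      // assumed to be the "Process" option
        };

        return service.Create(columnMapping);
    }
}
```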
See Also
Import data
Prepare source files for import
Create data maps for import
Add transformation mappings for import
Configure data import
Data import tables
Sample: Export and import a data map
Sample: Import data using complex data map
Blog Post: How to Import attachments programmatically