Move data from Teradata using Azure Data Factory
Note
This article applies to version 1 of Data Factory. If you are using the current version of the Data Factory service, see Teradata connector in V2.
This article explains how to use the Copy Activity in Azure Data Factory to move data from an on-premises Teradata database. It builds on the Data Movement Activities article, which presents a general overview of data movement with the copy activity.
You can copy data from an on-premises Teradata data store to any supported sink data store. For a list of data stores supported as sinks by the copy activity, see the Supported data stores table. Data Factory currently supports moving data only from a Teradata data store to other data stores, not from other data stores to a Teradata data store.
Prerequisites
Data Factory supports connecting to on-premises Teradata sources via the Data Management Gateway. See the Moving data between on-premises locations and cloud article to learn about Data Management Gateway and for step-by-step instructions on setting up the gateway.
A gateway is required even if the Teradata database is hosted in an Azure IaaS VM. You can install the gateway on the same IaaS VM as the data store, or on a different VM as long as the gateway can connect to the database.
Note
See Troubleshoot gateway issues for tips on troubleshooting connection/gateway related issues.
Supported versions and installation
For Data Management Gateway to connect to the Teradata Database, you need to install the .NET Data Provider for Teradata version 14 or above on the same system as the Data Management Gateway. Teradata version 12 and above is supported.
Getting started
You can create a pipeline with a copy activity that moves data from an on-premises Teradata data store by using different tools/APIs.
- The easiest way to create a pipeline is to use the Copy Wizard. See Tutorial: Create a pipeline using Copy Wizard for a quick walkthrough on creating a pipeline using the Copy data wizard.
- You can also use the following tools to create a pipeline: Visual Studio, Azure PowerShell, Azure Resource Manager template, .NET API, and REST API. See Copy activity tutorial for step-by-step instructions to create a pipeline with a copy activity.
Whether you use the tools or APIs, you perform the following steps to create a pipeline that moves data from a source data store to a sink data store:
- Create linked services to link input and output data stores to your data factory.
- Create datasets to represent input and output data for the copy operation.
- Create a pipeline with a copy activity that takes a dataset as an input and a dataset as an output.
When you use the wizard, JSON definitions for these Data Factory entities (linked services, datasets, and the pipeline) are automatically created for you. When you use tools/APIs (except .NET API), you define these Data Factory entities by using the JSON format. For a sample with JSON definitions for Data Factory entities that are used to copy data from an on-premises Teradata data store, see JSON example: Copy data from Teradata to Azure Blob section of this article.
The following sections provide details about JSON properties that are used to define Data Factory entities specific to a Teradata data store:
Linked service properties
The following table provides descriptions for the JSON elements that are specific to the Teradata linked service.
Property | Description | Required |
---|---|---|
type | The type property must be set to: OnPremisesTeradata | Yes |
server | Name of the Teradata server. | Yes |
authenticationType | Type of authentication used to connect to the Teradata database. Possible values are: Anonymous, Basic, and Windows. | Yes |
username | Specify user name if you are using Basic or Windows authentication. | No |
password | Specify password for the user account you specified for the username. | No |
gatewayName | Name of the gateway that the Data Factory service should use to connect to the on-premises Teradata database. | Yes |
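For reference, a Teradata linked service that uses Basic authentication might look like the following sketch. The server name, credentials, and gateway name are placeholders that you replace with your own values:

{
    "name": "TeradataLinkedService",
    "properties": {
        "type": "OnPremisesTeradata",
        "typeProperties": {
            "server": "<server name>",
            "authenticationType": "Basic",
            "username": "<user name>",
            "password": "<password>",
            "gatewayName": "<gateway name>"
        }
    }
}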
Dataset properties
For a full list of sections & properties available for defining datasets, see the Creating datasets article. Sections such as structure, availability, and policy of a dataset JSON are similar for all dataset types (Azure SQL, Azure blob, Azure table, etc.).
The typeProperties section is different for each type of dataset and provides information about the location of the data in the data store. Currently, there are no type properties supported for the Teradata dataset.
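Because no type properties are needed, a minimal Teradata dataset only declares the RelationalTable type, the linked service reference, and an empty typeProperties section, as in the following sketch. The availability settings shown here are illustrative:

{
    "name": "TeradataDataSet",
    "properties": {
        "type": "RelationalTable",
        "linkedServiceName": "OnPremTeradataLinkedService",
        "typeProperties": { },
        "external": true,
        "availability": {
            "frequency": "Hour",
            "interval": 1
        }
    }
}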
Copy activity properties
For a full list of sections & properties available for defining activities, see the Creating Pipelines article. Properties such as name, description, input and output tables, and policies are available for all types of activities.
In contrast, the properties available in the typeProperties section of the activity vary with each activity type. For the Copy activity, they vary depending on the types of sources and sinks.
When the source is of type RelationalSource (which includes Teradata), the following properties are available in typeProperties section:
Property | Description | Allowed values | Required |
---|---|---|---|
query | Use the custom query to read data. | SQL query string. For example: select * from MyTable. | Yes |
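For example, the source section of a copy activity that reads from Teradata with a custom query might look like the following sketch, where MyTable is a placeholder table name:

    "source": {
        "type": "RelationalSource",
        "query": "select * from MyTable"
    }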
JSON example: Copy data from Teradata to Azure Blob
The following example provides sample JSON definitions that you can use to create a pipeline by using Visual Studio or Azure PowerShell. They show how to copy data from Teradata to Azure Blob Storage. However, data can be copied to any of the sinks listed in the Supported data stores table by using the Copy Activity in Azure Data Factory.
The sample has the following data factory entities:
- A linked service of type OnPremisesTeradata.
- A linked service of type AzureStorage.
- An input dataset of type RelationalTable.
- An output dataset of type AzureBlob.
- The pipeline with Copy Activity that uses RelationalSource and BlobSink.
The sample copies data from a query result in a Teradata database to a blob every hour. The JSON properties used in these samples are described in the sections following the samples.
As a first step, set up the Data Management Gateway. See the Moving data between on-premises locations and cloud article for instructions.
Teradata linked service:
{
    "name": "OnPremTeradataLinkedService",
    "properties": {
        "type": "OnPremisesTeradata",
        "typeProperties": {
            "server": "<server>",
            "authenticationType": "<authentication type>",
            "username": "<username>",
            "password": "<password>",
            "gatewayName": "<gatewayName>"
        }
    }
}
Azure Blob storage linked service:
{
    "name": "AzureStorageLinkedService",
    "properties": {
        "type": "AzureStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<AccountName>;AccountKey=<AccountKey>"
        }
    }
}
Teradata input dataset:
The sample assumes you have created a table "MyTable" in Teradata and it contains a column called "timestamp" for time series data.
Setting "external": true informs the Data Factory service that the table is external to the data factory and is not produced by an activity in the data factory.
{
    "name": "TeradataDataSet",
    "properties": {
        "published": false,
        "type": "RelationalTable",
        "linkedServiceName": "OnPremTeradataLinkedService",
        "typeProperties": {
        },
        "availability": {
            "frequency": "Hour",
            "interval": 1
        },
        "external": true,
        "policy": {
            "externalData": {
                "retryInterval": "00:01:00",
                "retryTimeout": "00:10:00",
                "maximumRetry": 3
            }
        }
    }
}
Azure Blob output dataset:
Data is written to a new blob every hour (frequency: hour, interval: 1). The folder path for the blob is dynamically evaluated based on the start time of the slice that is being processed. The folder path uses the year, month, day, and hour parts of the start time.
{
    "name": "AzureBlobTeradataDataSet",
    "properties": {
        "published": false,
        "type": "AzureBlob",
        "linkedServiceName": "AzureStorageLinkedService",
        "typeProperties": {
            "folderPath": "mycontainer/teradata/yearno={Year}/monthno={Month}/dayno={Day}/hourno={Hour}",
            "format": {
                "type": "TextFormat",
                "rowDelimiter": "\n",
                "columnDelimiter": "\t"
            },
            "partitionedBy": [
                {
                    "name": "Year",
                    "value": {
                        "type": "DateTime",
                        "date": "SliceStart",
                        "format": "yyyy"
                    }
                },
                {
                    "name": "Month",
                    "value": {
                        "type": "DateTime",
                        "date": "SliceStart",
                        "format": "MM"
                    }
                },
                {
                    "name": "Day",
                    "value": {
                        "type": "DateTime",
                        "date": "SliceStart",
                        "format": "dd"
                    }
                },
                {
                    "name": "Hour",
                    "value": {
                        "type": "DateTime",
                        "date": "SliceStart",
                        "format": "HH"
                    }
                }
            ]
        },
        "availability": {
            "frequency": "Hour",
            "interval": 1
        }
    }
}
Pipeline with Copy activity:
The pipeline contains a Copy Activity that is configured to use the input and output datasets and is scheduled to run hourly. In the pipeline JSON definition, the source type is set to RelationalSource and the sink type is set to BlobSink. The SQL query specified for the query property selects the data from the past hour to copy.
{
    "name": "CopyTeradataToBlob",
    "properties": {
        "description": "pipeline for copy activity",
        "activities": [
            {
                "type": "Copy",
                "typeProperties": {
                    "source": {
                        "type": "RelationalSource",
                        "query": "$$Text.Format('select * from MyTable where timestamp >= \\'{0:yyyy-MM-ddTHH:mm:ss}\\' AND timestamp < \\'{1:yyyy-MM-ddTHH:mm:ss}\\'', SliceStart, SliceEnd)"
                    },
                    "sink": {
                        "type": "BlobSink",
                        "writeBatchSize": 0,
                        "writeBatchTimeout": "00:00:00"
                    }
                },
                "inputs": [
                    {
                        "name": "TeradataDataSet"
                    }
                ],
                "outputs": [
                    {
                        "name": "AzureBlobTeradataDataSet"
                    }
                ],
                "policy": {
                    "timeout": "01:00:00",
                    "concurrency": 1
                },
                "scheduler": {
                    "frequency": "Hour",
                    "interval": 1
                },
                "name": "TeradataToBlob"
            }
        ],
        "start": "2014-06-01T18:00:00Z",
        "end": "2014-06-01T19:00:00Z",
        "isPaused": false
    }
}
Type mapping for Teradata
As mentioned in the Data Movement Activities article, the Copy activity performs automatic type conversions from source types to sink types with the following two-step approach:
- Convert from native source types to .NET type
- Convert from .NET type to native sink type
When moving data from Teradata, the following mappings are used from Teradata types to .NET types.
Teradata Database type | .NET Framework type |
---|---|
Char | String |
Clob | String |
Graphic | String |
VarChar | String |
VarGraphic | String |
Blob | Byte[] |
Byte | Byte[] |
VarByte | Byte[] |
BigInt | Int64 |
ByteInt | Int16 |
Decimal | Decimal |
Double | Double |
Integer | Int32 |
Number | Double |
SmallInt | Int16 |
Date | DateTime |
Time | TimeSpan |
Time With Time Zone | String |
Timestamp | DateTime |
Timestamp With Time Zone | DateTimeOffset |
Interval Day | TimeSpan |
Interval Day To Hour | TimeSpan |
Interval Day To Minute | TimeSpan |
Interval Day To Second | TimeSpan |
Interval Hour | TimeSpan |
Interval Hour To Minute | TimeSpan |
Interval Hour To Second | TimeSpan |
Interval Minute | TimeSpan |
Interval Minute To Second | TimeSpan |
Interval Second | TimeSpan |
Interval Year | String |
Interval Year To Month | String |
Interval Month | String |
Period(Date) | String |
Period(Time) | String |
Period(Time With Time Zone) | String |
Period(Timestamp) | String |
Period(Timestamp With Time Zone) | String |
Xml | String |
Map source to sink columns
To learn about mapping columns in source dataset to columns in sink dataset, see Mapping dataset columns in Azure Data Factory.
Repeatable read from relational sources
When copying data from relational data stores, keep repeatability in mind to avoid unintended outcomes. In Azure Data Factory, you can rerun a slice manually. You can also configure a retry policy for a dataset so that a slice is rerun when a failure occurs. When a slice is rerun either way, you need to make sure that the same data is read no matter how many times the slice is run. See Repeatable read from relational sources.
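For example, a query bounded by the slice window, as in the following sketch, reads the same rows every time the slice is rerun, as long as rows in that window are not modified after they are first copied. MyTable and the timestamp column are placeholders:

    "source": {
        "type": "RelationalSource",
        "query": "$$Text.Format('select * from MyTable where timestamp >= \\'{0:yyyy-MM-ddTHH:mm:ss}\\' AND timestamp < \\'{1:yyyy-MM-ddTHH:mm:ss}\\'', SliceStart, SliceEnd)"
    }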
Performance and Tuning
See Copy Activity Performance & Tuning Guide to learn about key factors that impact performance of data movement (Copy Activity) in Azure Data Factory and various ways to optimize it.