Copy data from MySQL using Azure Data Factory or Synapse Analytics
APPLIES TO: Azure Data Factory | Azure Synapse Analytics
This article outlines how to use the copy activity in Azure Data Factory and Azure Synapse Analytics pipelines to copy data from a MySQL database. It builds on the copy activity overview article, which presents a general overview of the copy activity.
Note
To copy data from or to the Azure Database for MySQL service, use the specialized Azure Database for MySQL connector.
Supported capabilities
This MySQL connector is supported for the following capabilities:
Supported capabilities | IR |
---|---|
Copy activity (source/-) | ① ② |
Lookup activity | ① ② |
① Azure integration runtime ② Self-hosted integration runtime
For a list of data stores that are supported as sources/sinks by the copy activity, see the Supported data stores table.
Specifically, this MySQL connector supports MySQL versions 5.6, 5.7, and 8.0.
Prerequisites
If your data store is located inside an on-premises network, an Azure virtual network, or Amazon Virtual Private Cloud, you need to configure a self-hosted integration runtime to connect to it.
If your data store is a managed cloud data service, you can use the Azure Integration Runtime. If the access is restricted to IPs that are approved in the firewall rules, you can add Azure Integration Runtime IPs to the allow list.
You can also use the managed virtual network integration runtime feature in Azure Data Factory to access the on-premises network without installing and configuring a self-hosted integration runtime.
For more information about the network security mechanisms and options supported by Data Factory, see Data access strategies.
The integration runtime provides a built-in MySQL driver starting from version 3.7, so you don't need to manually install a driver.
Getting started
To perform the Copy activity with a pipeline, you can use one of the following tools or SDKs:
- The Copy Data tool
- The Azure portal
- The .NET SDK
- The Python SDK
- Azure PowerShell
- The REST API
- The Azure Resource Manager template
Create a linked service to MySQL using UI
Use the following steps to create a linked service to MySQL in the Azure portal UI.
1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace, select Linked services, and then select New.
2. Search for MySQL and select the MySQL connector.
3. Configure the service details, test the connection, and create the new linked service.
Connector configuration details
The following sections provide details about properties that are used to define Data Factory entities specific to the MySQL connector.
Linked service properties
The following properties are supported for the MySQL linked service:
Property | Description | Required |
---|---|---|
type | The type property must be set to: MySql | Yes |
connectionString | Specify the information needed to connect to the MySQL database instance. You can also put the password in Azure Key Vault and pull the password configuration out of the connection string. Refer to the following samples and the Store credentials in Azure Key Vault article for more details. | Yes |
connectVia | The integration runtime to be used to connect to the data store. Learn more from the Prerequisites section. If not specified, the default Azure Integration Runtime is used. | No |
A typical connection string is `Server=<server>;Port=<port>;Database=<database>;UID=<username>;PWD=<password>`. More properties you can set, depending on your case:
Property | Description | Options | Required |
---|---|---|---|
SSLMode | This option specifies whether the driver uses TLS encryption and verification when connecting to MySQL. For example, SSLMode=<0/1/2/3/4>. | DISABLED (0) / PREFERRED (1) (Default) / REQUIRED (2) / VERIFY_CA (3) / VERIFY_IDENTITY (4) | No |
SSLCert | The full path and name of a .pem file containing the SSL certificate used for proving the identity of the client. To specify a private key for encrypting this certificate before sending it to the server, use the SSLKey property. | | Yes, if using two-way SSL verification. |
SSLKey | The full path and name of a file containing the private key used for encrypting the client-side certificate during two-way SSL verification. | | Yes, if using two-way SSL verification. |
UseSystemTrustStore | This option specifies whether to use a CA certificate from the system trust store, or from a specified PEM file. For example, UseSystemTrustStore=<0/1>. | Enabled (1) / Disabled (0) (Default) | No |
Example:
{
"name": "MySQLLinkedService",
"properties": {
"type": "MySql",
"typeProperties": {
"connectionString": "Server=<server>;Port=<port>;Database=<database>;UID=<username>;PWD=<password>"
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
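If you want the driver to verify the server certificate over TLS, you can append the SSL-related properties from the preceding table to the connection string. The following is a minimal sketch that assumes VERIFY_CA mode (SSLMode=3) with the CA certificate taken from the system trust store (UseSystemTrustStore=1); all other values are placeholders to adjust for your environment:
{
    "name": "MySQLLinkedService",
    "properties": {
        "type": "MySql",
        "typeProperties": {
            "connectionString": "Server=<server>;Port=<port>;Database=<database>;UID=<username>;PWD=<password>;SSLMode=3;UseSystemTrustStore=1"
        },
        "connectVia": {
            "referenceName": "<name of Integration Runtime>",
            "type": "IntegrationRuntimeReference"
        }
    }
}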
Example: store password in Azure Key Vault
{
"name": "MySQLLinkedService",
"properties": {
"type": "MySql",
"typeProperties": {
"connectionString": "Server=<server>;Port=<port>;Database=<database>;UID=<username>;",
"password": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "<Azure Key Vault linked service name>",
"type": "LinkedServiceReference"
},
"secretName": "<secretName>"
}
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
If you were using a MySQL linked service with the following payload, it is still supported as-is, but we suggest that you use the new one going forward.
Previous payload:
{
"name": "MySQLLinkedService",
"properties": {
"type": "MySql",
"typeProperties": {
"server": "<server>",
"database": "<database>",
"username": "<username>",
"password": {
"type": "SecureString",
"value": "<password>"
}
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
Dataset properties
For a full list of sections and properties available for defining datasets, see the datasets article. This section provides a list of properties supported by the MySQL dataset.
To copy data from MySQL, the following properties are supported:
Property | Description | Required |
---|---|---|
type | The type property of the dataset must be set to: MySqlTable | Yes |
tableName | Name of the table in the MySQL database. | No (if "query" in activity source is specified) |
Example
{
"name": "MySQLDataset",
"properties":
{
"type": "MySqlTable",
"typeProperties": {},
"schema": [],
"linkedServiceName": {
"referenceName": "<MySQL linked service name>",
"type": "LinkedServiceReference"
}
}
}
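If you prefer to reference a specific table rather than supplying a query in the activity source, you can set the tableName property under typeProperties. A minimal sketch, with the table name as a placeholder:
{
    "name": "MySQLDataset",
    "properties": {
        "type": "MySqlTable",
        "typeProperties": {
            "tableName": "<table name>"
        },
        "schema": [],
        "linkedServiceName": {
            "referenceName": "<MySQL linked service name>",
            "type": "LinkedServiceReference"
        }
    }
}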
If you were using a RelationalTable typed dataset, it is still supported as-is, but we suggest that you use the new one going forward.
Copy activity properties
For a full list of sections and properties available for defining activities, see the Pipelines article. This section provides a list of properties supported by the MySQL source.
MySQL as source
To copy data from MySQL, the following properties are supported in the copy activity source section:
Property | Description | Required |
---|---|---|
type | The type property of the copy activity source must be set to: MySqlSource | Yes |
query | Use the custom SQL query to read data. For example: "SELECT * FROM MyTable". | No (if "tableName" in dataset is specified) |
Example:
"activities":[
{
"name": "CopyFromMySQL",
"type": "Copy",
"inputs": [
{
"referenceName": "<MySQL input dataset name>",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "<output dataset name>",
"type": "DatasetReference"
}
],
"typeProperties": {
"source": {
"type": "MySqlSource",
"query": "SELECT * FROM MyTable"
},
"sink": {
"type": "<sink type>"
}
}
}
]
If you were using a RelationalSource typed source, it is still supported as-is, but we suggest that you use the new one going forward.
Data type mapping for MySQL
When copying data from MySQL, the following mappings are used from MySQL data types to interim data types used by the service internally. See Schema and data type mappings to learn about how copy activity maps the source schema and data type to the sink.
MySQL data type | Interim service data type |
---|---|
bigint | Int64 |
bigint unsigned | Decimal |
bit(1) | Boolean |
bit(M), M>1 | Byte[] |
blob | Byte[] |
bool | Int16 |
char | String |
date | Datetime |
datetime | Datetime |
decimal | Decimal, String |
double | Double |
double precision | Double |
enum | String |
float | Single |
int | Int32 |
int unsigned | Int64 |
integer | Int32 |
integer unsigned | Int64 |
long varbinary | Byte[] |
long varchar | String |
longblob | Byte[] |
longtext | String |
mediumblob | Byte[] |
mediumint | Int32 |
mediumint unsigned | Int64 |
mediumtext | String |
numeric | Decimal |
real | Double |
set | String |
smallint | Int16 |
smallint unsigned | Int32 |
text | String |
time | TimeSpan |
timestamp | Datetime |
tinyblob | Byte[] |
tinyint | Int16 |
tinyint unsigned | Int16 |
tinytext | String |
varchar | String |
year | Int |
Lookup activity properties
To learn details about the properties, check Lookup activity.
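For illustration, a Lookup activity that runs a query against the MySQL dataset defined earlier could look like the following. This is a hedged sketch based on the general Lookup activity shape; the activity name, query, and dataset reference are placeholders:
{
    "name": "LookupFromMySQL",
    "type": "Lookup",
    "typeProperties": {
        "source": {
            "type": "MySqlSource",
            "query": "SELECT * FROM MyTable"
        },
        "dataset": {
            "referenceName": "<MySQL input dataset name>",
            "type": "DatasetReference"
        },
        "firstRowOnly": true
    }
}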
Next steps
For a list of data stores supported as sources and sinks by the copy activity, see supported data stores.