Migrate Data Import to Microsoft Fabric

Data Import (Preview) and Data Connections (Preview) are features in Azure Machine Learning that let you bring external data into your machine learning workflows. These features are scheduled for retirement on September 30, 2026. To continue importing external data, migrate to Microsoft Fabric.

In this article, you learn about the recommended migration paths from Data Import and Data Connections to Microsoft Fabric, and how to connect your Fabric data back to Azure Machine Learning by using datastores.

Deprecation timeline

Important

Data Import (Preview) and Data Connections (Preview) are scheduled for retirement on September 30, 2026. Plan your migration before this date to avoid disruption to your data workflows.

The following table summarizes what to expect during and after the migration period:

| Milestone | Details |
| --- | --- |
| Deprecation announced | March 31, 2026 |
| Feature retirement | September 30, 2026 |
| Existing data connections | Stop functioning after retirement. Scheduled refreshes no longer run. |
| Replacement | Microsoft Fabric |

After retirement, existing Data Import schedules and Data Connections stop functioning. Migrate to Fabric before the retirement date to maintain uninterrupted access to your external data.

Migration options

Microsoft Fabric supports more than 170 data source connectors. You can bring external data into Fabric by using one of the following options:

Choose a migration option

The best option depends on your data source, latency requirements, and whether you need to copy data or reference it in place.

| Option | Best for | Data movement | Latency | Supported sources |
| --- | --- | --- | --- | --- |
| Fabric Pipelines | Scheduled batch ETL from any source | Copies data to OneLake | Depends on schedule | 170+ connectors |
| Snowflake mirroring | Near real-time access to Snowflake | Mirrors data in OneLake | Near real-time | Snowflake only |
| OneLake shortcuts | Referencing data without copying | No data movement | Direct access | Amazon S3, Azure Data Lake Storage Gen2 |

Use the following guidance to select your migration path:

  • Choose Fabric Pipelines when you need scheduled batch transfers from any of the 170+ supported sources, or when you need to transform data before it reaches OneLake.
  • Choose Snowflake mirroring when you need near real-time access to Snowflake data and want to avoid managing pipeline schedules.
  • Choose OneLake shortcuts when you want to reference data in Amazon S3 or Azure storage without copying it, and your tools can read from OneLake directly.
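As an illustration of the shortcut path, the following sketch builds a Create Shortcut request for the Fabric REST API. The endpoint shape and the `amazonS3` target payload reflect the Fabric shortcuts API as the author understands it, and the workspace GUID, lakehouse item GUID, bucket URL, and connection ID are all placeholders you'd replace with your own values:

```python
# Sketch: creating an Amazon S3 shortcut in a Fabric lakehouse through the
# Fabric REST API. Endpoint shape is an assumption based on the public
# shortcuts API; all GUIDs and the bucket URL below are placeholders.

def build_s3_shortcut_request(workspace_id: str, item_id: str, name: str,
                              bucket_url: str, subpath: str, connection_id: str):
    """Build the URL and JSON body for a Create Shortcut call."""
    url = (
        "https://api.fabric.microsoft.com/v1/"
        f"workspaces/{workspace_id}/items/{item_id}/shortcuts"
    )
    body = {
        "path": "Files",  # create the shortcut under the lakehouse Files section
        "name": name,
        "target": {
            "amazonS3": {
                "location": bucket_url,
                "subpath": subpath,
                "connectionId": connection_id,
            }
        },
    }
    return url, body

url, body = build_s3_shortcut_request(
    workspace_id="<fabric-workspace-guid>",
    item_id="<lakehouse-item-guid>",
    name="raw_training_data",
    bucket_url="https://my-bucket.s3.us-east-1.amazonaws.com",
    subpath="/training",
    connection_id="<s3-connection-guid>",
)
# You would then POST `body` to `url` with an
# "Authorization: Bearer <token>" header, for example with requests.post().
print(url)
```

Because the shortcut only references the S3 data, deleting it later removes the reference, not the underlying objects.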

Connect Fabric data to Azure Machine Learning

After your data is in Fabric, connect it to Azure Machine Learning by using one of these options:

  • OneLake datastore — Create an Azure Machine Learning OneLake datastore to reference data directly in Fabric. This option avoids an extra copy step and keeps your data in one location. For more information, see Create a OneLake datastore.

  • Copy to Azure storage — Create a Fabric pipeline to copy data to Azure Blob Storage or Azure Data Lake Storage Gen2, then create the corresponding Azure Machine Learning datastore to reference the copied data. This option is useful when your downstream tools require data in Azure storage. For more information, see Create datastores.
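For the OneLake datastore option, a minimal definition might look like the following Azure Machine Learning CLI v2 YAML sketch, created with `az ml datastore create --file <your-file>.yml`. The Fabric workspace GUID, lakehouse name, and endpoint host are placeholders; check the datastore YAML reference for the exact schema supported by your CLI version:

```yaml
# Sketch of a OneLake datastore definition for the Azure ML CLI v2.
# Workspace GUID, lakehouse name, and endpoint are placeholders.
type: one_lake
name: onelake_fabric_data
description: Datastore that references a Fabric lakehouse in OneLake.
one_lake_workspace_name: <fabric-workspace-guid>
endpoint: onelake.dfs.fabric.microsoft.com
artifact:
  type: lake_house
  name: <lakehouse-name>.Lakehouse
```

After the datastore exists, jobs and data assets can reference Fabric files with `azureml://datastores/onelake_fabric_data/paths/...` URIs, the same way they would for a blob or Data Lake datastore.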