SqlPackage for Azure Synapse Analytics

This article covers SqlPackage support for Azure Synapse Analytics. It includes information on the following topics:

  • integration with Azure Blob Storage for accessing data in parquet files
  • support for serverless SQL pools

Neither dedicated nor serverless SQL pools support the SqlPackage import/export actions or .bacpac files. SqlPackage supports Azure Synapse Analytics with .dacpac files and can read and write data in parquet format files in Azure Blob Storage. To import or export data from a dedicated SQL pool, use the publish or extract actions with data as detailed below.

Extract (export data)

To export data from an Azure Synapse Analytics database to Azure Blob Storage, the SqlPackage extract action is used with the following properties:

  • /p:AzureStorageBlobEndpoint
  • /p:AzureStorageContainer
  • /p:AzureStorageKey

Database access to the Blob Storage container is authorized through a storage account key. The database schema (.dacpac file) is written to the local client running SqlPackage, and the data is written to Azure Blob Storage in parquet format.

An additional optional property sets the storage root path within the container:

  • /p:AzureStorageRootPath

Without this property, the path defaults to servername/databasename/timestamp/. Data is stored in individual folders named with 2-part table names.

Example

The following example extracts a database named databasename from a server named yourserver.sql.azuresynapse.net to a local file named databaseschema.dacpac in the current directory. The data is written to a container named containername in a storage account named storageaccount using a storage account key named storageaccountkey. The data is written to the default path of servername/databasename/timestamp/ in the container.

SqlPackage /Action:Extract /SourceServerName:yourserver.sql.azuresynapse.net /SourceDatabaseName:databasename /SourceUser:sqladmin /SourcePassword:{your_password} /TargetFile:databaseschema.dacpac /p:AzureStorageBlobEndpoint=https://storageaccount.blob.core.windows.net /p:AzureStorageContainer=containername /p:AzureStorageKey=storageaccountkey
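
As a sketch of the optional root path property, the same extract can write to an explicit location in the container instead of the default path; the exports/databasename/ value below is a hypothetical path chosen for illustration, not a name required by SqlPackage.

SqlPackage /Action:Extract /SourceServerName:yourserver.sql.azuresynapse.net /SourceDatabaseName:databasename /SourceUser:sqladmin /SourcePassword:{your_password} /TargetFile:databaseschema.dacpac /p:AzureStorageBlobEndpoint=https://storageaccount.blob.core.windows.net /p:AzureStorageContainer=containername /p:AzureStorageKey=storageaccountkey /p:AzureStorageRootPath="exports/databasename/"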

For examples of the available authentication types, see SqlPackage Extract.

Publish (import data)

To import data from parquet files in Azure Blob Storage to an Azure Synapse Analytics database, the SqlPackage publish action is used with the following properties:

  • /p:AzureStorageBlobEndpoint
  • /p:AzureStorageContainer
  • /p:AzureStorageRootPath
  • /p:AzureStorageKey or /p:AzureSharedAccessSignatureToken

Access for publish can be authorized through either a storage account key or a shared access signature (SAS) token. The database schema (.dacpac file) is read from the local client running SqlPackage, and the data is read from Azure Blob Storage in parquet format.

Example

The following example publishes a database named databasename to a server named yourserver.sql.azuresynapse.net from a local file named databaseschema.dacpac in the current directory. The data is read from a container named containername in a storage account named storageaccount using a storage account key named storageaccountkey. The data is read from individual per-table folders under the path yourserver.sql.azuresynapse.net/databasename/6-12-2022_8-09-56_AM/ in the container.

SqlPackage /Action:Publish /SourceFile:databaseschema.dacpac /TargetServerName:yourserver.sql.azuresynapse.net /TargetDatabaseName:databasename /TargetUser:sqladmin /TargetPassword:{your_password} /p:AzureStorageBlobEndpoint=https://storageaccount.blob.core.windows.net /p:AzureStorageContainer=containername /p:AzureStorageKey=storageaccountkey /p:AzureStorageRootPath="yourserver.sql.azuresynapse.net/databasename/6-12-2022_8-09-56_AM/"
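
If you authorize with a shared access signature instead of a storage account key, the same publish might look like the following sketch; {sas_token} is a placeholder for your SAS token value.

SqlPackage /Action:Publish /SourceFile:databaseschema.dacpac /TargetServerName:yourserver.sql.azuresynapse.net /TargetDatabaseName:databasename /TargetUser:sqladmin /TargetPassword:{your_password} /p:AzureStorageBlobEndpoint=https://storageaccount.blob.core.windows.net /p:AzureStorageContainer=containername /p:AzureSharedAccessSignatureToken={sas_token} /p:AzureStorageRootPath="yourserver.sql.azuresynapse.net/databasename/6-12-2022_8-09-56_AM/"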

For examples of the available authentication types, see SqlPackage Publish.

Support for serverless SQL pools

Synapse serverless SQL pools are supported for both the extract and publish actions. The T-SQL surface area of serverless SQL pools is limited by design to external tables, external views, stored procedures, functions, statistics, and credentials. The following features are included in support for serverless SQL pools:

  • Delta external file format
  • OPENROWSET provider CosmosDB with PROVIDER, CONNECTION, OBJECT, and CREDENTIAL or SERVER_CREDENTIAL specified
  • WITH () clause on OPENROWSET
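
As a minimal sketch of running SqlPackage against a serverless SQL pool, assuming the typical serverless endpoint name format yourserver-ondemand.sql.azuresynapse.net (substitute your workspace's actual serverless endpoint), a schema-only extract might look like:

SqlPackage /Action:Extract /SourceServerName:yourserver-ondemand.sql.azuresynapse.net /SourceDatabaseName:databasename /SourceUser:sqladmin /SourcePassword:{your_password} /TargetFile:serverlessschema.dacpac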

Next steps