Analyze data with a serverless SQL pool

In this tutorial, you'll learn how to analyze data with a serverless SQL pool.

The Built-in serverless SQL pool

Serverless SQL pools let you use SQL without having to reserve capacity. Billing for a serverless SQL pool is based on the amount of data processed to run the query, not on the number of nodes used to run it.
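
Because billing follows the amount of data processed, it's worth keeping an eye on usage while you experiment. A minimal sketch of a usage check, assuming the sys.dm_external_data_processed view is available in your serverless SQL pool (verify the name and columns in your environment):

    -- Reports aggregate data processed by the pool (view name assumed; verify first).
    SELECT *
    FROM sys.dm_external_data_processed;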

Every workspace comes with a preconfigured serverless SQL pool called Built-in.

Analyze NYC Taxi data with a serverless SQL pool

  1. In Synapse Studio, go to the Develop hub.

  2. Create a new SQL script.

  3. Paste the following code into the script. (Replace contosolake with the name of your storage account and users with the name of your container.)

    SELECT
        TOP 100 *
    FROM
        OPENROWSET(
            BULK 'https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet',
            FORMAT='PARQUET'
        ) AS [result]
    
  4. Select Run.

Data exploration is a simplified scenario that helps you understand the basic characteristics of your data. Learn more about data exploration and analysis in this tutorial.
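
For example, two quick checks during exploration are a row count and a look at the schema that serverless SQL infers from the file. A minimal sketch, reusing the contosolake account and users container from the query above:

    -- Count the rows in the sample Parquet file.
    SELECT COUNT(*) AS row_count
    FROM OPENROWSET(
            BULK 'https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet',
            FORMAT='PARQUET'
        ) AS [result];

    -- Inspect the inferred column names and types.
    EXEC sp_describe_first_result_set N'
        SELECT TOP 0 *
        FROM OPENROWSET(
                BULK ''https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet'',
                FORMAT=''PARQUET''
            ) AS [result]';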

Create a data exploration database

You can browse the content of the files directly via the master database. For simple data exploration scenarios, you don't need to create a separate database. However, as you continue data exploration, you might want to create some utility objects, such as:

  • External data sources that represent named references to storage accounts.
  • Database scoped credentials that specify how to authenticate to external data sources.
  • Database users with the permissions to access some data sources or database objects.
  • Utility views, procedures, and functions that you can use in the queries.
  1. Use the master database to create a separate database for custom database objects. Custom database objects can't be created in the master database.

    CREATE DATABASE DataExplorationDB 
                    COLLATE Latin1_General_100_BIN2_UTF8
    

    Important

    Use a collation with the _UTF8 suffix to ensure that UTF-8 text is properly converted to VARCHAR columns. Latin1_General_100_BIN2_UTF8 provides the best performance in queries that read data from Parquet files and Azure Cosmos DB containers. For more information on changing collations, see Collation types supported for Synapse SQL.
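
    One way to confirm the collation after the database is created is the standard DATABASEPROPERTYEX function:

    -- Should return Latin1_General_100_BIN2_UTF8 for the database created above.
    SELECT DATABASEPROPERTYEX('DataExplorationDB', 'Collation') AS collation_name;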

  2. Switch the database context from master to DataExplorationDB using the following command. You can also use the Use database drop-down control in the UI to switch your current database:

    USE DataExplorationDB
    
  3. From DataExplorationDB, create utility objects such as credentials and data sources.

    CREATE EXTERNAL DATA SOURCE ContosoLake
    WITH ( LOCATION = 'https://contosolake.dfs.core.windows.net')
    

    Note

    An external data source can be created without a credential. If a credential does not exist, the caller's identity will be used to access the external data source.
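
    If you don't want to rely on the caller's identity, you can create a database scoped credential and reference it from the data source. The following is a sketch, assuming the workspace managed identity has read access to the storage account; the credential and data source names here are illustrative:

    -- A master key must exist before a database scoped credential can be created.
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'My Very Strong Password 1234!';
    GO
    -- Credential that instructs the pool to authenticate with its managed identity.
    CREATE DATABASE SCOPED CREDENTIAL WorkspaceIdentity
    WITH IDENTITY = 'Managed Identity';
    GO
    -- Data source that uses the credential instead of the caller's identity.
    CREATE EXTERNAL DATA SOURCE ContosoLakeManaged
    WITH (
        LOCATION = 'https://contosolake.dfs.core.windows.net',
        CREDENTIAL = WorkspaceIdentity
    );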

  4. Optionally, in the newly created DataExplorationDB database, create a login for a user who will access external data:

    CREATE LOGIN data_explorer WITH PASSWORD = 'My Very Strong Password 1234!';
    

    Next, create a database user in DataExplorationDB for that login, and grant it the ADMINISTER DATABASE BULK OPERATIONS permission.

    CREATE USER data_explorer FOR LOGIN data_explorer;
    GO
    GRANT ADMINISTER DATABASE BULK OPERATIONS TO data_explorer;
    GO
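
    To verify the grant, you can optionally impersonate the new user and run a sample query; this is a sketch, assuming EXECUTE AS USER is permitted in your pool:

    -- Impersonate data_explorer, run a bulk read, then revert to your own identity.
    EXECUTE AS USER = 'data_explorer';
    SELECT TOP 10 *
    FROM OPENROWSET(
            BULK 'https://contosolake.dfs.core.windows.net/users/NYCTripSmall.parquet',
            FORMAT='PARQUET'
        ) AS [result];
    REVERT;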
    
  5. Explore the content of the file using the relative path and the data source:

    SELECT
        TOP 100 *
    FROM
        OPENROWSET(
                BULK '/users/NYCTripSmall.parquet',
                DATA_SOURCE = 'ContosoLake',
                FORMAT='PARQUET'
        ) AS [result]
    
  6. Publish your changes to the workspace.

The data exploration database is just a simple placeholder where you can store your utility objects. The Synapse SQL pool enables you to go much further and create a Logical Data Warehouse: a relational layer built on top of Azure data sources. Learn more about building a logical data warehouse in this tutorial.
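
A first step toward that pattern is to wrap an OPENROWSET query in a view so that downstream tools can query the file like a table. A minimal sketch in DataExplorationDB, reusing the ContosoLake data source created earlier (the view name is illustrative):

    -- A view over the Parquet file; clients query it like a regular table.
    CREATE VIEW dbo.NycTripSmall AS
    SELECT *
    FROM OPENROWSET(
            BULK '/users/NYCTripSmall.parquet',
            DATA_SOURCE = 'ContosoLake',
            FORMAT='PARQUET'
        ) AS [result];
    GO
    -- Example usage:
    SELECT TOP 100 * FROM dbo.NycTripSmall;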

Next steps