Tutorial: Load sample data into a SQL Server big data cluster
Applies to: SQL Server 2019 (15.x)
Important
The Microsoft SQL Server 2019 Big Data Clusters add-on will be retired. Support for SQL Server 2019 Big Data Clusters will end on February 28, 2025. All existing users of SQL Server 2019 with Software Assurance will be fully supported on the platform and the software will continue to be maintained through SQL Server cumulative updates until that time. For more information, see the announcement blog post and Big data options on the Microsoft SQL Server platform.
This tutorial explains how to use a script to load sample data into a SQL Server 2019 big data cluster. Many of the other tutorials in the documentation use this sample data.
Tip
You can find additional samples for SQL Server 2019 Big Data Clusters in the sql-server-samples GitHub repository. They are located in the sql-server-samples/samples/features/sql-big-data-cluster/ path.
Prerequisites
- A deployed big data cluster
- Big data tools
- azdata
- kubectl
- sqlcmd
- curl
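Before downloading anything, it can help to confirm the prerequisite tools are actually on your PATH. The following is a convenience sketch, not part of the official tutorial; the `tools_present` helper is hypothetical.

```shell
# Hypothetical pre-flight helper: returns 0 only if every named tool
# can be found on the PATH.
tools_present() {
    for tool in "$@"; do
        command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool" >&2; return 1; }
    done
    return 0
}

# Check the tools this tutorial relies on:
tools_present azdata kubectl sqlcmd curl && echo "all prerequisites found"
```

If any tool is reported missing, install it before continuing with the steps below.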
Load sample data
The following steps use a bootstrap script to download a SQL Server database backup and load the data into your big data cluster. For ease of use, these steps have been broken out into Windows and Linux sections. If you want to use basic username/password authentication, set the AZDATA_USERNAME and AZDATA_PASSWORD environment variables before running the script. Otherwise, the script uses integrated authentication to connect to the SQL Server master instance and the Knox gateway. Integrated authentication also requires that you specify DNS names, not IP addresses, for the endpoints.
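For basic authentication, the environment variables can be set like this (bash syntax; on a Windows command prompt use `set` instead of `export`). The values shown are placeholders, not real defaults.

```shell
# Placeholder credentials for basic username/password authentication.
# Replace with the controller username and password for your cluster.
export AZDATA_USERNAME="admin"
export AZDATA_PASSWORD="<your-sql-password>"

echo "Using basic authentication as $AZDATA_USERNAME"
```

If these variables are not set, the bootstrap script falls back to integrated authentication as described above.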
Windows
The following steps describe how to use a Windows client to load the sample data into your big data cluster.
Open a new Windows command prompt.
Important
Do not use Windows PowerShell for these steps. In PowerShell, the script fails because curl is aliased to the PowerShell Invoke-WebRequest cmdlet instead of the curl executable.
Use curl to download the bootstrap script for the sample data.
curl -o bootstrap-sample-db.cmd "https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/features/sql-big-data-cluster/bootstrap-sample-db.cmd"
Download the bootstrap-sample-db.sql Transact-SQL script. This script is called by the bootstrap script.
curl -o bootstrap-sample-db.sql "https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/features/sql-big-data-cluster/bootstrap-sample-db.sql"
The bootstrap script requires the following positional parameters for your big data cluster:
Parameter               Description
<CLUSTER_NAMESPACE>     The name you gave your big data cluster.
<SQL_MASTER_ENDPOINT>   The DNS name or IP address of your master instance.
<KNOX_ENDPOINT>         The DNS name or IP address of the HDFS/Spark gateway.

Tip
Use kubectl to find the IP addresses for the SQL Server master instance and Knox. Run
kubectl get svc -n <your-big-data-cluster-name>
and look at the EXTERNAL-IP addresses for the master instance (master-svc-external) and Knox (gateway-svc-external). The default name of a cluster is mssql-cluster.

Run the bootstrap script.
.\bootstrap-sample-db.cmd <CLUSTER_NAMESPACE> <SQL_MASTER_ENDPOINT> <KNOX_ENDPOINT>
Linux
The following steps describe how to use a Linux client to load the sample data into your big data cluster.
Download the bootstrap script, and assign executable permissions to it.
curl -o bootstrap-sample-db.sh "https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/features/sql-big-data-cluster/bootstrap-sample-db.sh"
chmod +x bootstrap-sample-db.sh
Download the bootstrap-sample-db.sql Transact-SQL script. This script is called by the bootstrap script.
curl -o bootstrap-sample-db.sql "https://raw.githubusercontent.com/Microsoft/sql-server-samples/master/samples/features/sql-big-data-cluster/bootstrap-sample-db.sql"
The bootstrap script requires the following positional parameters for your big data cluster:
Parameter               Description
<CLUSTER_NAMESPACE>     The name you gave your big data cluster.
<SQL_MASTER_ENDPOINT>   The DNS name or IP address of your master instance.
<KNOX_ENDPOINT>         The DNS name or IP address of the HDFS/Spark gateway.

Tip
Use kubectl to find the IP addresses for the SQL Server master instance and Knox. Run
kubectl get svc -n <your-big-data-cluster-name>
and look at the EXTERNAL-IP addresses for the master instance (master-svc-external) and Knox (gateway-svc-external). The default name of a cluster is mssql-cluster.

Run the bootstrap script.
./bootstrap-sample-db.sh <CLUSTER_NAMESPACE> <SQL_MASTER_ENDPOINT> <KNOX_ENDPOINT>
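Because the parameters are positional, a missing argument produces a confusing failure partway through the script. The sketch below shows a hypothetical wrapper check you could run first; the `check_bootstrap_args` helper and the example endpoint addresses are illustrative, not part of the sample repository.

```shell
# Hypothetical guard: verify all three positional parameters are supplied,
# in the order the bootstrap script expects.
check_bootstrap_args() {
    if [ "$#" -ne 3 ]; then
        echo "usage: ./bootstrap-sample-db.sh <CLUSTER_NAMESPACE> <SQL_MASTER_ENDPOINT> <KNOX_ENDPOINT>" >&2
        return 1
    fi
    return 0
}

# Example call with the default namespace and made-up endpoint addresses:
check_bootstrap_args mssql-cluster 203.0.113.10 203.0.113.11 && echo "arguments look complete"
```

With real values in place of the placeholders, the same three arguments are passed straight through to bootstrap-sample-db.sh.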
Next steps
After the bootstrap script runs, your big data cluster has the sample databases and HDFS data. The following tutorials use the sample data to demonstrate big data cluster capabilities:
Data Virtualization:
- Tutorial: Query HDFS in a SQL Server big data cluster
- Tutorial: Query Oracle from a SQL Server big data cluster
Data ingestion:
- Tutorial: Ingest data into a SQL Server data pool with Transact-SQL
- Tutorial: Ingest data into a SQL Server data pool with Spark jobs
Notebooks: