Perform an offline deployment of a SQL Server big data cluster


The Microsoft SQL Server 2019 Big Data Clusters add-on will be retired. Support for SQL Server 2019 Big Data Clusters will end on February 28, 2025. All existing users of SQL Server 2019 with Software Assurance will be fully supported on the platform and the software will continue to be maintained through SQL Server cumulative updates until that time. For more information, see the announcement blog post and Big data options on the Microsoft SQL Server platform.

This article describes how to perform an offline deployment of SQL Server 2019 Big Data Clusters. Big data clusters must have access to a Docker repository from which to pull container images. An offline installation is one in which the required images are placed into a private Docker repository, and that private repository is then used as the image source for a new deployment.



The imagePullPolicy parameter must be set to "Always" in the control.json deployment profile file.
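For reference, the Docker-related section of a control.json deployment profile has the following shape. The key names match the $.spec.docker.* paths used by the azdata config commands later in this article; the registry, repository, and tag values in this sketch are illustrative only.

```json
{
  "spec": {
    "docker": {
      "registry": "registry.contoso.local",
      "repository": "mssql/bdc",
      "imageTag": "2019-CU12-ubuntu-20.04",
      "imagePullPolicy": "Always"
    }
  }
}
```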

Load images into a private repository

The following steps describe how to pull the big data cluster container images from the Microsoft repository and then push them into your private repository.


The following steps explain the process. To simplify the task, however, you can use the automated script instead of running these commands manually.

  1. Pull each of the big data cluster container images from the Microsoft repository with the docker pull command. Replace <SOURCE_IMAGE_NAME> with each image name. Replace <SOURCE_DOCKER_TAG> with the tag for the big data cluster release, such as 2019-CU12-ubuntu-20.04.

  2. Log in to the target private Docker registry with the docker login command.

  3. Tag each local image for your private registry with the docker tag command.

  4. Push each tagged image to the private Docker repository with the docker push command.
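Taken together, the four steps map to the docker command sequence below for a single image. This is a dry-run sketch: the source registry path and release tag follow the conventions in this article, while the target registry, repository, and user are hypothetical placeholders. Change DOCKER to docker to execute the commands for real.

```shell
#!/usr/bin/env bash
# Dry-run sketch of steps 1-4 for one image. The target registry,
# repository, and user are hypothetical; substitute your own values.
set -euo pipefail

SOURCE_REGISTRY="mcr.microsoft.com/mssql/bdc"   # Microsoft repository path (assumption)
SOURCE_TAG="2019-CU12-ubuntu-20.04"             # tag for the big data cluster release
TARGET_REGISTRY="registry.contoso.local"        # hypothetical private registry
TARGET_REPOSITORY="mssql/bdc"
IMAGE="mssql-controller"                        # repeat for each required image

DOCKER="echo docker"   # the echo keeps this a dry run; set DOCKER=docker to execute

$DOCKER pull "$SOURCE_REGISTRY/$IMAGE:$SOURCE_TAG"                      # step 1: pull
$DOCKER login "$TARGET_REGISTRY" -u "someuser"                          # step 2: log in
$DOCKER tag "$SOURCE_REGISTRY/$IMAGE:$SOURCE_TAG" \
  "$TARGET_REGISTRY/$TARGET_REPOSITORY/$IMAGE:$SOURCE_TAG"              # step 3: tag
$DOCKER push "$TARGET_REGISTRY/$TARGET_REPOSITORY/$IMAGE:$SOURCE_TAG"   # step 4: push
```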



Do not modify the big data cluster images once they are pushed into your private repository. Performing a deployment with modified images will result in an unsupported big data cluster setup.

Big data cluster container images

The following big data cluster container images are required for an offline installation:

  • mssql-app-service-proxy
  • mssql-control-watchdog
  • mssql-controller
  • mssql-dns
  • mssql-hadoop
  • mssql-mleap-serving-runtime
  • mssql-mlserver-py-runtime
  • mssql-mlserver-r-runtime
  • mssql-monitor-collectd
  • mssql-monitor-elasticsearch
  • mssql-monitor-fluentbit
  • mssql-monitor-grafana
  • mssql-monitor-influxdb
  • mssql-monitor-kibana
  • mssql-monitor-telegraf
  • mssql-security-knox
  • mssql-security-support
  • mssql-server-controller
  • mssql-server-data
  • mssql-ha-operator
  • mssql-ha-supervisor
  • mssql-service-proxy
  • mssql-ssis-app-runtime
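The whole list can be mirrored in one loop, which is essentially a minimal version of what the automated script in the next section does. This dry-run sketch assumes you have already run docker login against your private registry; the target registry path is a hypothetical placeholder, and changing DOCKER to docker executes the commands for real.

```shell
#!/usr/bin/env bash
# Dry-run sketch: pull, tag, and push every required image.
# TARGET is a hypothetical private registry path; replace with your own.
set -euo pipefail

SOURCE="mcr.microsoft.com/mssql/bdc"        # Microsoft repository path (assumption)
TARGET="registry.contoso.local/mssql/bdc"   # hypothetical private registry path
TAG="2019-CU12-ubuntu-20.04"
DOCKER="echo docker"                        # set DOCKER=docker to execute for real

IMAGES="mssql-app-service-proxy mssql-control-watchdog mssql-controller \
mssql-dns mssql-hadoop mssql-mleap-serving-runtime mssql-mlserver-py-runtime \
mssql-mlserver-r-runtime mssql-monitor-collectd mssql-monitor-elasticsearch \
mssql-monitor-fluentbit mssql-monitor-grafana mssql-monitor-influxdb \
mssql-monitor-kibana mssql-monitor-telegraf mssql-security-knox \
mssql-security-support mssql-server-controller mssql-server-data \
mssql-ha-operator mssql-ha-supervisor mssql-service-proxy mssql-ssis-app-runtime"

for image in $IMAGES; do
  $DOCKER pull "$SOURCE/$image:$TAG"
  $DOCKER tag  "$SOURCE/$image:$TAG" "$TARGET/$image:$TAG"
  $DOCKER push "$TARGET/$image:$TAG"
done
```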

Automated script

You can use an automated Python script that pulls all required container images and pushes them into a private repository.


Python is a prerequisite for using the script. For more information about how to install Python, see the Python documentation.

  1. From bash or PowerShell, download the script with curl:

    curl -o ""
  2. Then run the script with one of the following commands:




    sudo python
  3. Follow the prompts for entering the Microsoft repository and your private repository information. After the script completes, all required images should be located in your private repository.

  4. Follow the instructions in the Deploy from private repository section later in this article to customize the control.json deployment configuration file to use your container registry and repository. Note that you must set the DOCKER_USERNAME and DOCKER_PASSWORD environment variables before deployment to enable access to your private repository.

Install tools offline

Big data cluster deployments require several tools, including Python, Azure Data CLI (azdata), and kubectl. Use the following steps to install these tools on an offline server.

Install Python offline

  1. On a machine with internet access, download the compressed file that contains Python for your operating system.
  2. Copy the compressed file to the target machine and extract it to a folder of your choice.

  3. For Windows only, run installLocalPythonPackages.bat from that folder and pass the full path to the same folder as a parameter.

    installLocalPythonPackages.bat "C:\python-3.6.6-win-x64-0.0.1-offline\0.0.1"

Install azdata offline

  1. On a machine with internet access and Python, run the following command to download all of the Azure Data CLI (azdata) packages to the current folder.

    pip download -r
  2. Copy the downloaded packages and the requirements.txt file to the target machine.

  3. Run the following command on the target machine, specifying the folder that you copied the previous files into.

    pip install --no-index --find-links <path-to-packages> -r <path-to-requirements.txt>
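The two-machine flow above can be sketched end to end as follows. The staging folder name is a hypothetical placeholder, requirements.txt stands for the requirements file referenced in the steps, and the echo prefix keeps this a dry run.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the offline azdata install. PKG_DIR is a
# hypothetical staging folder; remove the echo prefix to execute.
set -euo pipefail
PKG_DIR="./azdata-packages"

# On the machine with internet access: download all packages into PKG_DIR.
echo pip download -r requirements.txt -d "$PKG_DIR"

# After copying PKG_DIR and requirements.txt to the offline machine:
echo pip install --no-index --find-links "$PKG_DIR" -r requirements.txt
```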

Install kubectl offline

To install kubectl on an offline machine, use the following steps.

  1. Use curl to download kubectl to a folder of your choice. For more information, see Install kubectl binary using curl.

  2. Copy the folder to the target machine.
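The two steps can be sketched as follows. The release version is an example (choose one compatible with your Kubernetes cluster), the download URL follows the pattern from the kubectl documentation, and the echo prefix keeps this a dry run.

```shell
#!/usr/bin/env bash
# Dry-run sketch of an offline kubectl install. KUBECTL_VERSION is an
# example value; remove the echo prefix to execute.
set -euo pipefail
KUBECTL_VERSION="v1.16.3"   # example release; pick one matching your cluster

# On the machine with internet access:
echo curl -LO "https://dl.k8s.io/release/$KUBECTL_VERSION/bin/linux/amd64/kubectl"

# After copying the folder to the target machine:
echo chmod +x ./kubectl
echo sudo install ./kubectl /usr/local/bin/kubectl
```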

Deploy from private repository

To deploy from the private repository, use the steps described in the deployment guide, but use a custom deployment configuration file that specifies your private Docker repository information. The following Azure Data CLI (azdata) commands demonstrate how to change the Docker settings in a custom deployment configuration file named control.json:

azdata bdc config replace --config-file custom/control.json --json-values "$.spec.docker.repository=<your-docker-repository>"
azdata bdc config replace --config-file custom/control.json --json-values "$.spec.docker.registry=<your-docker-registry>"
azdata bdc config replace --config-file custom/control.json --json-values "$.spec.docker.imageTag=<your-docker-image-tag>"

The deployment prompts you for the Docker username and password, or you can specify them in the DOCKER_USERNAME and DOCKER_PASSWORD environment variables.
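Putting it together, configuring a deployment against a private registry might look like the following. All values are illustrative placeholders, and the echo prefix on azdata keeps this a dry run; the commands themselves are the ones shown above.

```shell
#!/usr/bin/env bash
# Example wiring for a private-registry deployment. Registry, repository,
# tag, and credentials are all hypothetical placeholders.
set -euo pipefail

export DOCKER_USERNAME="registryuser"       # hypothetical credentials
export DOCKER_PASSWORD="registrypassword"

REGISTRY="registry.contoso.local"
REPOSITORY="mssql/bdc"
TAG="2019-CU12-ubuntu-20.04"

AZDATA="echo azdata"   # set AZDATA=azdata to execute for real

$AZDATA bdc config replace --config-file custom/control.json \
  --json-values "\$.spec.docker.registry=$REGISTRY"
$AZDATA bdc config replace --config-file custom/control.json \
  --json-values "\$.spec.docker.repository=$REPOSITORY"
$AZDATA bdc config replace --config-file custom/control.json \
  --json-values "\$.spec.docker.imageTag=$TAG"
```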

Next steps

For more information about big data cluster deployments, see How to deploy SQL Server Big Data Clusters on Kubernetes.