Upgrade an Azure SQL Managed Instance indirectly connected to Azure Arc using the CLI

This article describes how to upgrade a SQL Managed Instance deployed on an indirectly connected Azure Arc-enabled data controller by using the Azure CLI (az).

Prerequisites

Install tools

Before you can proceed with the tasks in this article, install the Azure CLI (az) and the arcdata extension for the Azure CLI.

The arcdata extension version and the image version are related. Check the Version log to confirm that your arcdata extension version corresponds to the image version you want to upgrade to.

Limitations

The Azure Arc Data Controller must be upgraded to the new version before the managed instance can be upgraded.

If Active Directory integration is enabled, the Active Directory connector must be upgraded to the new version before the managed instance can be upgraded.

The managed instance must be at the same version as the data controller and the Active Directory connector before the data controller is upgraded.

There's no batch upgrade process available at this time.

Upgrade the managed instance

A dry run can be performed first. The dry run validates the version schema and lists which instances will be upgraded.

For example:

az sql mi-arc upgrade --name <instance name> --k8s-namespace <namespace> --dry-run --use-k8s

The output will be:

Preparing to upgrade sql sqlmi-1 in namespace arc to data controller version.
****Dry Run****
1 instance(s) would be upgraded by this command
sqlmi-1 would be upgraded to <version-tag>.

General Purpose

During a SQL Managed Instance General Purpose upgrade, the pod is terminated and reprovisioned at the new version. This causes a short period of downtime while the new pod is created. You will need to build resiliency into your application, such as connection retry logic, to ensure minimal disruption. For more information on architecting for resiliency and retry guidance for Azure services, read Overview of the reliability pillar.
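As an illustrative sketch of the retry idea only (in practice your application would retry its database connection in its own language and driver), a generic shell helper might look like the following. `retry` is a hypothetical helper, not part of any Azure tooling:

```shell
# Hypothetical retry helper: run a command until it succeeds, with a
# simple increasing backoff, up to a maximum number of attempts.
retry() {
  max=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    echo "attempt $n failed; retrying in $((n * 2))s..." >&2
    sleep $((n * 2))
    n=$((n + 1))
  done
}

# Example with placeholders as used in this article (not run here):
# retry 5 sqlcmd -S <primary endpoint>,1433 -Q "SELECT 1"
```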

Business Critical

During a SQL Managed Instance Business Critical upgrade with more than one replica:

  • The secondary replica pods are terminated and reprovisioned at the new version
  • After the replicas are upgraded, the primary will fail over to an upgraded replica
  • The previous primary pod is terminated and reprovisioned at the new version, and becomes a secondary

There is a brief moment of downtime when the failover occurs.

Upgrade

To upgrade the managed instance, use the following command:

az sql mi-arc upgrade --name <instance name> --desired-version <version> --k8s-namespace <namespace> --use-k8s

Example:

az sql mi-arc upgrade --name instance1 --desired-version v1.0.0.20211028 --k8s-namespace arc1 --use-k8s

Monitor

CLI

You can monitor the progress of the upgrade with the show command.

az sql mi-arc show --name <instance name> --k8s-namespace <namespace> --use-k8s

Output

The output for the command shows the resource information. Upgrade information appears in Status.

During the upgrade, State will show Updating and Running Version will be the current version:

Status:
  Log Search Dashboard:  https://30.88.222.48:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqlmi-1'))
  Metrics Dashboard:     https://30.88.221.32:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi-1-0
  Observed Generation:   2
  Primary Endpoint:      30.76.129.38,1433
  Ready Replicas:        1/1
  Running Version:       v1.0.0_2021-07-30
  State:                 Updating

When the upgrade is complete, State will show Ready and Running Version will be the new version:

Status:
  Log Search Dashboard:  https://30.88.222.48:5601/app/kibana#/discover?_a=(query:(language:kuery,query:'custom_resource_name:sqlmi-1'))
  Metrics Dashboard:     https://30.88.221.32:3000/d/40q72HnGk/sql-managed-instance-metrics?var-hostname=sqlmi-1-0
  Observed Generation:   2
  Primary Endpoint:      30.76.129.38,1433
  Ready Replicas:        1/1
  Running Version:       <version-tag>
  State:                 Ready
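If you script the monitoring step, you can extract the State field from the show command's output. This is a minimal sketch that assumes the text layout shown above; `get_state` is a hypothetical helper, not part of the Azure CLI:

```shell
# Hypothetical helper: pull the State value out of the text output of
# `az sql mi-arc show` (assumes the layout shown above).
get_state() {
  awk '$1 == "State:" {print $2}'
}

# Poll until the upgrade completes, with placeholders as used in this
# article (not run here):
# until [ "$(az sql mi-arc show --name <instance name> \
#              --k8s-namespace <namespace> --use-k8s | get_state)" = "Ready" ]; do
#   sleep 30
# done
```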

Troubleshooting

When the desired version is set to a specific version, the bootstrapper job will attempt to upgrade to that version until it succeeds. If the upgrade is successful, the RunningVersion property of the spec is updated to the new version. Upgrades can fail for reasons such as an incorrect image tag, an inability to connect to the registry or repository, insufficient CPU or memory allocated to the containers, or insufficient storage.

  1. Run the following command to see whether any of the pods show an Error status or have a high number of restarts:

    kubectl get pods --namespace <namespace>
    
  2. To look at the events to see if there is an error, run:

    kubectl describe pod <pod name> --namespace <namespace>
    
  3. To get a list of the containers in the pod, run:

    kubectl get pods <pod name> --namespace <namespace> -o jsonpath='{.spec.containers[*].name}'
    
  4. To get the logs for a container, run:

    kubectl logs <pod name> --container <container name> --namespace <namespace>
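Steps 3 and 4 can be combined into a small loop that dumps the logs of every container in a pod. This is a sketch only; `dump_container_logs` is a hypothetical helper, not part of kubectl:

```shell
# Hypothetical helper: print the logs of every container in the given pod,
# using the same kubectl commands as steps 3 and 4 above.
dump_container_logs() {
  pod=$1; ns=$2
  for c in $(kubectl get pod "$pod" --namespace "$ns" \
               -o jsonpath='{.spec.containers[*].name}'); do
    echo "=== logs for container: $c ==="
    kubectl logs "$pod" --container "$c" --namespace "$ns"
  done
}

# Usage with placeholders as used in this article (not run here):
# dump_container_logs <pod name> <namespace>
```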
    

To view common errors and how to troubleshoot them, go to Troubleshooting resources.