SQL Server Always On high availability on AKS: How to reset the password and check the service status

Rajaniesh Kaushikk 426 Reputation points
2021-05-03T00:13:11.853+00:00

Hi,

I am running SQL Server on AKS, and when I use this command it throws an error:

kubectl exec -it $podname -- systemctl status mssql-server
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
command terminated with exit code 1

And when I use this command to reset the password, it throws another error:
kubectl exec -it $podname -- /opt/mssql/bin/mssql-conf set-sa-password
This program must be run as superuser or as a user with membership in the mssql
group.
command terminated with exit code 1

What is wrong with it? Any clues?

Regards
Rajaniesh

Azure Kubernetes Service (AKS)
An Azure service that provides serverless Kubernetes, an integrated continuous integration and continuous delivery experience, and enterprise-grade security and governance.

1 answer

  1. vipullag-MSFT 16,741 Reputation points Microsoft Employee
    2021-05-04T10:12:03.793+00:00

    @Rajaniesh Kaushikk

    Apologies for the delayed response on this.

    Looking at SQL Server Always On AG on AKS, this solution appears to use the environment variables SQL_MASTERKEYPASSWORD and SQL_SAPASSWORD for SQL Server.

    There is no way to change a container's environment variables without restarting the container.

    Also, from this document it seems like the environment variables are being populated from secrets that the CNAB package installation creates.

    Hence, you can try editing the sa password and/or master key password secrets (kubectl edit) using the base64-encoded values of the new password you want to set.

    However, this requires a restart of the pod: if the pod is controlled by a deployment/replicaset/statefulset/daemonset, delete it and the controller will recreate it; for a standalone pod, save its spec with kubectl get po <podname> -o yaml > $filename, delete the old pod, and run kubectl apply -f $filename.
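    As a concrete sketch of the two steps above — the secret name sql-secrets, the key sapassword, and $podname are assumptions here; list the real names with kubectl get secrets in your installation:

```shell
# Base64-encode the new password (printf avoids encoding a trailing newline)
NEW_PASSWORD_B64=$(printf '%s' 'MyNewP@ssw0rd' | base64)

# The kubectl steps only make sense against a live cluster, so they are
# guarded; 'sql-secrets' and 'sapassword' are placeholder names.
if command -v kubectl >/dev/null 2>&1; then
  # Patch the secret in place with the new base64-encoded value
  kubectl patch secret sql-secrets \
    -p "{\"data\":{\"sapassword\":\"$NEW_PASSWORD_B64\"}}"

  # Pod managed by a deployment/replicaset/statefulset/daemonset:
  # delete it and let the controller recreate it with the new value.
  kubectl delete pod "$podname"

  # Standalone pod: save the spec, delete the old pod, then re-apply.
  kubectl get pod "$podname" -o yaml > pod.yaml
  kubectl delete pod "$podname"
  kubectl apply -f pod.yaml
fi
```

    Note that kubectl edit on the secret works equally well; patch is just easier to script.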

    The image does not boot with systemd as the init system (the error message confirms PID 1 is not systemd), so systemctl cannot work inside the container. You can try service --status-all instead.
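    Since systemctl is unavailable, the service state has to be checked another way. A sketch — the sqlcmd path assumes the mssql-tools package is present in the image, and $SA_PASSWORD is a placeholder:

```shell
# systemd is not PID 1 inside the container, so systemctl cannot work.
# Guarded so the snippet is a no-op without a cluster.
if command -v kubectl >/dev/null 2>&1; then
  # See what PID 1 actually is (typically sqlservr in SQL Server images)
  kubectl exec -it "$podname" -- cat /proc/1/comm

  # List running processes to confirm sqlservr is up (if ps is available)
  kubectl exec -it "$podname" -- ps -ef

  # Or test connectivity directly, if mssql-tools is in the image
  kubectl exec -it "$podname" -- /opt/mssql-tools/bin/sqlcmd \
    -S localhost -U sa -P "$SA_PASSWORD" -Q "SELECT @@SERVERNAME"
fi
```

    A successful sqlcmd query is the most direct confirmation that the service is actually accepting connections.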

    Hope this helps.

    Please 'Accept as answer' if the provided information is helpful, so that it can help others in the community looking for help on similar topics.
