Configure and deploy a Valkey cluster on Azure Kubernetes Service (AKS)

In this article, you configure and deploy a Valkey cluster on Azure Kubernetes Service (AKS).

Note

This article contains references to the terms master and slave, which Microsoft no longer uses. When the terms are removed from the Valkey software, we'll remove them from this article.

Create a namespace

  1. Create a namespace for the Valkey cluster using the kubectl create namespace command.

    kubectl create namespace ${SERVICE_ACCOUNT_NAMESPACE} --dry-run=client --output yaml | kubectl apply -f -
    

    Example:

    namespace/valkey created
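
    If you want to confirm that the namespace exists before continuing, this optional check queries it directly (it assumes the same SERVICE_ACCOUNT_NAMESPACE variable used above):

    kubectl get namespace ${SERVICE_ACCOUNT_NAMESPACE}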
    

Create secrets

  1. Generate a random password for the Valkey cluster using openssl and store it in Azure Key Vault using the az keyvault secret set command. Set the policy to allow the user-assigned identity to get the secret using the az keyvault set-policy command.

    SECRET=$(openssl rand -base64 32)
    echo requirepass $SECRET > /tmp/valkey-password-file.conf
    echo primaryauth $SECRET >> /tmp/valkey-password-file.conf
    az keyvault secret set --vault-name $MY_KEYVAULT_NAME --name valkey-password-file --file /tmp/valkey-password-file.conf --output table
    rm /tmp/valkey-password-file.conf
    az keyvault set-policy --name $MY_KEYVAULT_NAME --object-id $userAssignedObjectID --secret-permissions get --output table
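
    Optionally, you can confirm that the secret reached the key vault before moving on. This is a quick sanity check rather than a required step:

    az keyvault secret show --vault-name $MY_KEYVAULT_NAME --name valkey-password-file --query name --output tsv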
    
  2. Create a SecretProviderClass resource to provide access to the Valkey password stored in your key vault using the kubectl apply command.

    kubectl apply -f - <<EOF
    ---
    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
      name: valkey-password
      namespace: valkey
    spec:
      provider: azure
      parameters:
        usePodIdentity: "false"
        useVMManagedIdentity: "true"
        userAssignedIdentityID: "${userAssignedIdentityID}"
        keyvaultName: ${MY_KEYVAULT_NAME}              # the name of the AKV instance
        objects: |
          array:
            - |
              objectName: valkey-password-file
              objectAlias: valkey-password-file.conf
              objectType: secret
        tenantId: "${TENANT_ID}" # the tenant ID of the AKV instance
    EOF
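
    The SecretProviderClass only takes effect when a pod mounts it. If you want to confirm that the resource itself was created (this assumes the Azure Key Vault provider for the Secrets Store CSI Driver is installed on the cluster), you can list it:

    kubectl get secretproviderclass -n valkey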
    

Deploy the Valkey cluster

  1. Create a ConfigMap mounted as a volume in the Valkey StatefulSet to configure the Valkey cluster using the kubectl apply command.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: valkey-cluster
      namespace: valkey
    data:
      valkey.conf:  |+
        cluster-enabled yes
        cluster-node-timeout 15000
        cluster-config-file /data/nodes.conf
        appendonly yes
        protected-mode yes
        dir /data
        port 6379
        include /etc/valkey-password/valkey-password-file.conf
    EOF
    

    Example:

    configmap/valkey-cluster created
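
    If you want to double-check the rendered configuration, you can print the ConfigMap. This step is optional:

    kubectl get configmap valkey-cluster -n valkey -o yaml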
    
  2. Create a StatefulSet resource with a spec.affinity goal of keeping all primaries in zone 1 and zone 2, preferably on different nodes, using the kubectl apply command.

    kubectl apply -f - <<EOF
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: valkey-masters
      namespace: valkey
    spec:
      serviceName: "valkey-masters"
      replicas: 3
      selector:
        matchLabels:
          app: valkey
      template:
        metadata:
          labels:
            app: valkey
            appCluster: valkey-masters
        spec:
          terminationGracePeriodSeconds: 20
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: agentpool
                    operator: In
                    values:
                    - valkey
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                    - ${MY_LOCATION}-1
                - matchExpressions:
                  - key: agentpool
                    operator: In
                    values:
                    - valkey
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                    - ${MY_LOCATION}-2
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 90
                podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                    - key: app
                      operator: In
                      values:
                      - valkey
                  topologyKey: topology.kubernetes.io/zone
              - weight: 90
                podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                    - key: app
                      operator: In
                      values:
                      - valkey
                  topologyKey: kubernetes.io/hostname
          containers:
          - name: valkey
            image: "${MY_ACR_REGISTRY}.azurecr.io/valkey:latest"
            env:
            - name: VALKEY_PASSWORD_FILE
              value: "/etc/valkey-password/valkey-password-file.conf"
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            command:
              - "valkey-server"
            args:
              - "/conf/valkey.conf"
              - "--cluster-announce-ip"
              - "\$(MY_POD_IP)"
            resources:
              requests:
                cpu: "100m"
                memory: "100Mi"
            ports:
                - name: valkey
                  containerPort: 6379
                  protocol: "TCP"
                - name: cluster
                  containerPort: 16379
                  protocol: "TCP"
            volumeMounts:
            - name: conf
              mountPath: /conf
              readOnly: false
            - name: data
              mountPath: /data
              readOnly: false
            - name: valkey-password
              mountPath: /etc/valkey-password
              readOnly: true
          volumes:
          - name: valkey-password
            csi:
              driver: secrets-store.csi.k8s.io
              readOnly: true
              volumeAttributes:
                secretProviderClass: valkey-password
          - name: conf
            configMap:
              name: valkey-cluster
              defaultMode: 0755
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: managed-csi-premium
          resources:
            requests:
              storage: 20Gi
    EOF
    

    Example:

    statefulset.apps/valkey-masters created
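
    You can optionally watch the rollout until all three primary pods are ready. This uses standard kubectl functionality and isn't required by the walkthrough:

    kubectl rollout status statefulset/valkey-masters -n valkey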
    
  3. Create a second StatefulSet resource for the Valkey replicas with a spec.affinity goal of keeping all replicas in zone 3, preferably on different nodes, using the kubectl apply command.

    kubectl apply -f - <<EOF
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: valkey-replicas
      namespace: valkey
    spec:
      serviceName: "valkey-replicas"
      replicas: 3
      selector:
        matchLabels:
          app: valkey
      template:
        metadata:
          labels:
            app: valkey
            appCluster: valkey-replicas
        spec:
          terminationGracePeriodSeconds: 20
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: agentpool
                    operator: In
                    values:
                    - valkey
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values:
                    - ${MY_LOCATION}-3
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 90
                podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                    - key: app
                      operator: In
                      values:
                      - valkey
                  topologyKey: kubernetes.io/hostname
          containers:
          - name: valkey
            image: "${MY_ACR_REGISTRY}.azurecr.io/valkey:latest"
            env:
            - name: VALKEY_PASSWORD_FILE
              value: "/etc/valkey-password/valkey-password-file.conf"
            - name: MY_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            command:
              - "valkey-server"
            args:
              - "/conf/valkey.conf"
              - "--cluster-announce-ip"
              - "\$(MY_POD_IP)"
            resources:
              requests:
                cpu: "100m"
                memory: "100Mi"
            ports:
                - name: valkey
                  containerPort: 6379
                  protocol: "TCP"
                - name: cluster
                  containerPort: 16379
                  protocol: "TCP"
            volumeMounts:
            - name: conf
              mountPath: /conf
              readOnly: false
            - name: data
              mountPath: /data
              readOnly: false
            - name: valkey-password
              mountPath: /etc/valkey-password
              readOnly: true
          volumes:
          - name: valkey-password
            csi:
              driver: secrets-store.csi.k8s.io
              readOnly: true
              volumeAttributes:
                secretProviderClass: valkey-password
          - name: conf
            configMap:
              name: valkey-cluster
              defaultMode: 0755
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: [ "ReadWriteOnce" ]
          storageClassName: managed-csi-premium
          resources:
            requests:
              storage: 20Gi
    EOF
    

    Example:

    statefulset.apps/valkey-replicas created
    
  4. Verify that master-N and replica-N run on different nodes and in different zones using the kubectl get nodes and kubectl get pods commands.

    kubectl get pods -n valkey -o wide
    kubectl get node -o custom-columns=Name:.metadata.name,Zone:".metadata.labels.topology\.kubernetes\.io/zone"
    

    Example:

    NAME                READY   STATUS    RESTARTS   AGE     IP             NODE                             NOMINATED NODE   READINESS GATES
    valkey-masters-0    1/1     Running   0          2m55s   10.224.0.4     aks-valkey-18693609-vmss000004   <none>           <none>
    valkey-masters-1    1/1     Running   0          2m31s   10.224.0.137   aks-valkey-18693609-vmss000000   <none>           <none>
    valkey-masters-2    1/1     Running   0          2m7s    10.224.0.222   aks-valkey-18693609-vmss000001   <none>           <none>
    valkey-replicas-0   1/1     Running   0          88s     10.224.0.237   aks-valkey-18693609-vmss000005   <none>           <none>
    valkey-replicas-1   1/1     Running   0          70s     10.224.0.18    aks-valkey-18693609-vmss000002   <none>           <none>
    valkey-replicas-2   1/1     Running   0          48s     10.224.0.242   aks-valkey-18693609-vmss000005   <none>           <none>
    Name                                Zone
    aks-nodepool1-17621399-vmss000000   centralus-1
    aks-nodepool1-17621399-vmss000001   centralus-2
    aks-nodepool1-17621399-vmss000003   centralus-3
    aks-valkey-18693609-vmss000000      centralus-1
    aks-valkey-18693609-vmss000001      centralus-2
    aks-valkey-18693609-vmss000002      centralus-3
    aks-valkey-18693609-vmss000003      centralus-1
    aks-valkey-18693609-vmss000004      centralus-2
    aks-valkey-18693609-vmss000005      centralus-3
    

    Wait for all the pods to be running before moving on to the next step.
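
    As an optional alternative to polling manually with kubectl get pods, you can block until every Valkey pod reports Ready (the 5-minute timeout here is an arbitrary choice):

    kubectl wait --for=condition=Ready pod -l app=valkey -n valkey --timeout=300s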

  5. Create three headless Service resources (the first for the entire cluster, the second for the primaries, and the third for the replicas) to use for getting the IP addresses of the Valkey pods, using the kubectl apply command.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: valkey-cluster
      namespace: valkey
    spec:
      clusterIP: None
      ports:
      - name: valkey-port
        port: 6379
        protocol: TCP
        targetPort: 6379
      selector:
        app: valkey
      sessionAffinity: None
      type: ClusterIP
    EOF
    
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: valkey-masters
      namespace: valkey
    spec:
      clusterIP: None
      ports:
      - name: valkey-port
        port: 6379
        protocol: TCP
        targetPort: 6379
      selector:
        app: valkey
        appCluster: valkey-masters
      sessionAffinity: None
      type: ClusterIP
    EOF
    
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: valkey-replicas
      namespace: valkey
    spec:
      clusterIP: None
      ports:
      - name: valkey-port
        port: 6379
        protocol: TCP
        targetPort: 6379
      selector:
        app: valkey
        appCluster: valkey-replicas
      sessionAffinity: None
      type: ClusterIP
    EOF
    

    Example:

    service/valkey-cluster created
    service/valkey-masters created
    service/valkey-replicas created
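
    Because these Services are headless, a DNS lookup against them returns the individual pod IPs rather than a single cluster IP. If you want to see this in action, you can run a throwaway pod (the busybox image here is just an example):

    kubectl run -n valkey --rm -it dns-test --image=busybox --restart=Never -- nslookup valkey-masters.valkey.svc.cluster.local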
    

Run the Valkey cluster

  1. Add the Valkey primaries, in zones 1 and 2, to the cluster using the kubectl exec command.

    kubectl exec -it -n valkey valkey-masters-0 -- valkey-cli --cluster create --cluster-yes --cluster-replicas 0 \
                        valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379 \
                        valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379 \
                        valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379 \
                        --pass ${SECRET}
    

    Example:

    >>> Performing hash slots allocation on 3 nodes...
    Master[0] -> Slots 0 - 5460
    Master[1] -> Slots 5461 - 10922
    Master[2] -> Slots 10923 - 16383
    M: ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35 valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379
       slots:[0-5460] (5461 slots) master
    M: fd1fb98db83976478e05edd3d2a02f9a13badd80 valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379
       slots:[5461-10922] (5462 slots) master
    M: ea47bf57ae7080ef03164a4d48b662c7b4c8770e valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379
       slots:[10923-16383] (5461 slots) master
    >>> Nodes configuration updated
    >>> Assign a different config epoch to each node
    >>> Sending CLUSTER MEET messages to join the cluster
    Waiting for the cluster to join
    ...
    >>> Performing Cluster Check (using node valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379)
    M: ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35 valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379
       slots:[0-5460] (5461 slots) master
    M: ea47bf57ae7080ef03164a4d48b662c7b4c8770e 10.224.0.176:6379
       slots:[10923-16383] (5461 slots) master
    M: fd1fb98db83976478e05edd3d2a02f9a13badd80 10.224.0.247:6379
       slots:[5461-10922] (5462 slots) master
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
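
    At this point you can optionally inspect the cluster state from any of the primaries; cluster_state should report ok and cluster_known_nodes should be 3 once slot assignment completes:

    kubectl exec -it -n valkey valkey-masters-0 -- valkey-cli --pass ${SECRET} cluster info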
    
  2. Add the Valkey replicas, in zone 3, to the cluster using the kubectl exec command.

    kubectl exec -ti -n valkey valkey-masters-0 -- valkey-cli --cluster add-node \
                        valkey-replicas-0.valkey-replicas.valkey.svc.cluster.local:6379 \
                        valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379  --cluster-slave \
                        --pass ${SECRET}
    
    kubectl exec -ti -n valkey valkey-masters-0 -- valkey-cli --cluster add-node \
                        valkey-replicas-1.valkey-replicas.valkey.svc.cluster.local:6379 \
                        valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379  --cluster-slave \
                        --pass ${SECRET}
    
    kubectl exec -ti -n valkey valkey-masters-0 -- valkey-cli --cluster add-node \
                        valkey-replicas-2.valkey-replicas.valkey.svc.cluster.local:6379 \
                        valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379  --cluster-slave \
                        --pass ${SECRET}
    

    Example:

    >>> Adding node valkey-replicas-0.valkey-replicas.valkey.svc.cluster.local:6379 to cluster valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379
    >>> Performing Cluster Check (using node valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379)
    M: ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35 valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379
       slots:[0-5460] (5461 slots) master
    M: ea47bf57ae7080ef03164a4d48b662c7b4c8770e 10.224.0.176:6379
       slots:[10923-16383] (5461 slots) master
    M: fd1fb98db83976478e05edd3d2a02f9a13badd80 10.224.0.247:6379
       slots:[5461-10922] (5462 slots) master
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    Automatically selected master valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379
    >>> Send CLUSTER MEET to node valkey-replicas-0.valkey-replicas.valkey.svc.cluster.local:6379 to make it join the cluster.
    Waiting for the cluster to join
    
    >>> Configure node as replica of valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379.
    [OK] New node added correctly.
    >>> Adding node valkey-replicas-1.valkey-replicas.valkey.svc.cluster.local:6379 to cluster valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379
    >>> Performing Cluster Check (using node valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379)
    M: fd1fb98db83976478e05edd3d2a02f9a13badd80 valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379
       slots:[5461-10922] (5462 slots) master
    S: 0ebceb60cbcc31da9040159440a1f4856b992907 10.224.0.224:6379
       slots: (0 slots) slave
       replicates ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35
    M: ea47bf57ae7080ef03164a4d48b662c7b4c8770e 10.224.0.176:6379
       slots:[10923-16383] (5461 slots) master
    M: ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35 10.224.0.14:6379
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    Automatically selected master valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379
    >>> Send CLUSTER MEET to node valkey-replicas-1.valkey-replicas.valkey.svc.cluster.local:6379 to make it join the cluster.
    Waiting for the cluster to join
    
    >>> Configure node as replica of valkey-masters-1.valkey-masters.valkey.svc.cluster.local:6379.
    [OK] New node added correctly.
    >>> Adding node valkey-replicas-2.valkey-replicas.valkey.svc.cluster.local:6379 to cluster valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379
    >>> Performing Cluster Check (using node valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379)
    M: ea47bf57ae7080ef03164a4d48b662c7b4c8770e valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379
       slots:[10923-16383] (5461 slots) master
    S: 0ebceb60cbcc31da9040159440a1f4856b992907 10.224.0.224:6379
       slots: (0 slots) slave
       replicates ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35
    S: fa44edff683e2e01ee5c87233f9f3bc35c205dce 10.224.0.103:6379
       slots: (0 slots) slave
       replicates fd1fb98db83976478e05edd3d2a02f9a13badd80
    M: ee6ac1d00d3f016b6f46c7ce11199bc1a7809a35 10.224.0.14:6379
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    M: fd1fb98db83976478e05edd3d2a02f9a13badd80 10.224.0.247:6379
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [OK] All 16384 slots covered.
    Automatically selected master valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379
    >>> Send CLUSTER MEET to node valkey-replicas-2.valkey-replicas.valkey.svc.cluster.local:6379 to make it join the cluster.
    Waiting for the cluster to join
    
    >>> Configure node as replica of valkey-masters-2.valkey-masters.valkey.svc.cluster.local:6379.
    [OK] New node added correctly.
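
    Once all three replicas have joined, listing the cluster topology is an optional way to confirm that each primary has exactly one replica attached:

    kubectl exec -it -n valkey valkey-masters-0 -- valkey-cli --pass ${SECRET} cluster nodes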
    
  3. Verify the roles of the pods using the following commands:

    for x in $(seq 0 2); do echo "valkey-masters-$x"; kubectl exec -n valkey valkey-masters-$x  -- valkey-cli --pass ${SECRET} role; echo; done
    for x in $(seq 0 2); do echo "valkey-replicas-$x"; kubectl exec -n valkey valkey-replicas-$x -- valkey-cli --pass ${SECRET} role; echo; done
    

    Example output:

    valkey-masters-0
    master
    84
    10.224.0.224
    6379
    84
    
    valkey-masters-1
    master
    84
    10.224.0.103
    6379
    84
    
    valkey-masters-2
    master
    70
    10.224.0.200
    6379
    70
    
    valkey-replicas-0
    slave
    10.224.0.14
    6379
    connected
    98
    
    valkey-replicas-1
    slave
    10.224.0.247
    6379
    connected
    98
    
    valkey-replicas-2
    slave
    10.224.0.176
    6379
    connected
    84
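
    For an end-to-end consistency check of slots and replica assignments, you can also run the built-in cluster checker against any node. This final verification is optional:

    kubectl exec -it -n valkey valkey-masters-0 -- valkey-cli --cluster check valkey-masters-0.valkey-masters.valkey.svc.cluster.local:6379 --pass ${SECRET}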
    

Next steps

To learn more about deploying open-source software on Azure Kubernetes Service (AKS), see the following articles:

Contributors

Microsoft maintains this article. The following contributors originally wrote it:

  • Nelly Kiboi | Service Engineer
  • Saverio Proto | Principal Customer Experience Engineer
  • Don High | Principal Customer Engineer
  • LaBrina Loving | Principal Service Engineer
  • Ken Kilty | Principal TPM
  • Russell de Pina | Principal TPM
  • Colin Mixon | Product Manager
  • Ketan Chawda | Senior Customer Engineer
  • Naveed Kharadi | Customer Experience Engineer
  • Erin Schaffer | Content Developer 2