Container images should be deployed from trusted registries only

Olivier Neu 21 Reputation points
2022-07-13T14:30:16.927+00:00

Hello,

We are subscribed to Microsoft Defender for Cloud, which reports the recommendation "Container images should be deployed from trusted registries only" for our Kubernetes cluster.

The regex defining our organization's private registries is configured via the security policy parameters.

With the regex in place, all pods are excluded from the recommendation except two; the problem is that it keeps flagging those two pods.

However, as far as I can tell, all the containers of those two pods match the regex.

How can I find out exactly which container is failing to match the regex?
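
For reference, this is roughly how the configured regex can be inspected with the Azure CLI (a sketch; the assignment name is a placeholder, and the parameter name mentioned in the comment is assumed from the built-in allowed-images policy definition, so check your own assignment):

# Dump the parameters on the policy assignment backing this recommendation
# (assignment name is a placeholder; the regex typically sits in a parameter
# such as allowedContainerImagesRegex; name assumed, verify on your assignment)
az policy assignment show --name <assignment-name> --query parameters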

Azure Container Registry
An Azure service that provides a registry of Docker and Open Container Initiative images.

Accepted answer
  1. Steve Down 101 Reputation points
    2022-07-19T18:03:56.817+00:00

    A couple of things: first, triple-check the regex; if you copied and pasted it, stray characters could throw things off. Second, there can be a significant lag in compliance detection.

    Try az policy state trigger-scan --no-wait, give it a while, then go back to Policy -> Compliance, pull up that policy, and see whether things improve.

    Also, the lag for this policy's compliance results to feed into the Secure Score calculations in Defender for Cloud can be long. Days long.
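
    A rough sketch of that sequence with the Azure CLI (the resource group name is a placeholder; adjust the filter to taste):

    # Ask Azure Policy to re-evaluate compliance for the cluster's resource group
    az policy state trigger-scan --resource-group rg-aks-dev --no-wait

    # Once the scan has had time to run, list what is still non-compliant
    az policy state list --resource-group rg-aks-dev \
      --filter "complianceState eq 'NonCompliant'" \
      --query "[].{policy:policyDefinitionName, resource:resourceId}" -o table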

    1 person found this answer helpful.

3 additional answers

  1. Steve Down 101 Reputation points
    2022-07-19T11:54:03.263+00:00

    I would do this:

    1. Put the regex for your policy into https://regex101.com - click the delimiter in the regular expression field and change it to something other than "/"
    2. Look at the explanation panel - the site evaluates your regex, tells you if there is a problem with it, and explains what it is trying to do if its syntax is correct.
    3. Log in to your cluster with az aks get-credentials and kubelogin
    4. kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c
    5. Compare the result against the regex - try copying and pasting lines from the result into the test field in regex101 and see what matches and what doesn't (or use the grep sketch after this list)
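
    If you want to shortcut step 5, grep can do the comparison for you. A minimal sketch, using the regex posted elsewhere in this thread (swap in your actual parameter value); note that the jsonpath in step 4 only covers .spec.containers, so it is worth checking init-container images the same way:

    # Dump every container image in the cluster, one per line
    kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" \
      | tr -s '[[:space:]]' '\n' | sort -u > images.txt

    # Init containers have images too, so include them as well
    kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.initContainers[*].image}" \
      | tr -s '[[:space:]]' '\n' | sort -u >> images.txt

    # Print only the images that do NOT match the allow-list regex
    grep -Ev '^(registry\.gitlab\.com|docker\.io|quay\.io|cr\.l5d\.io|emberstack|ghcr\.io|google|grafana|k8s\.gcr\.io|mcr\.microsoft\.com).*$' images.txt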
    1 person found this answer helpful.

  2. Olivier Neu 21 Reputation points
    2022-07-15T12:58:46.647+00:00

    Thanks srbhatta-msft,

    Regex:

    ^(registry\.gitlab\.com|docker\.io|quay\.io|cr\.l5d\.io|emberstack|ghcr\.io|google|grafana|k8s\.gcr\.io|mcr\.microsoft\.com).*$  
    

  3. Olivier Neu 21 Reputation points
    2022-07-19T13:12:01.3+00:00

    Thanks again for your response.

    This confirms that what I was doing matches your recommendation. Despite that, I still have two detected pods that supposedly do not match the regex.

    Below is the list produced by the kubectl command you gave earlier.

    I removed the count at the beginning of each line and pasted the list into the site to check the regex. You will also find that every line matches the regex, yet the Azure policy still flags two pods.

    4 cr.l5d.io/linkerd/controller:stable-2.11.4
    1 cr.l5d.io/linkerd/policy-controller:stable-2.11.4
    10 cr.l5d.io/linkerd/proxy:stable-2.11.4
    1 emberstack/kubernetes-reflector:6.1.23
    1 ghcr.io/fluxcd/helm-controller:v0.17.1
    1 ghcr.io/fluxcd/kustomize-controller:v0.21.1
    1 ghcr.io/fluxcd/notification-controller:v0.22.2
    1 ghcr.io/fluxcd/source-controller:v0.21.2
    1 ghcr.io/kyverno/kyverno:v1.6.0
    1 google/apparmor-loader:latest
    1 grafana/grafana:8.3.6
    1 k8s.gcr.io/ingress-nginx/controller:v1.1.3@sha256:31f47c1e202b39fadecf822a9b76370bd4baed199a005b3e7d4d1455f4fd3fe2
    1 k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.4.1
    1 mcr.microsoft.com/azure-policy/policy-kubernetes-addon-prod:prod_20220114.1
    1 mcr.microsoft.com/azure-policy/policy-kubernetes-webhook:prod_20211230.1
    1 mcr.microsoft.com/azuredefender/stable/low-level-collector:1.3.31
    2 mcr.microsoft.com/azuredefender/stable/pod-collector:1.0.51
    1 mcr.microsoft.com/azuredefender/stable/security-publisher:1.0.43
    3 mcr.microsoft.com/azuremonitor/containerinsights/ciprod:ciprod05192022
    2 mcr.microsoft.com/oss/azure/aad-pod-identity/mic:v1.8.7
    1 mcr.microsoft.com/oss/azure/aad-pod-identity/nmi:v1.8.7
    1 mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.1.0
    1 mcr.microsoft.com/oss/calico/kube-controllers:v3.21.4
    1 mcr.microsoft.com/oss/calico/node:v3.21.4
    1 mcr.microsoft.com/oss/calico/typha:v3.21.4
    1 mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.19.0
    1 mcr.microsoft.com/oss/kubernetes-csi/azurefile-csi:v1.19.0
    3 mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.5.0
    3 mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.6.0
    1 mcr.microsoft.com/oss/kubernetes-csi/secrets-store/driver:v1.1.1.5
    2 mcr.microsoft.com/oss/kubernetes/apiserver-network-proxy/agent:v0.0.30
    1 mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.8.5
    1 mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.23.11
    2 mcr.microsoft.com/oss/kubernetes/coredns:v1.8.7
    1 mcr.microsoft.com/oss/kubernetes/kube-proxy:v1.23.5-hotfix.20220331.3
    2 mcr.microsoft.com/oss/kubernetes/metrics-server:v0.5.2
    3 mcr.microsoft.com/oss/open-policy-agent/gatekeeper:v3.7.1
    1 mcr.microsoft.com/oss/tigera/operator:v1.24.2
    1 quay.io/jetstack/cert-manager-cainjector:v1.7.0
    1 quay.io/jetstack/cert-manager-controller:v1.7.0
    1 quay.io/jetstack/cert-manager-webhook:v1.7.0
    2 quay.io/kiwigrid/k8s-sidecar:1.15.6
    1 quay.io/prometheus-operator/prometheus-config-reloader:v0.54.1
    1 quay.io/prometheus-operator/prometheus-operator:v0.54.1
    1 quay.io/prometheus/node-exporter:v1.3.1
    1 quay.io/prometheus/prometheus:v2.33.4
    1 registry.gitlab.com/greybox-solutions/greybox-a:813a3e38
    1 registry.gitlab.com/greybox-solutions/takecare-reports:79bdc402

    Some additional information:

    The cluster is destroyed every night and automatically rebuilt in the morning (Terraform/Flux), so in principle there is no drift from manual changes.

    Below is the output of the following command, in case it helps spot what I have failed to see.

    kubectl get pods -n monitoring kube-prometheus-stack-grafana-695b54cfb9-kd98m -o yaml

    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        checksum/config: 150468de60c6174a544b936c6f0583a6fecd915f48ac4a20dc1b60535d6fe47d
        checksum/dashboards-json-config: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
        checksum/sc-dashboard-provider-config: a4767dcf5556c8eec8a3dbfcbd7fd43d7e4747c0f3d1587046a013c0424ab135
        cni.projectcalico.org/containerID: 8aed13d89075279005c8f5ad68acdf2093229676f7ed32471f6f97107b4f9dac
        cni.projectcalico.org/podIP: 192.168.0.46/32
        cni.projectcalico.org/podIPs: 192.168.0.46/32
        linkerd.io/created-by: linkerd/proxy-injector stable-2.11.4
        linkerd.io/identity-mode: default
        linkerd.io/proxy-version: ""
      creationTimestamp: "2022-07-19T10:23:14Z"
      generateName: kube-prometheus-stack-grafana-695b54cfb9-
      labels:
        app.kubernetes.io/instance: kube-prometheus-stack
        app.kubernetes.io/name: grafana
        linkerd.io/control-plane-ns: linkerd
        linkerd.io/proxy-deployment: kube-prometheus-stack-grafana
        linkerd.io/workload-ns: monitoring
        pod-template-hash: 695b54cfb9
      name: kube-prometheus-stack-grafana-695b54cfb9-kd98m
      namespace: monitoring
      ownerReferences:
      - apiVersion: apps/v1
        blockOwnerDeletion: true
        controller: true
        kind: ReplicaSet
        name: kube-prometheus-stack-grafana-695b54cfb9
        uid: 415dc535-3a42-4391-865e-58d1358ff643
      resourceVersion: "10704"
      uid: 8c08f16a-369c-40a0-9ab2-7af650359d0d
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: _pod_name
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: _pod_ns
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: _pod_nodeName
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        - name: LINKERD2_PROXY_LOG
          value: warn,linkerd=info
        - name: LINKERD2_PROXY_LOG_FORMAT
          value: plain
        - name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
          value: linkerd-dst-headless.linkerd.svc.cluster.local.:8086
        - name: LINKERD2_PROXY_DESTINATION_PROFILE_NETWORKS
          value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16
        - name: LINKERD2_PROXY_POLICY_SVC_ADDR
          value: linkerd-policy.linkerd.svc.cluster.local.:8090
        - name: LINKERD2_PROXY_POLICY_WORKLOAD
          value: $(_pod_ns):$(_pod_name)
        - name: LINKERD2_PROXY_INBOUND_DEFAULT_POLICY
          value: all-unauthenticated
        - name: LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS
          value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16
        - name: LINKERD2_PROXY_INBOUND_CONNECT_TIMEOUT
          value: 100ms
        - name: LINKERD2_PROXY_OUTBOUND_CONNECT_TIMEOUT
          value: 1000ms
        - name: LINKERD2_PROXY_CONTROL_LISTEN_ADDR
          value: 0.0.0.0:4190
        - name: LINKERD2_PROXY_ADMIN_LISTEN_ADDR
          value: 0.0.0.0:4191
        - name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
          value: 127.0.0.1:4140
        - name: LINKERD2_PROXY_INBOUND_LISTEN_ADDR
          value: 0.0.0.0:4143
        - name: LINKERD2_PROXY_INBOUND_IPS
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIPs
        - name: LINKERD2_PROXY_INBOUND_PORTS
          value: 80,3000
        - name: LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES
          value: svc.cluster.local.
        - name: LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE
          value: 10000ms
        - name: LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE
          value: 10000ms
        - name: LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION
          value: 25,587,3306,4444,5432,6379,9300,11211
        - name: LINKERD2_PROXY_DESTINATION_CONTEXT
          value: |
            {"ns":"$(_pod_ns)", "nodeName":"$(_pod_nodeName)"}
        - name: _pod_sa
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.serviceAccountName
        - name: _l5d_ns
          value: linkerd
        - name: _l5d_trustdomain
          value: cluster.local
        - name: LINKERD2_PROXY_IDENTITY_DIR
          value: /var/run/linkerd/identity/end-entity
        - name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
          value: |
            -----BEGIN CERTIFICATE-----
            MIIBszCCAVigAwIBAgIRALPxixnwPWoSbMcRZyfm7mcwCgYIKoZIzj0EAwIwKTEn
            MCUGA1UEAxMeaWRlbnRpdHkubGlua2VyZC5jbHVzdGVyLmxvY2FsMB4XDTIyMDcx
            OTEwMTMyNloXDTI0MDcxODEwMTMyNlowKTEnMCUGA1UEAxMeaWRlbnRpdHkubGlu
            a2VyZC5jbHVzdGVyLmxvY2FsMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEy0Xe
            5T6aw4tpl3B+HLSmHOMoLpv4aVKesm91zag1ds7Q7cQ3bPAJmBCoDVD0buz2RqAK
            HymbJ01n/nbzqelBHqNhMF8wDgYDVR0PAQH/BAQDAgEGMB0GA1UdJQQWMBQGCCsG
            AQUFBwMBBggrBgEFBQcDAjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRXTWg/
            H1NVcT5OYCjug0ILqVlLRjAKBggqhkjOPQQDAgNJADBGAiEAsBzkn4qXIQhPwmOa
            II04n07rTeYFHQcdsVxeWBvJ4XgCIQCfX0qVR27PrxV0b59aKVG4MyImG4q6cRPK
            Y3mmGMVWpg==
            -----END CERTIFICATE-----
        - name: LINKERD2_PROXY_IDENTITY_TOKEN_FILE
          value: /var/run/secrets/kubernetes.io/serviceaccount/token
        - name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
          value: linkerd-identity-headless.linkerd.svc.cluster.local.:8080
        - name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
          value: $(_pod_sa).$(_pod_ns).serviceaccount.identity.linkerd.cluster.local
        - name: LINKERD2_PROXY_IDENTITY_SVC_NAME
          value: linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local
        - name: LINKERD2_PROXY_DESTINATION_SVC_NAME
          value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
        - name: LINKERD2_PROXY_POLICY_SVC_NAME
          value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
        image: cr.l5d.io/linkerd/proxy:stable-2.11.4
        imagePullPolicy: IfNotPresent
        lifecycle:
          postStart:
            exec:
              command:
              - /usr/lib/linkerd/linkerd-await
              - --timeout=2m
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /live
            port: 4191
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: linkerd-proxy
        ports:
        - containerPort: 4143
          name: linkerd-proxy
          protocol: TCP
        - containerPort: 4191
          name: linkerd-admin
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /ready
            port: 4191
            scheme: HTTP
          initialDelaySeconds: 2
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 50m
            memory: 128Mi
          requests:
            cpu: 50m
            memory: 128Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsUser: 2102
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - mountPath: /var/run/linkerd/identity/end-entity
          name: linkerd-identity-end-entity
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-6vklw
          readOnly: true
      - env:
        - name: METHOD
          value: WATCH
        - name: LABEL
          value: grafana_dashboard
        - name: LABEL_VALUE
          value: "1"
        - name: FOLDER
          value: /tmp/dashboards
        - name: RESOURCE
          value: both
        - name: NAMESPACE
          value: ALL
        image: quay.io/kiwigrid/k8s-sidecar:1.15.6
        imagePullPolicy: IfNotPresent
        name: grafana-sc-dashboard
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /tmp/dashboards
          name: sc-dashboard-volume
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-6vklw
          readOnly: true
      - env:
        - name: METHOD
          value: WATCH
        - name: LABEL
          value: grafana_datasource
        - name: LABEL_VALUE
          value: "1"
        - name: FOLDER
          value: /etc/grafana/provisioning/datasources
        - name: RESOURCE
          value: both
        - name: REQ_USERNAME
          valueFrom:
            secretKeyRef:
              key: admin_user
              name: azure-kvname-user-ms-grafana
        - name: REQ_PASSWORD
          valueFrom:
            secretKeyRef:
              key: admin_password
              name: azure-kvname-user-ms-grafana
        - name: REQ_URL
          value: http://localhost:3000/api/admin/provisioning/datasources/reload
        - name: REQ_METHOD
          value: POST
        image: quay.io/kiwigrid/k8s-sidecar:1.15.6
        imagePullPolicy: IfNotPresent
        name: grafana-sc-datasources
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/grafana/provisioning/datasources
          name: sc-datasources-volume
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-6vklw
          readOnly: true
      - env:
        - name: GF_SECURITY_ADMIN_USER
          valueFrom:
            secretKeyRef:
              key: admin_user
              name: azure-kvname-user-ms-grafana
        - name: GF_SECURITY_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: admin_password
              name: azure-kvname-user-ms-grafana
        - name: GF_SMTP_USER
          valueFrom:
            secretKeyRef:
              key: smtp_user
              name: azure-kvname-user-ms-grafana
        - name: GF_SMTP_PASSWORD
          valueFrom:
            secretKeyRef:
              key: smtp_password
              name: azure-kvname-user-ms-grafana
        - name: GF_PATHS_DATA
          value: /var/lib/grafana/
        - name: GF_PATHS_LOGS
          value: /var/log/grafana
        - name: GF_PATHS_PLUGINS
          value: /var/lib/grafana/plugins
        - name: GF_PATHS_PROVISIONING
          value: /etc/grafana/provisioning
        - name: GF_AUTH_DATASOURCE_DATABASE
          value: takecare_dev
        - name: GF_AUTH_DATASOURCE_URL
          value: takecare-cacn-nprod.postgres.database.azure.com
        - name: GF_AUTH_DATASOURCE_USER
          value: grafana_dev
        - name: GF_DATASOURCE_ISDEFAULT
          value: "false"
        - name: GF_FROM_ADDRESS
          value: ******@greybox.biz
        - name: GF_SMTP_HOST
          value: smtp.sendgrid.net:465
        envFrom:
        - secretRef:
            name: azure-kvname-user-ms-grafana-azuread
            optional: false
        image: grafana/grafana:8.3.6
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 10
          httpGet:
            path: /api/health
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 30
        name: grafana
        ports:
        - containerPort: 80
          name: http-web
          protocol: TCP
        - containerPort: 3000
          name: grafana
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /api/health
            port: 3000
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            cpu: 50m
            memory: 128Mi
          requests:
            cpu: 50m
            memory: 128Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/grafana/grafana.ini
          name: config
          subPath: grafana.ini
        - mountPath: /var/lib/grafana
          name: storage
        - mountPath: /tmp/dashboards
          name: sc-dashboard-volume
        - mountPath: /etc/grafana/provisioning/dashboards/sc-dashboardproviders.yaml
          name: sc-dashboard-provider
          subPath: provider.yaml
        - mountPath: /etc/grafana/provisioning/datasources
          name: sc-datasources-volume
        - mountPath: /mnt/secrets-store
          name: secrets-store
          readOnly: true
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-6vklw
          readOnly: true
      dnsPolicy: ClusterFirst
      enableServiceLinks: true
      initContainers:
      - command:
        - chown
        - -R
        - 65534:472
        - /var/lib/grafana
        image: busybox:1.31.1
        imagePullPolicy: IfNotPresent
        name: init-chown-data
        resources: {}
        securityContext:
          runAsNonRoot: false
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: storage
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-6vklw
          readOnly: true
      - args:
        - --incoming-proxy-port
        - "4143"
        - --outgoing-proxy-port
        - "4140"
        - --proxy-uid
        - "2102"
        - --inbound-ports-to-ignore
        - 4190,4191,4567,4568
        - --outbound-ports-to-ignore
        - 4567,4568
        image: cr.l5d.io/linkerd/proxy-init:v1.5.3
        imagePullPolicy: IfNotPresent
        name: linkerd-init
        resources:
          limits:
            cpu: 50m
            memory: 20Mi
          requests:
            cpu: 50m
            memory: 20Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
          readOnlyRootFilesystem: true
          runAsNonRoot: false
          runAsUser: 0
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
        volumeMounts:
        - mountPath: /run
          name: linkerd-proxy-init-xtables-lock
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access-6vklw
          readOnly: true
      nodeName: aks-devext01-57764890-vmss000000
      preemptionPolicy: PreemptLowerPriority
      priority: 0
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 472
        runAsGroup: 472
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccount: kube-prometheus-stack-grafana
      serviceAccountName: kube-prometheus-stack-grafana
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
        tolerationSeconds: 300
      - effect: NoExecute
        key: node.kubernetes.io/unreachable
        operator: Exists
        tolerationSeconds: 300
      - effect: NoSchedule
        key: node.kubernetes.io/memory-pressure
        operator: Exists
      volumes:
      - configMap:
          defaultMode: 420
          name: kube-prometheus-stack-grafana
        name: config
      - name: storage
        persistentVolumeClaim:
          claimName: grafana-files
      - emptyDir: {}
        name: sc-dashboard-volume
      - configMap:
          defaultMode: 420
          name: kube-prometheus-stack-grafana-config-dashboards
        name: sc-dashboard-provider
      - emptyDir: {}
        name: sc-datasources-volume
      - csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: azure-kvname-user-msi-provider-grafana
        name: secrets-store
      - name: kube-api-access-6vklw
        projected:
          defaultMode: 420
          sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
              - key: ca.crt
                path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
      - emptyDir: {}
        name: linkerd-proxy-init-xtables-lock
      - emptyDir:
          medium: Memory
        name: linkerd-identity-end-entity
    status:
      conditions:
      - lastProbeTime: null
        lastTransitionTime: "2022-07-19T10:24:59Z"
        status: "True"
        type: Initialized
      - lastProbeTime: null
        lastTransitionTime: "2022-07-19T10:25:24Z"
        status: "True"
        type: Ready
      - lastProbeTime: null
        lastTransitionTime: "2022-07-19T10:25:24Z"
        status: "True"
        type: ContainersReady
      - lastProbeTime: null
        lastTransitionTime: "2022-07-19T10:23:14Z"
        status: "True"
        type: PodScheduled
      containerStatuses:
      - containerID: containerd://1129539230486338bd74c2f1666374cfc8781efebac8be1b6d548baa8a93c99a
        image: docker.io/grafana/grafana:8.3.6
        imageID: docker.io/grafana/grafana@sha256:5b71534e0a0329f243994a09340db6625b55a33ae218d71e34ec73f824ec1e48
        lastState: {}
        name: grafana
        ready: true
        restartCount: 0
        started: true
        state:
          running:
            startedAt: "2022-07-19T10:25:12Z"
      - containerID: containerd://8313440d72175d5c79a9934d13a14d3ad32482e5c277868200ab501805d649b5
        image: quay.io/kiwigrid/k8s-sidecar:1.15.6
        imageID: quay.io/kiwigrid/k8s-sidecar@sha256:1f025ae37b7b20d63bffd179e5e6f972039dd53d9646388c0a8c456229c7bbcb
        lastState: {}
        name: grafana-sc-dashboard
        ready: true
        restartCount: 0
        started: true
        state:
          running:
            startedAt: "2022-07-19T10:25:04Z"
      - containerID: containerd://ccf3a8fbd84cfff831844b5088d0ebf1cea5d668f0f86a419a0aec54331c66bd
        image: quay.io/kiwigrid/k8s-sidecar:1.15.6
        imageID: quay.io/kiwigrid/k8s-sidecar@sha256:1f025ae37b7b20d63bffd179e5e6f972039dd53d9646388c0a8c456229c7bbcb
        lastState: {}
        name: grafana-sc-datasources
        ready: true
        restartCount: 0
        started: true
        state:
          running:
            startedAt: "2022-07-19T10:25:04Z"
      - containerID: containerd://adbc550b9043103858b2daf64a1036e8ff9901b40c1d205fbbb6912b74790638
        image: cr.l5d.io/linkerd/proxy:stable-2.11.4
        imageID: cr.l5d.io/linkerd/proxy@sha256:7119826e266625add8eebd8aafda98e24773842b6489390f6037641a40a6b72d
        lastState: {}
        name: linkerd-proxy
        ready: true
        restartCount: 0
        started: true
        state:
          running:
            startedAt: "2022-07-19T10:25:00Z"
      hostIP: 10.241.0.4
      initContainerStatuses:
      - containerID: containerd://ea6d0d0e1c9401efd40c9d5243f9f43172eee7b7f290010e344c2f08b937de06
        image: docker.io/library/busybox:1.31.1
        imageID: docker.io/library/busybox@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209
        lastState: {}
        name: init-chown-data
        ready: true
        restartCount: 0
        state:
          terminated:
            containerID: containerd://ea6d0d0e1c9401efd40c9d5243f9f43172eee7b7f290010e344c2f08b937de06
            exitCode: 0
            finishedAt: "2022-07-19T10:24:49Z"
            reason: Completed
            startedAt: "2022-07-19T10:24:49Z"
      - containerID: containerd://f46a8f47c2d68fe69be96ef904bf9afba4a3dcdff99e2d0e5d5317d292133b8a
        image: cr.l5d.io/linkerd/proxy-init:v1.5.3
        imageID: cr.l5d.io/linkerd/proxy-init@sha256:66eddbca64f0490d89df97e5c7e9f265b34928fc77a664a0237b9a00c4387e21
        lastState: {}
        name: linkerd-init
        ready: true
        restartCount: 0
        state:
          terminated:
            containerID: containerd://f46a8f47c2d68fe69be96ef904bf9afba4a3dcdff99e2d0e5d5317d292133b8a
            exitCode: 0
            finishedAt: "2022-07-19T10:24:58Z"
            reason: Completed
            startedAt: "2022-07-19T10:24:58Z"
      phase: Running
      podIP: 192.168.0.46
      podIPs:
      - ip: 192.168.0.46
      qosClass: Burstable
      startTime: "2022-07-19T10:23:14Z"
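
    For easier comparison than the full YAML, the images of this pod's containers and init containers can also be listed directly (a jsonpath sketch using standard kubectl syntax):

    # Print this pod's init-container and container images, one per line
    kubectl get pod -n monitoring kube-prometheus-stack-grafana-695b54cfb9-kd98m \
      -o jsonpath='{range .spec.initContainers[*]}{.image}{"\n"}{end}{range .spec.containers[*]}{.image}{"\n"}{end}'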
