Use GPU workloads with Azure Red Hat OpenShift

This article shows you how to use Nvidia GPU workloads with Azure Red Hat OpenShift (ARO).


Prerequisites

  • OpenShift CLI (oc)
  • The jq, moreutils, and gettext packages
  • Azure Red Hat OpenShift 4.10

If you need to install an ARO cluster, see Tutorial: Create an Azure Red Hat OpenShift 4 cluster. ARO clusters must be version 4.10.x or higher.


As of ARO 4.10, it is no longer necessary to set up entitlements to use the Nvidia Operator. This has greatly simplified the setup of the cluster for GPU workloads.


Linux:

sudo dnf install jq moreutils gettext


macOS:

brew install jq moreutils gettext
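Before proceeding, you can confirm that the helpers the later steps rely on (jq, sponge from moreutils, envsubst from gettext, and oc) are on your PATH. This loop is a simple sketch, not part of the official procedure:

```shell
# Report whether each required CLI helper is installed
for tool in jq sponge envsubst oc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```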

Request GPU quota

All GPU quotas in Azure are 0 by default. You will need to sign in to the Azure portal and request GPU quota. Because GPU workers are in high demand, you may have to provision an ARO cluster in a region where GPU capacity is actually available.

ARO supports the following GPU workers:

  • NC4as T4 v3
  • NC6s v3
  • NC8as T4 v3
  • NC12s v3
  • NC16as T4 v3
  • NC24s v3
  • NC24rs v3
  • NC64as T4 v3

The following instances are also supported in additional MachineSets:

  • Standard_ND96asr_v4
  • NC24ads_A100_v4
  • NC48ads_A100_v4
  • NC96ads_A100_v4
  • ND96amsr_A100_v4


When requesting quota, remember that Azure quota is counted per vCPU core. To request a single NC4as T4 v3 node (4 vCPUs), you will need to request quota in increments of 4. To request an NC16as T4 v3, you will need to request a quota of 16.
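Put differently, the number to request is the VM size's vCPU count multiplied by the number of GPU nodes you plan to run. A quick sketch (the VM size and node count below are example values, not from the article):

```shell
# Quota to request = vCPUs per VM * number of GPU nodes
CORES_PER_VM=4     # Standard_NC4as_T4_v3 has 4 vCPUs
DESIRED_NODES=2    # example node count
echo "$(( CORES_PER_VM * DESIRED_NODES ))"   # total vCPU quota to request
```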

  1. Sign in to Azure portal.

  2. Enter quotas in the search box, then select Compute.

  3. In the search box, enter NCAsv3_T4, check the box for the region your cluster is in, and then select Request quota increase.

  4. Configure quota.

    Screenshot of quotas page on Azure portal.

Sign in to your ARO cluster

Sign in to OpenShift with a user account that has cluster-admin privileges. The example below uses an account named kubeadmin:

oc login <apiserver> -u kubeadmin -p <kubeadminpass>

Pull secret (conditional)

Update your pull secret to make sure you can install operators and connect to console.redhat.com.


Skip this step if you have already recreated a full pull secret with console.redhat.com enabled.

  1. Log in to console.redhat.com.

  2. Browse to the Azure Red Hat OpenShift downloads page on console.redhat.com.

  3. Select Download pull secret and save the pull secret as pull-secret.txt.


    The remaining steps in this section must be run in the same working directory as pull-secret.txt.

  4. Export the existing pull secret.

    oc get secret pull-secret -n openshift-config -o json | jq -r '.data.".dockerconfigjson"' | base64 --decode > export-pull.json
  5. Merge the downloaded pull secret with the system pull secret to add console.redhat.com.

    jq -s '.[0] * .[1]' export-pull.json pull-secret.txt | tr -d "\n\r" > new-pull-secret.json
  6. Upload the new secret file.

    oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new-pull-secret.json

    You may need to wait about 1 hour for everything to sync up with console.redhat.com.

  7. Delete secrets.

    rm pull-secret.txt export-pull.json new-pull-secret.json
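The `jq -s '.[0] * .[1]'` invocation above performs a recursive object merge. This throwaway demonstration (with hypothetical registry names, not real pull-secret data) shows how entries from both files end up under a single auths key:

```shell
# Two minimal pull-secret-shaped files (hypothetical registries)
echo '{"auths":{"registry.a.example":{"auth":"AAA"}}}' > a.json
echo '{"auths":{"registry.b.example":{"auth":"BBB"}}}' > b.json

# -s slurps both documents into an array; * deep-merges the two objects
jq -cs '.[0] * .[1]' a.json b.json
# → {"auths":{"registry.a.example":{"auth":"AAA"},"registry.b.example":{"auth":"BBB"}}}

rm a.json b.json
```

The procedure above uses `tr -d "\n\r"` instead of jq's `-c` flag to flatten the output; the resulting JSON is equivalent.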

GPU machine set

ARO uses Kubernetes MachineSet objects to create machine sets. The procedure below explains how to export the first machine set in a cluster and use it as a template to build a single GPU machine set.

  1. View existing machine sets.

    For ease of setup, this example uses the first machine set as the one to clone to create a new GPU machine set.

    MACHINESET=$(oc get machineset -n openshift-machine-api -o=jsonpath='{.items[0]}' | jq -r '[.metadata.name] | @tsv')
  2. Save a copy of the example machine set.

    oc get machineset -n openshift-machine-api $MACHINESET -o json > gpu_machineset.json
  3. Change the .metadata.name field to a new unique name.

    jq '.metadata.name = "nvidia-worker-<region><az>"' gpu_machineset.json | sponge gpu_machineset.json
  4. Ensure spec.replicas matches the desired replica count for the machine set.

    jq '.spec.replicas = 1' gpu_machineset.json | sponge gpu_machineset.json
  5. Change the .spec.selector.matchLabels."machine.openshift.io/cluster-api-machineset" field to match the .metadata.name field.

    jq '.spec.selector.matchLabels."machine.openshift.io/cluster-api-machineset" = "nvidia-worker-<region><az>"' gpu_machineset.json | sponge gpu_machineset.json
  6. Change the .spec.template.metadata.labels."machine.openshift.io/cluster-api-machineset" label to match the .metadata.name field.

    jq '.spec.template.metadata.labels."machine.openshift.io/cluster-api-machineset" = "nvidia-worker-<region><az>"' gpu_machineset.json | sponge gpu_machineset.json
  7. Change the spec.template.spec.providerSpec.value.vmSize to match the desired GPU instance type from Azure.

    The machine used in this example is Standard_NC4as_T4_v3.

    jq '.spec.template.spec.providerSpec.value.vmSize = "Standard_NC4as_T4_v3"' gpu_machineset.json | sponge gpu_machineset.json
  8. Change the .spec.template.spec.providerSpec.value.zone to match the desired zone from Azure.

    jq '.spec.template.spec.providerSpec.value.zone = "1"' gpu_machineset.json | sponge gpu_machineset.json
  9. Delete the .status section of the machine set file.

    jq 'del(.status)' gpu_machineset.json | sponge gpu_machineset.json
  10. Verify the remaining data in the file.
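The jq edits above can be exercised offline on a toy MachineSet-shaped document (the field values below are hypothetical) to confirm they do what you expect before touching the real export:

```shell
# Toy document containing the fields the steps above modify
cat > toy_machineset.json <<'EOF'
{"metadata":{"name":"worker-eastus1"},"spec":{"replicas":3},"status":{"readyReplicas":3}}
EOF

# Rename, set one replica, and drop .status in a single pass
jq -c '.metadata.name = "nvidia-worker-eastus1"
       | .spec.replicas = 1
       | del(.status)' toy_machineset.json
# → {"metadata":{"name":"nvidia-worker-eastus1"},"spec":{"replicas":1}}

rm toy_machineset.json
```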

Create GPU machine set

Use the following steps to create the new GPU machine. It may take 10-15 minutes to provision a new GPU machine. If this step fails, sign in to Azure portal and ensure there are no availability issues. To do so, go to Virtual Machines and search for the worker name you created previously to see the status of VMs.

  1. Create the GPU Machine set.

    oc create -f gpu_machineset.json

    This command will take a few minutes to complete.

  2. Verify GPU machine set.

    Machines should be deploying. You can view the status of the machine set with the following commands:

    oc get machineset -n openshift-machine-api
    oc get machine -n openshift-machine-api

    Once the machines are provisioned (which could take 5-15 minutes), machines will show as nodes in the node list:

    oc get nodes

    You should see a node with the nvidia-worker-southcentralus1 name that was created previously.

Install Nvidia GPU Operator

This section explains how to create the nvidia-gpu-operator namespace, set up the operator group, and install the Nvidia GPU operator.

  1. Create Nvidia namespace.

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: nvidia-gpu-operator
    EOF
  2. Create Operator Group.

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: nvidia-gpu-operator-group
      namespace: nvidia-gpu-operator
    spec:
      targetNamespaces:
      - nvidia-gpu-operator
    EOF
  3. Get the latest Nvidia channel using the following command:

    CHANNEL=$(oc get packagemanifest gpu-operator-certified -n openshift-marketplace -o jsonpath='{.status.defaultChannel}')


If your cluster was created without providing the pull secret, the cluster won't include samples or operators from Red Hat or from certified partners. This will result in the following error message:

Error from server (NotFound): "gpu-operator-certified" not found.

To add your Red Hat pull secret on an Azure Red Hat OpenShift cluster, follow this guidance.

  1. Get latest Nvidia package using the following command:

    PACKAGE=$(oc get packagemanifests/gpu-operator-certified -n openshift-marketplace -ojson | jq -r '.status.channels[] | select(.name == "'$CHANNEL'") | .currentCSV')
  2. Create Subscription.

    envsubst <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: gpu-operator-certified
      namespace: nvidia-gpu-operator
    spec:
      channel: "$CHANNEL"
      installPlanApproval: Automatic
      name: gpu-operator-certified
      source: certified-operators
      sourceNamespace: openshift-marketplace
      startingCSV: "$PACKAGE"
    EOF
  3. Wait for Operator to finish installing.

    Don't proceed until you have verified that the operator has finished installing. Also, ensure that your GPU worker is online.

    Screenshot of installed operators on namespace.
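The channel and package lookups above are plain jq filters, so the `select()` logic can be exercised offline against a toy packagemanifest status (the channel names and CSV versions below are hypothetical):

```shell
CHANNEL="v22.9"
# Minimal stand-in for .status of a packagemanifest
echo '{"status":{"channels":[
  {"name":"stable","currentCSV":"gpu-operator-certified.v1.0.0"},
  {"name":"v22.9","currentCSV":"gpu-operator-certified.v22.9.0"}]}}' |
jq -r '.status.channels[] | select(.name == "'$CHANNEL'") | .currentCSV'
# → gpu-operator-certified.v22.9.0
```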

Install node feature discovery operator

The node feature discovery operator will discover the GPU on your nodes and appropriately label the nodes so you can target them for workloads.

This example installs the NFD operator into the openshift-nfd namespace and creates the Subscription, which is the configuration for NFD.

Official Documentation for Installing Node Feature Discovery Operator.

  1. Set up Namespace.

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-nfd
    EOF
  2. Create OperatorGroup.

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      generateName: openshift-nfd-
      name: openshift-nfd
      namespace: openshift-nfd
    EOF
  3. Create Subscription.

    cat <<EOF | oc apply -f -
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: nfd
      namespace: openshift-nfd
    spec:
      channel: "stable"
      installPlanApproval: Automatic
      name: nfd
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    EOF
  4. Wait for Node Feature discovery to complete installation.

    You can log in to your OpenShift console to view operators or simply wait a few minutes. Failure to wait for the operator to install will result in an error in the next step.

  5. Create NFD Instance.

    cat <<EOF | oc apply -f -
    apiVersion: nfd.openshift.io/v1
    kind: NodeFeatureDiscovery
    metadata:
      name: nfd-instance
      namespace: openshift-nfd
    spec:
      customConfig:
        configData: |
          #    - name: "more.kernel.features"
          #      matchOn:
          #      - loadedKMod: ["example_kmod3"]
          #    - name: ""
          #      value: customValue
          #      matchOn:
          #      - nodename: ["special-.*-node-.*"]
      operand:
        image: >-

        servicePort: 12000
      workerConfig:
        configData: |
          core:
          #  labelWhiteList:
          #  noPublish: false
            sleepInterval: 60s
          #  sources: [all]
          #  klog:
          #    addDirHeader: false
          #    alsologtostderr: false
          #    logBacktraceAt:
          #    logtostderr: true
          #    skipHeaders: false
          #    stderrthreshold: 2
          #    v: 0
          #    vmodule:
          ##   NOTE: the following options are not dynamically run-time
          ##          configurable and require a nfd-worker restart to take effect
          ##          after being changed
          #    logDir:
          #    logFile:
          #    logFileMaxSize: 1800
          #    skipLogHeaders: false
          sources:
          #  cpu:
          #    cpuid:
          ##     NOTE: attributeWhitelist has priority over attributeBlacklist
          #      attributeBlacklist:
          #        - "BMI1"
          #        - "BMI2"
          #        - "CLMUL"
          #        - "CMOV"
          #        - "CX16"
          #        - "ERMS"
          #        - "F16C"
          #        - "HTT"
          #        - "LZCNT"
          #        - "MMX"
          #        - "MMXEXT"
          #        - "NX"
          #        - "POPCNT"
          #        - "RDRAND"
          #        - "RDSEED"
          #        - "RDTSCP"
          #        - "SGX"
          #        - "SSE"
          #        - "SSE2"
          #        - "SSE3"
          #        - "SSE4.1"
          #        - "SSE4.2"
          #        - "SSSE3"
          #      attributeWhitelist:
          #  kernel:
          #    kconfigFile: "/path/to/kconfig"
          #    configOpts:
          #      - "NO_HZ"
          #      - "X86"
          #      - "DMI"
            pci:
              deviceClassWhitelist:
                - "0200"
                - "03"
                - "12"
              deviceLabelFields:
          #      - "class"
                - "vendor"
          #      - "device"
          #      - "subsystem_vendor"
          #      - "subsystem_device"
          #  usb:
          #    deviceClassWhitelist:
          #      - "0e"
          #      - "ef"
          #      - "fe"
          #      - "ff"
          #    deviceLabelFields:
          #      - "class"
          #      - "vendor"
          #      - "device"
          #  custom:
          #    - name: "my.kernel.feature"
          #      matchOn:
          #        - loadedKMod: ["example_kmod1", "example_kmod2"]
          #    - name: "my.pci.feature"
          #      matchOn:
          #        - pciId:
          #            class: ["0200"]
          #            vendor: ["15b3"]
          #            device: ["1014", "1017"]
          #        - pciId :
          #            vendor: ["8086"]
          #            device: ["1000", "1100"]
          #    - name: "my.usb.feature"
          #      matchOn:
          #        - usbId:
          #          class: ["ff"]
          #          vendor: ["03e7"]
          #          device: ["2485"]
          #        - usbId:
          #          class: ["fe"]
          #          vendor: ["1a6e"]
          #          device: ["089a"]
          #    - name: "my.combined.feature"
          #      matchOn:
          #        - pciId:
          #            vendor: ["15b3"]
          #            device: ["1014", "1017"]
          #          loadedKMod : ["vendor_kmod1", "vendor_kmod2"]
    EOF
  6. Verify that NFD is ready.

    The status of this operator should show as Available.

    Screenshot of node feature discovery operator.

Apply Nvidia Cluster Config

This section explains how to apply the Nvidia cluster config. Please read the Nvidia documentation on customizing this if you have your own private repos or specific settings. This process may take several minutes to complete.

  1. Apply cluster config.

    cat <<EOF | oc apply -f -
    apiVersion: nvidia.com/v1
    kind: ClusterPolicy
    metadata:
      name: gpu-cluster-policy
    spec:
      migManager:
        enabled: true
      operator:
        defaultRuntime: crio
        initContainer: {}
        runtimeClass: nvidia
        deployGFD: true
      dcgm:
        enabled: true
      gfd: {}
      dcgmExporter:
        config:
          name: ''
      driver:
        licensingConfig:
          nlsEnabled: false
          configMapName: ''
        certConfig:
          name: ''
        kernelModuleConfig:
          name: ''
        repoConfig:
          configMapName: ''
        virtualTopology:
          config: ''
        enabled: true
        use_ocp_driver_toolkit: true
      devicePlugin: {}
      mig:
        strategy: single
      validator:
        plugin:
          env:
            - name: WITH_WORKLOAD
              value: 'true'
      nodeStatusExporter:
        enabled: true
      daemonsets: {}
      toolkit:
        enabled: true
    EOF
  2. Verify cluster policy.

    Log in to the OpenShift console and browse to operators. Ensure you're in the nvidia-gpu-operator namespace. It should say State: Ready once everything is complete.

    Screenshot of existing cluster policies on OpenShift console.

Validate GPU

It may take some time for the Nvidia Operator and NFD to completely install and self-identify the machines. Run the following commands to validate that everything is running as expected:

  1. Verify that NFD can see your GPU(s).

    oc describe node | egrep 'Roles|pci-10de' | grep -v master

    The output should appear similar to the following:

    Roles:              worker
                        feature.node.kubernetes.io/pci-10de.present=true
  2. Verify node labels.

    You can see the node labels by logging into the OpenShift console -> Compute -> Nodes -> nvidia-worker-southcentralus1-. You should see multiple Nvidia GPU labels and the pci-10de device from above.

    Screenshot of GPU labels on OpenShift console.

  3. Nvidia SMI tool verification.

    oc project nvidia-gpu-operator
    for i in $(oc get pod -lopenshift.driver-toolkit=true --no-headers |awk '{print $1}'); do echo $i; oc exec -it $i -- nvidia-smi ; echo -e '\n' ;  done

    You should see output that shows the GPUs available on the host, as in the following example screenshot. (Output varies depending on the GPU worker type.)

    Screenshot of output showing available GPUs.

  4. Create a pod to run a GPU workload.

    oc project nvidia-gpu-operator
    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: cuda-vector-add
    spec:
      restartPolicy: OnFailure
      containers:
        - name: cuda-vector-add
          image: ""
    EOF
  5. View logs.

    oc logs cuda-vector-add --tail=-1


If you get an error Error from server (BadRequest): container "cuda-vector-add" in pod "cuda-vector-add" is waiting to start: ContainerCreating, try running oc delete pod cuda-vector-add and then re-run the create statement above.

The output should be similar to the following (depending on GPU):

[Vector addition of 5000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory

If successful, the pod can be deleted:

oc delete pod cuda-vector-add
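Once validation succeeds, other workloads can consume GPUs the same way. The following is a hedged sketch of a container spec fragment (the container name and image are hypothetical, not from this article) that requests a single GPU through the extended resource advertised by the Nvidia device plugin:

```yaml
# Fragment of a pod or deployment container spec
containers:
  - name: my-gpu-workload                  # hypothetical name
    image: my-registry/my-cuda-app:latest  # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1   # schedule onto a node with a free GPU
```

The scheduler only places such pods on nodes where the device plugin reports an allocatable nvidia.com/gpu resource, so no explicit node selector is required for a basic case.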