NC A100 v4-series


This article references CentOS, a Linux distribution that has reached End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the CentOS End Of Life guidance.

Applies to: ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets

The NC A100 v4 series virtual machine (VM) is a new addition to the Azure GPU family. You can use this series for real-world Azure Applied AI training and batch inference workloads.

The NC A100 v4 series is powered by NVIDIA A100 PCIe GPUs and third-generation AMD EPYC™ 7V13 (Milan) processors. The VMs feature up to four NVIDIA A100 PCIe GPUs with 80 GB of memory each, up to 96 non-multithreaded AMD EPYC Milan processor cores, and 880 GiB of system memory. These VMs are ideal for real-world Applied AI workloads, such as:

  • GPU-accelerated analytics and databases
  • Batch inferencing with heavy pre- and post-processing
  • Autonomy model training
  • Oil and gas reservoir simulation
  • Machine learning (ML) development
  • Video processing
  • AI/ML web services

Supported features

To get started with NC A100 v4 VMs, refer to HPC Workload Configuration and Optimization for steps including driver and network configuration.

Due to its increased GPU memory I/O footprint, the NC A100 v4 series requires Generation 2 VMs and marketplace images. The Azure HPC images are recommended; the Azure HPC Ubuntu 20.04, Azure HPC CentOS 7.9, RHEL 8.8, RHEL 9.2, Windows Server 2019, and Windows Server 2022 images are supported.
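After deploying from one of the images above, it's worth confirming that the NVIDIA driver sees every GPU before launching workloads. The following sketch is a hypothetical post-deployment check (the function name and expected counts are this example's assumptions; the expected count is 1, 2, or 4 depending on VM size):

```shell
# Hypothetical check: confirm the NVIDIA driver enumerates all A100 GPUs.
check_gpus() {
  expected="$1"
  if ! command -v nvidia-smi >/dev/null 2>&1; then
    echo "nvidia-smi not found: install the NVIDIA driver (e.g. use an Azure HPC image) first"
    return 1
  fi
  # nvidia-smi -L prints one "GPU N: ..." line per visible device
  found="$(nvidia-smi -L | grep -c '^GPU')"
  if [ "$found" -eq "$expected" ]; then
    echo "OK: $found GPU(s) visible"
  else
    echo "WARNING: expected $expected GPU(s), found $found"
    return 1
  fi
}

# Example for a Standard_NC96ads_A100_v4 (4 GPUs):
check_gpus 4 || echo "fix the driver before running GPU workloads"
```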

| Size | vCPU | Memory (GiB) | Temp disk1 (GiB) | NVMe disks2 | GPU3 | GPU memory (GiB) | Max data disks | Max uncached disk throughput (IOPS/MBps) | Max NICs / network bandwidth (Mbps) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Standard_NC24ads_A100_v4 | 24 | 220 | 64 | 960 GB | 1 | 80 | 8 | 30,000 / 1,000 | 2 / 20,000 |
| Standard_NC48ads_A100_v4 | 48 | 440 | 128 | 2x960 GB | 2 | 160 | 16 | 60,000 / 2,000 | 4 / 40,000 |
| Standard_NC96ads_A100_v4 | 96 | 880 | 256 | 4x960 GB | 4 | 320 | 32 | 120,000 / 4,000 | 8 / 80,000 |

1 NC A100 v4 series VMs have a standard SCSI-based temp resource disk for OS paging/swap file use. This disk ensures that the NVMe drives can be fully dedicated to application use. The disk is ephemeral; all data on it is lost if you stop or deallocate the VM.

2 Local NVMe disks are ephemeral; data on these disks is lost if you stop or deallocate your VM. Local NVMe disks are presented raw (unformatted) and must be manually formatted and mounted in a newly deployed VM.

3 1 GPU = one A100 80 GB PCIe GPU card

Size table definitions

  • Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.

  • Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.

  • Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.

  • To learn how to get the best storage performance for your VMs, see Virtual machine and disk performance.

  • Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see Virtual machine network bandwidth.

    Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see Optimize network throughput for Azure virtual machines. To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see Bandwidth/Throughput testing (NTTTCP).
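The GiB-versus-GB distinction in the definitions above is easy to check numerically; this small helper (an illustration, not part of any Azure tooling) applies the stated conversion:

```shell
# Check the GiB-to-GB conversion used in the size table:
#   capacity_GB = capacity_GiB * 1024^3 / 1000^3
gib_to_gb() { awk -v g="$1" 'BEGIN { printf "%.1f\n", g * 1024^3 / 1000^3 }'; }

gib_to_gb 1023   # prints 1098.4  (1023 GiB ≈ 1098.4 GB, matching the example above)
gib_to_gb 220    # Standard_NC24ads_A100_v4 system memory expressed in GB
```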

Other sizes and information

You can use the pricing calculator to estimate your Azure VM costs.

For more information on disk types, see What disk types are available in Azure?

Next step