
'NC' sub-family GPU accelerated VM size series

Applies to: ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets

The 'NC' sub-family of VM size series are one of Azure's GPU-optimized VM instances. They're designed for compute-intensive workloads, such as AI and machine learning model training, high-performance computing (HPC), and graphics-intensive applications. Equipped with powerful NVIDIA GPUs, NC-series VMs offer substantial acceleration for processes that require heavy computational power, including deep learning, scientific simulations, and 3D rendering. This makes them particularly well-suited for industries such as technology research, entertainment, and engineering, where rendering and processing speed are critical to productivity and innovation.

Workloads and use cases

AI and Machine Learning: NC-series VMs are ideal for training complex machine learning models and running AI applications. The NVIDIA GPUs provide significant acceleration for computations typically involved in deep learning and other intensive training tasks.

High-Performance Computing (HPC): These VMs are suitable for scientific simulations, rendering, and other HPC workloads that can be accelerated by GPUs. Fields like engineering, medical research, and financial modeling often use NC-series VMs to handle their computational needs efficiently.

Graphics Rendering: NC-series VMs are also used for graphics-intensive applications, including video editing, 3D rendering, and real-time graphics processing. They are particularly useful in industries such as game development and movie production.

Remote Visualization: For applications requiring high-end visualization capabilities, such as CAD and visual effects, NC-series VMs can provide the necessary GPU power remotely, allowing users to work on complex graphical tasks without needing powerful local hardware.

Simulation and Analysis: These VMs are also suitable for detailed simulations and analyses in areas like automotive crash testing, computational fluid dynamics, and weather modeling, where GPU capabilities can significantly speed up processing times.

Series in family

NC-series V1

Important

NC and NC_Promo series Azure virtual machines (VMs) were retired on September 6, 2023. For more information, see the NC and NC_Promo retirement information. For how to migrate your workloads to other VM sizes, see the GPU compute migration guide.

This retirement announcement doesn't apply to NCv3, NCasT4v3 and NC A100 v4 series VMs.

NC-series VMs are powered by the NVIDIA Tesla K80 card and the Intel Xeon E5-2690 v3 (Haswell) processor. Users can crunch through data faster by using CUDA for energy exploration applications, crash simulations, ray traced rendering, deep learning, and more. The NC24r configuration provides a low latency, high-throughput network interface optimized for tightly coupled parallel computing workloads.

View the full NC-series page.

| Part | Quantity (Count, Units) | Specs (SKU ID, Performance Units, etc.) |
| --- | --- | --- |
| Processor | 6 - 24 vCPUs | Intel Xeon E5-2690 v3 (Haswell) [x86-64] |
| Memory | 56 - 224 GiB | |
| Local Storage | 1 Disk | 340 - 1440 GiB |
| Remote Storage | 24 - 64 Disks | |
| Network | 1 - 4 NICs | |
| Accelerators | 1 - 4 GPUs | Nvidia Tesla K80 GPU (24GB) |

NCads_H100_v5-series

The NCads H100 v5 series virtual machines (VMs) are a new addition to the Azure GPU family. You can use this series for real-world Azure Applied AI training and batch inference workloads. The NCads H100 v5 series virtual machines are powered by NVIDIA H100 NVL GPUs and 4th-generation AMD EPYC™ (Genoa) processors. The VMs feature up to two NVIDIA H100 NVL GPUs with 94 GB of memory each, up to 96 non-multithreaded AMD EPYC (Genoa) processor cores, and 640 GiB of system memory. These VMs are ideal for real-world Applied AI workloads, such as:

  • GPU-accelerated analytics and databases
  • Batch inferencing with heavy pre- and post-processing
  • Autonomy model training
  • Oil and gas reservoir simulation
  • Machine learning (ML) development
  • Video processing
  • AI/ML web services
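As a minimal sketch, deploying a VM from this series with the Azure CLI might look like the following. The size name (`Standard_NC40ads_H100_v5`), resource group, VM name, and image alias are assumptions for illustration; confirm regional availability in your subscription first (for example, with `az vm list-sizes --location <region>`).

```shell
# Hypothetical sketch: create an NCads H100 v5 VM with the Azure CLI.
# Requires an authenticated session (az login) and an existing resource group.
# Size, names, and image below are assumptions -- verify before use.
az vm create \
  --resource-group my-gpu-rg \
  --name nc-h100-vm \
  --size Standard_NC40ads_H100_v5 \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```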

View the full NCads_H100_v5-series page.

| Part | Quantity (Count, Units) | Specs (SKU ID, Performance Units, etc.) |
| --- | --- | --- |
| Processor | 40 - 80 vCPUs | AMD EPYC (Genoa) [x86-64] |
| Memory | 320 - 640 GiB | |
| Local Storage | 1 Disk | 3576 - 7152 GiB |
| Remote Storage | 8 - 16 Disks | 100000 - 240000 IOPS (RR); 3000 - 7000 MBps (RR) |
| Network | 2 - 4 NICs | 40,000 - 80,000 Mbps |
| Accelerators | 1 - 2 GPUs | Nvidia PCIe H100 GPU (94GB) |

NCCads_H100_v5-series

The NCCads H100 v5 series of virtual machines is a new addition to the Azure GPU family. In this VM SKU, the Trusted Execution Environment (TEE) spans the confidential VM on the CPU and the attached GPU, enabling secure offload of data, models, and computation to the GPU. The NCCads H100 v5 series is powered by 4th-generation AMD EPYC™ (Genoa) processors and the NVIDIA H100 Tensor Core GPU. These VMs feature one NVIDIA H100 NVL GPU with 94 GB of memory, 40 non-multithreaded AMD EPYC (Genoa) processor cores, and 320 GiB of system memory. These VMs are ideal for real-world Applied AI workloads, such as:

  • GPU-accelerated analytics and databases
  • Batch inferencing with heavy pre- and post-processing
  • Machine Learning (ML) development
  • Video processing
  • AI/ML web services

View the full NCCads_H100_v5-series page.

| Part | Quantity (Count, Units) | Specs (SKU ID, Performance Units, etc.) |
| --- | --- | --- |
| Processor | 40 vCPUs | AMD EPYC (Genoa) [x86-64] |
| Memory | 320 GiB | |
| Local Storage | 1 Disk | 800 GiB |
| Remote Storage | 8 Disks | 100000 IOPS; 3000 MBps |
| Network | 2 NICs | 40000 Mbps |
| Accelerators | 1 GPU | Nvidia H100 GPU (94GB) |

NCv2-series

Important

NCv2 series Azure virtual machines (VMs) were retired on September 6, 2023. For more information, see the NCv2 retirement information. For how to migrate your workloads to other VM sizes, see the GPU compute migration guide.

This retirement announcement doesn't apply to NCv3, NCasT4v3 and NC A100 v4 series VMs.

NCv2-series VMs are powered by NVIDIA Tesla P100 GPUs. These GPUs can provide more than 2x the computational performance of the NC-series. Customers can take advantage of these updated GPUs for traditional HPC workloads such as reservoir modeling, DNA sequencing, protein analysis, Monte Carlo simulations, and others. In addition to the GPUs, the NCv2-series VMs are also powered by Intel Xeon E5-2690 v4 (Broadwell) CPUs. The NC24rs v2 configuration provides a low latency, high-throughput network interface optimized for tightly coupled parallel computing workloads.

View the full NCv2-series page.

| Part | Quantity (Count, Units) | Specs (SKU ID, Performance Units, etc.) |
| --- | --- | --- |
| Processor | 6 - 24 vCPUs | Intel Xeon E5-2690 v4 (Broadwell) [x86-64] |
| Memory | 112 - 448 GiB | |
| Local Storage | 1 Disk | 736 - 2948 GiB |
| Remote Storage | 12 - 32 Disks | 20000 - 80000 IOPS; 200 - 800 MBps |
| Network | 4 - 8 NICs | |
| Accelerators | 1 - 4 GPUs | Nvidia Tesla P100 GPU (16GB) |

NCv3-series

NCv3-series VMs are powered by NVIDIA Tesla V100 GPUs. These GPUs can provide 1.5x the computational performance of the NCv2-series. Customers can take advantage of these updated GPUs for traditional HPC workloads such as reservoir modeling, DNA sequencing, protein analysis, Monte Carlo simulations, and others. The NC24rs v3 configuration provides a low latency, high-throughput network interface optimized for tightly coupled parallel computing workloads. In addition to the GPUs, the NCv3-series VMs are also powered by Intel Xeon E5-2690 v4 (Broadwell) CPUs.

Important

For this VM series, the vCPU (core) quota in your subscription is initially set to 0 in each region. Request a vCPU quota increase for this series in an available region. These SKUs aren't available to trial or Visual Studio Subscriber Azure subscriptions. Your subscription level might not support selecting or deploying these SKUs.
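Before requesting an increase, you can inspect the current vCPU quota and usage for this family in a target region with the Azure CLI. This is a hedged sketch: the filter string `NCSv3` is an assumption about how the quota family is named in your subscription; run the command without `--query` to see the exact family names.

```shell
# Check vCPU usage and limits for the NCv3 family in one region.
# Requires an authenticated Azure CLI session (az login).
# The 'NCSv3' filter is an assumption -- drop --query to list all families.
az vm list-usage --location eastus --output table \
  --query "[?contains(name.value, 'NCSv3')]"
```

If `CurrentValue` equals `Limit` (and the limit is 0, as noted above), file a quota increase request through the Azure portal before attempting a deployment.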

View the full NCv3-series page.

| Part | Quantity (Count, Units) | Specs (SKU ID, Performance Units, etc.) |
| --- | --- | --- |
| Processor | 6 - 24 vCPUs | Intel Xeon E5-2690 v4 (Broadwell) [x86-64] |
| Memory | 112 - 448 GiB | |
| Local Storage | 1 Disk | 736 - 2948 GiB |
| Remote Storage | 12 - 32 Disks | 20000 - 80000 IOPS (RR); 200 - 800 MBps (RR) |
| Network | 4 - 8 NICs | |
| Accelerators | 1 - 4 GPUs | Nvidia Tesla V100 GPU (16GB) |

NCasT4_v3-series

The NCasT4_v3-series virtual machines are powered by Nvidia Tesla T4 GPUs and AMD EPYC 7V12 (Rome) CPUs. The VMs feature up to 4 NVIDIA T4 GPUs with 16 GB of memory each, up to 64 non-multithreaded AMD EPYC 7V12 (Rome) processor cores (base frequency of 2.45 GHz, all-cores peak frequency of 3.1 GHz, and single-core peak frequency of 3.3 GHz), and 440 GiB of system memory. These virtual machines are ideal for deploying AI services, such as real-time inferencing of user-generated requests, or for interactive graphics and visualization workloads using NVIDIA's GRID driver and virtual GPU technology. Standard GPU compute workloads based around CUDA, TensorRT, Caffe, ONNX, and other frameworks, or GPU-accelerated graphical applications based on OpenGL and DirectX, can be deployed economically, with close proximity to users, on the NCasT4_v3 series.
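As an illustration of matching a workload to a size in this series, the sketch below picks the smallest NCasT4_v3 size meeting a GPU and vCPU requirement. The per-size figures are assumptions transcribed from Azure's published size tables; confirm them against the current NCasT4_v3-series page before relying on them.

```python
# Hypothetical helper: choose the smallest NCasT4_v3 size that satisfies
# a workload's GPU and vCPU requirements. The figures below are assumptions
# taken from the published size tables -- verify against current docs.
NCAST4_V3_SIZES = [
    # (size name, vCPUs, memory GiB, T4 GPUs), smallest first
    ("Standard_NC4as_T4_v3", 4, 28, 1),
    ("Standard_NC8as_T4_v3", 8, 56, 1),
    ("Standard_NC16as_T4_v3", 16, 110, 1),
    ("Standard_NC64as_T4_v3", 64, 440, 4),
]

def smallest_size(min_gpus: int, min_vcpus: int = 0):
    """Return the first (smallest) size meeting the requirements, or None."""
    for name, vcpus, _mem_gib, gpus in NCAST4_V3_SIZES:
        if gpus >= min_gpus and vcpus >= min_vcpus:
            return name
    return None

print(smallest_size(1))                 # Standard_NC4as_T4_v3
print(smallest_size(1, min_vcpus=12))   # Standard_NC16as_T4_v3
print(smallest_size(4))                 # Standard_NC64as_T4_v3
```

Because three of the four sizes carry a single T4, the vCPU requirement, not the GPU count, usually determines which size such a lookup selects.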

View the full NCasT4_v3-series page.

| Part | Quantity (Count, Units) | Specs (SKU ID, Performance Units, etc.) |
| --- | --- | --- |
| Processor | 4 - 64 vCPUs | AMD EPYC 7V12 (Rome) [x86-64] |
| Memory | 28 - 440 GiB | |
| Local Storage | 1 Disk | 176 - 2816 GiB |
| Remote Storage | 8 - 32 Disks | |
| Network | 2 - 8 NICs | 8000 - 32000 Mbps |
| Accelerators | 1 - 4 GPUs | Nvidia Tesla T4 GPU (16GB) |

NC_A100_v4-series

The NC A100 v4 series virtual machine (VM) is a new addition to the Azure GPU family. You can use this series for real-world Azure Applied AI training and batch inference workloads. The NC A100 v4 series is powered by NVIDIA A100 PCIe GPU and third generation AMD EPYC™ 7V13 (Milan) processors. The VMs feature up to 4 NVIDIA A100 PCIe GPUs with 80 GB memory each, up to 96 non-multithreaded AMD EPYC Milan processor cores and 880 GiB of system memory. These VMs are ideal for real-world Applied AI workloads, such as:

  • GPU-accelerated analytics and databases
  • Batch inferencing with heavy pre- and post-processing
  • Autonomy model training
  • Oil and gas reservoir simulation
  • Machine learning (ML) development
  • Video processing
  • AI/ML web services

View the full NC_A100_v4-series page.

| Part | Quantity (Count, Units) | Specs (SKU ID, Performance Units, etc.) |
| --- | --- | --- |
| Processor | 24 - 96 vCPUs | AMD EPYC 7V13 (Milan) [x86-64] |
| Memory | 220 - 880 GiB | |
| Local Storage | 1 Temp Disk, 1 - 4 NVMe Disks | 64 - 256 GiB Temp Disk; 960 GiB NVMe Disks |
| Remote Storage | 8 - 32 Disks | 30000 - 120000 IOPS; 1000 - 4000 MBps |
| Network | 2 - 8 NICs | 20,000 - 80,000 Mbps |
| Accelerators | 1 - 4 GPUs | Nvidia PCIe A100 GPU (80GB) |

Previous-generation NC family series

For older sizes, see previous generation sizes.

Other size information

List of all available sizes: Sizes

Pricing Calculator: Pricing Calculator

Information on Disk Types: Disk Types

Next steps

Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure SKUs.

Check out Azure Dedicated Hosts for physical servers able to host one or more virtual machines assigned to one Azure subscription.

Learn how to Monitor Azure virtual machines.