'NC' sub-family GPU accelerated VM size series
Applies to: ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets
The 'NC' sub-family of VM size series is one of Azure's GPU-optimized VM instance families. NC-series VMs are designed for compute-intensive workloads, such as AI and machine learning model training, high-performance computing (HPC), and graphics-intensive applications. Equipped with powerful NVIDIA GPUs, they offer substantial acceleration for processes that require heavy computational power, including deep learning, scientific simulations, and 3D rendering. This makes them particularly well suited to industries such as technology research, entertainment, and engineering, where rendering and processing speed are critical to productivity and innovation.
Workloads and use cases
AI and Machine Learning: NC-series VMs are ideal for training complex machine learning models and running AI applications. The NVIDIA GPUs provide significant acceleration for computations typically involved in deep learning and other intensive training tasks.
High-Performance Computing (HPC): These VMs are suitable for scientific simulations, rendering, and other HPC workloads that can be accelerated by GPUs. Fields like engineering, medical research, and financial modeling often use NC-series VMs to handle their computational needs efficiently.
Graphics Rendering: NC-series VMs are also used for graphics-intensive applications, including video editing, 3D rendering, and real-time graphics processing. They are particularly useful in industries such as game development and movie production.
Remote Visualization: For applications requiring high-end visualization capabilities, such as CAD and visual effects, NC-series VMs can provide the necessary GPU power remotely, allowing users to work on complex graphical tasks without needing powerful local hardware.
Simulation and Analysis: These VMs are also suitable for detailed simulations and analyses in areas like automotive crash testing, computational fluid dynamics, and weather modeling, where GPU capabilities can significantly speed up processing times.
Series in family
NC-series V1
NC-series VMs are powered by the NVIDIA Tesla K80 card and the Intel Xeon E5-2690 v3 (Haswell) processor. Users can crunch through data faster by using CUDA for energy exploration applications, crash simulations, ray traced rendering, deep learning, and more. The NC24r configuration provides a low latency, high-throughput network interface optimized for tightly coupled parallel computing workloads.
| Part | Quantity (count and units) | Specs (SKU ID, performance units, etc.) |
| --- | --- | --- |
| Processor | 6 - 24 vCores | Intel® Xeon® E5-2690 v3 (Haswell) |
| Memory | 56 - 224 GiB | |
| Data Disks | 24 - 64 disks | |
| Network | 1 - 4 NICs | |
| Accelerators | 1 - 4 GPUs | NVIDIA Tesla K80, 12 GiB each (12 - 48 GiB per VM) |
NCads H100 v5-series
The NCads H100 v5 series virtual machines (VMs) are a new addition to the Azure GPU family. You can use this series for real-world Azure Applied AI training and batch inference workloads. These VMs are powered by NVIDIA H100 NVL GPUs and 4th-generation AMD EPYC™ (Genoa) processors. They feature up to 2 NVIDIA H100 NVL GPUs with 94 GB of memory each, up to 96 non-multithreaded AMD EPYC Genoa processor cores, and 640 GiB of system memory.
View the full NCads H100 v5-series page.
| Part | Quantity (count and units) | Specs (SKU ID, performance units, etc.) |
| --- | --- | --- |
| Processor | 40 - 80 vCores | AMD EPYC™ (Genoa) |
| Memory | 320 - 640 GiB | |
| Data Disks | 8 - 16 disks | 100000 - 240000 IOPS / 3000 - 7000 MBps |
| Network | 2 - 4 NICs | 40000 - 80000 Mbps |
| Accelerators | 1 - 2 GPUs | NVIDIA H100 NVL, 94 GiB each (94 - 188 GiB per VM) |
NCv2-series
NCv2-series VMs are powered by NVIDIA Tesla P100 GPUs. These GPUs can provide more than 2x the computational performance of the NC-series. Customers can take advantage of these updated GPUs for traditional HPC workloads such as reservoir modeling, DNA sequencing, protein analysis, Monte Carlo simulations, and others. In addition to the GPUs, the NCv2-series VMs are also powered by Intel Xeon E5-2690 v4 (Broadwell) CPUs. The NC24rs v2 configuration provides a low latency, high-throughput network interface optimized for tightly coupled parallel computing workloads.
View the full NCv2-series page.
| Part | Quantity (count and units) | Specs (SKU ID, performance units, etc.) |
| --- | --- | --- |
| Processor | 6 - 24 vCores | Intel® Xeon® E5-2690 v4 (Broadwell) |
| Memory | 112 - 448 GiB | |
| Data Disks | 12 - 32 disks | 20000 - 80000 IOPS / 200 - 800 MBps |
| Network | 4 - 8 NICs | |
| Accelerators | 1 - 4 GPUs | NVIDIA Tesla P100, 16 GiB each (16 - 64 GiB per VM) |
NCv3-series
NCv3-series VMs are powered by NVIDIA Tesla V100 GPUs. These GPUs can provide 1.5x the computational performance of the NCv2-series. Customers can take advantage of these updated GPUs for traditional HPC workloads such as reservoir modeling, DNA sequencing, protein analysis, Monte Carlo simulations, and others. The NC24rs v3 configuration provides a low latency, high-throughput network interface optimized for tightly coupled parallel computing workloads. In addition to the GPUs, the NCv3-series VMs are also powered by Intel Xeon E5-2690 v4 (Broadwell) CPUs.
View the full NCv3-series page.
| Part | Quantity (count and units) | Specs (SKU ID, performance units, etc.) |
| --- | --- | --- |
| Processor | 6 - 24 vCores | Intel® Xeon® E5-2690 v4 (Broadwell) |
| Memory | 112 - 448 GiB | |
| Data Disks | 12 - 32 disks | 20000 - 80000 IOPS / 200 - 800 MBps |
| Network | 4 - 8 NICs | |
| Accelerators | 1 - 4 GPUs | NVIDIA Tesla V100, 16 GiB each (16 - 64 GiB per VM) |
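Size names like NC24r and NC24rs v3 used in the sections above encode the family, vCPU count, capability letters, and version, following Azure's VM size naming conventions (for example, 'r' = RDMA-capable, 's' = premium storage support, 'a' = AMD CPU). A rough sketch of how such a name could be decomposed, assuming full size names of the form `Standard_NC<vCPUs><letters>[_<GPU>][_v<N>]` (the helper `parse_size` is illustrative, not an Azure API):

```python
import re

# Hypothetical parser for NC-family size names as they appear in this article,
# e.g. Standard_NC24rs_v3 or Standard_NC24ads_A100_v4. Feature letters follow
# Azure's naming convention: 'a' = AMD CPU, 'r' = RDMA-capable, 's' = premium
# storage, 'd' = local temp disk.
SIZE_RE = re.compile(
    r"Standard_(?P<family>NC)(?P<vcpus>\d+)(?P<features>[a-z]*)"
    r"(?:_(?P<gpu>[A-Z0-9]+))?(?:_(?P<version>v\d+))?"
)

def parse_size(name: str) -> dict:
    """Split an NC-family size name into its components."""
    m = SIZE_RE.fullmatch(name)
    if not m:
        raise ValueError(f"not an NC-family size name: {name}")
    return m.groupdict()

print(parse_size("Standard_NC24rs_v3"))
# {'family': 'NC', 'vcpus': '24', 'features': 'rs', 'gpu': None, 'version': 'v3'}
```

For instance, the NC24r size from the first table parses as 24 vCores with the 'r' (RDMA) capability and no version suffix.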
NCasT4_v3-series
The NCasT4_v3-series virtual machines are powered by NVIDIA Tesla T4 GPUs and AMD EPYC 7V12 (Rome) CPUs. The VMs feature up to 4 NVIDIA T4 GPUs with 16 GB of memory each, up to 64 non-multithreaded AMD EPYC 7V12 (Rome) processor cores (base frequency of 2.45 GHz, all-cores peak frequency of 3.1 GHz, and single-core peak frequency of 3.3 GHz), and 440 GiB of system memory. These virtual machines are ideal for deploying AI services, such as real-time inferencing of user-generated requests, or for interactive graphics and visualization workloads using NVIDIA's GRID driver and virtual GPU technology. Standard GPU compute workloads based around CUDA, TensorRT, Caffe, ONNX, and other frameworks, or GPU-accelerated graphical applications based on OpenGL and DirectX, can be deployed economically, with close proximity to users, on the NCasT4_v3 series.
View the full NCasT4_v3-series page.
| Part | Quantity (count and units) | Specs (SKU ID, performance units, etc.) |
| --- | --- | --- |
| Processor | 4 - 64 vCores | AMD EPYC™ 7V12 (Rome) |
| Memory | 28 - 440 GiB | |
| Data Disks | 8 - 32 disks | 20000 - 80000 IOPS / 200 - 800 MBps |
| Network | 2 - 8 NICs | 8000 - 32000 Mbps |
| Accelerators | 1 - 4 GPUs | NVIDIA Tesla T4, 16 GiB each (16 - 64 GiB per VM) |
NC_A100_v4-series
The NC A100 v4 series virtual machine (VM) is a new addition to the Azure GPU family. You can use this series for real-world Azure Applied AI training and batch inference workloads. The NC A100 v4 series is powered by NVIDIA A100 PCIe GPUs and third-generation AMD EPYC™ 7V13 (Milan) processors. The VMs feature up to 4 NVIDIA A100 PCIe GPUs with 80 GB of memory each, up to 96 non-multithreaded AMD EPYC Milan processor cores, and 880 GiB of system memory.
View the full NC_A100_v4-series page.
| Part | Quantity (count and units) | Specs (SKU ID, performance units, etc.) |
| --- | --- | --- |
| Processor | 24 - 96 vCores | AMD EPYC™ 7V13 (Milan) |
| Memory | 220 - 880 GiB | |
| Data Disks | 8 - 32 disks | 30000 - 120000 IOPS / 1000 - 4000 MBps |
| Network | 2 - 8 NICs | 20000 - 80000 Mbps |
| Accelerators | 1 - 4 GPUs | NVIDIA A100 (PCIe), 80 GiB each (80 - 320 GiB per VM) |
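Across all the series above, the "per VM" GPU memory range in each Accelerators row is simply the per-GPU memory multiplied by the GPU count. A quick sanity-check sketch, using the per-GPU figures copied from the spec tables in this article (not queried from Azure):

```python
# Per-GPU memory (GiB) for the accelerators listed in this article's spec
# tables. Illustrative figures taken from the tables above.
GPU_MEMORY_GIB = {
    "K80": 12,        # NC-series v1
    "P100": 16,       # NCv2-series
    "V100": 16,       # NCv3-series
    "T4": 16,         # NCasT4_v3-series
    "A100 PCIe": 80,  # NC_A100_v4-series
    "H100 NVL": 94,   # NCads H100 v5-series
}

def vm_gpu_memory_gib(gpu_model: str, gpu_count: int) -> int:
    """Total GPU memory a VM exposes: per-GPU memory times GPU count."""
    return GPU_MEMORY_GIB[gpu_model] * gpu_count

print(vm_gpu_memory_gib("H100 NVL", 2))  # 188, the top of the 94 - 188 GiB range
print(vm_gpu_memory_gib("K80", 4))       # 48, the top of the 12 - 48 GiB range
```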
Previous-generation NC family series
For older sizes, see previous generation sizes.
Other size information
List of all available sizes: Sizes
Pricing Calculator: Pricing Calculator
Information on Disk Types: Disk Types
Next steps
Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure SKUs.
Check out Azure Dedicated Hosts for physical servers able to host one or more virtual machines assigned to one Azure subscription.
Learn how to Monitor Azure virtual machines.