H-series
Applies to: ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets
H-series VMs are optimized for applications driven by high CPU frequencies or large memory-per-core requirements. H-series VMs feature 8 or 16 Intel Xeon E5 2667 v3 processor cores, up to 14 GB of RAM per CPU core, and no hyperthreading. The H-series features 56 Gb/s Mellanox FDR InfiniBand in a non-blocking fat-tree configuration for consistent RDMA performance. H-series VMs are not currently SR-IOV enabled and support Intel MPI 5.x and MS-MPI.
ACU: 290-300
Premium Storage: Not Supported
Premium Storage caching: Not Supported
Live Migration: Not Supported
Memory Preserving Updates: Not Supported
VM Generation Support: Generation 1
Accelerated Networking: Not Supported
Ephemeral OS Disks: Not Supported
Size | vCPU | Processor | Memory (GiB) | Memory bandwidth GB/s | Base CPU frequency (GHz) | All-cores frequency (GHz, peak) | Single-core frequency (GHz, peak) | RDMA performance (Gb/s) | MPI support | Temp storage (GiB) | Max data disks | Max disk throughput: IOPS | Max Ethernet vNICs |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Standard_H8 | 8 | Intel Xeon E5 2667 v3 | 56 | 40 | 3.2 | 3.3 | 3.6 | - | Intel 5.x, MS-MPI | 1000 | 32 | 32 x 500 | 2 |
Standard_H16 | 16 | Intel Xeon E5 2667 v3 | 112 | 80 | 3.2 | 3.3 | 3.6 | - | Intel 5.x, MS-MPI | 2000 | 64 | 64 x 500 | 4 |
Standard_H8m | 8 | Intel Xeon E5 2667 v3 | 112 | 40 | 3.2 | 3.3 | 3.6 | - | Intel 5.x, MS-MPI | 1000 | 32 | 32 x 500 | 2 |
Standard_H16m | 16 | Intel Xeon E5 2667 v3 | 224 | 80 | 3.2 | 3.3 | 3.6 | - | Intel 5.x, MS-MPI | 2000 | 64 | 64 x 500 | 4 |
Standard_H16r ¹ | 16 | Intel Xeon E5 2667 v3 | 112 | 80 | 3.2 | 3.3 | 3.6 | 56 | Intel 5.x, MS-MPI | 2000 | 64 | 64 x 500 | 4 |
Standard_H16mr ¹ | 16 | Intel Xeon E5 2667 v3 | 224 | 80 | 3.2 | 3.3 | 3.6 | 56 | Intel 5.x, MS-MPI | 2000 | 64 | 64 x 500 | 4 |
¹ For MPI applications, a dedicated RDMA back-end network is enabled by the FDR InfiniBand network.
Note
Among the RDMA capable VMs, the H-series are not SR-IOV enabled. Therefore, the supported VM Images, InfiniBand driver requirements and supported MPI libraries are different from the SR-IOV enabled VMs.
Because the H-series uses an alternate NIC virtualization solution, the OS may occasionally report an inaccurate link speed for the synthetic NIC used for RDMA connections. This does not affect the actual performance of jobs using the VM's RDMA capability, so output like the following is not a cause for concern.
$ ethtool eth1
Settings for eth1:
...
Speed: 10000Mb/s
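To confirm the RDMA path directly rather than relying on the link speed reported for the synthetic NIC, one option is to run an MPI ping-pong benchmark between two RDMA-capable nodes. The following is a minimal sketch assuming Intel MPI 5.x and the Intel MPI Benchmarks (IMB-MPI1) are available on both nodes; host1 and host2 are placeholder hostnames.
# Fabric settings documented for Intel MPI 5.x on non-SR-IOV RDMA VMs
export I_MPI_FABRICS=shm:dapl
export I_MPI_DAPL_PROVIDER=ofa-v2-ib0
export I_MPI_DYNAMIC_CONNECTION=0
# Two ranks, one per node, measuring point-to-point latency and bandwidth
mpirun -n 2 -ppn 1 -hosts host1,host2 IMB-MPI1 PingPong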
Software specifications
Software Specifications | H-series VM |
---|---|
Max MPI Job Size | 4800 cores (300 VMs in a single virtual machine scale set with singlePlacementGroup=true; see the CLI sketch after this table) |
MPI Support | Intel MPI 5.x, MS-MPI |
OS Support for non-SRIOV RDMA | CentOS/RHEL 6.5 - 7.4, SLES 12 SP4+, Windows Server 2012 - 2016 |
Orchestrator Support | CycleCloud, Batch, AKS |
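The 4800-core maximum above corresponds to 300 H16-size VMs in one virtual machine scale set deployed into a single placement group. As a rough sketch (the resource group, scale set name, and instance count below are placeholders; CentOS-HPC 7.4 is one of the supported non-SR-IOV RDMA images), such a scale set can be created with the Azure CLI:
az vmss create \
  --resource-group myResourceGroup \
  --name myHpcScaleSet \
  --image OpenLogic:CentOS-HPC:7.4:latest \
  --vm-sku Standard_H16r \
  --instance-count 2 \
  --single-placement-group true \
  --admin-username azureuser \
  --generate-ssh-keys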
Get Started
- Overview of HPC on InfiniBand-enabled H-series and N-series VMs.
- Configuring VMs and supported OS and VM Images (a CLI sketch follows this list).
- Enabling InfiniBand with HPC VM images, VM extensions or manual installation.
- Setting up MPI, including code snippets and recommendations.
- Cluster configuration options.
- Deployment considerations.
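As a starting point for VM configuration, a single RDMA-capable H-series VM can be created from one of the supported HPC marketplace images. The following is a minimal sketch using placeholder resource names and the CentOS-HPC 7.4 image; adjust the image, size, and authentication to your requirements:
az vm create \
  --resource-group myResourceGroup \
  --name myH16rVM \
  --size Standard_H16r \
  --image OpenLogic:CentOS-HPC:7.4:latest \
  --admin-username azureuser \
  --generate-ssh-keys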
Size table definitions
Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3 bytes), remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
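For example, the figure above can be reproduced with a quick calculation (bc is used here only as a calculator):
echo "1023 * 1024^3 / 10^9" | bc -l    # prints ~1098.4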
Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
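For example, with the Azure CLI the host cache mode is chosen when a data disk is attached. This is a minimal sketch using placeholder resource names; note that H-series supports standard storage only, hence the Standard_LRS SKU:
az vm disk attach \
  --resource-group myResourceGroup \
  --vm-name myH16VM \
  --name myDataDisk \
  --new \
  --size-gb 512 \
  --sku Standard_LRS \
  --caching ReadOnly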
To learn how to get the best storage performance for your VMs, see Virtual machine and disk performance.
Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see Virtual machine network bandwidth.
Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see Optimize network throughput for Azure virtual machines. To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see Bandwidth/Throughput testing (NTTTCP).
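The NTTTCP article linked above describes the recommended test methodology. As a rough alternative check between two Linux VMs, an iperf3 run with several parallel streams gives a quick estimate of achievable throughput (10.0.0.4 stands in for the receiver's private IP):
# On the receiving VM
iperf3 -s
# On the sending VM: 8 parallel streams for 30 seconds
iperf3 -c 10.0.0.4 -P 8 -t 30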
Other sizes and information
- General purpose
- Memory optimized
- Storage optimized
- GPU optimized
- High performance compute
- Previous generations
Pricing Calculator
For more information on disk types, see What disk types are available in Azure?
Next steps
- Read about the latest announcements, HPC workload examples, and performance results at the Azure Compute Tech Community Blogs.
- For a higher level architectural view of running HPC workloads, see High Performance Computing (HPC) on Azure.
- Learn more about how Azure compute units (ACU) can help you compare compute performance across Azure SKUs.