HBv3-series virtual machine performance

Applies to: ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets

Performance expectations using common HPC microbenchmarks are as follows:

STREAM Triad: 330-350 GB/s (amplified up to 630 GB/s)
High-Performance Linpack (HPL): 4 TF (Rpeak, FP64), 8 TF (Rpeak, FP32) for the 120-core VM size
RDMA latency and bandwidth: 1.2 microseconds (1-byte message), 192 Gb/s (one-way)
FIO on local NVMe SSDs (RAID0): 7 GB/s reads, 3 GB/s writes; 186k IOPS reads, 201k IOPS writes
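
To sanity-check the local NVMe figures above, a sequential-read run along the following lines can be used. This is a minimal sketch: it assumes the fio package is installed and that the NVMe devices are already assembled into a RAID0 array at /dev/md0, a hypothetical device path.

# Sequential 1 MB reads against the RAID0 array (/dev/md0 is a hypothetical path).
fio --name=seqread --filename=/dev/md0 --rw=read --bs=1M \
    --ioengine=libaio --direct=1 --iodepth=64 --numjobs=4 \
    --runtime=60 --time_based --group_reporting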

Process pinning

Process pinning works well on HBv3-series VMs because we expose the underlying silicon as-is to the guest VM. We strongly recommend process pinning for optimal performance and consistency.
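
For example, with an OpenMPI-based stack, ranks can be bound to cores at launch time; the application name and rank count below are hypothetical placeholders.

# Bind one MPI rank per physical core, in core order (OpenMPI syntax).
# ./my_hpc_app and -np 120 are illustrative placeholders.
mpirun -np 120 --bind-to core --map-by core ./my_hpc_app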

MPI latency

The MPI latency test from the OSU microbenchmark suite can be executed as shown below. Sample scripts are on GitHub.

./bin/mpirun_rsh -np 2 -hostfile ~/hostfile MV2_CPU_MAPPING=[INSERT CORE #] ./osu_latency
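
For illustration, the following pins both ranks to core 0 on their respective nodes; the core number is a hypothetical choice, and cores in the NUMA domain closest to the InfiniBand NIC typically yield the lowest latency.

# Hypothetical example: pin each rank to core 0 on its node.
./bin/mpirun_rsh -np 2 -hostfile ~/hostfile MV2_CPU_MAPPING=0 ./osu_latency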

MPI bandwidth

The MPI bandwidth test from the OSU microbenchmark suite can be executed as shown below. Sample scripts are on GitHub.

./mvapich2-2.3.install/bin/mpirun_rsh -np 2 -hostfile ~/hostfile MV2_CPU_MAPPING=[INSERT CORE #] ./mvapich2-2.3/osu_benchmarks/mpi/pt2pt/osu_bw
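
To concentrate on the large messages that approach peak link bandwidth, recent OSU releases accept a -m flag that bounds the message-size sweep; the core number and size range below are illustrative.

# Hypothetical example: sweep only 1 MB to 4 MB messages, ranks pinned to core 0.
./mvapich2-2.3.install/bin/mpirun_rsh -np 2 -hostfile ~/hostfile MV2_CPU_MAPPING=0 ./mvapich2-2.3/osu_benchmarks/mpi/pt2pt/osu_bw -m 1048576:4194304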

Mellanox Perftest

The Mellanox Perftest package has many InfiniBand tests such as latency (ib_send_lat) and bandwidth (ib_send_bw). An example command is below.

numactl --physcpubind=[INSERT CORE #] ib_send_lat -a
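
Perftest tools run as a server/client pair: launch the binary with no peer argument on one node, then point a second node at it by hostname. A sketch, with the core number and hostname as hypothetical placeholders:

# On the first (server) node: wait for a connection.
numactl --physcpubind=0 ib_send_lat -a
# On the second (client) node: connect to the server node by hostname.
numactl --physcpubind=0 ib_send_lat -a hbv3-node-000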

Next steps