Low result for MPI test in HPC Cluster Manager

Anonymous
2024-02-21T13:00:27.43+00:00

Hi everyone,

We have set up an HPC cluster containing 4 nodes using HPC network topology 2 (all nodes on the enterprise and private networks). When I run the built-in MPI test in Cluster Manager, the result is between 6 and 7 GB/s for both the throughput and the latency test. Is this normal when running these tests with 100 Gb/s Mellanox EDR cards in Ethernet mode? I have updated to the latest firmware and the latest Mellanox driver compatible with these cards, and I have also tried running the test with two nodes directly connected to each other.

OS: Windows Server 2022
HPC Pack: 2019 Update 2
MPI version: 10.1.12498.52
Packet size during test: 4194304 bytes
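For reference, this is roughly the kind of measurement I understand the built-in test performs. Below is a minimal ping-pong bandwidth sketch using standard MPI calls with the same 4194304-byte message size; it is only illustrative, and the build command, SDK paths, and node names (NODE1/NODE2) are assumptions, not taken from our actual setup.

```c
/*
 * Minimal MPI ping-pong bandwidth sketch (illustrative only).
 * Measures throughput between rank 0 and rank 1 using 4 MiB messages,
 * the same packet size as the built-in Cluster Manager test.
 *
 * Build (assumed MS-MPI SDK install paths):
 *   cl pingpong.c /I"C:\Program Files (x86)\Microsoft SDKs\MPI\Include" msmpi.lib
 * Run across two nodes (hypothetical node names):
 *   mpiexec -hosts 2 NODE1 1 NODE2 1 pingpong.exe
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_SIZE (4 * 1024 * 1024)   /* 4194304 bytes, matching the test packet size */
#define ITERATIONS 100
#define WARMUP 10

int main(int argc, char **argv)
{
    int rank, size, i;
    char *buf;
    double start, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0) fprintf(stderr, "Run with at least 2 ranks on 2 nodes.\n");
        MPI_Finalize();
        return 1;
    }

    buf = (char *)malloc(MSG_SIZE);
    if (buf == NULL) {
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Warm-up exchanges so connection setup is not included in the timing. */
    for (i = 0; i < WARMUP; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Barrier(MPI_COMM_WORLD);
    start = MPI_Wtime();

    /* Timed ping-pong loop between rank 0 and rank 1. */
    for (i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    elapsed = MPI_Wtime() - start;

    if (rank == 0) {
        /* Each round trip moves MSG_SIZE bytes in each direction, one at a time,
         * so the observed link bandwidth is 2 * MSG_SIZE * ITERATIONS / elapsed. */
        double gbps = (2.0 * (double)MSG_SIZE * ITERATIONS) / elapsed / 1e9;
        printf("Ping-pong bandwidth:     %.2f GB/s\n", gbps);
        printf("Average round-trip time: %.3f ms\n", elapsed / ITERATIONS * 1e3);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```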

Windows for business | Windows Server | User experience | Other
