DirectML error: my single RTX 3080 is detected as two identical GPUs with two different LUIDs
I have a problem when using some deep-learning software for inference, such as roop and Topaz, which let you choose between GPU and CPU. Whether I force the GPU or let the software automatically pick the best device, it cannot properly select my 3080. However, another program, Stable Diffusion, selects my 3080 correctly, because it uses CUDA. I checked and found that Topaz uses the OpenVINO (ov) model format, while roop uses ONNX. When I use onnxruntime running on DirectML, I cannot use the GPU properly. The likely reason is that DirectML recognizes my single 3080 as two 3080s, which prevents me from correctly targeting it for inference. In Topaz's hardware selection I can also see two 3080s listed, but I only have one. How can I solve this problem?