DirectX error: my 3080 is identified as two identical GPUs; it has two LUIDs

z li 0 Reputation points
2023-11-29T14:41:48.34+00:00

I have a problem when I use some deep-learning software for inference, such as Roop and Topaz, which let you choose GPU or CPU for inference. Whether I force the GPU or let the software automatically pick the best device, it cannot select my 3080 properly. Stable Diffusion, another piece of software, selects my 3080 correctly, because it uses CUDA. I checked and found that Topaz's model format is OpenVINO (ov) and Roop's is ONNX. Whenever onnxruntime runs on DirectML, the GPU cannot be used properly. The likely reason is that DirectML enumerates my single 3080 as two 3080s, each with its own LUID, which prevents me from correctly specifying the 3080 for inference. In Topaz's hardware selection I can also see two 3080s, even though I only have one. How can I solve this problem?

![QQ截图20231129214247](https://learn-attachment.microsoft.com/api/attachments/330b525f-0e57-4b20-b24e-1f4061a298bc?platform=QnA)

![QQ截图20231129214254](https://learn-attachment.microsoft.com/api/attachments/8fc0c719-2f90-4025-8e84-36638a9e6095?platform=QnA)
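To illustrate the symptom described above: the adapter list a tool like Topaz receives contains two entries that differ only in their LUID but describe the same physical card. Until the duplicate enumeration itself is fixed (driver reinstall, disabling the duplicate adapter, or a software update), a selecting application could in principle deduplicate such a list. The sketch below is plain Python with hypothetical adapter records and a hypothetical `dedupe_adapters` helper; it is not a real DirectML or onnxruntime API, just an illustration of the dedup logic under those assumptions:

```python
# Hypothetical adapter records, mimicking what the question's screenshots show:
# one physical RTX 3080 enumerated twice, each copy with a different LUID.
adapters = [
    {"luid": 0x0000F00D, "description": "NVIDIA GeForce RTX 3080", "vram_mb": 10240},
    {"luid": 0x0000BEEF, "description": "NVIDIA GeForce RTX 3080", "vram_mb": 10240},
    {"luid": 0x0000CAFE, "description": "Microsoft Basic Render Driver", "vram_mb": 0},
]

def dedupe_adapters(adapters):
    """Keep one entry per (description, vram_mb) pair.

    The duplicate entries differ only in LUID, so keying on the stable
    hardware attributes collapses them. This is a workaround sketch for
    application authors, not a fix for the underlying enumeration bug.
    """
    seen = set()
    unique = []
    for a in adapters:
        key = (a["description"], a["vram_mb"])
        if key not in seen:
            seen.add(key)
            unique.append(a)
    return unique

unique = dedupe_adapters(adapters)
print([a["description"] for a in unique])
# -> ['NVIDIA GeForce RTX 3080', 'Microsoft Basic Render Driver']
```

With a deduplicated list, "use GPU 0" would again point at exactly one 3080 instead of an arbitrary one of the two phantom entries.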
