The difference is probably attributable to the programs performing the recursive listing, not to the operating systems themselves (Windows vs. Linux).
Network and/or event tracing can show what each program is actually doing and where the time goes.
I don't have a Linux system on which to check the "find" command, but tracing the "dir /s /b /a" command provides some hints as to why it is slower. It traverses each directory twice: once to obtain the names of the entries, and once more to identify subdirectories for recursion. Furthermore, for each directory it discovers, it issues two property queries (FileStandardInformation and FileBasicInformation) even though that data is not needed when the "/b" qualifier is present.
The Windows "dir" command is not necessarily badly coded; the emphasis appears to have been on flexibility and reusability. That is normally harmless, but mapping this style onto network protocol operations over a large tree exposes a performance problem: every redundant traversal and property query becomes an extra round trip.
It would be possible to write a program that is as quick as Linux "find", and some public domain utility may already be coded that way; however, convincing your users to adopt it is another matter.
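To illustrate the point, here is a minimal sketch of such a single-pass lister in Python (not the native Win32 API, and `list_tree` is just a name I chose). `os.scandir` reports each entry's type from the directory read itself, so each directory is read exactly once and, on most filesystems, no per-entry property queries are needed:

```python
import os

def list_tree(root):
    """Yield the path of every entry under root, reading each directory once.

    Unlike "dir /s", which re-reads each directory to find subdirectories
    for recursion, this collects names and subdirectories in a single pass.
    """
    stack = [root]
    while stack:
        path = stack.pop()
        with os.scandir(path) as entries:
            for entry in entries:
                yield entry.path
                # is_dir() usually answers from data cached during the
                # directory read, avoiding a separate stat/query per entry.
                if entry.is_dir(follow_symlinks=False):
                    stack.append(entry.path)

# Example usage, roughly equivalent to "dir /s /b" or "find .":
# for p in list_tree("."):
#     print(p)
```

Over a network share, halving the directory reads and dropping the extra property queries is exactly the kind of saving that would bring the Windows side closer to "find".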