Just to clarify, your setup is this, correct?
- One touch-enabled display [A]
- One non-touch-enabled display [B]
- Display settings configured to show only [B]
This sounds like it's by design on the Windows side. The OS has to use a series of heuristics to map touch digitizers to displays, since not all digitizers and displays report the proper information to map them automatically. The heuristics will (a) only ever map digitizers to active displays, and (b) always map a digitizer somewhere (with the ultimate fallback being the primary monitor). In fact, if you touch display [A] in the configuration above, you'll find that it produces touch input on display [B], which is why the API is technically returning the result you're seeing.
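If it helps to see that mapping directly, here's a minimal sketch (untested, assuming Windows 8+ and the Win32 `GetPointerDevices` API) that enumerates the pointer devices and prints the monitor each digitizer is currently mapped to. In the configuration above, I'd expect the touch digitizer physically in [A] to report [B]'s monitor:

```cpp
// Minimal sketch: list pointer devices and the monitor each one is mapped to.
// Assumes Windows 8+ (GetPointerDevices) and linking against user32.lib.
#define WINVER       0x0602
#define _WIN32_WINNT 0x0602
#include <windows.h>
#include <cstdio>
#include <vector>

int main()
{
    UINT32 count = 0;
    if (!GetPointerDevices(&count, nullptr) || count == 0)
    {
        printf("No pointer devices reported.\n");
        return 0;
    }

    std::vector<POINTER_DEVICE_INFO> devices(count);
    if (!GetPointerDevices(&count, devices.data()))
        return 1;

    for (const auto& device : devices)
    {
        // 'monitor' is the display the OS heuristics mapped this digitizer to.
        // With only [B] active, a digitizer that physically lives in [A] should
        // still report [B] here, which matches the behavior described above.
        MONITORINFOEXW mi = {};
        mi.cbSize = sizeof(mi);
        GetMonitorInfoW(device.monitor, &mi);

        wprintf(L"Device: %s, type: %d, mapped to: %s\n",
                device.productString,
                static_cast<int>(device.pointerDeviceType),
                mi.szDevice);
    }
    return 0;
}
```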
It could reasonably be argued that the mapping heuristics should consider all displays, not just active ones, and ignore digitizers that end up mapped to inactive displays. The risk would be (a) ignoring digitizers that were incorrectly associated with an inactive display, or (b) breaking people who actually rely on the behavior described above, where their touchscreen essentially acts like an external digitizer (which might be more reasonable for pen, since you'd at least see a hover effect and know what you're interacting with).
If you file feedback under the Input & Language > Touch path and link it here, I can promote it internally and discuss with the team. That said, the risk of disabling digitizers when the user wants to use them may outweigh any perceived benefit of updating the heuristics. This logic has been in place for over a decade at this point.
For your scenario, can you instruct the user to leave both displays active? You could also potentially add code that specifically detects the scenario of "touch digitizer is internal but current display is external", but it's not trivial to determine whether the digitizer is internal.
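As one building block for that detection, here's a rough, hedged sketch (not production code, and the function name and structure are purely illustrative): classifying a given monitor as internal or external based on the output technology its active display path reports via `QueryDisplayConfig`. Note this only tells you about the display a monitor handle refers to; determining whether the digitizer itself is internal is still the hard part.

```cpp
// Illustrative sketch: treat a monitor as "internal" if its active display path
// reports an embedded output technology. Error handling is minimal on purpose.
#include <windows.h>
#include <cwchar>
#include <vector>

static bool IsMonitorInternal(HMONITOR monitor)
{
    MONITORINFOEXW mi = {};
    mi.cbSize = sizeof(mi);
    if (!GetMonitorInfoW(monitor, &mi))
        return false;

    UINT32 pathCount = 0, modeCount = 0;
    if (GetDisplayConfigBufferSizes(QDC_ONLY_ACTIVE_PATHS, &pathCount, &modeCount) != ERROR_SUCCESS)
        return false;

    std::vector<DISPLAYCONFIG_PATH_INFO> paths(pathCount);
    std::vector<DISPLAYCONFIG_MODE_INFO> modes(modeCount);
    if (QueryDisplayConfig(QDC_ONLY_ACTIVE_PATHS, &pathCount, paths.data(),
                           &modeCount, modes.data(), nullptr) != ERROR_SUCCESS)
        return false;

    for (UINT32 i = 0; i < pathCount; ++i)
    {
        // Match the path to this monitor by its GDI device name (e.g. \\.\DISPLAY1).
        DISPLAYCONFIG_SOURCE_DEVICE_NAME source = {};
        source.header.type      = DISPLAYCONFIG_DEVICE_INFO_GET_SOURCE_NAME;
        source.header.size      = sizeof(source);
        source.header.adapterId = paths[i].sourceInfo.adapterId;
        source.header.id        = paths[i].sourceInfo.id;
        if (DisplayConfigGetDeviceInfo(&source.header) != ERROR_SUCCESS)
            continue;
        if (wcscmp(source.viewGdiDeviceName, mi.szDevice) != 0)
            continue;

        // Internal panels typically report an embedded output technology.
        const auto tech = paths[i].targetInfo.outputTechnology;
        return tech == DISPLAYCONFIG_OUTPUT_TECHNOLOGY_INTERNAL ||
               tech == DISPLAYCONFIG_OUTPUT_TECHNOLOGY_DISPLAYPORT_EMBEDDED ||
               tech == DISPLAYCONFIG_OUTPUT_TECHNOLOGY_UDI_EMBEDDED;
    }
    return false;
}
```

In your app you could pair that with `MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST)` to get the monitor your window is on, and `GetSystemMetrics(SM_DIGITIZER)` to check whether touch hardware is present at all; whether that hardware is the internal panel's digitizer is exactly the part that isn't trivial, as noted above.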