How to handle touch device mapping in multi-monitor setups using Windows API?

Julio Silva20 5 Reputation points
2024-10-30T17:17:27.23+00:00

When the Windows settings are adjusted to use only the external monitor, my application—designed for hardware diagnostics and reliant on the Windows API—still allows the touchscreen module to be activated and tested. This is problematic because the external monitor does not have touchscreen capabilities, yet the application cannot distinguish between monitors. As a result, users may inadvertently access touchscreen functionalities that should not be available, leading to confusion and incorrect test results.

This problem arises from the way the Windows API, specifically the POINTER_DEVICE_INFO structure, handles device mapping. The structure's monitor field (an HMONITOR) indicates the monitor the touch digitizer is mapped to, but when only the external monitor is active, the API reports the touch device as associated with that monitor. As a result, the touchscreen test remains accessible even though the active display has no touch hardware.
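For reference, this is roughly how the application reads that mapping today — a minimal sketch, not my production code (the printed strings are illustrative only):

```cpp
// Minimal sketch: list each pointer device and the monitor Windows has mapped it to.
#define _WIN32_WINNT 0x0602   // GetPointerDevices requires Windows 8 or later
#include <windows.h>
#include <vector>
#include <cwchar>
#pragma comment(lib, "user32.lib")

int main()
{
    UINT32 count = 0;
    if (!GetPointerDevices(&count, nullptr) || count == 0)
    {
        std::wprintf(L"No pointer devices reported.\n");
        return 0;
    }

    std::vector<POINTER_DEVICE_INFO> devices(count);
    if (!GetPointerDevices(&count, devices.data()))
        return 1;

    for (const POINTER_DEVICE_INFO& dev : devices)
    {
        // dev.monitor is the HMONITOR the OS mapped this digitizer to.
        MONITORINFOEXW mi{};
        mi.cbSize = sizeof(mi);
        GetMonitorInfoW(dev.monitor, &mi);

        std::wprintf(L"device=%ls type=%u mapped to monitor %ls\n",
                     dev.productString,
                     static_cast<unsigned>(dev.pointerDeviceType),
                     mi.szDevice);
    }
    return 0;
}
```

With only the external monitor enabled, this reports the touch digitizer as mapped to that monitor, which is exactly the behavior described above.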

Given the complexities involved and the limitations of the current API, I would appreciate any guidance on potential solutions or workarounds that may help to accurately distinguish between active touchscreen monitors and non-touch monitors in multi-display configurations.

Your assistance in addressing this issue would be invaluable, as it directly impacts the usability and functionality of applications relying on touch input in multi-monitor environments. Thank you for your attention to this matter.

Windows development Windows API - Win32

1 answer

  1. Ross Nichols 0 Reputation points Microsoft Employee
    2024-12-07T01:17:51.9433333+00:00

    Just to clarify, your setup is this, correct?

    • One touch-enabled display [A]
    • One non-touch-enabled display [B]
    • Display settings configured to show only [B]

    This sounds like by-design behavior of the Windows API. The OS has to use a series of heuristics to map touch digitizers to displays, since not all digitizers and displays report the information needed to map them automatically. The heuristics will (a) only ever map digitizers to active displays, and (b) always map a digitizer somewhere (with the ultimate fallback being the primary monitor). In fact, if you touch display [A] in the configuration above, you'll find that it produces touch input on display [B], which is why the API is technically returning the result you're seeing.
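    Because the mapping is recomputed whenever the set of active displays changes, a diagnostic app should re-query it rather than cache it at startup. Here is a minimal sketch, assuming a normal Win32 message loop and that WM_DISPLAYCHANGE is received when the user switches projection modes (typically the case); RefreshTouchTestAvailability is a hypothetical helper in your app, shown only to illustrate the flow:

    ```cpp
    #define _WIN32_WINNT 0x0602   // GetPointerDevices requires Windows 8 or later
    #include <windows.h>
    #include <vector>

    // Hypothetical hook: re-evaluate whether the touch test should be offered,
    // based on the refreshed digitizer -> monitor mapping.
    void RefreshTouchTestAvailability(const std::vector<POINTER_DEVICE_INFO>& devices);

    LRESULT CALLBACK DiagWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_DISPLAYCHANGE:   // display configuration changed; mapping may have moved
        {
            UINT32 count = 0;
            GetPointerDevices(&count, nullptr);
            std::vector<POINTER_DEVICE_INFO> devices(count);
            if (count > 0 && GetPointerDevices(&count, devices.data()))
                RefreshTouchTestAvailability(devices);
            return 0;
        }
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProcW(hwnd, msg, wParam, lParam);
    }
    ```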

    It could reasonably be argued that the mapping heuristics should consider all displays, not just active ones, and ignore digitizers that end up mapped to inactive displays. The risk would be (a) ignoring digitizers that were incorrectly associated with an inactive display, or (b) breaking people who actually rely on the behavior I described above, where their touchscreen essentially acts like an external digitizer (arguably more reasonable for pen, since you'd at least see a hover effect and know what you're interacting with).

    If you file feedback under the Input & Language > Touch path and link it here, I can promote it internally and discuss with the team. That said, the risk of disabling digitizers when the user wants to use them may outweigh any perceived benefit of updating the heuristics. This logic has been in place for over a decade at this point.

    For your scenario, can you instruct the user to leave both displays active? You could also add code to specifically detect the "touch digitizer is internal but current display is external" case, but it's not trivial to determine whether the digitizer is internal.
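    One rough heuristic, offered as a sketch rather than a guaranteed check: look at the active display paths and see whether any of them reports an embedded/internal output technology. If no internal panel is active, the built-in touchscreen is presumably off and the touch test can be hidden. This does not prove the digitizer belongs to the internal panel; it only covers the common laptop/tablet case.

    ```cpp
    #define _WIN32_WINNT 0x0602
    #include <windows.h>
    #include <vector>
    #pragma comment(lib, "user32.lib")

    // Returns true if any currently active display path is an embedded (built-in) panel.
    bool IsAnyActiveDisplayInternal()
    {
        UINT32 pathCount = 0, modeCount = 0;
        if (GetDisplayConfigBufferSizes(QDC_ONLY_ACTIVE_PATHS, &pathCount, &modeCount) != ERROR_SUCCESS)
            return false;

        std::vector<DISPLAYCONFIG_PATH_INFO> paths(pathCount);
        std::vector<DISPLAYCONFIG_MODE_INFO> modes(modeCount);
        if (QueryDisplayConfig(QDC_ONLY_ACTIVE_PATHS, &pathCount, paths.data(),
                               &modeCount, modes.data(), nullptr) != ERROR_SUCCESS)
            return false;

        for (UINT32 i = 0; i < pathCount; ++i)
        {
            switch (paths[i].targetInfo.outputTechnology)
            {
            case DISPLAYCONFIG_OUTPUT_TECHNOLOGY_INTERNAL:
            case DISPLAYCONFIG_OUTPUT_TECHNOLOGY_DISPLAYPORT_EMBEDDED:
            case DISPLAYCONFIG_OUTPUT_TECHNOLOGY_UDI_EMBEDDED:
                return true;   // a built-in laptop/tablet panel is currently active
            default:
                break;
            }
        }
        return false;
    }
    ```

    Your diagnostic could then offer the touch test only when IsAnyActiveDisplayInternal() returns true (or, more conservatively, warn the user that the touch panel may be on an inactive display).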

