Trying to calculate the difference between map views will quickly become very fragmented as the user moves around the map: you will end up with a bunch of different-sized boxes (due to zoom changes) that you then need to merge into a polygon, and then calculate the difference between the old polygon area and the new map view's bounding box. It can be done, but there are several more common approaches that are used instead. Here are a couple to consider:
- If you only have 1000 locations, that's nothing for the map to handle (the Bing Maps Web SDK can easily handle 10K points). Load all the points when the map loads. If you use this approach, a good optimization is to load only the bare minimum data needed to render the points, usually the coordinates and a unique ID that can be used to retrieve detailed information from the database for that point. Then, when a user clicks on a point, make a call to the database to get the additional details. You can tweak this approach and include a couple more bits of info by default, such as the name of the location, so that there is a more progressive loading experience for the user when showing a popup. Since the detailed information for a single point should be relatively small, the loading time should be fairly easy to keep under 1 second. I would recommend this approach for your scenario, unless you plan to have a lot more than 1000 points in the future. A sketch of this pattern follows this list.
- When working with large datasets (100K+ points), one common approach is to load data for the current map view as you outlined, but instead of trying to figure out the differences between map views, simply load all points for the new view. Sure, you could optimize it a bit with the approach you described, but if performance is the end goal, that actually works against you: you end up having to run a polygon intersection query, which is more complex than a bounding box query. This can lead to longer processing times on the server side, which in turn reduces scalability or increases costs (to counteract the reduced scalability). That said, a couple of other optimizations are common when using this approach: (A) only load the minimum info needed to display the point (ID/coordinates, like #1), then load additional info separately when needed; (B) add server-side clustering logic to your database to group points based on zoom level, so that the total number of points displayed when zoomed out is never a crazy amount. A sketch of the per-view loading is included after this list.
- Another approach for working with massive datasets (100K+ points; it works with billions of points too) is to turn your data into a vector tile set. This basically indexes your points using the same tile structure as the map itself. As you move the map, the tiles needed for the current view are loaded and include all the points within them. You can pre-generate the tiles and host them as a static dataset (good for static data and high-performance situations), or create a dynamic tile service that generates tiles on the fly (good for real-time or regularly changing data). The tile math this relies on is shown in the last sketch below.
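For the first approach, here is a minimal sketch using the Bing Maps V8 Web SDK. The `/api/locations` endpoints and the `MinimalPoint` shape are hypothetical placeholders for whatever your backend exposes; the pattern is what matters: push lightweight pushpins up front, then fetch details only on click.

```typescript
// Minimal-payload loading with on-click detail fetch (approach #1).
// Assumes the Bing Maps V8 script is already loaded on the page.
declare const Microsoft: any;

interface MinimalPoint {
  id: string;
  lat: number;
  lon: number;
}

async function loadAllPoints(map: any): Promise<void> {
  // Only ID + coordinates come down in the initial payload; at 1000 points
  // this is a trivially small request.
  const points: MinimalPoint[] = await (await fetch('/api/locations/minimal')).json();

  // One shared infobox, reused for whichever pin was clicked.
  const infobox = new Microsoft.Maps.Infobox(map.getCenter(), { visible: false });
  infobox.setMap(map);

  for (const p of points) {
    const pin = new Microsoft.Maps.Pushpin(new Microsoft.Maps.Location(p.lat, p.lon));
    pin.metadata = { id: p.id }; // stash the ID for the detail lookup
    Microsoft.Maps.Events.addHandler(pin, 'click', async (e: any) => {
      // Lazy-load the full record for just this point.
      const details = await (await fetch(`/api/locations/${e.target.metadata.id}`)).json();
      infobox.setOptions({
        location: e.target.getLocation(),
        title: details.name,
        description: details.description,
        visible: true,
      });
    });
    map.entities.push(pin);
  }
}
```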
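For the per-view loading in the second approach, a similarly hedged sketch: `/api/locations/bbox` is a hypothetical endpoint that runs a plain bounding box query (and, optionally, the zoom-based clustering from point B) against your database.

```typescript
// Reload points for the current bounding box whenever the view settles
// (approach #2). No polygon differencing: the server only ever answers a
// cheap bounding box query, which a spatial index handles well.
declare const Microsoft: any;

function wireViewportLoading(map: any): void {
  Microsoft.Maps.Events.addHandler(map, 'viewchangeend', async () => {
    const bounds = map.getBounds(); // LocationRect for the current view
    const url =
      `/api/locations/bbox?north=${bounds.getNorth()}&south=${bounds.getSouth()}` +
      `&east=${bounds.getEast()}&west=${bounds.getWest()}&zoom=${map.getZoom()}`;
    const points: Array<{ lat: number; lon: number }> = await (await fetch(url)).json();

    // Simplest strategy: clear and re-add. Passing the zoom level lets the
    // server return clusters instead of raw points when zoomed far out.
    map.entities.clear();
    for (const p of points) {
      map.entities.push(new Microsoft.Maps.Pushpin(new Microsoft.Maps.Location(p.lat, p.lon)));
    }
  });
}
```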
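And for the vector tile route, the indexing is just the standard Web Mercator tile math (Bing Maps addresses tiles by quadkey). This sketch shows how a point maps to the tile, and quadkey, containing it at a given zoom, which is how a tile generator buckets points:

```typescript
// Web Mercator tile indexing behind approach #3: map a point to the tile
// that contains it at a given zoom, then to the quadkey Bing Maps uses.

function pointToTile(lat: number, lon: number, zoom: number): { x: number; y: number } {
  const scale = Math.pow(2, zoom); // number of tiles per axis at this zoom
  const x = Math.floor(((lon + 180) / 360) * scale);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * scale
  );
  return { x, y };
}

function tileToQuadkey(x: number, y: number, zoom: number): string {
  // Interleave the bits of x and y, most significant bit first.
  let quadkey = '';
  for (let z = zoom; z > 0; z--) {
    const mask = 1 << (z - 1);
    let digit = 0;
    if (x & mask) digit += 1;
    if (y & mask) digit += 2;
    quadkey += digit;
  }
  return quadkey;
}

// Example: Seattle at zoom 10 lands in tile (164, 357).
const { x, y } = pointToTile(47.61, -122.33, 10);
console.log(tileToQuadkey(x, y, 10)); // "0212300302"
```

At generation time you bucket every point by its quadkey per zoom level; at runtime the map only requests the quadkeys intersecting the current view, so the cost per pan or zoom stays flat no matter how large the dataset is.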