In my 2D game, I have static and dynamic objects, and there may be several cameras. My problem: identify the objects that intersect the current camera's view rectangle.
Currently, I just iterate over all existing objects (static and dynamic alike) and perform an AABB test against each camera's view rectangle. This seems acceptable for the very dynamic objects, but not for the static ones, of which there can be tens of thousands (static level geometry scattered throughout the scene).
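The brute-force pass described above can be sketched like this (all names are my own, not from the question itself):

```python
from dataclasses import dataclass

@dataclass
class AABB:
    min_x: float
    min_y: float
    max_x: float
    max_y: float

def overlaps(a: AABB, b: AABB) -> bool:
    # Standard AABB test: the boxes intersect unless they are
    # separated on the x axis or on the y axis.
    return not (a.max_x < b.min_x or b.max_x < a.min_x or
                a.max_y < b.min_y or b.max_y < a.min_y)

def visible_objects(camera: AABB, objects: list[AABB]) -> list[AABB]:
    # O(n) per camera per frame -- fine for thousands of objects,
    # questionable for tens of thousands of static ones.
    return [o for o in objects if overlaps(camera, o)]
```

This is the baseline any spatial structure has to beat.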
I looked at several data structures that could solve my problem:
This was the first structure I looked at; the problem is that it forces my scenes to have a fixed size (fine for static objects, but not for dynamic ones).
It seems good, but the rebalancing overhead looks too high with many dynamic objects.
The main problem for me: if the camera is zoomed far out, a query has to visit a huge number of mostly empty spatial-hash cells, which led to poor performance.
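To make the zoom-out problem concrete, here is a minimal spatial-hash sketch (cell size, class and method names are illustrative assumptions). The cost of `query` grows with the area of the camera rectangle in cells, regardless of how many objects actually exist:

```python
from collections import defaultdict
from math import floor

class SpatialHash:
    def __init__(self, cell: float = 32.0):
        self.cell = cell
        self.cells = defaultdict(set)  # (ix, iy) -> object ids

    def _keys(self, min_x, min_y, max_x, max_y):
        # Every grid cell covered by the box.
        x0, y0 = floor(min_x / self.cell), floor(min_y / self.cell)
        x1, y1 = floor(max_x / self.cell), floor(max_y / self.cell)
        for ix in range(x0, x1 + 1):
            for iy in range(y0, y1 + 1):
                yield ix, iy

    def insert(self, obj_id, box):
        for key in self._keys(*box):
            self.cells[key].add(obj_id)

    def query(self, box):
        # A zoomed-out camera covers roughly (w/cell) * (h/cell) keys,
        # most of them empty -- exactly the cost complained about above.
        found = set()
        for key in self._keys(*box):
            found |= self.cells[key]
        return found
```

A 1000x1000 camera rectangle over 32-unit cells already touches about a thousand keys even when the scene is nearly empty.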
In general, my criteria for a good solution to this problem:
Dynamic size: the solution should not limit the size of the scene or require expensive recalculation when the scene is resized.
Good query performance (for camera)
Good support for very dynamic objects: the cost of processing objects whose positions change every frame should be low.
The typical maximum number of dynamic objects in my game at one time is probably 5000, and assume they all change their position every frame. Is there any data structure that, given such frequent insertions and removals, can actually beat a plain per-frame AABB comparison of every object against the camera?
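One way to frame the trade-off in the question: keep the tens of thousands of static objects in a structure built once at load time, and simply scan the ~5000 dynamic boxes linearly, paying zero per-frame insertion/removal cost. A hedged sketch of that hybrid (grid cell size and all names are my assumptions, not a definitive answer):

```python
from collections import defaultdict
from math import floor

def overlaps(a, b):
    # Boxes are (min_x, min_y, max_x, max_y) tuples.
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

class HybridCuller:
    def __init__(self, static_boxes, cell: float = 64.0):
        # Static geometry goes into a coarse grid, built exactly once.
        self.cell = cell
        self.static_boxes = static_boxes
        self.grid = defaultdict(list)  # cell key -> static object indices
        for i, box in enumerate(static_boxes):
            for key in self._keys(box):
                self.grid[key].append(i)

    def _keys(self, box):
        x0, y0 = floor(box[0] / self.cell), floor(box[1] / self.cell)
        x1, y1 = floor(box[2] / self.cell), floor(box[3] / self.cell)
        return [(ix, iy) for ix in range(x0, x1 + 1)
                         for iy in range(y0, y1 + 1)]

    def query(self, camera, dynamic_boxes):
        # Static: only grid cells under the camera are examined, so the
        # tens of thousands of off-screen boxes are never touched.
        static_hits = set()
        for key in self._keys(camera):
            for i in self.grid.get(key, ()):
                if overlaps(camera, self.static_boxes[i]):
                    static_hits.add(i)
        # Dynamic: a linear scan over a few thousand boxes per frame,
        # with no structure to update as they move.
        dynamic_hits = [i for i, b in enumerate(dynamic_boxes)
                        if overlaps(camera, b)]
        return static_hits, dynamic_hits
```

Whether the linear dynamic pass can be beaten at this scale is exactly what the question asks; this sketch just avoids paying any maintenance cost for it.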