I assume that the trees are distributed roughly evenly throughout the forest. If so, just use the 30x30 (or 15x15) lattice blocks as hash keys into an open-addressed hash table. Find the keys of all blocks intersecting the search circle, and for each key scan the entries starting at that key's address until you hit one marked as the last in its bucket.
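As a sketch of the first step, here is one way to enumerate every lattice block the search circle can touch (the helper name and the bounding-box approximation are my own, not from the answer):

```python
def sectors_for_circle(cx, cy, r, sector=30):
    """Keys of all lattice blocks the search circle can intersect.

    Conservative: uses the circle's bounding box, so corner blocks the
    circle only grazes past may be included and filtered out later.
    """
    sx_lo, sx_hi = int((cx - r) // sector), int((cx + r) // sector)
    sy_lo, sy_hi = int((cy - r) // sector), int((cy + r) // sector)
    return [(sx, sy)
            for sx in range(sx_lo, sx_hi + 1)
            for sy in range(sy_lo, sy_hi + 1)]
```

Each returned key is then probed in the hash table as described.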
    0---------10---------20---------30--------40---------50-----   address # line
    (0,0)     (0,30)     (0,60)     (30,0)    (30,30)    (30,60)   hash key values
    (1,3)     (10,15)    (3,46)     (24,9.)   (23,65.)   (15,55.)  tree coordinates + "." flag
For example, to get the trees in (0,0) ... (30,30), map (0,0) to address 0 and read the entries (1,3) and (10,15); reject (3,46), because it lies outside the sector; read (24,9) and stop, because it is marked as the last tree in this sector.
To get the trees in (0,60) ... (30,90), map (0,60) to address 20. Skip (3,46) and (24,9), read (23,65), and stop because it is marked as the last.
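The whole scheme can be sketched like this. All names, the stride of 4 slots per sector, and the 3x3 demo forest are my own assumptions; a real table would size the slack and randomize the sector order more carefully:

```python
SECTOR = 30        # side length of a lattice block, as above
STRIDE = 4         # table slots reserved per sector; must exceed the
                   # average number of trees per sector (assumed here)
NX, NY = 3, 3      # demo forest of 3x3 sectors (assumed)

def sector_of(x, y):
    return (int(x // SECTOR), int(y // SECTOR))

def base_address(sx, sy):
    # Identity mapping for readability; in practice you would randomize
    # the sector-to-bucket order, as noted later in the answer.
    return (sx * NY + sy) * STRIDE

def build(trees):
    table = [None] * (NX * NY * STRIDE + STRIDE)      # some slack at the end
    for x, y in trees:
        i = base_address(*sector_of(x, y))
        while table[i] is not None:                   # linear probing forward
            i += 1
        table[i] = [x, y, False, False]               # [x, y, last?, dummy?]
    # Flag the highest-addressed entry of each sector as the last one.
    last = {}
    for i, e in enumerate(table):
        if e is not None:
            last[sector_of(e[0], e[1])] = i
    for i in last.values():
        table[i][2] = True
    # A dummy "tree" per empty sector, so queries there terminate.
    for sx in range(NX):
        for sy in range(NY):
            if (sx, sy) not in last:
                i = base_address(sx, sy)
                while table[i] is not None:
                    i += 1
                table[i] = [sx * SECTOR, sy * SECTOR, True, True]
    return table

def query_sector(table, sx, sy):
    """All trees lying in sector (sx, sy)."""
    found = []
    i = base_address(sx, sy)
    while True:
        e = table[i]
        if e is not None and sector_of(e[0], e[1]) == (sx, sy):
            if not e[3]:                              # ignore the dummy
                found.append((e[0], e[1]))
            if e[2]:                                  # marked last: stop
                return found
        i += 1
```

With the six trees from the diagram, querying sector (0,0) reads (1,3) and (10,15), skips foreign entries, and stops at the flagged (24,9), matching the walkthrough above.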
This will be quite memory-efficient, since it avoids storing pointers, which would otherwise be large compared to the actual data. However, open addressing does require leaving some fraction of the table empty.
The illustration is not to scale: in reality there is room for several entries between consecutive hash-key addresses. So you only need to skip entries when the preceding sectors hold more trees than the average.
This turns hash collisions to your advantage, so the layout is less random than a typical hash table. (Not every entry sits at its own hash address.) And since dense areas of the forest will often be adjacent, you should randomize the mapping of sectors to "buckets", so that a dense sector will, with luck, overflow into a less dense one, or the next, or the next.
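A randomized sector-to-bucket mapping can be as simple as a shuffled permutation of sector indices. This helper is my own illustration, not from the answer:

```python
import random

def make_base_address(nx, ny, stride, seed=42):
    """Build a base-address function whose sector order is a random
    permutation, so adjacent (often similarly dense) sectors land in
    unrelated buckets and a dense one tends to overflow into a sparse one."""
    order = list(range(nx * ny))
    random.Random(seed).shuffle(order)
    return lambda sx, sy: order[sx * ny + sy] * stride
```

The permutation still assigns every sector a distinct, evenly spaced base address; only the order is scrambled.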
In addition, there is the problem of empty sectors and of terminating the iteration. You can insert a dummy tree into each empty sector to mark it, or use some other simple hack.
Sorry for the long explanation. This kind of thing is easier to implement than to document. But the performance and space efficiency can be great.