I think there is a much simpler way to do this. Consider storing all the data in a balanced binary search tree, where the keys are the login times and the values are lists of all the people who logged in at that time (in case there can be several logins at the same time). From there, you can find all the people who logged in during the time interval between T1 and T2 by finding the smallest node in the BST whose time is greater than or equal to T1, and then repeatedly computing successors of that node until you reach a node whose time is strictly greater than T2.
Doing a search in the BST to find the first node whose time is greater than or equal to T1 takes O(log n) time in a balanced BST, and walking the inorder successors takes O(k) time overall, where k is the number of matches you report. This gives a total time of O(log n + k). Since any algorithm has to spend at least O(k) time reporting all the matching logins, this approach has very low overhead.
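Here is a minimal sketch of that idea. Python's standard library has no balanced BST, so I'm assuming the third-party sortedcontainers package as a stand-in: its SortedDict offers the same ordered-map operations (find the first key >= T1, then walk keys in order until one exceeds T2), even though it isn't literally a tree internally.

```python
from sortedcontainers import SortedDict

logins = SortedDict()  # login time -> list of users who logged in at that time

def record_login(time, user):
    logins.setdefault(time, []).append(user)

def users_in_range(t1, t2):
    """Return all users whose login time lies in [t1, t2]."""
    result = []
    # irange walks the keys in sorted order, starting at the smallest
    # key >= t1 and stopping once a key exceeds t2 -- the successor walk
    # described above, done for us by the library.
    for t in logins.irange(t1, t2):
        result.extend(logins[t])
    return result

record_login(5, "alice")
record_login(5, "bob")       # two logins at the same time
record_login(9, "carol")
print(users_in_range(4, 7))  # ['alice', 'bob']
```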
Alternatively, if you receive the data as a stream from the server (i.e. new logins always arrive in increasing time order), you can simply use a standard array to store all the records and append new entries to the end. Since time always moves forward, the array is always sorted, so you can use binary search to find the starting point of the range. Assuming the data is not pathologically constructed, you can also use interpolation search so that the expected lookup time is O(log log n) rather than O(log n), giving an expected O(log log n + k) time to report k matching elements.
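A sketch of that streaming variant, assuming logins arrive in non-decreasing time order; bisect from the standard library provides the O(log n) binary search (interpolation search could replace it, but there is no stdlib version):

```python
import bisect

times = []   # sorted login times, appended as they arrive
users = []   # users[i] logged in at times[i]

def record_login(time, user):
    times.append(time)   # stays sorted because time only moves forward
    users.append(user)

def users_in_range(t1, t2):
    """Return all users whose login time lies in [t1, t2]."""
    lo = bisect.bisect_left(times, t1)    # first index with times[i] >= t1
    hi = bisect.bisect_right(times, t2)   # first index with times[i] > t2
    return users[lo:hi]

record_login(1, "alice")
record_login(4, "bob")
record_login(8, "carol")
print(users_in_range(2, 8))  # ['bob', 'carol']
```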
As for handling overlapping ranges, there are standard algorithms for taking a set of ranges and merging them into a minimum number of non-overlapping ranges. You can apply one of these before performing the searches to handle this case.
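One standard way to do that merge, sketched here: sort the query ranges by start time and fold overlapping ones together before running the searches.

```python
def merge_ranges(ranges):
    """Collapse a list of (start, end) ranges into non-overlapping ones."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous range: extend it if needed.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(merge_ranges([(1, 5), (4, 9), (12, 15)]))  # [(1, 9), (12, 15)]
```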
Hope this helps!