To shrink and grow a vector at both ends, you can use the idea of slicing, reserving extra memory fore and aft so that growth is efficient when it is needed.
Simply write a class that stores not only a length but also the indices of the first and last elements, together with a suitably sized vector, forming a data window into an underlying block of stored floats. A C++ class can provide member functions for things like deleting elements, array-style access, finding the Nth largest value, and shifting slice values up or down to insert new elements while maintaining sorted order. If no spare elements remain, dynamically allocating a new, larger block of float storage allows continued growth at the cost of copying the array.
A circular buffer is designed as a FIFO, with new elements appended at the back, removal from the front, and no insertion in the middle; a self-written class can also (trivially) support index ranges other than 0..N-1.
Because of memory locality, the avoidance of excessive indirection through pointer chains, and the pipelining of subscript calculations on a modern processor, an array-based (or vector-based) solution is likely to be the most efficient, despite the element copying that insertions require. std::deque is a candidate, but it does not guarantee contiguous storage.
Additional information: a look at slicing classes turns up some plausible alternatives to evaluate:
A) std::slice, which works with std::slice_array (on std::valarray)
B) The Boost.Range classes
I hope this is the specific information you were hoping for; in general, a simpler solution is preferable to a complex one. I would expect slices and ranges over sorted data sets to be fairly common, for example when filtering experimental data where "outliers" are excluded as erroneous readings.
I think a good solution should be O(N log N) for the initial sort, 2×O(1) for removal at either end, and O(log N + 1) for any binary searches used to filter by outlying values rather than deleting a fixed number of the smallest or largest entries. It also matters that the constant hidden in the "O" is small: an O(1) algorithm can in practice be slower than an O(N) one for realistic values of N.