I implemented something like this a while ago, with downsampling "on the fly" for some graphs. The drawback is that older data loses resolution, but I think that is acceptable in your case. And if you are interested in peaks, you can store the max, avg, and min values.
The algorithm is not too complicated either. If you have 5 samples per second and want to keep that granularity for an hour, you need to store 5 * 60 * 60 = 18000 samples for that hour.
For the rest of the day, you can go down to one value every 5 seconds, reducing the volume by a factor of 25. The aggregation job then runs every 5 seconds and computes the average, min, and max of the 5-second interval that has just become one hour old. That gives 12 * 60 * 23 = 16560 more samples for the remaining 23 hours of the day.
If you store data further back than a day, I recommend one sample per minute, reducing the volume by another factor of 12, perhaps keeping two weeks, so you have 60 * 24 * 13 = 18720 more samples for the remaining 13 days of a two-week window.
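To make the tiering concrete, here is a minimal Python sketch of that aggregation step. The TIERS table, the Aggregate type, and aggregate_window are my own illustrative names and not part of the scheme above; the job scheduling and database writes are left out.

    from dataclasses import dataclass
    from statistics import mean

    # Retention tiers as described above: (max_age_seconds, sample_interval_seconds).
    TIERS = [
        (60 * 60,           0.2),   # last hour: 5 samples per second
        (24 * 60 * 60,      5),     # last day: one min/avg/max per 5 seconds
        (14 * 24 * 60 * 60, 60),    # last two weeks: one min/avg/max per minute
    ]

    @dataclass
    class Aggregate:
        start: float      # window start time (unix seconds)
        minimum: float
        average: float
        maximum: float

    def aggregate_window(samples):
        """Collapse the raw samples of one window into min/avg/max.

        `samples` is a list of (timestamp, value) pairs that have just
        crossed the age boundary of the finer tier.
        """
        values = [v for _, v in samples]
        start = min(t for t, _ in samples)
        return Aggregate(start, min(values), mean(values), max(values))

    # Example: collapse a 5-second window (25 samples at 5 Hz) that just
    # became one hour old.
    window = [(1700000000.0 + i * 0.2, 20.0 + i * 0.1) for i in range(25)]
    print(aggregate_window(window))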
Particular attention should be paid to how the data is stored in the database. To get maximum performance, you should make sure the data of one sensor is kept together in one block of the database. If you use PostgreSQL, for example, you know that a block is 8192 bytes long and that a record is never split across two blocks. Assuming one sample takes 4 bytes, and allowing for the per-row overhead, you can fit 2048 minus a few samples in one block. At maximum resolution that is about 2040 / (5 * 60), a bit under 7 minutes of data. So it might be a good idea to always insert 6 (or maybe 5) minutes at a time, filling the later minutes with dummy values and updating them as the real measurements arrive, so that queries for a single sensor's data read from fewer blocks.
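A hedged sketch of that chunked-insert idea, assuming psycopg2 and a made-up samples_raw table that packs one 6-minute chunk (1800 slots at 5 Hz) per row as a real[] array. The table name, DSN, and chunk size are my assumptions, and in practice you would also need to watch storage/TOAST settings to keep such a row inline in a block.

    from datetime import datetime, timezone
    import psycopg2

    CHUNK_SAMPLES = 5 * 60 * 6   # 6 minutes at 5 samples/second = 1800 slots

    # Assumed schema: one row per sensor per 6-minute chunk.
    DDL = """
    CREATE TABLE IF NOT EXISTS samples_raw (
        sensor_id   integer     NOT NULL,
        chunk_start timestamptz NOT NULL,
        samples     real[]      NOT NULL,
        PRIMARY KEY (sensor_id, chunk_start)
    )
    """

    def allocate_chunk(cur, sensor_id, chunk_start):
        """Pre-insert a whole chunk filled with NULL dummies, to be updated later."""
        cur.execute(
            """
            INSERT INTO samples_raw (sensor_id, chunk_start, samples)
            VALUES (%s, %s, array_fill(NULL::real, ARRAY[%s]))
            ON CONFLICT DO NOTHING
            """,
            (sensor_id, chunk_start, CHUNK_SAMPLES),
        )

    def store_sample(cur, sensor_id, chunk_start, slot, value):
        """Overwrite one dummy slot (1-based) as the real measurement arrives."""
        cur.execute(
            "UPDATE samples_raw SET samples[%s] = %s "
            "WHERE sensor_id = %s AND chunk_start = %s",
            (slot, value, sensor_id, chunk_start),
        )

    conn = psycopg2.connect("dbname=metrics")   # hypothetical DSN
    chunk = datetime(2024, 1, 1, tzinfo=timezone.utc)   # example chunk boundary
    with conn, conn.cursor() as cur:
        cur.execute(DDL)
        allocate_chunk(cur, sensor_id=1, chunk_start=chunk)
        store_sample(cur, sensor_id=1, chunk_start=chunk, slot=1, value=20.5)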
In any case, I would use separate tables for the different resolution levels of the sensor data.
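For example, the coarser tiers could live in their own tables along these lines; the names and columns are purely illustrative, and splitting the tiers keeps each table easy to prune and its indexes small.

    # Hypothetical per-resolution tables for the 5-second and 1-minute tiers.
    AGGREGATE_DDL = """
    CREATE TABLE IF NOT EXISTS samples_5s (
        sensor_id    integer     NOT NULL,
        window_start timestamptz NOT NULL,
        min_value    real        NOT NULL,
        avg_value    real        NOT NULL,
        max_value    real        NOT NULL,
        PRIMARY KEY (sensor_id, window_start)
    );
    CREATE TABLE IF NOT EXISTS samples_1min (LIKE samples_5s INCLUDING ALL);
    """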