I've run into a situation similar to the one you describe; based on my experience, here is what I can say:
Whenever a query runs against an Azure Storage table without specifying the partition key, it performs a full table scan. In other words, the table is indexed by partition key (and row key), so proper data partitioning is the key to getting results quickly.
So you first need to think about which queries you will run against the table: for example, logs that occurred over a given period of time, logs for a particular product, and so on.
One approach is to use reverse ticks rounded to the nearest hour, rather than exact ticks, as part of the partition key. With such a partition key you can request data by the hour. Depending on how many rows fall into each partition, you can coarsen the granularity to a day. It also helps to keep related data together, which means the data for each product goes into its own table; that reduces both the number of partitions and the number of rows per partition.
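As a minimal sketch of the reverse-tick idea (the class and method names here are just illustrative, not from the original answer):

```csharp
using System;

static class PartitionKeys
{
    // Round the timestamp down to the hour, then reverse the ticks so the newest
    // data sorts first (Table Storage orders partition keys lexically).
    public static string HourlyReverseTicks(DateTime utc)
    {
        var hour = new DateTime(utc.Year, utc.Month, utc.Day, utc.Hour, 0, 0, DateTimeKind.Utc);
        long reverseTicks = DateTime.MaxValue.Ticks - hour.Ticks;
        // Zero-pad to a fixed width so lexical ordering matches numeric ordering.
        return reverseTicks.ToString("D19");
    }
}

// Usage: every row written between 10:00 and 10:59 UTC lands in the same partition.
// string pk = PartitionKeys.HourlyReverseTicks(DateTime.UtcNow);
```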
Basically, make sure you know the partition keys (exact values or ranges) in advance and issue your queries against those partition keys to get results faster.
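For example, a time window then maps directly to a partition-key range. This is a sketch only, assuming the Microsoft.Azure.Cosmos.Table SDK and reusing the hypothetical PartitionKeys helper above; LogEntity and QueryWindow are placeholder names:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.Cosmos.Table;

public class LogEntity : TableEntity
{
    public string Message { get; set; }
}

static class LogQueries
{
    // Reverse ticks make newer hours sort first, so the window [fromUtc, toUtc]
    // maps to the partition-key range [pk(toUtc), pk(fromUtc)].
    public static IEnumerable<LogEntity> QueryWindow(CloudTable table, DateTime fromUtc, DateTime toUtc)
    {
        string upper = PartitionKeys.HourlyReverseTicks(fromUtc); // older => larger reverse ticks
        string lower = PartitionKeys.HourlyReverseTicks(toUtc);   // newer => smaller reverse ticks

        string filter = TableQuery.CombineFilters(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.GreaterThanOrEqual, lower),
            TableOperators.And,
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.LessThanOrEqual, upper));

        return table.ExecuteQuery(new TableQuery<LogEntity>().Where(filter));
    }
}
```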
To speed up writes to a table, you can use batch operations. Be careful though: if one entity in a batch fails, the whole batch fails. Proper retry and error handling can save you here.
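A rough sketch of batched inserts with a simple retry, again assuming the Microsoft.Azure.Cosmos.Table SDK and the placeholder LogEntity above (remember that all entities in one batch must share the same partition key and a batch holds at most 100 operations):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Azure.Cosmos.Table;

static class BatchWriter
{
    public static void InsertInBatches(CloudTable table, IEnumerable<LogEntity> entities)
    {
        // Group by partition key, then split each group into chunks of 100.
        foreach (var group in entities.GroupBy(e => e.PartitionKey))
        {
            foreach (var chunk in group.Select((e, i) => new { e, i })
                                       .GroupBy(x => x.i / 100, x => x.e))
            {
                var batch = new TableBatchOperation();
                foreach (var entity in chunk)
                    batch.InsertOrReplace(entity);

                for (int attempt = 1; ; attempt++)
                {
                    try { table.ExecuteBatch(batch); break; }
                    catch (StorageException) when (attempt < 3)
                    {
                        // One bad entity fails the whole batch; back off and retry here,
                        // and inspect or requeue the offending entity in real code.
                        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(attempt));
                    }
                }
            }
        }
    }
}
```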
Alongside the table, you can use Blob storage to hold large amounts of related data. The idea is to store a chunk of related, serialized data as a single blob. You fetch one such blob to get all the data in it and do any further filtering or projection on the client side. For example, an hour's worth of data for a product goes into one blob; you can design a blob prefix naming scheme and, when needed, fetch exactly the blob you want. This helps you retrieve your data fairly quickly instead of performing a table scan for every query.
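A minimal sketch of such a naming scheme, assuming the Azure.Storage.Blobs SDK; the container name and the {product}/{yyyyMMddHH} convention are illustrative assumptions:

```csharp
using System;
using System.IO;
using Azure.Storage.Blobs;

static class HourlyBlobs
{
    // One blob per product per hour, so a time window maps to an exact list of
    // blob names and nothing needs to be listed or scanned.
    public static string BlobName(string productId, DateTime hourUtc) =>
        $"{productId}/{hourUtc:yyyyMMddHH}.bin.gz";

    public static Stream Open(BlobContainerClient container, string productId, DateTime hourUtc) =>
        container.GetBlobClient(BlobName(productId, hourUtc)).OpenRead();
}

// Usage: read the 13:00 UTC chunk for "product-42" and filter client-side.
// var container = new BlobContainerClient(connectionString, "hourlydata");
// using var stream = HourlyBlobs.Open(container, "product-42",
//     new DateTime(2024, 5, 1, 13, 0, 0, DateTimeKind.Utc));
```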
I took the blob approach myself and have used it for several years without any problems. I convert my collection to IList<IDictionary<string,string>> and use binary serialization plus GZip to store each blob. I use Reflection.Emit helper methods for fast access to object properties, so serialization and deserialization stay light on CPU and memory.
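To make the "serialize + GZip into one blob" step concrete, here is a sketch using only the BCL; a hand-rolled binary layout stands in for whatever binary serializer you prefer, and the Reflection.Emit property-access helpers are omitted:

```csharp
using System.Collections.Generic;
using System.IO;
using System.IO.Compression;

static class ChunkSerializer
{
    public static byte[] Serialize(IList<IDictionary<string, string>> rows)
    {
        using var buffer = new MemoryStream();
        using (var gzip = new GZipStream(buffer, CompressionLevel.Optimal, leaveOpen: true))
        using (var writer = new BinaryWriter(gzip))
        {
            writer.Write(rows.Count);              // number of rows
            foreach (var row in rows)
            {
                writer.Write(row.Count);           // number of key/value pairs in this row
                foreach (var kvp in row)
                {
                    writer.Write(kvp.Key);
                    writer.Write(kvp.Value ?? string.Empty);
                }
            }
        }
        return buffer.ToArray();                   // compressed payload to upload as one blob
    }
}
```

Deserialization mirrors this with a GZipStream in decompression mode and a BinaryReader.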
Storing data in blobs this way lets me write more data in less time and read it back faster.