I am evaluating Azure Cosmos DB for an application that will require high read throughput and the ability to scale. 99% of the activity will be reads, but occasionally we will need to insert anywhere from a handful of documents up to a batch of potentially several million.
I created a test collection provisioned with 2,500 RU/s. However, I ran into problems inserting even a total of 120 small (500-byte) documents (I get "Request rate is large" errors).
How am I supposed to use a document database in any practical way if, whenever I want to insert some documents, the inserts consume all my RUs and prevent anyone from reading?
Yes, I could increase the provisioned RUs, but if I only need 2,500 for reads, I don't want to pay for 10,000 just to cover the occasional bulk insert.
Reads need to be as fast as possible, ideally in the "single-digit millisecond" range that Microsoft advertises. The inserts don't have to be as fast as possible, but the faster the better.
I tried the bulk-insert stored procedure I have seen recommended, but could not get it to insert everything reliably. I also tried writing my own bulk-insert method using multiple threads, as suggested in the answer here, but that turns out to be very slow, often errors out for at least some of the documents, and on average seems to max out the RUs at a rate lower than I expected.
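For reference, my multi-threaded attempt is essentially a thread pool calling `create_item` and backing off on 429s, something along these lines (a simplified sketch using the Python SDK; the endpoint, key, database, and container names are placeholders):

```python
import time
from concurrent.futures import ThreadPoolExecutor

from azure.cosmos import CosmosClient, exceptions

# Placeholder account details -- substitute your own.
ENDPOINT = "https://<your-account>.documents.azure.com:443/"
KEY = "<your-key>"

client = CosmosClient(ENDPOINT, credential=KEY)
container = client.get_database_client("testdb").get_container_client("testcol")


def insert_with_backoff(doc, max_attempts=10):
    """Insert one document, backing off and retrying when throttled (HTTP 429)."""
    for attempt in range(max_attempts):
        try:
            container.create_item(body=doc)
            return True
        except exceptions.CosmosHttpResponseError as e:
            if e.status_code == 429:
                # Throttled: wait a bit before retrying (the SDK already
                # retries a few times itself before surfacing the error).
                time.sleep(min(0.1 * 2 ** attempt, 5.0))
            else:
                raise
    return False


def bulk_insert(docs, workers=4):
    """Insert documents on a small thread pool; return how many failed."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(insert_with_backoff, docs))
    return results.count(False)
```

The retries cut down on outright failures, but nothing in this limits how many RUs the inserts consume.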
I feel like I'm missing something. Do I need to provision RUs just for bulk writes? Is there any built-in functionality to limit the RU usage of inserts? How can I insert hundreds of thousands of documents in a reasonable amount of time without making the collection unusable?
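To make the second question concrete: what I'd like is something like a client-side cap on the RUs that inserts may consume per second, so that reads keep the rest of the provisioned throughput. A rough sketch of the idea (reading the per-request charge via `client_connection.last_response_headers` is my assumption about the Python SDK, and the budget number is made up):

```python
import time

INSERT_RU_BUDGET_PER_SEC = 500  # e.g. leave ~2,000 of the 2,500 RU/s free for reads


def throttled_insert(container, docs):
    """Insert documents sequentially, pausing whenever the per-second RU budget is spent."""
    spent, window_start = 0.0, time.monotonic()
    for doc in docs:
        container.create_item(body=doc)
        # Read the RU charge of the last request (assumed header / location).
        charge = float(
            container.client_connection.last_response_headers.get(
                "x-ms-request-charge", 5.0
            )
        )
        spent += charge
        if spent >= INSERT_RU_BUDGET_PER_SEC:
            # Budget for this one-second window is used up: sleep out the
            # remainder of the window, then start a new one.
            elapsed = time.monotonic() - window_start
            if elapsed < 1.0:
                time.sleep(1.0 - elapsed)
            spent, window_start = 0.0, time.monotonic()
```

If something like this already exists in the SDK or the service, that would answer my question; if not, is hand-rolling it really the expected approach?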