Are there any documented (or known undocumented) limits on the maximum supported insert rate (single-row and BCP) into a table, or on the maximum number of parallel high-frequency insert streams into independent tables?
We have 4 tables (A, B, C, D) in the same database (SQL Server 2012). They sit in their own filegroup: one filegroup for these large tables, another for the rest of the data, both on the same SSD. The recovery model is SIMPLE and the log file is on a separate SSD. Each of A, B, C, D has a single clustered index in chronological order, and rows are inserted in chronological order. The tables are never read from. Inserts are performed by calling a stored procedure once per new record, at up to tens of records per second per table.

Everything works very well (~0% CPU on the SQL Server, ~0% disk time on both the data and log disks) until a certain insert-rate threshold is crossed. I don't have an exact number, but it is roughly 100 inserts per second. At that point I/O (both reads and writes) on the data disk goes to 100%, the database becomes unusable, and (almost) all insert attempts time out. After we stop our service the DB quickly returns to normal, but after a restart, once a similar threshold rate is reached, the situation repeats. There is no sign that the threshold is approaching: the database is either completely fast or completely unusable.
What did not help:
- recreating A, B, C, D from scratch (so that they start empty): even with only a few thousand records in them, the situation repeats.
- recreating A, B, C, D from scratch as heaps (without any indexes).
What we are doing instead:
- loading this data through BCP; however, there are 5 more tables that will eventually need the same approach. They also have high insert rates, but their data must be no more than about 1 second old when it reaches the table.
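For the tables that must stay within ~1 second of latency, the pattern we are considering is a time-and-size-bounded buffer (micro-batching). A minimal sketch in Python, where `flush_fn` stands in for whatever bulk mechanism is actually used (a BCP run or a table-valued-parameter insert); the thresholds are illustrative, not measured values:

```python
import time

class MicroBatcher:
    """Accumulate rows and flush them as one bulk operation when either
    the batch reaches max_rows or the oldest buffered row is older than
    max_age_s. flush_fn is a placeholder for the real bulk load."""

    def __init__(self, flush_fn, max_rows=1000, max_age_s=1.0):
        self.flush_fn = flush_fn
        self.max_rows = max_rows
        self.max_age_s = max_age_s
        self.rows = []
        self.first_row_at = None

    def add(self, row, now=None):
        # `now` can be injected for testing; defaults to a monotonic clock.
        now = time.monotonic() if now is None else now
        if not self.rows:
            self.first_row_at = now
        self.rows.append(row)
        if (len(self.rows) >= self.max_rows
                or now - self.first_row_at >= self.max_age_s):
            self.flush()

    def flush(self):
        if self.rows:
            self.flush_fn(self.rows)
            self.rows = []
            self.first_row_at = None
```

This turns ~100 single-row stored-procedure calls per second into at most one bulk write per second per table, at the cost of up to one second of buffering.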
Actual questions:
- Can we saturate BCP the same way?
- Should we somehow limit the number of BCP operations running in parallel against independent tables? (Each table will have only one BCP stream, but there will be ~9 tables; more than half of them can tolerate minutes of data delay, while several must have data that is at most about 1 second old.)
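If limiting parallelism does turn out to be necessary, one simple client-side approach is a counting semaphore around the BCP invocations, so that only N loads ever hit the data disk at once. A hedged sketch in Python; `MAX_PARALLEL_LOADS`, the server/database names, and the `-b` batch size are placeholders, not recommendations:

```python
import subprocess
import threading

# Cap concurrent bulk loads so independent tables cannot saturate the
# data disk together. The value 3 is a hypothetical starting point.
MAX_PARALLEL_LOADS = 3
load_slots = threading.Semaphore(MAX_PARALLEL_LOADS)

def bulk_load(table, data_file, runner=subprocess.run):
    """Run one BCP import for `table`, holding a slot for its duration.
    `runner` is injectable so the scheduling logic can be tested without
    a real bcp binary. MYSERVER/MYDB are placeholder names."""
    with load_slots:
        runner(
            ["bcp", table, "in", data_file,
             "-S", "MYSERVER", "-d", "MYDB",
             "-T",              # trusted (Windows) authentication
             "-n",              # native data format
             "-b", "10000"],    # commit in 10,000-row batches
            check=True)
```

Each table's feeder thread would call `bulk_load` for its own file; the semaphore queues the excess loads instead of letting all ~9 run simultaneously.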