Partitioning can improve performance - I have seen it many times. That is what partitioning was developed for: performance, especially for inserts. Here is a real-world example:
I have several tables on a SAN that, as far as we can tell, is one big blind disk. The SAN administrators insist that the SAN knows all, so they will not optimize the data distribution. How can a partition help there? Fact: it did, and it does.
We partitioned several tables identically on (FileID % 200), with all 200 partitions on PRIMARY. What good is that, if the only reason to have a partition scheme is "switching"? None - but the goal of partitioning here is performance. You see, each of those partitions gets its own page allocations. I can write to all of them simultaneously with no chance of a deadlock. Page locks cannot collide, because each writing process has a unique ID that equates to a partition. The 200 partitions increased insert performance 2000x (fact), and deadlocks dropped from 7,500 per hour to 3-4 per day. The reason is simple: with large data volumes lock escalation to the page level always kicks in, and in a busy OLTP system page locks are what cause deadlocks. Partitioning, even within the same volume and filegroup, places the partitioned data on different pages, so lock escalation has no effect: the processes no longer try to touch the same pages.
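To make that concrete, here is a minimal T-SQL sketch of such a layout. This is not the code from my system - the function, scheme, table, and column names are all made up, and it assumes SQL Server 2017+ (for STRING_AGG). The 199 boundary values (which give 200 partitions) are generated dynamically rather than typed out:

    -- Build the 199 boundary values 0..198 dynamically; 199 boundaries
    -- with RANGE LEFT give 200 partitions. (STRING_AGG needs SQL 2017+.)
    DECLARE @vals nvarchar(max) =
        (SELECT STRING_AGG(CAST(n AS varchar(10)), ',') WITHIN GROUP (ORDER BY n)
         FROM (SELECT TOP (199)
                      ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
               FROM sys.all_objects) AS t);
    DECLARE @sql nvarchar(max) =
        N'CREATE PARTITION FUNCTION pfMod200 (int) AS RANGE LEFT FOR VALUES ('
        + @vals + N');';
    EXEC sp_executesql @sql;
    GO

    -- "ALL on PRIMARY": every one of the 200 partitions maps to the same
    -- filegroup; we are partitioning for concurrency, not for placement.
    CREATE PARTITION SCHEME psMod200
        AS PARTITION pfMod200 ALL TO ([PRIMARY]);
    GO

    -- Partition the table on a persisted computed column FileID % 200.
    CREATE TABLE dbo.FileData
    (
        FileID       int            NOT NULL,
        Payload      varbinary(max) NULL,
        PartitionKey AS (FileID % 200) PERSISTED NOT NULL
    ) ON psMod200 (PartitionKey);
    GO

    -- Sanity check: which partition does a given FileID land in?
    SELECT $PARTITION.pfMod200(12345 % 200) AS partition_number;

With a layout like this, concurrent writers whose FileID values hash to different remainders land in different partitions, and therefore on different pages.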
Is it as good for selecting data? Not so much. But then, a partition scheme should be designed with the database's workload in mind. I am sure Remus designed his scheme around incremental loading (e.g., daily loads) rather than transactional processing. Now, if you select rows under locks a lot (read committed), deadlocks can still occur when processes try to hit the same page at the same time.
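For contrast, here is a sketch of the kind of incremental-load design I suspect Remus had in mind: daily partitions plus partition switching, where a staged day of data is switched into the main table as a metadata-only operation. Again, every object name and date below is hypothetical:

    -- Daily partitions (RANGE RIGHT: each boundary starts a new day).
    CREATE PARTITION FUNCTION pfDaily (date)
        AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-01-02', '2024-01-03');
    CREATE PARTITION SCHEME psDaily
        AS PARTITION pfDaily ALL TO ([PRIMARY]);

    CREATE TABLE dbo.Sales
    (
        SaleDate date  NOT NULL,
        Amount   money NOT NULL
    ) ON psDaily (SaleDate);

    -- Staging table: identical structure, same filegroup, plus a CHECK
    -- constraint that pins its rows to exactly one partition's range.
    CREATE TABLE dbo.SalesStage
    (
        SaleDate date  NOT NULL
            CONSTRAINT ck_stage_day
            CHECK (SaleDate >= '2024-01-02' AND SaleDate < '2024-01-03'),
        Amount   money NOT NULL
    ) ON [PRIMARY];

    -- Bulk-load the day into the stage, then switch it in: metadata-only,
    -- no data movement, no long-held locks on dbo.Sales.
    INSERT dbo.SalesStage VALUES ('2024-01-02', 19.99);
    ALTER TABLE dbo.SalesStage
        SWITCH TO dbo.Sales PARTITION $PARTITION.pfDaily('2024-01-02');

That design optimizes for bulk loads, not for the concurrent single-row inserts my modulo scheme was built around.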
But Remus is right - in your example I see no benefit; in fact, there may be some overhead from looking rows up across the different partitions.