Is it wise to split data into different tables based on a column's value?

If I have a large table with a column that takes a fairly limited range of values (say, fewer than 100 distinct values), is it wise to split this table into several tables whose names are derived from that column's value?

E.g., a table like:

table "TimeStamps": [Id] [DeviceId] [MessageCounter] [SomeData]

where [DeviceId] is the column with the limited range, would be split into several separate tables:

table "TimeStamps1": [Id] [MessageCounter] [SomeData]
table "TimeStamps2": [Id] [MessageCounter] [SomeData]
...
table "TimeStampsN": [Id] [MessageCounter] [SomeData]

The problem with my source table is that it takes a very long time to find the largest [MessageCounter] for some [DeviceId] values (see this post).

If the tables are split this way, finding the maximum [MessageCounter] should become an O(1) operation.

[Edit]

Just stumbled upon this again and thought I'd update it. With the right indexes and a scheduled index reorganization job, I was able to get excellent performance from the normalized form. For each bottleneck query, I recommend trying the Database Engine Tuning Advisor tool in SSMS; it really helped (especially for those of us whose primary job is not database design).
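To make the "indexes instead of splitting" point concrete: a composite index whose leading column is [DeviceId] lets the engine seek straight to the largest [MessageCounter] for one device, with no table-per-device scheme. A minimal sketch, using Python's sqlite3 as a stand-in for SQL Server (table and column names are from the question; the index name is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE TimeStamps (
        Id INTEGER PRIMARY KEY,
        DeviceId INTEGER,
        MessageCounter INTEGER,
        SomeData TEXT
    )
""")

# Sample rows: three devices, 1000 messages each.
rows = [(d, c, f"data-{d}-{c}") for d in range(1, 4) for c in range(1, 1001)]
cur.executemany(
    "INSERT INTO TimeStamps (DeviceId, MessageCounter, SomeData) VALUES (?, ?, ?)",
    rows,
)

# Composite index: DeviceId first, MessageCounter second. The MAX query
# below becomes an index seek on one device's range of the index.
cur.execute(
    "CREATE INDEX IX_TimeStamps_Device_Counter "
    "ON TimeStamps (DeviceId, MessageCounter)"
)

cur.execute("SELECT MAX(MessageCounter) FROM TimeStamps WHERE DeviceId = ?", (2,))
print(cur.fetchone()[0])  # 1000

# The query plan confirms the index is used instead of a full table scan.
plan = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT MAX(MessageCounter) FROM TimeStamps WHERE DeviceId = ?", (2,)
).fetchall()
print(plan)
```

The same idea in SQL Server is a nonclustered index on (DeviceId, MessageCounter), which is essentially what the Tuning Advisor proposes for this query shape.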


Have you considered database partitioning? It is a built-in solution for exactly the kind of problem you describe. See: Partitioned Tables and Indexes in SQL Server 2005


Source: https://habr.com/ru/post/1770711/
