Is nvarchar(max) less performant than nvarchar(100)?

+4
4 answers

The same question has been answered here (SO) and here (MSDN).

Quoting David Creps' answer:

When you store data in a VARCHAR(N) column, the values are physically stored the same way. But when you store it in a VARCHAR(MAX) column, behind the scenes the data is handled as a TEXT value. So additional processing is required when working with a VARCHAR(MAX) value (only if the size exceeds 8000).

VARCHAR(MAX) or NVARCHAR(MAX) is considered a "large value type". Large value types are usually stored off-row. This means the data row holds a pointer to another location where the "large value" is actually stored ...
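A rough way to see this off-row storage is to inspect a table's allocation units: LOB_DATA units hold the out-of-row "large values", while the in-row data keeps only a pointer. The table name dbo.Docs below is just a placeholder for illustration:

```sql
-- Placeholder table with a MAX column; values over ~8000 bytes go off-row.
CREATE TABLE dbo.Docs (id int PRIMARY KEY, body nvarchar(max));
INSERT INTO dbo.Docs VALUES (1, REPLICATE(CAST(N'x' AS nvarchar(max)), 10000));

-- LOB_DATA pages are where the off-row "large values" live.
SELECT au.type_desc, au.total_pages
FROM sys.allocation_units AS au
JOIN sys.partitions AS p
  ON au.container_id IN (p.hobt_id, p.partition_id)
WHERE p.object_id = OBJECT_ID('dbo.Docs');
```

On such a table you should see a LOB_DATA allocation unit with a non-zero page count alongside the IN_ROW_DATA unit.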

+5

Note, in addition to Alex's answer, that you cannot use an nvarchar(max) column as an index key, so in some cases this can be a performance limitation.
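A quick sketch of that limitation (dbo.Notes is a hypothetical table): a MAX column cannot be an index key, but it can still ride along as an included column at the leaf level.

```sql
CREATE TABLE dbo.Notes (id int PRIMARY KEY, tag varchar(50), body nvarchar(max));

-- Fails: a MAX type is invalid for use as a key column in an index.
-- CREATE INDEX ix_body ON dbo.Notes (body);

-- Works: body is stored only at the leaf level, not in the key.
CREATE INDEX ix_tag ON dbo.Notes (tag) INCLUDE (body);
```

Included columns cover queries that select the MAX column, but you still cannot seek or sort on it through the index.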

+4

The general rule is to use the data type that best fits the data you store in it. If you are talking about varchar(100), it seems unlikely that you will ever need to store varchar(max)-sized data, so the column probably should be constrained to varchar(100).
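One way to apply that rule is to check the longest value actually stored before committing to a width. The table and column names here (dbo.Customers, Name) are placeholders:

```sql
-- How long is the longest value we actually have?
SELECT MAX(LEN(Name)) AS longest_value FROM dbo.Customers;

-- If it comfortably fits, constrain the column to the appropriate width.
ALTER TABLE dbo.Customers ALTER COLUMN Name varchar(100) NOT NULL;
```

Note that ALTER COLUMN will fail if any existing value exceeds the new width, which is exactly the safety check you want.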

The questions to ask yourself are: what data are you storing, how often do you store and retrieve it, and how do you use it (searching, sorting, or just storage)?

Regarding the differences: varchar(max) is NOT equivalent to TEXT. The main improvement in varchar(max) is that the data is still stored in-row as long as it does not exceed the 8K limit, beyond which it is stored in a blob.
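That in-row default is also configurable per table via the real sp_tableoption setting 'large value types out of row' (dbo.Notes here is an assumed table with an nvarchar(max) column):

```sql
-- 0 (default): MAX values up to ~8000 bytes stay in the row.
-- 1: MAX values are always stored off-row, leaving a 16-byte pointer in-row.
EXEC sp_tableoption 'dbo.Notes', 'large value types out of row', 1;
```

Pushing large values out of row can make scans of the other columns cheaper, at the cost of an extra lookup whenever the MAX column is read.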

See this question, which deals more specifically with the differences between varchar(max) and text:

Using varchar(MAX) vs TEXT on SQL Server

+2

Yes. When comparing a variable or column of a MAX type, the internal code uses streaming semantics. Variable types with a length under 8000 use direct comparison semantics. A simple example:

 create table A (k int, x varchar(8000));
 create clustered index cdxA on A(k);
 go
 insert into A (k, x) select number, name from master..spt_values;
 go
 declare @s datetime = getutcdate(), @i int = 0;
 set nocount on;
 while (@i < 100000)
 begin
     declare @x varchar(8000);
     select @x = x from A where k = 1 and x = 'rpc';
     set @i = @i + 1;
 end
 select datediff(ms, @s, getutcdate());

Running this repeatedly gives measured loop times of 2786, 2746, 2746, 2900, 2623 and 2736 ms, so the average is about 2.7 s.

The same code, with the two varchar(8000) declarations replaced by varchar(max), gives measured times of 4916, 5203, 5280, 5040, 5543 and 5130 ms: an average of 5.2 s, much higher for the max type.

The conclusion is that in very tight loops, varchar(max) comparison and assignment are slower than with non-max types. Like any optimization, it should be considered and applied only after careful measurement shows that this is a bottleneck.

Note that the difference is visible even though the actual values are only 3 characters long, so it does not come from differences in storage.

+2
source

Source: https://habr.com/ru/post/1304341/
