To answer your question: ROW_OVERFLOW_DATA was not converted to IN_ROW_DATA after you altered the table. There has never been any row-overflow data in this table in the first place, so there was nothing to convert when the column type was changed.
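For reference, a table definition along these lines reproduces the behaviour (this is an assumption on my part, since the original CREATE TABLE is not shown; the column names match the INSERT used further down):

-- assumed layout: two wide varchar columns, as implied by the question
create table test_allocation_units (
    id    int identity(1,1) primary key,
    name  varchar(5000),
    name1 varchar(5000)
)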
Add a few more columns to the query:
select s.type_desc, s.total_pages, s.used_pages, s.data_pages
from sys.allocation_units s
join sys.partitions p on p.partition_id = s.container_id
where p.object_id = object_id('test_allocation_units')
You will see that only IN_ROW_DATA has a non-zero number of pages, while ROW_OVERFLOW_DATA has 0 pages reserved and 0 pages used.
In any case, when you store the value "banana" in the varchar(5000) column, you only use 6 bytes per row (plus a couple of bytes of variable-length overhead). Changing the column type to varchar(50) therefore changes nothing significant on the data pages.
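You can check this with DATALENGTH, which returns the number of bytes actually stored for a value; the declared maximum length makes no difference for a varchar:

-- 'banana' takes 6 bytes regardless of the declared varchar length
select datalength(cast('banana' as varchar(5000))) as bytes_varchar_5000,  -- 6
       datalength(cast('banana' as varchar(50)))   as bytes_varchar_50     -- 6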
There would be a difference in the number of pages the table occupies on disk if the column were of type char(5000) and was then changed to char(50). When you store the value "banana" in a char(5000) column, the value is padded on the right with 4994 spaces up to the declared column length, so here the declared column size has a direct effect.
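The same DATALENGTH check shows the padding for fixed-length columns:

-- a char column always stores the full declared length
select datalength(cast('banana' as char(5000))) as bytes_char_5000,  -- 5000
       datalength(cast('banana' as char(50)))   as bytes_char_50     -- 50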
These spaces are stored in the row regardless of the actual value. If you try to create a table with a char(4000) and a char(5000) column, the creation will fail, because the maximum row size in SQL Server is 8060 bytes.
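A quick way to see this (the table name is hypothetical):

-- fails: 4000 + 5000 fixed bytes plus internal overhead exceed the 8060-byte row limit
create table test_char_overflow (
    c1 char(4000),
    c2 char(5000)
)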
To see non-zero values for ROW_OVERFLOW_DATA, you need to store long values in the varchar columns of the table. For instance:
INSERT INTO TEST_ALLOCATION_UNITS (NAME, NAME1)
VALUES (REPLICATE('APPLE', 800), REPLICATE('BANANA', 833))
GO 500
In this case we are trying to store 4000 bytes in name and 4998 bytes in name1, which already adds up to 8998 bytes (plus a few bytes of internal overhead), more than the maximum of 8060 bytes, so one of the values is pushed off-row. Re-running the query above will now show non-zero page counts for ROW_OVERFLOW_DATA.
I would recommend adding and removing several rows with both short and long values and checking how the output of the query changes. This should help you understand the results.
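For example, you can remove the long rows again and re-check; note that the overflow pages may not be released until the table is rebuilt (the REBUILD step is my suggestion, not something from the question):

-- delete the long rows; overflow pages are often only reclaimed after a rebuild
delete from test_allocation_units where datalength(name1) >= 4000
alter table test_allocation_units rebuild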