How to track the cost of updating statistics on a table?

I have a table with over 2 billion rows, 7 indexes, and 8 partitions, which relies only on SQL Server's standard auto-update to maintain statistics, and that auto-update fires very rarely. Roughly 1.3 million rows are added to this table every night. Obviously this is not sensible behavior, but since it is a production server I am wary of simply running an update without knowing its execution cost. I do not have a full copy of the table in any test environment. (This table is the worst offender; there are others that are almost as bad.)

What should I look at when determining the performance cost of updating statistics regularly?
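One place to start is measuring how stale the statistics actually are. The sketch below (assuming SQL Server 2008 R2 SP2 / 2012 SP1 or later, where sys.dm_db_stats_properties is available; dbo.BigTable is a placeholder name) shows the last update time and the modification counter for each statistic on the table:

```sql
-- A sketch: check how stale each statistic on the table is before deciding
-- on an update schedule. 'dbo.BigTable' is a placeholder table name.
SELECT
    s.name                  AS stats_name,
    sp.last_updated,
    sp.rows,
    sp.rows_sampled,
    sp.modification_counter,
    100.0 * sp.modification_counter / NULLIF(sp.rows, 0) AS pct_modified
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.BigTable')
ORDER BY sp.modification_counter DESC;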

How often should I update statistics? Should I use FULLSCAN?
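One low-risk way to gauge the cost before committing to FULLSCAN is to time a small sampled update of a single statistic. A sketch, with placeholder table and index names:

```sql
-- A sketch: time a sampled update of one statistic rather than the whole
-- table, to gauge cost incrementally. Names are placeholders.
SET STATISTICS TIME, IO ON;

UPDATE STATISTICS dbo.BigTable (IX_BigTable_SomeIndex)
WITH SAMPLE 1 PERCENT;  -- start small; raise the sample if the histogram is poor

SET STATISTICS TIME, IO OFF;
```

Running this on the smallest index first gives a rough per-statistic cost that can be extrapolated before touching the larger ones.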

EDIT

I think trace flag 2371 would be a dangerous tool for me, because it could trigger its own updates during the day, when I need everything up and responsive, rather than inside a maintenance window that I control.

After doing some more research, I think this question has evolved into a choice between sp_updatestats (something of a sledgehammer, since, per Erin Stellato, it updates a statistic if even 1 row has been modified) and UPDATE STATISTICS, and if UPDATE STATISTICS, what sample percentage to use on a large table. A 10% sample on a billion rows took about 2 hours.
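The two options above can be sketched as follows (table name is a placeholder; the 10 PERCENT figure comes from the timing mentioned above):

```sql
-- Option 1: the "sledgehammer" - sp_updatestats updates every statistic in
-- the current database that has at least one modified row, using the
-- default sample rate.
EXEC sys.sp_updatestats;

-- Option 2: targeted - update only the statistics on the big table, with an
-- explicit sample percentage (10 PERCENT took ~2 hours on a billion rows).
UPDATE STATISTICS dbo.BigTable WITH SAMPLE 10 PERCENT;

-- Or FULLSCAN, for maximum histogram quality at maximum cost:
-- UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;
```

The targeted form keeps the cost confined to the one table and to the maintenance window, which is the control the question is asking for.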

In case it is important: the leading column of the PK is an ascending datetime column.

End edit

1 answer

If you are using SQL Server 2008 R2 SP1 or later, try trace flag 2371. It lowers the auto-update threshold dynamically as the table grows, so large tables get their statistics updated far more often than under the default 20% rule. http://blogs.msdn.com/b/saponsqlserver/archive/2011/09/13/10207658.aspx
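A sketch of enabling it at runtime (it can also be set as the startup parameter -T2371; note that as a global trace flag it affects the whole instance):

```sql
-- Enable trace flag 2371 globally for the running instance.
DBCC TRACEON (2371, -1);

-- Verify that it is active:
DBCC TRACESTATUS (2371);
```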


Source: https://habr.com/ru/post/1502101/
