I’ve written about statistics in SQL Server a few times now. Conversations keep reminding me that not everyone keeps their table statistics up to date, and current statistics are one of the more critical components of SQL Server performance. Please be sure you are doing more than just index maintenance to maintain your statistics. Index maintenance only updates the statistics attached to the indexes it rebuilds. There are also column statistics, created automatically from predicate usage in your queries or created manually, that need to be maintained. Every insert, update, or delete changes the distribution of your data and can skew the optimizer’s estimates, so keeping your execution plans’ Actual Rows and Estimated Rows aligned allows SQL Server to generate the optimal execution plan.
Overestimation of rows (Estimated > Actual) leads to:
- Selection of parallel plan when a serial plan might be more optimal
- Inappropriate join strategy selections
- Inefficient index navigation (scan versus seek)
- Inflated Memory Grants
Underestimation of rows (Estimated < Actual) leads to:
- Spills to disk because not enough memory was requested
- Selection of a serial plan when a parallel plan would be more optimal
- Inappropriate join strategies
- Inefficient Index selection and navigation strategies
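Before tuning, it helps to know how far your statistics have drifted. A quick sketch, assuming SQL Server 2008 R2 SP2 / 2012 SP1 or later (where `sys.dm_db_stats_properties` is available):

```sql
-- List every statistics object in the current database with its last update
-- time and the number of rows modified since that update.
SELECT  OBJECT_NAME(s.object_id)  AS table_name,
        s.name                    AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter   -- rows changed since the last stats update
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   sp.modification_counter > 0
ORDER BY sp.modification_counter DESC;
```

A high `modification_counter` relative to `rows` is a good sign that the optimizer is working from stale estimates.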
Maintain your statistics by doing the below at least weekly. (Note: some systems may require far more frequent updates; I’ve had to update stats every 10 minutes on a particularly troublesome table.)
Set AUTO_UPDATE_STATISTICS to ON for each database; however, this option will only update statistics created for indexes or for single columns used in query predicates. Optionally, also set AUTO_UPDATE_STATISTICS_ASYNC to ON for performance gains; you can read more on that in my prior blogs.
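A minimal example of setting both options; `[YourDatabase]` is a placeholder name:

```sql
-- Enable automatic statistics updates for the database, and optionally
-- make those updates asynchronous so queries don't wait on them.
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```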
If you have larger tables in your environment and are not using SQL Server 2016 or higher, be sure to examine Trace Flag 2371. This trace flag is available to assist in keeping stats up to date in large tables. Currently, the algorithm used to automatically update statistics is 500 rows + 20% of the table. Trace Flag 2371 changes this algorithm to a sliding scale. Using this trace flag will drastically increase the frequency with which statistics updates occur on your larger tables, which in turn gives the optimizer much better estimates to work with.
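To see why the sliding scale matters, here is a sketch that enables the trace flag and compares the two thresholds. The dynamic threshold formula SQRT(1000 * rows) is the one SQL Server 2016+ uses by default at compatibility level 130 and above:

```sql
-- Enable Trace Flag 2371 globally (only needed before SQL Server 2016).
DBCC TRACEON (2371, -1);

-- Compare the fixed threshold (500 rows + 20%) with the dynamic threshold
-- SQRT(1000 * rows) for tables of various sizes. For a 10,000,000-row table
-- the fixed rule needs ~2,000,500 modified rows before an auto update fires;
-- the dynamic rule needs only ~100,000.
SELECT  n                 AS table_rows,
        500 + 0.20 * n    AS fixed_threshold,
        SQRT(1000.0 * n)  AS dynamic_threshold
FROM    (VALUES (10000), (1000000), (10000000)) AS t(n);
```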
Run EXEC sp_updatestats routinely to update ALL statistics. I suggest creating a SQL Agent job to run it.
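The job step itself is a one-liner per database; `[YourDatabase]` is a placeholder:

```sql
-- sp_updatestats works against the current database and only updates
-- statistics objects that have had at least one row modified since their
-- last update, so it is cheap to run frequently.
USE [YourDatabase];
EXEC sp_updatestats;
```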
OR
If you are using Ola Hallengren’s index scripts, consider adding the parameters below.
@UpdateStatistics = 'ALL'
@OnlyModifiedStatistics = 'Y' * (this can leave statistics stale if the underlying columns have not been modified in a while; I am not a huge fan of this option)
Example:
sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -Q "EXECUTE [dbo].[IndexOptimize] @Databases = 'USER_DATABASES', @UpdateStatistics = 'ALL', @OnlyModifiedStatistics = 'Y'"
More Information
Here are the links to my prior statistics blogs. I recommend reading them in the below order to help you better understand statistics and their role in SQL Server performance. Make sure you do your part as a DBA and remember to keep your statistics up to date as much as possible to help the optimizer better do its job.
Synchronous VS Asynchronous Statistics Updates
Keeping Large Table Statistics Current -TF2371
Can you explain more about what you meant in your comment:
“@ONLYMODIFIEDSTATICS=Y * (this can create old stats if the fields have not been modified in a while, I am not a huge fan of this option)”.
Also, I requested a new parameter for fine-tuning which stats you want to be updated, and it has now been implemented for a while:
StatisticsModificationLevel
Specify a percentage of modified rows for when the statistics should be updated. Statistics will also be updated when the number of modified rows has reached a decreasing, dynamic threshold, SQRT(number of rows * 1000).