Please Don’t Do This! Default Index Fill Factor

Please, please, please, admins: do not leave your default index fill factor at 0. This tells SQL Server to fill each page 100% full when creating or rebuilding indexes. It also means that when a new row must go onto an already-full page, SQL Server has to move roughly half of that page's rows to a newly allocated page to make room. These are called PAGE SPLITS, and they are resource-intensive operations that take time. On tables that don't insert strictly at the end of the index, a completely full fill factor will cause more page splits, more index fragmentation, decreased performance, and increased IO.
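
If you want to see whether page splits have already fragmented your indexes, here is a minimal sketch against the current database (it uses only built-in DMVs; SAMPLED mode is used so that avg_page_space_used_in_percent is populated):

-- Fragmentation and page density for every index in the current database.
-- High fragmentation together with low page density is the classic
-- signature of page splits.
SELECT  OBJECT_NAME(ips.object_id)          AS table_name,
        i.name                              AS index_name,
        ips.avg_fragmentation_in_percent,
        ips.avg_page_space_used_in_percent,
        ips.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN    sys.indexes AS i
        ON  i.object_id = ips.object_id
        AND i.index_id  = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;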

If you find that this is how your system is configured, all is not lost. You can correct it by changing the default value so that new indexes are created with a proper fill factor, and by rebuilding your existing indexes with a new fill factor value. I like to use 80 across the board for most indexes; of course there is always the "it depends" scenario, but 80 is a pretty safe bet. One of those "it depends" cases would be a logging table that has a monotonically increasing clustering key and never gets inserts or updates in between existing values (make sense?). There I don't want a fill factor of 80; I'd want 0/100 to maximize page density, since page splits won't occur when the clustered key is always increasing.
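
Before rebuilding anything, it helps to know which indexes are still on the default. A minimal sketch (the fill_factor column comes straight from sys.indexes; 0 means the index uses the server-wide default):

-- List every index on user tables along with its fill factor.
SELECT  OBJECT_NAME(i.object_id) AS table_name,
        i.name                   AS index_name,
        i.type_desc,
        i.fill_factor            -- 0 = server default
FROM    sys.indexes AS i
WHERE   i.index_id > 0           -- skip heaps
  AND   OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY table_name, index_name;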

Note that keeping an additional 20% free on each page will increase your storage requirements, but the benefit outweighs the cost.

Example syntax for changing the default

-- 'fill factor (%)' is an advanced option, so 'show advanced options' must be
-- enabled first. The new server-wide default takes effect only after the
-- SQL Server service is restarted.
EXEC sys.sp_configure N'show advanced options', N'1';
RECONFIGURE;
GO
EXEC sys.sp_configure N'fill factor (%)', N'80';
GO
RECONFIGURE WITH OVERRIDE;
GO

Example script for rebuilding an index with a new fill factor

USE DEMO;
GO

-------------------------------------------------------------------------
-- Drops and re-creates the IDX_DepartmentID_DepartmentName index
-- on the dbo.Departments table with a fill factor of 80.
-------------------------------------------------------------------------
CREATE INDEX IDX_DepartmentID_DepartmentName
    ON dbo.Departments (DepartmentID, DepartmentName)
    WITH (DROP_EXISTING = ON, FILLFACTOR = 80);
GO
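
If you'd rather not restate the whole index definition, ALTER INDEX ... REBUILD accepts a fill factor as well. A minimal sketch against the same demo table (ALL rebuilds every index on the table at once):

-- Rebuild a single index in place with a new fill factor.
ALTER INDEX IDX_DepartmentID_DepartmentName ON dbo.Departments
    REBUILD WITH (FILLFACTOR = 80);
GO

-- Or rebuild every index on the table with the same fill factor.
ALTER INDEX ALL ON dbo.Departments
    REBUILD WITH (FILLFACTOR = 80);
GO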

 

About Monica Rathbun

Monica Rathbun lives in Virginia, is a Microsoft MVP for Data Platform, and is a Microsoft Certified Solutions Expert. She has nearly two decades of experience working with a wide variety of database platforms, with a focus on SQL Server and the Microsoft Data Platform. She is a frequent speaker at IT industry conferences on topics including performance tuning and configuration management. She is the leader of the Hampton Roads SQL Server User Group. She is passionate about SQL Server and the SQL Server community, doing anything she can to give back. Monica can always be found on Twitter (@sqlespresso) handing out helpful tips. You can find Monica blogging at sqlespresso.com.

11 Comments

  1. This is a really bad recommendation.

    I suggest you watch this video: https://www.youtube.com/watch?v=iEa6_QnCFMU (Why Defragmenting Your Indexes Isn’t Helping with Brent Ozar).
    He explains in fine detail why a) you shouldn’t worry all too much about fragmentation and b) why a fill factor of 100 is not a bad thing most of the time.

  2. Pingback: Set A Fill Factor – Curated SQL

  3. Agree with Richard and Brent here. I’d rather not make all my indexes 20% bigger in memory and on disk.

  4. Setting the fill factor to 80 means that you have 20% empty space in your database that will only very seldom be used, but that still takes up space in your buffer pool, on your disk, and in your backups (log, diff, and full). It will increase the number of pages read when you are doing index range scans / seeks (BETWEEN, LIKE, >, <, etc.) and slow down your queries for all these reasons.

    I would argue that in a typical database only a few percent of all data is hot and often modified. Inserts usually occur at the end of a table (unless you have GUIDs as your PK, but in that case you have another problem).

    In most databases INSERTs occur seldom compared to the total number of records in a table.
    If a table has only 100 records, there will usually be only an INSERT every few months or years.
    If it is a hot table, such as an Order / OrderPos table, you may have hundreds or thousands of inserts per day, but you will have several million rows in the table, and the INSERTs occur at the end of the table while all the old orders remain unaffected.

    Furthermore, many of those INSERTs will be tied to manual actions (clerks entering the orders, or customers placing orders on your website). And it usually does not really matter if a person has to wait 10 ms longer because of the single page split that occurs at the moment they place their 50th order of the day.

    You should also remember that when 100 records would normally fit into a page and you set the FILLFACTOR to 80, only 80 records will fit afterwards. So when you insert e.g. 50,000 rows, that would have been 500 new pages ("splits") with FF = 100 and is now 625 new pages with FF = 80.

    Of course there are cases where you should consider a lower fill factor, e.g. the index on the last-name column of the customer table (because new customers usually arrive in random order), but even in that case a fill factor of e.g. 95 would be sufficient and would usually prevent page splits, provided you have a regular index optimize process. But this should be an exception, set manually at the index level where you know how and how often data is inserted, and never applied to all tables across the board.

    If you do not agree, please tell me why you would blow up my 8 GB database to 10 GB just for the 100 MB of hot data that is frequently inserted / updated.

    PS:
    Personally I usually use 95 to 98 for my hot indexes with random insert order, since a fill factor of 95 on 1,000 customers still leaves room for 50 new customers that can be inserted without a split.

    • First let me say I LOVE when my blog posts spur conversation, thanks for leaving your thoughts! So here are my thoughts: with a great index maintenance regimen and tables that insert in sequential order, yes, you are right. However, not all tables work like that, and not all environments have the luxury of a frequent rebuild strategy. I have come across waaaayy too many environments where fragmentation is a HUGE issue due to very high OLTP transaction volume and a not-so-rebuild-friendly index layout. In these cases, spending the disk space and memory that come with a lower FF is advantageous and has greatly improved performance. Everything depends on the frequency of your inserts and your index maintenance. I rarely encourage a 100 or 0 FF unless it's for tables with that lovely sequential insert pattern or very infrequent inserts. My point is: take the time to evaluate defaults and think through your index creations. Leaving your FF at the default is unfortunately, for many, an "I didn't know or think about it" kind of thing; my hope is that this will help others begin to ask what their FF should be. Thanks again for your input!

  5. …unless you work in a data warehouse environment where your tables are built overnight and then not changed until the following night’s build (I know, but we can’t do incremental builds 🙁 )

  6. No harm in experimenting and testing results, but make sure you have a common baseline, and make sure you monitor page splits, before and after!

    For index rebuilds, a common baseline must compare updating statistics with reindexing. All too often I have seen DBAs confuse the advantages of a stats update (which index rebuilds also perform) with the advantages of an index rebuild (with or without a new fill factor). In my experience, updating stats alone is often far more beneficial than a reindex with a lower fill factor.

    Even if you see page splits reduced by a lower fill factor (for a constant set of DML statements), and even if you see a performance benefit, consider whether the clustered index is appropriately defined and used. Ideally, a clustered index should be nonvolatile and monotonically increasing. If the clustered index is being populated using NEWID(), it will not be monotonic, so test using NEWSEQUENTIALID() instead. You will likely find the page-split woes are now moot.
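
    A quick way to take the baseline suggested above is the Page Splits/sec performance counter. A minimal sketch (the counter is cumulative since the last service restart, so capture it before and after a test workload and compare the deltas):

    SELECT  counter_name, cntr_value
    FROM    sys.dm_os_performance_counters
    WHERE   counter_name = N'Page Splits/sec'     -- cumulative since restart
      AND   object_name LIKE N'%Access Methods%'; -- SQLServer:Access Methods
    -- Note: this counter also includes new-page allocations at the end of an
    -- index ("good" splits), so compare deltas for the same workload.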

  7. Everything mentioned so far assumes that inserts will happen at a ‘hotspot’, as rows are added at the end of the table. What if that is not the case?

    Real world example: I have a database where the primary key on most tables is a Char(36) field. Yep, you guessed it, it’s a GUID cast to the Char data type. What this means is that inserts can (will!) land anywhere, on any page. Table sizes range from a few dozen rows to several hundreds of thousands.

    Would this situation change your recommendation of 100% fill factor? What about rebuilds vs. reorganizing indexes?

    • This is exactly one of the scenarios I am referring to. I would not use 100% and would probably suggest a FF of 80% to start.
