System-Versioned Temporal Tables

Every once in a while, I like to take a moment and learn something new about the latest SQL Server gizmos and gadgets. Today I came across system-versioned temporal tables and it piqued my interest, so I figured I’d investigate and share my findings with you.

How many of you have needed to track data changes over time? I’ve needed this many times for things like auditing, investigating data changes, data fixes, and trend analysis of values over time. Doing this in the past has been a daunting task, and sometimes nearly impossible. This is where system-versioned temporal tables really help out. They give us a new way to do just that with a new type of user table that keeps a full history of data changes and gives us a way to query it for point-in-time analysis. What I really like is that you can’t INSERT or UPDATE data into the period datetime columns; they are generated automatically with each insert, which is great for auditing.

Syntax for Temporal Table Creation
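The table and column names below are just an illustration (a hypothetical dbo.Employee table using the BeginDate/EndDate columns discussed later); the pieces that matter are the two GENERATED ALWAYS AS ROW START/END columns, the PERIOD FOR SYSTEM_TIME definition, and the SYSTEM_VERSIONING option:

CREATE TABLE dbo.Employee
(
    EmployeeID INT NOT NULL PRIMARY KEY CLUSTERED,
    Name       NVARCHAR(100) NOT NULL,
    Salary     DECIMAL(10, 2) NOT NULL,
    -- The two required datetime2 period columns, populated by SQL Server:
    BeginDate  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    EndDate    DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (BeginDate, EndDate)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmployeeHistory));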

Note we now have two required datetime2 columns that will be populated with the temporal history data for each row.

Let’s insert some records using INSERT statements and see how the data looks.
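Here’s a minimal sketch against the hypothetical dbo.Employee table above; BeginDate and EndDate are left out of the column list because SQL Server fills them in:

INSERT INTO dbo.Employee (EmployeeID, Name, Salary)
VALUES (1, N'Jane Doe', 75000.00),
       (2, N'John Smith', 68000.00);

-- BeginDate is set automatically; EndDate gets the maximum datetime2 value.
SELECT * FROM dbo.Employee;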

This great illustration from Microsoft shows just how the history is tracked. For each inserted record, SysStartTime is populated in our BeginDate column. With each subsequent UPDATE, DELETE, or MERGE, the current record is copied to the history table and its EndDate is updated with SysEndTime.
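To see that in action, here is a small sketch against the same hypothetical table; the update closes out the old version of the row and moves it to the history table:

UPDATE dbo.Employee
SET Salary = 80000.00
WHERE EmployeeID = 1;

-- The pre-update version of EmployeeID 1 now lives in the history table,
-- with its EndDate set to the time of the update.
SELECT * FROM dbo.EmployeeHistory;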

How do you query it?

Querying uses a new clause, FOR SYSTEM_TIME, which can be combined with AS OF, FROM ... TO, BETWEEN ... AND, CONTAINED IN, and ALL.
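For example, a point-in-time (AS OF) query and a range (FROM ... TO) query against the hypothetical dbo.Employee table might look like this; the timestamps are placeholders:

-- Rows as they existed at a specific moment.
SELECT EmployeeID, Name, Salary, BeginDate, EndDate
FROM dbo.Employee
    FOR SYSTEM_TIME AS OF '2017-09-01 12:00:00'
WHERE EmployeeID = 1;

-- Every version of the row that was active at any point in the range.
SELECT EmployeeID, Name, Salary, BeginDate, EndDate
FROM dbo.Employee
    FOR SYSTEM_TIME FROM '2017-01-01' TO '2017-12-31'
WHERE EmployeeID = 1;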

Results

Note the current record has an end date of 9999-12-31 23:59:59 because it’s the current state of the record. It hasn’t been modified yet, so it gets the default future datetime.

If you are lucky enough to be using SQL Server 2016 or later, I highly recommend playing around with this new gadget; it may be of significant use to you.

VLFs: The Forgotten Foe

How many of you check the number of Virtual Log Files (VLFs) your transaction logs have?

Working as a consultant now, I see this as something that is often ignored by DBAs. It is an easy thing to maintain, and yet so many don’t know how. Keeping these in check can give you a performance boost, not only on startup but with your insert/update/delete as well as backup/restore operations. SQL Server performs better with a smaller number of right-sized virtual log files. I highly recommend you add this to your server reviews.

What is a VLF?

Every transaction log is composed of smaller segments called virtual log files. Every time a growth event occurs, new segments (virtual log files) are created at the end of your transaction log file. A large number of VLFs can slow things down.

What causes high VLF counts?

As transactions force growth of the log file, inappropriate log file sizing or auto-growth settings can cause a high number of VLFs. Each growth event adds VLFs to the log file. The more often you grow the file, and the smaller each growth increment, the more VLFs your transaction log will have.

Example

If you grow your log by the default 1 MB, you may end up with thousands of VLFs, as opposed to growing in 1 GB increments. MSDN does a great job of explaining how transaction logs work; for a deeper dive, I recommend reading it.
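If you want to see what your current log size and growth settings look like, a quick check against sys.database_files (run in the database you care about) is something like this:

-- Log file size and growth setting for the current database.
SELECT name,
       size * 8 / 1024 AS size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + '%'
            ELSE CAST(growth * 8 / 1024 AS VARCHAR(10)) + ' MB'
       END AS growth_setting
FROM sys.database_files
WHERE type_desc = 'LOG';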

How do I know how many VLFs my log files have?

It’s very easy to figure out how many VLFs you have in your log file.

Make sure you are in the context of the database you want to run it against (in this case TEMPDB) and run the DBCC LOGINFO command, as shown below.
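For example:

USE tempdb;
GO
-- One row is returned per virtual log file in tempdb's transaction log.
DBCC LOGINFO;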

The command returns a result set with one row per VLF created for that database; the count of those rows is the number of VLFs you have.

Now there are many ways you can get fancy with it using T-SQL, so have fun with it. Write something that rolls through all your databases and gives you the counts for each; a rough sketch follows, and there are plenty of useful examples on the internet.
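As one sketch (certainly not the only way to do it), the loop below captures DBCC LOGINFO output for every online database into a table variable and returns the VLF count per database. It assumes SQL Server 2012 or later, where DBCC LOGINFO returns eight columns including RecoveryUnitId, and that you have permission to run DBCC against each database:

-- Scratch table matching the DBCC LOGINFO output (SQL Server 2012+ column layout).
DECLARE @loginfo TABLE
(
    RecoveryUnitId INT, FileId INT, FileSize BIGINT, StartOffset BIGINT,
    FSeqNo BIGINT, [Status] INT, Parity INT, CreateLSN NUMERIC(38)
);
DECLARE @results TABLE (DatabaseName SYSNAME, VLFCount INT);
DECLARE @dbname SYSNAME;

DECLARE db_cursor CURSOR FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @dbname;

WHILE @@FETCH_STATUS = 0
BEGIN
    DELETE FROM @loginfo;  -- clear the scratch table for each database

    -- Capture the VLF rows for this database.
    INSERT INTO @loginfo
    EXEC ('USE [' + @dbname + ']; DBCC LOGINFO WITH NO_INFOMSGS;');

    INSERT INTO @results (DatabaseName, VLFCount)
    SELECT @dbname, COUNT(*) FROM @loginfo;

    FETCH NEXT FROM db_cursor INTO @dbname;
END

CLOSE db_cursor;
DEALLOCATE db_cursor;

SELECT DatabaseName, VLFCount
FROM @results
ORDER BY VLFCount DESC;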

Ideally the VLF count should be under 100; anything above that should be addressed.

*New for 2017 is a DMV, sys.dm_db_log_stats ( database_id ), that gives you an even easier way to get the VLF counts.
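On 2017 that can be as simple as something like this (total_vlf_count is the column of interest):

-- VLF count for every database, via the new DMV (SQL Server 2017+).
SELECT d.name AS DatabaseName,
       ls.total_vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_stats(d.database_id) AS ls
ORDER BY ls.total_vlf_count DESC;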

How do you fix it?

The transaction log file should be shrunk until there are only two VLFs, then grown in chunks back to its previous size.

  • Perform the shrink using DBCC SHRINKFILE.

  • Regrow your log in increments that make sense for your environment. However, if your target size is in excess of 8 GB, it is recommended to grow in 8000 MB chunks while manually regrowing the file. Your autogrowth should be set to a lower value. There is no set rule for what those values should be; it may take trial and error to figure out what is best for your environment. A rough sketch of the shrink-and-regrow cycle follows the note below.

Note: Growing out your log can cause a performance hit and block ongoing transactions, so be sure to perform this during a maintenance window.
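Putting the steps together, a sketch against a hypothetical database MyDatabase with a log file named MyDatabase_log and a 16 GB target might look like this; adjust the names, target size, and growth increments for your environment:

USE MyDatabase;
GO

-- Step 1: shrink the log file as small as possible.
-- A log backup or CHECKPOINT may be needed first so the active VLFs can be cleared.
DBCC SHRINKFILE (N'MyDatabase_log', 1);
GO

-- Step 2: regrow in chunks; 8000 MB steps are commonly used for large logs.
ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, SIZE = 8000MB);
ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, SIZE = 16000MB);
GO

-- Step 3: set a sensible fixed autogrowth increment going forward.
ALTER DATABASE MyDatabase MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 1024MB);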

It’s that simple. Now go take a look at your files; you may be surprised at what you find.