Thankful DBA

This week is Thanksgiving in the United States, so I thought it fitting to write a quick blog on what I am thankful for as a DBA. These are in no particular order, so feel free to respond with something you are thankful for. I’d love to hear it.

  1. Glenn Berry’s Diagnostic Scripts (B|T) – I’ve used these for years. Really a great set of scripts and explanations that we all should be grateful for.
  2. Ola Hallengren’s (B) Maintenance scripts – Index Optimization, Backup, and Integrity Checks for all! They have become an industry standard and continue to get better and better.
  3. RCSI (Read Committed Snapshot Isolation) – My readers can stop blocking writers! Thanks to Kendra Little (B|T) for this great blog.
  4. SSMS Results to Grid and Copy with Headers – I do this a million and one times a day. Ctrl+Shift+C.
  5. Query Store – Having the plan run stats and being able to force a plan, LOVE IT! Thanks Conor Cunningham and Microsoft for that one.
  6. Availability Groups – Easy setup and trustworthy. And, well, I like the name better than Mirroring.
  7. DMVs (Dynamic Management Views) – Show me the money! They hold all the SQL Server internals goodies, mine for the taking.
  8. Profiler – #ProfilerForLife, ’nuff said, my most trusted friend.
  9. Columnstore Indexes – I feel the need, the need for speed! Who doesn’t like up to 10x Query Performance gains and 10x the data compression?
  10. Paul Randal’s Waits Library (B|T)– I can’t tell you how many times I’ve referred to this. So much useful information!
  11. Adam Machanic’s sp_WhoIsActive (B|T) – This is my GO TO for seeing what’s actively going on; it’s the first thing I run.
  12. Sentry One Plan Explorer– Execution Plans on STEROIDS! Yes, please. Love the detail and ease of use.
  13. RedGate’s SQL Prompt- My coding is downright ugly. With a quick Ctrl+K, Ctrl+Y my code is sleek and readable. Not to mention I love the code snippets.
  14. Grant Fritchey’s (B|T) Execution Plans book – I can’t wait for the 3rd Edition; someone took my very loved highlighted, tabbed, marked-up copy. I need another!
  15. Power BI – It puts the slicing and dicing into the user’s hands, giving Management easy visualizations of their data for analysis. Fewer reports for me to write, yippie. Thank you Microsoft.
  16. dbatools – Great PowerShell modules for migrating databases. No more doing it the hard way.

Last and most importantly I am grateful for #SQLFamily, Bloggers, and Twitter. I learn from you every damn day!

Happy Thanksgiving!

~Monica

Do Not Pass GO!

What is the GO statement and why is it so important to use? When do I have to use it? When do I not use it? These are questions that have passed through my head from time to time while writing T-SQL within SQL Server.

First, What Is It and When Should I Use It?

The GO statement lets SSMS (the client tool) know when it has reached the end of a batch; it is not actually a T-SQL statement, but a batch separator recognized by tools like SSMS and sqlcmd. It basically defines the scope of what you are trying to send to the Database Engine. The example below sends two separate batches. The first changes the database context to run the next statement under, followed by the execution of the SELECT against the database Demo. Simple, yes.

Example
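A minimal sketch of what that looks like (the Person table here is just a stand-in):

    USE Demo;
    GO  -- batch 1: change the database context

    SELECT FirstName, LastName
    FROM dbo.Person;
    GO  -- batch 2: run the query against Demo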

Gotchas

I’ve been caught out by this behavior in the past. Using GO in stored procedures can be tricky. There are times when you want to run a batch of statements together, but if you put a GO into the procedure and compile it, you will notice that you lost any code that came after the GO. The GO signaled that my ALTER or CREATE PROCEDURE statement was done, so all the statements below it were ignored as part of the stored procedure.
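A quick sketch of how that bites you (the procedure name is made up):

    CREATE PROCEDURE dbo.usp_Demo
    AS
    SELECT 1 AS InsideProc;
    GO  -- the CREATE PROCEDURE definition ends right here

    SELECT 2 AS LostCode;  -- NOT part of dbo.usp_Demo; it simply ran once when you compiled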

Another gotcha, which can be both good and bad depending on your need: a variable’s life span ends with each GO statement. If you declare a variable, run a statement to populate it, and then send a GO, you can no longer use that variable.

Example
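A minimal sketch showing the variable going out of scope:

    DECLARE @Counter INT = 1;
    SELECT @Counter AS WorksFine;   -- same batch, no problem
    GO
    SELECT @Counter AS NowItFails;  -- error: Must declare the scalar variable "@Counter"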

Cool things to do with GO

This one I learned by chance, just messing around. Did you know that if you put a number after GO it will run the batch that many times? This can be handy for generating a lot of load against a database for demos.
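For example, something along these lines (the table is hypothetical):

    -- Run the insert batch 100 times, one execution after another
    INSERT INTO dbo.LoadTest (CreatedDate)
    VALUES (GETDATE());
    GO 100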

Don’t like the word GO? Change it. Yep, you can change it to anything you want under Tools > Options > Query Execution.

Change it to RUNNOW.

Let’s Try
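Running the earlier example with the new separator would look something like this sketch:

    USE Demo;
    RUNNOW

    SELECT FirstName, LastName
    FROM dbo.Person;
    RUNNOW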

HMMM, why didn’t that work? Because I ran it in an existing open window (session). Let’s try that again in a new window.

TADA! Much better.

Now that you know what it does, feel free to advance to GO and collect your $200. Enjoy.

What Are These Backup Settings All About?

I ran across a client the other day that had the Backup and Recovery options set like the picture below, because that is how they default. The Database Administrator didn’t know what to configure them as, so he left them alone. I find this is the case with a lot of options. For the most part leaving the defaults can be just fine, but leaving others can leave you missing out on some good features.

Let’s start from top to bottom.

Default backup media retention in days. The first thing that comes to my mind is “hey, this is a cleanup job” SCORE! Thinking that maybe this will auto-delete old backups. After all, isn’t that what retention means? NOPE, not in this case.

In this case it’s just the number of days before backup media can be OVERWRITTEN. If the DBA goes to overwrite the media before that many days have passed, SQL Server gives a warning message. You’ll note that in every backup action you do, the RETAINDAYS option is filled in; it will always reflect 90 now that we have changed it. In general, this is a pointless option to me. I don’t normally OVERWRITE backup media. To me this was more relevant when tapes were used and disks were harder to come by, so I leave it alone.

TSQL
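A sketch of setting it with T-SQL (‘media retention’ is an advanced sp_configure option; the database name and backup path below are placeholders):

    EXEC sys.sp_configure N'show advanced options', N'1';
    RECONFIGURE;
    GO
    EXEC sys.sp_configure N'media retention', N'90';
    GO
    RECONFIGURE WITH OVERRIDE;
    GO

    -- New backups now show RETAINDAYS reflecting the default
    BACKUP DATABASE [Demo]
    TO DISK = N'D:\Backups\Demo.bak'
    WITH RETAINDAYS = 90, NAME = N'Demo-Full Database Backup';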

 

GUI

Compress backup. This one is exactly what you think it is, no guessing here. Backup compression is one I highly recommend changing from the default. Compression is a HUGE topic I will save for another time. In short, the smaller the files, the less space they take up; less data stored means less IO (and less data sent to your backup device), and therefore your databases back up and restore faster. Here is a great MSDN link to learn more about the benefits of backup compression. Backup compression has been included in Standard edition and higher since SQL Server 2008 R2, so use it!
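If you want to flip it with T-SQL instead of the GUI, it’s a quick sp_configure change (a sketch):

    -- 1 = compress all backups by default
    EXEC sys.sp_configure N'backup compression default', N'1';
    GO
    RECONFIGURE WITH OVERRIDE;
    GO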

Recovery Interval (in minutes). Now this one I always thought meant Recovery Point Objective; in other words, how much data am I willing to lose in minutes? I was partially right. According to MSDN, this option defines an upper limit on the time recovering a database should take. The SQL Server Database Engine uses the value specified for this option to determine approximately how often to issue automatic checkpoints on a given database.

This is an option I don’t change. I have yet to see a scenario where I want to override when SQL Server does a checkpoint on the database by default. There are times when I want to force a checkpoint, but it’s not something I am going to set a standard for. The only reason I have heard was to reduce IO on a data drive, but to me that comes at too high a cost.

TSQL
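A sketch of setting it back to the default (‘recovery interval (min)’ is an advanced option; 0 is the self-tuning default):

    EXEC sys.sp_configure N'show advanced options', N'1';
    RECONFIGURE;
    GO
    EXEC sys.sp_configure N'recovery interval (min)', N'0';
    GO
    RECONFIGURE WITH OVERRIDE;
    GO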

So, there you have it: three more options that may no longer be a mystery for some.

Quick Model Database Tidbit

Are you using your Model Database to its full potential?

I am finding more and more that Database Admins are not using the Model database to its fullest potential, and some are not using it at all.

What is that Model Database for?

The model database is basically the default setup (template) for all other databases created on a SQL Server instance. All databases created after install will inherit the properties of this database.

Why Configure It?

Using the model database can ensure consistency within your environment and is a quick way to automate your database setups. Below is a list of things I’ve used in my environments and others.

Top (in no particular order) Settings I have Implemented Through Model

  • Default Growth Settings
  • Query Store Settings
  • Recovery Models
  • Read Committed Snapshot Isolation
  • Allow Snapshot Isolation
  • Auto Update Statistics Asynchronously
  • Compatibility Levels

Now there are some things that databases will NOT inherit from the model, some of these I learned the hard way.

  • File Groups
  • CDC (Change Data Capture)
  • Collations
  • Database Owner
  • Encryption

Scripts to turn these options on
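These are not my exact scripts, but a hedged sketch of the kinds of ALTER DATABASE commands involved (adjust values and the compatibility level to your environment; Query Store requires SQL Server 2016+):

    ALTER DATABASE model SET RECOVERY FULL;
    ALTER DATABASE model SET READ_COMMITTED_SNAPSHOT ON;
    ALTER DATABASE model SET ALLOW_SNAPSHOT_ISOLATION ON;
    ALTER DATABASE model SET AUTO_UPDATE_STATISTICS_ASYNC ON;
    ALTER DATABASE model SET COMPATIBILITY_LEVEL = 140;
    ALTER DATABASE model SET QUERY_STORE = ON;

    -- Default growth settings come from model's own files (logical names modeldev/modellog)
    ALTER DATABASE model MODIFY FILE (NAME = N'modeldev', FILEGROWTH = 256MB);
    ALTER DATABASE model MODIFY FILE (NAME = N'modellog', FILEGROWTH = 128MB);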

What Other Things Can You Do?

Now, you can go above and beyond just the database properties. You can add tables, views, triggers, functions, etc. to your model database, and every time a new database is created those objects will also exist. Why is this useful? In the past, I’ve used this for tracking my DDL (data definition language) changes. I created a trigger that inserted the user, object, date and time, and a text snippet of any ALTER\DROP\CREATE statement run on a database into a tracking table. For it to work, the trigger needed to exist on all databases.
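A trimmed-down sketch of that kind of trigger (the table name, trigger name, and event list here are illustrative, not my original production code):

    CREATE TABLE dbo.DDLChangeLog (
        ChangeId    INT IDENTITY(1,1) PRIMARY KEY,
        EventTime   DATETIME2     NOT NULL DEFAULT SYSDATETIME(),
        LoginName   SYSNAME       NOT NULL,
        ObjectName  NVARCHAR(128) NULL,
        EventType   NVARCHAR(128) NULL,
        CommandText NVARCHAR(MAX) NULL
    );
    GO

    CREATE TRIGGER trg_TrackDDL
    ON DATABASE
    FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE,
        CREATE_TABLE, ALTER_TABLE, DROP_TABLE,
        CREATE_VIEW, ALTER_VIEW, DROP_VIEW
    AS
    BEGIN
        DECLARE @e XML = EVENTDATA();  -- event details arrive as XML
        INSERT INTO dbo.DDLChangeLog (LoginName, ObjectName, EventType, CommandText)
        VALUES (ORIGINAL_LOGIN(),
                @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(128)'),
                @e.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(128)'),
                @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)'));
    END;
    GO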

Final Words

We all know each environment is different, so don’t just go and implement everything; tailor it to your needs. I suggest you take a look at yours and see if there is anything you can adjust. You may be surprised at what you can tweak.

Note:

*In testing this, I have found that if you create a new database using CREATE DATABASE in T-SQL, the auto-growth sizes do not get inherited by the new database, but everything else does. If I create a new database using the GUI, these settings do propagate. Not sure if this is by design or a bug.

Synchronous VS Asynchronous Statistics Updates

One of the things I’ve been able to implement to help with performance is changing from Update Statistics Synchronous to Auto Update Statistics Asynchronously. It’s a simple change that can have a big impact when implemented in highly transactional OLTP environments. Notice I said OLTP, not OLAP; since data in an OLAP environment tends to not be as dynamic, it’s rare to enable this in a data warehouse.

So, what’s the difference between the two and why does it help?

Synchronous (the default, AUTO_UPDATE_STATISTICS = TRUE)

By default, when Auto Update Statistics is set to True, the SQL Server Query Optimizer will automatically update statistics when data has met a threshold of changes (insert, update, delete, or merge) and the estimated rows are now potentially stale. When statistics are stale, execution plans can become suboptimal which can lead to degradation in performance.

This best practice option ensures your statistics stay as up to date as possible. Each time a cached query plan is executed, the Optimizer checks for data changes and potentially generates new statistics. This behavior is exactly what we want, but there is a catch: the query will be “held” while the statistics are updated, and the plan will recompile to use the new values before running. That hold can slow down the execution process dramatically.

Auto Update Statistics Asynchronously (AUTO_UPDATE_STATISTICS_ASYNC = TRUE)

This option does the same thing as the above, but with one significant difference: it allows the Optimizer to run the query and then use the updated statistics. Where this option differs from synchronous is that the query will NOT be “held” while the statistics are updated. Queries run “as is” until the query optimizer completes the statistics updates, and then they recompile to use the new statistics the next time they run.

Confused yet? OK, now in plain English.

When the asynchronous setting is on, a query runs as it is until all the statistics it uses are up to date, and then it runs with the new numbers. It does not have to wait for the new numbers before running. That’s where you get your performance boost: by not having to wait.

Check your settings using TSQL on ALL Databases
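A quick sketch to check every database at once:

    SELECT name,
           is_auto_update_stats_on,
           is_auto_update_stats_async_on
    FROM sys.databases
    ORDER BY name;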

How to Turn it on TSQL
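A sketch for a single database (swap in your own database name):

    ALTER DATABASE [YourDatabase]
    SET AUTO_UPDATE_STATISTICS_ASYNC ON;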

GUI

Under Database Properties > Options

NOTE: To enable this option Auto Update Statistics must be left ON.

Last Words

Remember, every environment is different, so be sure to test this before implementing it in production. A simple change from synchronous to asynchronous can make a difference. It is definitely something to add to your performance tuning tool belt.

Does Your Code Have a Preamble?

Okay, here is a pet peeve of mine: I think every stored procedure, function, view, etc. should contain a block of code I refer to as a preamble. If yours doesn’t, I strongly recommend you start adding it. It drives me crazy when I see code with no documentation of any kind telling me what it is for and when it was written or changed.

Why? A preamble documents the use, need, and changes for the code. It also leaves bread crumbs as to how, why, and what you did. I don’t know about you, but I may code something and not have to change it for two years. When I do, I think back and ask why did I do that, or who changed this code last? Working as a lone DBA, leaving bread crumbs was critical as I constantly jumped from task to task.
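Here is a sketch along the lines of the one I use (the procedure name and details below are placeholders):

    /**************************************************************************
    Procedure:    dbo.usp_GetOrderDetails
    Author:       Monica
    Created:      11/21/2017
    Description:  Returns the order detail rows for a given order.
    Called by:    Billing application, order lookup screen
    Run:          EXEC dbo.usp_GetOrderDetails @OrderId = 12345;
    ---------------------------------------------------------------------------
    Change History
    Date         Author     Description
    11/21/2017   Monica     Initial creation
    **************************************************************************/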

Above is an example of the preamble I use for all code I write. It tells who wrote it, what it is, what calls it, how to run it, and lists any changes made to it. I find one of the most helpful items is the Run documentation: here I place an exact run statement, which shows how the parameters should look and gives me a quick way to test it.

There are a million and one reasons why you should be doing this in your code. If you’re not, just take a second and start. You’ll thank me for it later.

SQL Sequence vs Identity Column

Let’s take a look at what a Sequence is in relation to an Identity column in SQL Server. Did you know Sequences even existed? I didn’t until I was asked about them. It’s amazing how much you can skip over and never notice in SSMS. Ever notice the Sequences folder under Programmability in Management Studio? Yep, it’s there; SQL Server has this very handy thing called Sequences. Sequences are a relatively new feature that have only existed since SQL Server 2012, but they have long existed in Oracle (where there are no identity columns).

What is a Sequence?

Per MSDN, a sequence is a user-defined, schema-bound object that generates a sequence of numeric values according to the specification with which the sequence was created. The sequence of numeric values is generated in ascending or descending order at a defined interval and can be configured to restart (cycle) when exhausted. It’s important to note that sequence values can be cached and are not guaranteed to be in sequential order.

The Code
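A sketch of creating a simple sequence (the name and limits are illustrative):

    CREATE SEQUENCE dbo.OrderNumbers
        AS INT
        START WITH 1
        INCREMENT BY 1
        MINVALUE 1
        MAXVALUE 1000000
        CACHE 50;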

After creation, you can look at the properties in the GUI. Note you can set the Increment by, you can restart the sequence and even set min and max values.

How to Query
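Grabbing a value is an application-style call, not an INSERT:

    SELECT NEXT VALUE FOR dbo.OrderNumbers AS NextOrderNumber;
    -- run it again and you get the next number in line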

What’s an Identity Column?

A property of a table column that is set with an initial SEED value (starting value). For each insert, it assigns a new value by adding the declared increment to the identity value of the previous row that was loaded.

The Code
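A sketch of a table using an identity column (the names here match the query example below):

    CREATE TABLE dbo.Person (
        NameID    INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        FirstName NVARCHAR(50) NOT NULL,
        LastName  NVARCHAR(50) NOT NULL
    );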

Note: after the field type comes IDENTITY, where you declare the SEED (1), then the INCREMENT value (1). You can see this in the GUI below in the column properties.

How to Query

Let’s insert two records and see the NameID Identity column increment.
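Something like this (continuing the hypothetical dbo.Person table from above):

    INSERT INTO dbo.Person (FirstName, LastName) VALUES (N'Joe', N'Smith');
    INSERT INTO dbo.Person (FirstName, LastName) VALUES (N'Jane', N'Doe');

    SELECT NameID, FirstName, LastName
    FROM dbo.Person;
    -- NameID comes back as 1, then 2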

Comparing the two

  • Object level: a Sequence is a database-level object; an Identity is a property of a single table.
  • Limit: a Sequence can have MIN and MAX values set; an Identity is limited only by its data type (INT vs BIGINT).
  • Values: Sequence values are generated by an application call using NEXT VALUE FOR; Identity values are generated on INSERT into the table.
  • Increments: both are declared with an INCREMENT at setup. A Sequence increment can be anything, including a negative number to make the values descend; an Identity increment can be any positive number, so the values ascend.
  • Scope: Sequence values are generated outside the scope of a transaction; Identity values are generated within a transaction.
  • Number assignment: Sequence numbers can be preallocated (for example, reserve numbers 1-25); Identity values cannot be preallocated and are assigned in order by INSERT.
  • Gaps: both can experience gaps.
  • Uniqueness: a Sequence does not guarantee uniqueness; the number can be reset and reused. An Identity is often used as a Primary Key (you must choose that property to ensure a unique value).

Summary

So, this was just a quick look at what a Sequence is compared to an Identity column. Both can be very useful. If you’re looking for a unique value, your best bet is to go with an Identity column and the Primary Key option. If you want an auto-generated value you can use in an application outside of a table, a Sequence is a sure bet. Play around with them; I am sure you can come up with a million and one uses for each.

***UPDATE NEW to SQL 2017 ***

Per MSDN there is a new option

IDENTITY_CACHE = { ON | OFF }

Applies to: SQL Server 2017 and Azure SQL Database (feature is in public preview)

Enables or disables identity cache at the database level. The default is ON. Identity caching is used to improve INSERT performance on tables with Identity columns. To avoid gaps in the values of the Identity column in cases where the server restarts unexpectedly or fails over to a secondary server, disable the IDENTITY_CACHE option. This option is similar to the existing SQL Server Trace Flag 272, except that it can be set at the database level rather than only at the server level.
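The database-scoped syntax looks like this:

    -- SQL Server 2017+ / Azure SQL Database; run in the context of the database you want to change
    ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF;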

Just Check ALL the Boxes

Today I ran into something on a client server that I unfortunately see too often: the DBA goes through the trouble of configuring and setting up alerts\operators but doesn’t really understand what the options in the configurations mean. So, unfortunately, they take the CYA (cover your ass) approach and check all of them. I have seen this not only with alerts but also with things like security configurations. My advice is to always take a second and research what each option is before you check the little boxes, especially when it comes to security. Always follow the rule that less is more.

In the example below, the administrator enabled alerts for an operator using the CYA approach: they checked email, pager, and net send.

So, what’s the big deal? This server experienced an insufficient resources (space) alert that fired every minute, and having PAGER notifications enabled caused the error log to bloat, consumed unnecessary space, and created noise in the logs.

The administrator of this environment really only needed to configure the email notification, as the company did not use net send nor have pager duties configured. To be honest, I have yet to see an environment use more than that, and per Microsoft both Pager and Net Send will be removed in a future version of SQL Server.
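For reference, a sketch of wiring up an email-only notification (the alert and operator names here are hypothetical; notification method 1 = email):

    EXEC msdb.dbo.sp_add_notification
        @alert_name          = N'Sev 17 - Insufficient Resources',
        @operator_name       = N'DBA Team',
        @notification_method = 1;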

So, the moral of the story is: please take the time to research what the little checkboxes do before you enable them. The example above is a pretty benign one, but you can imagine the kind of messes you can get into with other, more critical things like security.

A Side Note:

If you want to learn how to set up your alerts and operators, I’ve already written a blog on that, with scripts; you can find it here.

You can also visit github.com/dc-ac for a full install script that includes the Alert and Operator setups https://github.com/DC-AC/SQL2016_Scripted_Install

What does this little check box do?

Ever wander around the SQL Server properties and wonder what those little check boxes turn on? I do, and I get very tempted to check them. Here is one of those tempting little boxes that seems pretty handy: Use query governor to prevent long-running queries.

Syntax
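A sketch of the instance-wide setting (‘query governor cost limit’ is an advanced option; the value is in estimated seconds):

    EXEC sys.sp_configure N'show advanced options', N'1';
    RECONFIGURE;
    GO
    EXEC sys.sp_configure N'query governor cost limit', N'180';
    GO
    RECONFIGURE WITH OVERRIDE;
    GO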

How Does it Work?

It’s simple. This option, available in SQL Server 2008 Standard edition and forward, will prevent long-running queries based on run time measured in seconds. If I specify a value of 180, the query governor will not allow execution of any query that it estimates will exceed that value. Notice it says ESTIMATES, which means it is based on optimizer estimates, not ACTUAL run times. It does NOT KILL an actively running query after a designated amount of time, so there are no worries about rollback scenarios or partial data.

CAUTION

This is an advanced option, and keep in mind it is a server-wide (instance-wide) option. It will also affect your maintenance queries, so please use it with caution; this is not a “let me check this box for fun” option.

But Wait There’s More

Now there is also a query (“transaction”) based option available to us that will limit a specific query. This option estimates the cost of a query and prevents it from running if it would go over the boundary we have set. Notice we set the limit before the query and then set it back to 0 (unlimited) afterward.
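A sketch at the session level (the table is made up; 0 turns the limit back off):

    SET QUERY_GOVERNOR_COST_LIMIT 180;

    SELECT *
    FROM dbo.BigReportingTable;  -- refused if the estimated cost exceeds 180

    SET QUERY_GOVERNOR_COST_LIMIT 0;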

Again, playing with any old check box is not a recommended practice. Make sure you research it first and understand the full impact before checking that tempting little box.

Please Don’t Do This!

Please, please, please, Admins, do not leave your default index fill factor at 0. This means you are telling SQL Server to fill the page 100% full when creating indexes. It also means you are forcing it to allocate a new page when additional inserts are done. These are called PAGE SPLITS, which take time to perform and are resource-intensive operations. A completely full fill factor can cause more index fragmentation, decrease performance, and increase IO.

If you find that this is how your system is configured, all is not lost. You can correct it by changing the default value so that new indexes are created with a proper fill factor, and by rebuilding your existing indexes with a new fill factor value. I like to use 80 across the board for most; of course, there is always the “it depends” scenario, but 80 is a pretty safe bet. One of those “it depends” cases is a logging table that has the correct clustering key and never gets updates in between values; there I don’t want a fill factor of 80. I’d want 0/100 to maximize page density, since page splits won’t occur if the clustered key is monotonically increasing.

Note, having the additional 20% free on a page will increase your storage requirements but the benefit outweighs the cost.

Example syntax for changing the default
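A sketch of changing the instance default (‘fill factor (%)’ is an advanced option and requires a service restart to take effect):

    EXEC sys.sp_configure N'show advanced options', N'1';
    RECONFIGURE;
    GO
    EXEC sys.sp_configure N'fill factor (%)', N'80';
    GO
    RECONFIGURE WITH OVERRIDE;
    GO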

Example script for rebuilding an index with a new fill factor
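And a sketch of rebuilding one index with the new value (the index and table names are placeholders):

    ALTER INDEX IX_Person_LastName
    ON dbo.Person
    REBUILD WITH (FILLFACTOR = 80);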