Maintaining Balance

Last week, I got the chance to give my "I'm It: Survival Tips for the Lone DBA" session in a webcast for the first time, thanks to the PASS Women in Technology Virtual Chapter. This is by far my favorite session to give because it's real life and can pertain to all of us. Whether you are a Lone DBA or part of a team, we all encounter struggles when handling a heavy workload. I always love the interactions I get from this session. It allows us to share our stories and learn from each other.

As a Lone DBA, you are on call 24/7. You are required to stop everything and fix whatever goes wrong. One of the hardest parts of the job is maintaining a balance and making sure it doesn't interfere with your family life. I had one listener this week comment that "this doesn't work with families", implying that you cannot manage being a Lone DBA and be a parent at the same time.

This struck a chord with me. I understand what they were saying, and I know a lot of people who struggle with managing work and home no matter what career they are in. However, I fully beg to differ. As a single mom, I was not only a Lone DBA; I also ran my local user group and was a Regional Mentor, a speaker, a blogger, a Microsoft MVP, and a very involved Dance Mom for my two girls. Not only that, but I maintained the household and all that it entailed. Most of all, I loved every minute of it. It can be done, believe me. My girls still got (and get) a lot of my time. I play with them, do homework with them, and all the normal non-working-mom stuff. I think your ability to balance it all sets a great example for them, as it shows them what you're capable of.

I believe it's all about time management, prioritizing, and keeping a positive attitude. For example, while my daughters were at dance or outside playing, I would utilize that time to work. I would also allot enough time in the morning schedule to fix issues before waking them up. This allowed for a less chaotic morning when problems arose that required immediate attention. But the most important piece is having flexibility in your job. Work with your boss to make sure they allow you to work from home from time to time and to leave work early to attend your kids' activities. Lone DBAs give so many hours outside normal work hours that you need to maintain a balance and work with your boss to gain an understanding of that. All of these things made a huge difference in keeping balance.

Once people start feeling negatively about it, the stress invades the home and makes it more difficult for everyone. In my session, I go over ways to mitigate challenges like these, as well as how to manage your company's expectations of you.

I invite those who haven't attended my session to check out the webcast and share their thoughts. I love getting feedback and hearing others' stories.

It’s All in the Name, Index Naming Conventions

A while back, if you follow me on Twitter, you may recall my ranting about the 949 indexes I was reviewing. The goal was to identify duplicate indexes and consolidate them or discard the unneeded ones. My ranting was not about the duplicates per se; it was about the index names. It only takes a second to give an object a name that tells you what it is. Below I will show you some examples and give you an easy script that will help you generate your index names. Taking a little time to name things appropriately can go a long way; it not only saves time but also helps reduce redundancy.

The DON'Ts

As you can see from above, none of the names gave a complete indication of what the index encompassed. The first one did indicate it was a non-clustered index, which was good, but it also includes the date, which to me is not needed. At least I knew it was not a clustered index. The second index did note it is a "Covering Index", which gave some indication that many columns could be included, and I also know it was created with the Database Engine Tuning Advisor because of the dta prefix. The third index was also created with the tuning advisor, but it was left with the default dta naming convention. Like the first one, I know the date it was created, but instead of the word Cover2, I know there are 16 key columns, noted by the "K#"s, and the plain numbers tell me those are included columns. However, I still have no idea exactly what these numbers denote without looking deeper into the index. The last index is closer to what I like; however, the name only tells me one column name when in fact the index encompasses five key columns and two included columns.
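To make those patterns concrete, here are hypothetical indexes in the same style as the ones described above. These are made-up examples on a made-up dbo.Person table, not the actual indexes from that review:

-- Tells me it is nonclustered, plus a creation date I do not need
CREATE NONCLUSTERED INDEX [NonClusteredIndex-20170209-143355]
    ON dbo.Person (LastName);

-- "Cover2" hints at a covering index, and the dta prefix tells me the tuning advisor created it
CREATE NONCLUSTERED INDEX [_dta_index_Person_Cover2]
    ON dbo.Person (LastName, FirstName)
    INCLUDE (MiddleName, Suffix);

-- Names only one of its five key columns
CREATE NONCLUSTERED INDEX [IX_LastName]
    ON dbo.Person (LastName, FirstName, MiddleName, Suffix, Title)
    INCLUDE (BirthDate, EmailAddress);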

The DOs

Above we see a few good examples with varying naming conventions, but each tells me much more than what we saw in the "Don'ts" list. The first one I know right away is a non-clustered index with two fields. The second is a clustered index with one field. The third is an index that has 9 fields, probably a covering index of some sort, which tells me that it is probably important to a specific query or procedure. Index four uses the name of the table and the field, which does give me more information, but given that index names are limited to 128 characters, I prefer to leave the table name out. The last one is closest to one of my favorites, because it gives even more information: the name lets us know that it has an included column of Birthdate.
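For illustration, hypothetical indexes following these kinds of conventions might look like the following. The table and column names are made up; the point is that the name itself tells you the index type, the key columns, and any included columns:

-- Nonclustered, two key fields
CREATE NONCLUSTERED INDEX IX_LastName_FirstName
    ON dbo.Person (LastName, FirstName);

-- Clustered, one field
CREATE CLUSTERED INDEX CIX_PersonId
    ON dbo.Person (PersonId);

-- Key columns plus the included column spelled out in the name
CREATE NONCLUSTERED INDEX IX_LastName_FirstName_Includes_BirthDate
    ON dbo.Person (LastName, FirstName)
    INCLUDE (BirthDate);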

The Script

Here is the script I use when creating indexes. It goes through, identifies missing indexes, and generates CREATE INDEX statements using a standard naming convention.
NOTE: This is a modified version of what we use at DCAC, shown only to illustrate how I build a standard naming convention into the generated statement. It is not meant to be used to identify missing indexes; that is not the purpose of this post, and I have removed those pieces from the script.
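A simplified sketch of the idea, built on the missing-index DMVs (not the full DCAC version), might look something like this. It concatenates the key columns into an IX_ name and builds the matching CREATE INDEX statement:

SELECT
    'USE ' + QUOTENAME(DB_NAME(mid.database_id)) + '; '
    + 'CREATE NONCLUSTERED INDEX IX_'
    -- Build the name from the key columns: strip brackets, swap ", " for "_"
    + REPLACE(REPLACE(REPLACE(
          ISNULL(mid.equality_columns, '')
          + CASE WHEN mid.equality_columns IS NOT NULL
                  AND mid.inequality_columns IS NOT NULL THEN ', ' ELSE '' END
          + ISNULL(mid.inequality_columns, ''),
          ', ', '_'), '[', ''), ']', '')
    + ' ON ' + mid.statement
    + ' (' + ISNULL(mid.equality_columns, '')
    + CASE WHEN mid.equality_columns IS NOT NULL
            AND mid.inequality_columns IS NOT NULL THEN ', ' ELSE '' END
    + ISNULL(mid.inequality_columns, '') + ')'
    + ISNULL(' INCLUDE (' + mid.included_columns + ')', '') AS CreateIndexStatement
FROM sys.dm_db_missing_index_details AS mid;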

Create Statement Output

This statement gives the proper database context and CREATE INDEX syntax. It adds all the needed key columns within the parentheses, separated by commas. In addition, it adds the word INCLUDE and wraps the included columns in parentheses, also comma-separated. Note that the index name only includes the key columns, which is just my preference.

Summary

Now, everyone has their own naming conventions. You do you; however, you should stay consistent and give the names some meaning. When others look at the objects, they should be able to tell what an index is doing, or at least be given a good clue as to what it's for. This not only helps to quickly identify its definition but also keeps you from creating duplicates. By looking at the names, you can tell whether the columns you need are already included in other indexes. Naturally, you can't just trust the name; you have to dig deeper while examining your indexes, but it at least gives you a realistic starting point.

AHHH I need a Blog Topic!!!

One of the hardest things you can do as a blogger is coming up with a post topic. Do you make it simple for newbies, technical, or something personal? After figuring out a topic, now you have to write. However, there is a difference between what you want to say and what people will actually want to read.

Blogging is not easy, but without it, all of our Google searches to help solve problems would be much less fruitful. It's important to put your experience into written words to help others, and, let's be honest, lots of us use our blog posts to remind ourselves of how we did something the first time. So, I figured I'd take a minute and let you know what I do to come up with a topic. Maybe it will help others break into the blogosphere or fix their writer's block.

First, I look at what I have done recently in my job.

Did I fix something?

Come across an error?

Did I find something I didn’t like?

Did I find something I really liked?

Did I just do something really interesting that I got excited about?

If those don't help, I move on to:

Is there something I wish I knew when I was starting out in SQL Server?

A tip?

An option I didn’t know?

A how to?

Nothing coming to mind to write about still? Then I move on to complete RANDOMNESS. Yes, you read that correctly. I will go into SQL Server Management Studio, randomly pick a check box or option, research it, test it, and then proceed to write about it.

Lastly, if I am still at a loss, I'll write something like this post: something I think might be useful to others that is just a stream of consciousness. It may not be groundbreaking, but it might get someone else thinking or motivated. My point is that not every blog post you create has to be groundbreaking, technical, or even long. Just blog, and say what you want to say.

It makes a difference in more ways than you know.

Thankful DBA

This week is Thanksgiving in the United States, so I thought it fitting to write a quick blog on what I am thankful for as a DBA. These are in no particular order; feel free to respond with something you are thankful for. I'd love to hear it.

  1. Glenn Berry's (B|T) Diagnostic Scripts – I've used these for years. Really a great set of scripts and explanations that we all should be grateful for.
  2. Ola Hallengren's (B) Maintenance Scripts – Index Optimization, Backup, and Integrity Checks for all! They have become an industry standard and continue to get better and better.
  3. RCSI (Read Committed Snapshot Isolation) – My readers can stop blocking writers! Thanks to Kendra Little (B|T) for this great blog.
  4. SSMS Results to grid and copy with headers – I do this a million and one times a day. Ctrl+Shift+C.
  5. Query Store – Having the plan run stats and being able to force a plan, LOVE IT! Thanks Conor Cunningham and Microsoft for that one.
  6. Availability Groups – Easy setup and trustworthy. And, well, I like the name better than Mirroring.
  7. DMVs (Dynamic Management Views) – Show me the money! They have all the SQL Server internals goodies, mine for the taking.
  8. Profiler – #ProfilerForLife, 'nuff said; my most trusted friend.
  9. Columnstore Indexes – I feel the need, the need for speed! Who doesn’t like up to 10x Query Performance gains and 10x the data compression?
  10. Paul Randal’s Waits Library (B|T)– I can’t tell you how many times I’ve referred to this. So much useful information!
  11. Adam Machanic's (B|T) sp_WhoIsActive – This is my go-to for seeing what's actively going on; it's the first thing I run.
  12. Sentry One Plan Explorer– Execution Plans on STEROIDS! Yes, please. Love the detail and ease of use.
  13. RedGate’s SQL Prompt- My coding is downright ugly. With a quick Ctrl+K, Ctrl+Y my code is sleek and readable. Not to mention I love the code snippets.
  14. Grant Fritchey's (B|T) Execution Plans book – I can't wait for the 3rd Edition; someone took my very loved, highlighted, tabbed, marked-up copy. I need another!
  15. Power BI – It puts the slicing and dicing into the user's hands, giving management easy visualizations of their data for analysis. Fewer reports for me to write, yippee. Thank you, Microsoft.
  16. dbatools – A great set of PowerShell modules for migrating databases. No more doing it the hard way.

Last and most importantly I am grateful for #SQLFamily, Bloggers, and Twitter. I learn from you every damn day!

Happy Thanksgiving!

~Monica

Quick Model Database Tidbit

Are you using your Model Database to its full potential?

I am finding more and more that database admins are not using the model database to its fullest potential, and some are not using it at all.

What is the Model Database for?

The model database is basically the default setup (template) for all other databases created on a SQL Server instance. All databases created after install will inherit the properties of this database.

Why Configure It?

Using model can ensure consistency within your environment and is a quick way to automate your database setups. Below is a list of things I've used in my environments and others.

Top (in no particular order) Settings I have Implemented Through Model

  • Default Growth Settings
  • Query Store Settings
  • Recovery Models
  • Read Committed Snapshot Isolation
  • Allow Snapshot Isolation
  • Auto Update Statistics Asynchronously
  • Compatibility Levels

Now, there are some things that databases will NOT inherit from model; some of these I learned the hard way.

  • File Groups
  • CDC (Change Data Capture)
  • Collations
  • Database Owner
  • Encryption

Scripts to turn these options on
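As a sketch of what those scripts might look like, the statements below cover the options in the list above. The sizes, growth values, logical file names, and compatibility level are placeholders; adjust and verify each one for your own environment and SQL Server version:

-- Recovery model and isolation-related options
ALTER DATABASE model SET RECOVERY SIMPLE;
ALTER DATABASE model SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE model SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Statistics and compatibility
ALTER DATABASE model SET AUTO_UPDATE_STATISTICS_ASYNC ON;
ALTER DATABASE model SET COMPATIBILITY_LEVEL = 140;

-- Query Store (SQL Server 2016 and later)
ALTER DATABASE model SET QUERY_STORE = ON;

-- Default file size and growth settings (model's logical file names are modeldev and modellog)
ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 256MB, FILEGROWTH = 256MB);
ALTER DATABASE model MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 128MB);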

What Other Things Can You Do?

Now, you can go above and beyond just the database properties. You can add tables, views, triggers, functions, etc. to your model database, and every time a new database is created those objects will also exist in it. Why is this useful? In the past, I've used this for tracking my DDL (data definition language) changes. I created a trigger that would insert into a table the user, object, date and time, and a text snippet of any ALTER/DROP/CREATE statement that was run on a database. For it to work, the trigger needed to exist on all databases.
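A minimal sketch of that kind of DDL trigger is below. The table name, columns, and event list are illustrative rather than the original code; create both objects in model and every database created afterward will have them:

-- Audit table to capture DDL changes
CREATE TABLE dbo.DDLChangeLog
(
    ChangeId    INT IDENTITY(1,1) PRIMARY KEY,
    EventType   NVARCHAR(128),
    ObjectName  NVARCHAR(256),
    LoginName   NVARCHAR(256),
    ChangeDate  DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    CommandText NVARCHAR(MAX)
);
GO

-- Database-scoped DDL trigger that logs who changed what, when, and the statement text
CREATE TRIGGER trg_DDLChangeLog
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE,
    CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE,
    CREATE_VIEW, ALTER_VIEW, DROP_VIEW
AS
BEGIN
    DECLARE @e XML = EVENTDATA();

    INSERT INTO dbo.DDLChangeLog (EventType, ObjectName, LoginName, CommandText)
    VALUES
    (
        @e.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(128)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)'),
        @e.value('(/EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(256)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)')
    );
END;
GO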

Final Words

We all know each environment is different, so don't just go and implement everything; tailor it to your needs. I suggest you take a look at yours and see if there is anything you can adjust. You may be surprised at what you can tweak.

Note:

*In testing this, I have found that if you create a new database using CREATE DATABASE with T-SQL, the auto-growth sizes do not get inherited by the new database, but everything else does. If I create a new database using the GUI, these settings do propagate. I'm not sure if this is by design or a bug.

Synchronous VS Asynchronous Statistics Updates

One of the things I've been able to implement to help with performance is changing statistics updates from synchronous to asynchronous (Auto Update Statistics Asynchronously). It's a simple change that can have a big impact when implemented in highly transactional OLTP environments. Notice I said OLTP, not OLAP; data in an OLAP environment tends not to be as dynamic, so it's rare to enable this in a data warehouse.

So, what’s the difference between the two and why does it help?

Synchronous (the default, AUTO_UPDATE_STATISTICS = TRUE)

By default, when Auto Update Statistics is set to True, the SQL Server query optimizer will automatically update statistics when the data has met a threshold of changes (inserts, updates, deletes, or merges) and the existing statistics are potentially stale. When statistics are stale, execution plans can become suboptimal, which can lead to degraded performance.

This best-practice option ensures your statistics stay as up to date as possible. Each time a cached query plan is executed, the optimizer checks for data changes and potentially generates new statistics. This behavior is exactly what we want, but there is a catch: the query will be "held" while the statistics are updated, and the plan will recompile to use the new values before running. That wait can slow down the execution process dramatically.

Auto Update Statistics Asynchronously (AUTO_UPDATE_STATISTICS_ASYNC = TRUE)

This option does the same thing as the one above, but with one significant difference: it allows the optimizer to run the query first and then use the updated statistics. Where this option differs from synchronous is that the query will NOT be "held" while the statistics are updated. Queries run "as is" until the query optimizer completes the statistics update, and then they recompile to use the new statistics the next time they run.

Confused yet? So, now in plain English.

When the asynchronous setting is enabled, a query runs as-is with the existing statistics while the updates happen in the background, and then it picks up the new numbers the next time it runs. It does not have to wait for the new numbers before it can run. That's where you get your performance boost: by not having to wait.

Check your settings using TSQL on ALL Databases
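A quick way to see where each database stands, using the auto-update-statistics columns in sys.databases:

SELECT name,
       is_auto_update_stats_on,
       is_auto_update_stats_async_on
FROM sys.databases
ORDER BY name;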

How to Turn It On with TSQL
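For example (the database name is a placeholder):

ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC ON;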

GUI

Under Database Properties > Options

NOTE: For this option to take effect, Auto Update Statistics must be left ON.

Last Words

Remember, every environment is different; be sure to test this before implementing it in production. A simple change from synchronous to asynchronous can make a difference. It is definitely something to add to your performance tuning tool belt.

Does Your Code Have a Preamble?

Okay, here is a pet peeve of mine: I think every stored procedure, function, view, etc. should contain a block of code I refer to as a preamble. If yours doesn't, I strongly recommend you start adding it. It drives me crazy when I see code with no documentation of any kind telling me what it is for and when it was written or changed.

Why? A preamble documents the use, the need, and the changes for the code. It also leaves breadcrumbs as to how, why, and what you did. I don't know about you, but I may code something and not have to change it for two years. When I do, I think back and ask why I did that, or who changed this code last. Working as a Lone DBA, leaving breadcrumbs was critical as I constantly jumped from task to task.

Above is the example of the preamble I use for all the code I write. It tells who wrote it, what it is, what calls it, how to run it, and lists any changes made to it. I find one of the most helpful items in it is the Run documentation. Here I place an exact run statement; it shows how the parameters should look and gives me a quick way to test the code.
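A sketch of what such a preamble might look like is below. The procedure name, dates, and layout here are purely illustrative, covering the elements just described (author, description, caller, run statement, and change history):

/*****************************************************************************
 Procedure:   dbo.usp_GetCustomerOrders        (hypothetical example)
 Author:      <author name>
 Created:     2017-01-15
 Description: Returns all orders for a given customer.
 Called By:   Nightly reporting job / ReportApp
 Run:         EXEC dbo.usp_GetCustomerOrders @CustomerId = 12345;

 Change History:
 Date         Author           Description
 -----------  ---------------  --------------------------------------------
 2017-03-02   <author name>    Added @StartDate parameter
*****************************************************************************/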

There are a million and one reasons why you should be doing this in your code. If you're not doing it, just take a second and start. You'll thank me for it later.

Just Check ALL the Boxes

Today I ran into something on a client server that I unfortunately see too often. The DBA goes through the trouble of configuring and setting up alerts/operators but doesn't really understand what the options in the configurations mean. So, unfortunately, they take the CYA (cover your ass) approach and check all of them. Now, I have seen this not only with alerts but also with things like security configurations. My advice is to always take a second and research what each option means before you check the little boxes, especially when it comes to security. Always follow the rule that less is more.

In the example below, the administrator enabled notifications for an operator using the CYA approach: they checked e-mail, pager, and net send.

So, what's the big deal? This server experienced an insufficient resources (space) alert that fired every minute, and having PAGER notifications enabled caused the error log to bloat, consumed unnecessary space, and created noise in the logs.

The administrator of this environment really only needed to configure the e-mail notification, as the company did not use net send or have pager duties configured. To be honest, I have yet to see an environment use more than that, and per Microsoft, both the Pager and Net Send options will be removed in a future version of SQL Server Agent.
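For reference, an e-mail-only setup can be scripted with the msdb operator and notification procedures. The operator name, e-mail address, and alert name below are placeholders:

-- Create an operator with an e-mail address only (no pager or net send address)
EXEC msdb.dbo.sp_add_operator
     @name = N'DBA Team',
     @enabled = 1,
     @email_address = N'dba-team@yourcompany.com';

-- Attach the operator to an existing alert using e-mail only
-- (@notification_method: 1 = e-mail, 2 = pager, 4 = net send)
EXEC msdb.dbo.sp_add_notification
     @alert_name = N'Severity 017 - Insufficient Resources',
     @operator_name = N'DBA Team',
     @notification_method = 1;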

So, the moral of the story is: please take the time to research what the little checkboxes do before you enable them. The example above is a pretty benign one, but you can imagine what kind of messes you can get yourself into with other, more critical things like security.

A Side Note:

If you want to learn how to set up your alerts and operators, I've already written a blog on that, with scripts; you can find it here.

You can also visit github.com/dc-ac for a full install script that includes the Alert and Operator setups https://github.com/DC-AC/SQL2016_Scripted_Install

VLFs the Forgotten Foe

How many of you check the amount of Virtual Log Files (VLFs) your transaction logs have?

Working as a consultant now, I see this as something that is often ignored by DBAs. This is an easy thing to maintain, and yet so many don't know how to. Keeping these in check can give you a performance boost not only on startup but also with your insert/update/delete as well as backup/restore operations. SQL Server performs better with a smaller number of right-sized virtual log files. I highly recommend you add this to your server reviews.

What is a VLF?

Every transaction log is composed of smaller segments called virtual log files. Every time a growth event occurs, new segments (virtual log files) are created at the end of your transaction log file. A large number of VLFs can slow things down.

What causes High VLFs?

As transactions force growth of the log file, inappropriate log file sizing or auto-growth settings can cause a high number of VLFs to occur. Each growth event adds VLFs to the log file. The more often the log grows, and the smaller the growth increments, the more VLFs your transaction log will have.

Example

If you grow your log by the default 1 MB at a time, you may end up with thousands of VLFs, as opposed to growing in 1 GB increments. MSDN does a great job of explaining how transaction logs work; for a deeper dive, I recommend reading it.

How do I know how many VLFs my log files have?

It’s very easy to figure out how many VLFs you have in your log file.

Make sure you are in the context of the database you want to run it against, in this case TEMPDB, and run the DBCC LOGINFO command.
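For example (tempdb here, but any database works the same way):

USE tempdb;
GO
-- One row is returned per virtual log file
DBCC LOGINFO;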

The command returns a result set with one row per virtual log file for that database; the count of those rows is the number of VLFs you have.

Now there are many ways you can get fancy with it using TSQL, so have fun with it. Write something that rolls through all your databases and gives you the record counts for each. There are plenty of useful examples on the internet.

Ideally, VLF counts should be under 100; anything above that should be addressed.

*New for SQL Server 2017 is a DMV that gives you an even easier way to get the VLF counts: sys.dm_db_log_stats(database_id).
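A quick sketch using it across all databases (on SQL Server 2017 or later, where the total_vlf_count column is available):

SELECT d.name,
       ls.total_vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_stats(d.database_id) AS ls
ORDER BY ls.total_vlf_count DESC;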

How do you Fix It?

These transaction log files should be shrunk until there are only two VLFs, then grown in chunks back to the current size.

  • Perform a shrink using DBCC SHRINKFILE

  • Regrow your log in increments that make sense for your environment (see the sketch below). However, if your target size is in excess of 8 GB, it is recommended to grow in 8000 MB chunks while manually regrowing the file. Your autogrowth should be set to a lower value. There is no set rule for what those values should be; it may take trial and error to figure out what is best for your environment.
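A rough sketch of the shrink-and-regrow pattern is below. The database name, logical log file name, and sizes are placeholders; pick values that fit your environment:

USE [YourDatabase];
GO
-- Shrink the log file down as far as possible
DBCC SHRINKFILE (N'YourDatabase_log', 0);
GO
-- Regrow in chunks back toward the desired size, setting a sensible autogrowth
ALTER DATABASE [YourDatabase]
    MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 8000MB, FILEGROWTH = 512MB);
GO
-- Repeat, increasing SIZE in further chunks until the target size is reached
ALTER DATABASE [YourDatabase]
    MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 16000MB);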

Note: Growing out your log can cause a performance hit and block ongoing transactions; be sure to perform this during a maintenance window.

It's that simple. Now go take a look at your files; you may be surprised at what you find.

Lone DBA Podcast

I recently had the pleasure of being a guest on a podcast episode with SQL Data Partners' Carlos Chacon (B|T) and Steve Stedman (B|T). If you haven't had a chance to attend one of my sessions on Survival Tips for the Lone DBA, this is great insight into it. Through questions and answers, I share what it's like to be a Lone DBA.

http://sqldatapartners.com/2017/03/28/episode-89-lone-dba/