How to Get Started with Always Encrypted for Beginners Part 2

In this post we will pick up where we left off in Part 1. If you haven’t read that yet, please go back and do so first.

Now that we have encrypted our columns, it’s time to take a look at how we decrypt them inside SQL Server Management Studio or through our applications. You’ll be surprised to see how easy it is.

Verify Your Setup

First, let’s verify that the table is still encrypted and that nothing changed after you ran through the Part 1 examples. To confirm, simply query sys.columns, script out the table, or query the data to check that the BirthDate column is still encrypted.

You can also just run a SELECT and look at the data. Here you see the encrypted values in the BirthDate column.

Check system tables
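A query along these lines will confirm it (a sketch; the table name assumes the constraint-free copy we made in Part 1):

SELECT c.name AS column_name,
       c.encryption_type_desc,
       c.encryption_algorithm_name,
       k.name AS column_encryption_key
FROM sys.columns c
JOIN sys.column_encryption_keys k
    ON c.column_encryption_key_id = k.column_encryption_key_id
WHERE c.object_id = OBJECT_ID('HumanResources.Employee_Encrypted');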

Decrypt with SQL Server Management Studio

Viewing decrypted data within SQL Server Management Studio (SSMS) is very easy. SSMS is built on .NET 4.6 and the modern SQL Server client, so you can pass in the necessary encryption option. Using the connection string, SSMS can access the column master key and return the data in its decrypted format.

First, create a new SQL connection and click Options to expand the window.

Then go to the Additional Connection Parameters tab of the login window and simply type column encryption setting = enabled. Then choose Connect.

Now try a SELECT against your table.

If you did it correctly you will see the decrypted BirthDate column.

Now, the reason this works is that the column master key certificate is stored in the Windows Certificate Store on this SQL Server machine; the master key was set up there in Part 1, and SSMS uses it to decrypt the column encryption key and, in turn, the data.

Decrypt with an Application

According to MSDN, for the application to decrypt data, the account that connects to the database must have the VIEW ANY COLUMN MASTER KEY DEFINITION and VIEW ANY COLUMN ENCRYPTION KEY DEFINITION database permissions. These permissions are required to access the metadata about Always Encrypted keys in the database.
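For example (a sketch; the user name is made up):

GRANT VIEW ANY COLUMN MASTER KEY DEFINITION TO [ClinicAppUser];
GRANT VIEW ANY COLUMN ENCRYPTION KEY DEFINITION TO [ClinicAppUser];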

Once those permissions are established, all you have to do is change your application connection string to include Column Encryption Setting=enabled. Below is an example using SQL Server integrated security.

Example

string connectionString = "Data Source=server63; Initial Catalog=Clinic; Integrated Security=true; Column Encryption Setting=enabled";
SqlConnection connection = new SqlConnection(connectionString);

Summary

Decrypting the data is easy when you have the master key stored on your database server, but it also gives the DBA access to the decrypted data. When you are planning to use Always Encrypted, make sure you consider who you want to have access to the data and where you want to store the keys. There are many more layers of security you can add by making those decisions deliberately. The examples I gave in Part 1 and Part 2 are the least complex, and therefore not the most secure, but they give you a beginner’s overview of how to implement it. You also need to examine your application to understand whether it fits within the currently supported features of Always Encrypted.

How to get started with Always Encrypted for Beginners Part 1

Encryption has always been intriguing to me but seemed like it could be a very complex process to set up. However, SQL Server made it very simple when it introduced Always Encrypted (AE) in SQL Server 2016 and Azure SQL Database. Unlike Transparent Data Encryption (TDE), which only encrypts data files and backups at rest, AE is configured at the column level rather than the database level. Additionally, Always Encrypted is available in Standard (and Express) Edition, starting with SQL Server 2016 SP1. You can easily encrypt a social security number (SSN), which is considered very sensitive within the United States, or a Salary column in a table with just a few clicks. In past versions of SQL Server you could use cell-level encryption (CLE) to accomplish this, but it required code changes, the keys were stored in the database, and the data was sent to the application unencrypted. Which brings us to the other benefit of AE: DBAs can no longer see the unencrypted values of the data, as they could with CLE, because the column master key is stored outside of SQL Server.

Let’s see how you do it and walk through what each of these options means.

Using the AdventureWorks 2016 CTP3 HumanResources.Employee table, we are going to encrypt the BirthDate column.

Start by Right Clicking on the Table > Choose Encrypt Columns

It brings up a wizard, one of the two recommended ways to configure AE. The other option is to use PowerShell.

Click Next on the Intro Screen

You will note in the example below that it lists the columns and then shows the encryption State, which indicates whether the column is eligible for encryption. There are several unsupported column characteristics that can make a column ineligible; this link to MSDN describes them in further detail. The columns on this list are ineligible because they have a default constraint or a check constraint defined:

This is just an example of one of the roadblocks you may encounter. So, let’s take a step back and set up an example we can easily use.

The Setup

Run the script below to create a copy of the Employee table. We are doing this to get a table without any constraints.
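Something like this should do it (a sketch; the copy’s name is my own choice, and SELECT INTO conveniently copies the data and basic column definitions but none of the constraints):

USE AdventureWorks2016CTP3;
GO

-- Create a constraint-free copy of the table to encrypt
SELECT *
INTO HumanResources.Employee_Encrypted
FROM HumanResources.Employee;
GO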

Now again, Right Click on the Table > Choose Encrypt Columns

In this case, the column we want is BirthDate, so I place a check next to it. To continue, I need to choose an encryption type.

There are two possibilities: Deterministic and Randomized.

MSDN defines deterministic encryption as encryption that always generates the same encrypted value for a given plain text value. This means that if you have a birthdate of 01/03/1958, it will always be encrypted to the same value each time, such as ABCACBACB. This allows you to index the column and use it in equality comparisons: WHERE clauses, GROUP BY, and JOINs.

Randomized encryption, per MSDN, uses a method that encrypts data in a less predictable manner. This makes randomized encryption more secure: using the example above, each encrypted value of 01/03/1958 will be different. It could be ABCACBACB, BBBCCAA, or CCCAAABBB. All three encrypted values still decrypt to the same plain text value. Since the encrypted value is random, you cannot perform search operations on it as you can with deterministic encryption.

In most cases, you will want to use deterministic encryption. The place where randomized encryption makes sense is where you have a low number of distinct values. Because deterministic encryption always produces the same ciphertext for the same value, an attacker might be able to determine what an encrypted value is by brute forcing the small set of possibilities. Some examples of this kind of data include birth date, blood type, or credit card verification numbers (CVV).

So, going back to our example, select deterministic from the drop down.

The next step is to choose an encryption key. Let’s choose CEKAuto (New); CEK stands for Column Encryption Key. You can use the same key for every column or choose a new one for each.

Then click NEXT

Every column encryption key must have a MASTER KEY. This is the key used to protect the column encryption keys. Below we are just going to go with the defaults. If you have already generated a master key on your SQL Server instance, you can choose to reuse it for any new column you add.

One of the most complex parts of encryption is determining where to store these keys and who will have access to them. You can store the master key on a client machine using the Windows certificate store or in Azure Key Vault.

The next screen has a great feature, and kudos to Microsoft for this addition: you can choose to generate a PowerShell script so you can rerun this later or store it in your source control.

After clicking NEXT, you’re done. The wizard will create all the keys, and encrypt the selected columns.

Now if you SELECT from the table, you will see that the values in BirthDate are now encrypted.

Key Management in Windows Certificate Store

If you would like to see where the keys are stored within Windows, you can do the following. Go to the Microsoft Management Console (type MMC in the Run bar, Win+R). Then go to File > Add/Remove Snap-in. Certificates will be the third one down; click Add.

If you scroll back up, you will note that when we created our master key, it was created under CURRENT USER, so choose My user account.

Expand Personal and Click Certificates (Key)

So, there you have it. Encryption made easy. This is only the tip of the iceberg. You need to understand how your environment will access and decrypt the data, encrypting is only part of the puzzle. I will cover how to get SSMS to decrypt the data in Part 2, in the meantime play around with it.

Thankful DBA

This week is Thanksgiving in the United States, so I thought it fitting to write a quick blog on what I am thankful for as a DBA. These are in no particular order and feel free to respond with something you are thankful for. I’d love to hear it.

  1. Glenn Berry’s (B|T) Diagnostic Scripts – Used these for years. Really a great set of scripts and explanations that we all should be grateful for.
  2. Ola Hallengren’s (B) Maintenance scripts – Index Optimization, Backup, and Integrity Checks for all! They have become an industry standard and continue to get better and better.
  3. RCSI (Read Committed Snapshot Isolation) –My Readers can stop blocking Writers! Thanks to Kendra Little (B|T) for this great blog.
  4. SSMS Results to grid and copy with header- I do this a million and one times a day. Ctrl+Shift+C .
  5. Query Store – Having the plan run stats and being able to force a plan, LOVE IT! Thanks Conor Cunningham and Microsoft for that one.
  6. Availability Groups – Easy setup and trustworthy. And, well, I like the name better than Mirroring.
  7. DMVs (Dynamic Management Views) – Show me the money! They have all the SQL Server internals goodies, mine for the taking.
  8. Profiler– #ProfilerForLife nuff said, my most trusted friend.
  9. Columnstore Indexes – I feel the need, the need for speed! Who doesn’t like up to 10x Query Performance gains and 10x the data compression?
  10. Paul Randal’s Waits Library (B|T)– I can’t tell you how many times I’ve referred to this. So much useful information!
  11. Adam Machanic’s sp_WhoIsActive (B|T) – This is my GO TO for seeing what’s actively going on; it’s the first thing I run.
  12. Sentry One Plan Explorer– Execution Plans on STEROIDS! Yes, please. Love the detail and ease of use.
  13. RedGate’s SQL Prompt- My coding is downright ugly. With a quick Ctrl+K, Ctrl+Y my code is sleek and readable. Not to mention I love the code snippets.
  14. Grant Fritchey’s (B|T) Execution Plans book- I can’t wait for 3rd Edition, someone took my very loved highlighted, tabbed, marked up copy. I need another!
  15. Power BI – It puts the slicing and dicing into the user’s hands, giving Management easy visualizations of their data for analysis. Less reports for me to write, yippie.  Thank you Microsoft.
  16. dbatools – Great PowerShell modules for migrating databases. No more doing it the hard way.

Last and most importantly I am grateful for #SQLFamily, Bloggers, and Twitter. I learn from you every damn day!

Happy Thanksgiving!

~Monica

Do Not Pass GO!

What is the GO statement and why is it so important to use? When do I have to use it? When do I not use it? These are questions that have passed through my head from time to time while writing T-SQL within SQL Server.

First What Is It and When Should I Use It?  

The GO statement lets SSMS (the interface) know when it has reached the end of a batch. It basically defines the scope of what you are trying to send to the Database Engine. The example below sends two separate batches: the first statement changes the database context, and then the SELECT executes against the database Demo. Simple, yes.

Example
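A minimal illustration (the table name is made up):

USE Demo;
GO  -- batch 1: change the database context

SELECT *
FROM dbo.Customers;
GO  -- batch 2: run the SELECT against Demo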

Gotchas

I’ve been caught out by this behavior in the past. Using GO in stored procedures can be tricky. There are times when you want to run a batch of statements together, but if you put a GO into the procedure and compile it, you will notice that you lost any code that came after the GO. The GO signaled that my ALTER or CREATE PROCEDURE statement was done, and everything below it was ignored as part of the stored procedure.

Another gotcha, which can be good or bad depending on your need: a variable’s life span ends with each GO statement. If you declare a variable, run a statement to populate it, and then issue a GO, you can no longer use that variable afterward.

Example
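For instance (a sketch):

DECLARE @FirstName VARCHAR(50);
SET @FirstName = 'Monica';

SELECT @FirstName;   -- works: same batch
GO

SELECT @FirstName;   -- fails: Must declare the scalar variable "@FirstName"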

Cool things to do with GO

This one I learned by chance just messing around. Did you know that if you put a number after GO it will run that batch that many times? This can be handy for generating a lot of load against a database for demos.
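For example, something like this inserts 100 rows, one batch at a time (the table is made up):

INSERT INTO dbo.LoadTest (CreatedDate)
VALUES (GETDATE());
GO 100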

Don’t like the word GO? Change it. Yep, you can change it to anything you want under Tools > Options > Query Execution.

Change it to RUNNOW.

Let’s Try

HMMM, why didn’t that work? Because I ran it in an existing open window (session). Let’s try that again.

TADA! Much better.

Now that you know what it does, feel free to advance to GO and collect your $200. Enjoy.

What Are These Backup Settings All About?

I ran across a client the other day that had the Backup and Recovery options set like the picture below, simply because that is the default. The database administrator didn’t know what to configure them as, so he left them alone. I find this is the case with a lot of options. For the most part, leaving the defaults can be just fine, but leaving others alone can leave you missing out on some good features.

Let’s start from top to bottom.

Default backup media retention in days. Now the first thing that comes to my mind is “hey, this is a cleanup job.” SCORE! Thinking that maybe this will auto-delete old backups. After all, isn’t that what retention means? NOPE, not in this case.

In this case, it’s just the number of days before a backup media set can be OVERWRITTEN. If the DBA goes to overwrite the media before those days have passed, SQL Server will give a warning message. You’ll note in every backup action you do that the RETAINDAYS option is filled in; in this case it will always reflect 90 now that we have changed it. In general, this is a pointless option to me. I don’t normally overwrite backup media. To me this was more relevant when tapes were used and disks were harder to come by, so I leave it alone.

TSQL
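A sketch of the change (‘media retention’ is an advanced option, so it has to be shown first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'media retention', 90;
RECONFIGURE;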


GUI

Compress backup. This one is exactly what you think it is, no guessing here. Backup compression is one I highly recommend changing from the default. Compression is a HUGE topic I will save for another time. But in short, the smaller the files, the less space they take up; less data written means less IO (and less data sent to your backup device), and therefore your databases back up and restore faster. Here is a great MSDN link to learn more about the benefits of backup compression. Backup compression is included in all editions of SQL Server since 2008 R2, so use it!
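To turn it on server-wide with T-SQL instead of the GUI, something like this should work:

EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;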

Recovery Interval (in minutes). Now this one I always thought meant Recovery Point Objective, in other words how much data am I willing to lose in minutes. I was only partially right. According to MSDN, this option defines an upper limit on the time recovering a database should take. The SQL Server Database Engine uses the value specified for this option to determine approximately how often to issue automatic checkpoints on a given database.

This is an option I don’t change. I have yet to see a scenario where I want to override when SQL Server takes a checkpoint by default. There are times when I want to force a checkpoint, but it’s not something I am going to set a standard for. The only reason I have heard for changing it was to reduce IO on a data drive, but to me that comes at too high a cost.

TSQL
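For completeness, a sketch of how you would set it (‘recovery interval (min)’ is an advanced option; the default of 0 lets SQL Server manage checkpoints itself):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'recovery interval (min)', 0;
RECONFIGURE;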

So, there you have it: three more options that may no longer be a mystery.

Quick Model Database Tidbit

Are you using your Model Database to its full potential?

I am finding more and more that database administrators are not using the model database to its fullest potential, and some not at all.

What is that Model Database for?

The model database is basically the default setup (template) for all other databases created on a SQL Server instance. All databases created after install will inherit the properties of this database.

Why Configure It?

Using model can ensure consistency within your environment and is a quick way to automate your database setups. Below is a list of things I’ve used in my environments and others.

Top (in no particular order) Settings I have Implemented Through Model

  • Default Growth Settings
  • Query Store Settings
  • Recovery Models
  • Read Committed Snapshot Isolation
  • Allow Snapshot Isolation
  • Auto Update Statistics Asynchronously
  • Compatibility Levels

Now there are some things that databases will NOT inherit from the model, some of these I learned the hard way.

  • File Groups
  • CDC (Change Data Capture)
  • Collations
  • Database Owner
  • Encryption

Scripts to turn these options on
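Here is a sketch of the kind of script I mean; the specific values are examples to tailor, not recommendations:

ALTER DATABASE model SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE model SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE model SET AUTO_UPDATE_STATISTICS_ASYNC ON;
ALTER DATABASE model SET RECOVERY FULL;
ALTER DATABASE model SET COMPATIBILITY_LEVEL = 140;
ALTER DATABASE model SET QUERY_STORE = ON;

-- Default growth settings (model's logical file names are modeldev and modellog)
ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 256MB, FILEGROWTH = 256MB);
ALTER DATABASE model MODIFY FILE (NAME = modellog, SIZE = 128MB, FILEGROWTH = 128MB);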

What Other Things Can You Do?

Now, you can go above and beyond just the database properties. You can add tables, views, triggers, functions, etc. to your model database, and every time a new database is created those objects will also exist. Why is this useful? In the past, I’ve used this for tracking my DDL (data definition language) changes. I created a trigger that would insert the user, object, date and time, and a text snippet of any ALTER/DROP/CREATE statement run on a database into a logging table. For it to work, the trigger needed to exist on all databases, and putting it in model meant every new database got it automatically; a sketch of the idea follows.
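A stripped-down sketch of that kind of trigger (the trigger, table, and event list here are illustrative; the real version captured more detail):

CREATE TABLE dbo.DDLChangeLog (
    LogId      INT IDENTITY(1,1) PRIMARY KEY,
    EventDate  DATETIME2 NOT NULL DEFAULT SYSDATETIME(),
    LoginName  SYSNAME   NOT NULL,
    EventType  NVARCHAR(128),
    ObjectName NVARCHAR(256),
    SqlText    NVARCHAR(MAX)
);
GO

CREATE TRIGGER trg_TrackDDL
ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE,
    CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    DECLARE @e XML = EVENTDATA();  -- event details come back as XML
    INSERT INTO dbo.DDLChangeLog (LoginName, EventType, ObjectName, SqlText)
    VALUES (ORIGINAL_LOGIN(),
            @e.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(128)'),
            @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)'),
            @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)'));
END;
GO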

Final Words

We all know each environment is different, so don’t just go and implement everything, tailor it to your needs. I suggest you take a look at yours and see if there is anything you can adjust. You may be surprised on what you can tweak.

Note:

*In testing this, I have found that if you create a new database using CREATE DATABASE with T-SQL, the auto-growth sizes do not get inherited by the new database, but everything else did. If I create a new database using the GUI, these settings do propagate. I’m not sure if this is by design or a bug.

Synchronous VS Asynchronous Statistics Updates

One of the things I’ve been able to implement to help with performance is changing from Update Statistics Synchronous to Auto Update Statistics Asynchronously. It’s a simple change that can have a big impact when implemented in highly transactional OLTP environments. Notice I said OLTP not OLAP, since data in an OLAP environment tends to not be as dynamic, so it’s rare to enable this in a data warehouse.

So, what’s the difference between the two and why does it help?

Synchronous (defaulted as AUTO_UPDATE_STATISTICS =TRUE)

By default, when Auto Update Statistics is set to True, the SQL Server Query Optimizer will automatically update statistics when data has met a threshold of changes (insert, update, delete, or merge) and the estimated rows are now potentially stale. When statistics are stale, execution plans can become suboptimal which can lead to degradation in performance.

This best practice option ensures your statistics stay up to date as much as possible. Each time a cached query plan is executed the Optimizer checks for data changes and potentially generates new statistics. This behavior is exactly what we want, but there is a catch. The caveat to this is that a cached query plan will be “held” while the statistics are updated and will recompile to use the new values before running. This caveat can slow down the execution process dramatically.

Auto Update Statistics Asynchronously (AUTO_UPDATE_STATISTICS_ASYNC =TRUE)

This option does the same thing as the above, but with one significant difference: it allows the optimizer to run a query first and use the updated statistics afterward. Where this option differs from synchronous is that a query will NOT be “held” while the statistics are updated. Queries run “as is” until the query optimizer completes the statistics update, and then queries will recompile to use the new statistics the next time they run.

Confused yet? So now, in plain English.

When the asynchronous setting is enabled, a query will run as is until all the statistics it uses are up to date, and then it will run with the new numbers. It does not have to wait for the new numbers to be in place before running. That’s where you get your performance boost: by not having to wait.

Check your settings using TSQL on ALL Databases
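A sketch:

SELECT name,
       is_auto_update_stats_on,
       is_auto_update_stats_async_on
FROM sys.databases;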

How to Turn it on TSQL
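Something like this, run per database (the database name is a placeholder):

ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC ON;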

GUI

Under Database Properties > Options

NOTE: To enable this option Auto Update Statistics must be left ON.

Last Words

Remember, every environment is different; be sure to test this before implementing it in production. A simple change from synchronous to asynchronous can make a difference. It is definitely something to add to your performance tuning tool belt.

Does Your Code Have a Preamble?

Okay, here is a pet peeve of mine: I think every stored procedure, function, view, etc. should contain a block of code I refer to as a preamble. If yours doesn’t, I strongly recommend you start adding it. It drives me crazy when I see code with no documentation of any kind telling me what it is for and when it was written or changed.

Why? A preamble documents the use, need, and changes for the code. It also leaves bread crumbs as to how, why, and what you did. I don’t know about you, but I may code something and not have to change it for two years. When I do, I think back and ask why I did that, or who changed this code last. Working as a lone DBA, leaving bread crumbs was critical as I constantly jumped from task to task.
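To give you the idea, a header along these lines covers it (the procedure name, dates, and fields here are all illustrative):

/*************************************************************************
Procedure:    dbo.usp_GetEmployeePay
Author:       <your name>
Created:      11/01/2017
Description:  Returns pay history for a given employee.
Called by:    Payroll application; nightly SSRS report
Run:          EXEC dbo.usp_GetEmployeePay @EmployeeID = 42;
--------------------------------------------------------------------------
Change History
Date         Author        Description
11/15/2017   <your name>   Added validation for @EmployeeID
*************************************************************************/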

Above is an example of the preamble I use for all code I write. It tells who wrote it, what it is, what it is called by, how to run it, and lists any changes made to it. I find one of the most helpful items is the Run documentation. Here I place an exact run statement. It shows how the parameters should look and gives me a quick way to test it.

There are a million and one reasons as to why you should be doing this in your code. If you’re not doing it just take a second and start doing it. You’ll thank me for it later.

Why I Go to Summit Each Year?

This year will be my 6th PASS Summit that I will have attended. Some people have asked me why I still go, and what I get out of Summit that I don’t get from attending and speaking at SQL Saturdays. That’s an easy one for me to answer, but a long answer at that.

Networking

First and foremost, it’s for the networking. Getting to meet up with so many other like-minded people is gratifying. This networking allows you to exchange ideas, war stories, and downright geek out with others who know what you are talking about and whose eyes don’t glaze over when you discuss optimizer internals.

On top of all that it’s career building. You never know when you will have that one conversation with someone that leads to your next career step. It’s the meeting of the #SQLFamily face to face that can sometimes lead to your name being brought up about a new position that may be opening.  Believe me, I know of many people that have landed their next opportunity just from the interactions they have had at Summit.

Sessions

Learning, learning, learning is the name of the game. The ever-changing world of data happens so fast these days and I want to keep up. It is impossible in every environment to have a chance to get exposure to all the facets of SQL Server and other Data related topics. Summit gives us a wider view into what’s out there, and provides ideas of things I may be able to implement.

Vendors

I love gadgets and gizmos. Learning about what products are out there to make me more efficient at my job is always something I look forward to. Tools were essential for me when I was a lone DBA, so I always looked forward to visiting each booth and seeing what’s new. Spending a lot of time on the show floor is a good investment: you get to meet the folks who work for the vendors that support the community, and see the tools and offerings they have.

CAT Team

Did you know the Microsoft Customer Advisory Team (CAT) is onsite and willing to answer all your questions FOR FREE? Yep, no $300 support call just to pick up the phone; they are there in person. I’ve asked them a question each year pertaining to an environment I was working in, which they were able to help me solve. The CAT team isn’t support; they work with the biggest workloads SQL Server supports and have seen almost everything.

Inspiration and Renewal

I always find a sense of renewal after attending Summit. I get a little kick start on new things to do at work that make my job even more exciting. Many people get into a rut or routine, and their jobs can become mundane. Getting out of the office and seeing others who have the same passion for data as you can really help.

Lastly, Friendships

Summit is a #SQLFamily reunion of sorts. Many of us converse daily on twitter and only get to see each other in person once a year. Summit brings us all together. Some of us are lucky and are able to meet up with each other more frequently as we travel to SQL Saturdays and other events, but Summit is kinda like a home base for us. We all look forward to those #SQLHugs, catching up, and meeting new family members.

This year’s Summit will be a little different for me. When I am not getting my learn on, you will find me in the Denny Cherry & Associates (DCAC) Vendor Booth. Yep, this time I get to play booth babe. I can’t wait to be on the other side of the table telling other attendees what DCAC is all about and helping talk through some of their performance issues.

Anyhow, this may be my 6th PASS Summit, but it won’t be my last. I encourage everyone, whether you have never been or have gone before, to go. It’s not too late to register; if you use the discount code in the image you can save $150. There is so much you can get out of it each and every time you attend.

See you there!

SQL Sequence vs Identity Column

Let’s take a look at what a Sequence is in relation to an Identity column in SQL Server. Did you know Sequences even existed? I didn’t until I was asked about them. It’s amazing how much you can skip over and never notice in SSMS. See this little folder? Ever notice it under Programmability in Management Studio? Yep, it’s there: SQL Server has this very handy thing called Sequences. Sequences are a relatively new feature that have only existed since SQL Server 2012, but they have long existed in Oracle (where, historically, there were no identity columns).

What is a Sequence?

Per MSDN, a sequence is a user-defined, schema-bound object that generates a sequence of numeric values according to the specification with which the sequence was created. The sequence of numeric values is generated in ascending or descending order at a defined interval and can be configured to restart (cycle) when exhausted. It’s important to note that sequence values can be cached and are not guaranteed to be in sequential order.

The Code
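A minimal sketch (the name and settings are my own choices):

CREATE SEQUENCE dbo.DemoSequence
    AS INT
    START WITH 1
    INCREMENT BY 1;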

After creation, you can look at the properties in the GUI. Note you can set the Increment by, you can restart the sequence and even set min and max values.

How to Query
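You ask a sequence for its next value explicitly:

SELECT NEXT VALUE FOR dbo.DemoSequence;   -- returns 1 the first time, then 2, and so on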

What’s an Identity Column?

An identity column is a property of a table that is set with an initial seed value (starting value). For each insert, it assigns a new incremental value that is added to the identity value of the previous row that was loaded.

The Code
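A sketch of a table with an identity column (the table and column names are illustrative):

CREATE TABLE dbo.NameTable (
    NameID    INT IDENTITY(1,1),   -- seed 1, increment 1
    FirstName VARCHAR(50)
);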

Note: after the field type IDENTITY, you declare the SEED (1), then INCREMENT Value (1). You can see this in the GUI below for the Column properties.

How to Query

Let’s insert two records and see the NameID Identity column increment.
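Continuing with the sketch table from above:

INSERT INTO dbo.NameTable (FirstName) VALUES ('Joe');
INSERT INTO dbo.NameTable (FirstName) VALUES ('Sue');

SELECT NameID, FirstName
FROM dbo.NameTable;   -- NameID comes back as 1 and 2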

Comparing the two

| Attribute | Sequence | Identity |
| --- | --- | --- |
| Object level | Database | Table |
| Limit | Can set a limit | Limited by the data type (INT vs BIGINT) |
| Values | Generated by an application call using NEXT VALUE FOR | Generated on INSERT into the table |
| Increments | Declared as INCREMENT at setup and can be any value; a negative number makes the sequence descend instead of ascend | Declared as INCREMENT at setup; typically a positive number, so the numbers ascend |
| Scope | Generated outside the scope of a transaction | Generated within a transaction |
| Number assignment | Can be preallocated (for example, “assign me numbers 1-25”) | Cannot be preallocated; assigned in order by INSERT |
| Gaps | Can experience gaps | Can experience gaps |
| Uniqueness | Not guaranteed; a sequence can be reset and values reused | Often used as a primary key (you must add that property to ensure unique values) |

Summary

So, this was just a quick look at what a Sequence is compared to an Identity column. Both can be very useful. If you’re looking for a unique value, your best bet is to go with an Identity column and the primary key option. If you just want an auto-generated value you can use in an application outside of a table, a Sequence is a sure bet. Play around with them; I am sure you can come up with a million and one uses for each.

***UPDATE NEW to SQL 2017 ***

Per MSDN there is a new option

IDENTITY_CACHE = { ON | OFF }

Applies to: SQL Server 2017 and Azure SQL Database (feature is in public preview)

Enables or disables identity cache at the database level. The default is ON. Identity caching is used to improve INSERT performance on tables with Identity columns. To avoid gaps in the values of the Identity column in cases where the server restarts unexpectedly or fails over to a secondary server, disable the IDENTITY_CACHE option. This option is similar to the existing SQL Server Trace Flag 272, except that it can be set at the database level rather than only at the server level.
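In SQL Server 2017 this is set per database with a database scoped configuration, along these lines:

ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF;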