How to get started with Always Encrypted for Beginners Part 1

Encryption has always been intriguing to me, but it seemed like it could be a very complex process to set up. However, Microsoft made it very simple when they introduced Always Encrypted (AE) in SQL Server 2016 and Azure SQL Database. Unlike Transparent Data Encryption (TDE), which only encrypts data files and backups at rest, AE is configured at the column level, not the database level. Additionally, Always Encrypted is available in Standard (and Express) Edition, starting with SQL Server 2016 SP1. You can easily encrypt a Social Security number (SSN), which is considered very sensitive within the United States, or a Salary column in a table with just a few clicks. In past versions of SQL Server, you could use cell-level encryption (CLE) to do this, but it required code changes, the keys were stored in the database, and the data was sent to the application unencrypted. That brings us to the other benefit of AE: DBAs can no longer see the unencrypted values of the data, as they could with CLE, because the column encryption key is stored outside of SQL Server.

Let’s see how you do it and walk through what each of these options means.

Using the AdventureWorks2016CTP3 HumanResources.Employee table, we are going to encrypt the BirthDate column.

Start by right-clicking on the table > choose Encrypt Columns.

This brings up a wizard, one of the two recommended ways to configure AE. The other option is to use PowerShell.

Click Next on the Intro Screen

You will note in the example below that it lists the columns and then shows the encryption State, which indicates whether the column is eligible for encryption. There are several unsupported column characteristics that can prevent a column from being encrypted; this link to MSDN describes them in further detail. The items on this list are unsupported because they have a default constraint or a check constraint defined:

This is just an example of one of the roadblocks you may encounter. So, let’s take a step back and set up an example we can easily use.

The Setup

Run the script below to create a copy of the Employee table. We are doing this to get a table without any constraints.
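Something along these lines does the trick (a sketch; the copy-table name dbo.Employee_AE is my own choice here, not necessarily what the original demo used). SELECT … INTO copies the data but brings along none of the constraints or defaults:

USE AdventureWorks2016CTP3;
GO

-- Copy a handful of columns into a new, constraint-free heap table.
-- dbo.Employee_AE is an assumed name for this walkthrough.
SELECT  BusinessEntityID,
        NationalIDNumber,
        JobTitle,
        BirthDate,
        HireDate
INTO    dbo.Employee_AE
FROM    HumanResources.Employee;
GO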

Now, again, right-click on the table > choose Encrypt Columns.

In this case, the column we want is BirthDate, so I place a check next to it. To continue, I need to choose an encryption type.

There are two possibilities: Deterministic and Randomized.

MSDN defines Deterministic encryption as encryption that always generates the same encrypted value for any given plain text value. That means if you have a birth date of 01/03/1958, it will be encrypted to the same value each time, such as ABCACBACB. This allows you to index the column and use it in equality comparisons in WHERE clauses, GROUP BY, and JOINs.

Randomized encryption, per MSDN, uses a method that encrypts data in a less predictable manner. This makes Randomized encryption more secure because, using the example above, each encrypted value of 01/03/1958 will be different. It could be ABCACBACB, BBBCCAA, or CCCAAABBB; all three encrypted values decrypt back to the same plain text value. Since the encrypted value is random, you cannot perform search operations, grouping, or joins on it as you can with Deterministic.

In most cases, you will want to use Deterministic encryption. The place where Randomized encryption makes sense is where a column has a small range of distinct values; with Deterministic encryption, an attacker might be able to work out the plain text by brute force, trying a variety of inputs and comparing the results. Some examples of this kind of data include birth date, blood type, or credit card verification numbers (CVV).
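To make the two options concrete, here is roughly what an Always Encrypted column definition looks like once encryption is in place (a sketch only; the table, column, and key names below, including CEK_Auto1, are assumptions for illustration):

-- Illustrative names; CEK_Auto1 is an assumed column encryption key name.
CREATE TABLE dbo.EmployeeDemo
(
    BusinessEntityID int NOT NULL,
    -- Deterministic: the same plain text always produces the same cipher text,
    -- so equality filters, GROUP BY, joins, and indexes still work.
    BirthDate date
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = DETERMINISTIC,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL,
    -- Randomized: the same plain text encrypts differently every time,
    -- so it is harder to attack but cannot be searched, grouped, or joined on.
    Salary money
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                        ENCRYPTION_TYPE = RANDOMIZED,
                        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NULL
);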

So, going back to our example, select Deterministic from the drop-down.

The next step is to choose an encryption key. Let’s choose CEKAUTO (NEW); CEK stands for Column Encryption Key. You can use the same key for every column or choose a new one for each.

Then click NEXT

Every column encryption key must have a MASTER KEY. This is the key that is used to protect (encrypt) the column encryption keys. In the screen below we are going to just go with the defaults. If you have already generated a master key on your SQL Server instance, you can choose to reuse it for any new column you add.

One of the most complex parts of encryption is determining where to store these keys and who will have access to them. You can store these keys on a client machine using the Windows certificate store or in Azure Key Vault.

The next screen has a great feature, and kudos to Microsoft for this addition: you can choose to generate a PowerShell script so you can rerun the process later or store it in your source control.

After clicking Next, you’re done. The wizard will create all the keys and encrypt the selected columns.

Now if you SELECT from the table, you will see that the values in BirthDate are encrypted.
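For example, from a normal SSMS connection (one without Column Encryption Setting=Enabled), a plain SELECT against my assumed demo table returns raw cipher text:

-- dbo.Employee_AE: the assumed table from the setup sketch above.
SELECT TOP (5) BusinessEntityID, BirthDate
FROM   dbo.Employee_AE;
-- BirthDate now comes back as varbinary cipher text (something like 0x0190F7...)
-- instead of a readable date.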

Key Management in Windows Certificate Store

If you would like to see where the keys are stored within Windows, you can do the following. Open the Microsoft Management Console (type MMC in the Run dialog, Win+R). Then go to File > Add/Remove Snap-in. Certificates will be the third one down; click Add.

If you scroll back up, you will note that when we created our master key, it was created under CURRENT USER, so choose My user account.

Expand Personal and Click Certificates (Key)
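You can also ask SQL Server itself where the keys live. The column master key metadata stores the provider and the path to the certificate you just located in MMC (a small sketch using the built-in catalog views):

-- Column master keys: provider (e.g. MSSQL_CERTIFICATE_STORE) and certificate path.
SELECT name, key_store_provider_name, key_path
FROM   sys.column_master_keys;

-- Column encryption keys and the master key protecting each encrypted value.
SELECT cek.name AS column_encryption_key, cmk.name AS column_master_key
FROM   sys.column_encryption_keys AS cek
JOIN   sys.column_encryption_key_values AS cekv
       ON cekv.column_encryption_key_id = cek.column_encryption_key_id
JOIN   sys.column_master_keys AS cmk
       ON cmk.column_master_key_id = cekv.column_master_key_id;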

So, there you have it: encryption made easy. This is only the tip of the iceberg. You need to understand how your environment will access and decrypt the data; encrypting is only part of the puzzle. I will cover how to get SSMS to decrypt the data in Part 2. In the meantime, play around with it.

Thankful DBA

This week is Thanksgiving in the United States, so I thought it fitting to write a quick blog on what I am thankful for as a DBA. These are in no particular order and feel free to respond with something you are thankful for. I’d love to hear it.

  1. Glenn Berry’s Diagnostic Scripts- (B|T) Used these for years. Really a great set of scripts and explanations that we all should be grateful for.
  2. Ola Hallengren’s (B) Maintenance scripts. Index Optimization, Backup, and Integrity Checks for all! They have become an industry standard and continue to get better and better.
  3. RCSI (Read Committed Snapshot Isolation) –My Readers can stop blocking Writers! Thanks to Kendra Little (B|T) for this great blog.
  4. SSMS Results to grid and copy with header- I do this a million and one times a day. Ctrl+Shift+C .
  5. Query Store – Having the plan run stats and being able to force a plan, LOVE IT! Thanks Conor Cunningham and Microsoft for that one.
  6. Availability Groups – Easy setup and trustworthy. And, well, I like the name better than Mirroring.
  7. DMV’s (Dynamic Management Views)- Show me the money! It has all the SQL Server Internals goodies, mine for the taking.
  8. Profiler– #ProfilerForLife nuff said, my most trusted friend.
  9. Columnstore Indexes – I feel the need, the need for speed! Who doesn’t like up to 10x Query Performance gains and 10x the data compression?
  10. Paul Randal’s Waits Library (B|T)– I can’t tell you how many times I’ve referred to this. So much useful information!
  11. Adam Machanic’s sp_WhoIsActive (B|T) – This is my GO TO for seeing what’s actively going on; it’s the first thing I run.
  12. Sentry One Plan Explorer– Execution Plans on STEROIDS! Yes, please. Love the detail and ease of use.
  13. RedGate’s SQL Prompt- My coding is downright ugly. With a quick Ctrl+K, Ctrl+Y my code is sleek and readable. Not to mention I love the code snippets.
  14. Grant Fritchey’s (B|T) Execution Plans book- I can’t wait for 3rd Edition, someone took my very loved highlighted, tabbed, marked up copy. I need another!
  15. Power BI – It puts the slicing and dicing into the user’s hands, giving Management easy visualizations of their data for analysis. Less reports for me to write, yippie.  Thank you Microsoft.
  16. dbatools – Great PowerShell modules for migrating databases. No more doing it the hard way.

Last and most importantly I am grateful for #SQLFamily, Bloggers, and Twitter. I learn from you every damn day!

Happy Thanksgiving!

~Monica

Do Not Pass GO!

What is the GO statement and why is it so important to use? When do I have to use it? When do I not use it? These are questions that have passed through my head from time to time while writing T-SQL within SQL Server.

First What Is It and When Should I Use It?  

The GO keyword lets SSMS (the client tool) know when it’s the end of the batch; it is a batch separator rather than a true T-SQL statement. It basically defines the scope of what you are trying to send to the Database Engine. The example below sends two separate batches. The first changes the database context, and the second executes the SELECT against the Demo database. Simple, yes.

Example
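(The original screenshot isn’t reproduced here, so the batches below are a reconstruction; the table dbo.Person is a stand-in of my own.)

USE Demo;                       -- batch 1: change the database context
GO
SELECT FirstName, LastName      -- batch 2: runs against the Demo database
FROM   dbo.Person;              -- dbo.Person is a stand-in table
GO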

Gotchas

I’ve been caught out by this behavior in the past. Using GO in stored procedures can be tricky. There are times when you want to run a batch of statements together, but if you put a GO into the procedure and compile it, you will notice that you lost any code that came after the GO. The GO signaled that my ALTER or CREATE PROCEDURE statement was done, so everything below it was no longer treated as part of the stored procedure.
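A quick sketch of what that looks like (the procedure and column names are mine):

CREATE PROCEDURE dbo.DemoProc    -- dbo.DemoProc is an assumed name
AS
    SELECT 1 AS Step1;
GO   -- the batch separator ends the CREATE PROCEDURE batch right here

-- This statement is NOT part of dbo.DemoProc; it just ran once, on its own,
-- when the script was executed.
SELECT 2 AS Step2;
GO

EXEC dbo.DemoProc;   -- returns only Step1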

Another gotcha, which can be both good and bad depending on your need: a variable’s life span ends with each GO statement. If you declare a variable, run a statement to populate it, and then use it, you can no longer reference it once you send a GO.

Example
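A minimal illustration (the variable name is my own):

DECLARE @Today date = GETDATE();

SELECT @Today AS CapturedValue;   -- works: same batch as the DECLARE
GO

SELECT @Today AS CapturedValue;   -- fails: Msg 137, Must declare the scalar variable "@Today"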

Cool things to do with GO

I learned this by chance, just messing around. Did you know that if you put a number after GO, it will run that batch that many times? This can be handy for generating a lot of load against a database for demos.
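For example (the table name is my own invention):

CREATE TABLE dbo.LoadTest (Id int IDENTITY(1,1), CreatedAt datetime2 DEFAULT SYSDATETIME());
GO

-- Runs the INSERT batch 1,000 times: quick and dirty demo load.
INSERT INTO dbo.LoadTest DEFAULT VALUES;
GO 1000

SELECT COUNT(*) AS RowsInserted FROM dbo.LoadTest;   -- 1000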

Don’t like the word GO? Change it. Yep, you can change it to anything you want: Tools > Options > Query Execution.

Change it to RUNNOW.

Let’s Try
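Here is roughly what that first attempt looked like (dbo.Person is again my stand-in table):

USE Demo;
RUNNOW          -- the custom batch separator, standing in for GO

SELECT FirstName, LastName
FROM   dbo.Person;
RUNNOW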

Hmmm, why didn’t that work? Because I ran it in an existing open window (session), and the new batch separator only applies to query windows opened after the change. Let’s try that again in a new window.

TADA! Much better.

Now that you know what it does, feel free to advance to GO and collect your $200. Enjoy.

What Are These Backup Settings All About?

I ran across a client the other day that had these backup and recovery options set like the picture below because that is how they default. The database administrator didn’t know what to configure them as, so he left them alone. I find this is the case with a lot of options. For the most part, leaving the defaults can be just fine, but leaving others at the default can leave you missing out on some good features.

Let’s start from top to bottom.

Default backup media retention in days. Now the first thing that comes to my mind is "hey, this is a cleanup job," SCORE! Thinking that maybe this will auto-delete old backups. After all, isn’t that what retention means? NOPE, not in this case.

In this case, it’s just the number of days before a backup media set can be OVERWRITTEN. If the DBA goes to overwrite the media before that many days have passed, SQL Server will give a warning message. You’ll note that in every backup action you do, the RETAINDAYS option is filled in; now that we have changed the setting to 90, it will always default to 90. In general, this is a pointless option to me. I don’t normally overwrite backup media. To me this was more relevant when tapes were used and disk was harder to come by, so I leave it alone.

TSQL
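(A sketch of the equivalent commands; the database name and backup path are assumptions.)

-- Instance-wide default behind the GUI field ('media retention' is an advanced option):
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'media retention', 90;
RECONFIGURE;

-- Per-backup equivalent: RETAINDAYS on the BACKUP statement.
BACKUP DATABASE [Demo]
TO DISK = N'D:\Backups\Demo.bak'    -- database name and path are assumptions
WITH RETAINDAYS = 90, NAME = N'Demo-Full Database Backup';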

 

GUI

Compress backup. This one is exactly what you think it is, no guessing here. Backup compression is one default I highly recommend changing. Compression is a HUGE topic I will save for another time, but in short: the smaller the files, the less space they take up; less data written means less IO (and less data sent to your backup device), and therefore your databases back up and restore faster. Here is a great MSDN link to learn more about the benefits of backup compression. Backup compression has been available in Standard Edition since SQL Server 2008 R2, so use it!
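(A sketch; the database name and path below are assumptions.)

-- Make compression the instance-wide default (this is the option the GUI checkbox sets):
EXEC sys.sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Or request it explicitly for a single backup:
BACKUP DATABASE [Demo]
TO DISK = N'D:\Backups\Demo.bak'    -- database name and path are assumptions
WITH COMPRESSION;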

Recovery Interval (in minutes). Now this one I always thought meant Recovery Point Objective, in other words how much data am I willing to lose in minutes. I was only partially right. According to MSDN, this option defines an upper limit on the time recovering a database should take. The SQL Server Database Engine uses the value specified for this option to determine approximately how often to issue automatic checkpoints on a given database.

This is an option I don’t change. I have yet to see a scenario where I want to override when SQL Server issues a checkpoint on a database by default. There are times when I want to force a checkpoint, but that’s not something I am going to set a standard for. The only reason I have heard for changing it was to reduce IO on a data drive, but to me that comes at too high a cost.

TSQL
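(A sketch of the equivalent command; 0 is the default, which lets SQL Server manage automatic checkpoints itself.)

-- 'recovery interval (min)' is an advanced option.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'recovery interval (min)', 0;
RECONFIGURE;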

So, there you have it: three more options that may no longer be a mystery.