Memory Optimized Tables in SQL Server

Sometimes when I try to learn about a concept, my brain blocks out everything about it. Anything that uses the “In-Memory” label tends to do this to me on occasion. It’s important to note that “In-Memory” is a marketing term for a series of features in SQL Server that have common behaviors but are not inherently related. So, in my next few blogs I am going to attempt to explain some In-Memory concepts as they relate to SQL Server, starting with a dive into Memory Optimized Tables. I’ve already written about Columnstore, which has vastly different use cases from In-Memory OLTP, and you can find those posts here. Columnstore is a perfect example of an In-Memory concept that took me some time to wrap my head around.

What are Memory Optimized Tables?

A Memory Optimized Table, available starting in SQL Server 2014, is simply a table that has two copies: one in active memory and one durable on disk. The durable copy can include the data or just the schema (SCHEMA_ONLY, which I will explain later). Since memory is flushed upon restart of the SQL Server services, SQL Server keeps a physical copy of the table that is recoverable. Even though there are two copies of the table, the memory copy is completely transparent and hidden to you.

What is the added benefit for using these in-memory tables?

That’s always something I ask when looking at SQL Server options or features. For in-memory tables, it’s the way SQL Server handles latches and locks. According to Microsoft, the engine uses an optimistic approach, meaning it does not place locks or latches on any version of updated rows of data, which is very different from normal tables. It’s this mechanism that reduces contention and allows transactions to process exponentially faster. Instead of locks, In-Memory OLTP uses row versions, keeping the original row until after the transaction is committed. Much like Read Committed Snapshot Isolation (RCSI), this allows other transactions to read the original row while the new row version is being written. The In-Memory structure is “pageless” and optimized for speed inside active memory, giving a significant performance gain depending on the workload.
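As a rough sketch of what this looks like from the client side (using the dbo.InMemoryExample table created later in this post; the exact behavior depends on your isolation settings), a reader can access a memory-optimized table under snapshot semantics with the SNAPSHOT table hint, which is only valid on memory-optimized tables:

```sql
-- Illustration only: this read sees the committed row versions as of the
-- start of the transaction, without taking shared locks.
BEGIN TRANSACTION;

SELECT OrderID, ItemNumber, OrderDate
FROM dbo.InMemoryExample WITH (SNAPSHOT);

-- A concurrent session can UPDATE the same rows at the same time;
-- it writes a new row version instead of blocking this reader.
COMMIT TRANSACTION;
```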

SQL Server also changes its logging for these tables. Instead of fully logging, the duality of both on-disk and in-memory versions (row versions) of the table allows less to be logged. SQL Server can use the before and after versions to gain information it would normally acquire from a log file. In SQL Server 2019, the same concept applies to the new Accelerated Database Recovery (ADR) approach to logging and recovery.

Finally, another added benefit is the DURABILITY option shown in the example below. The use of SCHEMA_ONLY can be a great way to get around the use of #TEMP tables and add a more efficient way to process temporary data, especially with larger tables. You can read more on that here.
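As a hedged sketch, a SCHEMA_ONLY table that could stand in for a #TEMP table might look like this (the table and column names here are my own invention, not from a real workload):

```sql
-- Structure survives a restart; the rows do not.
CREATE TABLE dbo.StagingOrders
(
    OrderID    INTEGER NOT NULL
        PRIMARY KEY NONCLUSTERED,   -- memory-optimized tables require at least one index
    ItemNumber INTEGER NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_ONLY);
```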

Things to Consider

Now, this all sounds great and you would think everyone would add this to all their tables. However, like all SQL Server options, this is not for every environment. There are things you need to consider before implementing In-Memory tables. First and foremost is the amount of memory on the server and how that memory is configured. You MUST have that set up correctly in SQL Server, as well as adjust for the increased use of memory, which may mean adding more memory to your server before starting. Secondly, know that, like Columnstore indexes, these tables are not applicable to everything. These tables are optimized for high-volume WRITEs, not, for example, a data warehouse workload that is mostly reads.
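To keep an eye on how much memory these tables actually consume, you can query the In-Memory OLTP DMVs, for example:

```sql
-- Memory allocated and used by each memory-optimized table
-- in the current database
SELECT OBJECT_NAME(t.object_id) AS table_name,
       t.memory_allocated_for_table_kb,
       t.memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats AS t;
```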

Let’s see how we create one

The key to making a table “In-Memory” is the use of the keyword MEMORY_OPTIMIZED = ON in the CREATE statement when you first create the table. Note there is no ability to ALTER an existing table to make it memory optimized; you will need to recreate the table and load the data in order to take advantage of this option on an existing table. There are just a couple more settings you need to have configured to make this work, as you can see below.

The first step is to make sure you are on compatibility level >= 130:

SELECT d.compatibility_level
    FROM sys.databases as d
    WHERE d.name = Db_Name();

If you are not, you will need to change it:
ALTER DATABASE AdventureWorks2016CTP3
SET COMPATIBILITY_LEVEL = 130;

Next, you must alter your database and enable this setting in order to take advantage of In-Memory:
ALTER DATABASE AdventureWorks2016CTP3
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;

Lastly, your database will need to have a memory-optimized filegroup added:
ALTER DATABASE AdventureWorks2016CTP3
ADD FILEGROUP AdventureWorks2016CTP3_mod CONTAINS MEMORY_OPTIMIZED_DATA;

The statement below actually creates the container (file) in the new filegroup. For a memory-optimized filegroup, the “file” is really a folder path that holds the checkpoint files:
ALTER DATABASE AdventureWorks2016CTP3
ADD FILE (NAME='AdventureWorks2016CTP3_mod1',
FILENAME='c:\data\AdventureWorks2016CTP3_mod1')
TO FILEGROUP AdventureWorks2016CTP3_mod;

Now let’s create a table:
USE AdventureWorks2016CTP3;
GO

CREATE TABLE dbo.InMemoryExample
(
    OrderID    INTEGER  NOT NULL IDENTITY
        PRIMARY KEY NONCLUSTERED,
    ItemNumber INTEGER  NOT NULL,
    OrderDate  DATETIME NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_AND_DATA);

You can see that inserting and selecting against the table is syntactically exactly the same as for any other regular table; internally, however, it is far different. Beyond table creation, its behavior is basically the same for these actions, including adding or removing a column. Now, one caveat to these tables is that you cannot CREATE or DROP an index the same way. You must use ALTER TABLE with ADD/DROP INDEX to accomplish this; believe me, I tried.
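For example, adding and removing an index on the table created above goes through ALTER TABLE rather than standalone index DDL (the index name here is my own, and the ALTER TABLE ... ADD INDEX syntax requires SQL Server 2016 or later):

```sql
-- Works on a memory-optimized table:
ALTER TABLE dbo.InMemoryExample
    ADD INDEX IX_InMemoryExample_ItemNumber NONCLUSTERED (ItemNumber);

ALTER TABLE dbo.InMemoryExample
    DROP INDEX IX_InMemoryExample_ItemNumber;

-- This standalone statement fails on a memory-optimized table:
-- CREATE INDEX IX_Bad ON dbo.InMemoryExample (ItemNumber);
```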

Remember the DURABILITY option I briefly mentioned before? This is important. The example above has it set to SCHEMA_AND_DATA, which means that if the database goes offline, both the schema and the data are preserved on disk. If you choose SCHEMA_ONLY, only the structure is preserved, and the data is deleted on restart. This is very important to note, as it can introduce data loss.
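You can confirm which durability a table was created with from the catalog views:

```sql
-- Lists each memory-optimized table with SCHEMA_AND_DATA or SCHEMA_ONLY
SELECT name, durability_desc
FROM sys.tables
WHERE is_memory_optimized = 1;
```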

Summary

As you can see, In-Memory tables are not as complicated as my brain wanted to make them. It’s a relatively simple concept that just incorporates row versioning and two copies of the table. Once you break the concept into its parts, it really is easier to understand. Stay tuned for more In-Memory in my upcoming blogs.

About Monica Rathbun

Monica Rathbun lives in Virginia, is a Microsoft MVP for Data Platform and Microsoft Certified Solutions Expert. She has nearly two decades of experience working with a wide variety of database platforms with a focus on SQL Server and the Microsoft Data Platform. She is a frequent speaker at IT industry conferences on topics including performance tuning and configuration management. She is the Leader of the Hampton Roads SQL Server User Group. She is passionate about SQL Server and the SQL Server community, doing anything she can to give back. Monica can always be found on Twitter (@sqlespresso) handing out helpful tips. You can find Monica blogging at sqlespresso.com 

Comments

One thing I learned the hard way was that once you enable the database for memory-optimized tables, it cannot be removed. You have to migrate all of your data out to a new database.