T-SQL Tuesday #101 – Some Great SQL Server Tools

tsql2sday150x150

This month for T-SQL Tuesday #101 Jens Vestergaard asks us to blog about the essential tools in our SQL Toolbelt.

http://t-sql.dk/?p=1947

I’d just completed my post on CMS when I realised I’d blogged about a few of my favourite tools in the past, and that this would be a good opportunity to share them again in a quick post. So, here’s my second of two T-SQL Tuesday posts.

First we have these two great tools:

Statistics Parser – If you do performance tuning and you’re not using this then you’re missing out. Formats and summarises the results of your STATISTICS IO and TIME commands. Big shout to Richie Rump for creating this – I use it so much I just need to type “st” in my browser search and it comes straight up!
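
If you haven’t used the underlying commands before, here’s a minimal sketch of generating the output that Statistics Parser consumes (dbo.MyTable is just a placeholder for one of your own tables):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

--Run the query you want to analyse, then paste the contents of the Messages tab into Statistics Parser
SELECT COUNT(*) FROM dbo.MyTable;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;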

Live Query Stats – Watch an animated execution plan while your query executes. I’ll admit I’ve not used this a lot but I still think it’s a really cool tool.

And then a few different ones around the same tool:

Query Store – track query performance and plan changes over time, and force SQL to use the plan you want. I’m lucky to have quite a bit of stuff on SQL 2016 and Query Store is great! I was just using it minutes ago to look at how a performance fix put in last week has improved things.

Clone Database with Query Store – If you want to see how your queries would behave in your production environment, Clone Database copies out the statistics used to generate the plans. With SQL 2016 you can also take a copy of your Query Store data so you can query away at it happily in your dev environment.

Capture the most expensive queries across your instance with Query Store – blowing my own trumpet a bit here, but I’ve used this script a lot since I wrote it.

That’s it for now – at least for tools I’ve written about – I look forward to seeing what the rest of the T-SQLT community comes up with.

How does Query Store capture cross database queries?

When I was writing my post Capture the most expensive queries across your SQL Server using Query Store a question crossed my mind:

Query Store is a configuration that is enabled per database, and the plans and stats for queries executed in that database are stored in the database itself. So what does query store do when a query spans more than one database?

Does it record the execution stats in all databases involved or does it store them in one based on some criteria (e.g. the one where the most work occurs)? Or does it somehow proportion them out between the databases?

This was relevant as it crossed my mind that if it records them in multiple databases then my query in the above post could be double counting.

Time to test and find out.

I created three databases, Fred, Bert and Ernie. Then a table called Fred in database Fred, and a table called Bert in database Bert. In table Fred I created a bunch of records, then in table Bert I created a much bigger bunch of records:

DROP DATABASE IF EXISTS Fred;
DROP DATABASE IF EXISTS Bert;
DROP DATABASE IF EXISTS Ernie;

CREATE DATABASE Fred;
CREATE DATABASE Bert;
CREATE DATABASE Ernie;

USE Fred;
CREATE TABLE dbo.Fred(Id INT IDENTITY(1,1) PRIMARY KEY CLUSTERED, FredText NVARCHAR(500));

INSERT INTO dbo.Fred(FredText)
SELECT a.name + b.name
FROM sys.objects a, sys.objects b;

USE Bert;
CREATE TABLE dbo.Bert(Id INT IDENTITY(1,1) PRIMARY KEY CLUSTERED, BertText NVARCHAR(500));

INSERT INTO dbo.Bert(BertText)
SELECT a.name + b.name + c.name 
FROM sys.objects a, sys.objects b, sys.objects c;

Then I turned on Query Store for all three databases:

USE MASTER;
ALTER DATABASE Fred SET query_store = ON;
ALTER DATABASE Bert SET query_store = ON;
ALTER DATABASE Ernie SET query_store = ON;

Once that was done I concocted a horrible query that was bound to be horrendously slow – so I knew it would be easy to find when I queried the Query Store runtime stats:

SET STATISTICS IO ON

SELECT TOP 100000 *
FROM Fred.dbo.Fred f
INNER JOIN Bert.dbo.Bert b
   ON b.BertText LIKE  '%' + f.FredText + '%';

I turned STATISTICS IO on so I could see how much work was happening in each database.

I ran the query first in a query window pointing at the Fred database, then I ran my query store query from the previous post (Capture the most expensive queries across your SQL Server using Query Store) to see what had been captured. I made it slightly easier for myself by adding an additional where clause to the cursor so that it only looked at these databases:

--Cursor to step through the databases
DECLARE curDatabases CURSOR FAST_FORWARD FOR
SELECT [name]
FROM sys.databases 
WHERE is_query_store_on = 1
AND name IN ('Fred','Bert','Ernie');

I cleared down Query Store for all the databases:

USE MASTER;
ALTER DATABASE Fred SET QUERY_STORE CLEAR;
ALTER DATABASE Bert SET QUERY_STORE CLEAR;
ALTER DATABASE Ernie SET QUERY_STORE CLEAR;

Then I repeated these steps for Bert and Ernie.

The Statistics IO for the query (regardless of which database context I had set) was as follows:
Table ‘Bert’. Scan count 24, logical reads 5095742, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table ‘Fred’. Scan count 25, logical reads 50, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table ‘Worktable’. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

So you can see most of the work occurs in the Bert database, a little in Fred, and none in Ernie.

Now let’s see what query store captured when I ran the query pointing at database Fred:
QS_Fred

And pointing at database Bert:
QS_Bert

And pointing at database Ernie:
QS_Ernie

You can see that the figures get recorded against whichever database you are pointing at – regardless of where the data being accessed resides. I left the “TotalLogicalReads %” in the above screen shots so you can see I’m not hiding anything.

This has a few implications. First, I’m happy because it means my “Expensive queries” script isn’t double counting.

Second though, as you can’t turn Query Store on in any of the system databases, you won’t be able to capture details for any queries executed with those as the context. That includes ad-hoc queries where the user may connect to master, but execute queries against your other databases.

Fortunately (because Query Store would be pretty pointless if it did) this doesn’t apply to stored procedures.

I’m going to wrap my horrible query into a stored procedure, and deploy it into database Ernie:

USE Ernie;
CREATE PROCEDURE dbo.Horrible
AS
BEGIN
   SELECT TOP 100000 *
   FROM Fred.dbo.Fred f
   INNER JOIN Bert.dbo.Bert b
      ON b.BertText LIKE  '%' + f.FredText + '%';
END;

Now I clear the Query Store for all three databases one last time. Then I call the stored procedure from database Fred:

USE Fred;
EXEC Ernie.dbo.Horrible;

Here’s what I get from query store now:
QS_Sproc

So, Query Store logs the execution against database Ernie – where the stored procedure resides, rather than Fred – where it was called from, or Bert – where most of the work was done.

I hope you’ll trust me enough on that that I don’t have to demonstrate all the other combinations!

Capture the most expensive queries across your SQL Server using Query Store

I’m a big fan of using queries based on the dynamic management view sys.dm_exec_query_stats to capture the most resource hungry queries across a SQL instance.
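
For context, here’s a hedged sketch of the sort of query I mean – top CPU consumers from the plan cache (the columns chosen are just one option among many):

SELECT TOP 20
    qs.execution_count,
    qs.total_worker_time AS TotalCPU,
    qs.total_logical_reads AS TotalLogicalReads,
    SUBSTRING(st.text, 1, 1000) AS QueryText
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC;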

That approach has some drawbacks though. First, it is cleared out every time an instance restarts, and second it only keeps figures for currently cached plans, so when a query recompiles, data is lost.

It was some time ago I blogged about SQL Server Query Store but one of the great features of it is that performance stats are stored in the database, so they aren’t lost in either of the above scenarios.

So I wanted to write a query store equivalent that will produce a ranked list of your most expensive queries.

The only downside is that the DMVs for Query Store are per database whereas dm_exec_query_stats is a view across the whole instance. So I had to use a cursor and a temp table, populating the temp table for each database in turn.

Anyway, enough preamble – here’s the query:

--Gather and report on most resource hungry queries
DECLARE @Reportinginterval int;
DECLARE @Database sysname;
DECLARE @StartDateText varchar(30);
DECLARE @TotalExecutions decimal(20,3);
DECLARE @TotalDuration decimal(20,3);
DECLARE @TotalCPU decimal(20,3);
DECLARE @TotalLogicalReads decimal(20,3);
DECLARE @SQL varchar(MAX);

--Set Reporting interval in days
SET @Reportinginterval = 1;

SET @StartDateText = CAST(DATEADD(DAY, -@Reportinginterval, GETUTCDATE()) AS varchar(30));

--Cursor to step through the databases
DECLARE curDatabases CURSOR FAST_FORWARD FOR
SELECT [name]
FROM sys.databases 
WHERE is_query_store_on = 1;

--Temp table to store the results
DROP TABLE IF EXISTS #Stats;
CREATE TABLE #Stats (
   DatabaseName sysname,
   SchemaName sysname NULL,
   ObjectName sysname NULL,
   QueryText varchar(1000),
   TotalExecutions bigint,
   TotalDuration decimal(20,3),
   TotalCPU decimal(20,3),
   TotalLogicalReads bigint
);

OPEN curDatabases;
FETCH NEXT FROM curDatabases INTO @Database;

--Loop through the databases and gather the stats
WHILE @@FETCH_STATUS = 0
BEGIN
    
    SET @SQL = '
	   USE [' + @Database + ']
	   INSERT INTO #Stats
	   SELECT 
		  DB_NAME(),
		  s.name AS SchemaName,
		  o.name AS ObjectName,
		  SUBSTRING(t.query_sql_text,1,1000) AS QueryText,
		  SUM(rs.count_executions) AS TotalExecutions,
		  SUM(rs.avg_duration * rs.count_executions) AS TotalDuration,
		  SUM(rs.avg_cpu_time * rs.count_executions) AS TotalCPU,
		  SUM(rs.avg_logical_io_reads * rs.count_executions) AS TotalLogicalReads
	   FROM sys.query_store_query q
	   INNER JOIN sys.query_store_query_text t
		  ON q.query_text_id = t.query_text_id
	   INNER JOIN sys.query_store_plan p
		  ON q.query_id = p.query_id
	   INNER JOIN sys.query_store_runtime_stats rs
		  ON p.plan_id = rs.plan_id
	   INNER JOIN sys.query_store_runtime_stats_interval rsi
		  ON rs.runtime_stats_interval_id = rsi.runtime_stats_interval_id
	   LEFT JOIN sys.objects o
		  ON q.OBJECT_ID = o.OBJECT_ID
	   LEFT JOIN sys.schemas s
		  ON o.schema_id = s.schema_id     
	   WHERE rsi.start_time > ''' + @StartDateText + '''
	   GROUP BY s.name, o.name, SUBSTRING(t.query_sql_text,1,1000)
	   OPTION(RECOMPILE);';

    EXEC (@SQL);

    FETCH NEXT FROM curDatabases INTO @Database;
END;

CLOSE curDatabases;
DEALLOCATE curDatabases;

--Aggregate some totals
SELECT 
    @TotalExecutions = SUM(TotalExecutions),
    @TotalDuration = SUM (TotalDuration),
    @TotalCPU  = SUM(TotalCPU),
    @TotalLogicalReads = SUM(TotalLogicalReads)
FROM #Stats

--Produce output
SELECT TOP 20
    DatabaseName,
    SchemaName,
    ObjectName,
    QueryText,
    TotalExecutions,
    CAST((TotalExecutions/@TotalExecutions)*100 AS decimal(5,2)) AS [TotalExecutions %],
    CAST(TotalDuration/1000000 AS decimal(19,2)) AS [TotalDuration(s)],
    CAST((TotalDuration/@TotalDuration)*100 AS decimal(5,2)) AS [TotalDuration %],
    CAST((TotalDuration/TotalExecutions)/1000 AS decimal(19,2)) AS [AverageDuration(ms)],
    CAST(TotalCPU/1000000  AS decimal(19,2)) [TotalCPU(s)],
    CAST((TotalCPU/@TotalCPU)*100 AS decimal(5,2)) AS [TotalCPU %],
    CAST((TotalCPU/TotalExecutions)/1000 AS decimal(19,2)) AS [AverageCPU(ms)],   
    TotalLogicalReads,
    CAST((TotalLogicalReads/@TotalLogicalReads)*100 AS decimal(5,2)) AS [TotalLogicalReads %],
    CAST((TotalLogicalReads/TotalExecutions) AS decimal(19,2)) AS [AverageLogicalReads]
FROM #Stats
--Order by the resource you're most interested in

--ORDER BY TotalExecutions DESC
--ORDER BY TotalDuration DESC
ORDER BY TotalCPU DESC
--ORDER BY TotalLogicalReads DESC

DROP TABLE #Stats;

The script limits itself to looking at databases where query store is enabled.

If you want to bring back more results you can just change the TOP statement, and if you want to look at the results ordered by a different resource (e.g. Reads) then just make sure the relevant ORDER BY clause is uncommented.

I consider this something that will evolve over time, so if you have any suggestions of things you think I should change or add then let me know.

Understanding Keys and Certificates with Always Encrypted

Always Encrypted on SQL 2016 is pretty easy to set up. There’s even a single wizard to guide you through the whole process. However, the best way to get into trouble with this technology is by not clearly understanding what is going on.

I personally find I understand things best when I am solid on the basics. It also means I can then usually logically extrapolate to answer any questions that come up.

So here we’ll go into a little detail about the keys and certificates involved…

To get up and running with Always Encrypted you need three things:

  • An Always Encrypted Certificate to store on your application server / client machine
  • A Column Master Key in your database
  • A Column Encryption Key in your database

You can actually create all of these through SQL Server Management Studio – in fact the first two items get created in the same step. So let’s run through that process, and we can look at what these items are, and what they get used for, in more detail as we go along.

Creating the Certificate and the Column Master Key

You can find Always Encrypted Keys under the Security folder for your database. Right-click over column master keys and select to create a new one:

SSMS_AE

And up comes the GUI:

AE_ColumnMasterKey

The Name field is the name for the Column Master Key within your database – you can see I’ve decided to call mine “Wibble”.

Beneath that you can select – or create – the certificate to be associated with the Key. You can see I already have a couple in the selected key store as I’ve been through this process before, and the selected certificate is highlighted. If you don’t have one, you can select from the “Key Store” drop down where you want the certificate to be stored, and then click the “Generate Certificate” button to generate a new certificate in that store.

It’s worth noting at this point that the certificate is created on your local machine – not the server hosting SQL Server. We’re used to SSMS being the tool through which we interact with and do things at the server end, so this can be a bit counter-intuitive, but in terms of what we are trying to achieve it makes sense. We want to create a certificate that exists on the client – so in this case that’s your local machine.

Click okay and the Column Master Key and certificate get created. You can see your new key in SSMS:

AE_Wibble

To understand what this key actually is, I’m going to right-click and generate the SQL definition:

CREATE COLUMN MASTER KEY [Wibble]
WITH
(
KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
KEY_PATH = N'CurrentUser/My/D492BA86737FECDABB14D13476122C1E4BD217D2'
)
GO

We only have two values, KEY_STORE_PROVIDER_NAME and KEY_PATH. All this is telling us is where to find the certificate we just created. So you can see that the Column Master Key is just a pointer to a certificate stored on client machines.
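
If you want to check this without the GUI, the same metadata is exposed through a catalog view – a quick sketch (as I understand the view in SQL 2016):

SELECT name, key_store_provider_name, key_path
FROM sys.column_master_keys;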

Creating the Column Encryption Key

This is the last thing we need before we can set up encryption on our columns. You can find “Column Encryption Keys” underneath “Column Master Keys” in SSMS. Again, right-click and bring up the GUI:

AE_ColumnKey

You can see I’ve called mine MyColumnKey, and I’ve selected from the drop down that it should be created using the Column Master Key I created earlier – “Wibble”. Click okay and the key gets created.

Again I’m going to generate the SQL definition for my new Column Encryption Key so we can see what’s inside it:

CREATE COLUMN ENCRYPTION KEY [MyColumnKey]
WITH VALUES
(
COLUMN_MASTER_KEY = [Wibble],
ALGORITHM = 'RSA_OAEP',
ENCRYPTED_VALUE = 0x016E000001630075007200720065006E00740075007300650072002F006D0079002F006400340039003200620061003800360037003300370066006500630064006100620062003100340064003100330034003700360031003200320063003100650034006200640032003100370064003200A6296213760796AA271152F6D1F1D8C5B4B84ADDB660EC700936B010918CF62FA63C9F716BCB4337D4D48E7F5ABB879F09EA0ABF2ABFDB54E4740DCD88631C533BE530A1CF491A63B0A334FBBA1971C4B22C18E7B8DFCB9975754F9232E47C5DED5857B6032D4BB95B3C1232D3168366B6A1DA4FFC637849FBC3226FFB78981BD1596DE0CD3ED894B127855FD93F3F40E6DB9602482A9AD69CE96C3D879F544534135FFCCA789CB69E3C52C3A7CCB3AFC55850DA7DD82BE963A36A3AC77ECB26CE77A01F3CC43CEA26757548785A4C5A538250C325CEB106495211A625CDAFF5E9EED96EDE3AFFC8E6518E0E29C4F796209EE5ED35B04257844E6ACF74B0EEF907F667D699FE01BCF29BDA275BC9260E03130DA7008F30CD6CBB12F60AD6B47E1B5338D18FAF50DD6EBE3712FC4E814E71CA1E4F605F97F6DB51B1BBB6819E5BF5E8DBB6EB453B268173DB0B69B6CE78A57BC7C98FBBC34BA205B7DB59FEECCEB235A38B089D31C4892FB870A305D5FB1FC69A91C009B9F9D2B572C0DA2DCAC76C6AF48DF54B3C84D7110BD903F06817FD9BC100773028F357966F029B89DF49B8564DD901349F1304124B3DC691A626568C30224766C18B631CD42097DCE0B927A9A401A51EFE05BD5DC6CB9F7F2F3166A5D2A1DECC7D060C26B41D42BF2EF9ACA5919568979F713EFA655EEEF2A4280060D0D73553633B0BEA4925E595805C

)
GO

We can see it tells us what Column Master Key this is based on, what Algorithm has been used, and a long binary Encrypted Value. So what is this?

What has happened is that in the background a value has been created that will act as the Column Encryption Key when MyColumnKey is used to encrypt (or decrypt) data in a column. That value has then itself been encrypted using the certificate that the Column Master Key is based on. The encrypted value of the Column Encryption Key is what then actually gets stored in the database.

That might seem overcomplicated. If we can encrypt data using the certificate, then why do we need a separate key in the database?

In actuality it’s very important that we have both. Imagine the scenario where we just use a certificate as the key (or just a single key in another form) and that expires, or worse there is a breach and it gets copied. In this case we are going to have to change our certificate or key. But if we change the key, how are we going to read the existing data in the table?

There is only one way, which is to decrypt all the existing data and then re-encrypt it with the new key. Let’s say you have tens of millions of records; this might take a while, and will certainly require the application to be taken offline. Also, while this process is being undertaken, the data is unencrypted and at risk.

With the approach taken by Always Encrypted, we have a way around this. Let’s say we have to replace our certificate. All we need to do is create a new COLUMN MASTER KEY based on the new certificate, then we can decrypt our COLUMN ENCRYPTION KEY’s ENCRYPTED_VALUE using the old certificate and re-encrypt that value with our new certificate. Because the actual key value itself hasn’t changed, we can continue to read the existing data using the new certificate.

In practice this is done for you in the background through a process called “Key Rotation”. A given COLUMN ENCRYPTION KEY can actually support having two COLUMN MASTER KEYS at the same time – each of which would have its own encrypted version of the same underlying key. This allows the process to add a new master key and associated encrypted value before removing the old one – which allows the key rotation process to occur fully online.
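
For reference, the T-SQL shape of a rotation looks something like the sketch below. The new ENCRYPTED_VALUE has to be produced by a client tool that holds the certificates (SSMS or the Always Encrypted PowerShell cmdlets do this for you), so the new key name and the value here are purely illustrative:

--Add a second encrypted value, tied to a new Column Master Key (hypothetical name)
ALTER COLUMN ENCRYPTION KEY [MyColumnKey]
ADD VALUE
(
COLUMN_MASTER_KEY = [WibbleNew],
ALGORITHM = 'RSA_OAEP',
ENCRYPTED_VALUE = 0x016E00 /*...value generated by the client tool...*/
);

--Once everything is using the new certificate, drop the old value
ALTER COLUMN ENCRYPTION KEY [MyColumnKey]
DROP VALUE
(
COLUMN_MASTER_KEY = [Wibble]
);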

There is (understandably) a performance overhead to having two keys at once, so avoid doing this for anything other than an in-progress key rotation.

I’m not going to go any further into the process of Key Rotation here, but it is generally seen as good practice to retire and refresh your certificates regularly – when you create them through SSMS they get a default expiry date one year in the future. So if you are implementing Always Encrypted, it is important to be confident you know how you will approach things when your certificate expires.
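
Once the keys exist, you reference the Column Encryption Key in the table definition to actually encrypt a column. As a hedged sketch (the table and column are just for illustration, and note that string columns need a BIN2 collation when using deterministic encryption):

CREATE TABLE dbo.EncryptedCustomer
(
Id INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
NationalIdNumber NVARCHAR(20) COLLATE Latin1_General_BIN2
    ENCRYPTED WITH
    (
    COLUMN_ENCRYPTION_KEY = MyColumnKey,
    ENCRYPTION_TYPE = DETERMINISTIC,
    ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
    )
);

Deterministic encryption allows equality comparisons (and so lookups and joins) against the column; randomized encryption is stronger but the values can then only be decrypted client side.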

So how do the keys get used in practice?

When you run a query over a connection with column encryption enabled, the following steps occur:

  1. The client sends a preliminary call to the SQL Server Instance containing the query text. In this call it is asking for details of potential encryption on any columns involved in the query and what keys may be involved.
  2. If there are encrypted columns involved, SQL supplies that information, as well as the Column Master Key (which tells the client where to find the relevant certificate locally), and the encrypted value of the Column Encryption Key for each encrypted column.
  3. Using the information and keys, the client can then encrypt any values it needs to before sending to the database. For instance, if it is inserting data into, or looking for a particular value, in an encrypted column.
  4. The modified query is now sent to SQL for execution.
  5. SQL returns any results – with values in encrypted columns remaining in their encrypted form.
  6. If a resultset is returned, the client uses the information and keys supplied in step 2 in order to decrypt any encrypted values.

So you can see there are a few extra steps over and above your normal query execution, and that does create some performance overhead.

Final thoughts

Once you get into the business of having encryption keys then the process by which you manage them is going to be critical. If they are too easy for anyone in your organisation to access, then you may sabotage your efforts in encrypting the data in the first place.

You are going to want to have different keys in your production environments to those you have in test and development as a minimum.

Think about how you are going to manage that through your deployment processes. Do you want to generate new keys and certificates with fresh deployments? In which case where are they backed up, and who has access to them? You want to be absolutely sure you can recover the certificate your Column Master Key is based on if it is lost from your production environment. Otherwise your encrypted data is gone for good.

If you are shipping a product to a client with Always Encrypted enabled – do they have the knowledge and processes to manage the keys moving forward?

And what is your plan for key rotation? How will the right people be reminded before certificates expire that they need to do something about it?

This is a great technology, and Microsoft has made it as simple as possible to use in practice. Nonetheless, it does need to be used with care.

SQL Server 2016 SP1 – A significant change to licensing

I, of all people, am probably the least likely to get excited by licensing changes in a product, but with the SP1 release of SQL Server 2016 there are some significant changes to benefit ISVs. I had to look up that acronym and apparently it’s Independent Software Vendors – which is the type of organisation I work for.

The change has brought many features that previously were only available in Enterprise Edition into the Standard version of the product – and something that has been mentioned less – even into the Express version in most cases.

This is fantastic where we are delivering software to clients, some of whom may wish to use Enterprise, some Standard (or even Express). We no longer have to keep usage of these features out of our core product, which means we are able to maintain a single codebase. I can think immediately of Table Partitioning, Database Snapshots and Data Compression as things we might like to be able to use as standard in our products – but which were previously only available in Enterprise Edition (or Expensive Edition as it’s otherwise known).

On top of those, the new security features in SQL 2016, including Always Encrypted (more on that in a later post), are now available in all the versions. These features are likely to become critical to us as the GDPR (the European General Data Protection Regulation) comes into effect and we need to start making sure the Personally Identifiable Information in our products is protected to the highest levels. Having these features available in Standard edition is going to make our lives one hell of a lot easier.

Here’s the breakdown of features available in the different versions of SQL 2016, contrasting what was available in the RTM version to what is now available in SP1 (picture taken from MSDN):

sql2016sp1licensing

There are still a bunch of Enterprise features not included in that list, but as a generalisation, most things that are configured at the level of database code (rather than server level and HADR) are included. The change is being sold as primarily targeted at software organisations like ourselves, giving us the ability to have consistent database code for all our deployments.

On top of that, in Standard edition you can now have up to 24 processor cores and 128GB of memory (plus a bit more if you use memory optimised tables or columnstore indexes). That makes it fairly viable for even enterprise level workloads.
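
As an aside, if you want to check exactly what edition and patch level an instance is running before relying on any of these features, a quick check is:

SELECT
    SERVERPROPERTY('Edition') AS Edition,
    SERVERPROPERTY('ProductLevel') AS ProductLevel,   --e.g. SP1
    SERVERPROPERTY('ProductVersion') AS ProductVersion;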

All in all, this change is great news for us software vendors in terms of what we can offer to our clients, and another good reason to upgrade to 2016.

SQL Server 2016 – Always Encrypted and the GDPR

The European General Data Protection Regulation (GDPR) is coming, bringing new rules about the protection of Personally Identifiable Information (PII).

For those of us in Europe – or wishing to sell software products and solutions within Europe – this is a big deal. It’s going to mean an increased focus on data protection, and the potential for large penalties in the case of data breaches. Those penalties are also increased if we can’t show that we have taken sufficient measures to protect that data.

This affects us where we host applications and data for our clients, but also when we sell software to clients to use on-premise. They are going to want to be sure that the data in those applications is as secure as possible.

Security infrastructure and data governance are going to be of critical importance to everyone, but for software vendors it seems likely that Encryption is going to be a keyword.

In terms of the UK and Brexit, the British government has confirmed that the Brexit process will not affect the commencement of the GDPR in the UK.

The new regulations come into effect in the UK on the 25th May 2018. In the world of development lifecycles, roadmaps and deployments, that’s not a massive amount of time. So wouldn’t it be nice if there was something reasonably easy to implement that we could use to protect our clients’ data?

Transparent Data Encryption (TDE)
In SQL Server we already had Transparent Data Encryption.

TDE works by encrypting all the data for a database on disk – so if someone steals your data files, or backup files then they can’t read your data.
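
For comparison, setting TDE up only takes a few commands. This is a rough sketch – the names and password are placeholders, and you’d want to back the certificate up straight away:

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHere!1';
CREATE CERTIFICATE MyTDECert WITH SUBJECT = 'TDE Certificate';

USE MyDatabase;   --placeholder database name
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyTDECert;

ALTER DATABASE MyDatabase SET ENCRYPTION ON;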

What TDE doesn’t do is keep data encrypted as it’s in memory or passed across the network to a requesting application. Also, if you have been granted access to the database then you will be able to read the data with no issues. That may not seem like much of a problem if you have your security locked down correctly, but usually you at least need to have a DBA who has system admin rights over the database – and that DBA could abuse their vast privilege and power!

TDE is also an Enterprise edition only feature. To put that in context, if you currently have an application running on Standard edition which is fully provisioned with the 24 cores you are allowed to run on Standard edition, and you want to implement TDE, then it would cost you $5,000 per core to upgrade to Enterprise – that’s $120,000 – before you can go ahead.

Enter Always Encrypted
Always Encrypted offers a solution to these issues. You can choose to encrypt data on a column by column basis. The data is encrypted by the application before being sent across the network and remains encrypted as it is stored in SQL Server (both in memory and on disk). When it’s retrieved, it remains encrypted until it gets to the client application where it can be decrypted and used.

This is achieved by use of an Always Encrypted Certificate which is only stored on the application server, i.e. the database server has no access to it. Even under the covers SQL Server can’t read your encrypted data without the certificate.

So pesky DBA – your evil plans are foiled!

Seriously though, it’s unfortunately rather common for people to have levels of access to databases that they don’t need and shouldn’t have. Or worse, for access to be granted to people through the use of the built-in sa account – whose password is rarely changed.

To use Always Encrypted you need to be on SQL Server 2016 (or higher), and your application must be using at least version 4.6 of the .NET Framework.

From SQL 2016 SP1, Always Encrypted is a feature available in the Standard and Express editions – so that makes it free for users of those editions, not just those on Enterprise.

As I’ve said about a few of the other new features in 2016 – this isn’t rocket science – this isn’t an amazing new technology that’s going to take us to the stars. What it is, though, is easy to use, effective, and minimal effort to implement. It also, once set up, doesn’t require developers to remember to do anything like sending data through a third party library to ensure encryption remains. In terms of hitting an imminent deadline for Data Protection without vast expense – this could be rather handy.

There are of course limitations, impacts and a few things in general that are worth understanding.

Initially I planned just to write a single post on this subject. It was quite straightforward to go through the process of setting up an Always Encrypted test – but once that was done I found I started to ask myself a lot of “what ifs”. I then found that by delving deeper, and understanding what was actually going on with Always Encrypted in a bit more detail at a lower level, these questions started answering themselves, and some of the limitations of Always Encrypted made perfect logical sense.

So we’ll have a look at some of that in the next few posts.

SQL 2016 – Clone Database with Query Store

This is a quickie post to introduce the new DBCC command CLONEDATABASE.

Okay so this isn’t technically a SQL Server 2016 only feature as it’s also available in SQL 2014 SP2 onwards – but that wasn’t released until July 2016.

CLONEDATABASE allows you to do something you could previously only do through the GUI in SSMS, which is to create a copy of a database without the data, but with the statistics objects that describe the distribution of data.

Why is this useful?

Often in our test and development environments we don’t have access to a set of databases with an amount and distribution of data that we would have in production. This means that queries aren’t likely to perform the same way they would in a production system, so we don’t spot performance issues until they are happening out in the wild. If we can obtain a cloned copy of a production database, that includes the statistics, we can at least see what execution plans would be formed by our queries, and that can give us a good idea ahead of time what changes or indexes might be needed.

It’s also great for dealing with production issues as you can see how a query would behave and as you make changes to progress a fix you can use the cloned copy to see if the new query will produce the plan you expect.

Because the cloned database doesn’t include the data we are able to get around the restrictions that would prevent us from copying live data into our test or development environments. I should mention that the statistics information could be seen as being “data” so it is worth making sure that it is acceptable to take even this information out of production.

Even better in SQL 2016

What’s even better in 2016, is that if you have Query Store enabled, CLONEDATABASE will by default also take a copy of your Query Store. This means you have access to the actual execution plans being used in production – and how they have changed over time – as well as the numbers on execution performance. This saves you having to get a production DBA or client to run queries for you to return this information. For many scenarios, just getting a copy of the cloned database will be all the information you need to dig into and start resolving issues with query performance – all in one package.

So how do I do it?

To Clone a database, the command is:

DBCC CLONEDATABASE('{Database Name}','{Name for Cloned Copy}');

If I create one for the WideWorldImporters database used in my Query Store examples, it is simply:
DBCC CLONEDATABASE('WideWorldImporters','WideWorldImporters_Clone');

I can then see it in my set of databases in SSMS:

CloneDatabase

As you can see it is Read-only – and also contains a read-only copy of the Query Store.
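
If for any reason you don’t want the statistics or Query Store data included, I believe builds from SQL 2016 SP1 onwards support WITH options to control what gets copied, along the lines of:

--Clone without statistics or Query Store data
DBCC CLONEDATABASE('WideWorldImporters','WideWorldImporters_CloneNoQS')
WITH NO_STATISTICS, NO_QUERYSTORE;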

Then you just need to back it up and restore it to a development or test (or local environment) to do your investigation and troubleshooting.

Magic 🙂

SQL Server Query Store

Query Store was, probably without doubt, the most anticipated and talked about new feature in SQL 2016. Certainly by the DBA community.

In this post we’ll just take a brief look at Query Store, what it is, how you set it running, and what you can use it for. This will be a fairly superficial overview – you’d need a book to cover it in detail – but hopefully this will give you a flavour of how useful this will be and how to get started.

What it does, at a base level, is actually quite simple. It just stores information relating to query execution over time.

That information consists of two things:
• Execution Plans – the execution plans generated for each query are stored in the query store, and if the plan changes the new plan is also stored.
• Performance metrics – information such as CPU consumption, reads and writes, are captured and stored for each query.

This information is aggregated over intervals (default is one hour) so you can see how query performance changes over time.

This isn’t earth shatteringly new, you can already query to find out the execution plan for a query and you can also query to find aggregated performance metrics related to a given query.

The difference is that now a history can be maintained without implementing additional monitoring. Previously the performance metrics would be aggregated as a single total since the last restart of the SQL instance – and would be cleared at the next restart. Now they are persisted and time-sliced so you can actually measure changes over time.

The simple activity of storing old execution plans is also profound for performance troubleshooting. Anyone who’s worked with large scale production data will have experienced the issue when a function that was working fine, fairly suddenly starts to develop performance problems.

A common cause of this is what’s known as “plan regression”. Basically this is where the execution plan has changed – and the new one’s just not as good as the old one for most executions. Previously you might suspect this was the cause of an issue you were seeing, but there was no way to easily prove it, now you can use query store to view and compare the old and new plans to verify this. You can even with a click or two force the query to go back to using the old (better) plan – though we hope people won’t overuse this and will try to at least delve into the cause and think about resolving it. There is usually a reason SQL thought the new plan would be better – and a particular plan may work for now but may not always be the best plan in the future as your data changes.

Let’s have a look at these features in little more detail.

Enabling Query Store
Query store is a database level configuration. It’s important to understand that, and that the information stored is actually stored within system tables in the database. That means that if you backup and restore the database, the information is retained. Also very importantly, the information is stored asynchronously – so there shouldn’t be any performance impact on the executed queries themselves. Unlike with the Live Query Stats, in this case a watched pot should boil just as quickly. There will of course be some overall server overhead at the point the data does get saved, but that shouldn’t be too significant.

So you can enable Query Store for a database through T-SQL (or in your source code) or through the GUI in SSMS. I just want to demonstrate doing this through the GUI so you can see some of the options. Right-click on the database, select properties, and then select the Query Store page all the way at the bottom:

QueryStore1

Above you can see Query Store enabled for the WideWorldImporters database, with all default settings.

The first setting is “Operation Mode”. By default this is set to “Off”. To enable Query Store and get it running for a particular database you change it to “Read Write”. Job Done.

The Data Flush interval is how often the query store data gets written to disk – remember I said this was asynchronous.

The Statistics Collection interval determines the size of the time slices that your query performance metrics get aggregated into.

Then we have some stuff about data retention. It’s worth noting that if your query store fills up and nothing is happening to clear it out then it flips to Read-Only mode and won’t store any more data until space is freed up. The default amount of space set for it is 100MB – that’s not a lot of space so I really can’t see any justification from that point of view for not enabling this feature.

Leaving the “Size Based Cleanup Mode” set to Auto should make sure that old data gets purged if the query store starts to fill up. Above that is the “Query Store Capture Mode” – if you leave that to AUTO it will ignore infrequent queries or those with negligible overhead.

The last setting “Stale Query Threshold” is how long it keeps data for in days. So 30 days by default. I can see it being useful to up this significantly if we want to use Query Store to be able to monitor performance over a long period, but it may depend on how much space Query Store wants to consume for your database – remember the default is 100MB but you can up that to whatever you like.

At the bottom of the properties page you can also see some nice pie charts that show how much of a proportion of your database Query Store has allocated, and how much of that space it is using.
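
If you’d rather do the same through T-SQL – for instance as part of a deployment script – the equivalent is roughly as follows. The values shown just mirror the settings discussed above, so treat them as a starting point rather than a recommendation:

ALTER DATABASE [WideWorldImporters]
SET QUERY_STORE = ON
(
    OPERATION_MODE = READ_WRITE,
    DATA_FLUSH_INTERVAL_SECONDS = 900,
    INTERVAL_LENGTH_MINUTES = 60,
    MAX_STORAGE_SIZE_MB = 100,
    QUERY_CAPTURE_MODE = AUTO,
    SIZE_BASED_CLEANUP_MODE = AUTO,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)
);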

So that’s Query store set up and configured, let’s have a look at a few of the things it gives us.

Query Store in Action and Forcing a Plan
I’ve set up Query Store as above in a copy of the WideWorldImporters databases on a SQL 2016 instance. I’ve created a stored procedure that I’m running every two seconds and I’ve set the Statistics Collection Interval in Query Store to 1 minute (rather than an hour) so that I can get some figures and graphs out fairly quickly.

Under the database in SSMS, there is now a Query Store folder where some built in reports reside:

QueryStore2

For the sake of this blog post I’m just going to look at a couple of these. Let’s open the “Top Resource Consuming Queries” Report:

QueryStore3

You can see a few things here. On the top left is a bar chart of the most expensive queries (you’ll notice one large one and the rest are negligible in comparison – the large one’s my query). You can configure whether you want to look by CPU or Logical Reads amongst other options and whether you want to look at averages, or maximums or minimums. Basically there a whole bunch of ways you can configure your view.

I’ll be honest that I struggled with some of these built-in Query Store reports to get them to show me what I wanted, so expect a bit of playing around to figure things out if you are using this feature.

In the bar chart, the bar highlighted in green is the currently selected query, on the right we can then see a scatter graph of the execution figures for this query across our statistics intervals (remember I’ve set it to have intervals of 1 minute). You can see I’m looking at average logical reads. You will also see that this query was ticking along nicely until about 14:05 when something happened (that was me!) and the logical reads for each execution of the query suddenly shot up. The blobs on the scatter graph have also changed colour at this point and that represents that the query is now using a new execution plan.

Next to this graph is a key telling us which plan each colour of blob represents and if you click on the plan in the key that you want, the plan itself is displayed in the bottom pane. At the moment I’m looking at the original plan (Plan 1). You will notice that the title specifies that it is “not forced”, you’ll also notice a button to the right of the title that gives us the option to “Force Plan”. Let’s just hold off a minute before we do that.

Before we change anything to try and fix the issue with this query, let’s look at the “Regressed Queries” report. This is pretty similar, but you may use it from a different direction. I.e. it may not be one of your most expensive queries that has started going bad, so if you look in the Regressed Queries report it will focus on ones for which the execution plan has changed in the interval you are looking at. Again I found it a little challenging to get this report to show me the query I was interested in; some playing around can be required:

QueryStore4

You can see here that I have just one big fat bar on my bar chart – as I only have one regressed query in the interval (last 30 minutes) I chose to look at. This can make it easier to identify queries suffering this issue.

I’m going to go back to the previous Resource Consumers report and try and fix the problem. Now, in reality I know what I did and why the query went bad. It was the result of something called parameter sniffing, which is where, if a stored procedure recompiles, the execution plan that is formed may be different depending on the parameters it is executed with. Basically it forms the best plan for the parameters supplied – but that might not be the best plan for all sets of parameters. In this case I forced the stored procedure to form a plan that was going to be expensive in most cases. More on that in my next set of performance tuning workshops.

That issue would be best fixed in the code of the stored procedure, but in production, turning around a fix may take days and we have the problem right now. So let’s use the Force Plan functionality to fix the symptom – just for the moment.

I select the plan I want, and I click the “Force Plan” button. The effect is immediate and I notice it within minutes because my statistics collection interval is so small. I’ll let it run for a bit and then show you the new graph:

QueryStore5

You can see the query has now returned back to healthy (quick) execution. Note the Orange blobs all now have a tick over them to denote that this plan is now forced.
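
Incidentally, you can do the same thing without the GUI. A hedged sketch, assuming you’ve already identified the query_id and plan_id you’re interested in from the catalog views (the ids below are just illustrative):

--Force the selected plan for the query
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;

--And to remove the forcing again later
EXEC sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;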

Comparing Plans

A related feature in SQL 2016 is the ability to compare two execution plans to see what’s changed. I haven’t found this that amazing myself when I’ve looked at it, but that’s mainly due to the natural limitations – if two plans are significantly different then something that highlights the differences is just going to highlight the whole thing. However it can be useful at least to have both plans on screen at the same time so you can go back and forth easily.

You don’t need to do this through Query Store – if you right-click on any Execution Plan in SSMS there is now the “Compare ShowPlan” option, and as long as you have the plan you want to compare against saved as a file then you can go ahead. Note that one good thing is that this is an SSMS feature, so as long as you have SSMS 2016 you can use it to compare plans from earlier versions of SQL Server.

With Query Store you can compare plans directly from the Store. If we go back to one of the reports above, the plans are listed in the key for the scatter graph. You can select more than one by using Shift+Click. Then you can click the button in the toolbar above the scatter graph which has the ToolTip “Compare the Plans for the selected query in separate window.”

Let’s do that for the two plans formed for our query above. The resulting view shows us the two plans side by side. It may be useful to look at these separately so they fit better on this page. On the left we have:

QueryStore6

The area highlighted in Red is where the tool had identified that the two plans are the same. The rest it is not so sure about. All the same it’s a nice visual view just to be able to see what both plans are doing. On the right hand side of the screen you then get this view:

QueryStore7

This shows us a comparison of the properties of whichever operator is selected in each plan – note this need not be equivalent operator in each plan. You can select the Nested Loop operator in the top and the Index Scan operator in the bottom and it will show you those – though the comparison may not be that meaningful!

So, this is kind of a useful tool, but don’t expect it to magically solve the process of comparing plans for you. In general too much changes from one plan to the next for it to be that simple – but this may help – a bit…

Query Store Catalog Views

Like everything else in SQL Server – all the information you can access through the GUI in SSMS, is available directly through the use of system catalogs and views. So if you want to examine the information in a way that isn’t supported by the built in reports then you can just go ahead and write your own queries.

The new views available are:

sys.database_query_store_options
sys.query_context_settings
sys.query_store_plan
sys.query_store_query
sys.query_store_query_text
sys.query_store_runtime_stats
sys.query_store_runtime_stats_interval

Rather than me going into detail here, I’ll just refer you to the MSDN reference:

https://msdn.microsoft.com/en-gb/library/dn818149.aspx
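
As one small example of the sort of thing you can do with them, here’s a hedged sketch that lists any plans currently being forced in the current database:

SELECT
    q.query_id,
    p.plan_id,
    SUBSTRING(t.query_sql_text, 1, 200) AS QueryText
FROM sys.query_store_plan p
INNER JOIN sys.query_store_query q
    ON p.query_id = q.query_id
INNER JOIN sys.query_store_query_text t
    ON q.query_text_id = t.query_text_id
WHERE p.is_forced_plan = 1;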

Conclusions

Query store is a great new feature. It’s not rocket-science but it is going to be very useful. In particular it will massively aid the investigation of production issues, saving time for those troubleshooting them at exactly the point they need time saving – right when everything’s hitting the fan.

It should also be very useful for monitoring performance over time and being able to keep ahead of scalability issues with specific queries.

The new methods for forcing a plan are also excellent for quick fixes – but there is the fear that this will be overused. It is possible to force plans in earlier versions of SQL – but tricky, so people usually just fixed the code. Forcing plans can end up being a case of treating the symptoms rather than the cause – and can lead to other problems later on.

From a personal point of view I’m looking forward to the day when someone asks me to look at some problems in a production system, and I find out it’s on SQL 2016 and Query Store is enabled. My confidence in finding the answers quickly will be massively improved.

Of course, then I’ll probably find out that it hasn’t been configured correctly to purge old data and has been read-only for the last 6 months – but hey-ho!

SQL Server 2016: The Coolest New Feature – Live Query Stats

This is the first in a series of blog posts about how great SQL Server 2016 is, and why you should all want to upgrade your applications to this platform.

I’ll be honest with you, I’m not usually a person to be bothered about new versions, about being up there at the bleeding edge. I’m happy and comfortable in my old tools – like a well worn sofa and pair of slippers.

In my head I still don’t really see anything wrong with SQL Server 7 (which was the first version I used in anger). Although to be fair, if you make me go back as far as SQL 2000 now, I do start to realise how much we were missing back then. If you ask me what was my favourite new feature in SQL 2014 though, I might have to rack my brain a bit – were there any useful new features in 2014? I know hardly anyone adopted it.

SQL Server 2016 however has a number of features I’m (dare I say it) quite excited about. The sort of things where you go – oh wow, that is seriously going to make my life easier. And that’s what we all want isn’t it? An easier life. 2016 is probably the most significant release in over 10 years, and the features provided are going to make life easier for DBAs, for developers and also for those responsible for the security of our data.

So enough of the intro, let’s get started, in today’s post we’ll look at what is arguably the coolest new feature…

Live Query Stats

Rather than try and explain what this is, in the first instance it’s probably easier just to show you.

I’m going to create an arbitrary query that takes a little while to run. Here it is:

--Don't do stuff like this:
SELECT count_big(*)
FROM
[Purchasing].[PurchaseOrderLines] a,
[Purchasing].[PurchaseOrderLines] b,
[Purchasing].[PurchaseOrderLines] c

You can turn on Live Query Statistics through the toolbar in SSMS (the button is next to the one for including the Actual Execution Plan), or right-click over your query window and select it from the menu:

livequerystats1

Then when you run your query, you’re going to see something like this:

livequerystats2

I’ve only shown you part of the execution plan so you can see clearly what is going on. It basically shows you the progress of each of the operators in your execution plan.

If your brain works like mine, you’ll immediately think of all those times when you’re waiting for a query to finish and wondering if you should wait or just kill it, and you’re thinking – “This is great, this will tell me how long my query has left to run!”

Then you’ll go “Oh hang on, I have to have turned live query stats on before I run my query if I want to see this info. Maybe not that useful after all.”

That was literally what I thought when I first encountered this, but then I found you can connect to (potentially) any currently running query. Production DBAs are going to love that part of it – being able to check out long-running queries and see from the level of progress whether it’s worth killing them or not.

One way you can do that is through the Activity Monitor (right-click on your SQL Instance in SSMS and select from the menu). You can probably find the query you’re interested in under “Active Expensive Queries”:

livequerystats3

You can right-click over the query and select “Show Live Execution Plan” from the menu.

There is a caveat though: you need a particular trace flag enabled globally on your instance, otherwise the option will be greyed out. The trace flag is 7412, which enables a “lightweight statistics infrastructure which can be used to view the live query statistics from other sessions”.

You can set this flag from any session, even once the query you’re interested in is already running; however, it will be lost when SQL restarts unless you have set it as a start-up parameter for your SQL instance:

--Enabling a Global Trace Flag
DBCC traceon(7412,-1);
GO

There are other ways of achieving the same functionality but I think the trace flag is the easiest.
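
If you want to check whether the flag is already active on an instance before you go hunting for the menu option, a quick sanity check is:

--Check global trace flag status - 7412 should show Global = 1 if it's on
DBCC TRACESTATUS(7412, -1);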

Of course, you may be in a situation where you can’t make such changes as enabling a trace flag. Don’t worry, there is still a way to view the progress for a running query.

All you need is the session id for the running query. There are lots of ways of getting this: in the above example you can see it in the Activity Monitor, and if it’s your own query then it’s shown at the bottom of your query window in SSMS. Once you have the id you can just run this query:

--Get the execution plan and current progress for session 159
SELECT *
FROM sys.dm_exec_query_statistics_xml(159);

That will give you output like this:

livequerystats4

Simply click on the query plan XML and you can see the actual plan being executed. Within the properties for each operator you can see “Actual Number of Rows” which tells you how many rows have been processed so far. You can compare this to the Estimated Number of Rows to get an idea of progress – as long as the Estimates are accurate!

livequerystats5

It’s not animated, and doesn’t update, but does give a current snapshot which is all you generally need.
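
As an aside – this is an alternative I’ll mention rather than anything specific to Live Query Statistics – if you’d rather see the per-operator progress as a plain resultset instead of clicking through plan properties, the sys.dm_exec_query_profiles DMV (around since SQL 2014) exposes roughly the same counts. It relies on the same profiling infrastructure, so expect to need the trace flag (or for the session to be producing an actual plan) before it returns rows. A quick sketch, again using session 159 as a placeholder:

--Rough per-operator progress for session 159 (placeholder session id)
--Parallel plans return one row per thread per operator
SELECT
    node_id,
    physical_operator_name,
    row_count,            --rows processed so far
    estimate_row_count    --the optimiser's estimate
FROM sys.dm_exec_query_profiles
WHERE session_id = 159
ORDER BY node_id;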

So, as seen, SQL 2016 offers you the ability to look at live queries and see what’s going on in more detail, in particular to be able to assess their progress, which is something you couldn’t do before. Plus the animated view is a bit cool. I think this is going to be a really useful feature not just for DBAs, but for anyone who has sat looking at the time ticking by while they wait and wonder if their SQL query is ever going to finish.

Performance – Read This!

Nothing comes for free in life and a watched pot never boils.

I anticipate that turning on the trace flag will have a negligible effect on performance in general, as the base functionality is enabled anyway – as can be seen by the data being accessible through the sys.dm_exec_query_statistics_xml DMV. It is also implemented using Extended Events, which have been shown to have minimal overhead compared to profiler traces. Only testing with a workload can prove this though, and I haven’t been able to undertake that activity.

Watching an individual query via the animated Live Query Statistics view, however, does seem to have a high overhead – and it’s already known that turning on the generation of Actual Execution Plans in itself has an overhead. I did some basic testing with a reasonably long-running query that took about 150 seconds to complete without this turned on, and then compared the results. In this case, a watched pot took significantly longer to boil. The results were reasonably consistent over a series of runs: with Live Query Statistics turned on, and watching it in SSMS, the query took approximately 50% longer – about 225 seconds.

This is quite disappointing, but it doesn’t mean you shouldn’t use the feature. The recommendation would be that if you are using it to check the progress of a query in production, open the Live Query Statistics to see how it’s progressing, and then close them again so the query can continue without the additional overhead. Also consider using the DMV instead – I imagine there will be some overhead in calling the DMV, but that is a one-off operation rather than the ongoing polling that the animated view uses.

One thing to note is that this sort of usage is not the only point of Live Query Statistics. It can also give you a quick visual way of verifying where your query is getting bogged down, as well as being a helpful tool for understanding the flow of data through a plan.