Cycle Through Your Clipboard History In SSMS

This is my favourite SSMS trick I’ve discovered recently, probably some time towards the end of last year.

Basically, when you paste in Management Studio, you're not limited to the last thing you selected and copied – you can cycle back through previous things that were in the clipboard. In a quick test I was able to choose from the last 20 clipboard entries.

You can find a menu item for this in the “Edit” menu:

[Image: ClipboardRing]

Or, as you can see in the menu (and much more easily), you can use Ctrl+Shift+V.

Here’s an animation of me using it to cycle through the SQL statements from my last blog post:

[Image: ClipboardRing]

This is a great little productivity tool. I was forever copying and pasting one thing, then a second, then having to go back to copy and paste the first thing again. Never again!


The Importance of ORDER BY

Everyone, at the beginning of their SQL career, gets told that it is important to include an ORDER BY if they want the results ordered. Otherwise, the order in which rows are returned is not guaranteed.

But then you run plenty of queries that don't need a specific order – and you see that they (at least seem to) come out in the same order every time. You could (almost) be forgiven for thinking you can rely on that.

There was even a question on a Microsoft SQL certification exam a few years ago that asked what the default order was for records returned by a simple SELECT – the answer it was looking for was that it would be according to the order of the clustered index. So you can rely on that – right?

Wrong. The question was a bad question, and the answer was incorrect. Let’s look at this in action. First I’ll create a table and add some rows:

CREATE TABLE dbo.NoSuchThingAsDefaultSortOrder (
   Id INT IDENTITY(1,1) 
      CONSTRAINT PK_NoSuchThingAsDefaultSortOrder PRIMARY KEY CLUSTERED,
   FirstName VARCHAR(30)
);

INSERT INTO dbo.NoSuchThingAsDefaultSortOrder (FirstName)
VALUES
('John'),
('Fred'),
('Bob'),
('Sue'),
('Jenny'),
('Matt'),
('Alexis'),
('Zebedee');

Now let’s SELECT from the table, without specifying an order:

SELECT Id, FirstName
FROM dbo.NoSuchThingAsDefaultSortOrder;

Here’s our results:

[Image: OrderBy1]

Okay, so they’ve come out ordered by Id according to our clustered index.

But what if someone comes along and decides they want to be able to look up records in the table by name? They create a non-clustered index for that:

CREATE NONCLUSTERED INDEX IX_NoSuchThingAsDefaultSortOrder_FirstName
   ON dbo.NoSuchThingAsDefaultSortOrder(FirstName);

We run our original query again and this time these are the results we get:

[Image: OrderBy2]

They’ve come out in a different order – this time ordered by FirstName – like the index we just added.

The reason is that this time SQL decided that it would be better to use our new index to execute the query. In general, it will try to use the smallest index that will satisfy the query – and in this case it decided that was the non-clustered index. I'd show you the execution plans to prove that, but I think it's pretty obvious from the order the results are returned in.

So, don't go relying on there being a "default" sort order. There's no such thing: changes to indexing – or even, conceivably, to statistics about the data distribution – could change the way SQL chooses to execute your query, and that could change your order.

Unless you make sure to specify ORDER BY.
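
For completeness, here's the version of the query you can actually rely on – the same SELECT with the ordering made explicit:

SELECT Id, FirstName
FROM dbo.NoSuchThingAsDefaultSortOrder
ORDER BY Id;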


Thoughts on Query Performance with TDE enabled

Microsoft state that enabling TDE (Transparent Data Encryption) usually has a performance overhead of 2-4%. That doesn't sound like very much, and personally I wouldn't let it bother me if I wanted to make sure my data is encrypted at rest.

However, you may have heard other sources saying that it’s actually a lot more than that – and the performance impact is a high price to pay for the level of protection offered.

So, what’s the truth?

The critical thing to remember is that with TDE your data is encrypted on disk, but data held in memory (i.e. the buffer pool) is unencrypted. Therefore, you would only expect an overhead when reading from and writing to disk.

SQL Server tries to keep data that is referenced repeatedly in the buffer pool. So, if your SQL instance is provisioned with enough memory, a lot of your read queries can access the buffer pool and don’t have to go out to disk. Such queries should not be affected performance-wise by TDE.
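
If you're curious how much of each database is currently sitting in the buffer pool, a rough query along these lines against sys.dm_os_buffer_descriptors will show you (just a sketch – it can take a while to run on servers with a lot of memory):

--Approximate buffer pool usage per database (each buffered page is 8KB)
SELECT d.name AS DatabaseName,
   COUNT_BIG(*) * 8 / 1024 AS BufferPoolMB
FROM sys.dm_os_buffer_descriptors b
INNER JOIN sys.databases d
   ON b.database_id = d.database_id
GROUP BY d.name
ORDER BY BufferPoolMB DESC;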

There may be other read queries, however, that access older data that hasn't been read for a while; these queries would need to retrieve that data from disk, and so there would be an overhead from TDE.

Any queries that modify data will need the outcome to be written to disk, so in these cases we will see an overhead. This overhead is likely to come in two parts: first when the transaction is written to the log file before committing, and then later as the updated data gets written to the data file as part of a checkpoint operation.

We also have other operations that write or update encrypted data on disk, so we would also expect these to have some overhead. This would include operations such as backups, or index rebuild operations.

You can see from this that the overhead will very much depend on how your application interacts with your data. At one extreme, if you have a set of static data that is small enough to be held in memory and is queried regularly then there should be no overhead. At the other end of the spectrum, if you have an application that writes to the database a lot, and reads less often, then the overhead will be higher.

Let's look at some examples and try to quantify the overhead we might be talking about. In this post we're just going to focus on the effect TDE has when you are reading data.

First, I'll create two databases, one with TDE enabled and one without. Then I'll load the same set of data into each (total size about 1GB).

You can find the script I used for this in my previous blog post:

Encrypting an existing database with TDE
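
If you don't want to follow the link, the core steps for the TDE-enabled database look roughly like this – the names and password here are illustrative, not the exact script from that post:

--Create a database master key and certificate in the master database
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHere!$7';
CREATE CERTIFICATE MyTDECert WITH SUBJECT = 'Certificate used for TDE in the test database';

--Create the database encryption key and turn encryption on
USE TestTDE;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyTDECert;

ALTER DATABASE TestTDE SET ENCRYPTION ON;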

In the first test we'll perform a LIKE query of the worst kind – one that tries to match a value within a column. We have no indexes on the table, but none would be that helpful with this query anyway.

SELECT *
FROM dbo.SomeData 
WHERE SomeText LIKE '%Wibble%';

I’ll run across 4 test cases (capturing the total CPU consumed in each case). The test cases are:

  • TDE Protected database where the buffer cache is empty (i.e. all data has to be read from disk)
  • TDE protected database where all the data for the table is in the buffer cache (i.e. no data has to be read from disk)
  • Database without TDE where the buffer cache is empty (i.e. all data has to be read from disk)
  • Database without TDE where all the data for the table is in the buffer cache (i.e. no data has to be read from disk)

In each test I’ll run the query 5 times and total the CPU to even out variance between executions. For the tests involving disk reads I’ll run the command DBCC DROPCLEANBUFFERS in between executions of the query to empty the buffer cache.
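
If you want to reproduce the tests, a single iteration of the "cold cache" case looks something like this – SET STATISTICS TIME gives you the CPU figures in the messages tab:

--Empty the buffer cache so the data must be read from disk
DBCC DROPCLEANBUFFERS;

--Capture CPU and elapsed time for the query
SET STATISTICS TIME ON;

SELECT *
FROM dbo.SomeData 
WHERE SomeText LIKE '%Wibble%';

SET STATISTICS TIME OFF;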

The results looked like this, with time shown in seconds. Note that MAXDOP was set to 4 and each query went parallel over 4 threads:

[Image: TDE_ReadPerf1]

There's quite a lot of variance between runs, so I'm not going to read anything significant into small differences. However, we can see that the timings are pretty much the same when the data is in memory, but there seems to be about a 10% overhead with TDE when reading from disk.

In case you’re wondering why reading from disk didn’t add much elapsed time for the No-TDE database – the reads were “read-ahead” so were able to complete while the CPU was burning through the data.

Let’s try a different query, this one will still have to scan the whole table as we are dealing with a heap, but it uses an equality predicate so there is less work to do in matching the data:

SELECT *
FROM dbo.SomeData 
WHERE Id = 100000000;

I’ll run the same set of tests as above and we can look at the results:

[Image: TDE_ReadPerf2]

The first thing we notice is that this query runs a lot quicker in general. And again, there is little difference with and without TDE when the data is in memory.

But look at the comparison when the data has to be read from disk. With TDE the CPU consumption is more than 10 times as large, or in percentages, over 1000% worse.

At this point you could be forgiven for panicking – are you willing to risk that TDE makes your queries that much worse?

In both of the above tests, the same amount of data is being read from disk. And if you re-examine the numbers, you'll see that (very roughly) the same amount of CPU has been added in each case where we have TDE enabled – about 50 seconds. That 50 seconds was split over 4 cores so it would have been about 12.5 seconds per core.

In terms of elapsed time, we had approximately that increase with the first query because CPU was the resource under most contention – i.e. the reads were able to occur while waiting for the CPU to complete. In the second query we can see the reading from disk occupied most of the elapsed time for those queries, so the extra CPU consumption didn’t make the query run particularly longer.

By the time it had been executed 5 times (with the memory flushed between each execution) each query had read about 600,000 pages of 8KB each – just under 5GB. If 50 seconds was spent decrypting those pages, then each page took about a twelfth of a millisecond to decrypt – or, to put it another way, TDE decrypted about 12 pages per millisecond. In terms of disk size, that's about 100MB per second. These were tests on a server with magnetic spinning disks (not SSDs) and, as you can see from the above figures, the straight disk access took about 40 seconds on its own.

When TDE doesn't have to read from disk it doesn't add any overhead, but how do we quantify the overhead to queries when it does have to access the disk?

From the above tests we could suggest it adds from 10% to over 1000% CPU.

Or alternatively, from 10% down to 0% elapsed time. Note that those figures are the same way round, i.e. when it added 10% CPU it added 10% elapsed time, but when it added 1000% CPU – the elapsed time was about the same.

I could go on with this type of confusing analysis, but instead I’ll suggest this is the wrong way to think about performance in terms of TDE.

Don’t think about query performance.

Think about read performance.

TDE overhead depends on the level of your physical disk access. In the case of read query performance, it depends wholly on the level of physical reads, and seems to be a reasonably fixed overhead for each physical read.

That makes perfect sense: the overhead for querying is in decrypting data, and surely it will take pretty much the same amount of CPU to decrypt each 8KB page.

In theory this makes it simple for us to calculate what this overhead would look like on our production SQL Servers. There are all sorts of ways of capturing physical reads (and writes).

Let’s say I take a quick look at Resource Monitor to get a ballpark figure for one of my databases on this server that I know to be quite heavy on physical reads. And let’s say I see that it averages 25MB/s during the peak hour.

From that, and from the figures above I can estimate what impact enabling TDE for this database would have on the CPU.

Well, I know that 25MB equates to about 0.25 seconds of CPU to decrypt the data, and I know I have 4 cores, so I can expect that in the average second this adds 0.0625 seconds of CPU per core. I multiply that by 100 and find that I've added 6.25% CPU.

The calculation I've just done is:

Extra CPU % = (MB read from disk per second × 100) / (MB TDE can decrypt per second per core × number of CPU cores)
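
As a quick sanity check, here's that calculation with the example numbers plugged in:

--Estimate the extra CPU % added by TDE decryption (figures from the example above)
DECLARE @MBReadPerSecond DECIMAL(10,2) = 25;              --average physical reads for the database
DECLARE @MBDecryptedPerSecondPerCore DECIMAL(10,2) = 100; --TDE decryption rate from the earlier tests
DECLARE @NumberOfCores INT = 4;

SELECT (@MBReadPerSecond * 100) / (@MBDecryptedPerSecondPerCore * @NumberOfCores) AS ExtraCPUPercent;
--Returns 6.25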

This doesn’t include writes, and it doesn’t include backups – I hope to look at that in a later post.

Now, let’s say that produces a scary number, and I’m worried about the strain that’s going to put on my CPU…

My first question would be why am I experiencing so many reads and can I alleviate that? Does data have a short shelf-life in memory? Do I have enough memory in my server – and is enough allocated to SQL?

This isn’t just in terms of TDE. SQL Server is going to perform much better if your current dataset – i.e. the data you are currently accessing most, can be held in memory. So, if TDE is causing a problem, then it’s possible your queries are slow anyway.

Again, I’m not talking about writes just yet.

Or maybe your database is heavy on physical reads because it’s a data warehouse, regularly querying historical data. In that case, is it a suitable target for encryption? Hopefully the data is well anonymised if you’re using it for reporting and therefore doesn’t contain anything personal or sensitive.

I hope.

In summary…

Just to repeat myself: if you're wondering about TDE and its impact on query performance – which we've all done – try to reframe the question and think about its impact on read performance.

It makes more sense and it may help you to more easily quantify the impact on your servers. And if it does look like the performance may be an issue – perhaps there is tuning you can perform on your database instance to reduce the physical disk access.

Hitting my first big blogging milestone – 100k Reads

After blogging about SQL Server for just over a year, at some point in the last month I went over 100,000 reads in total for my blog posts.

That breaks down today as about 17,500 reads on my WordPress site, and 89,000 on SQL Server Central where my blog is syndicated, across 52 posts. I’m sure this is nothing for the better-known bloggers out there, but for me it has far exceeded my expectations.

Thanks to everyone who has read my posts, and a special thanks to everyone who has commented, liked, or corrected me when I’ve got something wrong. Especially at the beginning, it has meant so much to have your feedback.

When I started, I didn't really know who – if anyone – would read what I was writing. I think if I'd eventually averaged 100 reads on each post I'd have been more than happy; to average two thousand per post has been amazing.

So, if there are any of you out there who are thinking about starting a SQL blog – but aren’t sure – I say go for it, you won’t regret it!

And if, like me, you wonder where your readers will actually come from, then seriously consider syndicating on SQL Server Central. You just need an RSS feed, which most blogging platforms provide by default. I'd say get a few posts under your belt, then drop them an email – they do everything for you and are super helpful. Here's a link:

http://www.sqlservercentral.com/blogs/steve_jones/2010/07/23/syndication-on-sqlservercentral/

Also, consider joining the SQL Server Community on Slack. There's a #blogging channel where other bloggers are happy to help you out – or share your woes – and there's the #blogs channel where you can set up a feed so your posts automatically appear there.

Trying to explain and demonstrate something in a clear manner is one of the best ways to harden and deepen your own knowledge of a subject. I've learnt so much since I started this blog. The blog also starts to become a great repository of knowledge for your own use – more and more often now I try to remember some aspect of SQL Server's behaviour and then realise the answer is in one of my posts.

And if you're thinking that everything you have to say has been said before – some of my most popular posts are rehashing old ideas, or going over quite basic concepts. In fact, the ones where I've done deep research and feel I'm presenting information that hasn't been seen before are often far less popular.

Some of the highlights of the journey so far

Being shared in the Brent Ozar Weekly links – the first time it happened the traffic blew up so much on my WordPress site I thought I must be under attack!

[Image: BlogMilestone1]

WHO’S HACKING MY SITE??!!

Being featured on the SQL Server Central homepage quite a few times.

Having a colleague tell me he had found a great resource on Always Encrypted – only to then realise I wrote it.

Having a great deal of fun writing my post “How to be a bad interviewer” for T-SQL Tuesday #93, a stream of consciousness rant that had to be toned down a few times before it was suitable for publication.


It's been a fun and rewarding undertaking, and I'm now looking forward to the next big milestone – 250k!

Database Files Down The Wrong Path

I manage a few servers used to host SQL Instances for development and test purposes. Each of those instances hosts databases covering multiple environments. So I’ve got multiple servers, with multiple instances, with multiple environments.

It’s important that issues in those environments don’t block development tasks, or invalidate or block testing cycles, so I like to keep them a little bit locked down. Code changes into those environments are generally made through builds and releases.

Equally though, sometimes developers need to be able to make changes that require sysadmin rights, without waiting for me to have time to help – or I might be on holiday.

Consequently, I’ve had to give elevated permissions to a few “trusted” devs. Result – bad things happen occasionally.

One common issue that bugs me is where databases have been moved from one instance to another, usually through backup and restore, and the files haven’t been moved as part of the restore so they get recreated in the data\log folders for the old instance.

This has caused me various problems, e.g. working out which instance is hammering the disk or using up all the space.

You can avoid this by using the tick box on the Files page of the Restore dialog (if you're restoring through the GUI):

[Image: FilesDownWrongPath]
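
If you're restoring with T-SQL rather than the GUI, the equivalent is to relocate the files explicitly with the MOVE option – a sketch with illustrative names and paths:

--Restore and explicitly relocate the data and log files
RESTORE DATABASE MyDatabase
FROM DISK = N'C:\Backups\MyDatabase.bak'
WITH MOVE N'MyDatabase' TO N'D:\SQLData\INSTANCE2\MyDatabase.mdf',
     MOVE N'MyDatabase_log' TO N'E:\SQLLogs\INSTANCE2\MyDatabase_log.ldf';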

I’ve written a quick script to identify any database files that are suffering this and identifying the user responsible for the restore (where this is available). I thought I’d share in case anyone else suffers from the same problem:

--Check for database files down the wrong path
DECLARE @DefaultDataPath VARCHAR(512);
DECLARE @DefaultLogPath VARCHAR(512);

SET @DefaultDataPath = CAST(SERVERPROPERTY('InstanceDefaultDataPath') AS VARCHAR(512)) + '%';
SET @DefaultLogPath = CAST(SERVERPROPERTY('InstanceDefaultLogPath') AS VARCHAR(512)) + '%';

SELECT @DefaultDataPath AS CorrectDataPath, @DefaultLogPath AS CorrectLogPath;

SELECT 
   SUSER_SNAME(d.owner_sid) AS Culprit, 
   d.name, 
   f.type_desc, 
   f.physical_name
FROM sys.master_files f
INNER JOIN sys.databases d
   ON f.database_id = d.database_id
WHERE d.database_id > 4
AND d.name != 'SSISDB'
AND (
      (f.type_desc = 'ROWS' AND f.physical_name NOT LIKE @DefaultDataPath)
   OR (f.type_desc = 'LOG' AND f.physical_name NOT LIKE @DefaultLogPath)
);

System databases and the SSIS catalog are excluded.

For once I won’t show the results of the script in action – I am however off to have a quiet word with a few culprits!

RDS encryption vs TDE

If you’re starting to use cloud services, the number of options available can be confusing. Particularly when they may seem to do the same thing.

If you’re on AWS, and you want to create a SQL Server instance on RDS (Relational Database Service), then you potentially have a couple of different options for enabling encryption at rest.

If you're deploying an Enterprise Edition SQL Server instance then you could use TDE (Transparent Data Encryption), the technology most of us in the SQL Server world already have some awareness of.

RDS also has its own at-rest encryption though, so what’s the difference?

The answer is that (at least in terms of what they protect) they are pretty much equivalent. RDS encryption can be used with whichever database platform you choose to use – Aurora, MySQL, MariaDB, PostgreSQL, Oracle or SQL Server. As many people want this functionality it made sense for Amazon to provide it.

Only Oracle and SQL Server have their own built-in equivalent – TDE – so in these cases you have a choice of which one you want to use. You may prefer to use the one you are familiar with from your on-premises deployments, or you may prefer to go with the RDS one.

A key difference is going to be database backups. RDS handles backups for you, and of course these backups will be encrypted whichever option you choose. However, you would be wise to also take your own set of backups which you store outside the RDS instance. With RDS encryption these backups would not be encrypted; with TDE they would be.

Even this, though, is not a killer point: with SQL Server (from 2014 onward) you have backup encryption, so even if you were using RDS encryption, you could use this to make sure that externally stored backups were also encrypted.
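
Backup encryption is just an option on the BACKUP command. A minimal sketch, assuming you've already created a certificate to protect the backup (names and path are illustrative):

--Take a backup encrypted with a pre-existing certificate
BACKUP DATABASE MyDatabase
TO DISK = N'C:\Backups\MyDatabase_Encrypted.bak'
WITH ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = MyBackupCert);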

A big reason you might want to use the RDS encryption is price. TDE is only available on SQL Server Enterprise Edition, whereas you can use RDS encryption on Standard Edition also.

The difference in cost for SQL Server Standard vs Enterprise licences is significant. Last time I checked, the standard price was around $2,000 per CPU core for Standard, but $7,000 with Enterprise.

The price difference in RDS is also large. If I look at a "db.m4.xlarge" instance, which is 4 virtual CPUs and 16GB RAM, then the price quoted is roughly $750 a month for SQL Server Standard Edition, $1,650 for Enterprise.

Of course, there are differences between each technology in terms of how you set it up and manage it, how and where the keys are stored etc. But if you're confronted with the choice, then mostly you can just pick the one you prefer – it doesn't really matter.

What does matter is that if you prefer not to pay for Enterprise Edition, then you still have the option of at-rest encryption. Which is great news.


T-SQL Tuesday #101 – Some Great SQL Server Tools

[Image: tsql2sday150x150]

This month for T-SQL Tuesday #101 Jens Vestergaard asks us to blog about the essential tools in our SQL Toolbelt.

http://t-sql.dk/?p=1947

I'd just completed my post on CMS when I realised I'd blogged about a few of my favourite tools in the past, and that this would be a good opportunity to share them again in a quick post. So, here's my second of two T-SQL Tuesday posts.

First we have these two great tools:

Statistics Parser – If you do performance tuning and you're not using this then you're missing out. It formats and summarises the results of your STATISTICS IO and TIME commands. Big shout out to Richie Rump for creating this – I use it so much I just need to type “st” in my browser search and it comes straight up!

Live Query Stats – Watch an animated execution plan while your query executes. I'll admit I've not used this a lot, but I still think it's a really cool tool.

And then a few different ones around the same tool:

Query Store – track query performance and plan changes over time as well as forcing SQL to use the plan you want. I’m lucky to have quite a bit of stuff on SQL 2016 and Query Store is great! I was just using it minutes ago to look at how a performance fix put in last week has improved things.
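
If you're on SQL 2016 and haven't switched it on for a database yet, enabling Query Store with the default settings is a single statement (database name is illustrative):

ALTER DATABASE MyDatabase SET QUERY_STORE = ON;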

Clone Database with Query Store – If you want to see how your queries would behave in your production environment, then Clone Database copies out the statistics used to generate the plan. With SQL 2016 you also get a copy of your Query Store data, so you can query away at it happily in your dev environment.
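
Creating the clone itself is a one-liner – a sketch with illustrative database names:

--Create a schema and statistics-only copy of the database (no table data)
DBCC CLONEDATABASE (MyDatabase, MyDatabase_Clone);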

Capture the most expensive queries across your instance with Query Store – blowing my own trumpet a bit here, but I’ve used this script a lot since I wrote it.

That’s it for now – at least for tools I’ve written about – I look forward to seeing what the rest of the T-SQLT community comes up with.

T-SQL Tuesday #101. CMS – Effortlessly run queries against multiple SQL Servers at once

[Image: tsql2sday150x150]

This month for T-SQL Tuesday #101 Jens Vestergaard asks us to blog about the essential tools in our SQL Toolbelt.

http://t-sql.dk/?p=1947

The concept of a Central Management Server (CMS) is one I’ve been meaning to blog about for a while – just because I get the impression not a lot of people know about it or use it even though it’s been around since at least SQL 2008.

A central management server is simply one of your SQL instances on which you can register a group or groups of other instances. You can then run queries against those groups as a set. It's a very easy way to run diagnostic queries – or deploy common objects – across a group.

There are other ways of performing these sorts of tasks – such as using PowerShell – but with a CMS it's really quick and easy, and all you have to know is SQL.

I’ll run through how you set one up, then give a couple of examples of using it.

Setting up your CMS

Go to the View menu in SSMS and select Registered Servers.

Then in the Registered Servers window, expand Database Engine, right-click Central Management Servers and click Register Central Management Server.

[Image: CMS1]

In the New Server Registration dialog select the instance you want to use as a Central Management Server.

[Image: CMS2]

You just need to put in the connection details – though you can also give it a friendly name in the Registered server name field.

It’s worth noting that this server isn’t going to be doing any particularly heavy lifting, it’s just going to store registrations for other servers, so there’s no need to use a dedicated instance for this.

You’ll now see this instance listed under Central Management Servers in the Registered Servers window.

You can right-click on it and either register individual servers underneath it or create one or more groups beneath which you can register servers. You can even nest groups if you want.

Here’s what it looks like after I’ve registered a few servers / groups:

[Image: CMS3]

And that’s it setup – told you it was easy.

Running Queries against multiple instances

This is the clever bit. I can run a query against an individual SQL instance or against a group and any sub-groups.

I want to run something against all my instances in both groups, so I right-click on My CMS Server and select New Query.

[Image: CMS4]

Let's have a look at the query window, as there are some differences:

[Image: CMS5]

You can see the status bar at the bottom is pink, that tells me I’m connected to multiple instances via a CMS. On the bottom left you can see I’m connected to 6 out of 6, i.e. I tried to connect to 6 instances and all are successfully connected.

Now let’s run a query. Suppose I want to check MAXDOP across my instances:

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'max degree of parallelism';

Here’s the results:

[Image: CMS6]

You can see the queries against each instance have been combined into one result set and a column added for the ServerName (the friendly name I specified).

In the messages tab, we also have the output from each query:

[Image: CMS7]

There’s no limit to what you can do with this, as long as you can express what you want to do in T-SQL, you can execute it easily across as many instances as you want.

I use this for deploying updated versions of Ola Hallengren’s procedures across multiple instances, or the First Responder Toolkit from Brent Ozar Inc.

I use this for running checks across multiple instances like the MAXDOP one above. Earlier, I used it to check for any databases that didn’t have PAGE_VERIFY set to CHECKSUM, and then I used it to enable that across the databases where it wasn’t set.
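
For reference, the check itself can be as simple as this, run against the whole group:

--Find databases not using CHECKSUM page verification
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM';

--And to fix a given database (name illustrative):
--ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM;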

I’ll finish with another query to change my server configurations. Having run the first one to look at MAXDOP I’ve decided I want to set it to 4 for all the servers where it’s not set:

IF (SELECT value_in_use FROM sys.configurations 
    WHERE name = 'max degree of parallelism') = 0
BEGIN
   EXEC sp_configure 'show advanced options', 1;  
   RECONFIGURE;  
   EXEC sp_configure 'max degree of parallelism', 4;  
   RECONFIGURE;  
END

Here’s the output from the messages tab:

[Image: CMS8]

(The really sharp eyed amongst you will notice SQL2012_Local is missing, that’s because I tested the query on that one first).

And if I run the original query again to check the settings:

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'max degree of parallelism';

[Image: CMS9]

Sorted!

Summary

Central Management Servers are an exceedingly simple way of performing admin tasks or checks against a whole bunch of SQL instances at once.

Once you've set it up, anyone else with access can connect to it and run queries – or add other servers and groups.

There's still a place for PowerShell, particularly in the automation of tasks. But when you're doing tasks that are a bit more ad hoc, a CMS is perfect.

TDE – Regenerating the Database Encryption Key

As discussed in the last post, with TDE (Transparent Data Encryption), you generally do not need to change the Database Encryption Key (DEK) as it is separately protected by a certificate in the master database and you can rotate the certificate periodically.

Rotating TDE Certificates without re-encrypting data

As also mentioned, if you did need to change the DEK then this will be a resource intensive operation as SQL Server will need to decrypt all your existing data using the existing DEK value, then re-encrypt it with the new one.

Nonetheless, there may be scenarios when you wish to do this. There could be a security breach where you don’t feel confident unless all the keys are updated. Or you might need to support a higher level of encryption than had been previously used.

More commonly, you may publish a database from a development environment and wish to regenerate the keys so that they are unique to your production environment.

In either case, the process is simple and transparent from your point of view: you issue the ALTER DATABASE ENCRYPTION KEY command, specifying REGENERATE and the encryption algorithm you wish to use:

ALTER DATABASE ENCRYPTION KEY
REGENERATE WITH ALGORITHM = AES_256;

In the previous post I talked about rotating the certificate, and it's worth noting that you can perform both these actions at the same time, e.g.:

ALTER DATABASE ENCRYPTION KEY
REGENERATE WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyTDECert_with_longevity;

Where you have a lot of data in the database this could take a while to complete. You can check the status of the operation by looking in the sys.dm_database_encryption_keys view:

SELECT k.encryption_state, k.create_date, k.regenerate_date, k.percent_complete
FROM sys.dm_database_encryption_keys k
INNER JOIN sys.databases d
   ON k.database_id = d.database_id
WHERE d.name = 'TestTDE';

[Image: TDE_Regenerate1]

Note that the encryption_state is set to 4 which means “key change in progress”.

In a brief test I compared the time taken to change the key with the time taken to encrypt a fresh unencrypted database. Surprisingly, it was about the same; I had expected it to take longer due to the need to decrypt and re-encrypt each data item.

Rotating TDE Certificates without re-encrypting data

I talked previously about why we have each of the layers in the encryption hierarchy used to support TDE (Transparent Data Encryption).

Understanding Keys and Certificates with Transparent Data Encryption (TDE)

At the time I mused about why we need the encryption key in the database itself, rather than just encrypting with the keys associated with the certificate stored in the master database. I mentioned one reason – that these keys are intended for asymmetric encryption which has a higher overhead (the DEK is a symmetric key).

In a light-bulb moment, the more important reason has come to me since, and that is about key rotation.

In any form of encryption, it is possible that at some point an individual could gain access to the keys. For example, administrators of your SQL Server instance can freely make a backup of the certificate, encrypted with a password of their choosing. Whilst that in itself is not a problem, it is good practice to periodically refresh those keys so that if someone gained access to an old copy it would be of no use to them. Or, if you are aware that there has been a potential breach of security, you can quickly replace your keys.
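
For reference, backing up a certificate and its private key looks like this – the file paths and password are illustrative:

--Back up the TDE certificate and private key, protected by a password
BACKUP CERTIFICATE MyTDECert
TO FILE = 'C:\Backups\MyTDECert.cer'
WITH PRIVATE KEY (
   FILE = 'C:\Backups\MyTDECert_PrivateKey.pvk',
   ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHere!$7'
);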

If you have any form of security certification in your organisation, then it is likely it will be stipulated that you must change these within a defined timescale – e.g. annually.

However, this creates a problem if you want to change the key that is used to actually encrypt the data: to change that you would need to decrypt all the data using the old key and re-encrypt it using the new one. That would be an intensive activity, and could also leave you vulnerable while the process was underway.

As a result, many forms of encryption use separation between the actual key that is used for the encryption, and another object that is used to protect that key. You can back up the latter separately outside the database to ensure you can recover your data in case of a failure.

With TDE we have the database encryption key (DEK) – the key that is actually used to encrypt/decrypt the data, stored encrypted in the database itself. In the normal run of things, we never access this key directly and never store it anywhere else, so it should be fairly safe, i.e. we should rarely have to worry about replacing it – though the functionality exists to allow us to do so if we require.

Then we have the certificate which is used to encrypt that key – without the certificate the key is useless and TDE can't work. We keep a password-protected backup of the certificate, so it is a bit more vulnerable/public.

Due to that, certificates have an expiry date. This date is a reminder to us that, as a good practice, we should create a new certificate and use that going forward before the existing one expires. TDE doesn’t stop working if the certificate expires, it is up to you to monitor your certificates and replace them when they come to the end of their life.

If you think about it, we might not like it if TDE did enforce this – it would be a big impact if we suddenly could no longer read or write to our database. Of course, MS could probably implement functionality to lock us out while still allowing us to use the certificate for rotating the key, but it would probably upset a lot of people and be less than "transparent".

If we look at the sys.certificates view we can find our TDE certificate and examine the expiry date:

SELECT name, subject, expiry_date
FROM sys.certificates
WHERE name = 'MyTDECert';

[Image: TDE_Rotate1]

I didn’t specify an expiry date for this certificate when I created it, so it was automatically given one that was a year in the future.

Let's create a new certificate, and this time we'll specify a longer expiry. Then we rotate the encryption to use that one.

CREATE CERTIFICATE MyTDECert_with_longevity
WITH SUBJECT = 'Certificate used for TDE in the TestTDE database for years to come',
EXPIRY_DATE = '20201231';


And let's have a look at the values it has in the store:

[Image: TDE_Rotate2]

What do we mean by rotation?

Rotation is the process of moving from the old certificate to the new one. In this case all that happens is that the encrypted value of the database encryption key (stored in our TDE protected database) is decrypted with the old certificate, re-encrypted with the new certificate, and that new encrypted value is stored in the database, overwriting the old one.

The key value itself hasn't changed, just the object protecting it, and as such we can still read/write data from the database without any additional work.

Rotating the certificate is as simple as:

USE TestTDE;

ALTER DATABASE ENCRYPTION KEY
ENCRYPTION BY SERVER CERTIFICATE MyTDECert_with_longevity;

And you’re done. The operation is almost instantaneous, shouldn’t require any database down time, or create additional overhead on your server. Though in production I’d still do it when things are quiet just in case anything goes wrong!