Hitting my first big blogging milestone – 100k Reads

After blogging about SQL Server for just over a year, at some point in the last month I went over 100,000 total reads across my blog posts.

That breaks down today as about 17,500 reads on my WordPress site, and 89,000 on SQL Server Central where my blog is syndicated, across 52 posts. I’m sure this is nothing for the better-known bloggers out there, but for me it has far exceeded my expectations.

Thanks to everyone who has read my posts, and a special thanks to everyone who has commented, liked, or corrected me when I’ve got something wrong. Especially at the beginning, it has meant so much to have your feedback.

When I started, I didn’t really know who – if anyone – would read what I was writing. I think if I’d eventually averaged 100 reads on each post I’d have been more than happy; to average two thousand per post has been amazing.

So, if there are any of you out there who are thinking about starting a SQL blog – but aren’t sure – I say go for it, you won’t regret it!

And if, like me, you wonder where your readers will actually come from, then seriously consider syndicating on SQL Server Central. You just need an RSS feed, which most blogging platforms provide by default. I’d say get a few posts under your belt, then drop them an email – they do everything for you and are super helpful. Here’s a link:

http://www.sqlservercentral.com/blogs/steve_jones/2010/07/23/syndication-on-sqlservercentral/

Also, consider joining the SQL Server Community on Slack. There’s a #blogging channel where other bloggers are happy to help you out – or share your woes – and there’s the #blogs channel where you can set up a feed so your posts automatically appear there.

Trying to explain and demonstrate something in a clear manner is one of the best ways to harden and deepen your own knowledge of a subject. I’ve learnt so much since I started this blog. The blog also becomes a great repository of knowledge for your own use – more and more often now I try to remember some aspect of SQL Server’s behaviour, only to realise the answer is in one of my posts.

And if you’re thinking that everything you have to say has been said before – some of my most popular posts are rehashes of old ideas, or go over quite basic concepts. In fact, the ones where I’ve done deep research and feel I’m presenting information that hasn’t been seen before are often far less popular.

Some of the highlights of the journey so far

Being shared in the Brent Ozar Weekly links – the first time it happened the traffic blew up so much on my WordPress site I thought I must be under attack!

[Screenshot: WordPress stats showing the traffic spike]

WHO’S HACKING MY SITE??!!

Being featured on the SQL Server Central homepage quite a few times.

Having a colleague tell me he had found a great resource on Always Encrypted – only to then realise I wrote it.

Having a great deal of fun writing my post “How to be a bad interviewer” for T-SQL Tuesday #93 – a stream-of-consciousness rant that had to be toned down a few times before it was suitable for publication.


It’s been a fun and rewarding undertaking, and I’m now looking forward to the next big milestone – 250k!

Database Files Down The Wrong Path

I manage a few servers used to host SQL Instances for development and test purposes. Each of those instances hosts databases covering multiple environments. So I’ve got multiple servers, with multiple instances, with multiple environments.

It’s important that issues in those environments don’t block development tasks, or invalidate or block testing cycles, so I like to keep them a little bit locked down. Code changes into those environments are generally made through builds and releases.

Equally though, sometimes developers need to be able to make changes that require sysadmin rights, without waiting for me to have time to help – or I might be on holiday.

Consequently, I’ve had to give elevated permissions to a few “trusted” devs. Result – bad things happen occasionally.

One common issue that bugs me is where databases have been moved from one instance to another, usually through backup and restore, and the files haven’t been moved as part of the restore so they get recreated in the data\log folders for the old instance.

This has caused me various problems, e.g. working out which instance is hammering the disk or using up all the space.

You can avoid this by ticking the “Relocate all files to folder” box on the Files page of the Restore dialog (if you’re restoring through the GUI):

[Screenshot: the Files page of the Restore dialog]
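If you’re restoring with a script rather than the GUI, the equivalent is the MOVE option on the RESTORE command. Here’s a minimal sketch – the database name, paths, and logical file names are hypothetical, so check yours with RESTORE FILELISTONLY first:

--See the logical file names contained in the backup
RESTORE FILELISTONLY FROM DISK = 'C:\Backups\MyDatabase.bak';

--Restore, relocating the files to this instance's own folders
RESTORE DATABASE MyDatabase
FROM DISK = 'C:\Backups\MyDatabase.bak'
WITH MOVE 'MyDatabase' TO 'D:\SQLData\MyDatabase.mdf',
     MOVE 'MyDatabase_log' TO 'E:\SQLLogs\MyDatabase_log.ldf';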

I’ve written a quick script to identify any database files that are suffering from this, and to identify the user responsible for the restore (where this is available). I thought I’d share it in case anyone else suffers from the same problem:

--Check for database files down the wrong path
DECLARE @DefaultDataPath VARCHAR(512);
DECLARE @DefaultLogPath VARCHAR(512);

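--Append a wildcard so the default paths can be used as LIKE patterns below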
SET @DefaultDataPath = CAST(SERVERPROPERTY('InstanceDefaultDataPath') AS VARCHAR(512)) + '%';
SET @DefaultLogPath = CAST(SERVERPROPERTY('InstanceDefaultLogPath') AS VARCHAR(512)) + '%';

SELECT @DefaultDataPath AS CorrectDataPath, @DefaultLogPath AS CorrectLogPath;

SELECT 
   SUSER_SNAME(d.owner_sid) AS Culprit, 
   d.name, 
   f.type_desc, 
   f.physical_name
FROM sys.master_files f
INNER JOIN sys.databases d
   ON f.database_id = d.database_id
WHERE d.database_id > 4
AND d.name != 'SSISDB'
AND (
      (f.type_desc = 'ROWS' AND f.physical_name NOT LIKE @DefaultDataPath)
   OR (f.type_desc = 'LOG' AND f.physical_name NOT LIKE @DefaultLogPath)
);

System databases and the SSIS catalog are excluded.

For once I won’t show the results of the script in action – I am however off to have a quiet word with a few culprits!

RDS encryption vs TDE

If you’re starting to use cloud services, the number of options available can be confusing. Particularly when they may seem to do the same thing.

If you’re on AWS, and you want to create a SQL Server instance on RDS (Relational Database Service), then you potentially have a couple of different options for enabling encryption at rest.

If you’re deploying an Enterprise Edition SQL Server instance then you could use TDE (Transparent Data Encryption), the technology most of us in the SQL Server world already have some awareness of.

RDS also has its own at-rest encryption though, so what’s the difference?

The answer is that (at least in terms of what they protect) they are pretty much equivalent. RDS encryption can be used with whichever database platform you choose to use – Aurora, MySQL, MariaDB, PostgreSQL, Oracle or SQL Server. As many people want this functionality it made sense for Amazon to provide it.

Only Oracle and SQL Server have their own built-in equivalent – TDE – so in these cases you have a choice of which one you want to use. You may prefer the one you are familiar with from your on-premises deployments, or you may prefer to go with the RDS one.

A key difference is going to be database backups. RDS handles backups for you, and of course these backups will be encrypted whichever option you choose. However, you would be wise to also take your own set of backups which you store outside the RDS instance. With RDS encryption these backups would not be encrypted; with TDE they would be.
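As an aside, you can’t just back up an RDS database to a local drive. Taking your own backups means using RDS native backup to S3 – assuming you’ve added the SQLSERVER_BACKUP_RESTORE option to your option group, it’s a stored procedure call. A sketch with hypothetical names:

--Back up an RDS hosted database to S3
EXEC msdb.dbo.rds_backup_database
    @source_db_name = 'MyDatabase',
    @s3_arn_to_backup_to = 'arn:aws:s3:::my-backup-bucket/MyDatabase.bak';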

Even this, though, is not a killer point. With SQL Server (from 2014 onward) you have backup encryption, so even if you were using RDS encryption, you could use this to make sure that externally stored backups were also encrypted.
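The syntax looks something like this – the certificate name and path are hypothetical, and the certificate needs to already exist in master (with its own backup safely stored elsewhere):

--Encrypt (and compress) a backup using a pre-created certificate
BACKUP DATABASE MyDatabase
TO DISK = 'C:\Backups\MyDatabase_Encrypted.bak'
WITH COMPRESSION,
     ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = MyBackupCert);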

A big reason you might want to use the RDS encryption is price. TDE is only available in SQL Server Enterprise Edition, whereas you can use RDS encryption with Standard Edition too.

The difference in cost between SQL Server Standard and Enterprise licences is significant. Last time I checked, the list price was around $2,000 per CPU core for Standard, but $7,000 for Enterprise.

The price difference in RDS is also large. If I look at a “db.m4.xlarge” instance, which has 4 virtual CPUs and 16GB RAM, the price quoted is roughly $750 a month for SQL Server Standard Edition and $1,650 for Enterprise.

Of course, there are differences between the two technologies in terms of how you set them up and manage them, how and where the keys are stored, etc. But if you’re confronted with the choice then mostly you can just pick the one you prefer – it doesn’t really matter.

What does matter is that if you prefer not to pay for Enterprise Edition, then you still have the option of at-rest encryption. Which is great news.


T-SQL Tuesday #101 – Some Great SQL Server Tools


This month for T-SQL Tuesday #101 Jens Vestergaard asks us to blog about the essential tools in our SQL Toolbelt.

http://t-sql.dk/?p=1947

I’d just completed my post on CMS when I realised I’ve blogged about a few of my favourite tools in the past, and that this would be a good opportunity to share them again in a quick post. So, here’s my second of two T-SQL Tuesday posts.

First we have these two great tools:

Statistics Parser – If you do performance tuning and you’re not using this then you’re missing out. It formats and summarises the results of your STATISTICS IO and TIME commands. Big shout out to Richie Rump for creating this – I use it so much I just need to type “st” in my browser search bar and it comes straight up!
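If you’ve not used the commands themselves, they’re just session settings – run your query with these on, then paste the contents of the Messages tab into the site (the table name here is hypothetical):

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

--Run the query you're tuning, then copy the Messages output
SELECT COUNT(*) FROM dbo.MyBigTable;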

Live Query Stats – Watch an animated execution plan while your query executes. I’ll admit I’ve not used this a lot, but I still think it’s a really cool tool.

And then a few different ones around the same tool:

Query Store – Track query performance and plan changes over time, as well as force SQL to use the plan you want. I’m lucky to have quite a bit of stuff on SQL 2016, and Query Store is great! I was just using it minutes ago to look at how a performance fix put in last week has improved things.
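If you’ve not tried it yet, enabling it on a database is a one-liner – the database name here is hypothetical:

ALTER DATABASE MyDatabase SET QUERY_STORE = ON;
--Optionally tweak the defaults, e.g.
ALTER DATABASE MyDatabase SET QUERY_STORE (OPERATION_MODE = READ_WRITE, MAX_STORAGE_SIZE_MB = 1024);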

Clone Database with Query Store – If you want to see how your queries would behave in your production environment then Clone Database copies out the statistics used to generate the plans. With SQL 2016 you also get a copy of your Query Store data, so you can query away at it happily in your dev environment.
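For reference, the command itself is a one-liner – database names hypothetical; the clone is a read-only, schema-and-statistics-only copy with no table data:

DBCC CLONEDATABASE (MyDatabase, MyDatabase_Clone);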

Capture the most expensive queries across your instance with Query Store – blowing my own trumpet a bit here, but I’ve used this script a lot since I wrote it.

That’s it for now – at least for tools I’ve written about. I look forward to seeing what the rest of the T-SQL Tuesday community comes up with.

T-SQL Tuesday #101. CMS – Effortlessly run queries against multiple SQL Servers at once


This month for T-SQL Tuesday #101 Jens Vestergaard asks us to blog about the essential tools in our SQL Toolbelt.

http://t-sql.dk/?p=1947

The concept of a Central Management Server (CMS) is one I’ve been meaning to blog about for a while – just because I get the impression not a lot of people know about it or use it even though it’s been around since at least SQL 2008.

A central management server is simply one of your SQL instances on which you can register a group or groups of other instances. You can then run queries against those groups as a set. It’s a very easy way to run diagnostic queries – or deploy common objects – across a group.

There are other ways of performing these sorts of tasks – such as using PowerShell – but with a CMS it’s really quick and easy, and all you have to know is SQL.

I’ll run through how you set one up, then give a couple of examples of using it.

Setting up your CMS

Go to the View menu in SSMS and select Registered Servers.

Then in the Registered Servers window, expand Database Engine, right-click Central Management Servers and click Register Central Management Server.

[Screenshot: the Registered Servers window]

In the New Server Registration dialog select the instance you want to use as a Central Management Server.

[Screenshot: the New Server Registration dialog]

You just need to put in the connection details – though you can also give it a friendly name in the Registered server name field.

It’s worth noting that this server isn’t going to be doing any particularly heavy lifting, it’s just going to store registrations for other servers, so there’s no need to use a dedicated instance for this.

You’ll now see this instance listed under Central Management Servers in the Registered Servers window.

You can right-click on it and either register individual servers underneath it or create one or more groups beneath which you can register servers. You can even nest groups if you want.

Here’s what it looks like after I’ve registered a few servers / groups:

[Screenshot: servers and groups registered under the CMS]

And that’s it set up – told you it was easy.

Running Queries against multiple instances

This is the clever bit. I can run a query against an individual SQL instance or against a group and any sub-groups.

I want to run something against all my instances in both groups, so I right-click on My CMS Server and select New Query.

[Screenshot: selecting New Query from the CMS right-click menu]

Let’s have a look at the query window, as there are some differences:

[Screenshot: the multi-server query window]

You can see the status bar at the bottom is pink, which tells me I’m connected to multiple instances via a CMS. On the bottom left you can see I’m connected to 6 out of 6, i.e. I tried to connect to 6 instances and all are successfully connected.

Now let’s run a query. Suppose I want to check MAXDOP across my instances:

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'max degree of parallelism';

Here’s the results:

[Screenshot: the combined result set with a ServerName column]

You can see the queries against each instance have been combined into one result set and a column added for the ServerName (the friendly name I specified).

In the messages tab, we also have the output from each query:

[Screenshot: the Messages tab output]

There’s no limit to what you can do with this: as long as you can express what you want to do in T-SQL, you can execute it easily across as many instances as you want.

I use this for deploying updated versions of Ola Hallengren’s procedures across multiple instances, or the First Responder Kit from Brent Ozar Unlimited.

I use this for running checks across multiple instances like the MAXDOP one above. Earlier, I used it to check for any databases that didn’t have PAGE_VERIFY set to CHECKSUM, and then I used it to enable that across the databases where it wasn’t set.
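For illustration, that check and fix look something like this (the database name in the ALTER is hypothetical – one ALTER is needed per offending database):

--Find databases where PAGE_VERIFY isn't CHECKSUM
SELECT name, page_verify_option_desc
FROM sys.databases
WHERE page_verify_option_desc <> 'CHECKSUM';

--Then fix each one, e.g.
ALTER DATABASE MyDatabase SET PAGE_VERIFY CHECKSUM;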

I’ll finish with another query to change my server configurations. Having run the first one to look at MAXDOP I’ve decided I want to set it to 4 for all the servers where it’s not set:

IF (SELECT value_in_use FROM sys.configurations 
    WHERE name = 'max degree of parallelism') = 0
BEGIN
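   --MAXDOP is an advanced option, so make those visible before changing it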
   EXEC sp_configure 'show advanced options', 1;  
   RECONFIGURE;  
   EXEC sp_configure 'max degree of parallelism', 4;  
   RECONFIGURE;  
END

Here’s the output from the messages tab:

[Screenshot: the Messages tab output]

(The really sharp-eyed amongst you will notice SQL2012_Local is missing – that’s because I tested the query on that one first.)

And if I run the original query again to check the settings:

SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'max degree of parallelism';

[Screenshot: MAXDOP now set to 4 across the instances]

Sorted!

Summary

Central Management Servers are an exceedingly simple way of performing admin tasks or checks against a whole bunch of SQL instances at once.

Once you’ve set it up, anyone else with access can connect to it and run queries – or add other servers and groups.

There’s still a place for PowerShell, particularly in the automation of tasks. But when you’re doing tasks that are a bit more ad hoc, a CMS is perfect.

TDE – Regenerating the Database Encryption Key

As discussed in the last post, with TDE (Transparent Data Encryption), you generally do not need to change the Database Encryption Key (DEK) as it is separately protected by a certificate in the master database and you can rotate the certificate periodically.

Rotating TDE Certificates without re-encrypting data

As also mentioned, if you did need to change the DEK then this will be a resource intensive operation as SQL Server will need to decrypt all your existing data using the existing DEK value, then re-encrypt it with the new one.

Nonetheless, there may be scenarios when you wish to do this. There could be a security breach where you don’t feel confident unless all the keys are updated. Or you might need to support a higher level of encryption than had been previously used.

More commonly, you may publish a database from a development environment and wish to regenerate the keys so that they are unique to your production environment.

In either case, the process is simple and transparent from your point of view. You issue the ALTER DATABASE ENCRYPTION KEY command, specifying REGENERATE and the encryption algorithm you wish to use:

ALTER DATABASE ENCRYPTION KEY
REGENERATE WITH ALGORITHM = AES_256;

In the previous post I talked about rotating the certificate, and it’s worth noting that you can perform both these actions at the same time e.g.

ALTER DATABASE ENCRYPTION KEY
REGENERATE WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyTDECert_with_longevity;

Where you have a lot of data in the database this could take a while to complete. You can check the status of the operation by looking in the sys.dm_database_encryption_keys view:

SELECT k.encryption_state, k.create_date, k.regenerate_date, k.percent_complete
FROM sys.dm_database_encryption_keys k
INNER JOIN sys.databases d
   ON k.database_id = d.database_id
WHERE d.name = 'TestTDE';

[Screenshot: dm_database_encryption_keys results showing encryption_state 4]

Note that the encryption_state is set to 4 which means “key change in progress”.

In a brief test I compared the time taken to change the key with the time taken to encrypt a fresh un-encrypted database. Surprisingly it was about the same, I had expected it would be longer due to the need to decrypt and re-encrypt each data item.

Rotating TDE Certificates without re-encrypting data

I talked previously about why we have each of the layers in the encryption hierarchy used to support TDE (Transparent Data Encryption).

Understanding Keys and Certificates with Transparent Data Encryption (TDE)

At the time I mused about why we need the encryption key in the database itself, rather than just encrypting with the keys associated with the certificate stored in the master database. I mentioned one reason – that these keys are intended for asymmetric encryption which has a higher overhead (the DEK is a symmetric key).

In a light-bulb moment, the more important reason has come to me since, and that is about key rotation.

In any form of encryption, it is possible that at some point an individual could gain access to the keys. For example, administrators of your SQL Server instance can freely make a backup of the certificate, encrypted with a password they choose. Whilst that in itself is not a problem, it is good practice to periodically refresh those keys so that if someone gained access to an old copy it would be of no use to them. Or, if you are aware that there has been a potential breach of security, you can quickly replace your keys.

If you have any form of security certification in your organisation, then it is likely to be stipulated that you must change these on a defined timescale – e.g. annually.

However, this presents a problem if you want to change the key that is used to actually encrypt the data: to change that, you would need to decrypt all the data using the old key and re-encrypt it using the new one. That would be an intensive activity, and could also create vulnerability while the process was underway.

As a result, many forms of encryption use separation between the actual key that is used for the encryption, and another object that is used to protect that key. You can back up the latter separately outside the database to ensure you can recover your data in case of a failure.

With TDE we have the database encryption key (DEK) – the key that is actually used to encrypt/decrypt the data, stored encrypted in the database itself. In the normal run of things, we never access this key directly and never store it anywhere else, so it should be fairly safe, i.e. we should rarely have to worry about replacing it – though the functionality exists to allow us to do so if we require.

Then we have the certificate, which is used to encrypt that key – without the certificate the key is useless and TDE can’t work. We keep a password-protected backup of the certificate outside the database, so it is a bit more vulnerable/public.

Due to that, certificates have an expiry date. This date is a reminder to us that, as a good practice, we should create a new certificate and use that going forward before the existing one expires. TDE doesn’t stop working if the certificate expires, it is up to you to monitor your certificates and replace them when they come to the end of their life.

If you think about it, we might not like it if TDE did enforce this – it would be a big impact if we suddenly could no longer read or write to our database. Microsoft could probably implement functionality to lock us out but still allow us to use the certificate for rotating the key; even so, it would probably upset a lot of people and be less than “transparent”.

If we look at the sys.certificates view we can find our TDE certificate and examine the expiry date:

SELECT name, subject, expiry_date
FROM sys.certificates
WHERE name = 'MyTDECert';

[Screenshot: the certificate's name, subject and expiry date]

I didn’t specify an expiry date for this certificate when I created it, so it was automatically given one that was a year in the future.

Let’s create a new certificate, and this time we’ll specify a longer expiry. Then we rotate the encryption to use that one.

CREATE CERTIFICATE MyTDECert_with_longevity
WITH SUBJECT = 'Certificate used for TDE in the TestTDE database for years to come',
EXPIRY_DATE = '20201231';


And let’s have a look at the values that certificate has in the store:

[Screenshot: the new certificate's details]

What do we mean by rotation?
Rotation is the process of moving from the old certificate to the new one. In this case all that happens is that the encrypted value of the database encryption key (stored in our TDE protected database) is decrypted with the old certificate, re-encrypted with the new certificate, and that new encrypted value is stored in the database, overwriting the old one.

The key value itself hasn’t changed – just the object protecting it – and as such we can still read/write data from the database without any additional work.

Rotating the certificate in use is as simple as:
USE TestTDE;

ALTER DATABASE ENCRYPTION KEY
ENCRYPTION BY SERVER CERTIFICATE
MyTDECert_with_longevity;

And you’re done. The operation is almost instantaneous, shouldn’t require any database down time, or create additional overhead on your server. Though in production I’d still do it when things are quiet just in case anything goes wrong!

Crystal Balls


In the centenary edition of the monthly blogging party that is T-SQL Tuesday, Adam Machanic asks us to look forward 100 months and speculate what the world of SQL Server will look like:

http://dataeducation.com/invitation-t-sql-tuesday-100-looking-forward-100-months/

For Adam’s challenge, we’re looking forward just over 8 years. If we look back that amount then we’re looking at SQL Server 2008, with R2 coming out soon.

So how much has changed in that time? Well, quite a lot, but it would be fair to say that more has stayed the same than has changed. For those of us who sit mainly in the data platform (rather than BI or Data Science) space, most of what we do is the same: same T-SQL, same types of performance problems, same arguments with developers over database design, scalability and good practice.

What we have got though is enhanced tooling and improved performance. I love Query Store, I love the stuff starting to come out with adaptive query plans, and even the automatic tuning features are looking good. I love that improved tooling means we can get away from the drudge tasks and concentrate on the clever stuff. Technologies like Always Encrypted give us great new ways to protect our data. The growth of the PaaS cloud offerings, with Azure SQL Database and Amazon RDS, offers great possibilities.

Of course, most people are still in the past with SQL Server and don’t get to use any of this. According to Spotlight, a quarter of SQL Servers are still 2008 versions ( https://www.spotlightessentials.com/public/collectiveiq ), and I bet the true number is much bigger than that if you include all the servers that no one bothers to monitor.

So my biggest prediction for 2026 is that DBAs will finally have the go-ahead from their companies to upgrade to the versions of SQL that are current now and start using this stuff!

Some other random predictions:

Just as it didn’t when it was called Data Mining, Machine Learning won’t have solved all the world’s problems. It’ll be re-branded something like “Deep Intelligence” and be hailed as the new saviour going forward.

Similarly, Big Data will have been due for a re-brand and re-hype for a while – err… “Bigger Data” anyone?

It turned out no-one ended up getting the really big fines under GDPR, so companies have relaxed about abusing your data and storing backups on open cloud drives.

Implementations based on the successor to No-SQL, “No-Data” have run into problems. It becomes reluctantly accepted that data is useful in some contexts and the movement is relabelled “Not-only-data”.

The DBA role is dead – various people announce.

Many data lakes rotted and festered over time, becoming swamps. The cost of calling in the clean-up teams was too much, so they just got concreted over.

But best of all, Microsoft will have changed the default settings for parallelism, and we’ll know what part of our string or binary data would have been truncated. Hallelujah! All hail our robot overlords! Where’s my Soylent Green?!

Recovering a TDE protected database without the Certificate

If you’ve been careful and done everything right when you’re setting up TDE then you shouldn’t run into this problem.

We all make mistakes though, and we’ve all been asked to deal with environments that haven’t been so carefully managed.

But what if you do? You have access to the backups for one or more TDE protected databases, but you don’t have the certificate and private key backups – or you don’t have the password to decrypt them.

I’m assuming here that you also can’t simply recover your old server from a complete file system backup. Obviously if you can do that then you are going to be fine.

If the two following things are true, then you can still recover your database:

  • You have a backup of the master database from the previous instance.
  • The previous instance used a domain account as its service account.

The reason you are going to be okay is that all the objects in the SQL Server encryption hierarchy that sit above the Database Encryption Key (which exists in your TDE database) are stored in the master database – that is, until we get right to the top. The Service Master Key (SMK), which exists in the master database, is itself encrypted. There are two copies of it:

  • One encrypted by the machine account
  • One encrypted by the SQL Server service account

The first copy is only going to be of any use to us if we can recover the old machine (and its account) direct from backups but we’ve already ruled that out.

If the service account is a domain account though then we should be able to use it.

The method is going to involve:

  • Setting up a new SQL instance using the same service account as the old instance
  • Restoring your backup of master from the old instance onto the new instance
  • Rebooting your new server – that’s the whole server, not just SQL
  • Backing up your certificate and private key – and not losing them this time!

My personal opinion is that it’s not the greatest of ideas to restore the master database from one instance onto a new one and expect everything to simply work okay. So I’m only suggesting you use this so you can recover the certificate. Once you’ve got that, I would go back to the steps in the previous post [LINK] for recovering your TDE protected database(s).

That’s fairly straightforward, but let’s just go into a little more detail.

Setting up a new SQL instance using the same service account as the old instance

What this obviously means is that your server must be on the same domain as the old server (or at least another domain that is trusted). You also must have the credentials for the service account.

You can’t fake this – for example, by setting up a new account on another domain with the same name as the old one. The new account won’t have the same keys associated with it as the ones used to encrypt your SMK, so you will achieve nothing.

Restore your backup of master from the old instance onto the new instance

There are a lot of resources out there that tell you how to do this in detail, such as Thomas LaRock’s post here:

https://thomaslarock.com/2014/01/restore-the-master-database-in-sql-server-2012/

In short, you need to first stop your new SQL Server instance, and then from a command prompt start it in single user mode, e.g.

sqlservr.exe -c -m -s {InstanceName}

Then you need to (again from a command line) issue the command to restore/overwrite the master database. First start SQLCMD:

C:\> sqlcmd -S {InstanceName}

Then at the prompt that opens up within your command window:

1> RESTORE DATABASE master FROM DISK = 'C:\Test\master.bak' WITH REPLACE;

2> GO

Reboot your new server – the whole server, not just SQL

If you restart SQL before doing this, you can still go in and everything looks okay. You can even restore a TDE database from your old instance and you’ll find you can access the data.

Everything is not okay though, and if you tried to backup your certificate and private key you would get an error:

Msg 15151, Level 16, State 1, Line 7
Cannot find the certificate 'MyTDECert', because it does not exist or you do not have permission.

The reason for this error is that the SMK isn’t in the correct state. The copy that is encrypted by the service account is fine, but the copy that is encrypted by the machine account is currently using the wrong machine account. You need to reboot the whole server to fix this, just restarting SQL doesn’t do it. On a full restart the SMK is retrieved from the copy encrypted by the service account, and then encrypted with the current machine account. That version then replaces the one using the wrong machine account.

Once that’s done the encryption hierarchy is fully fixed, and the certificate becomes accessible for a backup command.

Backup your certificate and private key – and don’t lose them this time

I’ve given the command to backup these a few times, but here it is again:

BACKUP CERTIFICATE MyTDECert   
TO FILE = 'C:\Test\MyTDECert'  
WITH PRIVATE KEY   
(  
    FILE = 'C:\Test\MyTDECert_PrivateKeyFile',  
    ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHereToo!£$7'  
);  
GO  

You can now take those backup files and use them to restore the certificate and key to the SQL Server instance of your choice, and then restore the backups of your TDE protected database(s).

Not losing these backups – or the password – is a serious issue. If you’re responsible for setting up any form of encryption you need to think about the process that’s going to manage the objects used to protect your data. People move from one role to another, from one company to another, and often things tick along happily for many years before a failure happens.

You need to be confident that come next year, or in five or ten years, whoever is responsible for the data will be able to recover it if the worst happens.

Other articles about TDE:

What is Transparent Data Encryption?

Setting up Transparent Data Encryption (TDE)

Encrypting an existing database with TDE

Understanding Keys and Certificates with Transparent Data Encryption (TDE)

How Secure is Transparent Data Encryption (TDE) – and How to Prevent Hacking

Migrating or Recovering a TDE protected Database

TDE and backup compression – still not working?

TDE and backup compression – still not working?

Until SQL 2016 if you used TDE (Transparent Data Encryption) you couldn’t use backup compression.

In 2016 Microsoft changed this, but it has been a rocky road. Backups work okay but in some circumstances people have found that they are corrupt when they come to restore them.

We thought that was all fixed. As highlighted by Brent Ozar, MS have been recommending that you be on at least SQL Server 2016 RTM CU6 or 2016 SP1 CU4, where these issues are resolved.

https://www.brentozar.com/archive/2017/09/breaking-news-using-tde-2016-backups-time-patch/

https://social.msdn.microsoft.com/Forums/sqlserver/en-US/dad3dfe3-c914-44f8-96b9-ab56cc825fe9/tde-corrupt-backups-when-using-backup-database-with-compression-maxtransfersize?forum=sqldatabaseengine

It seems like people are still having problems though. Ken Johnson is on 2016 SP1 CU6 and is still seeing the issue:

https://social.msdn.microsoft.com/Forums/sqlserver/en-US/dad3dfe3-c914-44f8-96b9-ab56cc825fe9/tde-corrupt-backups-when-using-backup-database-with-compression-maxtransfersize?forum=sqldatabaseengine

Our production DBA has just run a test and confirmed the problem: backing up 20 databases from one server and restoring them to another, about 10 failed – although in other environments this is working fine.

You can see the error if you try to verify the backup:

Date and time: 2018-02-16 09:52:10
Command: BACKUP DATABASE [XXXXX] TO DISK = 'XXXXX' WITH CHECKSUM, COMPRESSION, MAXTRANSFERSIZE = 131072
Processed 95488 pages for database 'XXXXX', file 'XXXXX' on file 1.
Processed 3 pages for database 'XXXXX', file 'XXXXX' on file 1.
BACKUP DATABASE successfully processed 95491 pages in 2.375 seconds (314.113 MB/sec).
Outcome: Succeeded
Duration: 00:00:02
Date and time: 2018-02-16 09:52:12

Date and time: 2018-02-16 09:52:12
Command: RESTORE VERIFYONLY FROM DISK = 'XXXXX' WITH CHECKSUM
Msg 3189, Level 16, State 1, Line 1
Damage to the backup set was detected.
Msg 3013, Level 16, State 1, Line 1
VERIFY DATABASE is terminating abnormally.
Outcome: Failed
Duration: 00:00:03
Date and time: 2018-02-16 09:52:15

And it certainly doesn’t then work if you try to restore it:

5 percent processed.
Msg 3183, Level 16, State 1, Line 2
RESTORE detected an error on page (1:7763) in database "XXXXXXX" as read from the backup set.
Msg 3013, Level 16, State 1, Line 2
RESTORE DATABASE is terminating abnormally.

We’re raising a call with Microsoft and will update when we know more.

In the meantime, can we trust backups of TDE databases with backup compression enabled? Given that we want to be absolutely sure our backups can save us in the event of disaster, the answer may be no. At a minimum, we need to test each time we update our applications or patch SQL Server.

The side lesson of course is that we should always be regularly checking that our backups are restorable.
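If you want a quick check without doing a full test restore, a VERIFYONLY with checksums (as in the output above) will at least catch this particular issue – the path here is hypothetical:

RESTORE VERIFYONLY
FROM DISK = 'C:\Backups\MyDatabase.bak'
WITH CHECKSUM;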

More articles on TDE

What is Transparent Data Encryption?

Setting up Transparent Data Encryption (TDE)

Encrypting an existing database with TDE

Understanding Keys and Certificates with Transparent Data Encryption (TDE)

How Secure is Transparent Data Encryption (TDE) – and How to Prevent Hacking

Migrating or Recovering a TDE protected Database