Checking for Membership of a Specific Active Directory Group

As part of my job I manage a bunch of SQL instances for Development and Test.

Access is managed through Active Directory groups, so I rarely have to do anything as regards managing permissions. Nonetheless I often get requests from people to give them access. This is usually for a new starter or someone who has moved from one team to another.

Of course, the answer is usually that they just need adding to the right AD group. Rather than assume though, I always get them to check before I pass the request on to the AD team. You never know, there could be something else wrong.

T-SQL has a lovely little function for this, IS_MEMBER. For instance, if I want to know if I’m a member of MyDomain\SQLAdmins I just run:

SELECT IS_MEMBER('MyDomain\SQLAdmins');

If it returns 1 then I am a member. Zero then I am not. Null means it can’t find the group, probably because I’ve spelt it wrong.

So, I can get the requester to check for themselves if they are a member of the group, and then we can raise the request to get them added.
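Here’s a minimal sketch the requester can run themselves that translates the result into a friendly answer (the group name is just an example – swap in the one you care about):

SELECT CASE IS_MEMBER('MyDomain\SQLAdmins')
          WHEN 1 THEN 'You are a member'
          WHEN 0 THEN 'You are NOT a member'
          ELSE 'Group not found - check the spelling'
       END AS MembershipCheck;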

IS_MEMBER is also useful if you want to check if you are a member of a specific database role – either one of the built-in ones or a user-defined one e.g.

SELECT IS_MEMBER('db_owner');

I’m not sure how well known this is and I just had one of those requests, so I thought I’d create a quick post on the topic.

RDS encryption vs TDE

If you’re starting to use cloud services, the number of options available can be confusing. Particularly when they may seem to do the same thing.

If you’re on AWS, and you want to create a SQL Server instance on RDS (Relational Database Service), then you potentially have a couple of different options for enabling encryption at rest.

If you’re deploying an Enterprise Edition SQL Server instance then you could use TDE (Transparent Data Encryption), the technology most of us in the SQL Server world already have some awareness of.

RDS also has its own at-rest encryption though, so what’s the difference?

The answer is that (at least in terms of what they protect) they are pretty much equivalent. RDS encryption can be used with whichever database platform you choose to use – Aurora, MySQL, MariaDB, PostgreSQL, Oracle or SQL Server. As many people want this functionality it made sense for Amazon to provide it.

Only Oracle and SQL Server have their own built-in equivalent – TDE – so in these cases you have a choice of which one you want to use. You may prefer to use the one you are familiar with from your on-premise deployments, or you may prefer to go with the RDS one.

A key difference is going to be database backups. RDS handles backups for you, and of course these backups will be encrypted whichever option you choose. However, you would be wise to also take your own set of backups which you store outside the RDS instance. With RDS encryption these backups would not be encrypted, however with TDE they would be.

Even this though is not a killer point, with SQL Server (from 2014 onward) you have backup encryption, so even if you were using RDS, you could use this to make sure that externally stored backups were also encrypted.
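As a rough sketch of what that looks like (the database name, certificate and path here are just placeholders, and you need to have set up a certificate for backup encryption first):

BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase.bak'
WITH COMPRESSION,
   ENCRYPTION (ALGORITHM = AES_256, SERVER CERTIFICATE = MyBackupCert);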

A big reason you might want to use the RDS encryption is price. TDE is only available on SQL Server Enterprise Edition, whereas you can use RDS encryption on Standard Edition also.

The difference in cost for SQL Server Standard vs Enterprise licences is significant. Last time I checked the standard price was around $2,000 per CPU core for Standard, but around $7,000 for Enterprise.

The price difference in RDS is also large. If I look at a “db.m4.xlarge” instance which is 4 virtual CPUs and 16GB RAM, then the price quoted is roughly $750 a month for SQL Server Standard Edition, and $1,650 for Enterprise.

Of course, there are differences between each technology in terms of how you set it up and manage it, how and where the keys are stored etc. But if you’re confronted with the choice, then mostly you can just pick the one you prefer; it doesn’t really matter.

What does matter is that if you prefer not to pay for Enterprise Edition, then you still have the option of at-rest encryption. Which is great news.

 

TDE – Regenerating the Database Encryption Key

As discussed in the last post, with TDE (Transparent Data Encryption), you generally do not need to change the Database Encryption Key (DEK) as it is separately protected by a certificate in the master database and you can rotate the certificate periodically.

Rotating TDE Certificates without re-encrypting data

As also mentioned, if you did need to change the DEK then this will be a resource intensive operation as SQL Server will need to decrypt all your existing data using the existing DEK value, then re-encrypt it with the new one.

Nonetheless, there may be scenarios when you wish to do this. There could be a security breach where you don’t feel confident unless all the keys are updated. Or you might need to support a higher level of encryption than had been previously used.

More commonly, you may publish a database from a development environment and wish to regenerate the keys so that they are unique to your production environment.

In either case, the process is simple and transparent from your point of view: you issue the ALTER DATABASE ENCRYPTION KEY command, specifying REGENERATE and the encryption algorithm you wish to use:

ALTER DATABASE ENCRYPTION KEY
REGENERATE WITH ALGORITHM = AES_256;

In the previous post I talked about rotating the certificate, and it’s worth noting that you can perform both these actions at the same time e.g.

ALTER DATABASE ENCRYPTION KEY
REGENERATE WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE MyTDECert_with_longevity;

Where you have a lot of data in the database this could take a while to complete. You can check the status of the operation by looking in the sys.dm_database_encryption_keys view:

SELECT k.encryption_state, k.create_date, k.regenerate_date, k.percent_complete
FROM sys.dm_database_encryption_keys k
INNER JOIN sys.databases d
ON k.database_id = d.database_id
WHERE d.name = 'TestTDE';

TDE_Regenerate1

Note that the encryption_state is set to 4 which means “key change in progress”.
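For reference, here’s a quick sketch that decodes the other values you might see in encryption_state (based on the documented values for this view):

SELECT d.name,
   k.encryption_state,
   CASE k.encryption_state
      WHEN 0 THEN 'No database encryption key present, no encryption'
      WHEN 1 THEN 'Unencrypted'
      WHEN 2 THEN 'Encryption in progress'
      WHEN 3 THEN 'Encrypted'
      WHEN 4 THEN 'Key change in progress'
      WHEN 5 THEN 'Decryption in progress'
      WHEN 6 THEN 'Protection change in progress'
   END AS encryption_state_desc,
   k.percent_complete
FROM sys.dm_database_encryption_keys k
INNER JOIN sys.databases d
   ON k.database_id = d.database_id;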

In a brief test I compared the time taken to change the key with the time taken to encrypt a fresh un-encrypted database. Surprisingly it was about the same; I had expected it would be longer due to the need to decrypt and re-encrypt each data item.

Rotating TDE Certificates without re-encrypting data

I talked previously about why we have each of the layers in the encryption hierarchy used to support TDE (Transparent Data Encryption).

Understanding Keys and Certificates with Transparent Data Encryption (TDE)

At the time I mused about why we need the encryption key in the database itself, rather than just encrypting with the keys associated with the certificate stored in the master database. I mentioned one reason – that these keys are intended for asymmetric encryption which has a higher overhead (the DEK is a symmetric key).

In a light-bulb moment, the more important reason has come to me since, and that is about key rotation.

In any form of encryption, it is possible that at some point an individual could gain access to the keys. For example, administrators of your SQL Server instance can freely make a backup of the certificate with a password they choose which can be used to encrypt it. Whilst that in itself is not a problem, it is good practice to periodically refresh those keys so that if someone gained access to an old copy it would be of no use to them. Or if you are aware that there has been a potential breach of security, you can quickly replace your keys.

If you have any form of security certification in your organisation, then it is likely it will be stipulated that you must change these in a defined timescale – e.g. annually.

However, this poses a problem if you want to change the key that is used to actually encrypt the data: to change that you would need to decrypt all the data using the old key and re-encrypt it using the new one. That would be an intensive activity, and it could also leave you vulnerable while the process was underway.

As a result, many forms of encryption use separation between the actual key that is used for the encryption, and another object that is used to protect that key. You can back up the latter separately outside the database to ensure you can recover your data in case of a failure.

With TDE we have the database encryption key (DEK) – the key that is actually used to encrypt/decrypt the data, stored encrypted in the database itself. In the normal run of things, we never access this key directly and never store it anywhere else, so it should be fairly safe, i.e. we should rarely have to worry about replacing it – though the functionality exists to allow us to do so if we require.

Then we have the certificate which is used to encrypt that key – without the certificate the key is useless and TDE can’t work. We keep a password protected backup of the certificate outside the database, so it is a bit more vulnerable/public.

Due to that, certificates have an expiry date. This date is a reminder to us that, as a good practice, we should create a new certificate and use that going forward before the existing one expires. TDE doesn’t stop working if the certificate expires, it is up to you to monitor your certificates and replace them when they come to the end of their life.

If you think about it, we might not like it if TDE did enforce this – it would be a big impact if we suddenly could no longer read or write to our database. Of course, MS could probably implement functionality to lock us out but still allow us to use the certificate for rotating the key, though that would probably upset a lot of people and be less than “transparent”.

If we look at the sys.certificates view we can find our TDE certificate and examine the expiry date:
SELECT name, subject, expiry_date
FROM sys.certificates
WHERE name = 'MyTDECert';

TDE_Rotate1

I didn’t specify an expiry date for this certificate when I created it, so it was automatically given one that was a year in the future.
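As a side note, a simple check you could run (or schedule) to spot certificates coming up for renewal might look something like this – the three month window is just an example:

SELECT name, subject, expiry_date
FROM sys.certificates
WHERE expiry_date < DATEADD(MONTH, 3, GETDATE())
ORDER BY expiry_date;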

Let’s create a new certificate, and this time we’ll specify a longer expiry. Then we rotate the encryption to use that one.
CREATE CERTIFICATE MyTDECert_with_longevity
WITH SUBJECT = 'Certificate used for TDE in the TestTDE database for years to come',
EXPIRY_DATE = '20201231';


And let’s have a look at the values it has in the store:

TDE_Rotate2

What do we mean by rotation?
Rotation is the process of moving from the old certificate to the new one. In this case all that happens is that the encrypted value of the database encryption key (stored in our TDE protected database) is decrypted with the old certificate, re-encrypted with the new certificate, and that new encrypted value is stored in the database, overwriting the old one.

The Key value itself hasn’t changed, just the object protecting it, and as such we can still read/write data from the database without any additional work.

Rotating the certificate used is as simple as:
USE TestTDE;

ALTER DATABASE ENCRYPTION KEY
ENCRYPTION BY SERVER CERTIFICATE
MyTDECert_with_longevity;

And you’re done. The operation is almost instantaneous, shouldn’t require any database down time, or create additional overhead on your server. Though in production I’d still do it when things are quiet just in case anything goes wrong!

Recovering a TDE protected database without the Certificate

If you’ve been careful and done everything right when you’re setting up TDE then you shouldn’t run into this problem.

We all make mistakes though, and we’ve all been asked to deal with environments that haven’t been so carefully managed.

But what if you do? You have access to the backups for one or more TDE protected databases, but you don’t have the certificate and private key backups – or you don’t have the password to decrypt them.

I’m assuming here that you also can’t simply recover your old server from a complete file system backup. Obviously if you can do that then you are going to be fine.

If the two following things are true, then you can still recover your database:

  • You have a backup of the master database from the previous instance.
  • The previous instance used a domain account as its service account.

The reason you are going to be okay is that all the objects in the SQL Server Encryption Hierarchy that sit above the Database Encryption Key (that exists in your TDE database) are stored in the  master database. That is until we get right to the top, the Service Master Key (SMK) which exists in the master database is itself encrypted. There are two copies of it:

  • One encrypted by the machine account
  • One encrypted by the SQL Server service account

The first copy is only going to be of any use to us if we can recover the old machine (and its account) direct from backups but we’ve already ruled that out.

If the service account is a domain account though then we should be able to use it.

The method is going to involve:

  • Setting up a new SQL instance using the same service account as the old instance
  • Restoring your backup of master from the old instance onto the new instance
  • Rebooting your new server – that’s the whole server, not just SQL
  • Backing up your certificate and private key – and don’t lose them this time!

My personal opinion is that it’s not the greatest of ideas to restore the master database from one instance onto a new one and expect everything to simply work okay. So I’m only suggesting you use this so you can recover the certificate. Once you’ve got that, I would go back to the steps in the previous post [LINK] for recovering your TDE protected database(s).

That’s fairly straightforward, but let’s just go into a little more detail.

Setting up a new SQL instance using the same service account as the old instance

What this obviously means is that your server must be on the same domain as the old server (or at least another domain that is trusted). You also must have the credentials for the service account.

You can’t fake this, for example by setting up a new account on another domain with the same name as the old one. The new account won’t have the same keys associated with it as the ones used to encrypt your SMK, so you will achieve nothing.

Restore your backup of master from the old instance onto the new instance

There are a lot of resources out there that tell you how to do this in detail, such as Thomas LaRock’s post here:

https://thomaslarock.com/2014/01/restore-the-master-database-in-sql-server-2012/

In short you need to first stop your new SQL Server instance and then from a command prompt start it in single user mode e.g.

sqlservr.exe -c -m -s {InstanceName}

Then you need to (again from a command line) issue the command to restore/overwrite the master database. First start SQLCMD:

C:\> sqlcmd -S {InstanceName}

Then at the prompt that opens up within your command window:

1> RESTORE DATABASE master FROM DISK = 'C:\Test\master.bak' WITH REPLACE;

2> GO

Reboot your new server – the whole server, not just SQL

If you restart SQL before doing this, you can still go in and everything looks okay. You can even restore a TDE database from your old instance and you’ll find you can access the data.

Everything is not okay though, and if you tried to backup your certificate and private key you would get an error:

Msg 15151, Level 16, State 1, Line 7
Cannot find the certificate 'MyTDECert', because it does not exist or you do not have permission.

The reason for this error is that the SMK isn’t in the correct state. The copy that is encrypted by the service account is fine, but the copy that is encrypted by the machine account is currently using the wrong machine account. You need to reboot the whole server to fix this, just restarting SQL doesn’t do it. On a full restart the SMK is retrieved from the copy encrypted by the service account, and then encrypted with the current machine account. That version then replaces the one using the wrong machine account.

Once that’s done the encryption hierarchy is fully fixed, and the certificate becomes accessible for a backup command.

Backup your certificate and private key – and don’t lose them this time

I’ve given the command to backup these a few times, but here it is again:

BACKUP CERTIFICATE MyTDECert   
TO FILE = 'C:\Test\MyTDECert'  
WITH PRIVATE KEY   
(  
    FILE = 'C:\Test\MyTDECert_PrivateKeyFile',  
    ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHereToo!£$7'  
);  
GO  

You can now take those backup files and use them to restore the certificate and key to the SQL Server instance of your choice, and then restore the backups of your TDE protected database(s).

Not losing these backups – or the password – is a serious issue. If you’re responsible for setting up any form of encryption you need to think about the process that’s going to manage the objects used to protect your data. People move from one role to another, from one company to another, and often things tick along happily for many years before a failure happens.

You need to be confident that come next year, or in five or ten years, whoever is responsible for the data will be able to recover it if the worst happens.

Other articles about TDE:

What is Transparent Data Encryption?

Setting up Transparent Data Encryption (TDE)

Encrypting an existing database with TDE

Understanding Keys and Certificates with Transparent Data Encryption (TDE)

How Secure is Transparent Data Encryption (TDE) – and How to Prevent Hacking

Migrating or Recovering a TDE protected Database

TDE and backup compression – still not working?

TDE and backup compression – still not working?

Until SQL 2016 if you used TDE (Transparent Data Encryption) you couldn’t use backup compression.

In 2016 Microsoft changed this, but it has been a rocky road. Backups work okay but in some circumstances people have found that they are corrupt when they come to restore them.

We thought that was all fixed. As highlighted by Brent Ozar, MS have been recommending that you are on at least SQL Server 2016 RTM CU6 or 2016 SP1 CU4, where these issues are resolved.

https://www.brentozar.com/archive/2017/09/breaking-news-using-tde-2016-backups-time-patch/

https://social.msdn.microsoft.com/Forums/sqlserver/en-US/dad3dfe3-c914-44f8-96b9-ab56cc825fe9/tde-corrupt-backups-when-using-backup-database-with-compression-maxtransfersize?forum=sqldatabaseengine

It seems like people are still having problems though. Ken Johnson is on 2016 SP1 CU6 and is having problems:

https://social.msdn.microsoft.com/Forums/sqlserver/en-US/dad3dfe3-c914-44f8-96b9-ab56cc825fe9/tde-corrupt-backups-when-using-backup-database-with-compression-maxtransfersize?forum=sqldatabaseengine

Our production DBA has just run a test backing up 20 databases from one server and restoring them to another, and about 10 failed – although in other environments this is working fine.

You can see the error if you try to verify the backup:

Date and time: 2018-02-16 09:52:10
Command: BACKUP DATABASE [XXXXX] TO DISK = 'XXXXX' WITH CHECKSUM, COMPRESSION, MAXTRANSFERSIZE = 131072
Processed 95488 pages for database 'XXXXX', file 'XXXXX' on file 1.
Processed 3 pages for database 'XXXXX', file 'XXXXX' on file 1.
BACKUP DATABASE successfully processed 95491 pages in 2.375 seconds (314.113 MB/sec).
Outcome: Succeeded
Duration: 00:00:02
Date and time: 2018-02-16 09:52:12

Date and time: 2018-02-16 09:52:12
Command: RESTORE VERIFYONLY FROM DISK = 'XXXXX' WITH CHECKSUM
Msg 3189, Level 16, State 1, Line 1
Damage to the backup set was detected.
Msg 3013, Level 16, State 1, Line 1
VERIFY DATABASE is terminating abnormally.
Outcome: Failed
Duration: 00:00:03
Date and time: 2018-02-16 09:52:15

And it certainly doesn’t then work if you try to restore it:

5 percent processed.
Msg 3183, Level 16, State 1, Line 2
RESTORE detected an error on page (1:7763) in database "XXXXXXX" as read from the backup set.
Msg 3013, Level 16, State 1, Line 2
RESTORE DATABASE is terminating abnormally.

We’re raising a call with Microsoft, will update when we know more.

In the meantime, can we trust backups of TDE with backup compression enabled? Given that we want to be absolutely sure our backups can save us in event of disaster the answer may be no. At a minimum we need to test each time we update our applications or patch SQL Server.

The side lesson of course is that we should always be regularly checking that our backups are restorable.
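RESTORE VERIFYONLY with checksums (as in the job output above) catches some problems, but the only real proof is a test restore. A minimal sketch – the paths and logical file names here are placeholders you’d need to adjust for your own backups:

RESTORE DATABASE TestRestore_Check
FROM DISK = 'C:\Backups\MyDatabase.bak'
WITH MOVE 'MyDatabase' TO 'C:\Test\TestRestore_Check.mdf',
   MOVE 'MyDatabase_log' TO 'C:\Test\TestRestore_Check_log.ldf',
   CHECKSUM, STATS = 10;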

More articles on TDE

What is Transparent Data Encryption?

Setting up Transparent Data Encryption (TDE)

Encrypting an existing database with TDE

Understanding Keys and Certificates with Transparent Data Encryption (TDE)

How Secure is Transparent Data Encryption (TDE) – and How to Prevent Hacking

Migrating or Recovering a TDE protected Database

 

Migrating or Recovering a TDE protected Database

When encrypting a database with Transparent Data Encryption (TDE), a vital consideration is to make sure we are prepared for the scenario when something goes wrong. For instance, if the server hosting our SQL instance goes belly-up, can we recover the data that we have encrypted with TDE?

In the ordinary recovery scenario, we would make sure that we have appropriate backups of our database, and that they (or copies of them) are stored off the server itself so that we can access them in case of a failure.

If you have followed the instructions in Setting up Transparent Data Encryption (TDE) then you will also have a backup of the certificate and private key used to protect the database encryption key used by TDE, e.g.:

BACKUP CERTIFICATE MyTDECert   
TO FILE = 'C:\Test\MyTDECert'  
WITH PRIVATE KEY   
(  
    FILE = 'C:\Test\MyTDECert_PrivateKeyFile',  
    ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHereToo!£$7'  
);  
GO  

You need to make sure that these are also stored securely off the server and that you have kept the password you used somewhere you can access it – but not so accessible that unauthorised users can get it otherwise you are defeating the object of TDE somewhat.

In summary you need:

  • The database backup file
  • The backup of the certificate
  • The backup of the private key
  • The password used to encrypt the private key

Armed with those objects, you are equipped to restore your database to another SQL Instance.

Working on the new SQL instance, the steps are straightforward.

Create a Database Master Key if one doesn’t exist

USE MASTER;
CREATE MASTER KEY
ENCRYPTION BY PASSWORD = 'UseAStrongPasswordHere!£$7';

Note that this will be a new and unique database master key, it will not be the same as the one you had on your old instance – and you don’t need to use the same password to protect it.

Restore the Certificate and Private Key

On the new SQL instance you need to restore the certificate and private key into the master database:

USE MASTER;

CREATE CERTIFICATE MyTDECert
FROM FILE = 'C:\Test\MyTDECert'
WITH PRIVATE KEY 
( 
   FILE = 'C:\Test\MyTDECert_PrivateKeyFile',
   DECRYPTION BY PASSWORD = 'UseAStrongPasswordHereToo!£$7' 
);

This will decrypt your key using the password supplied, and then re-encrypt it using the database master key you created. Then the certificate and its key will be stored in the master database on your new SQL instance.
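As a quick sanity check you can confirm the certificate now exists in master on the new instance – something like the following, just looking it up by the name used above:

SELECT name, thumbprint, expiry_date
FROM master.sys.certificates
WHERE name = 'MyTDECert';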

If you’ve done something wrong, it’s entirely possible you may get an error at this stage, commonly:
Msg 15208, Level 16, State 6, Line 56
The certificate, asymmetric key, or private key file is not valid or does not exist; or you do not have permissions for it.

If you’re confident that all details specified are correct, and that the certificate and private key were backed up properly, then the most likely issue is that the current SQL instance doesn’t have access to the file path you’ve placed the files in.

Restore the Database

Once you’ve completed the previous steps you are ready to restore the database(s) from the backup(s). You do that as you would restore any other database. Potentially as simple as:

RESTORE DATABASE TestTDE FROM DISK = 'C:\Test\TestTDE.bak';

Then you’ll find you can access your database and view data without any issues. At this point you can celebrate – you are done.

You only get a problem if you haven’t set up the certificate and key correctly, or you have the wrong one:
Msg 33111, Level 16, State 3, Line 2
Cannot find server certificate with thumbprint '0x682C8797633B9AD8875967502861CCAE33ECAD66'.
Msg 3013, Level 16, State 1, Line 2
RESTORE DATABASE is terminating abnormally.

So what do I do if I can’t restore the certificate?

Of course you’re never going to run into this problem because you’ve followed all the instructions carefully, and you’ve made sure you have your certificate and key backups – and the password used to protect them.

Let’s say for the sake of argument though that you’ve taken ownership of an environment that hasn’t been so carefully managed. You’ve had a server failure and there are no certificate and key backups – or they exist but no-one knows the password.

Is your data lost forever? Or rather is it now so safe that no-one can access it – even those who are supposed to? Don’t panic just yet, we’ll look at a technique you may be able to use to recover your data in my next blog post.

Other articles on TDE:

What is Transparent Data Encryption?

Setting up Transparent Data Encryption (TDE)

Encrypting an existing database with TDE

Understanding Keys and Certificates with Transparent Data Encryption (TDE)

How Secure is Transparent Data Encryption (TDE) – and How to Prevent Hacking

 

How Secure is Transparent Data Encryption (TDE) – and How to Prevent Hacking

TDE is commonly described as “at-rest” encryption, i.e. it protects your data wherever it is stored on disk. This includes the database files, any backups taken (including Log and Differential), and any data that may get temporarily persisted to TempDB (when you use TDE to encrypt any database on an instance TempDB will get automatically encrypted also).

TDE does not however give any additional protection against those accessing data by querying the database. If you have access to the database then TDE transparently allows that to continue without any change in the functionality.
Understanding that, if we are implementing TDE we can only expect it to fully protect us from people who have access to the file system but do not have access to the data stored through any other avenue – for instance by querying SQL Server.

In most scenarios we probably don’t plan on there being such a person, i.e. we don’t grant the world read access to our database servers and say “go on, feel free, have a play”. In most production systems access is tightly tied down, and if you work in an environment where this is not the case then you probably have things you should address before you start looking at implementing encryption.

In reality though, mistakes or oversights happen, or sometimes malicious parties are able to gain access they should not have. What TDE offers us is, like many other forms of encryption, an additional layer of protection. The first line of defence gets breached, but there is still the next one to get past.

There is also the scenario of database backups, where these might be stored off the database server and even offsite. TDE gives us some assurance that in these scenarios the data is protected irrespective of the access restrictions in force in those other places. We’ve all read of breaches which were enabled by a database backup being stored in a location that was easier to access than it should have been.

Of course this protection of the backups could simply be achieved by using backup encryption – which is a separate feature of SQL Server and does not have the performance overhead associated with TDE.

So, if we are specifically implementing TDE rather than just backup encryption, it might be that we are trying to protect ourselves from someone gaining direct access to the database files. That could be seen as quite an edge case – but that doesn’t mean it isn’t worth protecting against.

One thing to remember with TDE is that it is not (on its own) going to protect you against someone who has Admin rights either over the box where SQL Server is installed, or over SQL itself. A local Admin can add themselves as an administrator on SQL – though they will need to restart the instance to do so. And an administrator on SQL can do pretty much whatever they like, including exporting a copy of the certificate and key used by TDE so that they can use that to read stolen data.

Still, there is some level of protection gained from TDE and if you combine it with tight access controls, and auditing, then it can help you reduce the scenarios where your data is at risk.

Hacking TDE

There is a great post and accompanying video out there by Simon McAuliffe that shows a method that can be used to break TDE:
https://simonmcauliffe.com/technology/tde/

Simon’s not a big fan of TDE and he makes a good argument. The reality is that a lot of the time people are using it to tick a box (“yes, all our data is encrypted at rest”) and no-one questions the particular scenarios where it is – and is not – offering protection.

While I agree with Simon’s points, I don’t feel as strongly as he does. You just need to be making an informed decision to implement TDE based on a full understanding of what it can and cannot protect you from. It’s not a magic bullet, but it’s not totally useless either.

Nonetheless the method Simon outlines for breaking TDE is concerning as it only requires basic level read access over the server to be able to grab all the objects you need to crack a TDE protected database. Sure, you need to have a certain knowledge base to be able to do that, but protection that is based on an attacker not having the right skillset is not of the greatest value.

I won’t go into Simon’s method in full detail – you can check out his post for that. But I’ll give you enough of a summary so you can hopefully understand the principles involved.

What he details is an inevitable vulnerability that he specifies would not be unique to TDE – but common to any similar technology.

It’s back to the “T” of TDE again, “Transparent”. The concept of TDE is that it is fully self-contained and managed in the background for you by SQL Server. It doesn’t require you to supply additional passwords or keys when querying data, and it doesn’t need to access any objects stored outside of the SQL instance.
Let’s look again at the diagram with the encryption key hierarchy:

TDE_Key_Hierarchy

Starting at the bottom, each key is protected by the one above it. But what happens when we get to the top? There is nothing left to protect the uppermost DPAPI keys, so they must be stored unencrypted somewhere on the system.

You simply cannot get around that: SQL needs to be able to – all on its own – encrypt and decrypt data. Even if we say we’ll encrypt the DPAPI keys, what will we encrypt them with and where will we store that object? You can’t get away from the fact that at some point we need to store an unencrypted key, or password, or whatever, somewhere.

As a consequence, it’s remarkably easy to pick these root DPAPI keys up, though you need a few basic hacker skills/tools to be able to do anything with them.

Once you have the DPAPI keys, you can use those tools – in combination with a copy of the database files (or a backup) of the master database – to obtain the SMK and thus break encryption all the way back down the hierarchy.

So what has just happened? A skilled and motivated hacker with mere read permissions to your server was able to steal your files and decrypt your data.

Just the scenario you implemented TDE to prevent.

The keys are held in the following directory on most Windows systems:
C:\Windows\System32\Microsoft\Protect\S-1-5-18

And by default, as long as you have read access to the file system you can read them.

Preventing this Hacking of TDE

Fortunately there is a basic action you can take to mitigate this. Strangely, this step isn’t mentioned (as far as I know) in any of the other documentation about setting up TDE.

The solution is that when you are setting up TDE you should take an additional step. Very simply, you need to secure this directory so that only Local Administrators to the server, the LOCAL SYSTEM account, and the SQL Server service account can read anything within it.

Once that’s done TDE should be offering you what you expect, i.e. casual read access to the server will not allow people to read your data. Only administrators would be able to gain access to the data, and as we’ve discussed already – Admins can access your TDE protected data anyway.

The only downside of doing this is that you are restricting access to the DPAPI (Data Protection API) from any other accounts. If you are running a pure SQL server then this should not be a problem – i.e. the server is for an installation of SQL and nothing else. If you also have application services running on the same box though, and these services wish to use encryption, then you may need to extend the permissions assigned to that directory. Though a better choice would be to move these services off the box.

Understanding Keys and Certificates with Transparent Data Encryption (TDE)

I’ve been taking a bit of a deep dive into understanding Transparent Data Encryption (TDE). As part of that I’ve been reading a lot of blog posts, stack overflow answers and technical documentation to try and deepen my knowledge.

Within that I’ve found a lot of contradictory information that I’ve needed to work through. In particular this has been around what objects you need to recover an encrypted database to another SQL Server – be that when you’re doing a straight restore, working with log shipping, or using Availability Groups.

Most of the solutions offered work, but many describe additional steps that are not necessary. I feel that part of the problem is that people are misunderstanding the basics about encryption keys in SQL Server, so I thought it would be worthwhile going over that in a bit of detail before digging deeper into TDE in general.

I think it’s important to understand this stuff clearly, because then you have a clear view of when you’re protected and when you are vulnerable. If we’re engaging in encryption then we clearly have a desire for security – to be sure of that we have to be clear in our understanding.

Keys in SQL usually have three components (and this is the same for the Column Encryption Keys in Always Encrypted that I spoke about previously):
Understanding Keys and Certificates with Always Encrypted

The Key itself – Usually can be thought of a number expressed in binary format. Long and random enough to make it difficult to guess even by brute force attempts.

Another object that’s used to protect the key – This object might be another key, it might be a certificate, or it might just be a password. This object is used to encrypt the key.

The encrypted value of the key – Formed from the original value of the key, encrypted by the protecting object.

In SQL we rarely (maybe even never) see the actual value of the key. We have the encrypted version and we usually know what object was used to encrypt it. That second object may even be another key that is itself encrypted by a third object.

When it comes down to it though, the actual thing that is used to encrypt or decrypt data is the Key itself, not the encrypted value, and not the hierarchy of objects that may have been used to protect it.

So, all I really need to read your data, is your key.

Let’s look at that in the context of TDE. Here’s a standard diagram from books online that shows the hierarchy of encryption for TDE. There are other ways of working with TDE but this is the standard:

TDE_Key_Hierarchy

So, right down at the bottom of the diagram is the Database Encryption Key (DEK). That is what is used to encrypt/decrypt the data in the database. It sits in the database itself. So when you backup the database the DEK is held in the backup.

BUT – the DEK is itself encrypted by the certificate that sits in the master database, so even if someone has your backup they can’t access the key – and nor can any system they try and restore your backup to. So they also can’t access your data.

To be able to decrypt the DEK, the certificate is required. In fact, what is really required is the private key associated with the certificate as that’s what’s used for encrypting stuff. The private key however is itself encrypted by the Database Master Key (DMK) that sits in the master database.

So you’ll be forgiven at this stage for thinking that in order to read our backup of the database, we need the backup (containing the DEK), the certificate (including the private key) and the DMK.

When you realise that the DMK is itself encrypted by the Service Master Key (SMK), and that the SMK is also encrypted, then you might think you need to include those too – and whatever encrypted the SMK.

Where will it end!?

In reality we just go back to our certificate and its associated private key.
Let’s say I have a certificate called MyTDECert. I can (and must) back this up outside of the database. If this is lost, then so is my data. The command for backing it up looks something like this:

BACKUP CERTIFICATE MyTDECert   
TO FILE = 'D:\Temp\MattTest\MyTDECert'  
WITH PRIVATE KEY   
(  
    FILE = 'D:\Temp\MattTest\SQLPrivateKeyFile',  
    ENCRYPTION BY PASSWORD = 'C0rrecth0rserbatterystab1e'  
);  
GO

When I back this up I specify a password to encrypt the Private Key. Remember that the Private Key was already encrypted by the DMK? Well, this backup certificate command doesn’t just encrypt it a second time – what would be the point of that? No, the reason I need to supply a password is that the command retrieves the unencrypted version of the private key, re-encrypts it with the password INSTEAD and then that is what gets saved to disk.

Remember that at the point I run the command SQL has access to all the objects, all the way up the chain, that are used for the encryption. So it has no problem getting the actual value for the Private Key.

Now, when I restore that certificate to – let’s say – another instance of SQL Server, the command looks like this:

CREATE CERTIFICATE TestSQLServerCert   
FROM FILE = 'D:\Temp\MattTest\MyTDECert'  
WITH PRIVATE KEY   
(  
    FILE = 'D:\Temp\MattTest\SQLPrivateKeyFile',  
    DECRYPTION BY PASSWORD = 'C0rrecth0rserbatterystab1e' 
);  
GO  

Considering what the BACKUP command did, you can intuit that the above command will do the opposite. It first of all decrypts the Private Key using the password supplied. Then it encrypts it again using the DMK for the local instance before saving it in the master database locally.

Remember it is the Actual Values of Keys that get used for encryption – not their encrypted value. So the private key for this certificate is exactly the same as it was where we backed it up from – though the encrypted value will be different as it has used a different DMK.

So if I now go to restore a copy of a database whose DEK has been encrypted with this certificate onto the new instance I can do so without any problem.

Many articles will tell you that you also need to migrate the DMK from your old instance, and some will say that you also need the SMK. This is simply not correct – I mean, it will work if you do that (in the right order) but it isn’t necessary.

And in some scenarios it may simply not be possible. Imagine that the instance you are migrating to is already using encryption based on its current SMK and DMK, if you replace those with the ones from the instance you are migrating from then you are going to break existing stuff.

So all you need to restore a database protected by TDE is the database backup, the certificate/private key, and the password specified when the certificate and key were backed up.

Equally that’s all someone else needs too, so make sure those things are protected.

In my next post we’ll look at a possible way of hacking TDE and the additional steps you need to protect yourself to make sure TDE is giving you the level of protection you expect. When that post is live you should be able to see a link to it in the comments.

Implementing Temporal Tables Where You Have Existing Data

In my post about auditing tools in SQL Server I mentioned a few tools we have at our disposal for capturing the change history of data.

One was Change Data Capture, which I went into a bit more detail about in Setting up Change Data Capture (CDC).

I also referred to Temporal Tables which are available to us in the brave new world of SQL 2016.

Temporal tables aren’t just about monitoring change, they also provide really nice methods for being able to query historical data – to see what the values were at a particular point in time e.g.

SELECT * FROM dbo.SomeData FOR SYSTEM_TIME AS OF '1900-01-01';

This makes them easy to work with, but also opens up other applications for them over and above auditing purposes – for instance being able to maintain configuration data that changes over time, but where we need to reference what the value was historically.
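As well as AS OF there are other sub-clauses. For example – a quick sketch using the dbo.SomeData table we’ll define below – FOR SYSTEM_TIME BETWEEN returns every version of a row that was current at some point in the given window:

SELECT Id, SomeText, StartDate, EndDate
FROM dbo.SomeData
FOR SYSTEM_TIME BETWEEN '2018-01-01' AND '2018-12-31'
WHERE Id = 2
ORDER BY StartDate;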

I wanted to look in more detail about the process for changing an existing table so that it becomes a temporal one.

Let’s start by quickly looking at how we would define a temporal table for a fresh deployment:

CREATE TABLE dbo.SomeData(
   Id INT IDENTITY(1,1) NOT NULL
      CONSTRAINT PK_SomeData PRIMARY KEY CLUSTERED,
   SomeText VARCHAR(1000) NULL,
   ModifiedBy VARCHAR(128) NULL,
   StartDate datetime2(7) GENERATED ALWAYS AS ROW START NOT NULL,
   EndDate datetime2(7) GENERATED ALWAYS AS ROW END NOT NULL,
   PERIOD FOR SYSTEM_TIME (StartDate, EndDate)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.SomeData_History));

There are a few things here additional to a normal table definition. First thing to note is we need two columns to support the Temporal versioning. I’ve called my columns StartDate and EndDate, these both have to be of DATETIME2 datatype to the highest precision (100 nanoseconds). These are specified as GENERATED ALWAYS AS ROW START\END so SQL will take care of assigning values when new rows are inserted or updated.

I’ve specified the PERIOD FOR SYSTEM_TIME, so SQL knows that these are the fields to use for the temporal versioning.

Finally I’ve specified SYSTEM_VERSIONING for the table and what the history table should be called.

When I execute this you can find the table and corresponding history table displayed in SSMS as below:
Tempopral1
We have the table itself, and also the history table has been created for us. This is where the previous versions of rows of data will be stored.

So that’s pretty straightforward – but what if you want to change an existing table?

First of all let’s drop this one:

DROP TABLE dbo.SomeData;

Unfortunately that gives me an error message:
Msg 13552, Level 16, State 1, Line 90
Drop table operation failed on table 'TestAudit.dbo.SomeData' because it is not supported operation on system-versioned temporal tables.

As the message says, you can’t just drop Temporal tables. First you must turn system versioning off, then you can go ahead:

ALTER TABLE dbo.SomeData SET (SYSTEM_VERSIONING = OFF); 
DROP TABLE dbo.SomeData;

Dropping the table though leaves the History table behind – now seen as a normal table: Tempopral2

That’s handy as it means you don’t lose the history when the main table is deleted. For now though let’s just delete that as well:

DROP TABLE dbo.SomeData_History;

Now let’s create a new table that will be our “existing table” we want to change. For this example I’ll create a test table with a trigger that maintains a modified date, and throw in 1000 rows:

CREATE TABLE dbo.SomeData(
   Id INT IDENTITY(1,1) 
      CONSTRAINT PK_SomeData PRIMARY KEY CLUSTERED NOT NULL,
   SomeText VARCHAR(1000) NULL,
   ModifiedBy VARCHAR(128) NULL,
   ModifiedDate DATETIME NOT NULL 
      CONSTRAINT DF_SomeData_ModifiedDate DEFAULT (GETUTCDATE())
);
GO

CREATE TRIGGER dbo.TR_SomeData_ModifiedDate
   ON dbo.SomeData 
   AFTER UPDATE
AS
BEGIN
   UPDATE dbo.SomeData 
   SET ModifiedDate = GETUTCDATE()
   FROM dbo.SomeData 
   INNER JOIN Inserted
      ON dbo.SomeData.Id = Inserted.Id;
END;
GO

INSERT INTO dbo.SomeData(SomeText, ModifiedBy)
SELECT TOP 1000 'Blah', 'Me'
FROM sys.objects a CROSS JOIN sys.objects b;

I’m going to complicate things and say that when I turn this into a temporal table I want to keep the existing ModifiedDate column and use that as the start date for my system versioning.

The first thing you might try is to alter the column:

ALTER TABLE dbo.SomeData
ALTER COLUMN ModifiedDate DATETIME GENERATED ALWAYS AS ROW START NOT NULL;

Error:
Msg 13589, Level 16, State 1, Line 44
Column 'ModifiedDate' in table 'TestAudit.dbo.SomeData' cannot be specified as 'GENERATED ALWAYS' in ALTER COLUMN statement.

So I can’t do this in an alter statement. What I’ll actually have to do is create a new column specified to work with the system versioning, copy the data over, drop the old column and rename the new one.

First let’s create the additional columns and define the PERIOD FOR SYSTEM_TIME:

ALTER TABLE dbo.SomeData
ADD 
StartDate datetime2 GENERATED ALWAYS AS ROW START NOT NULL
   CONSTRAINT DF_SomeData_StartDate DEFAULT (GETUTCDATE()),
EndDate datetime2 GENERATED ALWAYS AS ROW END NOT NULL
    CONSTRAINT DF_SomeData_EndDate DEFAULT ('9999-12-31 23:59:59.9999999'),
PERIOD FOR SYSTEM_TIME (StartDate, EndDate);

Note that I’ve had to specify defaults for the new columns so that the existing data gets values populated (I didn’t have to do that when I simply defined the Temporal Table from scratch).

For the StartDate I’ve specified the current UTC date – this could be any date and time that’s not in the future, though note it should be a UTC one. If I’d tried to use GETDATE, as I’m currently on British Summer Time, I would get the following error:
Msg 13542, Level 16, State 0, Line 51
ADD PERIOD FOR SYSTEM_TIME on table 'TestAudit.dbo.SomeData' failed because there are open records with start of period set to a value in the future.

For the end date I have put the maximum value a DATETIME2 column can hold, this is required and is a way of saying that this record is (at the moment) valid indefinitely. If I try to put in an earlier date I receive this error:
Msg 13575, Level 16, State 0, Line 51
ADD PERIOD FOR SYSTEM_TIME failed because table 'TestAudit.dbo.SomeData' contains records where end of period is not equal to MAX datetime.

Now I’ve got the columns I could try copying my data over:

UPDATE dbo.SomeData SET StartDate = ModifiedDate;

I get another error:
Msg 13537, Level 16, State 1, Line 70
Cannot update GENERATED ALWAYS columns in table 'TestAudit.dbo.SomeData'.

It turns out you can get around this by temporarily dropping the period for system time:

ALTER TABLE dbo.SomeData DROP PERIOD FOR SYSTEM_TIME;
UPDATE dbo.SomeData SET StartDate = ModifiedDate;
ALTER TABLE dbo.SomeData ADD PERIOD FOR SYSTEM_TIME (StartDate, EndDate);

Your logical thought here might be that I didn’t need to create the period for system time when I altered the table to add the new columns. I could have just added the columns, migrated the data, then only added the period afterwards. Trust me, I tried that too:
Msg 13509, Level 16, State 1, Line 51
Cannot create generated always column when SYSTEM_TIME period is not defined.

From here it turns out to be reasonably clear sailing.

Let’s run the final command to make the table Temporal:

ALTER TABLE dbo.SomeData 
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.SomeData_History));

Here’s what it looks like in SSMS:
Tempopral3
You can see this is the same as the original fresh creation of a temporal table. The history table has been created for us. Just with an additional column in both tables.

Let’s remove the extra column and do the final rename:

ALTER TABLE dbo.SomeData DROP CONSTRAINT DF_SomeData_ModifiedDate;
ALTER TABLE dbo.SomeData DROP COLUMN ModifiedDate;
EXEC sp_rename 'dbo.SomeData.StartDate','ModifiedDate','COLUMN';

I was amazed the sp_rename didn’t give me an error. In fact if we look at the table again:
Tempopral4
You can see the column has been renamed in both the main table and the history table. Pretty neat!

I have at this stage forgotten to do one thing. Let’s just try to update a record and see what happens:

UPDATE dbo.SomeData SET SomeText = 'Wibble' WHERE Id = 2;

Msg 13537, Level 16, State 1, Procedure TR_SomeData_ModifiedDate, Line 7 [Batch Start Line 185]
Cannot update GENERATED ALWAYS columns in table 'TestAudit.dbo.SomeData'.
The statement has been terminated.

Yep, don’t forget to remove your trigger!

DROP TRIGGER dbo.TR_SomeData_ModifiedDate;

And now we’re all done.
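If we now retry the update and then query with FOR SYSTEM_TIME ALL we should see both the current row and the previous version that’s been moved into the history table. A quick check along these lines:

UPDATE dbo.SomeData SET SomeText = 'Wibble' WHERE Id = 2;

SELECT Id, SomeText, ModifiedDate, EndDate
FROM dbo.SomeData
FOR SYSTEM_TIME ALL
WHERE Id = 2
ORDER BY ModifiedDate;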

 

What if I use SQL Server Data Tools (SSDT) to make my change?

If you’re working with Database Projects in Visual Studio then you will be used to publishing changes straight from there – or deploying them using a DACPAC file.

In that case SSDT handles all this for you – though it does use a slightly different method. Rather than creating a new column, copying that data over, dropping the old column and renaming the new one, it does it at the table level.
That’s the same way some other changes are made with SSDT (or if you do them through the GUI in SSMS). The steps are broadly the same, just with tables instead of columns:

1) Create new temporal table
2) Turn SYSTEM_VERSIONING off
3) Copy across the data from the old table to the new
4) Turn SYSTEM_VERSIONING back on
5) Drop the old table
6) Rename the new table to the original name

If you’re doing this by “hand”, you could do it this way if you prefer, but it is probably a heavier operation that will require more downtime. For instance you’ll need the whole table to be locked while the operation progresses as you don’t want to risk data changes once you’ve started copying the data over. The column method will also lock the table, but potentially there is less to update/copy so it should hopefully be quicker.

As a final point I should note that you may run into problems dropping the trigger with SSDT, it can be a little quirky around doing that. For the moment I’ll leave off from going into that in more detail – but if you get that issue feel free to give me a yell!