Don’t Just Rely on Query Execution Stats for T-SQL Execution

I recently had an incident where I was looking into the cause of a long running process for a client. It was a batch process that ran overnight, and execution time had been growing until it was now taking over 4 hours.

The database involved is a kind of staging area for MI, so I wanted to look at any heavy queries being executed against the database – I knew that these would all be part of the batch process. I asked them to run the following query for me:

SELECT TOP 50
   DB_NAME(qt.dbid) AS DatabaseName,
   qt.[text],
   qs.execution_count,
   qs.total_worker_time,
   qs.total_physical_reads,
   qs.total_logical_writes,
   qs.total_logical_reads,
   qs.total_elapsed_time
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
WHERE ISNULL(qt.dbid, 0) = DB_ID();

I got fewer than 50 rows back, so figured I had everything covered, but the total elapsed time across all the queries was less than ten minutes. I knew the server hadn’t been rebooted for about a month, so potentially that could be as little as 20 seconds of query execution a night. Even if the full 10 minutes was from the last 24 hours, that still didn’t account for the long run times.

So, I convinced myself it wasn’t the queries and started looking at other things they could check. It was a long list. Lots of theories revolved around the idea that something else was running on the host at the same time.

As part of that, I decided to get them to run an Extended Events trace so I could look at exact timings and waits during the batch process (and anything else that might be going on in SQL). I got a lot of data back, which took a fair while to index and load. Having already ruled out query execution times for the batch process itself, I expected the analysis was going to be tricky to drill down on the problem. However, as soon as I looked for heavy queries running during the execution window I found one specific stored procedure with the following metrics:

QueryExStats1

Ouch.

Thing is, these figures were for a stored procedure that was part of the batch run. I did have data for this procedure in the original output, but the figures for query execution duration and CPU were tiny in comparison to these. So, were the query execution stats wrong?

No, they weren’t. sys.dm_exec_query_stats only captures figures for cached query plans, so you might have a query that isn’t caching a plan for whatever reason – or you might have something that’s doing work that isn’t actually a query. Imagine a WHILE loop that’s trying to find all the prime numbers under a trillion without using sets or queries. That would consume a lot of CPU and duration but wouldn’t show up in the query execution stats – because it’s not a query.
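To make that concrete, here’s a contrived sketch (my own illustration of that prime-numbers idea, with a much smaller limit) of the kind of non-query work that burns CPU without giving sys.dm_exec_query_stats anything meaningful to report:

DECLARE @n BIGINT = 2, @d BIGINT, @isPrime BIT, @found BIGINT = 0;

WHILE @n <= 100000   -- the example above said a trillion; keep it small if you try this
BEGIN
    SET @isPrime = 1;
    SET @d = 2;
    WHILE @d * @d <= @n AND @isPrime = 1
    BEGIN
        IF @n % @d = 0 SET @isPrime = 0;
        SET @d += 1;
    END;
    IF @isPrime = 1 SET @found += 1;
    SET @n += 1;
END;

-- Plenty of CPU and elapsed time consumed, but no real query for the stats to attribute it to
SELECT @found AS PrimesFound;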

In this case it turned out the stored procedure was:

  • Trying to process over 2 million rows using a cursor
  • For each row it would attempt to execute a stored procedure that didn’t exist using dynamic SQL
  • It would catch the failure and update an error message against the row – using the cursor
  • Every day it would attempt to process the failed rows again
  • Every day more rows would be inserted to the table that would fail

In particular, it was the use of the cursor that was most expensive, and that work wasn’t being captured in the execution stats.
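Stripped right down, the pattern looked something like this (the table, column and procedure names here are invented for illustration, not taken from the client’s code):

DECLARE @Id INT, @ProcName SYSNAME, @sql NVARCHAR(MAX);

DECLARE failed_rows CURSOR LOCAL FAST_FORWARD FOR
    SELECT Id, TargetProcName
    FROM dbo.StagingRows
    WHERE Status = 'Failed';

OPEN failed_rows;
FETCH NEXT FROM failed_rows INTO @Id, @ProcName;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        -- Build and execute a call to the target procedure for this row
        SET @sql = N'EXEC ' + QUOTENAME(@ProcName) + N' @Id = @Id;';
        EXEC sys.sp_executesql @sql, N'@Id INT', @Id = @Id;

        UPDATE dbo.StagingRows SET Status = 'Processed' WHERE Id = @Id;
    END TRY
    BEGIN CATCH
        -- The procedure didn't exist, so every row landed here, every night
        UPDATE dbo.StagingRows
        SET ErrorMessage = ERROR_MESSAGE()
        WHERE Id = @Id;
    END CATCH;

    FETCH NEXT FROM failed_rows INTO @Id, @ProcName;
END;

CLOSE failed_rows;
DEALLOCATE failed_rows;

Two million iterations of that, with an error raised and caught on every single one, adds up to a lot of work that never appears as a heavy query.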

To be kind to the original developer, it was a clever generic solution to a specific problem. It wasn’t optimal from a performance point of view, but they only expected it to have to deal with a few rows daily.

Then the volume of data increased.

Then a subsequent developer made a change but didn’t know they needed to add a new supporting stored procedure. The resulting issue didn’t get picked up in testing.

The number of failed rows had been growing every night for a couple of years, and the execution time creeping up until it went exponential.

This post isn’t about bad code though. It’s about remembering that not all resource consumption on our SQL boxes comes from cached queries. When troubleshooting, query execution stats are one place to look, but they don’t cover everything that might be being executed in T-SQL.

T-SQL Tuesday #106 – The Trouble with Triggers

tsql2sday150x150

For this month’s T-SQL Tuesday Steve Jones asks us to talk about our experiences with Triggers:

https://voiceofthedba.com/2018/09/03/t-sql-tuesday-106-trigger-headaches-or-happiness/

In my post I’m going to focus on DML (Data Manipulation Language) triggers. Their counterparts, DDL (Data Definition Language) triggers, have any number of interesting applications. It’s usually DML triggers, however, that attract the most controversy.

DML triggers seem like great functionality when you first discover them in your SQL career, especially if you come from an application coding background. Event-driven programming for data – sounds brilliant. If someone does something to table X, have a trigger that in turn does something to tables Y and Z. You can even have triggers on Y and Z and cascade actions to other tables and beyond.

Once you’ve been around for a while though you realise what a headache this can end up being. You think you’re just inserting a record into table X but there’s this whole bunch of stuff going on behind the scenes that you may not know about – or want.

This is the biggest objection to triggers. They hide logic. If I have a table called FeeTransaction and there is a bunch of logic that MUST happen when I insert a record here, then that’s clearer if it’s all encapsulated in a stored procedure that inserts the record. That makes it easier to maintain and troubleshoot than hunting around to find independent events that might be kicking off all over the place.

The only place I’m really a fan of DML triggers is in capturing Audit information. Most commonly for maintaining basic things like the last updated date and time for a record. I like having a mechanism that means I can see when data was last changed – whatever method was used to change it. i.e. if it was through the application, or if someone with write-access just ran an update query. Then I don’t need to rely on that person remembering to alter the LastUpdated column when they made their change.
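As a rough illustration (the key and column names here are made up), that sort of trigger can be as simple as:

CREATE TRIGGER dbo.trg_FeeTransaction_LastUpdated
ON dbo.FeeTransaction
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Stamp the time of change on every updated row, however the update was made
    UPDATE t
    SET    LastUpdated = SYSDATETIME()
    FROM   dbo.FeeTransaction t
    INNER JOIN inserted i ON i.FeeTransactionId = t.FeeTransactionId;
END;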

The need to maintain this sort of basic audit info is so ubiquitous that it seems silly it isn’t a core feature of SQL Server – i.e. the ability to add a column to a table that will always automatically capture the time a record was modified, presumably in a tamper-proof manner. In the same way we could have the ability to capture the login context used to make the change, or potentially anything else we need to capture.

Adam Machanic has raised a request with Microsoft to include this functionality – if you agree it would be handy then please upvote here:

https://feedback.azure.com/forums/908035-sql-server/suggestions/32901964-provide-a-mechanism-for-columns-to-automatically-u

Exporting Extended Events Session Data to a Table

If you’re a long-time Profiler user like me then you probably often take the option of saving (or loading) your trace results to a table for easy analysis. Well, with Extended Events (XE) it’s easy to do that too.

Once you’ve opened the session to view the data you get an extra drop-down menu “Extended Events” on the menu bar in SSMS. If you open that and browse down, you can quickly find the option to export the results to a table:

xe_table1

Or to a CSV or XEL file if you wish.

You just need to select a destination database connection and table name and the export starts. Be warned that it doesn’t default to the current database connection. I’ve fallen for that and overwritten the data in a table with the same name on a different SQL instance – whoops!

If the option is greyed out when you open the menu it may be that your event data is still loading. If you look closely in the above screenshot you can see I have over 8 million events captured by this session, so it took a while to load before I was able to export.

Once the export has finished you end up with a table that looks a bit like this (the exact set of columns will depend on which fields you have collected):

xe_table2

One gripe I have with this functionality, and that you can see above, is that all the text columns come out defined as nvarchar(max). Apart from anything else this means you can’t index the columns without changing the data type.

That’s one thing Profiler did better; for instance, database_name captured from Profiler would have been an nvarchar(128).

I thought I’d knock this post together while waiting for my statement to complete, which alters database_name to the preferred nvarchar(128). With 8 million records that’s taken 21 minutes so far. And then I’ll have to index it.
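For reference, the statement I’m waiting on is just a straightforward column alter, followed by the index – the table name below is a made-up stand-in for whatever you chose during the export:

ALTER TABLE dbo.XE_SSIS_Trace
ALTER COLUMN database_name NVARCHAR(128) NULL;

-- Once the column is a sensible size it can be indexed
CREATE NONCLUSTERED INDEX IX_XE_SSIS_Trace_database_name
    ON dbo.XE_SSIS_Trace (database_name);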

Yes, maybe it’s my fault for capturing so many events, but in my defence, I’m trying to do some analysis in a dev environment to work out what I can filter out safely, before passing the same session to a client to use in production to diagnose the problem with a long running SSIS package.

Still, as you can see it’s an easy process to export your Extended Events data to a table so you can do your analysis there – rather than having to resort to querying XML.


Using the built-in System Health session

When Microsoft introduced Extended Events (XE) in 2008, they also gave us a built-in XE session called system_health (though it’s worth noting that in 2008 MS hadn’t yet provided us a GUI for this so it becomes most useful in 2012 and beyond).

This is a great little tool. I mainly use it for troubleshooting deadlocks as it logs all the information for any deadlocks that occur. No more having to mess about making sure specific trace flags are enabled to ensure deadlock information is captured in the error log.

It also captures the SQL text and Session Id (along with other relevant data) in a number of other scenarios you may need to troubleshoot:

  • Where an error over severity 20 is encountered
  • Where a session has waited on a latch for over 15 seconds
  • Where a session has waited on a lock for over 30 seconds
  • Sessions that have encountered other long waits (the threshold varies by wait type)

There are other events captured too, you can see the full list here:

https://docs.microsoft.com/en-us/sql/relational-databases/extended-events/use-the-system-health-session?view=sql-server-2017

You can find the system_health session here in SSMS under your server instance:

system_health1

Just double click on the event file to view historical data.

The view that comes up immediately can seem non-intuitive to work with (do I just have to scroll through thousands of events looking for the one I want?):

system_health2

If you know the type of event you are looking for though, you can right-click on the “name” column and select the option to group by the values in that column. Then you see something more like this:

system_health3

With the example of looking at deadlocks (what I mostly use this for) I can then just expand that group and look for the one I want.

Or you can right-click and use “Find in Column” – or “Choose Columns” to add extra columns you might want to search in. For instance, I might want to see if it’s captured any information about why my backups are being delayed so I can add the “sql_text” column, order by that and then search for “backup”:

system_health4

Once an event is selected it will show me the additional information gathered in the bottom pane:

system_health5

Like I say, it’s pretty useful. My only issue is that by default it captures only 20MB of data, which on a busy system can mean events are only kept for a couple of days. So, I often want to increase the retention. I find the easiest way to do that is to right-click on the session and select “Script as CREATE to New Query Window”. I then edit the script to change the number of roll-over files to 20 (from 4):

system_health6
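The part to edit is the event_file target definition inside the scripted CREATE EVENT SESSION statement. Once changed, that fragment looks something like this (the default sizes may vary slightly between versions, so treat the numbers as illustrative):

ADD TARGET package0.event_file
(
    SET filename = N'system_health.xel',
        max_file_size = (5),          -- MB per file
        max_rollover_files = (20)     -- increased from the default of 4
)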

You can then delete the existing system_health session and re-create it from the script – you do have to remember to right-click on it in the GUI and start it again.

One great thing is that when you do this you don’t lose the events already saved to file; the files are retained and continue to be accessible from your newly created session.

All in all, a handy little tool.

There’s Still a Place for SQL Server Profiler

Follow a few of the SQL Family on Twitter and you’ll mostly see one view regarding SQL Profiler, and it’s not generally a friendly one.

So much so that I’ve been tempted to buy this t-shirt for some trolling at the next SQL conference I attend:

I Heart SQL Profiler


At SQLBits earlier this year, Erin Stellato ran a great session “Kicking and Screaming: Replacing Profiler with Extended Events”. Erin’s run it a few times at various conferences and you can check it out on YouTube:

https://www.youtube.com/watch?v=HnqLGJuRgvw

As a stubborn continuing user of Profiler I attended because I liked the kicking and screaming bit, being willing to accept that might fairly accurately describe my stance.

Erin ended the session by asking for a commitment from the audience: the next time you reach for Profiler, put aside 15 minutes to try doing the same thing with Extended Events instead.

I’d used Extended Events (XE) on occasion, but generally when it was something that couldn’t be done in Profiler. I also regularly make use of the built-in system_health Extended Events session, particularly for retrieving details of deadlocks.

But if I needed a quick trace for something – to profiler I go. In my defence, I’m not a production DBA, so I’m checking things on dev and test instances where performance isn’t critical. But even where performance is a concern you can define the trace in profiler and then export that to implement as a server-side trace where the performance impact is minimal.

As a side story here, profiler can slow things down as it needs to write events back to the app before each event can complete. A Microsoft consultant shared a story with me where they’d been called out to troubleshoot a client who was experiencing high CPU on the database server. The MS guy started a quick trace to see what was going on, and as that slowed queries down it actually reduced the CPU. The manager ran out yelling “What have you done – it’s fixed!” There had to be a fair amount of convincing to persuade the manager that leaving profiler running permanently wasn’t a solution to the CPU problems.

Back to the subject of the post. There are a couple of reasons why you may still want to use profiler (apart from where you’re running seriously old versions of SQL):

  • When you work with clients, it’s a technology they’re all familiar with. I can ask a client to run a trace and send me the results. I don’t have to explain in detail what to do so it’s easy to get what I need. If I asked for the results of an XE session I’d generally get the verbal equivalent of head scratching – they’ve heard of it (probably) but wouldn’t know how to go about it. They may also wish to go through a change process to implement one as they don’t understand well what the impact might be.
  • I want something I can replay. You can’t do this with extended events, but you can if you take the right sort of trace. Moving forward you can use “Distributed Replay” for this, but I don’t know anyone who’s done that.
  • You’re using Amazon Relational Database Service (RDS) – with RDS you have restricted rights over your database instance, and the ability to create extended events sessions is something you don’t have access to.

There’s also a couple of reasons people say you shouldn’t use traces that I disagree with:

  • Performance. I’ve already mentioned this. Yes, XE is more lightweight, but if you implement your trace as a server side one then the impact is small.
  • Profiler is deprecated. Well, yes it is, but only from SQL 2016. The ability to not have to terminate your SQL statements with a semicolon has been deprecated since 2005 and we can still do that. And as Hugo Kornelis informs me, there are features that have been deprecated even longer and are still with us. Trust me, trace will be with us for a good while. I’ll happily take a wager on that if you don’t believe me. I might even wager that I’ll be retired before Profiler is.

Profiler_twitter

However, since making the pledge at Erin’s session I forced myself to try Extended Events for some of the more basic tasks I would have turned to Profiler for and I feel there are some reasons people (myself included) haven’t been using XE that are no longer valid:

  • It’s quicker/easier to set up a profiler trace. That was true in the early days of XE, but arguably now (depending on what you want to do) you can have your XE session going with remarkably few clicks.
  • XE produces XML output. I hate XML, I don’t want to start using XQuery to find what I want. Trust me, we all hate XML. With the GUI we now have for XE you don’t have to mess with that stuff.
  • With XE I can’t save my results to a table. Um, yes you can. A table, a csv file (with no XML I promise) or just copy and paste to Excel. And unlike profiler you can search the data without having to export it anywhere.


So yes, I believe that there is still a place for profiler, but no longer because it is quicker, simpler and more usable.

In subsequent posts I plan to show how easy and convenient XE is to work with these days.

And yeah, okay it is more powerful, more lightweight, allows me to look at things profiler never could. People have been telling me that for years and it wasn’t enough to make me change – but “easier”? Give me that thing!

Still probably going to buy that t-shirt though…

T-SQL Tuesday 104 – Capture expensive queries with query store

tsql2sday150x150

For this month’s T-SQL Tuesday Bert Wagner invites us to share SQL scripts we’ve written that we’d hate to live without.

Code You Would Hate To Live Without (T-SQL Tuesday #104 Invitation)

I don’t know that I’d go quite as far as “hate”, but I’ve been getting a lot of re-use out of a script I wrote and shared on this blog last year.

I’m lucky enough to have a number of SQL 2016 instances with Query Store enabled for the user databases; in particular, that includes a performance test environment for one of the applications I work on.

In the past I’ve always been a sucker for those queries that you can use to find the most expensive queries on your SQL instance. Be that in terms of CPU, IO, or just longest running.

I wanted to use Query Store for an improved version of those scripts I’d been using, as Query Store retains statistics when a server restarts – or a plan gets cleared out of the cache.

With Query Store it’s easy enough to get figures per database, but I wanted to be able to view across all databases – and that was why I wrote the script.

Here’s the original post (and script):

Capture the most expensive queries across your SQL Server using Query Store

I find this useful as it is, but also I often cut it about to ask slightly different questions of the query store data.
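If you just want the gist of how it works, the core idea is roughly the sketch below – loop through the databases that have Query Store enabled and pull the heaviest queries from each into a temp table. This is a simplified stand-in for the real script at the link above, not the script itself:

DECLARE @db SYSNAME, @sql NVARCHAR(MAX);

CREATE TABLE #ExpensiveQueries (
    DatabaseName SYSNAME,
    QueryText    NVARCHAR(MAX),
    TotalCpuMs   BIGINT
);

DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE is_query_store_on = 1;

OPEN dbs;
FETCH NEXT FROM dbs INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Query each database's Query Store views via dynamic SQL
    SET @sql = N'USE ' + QUOTENAME(@db) + N';
        SELECT TOP 10
               DB_NAME() AS DatabaseName,
               qt.query_sql_text,
               SUM(rs.avg_cpu_time * rs.count_executions) / 1000 AS TotalCpuMs
        FROM sys.query_store_query_text qt
        INNER JOIN sys.query_store_query q ON q.query_text_id = qt.query_text_id
        INNER JOIN sys.query_store_plan p ON p.query_id = q.query_id
        INNER JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
        GROUP BY qt.query_sql_text
        ORDER BY TotalCpuMs DESC;';

    INSERT INTO #ExpensiveQueries (DatabaseName, QueryText, TotalCpuMs)
    EXEC sys.sp_executesql @sql;

    FETCH NEXT FROM dbs INTO @db;
END;

CLOSE dbs;
DEALLOCATE dbs;

SELECT * FROM #ExpensiveQueries ORDER BY TotalCpuMs DESC;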

Alter Multiple Databases at Once

This is a quick and dirty method I often use when I want to make a change to multiple databases on a SQL Server instance, usually based on a criteria.

It’s a fairly basic level thing to do, and while it is probably trivial to a SQL expert, I find most beginners wouldn’t consider it. So, I thought it was worth sharing.

Let’s say that I have the following set of databases:

MultipleDBs1

I’ve got a set of databases suffixed _dev, and another set suffixed _uat, as well as a couple of others.

Let’s say I need to take the _uat databases offline. I could do that one at a time through the GUI in SSMS. Or I could write SQL statements to do that.

Or, I could write a query to generate the SQL for me. In this case I only have three databases so it’s not going to save me a lot of time, but when you have tens or hundreds…

First, I write a query to make sure I can select out just the databases I want:

SELECT name
FROM sys.databases
WHERE name LIKE '%[_]uat';  -- [_] escapes the underscore, which is otherwise a single-character wildcard in LIKE

And I view the results:

MultipleDBs2

That’s what I wanted. So, the next stage is to alter that query to generate the SQL statements to take those databases offline. The SQL to set a single database offline is:

ALTER DATABASE [{Database Name}] SET OFFLINE;

So, I alter my query as follows:

SELECT 'ALTER DATABASE [' + name + '] SET OFFLINE;'
FROM sys.databases
WHERE name LIKE '%[_]uat';

Which gives me:

MultipleDBs3

I can then copy those results into my query window and execute them as a block:

ALTER DATABASE [Users_uat] SET OFFLINE;
ALTER DATABASE [App_uat] SET OFFLINE;
ALTER DATABASE [Config_uat] SET OFFLINE;

Usually I’ll execute the first line initially to make sure I haven’t messed up the syntax and then I’ll run the rest in one go.

If we look at the list of databases again in the Object Explorer you can see it’s done the job:

MultipleDBs4

You can get clever and create a cursor to execute the result-set one at a time using dynamic SQL, but if you’re just performing a one-off task and you want the quickest solution then I find this the best method.
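If you did want the fully automated version, it might look something like the sketch below (if there could be active connections you’d probably also want WITH ROLLBACK IMMEDIATE):

DECLARE @dbName SYSNAME, @sql NVARCHAR(MAX);

DECLARE uat_dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name
    FROM sys.databases
    WHERE name LIKE '%[_]uat';

OPEN uat_dbs;
FETCH NEXT FROM uat_dbs INTO @dbName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build and run the ALTER DATABASE for each matching database
    SET @sql = N'ALTER DATABASE ' + QUOTENAME(@dbName) + N' SET OFFLINE;';
    EXEC sys.sp_executesql @sql;
    FETCH NEXT FROM uat_dbs INTO @dbName;
END;

CLOSE uat_dbs;
DEALLOCATE uat_dbs;

For a genuine one-off though, the copy-and-paste approach above is usually quicker.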

You can use this technique for all sorts of ad-hoc admin tasks, so it’s a very useful little tool to have in your belt.

Also, you’re not just restricted to performing administrative actions on databases, it could be any set of SQL objects. Personally I find doing things using methods like this a lot easier than messing about with PowerShell.

Cycle Through Your Clipboard History In SSMS

This is my favourite SSMS trick I’ve discovered recently, probably some time towards the end of last year.

Basically, when you paste in Management Studio, you have not just the option to paste the last thing you selected and copied, but can cycle back through previous things that were in the clipboard. In a quick test I was able to choose from the last 20 clipboard entries.

You can find a menu item for this in the “Edit” menu:

ClipboardRing

Or as you can see in the menu (and much more easily) you can use Ctrl+Shift+V.

Here’s an animation of me using it to cycle through the SQL statements from my last blog post:

ClipboardRing

This is a great little productivity tool. I was forever copying and pasting one thing, then a second, then having to go back to copy and paste the first thing again. Never again!


The Importance of ORDER BY

Everyone, at the beginning of their SQL career, gets told that it is important to include an ORDER BY if they want the results ordered. Otherwise the order in which they are returned is not guaranteed.

But then you run queries a lot of times that don’t need a specific order – and you see that they (at least seem to) come out in the same order every time. You could (almost) be forgiven for thinking you can rely on that.

There was even a question on a Microsoft SQL certification exam a few years ago that asked what the default order was for records returned by a simple SELECT – the answer it was looking for was that it would be according to the order of the clustered index. So you can rely on that – right?

Wrong. The question was a bad question, and the answer was incorrect. Let’s look at this in action. First I’ll create a table and add some rows:

CREATE TABLE dbo.NoSuchThingAsDefaultSortOrder (
   Id INT IDENTITY(1,1) 
      CONSTRAINT PK_NoSuchThingAsDefaultSortOrder PRIMARY KEY CLUSTERED,
   FirstName VARCHAR(30)
);

INSERT INTO dbo.NoSuchThingAsDefaultSortOrder (FirstName)
VALUES
('John'),
('Fred'),
('Bob'),
('Sue'),
('Jenny'),
('Matt'),
('Alexis'),
('Zebedee');

Now let’s SELECT from the table, without specifying an order:

SELECT Id, FirstName
FROM dbo.NoSuchThingAsDefaultSortOrder;

Here’s our results:

OrderBy1

Okay, so they’ve come out ordered by Id according to our clustered index.

But what if someone comes along and decides they want to be able to look up records in the table by name, so they create a non-clustered index for that:

CREATE NONCLUSTERED INDEX IX_NoSuchThingAsDefaultSortOrder_FirstName
   ON dbo.NoSuchThingAsDefaultSortOrder(FirstName);

We run our original query again and this time these are the results we get:

OrderBy2

They’ve come out in a different order – this time ordered by FirstName – like the index we just added.

The reason is that this time SQL decided that it would be better to use our new index to execute the query. In general, it will try and use the smallest index that will satisfy the query – and in this case it decided that was the non-clustered index. I’d show you the execution plans to prove that, but I think it’s pretty obvious from the order the results are returned in.

So, don’t go relying on there being a “default” sort order. It’s just not true. Changes to indexing – or even conceivably to statistics about the data distribution – could change the way SQL chooses to execute your query, and that could change your order.

Unless you make sure to specify ORDER BY.
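For our example table that just means being explicit:

SELECT Id, FirstName
FROM dbo.NoSuchThingAsDefaultSortOrder
ORDER BY Id;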


Thoughts on Query Performance with TDE enabled

Microsoft state that enabling TDE (Transparent Data Encryption) usually has a performance overhead of 2-4%. That doesn’t sound like very much, and personally I wouldn’t let it bother me if I want to make sure my data is encrypted at rest.

However, you may have heard other sources saying that it’s actually a lot more than that – and the performance impact is a high price to pay for the level of protection offered.

So, what’s the truth?

The critical thing to remember is that with TDE your data is encrypted on disk, but data held in memory (i.e. the buffer pool) is unencrypted. Therefore, you would only expect an overhead when reading from and writing to disk.

SQL Server tries to keep data that is referenced repeatedly in the buffer pool. So, if your SQL instance is provisioned with enough memory, a lot of your read queries can access the buffer pool and don’t have to go out to disk. Such queries should not be affected performance-wise by TDE.

There may be other read queries however that access older data that hasn’t been read for a while, these queries would need to retrieve that data from disk and so there would be an overhead from TDE.

Any queries that modify data will need the outcome to be written to disk so in these cases we will see an overhead. This overhead is likely to come in two parts, first when the transaction is written to the logfile before committing, and then later as the updated data gets written to the data file as part of a checkpoint operation.

We also have other operations that write or update encrypted data on disk, so we would also expect these to have some overhead. This would include operations such as backups or index rebuilds.

You can see from this that the overhead will very much depend on how your application interacts with your data. At one extreme, if you have a set of static data that is small enough to be held in memory and is queried regularly then there should be no overhead. At the other end of the spectrum, if you have an application that writes to the database a lot, and reads less often, then the overhead will be higher.

Let’s look at some examples and try and quantify what amount of overhead we might be talking about. In this post we’re just going to focus on the effect TDE has when you are reading data.

First, I’ll create two databases, one with TDE enabled and one without. Then I’ll load the same set of data into each (Total size about 1GB).

You can find the script I used for this in my previous blog post:

Encrypting an existing database with TDE

In the first test we’ll perform a LIKE query of the worst kind, one that tries to match a value somewhere within a column. We have no indexes on the table, but none would be that helpful with this query anyway.

SELECT *
FROM dbo.SomeData 
WHERE SomeText LIKE '%Wibble%';

I’ll run across 4 test cases (capturing the total CPU consumed in each case). The test cases are:

  • TDE Protected database where the buffer cache is empty (i.e. all data has to be read from disk)
  • TDE protected database where all the data for the table is in the buffer cache (i.e. no data has to be read from disk)
  • Database without TDE where the buffer cache is empty (i.e. all data has to be read from disk)
  • Database without TDE where all the data for the table is in the buffer cache (i.e. no data has to be read from disk)

In each test I’ll run the query 5 times and total the CPU to even out variance between executions. For the tests involving disk reads I’ll run the command DBCC DROPCLEANBUFFERS in between executions of the query to empty the buffer cache.
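Each timed run in the disk-read test cases looked roughly like this – my reconstruction of the method described above, using SET STATISTICS TIME to capture the CPU figures:

-- Dev/test only: flush dirty pages, then empty the buffer cache so the query reads from disk
CHECKPOINT;
DBCC DROPCLEANBUFFERS;

SET STATISTICS TIME ON;

SELECT *
FROM dbo.SomeData
WHERE SomeText LIKE '%Wibble%';

SET STATISTICS TIME OFF;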

The results looked like this, with time shown in seconds. Note that MAXDOP was set to 4 and each query went parallel over 4 threads:

TDE_ReadPerf1

There’s quite a variance between each run so I’m not going to take anything significant from small differences we see. However, we can see that the timings are pretty much the same when the data is in memory, but there seems to be about a 10% overhead with TDE when reading from disk.

In case you’re wondering why reading from disk didn’t add much elapsed time for the No-TDE database – the reads were “read-ahead” so were able to complete while the CPU was burning through the data.

Let’s try a different query, this one will still have to scan the whole table as we are dealing with a heap, but it uses an equality predicate so there is less work to do in matching the data:

SELECT *
FROM dbo.SomeData 
WHERE Id = 100000000;

I’ll run the same set of tests as above and we can look at the results:

TDE_ReadPerf2

The first thing we notice is that this query runs a lot quicker in general. And again, there is little difference with and without TDE when the data is in memory.

But look at the comparison when the data has to be read from disk. With TDE the CPU consumption is more than 10 times as large, or in percentages, over 1000% worse.

At this point you could be forgiven for panicking – are you willing to risk that TDE makes your queries that much worse?

In both the above two tests, the same amount of data is being read from disk. And if you re-examine the numbers, you’ll see that (very roughly) the same amount of CPU has been added in each case where we have TDE enabled – about 50 seconds. That 50 seconds was split over 4 cores so it would have been about 12.5 seconds per core.

In terms of elapsed time, we had approximately that increase with the first query because CPU was the resource under most contention – i.e. the reads were able to occur while waiting for the CPU to complete. In the second query we can see the reading from disk occupied most of the elapsed time for those queries, so the extra CPU consumption didn’t make the query run particularly longer.

By the time it had been executed 5 times (with the memory flushed between each execution) each query read about 600,000 pages sized at 8KB each – just under 5GB. If 50 seconds of that went on decrypting those pages, then each page took about one twelfth of a millisecond to decrypt – or alternatively, TDE decrypted about 12 pages per millisecond. Or in terms of disk size, 100MB per second. These were tests on a server with magnetic spinning disks (not SSDs), and you can see from the above figures that the straight disk access took about 40 seconds on its own.

When TDE doesn’t read from disk it doesn’t add any overhead, but how do we quantify what the overhead to queries is when it does have to access the disk?

From the above tests we could suggest it adds from 10% to over 1000% CPU.

Or alternatively between 10% to 0% elapsed time. Note that those figures are the same way round, i.e. when it added 10% CPU it added 10% elapsed time, but when it added 1000% CPU time – the elapsed time was about the same.

I could go on with this type of confusing analysis, but instead I’ll suggest this is the wrong way to think about performance in terms of TDE.

Don’t think about query performance.

Think about read performance.

TDE overhead depends on the level of your physical disk access. In the case of read query performance, it depends wholly on the level of physical reads, and seems to be a reasonable fixed overhead for each physical read.

That makes perfect sense, the overhead for querying is in decrypting data, and surely it will take pretty much the same amount of CPU to decrypt each 8KB page.

In theory this makes it simple for us to calculate what this overhead would look like on our production SQL Servers. There’s all sorts of ways of capturing physical reads (and writes).
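For example – and this particular method is my suggestion rather than what the post used – you can snapshot sys.dm_io_virtual_file_stats for a database, wait a known interval, run it again and difference the byte counts:

-- Figures are cumulative since the instance started, so sample twice and subtract
-- 'MyDatabase' is a placeholder for the database you're interested in
SELECT DB_NAME(vfs.database_id)                   AS DatabaseName,
       SUM(vfs.num_of_bytes_read) / 1048576.0     AS TotalMBRead,
       SUM(vfs.num_of_bytes_written) / 1048576.0  AS TotalMBWritten
FROM sys.dm_io_virtual_file_stats(DB_ID('MyDatabase'), NULL) AS vfs
GROUP BY vfs.database_id;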

Let’s say I take a quick look at Resource Monitor to get a ballpark figure for one of my databases on this server that I know to be quite heavy on physical reads. And let’s say I see that it averages 25MB/s during the peak hour.

From that, and from the figures above I can estimate what impact enabling TDE for this database would have on the CPU.

Well I know that 25MB equates to about 0.25 seconds of CPU to decrypt the data, and I know I have 4 cores, so I can expect that in the average second this adds 0.0625 seconds of CPU per core. I multiply that by 100 and I find that I’ve added 6.25% CPU.

The calculation I’ve just done is:

Added CPU % = (MB read per second * 100) / (MB TDE can decrypt per second per core * number of CPU cores)

This doesn’t include writes, and it doesn’t include backups – I hope to look at that in a later post.

Now, let’s say that produces a scary number, and I’m worried about the strain that’s going to put on my CPU…

My first question would be why am I experiencing so many reads and can I alleviate that? Does data have a short shelf-life in memory? Do I have enough memory in my server – and is enough allocated to SQL?

This isn’t just in terms of TDE. SQL Server is going to perform much better if your current dataset – i.e. the data you are currently accessing most, can be held in memory. So, if TDE is causing a problem, then it’s possible your queries are slow anyway.

Again, I’m not talking about writes just yet.

Or maybe your database is heavy on physical reads because it’s a data warehouse, regularly querying historical data. In that case, is it a suitable target for encryption? Hopefully the data is well anonymised if you’re using it for reporting and therefore doesn’t contain anything personal or sensitive.

I hope.

In summary…

Just to repeat myself, if you’re wondering about TDE and its impact on query performance, as we all have done, try to reframe the question and think about its impact on read performance.

It makes more sense and it may help you to more easily quantify the impact on your servers. And if it does look like the performance may be an issue – perhaps there is tuning you can perform on your database instance to reduce the physical disk access.