Why Does Everyone Have More Friends Than You?

No, the answer isn’t because you’re a DBA.

This isn’t a technical post about databases, but rather a discussion of a statistical paradox I read about recently. Statistics and data go hand in hand, and many of us who work with data use statistics in our work – particularly if we cross over into BI, Machine Learning or Data Science.

So, let’s state the problem.

I think I have an average number of friends, but it seems like most people have more friends than me.

I can see this on Facebook – even though Facebook friendship isn’t the same thing as the real kind. A quick Google search tells me that the average number of Facebook friends is 338 and the median is 200. My number is 238. That’s less than the average but greater than the median – so according to those figures I do have more friends than most people (on Facebook at least).

I could probably do with culling some of those though…

I’m going to pick ten of those Facebook friends at random. I use a random number generator to give me a starting point from 1 to 20, take that friend, and then every 20th friend after that until I have ten people.

I get the number 15 to start, so let’s gather some data: how many friends do my 15th, 35th, 55th, 75th, 95th, 115th (and so on) friends have?

Here are the numbers sorted lowest to highest:

93
226
239
319
359
422
518
667
1547
4150

If I look at that list, only two people – 20% – have fewer friends than me. 80% have more friends than me. Why do I suddenly feel lonely? How can this be?

If I have more friends than the median, then I should be in the second half.

Friendship is based on random accidents, and I’ve picked people out of my friends list at random. Surely I’ve made a mistake.

I could repeat the sampling but I’m likely to get similar findings.

The answer is that it’s not all random, or at least not evenly so. The person with over 4,000 friends is over 40 times as likely to know me as the person with fewer than 100. Maybe they get out a lot more (actually they’re a musician).

I’m more likely to know people if they have a lot of friends than if they have fewer.

This is difficult to get your head round, but it’s important if you’re ever in the business of making inferences from sampled data. It’s called “The Inspection Paradox”.

It’s also one of many reasons why people may get feelings of inferiority from looking at social media.

You can find a lot more examples and explanations in this post:

https://towardsdatascience.com/the-inspection-paradox-is-everywhere-2ef1c2e9d709

SQL Server Wishlist

This month for T-SQL Tuesday Kevin Chant asks us what our fantasy SQL feature would be.

https://www.kevinrchant.com/2019/09/03/t-sql-tuesday-118-your-fantasy-sql-feature/

I think it’s appropriate to give a shout-out to Microsoft at this point, because over the last few releases they’ve given us some of the items that are top of my list.

Recommending and setting MAXDOP during setup (coming with SQL 2019) will hopefully mean I no longer have to have arguments about why the out-of-the-box setting isn’t appropriate.

The same with setting max memory in the setup (also with SQL 2019).

A more verbose error where string or binary data might be truncated – we got that in SQL 2017.

It’s the little things like these that make me happy – and make my job easier.

A few other little things I’d like – though I accept that something small to describe isn’t always small in its execution…

A change to the default cost threshold for parallelism – be it 20, be it 30, be it 50. I’d prefer any of those to 5.

I think it would also be great to have a cardinality optimizer hint, e.g. OPTIMIZE FOR 5 ROWS. Oracle has this, and it’s no fun feeling jealous of those working on the dark side 😉 You can achieve the equivalent, but it’s convoluted and it’s not clear what’s going on when the uninitiated see the code:

https://dba.stackexchange.com/questions/168726/sql-server-cardinality-hint
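One version of the workaround uses a variable TOP combined with OPTIMIZE FOR to nudge the optimizer towards a particular row estimate. Roughly something like this sketch – the table, column and values below are just placeholders:

-- Plan for ~5 rows while still returning everything at runtime
DECLARE @RowGoal bigint = 9223372036854775807; -- max bigint, so no real limit when the query runs
DECLARE @CustomerId int = 42; -- placeholder parameter

SELECT TOP (@RowGoal) *
FROM dbo.SomeTable -- placeholder table
WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR (@RowGoal = 5)); -- compile the plan as if only 5 rows were wanted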

There is one big thing I’d like – but it’s never going to happen. Get rid of Enterprise Edition – or rather, make Enterprise Edition the standard. Enterprise is comparatively so expensive that it’s rare I’m going to recommend it. It’s interesting to see that in Azure SQL DB we only have one edition to work with – I’d love to see that in the box product. I understand that change would be a massive revenue loss, so I can’t see it happening.

If not that, then if we could just have at-rest encryption (i.e. TDE) in Standard Edition, that would make me very happy. In these days of security and privacy consciousness it seems that should be core functionality.

UPDATE: TDE is going to be available on Standard Edition from SQL Server 2019. I get my wish!

Finally, I’d just like to upvote Brent’s idea that it would be great to be able to restore just a single table.

That all said, I’d like to go back to my first point. I love what MS is doing with SQL Server, and the continual improvements that are being made. I particularly love that we sometimes get them for free in service packs and cumulative updates – without having to upgrade to a new version.

Keep it coming!

The 99 Flavours of SQL Server are all Vanilla

There are so many options for running SQL Server now. It can feel a little intimidating. 

You’ve spent years gaining experience with SQL running on Windows Server. On bare metal or in a VM. Locally or in a data centre.

Now people are talking about SQL on Linux. SQL on VMs in the cloud. SQL Server Platform as a Service (PaaS) offerings such as Azure SQL DB, Azure Managed Instance, or Amazon RDS.

Running SQL in a container on Docker or Kubernetes. 

It’s easy to feel like you’re at risk of getting left behind – like you’ve got to start learning all over again from the beginning.

The good news is that whatever the flavour – it’s still SQL Server. For everything that’s different there are a hundred things that are still the same.

Database design, writing and executing queries and stored procedures – it’s still the same.

Query tuning, indexing, looking at execution plans. All the same.

Even in administration, many of the core concepts are the same: understanding how security works, backups, high availability. The main difference is often that some of these might be taken care of for you, so you don’t need to worry about them any more.

Caveat – you still need to worry about them a bit!

The point is, most of what you already know, the experience you have gained over the years, is still totally valid. Learning about SQL Server on a new platform may feel like a big learning curve, but in reality, the new stuff you need to get to grips with is small compared to all the stuff you already know.

And in some cases, the skills you already have become even more valuable. People might not care if your query tuning on physical kit takes your CPU down from 50% to 10%. But tell them you’ve just reduced their cloud bill by 80% and they really care!

So don’t be intimidated, and don’t feel you need to learn every flavour. Have a play with SQL Server in the cloud, have a play with containers, set up SQL on Linux. You’ll quickly find it’s not that hard, and once it’s running – it’s pretty much the same as ever.

And remember, if someone comes to you with a question about why SQL is running slow, or why a query isn’t doing what they want – on RDS, Docker, Linux, or whatever – you don’t need to know that platform inside out to be able to help. You already know SQL Server, and that’s the important bit.

To paraphrase a popular lyric:

If you’ve got SQL problems I can help you, son. I’ve got 99 problems but SQL ain’t one.

Query Store stores dates as UTC

One of the powerful aspects of Query Store is the ability to directly query the DMVs for details of historical executions and performance.

A key view for this is sys.query_store_runtime_stats (but also sys.query_store_runtime_stats_interval). 

If you’re querying these views to look at what was happening during a particular time period then it’s important to understand that the dates and times are stored according to UTC time (which is equivalent to GMT with no adjustment for daylight savings times).

You can see this if you look at a few rows from the view:

SELECT runtime_stats_id, first_execution_time, last_execution_time
FROM sys.query_store_runtime_stats;

Though it will be more obvious to you if you’re in a time zone other than the UK.

The datetimes are stored as DATETIMEOFFSET, which you can see from the +00:00 at the end of each entry. DATETIMEOFFSET allows you to store both the datetime and the time zone offset being used.

This means that if you’re querying against these columns you need to convert the values you are looking for to UTC first. You can do that by making sure you use GETUTCDATE() instead of GETDATE(), or if you are using specific local times you can use AT TIME ZONE e.g.

SELECT CAST('2019-08-21 11:50:40.400 +9:00' AS datetimeoffset) AT TIME ZONE('UTC');
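For example, if I wanted the stats covering a local (UK) working day I could convert the window to UTC first – a rough sketch, where 'GMT Standard Time' is the Windows time zone name for the UK and the dates are just examples:

DECLARE @StartLocal datetime2 = '2019-08-21 09:00';
DECLARE @EndLocal datetime2 = '2019-08-21 17:00';
DECLARE @StartUTC datetimeoffset = (@StartLocal AT TIME ZONE 'GMT Standard Time') AT TIME ZONE 'UTC';
DECLARE @EndUTC datetimeoffset = (@EndLocal AT TIME ZONE 'GMT Standard Time') AT TIME ZONE 'UTC';

SELECT rs.runtime_stats_id, rs.first_execution_time, rs.last_execution_time, rs.avg_duration
FROM sys.query_store_runtime_stats rs
WHERE rs.first_execution_time >= @StartUTC
  AND rs.last_execution_time < @EndUTC;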

I’m a big fan of storing all datetimes as the UTC version. It avoids things going out of sequence due to daylight saving changes – and can be helpful in mitigating problems when you have application users in multiple countries accessing the same database.

I’ll admit I might be biased though as – being based in the UK – UTC is the same as my local time half the year.

Adding a Column with a Default Constraint

Last week a question came up about adding a column to a table and giving that column a default constraint. Would that default value be assigned to all existing rows, and how much processing would be involved?

Unsurprisingly, the answer is that – “it depends”.

I’ve got a table with about a million rows that just has an identity column and a text column I’ve populated from sys.objects:

CREATE TABLE dbo.TestAddColumn (Id int IDENTITY(1,1), TextValue sysname);
INSERT INTO dbo.TestAddColumn(TextValue)
SELECT a.name FROM sys.objects a, sys.objects b, sys.objects c;

Let’s add a nullable bit column and give it a default value of zero:

ALTER TABLE dbo.TestAddColumn
ADD NewFlag bit NULL CONSTRAINT DF_TestAddColumn_NewFlag DEFAULT 0;

If I look at the table I can see that the new column contains Null values:
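Here’s the sort of quick check I use to see that – I’ll rerun it after each variation below:

-- Sample a few rows and count the values in the new column
SELECT TOP (5) Id, TextValue, NewFlag
FROM dbo.TestAddColumn;

SELECT NewFlag, COUNT(*) AS NumberOfRows
FROM dbo.TestAddColumn
GROUP BY NewFlag;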

i.e. the default value that I’ve defined hasn’t been assigned for existing rows.

I remove the column and the default constraint:

ALTER TABLE dbo.TestAddColumn
DROP CONSTRAINT DF_TestAddColumn_NewFlag;
ALTER TABLE dbo.TestAddColumn DROP COLUMN NewFlag;

Now let’s add the same column but we’ll disallow Null values:

ALTER TABLE dbo.TestAddColumn
ADD NewFlag bit NOT NULL CONSTRAINT DF_TestAddColumn_NewFlag DEFAULT 0;

If we check the table again:

This time we can see that the default value has been assigned.

So whether the default value gets assigned to existing rows depends on whether the column is nullable: a nullable column will retain NULL as the value, while a non-nullable column will get assigned the new default value.

If you want to override that behaviour, and have your default assigned even where the column is nullable, you can use the WITH VALUES clause. First I’ll remove the constraint and column, then add it again with values:

ALTER TABLE dbo.TestAddColumn
ADD NewFlag bit NULL CONSTRAINT DF_TestAddColumn_NewFlag DEFAULT 0 WITH VALUES;

We look at the data again:

You can see that the value has been assigned even though the column is Nullable.

One neat thing to note is the performance impact when carrying out these actions.

Each time I added the column I captured the execution overhead using:

SET STATISTICS IO, TIME ON;

In all cases the resource usage measured was zero. The add column operation was a metadata-only operation – no data in the table needed to be updated – even where the new column was assigned a value.

This was some clever jiggery-pokery added in SQL Server 2012.
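If you want to see the contrast for yourself on a test copy, a sketch like this makes it obvious – the add column showed no I/O in my tests above, while an explicit update has to read and write every row:

SET STATISTICS IO, TIME ON;

-- Adding the column with a default: metadata-only, no rows touched
ALTER TABLE dbo.TestAddColumn
ADD NewFlag2 bit NOT NULL CONSTRAINT DF_TestAddColumn_NewFlag2 DEFAULT 0;

-- By contrast, an explicit update touches every row in the table
UPDATE dbo.TestAddColumn SET NewFlag2 = 0;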

Thanks for reading!

T-SQL Tuesday #114 Puzzle Party – Roundup

Here’s my round-up for this month’s T-SQL Tuesday.

Thanks to everyone who contributed last week. It was great reading your posts and seeing the different ways you interpreted the puzzle theme.

We had real-life problems, we had SQL coding questions, we had puzzles, we had solutions, we had games, and we had the imaginarium.

Hopefully I’ve included all the posts – if I’ve missed any, please yell and I’ll correct it as soon as possible!

Without further ado, here are the contributions…

Real-life problems

Michal Poreba presents us with a real-life performance tuning issue he faced. Can you solve it? https://dbain.wales/2019/05/14/the-sql-puzzle-party/

Kevin Chant shares an issue he faced with a SQL Agent job using PowerShell. Can you find another way of fixing it? https://www.kevinrchant.com/2019/05/14/t-sql-tuesday-114-powershell-puzzle/

Eugene Meidinger shows a real-life SQL puzzle from his own demos with heaps that he’d love an answer to… https://www.sqlgene.com/2019/05/14/t-sql-tuesday-114-an-unsolved-sql-puzzle/

Coding Questions

Kathi Kellenberger tells us about a deceptively tricky SQL question posed at a family party… https://auntkathisql.com/2019/05/14/may-2019-t-sql-tuesday-puzzle/

Jon Shaulis gives us a problem he was posed in a recruitment scenario, along with a detailed solution that demonstrates how he approaches finding the best solution… https://jonshaulis.com/index.php/2019/05/14/t-sql-tuesday-114-puzzle-party/

Kenneth Fisher asks us to solve some query problems using older versions of SQL Server – so we can appreciate just how much easier our lives are these days… https://sqlstudies.com/2019/05/14/how-was-it-done-back-in-the-day-t-sql-tuesday-114/

Games and Puzzles

Nate Johnson challenges us to write a SQL solution to help evaluating your hand in the card game cribbage… https://natethedba.wordpress.com/2019/05/14/t-sql-tuesday-114-a-puzzle/

Jason Brimhall supplies his solution for solving Sudoku puzzles using T-SQL as well as details of a database he uses to provide daily trivia questions to those he mentors in learning SQL… http://jasonbrimhall.info/2019/05/14/puzzles-and-daily-trivia/

Shane O’Neill takes us on a Loller-Coaster ride… https://nocolumnname.blog/2019/05/14/t-sql-tuesday-114-puzzle-party/

Steve Jones shares some puzzle resources from SQL Server Central and other sources as well as presenting his T-SQL solution to a Christmas coding question… https://voiceofthedba.com/2019/05/14/puzzles-t-sql-tuesday-114/

Is Bert Wagner a video star? The magic 8-ball says “Without a doubt!” https://bertwagner.com/2019/05/14/sql-server-magic-8-ball/

Rob Farley presents puzzles he’s encountered and played with before and reminds us to never stop puzzling if we want to keep our minds sharp! http://blogs.lobsterpot.com.au/2019/05/14/puzzling-times/

The Imaginarium

Finally, in a category on his own, Todd Kleinhans talks about his SQL Imaginarium project. A VR experience for people new to SQL Server: https://toddkleinhans.wordpress.com/2019/05/14/project-sqlimaginarium-1-year-update/

Thanks again everyone and I look forward to next month’s episode of T-SQL Tuesday!

T-SQL Tuesday #114 – Puzzle Party

A few years back I started running regular SQL workshops in my workplace – teaching beginners the basics of querying databases with SQL, as well as covering more advanced topics for those further along.

During one session we were discussing the issue of knowledge acquired being quickly lost when people didn’t get the chance to regularly practice what they’d learnt. One of the attendees suggested that I should be assigning them homework.

I could see from the faces of everyone else present that the word “homework” struck an unpleasant chord – perhaps reminding them of school days, struggling to get boring bookwork done when they’d rather be relaxing or playing.

Okay, so homework maybe wasn’t going to go down well, but I figured everyone likes a good puzzle. So I started creating and sharing a puzzle to be solved using SQL every Friday. This went on for the best part of a year, then other things got in the way and gradually I stopped.

This is my invitation to you this T-SQL Tuesday. Write a blog post combining puzzles and T-SQL. There are quite a few ways you could approach this, so hopefully no-one need be left out for lack of ideas:

  • Present a puzzle to be solved in SQL and challenge your readers to solve it.
  • Or give us a puzzle or quiz about SQL or databases.
  • Show the SQL solution to a classic puzzle or game.
  • Provide a method for solving a classic sort of querying puzzle people face.
  • Show how newer features in SQL can be used to solve old puzzles in new ways.
  • Tell us about a time you solved a problem or overcame a technical challenge that was a real puzzle.
  • Or just make your own interpretation of “puzzle” and go for it!

There’s some great stuff out there already. Itzik Ben-Gan’s done a bunch of them. There’s Kenneth Fisher’s crosswords. The SQL Server Central questions of the day. Pinal Dave’s SQL Puzzles. And there’s a few on my blog too if you take a look back:

https://matthewmcgiffen.com/2017/06/07/sql-puzzle-1-magic-squares/

https://matthewmcgiffen.com/2017/07/19/sql-puzzle-2-eight-queens/

https://matthewmcgiffen.com/2017/08/22/sql-puzzle-3-knights-and-queens/

https://matthewmcgiffen.com/2017/11/08/sql-puzzle-4-the-beale-papers/

https://matthewmcgiffen.com/2017/12/19/sql-puzzle-5-prime-magic/

Let’s puzzle together, trying to solve the challenges each other sets, and make it a real puzzle party!

Have fun all 🙂

The Rules

  1. Your post must be published on Tuesday 14th May. This counts as long as it’s still Tuesday anywhere in the world.
  2. Include the T-SQL Tuesday Logo and make it link to this invitation post.
  3. Pingbacks should work, but to be sure, add a comment to this post with a link to your own so I know where to find it.
  4. Tweet about your post using the #tsql2sday hashtag.

MAXDOP, Parallelism and the Cloud

Parallelism and MAXDOP

The pros and cons of parallelism have always been with us in SQL Server and I blogged about this a couple of years ago. This is an updated version of that post, including details of the new wait stat related to parallelism that was added in SQL Server 2017 (CXCONSUMER), as well as a discussion of the options available for cloud-based SQL Server solutions.

There’s no doubt that parallelism in SQL is a great thing. It enables large queries to share the load across multiple processors and get the job done quicker.

However it’s important to understand that it has an overhead. There is extra effort involved in managing the separate streams of work and synchronising them back together to – for instance – present the results.

That can mean that in some cases adding more threads to a process doesn’t actually benefit us, and it can even slow down the overall execution.

We refer to the number of threads used in a query as the DOP (Degree of Parallelism) and in SQL Server we have the setting MAXDOP (Maximum Degree of Parallelism) which is the maximum DOP that will be used in executing a single query.

Microsoft generally recommend caution when setting MAXDOP above 8:

https://support.microsoft.com/en-gb/help/2806535/recommendations-and-guidelines-for-the-max-degree-of-parallelism-configuration-option-in-sql-server

Here’s a nice post from Kendra Little talking about how having higher settings can actually slow down your query execution time:

https://www.brentozar.com/archive/2013/12/q-can-high-maxdop-make-a-query-slower/

Out of the box, MAXDOP is set to 0, which means there is no limit to the DOP for an individual query. It is almost always worth changing this to a more optimal setting for your workload.
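For reference, changing it is straightforward with sp_configure – the value of 8 below is just an example, not a recommendation for every workload:

-- MAXDOP is an advanced option, so expose advanced options first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max degree of parallelism', 8; -- example value only
RECONFIGURE;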

Cost Threshold for Parallelism

This is another setting available to us in SQL Server, and it defines the cost level at which SQL will consider a parallel execution for a query. Out of the box this is set to 5, which is actually a pretty low number. Query costing is based on algorithms calibrated against “Nick’s machine” – the box used by the original developer who benchmarked queries for Microsoft.

(Nick’s Machine)

Compared to modern servers Nick’s machine was pretty slow, and as the Cost Threshold default hasn’t changed in many years it’s now generally considered too low for modern workloads and hardware. In reality we don’t want all our tiny queries to go parallel, as the benefit is negligible and can even be negative, so it’s worth upping this number. Advice varies, but recommendations generally say to set it somewhere in the range of 30 to 50 (and then tune up and down based on your production workload).
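Changing it looks much the same as changing MAXDOP – here’s a sketch using 40 purely as an example starting point:

-- Cost threshold for parallelism is also an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'cost threshold for parallelism', 40; -- example starting point, tune from here
RECONFIGURE;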

There are many articles in the SQL Server community about how the out of the box setting is too low, and asking Microsoft to change it. Here’s a recent one:

http://www.scarydba.com/2017/03/13/change-the-cost-threshold-for-parallelism/

CXPACKET and CXCONSUMER waits

Often in tuning a SQL Server instance we will look at wait stats – which tell us what queries have been waiting for when they run. CXPACKET waits are usually associated with parallelism and particularly the case where multi-threaded queries have been stuck waiting for one or more of the threads to complete – i.e. the threads are taking different lengths of time because the load hasn’t been split evenly. Brent Ozar talks about that here:

https://www.brentozar.com/archive/2013/08/what-is-the-cxpacket-wait-type-and-how-do-you-reduce-it/

High CXPACKET waits can be – but aren’t necessarily – a problem. You can cure CXPACKET waits by simply setting MAXDOP to 1 at a server level (thus preventing parallelism) – but this isn’t necessarily the right solution, though in some cases it can be; SharePoint, for instance, is best run with MAXDOP set to 1.

What you can definitely deduce from high CXPACKET waits however is that there is a lot of parallelism going on and that it is worth looking at your settings.

To make it easier to identify issues with parallelism, with SQL Server 2017 CU3 Microsoft added a second wait type related to parallelism – CXCONSUMER. This wait type was also added to SQL Server 2016 in SP2.

Waits related to parallelism are now split between CXPACKET and CXCONSUMER.

Here’s the original announcement from Microsoft regarding the change and giving more details:

https://blogs.msdn.microsoft.com/sql_server_team/making-parallelism-waits-actionable/

In brief, moving forward CXPACKET waits are the ones you might want to worry about, and CXCONSUMER waits are generally benign, encountered as a normal part of parallel execution.
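If you want a quick look at how much parallelism-related waiting has accumulated on your instance, a simple query against the wait stats DMV along these lines does the job (the numbers are cumulative since the last restart or stats clear):

SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('CXPACKET', 'CXCONSUMER')
ORDER BY wait_time_ms DESC;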

Tuning Parallelism

In tuning parallelism we need to think about how we want different sized queries to act on our server.

Small Queries

In general we don’t want these to go parallel so we up the Cost Threshold to an appropriate number to avoid this. As discussed above 30 is a good number to start with. You can also query your plan cache and look at the actual costs of queries that have been executed on your SQL Instance to get a more accurate idea of where you want to set this. Grant Fritchey has an example of how to do that here:

http://www.scarydba.com/2017/02/20/estimated-costs-queries/

As he mentions in the post, this assessment can be quite expensive to run – so do it when things are quiet.

Medium to Large Queries

This is where we want to take advantage of parallelism, and do so by setting MAXDOP to an appropriate level. Follow the guidelines from the Microsoft article referenced above. Here it is again:

https://support.microsoft.com/en-gb/help/2806535/recommendations-and-guidelines-for-the-max-degree-of-parallelism-configuration-option-in-sql-server

Often the answer is going to be simply to set it to 8 – but then experiment with tuning it up and down slightly to see whether that makes things better or worse.

Very Large Queries

If we have a mixed workload on our server which includes some very expensive queries – possibly for reporting purposes – then we may want to look at upping the MAXDOP for these queries to allow them to take advantage of more processors. One thing to consider though is – do we really want these queries running during the day when things are busy? Ideally they should run in quieter times. If they must run during the day, then do we want to avoid them taking over all the server power and blocking our production workload? In which case we might just let them run at the MAXDOP defined above.

If we decide we want to let them have the extra power then we can override the server MAXDOP setting with a query hint OPTION(MAXDOP n):

https://docs.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-query
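As a rough sketch – the table and the MAXDOP value here are just placeholders:

-- Let a big reporting query use more threads than the server-level setting
SELECT SomeColumn, COUNT(*) AS RowsPerValue
FROM dbo.SomeBigTable -- placeholder table
GROUP BY SomeColumn
OPTION (MAXDOP 16); -- example value only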

You will want to experiment to find the “best” value for the given query. As discussed above and as shown in Kendra Little’s article, just setting it to the maximum number of cores available isn’t necessarily going to be the fastest option.

Exceptions to the Rule

Regardless of the size, there are some queries that just don’t benefit from parallelism so you may need to assess them on an individual basis to find the right degree of parallelism to use.

With SQL Server you can specify MAXDOP at the server level, but you can also override it at the database level using a DATABASE SCOPED CONFIGURATION, or for individual queries using a query hint. There are even other ways you can control this:

https://www.brentozar.com/archive/2016/12/ten-ways-set-maxdop/

Options in the Cloud

If your SQL Server is hosted in the cloud, then most of this still applies. You still need to think about tuning parallelism – it isn’t done for you, and the defaults are the same – so probably not optimal for most workloads.

There are in general two flavours of cloud implementation. The first is Infrastructure as a Service (IaaS) where you simply have a VM provided by your cloud provider and run an OS with SQL server on top of it in that VM. Regardless of your cloud provider (e.g. Azure, AWS etc.), if you’re using IaaS for SQL Server then the same rules apply, and you go about tuning parallelism in exactly the same way.

The other type of cloud approach is Platform as a Service (PaaS). This is where you use a managed service for SQL Server. This would include Azure SQL Database, Azure SQL Database Managed Instance, and Amazon RDS for SQL Server. In these cases, the rules still apply, but how you manage these settings may differ. Let’s look at that for the three PaaS options mentioned above.

Azure SQL Database

This is a single SQL Server database hosted in Azure. You don’t have access to server level settings, so you can’t change MAXDOP or the cost threshold. You can however specify MAXDOP at the database level e.g.

ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;

The Cost Threshold for Parallelism, however, is not available to change in Azure SQL Database.

Azure SQL Database Managed Instance

This presents you with something that looks very much like the SQL Server you are used to – you just can’t access the box behind it. And, similar to a regular SQL instance, you can set MAXDOP and the Cost Threshold as normal.

Amazon RDS for SQL Server

This is similar to Managed Instance. It looks and acts like SQL Server, but you can’t access the machine or OS. You access your RDS instance through an account with more limited permissions than your usual sa account or sysadmin role allows. And one of the things you can’t do with your limited permissions is change the parallelism settings.

Amazon have provided a way around this though and you can change both settings using something called a parameter group:

https://www.mssqltips.com/sqlservertip/5329/setting-sql-server-configuration-options-with-aws-rds-parameter-groups/

Closing Thoughts

Parallelism is a powerful tool at our disposal, but like all tools it should be used wisely and not thrown at every query to its maximum – and this is often what happens with the out of the box settings on SQL Server. Tuning parallelism is not a knee-jerk reaction to high CXPACKET waits, but something we should be considering carefully in all our SQL Server implementations.

Acknowledgements

I wanted to update my original article to include the cloud options noted above, but didn’t have access to an Azure SQL Database Managed Instance to check the state of play. Thanks to Travis Garland via Twitter (@RockyTopDBA) and Chrissy LeMaire via the SQL community Slack (@cl) for checking this and letting me know!

Finding “Orphaned” database files – script

When you drop a database from a SQL Server instance the underlying files are usually removed. This doesn’t happen however if you set the database to be offline first, or if you detach the database rather than dropping it.

The scenario with offline databases is the one that occurs most often in practice. I might ask if a database is no longer in use and whether I can remove it. A common response is that people don’t think it’s in use, but can I take it offline and we’ll see if anyone screams. I’ll often put a note in my calendar to remove it after a few weeks if no-one has complained. When I do come to remove it, hopefully I’ll remember to put it back online before I drop it so the files get removed, but sometimes I might forget, and in an environment where many people have permissions to create and drop databases you can end up with a lot of files left behind for databases that no longer exist – these are what I’m referring to as orphaned files.

Obviously this shouldn’t happen in production environments where change should be carefully controlled, but if you manage a lot of development and test environments this can certainly occur.

So I created a script I can run on an instance to identify any files in its default data and log directories that are not related to any databases on the instance. Here it is:

--Orphaned database files
DECLARE @DefaultDataPath VARCHAR(512);
DECLARE @DefaultLogPath VARCHAR(512);

SET @DefaultDataPath = CAST(SERVERPROPERTY('InstanceDefaultDataPath') AS VARCHAR(512));
SET @DefaultLogPath = CAST(SERVERPROPERTY('InstanceDefaultLogPath') AS VARCHAR(512));

IF OBJECT_ID('tempdb..#Files') IS NOT NULL
   DROP TABLE #Files;

CREATE TABLE #Files (
   Id INT IDENTITY(1,1),
   [FileName] NVARCHAR(512),
   Depth SMALLINT,
   FileFlag BIT,
   Directory VARCHAR(512) NULL,
   FullFilePath VARCHAR(512) NULL);

INSERT INTO #Files ([FileName], Depth, FileFlag)
EXEC master..xp_dirtree @DefaultDataPath, 1, 1;

UPDATE #Files
SET Directory = @DefaultDataPath, FullFilePath = @DefaultDataPath + [FileName]
WHERE Directory IS NULL;

INSERT INTO #Files ([FileName], Depth, FileFlag)
EXEC master..xp_dirtree @DefaultLogPath, 1, 1;

UPDATE #Files
SET Directory = @DefaultLogPath, FullFilePath = @DefaultLogPath + [FileName]
WHERE Directory IS NULL;

SELECT
   f.[FileName],
   f.Directory,
   f.FullFilePath
FROM #Files f
LEFT JOIN sys.master_files mf
   ON f.FullFilePath = REPLACE(mf.physical_name, '\\', '\')
WHERE mf.physical_name IS NULL
   AND f.FileFlag = 1
ORDER BY f.[FileName], f.Directory;

DROP TABLE #Files;

I wouldn’t say that you can just go delete these once you’ve identified them, but at least now you have a list and can investigate further.

By the way, you might notice a nasty REPLACE in the join condition of the above query. This is to deal with instances where the default directories have been defined with a double backslash at the end. SQL Server setup allows this and it doesn’t cause any day-to-day problems, but it can make this sort of automation challenging. I’ve included it in this query as I’ve encountered a few people having this issue. In general I’d avoid such joins like the plague.

Making things more complicated

One complication can be where you have multiple SQL Server instances on the same server. This isn’t greatly recommended in production, but is common in dev\test. Where a database has been migrated from one instance to another, it’s possible that wasn’t done correctly and the files still exist under the directories for the old instance – you might then see them as orphaned. I previously posted a script to identify such files:

https://matthewmcgiffen.com/2018/04/24/database-files-down-the-wrong-path/

Also in the case of multiple instances, you might want to report across all of them at once. For that you could use SQL CMS. I’ve posted on that too:

https://matthewmcgiffen.com/2018/04/10/cms-effortlessly-run-queries-against-multiple-sql-servers-at-once/

Combining these three techniques makes it relatively easy to identify files that are probably no longer needed. You can get a list of all files that don’t belong to databases on the instances they live under, correlate that to any files that are down the wrong path for any of your instances, then look at what’s left over.

Using the Sequence Object to Generate Reference Numbers

The SEQUENCE object was added to T-SQL in SQL Server 2012. It’s reasonably well known to DBAs, but less so to developers or those new to SQL, so I thought I’d produce a quick post to demonstrate its use.

In basic terms, a SEQUENCE is a way of generating a sequence of numerical values. The following are examples of sequences you could generate:

1, 2, 3, 4, 5, 6, 7, 8, 9…

1, 6, 11, 16, 21, 26, 31…

1000, 1001, 1002, 1003, 1004…

You can pick a starting number (and an ending number if you want), a data type (which might produce a natural maximum to the size of the sequence) and an increment (i.e. how much you want to be added to produce the next number in the sequence). There are other options, but I’m going to focus on the simplest use case. You can find the full documentation here:

https://docs.microsoft.com/en-us/sql/t-sql/statements/create-sequence-transact-sql?view=sql-server-2017

So, let’s define my use case. I have a table to hold customer orders. For each record I want to define an Order Reference Number of the format ORN0000000001.

Now you could implement something using an IDENTITY column to manage this – but there may be times when that is not ideal, for instance your table may not already have a suitable identity to use (you might have a unique identifier as the primary key) and if you want to store the actual reference then you’d need to add an IDENTITY column in addition to the reference column. Or you might need a reference that is unique across multiple tables.

The SEQUENCE object is also designed to be faster than IDENTITY, creating less blocking when you have a lot of concurrent inserts.

First of all, creating the sequence to generate the numeric part of my reference is easy. Let’s say that a bunch of reference numbers have already been used, so I want to start with ORN0000100001.

Let’s look at the SQL…

CREATE SEQUENCE dbo.OrderRefSequence
    AS bigint
    START WITH 100001
    INCREMENT BY 1;

Then I can request numbers from the sequence using NEXT VALUE FOR e.g.

SELECT NEXT VALUE FOR dbo.OrderRefSequence; 

The first time I run that I get the starting number 100,001.

Another nice addition to SQL Server 2012 was the FORMAT function which we can use to format the number into a string whilst padding it with leading zeroes and adding the text prefix:

SELECT FORMAT(NEXT VALUE FOR dbo.OrderRefSequence, 'ORN0000000000#');


That returns me ORN00000100002.

If I keep executing it then the reference increases:

ORN00000100003

ORN00000100004

ORN00000100005

ORN00000100006…

So, now I can just use that when inserting values to my table to get a new reference number each time.

But, what’s even nicer is that you can do it all by defining a default for your column and referencing the sequence in the default.

I’ll create the following table:

CREATE TABLE dbo.Orders (
    Id UNIQUEIDENTIFIER PRIMARY KEY DEFAULT NEWSEQUENTIALID(),
    CustomerId UNIQUEIDENTIFIER NOT NULL,
    OrderReference VARCHAR(20) DEFAULT (FORMAT(NEXT VALUE FOR dbo.OrderRefSequence, 'ORN0000000000#')),
    OrderDate DATETIME DEFAULT (GETUTCDATE()));

You can see that the OrderReference is defined with a default using our sequence object.

I insert a bunch of rows into the table. For the sake of this rather contrived example, I only need to specify the CustomerId. I do that by generating a bunch of random unique identifiers – one for each row in the sys.objects table.

INSERT INTO dbo.Orders (CustomerId)
SELECT NEWID() FROM sys.objects;

Let’s have a look at an extract from the table:
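A query along these lines does the job (any variation works):

SELECT TOP (10) Id, CustomerId, OrderReference, OrderDate
FROM dbo.Orders
ORDER BY OrderReference;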

You can see I’ve got a nice series of ascending, non-duplicating reference numbers.

One thing to note is that, while the sequence will generally produce unique numbers, it is still worth enforcing that in your table definition with a unique constraint i.e.

ALTER TABLE dbo.Orders ADD CONSTRAINT UQ_Orders_OrderReference UNIQUE(OrderReference);

This prevents someone from issuing an UPDATE command that might create a duplicate reference. Also, if the sequence is created with the CYCLE option then, once it runs out of numbers, it will repeat back at the beginning; with the default of NO CYCLE you get an error instead. Either way this is unlikely to be an issue in most applications if you’re using a bigint for the sequence.
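If you’d rather not rely on the defaults, you can be explicit about the range and cycling behaviour when you create the sequence – a sketch, with the bounds just examples:

CREATE SEQUENCE dbo.OrderRefSequenceExplicit
    AS bigint
    START WITH 100001
    INCREMENT BY 1
    MINVALUE 100001
    MAXVALUE 99999999999
    NO CYCLE; -- raise an error rather than wrapping back to MINVALUE when exhausted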

There was also a bug in some versions of SQL 2012 and 2014 that meant a duplicate could get created when your server was under memory pressure:

https://support.microsoft.com/en-gb/help/3011465/fix-sequence-object-generates-duplicate-sequence-values-when-sql-serve

This was fixed with SQL Server 2012 SP2 CU4 and SQL Server 2014 CU6 – but it’s better to be safe than sorry.

As a final note, it’s worth remembering that under the GDPR these sorts of references are defined as personal data. That’s one good reason never to use them as the primary key of your table (there are many others) – but it’s also a reason why, where you already have an identity-based primary key that you could use to generate the references, it may be worth decoupling them from the primary key and basing them on a separate sequence instead.