T-SQL Tuesday #96: Folks Who Have Made A Difference

 

This T-SQL Tuesday was an interesting one, and I’ve been spending quite a few hours thinking about how to formulate my thoughts. Unfortunately the end of said Tuesday is nearing with unreasonable speed, so I’ll have to be brief. This month’s challenge comes from Ewald Cress and is to recognize those who have made a meaningful contribution in your life in the world of data.

There is a saying that “all good deeds are only made possible by people standing on the shoulders of giants”. That in turn requires an environment of sharing and nurturing – the more the better. There are many giants in our community; way more than I could possibly write about in a single blog post.

While the number of people having made meaningful contributions to my life in data is huge, two names stand out. The first is Jes Borland (@grrl_geek) who, despite having yet to talk to me in person, gave me the push to stop merely thinking about speaking and actually do something to get there. As is often the case, the small actions that seem somewhat insignificant at the time turn out to be the pivotal ones. There is another saying that “no raindrop ever believes it is to blame for the flood”, but the short words of encouragement from Jes started the trickle that became a stream and is now heading full tilt toward becoming a flood.

And “flood” is a good way to describe the other person: Cathrine Wilhelmsen (@cathrinew). From the moment I bumped into the Norwegian Whirlwind at Summit 2016, my life has not been the same. She introduced me to every hot shot in the business, all while my eyes were as large as saucers and I had a seriously hard time keeping up.

This year at Summit, I had a fair amount of steam going in. Thanks to the many acquaintances I made the year before, it was easier to socialize and make even more friends. I did what Jes and Cathrine had done for me, each in their own way, and tried to help first-timers and people eager but lacking direction to connect with people who might get them further than I could. This way I got to meet Laura Muise (@Laura_SQL, an amazing woman who will undoubtedly go far in our world of data) and maybe, hopefully, gave her something that will in turn enable her to pay it forward when the time comes. I can’t wait for her to speak at her first SQL Saturday.

Thinking about this topic while teaching a SQL Server course this week, I came to the realization that the folks who have made the biggest difference are the #SQLFamily itself. The giants I wrote about earlier are all regular people – people just like you and me. For some people (albeit a very small number) I am the giant. Many, many others are my giants. The point is that a community as focused on giving as the SQL Server community becomes self-sustaining in giants.

I’d like to give the biggest shout-out to you.

All of you.

You make meaningful contributions to my life of data every day.

Thank you for being part of the most awesome community there is.

 

 

Another kind of contention in TEMPDB

I’ve been helping troubleshoot a SQL Server 2014 instance that kind of tips over every hour when a job is run on one of the application servers.

All eight cores of the server go through the roof together with the counter for batch requests/sec – and then everything grinds to a halt. This behavior continues for about ten minutes (until the application servers are done with their horrors) and then the server recovers back down to 5-10% CPU load and everyone breathes a sigh of relief. For about an hour.

So how does one troubleshoot an issue like this? Well, the first thing I did was to look at the wait statistics during one of these windows of disaster. This can be done in one of several ways, but I ran a script that takes a snapshot of sys.dm_os_wait_stats, waits a number of seconds (30 in my specific case), takes a new snapshot and calculates the difference (I used Paul Randal’s script HERE). This way I’m not burdened with whatever the server has been doing up to that point in time.
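For reference, the core of that approach boils down to something like this – a minimal sketch of the snapshot/diff idea, not a replacement for Paul’s far more thorough script:

-- Snapshot, wait, snapshot again and diff the cumulative wait stats
SELECT wait_type, waiting_tasks_count, wait_time_ms
INTO #waits_before
FROM sys.dm_os_wait_stats;

WAITFOR DELAY '00:00:30';  -- the sampling window

SELECT w.wait_type,
       w.waiting_tasks_count - b.waiting_tasks_count AS waiting_tasks_delta,
       w.wait_time_ms - b.wait_time_ms AS wait_time_ms_delta
FROM sys.dm_os_wait_stats AS w
JOIN #waits_before AS b ON b.wait_type = w.wait_type
WHERE w.wait_time_ms - b.wait_time_ms > 0
ORDER BY wait_time_ms_delta DESC;

DROP TABLE #waits_before;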

From the result one thing sticks out somewhat – PAGELATCH_EX. So what IS PAGELATCH_EX anyway? A full explanation is beyond the scope of this blog post, but Paul Randal (as usual) has a great writeup about the wait HERE.

The next step was to find out what query was causing this wait using sys.dm_os_waiting_tasks and filtering on wait_type=’PAGELATCH_EX’.

SELECT *
FROM sys.dm_os_waiting_tasks
WHERE wait_type = 'PAGELATCH_EX';

 

I ran this several times to see if there was a pattern to the madness, and it turned out there was. All waits were concentrated in database ID 2 – TEMPDB. Many people perk up at this point and jump to the conclusion that this is your garden variety SGAM/PFS contention – easily remedied with more TEMPDB files and a trace flag. But alas – this was further inside TEMPDB. The output from the query above gave me the exact page number, and plugging that into DBCC PAGE gives the metadata object ID.
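For page latch waits, the resource_description column has the form database_id:file_id:page_id, so the parts can be pried apart with a quick sketch like this (using PARSENAME as a cheap splitter):

SELECT session_id,
       wait_type,
       resource_description,
       PARSENAME(REPLACE(resource_description, ':', '.'), 3) AS database_id,
       PARSENAME(REPLACE(resource_description, ':', '.'), 2) AS file_id,
       PARSENAME(REPLACE(resource_description, ':', '.'), 1) AS page_id
FROM sys.dm_os_waiting_tasks
WHERE wait_type = 'PAGELATCH_EX';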

DBCC TRACEON(3604)   -- send DBCC PAGE output to the client instead of the error log
GO
DBCC PAGE(2,7,1639,0)   -- database ID 2 (TEMPDB), file ID 7, page ID 1639, print option 0
GO

 

 

This ID can then be run through the OBJECT_NAME function to get the name of the object in question.
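The object ID shows up as “Metadata: ObjectId” in the DBCC PAGE output. Assuming a hypothetical value of 34, the lookup would look like this (the second argument to OBJECT_NAME is the database ID, and 2 is TEMPDB):

SELECT OBJECT_NAME(34, 2) AS object_name;  -- 34 is a placeholder for the ObjectId value from DBCC PAGE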

 

 

 

It turned out to be sys.sysschobjs, which can be read about in detail HERE. In short, it is a system base table inside TEMPDB that contains one row per object in TEMPDB.

Thus there is apparently contention for an object that tracks the existence of tables in TEMPDB. This pretty much comes down to the server creating and demolishing TEMPDB tables at a rate that is simply too high for the poor server to handle. To see just how many TEMPDB tables were actually created and destroyed during these windows of despair, I took a look at the performance monitor counters SQL Server: General Statistics/Temp Tables Creation Rate and Temp Tables For Destruction. During most of the day these hovered around 1-2 tables created per second and 7-8 in the queue for destruction. Then this happened.

 

It is kind of obvious WHEN this window opens, yes? The numbers soar to 27 tables created per second and some 280 queued for destruction.
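If performance monitor is not handy, the same counters can be sampled from T-SQL as well – note that Temp Tables Creation Rate is a cumulative counter, so it has to be sampled twice and diffed to get a per-second figure:

SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%General Statistics%'
  AND counter_name IN ('Temp Tables Creation Rate', 'Temp Tables For Destruction');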

So, to recap: the job on the application servers is creating temporary tables at a rate with which the database simply can’t keep up. This leads to PAGELATCH_EX waits on sysschobjs and this shows as high CPU.

The million dollar question boils down to “what can be done about it?”. Not much, it turns out. In this case I was assisting the vendor and they have some homework to do with their code. There are essentially two things that can be done from their end: either cut down on the use of TEMPDB tables or implement an in-memory table instead. Now, this behavior might very well be due to a bug where the TEMPDB table caching is not working as expected or someone has mistakenly put a CREATE TABLE #WHATNOT on the wrong side of a loop, but I’ll have to get back to you on that one.
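To illustrate the in-memory route: assuming the vendor is on SQL Server 2014 or later and can add a MEMORY_OPTIMIZED_DATA filegroup, a frequently recreated temp table could be swapped for a memory-optimized table type along these lines (table and column names are made up for the example):

CREATE TYPE dbo.StagingRows AS TABLE
(
    Id      int           NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload nvarchar(200) NULL
)
WITH (MEMORY_OPTIMIZED = ON);
GO
-- Used as a table variable instead of CREATE TABLE #Staging
DECLARE @Staging dbo.StagingRows;

Since the type lives in the user database and is created once, nothing is created and dropped at runtime, which sidesteps the sysschobjs churn entirely.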

From the customer’s side there is nothing to do but throw hardware at the problem and impatiently wait for a fix from the vendor.

 

Azure database maintenance using Azure automation

One of the things often overlooked with Azure SQL databases is the fact that they need love in exactly the same way as their on-premises brethren. Sure, Azure will keep an eye out for the most blatant issues, but the old adage “an ounce of prevention is worth a pound of cure” still holds water. Pieter Vanhove did a great blog post a few years back, but since then Larry Silverman has made some updates to the original script, so I decided to revisit the whole thing and put everything in one place. A big thank you to both Pieter and Larry for the excellent groundwork.

The trouble with Azure SQL Database is the lack of a SQL Agent to schedule the usual scripts – like Ola Hallengren’s excellent Maintenance Solution. Another issue is the lack of cross-database scripting support in Azure, which makes executing Ola’s scripts from a central DBA database somewhere impossible.

There is a way to have Ola’s script maintain each and every database on your Azure SQL Database server, but due to the lack of cross-database scripting mentioned earlier, it requires that the procedures IndexOptimize and CommandExecute as well as the table CommandLog be set up in each and every database you deploy to your server. I’d recommend putting them in their own schema to be easily distinguished from the rest of the database, and in the example below I use the schema “dba”.

Download IndexOptimize.sql, CommandExecute.sql and CommandLog.sql from Ola’s site here.

Open them up in a text editor and run a search/replace to change all instances of dbo to dba. This is to make sure the objects end up where they are supposed to. Then let’s get cooking:

1. Create a DBA schema in the Azure SQL Database.

CREATE SCHEMA dba
GO

 

2. Run the three scripts you downloaded earlier, making sure that the two procedures and the table end up in the DBA schema.

At this stage you can test the scripts by running the following code and taking a look in the dba.CommandLog table:

EXECUTE dba.IndexOptimize
@Databases = 'USER_DATABASES',
@FragmentationLow = 'INDEX_REORGANIZE',
@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationLevel1 = 5,
@FragmentationLevel2 = 30,
@UpdateStatistics = 'ALL',
@LogToTable = 'Y',
@Execute = 'Y'
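Assuming the objects ended up in the dba schema as intended, the most recent log entries can then be pulled up with something like:

SELECT TOP (20) *
FROM dba.CommandLog
ORDER BY ID DESC;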

 

3. Let’s create an automation account.


Click “Add” to create one.

Then we input a name for the account, a resource group and a location. For performance reasons it’s best to place the automation account in the same datacenter as the database itself.

With the account set up, go ahead and click on “credentials” on the left. We need to set up a stored credential to enable the runbook to log into the database server.


Set a name for the credential (this will be referenced later), then input the administrative user for the database server and the password for said user.


Go ahead and click on “overview”, followed by “runbooks”.

On the runbook blade, click “Add a runbook” and fill out the name and the runbook type. The name must be the same as in the code we’re going to paste in and the type must be PowerShell Workflow.

Paste in the following code, overwriting the text that’s already there:

<#
.SYNOPSIS
Perform index maintenance

.DESCRIPTION
This runbook uses Azure Automation to perform reindexing and statistics maintenance of all databases on a target server.
As prerequisite, please create an Azure Automation credential asset that contains the username and password for the target Azure SQL DB logical server ($SqlServerName).
Make sure that you have installed the scripts IndexOptimize.sql, CommandLog.sql and CommandExecute.sql of Ola Hallengren (https://ola.hallengren.com/downloads.html).
Make sure to get Ola's modified scripts which work on Azure here: https://ola.hallengren.com/downloads.html

.EXAMPLE
SQLServerIndexMaintenance

.NOTES
AUTHOR: Original author Pieter Vanhove: http://pietervanhove.azurewebsites.net/?p=14137
Heavily modified by Larry Silverman, CTO, TrackAbout
Very slightly modified by Alexander Arvidsson: http://www.arcticdba.se
#>

workflow AzureMaint
{
    param (
        # Fully-qualified name of the Azure DB server
        [parameter(Mandatory=$true)]
        [string] $SqlServerName,

        # Credentials for $SqlServerName stored as an Azure Automation credential asset
        # When using in the Azure Automation UI, please enter the name of the credential asset for the "Credential" parameter
        [parameter(Mandatory=$true)]
        [PSCredential] $Credential
    )

    inlinescript {
        # Set up credentials
        $ServerName = $Using:SqlServerName
        $UserId = $Using:Credential.UserName
        $Password = ($Using:Credential).GetNetworkCredential().Password
        $databases = @()

        # Create connection to Master DB
        Try {
            $MasterDatabaseConnection = New-Object System.Data.SqlClient.SqlConnection
            $MasterDatabaseConnection.ConnectionString = "Server = $ServerName; Database = Master; User ID = $UserId; Password = $Password;"
            $MasterDatabaseConnection.Open();

            # Create command to query the name of active databases in $ServerName
            $MasterDatabaseCommand = New-Object System.Data.SqlClient.SqlCommand
            $MasterDatabaseCommand.Connection = $MasterDatabaseConnection
            $MasterDatabaseCommand.CommandText =
                "
                select name from sys.databases
                where state_desc='ONLINE'
                and name <> 'master'
                "

            $MasterDbResult = $MasterDatabaseCommand.ExecuteReader()

            while($MasterDbResult.Read()) {
                $databases += @($MasterDbResult[0].ToString())
            }
        }
        # Catch errors connecting to master database.
        Catch {
            Write-Error $_
        }
        Finally {
            if ($null -ne $MasterDatabaseConnection) {
                $MasterDatabaseConnection.Close()
                $MasterDatabaseConnection.Dispose()
            }
        }

        # Create connection for each individual database
        # Iterate through each database under $ServerName
        foreach ($DbName in $databases) {
            Try {
                # Setup connection string for $DbName
                $ChildDatabaseConnection = New-Object System.Data.SqlClient.SqlConnection
                $ChildDatabaseConnection.ConnectionString = "Server=$ServerName; Database=$DbName; User ID=$UserId; Password=$Password;"
                $ChildDatabaseConnection.Open();

                # Create command for a specific database $DBName
                $DatabaseCommand = New-Object System.Data.SqlClient.SqlCommand
                $DatabaseCommand.Connection = $ChildDatabaseConnection

                Write-Output "Performing index and statistics maintenance on $DbName"

                $DatabaseCommand.CommandText = "
                    EXECUTE dba.IndexOptimize
                    @Databases = '" + $DbName + "',
                    @FragmentationLow = NULL,
                    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
                    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
                    @FragmentationLevel1 = 5,
                    @FragmentationLevel2 = 30,
                    @UpdateStatistics = 'ALL',
                    @OnlyModifiedStatistics = 'Y',
                    @LogToTable = 'Y'
                    "
                $DatabaseCommand.CommandTimeout = 0
                # Write-Output $DatabaseCommand.CommandText
                $NonQueryResult = $DatabaseCommand.ExecuteNonQuery()
            }

            # Inner catch for individual database failures.
            # We want to keep processing the next database.
            Catch {
                Write-Error $_
            }

            Finally {
                if ($null -ne $ChildDatabaseConnection)
                {
                    $ChildDatabaseConnection.Close()
                    $ChildDatabaseConnection.Dispose()
                }
            }
        }
    }
}

 

I’ve made some very, very slight modifications to the script that Larry Silverman in turn modified from Pieter Vanhove’s original. My modifications consist of changing the schema name from dbo to dba and turning on logging.


At this stage it’s a good idea to test the runbook, so click “Test pane” up top. Input the parameters (the fully qualified server name and the name of the credential we set up previously) and make sure everything runs as it should.

If everything worked as expected, let’s go ahead and schedule the runbook.

Click schedule.

This part looks a lot like scheduling a job via the agent.

This is the only difference from using the agent. Input the parameters required (database server and credential for the server administrator) and you’re good to go.

A word of caution: as each and every database will hold its own copy of the scripts and table, it’s imperative to keep them at the same version. That is, when updating the scripts in one database, make sure you do it in all the other databases on the same server.
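A low-tech way to spot version drift is to compare the header of the procedure definition in each database (run per database, since cross-database scripting is off the table) – a rough sketch:

-- Shows the start of the procedure definition, which typically contains Ola's version header, for manual comparison
SELECT DB_NAME() AS database_name,
       LEFT(OBJECT_DEFINITION(OBJECT_ID(N'dba.IndexOptimize')), 400) AS definition_header;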

“File is read only” when trying to change TCP settings in SQL Server Configuration Manager

Today I hit upon this bug that’s apparently been around for ages. In short, sometimes SQL Server Configuration Manager works as expected when it comes to enabling and disabling specific IP addresses for the instance to listen on, and sometimes you are greeted with a very irritating message telling you that “the specified file is read only”. Not very helpful, unfortunately.

Digging deeper into this, there is a fairly simple way to get around the issue, provided one is not afraid of dipping into the registry. All of the settings in the configuration manager are stored in the registry and can be edited there without the pesky error message. Go to the following key:

HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\***INST***\MSSQLServer\SuperSocketNetLib\Tcp\

where ***INST*** is the name of the instance you want to edit.

Under this registry key in the left-hand tree you will find subkeys for IP1, IP2 and so on – these correspond to what can be seen in the configuration manager. The next step is simply to click on the key for the IP you wish to edit and change whatever settings you want. In my case I wanted the instance to only listen on a specific IP, so I set all the other IPs to inactive (i.e. set the “Active” value to 0). A quick restart of the SQL Server service later and I was able to verify via the error log that the instance was indeed only listening on that specific IP.
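The error log is one way to verify; another is sys.dm_tcp_listener_states (available from SQL Server 2012 onwards), which makes for a quick sanity check from T-SQL after the restart:

SELECT listener_id, ip_address, port, type_desc, state_desc, start_time
FROM sys.dm_tcp_listener_states;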

PASS Summit and the #SQLFamily

PASS Summit 2016 was a week ago, and I’ve tried (unsuccessfully) to put my thoughts into words. I’ve read many excellent blog posts about the conference and I wholly agree with them all, but I wanted to articulate my feelings in a blog post of my own.

PASS Summit 2016 was my third Summit on paper, but was more like my first in reality. The previous two times, I went to the sessions, spent time on the exhibition floor, ate my lunch with other attendees and then I went back to my hotel to read up, do research, work or just watch TV.

The Summit lends itself to this kind of activity very well, as the sessions and speakers are generally excellent, there is fun stuff to be had, done and seen on the exhibition floor and the other attendees are very friendly. It has consistently been one of the best conferences I’ve been to, and I’ve been to quite a few by now.

But this year was to be something different. This summer I held a presentation at SQL Saturday #536 in Gothenburg, and there I had the pleasure of meeting a Norwegian MVP by the name of Cathrine Wilhelmsen. Not only is she firmly established in the PASS community, she also thrives on bringing new people in and helping others grow their networks. We kept in contact, she promised to introduce me to other members of the PASS community, and I went to Seattle with an open mind.

I had just picked up my badge when Cathrine happened. She grabbed hold of me and proceeded to physically haul me around the hall and introduce me to a multitude of people, several of whom I recognized from the community and whose blogs I frequently read. I must have looked like a deer in headlights, much to the delight of Cathrine. Among the people I was introduced to was the guy I went on to spend the week with – Adam Saxton (of Guy in a Cube fame). We hit it off immediately and hung out almost every night. He in turn introduced me to more people than I can remember, and for this I will be forever thankful to them both.

It proceeded to become the best conference week of my professional career. Let that sink in for a moment. The people I met and my growing network were the best thing that could have happened. I keep going on about how I’ve been doing this for close to 20 years, and while that is all good and dandy, it also means that *I’ve* been doing this. Singular. The magic happens when there’s more than one person, when ideas get exchanged and networks grow. The discussions I’ve had with amazing people over the week – over karaoke, drinks, food, pool or just hanging out at the conference center – have done more to boost my career to the next level than a lot of the technical work I’ve done over the last couple of years. Technical stuff is extremely important, but it is only part of the equation – the other parts are a willingness to share, wide-ranging contacts and a network of like-minded people.

That’s where the SQL Family comes into play, and the whole reason I needed some time to put my thoughts in order. At Ignite in Atlanta I had the idea to have silicone wristbands with the text #SQLFamily made up to give to people in the community. I got a good deal for 200 bands, and kind of expected to give out some 30 or so. This turned out to be the single best idea I’ve had in years – they took off on Twitter like crazy, and people from near and far sought me out to get one. I met even more amazing people this way and the reaction of everyone who saw the band was the same: “I love my #SQLFamily! Where can I get a band like that?” At the end of the conference I had less than 80 left.

wristband

The SQL Family is unlike anything I’ve seen – a collection of like-minded individuals who live and breathe the Microsoft data stack, ranging from wide-eyed newbies to hardened veterans such as Kalen Delaney or Bob Ward. Everyone is invited to come play, and everybody takes care of everyone else. This community is the reason I think PASS Summit 2016 was the best conference of my career, and this community is the reason I strive to share my knowledge through teaching, blogging and presenting. Together we are strong and together we can grow exponentially. Come join us!

SQLHangout #38 and Microsoft TechDays

My life is … hectic, to say the least. I just finished a talk at Atea IT-arena in Karlstad last week, as well as a 60-minute recap of Microsoft Ignite in Atlanta that I held yesterday for 40 people at AddSkills in Linköping. The crowd was very attentive and lots of good questions were raised. Hopefully even more Swedes will go to Ignite next year! I’m preparing to speak at the Atea IT-arena in Norrköping in late November as well as teaching two courses in December: 10986 (Upgrading your administration skills to SQL Server 2016) and 10989A (Analyzing data with Power BI). Full speed ahead!

At 18:00 CET on Monday the 17th of October I have the pleasure of joining MVP Cathrine Wilhelmsen for a chat about career transitions in IT. I’ve spent close to 20 years deep under the hood of database systems and only recently crawled up and decided to tackle something new – data visualization and the cloud. Or is it really new? That’s what we’ll cover on Monday, so be sure to tune in and don’t hesitate to tweet us if you have any questions!

SQLHangout #38 live stream

In November it’s time for Microsoft TechDays in Stockholm. I’m happy to say that I’ve received a speaking slot where I will be talking about Azure SQL Database – the cloud awakens. Or, as attendees will find out – what REALLY happened on the Death Star and what led to the downfall of the Empire. Few people know the inner workings of these events, and even fewer realize that they had a lot to do with IT…

SQL Database: the Cloud Awakens

 

Reducing SSIS anxiety

Story time again. This Tuesday I was to create a set of import packages in SQL Server Integration Services (SSIS) to automate data loading from a couple of my customers’ storage systems. Said data comes in the shape of several text files – some delimited, some fixed length, all of varying size and shape. Anyone who’s worked with SSIS knows how “fun” it is to click-click-click your way through the incomprehensibly boring GUI – a GUI that does NOT lend itself to any efficient work at all. We had been struggling with this for a couple of days before summer already, but the data format has changed and it’s basically the same amount of work to just redo the whole thing from scratch as to try to edit all the tiny details in all the gazillion boxes everywhere.

I can’t really say that I was looking forward to this work. Again. And I knew that if we made this proof-of-concept work, we’d have to do it several times over for each and every customer that wanted in on this idea. Ugh. Then something interesting happened.
This weekend I had the honour of presenting at SQL Saturday #536 in Gothenburg and I had the pleasure of meeting several amazing people. Among these was a Norwegian MVP named Cathrine Wilhelmsen, whose presentation “From accidental to efficient BI developer” I decided on a whim to attend. She showed some interesting things about the SQL Server Management Studio that I had no idea about, and I felt that those tidbits alone were well worth my time listening to someone talk about something I know basically nothing about.

Then she whipped out BIML and everything I thought I knew about SSIS went out the window. BIML stands for Business Intelligence Markup Language and is basically an XML wrapper language for generating SSIS packages. This way I can write a bunch of XML code and generate a package – or a hundred – over and over again. I don’t have to click in forty different places to change one thing in forty packages; I just change one thing in the XML and regenerate the packages. This. Is. Awesome. She showed us some websites and her blog, and home I went to study.

Tuesday came around, and I first spent about 20 minutes outlining BIML for my colleague, and then we set off – basically from scratch. We are both old hands at programming in different languages, so that part was nothing new for either of us. Using different websites, examples, blogs and the like, we managed to do in about six hours what we had more or less failed to do in three days. We now have a few hundred lines of code that does exactly what we want, is well documented(!), is easy to read and above all – is extensible. When I need to add a new file or two to the input, I copy and paste some code. Need to add a derived column? Sure thing, a tad more XML. And so on and so forth. We’ve barely scratched the surface of what BIML can do, but the key takeaway is this: I could go from zero knowledge of BIML and a VERY difficult relationship with SSIS to a fair grasp of the basics of BIML and a newfound appreciation of the power of SSIS in about six hours. When was the last time you had such a return on invested time? Exactly.
Now, BIML is definitely not new, but as a DBA I had not come across it earlier. Now I will sing the praises of BIML to all my DBA colleagues.

Now – I would not have attended Cathrine’s session had I not been invited to SQL Saturday in Gothenburg. I would not have been invited to Gothenburg had I not written an abstract and sent it for review to the organizers of said SQL Saturday. I would not have had the idea to send in an abstract had I not met representatives of PASS (the association for SQL Server and BI professionals) at PASS Summit in Seattle and seen first-hand what the community can do. I’ve only been a part of the PASS community for a couple of years, but I can’t praise it enough. The community is about sharing above all – knowledge, contacts, the works. The people I’ve met have all been friendly, caring, eager to help and very, very good. I rate getting involved in the community as one of the best and most pivotal choices of my professional career. My one regret is that I’m a bit late to the party. But I’m bringing cookies and 20 years’ worth of experience…

Useful links:
http://www.cathrinewilhelmsen.net/
http://bimlscript.com
http://www.sqlservercentral.com/stairway/100550/
https://bidshelper.codeplex.com/

Thoughts on communities

I’ve just stepped on the train from Gothenburg and SQL Saturday #536, where I had the honor of giving a talk about consolidation with SQL Server 2016.

This SQL Saturday was organized by Mikael Wedham, a very friendly and exceedingly knowledgeable Swede. He had managed to put together quite an amazing lineup of speakers this year: John Q. Martin, Erland Sommarskog, Mark Broadbent and Cathrine Wilhelmsen, just to mention a few. As events go, this one was small-to-mid-sized. I think I heard Mikael say that about 90 were registered and about 15-20 were no-shows. I’m sorry to see so many no-shows, especially as this is an all-volunteer event all around, but then I was very happy to see everybody else who DID show up and made the event spectacular.

This was my first SQL Saturday presentation despite having given talks and such for almost 20 years. I had the privilege of going first, meaning that I could relax and attend other sessions after mine. I’ve been to PASS Summit twice, and I’m going there this year as well, but I’ve had a surprisingly hard time getting to know new people there. Here it felt like I stepped into the legendary SQL family right through the door, and I’m very happy to say that I’ve made quite a few new acquaintances.

Listening to the attendees mingling made it clear that a SQL Saturday is very much a social event; some people seemed to know each other from other PASS events, and some were at their very first. The feeling among the attendees was one of friendly curiosity, and everywhere I went I heard fragments of interesting conversations and people sharing their knowledge.

And that’s what the community is about – sharing knowledge. That’s why I’m very happy to be part of it and very happy to get the opportunity to share some of the knowledge I’ve spent 20 years accumulating. The IT landscape is always changing rapidly, but I’m here, now, during *this* change, and I intend to do what I can to help others. Come join me!

SQL Saturday #536

I had a fairly short vacation this year in anticipation of a trip to Japan next year, but the time running up to said vacation was … hectic, to say the least. Some time during this hectic workload I had the brilliant idea to send in a few abstracts for SQL Saturdays around the world. One fine day this summer I got a phone call from fellow Swede Mikael Wedham, organizer of SQL Saturday #536 in Gothenburg.

In short: I’ll be presenting “Unicorn safari – alleviating consolidation pains with SQL Server 2016” in two weeks’ time. The title does not entirely reflect the content, but I’m hoping my audience will forgive me. It’s an intermediate-level talk, not necessarily because of the technical content but because a lot of business factors are involved.

SQL Saturdays are the best thing that’s happened since sliced bread, and what’s even better is that they’re free. Now is the time to go to Gothenburg and SQL Saturday!

Three amigos screw up a cluster

Three amigos joined me for a cluster installation the other day. None of them were very friendly, so I decided to call them all out in one place in order for others to steer clear.

The opinionated folder

Using mount point folders is a great idea for neatness and to avoid having a gazillion drive letters in play. It is also a great idea to review the best practices for SQL Server clusters on mount point folders, as there are a few gotchas. This blog post lays it all out.

If you fail to set up the correct permissions, funky stuff will ensue. The issue I’ve seen most often is something along the lines of lack of privileges on the SYSVOL\{some-long-GUID}. In that case, it’s time to read the blog post and berate oneself for trying to put stuff in the root folder.

The unwilling mover

The following error stumped me for a while:

“The resource ‘LogDisk (INST1)’ could not be moved from cluster group ‘Available Storage’ to cluster group ‘SQL Server (INST)’ “

For some reason the cluster didn’t feel like moving the cluster disks into the SQL Server group. Unfortunately I didn’t have enough time to really dig into why this occurs, but it does have something to do with running with mounted folders. Fortunately the solution is fairly simple – just create an empty cluster group, call it whatever you want your SQL Server cluster group to be called, and move the disks into it before you install SQL Server. This is usually very simple, but in this case the GUI didn’t let me move the disks. Basically nothing happened, which made it somewhat difficult to troubleshoot… When searching for the correct syntax for moving disks via PowerShell, I came across this blog post that covers both the issue and the PowerShell workaround.

The lost agent (with a slight detour via insufficient privileges)


The SQL Server installation failed thoroughly when it became evident that the cluster name object (CNO) for some reason lacked the privileges to create computer objects. This led to the cluster failing to bring up the name resource, and subsequently to the installation being aborted. The solution is to make sure the CNO has the permission to create computer objects. This blog post goes into excellent detail.

After rectifying the above I decided to salvage the installation, and I noticed that something was missing. Like the agent. In order to create that resource, take a look here. With that done, this specific issue was taken care of.