Basic settings for SQL Server 2012/2014, part 1

Whenever I set up a SQL Server I usually do the same things. I’ve collected a list of best-practice settings over the years (and most often put them in a script; most of my friends are scripts, after all). Over a few blog posts I’ll go over the settings I use and discuss a bit why I’ve chosen to set the parameters the way I do.

Part 1: configuring the server

There are a few hard rules when it comes to setting up the environment for SQL Server. The first two apply to storage:

Always make sure you align your partitions with the underlying stripe size. Windows Server 2008 and above usually takes care of this by itself, but always double-check your alignment. Failure to align the partition can lead to significant performance degradation.
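The effect of misalignment can be sketched with some simple arithmetic. The stripe size and the classic 31.5KB offset below are illustrative numbers, not measurements from any particular array:

```python
# Illustrative arithmetic (hypothetical numbers): why a misaligned
# partition turns one 64KB I/O into two stripe accesses.

STRIPE = 64 * 1024  # assumed RAID stripe size in bytes


def stripes_touched(partition_offset: int, io_offset: int, io_size: int) -> int:
    """Number of stripes a single I/O touches, given the partition's
    starting offset on the disk."""
    start = partition_offset + io_offset
    end = start + io_size - 1
    return end // STRIPE - start // STRIPE + 1


# Aligned partition (1MB offset, a multiple of the stripe size):
# one 64KB I/O maps to exactly one stripe.
print(stripes_touched(1024 * 1024, 0, 64 * 1024))  # -> 1

# The old 63-sector default offset (31.5KB): every 64KB I/O straddles
# a stripe boundary and costs two accesses instead of one.
print(stripes_touched(63 * 512, 0, 64 * 1024))     # -> 2
```

Doubling the physical accesses for every read and write is exactly the "significant performance degradation" mentioned above.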

Format the partitions where you will store SQL Server-related files with an NTFS allocation unit size of 64KB. The reason is very simple – SQL Server primarily works with 64KB I/O (one 64KB extent = 8 pages of 8KB each). With the default NTFS allocation unit size of 4KB, the server has to do 16 I/Os for each extent – not optimal in the least. (In some cases with Analysis Services, 32KB might yield better results. See the link below for further information.)
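The 16-I/Os-per-extent figure falls straight out of the page and extent sizes, as this little sketch shows:

```python
# SQL Server's on-disk units: 8KB pages, 8 pages per extent.
PAGE = 8 * 1024
EXTENT = 8 * PAGE  # 64KB


def clusters_per_extent(cluster_size: int) -> int:
    """How many NTFS clusters (allocation units) cover one extent."""
    return EXTENT // cluster_size


print(clusters_per_extent(4 * 1024))   # default 4KB clusters -> 16
print(clusters_per_extent(64 * 1024))  # 64KB clusters        -> 1
```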

More information:

SQL Server loves memory, and in my opinion you can’t give it too much. Many of my clients have a tendency to give it way too little (6-8GB) when double that amount might be a good starting point. This all depends (as always) on the workload and the requirements, but remember – memory transfer is fast, disk transfer is slow – regardless of whether you’re sporting all SSDs.
Jeremiah Peschka has a great blog post that summarises my thinking:
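As a starting point for the 'max server memory' setting, something like the sketch below can work. The reservation rule (leave the OS the larger of 4GB or 10% of total RAM) is my own rough rule of thumb, not an official formula – treat it as a first guess and tune against your actual workload:

```python
# Rough starting point for SQL Server 'max server memory' (in MB).
# The OS reservation rule here is a hedged rule of thumb, not an
# official Microsoft recommendation.


def max_server_memory_mb(total_ram_gb: float) -> int:
    os_reserve_gb = max(4, total_ram_gb * 0.10)
    return int((total_ram_gb - os_reserve_gb) * 1024)


for ram in (8, 16, 64, 128):
    print(f"{ram}GB total -> {max_server_memory_mb(ram)}MB for SQL Server")
```

For an 8GB box this yields 4096MB and for 16GB it yields 12288MB – which also illustrates why 6-8GB of total RAM leaves precious little for the buffer pool in the first place.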

Allen McGuire wrote a great set of scripts to harvest and report on SQL Server IO that can be useful:

CPU choice is another interesting topic. Remember a few years ago when it was all about the MHz? Well, these days it’s all about cores, and since licenses are purchased per core, that’s not necessarily a good thing. Parallelizing is a difficult concept, and depending on your workload it might be more or less tricky to pull off. In a worst-case scenario with, say, SharePoint – where you are forced to cap Max Degree of Parallelism – a CPU with many slow cores will give you worse performance than a CPU with fewer fast cores.
In general, I prefer few fast cores over many slow ones. Most of my clients’ workloads benefit more from this choice.

Glenn Berry lays it out here:
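The licensing angle can be made concrete with some made-up numbers (the per-core price below is hypothetical, not an actual Microsoft quote):

```python
# Hypothetical cost comparison (made-up list price!) showing why
# per-core licensing punishes "many slow cores": license cost tracks
# core count, but a MAXDOP-capped workload only benefits from
# per-core clock speed.

PRICE_PER_CORE = 7000  # hypothetical license price per core


def license_cost(cores: int) -> int:
    return cores * PRICE_PER_CORE


few_fast = {"cores": 8, "ghz": 3.5}
many_slow = {"cores": 16, "ghz": 2.0}

for cpu in (few_fast, many_slow):
    print(f'{cpu["cores"]} cores @ {cpu["ghz"]}GHz: '
          f'license {license_cost(cpu["cores"])}, '
          f'capped per-query speed ~{cpu["ghz"]}GHz')
```

With a capped degree of parallelism, the 16-core box costs twice as much to license and still runs each query slower.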

Instant file initialization is something that should always be turned on. If it isn’t, then every time a file is created it needs to be filled with zeroes. Not zeroes like myself, but actual zero bits. This means a huge hit on the storage system whenever a file is created. With instant file initialization, the space is allocated without the need for zeroing the actual data. This means instant file creation and no I/O side effects. This affects restores too…
Sure, the blocks are not zeroed and hence are vulnerable to being read with a hex editor. But as Kevin Boles (TheSQLGuru) put it in a presentation for the SQL Server User Group Sweden: “if someone is reading your SQL Server data files with a hex editor, you are already pwnd”.
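To get a feel for how big that "huge hit" is, here is some rough arithmetic – the throughput figure is a hypothetical example, not a benchmark:

```python
# Rough arithmetic (hypothetical throughput figure) for what
# zero-initialization costs without instant file initialization:
# creating or restoring a file means writing that many zero bytes
# to disk before any real work starts.


def zeroing_seconds(file_gb: float, write_mb_per_s: float) -> float:
    return file_gb * 1024 / write_mb_per_s


# A 100GB data file on storage sustaining 200MB/s of sequential writes:
print(round(zeroing_seconds(100, 200)))  # -> 512 seconds (~8.5 minutes)
```

That is over eight minutes of dead time added to a restore before a single page of actual data is written – time you rarely have during an outage.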

Check this blog post from the premier field engineer team at Microsoft:

Power settings have a funny way of being ignored. The bad thing is that Windows 2012 has a habit of setting the power plan to “balanced”, meaning that you won’t get the maximum performance from your server. Set the power plan to “high performance” and keep an eye on it from time to time – it has been known to “magically” change back by itself.



Failover on Oracle Standard Edition?

We have all heard (and gnashed our teeth over the fact) that in order to get a proper automatic failover environment for Oracle (i.e. Data Guard) we need Enterprise Edition. Oracle Enterprise Edition (or as Brent Ozar so succinctly put it: Expensive Edition) is hideously expensive, especially if you’re only ever going to use the Data Guard feature.

Sure, you can always use the classic Oracle standby with a bunch of scripts – much like log shipping in SQL Server. It works (if your scripts are OK this is basically bomb-proof) and is fairly simple to set up. Trouble is that there is no (good) way of automating the failover, and when it is time to go back, well, you’re basically looking at a complete rebuild of the database previously known as the primary. Depending on the amount of data this can take a long time, as well as saturate the network links between the systems (and the rest of the datacenter), making end users (and sysadmins) less than overjoyed.

Oracle is a huge company and different divisions peddle different software – the database people don’t sell applications, the application people don’t sell databases, and neither of them sell … Golden Gate. Golden Gate came to Oracle through a company purchase several years ago, and it is an exceptionally versatile data transfer tool. It is mostly used for heterogeneous data transfer (that is, going from one database engine to another) or basically anything that involves massaging the data. That’s probably why so few people think of it when it comes to – yes, you guessed it – high availability.

First things first – what is it going to cost me? Well, the answer is ‘a lot’. But not nearly as much as a complete Enterprise Edition solution. Golden Gate is licensed per core, just like Enterprise Edition, but has a price tag of roughly half of an Enterprise Edition core license. That means you will be paying A LOT more for Golden Gate than for your Standard Edition database, but again, not nearly as much as if you’d decided to bite the bullet (hailstorm?) and forked out the cash for an all-Enterprise Edition setup.
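To make the comparison concrete, here is a sketch with entirely made-up prices (actual Oracle pricing varies by contract and list; only the "roughly half" ratio comes from the text above):

```python
# Hypothetical numbers only - real Oracle pricing depends on your
# contract. The point: Golden Gate per core at roughly half an
# Enterprise Edition core license means SE + Golden Gate can undercut
# an all-EE pair while still giving automated failover.

EE_CORE = 47500        # made-up Enterprise Edition per-core price
GG_CORE = EE_CORE / 2  # "roughly half of an EE core license"
SE_CORE = 8750         # made-up Standard Edition per-core price


def all_ee(cores: int) -> float:
    """EE licensed on both primary and standby."""
    return 2 * cores * EE_CORE


def se_plus_gg(cores: int) -> float:
    """SE on both nodes plus Golden Gate on both."""
    return 2 * cores * (SE_CORE + GG_CORE)


cores = 8
print("all-EE: ", all_ee(cores))
print("SE + GG:", se_plus_gg(cores))
```

With these (again, invented) numbers the SE + Golden Gate pair comes in well under the all-EE pair, even though Golden Gate alone dwarfs the Standard Edition license – which matches the ‘a lot, but not nearly as much’ verdict above.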

Golden Gate brings a lot of cool stuff to the party as well – remember the ‘heterogeneous’ part of the previous paragraph? Oh yes, that means that we can use Golden Gate to sync up two different Oracle versions. Thinking slightly outside the box, this means that we can use this solution for a transparent upgrade without any downtime at all. Or we can do data transfer between the Oracle database and, say, a SQL Server data warehouse. Or, what you came here for – a plain, simple Data Guard analogue with automatic failover.

This is just a teaser of WHAT you can do, not HOW you do it. I’ll make a blog post about that in the future!

Hello world!

So here we are. Again. The blog is live. Hopefully better, stronger and tastier. You be the judge 🙂