SQL Server Best Practice Settings - Database Settings

For SQL Server 2008R2-2014 I recommend setting the following trace flags:

1117 – force all datafiles within a file group to grow uniformly
1118 – forces uniform extent allocations instead of mixed page allocations
2371 – changes the default behavior for automatic statistics recomputation
2562 & 2549 – improve performance of DBCC CHECKDB
3226 – suppress transaction log backup entries from the SQL Server log
4199 – controls multiple query optimizer changes previously residing under multiple trace flags

https://www.brentozar.com/archive/2014/06/trace-flags-1117-1118-tempdb-configuration/
https://www.virtual-dba.com/sql-server-statistics-trace-flag-2371/
https://support.microsoft.com/en-us/help/2634571/improvements-for-the-dbcc-checkdb-command-may-result-in-faster-perform
http://www.sqlservercentral.com/blogs/denniss-sql-blog-1/2016/01/08/trace-flag-3226-suppress-all-successful-backups-in-sql-server-error-log/
https://support.microsoft.com/en-us/help/974006/sql-server-query-optimizer-hotfix-trace-flag-4199-servicing-model

Depending on the situation, the following flags could come in useful:

2389 & 2390 – Auto statistics on ascending keys
1224 – prevent lock escalation except under extreme circumstances
4139 – Enable auto-quick-statistics update for stationary columns

http://sqlmag.com/t-sql/making-most-automatic-statistics-updating
http://www.sqlservergeeks.com/sql-server-trace-flag-1224
https://milossql.wordpress.com/2014/12/22/beyond-statistics-histogram-part-3-tf-2389-feat-tf-4139/

Many of these flags are largely obsolete with SQL Server 2016 and up, but keep in mind that several of the new behaviors (the changed statistics update threshold behind 2371, for example) only come into play at COMPATIBILITY LEVEL 130 or higher, so databases left at a lower compatibility level may still need the flags. I would still set 3226 by default. Also worth noting is that 1117 and 1118 apply to ALL filegroups, not only TEMPDB.

There are several ways of setting the flags, but my favorite one is using DBATools like this:

Set-DbaStartupParameter -SqlInstance server1\instance1 -TraceFlags 1117, 1118, 2371, 2562, 2549, 3226, 4199
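
If you prefer plain T-SQL, the flags can also be enabled globally at runtime with DBCC TRACEON, but note that flags enabled this way do not survive a service restart – only startup parameters do. A minimal sketch:

DBCC TRACEON (1117, 1118, 2371, 2562, 2549, 3226, 4199, -1); -- -1 = enable globally, for all sessions
DBCC TRACESTATUS (-1); -- verify which flags are currently active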

I recommend turning on “lock pages in memory” to keep the operating system from paging out SQL Server memory to disk when there is memory pressure. Be aware that in a memory-starved environment this can itself cause problems for the operating system and other processes, so it should always be paired with a sensible max server memory setting (covered below).

Jonathan Kehayias has a great blog post with both sides of the argument and lots of interesting information:

https://www.simple-talk.com/sql/database-administration/great-sql-server-debates-lock-pages-in-memory/

This CAN be done via Powershell as explained here: http://keepingitgeek.blogspot.com/2015/01/grant-sql-server-account-access-to-lock.html
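
On newer builds (SQL Server 2016 SP1 and later) you can verify whether the privilege is actually in effect with a quick query; this is just a sketch of the check:

SELECT sql_memory_model_desc
FROM sys.dm_os_sys_info; -- LOCK_PAGES means "lock pages in memory" is in use, CONVENTIONAL means it is not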

One of the most amazing initiatives of recent years is the DBATools collection of Powershell cmdlets. They simplify a huge range of tasks and I can’t recommend them enough. Dead easy to set up and use.

https://dbatools.io/getting-started/

In order to enable proactive alerts from SQL Server, a basic requirement is being able to send email from the server. This requires Database Mail to be configured.

https://docs.microsoft.com/en-us/sql/relational-databases/database-mail/configure-database-mail
https://www.itprotoday.com/sql-server/script-your-database-mail-setup
https://cmatskas.com/configure-database-mail-on-sql-server-using-t-sql
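
A minimal T-SQL sketch of a Database Mail setup; the account name, profile name, addresses and smtp.example.com mail server are all made-up placeholders to adjust for your environment:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;

EXEC msdb.dbo.sysmail_add_account_sp
    @account_name    = N'DBA Mail Account',
    @email_address   = N'sql@example.com',
    @display_name    = N'SQL Server',
    @mailserver_name = N'smtp.example.com';

EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = N'DBA Mail Profile';

EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name    = N'DBA Mail Profile',
    @account_name    = N'DBA Mail Account',
    @sequence_number = 1;

-- Quick smoke test
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBA Mail Profile',
    @recipients   = N'dba@example.com',
    @subject      = N'Database Mail test',
    @body         = N'Database Mail is configured.';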

By default SQL Server will allocate all available RAM. Since SQL Server doesn’t even take other instances into account, this default is both dangerous and dumb. MAX_SERVER_MEMORY must always be set, and an ample amount of memory must be left for other processes and the operating system. Either use the calculator linked below or the Test-DbaMaxMemory cmdlet from DBATools.

http://sqlmax.chuvash.eu/
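
A minimal T-SQL sketch; the 26624 MB figure is just an illustration for a hypothetical single-instance server with 32 GB of RAM – use the calculator or Test-DbaMaxMemory for a real number:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 26624; -- example: 32 GB box, ~6 GB left for the OS
RECONFIGURE;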

Dedicated Admin Connection – this is used as a “get out of jail free” card for a DBA when all the schedulers are cranky and won’t accept connections for one reason or another. Using the DAC, it might be possible to sneak in through the “back door” and solve whatever problem is plaguing the system without having to go through the hassle of a reboot/service restart. Turn. It. On. Period.

https://msdn.microsoft.com/en-us/library/ms190468.aspx
https://msdn.microsoft.com/en-us/library/ms189595.aspx
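
The DAC is available locally by default; to allow it over the network (useful when you cannot get onto the box itself), enable remote admin connections. A sketch:

EXEC sp_configure 'remote admin connections', 1;
RECONFIGURE;
-- Connect through the DAC with: sqlcmd -S server1\instance1 -A
-- or by prefixing the server name with admin: in SSMS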

Maximum degree of parallelism will hurt you out of the box. There is no established consensus, but my opinion is this: most of the OLTP environments out there generally don’t benefit from parallelism – quite the contrary. I’ve found that more often than not the database spends more time reassembling the query results (as seen by a CXPACKET wait) than the query would have taken just burning one CPU. That’s why I generally set MAXDOP to 2 for 1-4 CPUs and MAYBE 2-4 for 4-8 CPUs. Then I keep an eye on the wait statistics and plan cache and tune accordingly. It is beyond me why the default is 0 (as in PARALLELIZE OVER EVERY CPU THERE IS!).

https://support.microsoft.com/en-us/help/2806535/recommendations-and-guidelines-for-the-max-degree-of-parallelism-confi

This goes hand in hand with cost threshold for parallelism. It defaults to 5, which is a ridiculously low number and results in SQL Server trying to parallelize EVERYTHING. Combine that with the above MAXDOP of 0 and, well, all hell is bound to break loose. Again, there is no “right” way to do it, but I start out with 50 as a baseline and tune up or down accordingly. In OLTP workloads, expensive queries are generally a bad idea anyway.
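
A minimal sketch covering both this and the MAXDOP setting above; the 2 and 50 are just the starting points discussed here, not universal answers:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 2;        -- example starting point for a small OLTP box
EXEC sp_configure 'cost threshold for parallelism', 50;  -- baseline from above, tune from there
RECONFIGURE;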

Optimize for ad hoc workloads was introduced way, way back to minimize the impact of ad hoc plans in the plan cache. Simply put, it works by not storing a complete plan in the cache until the query is reused at least once. Instead a “stub” is stored, and less memory is consumed by plans that are never reused. Having read the same recommendation from several sources, I always turn this on – BUT – keep an eye out for surprises in the plan cache.

http://www.sqlskills.com/blogs/kimberly/plan-cache-and-optimizing-for-adhoc-workloads
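
Turning it on is a one-liner; a sketch:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;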

Recovery model always comes down to the SLA requirements, but it is so commonly overlooked by non-DBAs that it’s well worth mentioning. In short: the full recovery model gives the ability to (potentially) restore to any moment in time as long as there are backups, while the simple recovery model only enables restore back to the last full (or differential) backup. The downside is that full recovery will require transaction log backups, something that in turn requires disk space and maintenance. Ola Hallengren’s scripts linked below can simplify backups immensely.

https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/recovery-models-sql-server
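
A sketch with a hypothetical database name and backup path – switch to full recovery, then take a full backup to start the log chain, followed by regular log backups:

ALTER DATABASE [SalesDB] SET RECOVERY FULL;
BACKUP DATABASE [SalesDB] TO DISK = N'X:\Backup\SalesDB_full.bak' WITH COMPRESSION;
BACKUP LOG [SalesDB] TO DISK = N'X:\Backup\SalesDB_log.trn' WITH COMPRESSION;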

Backup compression is another no-brainer. Or is it? The prevailing opinion is to turn it on, as the servers out there usually have a lot more spare CPU than I/O resources. That is undoubtedly the case, but keep in mind the environment outside the database. If you’re running a compressing backup solution like TSM, NetBackup or similar, the compression and subsequent deduplication of that system will suffer greatly. In essence: make sure you only compress your backups in one place, and if at all possible do it at the SQL Server.

http://www.sqlskills.com/blogs/paul/sql-server-2008-backup-compression-cpu-cost/
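
A sketch for turning compression on instance-wide (individual backups can still override it with WITH COMPRESSION / WITH NO_COMPRESSION):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;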

TEMPDB data files have been a hot topic for contention for many years. I’ve read everything from 2 to 8 files depending on the number of cores, tuning upwards if you run into problems. I quote Bob Ward, who wrote the KB below:
As a general rule, if the number of logical processors is less than or equal to 8, use the same number of data files as logical processors. If the number of logical processors is greater than 8, use 8 data files and then if contention continues, increase the number of data files by multiples of 4 (up to the number of logical processors) until the contention is reduced to acceptable levels or make changes to the workload/code.

Too many files can actually hurt performance if you have too few spindles (but then again – everything hurts with too few spindles).

Paul Randal lays out a lot of information here: http://www.sqlskills.com/blogs/paul/correctly-adding-data-files-tempdb/ and http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-1230-tempdb-should-always-have-one-data-file-per-processor-core/
This KB is also relevant: https://support.microsoft.com/kb/2154845

Worth noting is also that all the files need a reasonable size/growth configuration. All too often I see TEMPDB files that begin at 10MB and are set to grow by 10%. This causes lots of tiny autogrowth events down the line, and for the log file it also leads to an excessive number of Virtual Log Files (VLFs), see this link: http://adventuresinsql.com/2009/12/a-busyaccidental-dbas-guide-to-managing-vlfs/

In SQL Server 2016 and up things changed (somewhat), as the above calculation of the number of data files is included in the installer, and a (again, somewhat) more reasonable default is used for file size/growth.

The simplest way of doing this I know of is (again) using DBATools: https://docs.dbatools.io/#Set-DbaTempdbConfig
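
If you prefer plain T-SQL over the cmdlet, a sketch along these lines resizes the existing files and adds more; the sizes, growth increments and the T:\TempDB path are made-up examples:

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2048MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdev2.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = N'T:\TempDB\tempdev3.ndf', SIZE = 4096MB, FILEGROWTH = 512MB);
-- repeat ADD FILE up to the number of files you need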

All data and log files in SQL Server need to have reasonable settings for file growth. While it is difficult to give an all-encompassing opinion on how many megabytes or gigabytes of file growth is a good number, percent is NEVER a good idea. It makes the files grow inconsistently and can in some circumstances lead to surprises and full disks.
By setting reasonable numbers for file growth on the MODEL database, all new databases inherit these settings. While one set of numbers might not be the best for every situation, simply change the settings for the specific database and rest easy knowing that your default settings are way better than a 10% file size increase.
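
A sketch of setting fixed-size growth on model so new databases inherit it; modeldev and modellog are the default logical file names, and the growth values are just examples:

ALTER DATABASE [model] MODIFY FILE (NAME = modeldev, FILEGROWTH = 256MB);
ALTER DATABASE [model] MODIFY FILE (NAME = modellog, FILEGROWTH = 128MB);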

In SQL Server 2016 the Query Store came into being. It has been eagerly awaited and tracks query plans and their runtime statistics over time. I turn it on for every database I ever create, but keep in mind that it stores its data inside the user database, so its size and cleanup settings need a bit of attention.
For people running earlier versions of SQL Server, there is an open source initiative that does a lot of what the 2016 Query Store does. I wholeheartedly recommend that everyone check it out!

https://blogs.technet.microsoft.com/dataplatform/2017/01/31/query-store-how-it-works-how-to-use-it/
http://www.openquerystore.com/
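
A sketch for a hypothetical database – turn the Query Store on, cap its size and let it clean up after itself:

ALTER DATABASE [SalesDB] SET QUERY_STORE = ON;
ALTER DATABASE [SalesDB] SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    MAX_STORAGE_SIZE_MB = 1024,
    CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30),
    SIZE_BASED_CLEANUP_MODE = AUTO
);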

Add operators, alerts and be proactive. There are several problems that can be proactively monitored by SQL Server itself. By creating operators and connecting these to e-mail addresses or pager alerts, it is possible to wake people up in the middle of the night if something goes wrong, as opposed to remaining completely oblivious to the fact that the server is slowly sinking in the depths. With alerts, whenever SQL Server logs an error of a specific severity, the alert is triggered and something interesting happens in the life of the DBA.

Brent Ozar has a script to create the alerts here: http://www.brentozar.com/blitz/configure-sql-server-alerts/ Also remember that when creating the operator, it is quite possible to specify several e-mail addresses if you just separate them with a comma.
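
A sketch of the moving parts – one operator and one alert for severity 17, wired together with a notification; the operator name and address are made up:

EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',
    @enabled = 1,
    @email_address = N'dba@example.com';

EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 017',
    @severity = 17,
    @delay_between_responses = 60,
    @include_event_description_in = 1; -- 1 = include the error text in the email

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Severity 017',
    @operator_name = N'DBA Team',
    @notification_method = 1; -- 1 = email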

On the topic of proactivity, make sure the agent is configured to use the mail profile and has a failsafe operator (when all else fails, call mommy) set.

https://docs.microsoft.com/en-us/sql/ssms/agent/designate-a-fail-safe-operator
https://technet.microsoft.com/en-us/library/ms177408(v=sql.105).aspx
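
These settings are normally done through the SQL Server Agent properties dialog (Alert System page). Many scripts also use the undocumented procedure below for the failsafe operator; take this as a hedged sketch only, assuming the operator from the previous step exists:

EXEC master.dbo.sp_MSsetalertinfo
    @failsafeoperator = N'DBA Team',
    @notificationmethod = 1; -- 1 = email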

Make sure that all SQL Server Agent jobs have failure notifications set. That is, if anything goes wrong with the execution of a job, an email is sent to the operator designated as the receiver of the failure notification. This way the operator is immediately informed of a failed job.

https://www.databasejournal.com/features/mssql/automating-sql-server-agent-notification.html
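
A sketch for a single job; the job name is just an example, and you can loop over msdb.dbo.sysjobs if you want to set this everywhere:

EXEC msdb.dbo.sp_update_job
    @job_name = N'DatabaseBackup - USER_DATABASES - FULL',
    @notify_level_email = 2,                    -- 2 = notify on failure
    @notify_email_operator_name = N'DBA Team';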

A good maintenance solution is key, and unfortunately the Maintenance Plans just don’t cut it. I recommend Ola Hallengren’s free Maintenance Solution that gives you a very flexible tool for doing integrity checks (CHECKDB), backups of all kinds and reindexing, should you need them. Two things to note with the reindexing job: if you’re running pure SSDs, you really don’t need to reindex most of the time. If you still choose to run the reindexing for any reason, consider changing the default thresholds as 5% fragmentation for reorg and 30% for reindex is way too low. I usually set 30% for reorg and 50% for reindex.

http://www.ola.hallengren.com/
Installation can also be done very easily with DBATools:
Install-DbaMaintenanceSolution -SqlInstance ServerName -Database SomeDatabase -BackupLocation "X:\Some\Directory" -CleanupTime 72
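
If you do run the index maintenance, the thresholds are parameters on Ola’s IndexOptimize procedure; a sketch using the 30/50 numbers from above, with everything else left at its defaults:

EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLevel1 = 30,  -- reorganize above 30 % fragmentation
    @FragmentationLevel2 = 50;  -- rebuild above 50 % fragmentation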

I recommend creating a DBA database to hold all the scripts, Ola’s maintenance solution and such. There are several scripts I always put in new servers:
Brent Ozar’s First Responder Kit: https://www.brentozar.com/first-aid/
Adam Machanic’s SP_WHOISACTIVE: http://whoisactive.com/

MSDB needs love as well – the amount of backup history in it can be simply staggering. Set up a job that runs sp_delete_backuphistory every day. How this could be done and several other tips can be found here:

https://solutioncenter.apexsql.com/how-to-perform-sql-backup-and-restore-history-cleanup/
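
A sketch of the cleanup itself; the 90-day retention is just an example, and this is what you would schedule as a daily Agent job:

DECLARE @oldest datetime = DATEADD(DAY, -90, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @oldest;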

Purge MSDB Backup History in chunks