Part 3: simplifying the life of the DBA
When I’ve got the environment set up just the way I like it, I usually create a repository where I keep all my scripts. I designate one server as the “master” for my purposes and set up a database called DBADB. In it I put all my standard scripts (the set from Brent Ozar & co., sp_WhoIsActive from Adam Machanic, the log table for Ola Hallengren’s maintenance solution and such) to keep track of them, and to have only one place to update.
In my standard setup scripts for new databases, I also create a local DBADB as well as a local user for replication. Once that is done, I enable replication between the “master” server and the new server, and presto! I’ve got a) a central repository for scripts and b) all the scripts available on all my servers.
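As a minimal sketch, the repository database itself is nothing fancy – the name DBADB matches the text above, but everything else about your environment (file placement, sizes, the replication topology) is up to you, so the replication setup is left out here:

```sql
-- Create the central script repository database on the "master" server.
CREATE DATABASE DBADB;
GO
USE DBADB;
GO
-- Install the standard scripts here: sp_Blitz and friends, sp_WhoIsActive,
-- Ola Hallengren's CommandLog table, and so on. One place to update them,
-- replication pushes them out to every subscribed server.
```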
Set up a good index maintenance solution, such as Ola Hallengren’s script suite or Minion Reindex from Sean McCown of MidnightDBA fame. These run circles around maintenance plans and do a much better job of keeping your database server in good working order.
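As a sketch, a typical nightly call to Ola Hallengren’s IndexOptimize procedure might look like the following – the thresholds shown are the solution’s defaults, so treat them as a starting point rather than a recommendation:

```sql
-- Reorganize lightly fragmented indexes, rebuild heavily fragmented ones,
-- and update statistics afterwards. Run from the database where the
-- maintenance solution is installed.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,       -- below 5% fragmentation: leave alone
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,       -- medium tier starts here
    @FragmentationLevel2 = 30,      -- high tier starts here
    @UpdateStatistics = 'ALL';
```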
Create DNS aliases for each and every database connection there is. After making sure that every system accesses the DNS alias instead of the actual server name, I’ve simplified my life tremendously should I ever have to move a database to another server. A good example: a customer of mine had to move a whole lot of databases from an old server to a new one. Since we had the DNS alias solution in place, we only had to stop the databases, move them, change the DNS pointer and restart – we knew for a fact that we did not have to change anything in the client connection strings.
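On a Windows DNS server, creating such an alias is a one-liner with the DnsServer PowerShell module – the zone and host names below are made up for the example:

```powershell
# Create an alias (CNAME) "sqlprod01" pointing at the actual server.
# Clients connect to sqlprod01.contoso.local and never learn the real name.
Add-DnsServerResourceRecordCName -ZoneName "contoso.local" `
    -Name "sqlprod01" -HostNameAlias "dbserver07.contoso.local"

# Moving the databases later only means repointing the alias:
# remove the record and re-add it with the new target server.
```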
Document everything. Yes, this falls squarely under the category “simplifying the life of the DBA”. Good documentation simplifies your life. Every. Single. Time. Keep a good record of which jobs run when, and on which servers. Document which users have sysadmin privileges and why. Make sure that any deviations from the standard disk setup are meticulously written down. Catalog which install options you’ve chosen for each server. The list goes on and on, but I think you get the point.
Add operators and alerts, and be proactive. There are several problems that SQL Server itself can monitor proactively. By creating operators and connecting them to e-mail addresses or pager alerts, it is possible to wake people up in the middle of the night when something goes wrong, as opposed to remaining completely oblivious to the fact that the server is slowly sinking into the depths. With alerts, whenever SQL Server logs an error of a specific severity, the alert is triggered and something interesting happens in the life of the DBA.
Brent Ozar has a script to create the alerts here: http://www.brentozar.com/blitz/configure-sql-server-alerts/. Also remember that when creating an operator, it is quite possible to specify several e-mail addresses if you just separate them with semicolons.
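For reference, the msdb procedures involved look like this – the operator name and addresses are placeholders, and severity 17 (insufficient resources) is just one of the severities you would typically alert on:

```sql
USE msdb;
GO
-- Operator: note the semicolon-separated e-mail addresses.
EXEC dbo.sp_add_operator
    @name = N'DBA Team',
    @enabled = 1,
    @email_address = N'dba1@example.com;dba2@example.com';

-- Alert on every severity 17 error, with a 60-second delay between
-- repeated notifications so a storm of errors doesn't flood the inbox.
EXEC dbo.sp_add_alert
    @name = N'Severity 17 - Insufficient Resources',
    @severity = 17,
    @delay_between_responses = 60,
    @include_event_description_in = 1;

-- Wire the alert to the operator (notification method 1 = e-mail).
EXEC dbo.sp_add_notification
    @alert_name = N'Severity 17 - Insufficient Resources',
    @operator_name = N'DBA Team',
    @notification_method = 1;
```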
Set up a failsafe operator – this is basically the go-to person when no one else can be found to harass. Take a look at this post on TechNet: https://technet.microsoft.com/en-us/magazine/gg313743.aspx
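The failsafe operator is normally set in SSMS under SQL Server Agent properties, but it can also be done from T-SQL – with the caveat that sp_MSsetalertinfo is an undocumented procedure, and the operator name below assumes an operator that already exists:

```sql
-- Point the Agent failsafe at an existing operator; method 1 = e-mail.
EXEC master.dbo.sp_MSsetalertinfo
    @failsafeoperator = N'DBA Team',
    @notificationmethod = 1;
```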
Change the settings on the model database to something sensible. Growing a database file by 1MB every time it becomes full is not only dumb but also very wasteful. The problem becomes even bigger with transaction logs and virtual log files (see this YouTube video for more info – https://youtu.be/lcmYeE-cqQo). The solution is to change the settings on the model database to something sensible (a growth increment of 50–100MB is a good starting point – adapt it to your specific needs!) so that every database that gets created at least has sane growth settings from the start.
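For instance, something along these lines – modeldev and modellog are the default logical file names of the model database, while the 64MB increment is just one reasonable value in the range mentioned above:

```sql
-- Fixed-size growth increments instead of the tiny defaults; every new
-- database inherits these settings from model.
ALTER DATABASE model
    MODIFY FILE (NAME = modeldev, FILEGROWTH = 64MB);
ALTER DATABASE model
    MODIFY FILE (NAME = modellog, FILEGROWTH = 64MB);
```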
These are the things I do every time and at every client. Nothing is new, nothing is exotic, it’s just hard, consistent work. And despite it being nothing new, it can’t be said often enough…