Thoughts on communities

I’ve just stepped on the train from Gothenburg and SQL Saturday #536, where I had the honor of giving a talk about consolidation with SQL Server 2016.

This SQL Saturday was organized by Mikael Wedham, a very friendly and exceedingly knowledgeable Swede. He had managed to put together quite an amazing lineup of speakers this year: John Q. Martin, Erland Sommarskog, Mark Broadbent and Cathrine Wilhelmsen, just to mention a few. As events go, this was a small-to-mid-sized one. I think I heard Mikael say that about 90 were registered and about 15-20 were no-shows. I’m sorry to see so many no-shows, especially as this is an all-volunteer event all around, but I was very happy to see everybody else who DID show up and made the event spectacular.

This was my first SQL Saturday presentation, despite my having given talks and such for almost 20 years. I had the privilege of going first, meaning that I could relax and attend other sessions after mine. I’ve been to PASS Summit twice, and I’m going there this year as well, but I’ve had a surprisingly hard time getting to know new people there. Here it felt like I stepped into the legendary SQL family right through the door, and I’m very happy to say that I’ve made quite a few new acquaintances.

Listening to the attendees mingling made it clear that a SQL Saturday is very much a social event; some people seemed to know each other from other PASS events, and some were at their very first. The feeling among the attendees was one of friendly curiosity, and everywhere I went I heard fragments of interesting conversations and people sharing their knowledge.

And that’s what the community is about – sharing knowledge. That’s why I’m very happy to be part of it and very happy to get the opportunity to share some of the knowledge I’ve spent 20 years accumulating. The IT landscape is always changing rapidly, but I’m here, now, during *this* change, and I intend to do what I can to help others. Come join me!

SQL Saturday #536

I had a fairly short vacation this year in anticipation of a trip to Japan next year, but the time running up to said vacation was … hectic, to say the least. Somewhere in this hectic period I had the brilliant idea to send in a few abstracts for SQL Saturdays around the world. One fine day this summer I got a phone call from fellow Swede Mikael Wedham, organizer of SQL Saturday #536 in Gothenburg.

In short: I’ll be presenting “Unicorn safari – alleviating consolidation pains with SQL Server 2016” in two weeks’ time. The title does not entirely reflect the content, but I’m hoping my audience will forgive me. It’s an intermediate-level talk, not so much because of the technical content as because of all the business factors involved.

SQL Saturdays are the best thing that’s happened since sliced bread, and what’s even better is that they’re free. Now is the time to go to Gothenburg and SQL Saturday!

Three amigos screw up a cluster

Three amigos joined me for a cluster installation the other day. None of them were very friendly, so I decided to call them all out in one place so that others can steer clear.

The opinionated folder

Using mount point folders is a great idea for neatness and to avoid having a gazillion drive letters in play. It is also a great idea to review the best practices for SQL Server clusters on mount point folders, as there are a few gotchas. This blog post lays it all out.

If you fail to set up the correct permissions, funky stuff will ensue. The issue I’ve seen most often is an error message along the lines of missing privileges on SYSVOL\{some-long-GUID}. In that case, it’s time to read the blog post and berate oneself for trying to put stuff in the root folder.

The unwilling mover

The following error stumped me for a while:

“The resource ‘LogDisk (INST1)’ could not be moved from cluster group ‘Available Storage’ to cluster group ‘SQL Server (INST)’.”

For some reason the cluster didn’t feel like moving the cluster disks into the SQL Server group. Unfortunately I didn’t have enough time to really dig into why this occurs, but it does have something to do with running with mounted folders. Fortunately the solution is fairly simple – just create an empty cluster group, name it whatever you want your SQL Server cluster group to be called, and move the disks into it before you install SQL Server. This is usually very simple, but in this case the GUI didn’t let me move the disks. Basically nothing happened, which made it somewhat difficult to troubleshoot… When searching for the correct syntax for moving disks via PowerShell, I came across this blog post that covers the issue and the PowerShell workaround.

The lost agent (with a slight detour via insufficient privileges)

The SQL Server installation failed thoroughly when it became evident that the cluster name object (CNO) for some reason lacked the privileges to create computer objects. This meant the cluster could not bring up the network name resource, which in turn aborted the installation. The solution is to make sure the CNO has the permission to create computer objects. This blog post goes into excellent detail.

After rectifying the above I decided to salvage the installation, and I noticed that there was something missing. Like the agent. In order to create that resource, take a look here. With that done, this specific issue was taken care of.

Finding the culprit

I had a case the other day where a customer called and told me that he had a bit of a performance problem. Well, I’ve heard that a few times over the years, so I asked him to elaborate. He sent me a screenshot of SolarWinds Database Performance Analyzer and, well, it kind of points the finger at one thing:

[Screenshot: DPA wait statistics dominated by ASYNC_NETWORK_IO]

Um. Yeah. The ASYNC_NETWORK_IO thing. It kind of … sticks out.

Okay, so how do we tackle this? We know what waits are happening, but where do we go from here? Well, it’s time to find out exactly what is generating said waits. And this is where extended events come into play. Again. (Extended events are the coolest thing since sliced bread, there, I’ve said it.)

There are a few scripts I use when doing this, written by the venerable John Sterrett (see his blog post from 2013 here). There is a small gotcha, though – ASYNC_NETWORK_IO does not exist under that name in the extended events wait type map; it is called NETWORK_IO. Hence the statements for capturing this specific workload look like this:

DECLARE @WaitTypeTVP AS WaitType;
INSERT INTO @WaitTypeTVP (Name)
VALUES ('PAGEIOLATCH_EX'), ('NETWORK_IO')
EXEC Waits.CaptureStatementsCausingWaits @TVP = @WaitTypeTVP;

Then you settle back and wait for the troublesome queries to crawl out of the woodwork. Little do they know that we’ve set a trap.

Upon emptying the trap after a few minutes, it became apparent that the culprit was one huge, automatically generated (meh) query. Time to get the execution plan using sys.dm_exec_query_plan. Trouble is (and this is the second gotcha of the scripts), said TVF requires the plan_handle, not the sql_handle. So we have to do a little digging around to get that – for instance by taking a peek in sys.dm_exec_query_stats (which might hold multiple plan_handles per sql_handle, mind you!). Plug said plan_handle into sys.dm_exec_query_plan and hey, presto! The execution plan in all its gory glory.
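
The digging can look something like this (a minimal sketch – the 0x... value is a placeholder for the sql_handle the capture scripts handed you):

-- List every cached plan for a given sql_handle and fetch its XML plan
SELECT qs.plan_handle,
       qs.execution_count,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE qs.sql_handle = 0x...;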

It was apparent that the queries neither used any indexes (hello, automatic code generation!) nor had a decent WHERE clause (why don’t we take ALL of our several million rows and send them to the client, yes?). And there you have the problem – bad code that tries to send too much data to the client. Bad code, no cookies for you.

I provided the customer with my findings and he went on his merry way, grumbling and heading for the developers.

PASS Summit 2015 – a quick reflection

PASS Summit 2015 is winding down and I am strolling through increasingly deserted hallways. This was my second PASS Summit, and I already know it will not be my last. I’ve been to Oracle OpenWorld in San Francisco six times, and compared to that conference this one is downright tiny. But when it comes to community, PASS and Oracle are like night and day.

A quick background: I started out with Oracle back in ’97 and SQL Server shortly thereafter, and let’s face it – SQL Server was not a serious contender back then. Both SQL Server and Oracle have come a long way since then, and Oracle is ahead in many respects. I find that a product is very much dependent on its user base, and the more vibrant a community is, the greater the success said product tends to enjoy.

The Oracle community is HUGE – quite a bit larger than its SQL Server counterpart – but it is also, in my opinion, very much more closed and, to be honest, spiteful. The thought of walking up to one of the big names in the Oracle community at OpenWorld is fairly alien to me, especially after having been rather rudely dismissed doing just that a couple of years back. In the SQL Server community, the opposite is true. Someone tweeted something along the lines of “don’t be afraid to talk to an MVP – everyone was a beginner at some point”.

Last year I watched these mythical MVP and MCM creatures with awe, and I didn’t realize until the last day that they actually meant it when they said “come talk to me, I love to interact with the community”. This year, I’ve had the pleasure of talking to quite a few MVPs and other speakers, and everyone has been kind, courteous, genuinely interested in the discussion and, above all, very, very helpful.
This community feels like coming home after a long day’s work – kicking off your shoes and falling onto the couch, not needing to worry about anyone badmouthing someone else, blocking you from speaking at events or just messing with you out of sheer spite.
I’m sure readers from the Oracle community might disagree, but, well, this has happened to me and I’ve had enough of it. I’m closing in on 40 and have been doing databases for half my life, and I’m fed up with people trying to screw me over.

I’ve been contemplating trying to get on the speaking circuit at events for quite some time, but have not had the balls to commit, to be honest. Sure, I’m an MCT and have no problem teaching a wide variety of subjects, but what if I screw my demos up and people subsequently hate me? Well, the tipping point came just a few hours ago.
I had the pleasure of attending a session on advanced restore methods given by the exceptional Jes Borland. Most of her demos bombed, and she smiled and told the audience she was terrified, as this is the worst thing that can happen to a presenter.

And they laughed.

Not in spite, not at her incompetence, but at the fact that here was someone not so far removed from them, showing that she’s human too. I’m sure most if not all of the people in the audience saw something of themselves in Jes at that moment.
Oh, and she did her usual stellar job of delivering the presentation despite the demo issues, and I’m sure everyone left the session happy and more informed. It dawned on me that I’m the only one holding myself back.

By putting this on my blog I’ve kind of painted myself into a corner – now I have to follow through. As soon as I get home I’ll start drawing up proposals and make a sort of map of where I am and where I want to go with my speaking career. Thank you PASS and thank you Jes for today. The future just got a lot scarier – just the way it’s supposed to be.

*UPDATE*

I think I was just taught a lesson in how Twitter works. I originally just tweeted the link to this post to Jes with a thank you note, and then she retweeted it. And then it got retweeted again. And again. More people than I could have imagined have seen the tweet (and link to this blog), and the website statistics have … changed somewhat from a normal day… 😛

Azure Automation and index maintenance gotchas

I decided to write up a blog post about index maintenance in Azure SQL databases (yes, I’m gravitating towards Azure just like everyone else these days). I had everything planned: what research to do, the case I was going to use, the whole nine yards. It would have been epic, had it not been for Pieter Vanhove, who had already done it. Worse still, he did it way better than what I had planned.

http://pietervanhove.azurewebsites.net/?p=14137

Somewhat miffed, I decided to play with it anyway (just not write a blog post about it, no, I’m not bitter 😛 ).

Turns out there’s quite a gotcha with this:

Azure SQL Database does not support cross-database queries. Since the runbook PowerShell script creates a connection to the MGMTDB, it is ONLY within that database that the queries will execute. You can check this using a simple select * from sys.databases – it will ONLY show you the user database you’re in and the system databases. Ergo, Ola’s script will only execute on the database it is installed in. Okay, you say, then let’s create the scripts in the MASTER database instead (generally a bad idea, but hey) and run them from there, since with a connection to MASTER, sys.databases shows every other user database on the server. Turns out that, well, you can’t. There seems to be no way to create objects in the master database, so we’re stuck at the moment.
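
A quick way to see this for yourself from within the runbook’s connection (nothing fancy, just a sanity check):

-- On Azure SQL Database this returns only master and the database you are
-- connected to; the other user databases on the server stay invisible.
SELECT name, database_id
FROM sys.databases
ORDER BY database_id;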

Ola’s solution is to create his scripts in every user database, and while this works excellently from a performance and integrity standpoint, the amount of manual labor required to either create new runbooks for each and every database that gets created (and remove said runbooks for decommissioned databases) OR create a potentially HUGE runbook that encompasses all databases (and that also needs to be manually maintained, ugh) is simply staggering. The hunt for a solution is on.

*EDIT*

Aha! While attending the PASS Summit I got word from Brent Ozar that cross-database queries had arrived in Azure. I’ve yet to track down Ola and ask him about his scripts, as the implementation is … maybe not quite as straightforward as one might think, but the solution is a lot closer. Oh, and I spoke to Pieter as well – he’s updated the original blog post with his workaround. It’s a good example of what can be done with some ingenuity and a runbook. Check it out!

Back in the saddle – index time

Well, I’m back in the saddle. Four weeks of vacation, of which three resembled some kind of monsoon. The last one was great, and of course the weather turned great as soon as I returned to work.

I decided to get myself a new script to show me unused indexes, and started looking at Stack Overflow to see if someone had already taken care of it (someone usually has). Turns out that there was a blog post by John Pasquet of folioTek here, and a very interesting addition to said blog post (and scripts) by user HAL9000 of Stack Overflow here. It just became my go-to script for finding unused indexes. Thanks, HAL9000 and John!
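
The core idea behind those scripts, boiled down to a minimal sketch (reads versus writes per nonclustered index – and remember that sys.dm_db_index_usage_stats is cleared on every restart, so don’t judge an index on a freshly booted server):

-- Indexes with few reads but many writes are candidates for removal
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       ISNULL(s.user_seeks + s.user_scans + s.user_lookups, 0) AS reads,
       ISNULL(s.user_updates, 0) AS writes
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
       ON  s.object_id = i.object_id
       AND s.index_id = i.index_id
       AND s.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY reads, writes DESC;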

Vacation! And a few gotchas…

The blog has been slow lately due to the impending vacation. I’ll toss in a few things I stumbled on during the last week, though:

When dealing with a cluster, chances are that some of the disks just won’t be returned to the fold when deleted from available storage. For some reason, the cluster sometimes keeps the reservations on some disks, leading to some rather weird error messages. The solution is to use PowerShell to give said reservations the boot, like this:

Clear-ClusterDiskReservation -disk X -force

Speaking of clusters: finding out which node SQL Server is currently running on can be a bit of a challenge from within SQL Server itself. Try this script that I found over at SQLMatters.com:

WITH ClusterActiveNode AS
(
    SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS NodeName,
           CAST('Active' AS varchar(10)) AS Active
),
ClusterNodes AS
(
    SELECT NodeName FROM sys.dm_os_cluster_nodes
)
SELECT b.NodeName,
       ISNULL(a.Active, 'Passive') AS NodeStatus
FROM ClusterNodes AS b
LEFT JOIN ClusterActiveNode AS a
       ON a.NodeName = b.NodeName;

I’ve also thoroughly enjoyed setting up a SQL Server 2008 R2 cluster on Windows Server 2012 R2 with mount points. That’s also riddled with some interesting … features. A couple of good links are an mssqltips.com article here, a Microsoft Connect entry here and a blog post from Jacob Moran from 2009 here.

Found a nice script to set up instant file initialization with PowerShell, written by Ingo Karstein:

#written by Ingo Karstein, http://ikarstein.wordpress.com

#  v1.1, 10/12/2012

# modified by Preston Cooper 4-27-2015 to modify "Perform Volume Maintenance Tasks" and added elevation from http://stackoverflow.com/questions/7690994/powershell-running-a-command-as-administrator

## <--- Configure here
$accountToAdd = "DOMAIN\Username"
$pathToFile1 = "c:\temp\secedit.sdb"
$pathToFile2 = "c:\temp\PerformVolumeMaintenanceTasks.txt"

## ---> End of Config

# Get the ID and security principal of the current user account
$myWindowsID = [System.Security.Principal.WindowsIdentity]::GetCurrent();
$myWindowsPrincipal = New-Object System.Security.Principal.WindowsPrincipal($myWindowsID);

# Get the security principal for the administrator role
$adminRole = [System.Security.Principal.WindowsBuiltInRole]::Administrator;

# Check to see if we are currently running as an administrator
if($myWindowsPrincipal.IsInRole($adminRole))
{
    # We are running as an administrator, so change the title and background colour to indicate this
    $Host.UI.RawUI.WindowTitle = $myInvocation.MyCommand.Definition + "(Elevated)";
    $Host.UI.RawUI.BackgroundColor = "DarkBlue";
    Clear-Host;
}else{
    # We are not running as an administrator, so relaunch as administrator

    # Create a new process object that starts PowerShell
    $newProcess = New-Object System.Diagnostics.ProcessStartInfo "PowerShell";

    # Specify the current script path and name as a parameter, with added scope and support for scripts with spaces in its path
    $newProcess.Arguments = "& '" + $script:MyInvocation.MyCommand.Path + "'"

    # Indicate that the process should be elevated
    $newProcess.Verb = "runas";

    # Start the new process
    [System.Diagnostics.Process]::Start($newProcess);

    # Exit from the current, unelevated, process
    Exit;
}
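
The excerpt above only handles the elevation dance – the part that actually grants “Perform Volume Maintenance Tasks” lives in Karstein’s original post. A minimal sketch of that part (the standard secedit export/edit/import technique; an outline, not his verbatim code):

# Translate the account name into a SID
$sid = ((New-Object System.Security.Principal.NTAccount($accountToAdd)).Translate([System.Security.Principal.SecurityIdentifier])).Value

# Export the current user rights assignments
secedit /export /cfg $pathToFile2 /areas USER_RIGHTS

# Append the SID to the SeManageVolumePrivilege line
$content = Get-Content $pathToFile2
$newContent = $content -replace '^(SeManageVolumePrivilege.*)', "`$1,*$sid"
Set-Content -Path $pathToFile2 -Value $newContent -Encoding Unicode

# Import the modified settings into the local security policy
secedit /configure /db $pathToFile1 /cfg $pathToFile2 /areas USER_RIGHTS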

And finally a nice script from Eric Humphrey to add SQL Server startup trace flags with PowerShell:

##############################################################################
##
## Add-SqlServerStartupParameter
##
## by Eric Humphrey (http://www.erichumphrey.com/category/powershell/)
##
##############################################################################

<#

.SYNOPSIS

Adds an entry to the startup parameters list for all instances of SQL Server
on a computer

.EXAMPLE

PS >Add-SqlServerStartupParameter '-T3226'

#>

param(
    ## The parameter you wish to add
    [Parameter(Mandatory = $true)]
    $StartupParameter
)

$hklmRootNode = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server"

$props = Get-ItemProperty "$hklmRootNode\Instance Names\SQL"
$instances = $props.psobject.properties | ?{$_.Value -like 'MSSQL*'} | select Value

$instances | %{
    $inst = $_.Value;
    $regKey = "$hklmRootNode\$inst\MSSQLServer\Parameters"
    $props = Get-ItemProperty $regKey
    $params = $props.psobject.properties | ?{$_.Name -like 'SQLArg*'} | select Name, Value
    #$params | ft -AutoSize
    $hasFlag = $false
    foreach ($param in $params) {
        if($param.Value -eq $StartupParameter) {
            $hasFlag = $true
            break;
        }
    }
    if (-not $hasFlag) {
        "Adding $StartupParameter"
        $newRegProp = "SQLArg"+($params.Count)
        Set-ItemProperty -Path $regKey -Name $newRegProp -Value $StartupParameter
    } else {
        "$StartupParameter already set"
    }
}

Differences between Oracle and SQL Server transaction log files

When moving between Oracle and SQL Server, one of the first gotchas is how the transaction log is handled differently by the two engines. In Oracle, the transaction log is a pure transaction log in that it contains only the change data and not the rollback information. SQL Server combines the transaction log and the rollback data in the same file.
This leads to some interesting differences in behavior. Let’s take a peek at Oracle first. Oracle divides the different workloads from each other; the transaction log contains only change data, while the data required for rollback is stored in a separate rollback tablespace. This also means that the transaction log size is fixed. When a transaction log file is filled up, a so-called log switch occurs, where the log writer process switches to the next log group and starts to write there. This is a good time to point out that the transaction log on Oracle requires at least two (preferably at least three) log groups, and that each group can consist of several members – mirrors of the transaction log, preferably on different drives.

When the log writer switches out of the now-full transaction log file, the archiving process steps in and starts reading and copying the transaction log to a new file called the archive log (written to the destinations specified by the log_archive_dest_n parameters). This process then repeats itself as soon as the log writer is finished with a transaction log.

As previously mentioned, the transaction log size is fixed, but the number of archive logs produced varies depending on the workload. According to the classic best practices for Oracle, a log switch should occur about every 20-30 minutes. That means that the logs should be sized accordingly, but how many DBAs have a consistent workload that always fills up a transaction log in the exact same way?
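
To find out how your own workload actually behaves, count the log switches per hour and compare against the rule of thumb (a standard query against v$log_history):

-- Number of log switches per hour
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS switch_hour,
       COUNT(*) AS log_switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY switch_hour;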

Let’s think about the consequences of file size for a short while. Files that are too small mean that the log writer process might be blocked by the archive log process, since the archiver is slower (it has to read AND write). This means poor performance and heavily hammered transaction log and archive log disks.
The other alternative is even worse – should the transaction log disk disappear into a black hole (it *might* happen!), you will lose all the data that’s still in the transaction log (and has yet to be archived).

The solution is to create transaction log files large enough (or numerous enough) that the archiver has time to scurry out of the way before the log writer switches back to the first file in the log group during the most intensive workload. This might mean that the files are either too large or too numerous to produce a log switch every 20-30 minutes during the rest of the day, but that is easily taken care of by the parameter archive_lag_target. This parameter, set in seconds, specifies the maximum amount of time that can pass before a log switch (and hence archiving) is forced.
The sad thing is that I’ve rarely ever seen anyone set this parameter.
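
Setting it is a one-liner; 1800 seconds below is just an example matching the 30-minute rule of thumb:

-- Force a log switch at least every 30 minutes, regardless of workload
ALTER SYSTEM SET archive_lag_target = 1800 SCOPE = BOTH;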

SQL Server, then. The transaction log has a way of growing, sometimes out of hand. The good thing is that we get one continuous log stream; the bad thing is that its size varies wildly. Where Oracle switches the log file automatically (and hopefully in a consistent, reasonable way), SQL Server does not, and hence happily builds rather large transaction log files. There is always a hot discussion about what is a good interval between log backups, but as with the Oracle discourse above, it all boils down to how much data you are prepared to lose. Some of my customers run a log backup every five minutes, some do it once per day (don’t get me started). The really bad thing about SQL Server when it comes to log handling is the fact that you can’t mirror the transaction log file from within the database.
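
For completeness: the SQL Server counterpart to archiving is the log backup, and the schedule is your data loss exposure. A minimal example (database name and path made up):

-- Back up the transaction log; this also allows the inactive part to be reused.
-- Schedule it according to how much data you can afford to lose.
BACKUP LOG [SalesDB]
TO DISK = N'X:\Backup\SalesDB_log.trn'
WITH COMPRESSION;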

Basic settings for SQL Server 2012/2014, part 4

Part 4 – scripts for the previous parts

Below are scripts for most of the SQL Server specific settings I discussed in the previous three parts. Please note that not quite everything is included (trace flags, instant file initialization, etc.).

-- Set memory amount
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'max server memory', 40000;
GO
RECONFIGURE;
GO

-- Turn on optimize for ad hoc workloads
EXECUTE sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
EXECUTE sp_configure 'optimize for ad hoc workloads', 1;
GO
RECONFIGURE;
GO
-- Turn on DAC

EXECUTE sp_configure 'remote admin connections', 1;
GO
RECONFIGURE;
GO

-- Adjusting TEMPDB
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev', SIZE = 100MB , FILEGROWTH = 50MB);
GO

-- Script to create TEMPDB files like the existing one
USE [master]
GO

DECLARE @cpu_count     INT,
        @file_count    INT,
        @logical_name  SYSNAME,
        @file_name     NVARCHAR(520),
        @physical_name NVARCHAR(520),
        @alter_command NVARCHAR(MAX)

SELECT @physical_name = physical_name
FROM   tempdb.sys.database_files
WHERE  name = 'tempdev'

SELECT @file_count = COUNT(*)
FROM   tempdb.sys.database_files
WHERE  type_desc = 'ROWS'

-- Target number of data files; adjust to the number of cores, capped at 8
SELECT @cpu_count = 8

WHILE @file_count < @cpu_count
BEGIN
    SELECT @logical_name = 'tempdev' + CAST(@file_count + 1 AS NVARCHAR)

    SELECT @file_name = REPLACE(@physical_name, 'tempdb.mdf', @logical_name + '.ndf')

    SELECT @alter_command = 'ALTER DATABASE [tempdb] ADD FILE ( NAME =N'''
                          + @logical_name + ''', FILENAME =N'''
                          + @file_name
                          + ''', SIZE = 100MB, FILEGROWTH = 50MB )'

    PRINT @alter_command

    EXEC sp_executesql @alter_command

    SELECT @file_count = @file_count + 1
END;

--Create operator DBATeam
USE [msdb]
GO
EXEC msdb.dbo.sp_add_operator @name=N'DBATeam',
@enabled=1,
@pager_days=0,
@email_address=N'foo@bar.com; gazonk@bar.com';
GO
--Add alerts (@notification_method = 7 means e-mail + pager + net send)

USE [msdb]
GO
EXEC msdb.dbo.sp_add_alert @name=N'Severity 016',
@message_id=0,
@severity=16,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 016', @operator_name=N'DBATeam', @notification_method = 7;
GO

EXEC msdb.dbo.sp_add_alert @name=N'Severity 017',
@message_id=0,
@severity=17,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 017', @operator_name=N'DBATeam', @notification_method = 7;
GO

EXEC msdb.dbo.sp_add_alert @name=N'Severity 018',
@message_id=0,
@severity=18,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 018', @operator_name=N'DBATeam', @notification_method = 7;
GO

EXEC msdb.dbo.sp_add_alert @name=N'Severity 019',
@message_id=0,
@severity=19,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 019', @operator_name=N'DBATeam', @notification_method = 7;
GO

EXEC msdb.dbo.sp_add_alert @name=N'Severity 020',
@message_id=0,
@severity=20,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 020', @operator_name=N'DBATeam', @notification_method = 7;
GO

EXEC msdb.dbo.sp_add_alert @name=N'Severity 021',
@message_id=0,
@severity=21,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 021', @operator_name=N'DBATeam', @notification_method = 7;
GO

EXEC msdb.dbo.sp_add_alert @name=N'Severity 022',
@message_id=0,
@severity=22,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 022', @operator_name=N'DBATeam', @notification_method = 7;
GO

EXEC msdb.dbo.sp_add_alert @name=N'Severity 023',
@message_id=0,
@severity=23,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 023', @operator_name=N'DBATeam', @notification_method = 7;
GO

EXEC msdb.dbo.sp_add_alert @name=N'Severity 024',
@message_id=0,
@severity=24,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 024', @operator_name=N'DBATeam', @notification_method = 7;
GO

EXEC msdb.dbo.sp_add_alert @name=N'Severity 025',
@message_id=0,
@severity=25,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000';
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Severity 025', @operator_name=N'DBATeam', @notification_method = 7;
GO

EXEC msdb.dbo.sp_add_alert @name=N'Error Number 823',
@message_id=823,
@severity=0,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Error Number 823', @operator_name=N'DBATeam', @notification_method = 7;
GO

EXEC msdb.dbo.sp_add_alert @name=N'Error Number 824',
@message_id=824,
@severity=0,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Error Number 824', @operator_name=N'DBATeam', @notification_method = 7;
GO

EXEC msdb.dbo.sp_add_alert @name=N'Error Number 825',
@message_id=825,
@severity=0,
@enabled=1,
@delay_between_responses=60,
@include_event_description_in=1,
@job_id=N'00000000-0000-0000-0000-000000000000'
GO

EXEC msdb.dbo.sp_add_notification @alert_name=N'Error Number 825', @operator_name=N'DBATeam', @notification_method = 7;
GO
--Set failsafe operator
USE [msdb]
GO
EXEC master.dbo.sp_MSsetalertinfo @failsafeoperator=N'DBATeam',
@notificationmethod=1;
GO
-- Set database mail profile
USE [msdb]
GO
EXEC msdb.dbo.sp_set_sqlagent_properties @databasemail_profile=N'MailProfile';
GO
-- Set SQLAgent to use database mail
USE [msdb]
GO
EXEC msdb.dbo.sp_set_sqlagent_properties @email_save_in_sent_folder=1,
@use_databasemail=1;
GO
-- Configure parallelism and backup compression settings.

EXEC sys.sp_configure N'show advanced options', N'1';
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sys.sp_configure N'cost threshold for parallelism', N'50';
GO
EXEC sys.sp_configure N'max degree of parallelism', N'2';
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sys.sp_configure N'show advanced options', N'0';
GO
RECONFIGURE WITH OVERRIDE;
GO
EXEC sys.sp_configure N'backup compression default', N'1';
GO
RECONFIGURE WITH OVERRIDE;
GO
-- Adjust MODEL
USE [master]
GO
ALTER DATABASE [model] MODIFY FILE ( NAME = N'modeldev', SIZE = 100MB , FILEGROWTH = 50MB);
GO
ALTER DATABASE [model] MODIFY FILE ( NAME = N'modellog', SIZE = 100MB , FILEGROWTH = 50MB);
GO
-- Example of creating the DBADB database and a user for replication

USE [master]
GO

CREATE DATABASE [DBADB]
ON PRIMARY
( NAME = N'DBADB', FILENAME = N'D:\MSSQL\Data\DBADB.mdf' , SIZE = 102400KB , MAXSIZE = UNLIMITED, FILEGROWTH = 51200KB )
LOG ON
( NAME = N'DBADB_log', FILENAME = N'E:\MSSQL\Log\DBADB_log.ldf' , SIZE = 10240KB , MAXSIZE = 2048GB , FILEGROWTH = 51200KB )
GO

USE [master]
GO
CREATE LOGIN [dba_repl] WITH PASSWORD=N'xxxxxxxx', DEFAULT_DATABASE=[DBADB], CHECK_EXPIRATION=OFF, CHECK_POLICY=OFF
GO
USE [DBADB]
GO
CREATE USER [dba_repl] FOR LOGIN [dba_repl]
GO
USE [DBADB]
GO
ALTER ROLE [db_owner] ADD MEMBER [dba_repl]
GO