Migrating databases, part 1

Most of us poor souls who for some reason enjoy this line of work called “database administration” get to meet “Mr. Migration” from time to time. I’ve lost count of the number of customers in need of moving one or more (most often A LOT) of databases from one server to another. The reasons are as many as the databases themselves: licensing issues, a new underlying design, performance problems, or frankly anything under the sun. Since the ball usually lands squarely in our court when it comes to pleasing all members of the project – be it the end users who don’t want any downtime, the project managers who don’t want any risk or the sysadmins who don’t want to do any work ( 😀 ) – I decided to write down my different takes on the issue. Basically there are four ways to transfer data, each with its own specific challenges. This is part 1, covering detach/attach and backup/recovery.

As always when doing I/O in SQL Server, it is well worth mentioning alignment and NTFS allocation unit size. By default, Windows formats a disk with an NTFS cluster size of 4 KB. Since SQL Server generally works with 64 KB blocks (extents of 8 pages à 8 KB each), this means an increased load on the I/O system – up to 16(!) I/O operations per SQL Server block retrieved. This can lead to serious performance degradation before we’re even out of the gate, so the best practice is to set the allocation unit size to 64 KB. Trouble is, you have to reformat the drive to do it…
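If you’re setting up new drives (or get a window to reformat an existing one), something like the below does the trick – the drive letter and volume label are of course placeholders for your own:

format E: /FS:NTFS /A:64K /V:SQLDATA /Q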
Alignment used to be a much larger problem than it is today, but it still rears its ugly head from time to time. Windows Server 2008 and above (usually) take care of this by themselves, but always double-check your alignment.
Checking the NTFS allocation size is done by running the below in an administrator-level command prompt, changing the drive letter as needed:

fsutil fsinfo ntfsinfo e:

and looking for Bytes per Cluster.

Checking the alignment can be done by running the below in an administrator-level command prompt and looking for DeviceID and StartingOffset:

wmic partition list full

Detach/attach

The simplest migration technique I know of is probably an old-fashioned detach-copy-attach. First of all, let’s get one thing out of the way: due to the way SQL Server is designed, most of the time the logins are not included in the backups or database file copies – they live in the master database of the source server. (I say most of the time, since SQL Server 2012 and onwards provides for a “contained” database, which can authenticate its own users at the database level without server logins.) This lack of logins is a problem associated with every type of migration I’ll go through here, so keep it in mind.
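As a quick sanity check after any of these migrations, something like the query below – run in the migrated database on the destination server – lists database users whose SIDs no longer match any server login. The catalog views are standard; only the database you run it in is your own:

USE [MigratedDB]
GO
SELECT dp.name AS orphaned_user, dp.type_desc
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
WHERE sp.sid IS NULL
AND dp.type IN ('S', 'U', 'G')  -- SQL, Windows and Windows group users
AND dp.principal_id > 4         -- skip dbo, guest, sys and INFORMATION_SCHEMA
GO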

Detaching a database means excising it from the master database of the source server, copying the actual database and transaction log files to the destination, and plugging them into the master database of the destination server. This means no extra time for backing up or restoring the data itself, and in the case of a move, no need for extra disk space. The detach and attach operations always take the same time regardless of database size. In my opinion, that’s where the appeal ends. Consider the scenario where the database detaches fine but for some reason won’t attach. Yes, it can happen, however rarely, and no, it’s neither a great time to troubleshoot nor very good for your blood pressure. I’ve been a DBA for quite some time, and I always strive to keep everything redundant. In the detach/attach scenario, I have the potential to saw off the branch of the tree I’m sitting on, and most of the time I don’t enjoy falling unceremoniously to the ground as this does nothing for my image.
Also consider that the files are copied in their entirety – even if they consist of 99% free space. Nothing is done to the data/transaction log files along the way, so a top tip is to make sure you’ve done your housekeeping well before embarking on the detach/attach journey.
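For completeness, here’s a minimal sketch of the whole sequence – the database name and file paths are made up, so adjust them to your environment:

USE master
GO
-- On the source: kick everyone out, then detach
ALTER DATABASE [MigrateMe] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
EXEC sp_detach_db @dbname = N'MigrateMe'
GO

-- Copy the .mdf/.ldf files to the destination, then attach them there:
CREATE DATABASE [MigrateMe]
ON (FILENAME = N'D:\MSSQL\Data\MigrateMe.mdf'),
(FILENAME = N'D:\MSSQL\Log\MigrateMe_log.ldf')
FOR ATTACH
GO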
Onward to a more elegant migration path instead!

Backup/recovery

Another solution with very low risk is a simple backup and recovery. The database(s) are backed up on the source system, transferred over to the destination and restored. There are a few issues that might sink this migration path, though – downtime and the need for a lot of extra disk space come to mind. The disk space issue can be somewhat alleviated using SQL Server backup compression, available from SQL Server 2008 Enterprise Edition and from SQL Server 2008 R2 Standard Edition. It is turned off by default, but can yield surprisingly good results when turned on. Try it out and see what the result is for your specific data.
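If you find yourself wanting compression on everything, it can also be made the instance-wide default – a small sketch, assuming you have permission to change the server configuration:

EXEC sp_configure 'backup compression default', 1
RECONFIGURE
GO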
The downtime issue is a bit more difficult to handle. There will be downtime – from the moment you start the backup, through the transfer to the destination host, until the restore finishes. There are ways to shorten the backup/restore time, but before we start fiddling with the knobs we first need to find out what settings are used by default. This is easily done via two trace flags:

USE master
GO
DBCC TRACEON (3605, -1)  -- send trace output to the SQL Server error log
DBCC TRACEON (3213, -1)  -- log backup/restore configuration parameters
GO

These flags will log the exact backup settings used to the SQL Server error log (remember to turn them off again with DBCC TRACEOFF when you’re done). Armed with the information about what settings are currently in use, let’s see how we can increase the backup speed.

1. Use compression – this shortens the backup time and the restore time. The actual savings depend entirely on what kind of data you have and how compressible it is. But do try compression as it generally rocks.

BACKUP DATABASE [BackupPerfTest] 
TO 
DISK = N'S:\MSSQL\Backup\BackupPerfTest.bak'
WITH FORMAT, INIT, MEDIANAME = N'BackupTest', NAME = N'BackupTestFull', SKIP, NOREWIND, 
NOUNLOAD, STATS = 1, COMPRESSION
GO

2. Set BUFFERCOUNT – this specifies the total number of I/O buffers that will be used for the backup operation. The total space used by the backup buffers is buffercount * maxtransfersize – 400 buffers at the default 1 MB transfer size, for instance, claims around 400 MB. Increasing BUFFERCOUNT can significantly increase your backup performance, but keep in mind that this memory comes out of the SQLSERVR.EXE process. As always, test this out before throwing it into production.

BACKUP DATABASE [BackupPerfTest] 
TO 
DISK = N'S:\MSSQL\Backup\BackupPerfTest.bak'
WITH FORMAT, INIT, MEDIANAME = N'BackupTest', NAME = N'BackupTestFull', SKIP, NOREWIND, 
NOUNLOAD, STATS = 1, COMPRESSION, BUFFERCOUNT=400
GO

3. Set MAXTRANSFERSIZE – this setting specifies the largest unit of transfer used between SQL Server and the backup media.
The possible values are multiples of 64 KB ranging up to 4194304 bytes (4 MB). The default is 1 MB. This is also displayed in the log with the trace flags above.

BACKUP DATABASE [BackupPerfTest] 
TO 
DISK = N'S:\MSSQL\Backup\BackupPerfTest.bak'
WITH FORMAT, INIT, MEDIANAME = N'BackupTest', NAME = N'BackupTestFull', SKIP, NOREWIND, 
NOUNLOAD, STATS = 1, COMPRESSION, BUFFERCOUNT=400, MAXTRANSFERSIZE=2097152
GO

4. Set BLOCKSIZE – this setting specifies the physical block size for the backup files, in bytes. The trouble is that it defaults to 512 bytes for everything that isn’t a tape…

BACKUP DATABASE [BackupPerfTest] 
TO 
DISK = N'S:\MSSQL\Backup\BackupPerfTest.bak'
WITH FORMAT, INIT, MEDIANAME = N'BackupTest', NAME = N'BackupTestFull', SKIP, NOREWIND, 
NOUNLOAD, STATS = 1, COMPRESSION, BUFFERCOUNT=400, MAXTRANSFERSIZE=2097152, BLOCKSIZE=65536
GO

5. Increase the number of backup files. This does complicate the restore somewhat (see the restore example below), but the rewards vastly compensate for this. As long as your disk subsystem can keep up (especially if you’re running SSDs), this setting alone can give you a serious performance boost.

BACKUP DATABASE [BackupPerfTest] 
TO 
DISK = N'S:\MSSQL\Backup\BackupPerfTest1.bak',
DISK = N'S:\MSSQL\Backup\BackupPerfTest2.bak',
DISK = N'S:\MSSQL\Backup\BackupPerfTest3.bak',
DISK = N'S:\MSSQL\Backup\BackupPerfTest4.bak'
WITH FORMAT, INIT, MEDIANAME = N'BackupTest', NAME = N'BackupTestFull', SKIP, NOREWIND, 
NOUNLOAD, STATS = 1, COMPRESSION, BUFFERCOUNT=400, MAXTRANSFERSIZE=2097152, BLOCKSIZE=65536
GO
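For reference, the matching restore is simply a matter of listing all the files:

RESTORE DATABASE [BackupPerfTest]
FROM
DISK = N'S:\MSSQL\Backup\BackupPerfTest1.bak',
DISK = N'S:\MSSQL\Backup\BackupPerfTest2.bak',
DISK = N'S:\MSSQL\Backup\BackupPerfTest3.bak',
DISK = N'S:\MSSQL\Backup\BackupPerfTest4.bak'
WITH RECOVERY, STATS = 1
GO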

The only way I know of to increase the transfer speed between the hosts is to transfer less data – this is where compression comes in. Depending on the data, I’ve seen between 30% and 80% reduction in backup file size. Less data to transfer, and a faster restore, thanks to compression.

Combining all the ideas above with a differential backup is another interesting way to shorten the actual downtime. Let’s assume, for the sake of argument, that the database is 1 TB and it takes 1 hour to back up, transfer and restore. The idea is to first do a full backup, transfer it and restore it without recovering the database – this is done without downtime, whenever it is convenient for everybody involved. When it is time for the actual migration, stop the application, take a differential backup, transfer that and restore it on top of the full backup. Depending on the transaction volume this will most likely be a much smaller backup file, meaning that the entire operation can be done much faster. This is nothing new in any way, but I’m pointing it out so no one forgets the old school stuff.
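A minimal sketch of the idea, again with a made-up database name and file paths:

-- Ahead of time, on the destination: restore the full backup but leave
-- the database in the restoring state so it can still accept a differential
RESTORE DATABASE [BigDB]
FROM DISK = N'S:\MSSQL\Backup\BigDB_full.bak'
WITH NORECOVERY, STATS = 5
GO

-- At migration time, on the source: stop the application, take the differential
BACKUP DATABASE [BigDB]
TO DISK = N'S:\MSSQL\Backup\BigDB_diff.bak'
WITH DIFFERENTIAL, INIT, COMPRESSION
GO

-- Transfer the (much smaller) file, then on the destination:
RESTORE DATABASE [BigDB]
FROM DISK = N'S:\MSSQL\Backup\BigDB_diff.bak'
WITH RECOVERY
GO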

While we’re on the subject of differential backups, what if there was a way to keep the destination host more or less continuously in sync? Well, that’s the subject of the next post!
