Quick recap on VLFs: a SQL Server log file is logically subdivided into smaller files called Virtual Log Files, or VLFs. Excellent background on the whats, hows, and whys of VLFs can be found on Brent Ozar’s site and on SQLSkills.com in posts by Kimberly Tripp and Paul Randal. The inspiration for this post is a great post by David Levy, where he provides a simple script to fix a high VLF count. David’s script is great, but it has a couple of gotchas that I have attempted to address. More on that in a bit…
Suffice it to say that (as always) we’re looking for the Goldilocks solution: too many or too few VLFs cause problems. With too many, the database is constantly doing maintenance on the VLFs as it cycles through them. With too few, that maintenance (when it happens) takes a long time. The Goldilocks solution is to find the balance between the number of VLFs and their size. What constitutes too many or too few VLFs depends on many factors: physical size, workload, system performance, and so on.
So… you’re here because you’re at least pretty sure that your VLF count is too high. Hopefully you’ve read the info I’ve linked to, and you know that to get rid of your excess VLFs you must first shrink your log file as much as possible. Once the log is shrunk, you then manually grow it back to the proper size to achieve a manageable VLF size and count.
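For reference, a minimal sketch of those two steps, assuming a database named MyDB with a log file named MyDB_log (both placeholders; substitute your own names and target size):

```sql
USE MyDB;

-- Check the current VLF count: DBCC LOGINFO (undocumented but long-standing)
-- returns one row per VLF in the current database's log
DBCC LOGINFO;

-- Step 1: shrink the log file as small as possible
DBCC SHRINKFILE (N'MyDB_log', 1);

-- Step 2: grow it back to the target size (SIZE is the new total size in MB)
ALTER DATABASE MyDB
MODIFY FILE (NAME = N'MyDB_log', SIZE = 8000MB);
```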
I’m not gonna lie: doing this is a pain. 24×7 environments don’t like to be brought down or slowed down; this process WILL slow things down at a minimum, and it should really be done during a scheduled maintenance outage. That said, the steps aren’t difficult. Many DBAs like to do them manually so they have control over every step. I have no problem with that, but I also like to automate known processes so I can move on as quickly as possible. Enter the automation:
Let’s start with David’s script. It really is great: it shrinks the log file, then re-grows it to your specified size without any issues. The one big gotcha I found is that it does the growth in a single chunk. If your target is a 50GB log file and you grow from 1GB to 50GB in one shot, you’re going to end up with 16 new VLFs of roughly 3GB each (plus the remaining VLFs from your original 1GB). 3GB feels a bit big to me, even if you’re running all flash. I could be wrong, but it doesn’t pass my gut check, and the problem only gets worse as your target size gets bigger.
Of course, the obvious workaround is to run the growth portion of the script multiple times, stepping up the target size each time, right? That definitely gets around the single-growth issue, but it negates the gains we got from automation.
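Done by hand, that stepped growth looks something like this (database and file names are placeholders, and the 4GB step is just an illustrative increment):

```sql
-- Grow in ~4GB steps so each growth event produces reasonably sized VLFs.
-- SIZE is the new total size, not the increment.
ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 4000MB);
ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 8000MB);
ALTER DATABASE MyDB MODIFY FILE (NAME = N'MyDB_log', SIZE = 12000MB);
-- ...and so on, until you reach the target size
```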
The other issue I saw is really more a matter of personal preference. David’s script leaves the database in the full recovery model and only shrinks once. If the active portion of the log is currently in one of the last VLFs, the shrink won’t do you much good. Again, you’re back to running that portion multiple times by hand.
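When a shrink stalls for that reason, nudging the log head forward and shrinking again usually helps. In the simple recovery model a checkpoint lets SQL Server mark inactive VLFs as reusable; in full recovery a log backup does the same job. A hedged sketch (names are placeholders):

```sql
USE MyDB;

-- In SIMPLE recovery, a checkpoint allows inactive VLFs to be reused
CHECKPOINT;

-- Try the shrink again now that the active VLF may have moved
DBCC SHRINKFILE (N'MyDB_log', 1);

-- Rows with Status = 2 in DBCC LOGINFO output are the VLFs still in use
DBCC LOGINFO;
```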
So, my take on the process… I’ve added loops to control the shrink and growth phases. Currently the shrink loop runs until one shrink operation accomplishes nothing (old size = new size), but I plan to refine that later. You can specify the growth increment to control VLF size; that number, combined with the max size, lets the growth loop run until complete. I set the recovery model to SIMPLE to aid the shrink process, then set it back to FULL once complete. I only take that step because this script will only be run during scheduled maintenance.
--Initialize variables provided by user (below)
declare @dbname sysname,       --Database name
    @shrinkdelay char(8),      --Delay time between shrink rounds. Format: hh:mm:ss
    @shrinksizeMB int,         --Target minimum size in MB to shrink to. The shrink process may not reach this size
    @growthsizeMB int,         --Size in MB that each growth cycle should use
    @maxsizeMB int;            --Maximum size that the script will grow the log file to

--Set values for use
select @dbname = '<database>',
    @shrinksizeMB = 50,
    @growthsizeMB = 4000,
    @maxsizeMB = 50000,
    @shrinkdelay = '00:00:05';

--Initialize variables handled by the script
declare @filename sysname,     --Log file name
    @oldsizeMB int,            --Stores the size after the last shrink. Used for loop checks
    @currsizeMB int,           --Stores the current size of the log file
    @cmd nvarchar(512);        --Used to create and execute dynamic SQL

--Get initial values
select @filename = name,
    @oldsizeMB = 0,
    @currsizeMB = (size*8)/1024
from sys.master_files
where DB_NAME(database_id) = @dbname and type = 1;

print 'BEGIN PROCESS';
print 'Setting Recovery Model to SIMPLE';

--Set DB to simple recovery to minimize log activity during the shrink
set @cmd = 'ALTER DATABASE ['+@dbname+'] SET RECOVERY SIMPLE;';
exec(@cmd);

--Shrink log file until it reaches the target size or a pass makes no progress
print 'Beginning shrink of file: ' + @filename + ' from ' + cast(@currsizeMB as nvarchar) + ' MB.';
while ((@currsizeMB >= @shrinksizeMB) and (@currsizeMB <> @oldsizeMB))
begin
    set @cmd = 'use ['+@dbname+']; dbcc shrinkfile(N'''+@filename+''', 0, TRUNCATEONLY);';
    exec(@cmd);
    set @oldsizeMB = @currsizeMB;
    select @currsizeMB = (size*8)/1024
    from sys.master_files
    where DB_NAME(database_id) = @dbname and type = 1;
    print 'Current Size = ' + cast(@currsizeMB as nvarchar) + ' MB.';
    waitfor delay @shrinkdelay;
end
print 'Shrink Process Complete.';

--Set file growth size
print 'Setting file AutoGrowth rate.';
set @cmd = 'ALTER DATABASE ['+@dbname+'] MODIFY FILE ( NAME = N'''+@filename+''', FILEGROWTH = '+cast(@growthsizeMB as nvarchar)+' );';
exec(@cmd);

--Set DB back to full recovery model
print 'Setting Recovery Model to FULL.';
set @cmd = 'ALTER DATABASE ['+@dbname+'] SET RECOVERY FULL;';
exec(@cmd);

select @currsizeMB = (size*8)/1024
from sys.master_files
where DB_NAME(database_id) = @dbname and type = 1;

--Manually grow the file in @growthsizeMB steps, capping the final step at @maxsizeMB
print 'Beginning Manual Growth of file';
while (@currsizeMB < @maxsizeMB)
begin
    set @cmd = 'ALTER DATABASE ['+@dbname+'] MODIFY FILE ( NAME = N'''+@filename+''', SIZE = '
        + cast(case when @currsizeMB + @growthsizeMB > @maxsizeMB
                    then @maxsizeMB
                    else @currsizeMB + @growthsizeMB end as nvarchar) + ' );';
    exec(@cmd);
    select @currsizeMB = (size*8)/1024
    from sys.master_files
    where DB_NAME(database_id) = @dbname and type = 1;
    print 'Current Size = ' + cast(@currsizeMB as nvarchar) + ' MB.';
end
print 'PROCESS COMPLETE';
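Once the script finishes, it’s worth confirming the result. A quick check, assuming again a placeholder database named MyDB:

```sql
USE MyDB;

-- One row per VLF (undocumented but long-standing command)
DBCC LOGINFO;

-- On SQL Server 2016 SP2 and later, the documented equivalent
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'MyDB'));
```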
There’s definitely more work to be done to make this script bullet-proof… I’m open to suggestions. I’ll post updates as I make improvements.