T-SQL Tuesday

Extending automation of SQL recoveries using Ola Hallengren’s backup solution

“Automation is inherently good.” This might be one of the only phrases you can get all DBAs to agree on without that dreaded “it depends” that we DBAs like to throw around so much.

This month's T-SQL Tuesday is about automation, and I thought I'd write about extending existing automation.  One of my favorite scripts for automation is Ola Hallengren's Backup & Maintenance solution.  Ola's scripts are a fantastic way to automate highly configurable backups and maintenance on your SQL Server instances.  If you're not using them, you should seriously consider why not.

This solution serves as an outstanding base, but like anything else, it can be useful to tweak things a bit.  Extending the initial automation provided by his scripts is what this post is all about.

In particular, I've modified Ola's scripts to generate the files needed to restore all of the databases that have been backed up with his solution, giving you the ability to easily restore the whole server in the case of a disaster.  You could just as easily pull out a single database and restore only it.  This script is currently written only for LiteSpeed, since that's what I use for backups.  However, it could easily be changed to support native backups or any of the other backup products that Ola's scripts can be configured for.  Perhaps I'll work on those in the future if it would be useful.
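To give a sense of what that change would involve, here is a minimal sketch of rewriting a native BACKUP command string into a RESTORE statement.  To be clear, this is my illustration only: the @cmd value, database name, and file path below are invented examples, not anything Ola's scripts or the CommandLog table actually produce.

-- Hypothetical sketch: @cmd is an invented example, not a real CommandLog entry
DECLARE @cmd NVARCHAR(MAX) =
    N'BACKUP DATABASE [MyDb] TO DISK = N''H:\SERVERNAME\MyDb_FULL.bak'' WITH CHECKSUM';

SELECT REPLACE(
           REPLACE(LEFT(@cmd, CHARINDEX(' WITH ', @cmd)),  -- keep everything before the backup-only options
                   'BACKUP DATABASE', 'RESTORE DATABASE'),
           ' TO DISK', ' FROM DISK')
       + 'WITH NORECOVERY, REPLACE;';

The same REPLACE-based string manipulation is what the LiteSpeed version below relies on, so swapping that expression out is most of the work.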

The idea is that every time you take a backup, the backup job creates a .sql file in the backup directory on the server's filesystem that can be used to restore to the point of the backups that were just taken.

This solution includes three pieces: an additional stored procedure, an additional step in each of the two backup jobs to execute that stored procedure, and lastly a step in the cleanup job to remove restore scripts that have aged off the filesystem.

A couple of notes of caution:

As with anything you find on the internet, please use at your own risk: try it in a development/test system first and proceed with caution.

This script makes several assumptions, including:

  • That you've installed Ola's stored procedures into the master database
  • That you're using LiteSpeed
  • That logging to the CommandLog table is enabled

The stored procedure is relatively simple and accepts a single parameter, @type.  Passing 'LOG' generates the script as of the last log backup taken; any other value (I happen to use 'FULL') generates the script based on the last full backup.

CREATE PROCEDURE [dbo].[GenerateRestoreScript] (@type NCHAR(30) = 'LOG')
AS 
DECLARE @ID INT
DECLARE @DB NVARCHAR(128)

SET NOCOUNT ON

SELECT  @ID = MAX(database_id)
FROM    sys.databases

IF @type = 'LOG' SET @type = 'xp_backup_log' ELSE SET @type = ''
--These lines are intentionally generated without comment markers as a precaution (so the restore script raises an error unless you deliberately remove them)
        SELECT 'ALERT-ALERT-ALERT-ALERT-ALERT-ALERT-ALERT-ALERT-ALERT-ALERT-ALERT-ALERT-ALERT-'
        SELECT 'PLEASE BACKUP THE TAIL OF THE LOG'
        SELECT 'OTHERWISE THIS COULD BECOME A RGE (GOOGLE THE ACRONYM!)'
        SELECT 'IF YOU ARE OK REPLACING THE DB AND LOSING DATA IN THE TAIL LOG RUN THIS SCRIPT WITHOUT THESE COMMENTS'
        SELECT 'RAISERROR(N''ARE YOU SURE YOU WANT TO DO THIS?'', 25,1) WITH LOG;'
        SELECT '--------'

WHILE @ID > 2 -- walk the database_ids from highest down, skipping tempdb (2) and master (1)
    BEGIN

        SET @DB = NULL -- don't reuse the previous name if this database_id no longer exists (dropped database)

        SELECT  @DB = NAME
        FROM    sys.databases
        WHERE   database_id = @ID

        SELECT  @ID = @ID - 1

        SELECT '----' + @DB + '-----------------------------------------------------'  

        SELECT 'EXECUTE ' 
        + REPLACE(Command, '_backup_', '_restore_')
        + ', @filenumber = 1, @with = N''' 
        + CASE WHEN rn <> 1 THEN 'NO' ELSE '' END 
        +  'RECOVERY'''
        + CASE WHEN CommandType = 'xp_backup_database' THEN ', @with = N''REPLACE'';' ELSE ';' END

         FROM 
        (
        SELECT 
            SUBSTRING(LEFT (Command, CHARINDEX(''', @with =',Command)),CHARINDEX('[master]',Command),LEN(Command)) AS Command
            , ROW_NUMBER() OVER (ORDER BY cl.ID DESC) AS rn
            , CommandType
        FROM    [master].[dbo].[CommandLog] cl
        WHERE   cl.DatabaseName = @DB
                AND (cl.CommandType = 'xp_backup_database' OR cl.CommandType = @type)

                AND cl.ID >= ( SELECT   MAX(ID)
                               FROM     CommandLog c
                               WHERE    CommandType IN ( 'xp_backup_database' )
                                        AND cl.DatabaseName = c.DatabaseName
                             )
        ) AS rntab

        ORDER BY rn DESC                     

    END
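To try the procedure manually, run it from a query window and inspect the result set before wiring it into the jobs:

-- Script a restore to the point of the last log backups (the default)
EXEC [dbo].[GenerateRestoreScript] @type = 'LOG';

-- Script a restore to the point of the last full backups only
EXEC [dbo].[GenerateRestoreScript] @type = 'FULL';

For each database you should get a separator line followed by the EXECUTE xp_restore_database/xp_restore_log commands, oldest backup first, with every restore except the last generated WITH NORECOVERY.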

To execute the stored procedure, add the following as an additional CmdExec job step on the full backup job (make sure to change the directory where you want the .sql files stored; H:\SERVERNAME below):

sqlcmd -l 30 -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -y 0 -b -Q "EXEC [dbo].[GenerateRestoreScript] 'FULL'" -o"H:\SERVERNAME\DRFULL_$(ESCAPE_SQUOTE(STRTDT))_$(ESCAPE_SQUOTE(STRTTM))_RESTORE.sql" -w50000

Likewise, add the following as an additional CmdExec job step on the transaction log backup job (again, change the directory where you want the .sql files stored; H:\SERVERNAME below):

sqlcmd -E -S $(ESCAPE_SQUOTE(SRVR)) -d master -y 0 -b -Q "EXEC [dbo].[GenerateRestoreScript]" -o"H:\SERVERNAME\DRLOG_$(ESCAPE_SQUOTE(STRTDT))_$(ESCAPE_SQUOTE(STRTTM))_RESTORE.sql" -w50000

Lastly, this CmdExec job step needs to be added to the output file cleanup job to clean up old .sql files (again, change the directory where the .sql files are stored; H:\SERVERNAME below):

Note: as configured, this keeps the files from the past 3 days, but exactly which files are kept depends on when the cleanup job is scheduled.

cmd /q /c "For /F "tokens=1 delims=" %v In ('ForFiles /P "H:\SERVERNAME" /m *RESTORE.sql /d -3 2^>^&1') do if EXIST "H:\SERVERNAME"\%v echo del "H:\SERVERNAME"\%v& del "H:\SERVERNAME"\%v"
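If you'd like to preview what the cleanup will remove before adding the step, you can run the ForFiles portion on its own; this version only echoes the names of matching files older than 3 days (again, adjust H:\SERVERNAME):

ForFiles /P "H:\SERVERNAME" /m *RESTORE.sql /d -3 /c "cmd /c echo @file"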

I have these steps scripted into Ola's original solution .sql so the folder names are set properly and job creation is completely automated.  I'll leave that part of extending automation to you, dear reader, as homework.

T-SQL Tuesday #19 Wrapup

Huge thanks go out to everyone who participated in this month's T-SQL Tuesday.

I apologize for the tardiness of this post; it's been a busy week with PASS finalizing the Summit sessions.

As always, there were some awesome posts this month!  If you've ever wondered why you need to prepare to recover your databases, or your life for that matter, I suggest reading through the huge amount of content below.

The good stuff

Rob Farley (B | T) writes us a two-part post, with half being technical about migrations, downtime, and high availability, and the other half being personal, about dealing with and controlling life's disasters.  Hats off to Rob for pouring it all out there.  (Sometimes it just feels better to write it all down and put it in perspective.)

Noel McKinney (B | T) recounts a bad situation where he played the part of message queue during a human disaster in which a developer's spouse unplugged the telephone in the middle of the night.  (Surprising this didn't cost someone a job.)

John Pertell (B | T) tells us about times he learned lessons the hard way about backups and restores.  His stories hit home for me, and I'm sure they will for most other seasoned DBAs.  I've lost more SAN arrays over the years to firmware flashes than I care to think about, so much so that I cringe when the SAN admin calls and even utters the word firmware.

Robert Davis (B | T) writes about backing up system configurations in the case of a complete server failure.  Good info in one place here about what you would lose if you lost one of the system databases.

Ricardo Leka (B | T) turns in his post letting us know that it's important to have a backup plan but even more important to have a recovery plan!  (His post was in Portuguese, so if I'm way off I blame Google Translate!  Thanks for the post, Ricardo.)

Merrill Aldrich (B | T) reminds us to be aware of blind spots in our companies' recovery scenarios.  He shares some great info about cultures that can cause disasters to be unrecoverable.

Jack Vamvas (B) shows us how he uses PowerShell to gather an inventory of SQL Server info that may be needed in the case of a disaster.

Mark Broadbent (B | T) writes a post about how others' mistakes can often become your problem when corruption lands in your lap.

Muthukkumaran Kaliyamoorthy (B) goes over the various ways you can build HA/DR systems, including clustering, mirroring, replication, etc.

Jason E Bacani (B | T) shows once again that backing up a database is important, but making sure you are backing up what you think you are backing up is even more important.

Bob Pusateri (B | T) recounts a story of a former employer and the problems resulting from an “if it isn't broken, don't fix it” attitude.

Chad Miller (B | T) writes about using PowerShell and CMS to inventory your SQL Servers.

Ryan Adams (B | T) writes some tips about using and configuring mirroring to prevent disasters.

Gail Shaw (B | T) does her best to remind us that disasters aren't just huge world events; most of them involve smaller, more isolated events.  I'd agree with her analysis, and I live in the bullseye of hurricane country!

Nic Cain (B | T) writes about a full-scale disaster at a former place of employment.  I see a running joke in these posts about SAN firmware upgrades being the cause of most DBA disasters.

Robert Pearl (B | T) shares his story of 9/11 and recovering from that disaster.  Things have certainly changed in the years since then.

Amit Banerjee (B | T) gives us 10 key points to keep in mind when thinking about disasters and how best to deal with them.

Pinal Dave (B | T) recounts his early days as a DBA and 4 pieces of wisdom he learned early on.

Steve Jones (B | T) writes about small disasters that aren't natural disasters.  He's right; these types of disasters are considerably more likely than a massive natural disaster.

Thomas Rushton (B | T) shared not one but two posts for this month's edition of T-SQL Tuesday.  He reminds us to test our DR plans and recounts a story of what was likely someone updating every record in a database with the same value.  Which is a common disaster indeed.

Jason Brimhall (B | T) shared a story of three personal disasters.  Included is a good tip about recovering the registered servers in SSMS after a reinstall.

Nick Haslam (B | T) wrote about an experience at a retail organization where a loss of power took out all of the systems.  It seems it's often the small things that get overlooked (not that power is small, but it is often taken for granted).

John Samson (B | T) shared links to his prior posts about DBA responsibilities in planning for recoveries.

Nancy Hidy Wilson (B | T), who lives just up the road from me in Houston, recounts her own personal story from Hurricane Ike.  I learned I need a chainsaw and a tractor to recover from a hurricane.  I was also reminded just how far our modern jobs have come: we can personally experience a disaster, move a few hundred miles away, and continue working our day jobs, since those systems *should* be designed for uptime!

Thanks again to everyone who participated this month! 

Be on the watch for next month's host, and consider participating if you haven't before!

Invitation for T-SQL Tuesday #19 – Disasters & Recovery

Disasters

It's the first week of June, and for those of us living along the Gulf and Atlantic coasts of the US, that brings the beginning of hurricane season.  It also means it's time for this month's installment of T-SQL Tuesday.

This Month's Topic

Hurricane Ike dead ahead

There goes your weekend/month

Disaster Recovery.  This topic is very near and dear to me: I live on a barrier island that was the site of the deadliest natural disaster in US history and was more recently devastated by the third costliest hurricane in history.  Needless to say, preparing for disasters is nearly instinctive to me, which might explain why I'm a DBA, but I digress.  Anything you'd like to blog about related to preparing for or recovering from a disaster is fair game.  Have a great tip you use to keep backups and recoveries running smoothly?  A horrific story of a recovery gone wrong?  Anything else related to keeping your systems online during calamity?  We want to hear it!

My street a month after Hurricane Ike

T-SQL Tuesday info

Originally an idea dreamed up by Adam Machanic (Blog|Twitter), T-SQL Tuesday has become a monthly blog party where the host picks a topic and encourages anyone to write a post on it, then a day or three later produces a roundup post of all the different perspectives from the community.

Rules

  • Your post must be published between 00:00 GMT Tuesday June 14, 2011, and 00:00 GMT Wednesday June 15, 2011
  • Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
  • Trackbacks should work, but if you don’t see one please link to your post in the comments section below so everyone can see your work

Nice to haves!

  • Include a reference to T-SQL Tuesday in the title of your post
  • Tweet about your post using the hashtag #TSQL2sDay
  • Consider hosting T-SQL Tuesday yourself. Adam Machanic keeps the list; if he let me do it, you're bound to qualify!

Check back in a few days to see the roundup post of all the great stories your peers shared.

Database Automagic

This month's T-SQL Tuesday is hosted by a good friend, Pat, over at SQL Asylum.

For this month's entry I decided to keep it short and sweet, following in my Bits N Bytes theme.

The Meta Script

In the true sense of the word automation this really doesn't fit, but in terms of quickly getting something done that would otherwise be a mundane, repetitive task, it can save a world of time.

Let's say we have a list of objects in the Sales schema and a request to grant a user SELECT and INSERT access to those objects.  There are two approaches.  One is to grant SELECT and INSERT on the schema itself, like this:

GRANT SELECT, INSERT ON SCHEMA::Sales TO BusinessUser 

However, you might decide that you only want to grant SELECT and INSERT directly on the tables that exist in the Sales schema today, not on tables that may be created in the future (auditors love to make us do this).

A simple way to automate granting these rights is by writing a script that writes a script, like so:

SELECT 'GRANT SELECT, INSERT ON ' + sch.name + '.' + obj.name + ' TO BusinessUser'
FROM sys.all_objects obj
JOIN sys.schemas sch
    ON obj.schema_id = sch.schema_id
WHERE sch.name = 'Sales'
  AND obj.type = 'U'

This should give you a result set that looks something like the following:

GRANT SELECT, INSERT ON Sales.People TO BusinessUser
GRANT SELECT, INSERT ON Sales.Sales TO BusinessUser

At this point, run the output in a separate query window and voilà, you've automated that grant of permissions.
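If you'd rather skip the copy/paste step entirely, here's a minimal sketch (assuming the same Sales schema and BusinessUser from above) that concatenates the generated statements and executes them directly.  I've added QUOTENAME as cheap insurance against object names that need bracketing:

DECLARE @sql NVARCHAR(MAX) = N'';

-- Build one batch of GRANT statements from the same query as above
SELECT @sql = @sql + 'GRANT SELECT, INSERT ON '
    + QUOTENAME(sch.name) + '.' + QUOTENAME(obj.name)
    + ' TO BusinessUser;' + CHAR(13)
FROM sys.all_objects obj
JOIN sys.schemas sch
    ON obj.schema_id = sch.schema_id
WHERE sch.name = 'Sales'
  AND obj.type = 'U';

EXEC sys.sp_executesql @sql;

Just remember that with this version you lose the chance to eyeball the statements before they run.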

This may not be true “automation” in the sense Pat was looking for, but perfecting the ability to write scripts that write scripts is a huge timesaver.

This year I resolve to…

If you hadn't guessed, today's post is part of this month's T-SQL Tuesday.  This is an interesting topic for me since, as a matter of principle, I usually refuse to make resolutions and the like around the start of a new year.  I like to set goals and work toward them, but I think “resolving” to do something has this nagging way of never turning out how I'd like.  It probably has something to do with the fact that I track goals, but typically only think about resolutions at a single point in time.

So, this year I'll resolve to document a few of my goals for the year.

This year I have only a few professional goals.  Actually, quite a few fewer than usual.  I decided to trim my professional goals down to just a couple, since they are quite large and very open ended.

  1. I'd like to make PASS as responsive as possible to the needs of our SQL community.  This is simply to say that I plan to do what I feel I was elected to do.  Of all the directors, I am as well positioned as anyone to make real change that is visible to the average user of SQL Server.  I will need lots of help to make this happen, and I have no problem asking for that help (watch this space SOON for details).
  2. I want to learn to be a better “manager/leader.”  It takes a different set of skills to lead people than it does to be a DBA and do technical work.  I love the technical work, actually more than the management stuff, but my current roles are requiring more leadership and less technical work.  I need to do better with the details of this and learn to inspire greatness in my teammates.

That's it: two whole goals for the year.  Not much by count, but by effort I'd say these might be some of the loftiest goals I've set in a long time…
