Archive for January, 2010
This one has been sitting on my to-do list for a very, very long time; actually, it's been on my list since nearly the day after the 2009 SQLPASS Summit closed.
Admittedly I'm not a PASS “Chapter Leader”, nor do I attend the meetings in person more than a few times a year (it's a 3-hour round trip to the local group during lunch), but I'm always looking for ways we can improve processes at PASS, and I've heard too many times to count over the years that we don't do a good job of helping chapters connect with speakers, or even of providing a way for chapters to contact potential speakers.
The other day I contacted Andy Warren about an idea I had for getting a simple speaker bureau off the ground reasonably quickly, since I know he’s had that on his mind lately. I’d like to think that his post on the subject was part of the fruits of that brainstorming session, but I'll never know. In order to make some of the things we want to do for the community “easier”, we need to rework (in a small way) the speaker terms and conditions to allow for it.
Every speaker who presents at the PASS Summit is required to sign a contract that protects PASS as well as the speakers; it's pretty simple really, even I can read and understand it. What I'd like to do is integrate a few “optional” opt-in type items that would give PASS the ability to manage the connection between speakers, their submitted abstracts, and the chapters that need speakers. The other day, I had to tell the Appdev VC that I couldn't refer speakers to them, nor could I contact a few speakers on their behalf, because we had never asked if we could contact the speakers in situations like that. I suspect many speakers would be OK with occasional contact from PASS HQ when speakers are needed. This should at least give us the ability to do a better job of making that connection.
I need a couple of people to form a small group and decide how best to accomplish this goal. With any luck, someone out there would also like to lead this small group in making this happen.
Leave a comment here or contact me at if you’re interested in helping.
I get asked quite frequently about the different PASS committees and how a person goes about volunteering for them. I thought I’d take a little time and explain what the program committee does and a little about how they do it.
The program committee
The current program committee could easily be renamed “The Summit Committee”. This group of community volunteers is responsible for most pieces of education at the annual SQLPASS Summit.
Every year PASS sends out a call for volunteers for the program committee (usually in Jan-Feb). This call is actually quite formal, with a few questions asked in an online survey. The answers to these questions are used to match volunteers to tasks within the committee.
The Program Committee Structure 2010
PASS Board Member (2010 Jeremiah Peschka)
PASS HQ Elena Sebastiano & Craig Ellis
Program Manager (2009-2010 Allen Kinsel)
Program Leaders (2 or 3 for large projects)
Abstract review team members
Tasked team members
PASS BOD Member
The program committee always has a BOD member in charge of overseeing everything; they are usually expected to come up with great ideas, keep everything running smoothly, and handle the “problems” as they arise. Actually, the BOD members who get tricked into taking over program are involved in so many decisions I don’t even know what they all are!
PASS HQ
These two are the backbone of “getting things done” and making sure we volunteers stay on task and on schedule. In program, as with most high-profile projects, once the deadlines start they never seem to stop, and if they start to slip it's not good for anyone!
Program Manager
That's my current job; I wish I could find a job description, I'd surely like to see it! Essentially, I like to consider myself the glue that holds the group together and keeps us moving in the right direction. Sort of like a project manager that actually does the work of a project <zing>
Program Leaders
These are high-level volunteers who work to meet the more difficult goals, such as defining criteria and making selections for pre/post-con sessions and spotlight sessions, developing speaker resources, developing better evaluation procedures, and various other similar things.
Abstract review teams
This group of volunteers, usually 11 people, is split up into teams by track (DBA, BID, BIA, AD, PD). This group gets the daunting task of reading and ranking every single submitted abstract. Then they are asked to choose not only the accepted sessions but also the alternate sessions. This process is several months long, and the bulk of the work usually happens from Mar-May.
This year I intend to change the requirements for the volunteers on the committee and split up the work a little more. Every year the program committee is asked to do more work, since the conference grows annually. So, I’ve been looking for ways to split the work up even more. This has two benefits: one, it's less work for any one person or group of people; two, it allows more people to get involved in a great organization.
This year I hope to pull together several task-based groups (with leaders) to do things such as pull the session evaluation data together for all years available (2005 onward), review the session PowerPoints, revamp the speaker terms, design and test our >proposed< new Summit speaker tool, group abstracts, and several other tasks. There should be plenty of work to go around; the biggest issue I normally run into is finding volunteers willing to take on leadership of these tasks, which leads me to my 2010 program goals.
Goals for 2010 — I have one goal other than a successful Summit program: to recruit several people into leadership positions within the program committee. It is my opinion that the only way everything PASS needs to accomplish will get done is if I can find a few good volunteers willing to lead tasks & projects.
There’s a meme going around that I thought I’d take my turn at answering.
Better late than never, I suppose. Work always seems to have a way of getting in the way of posts like this!
It all started with a CAT3 cable
It all started on a dark night in the mid-'90s. I was enrolled in college, sitting in my dorm room trying to connect my brand spanking new Pentium 133MHz computer to our college network so I could partake in what was at that time a huge LAN group playing Warcraft/Diablo/Duke Nukem. The problem was that no one on the campus apparently knew how to connect to the network (yes, it was a smallish campus). The only piece of guidance that could be found was in the welcome doc: “Network connectivity can be established in the bookstore.” After contacting the bookstore and procuring the required 10BaseT network card (~$175), they basically said, “Take this wire and plug it into the wall; everything else will work automatically.” Well, even today we know things rarely work that easily. The cable that was sold to me by the bookstore was a regular phone cable, because apparently the bookstore managers didn’t know any better; it wasn't their fault, though, since the public campus network was less than a year old at that point. Somehow I spent enough time trying to get the correct cable that I was lucky enough to get hooked up with the “campus nerd” who happened to live in the dorm one floor above me. He set me straight, told me where to get the required cable, and handed me a scribbled list with the required connection info. Many late nights and much tinkering later, I was successfully connected. Being a natural tinkerer, I shortly figured out all about the network and what it took to get Win 3.1 and 95 connected. Shortly, I became the “campus nerd”, and when it was apparent to me that I was naturally inclined with computers, and not so much with coursework I wasn't interested in, I quickly gave up school and began bartering computer work.
Then there was a book
A short while later I had landed a job as an all-around network guy. I was doing everything and anything for a relatively small business. One day my boss proudly announced we were going to be getting a new server with a database (SQL 6.5)! Apparently we had outgrown our existing business systems, and the decision had been made to install what was essentially a combined financial/payroll system. A few short months later, in the middle of a payroll processing cycle, our SQL Server decided to do what SQL 6.5 did quite often: it got corrupted. Since I had a grand total of 4 months of experience with SQL, a consultant was called in, and she fixed our problem. More importantly, she brought with her a copy of the latest and greatest SQL book, and as luck would have it, she left it behind. For the next 6 months I studied that book inside and out. A “database geek” was born.
Finally, a chance meeting
In 2004 I was attending my first precon (given by Kimberly Tripp) at my first PASS Summit when I went looking for some lunch and happened to sit with two guys, Pat Wright and Tom Larock, who are to this day two of my closest PASS friends. There is little doubt that the experience of meeting these two and attending the volunteer “roundup” led by Wayne Snyder has had a profound impact on my career (this blog is a testament to that impact). A “volunteer geek” was born. Being a volunteer for PASS and participating in the SQL Server community has taken my skills up at least two notches, and for that I am thankful.
These are the technical moments of my life that led me here. Since I’m nearly the last one to answer this, I thought I'd go ahead and tag my friend Pat Wright, since I noticed he hadn’t answered yet. Otherwise, I have enjoyed reading everyone else’s paths to a very similar outcome!
Photo Courtesy of Darren Hester
Living with a datacenter in Hurricane Alley, we’ve been doing disaster preparedness (recovery) on a small scale for many years, but this year we’ve been working towards recovering all of our assets to an offsite colocation. That part of the decision is easy; the actual method used to do these recoveries is definitely up in the air, and I fully expect our processes to change for the better every time we redo our disaster testing (many times a year going forward).
In exploring the recovery process, we quickly realized that our “hardware failure” recovery documents weren’t going to work effectively in a datacenter failure situation. So, it was time to design a new set of criteria for success. I thought I'd share our thought process and how we plan on tackling this always-fun experience. It's worth mentioning as a side note that no SQL replication is wanted/allowed in our case.
1st thought: Bring up blank OS builds for the database servers, load SQL Server, and patch it to the correct level while the tape restores of the database backups are happening; recover the system databases, then kick off the individual restores (which are scripted with the regular nightly backup jobs).
- Benefits to DBA: clean, repeatable, documentable process that we are mostly in control of.
- Drawbacks: Time-consuming, potential version-match issues, and recovering system databases is always “fun”
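As a rough sketch of what those scripted restores could look like, here is a small generator that builds per-database RESTORE statements from a folder of backup files. This is only an illustration under assumed conventions (one full backup per database, named `<dbname>.bak`, and hypothetical data/log paths); our real restore scripts are produced alongside the nightly backup jobs.

```python
import os

def build_restore_script(backup_dir, data_dir, log_dir):
    """Generate RESTORE DATABASE statements for every .bak file in a folder.

    Assumes one full backup per database named <dbname>.bak, and that the
    logical file names follow the <dbname>/<dbname>_log convention. This is
    a sketch, not our production tooling.
    """
    statements = []
    for filename in sorted(os.listdir(backup_dir)):
        if not filename.lower().endswith(".bak"):
            continue
        dbname = filename[:-4]
        statements.append(
            f"RESTORE DATABASE [{dbname}]\n"
            f"  FROM DISK = N'{os.path.join(backup_dir, filename)}'\n"
            f"  WITH MOVE N'{dbname}' TO N'{os.path.join(data_dir, dbname + '.mdf')}',\n"
            f"       MOVE N'{dbname}_log' TO N'{os.path.join(log_dir, dbname + '_log.ldf')}',\n"
            f"       RECOVERY, REPLACE;"
        )
    return "\n\n".join(statements)
```

The point of scripting it this way is repeatability: the same generator runs against whatever backups made it to tape, so the recovery order and options are identical on every test.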
2nd thought: Use a Windows snapshot to restore the OS/SQL binaries and SQL system databases, then recover the user databases using the aforementioned scripts. This also buys us the nicety of having LiteSpeed already installed.
- Benefits to DBA: Faster; system-level recovery is done in a method that is standard for our systems group
- Drawbacks: system/SQL recovery out of our (DBA) control
Since our systems engineers are already asking to go the snapshot route (because that's common for other application servers), and we expect this method to take less overall time, we are planning on trying that first. Depending on how that test goes, we will likely keep option 1 as a backup plan or potentially try it next time. That's why we’re testing: so that we can make sure we have it right.
As always, there’s more than one way to accomplish the same outcome, so my question is: how do you do off-site disaster recovery (testing)? Or maybe the better question is: do you do disaster recovery testing at all? If not, why?
Even an old dog can learn new tricks
I had an Aha! moment recently. For my entire career as a DBA, I have generally considered aliases for connections a workaround for badly behaving applications. Whenever someone said “alias”, my mind immediately headed to SQL Server client configuration aliases, which I try to avoid if at all possible (since they are configured on each client). It never clicked for me until recently that DNS aliases may be a good solution to a few problems we’re currently experiencing.
For disaster recovery reasons, as well as for manageability reasons, we have decided to start using DNS aliases for every application connection to our database servers. This should give us the luxury of moving databases from server to server without having to reconfigure multiple applications, which would normally be a whole process in itself since the code has already been migrated to production, which is locked.
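To illustrate the idea (the alias and database names here are hypothetical, not our actual standard), the application's connection string targets the DNS alias rather than the physical server, so repointing the alias is the only change needed when a database moves:

```python
def connection_string(alias, database):
    """Build an ODBC-style connection string that targets a DNS alias.

    The alias (e.g. 'salesdb.corp.example.com') is a CNAME we control;
    repointing the CNAME at a new server moves the application without
    touching its locked configuration.
    """
    return (
        f"Driver={{SQL Server Native Client 10.0}};"
        f"Server={alias};Database={database};Trusted_Connection=yes;"
    )

# The application never learns the physical server name:
print(connection_string("salesdb.corp.example.com", "Sales"))
```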
Using DNS aliases should also allow us to swap highly important applications over individually to a remote datacenter, which could have less computing power, without having to switch every application and thus kill the performance of that standby server.
There are certainly limitations to this: if, for instance, you want to move applications from one named instance to another. In our current environment this isn't much of an issue, since many of our production instances are indeed default instances. The other major limitation is that any change will involve a small amount of downtime while the DNS changes are propagated throughout the network.
One more “gotcha” that we've already run into is vendor applications (surprise, surprise) that resolve the DNS name to an IP address and then store that inside the application configuration.
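The difference between the well-behaved and badly behaved cases can be sketched like this (using `localhost` as a stand-in alias, since the real alias names are site-specific): the alias only works if the application resolves it at connect time, rather than resolving once and stashing the IP.

```python
import socket

def resolve_at_connect(alias):
    """Resolve the alias every time a connection is opened, so a DNS
    repoint takes effect on the next connect (subject to DNS TTL)."""
    return socket.gethostbyname(alias)

# Well-behaved: look the alias up fresh for each new connection.
fresh_ip = resolve_at_connect("localhost")

# The gotcha: some vendor apps resolve once (e.g. at install time) and
# store the IP in their configuration. After the database moves, that
# cached IP is stale, while a fresh lookup would follow the alias.
cached_ip = fresh_ip  # frozen here, never refreshed
```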
We decided on working out a naming standard that looks like this:
For direct database access, where an application only connects to one database, the following is used:
For an application (like SharePoint) where many databases are going to be accessed, we change it a bit:
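As a purely hypothetical illustration of this kind of convention (these names and the `corp.example.com` domain are invented for the example, not our actual standard), the two cases might be distinguished by a suffix on the alias:

```python
def direct_alias(app_name, domain="corp.example.com"):
    """Alias for an application that connects to a single database,
    e.g. 'payroll-db.corp.example.com' (hypothetical convention)."""
    return f"{app_name}-db.{domain}"

def shared_alias(app_name, domain="corp.example.com"):
    """Alias for an application (like SharePoint) that accesses many
    databases behind one name (hypothetical convention)."""
    return f"{app_name}-dbgroup.{domain}"
```

Whatever the exact names, the useful property is that the alias encodes the application, not the server, so the server underneath can change freely.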
While this isn't necessarily a “new” idea, it was certainly a different idea in our environment, and I suspect there are other DBAs out there like myself who don't have a habit of using their network skills on a regular basis.
Photo courtesy: Ronn Ashore