Posts tagged SQL Server

TSQL Tuesday #80 – AKA Chris Yates’s Birthday

It’s the 80th T-SQL Tuesday and for some reason I decided to write a blog post.  This month’s host is a good friend, Chris Yates, and since it’s his birthday he decided to give us a gift by opening the topic to “anything”.

My topic of choice is simple and relates to Chris’s present: SQL Server’s birthday.

I’ve worked with SQL Server in some form for longer than I care to admit, so I thought it would be interesting to write down my memories of each release.  Why? I don’t know; I guess because I can, and perhaps, just perhaps, to compare notes with a few other “old-timers”.


SQL Releases over the years  

SQL 6.5

  • a GUI on a DB? This was sheer madness
  • SQL Enterprise manager was revolutionary & light years ahead of the competition
  • Corruption, Corruption, Corruption
  • I still miss ISQL_w sometimes (not really)
  • Shipped with actual printed books

SQL 7.0

  • Side by side unusable installs of SQL 6.5 and 7.0 during upgrades
  • I do miss the standalone Query Analyzer tool
  • DTS… oh the nightmares of DTS, they all start here

SQL 2000

  • Someone put XML in a database… AKA: Patient zero
  • You get a bigint and you get a bigint and you, bigints for everyone!
  • There was a disturbance in the force and SSRS appeared (Some say it was SSAS that caused the disturbance, but I disagree)
  • Put multiple instances of SQL on a server they said, it’ll be fun they said…… NOT

SQL 2005

  • SSIS appears, the lights suddenly dim and a clown jumps out of a box and says “Surprise”
  • Upgrade pain like no other; “80” compatibility still haunts many a DBA
  • CLR is banished by DBAs everywhere for being the devil’s magic and clearly evil
  • DTA appears and is quickly crowned with a dunce cap
  • SSNS came and went so quickly no one ever saw it
  • T-SQL window functions appear and developers rejoice!
  • BIDS shows up and BI developers’ lives are never the same

SQL 2008

  • PBM appears and quickly goes stale
  • Compression of all the things arrives but only if you spend mega bucks
  • DMVs finally became useful
  • IntelliSense? What is this dark art that saves me from typing SLECT one more time?
  • AlwaysOn, Alwayson, Always On, who knows what it will be called today, but the moniker appeared and Allan Hirt suddenly had more grey hair

SQL 2008R2

  • The SQL team joined the “R2” Parade and quickly got lost
  • The “what edition of SQL do I need to do X” game gets into high gear
  • The “Power”-name-all-the-things trend starts with PowerPivot
  • Master Data Services appears and quickly goes stale

SQL 2012

  • Core licensing appears and the value proposition gets instantly more complicated
  • SSIS was disemboweled (Thankfully)
  • Columnstore indexes were created to give Niko something to write about
  • AlwaysOn Availability Groups make DBAs look like superheroes
  • Extended event shaming becomes a thing
  • BIDS? SSDT? SSDT-BI? Data tools? The confusion is maddening

SQL 2014

  • The “what edition do I need” game takes another twist
  • SSMS is still treated like a stepchild
  • Hekaton changes the game (If you meet this teeny tiny list of caveats)
  • Azure is melted into the box product

SQL 2016

  • Someone put JSON in a database… AKA: Patient zero
  • Dynamic Data Masking and Row-Level Security bring all the auditors to the yard
  • Always Encrypted continues the pattern of “Always” or “Power” in all the names!
  • Query Store appears and DBAs the world over are immediately confused by this black magic
  • Temporal tables appear and SQL Server catches up with Postgres circa 2012
  • SSMS starts getting major love, and everyone wonders if it’s a trick
  • SSDT is… wait…for…it… a SINGLE tool finally!

Future SQL Server

  • Can we get indexes on function expressions *pretty please*? If a mainframe can do it, surely SQL can too
  • Banish the message “string or binary data may be truncated” to the history books
  • How about a tiny touch of love for PowerShell (those folks yell loudly enough that PowerShell can cure all my ills; how about giving them a proper hammer)
  • An edition based on the size of the server: pay for the T-SQL features you use based on server/database size *things like Hekaton on small DBs* in a new “AlwaysPower” edition
  • *basic* Scalar UDF optimization to cache deterministic values

What did I miss?  Wait, I know I missed a lot, so don’t answer that.

Adventures in SSRS

Came across a strange permission issue today that I thought I’d blog about since it is apparently somewhat common judging by the number of unanswered forum posts about it.

Adding, editing, or testing SSRS data sources in Report Manager generates a permission error

A developer was trying to add a data source and use “Test Connection” in the Report Manager web interface instead of BIDS (which, incidentally, worked).  When they tried to save or test the data source, an error occurred: “The permissions granted to user “Domain\User” are insufficient for performing this operation. (rsAccessDenied)”.  After much web searching and digging through verbose SSRS logs, SQL traces, Windows event logs, Process Monitor traces, etc., I had just about given up hope and turned to Twitter to vent about the error.  Luckily, Chris Testa-O’Neill, Doug Lane, and Tamera Clark came to my rescue and headed me down the right path of this being an SSRS catalog permission issue (thanks again!).

Environment Details

SSRS 2008 R2

Folder inheritance is broken and the developer account has Browser permissions on the root folder as well as Browser, Publisher and Report Builder permissions on the sub-folder.  There are no system level permissions for the account.

The Problem

In the SSRS error log I found the following messages, which are only slightly more helpful:

System.Web.Services.Protocols.SoapException: System.Web.Services.Protocols.SoapException: The permissions granted to user ‘Domain\User’ are insufficient for performing this operation. —> Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException: The permissions granted to user ‘Domain\User’ are insufficient for performing this operation.
at Microsoft.ReportingServices.WebServer.ReportingService2010Impl.TestConnectForDataSourceDefinition(DataSourceDefinition DataSourceDefinition, String UserName, String Password, String& ConnectError)…

The command is failing on the “TestConnectForDataSourceDefinition” method.  The permissions needed are those required by this method.

The Fix

Near the bottom of the documentation for TestConnectForDataSourceDefinition, the required permission is listed as “Execute Report Definitions”.

Granting this permission is accomplished by connecting to the SSRS server with SSMS and creating a new system role with just this permission.

Then grant the new role the “Execute Report Definitions” permission.

Finally, add the user account to the new system role; this is done in Report Manager under Site Settings -> Security -> New Role Assignment.
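If you want to verify what ended up in the SSRS catalog after making these changes, here is a read-only diagnostic sketch.  It assumes the default ReportServer catalog database name and the 2008 R2 schema (the Users, Roles, PolicyUserRole, and Catalog tables); querying the catalog directly is unsupported, so treat it purely as a sanity check rather than a way to make changes.

-- Unsupported, read-only peek at SSRS role assignments (assumes the default
-- ReportServer database name and the 2008 R2 catalog schema).
SELECT u.UserName,
       r.RoleName,
       r.Description,
       c.Path          -- no matching Catalog row usually means a site-level (system) assignment
FROM   ReportServer.dbo.PolicyUserRole AS pur
JOIN   ReportServer.dbo.Users AS u ON u.UserID = pur.UserID
JOIN   ReportServer.dbo.Roles AS r ON r.RoleID = pur.RoleID
LEFT JOIN ReportServer.dbo.Catalog AS c ON c.PolicyID = pur.PolicyID
WHERE  u.UserName = 'Domain\User'   -- the account from the error message
ORDER BY c.Path, r.RoleName;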

Have you checked your datatypes lately?

Earlier this week I was working through an issue that I’ve seen one too many times lately and thought I’d write about it.  For various reasons *cough* legacy *cough* I often seem to run across columns in tables that are defined with what would appear to be the wrong data type or, worse, multiple data elements stuffed into one column!

In this case the table held various account codes in a char(06) column.  These codes were things like ‘001’, ‘002’, ‘A’, ‘B’, etc.

Each account code was for use in a different system, so on system X the codes were integers, on system Y they were character based, etc.

This data was used to join many different tables.  Initially these joins were small and straightforward with predicates restricting the code table results to only include the proper data for the system being used.

The pseudo code looks roughly like this:

UPDATE DATA
SET    DATADESC = CODE.MORE_DATADESC
FROM   DATA
JOIN   CODE
       -- DATA.CODE_KEY is smallint, CODE.CODE_KEY is char(06): implicit conversion happens here
       ON DATA.CODE_KEY = CODE.CODE_KEY
JOIN   METADATA
       ON METADATA.META_KEY = CODE.CODE_KEY
       AND METADATA.DATESTAMP >= '9999-12-31'
WHERE  DATA.DATADESC IS NULL
AND    CODE.DATATYPE = 'ACCT';

This is the execution plan from a typical update of 150 million rows:

[execution plan screenshot: typical_plan]

Notice the metadata table and codes table are joined before the data table, so the resulting codes data only includes the 16 rows of numeric data that pass the implicit convert to smallint.

As the rowcounts increased and joins changed the implicit data type conversions continued to work until one day they didn’t.  Instead they started returning this error message

Msg 245, Level 16, State 1, Line 2
Conversion failed when converting the varchar value 'B     ' to data type smallint.

After some digging it turns out the optimizer had started using a different plan where it joined the tables together in a different order.

Here’s the execution plan of the same update:

[execution plan screenshot: atypical_plan]

Notice the metadata table join is reordered and happens after the codes table is joined to the data table.  When the data table is joined to the code table, the error above occurs because character data in the codes table can’t be converted to smallint during the join.
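A minimal, hypothetical repro makes the failure easier to see than a 150 million row update.  The temp tables and column names below are made up for illustration; all that matters is a char(06) code column holding both numeric and alphabetic codes joined to a smallint key.

-- Hypothetical repro of the implicit conversion failure (names are made up)
CREATE TABLE #CODE (CODE_KEY char(06), DATATYPE varchar(10));
CREATE TABLE #DATA (CODE_KEY smallint, DATADESC varchar(50));

INSERT INTO #CODE VALUES ('001', 'ACCT'), ('002', 'ACCT'), ('B', 'OTHER');
INSERT INTO #DATA VALUES (1, NULL), (2, NULL);

-- smallint outranks char in data type precedence, so #CODE.CODE_KEY is implicitly
-- converted to smallint for the join.  Whether this errors depends on whether the
-- plan filters out the 'B     ' row before or after the convert.
SELECT d.CODE_KEY, d.DATADESC
FROM   #DATA AS d
JOIN   #CODE AS c ON d.CODE_KEY = c.CODE_KEY
WHERE  c.DATATYPE = 'ACCT';

-- Drop the filter and every code row has to be converted, so the failure shows up
-- regardless of join order:
-- Msg 245: Conversion failed when converting the varchar value 'B     ' to data type smallint.
SELECT d.CODE_KEY
FROM   #DATA AS d
JOIN   #CODE AS c ON d.CODE_KEY = c.CODE_KEY;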

To overcome this problem there are a couple of choices (admittedly there are more than a couple, so this list isn’t exhaustive).

Best –> Change the source table to keep the different source systems’ account code types in different columns with the correct datatypes.  The benefits are straightforward; the BIG drawback is the extra work for system redesign.

Better –> Change the queries to use an explicit conversion on the joined columns, converting the data table’s column from smallint to char(06) in the join.  The benefit is the small amount of change needed; the drawback is the extra bytes involved per row joined.
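As a sketch of that option (reusing the hypothetical repro tables from above), the conversion is done explicitly on the data table’s side of the join so the character codes never have to become numbers.  The exact expression is data dependent: if the codes are zero padded like ‘001’, the converted value has to match that shape.

-- Explicit conversion on the smallint side; no code value is ever forced into a number.
-- RIGHT('000' + ..., 3) mimics zero padded codes like '001'; adjust or drop the padding
-- to match how the codes are really stored (trailing spaces in char compare as equal).
SELECT d.CODE_KEY, d.DATADESC
FROM   #DATA AS d
JOIN   #CODE AS c
       ON c.CODE_KEY = RIGHT('000' + CONVERT(varchar(6), d.CODE_KEY), 3)
WHERE  c.DATATYPE = 'ACCT';

Besides the extra bytes per joined row, wrapping the data table’s column in an expression can also keep the optimizer from seeking on an index over that column, so test before committing to it.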

Good/OK –> Add a query hint to force the join order.  The benefit is the small amount of change needed; the drawback, in my mind, is that we’re now forcing the optimizer to take a different path than it would if left to its own devices.  I try to avoid hints unless there aren’t other options, and in this case there are.
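For completeness, here is a sketch of the hint option written against the original pseudo code.  OPTION (FORCE ORDER) preserves the join order as written, so the FROM clause also has to be rewritten to put the code/metadata join first; whether that reliably keeps the convert away from the bad rows still depends on where the optimizer places it, which is part of why I’d rather not lean on hints here.

-- FORCE ORDER preserves the written join order, so CODE joins METADATA first and
-- only the surviving codes reach the implicit convert in the join to DATA.
UPDATE DATA
SET    DATADESC = CODE.MORE_DATADESC
FROM   CODE
JOIN   METADATA
       ON METADATA.META_KEY = CODE.CODE_KEY
       AND METADATA.DATESTAMP >= '9999-12-31'
JOIN   DATA
       ON DATA.CODE_KEY = CODE.CODE_KEY
WHERE  DATA.DATADESC IS NULL
AND    CODE.DATATYPE = 'ACCT'
OPTION (FORCE ORDER);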

Other things that could be considered as solutions include temp tables, CTEs, etc., all of which were more change than we wanted to pursue testing.

Invitation for T-SQL Tuesday #19 – Disasters & Recovery

Disasters

It’s the first week of June and, for those of us living along the Gulf and Atlantic coasts of the US, that brings the beginning of hurricane season.  It also means it’s time for this month’s installment of T-SQL Tuesday.

This Month’s Topic

Hurricane Ike dead ahead

There goes your weekend/month

Disaster Recovery.  This topic is very near and dear to me because I live on a barrier island that was the site of the deadliest natural disaster in US history and was more recently devastated by the third costliest hurricane in history.  Needless to say, preparing for disasters is nearly instinctive to me, which might explain why I’m a DBA, but I digress.  Anything you’d like to blog about related to preparing for or recovering from a disaster is fair game: have a great tip you use to keep backups and restores running smoothly, a horrific story of a recovery gone wrong, or anything else related to keeping your systems online during calamity?  We want to hear it!

My street a month after Hurricane Ike

T-SQL Tuesday info

Originally an idea dreamed up by Adam Machanic (Blog|Twitter), T-SQL Tuesday has become a monthly blog party where the host picks a topic and encourages anyone to write a post on that topic, then a day or three later produces a roundup post of all the different perspectives from the community.

Rules

  • Your post must be published between 00:00 GMT Tuesday June 14, 2011, and 00:00 GMT Wednesday June 15, 2011
  • Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
  • Trackbacks should work, but if you don’t see one please link to your post in the comments section below so everyone can see your work

Nice to haves!

  • include a reference to T-SQL Tuesday in the title of your post
  • tweet about your post using the hash tag #TSQL2sDay
  • consider hosting T-SQL Tuesday yourself. Adam Machanic keeps the list, if he let me do it you’re bound to qualify!

Check back in a few days to see the roundup post of all the great stories your peers shared

How deep can you dive?

Go deep or stay home!

After last year’s Summit we launched a feedback site, http://feedback.sqlpass.org, in hopes of gathering all of the feedback about the event in one place.  The number one thing people have asked for on that site is a track of sessions in the 400-500 level range.  The problem with the community’s desire for higher-level sessions is twofold: one, we don’t normally get a huge number of session submissions at the 400-500 level; two, I’ve been told by those who are qualified to present those types of sessions that an hour and fifteen minutes (a Spotlight session) is often not enough time.

Cramped for space

In the past our conference size has dictated the floor plans at the conference center and we’ve been maxed out at 14 concurrent sessions.  This year, however, because of anticipated attendee growth, the logistical geniuses at PASS HQ were able to add another session room (I’m looking at you, Anika and Craig).  With this new room I have options on what to do with the extra session space!!

Changes for the Summit 2011

This year we’ll be using the space we gained from the new room to hold longer deep dive sessions.  The current idea is to offer these longer sessions in hopes that they will attract presenters who are qualified to deliver deeper dives.  Currently, the plan is to have a maximum of 6 deep dive (level 400 or 500 only) 3-hour sessions.  Because of the way the schedule is laid out, we will run 2 of these sessions concurrently every conference day.

Rules… Yeah there’s always rules

We will accept abstracts for this new session type in the same manner as regular abstracts.  That is to say, anyone can submit a half-day abstract.  If you submit an abstract for a half-day session it will count as one of your 4 allowed abstract submissions.  The session selection for these sessions will be handled by the regular respective abstract review teams.  Even though we are going to allow anyone to submit abstracts for these sessions, it should go without saying that if you don’t have prior experience or a reputation for being able to give an extended, strong 400-500 level session, it may be best to focus on a regular Summit session.  What I mean by this is that, for these particular sessions, we will be instructing the review teams to weigh the speaker’s perceived ability to deliver the session more heavily than we normally would for a regular session.

Possible Hiccups i.e. Changes

A few things could change with these sessions.

  • I am considering making the sessions 4 hours long (roughly 3 regular session slots).  If we do that, the maximum number of sessions would drop to four.  I’m leaning heavily away from this, but if anyone has a strong opinion on it I’ll listen
  • Depending on the quantity and quality of the abstracts we receive, we may have fewer than the maximum number of sessions shown above (4 or 6)
  • I’m still considering a single DBA 101 “Accidental DBA” type session for one of these slots but haven’t been swayed that there is more interest there than there is in deep dives