Posts tagged SQL Server
Came across a strange permission issue today that I thought I’d blog about since it is apparently somewhat common judging by the number of unanswered forum posts about it.
Adding, editing, or testing SSRS data sources in Report Manager generates a permission error
A developer was trying to add a data source and use "Test Connection" in the Report Manager web interface instead of BIDS (where, incidentally, it worked). When they tried to save or test the data source, an error occurred: "The permissions granted to user 'Domain\User' are insufficient for performing this operation. (rsAccessDenied)". After much web searching and digging through verbose SSRS logs, SQL traces, Windows event logs, Process Monitor traces, etc., I had just about given up hope and turned to Twitter to vent about the error. Luckily, Chris Testa-O'Neill, Doug Lane, and Tamera Clark came to my rescue and pointed me down the right path: this was an SSRS catalog permission issue (thanks again!)
Folder inheritance is broken and the developer account has Browser permissions on the root folder as well as Browser, Publisher and Report Builder permissions on the sub-folder. There are no system level permissions for the account.
In the SSRS error log I found the following messages, which are only slightly more helpful (emphasis mine):
System.Web.Services.Protocols.SoapException: System.Web.Services.Protocols.SoapException: The permissions granted to user ‘Domain\User’ are insufficient for performing this operation. —> Microsoft.ReportingServices.Diagnostics.Utilities.AccessDeniedException: The permissions granted to user ’Domain\User’ are insufficient for performing this operation.
at Microsoft.ReportingServices.WebServer.ReportingService2010Impl.TestConnectForDataSourceDefinition(DataSourceDefinition DataSourceDefinition, String UserName, String Password, String& ConnectError)…
The command is failing on the TestConnectForDataSourceDefinition method, so the permissions needed are those required by that method.
Near the bottom of the documentation for TestConnectForDataSourceDefinition, the required permission is listed as "Execute report definition".
Granting this permission is accomplished by connecting to the SSRS server with SSMS and creating a new system role with just this permission.
Then grant the new role the "Execute Report Definitions" permission.
Finally, add the user account to the new system role. This is done in Report Manager under Site Settings -> Security -> New Role Assignment.
Earlier this week I was working through an issue that I've seen one too many times lately and thought I'd write about it. For various reasons *cough* Legacy *cough* I often seem to run across columns in tables that are defined with what would appear to be the wrong data type or, worse, multiple data elements stuffed into one column!
In this case the table held various account codes in a char(06) column. These codes were things like '001', '002', 'A', 'B', etc.
Each account code was for use in a different system: on system X the codes were integers, on system Y they were character based, and so on.
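To make the situation concrete, here's a minimal sketch of that kind of codes table. All table and column names here are hypothetical stand-ins based on the pseudo code below, not the real schema:

```sql
-- Hypothetical stand-in for the legacy codes table: a single char(6) column
-- holding account codes for several systems, numeric and character mixed.
CREATE TABLE CODE
(
    CODE_KEY      char(6)      NOT NULL,  -- '001', '002' for system X; 'A', 'B' for system Y
    DATATYPE      varchar(10)  NOT NULL,  -- which system/category the code belongs to
    MORE_DATADESC varchar(100) NULL
);

INSERT INTO CODE (CODE_KEY, DATATYPE, MORE_DATADESC)
VALUES ('001', 'ACCT', 'System X account 1'),
       ('002', 'ACCT', 'System X account 2'),
       ('A',   'MISC', 'System Y code A'),
       ('B',   'MISC', 'System Y code B');
```

Everything works only for as long as the character rows are filtered out before any comparison against a numeric column takes place.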
This data was used to join many different tables. Initially these joins were small and straightforward with predicates restricting the code table results to only include the proper data for the system being used.
The pseudo code looks roughly like this:

UPDATE DATA
SET DATADESC = CODE.MORE_DATADESC
FROM DATA
JOIN CODE
    ON DATA.CODE_KEY = CODE.CODE_KEY
JOIN META
    ON META.META_KEY = CODE.CODE_KEY
    AND DATESTAMP >= '9999-12-31'
WHERE DATADESC IS NULL
    AND CODE.DATATYPE = 'ACCT'
This is the execution plan from a typical update of 150 million rows.
Notice the Metadata table and Codes table are joined before the Data table, so the resulting codes data only includes the 16 rows of smallint data that pass the implicit convert to smallint.
As the row counts increased and the joins changed, the implicit data type conversions continued to work until one day they didn't. Instead, they started returning this error message:
Msg 245, Level 16, State 1, Line 2
Conversion failed when converting the varchar value 'B ' to data type smallint.
After some digging it turned out the optimizer had started using a different plan that joined the tables together in a different order.
Here's the execution plan of the same update.
Notice the Metadata table join is reordered and happens after the Codes table is joined to the Data table. When the Data table is joined to the Code table, the error above occurs because character data in Codes can't be converted to smallint during the join.
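The failure is easy to reproduce in miniature; this is a hedged sketch, not the actual tables. Because smallint has higher data type precedence than char, it is the char(6) side of the comparison that gets implicitly converted:

```sql
-- smallint outranks char in data type precedence, so SQL Server converts
-- the char(6) value to smallint before comparing.
DECLARE @key smallint = 1;

-- Succeeds: '001   ' implicitly converts to the smallint 1.
SELECT 'match' WHERE @key = CAST('001' AS char(6));

-- Fails with Msg 245: 'B     ' cannot be converted to smallint.
SELECT 'match' WHERE @key = CAST('B' AS char(6));
```

Whether the production query hits this error depends entirely on which rows reach the comparison, which is exactly why a join reorder by the optimizer can turn a working query into a failing one.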
To overcome this problem there are a couple of choices (admittedly there are more than a couple, so this list isn't exhaustive):
Best –> Change the source table to hold the different source systems' account codes in different columns with the correct data types. The benefits are straightforward; the BIG drawback is the extra work of a system redesign.
Better –> Change the queries to use an explicit conversion on the joined columns, converting the smallint column on the data table to char(06). The benefit is the small amount of change needed; the drawback is the extra bytes involved per row joined.
OK –> Add a query hint to force the join order. The benefit is the small amount of change needed; the drawback, in my mind, is that we're now forcing the optimizer to take a different path than it would if left to its own devices. I try to avoid hints unless there aren't other options, and in this case there are.
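Here's a sketch of the "Better" option, using the same hypothetical names as the pseudo code above. Converting the smallint side to character keeps the whole comparison in character space, so the 'A'/'B' rows never have to become numbers. One caveat to verify against the real data: if the stored codes carry leading zeros ('001'), a plain CAST of the number 1 yields '1', so the converted value may need explicit zero-padding to match:

```sql
-- Explicitly convert the data table's smallint key so the join compares
-- character to char(6) and no implicit convert to smallint ever happens.
-- RIGHT('000' + ..., 3) pads to match codes stored with leading zeros;
-- drop the padding if the stored codes are unpadded.
UPDATE DATA
SET    DATADESC = CODE.MORE_DATADESC
FROM   DATA
JOIN   CODE
  ON   RIGHT('000' + CAST(DATA.CODE_KEY AS varchar(5)), 3) = CODE.CODE_KEY
WHERE  DATADESC IS NULL
  AND  CODE.DATATYPE = 'ACCT';
```

The "OK" option is even less invasive: leave the query as written and append OPTION (FORCE ORDER) so the optimizer keeps the written join order, at the cost of permanently overriding its judgment.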
Other things that could be considered as solutions include temp tables, CTEs, etc., all of which were more change than we wanted to pursue testing.
It's the first week of June, and for those of us living along the Gulf and Atlantic coasts of the US, that brings the beginning of hurricane season. It also means it's time for this month's installment of T-SQL Tuesday.
This Month's Topic
Disaster recovery. This topic is very near and dear to me: I live on a barrier island that was the site of the deadliest natural disaster in US history and was more recently devastated by the third costliest hurricane in history. Needless to say, preparing for disasters is nearly instinctive to me, which might explain why I'm a DBA, but I digress. Anything you'd like to blog about related to preparing for or recovering from a disaster is fair game: a great tip you use to keep backups and restores running smoothly, a horrific story of a recovery gone wrong, or anything else related to keeping your systems online during calamity. We want to hear it!
T-SQL Tuesday info
Originally an idea dreamed up by Adam Machanic (Blog|Twitter), it has become a monthly blog party where the host picks a topic and encourages anyone to write a post on that topic, then a few days later produces a roundup post of all the different perspectives from the community.
- Your post must be published between 00:00 GMT Tuesday June 14, 2011, and 00:00 GMT Wednesday June 15, 2011
- Your post must contain the T-SQL Tuesday logo from above and the image should link back to this blog post.
- Trackbacks should work, but if you don’t see one please link to your post in the comments section below so everyone can see your work
Nice to haves!
- include a reference to T-SQL Tuesday in the title of your post
- tweet about your post using the hash tag #TSQL2sDay
- consider hosting T-SQL Tuesday yourself. Adam Machanic keeps the list, if he let me do it you’re bound to qualify!
Check back in a few days to see the roundup post of all the great stories your peers shared.
After last year's Summit we launched a feedback site, http://feedback.sqlpass.org, in hopes of gathering all of the feedback about the event in one place. The number one thing people have asked for on that site is a track of sessions in the 400-500 level range. The problems with the community's desire for higher-level sessions are twofold: one, we don't normally get a huge number of session submissions at the 400-500 level; two, I've been told by those who are qualified to present those types of sessions that an hour and fifteen minutes (a Spotlight session) is often not enough time.
Cramped for space
In the past our conference size has dictated the floor plans at the conference center, and we've been maxed out at 14 concurrent sessions. This year, however, because of anticipated attendee growth, the logistical geniuses at PASS HQ were able to add another session room (I'm looking at you, Anika and Craig). With this new room I have options on what to do with the extra session slots!
Changes for the Summit 2011
This year we'll be using the space we gained from the new room to offer longer deep-dive sessions, in hopes that they will attract presenters who are qualified to deliver that deeper material. Currently, the plan is to have a maximum of 6 deep-dive (level 400 or 500 only) 3-hour sessions. Because of the way the schedule is laid out, we will run 2 of these sessions concurrently each conference day.
Rules… Yeah, there are always rules
We will accept abstracts for this new session type in the same manner as regular abstracts; that is to say, anyone can submit a half-day abstract. If you submit an abstract for a half-day session, it will count as one of your 4 allowed abstract submissions. Session selection will be handled by the regular respective abstract review teams. Even though we are going to allow anyone to submit abstracts for these sessions, it should go without saying that if you don't have prior experience or a reputation for being able to give an extended, strong 400-500 level session, it may be best to focus on a regular Summit session. What I mean is that for these particular sessions we will be instructing the review teams to weigh the speaker's perceived ability to deliver the session more heavily than they normally would for a regular session.
Possible Hiccups (i.e. Changes)
A few things could change with these sessions:
- I am considering making the sessions 4 hours long (roughly 3 regular session slots). If we do that, the maximum number of sessions would drop to four. I'm leaning heavily away from this, but if anyone has a strong opinion on it, I'll listen.
- Depending on the quantity and quality of the abstracts we receive, we may have fewer than the maximum number of sessions shown above (4 or 6).
- I'm still considering a single DBA 101 "Accidental DBA" type session for one of these slots, but I haven't been swayed that there is more interest there than in the deep dives.
Occasionally I'm asked what the most important thing for a DBA to know is. It seems some people want to know the one thing to focus on in order to be successful long term as a DBA. I've pondered this question for quite a long time and given various answers over the years.
I've come to the conclusion that there really is no one special thing or "skill" that a DBA must know. Sure, there are lots of qualities that tend to make a successful DBA: attention to detail, a thirst for knowledge, and an uncanny love of BACON… The problem is that as huge as the SQL Server product is becoming, it is also becoming increasingly difficult to be an expert in everything. I am actually of the opinion that it is impossible to be an expert in everything SQL Server.
Knowing what you don't know
The most important thing for a DBA to know can be summed up in three simple words: "I don't know." Why is that important? Because knowing what you don't know, and being able to admit it to those around you, is by far the most valuable skill you can possess as a DBA at any skill level.
When in doubt, it's always good to remember that it's OK to say "I don't know". The corollary to this statement is, of course, "I'll find out, or find someone who does know". I've seen many small issues blow up into large problems over the years because someone didn't really understand what was going on and was afraid to let their coworkers (or boss) know. In many cases, if we just learn to say "I don't know", some further pain can easily be avoided.