SSRS: Report Rendering Issues with Firefox and Safari Browsers

1 12 2009

As part of our project, we needed to execute the reports on different platforms and browsers to check their compatibility across versions and variants of web browsers. While development was in progress, I never concentrated on this issue, as I was confident the reports would execute in all browsers without any hassle. But when I executed the reports in Firefox, Safari, and some other third-party browsers as part of my unit testing, I noticed that the report formatting was badly mangled and the layout looked odd. I had to spend some extra time at the end of development (as happens on most software projects) to overcome this issue.

I would like to share the knowledge I gained during my research, and save you the time of doing similarly detailed research yourself.

What is the process behind report execution in the browser?

As you might be aware, Report Manager is a pure web-based HTML site. When we deploy reports developed in BIDS to Report Manager, SSRS converts the report definition language into HTML so the reports can be rendered easily by the server.

SSRS 2005 is pretty slick, but the HTML is just terrible. Reports are displayed in an IFRAME that’s deep in nested table land, and the IFRAME’s height setting only works in IE. The end result is that reports don’t display correctly in Firefox, Safari etc – the IFRAME’s height defaults to a few hundred pixels, so you only see the top 2 inches of the report.

What would be the workaround?

The simplest workaround would be to install the IE add-on for Firefox, which is freely available at the location below. But this does not solve the whole problem, as it is not practically feasible to install the IE add-on on every end-user machine. We should concentrate on finding a solution at the server level, and most importantly at the rendering engine level.

Download Firefox IE Add-on

However, they did the right thing by designating CSS classes for most of the important elements, so we can fix it by adding a min-height setting. I’m sure there are other issues with getting SSRS to display correctly in Firefox, and possibly other answers (let me hear them in the comments below), but this CSS fix at least lets the reports show.

Workaround # 1:

1. Go to the following location under your SQL Server installation: …\Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services\ReportServer\Pages

2. Open the ReportViewer.aspx file and add the following style attribute to the server-side element, then try again: style="display:table;margin:0px;overflow:hidden" runat="server" />

If the first method does not help, try Workaround 2.

Workaround # 2

/* Fix report IFRAME height for Firefox, Safari */
.DocMapAndReportFrame
{
    min-height: 860px;
    min-width: 1000px;
}

Put this code at the bottom of the ReportingServices.css file. After you have added it, close your browser, clear your cookies, and try again.

If you’re really lazy, you can just run this batch script which will make the change for you:

::Add to C:\Program Files\Microsoft SQL Server\MSSQL.3\Reporting Services\ReportManager\Styles\ReportingServices.css
SET CSSFILE=%ProgramFiles%\Microsoft SQL Server\MSSQL.3\Reporting Services\ReportManager\Styles\ReportingServices.css
echo. >> "%CSSFILE%"
echo. >> "%CSSFILE%"
echo /* Fix report IFRAME height for Firefox */ >> "%CSSFILE%"
echo .DocMapAndReportFrame >> "%CSSFILE%"
echo { >> "%CSSFILE%"
echo     min-height: 860px; >> "%CSSFILE%"
echo } >> "%CSSFILE%"

Notes / Disclaimers / Retractions

This just adds a min-height rule to the CSS class used for the IFRAME. Of course, you can set the min-height to another value if you’d like; if you make it larger than your end user’s screen height they’ll see a scroll bar and may go into convulsions.

This change isn’t needed for IE7. One of the big changes in IE7’s CSS handling is that it stops treating height and width as min-height and min-width, but IE7 and Firefox still treat height=100% differently (at least for IFRAMEs).

Please let me know if there’s a better way to fix this, more to be fixed, etc.


Performance Tuning – SQL Server Analysis Services 2005/2008

13 10 2009

This post describes how application developers can apply performance-tuning techniques to their Microsoft SQL Server 2005/2008 Analysis Services Online Analytical Processing (OLAP) solutions.


Fast query response times and timely data refresh are two well-established performance requirements of Online Analytical Processing (OLAP) systems. To provide fast analysis, OLAP systems traditionally use hierarchies to efficiently organize and summarize data. While these hierarchies provide structure and efficiency to analysis, they tend to restrict the analytic freedom of end users who want to freely analyze and organize data on the fly.

To support a broad range of structured and flexible analysis options, Microsoft® SQL Server™ Analysis Services (SSAS) 2005/2008 combines the benefits of traditional hierarchical analysis with the flexibility of a new generation of attribute hierarchies. Attribute hierarchies allow users to freely organize data at query time, rather than being limited to the predefined navigation paths of the OLAP architect. To support this flexibility, the Analysis Services OLAP architecture is specifically designed to accommodate both attribute and hierarchical analysis while maintaining the fast query performance of conventional OLAP databases.

Realizing the performance benefits of this combined analysis paradigm requires understanding how the OLAP architecture supports both attribute hierarchies and traditional hierarchies, how you can effectively use the architecture to satisfy your analysis requirements, and how you can maximize the architecture’s utilization of system resources.

 Note   To apply the performance tuning techniques discussed in this post, you must have SQL Server 2005 Service Pack 2 installed.

 To satisfy the performance needs of various OLAP designs and server environments, this post provides extensive guidance on how you can take advantage of the wide range of opportunities to optimize Analysis Services performance. Since Analysis Services performance tuning is a fairly broad subject, this post organizes performance tuning techniques into the following four segments.

1.   Enhancing Query Performance

 Query performance directly impacts the quality of the end user experience. As such, it is the primary benchmark used to evaluate the success of an OLAP implementation. Analysis Services provides a variety of mechanisms to accelerate query performance, including aggregations, caching, and indexed data retrieval. In addition, you can improve query performance by optimizing the design of your dimension attributes, cubes, and MDX queries.

 Querying is the operation where Analysis Services provides data to client applications according to the calculation and data requirements of a MultiDimensional eXpressions (MDX) query. Since query performance directly impacts the user experience, this section describes the most significant opportunities to improve query performance. Following is an overview of the query performance topics that are addressed in this section:

Understanding the querying architecture – The Analysis Services querying architecture supports three major operations: session management, MDX query execution, and data retrieval. Optimizing query performance involves understanding how these three operations work together to satisfy query requests.

Optimizing the dimension design – A well-tuned dimension design is perhaps one of the most critical success factors of a high-performing Analysis Services solution. Creating attribute relationships and exposing attributes in hierarchies are design choices that influence effective aggregation design, optimized MDX calculation resolution, and efficient dimension data storage and retrieval from disk.

Maximizing the value of aggregations – Aggregations improve query performance by providing precalculated summaries of data. To maximize the value of aggregations, ensure that you have an effective aggregation design that satisfies the needs of your specific workload.

Using partitions to enhance query performance – Partitions provide a mechanism to separate measure group data into physical units that improve query performance, improve processing performance, and facilitate data management. Partitions are naturally queried in parallel; however, there are some design choices and server property optimizations that you can specify to optimize partition operations for your server configuration.

Writing efficient MDX – This section covers techniques for writing efficient MDX statements, such as: 1) writing statements that address a narrowly defined calculation space, 2) designing calculations for the greatest reuse across multiple users, and 3) writing calculations in a straightforward manner to help the Query Execution Engine select the most efficient execution path.
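To make the partitioning guidance above concrete, measure group partitions can also be created programmatically through Analysis Management Objects (AMO). The sketch below is illustrative only: the server, database, cube, measure group, and fact table names are assumptions, and it requires a reference to the Microsoft.AnalysisServices assembly and a live Analysis Services instance.

```csharp
using Microsoft.AnalysisServices;

class PartitionDemo
{
    static void Main()
    {
        // Connect to a (hypothetical) local Analysis Services instance.
        Server server = new Server();
        server.Connect("Data Source=localhost");

        Database db = server.Databases.FindByName("SalesDW");          // assumed database
        Cube cube = db.Cubes.FindByName("Sales");                      // assumed cube
        MeasureGroup mg = cube.MeasureGroups.FindByName("Fact Sales"); // assumed measure group

        // Add a partition bound to one year of fact data, so queries
        // filtered to 2009 touch only this slice of the measure group.
        Partition p = mg.Partitions.Add("Fact Sales 2009");
        p.StorageMode = StorageMode.Molap;
        p.Source = new QueryBinding(
            db.DataSources[0].ID,
            "SELECT * FROM FactSales WHERE OrderDateKey BETWEEN 20090101 AND 20091231");

        p.Update();                          // send the definition to the server
        p.Process(ProcessType.ProcessFull);  // load and aggregate the new partition

        server.Disconnect();
    }
}
```

Partitioning by a date range like this also simplifies incremental processing, since only the current partition needs a full reprocess on each refresh.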

 2.   Tuning Processing Performance

 Processing is the operation that refreshes data in an Analysis Services database. The faster the processing performance, the sooner users can access refreshed data. Analysis Services provides a variety of mechanisms that you can use to influence processing performance, including efficient dimension design, effective aggregations, partitions, and an economical processing strategy (for example, incremental vs. full refresh vs. proactive caching).

Processing is the general operation that loads data from one or more data sources into one or more Analysis Services objects. While OLAP systems are not generally judged by how fast they process data, processing performance impacts how quickly new data is available for querying. While every application has different data refresh requirements, ranging from monthly updates to “near real-time” data refreshes, the faster the processing performance, the sooner users can query refreshed data.

Note that “near real-time” data processing is considered to be a special design scenario that has its own set of performance tuning techniques. For more information on this topic, see Near real-time data refreshes in Books Online.

To help you effectively satisfy your data refresh requirements, the following provides an overview of the processing performance topics that are discussed in this section:

Understanding the processing architecture – For readers unfamiliar with the processing architecture of Analysis Services, this section provides an overview of processing jobs and how they apply to dimensions and partitions. Optimizing processing performance requires understanding how these jobs are created, used, and managed during the refresh of Analysis Services objects.

Refreshing dimensions efficiently – The performance goal of dimension processing is to refresh dimension data in an efficient manner that does not negatively impact the query performance of dependent partitions. The following techniques for accomplishing this goal are discussed in this section: optimizing SQL source queries, reducing attribute overhead, and preparing each dimension attribute to efficiently handle inserts, updates, and deletes as necessary.

Refreshing partitions efficiently – The performance goal of partition processing is to refresh fact data and aggregations in an efficient manner that satisfies your overall data refresh requirements. The following techniques for accomplishing this goal are discussed in this section: optimizing SQL source queries, using multiple partitions, effectively handling data inserts, updates, and deletes, and evaluating the usage of rigid vs. flexible aggregations.

 3.   Optimizing Special Design Scenarios

 Complex design scenarios require a distinct set of performance tuning techniques to ensure that they are applied successfully, especially if you combine a complex design with large data volumes. Examples of complex design components include special aggregate functions, parent-child hierarchies, complex dimension relationships, and “near real-time” data refreshes.

Throughout this post, specific techniques and best practices are identified for improving the processing and query performance of Analysis Services OLAP databases. In addition to these techniques, there are specific design scenarios that require special performance tuning practices. Following is an overview of the design scenarios that are addressed in this section:

Special aggregate functions – Special aggregate functions allow you to implement distinct count and semi-additive data summarizations. Given the unique nature of these aggregate functions, special performance tuning techniques are required to ensure that they are implemented in the most efficient manner.

Parent-child hierarchies – Parent-child hierarchies have a different aggregation scheme than attribute and user hierarchies, requiring that you consider their impact on query performance in large-scale dimensions.

Complex Dimension Relationships – Complex dimension relationships include many-to-many relationships and reference relationships. While these relationships allow you to handle a variety of schema designs, complex dimension relationships also require you to assess how the schema complexity is going to impact processing and/or query performance.

Near real-time data refreshes – In some design scenarios, “near real-time” data refreshes are a necessary requirement. Whenever you implement a “near real-time” solution requiring low levels of data latency, you must consider how you are going to balance the required latency with querying and processing performance.

 4.       Tuning Server Resources

 Analysis Services operates within the constraints of available server resources. Understanding how Analysis Services uses memory, CPU, and disk resources can help you make effective server management decisions that optimize querying and processing performance.

Query responsiveness and efficient processing require effective usage of memory, CPU, and disk resources. To control the usage of these resources, Analysis Services 2005 introduces a new memory architecture and threading model that use innovative techniques to manage resource requests during querying and processing operations.

To optimize resource usage across various server environments and workloads, for every Analysis Services instance, Analysis Services exposes a collection of server configuration properties. To provide ease-of-configuration, during installation of Analysis Services 2005, many of these server properties are dynamically assigned based on the server’s physical memory and number of logical processors. Given their dynamic nature, the default values for many of the server properties are sufficient for most Analysis Services deployments. This is different behavior than previous versions of Analysis Services where server properties were typically assigned static values that required direct modification. While the Analysis Services 2005 default values apply to most deployments, there are some implementation scenarios where you may be required to fine tune server properties in order to optimize resource utilization.

Regardless of whether you need to alter the server configuration properties, it is always a best practice to acquaint yourself with how Analysis Services uses memory, CPU, and disk resources so you can evaluate how resources are being utilized in your server environment.

Understanding how Analysis Services uses memory – Making the best performance decisions about memory utilization requires understanding how the Analysis Services server manages memory overall as well as how it handles the memory demands of processing and querying operations.

Optimizing memory usage – Optimizing memory usage requires applying a series of techniques to detect whether you have sufficient memory resources and to identify those configuration properties that impact memory resource utilization and overall performance.

Understanding how Analysis Services uses CPU resources – Making the best performance decisions about CPU utilization requires understanding how the Analysis Services server uses CPU resources overall as well as how it handles the CPU demands of processing and querying operations.

Optimizing CPU usage – Optimizing CPU usage requires applying a series of techniques to detect whether you have sufficient processor resources and to identify those configuration properties that impact CPU resource utilization and overall performance.

Understanding how Analysis Services uses disk resources – Making the best performance decisions about disk resource utilization requires understanding how the Analysis Services server uses disk resources overall as well as how it handles the disk resource demands of processing and querying operations.

Optimizing disk usage – Optimizing disk usage requires applying a series of techniques to detect whether you have sufficient disk resources and to identify those configuration properties that impact disk resource utilization and overall performance.


This post has outlined the areas an application developer has to look at to tune the performance of applications built with SSAS 2005/2008. Due to space constraints I could not go into the depth of all the areas. I recommend that readers refer to Books Online for in-depth knowledge of each topic specified above.

 For more information:

 Please leave a comment/concern on this post.

SSIS – Unit Testing – An Approach

28 09 2009

What is Unit Testing?

In computer programming, unit testing is a software verification and validation method in which a programmer tests if individual units of source code are fit for use. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is a class, which may belong to a base/super class, abstract class or derived/child class.

Ideally, each test case is independent from the others: substitutes like method stubs, mock objects, fakes and test harnesses can be used to assist testing a module in isolation. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. Its implementation can vary from being very manual (pencil and paper) to being formalized as part of build automation.

SSIS Unit Testing a Nightmare?

I have always been displeased at not being able to unit test my packages created in SSIS, because there was simply no way to do it. SSIS data flows can be really complex. Worse, you really can’t execute portions of a single data flow separately and get meaningful results.

Further, one of the key features of SSIS is the fact that the built-in data flow toolbox items can be equated to framework functionality. There’s not so much value in unit testing the framework.

Listed below are some of the ways I have done unit testing with SSIS packages in the past.

  1. Create our own framework in C#.
  2. Access SSIS package executables and variables through a .NET application.

I will give a brief description of each of these approaches below.

Create our own framework in C#

Meaningful unit testing of SSIS packages really comes down to testing of Executables in a control flow, and particularly executables with a high degree of programmability. The two most significant control flow executable types are Script Task executables and Data Flow executables.

Ultimately, the solution to SSIS unit testing becomes package execution automation.

There are a certain number of things you have to do before you can start writing C# to test your scripts and data flows, though. I’ll go through my experience with it, so far.

In order to automate SSIS package execution for unit testing, you must have Visual Studio 2005 (or greater) with the language of your choice installed (I chose C#).

Interestingly, while you can develop and debug SSIS in the Business Intelligence Development System (BIDS, a subset of Visual Studio), you cannot execute SSIS packages from C# without SQL Server 2005 Developer or Enterprise edition installed (“go Microsoft!”).

Another important caveat: you CAN have your unit test project in the same solution as your SSIS project, but due to over-excessive design-time validation of SSIS packages, you can’t effectively execute the SSIS packages from your unit test code if you have the SSIS project loaded at the same time. I’ve found that the only way I can safely run my unit tests is to “Unload Project” on the SSIS project before attempting to execute the unit test host app. Even then, Visual Studio occasionally holds locks on files that force me to close and re-open Visual Studio in order to release them.

Anyway, I chose to use a console application as the host app. There’s some info out there on the ‘net about how to configure a .config file borrowing from dtexec.exe.config, the SSIS command line utility, but I didn’t see anything special in there that I had to include.

The only reference you need to add to your project is a reference to Microsoft.SqlServer.ManagedDTS. The core namespace you’ll need is:

using Microsoft.SqlServer.Dts.Runtime;

In my first case, most of my unit testing is variations on a single input file. The package validates the input and produces three outputs: a table that contains source records which have passed validation, a flat output file that contains source records that failed validation, and a target table that contains transformed results.

What I ended up doing was creating a very small framework that allowed me to declare a test and some metadata about it. The metadata associates a group of resources that include a test input, and the three baseline outputs by a common URN. Once I have my input and baselines established, I can circumvent downloading the “real” source file, inject my test source into the process, and compare the results with my baselines.
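The approach above can be sketched as a small console host that injects the test input, executes the package, and reports the result. This is a minimal sketch, not my actual framework: the package path, variable name, and file names below are hypothetical, and the code assumes a reference to Microsoft.SqlServer.ManagedDTS and an installed SQL Server runtime.

```csharp
using System;
using Microsoft.SqlServer.Dts.Runtime;

class SsisTestHost
{
    static void Main()
    {
        Application app = new Application();

        // Load the package under test from the file system (path is hypothetical).
        Package pkg = app.LoadPackage(@"C:\Tests\LoadCustomers.dtsx", null);

        // Inject the test source file instead of the "real" download
        // (assumes the package exposes a SourceFilePath variable).
        pkg.Variables["SourceFilePath"].Value = @"C:\Tests\Inputs\Customers_Test1.txt";

        // Execute and check the overall result.
        DTSExecResult result = pkg.Execute();
        if (result != DTSExecResult.Success)
        {
            foreach (DtsError err in pkg.Errors)
                Console.WriteLine(err.Description);
        }

        // At this point the host would compare the three outputs
        // (valid rows, rejected rows, transformed rows) against the baselines.
        Console.WriteLine("Package result: " + result);
    }
}
```

The baseline comparison itself is ordinary file and table diffing, keyed by the common URN described above.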

Access SSIS package activities and variable through .NET application

We need to set variables defined in the SSIS package through a .NET application. In this application, we can get the list of SSIS packages and execute them manually. For testing SSIS packages we need more control over the package, such as disabling a specific task or setting a runtime variable in the package. All of this can be achieved with the help of a .NET application.

There are two ways to store SSIS packages on the server: in the file system, or in the MSDB database of SQL Server. If we want to test our packages this way, we need to store them in the MSDB database. We can access the packages stored in MSDB with the help of the Microsoft.SqlServer.ManagedDTS assembly (namespace Microsoft.SqlServer.Dts.Runtime) mentioned earlier.
Load the list of packages from the SQL Server MSDB database into your array variable.

Loop through all the packages, select the specific package, get the properties of that package, and do the necessary validation with the help of the methods and properties of the .NET assemblies.

To perform this entire functionality, the user must have the appropriate permissions on the MSDB database.
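The steps above can be sketched as follows. The server name, package name, task name, and variable name are hypothetical, and the code assumes a reference to Microsoft.SqlServer.ManagedDTS plus permissions on MSDB:

```csharp
using System;
using Microsoft.SqlServer.Dts.Runtime;

class MsdbPackageRunner
{
    static void Main()
    {
        Application app = new Application();

        // List the packages stored in the MSDB root folder of a (hypothetical) server.
        PackageInfos infos = app.GetPackageInfos(@"\", "localhost", null, null);
        foreach (PackageInfo info in infos)
            Console.WriteLine(info.Name);

        // Load one package from MSDB (Windows authentication).
        Package pkg = app.LoadFromSqlServer(@"\LoadCustomers", "localhost", null, null, null);

        // Disable a specific task and set a runtime variable before executing.
        DtsContainer task = (DtsContainer)pkg.Executables["Download Source File"]; // assumed task name
        task.Disable = true;
        pkg.Variables["SourceFilePath"].Value = @"C:\Tests\Inputs\Customers_Test1.txt";

        DTSExecResult result = pkg.Execute();
        Console.WriteLine("Package result: " + result);
    }
}
```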

ssisUnit – Dark days are gone

It seems that those days of darkness are gone. The SQL Server Integration Services community has felt this need of the day and has come up with a unit testing framework. Now, even if you are not from a development background, you don’t need to worry, because no .NET code has to be written for your unit tests. All you have to create is an XML file with your commands, so you don’t need to know any .NET code. (I have not yet found out how to extend it with my own .NET code, should I want to.)

The thing I liked most is that this unit test framework is based on xUnit, which means it follows the same setup, test, and teardown flow. There is also support for test suites. The developers of Microsoft Dynamics AX should be happy to hear this news. To keep ease of use in consideration, a GUI has been added for creating unit tests in ssisUnit.

You can download the free version of ssisUnit from the following location.

What is xUnit

As mentioned above, ssisUnit is based on the xUnit family. Let me give some information on xUnit for a better understanding of ssisUnit.

Various code-driven testing frameworks have come to be known collectively as xUnit. These frameworks allow testing of different elements (units) of software, such as functions and classes. The main advantage of xUnit frameworks is that they provide an automated solution with no need to write the same tests many times, and no need to remember what the result of each test should be. Such frameworks are based on a design by Kent Beck, originally implemented for Smalltalk as SUnit, but are now available for many programming languages and development platforms.

The overall design of xUnit frameworks depends on several components.

Test Fixtures

A test fixture (also known as a test context) is the set of preconditions or state needed for a test to succeed. The developer should set up a known good state before the tests, and after the tests return to the original state.

Test Suites

A test suite is a set of tests that all share the same fixture. The order of the tests shouldn’t matter.

Test Execution

The execution of an individual unit test proceeds as follows:

setup(); /* First, we should prepare our ‘world’ to make an isolated environment for testing */

/* Body of test – here we make all the tests */

teardown(); /* In the end, whether we succeed or fail, we should clean up our ‘world’ so as not to disturb other tests or code */

The setup() and teardown() methods serve to initialize and clean up test fixtures.


Assertions

An assertion is a function or macro that verifies the behavior (or the state) of the unit under test. Failure of an assertion typically throws an exception, aborting the execution of the current test.

Please give your suggestions/comments on this post.

SQLBIConfessions – What Inspired Me To Start This Blog

17 09 2009

I have been working in the areas of SQL Server Business Intelligence technologies for the past 6 years. When I started working in this specialization, SQL Server 2000 was the version on the market. I was really impressed with the end-to-end implementation of data warehousing using SQL Server BI. At that time Microsoft was the only vendor providing end-to-end tools for a typical data warehousing implementation.

In my day-to-day project execution I need to work on many issues and concepts related to the SQL BI arena. I search the internet for the information required for my workarounds, and in that process I encounter many new features, tools, tricks, and so on. When I search for a specific feature or issue, in most cases I end up not getting the end-to-end picture. This situation inspired me to start posting end-to-end implementations of the features I am well aware of or come across in my project execution.

I would like to share all these things with people who are in the same boat as me. This blog does not cover how to create a sample report, package, or cube in SSRS, SSIS, and SSAS respectively (of course, many blogs and books on the market serve that purpose). It touches on any concept related to SQL BI, from front-end business tools to typical inter-platform integrations with Microsoft Dynamics, MOSS, SAP NetWeaver, and others. It is strictly for people who have already wet their hands with SQL BI related technologies. Any post in this blog gives end-to-end knowledge of the feature or issue in question.

People are highly encouraged to give suggestions/comments on my blog.

How To Implement Multi-Lingual (Localized) Reports In SSRS 2008

16 09 2009

As part of my project execution, I had to work on multi-language reports. I literally spent more than 4 hours finding the information required for my project, and still I could get only bits and pieces, not an end-to-end post on multi-language reports. This situation inspired me to write a post on this topic.

This post describes an end-to-end way of implementing multi-language reports in the SQL Server 2008 environment. Any experienced techie who follows the example given in this article will be able to create multi-language reports in his project. Multi-language reports are also called localized reports in the programming context.

Architectural Process behind Localization

These reports can support different languages based on resource files. A resource file is a text-based file containing strings in English along with their localized meanings in the local language; we need to create one resource file per language. The .NET CLR loads a particular resource file into the SSRS 2008 report processor depending on the browser language seen by Report Manager. The Reporting Services programming extension acts as the channel between the .NET CLR and the report processor in this communication.

Sample Multi – Lingual Report In SSRS 2008

Please follow the below example to implement the multi-lingual reports in SSRS 2008.

Go to Start -> SQL Server 2008 -> SQL Server Business Intelligence Development Studio.

Create a new project using the Report Server Project template. Give it the name SampleMultiLingualReport and save it.


In the solution explorer right click the Reports folder and add new item.

Select the Report from the Add New Item box. Give the name SampleReport then click the button Add.


Drag the textbox from the toolbox and drop into the Report Area.


1.      Create the Localization Code (Class Library) in C# or VB.Net in Visual Studio 2008

1. Open Visual Studio 2008 and create resource files for en-US, en-GB, and fr-FR (English/United States, English/United Kingdom, and French/France respectively). You can create the same files as plain text files as well.

2. Using Visual Studio, create a new Class Library project (VB or C#). I am going to name my project SSRSAssembly.

3. Rename your Class1.vb or Class1.cs to LocalizedReport; presumably you would keep your reusable methods in this class.

4.        Add the following Method to your Class


public static string GetLocalizedStrings(string strCultureInfo, string strLocalString)
{
    // Your C# lookup logic goes here (the method is static so it
    // can be called directly from a report expression)

    // Return the localized string
    return strLocalString;
}

5.       Build your project.
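A minimal sketch of what the method body might look like, assuming the localized strings live in .resx resource files with a base name of SSRSAssembly.Labels; the base name and the fallback behavior are assumptions for illustration, not part of the original post:

```csharp
using System.Globalization;
using System.Reflection;
using System.Resources;

public class LocalizedReport
{
    // Assumed base name: the default resource file is Labels.resx, with
    // Labels.fr-FR.resx etc. compiled into satellite assemblies.
    private static readonly ResourceManager ResMgr =
        new ResourceManager("SSRSAssembly.Labels", Assembly.GetExecutingAssembly());

    public static string GetLocalizedStrings(string strCultureInfo, string strLocalString)
    {
        // Look up the string for the requested culture (e.g. "fr-FR");
        // fall back to the original string if no translation exists.
        CultureInfo culture = new CultureInfo(strCultureInfo);
        string localized = ResMgr.GetString(strLocalString, culture);
        return localized ?? strLocalString;
    }
}
```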

2.     Copy the Custom Assembly to the SQL Reporting Services Folders

Once the C# project has been compiled, it will generate a separate satellite DLL for each resource file, in folders named en-US, en-GB, and fr-FR under the application’s bin directory. It also generates the DLL for the Visual Studio project itself, called SSRSAssembly.dll.

Copy the en-US, en-GB, and fr-FR folders into the below path on your system.

C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PublicAssemblies.

Copy the project output SSRSAssembly.dll into the below mentioned paths on your system.

.Net Path: C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies

Report Server Path:  C:\Program Files\Microsoft SQL Server\MSRS10.MSSQLSERVER\Reporting Services\ReportServer\bin

3.    Add the DLL File to your Reporting Application

To add a reference to your custom assembly, open the Reporting Services report in Report Designer. To do this, follow these steps:

  1. Open the report that will reference the custom assembly.
  2. On the Report menu, click Report Properties.
  3. In the Report Properties dialog box, click the References tab.
  4. Under References, click the ellipsis (...) button that is next to the Assembly name column header.
  5. In the Add References dialog box, click Browse. (In SQL Server 2005, click the Browse tab.) Locate and then click the custom assembly. Click Open. (In SQL Server 2008, click Add instead of Open.)
  6. In the Add References dialog box, click OK.
  7. In the Report Properties dialog box, click OK.

We are now ready to use the custom assembly in Reporting Services

4.   Call the C# Code into the Report

Now the report will be able to refer to the C# code that we have written in .NET, as we have embedded that DLL into our report.

1. Right-click the textbox that we dropped into our report earlier, and click the menu item Expression.

2. The Expression editor will open. Write the below expression in the editor:

=SSRSAssembly.LocalizedReport.GetLocalizedStrings(User!Language, "Hello")
Explanation of the Above Code

SSRSAssembly – the namespace of our C# class library.

LocalizedReport – the name of our class.

GetLocalizedStrings – the name of the method defined in C# that returns the localized string.

User!Language – a built-in SSRS variable; it reflects the browser language of the report user.

"Hello" – this can be any string for which we need a localized meaning. The string should be defined in all the resource files with its corresponding localized meaning.

Note that the Code keyword is reserved for embedded report code; methods in a referenced custom assembly are called using the Namespace.Class.Method form, as above, and the method must be static.

5.   Update the rssrvpolicy.config File in the Report Server

Open rssrvpolicy.config, located in C:\Program Files\Microsoft SQL Server\MSRS10.MSSQLSERVER\Reporting Services\ReportServer.

Add the below <CodeGroup> after the last <CodeGroup> in the rssrvpolicy.config file.

<CodeGroup class="UnionCodeGroup"
           version="1"
           PermissionSetName="FullTrust"
           Name="SSRSAssemblyCodeGroup"
           Description="Grants full trust to the custom localization assembly.">
    <IMembershipCondition class="UrlMembershipCondition"
                          version="1"
                          Url="C:\Program Files\Microsoft SQL Server\MSRS10.MSSQLSERVER\Reporting Services\ReportServer\bin\SSRSAssembly.dll" />
</CodeGroup>


6.   Restart the Reporting Services Service

Go to Control Panel -> Administrative Tools -> Services.

Find SQL Server Reporting Services and restart the service.

7.   View Report in the Report Manager

Compile the report, deploy it to Report Manager, and execute it with different language options by setting the browser language in Internet Explorer under Tools -> Internet Options -> Languages (in the Appearance section).


Let me conclude this post by discussing the limitations of localized reports in SQL BI 2008. We can define localized strings for any report item, such as report data, report headers, report columns, and so on. The only drawback of this feature is that we cannot apply this logic to report parameters. If you want report parameters in different languages, like the report data, it is not possible.

Since report parameters cannot be defined as expressions in SSRS, we cannot open the Expression editor to call the C# code in a report parameter definition. A report parameter always contains a static text value.


Please give your feedback / comments on this topic.

If you need any further help on this, please mail me at