
Archive for July, 2011

I ran one of my ETL packages on a Windows Server 2008 R2 Standard server. The package completed successfully, but with one warning.

Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available.  To resolve, run this package as an administrator, or on the system’s console.

This warning is misleading, as it has nothing to do with the server's memory or a DLL. You will get it if you run an ETL package directly as an ordinary user such as BAULD\cconsult2. If you run the package from a SQL Server Agent job, the warning is gone. If you run SQL Server Management Studio (SSMS) as administrator and then run the package, the warning is gone too.

Read Full Post »

How do you insert a line break in an SSRS expression? The answer is to use vbCrLf.

Example:

= "Issued To: " & vbCrLf & Fields!Organisation.Value


Read Full Post »

My SSIS package has a requirement to trap invalid data, which is written to a text file and included in an email notification for further action. To do that, I put my SQL query in an Execute SQL Task and set the ResultSet property to XML (remember to add "FOR XML AUTO" or "FOR XML RAW" after the WHERE clause in your SQL query).
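For example, a minimal sketch of the kind of query, assuming a hypothetical staging table dbo.EnvData and made-up validation rules:

SELECT EnvDataID, SensorCode, Reading
FROM dbo.EnvData                          -- hypothetical table name
WHERE Reading IS NULL OR Reading < 0      -- substitute your own invalid-data rules
FOR XML RAW('InvalidRow'), ROOT('InvalidEnvData')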

In the Result Set pane, I set the Result Name to InvalidData and the Variable Name to User::InvalidEnvData. Basically, the output of the SQL query is stored in a variable (User::InvalidEnvData) of String type.

When I ran the package, I received the following error:

The result binding name must be set to zero for full result set and XML results.

This error is straightforward: it is telling me to set the Result Name to zero rather than some name. Changing InvalidData to 0 resolves the error.

To attach the output as the content of an email, go to the Expressions of the Send Mail Task and set the MessageSource property to the variable @[User::InvalidEnvData].

Read Full Post »

I have a log file with a .log extension (called ErrorLog.log) stored locally on the server. This log file captures all errors/warnings that occur during the ETL process. By default, SSIS appends to the log file. To overwrite it instead, use the File System Task with the "Copy File" operation as a workaround. The source connection must be different from the destination connection, so I copied my log file to another location, renamed it ErrorLogCopy.log, and set that as my source connection. My destination connection is ErrorLog.log. Make sure the "OverwriteDestination" property under the DestinationConnection is set to True.
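To summarise the workaround, the task configuration looks roughly like this (a sketch of the settings described above):

File System Task
  Operation:              Copy File
  SourceConnection:       ErrorLogCopy.log   (the clean copy)
  DestinationConnection:  ErrorLog.log       (the live log)
  OverwriteDestination:   True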


Read Full Post »

In the Send Mail Task properties, select Expressions, choose the Subject property, and build the expression from a combination of system variables and user variables, if you have any.

For example,

"ETL Error in Package " + @[System::PackageName] + ", Task " + @[System::TaskName] + " at " + (DT_WSTR, 30) @[System::StartTime] + " on Machine " + @[System::MachineName]

The above expression will get displayed as:

ETL Error in Package test, Task SMTP Email Error to Admin at 21/07/2011 2:35:57 PM on Machine PROSERV18

Furthermore, to store this in the SSIS package configuration, create a package variable called EmailSubject of String type and set its value to the above expression. Right-click on the Control Flow, select "Package Configurations", and tick the @EmailSubject variable value. Then, if you need to change the email subject expression in the future, you only need to change it in one place, i.e. the configuration, which in my case is stored in a database table called Config.[SSIS Configurations].

More details – http://support.microsoft.com/kb/906547

Read Full Post »

Whenever you create a new dataset with parameters in MDX, SSRS automatically creates the datasets that populate the parameter values for you. You would think this makes life easy, but hang on: when you run the report, it takes ages, and the next thing you see is that your BIDS project is not responding. You then have to kill it and reopen the project. This is a real nightmare!

I posted previously that the number of rows returned from the datasets is very important. Using SQL Server Profiler proves this.

So if your MDX reports run very slowly, check the datasets and try to reduce the number of rows they return. For example, a date parameter does not need to obtain its available values from a query: set its Available Values to None and its default value to a fixed date or today's date. Then restrict the number of dates returned in the dataset by filtering it, say to dates where a certain measure is greater than zero.

Another thing: if you have a main report with a subreport, check which parameters are passed through to the subreport. Each of these parameters is passed as a single value only, so the corresponding parameter in the subreport should have its Available Values set to None, and the dataset that populates its values can be deleted.

 The above will significantly improve the performance of the report. 🙂

Read Full Post »

I needed to perform a UNION in MDX, similar to SQL, for my report. I have two datasets generated from two different fact tables. There is an MDX UNION operator, but it only operates on sets. Googling around turned up blog posts suggesting performing the UNION with subreports. I tried this, and it meets my report requirement.
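For reference, what I was after is the relational equivalent; a rough T-SQL sketch with made-up fact table and column names:

SELECT Category, Amount FROM dbo.FactSales      -- hypothetical fact table
UNION ALL
SELECT Category, Amount FROM dbo.FactReturns    -- hypothetical fact table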

However, I encountered another issue. When the subreport returns no rows, the main report shows it as blank, and I wanted it to display zero. I added a check: if the field is null, set it to zero. It still would not display zero. I then set the NoRowsMessage to 0 in both the subreport and the main report. It now displays the zero, but left-aligned, and I wanted it right-aligned. The only way to achieve this is to set the TextAlign of the field to Right.
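The null check was a standard SSRS expression along these lines (Amount is a stand-in for your own field name):

= IIf(IsNothing(Fields!Amount.Value), 0, Fields!Amount.Value)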

Also, if the main report somehow does not display the subreport correctly, deploy both reports and then run them from the report server. I reckon this is a bug!

Read Full Post »

I have an attribute named Suggestion, which has a data type of text in the SQL Server database. When I populated it as text or varchar(max), SSIS automatically set it to DT_TEXT. When this got to the cube, it failed to process. Interestingly, as varchar has a maximum allocation of 8000, when I set the data type to varchar(8000), the cube dimension processed successfully.
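The conversion is a one-liner in the source query; a sketch, with dbo.Feedback standing in for my actual table:

SELECT CAST(Suggestion AS varchar(8000)) AS Suggestion
FROM dbo.Feedback   -- hypothetical table name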


Read Full Post »

I have a dimension with an attribute called Action, which is varchar(5000) in the source. When it is added to the dimension, it gets assigned a WChar data type of size 1024. And, of course, when I tried to process the dimension, it failed with the following error:

Errors in the back-end database access module. The size specified for a binding was too small, resulting in one or more column values being truncated.

It took me a while to find out how to change the size of the attribute from 1024 to 5000. I refreshed the cube's data source view a few times, hoping it would automatically refresh the data size, but that didn't happen. I had to change it manually: the DataSize property is located under the Properties of the attribute, in the KeyColumns collection.


Read Full Post »