Tag Archives: tm1

TM1 Maximum overdrive! Running TI processes in parallel with multiple cores

The Problem

Since the release of IBM Cognos TM1 10.2 it has been possible to put multi-core CPUs to full use, as MTQ (Multi-Threaded Query) spreads queries across multiple cores. Unfortunately this applies only to queries (as figure 2 here illustrates), so TI processes still run single-threaded and can only use one core at a time. This is quite annoying, because if you have, for example, 64 cores you still end up running CPU-intensive TI processes on just one of them! With some serious magic this limitation can of course be worked around with a little help from the tm1runTI.exe tool: with it we can start multiple TI processes at the same time via the command prompt and so consume more than one core simultaneously.

The Solution

This trick consists of three separate “modules”:

  • tm1runti_execute_parametrized.bat – Parametrized batch file that wraps the execution of tm1runTI.exe
  • Maintenance.Execute.Tm1RunTI.Parametrized – Universal parametrized wrapper TI process that wraps the calling of the batch file above. Can be used to run any TI process in parallel execution mode.
  • A parametrized TI process that can safely be run in several simultaneous instances without locking, for example a version copy process parametrized by year/month etc. See details in the “Usage example” section.
The parametrized batch file code

The batch file itself is very straightforward: it needs authentication information and the name of the process that is going to be executed in parallel.
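
A minimal sketch of what such a wrapper could look like is shown below. The installation path, admin host, server name and credentials are placeholders, and the parameter-passing scheme (process name plus two name/value pairs) is just one possible convention, not the original code:

@ECHO OFF
REM tm1runti_execute_parametrized.bat -- minimal sketch; paths, host names and credentials are placeholders
REM %1 = TI process name, %2/%3 = first parameter name/value, %4/%5 = second parameter name/value
SET TM1RUNTI="<path_to_your_tm1_installation>\bin64\tm1runti.exe"
%TM1RUNTI% -adminhost <your_admin_host> -server <your_tm1_server_name> -user <tm1_user> -pwd <tm1_password> -process %1 %2=%3 %4=%5

In a CAM-secured environment you would use the CAM-related options of tm1runti instead of -user/-pwd, and ideally read the credentials from a password file rather than hard-coding them; check the tm1runti documentation for the exact options in your version.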

The parametrized wrapper TI process (Maintenance.Execute.Tm1RunTI.Parametrized) also takes parameters: the TI process that is going to be executed in parallel and its parameter names and corresponding values:

Parametrized TI process parameters

The actual code is also nice and simple. It just checks whether the parameter values are given, generates a command prompt batch file call and then executes it via the command prompt:

The parametrized TI process code
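
A minimal sketch of what the prolog of this wrapper could look like, assuming parameter names pProcessName, pParamName1/pParamValue1 and pParamName2/pParamValue2 and the batch file path shown earlier (all of these names are assumptions, not the original code):

# Maintenance.Execute.Tm1RunTI.Parametrized (Prolog) -- minimal sketch
IF(TRIM(pProcessName) @= '');
  ProcessQuit();
ENDIF;

# Build the command line for the batch wrapper shown above
sCommand = '<path_to_your_scripts_folder>\tm1runti_execute_parametrized.bat ' | pProcessName | ' ' | pParamName1 | ' ' | pParamValue1 | ' ' | pParamName2 | ' ' | pParamValue2;

# Second argument 0 = do not wait for the command to finish, so several calls can run side by side
ExecuteCommand(sCommand, 0);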

Usage example

Now that the core modules have been introduced it's time for a usage example. Let's say you have multiple big cubes and you want to copy versions in each of them periodically. With big cubes and large data volumes that can take a long time, as TM1 uses only one core for TI process execution. But here we can make a big difference: we can divide the copying of the versions by year and do the copying for each cube at the same time (the only limitation being the number of cores available).

Here is the example code:

Sample usage code

Here, in the outer while loop (left out of the picture) we iterate through all the cubes that we want to process with the version copying. Then in the inner while loop we go through the years from 2006 to 2018. Depending on the number of cubes we could end up with, for example, 100+ separate TI process calls. As they are separated from each other by the time dimension they don't lock each other and can run at the same time (this is the most important part: the runnable TI process must be designed so that when it's executed in separate threads the executions don't lock each other up!).
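
Roughly, the looping could look like the sketch below; the cube-list dimension, subset, process and parameter names are made up for illustration:

# Sample usage (Prolog) -- rough sketch with made-up object names
nCubes = SubsetGetSize('zCubeList', 'All');
iCube = 1;
WHILE(iCube <= nCubes);
  sCube = SubsetGetElementName('zCubeList', 'All', iCube);

  # Inner loop: one parallel call per year
  iYear = 2006;
  WHILE(iYear <= 2018);
    ExecuteProcess('Maintenance.Execute.Tm1RunTI.Parametrized', 'pProcessName', 'Version.Copy', 'pParamName1', 'pCube', 'pParamValue1', sCube, 'pParamName2', 'pYear', 'pParamValue2', NumberToString(iYear));
    iYear = iYear + 1;
  END;

  iCube = iCube + 1;
END;

Because the wrapper launches tm1runti without waiting, the loop just fans out the calls and the operating system schedules them across the available cores.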

So, when we put this piece of code into action we can really get our money's worth out of those expensive PVUs, as we can see here:

Task manager seeing red

TM1Top: TM1 having busy time processing

Summary

The introduction of MTQ in 10.2.2 was a relief, and it's a feature that should have been there 10 years ago when multi-core CPUs arrived. Unfortunately TM1 still lacks an out-of-the-box feature to execute TI processes in parallel, but with this little trick we can overcome this limitation and unleash the full power of the CPU in our TM1 solution!

TM1: Application Maintenance utility and CAM security. The INVALID_IDENTITY challenge.

In the newest TM1 versions it's possible to refresh the TM1 Applications user permissions and do some other maintenance work via the command line. The bad thing is that it still has some issues when working with CAM security (which happens to be the case in almost 100% of production environments). One may encounter INVALID_IDENTITY-related error messages like this:

com.ibm.cognos.fpmsvc.exception.FPMSVCException: INVALID_IDENTITY
exitCode=700

IBM's official answer says that one must use the SDK and develop some custom code to get hold of a cookie value that is set by the IBM Cognos BI portal. Luckily fellow TM1 gurus have also found an alternative way of fetching the "cam_passport" cookie value, but unfortunately the value can change multiple times a day. If you have automated the maintenance utility it may keep going for a while, but when the cookie value expires the utility stops working. And that's bad.

So we need a way to automatically fetch the "cam_passport" cookie value when executing the application maintenance utility (app_maintenance.bat). Once again PowerShell comes to the rescue! Here's what you should do inside a PowerShell script:


$url = "https:///ibmcognos/cgi-bin/cognosisapi.dll?b_action=xts.run&m=portal/cc.xts&gohome="
$cookiejar = New-Object System.Net.CookieContainer
$webrequest = [System.Net.HTTPWebRequest]::Create($url);
$webrequest.CookieContainer = $cookiejar
$credut = New-Object System.Net.NetworkCredential;
$credut.UserName = "username";
$credut.Password = "password";
$credut.Domain = "domain.com"
$webrequest.Credentials = $credut
$response = $webrequest.GetResponse()
$cookies = $cookiejar.GetCookies($url)

$cookies["cam_passport"].Value | Out-File "d:\temp\cam_passport.txt"

Basically what it does is make an HTTP request to the IBM Cognos portal using the credentials provided (these must be the credentials of a TM1 admin user that has privileges to run app_maintenance.bat). On the last line it writes the "cam_passport" cookie value into a temporary file.

Save the script into a file, e.g. "Update_CAM_passport.ps1", and run it from the cmd command line with:

powershell.exe -Noninteractive -Noprofile -Command "D:\TM1\scripts\Update_CAM_passport.ps1"

Now, to make sure that the cam_passport is valid at all times, you can schedule the command above in Windows Task Scheduler and have it run, say, every 30 minutes.
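
For example, with the built-in schtasks command such a task could be created roughly like this (the task name and script path are just examples):

schtasks /create /tn "Update CAM passport" /sc minute /mo 30 /tr "powershell.exe -Noninteractive -Noprofile -Command D:\TM1\scripts\Update_CAM_passport.ps1"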

The final step is to call app_maintenance.bat giving it the freshly fetched cam_passport value as a parameter. We can achieve that as follows (batch file contents):

@Echo on
timeout 2
set /p p_cam_password=<d:\temp\cam_passport.txt
"D:\Program Files\ibm\cognos\tm1_64\webapps\pmpsvc\WEB-INF\tools\app_maintenance.bat" ^
 -op refreshrights -serviceurl https://tm1hostname.com:9514 ^
 -credentials CAM:%p_cam_password% -app <application_id>

Where the application_id is the unique id of your application.

"Se on siinä!", as we would say in Finnish. That's about it. Now you can sleep safely knowing that your automated application maintenance utility keeps running even if the cam_passport value changes.
 


TM1: Run SQL Server Job from TI process

TM1 Turbo Integrator is quite limited when it comes to loading data from different sources and transforming and cleaning the data before it can be loaded into cubes. Therefore you might have a dedicated data repository for your TM1 solution, for example a data warehouse. Sometimes you need to refresh the DW data before it can be loaded into your TM1 solution. Then you face the question: how do you manually execute the data warehouse refresh ETL jobs? Of course you can always use SQL Server Management Studio and run the jobs there, but if the end users need to refresh the data manually during working hours it would be nice to provide them with a single-platform solution to accomplish this.

So let's face it: maybe you already have some user interface for the end users to execute the TM1 solution's internal TI processes. It could be via TM1 Perspectives web sheets on TM1Web, straight from TM1 Architect, etc. Wouldn't it be nice to have the ability to run external DW loading processes from the same place? Here is how to accomplish it.

Set up the logins and jobs in SQL Server

SQL Server login properties

First of all you need to ensure that the user account used when the TI process connects to SQL Server has all the needed permissions in the SQL Server instance. At least the following three things must be ensured (a small T-SQL sketch follows the list):

  • The login has the SQLAgentUserRole role in the msdb database; the login then has permissions only on local jobs and job schedules that it owns
  • The login has the required permissions to access the source files etc. This can be solved by defining a proxy account, as described in my previous blog post.
  • The job must be owned by the same user you will use when connecting from TM1. We don't want to throw admin permissions around; the user should have rights only to one or a couple of jobs, not all of them
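
As a rough T-SQL sketch of the first and third points (the login, user and job names are placeholders):

USE msdb;
-- The login needs a user in msdb that is a member of the SQLAgentUserRole role
CREATE USER [DOMAIN\tm1_sql_user] FOR LOGIN [DOMAIN\tm1_sql_user];
EXEC sp_addrolemember N'SQLAgentUserRole', N'DOMAIN\tm1_sql_user';

-- Make the same login the owner of the one job it is allowed to start
EXEC dbo.sp_update_job @job_name = N'<your_job_name>', @owner_login_name = N'DOMAIN\tm1_sql_user';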

 

Set up TI process to run the job: zSQLJob_StartJob

To accomplish this we need two different TI processes: one to actually execute the job and one to determine whether the job execution has ended or not. Let's start with the main process, zSQLJob_StartJob. It has one parameter, p_sql_job_name, which contains the name of the job to be executed. The process itself is quite straightforward and here is the complete code:

The zSQLJob_StartJob process code

Basically it just opens an ODBC connection to the msdb database of the SQL Server instance and then executes the job. The magic here is that the process won't finish until the job on SQL Server has finished. To get this behaviour we poll the msdb database every 5 seconds to find out whether the job has finished or not.
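
The prolog could look roughly like the sketch below; the DSN name, credentials and the timeout-based 5-second wait are assumptions to adapt to your environment:

# zSQLJob_StartJob (Prolog) -- minimal sketch; DSN name and credentials are placeholders
NumericGlobalVariable('vnJOBSTATE_global');
vnJOBSTATE_global = 1;

# Start the SQL Server Agent job via the msdb database
sQuote = CHAR(39);
ODBCOpen('msdb_dsn', 'sql_user', 'sql_password');
ODBCOutput('msdb_dsn', 'EXEC msdb.dbo.sp_start_job @job_name = ' | sQuote | p_sql_job_name | sQuote);

# Poll every 5 seconds until the check process flips the flag to 0
WHILE(vnJOBSTATE_global = 1);
  ExecuteCommand('cmd /c timeout /t 5 /nobreak', 1);
  ExecuteProcess('zSQLJob_CheckJobStatus');
END;

ODBCClose('msdb_dsn');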

Set up TI process to determine the status of the running job: zSQLJob_CheckJobStatus

To find out whether the job has finished or not we just query the job status directly from the msdb database. To do that we use the following SQL query and put it in the “Data Source” tab like this:


SELECT CASE
WHEN EXISTS (
select job.Name, job.job_ID, job.Originating_Server,
activity.run_requested_Date, datediff(minute, activity.run_requested_Date, getdate()) as Elapsed
from msdb.dbo.sysjobs_view job
inner join msdb.dbo.sysjobactivity activity
on (job.job_id = activity.job_id)
where run_Requested_date is not null and stop_execution_date is null
and job.name = 'JOHIDW_Update_Publishing' ) THEN 1 ELSE 0 END AS JOB_RUNNING_STATE

The zSQLJob_CheckJobStatus data source

The only magic with this process is to save the single result value to the global variable vnJOBSTATE_global that we defined in the previous master process. If the value is 1 it means that the job is still running and the master process keeps polling the msdb database every 5 seconds. When the job finishes the value turns to 0 and the master process can finish as well.
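
In practice that means declaring the same global variable in the check process and assigning the query result to it, roughly like this (JOB_RUNNING_STATE is the column alias from the query above, which TM1 exposes as a data source variable):

# zSQLJob_CheckJobStatus (Prolog)
NumericGlobalVariable('vnJOBSTATE_global');

# zSQLJob_CheckJobStatus (Data tab) -- the query returns a single row and column
vnJOBSTATE_global = JOB_RUNNING_STATE;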

Conclusion

There may be times when you need to execute SQL Server jobs to make sure your data to be loaded to TM1 is up-to-date. With this example you can do a nice and easy integration between TM1 and SQL Server and allow end-users to execute simple SQL Server jobs if needed.


Scheduled backup solution for TM1 with 7zbackup

When working with in-memory applications like TM1 it's essential to have a working backup strategy. TM1 doesn't provide any built-in functionality for this, but with a little help from 3rd-party tools one can easily build a lightweight, working backup solution. What you need is 7zbackup, 7-Zip 9.20 and Windows PowerShell 2.0.

Getting started, step 1: Modify the “selection file”

(Notice: I personally needed to go back to version 7zbackup 1.8.5 stable to get the script running successfully.)

After installing the required applications start with modifying the 7zbackup “selection” config file. That’s the file that contains the configuration needed to run through the backup process. There are many different parameters that you can modify but let’s just go through the most important ones.

includesource=<path_to_your_tm1_datafiles_folder>\datafiles|alias=PROD

Here you define what to back up. The datafiles folder is the heart of TM1, so you need to back it up entirely. If you want you can also include the logfiles folder, but it's not necessary.

rotate=60

As the comment line says, this variable "holds the number of historical archive backups you want to keep on target media". I tend to play it safe, and disk space is cheap nowadays, so two months of rotation should be enough. Just remember to copy the actual backup zips to a remote location periodically so you also have a real disaster recovery option.

There are also other parameters you may want to configure, but this will do for us now.

Step 2: Set up a batch file to run the PowerShell script

Next we set up a classic bat file to make it possible to execute the PowerShell script from Windows Scheduler. Here are the contents of the bat file:


@ECHO off
REM Capture the current date (yyyyMMdd) into %WD% so the log file gets a timestamped name
for /f "tokens=1" %%a in ('powershell.exe -Command "Get-Date -Format yyyyMMdd"') do set WD=%%a
powershell.exe -Noninteractive -Noprofile -Command "&{<path_to_your_7zbackup_folder>\7zBackup.ps1 --type full --selection <full_path_to_your_selection_file> --destpath <path_to_your_backup_destination_folder> --logfile <path_to_your_7zbackup_logging_folder>\daily_backup_%WD%.txt}"

Basically what it does is run the 7zBackup.ps1 script, telling it to take a full backup, use the settings from the selection file, put the zipped backup in the given destination folder, and put 7zBackup's own log files in the given directory with a timestamped file name.

If you haven’t configured the script execution policy you may encounter this error message:

File 7zBackup.ps1 cannot be loaded because the execution of scripts is disabled on this system. Please see "get-help about_signing" for more details.

That means you need to set the execution policy to allow executing custom scripts. More information here.

Step 3: Schedule the backup process with Windows Scheduler

The last thing to do is to schedule the backup process on a daily basis (or more often if you like). There are 5 tasks to set up (a combined batch sketch follows the list):

  1. Run SaveDataAll() on the TM1 server to make sure that all the data is written to disk before the backup starts
  2. Shutdown the TM1 server: NET STOP <your_tm1_server_service_name>
  3. Execute the backup bat file (created in step 2)
  4. Start the TM1 server again: NET START <your_tm1_server_service_name>
  5. (OPTIONAL): Copy the newly created backup zip file from destination folder to your remote backup storage location.
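
Steps 2-4 can be wrapped into one scheduled batch file, roughly like this (the service name, paths and the run_7zbackup.bat file name are placeholders; SaveDataAll is easiest to run as a separately scheduled TM1 chore just before this):

@ECHO off
REM Stop the TM1 server service so the datafiles folder is consistent on disk
NET STOP <your_tm1_server_service_name>

REM Run the 7zbackup batch file created in step 2
CALL <path_to_your_scripts_folder>\run_7zbackup.bat

REM Start the TM1 server service again
NET START <your_tm1_server_service_name>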

Here we are! Happy backuping!

-GD


TM1: List all subsets per dimension and their assigned aliases

I was a bit amazed to find out that TM1 doesn't seem to have any built-in functions for fetching all the subsets inside a dimension, and it certainly has no way to tell which alias is currently applied to a subset. Subsets are saved in separate files inside the datafiles directory, so I needed a little PowerShell magic to fetch the subsets and their corresponding alias names myself.

Inspired by the script "Listing all Subsets per Dimension in an IBM Cognos TM1 Server" on IBM developerWorks I wanted to push it a little further. I also switched from TI scripting to a PowerShell script because I needed to look inside the file contents instead of just iterating through the filenames.

The thing is, inside the .sub files there is a special line starting with "274,", followed by the name of the alias that is applied to the subset. The real reason why I wanted a list of the aliases applied to each subset is described in the next post, but for now here is the PowerShell script (it must be executed inside the datafiles folder of your TM1 server):

Get-ChildItem -Recurse -Include *.sub |
  Where-Object {$_.Directory -match "\w*}subs\b"} |
  Select-String -pattern "274," |
  Select-Object Path, Filename, Line |
  Where-Object {$_.Line -notmatch ",$" -AND $_.Path -notmatch "datafiles\\}" -AND $_.Path -notmatch "datafiles\\tunit"} |
  ForEach-Object { $_.Path.Split('\')[-2].Split('}')[0] + ";" + $_.Filename.Split('.')[0] + ";" + $_.Line.Split(',')[1] + ";" + $_.Path } |
  Set-Content "D:\temp\TM1_Dimension_Subset_Alias.csv"

What it does is basically the following:

  • recursively go through all the files in the datafiles folder that are located in subfolders whose names contain "}subs"
  • from those files, fetch all the lines that match the pattern "274," (the alias line)
  • filter out unnecessary system folders
  • finally form a nice csv-formatted content containing dimension name, subset name, alias name and the full path to the subset file
  • write the results to output csv file

Example of the contents of the output file:

dimName;subsetName;aliasName;c:\tm1server\datafiles\dimName}subs\subsetName.sub

There you go! Have fun powershelling your TM1 subsets!

-Greeny


TM1 Performance Modeler – Export hits OutOfMemoryError

I was planning to do the very first deployment from development to production in a new customer environment. Using the transfer out feature of TM1 Performance Modeler it should have been straightforward, but right after the transfer started I encountered a nasty error message:

Transfer has failed due to the following error(s):

ERROR

The transfer failed.
java.lang.OutOfMemoryError

OutOfMemoryError

What a nice way to start a production deployment. The server had gigabytes of free memory, but somehow an insufficient amount of it was available to Performance Modeler. Google to the rescue, and I found the following IBM Support page: http://www-01.ibm.com/support/docview.wss?uid=swg21600719

It turned out that there's yet another configuration file, called ModelingMDT.ini, that should be adjusted. It's located in the following directory:

C:\Users\<USERNAME>\AppData\Roaming\IBM\Cognos Performance Modeler\bins\bin_10.1.XXXX.X

I just increased the value -Xmx512M to -Xmx1024M and voilà: the transfer out succeeded this time. Our model is pretty small at the moment, yet it still doesn't fit in the default 512 MB while transferring the objects. I think that's a pretty small value for a default. What do you think?


Achilles’ heel of TM1: Daylight savings time (DST)

Once again it's that time of the year (March 31st) when we shift into summer time, or daylight saving time (DST). That's great: days are getting longer and you start to feel that summer really is near. The bad thing is if you happen to have a TM1 production environment somewhere with chores scheduled to execute at a specific point in time. The thing is: TM1 is unable to handle daylight saving time correctly. It still amazes me how this is possible. In other words, how have they not made any fixes to correct this? Maybe it's just buried deep, deep inside the legacy business logic or something. Anyhow, the point is you're on your own and have to deal with this issue yourself.

So, after Easter Monday remember to go through the chore schedules in your TM1 environment and reschedule them to start at the correct time.


TM1 9.5.2 crash course: Beware of the LONG-function

During the last few weeks we've faced some serious problems with one of our TM1 production servers. The production server instance kept crashing on a daily basis without giving any clear reason why. We set up monitoring with Windows Performance Monitor and collected some system data, but found nothing suspicious there either. The only clue was the last lines in tm1server.log just before the server crashed:

tm1server.log just before the crash

So, instead of trying to find a needle in a haystack we went through the rules of the last-mentioned }ElementAttributes cubes and found that three of them used the built-in and safe-sounding LONG function. At the time of development it may have been a good idea, but nowadays, at least with version 9.5.2, it is definitely a no-go: PM38586: LONG FUNCTION IN TI CAUSES TM1-SERVER HANG/INSTABILITY

IBM's recommendation is to upgrade to version 10.1.0, but that was not an option for us at the moment, so we needed to find a workaround. Rewriting the rules to avoid the LONG function seems to have done the trick: the production server is stable again and does not crash. So, my strong recommendation is not to use the LONG function in rules if you are running TM1 version 9.5.2 or lower.


TM1 dimension hierarchy levels not showing up in Cognos BI Reporting Studio

Still being a "TM1 freshman" I find myself facing some mysterious problems, especially when it comes to the integration of TM1 and Cognos BI. Recently we made updates to a customer's production environment. We copied new dimension and cube files and the associated }-prefixed control cube files from the test environment. The cubes were also used by many reports in Cognos BI, and everything else seemed fine, but the hierarchy levels were missing:

Dimension levels not showing up in Report Studio

Normally, updating the reporting package from Framework Manager does the trick, but this time it wasn't enough. It's a relief to have a community like TM1 Forum where hundreds of professionals gather and offer their help. A quick search there and I found out that there is a special control cube called }HierarchyProperties where you can choose whether to publish the dimensions with level information or not:

}HierarchyProperties cube

So I thought that adding the level names there and once again refreshing the reporting package would do the trick, but there were still no levels showing up. Going through more threads I discovered that there is also a special TI function called RefreshMDXHierarchy() that should be executed after making changes in the control cube. Finally that missing function call did the trick and the hierarchy levels appeared in Report Studio! One more thing to remember in the future…

Edit 19.9.2013: Make sure you first run RefreshMDXHierarchy(' ') and then again RefreshMDXHierarchy('<your_dimension_name>')

Edit 18.11.2013: IBM released a new technote regarding this same matter. According to it one must also ensure the following things:

  • Determine whether CQM or DQM (Compatible Query Mode or Dynamic Query Mode) is being used in BI.
  • If using CQM, navigate to CognosInstallDIR\data\cqe\RTModels and remove everything within it (folders and files) so that there is an empty RTModels folder.
  • If using DQM, then navigate to CognosInstallDIR\data\mfw4j\PMCs and remove everything within it (folders and files) so there is an empty PMCs folder.

NOTE that you will need to stop BI to perform the actions above.
Alternatively, try republishing the package via FM.


Managing hierarchies being used in IBM Cognos TM1 – precautions

TM1 doesn’t like duplicate names in hierarchies

IBM Cognos TM1 is an excellent tool when it comes to planning, budgeting and forecasting. It also has hierarchy management capabilities but often you want to import hierarchies and data from an enterprise MDM solution such as Master Data Services (MDS) in SQL Server.

When importing hierarchies and data to TM1 one must be careful with the business codes, ids and names of the entities being imported, since TM1 has an interesting "limitation" (or feature, depending on who you ask): all the elements within a dimension must have not only a unique id but also a unique business code and a unique name. This means that the following cost center source data example cannot be imported, because the elements have duplicate names (although the business codes differ):

Type          Code   Name            Parent
Consolidated  1000   Administration
Leaf          1      General         1000
Leaf          2      Purchasing      1000
Consolidated  2000   Production
Leaf          3      General         2000

To prevent situations like this, and to avoid problems when importing data to TM1, one must make sure that there are no duplicate names or codes in the source data. In MDS this situation can be handled nicely by using MDS business rules.

Deploy Business Rules to prevent duplicates

In MDS you can define business rules to automatically keep your data in shape and also apply some automatic processing to your data if wanted. Here we want to prevent duplicate names in the cost center data (codes are automatically unique, so we don't need to worry about them).

Force the cost center names to be unique

Now that the business rule is defined and published we can apply it to the data itself:

Apply the business rule to existing data

As we can see two cost centers have the same name “General” and that violates the business rule definition.

Two cost centers have the same name

The red flag indicates that there is a problem with the data and it needs to be fixed. You can also set up notification emails to business users when the validation rules fail, so that someone can fix the issues immediately, before they cause errors in downstream systems such as TM1.

After the names are fixed and the business rules are applied again, the situation is OK: the data meets the quality needs of TM1 and is ready to go!

No duplicates exist - the data is good to go

Cost center hierarchy after fixing the data

Extra notes

Although we can prevent duplicate names within leaf members and within consolidated members independently, business rules cannot be used to prevent duplicates across both at the same time: you can still have a leaf member named "General" and a consolidated member named "General". And that's also bad for TM1.

In most cases the hierarchy data goes into a data warehousing system before it's loaded to TM1. If that's the case you can brute-force the uniqueness of the names by storing both the consolidated and leaf members in a single table and defining a unique constraint on the name column. That will keep your data well formed. It's not the most beautiful way to do it, but a very effective one. Let's hope we can also tackle this situation in future versions of MDS.
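
A rough T-SQL sketch of that brute-force approach (the table and column names are illustrative only):

-- Staging table holding both leaf and consolidated cost centers;
-- the unique constraint rejects any duplicate name regardless of member type
CREATE TABLE dbo.stg_cost_center (
    member_code  varchar(50)  NOT NULL PRIMARY KEY,
    member_name  varchar(100) NOT NULL,
    member_type  varchar(20)  NOT NULL,  -- 'Leaf' or 'Consolidated'
    parent_code  varchar(50)  NULL,
    CONSTRAINT uq_stg_cost_center_name UNIQUE (member_name)
);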