Archive for January, 2012
If you are experiencing slowdowns and delays in Cognos reports on your Unix-based system, a good first place to look is your ulimit parameters. To see your current ulimit values, enter “ulimit -a” on your Unix system.
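For example (a sketch; the exact line items vary by platform and shell — AIX labels the file-descriptor limit nofiles(descriptors), while Linux calls it open files):

```shell
# Show all current limits for this shell session.
ulimit -a

# Show just the file-descriptor limit (a number, or "unlimited").
ulimit -n
```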
Pay particular attention to your maximum file descriptor limit, listed as nofiles(descriptors) on AIX. The default value can be quite low, and raising it can significantly improve your Cognos system’s performance. You can set this value with the Unix command ulimit -n xxxxx (where xxxxx is your desired maximum number of file descriptors).
These ulimit settings are a function of your Unix session, and will revert to their default values after you log out. If you want to make permanent changes to your Cognos ulimit settings, you will need to either:
- Create a startup script that runs the ulimit -n xxxxx command before it calls the cogconfig.sh file for Cognos startup; or
- Add the ulimit -n xxxxx command to the .profile of the Unix account in question (open it with vi .profile)
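The first option can be sketched as a small wrapper script. The install path and the 65536 target are assumptions — adjust both to your environment — and note that raising the limit above the account’s hard limit requires root or a change to /etc/security/limits (AIX) or /etc/security/limits.conf (Linux):

```shell
#!/bin/sh
# start_cognos.sh -- hypothetical wrapper: raise nofiles, then start Cognos.
COGNOS_HOME=${COGNOS_HOME:-/opt/ibm/cognos/c10}   # assumption: your install path
WANT=65536                                        # assumption: desired nofiles

# Raise the per-session file-descriptor limit, but never above the hard limit.
HARD=$(ulimit -Hn)
if [ "$HARD" != "unlimited" ] && [ "$WANT" -gt "$HARD" ]; then
    WANT=$HARD
fi
ulimit -n "$WANT"
echo "nofiles for this session: $(ulimit -n)"

# Start Cognos with the new limit in effect (skipped if not installed here).
if [ -x "$COGNOS_HOME/bin64/cogconfig.sh" ]; then
    "$COGNOS_HOME/bin64/cogconfig.sh" -s
fi
```

Because the script runs ulimit in the same shell that launches cogconfig.sh, the raised limit is inherited by the Cognos processes.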
For more information on ulimit settings for Cognos, see this IBM article (it is specific to AIX and Linux systems).
After importing and/or exporting a Cognos Adaptive Analytics Framework (AAF) package, you may encounter the following problems:
- Reports return calendar data errors
- Validating Cognos AAF package returns “The information in IBM Cognos Framework Manager package was not completely loaded. XML failed to parse. Invalid document structure.”
To resolve this, stop and restart your Cognos services, then revalidate your AAF package.
Have you ever wondered why the only way you can read SAP data is through the SAP connection plug-ins for your ETL tool of choice? Why can you not simply read SAP tables directly from the source database? Why is everything data-transfer-wise just a little bit harder with SAP?
Well, for years I just assumed that SAP (being a proprietary system) liked to keep a tight grip on its data. But as I was looking into a situation for a client, I learned the truth is a bit more complicated.
It all started when I wanted to join the transaction header table (BKPF) with the transaction detail table (BSEG). Couldn’t this be done at the SAP source, instead of in the data warehouse staging area? If this were a straight-up relational database, joining these tables would be a no-brainer.
Now anyone familiar with SAP knows about the cryptic table and column names, and probably has at least a passing awareness of ABAP programming and the BAPI and IDoc interfaces used to read SAP data. So this alphabet soup is where we start. But the cryptic names aren’t the half of it.
The first thing you need to understand is that SAP is not what you would consider to be a garden-variety relational database. SAP actually maintains three different types of tables, two of which are unreadable outside of SAP’s programming environment:
- Cluster tables are control tables of sorts that read from many different physical tables. BSEG is just such a table, and is therefore not readable by straight SQL commands.
- Pooled tables are used to store things like program parameters.
- Transparent tables are what most SQL programmers would call “normal”: the table is physically and logically the same, and can therefore in theory be read by standard SQL commands (although virtually all SAP interfacing is done through SAP programming in any case).
Knowing this helps SAP’s world make a little more sense.