Category Archives: Cognos

An Overview of Esri Maps For Cognos, Part 3 – Shapes

(Special “The Colour and The Shape” edition)

By Peter Beck, CBIP

In Part 1 of this series we introduced the Esri Maps for Cognos (EM4C) product, which enables us to tie together BI-type reporting with the rich capabilities of Esri’s mapping software. In Part 2 we demonstrated how easy it is to use this software to connect point-type data in a Cognos report to a map. In essence, we can take points of data identified by lat/long values and connect them to a map, and then colour-code the points to represent different categories or types of data. In our example, we looked at crime data for San Francisco. The result enabled the user to make inferences from the geographic distribution and type of crime reports that would be difficult to make if the data were simply listed by address, or even grouped into neighbourhood categories.

Wonder how we’re doing in the dragon district?

In this installment, we will look at a slightly different way of displaying data within the context of geography – instead of displaying discrete points (which require lat/long values) we will categorize larger geographic areas, defined by shapes on the map.


As before, we need a Report Studio report with a query:

EM4C_3_1_query

Note that in this example we don’t have any “lat/long” type data here – instead, we have Retailer Province-State, which contains values representing the name of each state:

EM4C_3_2_results

This time, instead of adding a Cognos X/Y Layer to our Esri map in the report, we will add a Cognos Shape Layer:

EM4C_3_3_Shape_Layer

A Cognos Shape Layer acts much like an XY layer, except that instead of binding lat/long points it binds the report data to a map containing “shapes”, matching on values the two have in common. In this case we set the map associated with the shape layer to one containing the shapes of US states. In the wizard provided we can match the shape-name attribute in the map we have selected (STATE_NAME) to the appropriate column (Retailer Province-State) in our query:


EM4C_3_4_Join

We select the measures we are interested in…

EM4C_3_5_Fields

… and then configure the “shape join”, assigning colour-values to relative levels of each measure (in this case, Revenue):

EM4C_3_6_Style

We now have a map that lets us see, by quantile, how revenue compares by state:

EM4C_3_7_Map_1

Easy!
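Under the hood, a quantile classification is a simple idea: sort the measure values and cut them into equal-count buckets, one colour per bucket. Here is a minimal Java sketch of that idea (my own illustration, not EM4C code; names are made up):

import java.util.Arrays;

public class QuantileSketch {

    // Assign each value a bucket 0..buckets-1 based on its rank in the sorted order.
    static int[] classify(double[] values, int buckets) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int[] bucketOf = new int[values.length];
        for (int i = 0; i < values.length; i++) {
            int rank = Arrays.binarySearch(sorted, values[i]);
            bucketOf[i] = Math.min(buckets - 1, rank * buckets / values.length);
        }
        return bucketOf;
    }

    public static void main(String[] args) {
        double[] revenueByState = { 1.2e6, 8.4e5, 3.9e6, 2.2e6, 5.1e5 };
        // 4 buckets = 4 colour classes on the map
        System.out.println(Arrays.toString(classify(revenueByState, 4))); // [1, 0, 3, 2, 0]
    }
}

Each state is then painted with the colour of its bucket – which is what the quantile breakdowns in the legend reflect.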

However, because we have selected several measures, we can also use the map legend to select the other measures and see how they compare as well:

EM4C_3_8_Map_2

For example, here is the map showing Gross Profit:

EM4C_3_9_Map_3

Note that the legend shows the quantile breakdowns for each colour. As well, hovering over each state brings up information on the state:

EM4C_3_10_Map_4

Users are not limited to a single shape layer – multiple layers can be combined on a single map, and the user can then activate or deactivate layers to show different data by different “shape”.

Shapes are not limited to conventional maps, of course. Floor plans provide an ideal source of shapes. Retailers can use shapes to identify revenue by area of a store, or property managers can look at building usage, perhaps over time. All that is needed is an Esri map whose shapes correspond to the physical areas the user is interested in, with an attribute whose values match those of a column in the report. The sketch below makes that join concrete.
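Stripped of the wizardry, the “shape join” is nothing more than matching attribute values on the map to column values in the query. A rough Java sketch of the concept (hypothetical names, not EM4C internals):

import java.util.HashMap;
import java.util.Map;

public class ShapeJoinSketch {
    public static void main(String[] args) {
        // Map side: shape attribute (e.g. STATE_NAME) -> shape id on the map
        Map<String, Integer> shapesByName = new HashMap<>();
        shapesByName.put("California", 1);
        shapesByName.put("Texas", 2);

        // Report side: Retailer Province-State -> Revenue
        Map<String, Double> revenueByState = new HashMap<>();
        revenueByState.put("California", 3.9e6);
        revenueByState.put("Texas", 2.2e6);

        // The join: every shape whose attribute matches a report value
        // gets the measure bound to it for colour-coding.
        revenueByState.forEach((state, revenue) -> {
            Integer shapeId = shapesByName.get(state);
            if (shapeId != null) {
                System.out.printf("shape %d (%s) -> %.1f%n", shapeId, state, revenue);
            }
        });
    }
}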


An Overview of Esri Maps For Cognos, Part 2 – Points

(Special “If You’re Going To San Francisco” edition)

By Peter Beck, CBIP

In Part 1 of this series, we looked at how Esri Maps For Cognos – EM4C – allows us to embed a map from an Esri map server inside a Report Studio report. But the map is pretty useless if it doesn’t allow us to connect to our data and perform some kind of analysis that can’t be done with a regular list report, or with some kind of graph.

From a mapping perspective there are a couple of concepts that we need to keep in mind if we are going to bind business data to a map: one is the idea of a point, the other the idea of a shape.

Creating map-points (old school)

We’ll start with a point. A point is a lat/long value on a map: it is (strictly speaking) an entity with no area. It could be a point that represents a store location, a home address, whatever you like. The important thing to keep in mind is that even if a store (or your house) occupies area, from a mapping/point perspective it is simply a point on the map.

So what kind of data can we plot using points? Crime data is one example – a police call is typically to a particular address. If we can plot these locations on a map, by type, we might gain insights into what kinds of crimes are being reported not just by location, but by location relative to each other – what kinds of crimes cluster together, geographically.

Crime data for San Francisco for March 2012 is available on the web, and this data set comes with both the category of crime and the lat/long of the police report. This makes the data set ideal for plotting on a map.

First, I set up a quick Framework Manager model that retrieves the data from my database. Then, we need a query in Report Studio that retrieves the data:

Creating a simple query

Note that we have a Category, Description, and X and Y values representing Longitude and Latitude respectively.
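That X/Y ordering trips people up: X is the longitude (east-west) and Y is the latitude (north-south), not the other way around. In Java terms, each row we are about to bind looks something like this hypothetical record (the field names are mine):

// One police report, ready to plot as a point on the map.
// X = longitude, Y = latitude (not the other way around).
record CrimeReport(String incidentNum, String category, String description,
                   double x, double y) {}

(The incidentNum field comes along because, as we’ll see in a moment, the layer wants a unique field.)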

I add a map placeholder (as we did in Part 1) and then save the report. (I could, of course, add any additional report items, queries, etc. to the report that I wish.) I then open the map placeholder in Esri Maps Designer, add a base map, and then add a new layer: the special Cognos X Y Layer. I rename it Crime_Locations:

Adding the X Y Layer

A wizard enables me to select the query associated with the Crime_Locations layer, which will display points:

Selecting Data

Note the inclusion of a Unique Field – this is the IncidentNum from the original data.

Further configuration allows me to then assign the Lat/Long from the data set, and identify each point by the Category of crime.

Categorization and Symbolization

I now have a set of symbols – coloured squares – that correspond to the categories of my data. When I view my report, I can see each crime, colour-coded by type, at the location where it was reported:

Whoa, that’s a lot of crime…

Even at this zoom level I can draw some conclusions about which areas have more crime – the north-east seems to have more reports than the south-east, for example. But by selecting specific crimes and zooming in, interesting patterns begin to emerge.

What patterns are emerging?

The orange squares represent drug-related charges. The green and purple squares are assault and robbery charges respectively. The drug-related charges are concentrated in one relatively small area, while the assault and robbery charges seem more spread out – but with a concentration in the area where the drug charges are being laid.

If we zoom in even closer, we can see that certain streets and corners have more calls than others in close proximity – that the crimes seem to cluster together:

What’s special about these areas?

But zooming out again, we see an interesting outlier – a rash of drug charges along one street, with what appears to be relatively few assaults or robberies:

Something looks out of place…

Zooming in we see that this activity is almost completely confined to a 7-block stretch of Haight St., with virtually no activity in the surrounding area, and few robberies or assaults:

What is it that makes this street so special?

This kind of spatial relationship is extremely hard to discern from a list or chart, even a chart that breaks events like police calls down by some kind of geographic category. But using mapping, a simple zoom takes us from an overall view of patterns of activity to a much higher degree of detail – detail that begins to tell some kind of story, or at least to warrant further investigation.

But wait, there’s more…

By hovering over an individual square, I can get additional category information from my underlying data, assuming I have included it in my query. In this case there is a sub-category of the call:

The reader is left to draw his or her own conclusions…

By adjusting the query I can re-categorize my data to yield results by day of the week, say, or by sub-category. For example, here we can contrast Possession of Marijuana (green) with Possession of Base/Rock Cocaine (pink):

Patterns of behaviour…

Marijuana possession seems more diffuse, although concentrated in a few areas. The cocaine charges are much more concentrated.

In our next entry in this series, we’ll take a look at allocating data to shapes, to colour-code areas to represent different levels of activity.


An Overview of Esri Maps For Cognos, Part 1 – Intro

By Peter Beck, CBIP

Cognos report writers have long been frustrated by the poor built-in support for GIS-type displays in Cognos reporting tools. True, there is a basic map tool included as part of Report Studio, but it is quite limited in functionality. It can be used to colour geographic areas, but lacks layering, zooming, sophisticated selection tools, and the kind of detail we’ve all become used to with the advent of Google Maps and the like.

Wonder where the ring sales are these days?

There are a few map-related add-ons for Cognos reporting available. Recently I had the opportunity to take Esri’s offering in this space for a test drive with a 2-day training session at Esri Canada’s Ottawa office. I came away impressed with the power and ease-of-use offered by this product.

EM4C – Esri Maps For Cognos – came out of development by SpotOn Systems, formerly of Ottawa, Canada. SpotOn was acquired by Esri in 2011. The current version of the product is 4.3.2. The product acts as a kind of plug-in to the Cognos portal environment, enabling Report Studio developers to embed Esri maps, served up by an Esri server, in conventional Report Studio reports. From a report developer perspective EM4C extends Report Studio, and does so from within the Cognos environment. This is important: EM4C users don’t have to use additional tools outside the Cognos portal. From an architectural perspective things are a little more complex: the Cognos environment must be augmented with EM4C server, gateway and dispatcher components that exist alongside the existing Cognos components.

Then, of course, there are the maps themselves. Since this is a tool to enable the use of Esri maps, an Esri GIS server must be available to serve the maps up to the report developer and ultimately the user. For shops that are already Esri GIS enabled this is not a challenge, and indeed I can see many users of this product wanting to buy it because they have a requirement to extend already-available mapping technology into their BI shops. However, if you don’t have an Esri map server, don’t despair – the product comes with out-of-the-box access to a cloud-based map server, provided as part of the product licence. This is a limited solution that won’t satisfy users who have, for example, their own shape files for their own custom maps – but then if you have such a requirement you probably already have a map server in-house. If you are new to the world of GIS this solution is more than enough to get started.

So where do we start with EM4C? First, you need a report that contains data that has some geographic aspect to it. This can be as sophisticated as lat/long encoded data, or as simple as something like state names.

When we open our report, we notice we have a new tool: the Esri Map tool:

Selecting the Esri Map tool

As mentioned, the EM4C experience is designed to enable the report writer to do everything from within Cognos. Using this tool we can embed a new map within our report:

Map Placeholder


So now what? We have a map place-holder, but no map. So the next step is to configure our map.

This step is done using Esri Maps Designer. This tool is installed in the Cognos environment as part of the EM4C install, and enables us to configure our map – or maps, as we can have multiple maps within a single report.

Selecting Esri Map Designer

Esri Maps Designer is where we select the map layers we wish to display in our report. When we open it we can navigate to any Report Studio report in which we have embedded an Esri map:


Selecting a map to configure

In this case VANTAGE_ESRI_1 is the name of the map in my report; the red X indicates it has not been configured yet. Clicking Configure brings up our configuration. This is where we select a Base Map, and then link our Cognos data to a layer to overlay on the map.

As mentioned, out-of-the-box the EM4C product enables the user to use maps served from the Esri cloud. We will select one of these maps from Esri Cloud Services as the Base Map of our report:

Maps available from Esri’s cloud services

When the base map is embedded, it becomes a zoom-able, high-detail object within the report:

An embedded map

Unfortunately, while the map looks great it bears no relationship to the report data. So now what?

In Part 2 of this overview we will look at how to connect the report data points to the report map. It is the combination of the ease-of-use of BI tools (and the data they can typically access) with mapping that makes a tool like EM4C so powerful. We will symbolize data to create colour-coded map-points that reveal geographic locations and spatial relationships, potentially allowing users to draw conclusions they otherwise would not have been able to reach with list-type data.


Can’t anybody here play this game?

Can’t anybody here play this game? – Casey Stengel

 By Peter Beck, CBIP

IBM has a new paper out on that age-old question – why do BI projects fail, and what can be done about it? The paper is entitled “Bridge The Gap Between BI Best Practices and Successful Real World Solutions”. The first few pages are the usual marketing fluff, and they generally contradict the “meat” of the paper, which begins a little further in. That is, once again, we see a particular technical/product solution proposed to solve what is not a technical problem. This is accomplished by simply asserting that this particular technical solution maps neatly over the business problems Gartner has uncovered. If you are brave enough to hack your way through the paper to where the Gartner material actually begins, there are some interesting discoveries to be made. By “interesting” I mean “depressing”. Taken as a whole the paper can be thought of as a fine example of what the Gartner research itself reveals.

The paper begins with a set of now-common observations: that BI programs need a business sponsor, that IT ends up “selling” BI to the business (and doing it badly), that BI tends to get “stuck in reporting”, and that “Technology is rarely the culprit if the BI project is considered a failure”. All well and good. And then at page 2 we read, in bold, all-caps:

IBM COGNOS EXPRESS: THE KEY TO A WINNING BI STRATEGY

I see. IBM’s technology will be the “key”. That’s a relief. Gap closed! Close the document and move along.

But if I keep reading, I discover that the folks at Gartner have done some research on the practice of BI programs, most of which are not particularly related to technology (on the contrary.) The results aren’t good. That doesn’t mean they are surprising, of course.

The Gartner section of the paper is called “The BI(G) Discrepancy: Theory and Practice of Business Intelligence”. They break out 9 aspects of BI implementation, and discuss what should be done in each aspect, versus what their research indicates is actually taking place in the real world. The results are a confirmation of what most of us “in the trenches” feel intuitively: there seems to be little correspondence between what should be done, and what actually is done. And technology isn’t going to change that.

The whole thing is worth a read, but the most eye-popping section turns out to be the discussion of – BI strategy! That thing that the latest IBM product will provide a “key” for! Turns out only 2% of organizations informally surveyed in mature markets had anything called a BI strategy. That’s not a typo. 2%. And this is among Gartner clients. Let that sink in for a second, and then consider this quote from the paper:

“Nearly shocking results are obtained when reviewing the so-called BI strategy documents. Almost never would those qualify as strategy in Gartner’s opinion. Quite often a strategy is merely a statement like “We have a Microsoft BI strategy” or “Our BI strategy is SAP” indicating what products the organization is using or planning to implement. Other times the “strategy” is merely an architecture diagram… This is as if the Ferrari Formula 1 team described its racing strategy as “using Bridgestone tires, Shell fuel, a V8 engine and red paint.”

I like the use of “Nearly” to suggest seen-it-all unflappability on the part of the author.

The analyst goes on to describe the initial 2% number as “rather optimistic” (raw-ther, old sport!), blows some dust off the dictionary definition of “strategy”, and then (perhaps beginning to get a little exasperated, and reaching for the bottle) muses that:

“The question could be expanded to: Do executives even understand what constitutes a strategy?”

Yes! It does appear that the question could be expanded to that!

Everyone, and I mean everyone, I have ever encountered in this industry who works above the level of writing reports struggles with the problems outlined in the Gartner material every day. And yet here we are, decades now into the world of BI, and it doesn’t appear to be getting any better. BI still seems to be mired in confusion as to what it is – what its identity is within the organization. The default position seems to be: it’s a technology. IBM et al. seem OK with this, and I can’t blame them. As long as the discussion can be returned to “BI is a product (and our product is the best!)” they seem to be happy, as they have a tangible thing to sell. My own feeling (obviously) is that whenever the real answer to this question is found, it won’t be “Cognos” or “Microsoft Analysis Services” or any other piece of software, and I say this as someone who spends his days with these products in front of him.

If executives don’t have a grasp of the rudiments of BI strategy (or perhaps strategy in general), it seems that the best anyone can do is try to keep pushing technology. At least that seems to be what IBM’s “strategy” is with this document – provide a high-level summary, name the product and map it to what is “supposed” to happen in an organization, and hope for the best – and that no-one keeps reading. Or what the Gartner analyst, in the section on what goes on in the real world when it comes to the business case for BI, characterizes as a “leap of faith”. I’m not kidding, they actually use those words to describe what Gartner clients are doing to justify their BI investments.

Check the paper out, it’s worth a read.

Cognos 10, Report Bursting and Saving Output to File

 By Peter Beck, CBIP

(The instructions below present setting up C10 for output to a file location on the network within the context of bursting reports, but there is no reason you can’t set up file output for the normal manual or scheduled execution of reports – PB)

Cognos 10 (like all versions of Cognos BI since ReportNet) has a fairly straightforward way of configuring a given report for “burst” output – that is, for generating a set of reports from a single report specification, where the only difference between the reports is some selected value. Consider a generic sales report, where we have 2 different sales reps.

We might want to “burst” the report across the sales rep identifier, so we would get one report for each sales rep. We could then distribute each report to the appropriate rep.
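Conceptually, bursting is just a group-by over the report’s rows: one rendered output per distinct value of the burst key. A minimal Java sketch of the idea (illustration only, nothing to do with Cognos internals; all names are made up):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BurstSketch {

    record Row(String salesRep, double amount) {}

    public static void main(String[] args) {
        List<Row> rows = List.of(
                new Row("Smith", 100.0),
                new Row("Jones", 250.0),
                new Row("Smith", 75.0));

        // Burst key = sales rep; each group becomes one output report.
        Map<String, List<Row>> bursts =
                rows.stream().collect(Collectors.groupingBy(Row::salesRep));

        bursts.forEach((rep, group) ->
                System.out.println("Report for " + rep + ": " + group.size() + " rows"));
    }
}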

Setting a report up for bursting is performed in the Report Studio interface. Under File… Burst Options we set how the report will burst. We also have the option of selecting how the report will be distributed – either as an email or as a Cognos directory entry. The values for both the burst specification and the distribution must come from a query in the report.

However, it is quite possible that we might want the output to go out to a file location instead. To set this up requires a little bit of configuration, but it is quite straightforward. In versions of Cognos BI prior to 8.3 this was a bit limiting – we essentially had only one destination we could output to. In even older versions controlling the name of the output report was a pain as well – we needed secondary scripting to re-name the report in the output file based on an associated XML file. This is no longer necessary.

Note about the instructions below: this is not limited to burst output – setting up C10 for file system output can be useful for saving any report you run to the file system, whether a manually run report, a scheduled report, or a burst report.

First, we need to create a shared folder on our server. This can be any name, but should not be located in the installation directory. The user under which the C10 service runs must have full rights to the folder. In this case I’ve created a folder called CognosOutput.

Now I must start Cognos Configuration, and navigate to Actions… Edit Global Configuration:

Under General, I enter the value of my \\server\share combination, prefixed with file://
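So, if the server were named myserver (a made-up name here) and the share were the CognosOutput folder created above, the value would look something like:

file://\\myserver\CognosOutput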

Click the Test button, and then OK.

Returning to the main configuration screen, select Data Access… Content Manager, and set the Save Report Outputs… value to True.

You are now set up for report output. IBM notes that it is very important that you not be running your Cognos installation as “localhost”, but rather under the name of the server the service is running on.

These steps have set up the top-level directory under which we can save report output. Within Cognos Connection we must now define the actual destination output locations within this folder.

Open up IBM Cognos Administration from the Launch menu in Cognos Connection. Then navigate to the Configuration tab and select Dispatchers and Services, and in the upper right side of the screen select Define File System Locations:

Give the new location a name under the Name section, and (optionally) a description and screen tip. Finally, give it a location – this is where it will appear under the output file folder you set up above. You can use the “\” character to nest a folder beneath another folder. You do not declare the top level folder, so in this case NewOutput could be used as a location, but not CognosOutput\NewOutput.

Now you are ready to burst the report to the file system! Select Run with Options for the report in Cognos Connection, and under Delivery method select Save the Report. Then click Advanced Options and, on the next page, select Save To the Filesystem and then Edit the Options.

In this case I have selected “New Output”, which I have set up to output to NewOutput/NewOutput1 on my file system. I have also renamed the report to August_Sales_Reports.

Select OK, and select Burst The Reports from the radio button on the lower left side. Then click Run.

The reports will now be burst to the CognosOutput/NewOutput/NewOutput1 folder:

A couple of quirks: Cognos will append the language setting to the name of the report. It will also append the value by which the report was burst (useful for organizing the reports). It will also output a second XML file that describes the report.


SQL Server Analysis Services Cubes and Cognos PowerPlay

By Peter Beck, CBIP

SQL Server Analysis Services is a popular OLAP product included with Microsoft SQL Server. Especially since SQL Server 2005 this product has been quite powerful and fairly easy to develop with. SQL Server provides the Business Intelligence Development Studio (BIDS), a Visual Studio-like product to aid the development of Analysis Services cubes.

For browsing and reporting on a cube, however, choices have been more limited. Excel provides a good choice, especially since Excel 2007, which contains enhancements that make creating cross-tab reports easier than in previous versions.

If your users are committed to Cognos PowerPlay, you can use this tool as well. Setting up an MS Analysis Services cube for browsing with PowerPlay is a little more involved than a regular Cognos cube, but is still quite easy to do.

First, you need to access a tool called PowerPlay Connect. This can be found in the Tools folder of your Cognos installation:

The executable is ppconnct.exe.

This tool is used to create a binary “pointer file” with a .MDC extension. This file, once created, will behave like a PowerPlay OLAP Cube, but the underlying cube will actually be (in this case) a Microsoft Analysis Services cube.

Start PowerPlay Connect, select File… New to create a new MDC file. For the database type, select MS SSOS (ODBO):

You have a couple of choices next. If you know the server name for your instance of SQL Server Analysis Services you can enter it in the next line, under Server:[Port]. In this case I can enter “localhost”, as I am serving the cube from my local machine.

Alternatively, I can select the … button beside Database, and I will be presented with the Choose a Remote Cube dialog box. In this case I then select Microsoft SQL Server OLAP Server at the bottom, and then select a connection I created previously using the tool. In this case the connection is called local. I’m then presented with a list of databases available on the connection “local”.

I can then open SSAS_Adventure_Works and the cube that exists in this particular database. A database might have many cubes available in it.

Alternatively I could create a new connection, by clicking on Connections… and then clicking Add. I enter the name I want to give the connection, and then the name of the server, and select Microsoft SQL Server OLAP Server and MSOLAP as the provider:

Since I selected the cube SSAS_Adventure_Works, we see this in the details of the connection string:
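For reference, an ODBO (OLE DB for OLAP) connection string takes the standard MSOLAP form; with the hypothetical names from this walkthrough it would look roughly like:

Provider=MSOLAP;Data Source=localhost;Initial Catalog=SSAS_Adventure_Works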

I can now click File… Save and save this as an .MDC file:

The file appears as a normal MDC cube, but is really just a pointer file to the SSAS database server:

Using PowerPlay, I can now open the MDC file as if it were an ordinary cube. I can navigate it generally in the same way I would navigate a Cognos cube, although some things, such as the Measure Groups that are part of the Microsoft approach to OLAP, do not behave exactly the same way. Measures appear as a single list, much as they do in Cognos cubes.

PowerPlay Connect MDC files can be put on the network, or shared as any other file, and will work as long as the user has access to the underlying Microsoft database.

Cognos Transformer and Category Uniqueness, Part 2

By Peter Beck, CBIP

In Part 1 of this series we examined how Category Codes are generated. For many users the “uniqueness” of these codes is never a problem, but if the codes change, errors can appear in reports that use them.

For example, if the categories are deleted and re-generated, Transformer may not assign the same value to the Category Code. Here is the result after deleting the category California from our example in Part 1, and regenerating it:

Instead of CA~8, we now have CA~9. This causes the category to disappear from our Reporter-mode report from Part 1:

You can imagine the impact on users depending on reports with large numbers of dimensions that suddenly disappear on them. As a developer, if you don’t understand the impact of category code generation you can spend a lot of time scratching your head about what is going on.

So what can we do? The surest method is to always ensure that the Category Code is unique in a dimension by assigning a completely predictable value to it that will never change, and that you can count on as being unique at least in the hierarchy, if not in the entire model. By taking control of this value we are ensuring that it is not assigned by Transformer. This is probably best done in the database, during the generation of the dimension at the database level (not the Transformer “dimension”.)

As a quick, alternative method you can ensure uniqueness by creating a calculated column in the dimension that you know will be unique for the level. In this case I have created a calculated column for All_States called State_Cat, and one for All_Countries called Country_Cat.

These are calculations based on concatenating Country_CD and Country (and State_CD and State). Since these calculations will be unique in our dimension we can use them as sources for the Category Code in each respective level (only All_States is shown, but the idea is the same for All_Countries):

Now when we generate categories, we see that the Category Code for California is CACalifornia:

Because we are controlling this value, and not Transformer, we can be sure that this value will not change, even if the category is removed and regenerated in the Transformer model. This will ensure that reports that use this Category Code will continue to work correctly.

Cognos Transformer and Category Uniqueness, Part 1

By Peter Beck, CBIP

One tricky and poorly understood problem with the deployment of Cognos Transformer cubes relates to the concept of uniqueness within a dimension. Developers and users can proceed for a long time without having any problems with the uniqueness of categories, and then suddenly – usually as a result of deploying an enhanced version of an existing cube – user reports may start to behave strangely. Selected categories may disappear from reports authored in PowerPlay, or drill-throughs in reports created in C8 may no longer function correctly. Managing these category codes (“Member Unique Names”, or MUNs) requires a bit of forethought that may be skipped over in the development phase.

These problems may be rooted in the fact that Transformer insists on a unique identifier across all categories in a dimension, regardless of the level of the dimension the categories exist at. This can cause some subtle and hard-to-diagnose problems.

Consider a trivial “sales” data set:

We have a denormalized “flat file” of data with a hierarchy of Country … State … City. If we create a set of dimensions in Transformer using this data set, we might create something like this:

At the level All_Countries, the level is defined as shown:

At the level All_States, the level is defined as shown:

Note that in both levels the Category Code is left undefined. This will be determined by Transformer, and Transformer will ensure that it will be unique for the dimension. It will base the code on the value in the Source for the level.

This uniqueness is key, because if we look back at our data, we note that we have the value CA both as a Country_CD (Canada) for level All_Countries and as a State_CD (California) for level All_States. We have used Country_CD and State_CD as the Source for each respective level. By default, since we have not defined how to calculate the Category Code, Cognos will calculate it for us. It will do so based on the Source column for each level, using the value CA both for the Canada category in All_Countries and for the California category in All_States.

The potential problem arises because Category Code values must be unique across the entire dimension. When Transformer finds the value CA as the Source in All_Countries, and the same value as a Source in All_States, it must calculate a unique Category Code for California in All_States. It does so by appending a tilde (~) and a number to the Source value. We can see this in the Categories in Transformer, after the categories for the cube have been generated. Note the value CA~8 for the Category “California”:

To reiterate: the Category Code value must be unique in the dimension.
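The collision handling is easy to picture: keep the source value if it is still free in the dimension, otherwise append ~ and a number until it is unique. The Java sketch below shows the idea only – it is not Transformer’s actual algorithm, and that is precisely the point: the number Transformer picks (here, the 8 in CA~8) is not something you control:

import java.util.HashSet;
import java.util.Set;

public class CategoryCodeSketch {

    private final Set<String> used = new HashSet<>();

    // Return a dimension-unique code for a level's Source value.
    String assign(String source) {
        String code = source;
        int n = 1;
        while (!used.add(code)) {      // add() returns false if the code is taken
            code = source + "~" + n++;
        }
        return code;
    }

    public static void main(String[] args) {
        CategoryCodeSketch codes = new CategoryCodeSketch();
        System.out.println(codes.assign("CA")); // CA    (Canada)
        System.out.println(codes.assign("CA")); // CA~1  (California collides)
    }
}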

The problem arises if we use this Category in a report in PowerPlay, for example. Consider a PowerPlay report authored in Reporter mode, that includes California:

The key point here is that the report is using the Category Code CA~8 “under the covers”, not the value California, to identify the value to be returned.

For many users this is not a problem. As long as the calculated Category Code “CA~8” never changes the user’s report will be fine.

But what happens if, as part of a development process for example, the Category Code gets changed? This can cause some real headaches. If the Category Code is changed for any reason, the user’s PowerPlay Reporter-mode report will break. One result could be the disappearance of the category that is in the report:

In Part 2 we’ll take a closer look at what can happen when Category Codes change, and propose some solutions.

An Introduction to the Cognos SDK, Part 4 – Cognos 8 Extended Applications and Conclusion

By Peter Beck, CBIP

In this final entry in our series on the SDK we’ll touch on Cognos Extended Applications, a set of JSP tags that enable the development of custom “portlets” that can be hosted by a number of “portal” applications, including IBM’s WebSphere portal and SAP’s Enterprise Portal (as well as Cognos Connection, of course.)

JSP technology includes the ability to create custom tags that contain back-end code. Just as a set of tags like <a> </a> means something specific to the browser, and <% %> demarcates code that is contained within the JSP page but executed on the server side, the user can create tags of their own that, when the page is rendered, call code living in a Java class the user has created.
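For readers who have not written custom tags before, a tag handler is just a Java class the container invokes when the page renders. Here is a bare-bones generic example (plain JSP machinery, not one of the Cognos tag classes; the tag name is hypothetical):

import java.io.IOException;
import javax.servlet.jsp.JspException;
import javax.servlet.jsp.tagext.TagSupport;

// Handler for a hypothetical <my:hello/> tag: writes output when the page renders.
public class HelloTag extends TagSupport {
    @Override
    public int doStartTag() throws JspException {
        try {
            pageContext.getOut().print("Hello from the server side");
        } catch (IOException e) {
            throw new JspException(e);
        }
        return SKIP_BODY;
    }
}

The Cognos tag libraries package this same machinery for you, with the tags wrapping calls into the C8 environment rather than simple output.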

The Cognos SDK includes several tag libraries that can be used to create JSP pages that can then be registered with a portal. Once again, these libraries enable the user to perform essentially any task that can be performed through the normal interface, but extended or combined as the user wants. Once registered, the JSP portlet will be available to users of the existing corporate portal. The details of registering a JSP page with the portal vary with the portal being used.

The Extended Applications approach is ideal for the development of content you want to present within an existing portal, but it is complex, and requires the use of the JSP portion of the J2EE stack.

The C8 SDK is a powerful resource for organizations that want to extend the power of Cognos 8 within the enterprise, and provides a number of tools to do so. The BI Bus API provides a set of classes that can be used to build applications that interact with C8 “from the ground up”. The URL API provides an easy, “lightweight” way of calling methods to interact with C8 via a correctly formatted URL, or from JavaScript within a client. Finally, Extended Applications provides a set of custom JSP tags within libraries that can be used to create JSP pages that can then be registered as portlets within an enterprise portal.

An Introduction to the Cognos SDK, Part 3 – The Cognos 8 URL API

By Peter Beck, CBIP

In the previous entry in this series we took a brief look at the BI Bus API, a collection of classes (either in .NET or Java) or a legacy VB6 .dll that can be used to perform actions against Cognos 8 – tasks as varied as changing attributes of C8 content, running reports, changing security settings, etc. Essentially any task that can be performed through the Cognos 8 interface can be executed through calls to the correct part of the BI Bus API.

Another way of performing tasks is to interact with the Cognos dispatchers through calls to the Cognos gateway – the URL API. You can format and execute a call using a specially formatted URL passed over HTTP/HTTPS. Tasks that can be performed this way include starting Cognos components and executing actions such as running a report.

As an example, starting Report Studio can be accomplished by calling the following URL:

http://localhost/cognos8/cgi-bin/cognos.cgi?b_action=xts.run&m=portal/launch.xts&ui.gateway=http://localhost/cognos8/cgi-bin/cognos.cgi&ui.tool=ReportStudio&ui.action=new

(In this example you would of course need to pass the correct dispatcher/gateway URLs, which are not likely to be “localhost” except on a demo machine.)

The parameter &ui.object can be included to open a specific report:

http://localhost/cognos8/cgi-bin/cognos.cgi?b_action=xts.run&m=portal/launch.xts&ui.gateway=http://localhost/cognos8/cgi-bin/cognos.cgi&ui.tool=ReportStudio&ui.object=[PATH]

…Where PATH above is the path to the report. This can be found most easily by examining the properties of the report within Cognos Connection.
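For reference, a Cognos search path looks something like the line below (the folder and report names here are made up):

/content/folder[@name='Sales Reports']/report[@name='My Report']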

Special JavaScript methods can also be embedded in a web page and called to open Cognos 8 content, either in an existing or new window. Use of these methods requires the inclusion of the path to a JavaScript library “cognoslaunch.js” within <script> tags of the header of the page.

<script language="JavaScript" src="Cognos8Gateway/cognoslaunch.js">
</script>

Calls that can be made with a URL can also be called using the JavaScript cognosLaunch method. For example, to open Report Studio, the following call can be made in JavaScript.

cognosLaunch("ui.gateway","gateway","ui.tool","ReportStudio")
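The same name/value pairs as the URL API appear to apply, so opening a specific report should just be a matter of adding the ui.object pair – a sketch, untested:

cognosLaunch("ui.gateway","gateway","ui.tool","ReportStudio","ui.object","[PATH]")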

The URL API is useful for embedding Cognos studios or content within a browser window or frame in a non-Cognos application. The syntax of the URL API is well documented within the API documentation. As with the BI Bus API, the range of actions that can be performed is quite extensive, mirroring what can be called through the regular UI. The URL API can be thought of as a light-weight way to accomplish certain tasks or easily embed content within a web application other than the regular portal. Cognos suggests that for more complex tasks the BI Bus API be used.