An Overview of Esri Maps For Cognos, Part 3 – Shapes

(Special “The Colour and The Shape” edition)

By Peter Beck, CBIP

In Part 1 of this series we introduced the Esri Maps for Cognos (EM4C) product, which enables us to tie together BI-type reporting with the rich capabilities of Esri’s mapping software. In Part 2 we demonstrated how easy it is to use this software to connect point-type

Wonder how we're doing in the dragon district?

data in a Cognos report to a map. In essence, we can take points of data identified by lat/long values and connect them to a map, and then colour-code the points to represent different categories or types of data. In our example, we looked at crime data for San Francisco. The result enabled the user to make inferences from the geographic distribution and type of crime reports that would be difficult to make if the data were simply listed by address, or even grouped into neighbourhood categories.

In this installment, we will look at a slightly different way of displaying data within the context of geography – instead of displaying discrete points (which require lat/long values) we will categorize larger geographic areas, defined by shapes on the map.


As before, we need a Report Studio report, with a query:

Note that in this example we don’t have any “lat/long” type data here – instead, we have Retailer Province-State, which contains values representing the name of each state:


This time, instead of adding a Cognos X/Y Layer to our Esri map in the report, we will add a Cognos Shape Layer:


A Cognos Shape Layer acts similarly to an X/Y layer, except that it binds the report data to a map containing “shapes”, matching on common descriptions instead of lat/long points. In this case we set the map associated with the “shape” layer to one containing the shapes of US states. In the wizard provided we can match the shape names in the map we have selected (STATE_NAME) to the appropriate column (Retailer Province-State) in our query:
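Conceptually, the shape join is just a description-based join between the report rows and the map’s shape attributes. Here is a minimal sketch in plain Python – the data values are invented, and only the STATE_NAME and Retailer Province-State names come from the example above:

```python
# Map side: each shape carries a STATE_NAME attribute (geometry omitted)
shapes = {"California": "shape_01", "Texas": "shape_02", "New York": "shape_03"}

# Report side: rows returned by the Cognos query
report = [
    {"Retailer Province-State": "California", "Revenue": 1200000},
    {"Retailer Province-State": "Texas",      "Revenue": 950000},
    {"Retailer Province-State": "New York",   "Revenue": 1500000},
]

# The wizard effectively performs this match: shape name <-> report column
joined = {
    shapes[row["Retailer Province-State"]]: row["Revenue"]
    for row in report
    if row["Retailer Province-State"] in shapes
}
print(joined)  # {'shape_01': 1200000, 'shape_02': 950000, 'shape_03': 1500000}
```

Rows whose description has no matching shape (or shapes with no matching row) simply fall out of the join, which is why clean, consistent naming on both sides matters.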




We select the measures we are interested in…


… and then configure the “shape join”, assigning colour-values to relative levels of each measure (in this case, Revenue):
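The quantile colouring applied by the shape join can be sketched as a simple classification: rank each state’s measure value and bucket it into one of a fixed set of colour bins. Everything below – the revenue figures and the four-colour ramp – is invented for illustration:

```python
# Toy revenue-by-state figures (invented)
revenue = {
    "California": 150, "Texas": 120, "New York": 180, "Florida": 90,
    "Ohio": 60, "Nevada": 40, "Utah": 75, "Maine": 30,
}
colours = ["#fee5d9", "#fcae91", "#fb6a4a", "#cb181d"]  # light -> dark

values = sorted(revenue.values())
n_bins = len(colours)

def quantile_bin(v):
    # index of the quantile bin this value falls into
    rank = sum(1 for x in values if x <= v) - 1
    return min(rank * n_bins // len(values), n_bins - 1)

state_colour = {s: colours[quantile_bin(v)] for s, v in revenue.items()}
print(state_colour["New York"])  # highest revenue -> darkest colour
```

With eight states and four bins, each colour covers two states; the legend in the report shows the same bin boundaries.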


We now have a map that lets us see, by quantile, how revenue compares by state:



However, because we have selected several measures, we can also use the map legend to select the other measures and see how they compare as well:

For example, here is the map showing Gross Profit:


Note that the legend shows the quantile breakdowns for each colour. As well, hovering over each state brings up information on the state:


Users are not limited to a single shape layer – multiple layers can be combined on a single map, and the layers activated/deactivated by the user to show different data by different “shape”.

Shapes are not limited to conventional maps, of course. Floor plans provide an ideal source of shapes. Retailers can use shapes to identify revenue by area of a store, or property managers can look at building usage, perhaps over time. All that is needed is an Esri map with shapes that correspond to the physical areas the user is interested in, with an attribute that can be joined to a report column containing matching values.



An Overview of Esri Maps For Cognos, Part 2 – Points

(Special “If You’re Going To San Francisco” edition)

By Peter Beck, CBIP

In part 1 of this series, we looked at how Esri Maps For Cognos – EM4C – allows us to embed a map from an Esri map server inside a Report Studio report. But the map is pretty useless if it doesn’t allow us to connect to our data and perform some kind of analysis that can’t be done with a regular list report, or with some kind of graph.

From a mapping perspective there are a couple of concepts that we need to keep in mind if we are going to bind business data to a map: one is the idea of a point, the other the idea of a shape.

Creating map-points (old school)


We’ll start with a point. A point is a lat/long value on a map: it is (strictly speaking) an entity with no area. It could be a point that represents a store location, a home address, whatever you like. The important thing to keep in mind is that even if a store (or your house) occupies area, from a mapping/point perspective it is simply a point on the map.

So what kind of data can we plot using points? Crime data is one example – a police call is typically to a particular address. If we can plot these locations on a map, by type, we might gain insights into what kinds of crimes are being reported not just by location, but by location relative to each other – what kinds of crimes cluster together, geographically.

Crime data for San Francisco for March, 2012 is available on the web, and this data set comes with both category of crime and lat/long of the police report. This makes the data set ideal for plotting on a map.

First, I set up a quick Framework Manager model that retrieves the data from my database. Then, we need a query in Report Studio that retrieves the data:

Creating a simple query


Note that we have a Category, Description, and X and Y values representing Longitude and Latitude respectively.

I add a map placeholder (as we did in Part 1) and then save the report. (I could, of course, add any additional report items, queries etc to the report that I wish.) I then open the map placeholder in Esri Maps Designer, add a base map, and then add a new layer: the special Cognos X Y Layer. I rename it Crime_Locations:

Adding the X Y Layer


A wizard enables me to select the query associated with the Crime_Locations layer, which will display points:

Selecting Data


Note the inclusion of a Unique Field – this is the IncidentNum from the original data.

Further configuration allows me to then assign the Lat/Long from the data set, and identify each point by the Category of crime.
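The binding the X Y layer performs can be sketched roughly as follows – each incident carries a unique id, a lat/long, and a category, and the layer assigns one symbol colour per category. The incident numbers, coordinates, and palette below are all fabricated:

```python
# A few fabricated incidents in the shape of the crime data set
incidents = [
    {"IncidentNum": 110234, "lat": 37.7749, "lon": -122.4194, "category": "DRUG/NARCOTIC"},
    {"IncidentNum": 110235, "lat": 37.7793, "lon": -122.4188, "category": "ASSAULT"},
    {"IncidentNum": 110236, "lat": 37.7680, "lon": -122.4469, "category": "DRUG/NARCOTIC"},
]

# One colour per distinct category, in order of first appearance
palette = ["orange", "green", "purple", "red", "blue"]
categories = []
for inc in incidents:
    if inc["category"] not in categories:
        categories.append(inc["category"])
colour_of = dict(zip(categories, palette))

# Each plotted point is (x, y, symbol colour)
points = [
    (inc["lon"], inc["lat"], colour_of[inc["category"]]) for inc in incidents
]
print(points[0])  # (-122.4194, 37.7749, 'orange')
```

The Unique Field (IncidentNum) is what lets the layer tie each symbol back to a single row of report data, for example when hovering over a point.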


Categorization and Symbolization

I now have a set of symbols – coloured squares – that correspond to the categories of my data. When I view my report, I can see each crime, colour-coded by type, at the location where it was reported:

Whoa, that’s a lot of crime…


Even at this zoom level I can draw some conclusions about which areas have more crime – the north-east seems to have more reports than the south-east, for example. But by selecting specific crimes and zooming in, interesting patterns begin to emerge.

What patterns are emerging?


The orange squares represent drug-related charges. The green and purple squares are assault and robbery charges respectively. The drug-related charges are more concentrated in one relatively small area, while the assault and robbery charges seem more spread out – but with a concentration of them in the area the drug charges are also being laid.

If we zoom in even closer, we can see that certain streets and corners have more calls than others in close proximity – that the crimes seem to cluster together:


What’s special about these areas?

But zooming out again, we see an interesting outlier – a rash of drug charges along one street, with what appears to be relatively few assaults or robberies:

Something looks out of place...


Zooming in we see that this activity is almost completely confined to a 7-block stretch of Haight St., with virtually no activity in the surrounding area, and few robberies or assaults:


What is it that makes this street so special?

This kind of spatial relationship is extremely hard to discern from a list or chart, even a chart that breaks events like police calls down by geographic category of some kind. But using mapping, with a simple zoom we can go from an overall view of patterns of activity to a much higher degree of detail that begins to tell some kind of story, or at least warrant further investigation.

But wait, there’s more…

By hovering over an individual square, I can get additional category information from my underlying data, assuming I have included it in my query. In this case there is a sub-category of the call:

The reader is left to draw his or her own conclusions...


By adjusting the query I can re-categorize my data to yield results by, say, day of the week, or sub-category. Here, for example, we can contrast Possession of Marijuana (green) with Possession of Base/Rock Cocaine (pink):

Patterns of behaviour...


Marijuana possession seems more diffuse, although concentrated in a few areas. The cocaine charges are much more concentrated.

In our next entry in this series, we’ll take a look at allocating data to shapes, to colour-code areas to represent different levels of activity.







An Overview of Esri Maps For Cognos, Part 1 – Intro

By Peter Beck, CBIP

Cognos report writers have long been frustrated by the poor built-in support for GIS-type displays in Cognos reporting tools. True, there is a basic map tool included as part of Report Studio, but it is quite limited in functionality. It can be used to colour geographic areas, but lacks layering, zooming, sophisticated selection tools, and the kind of detail we’ve all become used to with the advent of Google Maps and the like.

Wonder where the ring sales are these days?

There are a few map-related add-ons for Cognos reporting available. Recently I had the opportunity to take Esri’s offering in this space for a test drive with a 2-day training session at Esri Canada’s Ottawa office. I came away impressed with the power and ease-of-use offered by this product.

EM4C – Esri Maps For Cognos – came out of development by SpotOn Systems, formerly of Ottawa, Canada. SpotOn was acquired by Esri in 2011. The current version of the product is 4.3.2. The product acts as a kind of plug-in to the Cognos portal environment, enabling Report Studio developers to embed Esri maps, served up by an Esri server, in conventional Report Studio reports. From a report developer perspective EM4C extends Report Studio, and does so from within the Cognos environment. This is important: EM4C users don’t have to use additional tools outside the Cognos portal. From an architectural perspective things are a little more complex: the Cognos environment must be augmented with EM4C server, gateway and dispatcher components that exist alongside the existing Cognos components.

Then, of course, there are the maps themselves. Since this is a tool to enable the use of Esri maps, an Esri GIS server must be available to serve the maps up to the report developer and ultimately the user. For shops that are already Esri GIS enabled this is not a challenge, and indeed I can see many users of this product wanting to buy it because they have a requirement to extend already-available mapping technology into their BI shops. However, if you don’t have an Esri map server, don’t despair – the product comes with out-of-the-box access to a cloud-based map server provided as part of the licence for the product. This is a limited solution that won’t satisfy users who have, for example, their own shape files for their own custom maps, but on the other hand, if you have such a requirement you probably already have a map server in-house. If you are new to the world of GIS this solution is more than enough to get started.

So where do we start with EM4C? First, you need a report that contains data that has some geographic aspect to it. This can be as sophisticated as lat/long encoded data, or as simple as something like state names.

When we open our report, we notice we have a new tool: the Esri Map tool:

Selecting the Esri Map tool


As mentioned, the EM4C experience is designed to enable the report writer to do everything from within Cognos. Using this tool we can embed a new map within our report:

Map Placeholder



So now what? We have a map place-holder, but no map. So the next step is to configure our map.

This step is done using Esri Maps Designer. This tool is installed in the Cognos environment as part of the EM4C install, and enables us to configure our map – or maps, as we can have multiple maps within a single report.

Selecting Esri Map Designer


Esri Maps Designer is where we select the map layers we wish to display in our report. When we open it we can navigate to any Report Studio reports in which we have embedded an Esri map:


Selecting a map to configure


In this case VANTAGE_ESRI_1 is the name of the map in my report; the red X indicates it has not been configured yet. Clicking Configure brings up our configuration. This is where we select a Base Map, and then link our Cognos data to a layer to overlay on the map.

As mentioned, out-of-the-box the EM4C product enables the user to use maps served from the Esri cloud. We will select one of these maps from Esri Cloud Services as the Base Map of our report:

Maps available from Esri's cloud services


When the base map is embedded, it becomes a zoom-able, high-detail object within the report:

An embedded map


Unfortunately, while the map looks great it bears no relationship to the report data. So now what?

In part 2 of this overview we will look at how to connect the report data points to the report map. It is the combination of the ease-of-use of BI tools (and the data they can typically access) with mapping that makes a tool like EM4C so powerful. We will symbolize data to create colour-coded map points that reveal geographic location and spatial relationships, potentially allowing users to draw conclusions they otherwise would not have been able to with list-type data.



Bill Inmon and Textual ETL

By Peter Beck, CBIP

BI/Data warehouse legend/guru Bill Inmon spoke to the Ottawa data warehousing and BI community at an event organized by the local chapter of DAMA in conjunction with Coradix. Among other subjects, Mr. Inmon spoke at length on the idea of “Textual ETL”, a method for bringing semi-structured and unstructured data into the data warehouse, and making it available for analysis using conventional BI tools.

Wonder what all this text means?

Mr. Inmon estimated that at least 80% of the data in an enterprise exists in this form – as emails, Word documents, PDFs, etc. – and he has spent almost a decade on the problem of organizing this data into a form that is queryable. The result is what he calls Textual ETL.

In essence this refers to a process for integrating the attributes of a text document (such as a contract) into a database structure that then enables query-based analysis. In the case of a contract, the document might contain certain key words that can be interpreted as significant, such as  “Value” or “Royalties”. Rather than simply indexing the document, the Textual ETL process (which can contain over 160 different transformations) is designed to take unstructured documents and produce database tables that enable the user to create “SELECT”-style queries. In the case of a contract-type document, such queries might be to answer questions such as “find all the contracts that are of a value between X and Y that refer to product Z”.
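To make the idea concrete, here is a hedged sketch (using SQLite) of the kind of keyword/attribute table such a process might emit, and the SELECT-style query it enables – “contracts valued between X and Y that refer to product Z”. The schema and rows are invented for illustration; nothing here comes from Inmon’s actual implementation:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE contract_attributes (
        doc_id    TEXT,
        attribute TEXT,   -- key word recognized by the ETL process
        value     TEXT
    )
""")
con.executemany(
    "INSERT INTO contract_attributes VALUES (?, ?, ?)",
    [
        ("C-001", "Value",   "250000"),
        ("C-001", "Product", "Widget"),
        ("C-002", "Value",   "900000"),
        ("C-002", "Product", "Widget"),
        ("C-003", "Value",   "400000"),
        ("C-003", "Product", "Gadget"),
    ],
)

# "Find all contracts valued between 200,000 and 500,000 that refer to Widget"
rows = con.execute("""
    SELECT v.doc_id
    FROM contract_attributes v
    JOIN contract_attributes p ON p.doc_id = v.doc_id
    WHERE v.attribute = 'Value'
      AND CAST(v.value AS INTEGER) BETWEEN 200000 AND 500000
      AND p.attribute = 'Product' AND p.value = 'Widget'
""").fetchall()
print(rows)  # [('C-001',)]
```

The point is that once the text has been decomposed into rows like these, the question becomes an ordinary relational query rather than a document search.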

A user with a system to manage such documents might have already added attributes such as “product” and “contract value” to the management system, thus already enabling such queries, but the beauty of Textual ETL is that it enables the application of taxonomies to documents to resolve the meanings of the texts themselves. This can extend to the resolution of synonyms. Mr. Inmon gave the example of texts (emails, for example) that refer to different brands of cars – Porsche, Ford, and GM, say – or perhaps use the word “automobile”, but never use the word “car” explicitly. A well-designed Textual ETL process would result in tables that allow the user to search for emails that refer to cars. It would do this by matching the brands of cars, or the word “automobile”, to the word “car”, in effect appending “car” to the brands listed.
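The synonym resolution described can be sketched as a tiny taxonomy lookup – the taxonomy entries and sample text below are invented:

```python
# Toy taxonomy: specific terms resolve to a broader term
taxonomy = {
    "porsche": "car", "ford": "car", "gm": "car", "automobile": "car",
}

def expand_terms(text):
    terms = set(text.lower().split())
    # append the broader taxonomy term for any matching word
    terms |= {taxonomy[t] for t in terms if t in taxonomy}
    return terms

email = "the new ford looks better than my old automobile"
print(sorted(expand_terms(email)))  # includes 'car' despite no literal "car"
```

A search for “car” against the expanded terms now matches emails that never use the word, which is the effect the taxonomy-driven transformations are after.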

The process can be extended to dealing with documents where the same expression might mean very different things. Doctors may use similar, short expressions that mean different things depending on context. The application of Textual ETL to these kinds of documents would (must!) resolve these to different meanings.

The problems of implementing Textual ETL don’t seem trivial, and Mr. Inmon only presented a bare outline of how it is done. However, the implications for organizations that produce or deal with huge amounts of unstructured but critical texts – which is almost any organization of any size – could be considerable. In theory Textual ETL enables items that are thought of as not part of the normal domain of data warehousing to be brought into the data warehouse and subjected to the same kinds of analysis normally applied to such things as inventory levels, sales records and so forth.

Dashboard Agonistes!

By Peter Beck, CBIP

Ventana Research CEO Mark Smith has an interesting blog post up with the subtle title “The Pathetic State of Dashboards”.

Wonder what the KPIs say…

I’ve always been a bit of a dashboard skeptic. The fluff promoted by vendors (gauge-type displays for business metrics, for example) has always struck me as noisy and silly. A gauge-type display makes sense in a car, where second-by-second changes in pressure on the gas pedal create immediate changes in a gauge that then feeds back into the pressure you apply (assuming you are paying attention), but there are few business requirements like this. Highlighting outliers is easily accomplished by conditional formatting. Using the “dashboard” as a metaphor – taking it from the real world of, for example, a car, and mapping it to business activity – is an idea that in my experience doesn’t often stand up to scrutiny. The driver’s seat of a car is a different kind of place than the chair in a cubicle, and BI tools are generally too generic for the kind of moment-to-moment operational-level activity implied by dashboards.

Dashboards as an entry point to data discovery may make a certain amount of sense, but drill-through reporting has been around for a long time. Clear exception reports, the kind that can be created easily with out-of-the-box reporting software, are generally of far greater utility than the products of graphics-rich “dashboard” software.

A Quick Review of Tableau

By Peter Beck, CBIP

The Gartner Magic Quadrant for BI is a good place to start when looking at the rich field of players in the BI space. The usual suspects are always there – IBM, Oracle, Microsoft, MicroStrategy, etc. – but it is always interesting to look at tools that are less well-known, or fall outside the upper-right quadrant that everyone seems to aspire to (and judge products by – as a side note, I’m thinking about “The Tyranny of The Upper Right Quadrant” as a subject for a future post).

Tableau is an interesting OLAP-type analytical product that falls in the upper-left quadrant, qualifying it as a “challenger” in Gartner-speak. But those that like it like it a lot – Gartner goes on to describe it as “The ‘sweetheart’ of the quadrant.” Apparently customers love this product.

I took a quick look at the desktop version of the software, which is offered as a fully-functional 14-day trial.

Tableau has an interesting history. It was started by folks with a strong interest in data visualization. From the beginning Tableau was positioned as a tool that would enable fast visual representations of data (original founders included a founding member of Pixar.) Tableau advertises itself as “a stunning alternative to traditional business intelligence”, attempting to carve out a niche in an area that Cognos, for example, has traditionally not been great at (in my opinion visualization has always been clumsy in tools even as advanced as Report Studio.)

Another area Tableau claims to excel in is in raw speed – “Bring your data into Tableau’s high performance data engine and work with it at blazing speed. And do it with a click—there’s no programming required. Tableau turns millions of rows of data into answers at the speed of thought.” goes the sales pitch. No “programming” required, but definitely some thinking.

When you start doing analysis with Tableau you are offered the ability to connect to a wide, impressive range of data sources. These include Excel, the usual commercial databases etc, but also open-source favourites MySQL and PostgreSQL. As well, there is an option to connect to Cloudera HADOOP Hive. Tableau is plainly positioning itself for “Big Data”-type analysis.

When you select a relational-type data source, such as Microsoft SQL Server, you have the option to select one or more tables, and establish their joins using a series of dialog boxes.  From a data-modelling perspective this interface feels a bit awkward, but it gets the job done at the desktop-level…

… and with clearly-defined keys and a simple data model this shouldn’t present data-savvy users with much of a problem – more on this below.

Next we have the option of either connecting directly to the data source, or importing the data into a native Tableau format:

This is where it gets interesting. I created a MS SQL Server database consisting of 10,000 customers, 50 products, and 100 million sales rows – a very simple model, but a large overall size for my hamster-powered laptop. I then created a MS Analysis Services cube to play with. However, working from the relational model, a user can connect to the database and import the data directly into Tableau’s native format – according to Gartner, a column-oriented in-memory data engine. On my admittedly underpowered laptop this took a couple of hours, but performance when querying the imported data was quite impressive – it seemed at least as fast as the Analysis Services cube. This isn’t sophisticated benchmarking, but it indicates that Tableau’s engine definitely has some power. Using this feature assumes that the user is comfortable arranging the hierarchies of the data themselves, instead of having a modeller do it for them in a cube.
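For anyone wanting to reproduce a similar (much smaller) experiment, a sketch of generating such a customers/products/sales test set might look like this – here using SQLite, with row counts scaled far below the 100 million used above:

```python
import random
import sqlite3

random.seed(42)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, tier TEXT)")
con.execute("CREATE TABLE products  (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE sales (
    customer_id INTEGER, product_id INTEGER, amount REAL)""")

# Dimension rows: 100 customers in two tiers, 50 products
con.executemany("INSERT INTO customers VALUES (?, ?)",
                [(i, random.choice(["Tier 1", "Tier 2"])) for i in range(100)])
con.executemany("INSERT INTO products VALUES (?, ?)",
                [(i, f"Product {i}") for i in range(50)])

# Fact rows: random customer/product/amount combinations
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [(random.randrange(100), random.randrange(50),
                  round(random.uniform(5, 500), 2)) for _ in range(10000)])

(total,) = con.execute("SELECT COUNT(*) FROM sales").fetchone()
print(total)  # 10000
```

Scaling the fact table up (and moving to a server database) is mostly a matter of patience, which is what makes it a handy way to stress-test an import like Tableau’s.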

This approach reveals something critical about Tableau’s market – this tool is meant for people who are comfortable with the world of databases and OLAP-style structures, and for whom creating joins, hierarchies and all the rest is a natural part of the way they think about the data – but who are also the very people interested in analyzing their data. The database, the joins, the model – all of this is a means to an end, carried out, at least to some degree, by the analysts themselves. This hints at Wayne Eckerson’s observation that real analysis is often a bottom-up process, with savvy folks in the business using the powerful tools now available to them to “end run” the IT department. This tool essentially builds in a kind of ETL between a database and a proprietary analytical structure. This isn’t mandatory, of course, and connecting to my Analysis Services cube was quite easy and natural, but this is something to think about.

As expected, visualizations are where Tableau excels. The “Show Me” tab gives the user a number of visualization options, with hints as to what is appropriate for what kind of data.

Many of the visualizations available are quite useful – for example, below I am able to visually locate a customer who is “Tier 1” but has very low sales. Arranging this display took seconds:

Tableau offers the user the ability to connect simultaneously to multiple data sources. Here I have 2 data sources in the “Data” tab. Contrast this with the approach Cognos takes, where multiple data sources are put together in a package that hides this from the user. Once again, the idea is that the user knows the data (and how it relates) well enough to perform these kinds of tasks – and the user can act quickly to select the data sources they want and combine them as they see fit.

Digging into all of Tableau’s features is beyond the scope of this post, but this is definitely a thought-provoking product. The BI world seems to be in a never-ending struggle between quick, user-oriented tools and the more controlled, but less agile, enterprise-grade BI suites. Tableau seems to be positioning itself as a product for the highly competent analyst in a relatively small organization – or a small part of a large organization. Gartner provides some insight here: “Tableau’s products often fill an unmet need in organizations that already have a BI standard, and are frequently deployed as a complementary capability to an existing BI platform. Tableau is still less likely to be considered an enterprise BI standard than the products of most other vendors.” Tableau is not a general-purpose reporting tool – it is an analysis tool, for analysts.

Wayne Eckerson @ TDWI Ottawa

By Peter Beck, CBIP

Wayne Eckerson is a noted BI consultant who spoke recently to the Ottawa TDWI chapter. I’d call Wayne a guru, but someone once told me that guru was a polite word for charlatan. Wayne is the very opposite – he is a very down-to-earth speaker who delivered a direct, unpretentious and thoughtful presentation on the subject of BI organizational architecture.

One of Wayne’s interesting observations was that he sees the need for what he calls “purple people” in any successful BI organization. If we think of people on the business side as “blue” and people on the IT side as “red”, then “purple people” are those with a mix of skills that enables them to bridge the gap between the two worlds. I spoke to Wayne afterwards and he elaborated on the idea:

“Purple people are a blend of business and IT – not blue in business or red in IT but a combination of both. These are both senior and junior level folks. At the senior level, some start in the business and end up in IT and then usually come back to the business where they run a business technology group that acts as an interface between the business and IT. (In the BI world I call these teams BOBI – business-oriented BI teams.)  Some in IT become very conversant with the business and do a good job meeting business needs. These are directors of BI who interface with business executives more than their technical teams just about, to present budgets, roadmaps, funding requests, etc.

At the junior level, things are trickier, and not as effective. Most companies have business requirements analysts who interview business people, gather requirements, and translate those into specs for developers. I usually find there is a lot lost in translation with these junior level purple people.”

Another one of his key observations in the presentation was that from a BI architecture/organizational perspective, we can think of reporting as being a top-down process, with (we hope!) needs analysis, clearly defined specs, a process for building and moving data marts and reports into production, various controlling structures and so on.

Analysis, however, doesn’t really lend itself to this kind of approach – analysts may not know the questions that they want answered until they begin to delve into the data in a very ad-hoc kind of way. They want to quickly add data sources, join things together, and perform analysis that will lead to more questions, potentially the requirement for more data sources, and so on.

This leads to the business attempting to work around IT to get what they want, including bringing in tools that IT isn’t prepared to support. Analysis ends up being a volatile, bottom-up process, driven by the business, and the organization may struggle to keep it under control. IT fears chaos, but – to some degree – real analysis has a chaotic, or at least unpredictable, character. BI practice has to recognize the contrast in the very natures of reporting and analysis to be effective.

Wayne is a regular blogger and author of books and reports, such as Performance Dashboards: Measuring, Monitoring, and Managing Your Business. If you get the opportunity to hear Wayne speak take advantage of it – he delivers a lot of thought-provoking content that has application in the real world.


Can’t anybody here play this game?

Can’t anybody here play this game? – Casey Stengel

 By Peter Beck, CBIP

IBM has a new paper out on that age-old question – why do BI projects fail, and what can be done about it? The paper is entitled “Bridge The Gap Between BI Best Practices and Successful Real World Solutions”. The first few pages are the usual marketing fluff, and they generally contradict the “meat” of the paper, which begins a little further in. That is, once again, we see a particular technical/product solution proposed to solve what is not a technical problem. This is accomplished by simply asserting that this particular technical solution maps neatly over the business problems Gartner has uncovered. If you are brave enough to hack your way through the paper to where the Gartner material actually begins, there are some interesting discoveries to be made. By “interesting” I mean “depressing”. Taken as a whole the paper can be thought of as a fine example of what the Gartner research itself reveals.

The paper begins with a set of now-common observations: that BI programs need a business sponsor, that IT ends up “selling” BI to the business (and doing it badly), that BI tends to get “stuck in reporting”, and that “Technology is rarely the culprit if the BI project is considered a failure”. All well and good. And then at page 2 we read, in bold, all-caps:


I see. IBM’s technology will be the “key”. That’s a relief. Gap closed! Close the document and move along.

But if I keep reading, I discover that the folks at Gartner have done some research on the practice of BI programs, most of which are not particularly related to technology (on the contrary.) The results aren’t good. That doesn’t mean they are surprising, of course.

The Gartner section of the paper is called “The BI(G) Discrepancy: Theory and Practice of Business Intelligence”. They break out 9 aspects of BI implementation, and discuss what should be done in each aspect, versus what their research indicates is actually taking place in the real world. The results are a confirmation of what most of us “in the trenches” feel intuitively: there seems to be little correspondence between what should be done, and what actually is done. And technology isn’t going to change that.

The whole thing is worth a read, but the most eye-popping section turns out to be the discussion of –  BI strategy! That thing that the latest IBM product will provide a “key” for! Turns out only 2% of organizations informally surveyed in mature markets had anything called a BI strategy. That’s not a typo. 2%. And this is among Gartner clients. Let that sink in for a second, and then consider this quote from the paper:

“Nearly shocking results are obtained when reviewing the so-called BI strategy documents. Almost never would those qualify as strategy in Gartner’s opinion. Quite often a strategy is merely a statement like “We have a Microsoft BI strategy” or “Our BI strategy is SAP” indicating what products the organization is using or planning to implement. Other times the “strategy” is merely an architecture diagram… This is as if the Ferrari Formula 1 team described its racing strategy as “using Bridgestone tires, Shell fuel, a V8 engine and red paint.”

I like the use of “Nearly” to suggest seen-it-all unflappability on the part of the author.

The analyst goes on to describe the initial 2% number as “rather optimistic” (raw-ther, old sport!), blows some dust off the dictionary definition of “strategy”, and then (perhaps beginning to get a little exasperated, and reaching for the bottle) muses that:

“The question could be expanded to: Do executives even understand what constitutes a strategy?”

Yes! It does appear that the question could be expanded to that!

Everyone, and I mean everyone, I have ever encountered in this industry who works above the level of writing reports struggles with the problems outlined in the Gartner material every day. And yet here we are, decades now into the world of BI, and it doesn’t appear to be getting any better. BI still seems to be mired in confusion as to what it is – what its identity within the organization should be. The default position seems to be: it’s a technology. IBM et al. seem OK with this, and I can’t blame them. As long as the discussion can be returned to “BI is a product (and our product is the best!)” they seem to be happy, as they have a tangible thing to sell. My own feeling (obviously) is that whenever the real answer to this question is found, it won’t be “Cognos” or “Microsoft Analysis Services” or any other piece of software, and I say this as someone who spends his days with these products in front of him.

If executives don’t have a grasp of the rudiments of BI strategy (or perhaps strategy in general), it seems that the best anyone can do is try to keep pushing technology. At least that seems to be what IBM’s “strategy” is with this document – provide a high-level summary, name the product and map it to what is “supposed” to happen in an organization, and hope for the best – and that no-one keeps reading. Or what the Gartner analyst, in the section on what goes on in the real world when it comes to the business case for BI, characterizes as a “leap of faith”. I’m not kidding, they actually use those words to describe what Gartner clients are doing to justify their BI investments.

Check the paper out, it’s worth a read.

Cognos 10, Report Bursting and Saving Output to File

 By Peter Beck, CBIP

(The instructions below describe setting up C10 to write output to a file location on the network in the context of bursting reports, but there is no reason you can’t set up file output for normal manual or scheduled execution of reports as well – PB)

Cognos 10 (like all versions of Cognos BI since ReportNet) has a fairly straightforward way of configuring a given ReportStudio report for “burst” output – that is, for generating a set of reports from a specific report specification, where the only difference between the reports is some selected value. Consider a generic sales report, where we have 2 different sales reps.

We might want to “burst” the report across the sales rep identifier, so we would get one report for each sales rep. We could then distribute each report to the appropriate rep.

Setting a report up for bursting is performed in the Report Studio interface. Under File… Burst Options we set how the report will burst. We also have the option of selecting how the report will be distributed – either as an email or as a Cognos directory entry. The values for both the burst specification and the distribution must come from a query in the report.

However, it is quite possible that we might want the output to go to a file location instead. Setting this up requires a little bit of configuration, but it is quite straightforward. In versions of Cognos BI prior to 8.3 this was a bit limited – we essentially had only one destination we could output to. In even older versions, controlling the name of the output report was a pain as well – we needed secondary scripting to rename the report in the output folder based on an associated XML file. This is no longer necessary.

Note about the instructions below: this is not limited to burst output – setting up C10 for file system output can be useful for saving any report you run to the file system – a manually run report, a scheduled report, or burst report.

First, we need to create a shared folder on our server. The folder can have any name, but should not be located in the Cognos installation directory. The user account under which the C10 service runs must have full rights to the folder. In this case I’ve created a folder called CognosOutput.

Now I must start Cognos Configuration, and navigate to Actions… Edit Global Configuration:

Under General, I enter the value of my \\server\share combination, prefixed with file://
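As an illustration – the server and share names below are hypothetical, and the exact URI form can vary slightly between versions, so verify against your own environment – the value typically looks like one of the following:

```
file://\\FILESRV01\CognosOutput
file://d:/CognosOutput
```

The first form points at a network share, the second at a local drive path on the server itself.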

Click the Test button, and then OK.

Returning to the main configuration screen, select Data Access… Content Manager, and set the Save Report Outputs… value to True.

You are now set up for report output. IBM notes that it is very important that you not be running your Cognos installation as “localhost”, but rather under the name of the server the service is running on.

These steps have set up the top-level directory under which we can save report output.  Within Cognos Connection we must now define what the actual destination output locations within this folder will be.

Open up IBM Cognos Administration from the Launch menu in Cognos Connection. Then navigate to the Configuration tab and select Dispatchers and Services, and in the upper right side of the screen select Define File System Locations:

Give the new location a name under the Name section, and (optionally) a description and screen tip. Finally, give it a location – this is where it will appear under the output file folder you set up above. You can use the “\” character to nest a folder beneath another folder. You do not declare the top level folder, so in this case NewOutput could be used as a location, but not CognosOutput\NewOutput.

Now you are ready to burst the report to the file system! Select Run with Options for the report in Cognos Connection, and under Delivery method select Save the Report. Then click Advanced Options and, on the next page, select Save To the Filesystem and select Edit the Options.

In this case I have selected “New Output”, which I have set up to output to NewOutput/NewOutput1 on my file system. I have also renamed the report to August_Sales_Reports.

Select OK, and select Burst The Reports from the radio button on the lower left side. Then click Run.

The reports will now be burst to the CognosOutput/NewOutput/NewOutput1 folder:

A few quirks: Cognos will append the language setting to the name of the report, and it will also append the value by which the report was burst (useful for organizing the reports). In addition, it outputs a second XML file that describes each report.
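The appended burst key can actually be handy if you post-process the output. As a quick sketch – assuming file names of the form ReportName_BurstKey_locale.ext, which matches the example above but should be verified against what your Cognos version actually emits – a small Python helper could group the output files by burst value:

```python
import re
from collections import defaultdict

# Hypothetical naming pattern: <report name>_<burst key>_<locale>.<ext>
# e.g. "August_Sales_Reports_Sales Rep 1_en-us.pdf" -- check the actual
# names your Cognos installation produces before relying on this.
PATTERN = re.compile(
    r"^(?P<report>.+)_(?P<burst>[^_]+)_(?P<locale>[a-z]{2}-[a-z]{2})\.(?P<ext>\w+)$"
)

def group_by_burst_key(filenames):
    """Group burst output file names by the burst key embedded in the name.

    Files that do not match the expected pattern are ignored.
    """
    groups = defaultdict(list)
    for name in filenames:
        match = PATTERN.match(name)
        if match:
            groups[match.group("burst")].append(name)
    return dict(groups)
```

For example, pointing this at the output folder would let you move each sales rep’s PDF (and its XML descriptor) into a per-rep subfolder before distribution.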



SQL Server Analysis Services Cubes and Cognos PowerPlay

By Peter Beck, CBIP

SQL Server Analysis Services is a popular OLAP product included with Microsoft SQL Server. Especially since SQL Server 2005, this product has been quite powerful and fairly easy to develop with. SQL Server provides the Business Intelligence Development Studio (BIDS), a Visual Studio-like environment that aids the development of Analysis Services cubes.

For browsing and reporting on a cube, however, the choices have been more limited. Excel is a good option, especially since Excel 2007, which contains enhancements that make creating cross-tab reports easier than in previous versions.

If your users are committed to Cognos PowerPlay, you can use that tool as well. Setting up an MS Analysis Services cube for browsing with PowerPlay is a little more involved than for a regular Cognos cube, but is still quite easy to do.

First, you need to access a tool called PowerPlay Connect. This can be found in the Tools folder of your Cognos installation:

The executable is ppconnct.exe.

This tool is used to create a binary “pointer file” with an .MDC extension. Once created, this file behaves like a PowerPlay OLAP cube, but the underlying cube is actually (in this case) a Microsoft Analysis Services cube.

Start PowerPlay Connect, select File… New to create a new MDC file. For the database type, select MS SSOS (ODBO):

You have a couple of choices next. If you know the server name of your SQL Server Analysis Services instance, you can enter it in the next line, under Server:[Port]. In this case I can enter “localhost”, as I am serving the cube from my local machine.

Alternatively, I can select the … button beside Database, which presents the Choose a Remote Cube dialog box. There I select Microsoft SQL Server OLAP Server at the bottom, and then select a connection I created previously using the tool – in this case, a connection called local. I’m then presented with a list of databases available on the connection “local”.

I can then open SSAS_Adventure_Works and the cube that exists in this particular database. A database might have many cubes available in it.

Alternatively I could create a new connection, by clicking on Connections… and then clicking Add. I enter the name I want to give the connection, and then the name of the server, and select Microsoft SQL Server OLAP Server and MSOLAP as the provider:

Since I selected SSAS_Adventure_Works, we see this in the details of the connection string:
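For reference, the connection strings PowerPlay Connect builds here follow the standard OLE DB for OLAP pattern. Something like the following is typical (the exact string, and any version suffix on the provider name, may differ in your environment):

```
Provider=MSOLAP;Data Source=localhost;Initial Catalog=SSAS_Adventure_Works;
```

Data Source is the Analysis Services server and Initial Catalog is the database selected above.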

I can now click File… Save and save this as an .MDC file:

The file appears as a normal MDC cube, but is really just a pointer file to the SSAS database server:

Using PowerPlay, I can now open the MDC file as if it were an ordinary cube. I can navigate it in generally the same way I would navigate a Cognos cube, although some things, such as the Measure Groups that are part of the Microsoft approach to OLAP, do not behave exactly the same way. Measures appear as a single list, much as they do in Cognos cubes.

PowerPlay Connect MDC files can be put on the network, or shared like any other file, and will work as long as the user has access to the underlying Microsoft database server.