Demystifying IBM Cognos 8 Security


One of the most challenging aspects of establishing any enterprise business intelligence platform is the design and implementation of a security model that is flexible and extensible enough to meet the needs of your business while also remaining simple enough to administer and maintain. Every organization is unique in how it operates, so a structure that best suits one may not be appropriate for another. Because of these fundamental differences, there is, unfortunately, no single archetype for business intelligence security. There are, however, a few proven guidelines which, if followed, will help ensure that whichever solution you arrive at works in harmony with your IBM Cognos 8 environment.

Centralization

Unlike prior generations of enterprise business intelligence software, IBM Cognos 8 has no local or proprietary method of authentication. These responsibilities have thankfully been off-loaded to the enterprise authentication provider of your choosing (LDAP, Active Directory, SiteMinder, etc.). Once authentication is performed by the third-party resource, group- and role-based authorization is applied from within Cognos 8. Out of the box, Cognos is configured with a standard set of application-based groups and roles that assign capabilities to the various studios and other aspects of IBM Cognos 8 in a manner that aligns closely with the typical software licensing model.
At this point it is tempting to begin mapping users from your external directory directly to the default roles within the Cognos namespace, and creating new groups and roles to organize users in ways that satisfy the requirements of your security model (by function, region, business unit, etc.). If you have ventured down this path, you can likely attest that while it may be a near-term fix, it quickly becomes cumbersome to manage. The IBM Cognos directory lacks many of the creature comforts of a fully featured directory service management interface, so if your organization's IT policies permit, you are almost always better served managing group and role membership for your BI security model within your directory service itself. Providers such as Microsoft's Active Directory allow for delegation of management to specific users and offer a management console simple enough for almost anyone to use. The key to this approach is understanding that moving your BI security model into your directory service does not equate to an increased burden on dedicated information security resources within your organization.
There are multiple benefits to this approach. As mentioned, it centralizes the administration and maintenance of your BI security architecture at a single point, and it enables those tasks to be done in the most effective manner possible (natively within the directory service itself). If you operate multiple IBM Cognos 8 environments, you can quickly and easily mirror the same security model across all of them. You can also take this approach one step further: rather than map your custom groups and roles from your directory service directly into IBM Cognos 8, you can decouple your security model from the application by creating matching groups and roles within the built-in namespace and then performing one-to-one mappings from your directory to these placeholder objects. This technique, while adding some overhead, makes your IBM Cognos 8 environment agnostic to its underlying authentication provider and gives you the flexibility to change the resource more easily in the future if the need arises.
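As an aside, one advantage of keeping the model in the directory is that membership can be inspected and audited with standard tooling. Below is a minimal, illustrative Python sketch using the ldap3 library to list the members of a directory group that backs a Cognos role; the server address, credentials, and group names are hypothetical placeholders, not part of any Cognos configuration.

from ldap3 import Server, Connection, ALL

# Hypothetical Active Directory host and service account
server = Server("ldaps://dc01.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc_cognos",
                  password="secret", auto_bind=True)

# Find the directory group that backs a Cognos role and list its members
conn.search(search_base="OU=BI Groups,DC=example,DC=com",
            search_filter="(&(objectClass=group)(cn=BI-Report-Authors))",
            attributes=["member"])

for entry in conn.entries:
    for member_dn in entry.member.values:
        print(member_dn)  # each member is managed in AD, never in Cognos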

Consolidation

Even if you have selected a single primary authentication provider for IBM Cognos 8 and centralized on this platform as described above, you may still encounter additional heterogeneous security providers that exist only to handle authorization duties for specific data sources. For example, your data warehouse may have its own proprietary set of security tables or utilize built-in security, and Series 7 Cognos customers hoping to migrate their PowerCubes into IBM Cognos 8 might feel tethered to their legacy Access Manager namespace. The value of centralizing your security model for ease of management degrades quickly once changes must be made in triplicate across these additional security providers, and the need to consolidate becomes very apparent.
IBM Cognos 8 can enable this sort of consolidation by utilizing features from various components of its application architecture. You can leverage your groups and roles from the IBM Cognos 8 namespace to apply data-level security from within Framework Manager, or even pass a user's single sign-on credentials straight through to the database. In addition, the latest version of IBM Cognos 8 Transformer fully supports the use of the Cognos namespace for data-level security within your cubes. To facilitate this migration, the Ironside Group has developed a methodology and a set of tools to help you move your legacy Access Manager namespace to IBM Cognos 8.

Integration

Sometimes there isn't an out-of-the-box, fully consolidated or centralized solution to suit the needs of your specific organization, but that doesn't mean you should have to compromise on your requirements or change how you operate to meet the needs of your technology investment. Your business intelligence platform should conform to your business needs, not the other way around. Stay tuned for a future article that addresses a few of the more unorthodox security integration challenges that are likely to arise in the real world.

Determinants – The Answer to a Framework Manager Mystery


Determinants can play a crucial role in the overall performance and consistency of your Framework Manager model, yet they remain one of the most confusing aspects of the application for most developers. This article will attempt to end the confusion.
Determinants are used so that a table stored at one grain (level of detail) can behave as if it were stored at another grain. They are primarily used for dimension tables where fact tables join to the dimension at more than one level. (There are other cases where you could use them, but those are less common and fairly specific.)

The Situation

Let's use the example of a date dimension table with day-level grain. If all the fact tables join at the day level, the most detailed level, then you do not need determinants. But as many of us know from experience, this is not always the case. Fact tables are often aggregated or stored at different levels of granularity for a number of reasons.

The Problem

The trouble arises when you wish to join to the dimension table at a level that is not the lowest level. Consider a monthly forecast fact table at the month level of detail (one row per month). A join on month_id (e.g. 2009-12) would return 28 to 31 records from the date dimension (depending on the month) and throw off the calculations. Determinants solve this problem.

The SQL

Often when modeling, it is useful to think about the SQL you would like to generate. Without determinants, the incorrect SQL would look something like this:
SELECT
F.FORECAST_VALUE,
D.MONTH_ID,
D.MONTH_NAME
FROM SALES_FORECAST F INNER JOIN DATE_DIM D ON
F.MONTH_ID = D.MONTH_ID
This code will retrieve up to 31 records for each of the sales forecast records, so applying aggregate functions such as Sum or Count would produce an incorrect result. What you would like to generate instead is something along the following lines, which collapses the dimension to a single row per month and then joins to the fact table:
SELECT
F.FORECAST_VALUE,
D1.MONTH_ID,
D1.MONTH_NAME
FROM SALES_FORECAST F INNER JOIN
( SELECT DISTINCT
D.MONTH_ID,
D.MONTH_NAME
FROM DATE_DIM D ) AS D1
ON F.MONTH_ID = D1.MONTH_ID
As shown above, the trick is to understand which columns in the dimension table are related to the month_id, and are therefore unique along with the key value. This is exactly what determinants do for you.
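To see the effect outside of Cognos, here is a short illustrative Python sketch (using pandas with made-up sample data) that reproduces the fan-out problem and the grouping fix that a determinant applies:

import pandas as pd

# Day-grain date dimension: one row per day of December 2009
date_dim = pd.DataFrame({"DAY_ID": pd.date_range("2009-12-01", "2009-12-31")})
date_dim["MONTH_ID"] = date_dim["DAY_ID"].dt.strftime("%Y-%m")
date_dim["MONTH_NAME"] = date_dim["DAY_ID"].dt.strftime("%B")

# Month-grain fact table: one forecast row for the month
forecast = pd.DataFrame({"MONTH_ID": ["2009-12"], "FORECAST_VALUE": [1000]})

# Naive join at the month level: the fact row fans out to 31 rows
naive = forecast.merge(date_dim, on="MONTH_ID")
print(len(naive), naive["FORECAST_VALUE"].sum())      # 31 rows, sum = 31000 (wrong)

# What the determinant effectively does: reduce the dimension
# to one row per month before joining
months = date_dim[["MONTH_ID", "MONTH_NAME"]].drop_duplicates()
correct = forecast.merge(months, on="MONTH_ID")
print(len(correct), correct["FORECAST_VALUE"].sum())  # 1 row, sum = 1000 (right)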

Unraveling the Mystery in Framework Manager

Following Cognos best practices, determinants should be specified at the layer in the model in which the joins are specified.
Here we see a date dimension with four levels: Year, Quarter, Month, and Day.
This means we can define up to four determinants in the query subject, depending on the granularity of the fact tables present in your model. The first three levels (Year, Quarter, Month) should be set to "Group By," as they do not define a unique row within the table, and Framework Manager needs to know that values must be grouped to that level. In other words, the SQL needs to group by a column or columns in order to uniquely identify a row for that level of detail (such as Month or Year). The Day level (often called the leaf level) should be set to "Uniquely Identified," as it does uniquely identify any row within the dimension table. While there can be several group-by determinants, there is typically only one uniquely identified determinant, defined by the unique key of the table. The uniquely identified determinant by definition contains all the non-key columns as attributes, and it is set automatically at table import time, if it can be determined.
The Key section identifies the column or columns which uniquely identify a level. Ideally this is one column, but in some cases it may need to include more than one, for example, if your Year and Month values (1-12) are stored in separate columns. In short, the key is whatever set of columns is necessary to uniquely identify that level.
Using our aforementioned table, the setup would look like this:
The Attributes section identifies all the other columns which are distinct at that level. For example, at the month_id level (e.g. 2009-12), columns such as month name, month starting date, and number of days in the month are all distinct. Items from a lower level, such as date or day-of-week, are naturally not included at that level.
Technically, the order of the determinants does not imply levels in the dimension. However, columns used in a query are matched against the determinants from the top down, which can be very important to understanding the SQL that will be generated for your report. If your report uses Year, Quarter, and Month, the query will group by the columns making up the Year key, Quarter key, and Month key. But if the report uses just Year and Month (and not Quarter), the group by will omit the Quarter key.

How Many Levels Are Needed?

Do we need all four levels of determinants? Keep in mind that determinants are used to join to dimensions at levels higher than the leaf level of the dimension. In this case, we're joining at the month level (via month_id). Unless there are additional joins at the year or quarter level, we do not strictly need to specify those determinants. Remember that year and quarter are uniquely determined by the month_id as well, and so should be included as attributes of the month determinant, as shown.

The Result

Following these simple steps, the following SQL will be generated for your report. The highlighted section is generated by the determinant settings: notice how it groups by MONTH_ID and uses the MIN function to guarantee uniqueness at that level. (No, it doesn't trust you enough to simply do a SELECT DISTINCT.) The second level of grouping is the normal report aggregation by report row. The result is that the join is done correctly, with each monthly fact record joined to one dimension record at the appropriate level, producing the correct values in the report.

IBM Cognos 8 Framework Manager Model Design


For those starting out with Cognos 8, or even those who have worked with it for quite a while, Framework Manager model design can seem like a daunting or even overwhelming task. With so many options available, it can be hard to know what to consider during design, and how to promote stability and sustainability.
This is the first in a two-part series that will focus on the best practices, organization, and overall methodology of Framework Manager model development. As is often the case with complex tools, even developers who have worked with Cognos for a while can be confused about at least some of the concepts and considerations of model design. This first article starts from the beginning and describes an overall four-layer approach, detailing the purpose of and reason for each of the four layers. Next month's article will describe advanced best practices and other tips.
Four Layer Approach
One of the basic tenets of best practice model design is to segment the model into four specific sections or layers: Data, Logical, Presentation, and Dimensional. Each layer has a specific function and set of modeling activities. Generally, the layers build upon one another, with the data layer being the foundation of the model. Other areas, such as packages and connections, are also important but fall outside of the layers. Many of the best practices can be thought of in terms of which development activities should or should not be performed in each layer.

Data Layer

The data layer, also called the import layer, contains the data source query subjects, based directly on the underlying database objects. Whenever possible, use unmodified SQL ("select * from [table name]") to retrieve the table information. If you modify the SQL or add a filter or a calculation to the data source query subject, you lose Framework Manager's metadata caching capabilities. This means that Cognos 8 will validate the query subject at report run time, adding overhead to the report load time. In some circumstances this may be a worthwhile trade-off, but it should be avoided when possible.
Add joins, cardinality, and determinants at this level. Cardinality, the definition of how joins should behave, is critical to a well-developed model and can be confusing even to seasoned veterans. In lieu of a lengthy discussion within this article, consult the IBM documentation, which offers a thorough discussion of cardinality and the typical situations you may encounter.
Expect 20 to 40 percent of the model development to take place in the data layer.

Logical Layer

The logical layer adds the bulk of the meta-data to the model, and consequently is where most of the work is likely to occur. It provides business context and understanding to the data objects. Tasks include but are not limited to:
  • Organize elements into logical query subjects. This may mean combining elements from multiple tables, as in a snowflake dimension.
  • Rename elements, add descriptions and tooltips, and add multiple language definitions if needed. Assign standardized, business-defined terms to the database columns, giving them business context.
  • Add calculations and filters, including stand-alone filters, embedded filters, and security-based filters. Base these on the underlying data layer objects to make them less susceptible to errors when the query subject is copied and reused.
  • Arrange query subject items in a user-friendly manner. Make use of folders to group similar items, or when there are too many items. I suggest 12 or fewer objects per folder, although I have used more when the logical breakout calls for it. For example, if there are many dates in a query subject, it might be more intuitive to have all the dates in a single large folder than to try to subdivide them into smaller, arbitrary folders. Arrange the contents of each folder in an intuitive manner, such as alphabetically or with the most commonly used items at the top. HINT: The "Reorder" command available within Framework Manager is useful for this purpose.
  • Assign output formats and usages to the reporting elements. This is easier if you create a small folder of trivial calculations used only to provide standardized object format templates. The formats can easily be copied to target items by multi-selecting and dragging the topmost format through the list.
  • Add prompts, including cascading prompts and prompt-based calculations.
Roughly 50 to 70 percent of the modeling work occurs in this section.

Presentation Layer

The importance of making information easy for report writers to use is frequently underestimated, but it is a critical component of driving user adoption. Fortunately, this can be a simple step. The presentation layer is used only as an organizational structure to make it easier for report writers to find the information they need. This layer includes only shortcuts to existing items in the logical layer, plus organizational items such as folders and namespaces.
For example, create a namespace called Orders and include shortcuts to the ten or twenty relevant query subjects, out of perhaps a hundred or more query subjects in the logical layer. Also include shortcuts to relevant filters. Commonly used query subjects (such as Items or Customers) will appear in multiple areas. Rename the shortcuts to something which provides helpful business context.
For organizing major groupings, you can use either folders or namespaces. However, be aware that namespace names must be unique, and that items within a namespace must likewise be unique. So, if you use folders for your major groupings, you cannot have a shortcut named "Items" in more than one folder; you must rename them to unique names, such as "Order Items". For this reason in particular, I generally prefer to use separate namespaces.
The presentation layer takes approximately 10% of the overall model design effort.

Dimensional Layer

The dimensional layer is required only for models which include dimensionally modeled data. Leaving aside the trivial situation where cubes are simply imported into the model, this means dimensionally modeled relational (DMR) data.
Specifically, this is where you create dimensional and measure-dimension query subjects. Much like the presentation layer, this layer is built upon the logical layer, leveraging the effort already put into that layer. Apply the element renaming, descriptions, and tooltips in the logical layer, and they can be reused in the dimensional layer, with some help from the search tool.
Finally, note that when you are creating the final package, you should hide the data and logical layers so that the user will only see the presentation and/or dimensional layers.
Last month's article described the overall approach for best practice model design using the four-layer architecture. In this month's article, I'll describe some advanced issues which arise during model design, and how best to deal with them. Given the limitless and complex situations that can come up during modeling, these techniques may or may not apply to your specific situation.

Data Sources

One area which many modelers neglect is data sources. A key part of the data source setup is the Content Manager data source, which references a connection configured via the web interface in the Administration area of Cognos Connection. You should have as few of these as possible, ideally only one per database, unless there are very specific reasons (such as security) for more. Even then, many security or logon issues can be addressed via multiple connections or signons within a single data source.
Framework Manager will automatically create multiple data sources (one per schema, for example), but they should all reference the same Content Manager data source. The reason for minimizing these is that multiple Content Manager data sources will open separate connections to the database and force Cognos to perform simple joins within the reporting engine instead of in the database, where they belong.

Relationships and Cardinality

The area which causes the most confusion and difficulty in modeling is relationships and cardinality. There are a number of documents which discuss the handling of data modeling traps, such as the Guidelines for Modeling Metadata document included in the product documentation, so I will not duplicate those discussions here, instead addressing other topics.
A common question is when to alias a table in the data layer. In general, conformed dimension tables used across multiple fact tables, or contexts, should not be aliased. This is important in creating conformed dimensional reports. For example, a parts dimension is the same whether used for reporting against sales or inventory fact tables. When a table has separate usages, or meanings, it should be aliased: for example, a general-purpose address table which contains branch, customer, and employee addresses. These usages are often identified by multiple join paths or filters, which can signal different meanings.
Another situation calling for aliases is multiple usages of a dimension table. The most common occurrence of this is the general-purpose date dimension table. Applications such as health care and insurance can have many different joins to a date dimension, each of which has a different context. This can be complicated to track, because one usage, such as Admission Date, must apply in a conformed manner to many fact tables, while a different usage (Treatment Date) must also be conformed across many of the same tables. Obviously, a meaningful naming convention is required!
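To make the role-playing idea concrete, here is a loose Python/pandas sketch with invented columns; in Framework Manager the equivalent is done with alias query subjects in the data layer, not in code:

import pandas as pd

# One physical date dimension, used in two different roles
date_dim = pd.DataFrame({"DATE_KEY": [1, 2],
                         "CAL_DATE": ["2009-12-01", "2009-12-15"]})
claims = pd.DataFrame({"CLAIM_ID": [100],
                       "ADMIT_DATE_KEY": [1],
                       "TREAT_DATE_KEY": [2]})

# Each "alias" renames the key and attributes for its context
admit = date_dim.rename(columns={"DATE_KEY": "ADMIT_DATE_KEY",
                                 "CAL_DATE": "ADMISSION_DATE"})
treat = date_dim.rename(columns={"DATE_KEY": "TREAT_DATE_KEY",
                                 "CAL_DATE": "TREATMENT_DATE"})

# The same dimension joins twice, once per role
result = claims.merge(admit, on="ADMIT_DATE_KEY").merge(treat, on="TREAT_DATE_KEY")
print(result)  # one claim row carrying both ADMISSION_DATE and TREATMENT_DATE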
If you do decide that your logical query subject requires a different alias, make sure to make liberal use of the Remap to New Source option, available by right-clicking on the query subject.

Controlling Stitch Queries

One frequent point of confusion is how the model causes reports to create stitch (or multiple) queries. Resolving this requires a firm understanding of cardinality in relationships. Also realize that a stitch query is sometimes the best way of issuing a query.
Report Studio will generate simple, consistent queries when there is a consistent progression of 1-to-many relationships, a smooth progression from dimensions to facts. It can easily generate facts with correct aggregations in this simple scenario. Remember that Cognos will treat the query subject at the end of one or a series of 1:N relationships as a fact table.
Similarly, it also works well in the case where multiple "dimensions" converge on a single fact table. An easy visual check is to make sure that the 1:N relationships proceed in the same direction, with the ultimate fact table serving as the "many" end-point of several dimensions.
The stitch issue occurs when there is more than one "many" side along the same dimension path.
A query which utilizes both of the rightmost, or "fact," tables will result in a stitch query, as the only relationship between them is via a conformed "dimension" table. In a query where inventory (fact) and sales (fact) are compared at the product (dimension) level, a stitch query is the correct approach. However, if this is an unintentional or poorly modeled relationship, incorrect or inefficient queries will be generated. Understanding the implications of cardinality in relationships is the best defense against this. A rough sketch of the stitch shape follows.
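Conceptually, a stitch query aggregates each fact to the conformed dimension's grain and then joins the two result sets with a full outer join. Here is a rough Python/pandas sketch of that shape, with invented sample data:

import pandas as pd

sales = pd.DataFrame({"PRODUCT": ["A", "A", "B"], "SALES_QTY": [5, 3, 7]})
inventory = pd.DataFrame({"PRODUCT": ["A", "C"], "ON_HAND": [20, 4]})

# Aggregate each fact separately to the conformed (product) grain
s = sales.groupby("PRODUCT", as_index=False)["SALES_QTY"].sum()
i = inventory.groupby("PRODUCT", as_index=False)["ON_HAND"].sum()

# "Stitch" the two result sets together on the conformed dimension key
stitched = s.merge(i, on="PRODUCT", how="outer")
print(stitched)  # products A, B, C, with NaN where one fact has no rows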

Naming Conventions and Nomenclature

Another area of confusion is naming conventions and nomenclature. The best practice is to have the query subject and element names all carefully thought out and implemented before developing the model, but that doesn't always happen.
First, let's describe how Framework Manager and Report Studio treat element names. Each element is uniquely identified within the package by a three-part name: the element itself, the parent query subject (or shortcut) name, and the namespace containing the query subject. Element names must therefore be unique within the query subject, and query subjects must likewise be unique within the namespace. The namespace name, however, must be unique within the entire model. So you can have multiple references to an element "Part Name" within a query subject "Part Master", but a namespace such as "Purchasing" or "Inventory" can occur only once within the model, making the fully qualified name (e.g. [Purchasing].[Part Master].[Part Name]) unique. Note that folder names do not come into play when defining names; they are used only for organizing other items.

Managing Name Changes

The result of all this is that names within the model are very important, especially once report writing begins, because they are difficult to change without invalidating existing reports.
So, in order of preference, here is how to maintain consistent names:
  1. Get it right the first time. If multiple areas of the organization have a stake in naming conventions, involve all of them early in the design process, even if the first stages do not apply to some of them.
  2. Identify situations where a renamed item is used. Here the option "Find Report Dependencies" is invaluable in identifying reports where an item is used. Keep in mind, though, that if your model references a development server, it will not identify reports or queries which exist only in your production environment, so it may not adequately identify issues in every environment.
  3. Move the renamed item to a "deprecated" query subject or folder, and create a similar element with the new name. While this relies on a certain amount of end-user training, to instruct report and query writers not to use objects within a folder named "DO NOT USE" or similar, it ensures that existing reports will still run correctly while the new, correct name is used moving forward.

What’s new in IBM Cognos 10 Framework Manager


While Framework Manager didn’t receive the dramatic changes in IBM Cognos 10 that some other areas did, there are still some interesting and useful new features, which might have slipped under the radar with the new release.

The main new features are:
  • The ability to create “Durable Models”
  • Model Design Accelerator
  • Dynamic Query Mode
There have also been several minor improvements in the way IBM Cognos 10 handles SAP BW sources. If that's an option you are using, either contact Ironside Group and speak to one of our expert IBM Cognos/SAP consultants or check out the Framework Manager manual for details.
There is one notable feature that is deprecated in Cognos 10: native support for CVS and Microsoft Visual SourceSafe has been removed. Support for these and other code repositories is still available, but in a more generic fashion.
Let's take a look at the major changes.

Durable Models

This is probably my favorite of the new features. The problem this feature addresses is the situation where you want to rename Framework Manager objects, such as query subjects, query items, filters, or namespaces, after reports have already been written. If you change the name in the model and republish the package, the report will fail with references to data items which cannot be found. While there have been some workarounds for a while, this feature provides an elegant solution.
The key to making this work is to start with a new model and select an unused language or dialect as the design language. I usually choose English (New Zealand), or EN-NZ. Then choose standard English (EN) as the active language. Finally, there is a new project property, Use Design Locale for Reference; you must set this to true. This causes object names to be referenced internally in the report using the design language (EN-NZ) rather than the language of the report author (EN). This is similar to authoring reports in multi-language environments, but it requires no special actions on the part of the report author.

In this example, we added a data item called EXPENSE_GROUP_EN to a report. The Framework modeler never applied a user-friendly name to the column, but we add it to the report anyway and save the report. The modeler then uses Framework Manager to change the active language (EN) data item name to a business-friendly name, such as "Expense Account Group"; the design language (EN-NZ) object name must not change. Republish the package. Simply by running the report with the updated package, the renamed query item name and column heading now appear in the report. This is because the internal object reference in the report is set to the design language (EN-NZ) name, which did not change. At run time, the report executes using the language of the user, and so displays the updated object name.
Looking in Report Studio, you will see that the report structure references the original EN-NZ name of the object (EXPENSE_GROUP_EN), which did not change. At run time, the report executes using the active language (EN) name and column header, thereby isolating the name change. Essentially, the report is built in one language (English - New Zealand) and run in a different language (base English), but this is all done without any special actions by the report author. And regardless of the language in which the report was authored, object names are always stored internally using the model design language.
There are a couple of caveats to this method. Most obviously, once you set the design language name for an object, you must never change it. Therefore, you'll need meaningful object names from the start. I suggest using the native database column names where possible, as they are meaningful, unlikely to change, and not required to be "business friendly" at this point.
Also, this technique works for renaming objects only. It will not compensate for major structural changes, such as moving an object from one namespace to another.

Model Design Accelerator

The Model Design Accelerator is a graphical “wizard” which walks you through the design process for a relational model, complete with physical, logical, and presentation layers. Like most wizards, it doesn’t do much that you can’t do on your own, but does make the steps easier, while enforcing good design principles. You have the complete ability to modify your design manually, as usual.
You launch Model Design Accelerator from the Tools menu. The wizard presents you with a blank star schema model. You drag-and-drop items from your data source onto the fact and various dimension tables as needed. Simply add or delete as many new dimensions as you need. You can rename query subjects and items within the wizard, or later. When you are finished, Model Design Accelerator presents you with any warnings, then proceeds to generate the model, complete with the three layers, in separate namespaces.
Interestingly, it doesn't automatically create a dimensionally modeled relational (DMR) layer. The wizard does make that step easier by directing you to think dimensionally about the model based on the star schema, but you still need to manually apply the various levels to the hierarchies, which can take some thought.
While this tool doesn't technically add any functionality that didn't already exist, it's a nice feature targeted at the mid-level or beginner model designer. It helps the occasional or new model designer think in dimensional terms and develop a solid three-layer model with minimal effort, and it does that admirably.

Dynamic Query Mode

Dynamic Query Mode (DQM) is an option that speeds up query performance against certain cubes by providing in-memory data caching. In the current release, Dynamic Query Mode is limited to three specific OLAP data sources: TM1 version 9.5.1, Essbase versions 9 and 11, and SAP BW version 7.1. Furthermore, the specific performance improvements you can expect differ according to the data source. In general terms, they include better null suppression and improved, repeatable performance, which is useful for the ad-hoc analysis style of query for which these cubes are often used.
There is a separate document dedicated to DQM, plus more information in a "Cookbook" on the IBM web site (http://www.ibm.com/software/data/cognos/customercenter/).

How to Standardize Unique Default Skins for Each of Your Different Environments


Many organizations utilize multiple Cognos environments as part of their development process, such as development, QA, and production servers. It is not uncommon for users (especially system administrators and developers) to become confused about which server they are working on, as all of the environments look the same (logo, background, font, etc.). So how can you easily distinguish between these various environments?
Here we describe a simple technique to apply a common skin to each Cognos environment, regardless of what a user has set in their profile. This method can be used to set one skin in development and another in the test/QA system. In production, you can choose to allow each user to select a skin, or to enforce a standard.
When an individual user logs into Cognos for the first time, their individual profile is created. Because it is set at the individual level, administrative changes to the Cognos default profile will not affect existing users. For example, if you had 50 experienced users and 10 new users, a change to the default profile would only affect the 10 new users when they log into Cognos for the first time.
Let’s begin:
1. We'll assume a Cognos 8.3 installation and describe how to change the skin on one server; the process can be repeated for every server (environment).
First, select a skin to be used as the default for each environment. Several skin sets are delivered with each Cognos installation; by default, the Corporate skin is used:
  • Business
  • Classic
  • Contemporary
  • Corporate
  • Modern
  • Presentation
Let's say we decided on the arrangement shown below:

Environment/Server          Default Skin
Production Environment      Corporate
Development Environment     Contemporary
QA Environment              Modern
2. Select the environment you want to change and log into that server. We will use the development server for this example; based on the matrix we created in step 1, we want to set its skin to Contemporary.
We can find the skin scheme files under:
Cognos root\webcontent\skins

A typical location on Windows would be:
C:\Program Files\cognos\c8\webcontent\skins
Within the skins directory you will find one folder per skin. We'll be replacing the content of these skins with the content from the contemporary skin.
3. Select all the skin folders, copy them, and paste them into the same directory. We'll use these new folders as backups of the originals. You may want to rename the backup folders to something like "Original - contemporary" or a similar naming convention.
4. Now let's ensure that everyone on this server sees only the contemporary skin. Open the contemporary folder and select all sub-folders and files within it. Copy and paste (overwrite) the contents of this folder into every other skin folder (except the backups you just created). We have effectively changed the contents of every skin to be the contemporary skin: regardless of the skin a user has set in their profile, they will always see the contemporary skin. Should you need to roll back or make future changes, you can always refer to the backup folders we created in step 3. A scripted version of steps 3 and 4 is sketched below.
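For administrators who prefer to script steps 3 and 4, here is a rough Python sketch. The install path and skin name are assumptions to be adjusted to your environment, and you should test it on a non-production server first:

import shutil
from pathlib import Path

# Assumed install root -- adjust to your server's actual path
SKINS = Path(r"C:\Program Files\cognos\c8\webcontent\skins")
SOURCE = "contemporary"  # the skin to enforce on this server

for skin in [d for d in SKINS.iterdir() if d.is_dir()]:
    if skin.name == SOURCE or skin.name.startswith("Original"):
        continue  # leave the source skin and existing backups alone
    backup = SKINS / ("Original - " + skin.name)
    if not backup.exists():
        shutil.copytree(skin, backup)      # step 3: back up the original
    shutil.rmtree(skin)                    # step 4: replace the skin's contents
    shutil.copytree(SKINS / SOURCE, skin)  # with the source skin's contents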
5. To confirm, log into Cognos Connection. You will see that the default skin has been changed (green scheme).
6. Repeat these steps for each environment, choosing a different skin as the default and copying the contents to the folders of every other skin.
Note: If you or any user can't see the change immediately, you may need to clear the cached web pages/content from your web browser first. Here are the instructions for IE 6:
  • Go to Tools -> Internet Options.
  • In the Temporary Internet Files section, click the Delete Files button. A small Delete Files window opens; check "Delete all offline content".
  • Click OK, then OK again.
  • Launch a new Cognos session. You should now see the new skin.

A Tool to Standardize The Look and Feel of Your Reports


In a previous Tips and Tricks article we discussed how to standardize a "skin" across each of your different Cognos 8 environments. That technique gave all users the same look and feel when interacting with Cognos Connection. We're going to take it one step further and discuss the benefits of using a built-in tool called the layout component reference to accomplish this at the individual report level.
Having reports with a common look and feel promotes user adoption as well as trust in the tools. Have you ever noticed that some of your reports look similar to others but have slight formatting variations? Text justified to the left instead of center? Font size or style not quite matching? Table formatting inconsistent? As is often the case with multiple report developers, over long periods of time you may find that your old reports start to look significantly different from your new ones.
Many developers use some sort of customized template to begin report development. While that is a great idea and a general best practice, templates may be too generic for every type of report, not to mention a big undertaking to maintain. Imagine having a collection of 50, 100, or even 300 reports made with a common template; your legal team informs you that the disclaimer in your report footers requires an immediate change. How would you go about accomplishing that in a time- and cost-efficient manner? The short answer is that you can't!
A layout component reference is a widely underutilized item available in the Toolbox of the Insertable Objects pane. It allows you to create reusable objects within reports, then store and maintain these objects in one convenient location. Changes made to layout components are reflected in every report that references them. All of a sudden, changing the footer disclaimer for your 300+ reports goes from a multi-resource, multi-day, error-prone, low-level development nuisance to a five-minute fix.
Customizing a layout component reference is also easy and intuitive. A report developer can choose whether an object updates automatically when changes are detected, or simply ignores future changes altogether. A developer is also able to nest objects, allowing large multi-component objects with complex formatting. Child objects can be individually modified, which is especially useful in headers where a title may need to change per report.
While there is some cost (time) associated with converting your old reports to make use of this feature, the future benefit will outweigh the investment, especially as the number of reports continues to grow. When time permits, schedule a time to convert your older reports. The next time you need to make broad changes, you'll be thankful you did.
Below is a screenshot of the Insertable Objects Pane where the Layout Component Reference can be found.

Presenting Reports in a Calendar Format


The Problem

Over the past several years, a number of customers have inquired about a method to deliver real-time scheduling information in a common calendar format. Typically this requires the ability to schedule events around or relative to other events, update dates in a separate application, and view the resulting calendar information immediately. The need for real-time information eliminates any possibility of caching or warehousing the data to stage it and improve performance. Fortunately, the amount of event data displayed on the calendar is often fairly small, and we can provide drill-down from specific events on the calendar to more detailed information on lower-level reports. Additionally, other filters can be utilized to narrow the types of events displayed within the calendar. This article describes one particular case of calendar reporting.

The Solution

To tackle this report, we divided the issues into two areas: the layout of the calendar itself, and the daily information within each date “cell”.

Setting the grid

The general approach to the calendar structure was simple: a 7 x 6 repeater grid, allowing for 7 days per week and up to 6 rows, to handle cases such as a 31-day month beginning on a Saturday. The driving table for this query was a date dimension table (one record per day) which included two month columns, an actual month and a display month. The purpose of the two fields is to compensate for the first week of the month, which usually does not start on a Sunday. If the month started on a Wednesday, for example, the display month included the three days of the prior month needed to fill the gap, thus ensuring that each displayed week starts on a Sunday. The report filter included an OR condition to select records whose actual or display month matches the prompted month value. Various conditional formatting options are available for those prior-month days, such as shading the display background, to distinguish them from the actual selected month. A sketch of the display-month logic appears below.
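The display-month rule can be expressed as a small function. Here is an illustrative Python sketch of the logic described above (in the actual report, the two month columns were precomputed in the date dimension table):

from datetime import date, timedelta

def display_month(d):
    # Return the YYYY-MM month under which day d is displayed.
    # Trailing days of a month that share a Sunday-to-Saturday week with
    # the 1st of the following month are pulled into that next month's
    # grid, so every displayed week begins on a Sunday.
    if d.month == 12:
        nxt = date(d.year + 1, 1, 1)     # first day of the following month
    else:
        nxt = date(d.year, d.month + 1, 1)
    # The Sunday that starts the week containing that 1st
    week_start = nxt - timedelta(days=(nxt.weekday() + 1) % 7)
    if d >= week_start:
        return nxt.strftime("%Y-%m")     # shown in the next month's first week
    return d.strftime("%Y-%m")           # shown in its own month

# December 2009 began on a Tuesday, so Nov 29 and 30 display under 2009-12
assert display_month(date(2009, 11, 30)) == "2009-12"
assert display_month(date(2009, 11, 28)) == "2009-11"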
Additionally, we displayed the date value at the top of the cell in an attractive date format with shading, to clearly label the date cell.
Finally, a toolbar at the top of the report displayed a 12-month bar, showing the 6 prior and 5 subsequent months relative to the selected month, to provide simple month navigation.
The main body of the final report. This version is filtered by one event type.


 Close up of a date value, showing additional mouseover information. The events are color-coded to match the filters.

The list as it appears in Report Studio inside the repeater cell. HTML objects and JavaScript code were used extensively here, for mouseover information and color coding.

Master Detail

Each day "cell" was constructed as a simple list with a master-detail relationship to the parent date cell, linked by the date. An obvious concern in a case like this is that there can be a large number (potentially 31) of detail queries for each month. We tested a number of other options (such as conditional formatting and aligning an entire week) and found that the master-detail approach worked best for our intended use, given the environment and anticipated number of records. While not the snappiest of reports, it does perform reasonably well.
Here’s the Master-Detail connection between the date query and the list within the date cell. Additional filters still apply to the list to control types of events displayed.
Also, due to the small amount of space within a cell, we made good use of color to convey additional information about the events (color-coding the event types). The cells are fixed in width and have a minimum height; if there are many events within a cell, the weekly row expands to display them all.

Environment/Caveats

In order for this report to work reasonably well, we often work with a DBA to optimize the data structures. Primarily, we have used the two-column approach to the date dimension table and made sure that the event detail records were indexed by the date values used in the master-detail relationships.

Linking IBM Cognos Portal Pages through Prompts


In this month’s Tech Tip we will extend the capability introduced in last month’s article “Creating IBM Cognos Portal Pages” by linking various portal pages in Cognos Connection dashboard applications. To demonstrate this technique, we will explain how to pass your prompt selections from one dashboard portlet to other report portlets within the same dashboard page.

Create the main portal tab

The first step in enabling this functionality is to fuse all the different dashboard pieces together by creating another, master dashboard page that will encapsulate two things:
  1. The prompt.
  2. The two report outputs.
To do this, start by navigating to the “Global Filters” folder, created in the previous article, in Cognos Connection and click the New Page button.
Name the page “Returns Analysis Dashboard” and click Next.
This time around, our page will have two columns, so we will have to select the appropriate radio button in the Number of columns: section at the top. Then, using the Columns width: drop-downs, set the left column to a width of 20% and the right column to 80%.
Click the Add… link in the left column and add a Cognos Viewer object just as we did in the previous step.
Then, in the right column, add a Multi-page object from the Dashboard entries.
Your portal page should now be laid out like the image below. If this is the case, click Finish.
From Cognos Connection, click the "Returns Analysis Dashboard" page and click the properties icon of the left column.
Just like we did in Step 2, click the Select an entry… link and navigate to the “Prompt Report” (inside the “Global Filters” folder). Then, in the report’s properties, set it to communicate using the “promptChannel” channel.
Click OK twice.
Now click the properties button of the right column. Click the Select an entry… link and navigate to the “Detail Pages” folder as shown below.
Click OK.
The prompt and reports are displayed side-by-side.
Test your work by making a selection in the prompt.
To create a new tab in Cognos Connection using your new portal page, click the Add to my portal link (circled in red below).

IBM Cognos CSV Files: The Encoding Mystery


The ability to export a report to CSV format is one of the great out-of-the-box capabilities provided by the IBM Cognos suite of tools. A delimited text (CSV) file can be opened in any application associated with the .csv file type, such as Microsoft Excel, Microsoft WordPad, or StarOffice on Unix. In addition, the CSV format is widely accepted by most modern database vendors. Providing end users with this capability makes report data easily reusable and integration-friendly with other applications.
Of course, there are always caveats to consider with any technology solution. For example, I once had a project where the customer informed me that the exported CSV data file couldn't be used in a third-party application. The numeric information and English letters appeared as unreadable "symbols" when the file was consumed by their particular application. Since a CSV file should have no issues being consumed by any application, why couldn't this plain CSV file be interpreted by that application? After research and some testing, I was able to quickly identify that the root cause was "surprisingly" simple: the character encoding.
To uncover the mystery, let's first take a look at what can be found inside the "plain-looking" CSV text file. By default, Cognos reports saved in CSV format have the following attributes:
  • They are designed to support Unicode data across many client operating systems.
  • Data is encoded as UTF-16 Little Endian.
  • A BOM (Byte Order Mark) appears at the beginning of the file.
  • Fields are tab-delimited.
  • Strings are not enclosed in quotation marks.
  • A newline character delimits rows.
  • Only the results of the report query are shown. Page layout items, such as titles, images, and paramDisplay values, do not appear in the CSV output.
As you can see, the CSV file is no longer so "plain-looking"; it has several characteristics which must be considered in various cases.
For our example, we will focus on the fact that characters in the CSV-formatted report are encoded in UTF-16/Unicode, which is widely used today in many applications (but not all!) to support multiple languages. Why do characters need to be encoded at all? Fundamentally, computers just deal with numbers; they store letters and other characters by assigning a numeric value to each one. Before Unicode was invented, there were many different character encoding systems for assigning these numbers, and this led to a significant problem: no single encoding method could contain enough characters to represent even just one language. For instance, even for the English language, no single encoding was adequate for all the letters, punctuation, and technical symbols in common use.
Unicode became the solution. It provides a unique number for every character, no matter what the platform, no matter what the application, no matter what the language. The Unicode Standard has been adopted by most industry leaders. It is supported in many operating systems, all modern browsers, and many other applications and systems. Unicode enables a single software product, such as a single website, to be targeted across multiple platforms and languages (countries) without re-engineering, and it allows data, such as a Cognos CSV export, to be transferred through many different systems without corruption. That being said, as long as the target system supports Unicode, there should be no issues reusing the Cognos-exported CSV data. In the scenario mentioned above, as you can imagine, the issue was caused by the target system, which did not support Unicode.
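As an aside, if you ever need to hand an already-exported Cognos CSV to a non-Unicode application and cannot change the server configuration, a one-off transcode is straightforward. A minimal Python sketch, with hypothetical file names:

# Transcode a default Cognos CSV export (UTF-16 LE with BOM, tab-delimited)
# into windows-1252 for a target system that cannot read Unicode.
with open("cognos_export.csv", "r", encoding="utf-16") as src:
    text = src.read()  # the "utf-16" codec consumes the BOM automatically

with open("legacy_export.csv", "w", encoding="windows-1252",
          errors="replace") as dst:
    dst.write(text)  # characters outside cp1252 are replaced with "?"

The cleaner long-term fix, of course, is to change the server-side encoding, as described next.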
I am sure the next question you will ask is how to resolve such an encoding conflict at the source. Is there a way to set the encoding method when exporting a Cognos report in CSV format? Fortunately, the answer is yes: you can modify the properties of the CSV output format from the Cognos administration console.
Here are the Steps:
1. Start IBM Cognos Connection.
2. In the upper-right corner, click Launch, IBM Cognos Administration.
3. On the Status tab, click System.
4. From the System drop-down menu, click Set properties.
5. Click the Settings tab.
6. Next to Environment, Advanced Settings, click Edit.
7. Enter the parameter RSVP.CSV.ENCODING and a supported encoding value, such as windows-1252 (also known as "Windows characters"). The default value Cognos uses is utf-16le.

8. Click OK. Wait about 30 seconds for the changes to take effect.