Archive for February, 2010

Setting up automated Hyperion-Essbase Migrations with Life Cycle Management

February 23, 2010

What is LCM?

One of the most attractive features of version 11 is Life Cycle Management (LCM).  It is by far the biggest step Oracle has taken toward a true migration utility, and it makes migrating applications from DEV to TEST to PROD easier than ever before. But it does come with its share of nuances.

LCM actually existed in 9.3, but it was more of a command-line tool that was integrated with Workspace, and it did not really offer a good solution for security and provisioning.  LCM is now integrated (and installed) with Shared Services, with a much easier-to-use GUI.

What it does

LCM migrates entire applications or individual “artifacts” between environments.  If the environments share a common Shared Services instance, the migration can go directly from application to application.  More commonly, however, the environments use separate Shared Services instances, so most migrations are done by exporting to and importing from the file system.

The exports are stored in $HYPERION_HOME/common/import_export/<user>

Each user has a directory here that stores their exported LCM artifacts.  The exports are organized into a logical directory structure.  Some teams ZIP up this structure for entire applications and use the archives for version control and release management.
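That archiving step is easy to script.  Here is a minimal sketch using stand-in paths and names (demo directories under /tmp rather than a real $HYPERION_HOME, and "admin"/"Sample" as placeholder user and application names):

```shell
# Stand-in for $HYPERION_HOME/common/import_export/<user>/<app>;
# every name here is a demo value, not a real LCM export.
EXPORT_DIR="/tmp/demo_import_export/admin/Sample"
mkdir -p "$EXPORT_DIR/resource"            # mimic the export's nested layout
touch "$EXPORT_DIR/resource/listing.txt"   # placeholder for an exported artifact

# Archive the whole application export with a date stamp for release management
STAMP=$(date +%Y%m%d)
tar -czf "/tmp/Sample_lcm_${STAMP}.tar.gz" -C "$(dirname "$EXPORT_DIR")" "Sample"
```

On a real system you would point EXPORT_DIR at the user's directory under import_export and check the resulting archive into your version control system.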

For Essbase, LCM will copy almost everything that is needed for an Essbase application, including:

  • Definitions of all databases within the applications
  • Outlines
  • Substitution Variables
  • Rule Files
  • Calculation Scripts
  • Report Scripts
  • Excel Files
  • Location Aliases
  • Security Filters

What it does not do

  • Data
  • Versioning
  • MXL files
  • Triggers
  • Partition definitions
  • Custom-defined macros/functions

Requirements and caveats


  • In addition to product provisioning, users must be provisioned with the LCM Administrator role.  For Essbase, a user needs Server Access, Calc, and Database Manager privileges.
  • Essbase must be in Shared Services mode
  • Environments must be on the same software release version
  • LCM does not create applications, so the target application must already exist, with the same name
  • During export, the source applications need to be running, and likewise, on import, the target applications need to be running.
  • When performing a migration, if the source artifacts have an earlier time stamp than the destination artifacts, they will not be migrated.
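Since LCM will not start applications for you, the "target must be running" requirement can be scripted around with MaxL.  A hedged sketch follows: the credentials, server, and application name are all placeholders, and the essmsh call is left commented out so the snippet stands on its own:

```shell
# Write a MaxL script that starts (loads) the target application before an
# LCM import.  'admin', 'password', 'essbase-server', and 'Sample' are
# placeholders -- substitute your own values.
cat > /tmp/start_app.mxl <<'EOF'
login 'admin' 'password' on 'essbase-server';
alter system load application 'Sample';
logout;
EOF

# essmsh /tmp/start_app.mxl   # run this on a box with the Essbase client
```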

OBIEE High Availability – Presentation Services and Scheduler

February 23, 2010

Presentation Services

With the BI services already set up in clustered mode, managed by the Cluster Controller, the next step is to configure a new cluster-aware ODBC connection. This connection communicates with the Cluster Controller instead of connecting directly to a BI server instance. Each Presentation Services instance in the cluster is then configured to use this new ODBC connection. On a Unix box, the connections are defined in the [OracleBI]/setup/odbc.ini file; if you are running on a Windows box, use the [Control Panel/Administrative Tools/Data Sources (ODBC)] wizard to set up a similar connection (make sure not to use a space/blank in the connection name, as this will not work). This new ODBC connection must be defined on each PS node we are setting up. Although there is nothing stopping you from naming each ODBC DSN differently, I really suggest you keep some consistency and stick to the same name.

The [OracleBIData]/web/config/instanceconfig.xml file on each node contains the configuration properties of the PS. The first interesting bit is which ODBC connection the PS uses to connect to a BI server. Find and change the <DSN> entry in this file to refer to our Cluster DSN instead of the AnalyticsWeb connection. This step needs to be done on each box we are setting up.
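For example, if you named the cluster-aware DSN "AnalyticsCluster" (the name is just my choice; use whatever you called yours), the entry would end up looking like this:

```xml
<DSN>AnalyticsCluster</DSN>
```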


Another thing to note is the <CatalogPath> entry. As all the PS need to share the same web catalog, we have to set up a shared directory (much like we did for the BI server repository and global cache) to store the catalog files. I have set up a share on /media/share/Catalog that I use for this purpose. Make sure you copy all your catalog directory structure to this shared directory.

<CatalogPath>/media/share/Catalog/samplesales</CatalogPath>
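The copy itself is a one-liner.  Here is a stand-in version using demo paths under /tmp; on a real node the source would be your existing catalog under [OracleBIData]/web/catalog and the destination would be the /media/share/Catalog mount:

```shell
# Demo paths only -- substitute your real catalog directory and shared mount.
SRC="/tmp/demo_OracleBIData/web/catalog/samplesales"
DEST="/tmp/demo_share/Catalog"
mkdir -p "$SRC" "$DEST"
touch "$SRC/root.atr"     # stand-in for one of the catalog's files
cp -a "$SRC" "$DEST"/     # -a preserves permissions and timestamps
```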

This takes care of what we need to run each PS in our cluster. Restart each service for the changes to take effect.


For the purpose of this write-up I am using a single OC4J instance. I am assuming that you already have an OC4J instance set up, running the analytics web application. In a simple deployment, the configured application is made aware of a single PS instance, and all incoming web requests are forwarded directly to that node. We can reconfigure this so that each PS in our cluster gets a round-robin assignment of the incoming requests. The file in question is called web.xml and can be found in the [OASHOME]/j2ee/home/applications/analytics/analytics/WEB-INF/ directory (if you deployed your WAR file locally in a simple stand-alone OBIEE manner, then OASHOME is replaced by [OracleBI]/oc4j_bi/). Instead of the two name/value parameters (host and port), we put in a single parameter that lists all the host:port values for the PS nodes.
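From my notes, the replacement init-param looks roughly like the following. Treat the parameter name and the semicolon separator as things to verify against the comments in your own web.xml, and the aravis host names and port 9710 as examples from this setup:

```xml
<init-param>
  <param-name>oracle.bi.presentation.Sawservers</param-name>
  <param-value>aravis1:9710;aravis4:9710</param-value>
</init-param>
```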




Make sure you restart the application after you edit this file.

Scheduler


Assuming that you have already gone through the default scheduler configuration on each node, adding the schedulers to the cluster is quite straightforward. In the [OracleBI]/server/Config/NQClusterConfig.INI file, add the following line:

SCHEDULERS = "aravis4", "aravis1";

(obviously replacing my server names with your own). This basically tells the Cluster Controller that the first server in the list, namely aravis4, will be the main/active scheduler, and aravis1 will be the passive one. The next step is to configure each scheduler to join the cluster. On each node, use the schconfig tool to change the Advanced settings of the Scheduler (choices 1, 3 and then 3) and set ‘Participant in a Cluster’ to True. Save your settings and exit the tool. Now restart the cluster controllers on each node and then start up the schedulers.


By default, each PS will communicate with the JavaHost running on the local machine, on the default port of 9810. We can, if the mood strikes us, decide to share the JavaHost services on each node with the other nodes, in (you guessed it) a round-robin manner. Would there really be any gain in doing so, though? Certainly we could put the JavaHost services on yet another set of nodes, which would then require that we configure each PS to use those. Again, edit the [OracleBIData]/web/config/instanceconfig.xml file and add the following code within the <ServerInstance> tags

<Host address="aravis1" port="9810" />
<Host address="aravis4" port="9810" />

Make sure to restart the JavaHost and PS services on each node. This method can also be used to let the PS service know that the JavaHost is running on a non-default port (i.e. you had a port conflict and changed the port in the [OracleBI]/web/javahost/config/config.xml file).
And remember: when in doubt, reboot. It can save you a lot of time and frustration to restart all the services each time, even though you might think that restarting one service is all that should be required.

Is Oracle BI not supporting MS-Windows???

February 23, 2010

When trying to install Oracle Business Intelligence Suite Enterprise Edition for Microsoft Windows on Windows XP, you may get the following error message:

“Oracle Business Intelligence is not supported on this Windows version. Oracle Business Intelligence is only supported on Windows XP x86, Windows 2003 x86, Windows 2003 AMD64, Windows 2003 EM64T, Windows Vista x86, Windows Vista AMD64, Windows Vista EM64T, and Windows 2000 x86.”

The installer uses the systeminfo.exe command to get information on the architecture of the machine. If this command produces any errors, they need to be checked and fixed first.

Run systeminfo.exe from the command line and see if it reports any errors. In our case we had the following error:



The command came back with an error for the Network Card information: “Error: Provider Load Failure”.


Since the installer uses this same command to get information on the architecture of the machine, the above error meant it was not able to derive the machine details, and as a result the installation failed.

Any errors from the systeminfo.exe command need to be fixed first.  In our case, after we fixed the Network Card issue, the installation went fine.

General OBIEE Winter Blues dedication

Not very OBIEE related. To folks on Google Wave, I’ll be back very soon – just haven’t been able to look at it (just another validation that a one-person forum is a very difficult thing to do).

Too much work, not enough rest. Rushing to complete a project.  Stress is affecting everyone. I think, however, that if you can’t win over stress, then it will win over you. Trying to eat well and get sufficient exercise.

Sun Certification FAQs

February 23, 2010

Will Oracle University support the Sun Certification Program?
Oracle plans to migrate Sun certification offerings into Oracle certification offerings. Changes to the program will be communicated to candidates on the Sun and/or Oracle Certification Program Web pages. Please check these sites frequently if you have questions or concerns.

Will my Sun certification continue to be valid?
Any Sun certifications that you have earned will continue to be recognized by Oracle and remain valid for the version specified by your credential. Retirement or decommissioning of any certification track will be announced on the Oracle Certification Program Web page.

Will I be required to recertify as a result of Oracle’s acquisition of Sun?
Credential holders will not need to retake exams in order to keep their current Sun credentials. Future certification offerings may require candidates to take an exam if they wish to upgrade. All program requirements will be explained on the Oracle Certification Program Web page.

Can I still get Sun certified? What should I do?
Yes. Sun certification is still available. Follow the paths and instructions that are posted on the Oracle Certification Program Web page. Any updates or changes in requirements will be posted there.

Will the Sun certification tracks or requirements change?
Sun certification tracks will be modified to follow Oracle’s certification model: Associate, Professional, Master, and Expert. All changes to the Sun certification credential names, tracks, and requirements will be available on the Oracle Certification Program Web page.

What is the exam registration process?
Currently, Sun exams are administered at Authorized Prometric Testing Centers. Candidates who wish to take Sun exams should register for those exams under “Sun Microsystems” on Prometric’s Web site.

How can I learn more about Sun certification?
Visit the Oracle Certification Program Web page for information about the Sun Certification Program. This site includes information on available tracks, requirements, and training.

Will my credential be branded Oracle or Sun?
For now, the credential will remain a Sun certification. In the future, Sun certification credentials will be fulfilled through the Oracle Certification Program. Any branding changes will be posted on the Oracle Certification Web page.

Will Oracle continue to sell individual and bundled ePractices?
Yes. ePractice exams will be available for individual purchase and also available within Classroom Value Packages.

How do I obtain vouchers for Sun exams through Oracle University?
Candidates will no longer need to obtain an exam voucher prior to registering for Sun exams at Prometric. Simply go to and pay the exam fee with your credit card during the registration process. If you already have an exam voucher, this can still be used as payment at Prometric when you register.

Will Oracle continue to offer the Certification Re-take Promotion?
Certification re-takes will be exclusive to Certification Value Packages. At this time, there are no plans to offer re-take promotions or re-take vouchers on a stand-alone basis.