Mapping Data Between Plan Types in Oracle Planning and Budgeting Cloud Service (PBCS): The Case for Map Reporting, XWRITE, or Data Management
Posted on November 23, 2016
Author: Mohan Chanila, Performance Architects

Over the last several years, the number of Oracle Planning and Budgeting Cloud Service (PBCS) applications has exponentially increased and, with this increase, we’ve seen several large on-premise applications migrated or simply moved onto PBCS. When migrating these applications to the cloud, the trend has been to either merge several on-premise applications into one PBCS application with multiple plan types or to simply move them to several PBCS applications.

Whatever the situation, there is always a requirement to move data in PBCS, either between two plan types in the same application or between applications. PBCS now offers several options to move data. In this blog, we'll examine three options and when each is suitable, depending on your individual data and mapping requirements.

At a high level, the three options are: creating a “Map Reporting Application;” writing a business rule that uses the “@XWRITE” function; and using the “Data Management” functionality. Let’s examine these options in more detail.

Option 1: Create a Map Reporting Application

Creating a “Map Reporting Application” is functionality that has been around for a few years now; it was first introduced in on-premise Oracle Hyperion Planning. The purpose was to give an administrator the ability to push data from a Planning-only BSO plan type to a reporting ASO plan type or to another BSO plan type (for example, from a staff planning cube to a high-level P&L cube). Before this functionality was available, data movement between BSO and ASO plan types was performed using data extracts and manual uploads into the ASO cube.

This was immediately recognized as a useful utility for functional administrators. In the latest release of PBCS, Oracle added functionality to automate map reporting, as well as to dynamically push data from a data form by creating data maps and assigning them to forms. Oracle has come a long way with this particular functionality.

I believe the pros and cons of map reporting include:

Pros:
  1. Very easy to set up.
  2. Fast execution time, provided you only push data and do not run a clear script prior to the data push.
  3. Maps dimensions that are not available in the destination application, or vice versa. For example, you could move employee salaries from a staffing cube to a high-level P&L cube which doesn’t contain the employee dimension.
  4. Maps a smart list to a dimension in the destination. For example, you could report on several attributes.


Cons:

  1. If you execute a clear script prior to pushing data, processing time can range from a few minutes to an hour, depending on the amount of data being pushed.
  2. Complex data mapping isn’t possible; for example, mapping several dimensions where certain members from the source application need to be mapped to very specific members in the destination application is not possible.
  3. Can only be executed by an administrator.
  4. Does not provide any log files if a job breaks.

Option 2: Create a Business Rule Using the @XWRITE Function

The XWRITE function has been around for much longer than map reporting applications and was previously the go-to method for moving data between BSO plan types. It is also very similar to map reporting in that you can map an entire dimension from one plan type to another that does not contain the dimension and vice versa. It can also be automated using “Jobs” in PBCS, or attached to a task list or a data form and executed by a planner.

The XREF function is the counterpart to XWRITE: XWRITE pushes the data while XREF pulls it. Put simply, XWRITEs are written in the source plan type, while XREFs are generally called from the destination plan type.
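As a rough sketch (the member names and the "PLCube" location alias below are illustrative placeholders, not from an actual application), an @XWRITE business rule in the source staffing plan type might look like this:

```
/* Runs in the source (staffing) plan type.
   "PLCube" is a location alias pointing at the destination plan type. */
FIX ("Working", "Plan", "FY17")
   "Total Salaries" (
      /* Push the aggregated salary value to the same member
         intersection in the cube behind the "PLCube" alias */
      @XWRITE("Total Salaries", "PLCube");
   );
ENDFIX
```

The location alias must be created and maintained by an administrator in the source plan type, which is why alias maintenance shows up as a consideration below.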

I believe the pros and cons of XWRITE include:

Pros:
  1. Executes extremely quickly with a small data set. In this example, “small” means we’ve typically used this for a handful of accounts, when moving aggregated employee/position data from a staffing model to a P&L model, across one scenario, version, or year. With this “small” data set, execution time could be just seconds. Larger data sets tend to take longer.
  2. Allows for customization, such as appending an aggregation script or performing complex mathematical calculations prior to the data push.
  3. Can be executed by a planner or administrator.
  4. Can be automated as part of EPM Automate scripts or jobs, or attached to a data form to run on save.
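For instance, the rule name, credentials, URL, and identity domain below are all placeholders; an EPM Automate script to run such a business rule on a schedule might look like:

```shell
# Log in to the PBCS instance (all values here are placeholders)
epmautomate login serviceadmin MyPassword https://example-pbcs.oraclecloud.com exampledomain

# Run the business rule that performs the @XWRITE data push
epmautomate runbusinessrule PushStaffingToPL

# End the session
epmautomate logout
```

A script like this can then be scheduled from any standard operating system scheduler, which is what makes the "jobs" automation above practical.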


Cons:

  1. Requires an administrator to maintain “Location Aliases,” and a minor change could break the job.
  2. Performance depends on how well the script is written and optimized.
  3. Provides a job status, but not a comprehensive log file with details on errors.

Option 3: Use Data Management

“Data Management” is another relatively new module that allows you to push data between plan types in PBCS; it is included with PBCS subscriptions and is the cloud equivalent of Financial Data Management Enterprise Edition (FDMEE).

Data Management jobs require some setup time, since the setup and workflow settings can be difficult to navigate, but it is worth the effort considering that Data Management can be automated in PBCS.

The single biggest difference between Data Management and the other two options is the ability to map multiple dimension members from your source plan type to very specific members in the destination. For example, if you have two dimensions, “Account” and “Entity,” and you have complex mapping between the source and target where several source accounts need to be mapped to different destination accounts with a similar concept on the entity, then Data Management is the best solution available to do this from within PBCS.
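As an illustration of such complex mapping (the account numbers and target member names here are hypothetical), a Data Management mapping table for the Account dimension might combine explicit and “Like” mappings:

```
Type      Source   Target          Description
Explicit  511000   OPEX_Salaries   Map the salary account directly
Explicit  512000   OPEX_Benefits   Map the benefits account directly
Like      6*       OPEX_Travel     All accounts beginning with 6 roll to Travel
Like      *        *               Pass any remaining accounts through unchanged
```

A similar table would be maintained for the Entity dimension, and Data Management applies both during the workflow, which is something neither map reporting nor @XWRITE can do.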

I believe the pros and cons of Data Management include:

Pros:
  1. Maps multiple dimension members from a source plan type to very specific members in the destination and essentially maintains a data map.
  2. Data maps can be uploaded, with functionality such as ranges, explicit mappings, and the “Like” function.
  3. Once the workflow is set up, it can be automated.
  4. Provides comprehensive log files with errors and warnings.


Cons:

  1. Can be complex to set up initially.
  2. Cannot be used in conjunction with a business rule such as a clear script, unless the business rule is executed as a separate job.

In my experience, all three options have distinct advantages depending on the specific needs of your application(s). I have personally found all three methods useful at one time or another and have implemented several of these approaches in practice.


© Performance Architects, Inc. and Performance Architects Blog, 2006 - present. Unauthorized use and/or duplication of this material without express and written permission from this blog's author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Performance Architects, Inc. and Performance Architects Blog with appropriate and specific direction to the original content.
