Monday 11 December 2017

What’s new in FDMEE 11.1.2.4.220

Well, it is almost a year since the 11.1.2.4.210 patch set was released, but the next patch set, .220, has finally landed.

Besides bug fixes there are some new features in the .220 release which I will quickly go through in this post.

I must admit I was expecting more of the functionality from EPM Cloud to be pushed down to on-premise in this release but unfortunately that has not been the case.

There are four new features listed in the patch readme, although one of them already existed in .210. Anyway, let us have a quick look; the first new feature is:

Import and Export of Mapping Scripts in Text Mapping Files

“FDMEE now supports the export and import of mapping scripts in a text file. This support includes both Jython and SQL scripts. The scripts are enclosed in a <!SCRIPT> tag.”

It is amazing it has taken this long to include custom scripting in mapping text files. To demonstrate, I will take the following mapping, which has a custom Jython script applied.
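For anyone who has not used them, a mapping script of this kind simply derives the target member from values on the source row. As a rough idea only, a Jython mapping script assigned to a #SCRIPT mapping might look something like the following; fdmRow and fdmResult are the usual FDMEE mapping script context objects, and the column name and logic are made up for illustration.

# Illustrative Jython mapping script
sourceValue = fdmRow.getString("ACCOUNT")
if sourceValue.startswith("11"):
    # map any account beginning with 11 to a fixed target member
    fdmResult = "CashAndEquivalents"
else:
    # otherwise pass the source value straight through
    fdmResult = sourceValue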


Then export the mapping using either the “Current Dimension” or “All Dimensions” option.


The file is then saved to a specified location under the FDMEE root folder.

If the exported mapping text file is opened you will see that the only difference from previous versions is that the custom script section is included.


The custom script can be spread across multiple lines and just needs to be enclosed within the <!SCRIPT> tags.

Now to import a mapping text file that contains a simple custom SQL script.


Just like in previous versions, the file needs to be uploaded or copied to a location under the FDMEE root folder and then imported.


The mapping file is selected.


The import mode and validation options are exactly the same as before.


Now the imported custom script is available with the mapping definition.


It is also possible to use the REST API to import and export the mappings, which you can read about here.
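To give a feel for how that looks, here is a minimal sketch in Python; the payload follows the jobs resource used for mapping imports, but the server, credentials, location and file name are placeholders rather than anything definitive.

import requests

# Illustrative only: server, credentials, location and file path are placeholders
url = "http://fdmeeserver:6550/aif/rest/V1/jobs"
payload = {
    "jobType": "MAPPINGIMPORT",
    "jobName": "ACCOUNT",                # dimension to import mappings for
    "fileName": "inbox/accountmap.txt",  # file under the FDMEE root folder
    "importMode": "MERGE",
    "validationMode": "false",
    "locationName": "LOC_DELIMITED"
}
response = requests.post(url, json=payload, auth=("admin", "password"))
response.raise_for_status()
print(response.json())  # returns a job id and status that can then be polled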

On to the next new feature:

Registering Duplicate Target Application Names

“FDMEE now enables you to register target applications with the same name. This may be the case where a customer has multiple service environments and the application name is the same in each environment, or the application names are identical in development and production environments. 

This feature enables you to add a prefix to the application name when registering the application so that it can be registered successfully in FDMEE and be identified correctly in the list of target applications.

Target applications with prefixes are not backward compatible, and cannot be migrated to an 11.1.2.4.210 or earlier release. Only a target application without a prefix name can be migrated to an earlier release.”

So basically, when you add a new target application you have the option to include a prefix.


I have already written about this functionality in EPM Cloud and it operates in the same way; you can read about it in more detail here.

The third new feature is:

Support Member Names with a Comma When Exporting to Planning

“When export to Planning, you can now export a dimension member name that contains a comma (,).  A new Member name may contain comma setting has been added to Target options, which enables the feature.”

This is another feature that has been pushed down from EPM Cloud where it appeared in the 17.07 release.

I have always wondered why it needs to be an option; it should be able to handle member names with commas by default.

The property can be set either at target application level.


or in the target options for a data load rule.


If the property value is not set it will default to no.

I had a look at this functionality in the past and found that it does not apply to all load methods.

First I take a rule with a load method of numeric data only and set “Member name may contain comma” to no.


In the following example there is an entity member containing a comma, and the export to the target planning application completes successfully.


The file that is produced before being loaded to the target application shows that the member containing the comma is enclosed in quotes, so there is no problem loading it.


There are no problems loading members containing commas with either of the numeric data only load methods.

Now I switch the rule to an all data types load method.


This time the export fails.


The process log contains the following error:

“The member Dummy does not exist for the specified plan type or you do not have access to it.”

Looking at the export file generated by FDMEE provides a clearer indication of why the load failed.


When the load method is set to “All Data Types”, the outline load utility (OLU) is used to load the data; with a numeric data only load method, an Essbase data load rule is created to load the data.

With the OLU method the Point-of-View contains a comma-separated member list, and because the member contains a comma the load fails; if it had been a driver member such as account that contained the comma, the load would not have failed.

Also contained in the log is a reference to the file delimiter.

DEBUG [AIF]: SELECT parameter_value FROM aif_bal_rule_load_params WHERE loadid = 1464 and parameter_name = 'EXPORT_FILE_DELIMITER'
DEBUG [AIF]: fileDelimiter: comma

The export delimiter will be set depending on the yes/no value in the “Member name may contain comma” property.

You can also see that the /DL parameter is set as part of the OLU load.

DEBUG [AIF]: Data Load: exportMode=STORE_DATA, loadMethod=OLU
Property file arguments: /DL:comma /DF:MM-DD-YYYY /TR

The POV is comma separated, so if you add a member that itself contains a comma into the POV the load will fail.
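To picture why that breaks, here is a purely illustrative bit of Python showing the comma splitting; this is not how the OLU actually parses the file, and the member name is made up.

# The Point-of-View field is itself a comma separated list of members, so a
# member name containing a comma is broken into two tokens and the first
# token ("Dummy") does not exist in the application.
pov = "No Product,Dummy, Entity,Working,Plan"   # the entity member is "Dummy, Entity"
print(pov.split(","))
# ['No Product', 'Dummy', ' Entity', 'Working', 'Plan']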

Let me repeat the process but this time set the property value to yes.


This time the export is successful.


The export file that is produced is tab delimited.


The value retrieved from the FDMEE repository is tab.

DEBUG [AIF]: SELECT parameter_value FROM aif_bal_rule_load_params WHERE loadid = 464 and parameter_name = 'EXPORT_FILE_DELIMITER'
DEBUG [AIF]: fileDelimiter: tab

The OLU field delimiter is set to tab using the /DL parameter.

DEBUG [AIF]: Data Load: exportMode=STORE_DATA, loadMethod=OLU
Property file arguments: /DL:tab

So, even though “Member name may contain comma” can be set for any load method, it only applies to the all data types methods and only affects members in the point-of-view.
Remember, if the property is not set it will default to no.

The final new feature is not really a new feature at all, as it exists with the same functionality in 11.1.2.4.210.

Support for REST APIs

“REST API can be used now in FDMEE to execute various jobs that run data load rules, batches, scripts, reports, and the import and export of mapping rules.”

I think the only difference is that the REST APIs are now officially supported, even though they were already referenced in the 11.1.2.4.210 FDMEE documentation.

I have previously written a couple of detailed posts about the REST API in FDMEE which you can read all about here and here.

There are some new features that have been missed from the patch readme, one of them being the data load method “All data types with auto-increment of line item”.


Not to worry, I have two posts about this functionality which you can read here and here.

Finally, with the all data types load method there is the option to apply mappings to the data dimension.


Guess what, I have also written about this functionality, which you can read about here; look for “Is mapping data values possible with the all data type?”

I think that covers off what’s new in FDMEE 11.1.2.4.220.

Saturday 2 December 2017

EPM Cloud – Data Management – Loading metadata is here

Well it is certainly a piece of functionality that has been on the roadmap for a long time but finally it has arrived in the 17.11 EPM Cloud release.

I must admit I was quite surprised that the functionality has been added, as the Enterprise Data Management Cloud Service offering will be released in the very near future. EDMCS will include application adaptors which will allow metadata integration with EPM Cloud, so I thought Oracle would try to push everyone down that route for metadata, but clearly not.

First of all, loading metadata is now available for any of the EPM cloud services that are built on the planning and budgeting platform, so this means it is supported for the following:
  • Oracle Planning and Budgeting Cloud Service
  • Oracle Enterprise Planning and Budgeting Cloud Service
  • Oracle Financial Consolidation and Closing Cloud Service
  • Oracle Tax Reporting Cloud Service 
The only supported source in this release is a flat file, and loading metadata is available for standard, custom and Smart List dimensions.

The format for the metadata is based on the same properties as importing metadata directly into a target EPM cloud application, so there should not be much of a learning curve. Behind the scenes the outline load utility (OLU) loads the metadata, which should be familiar if you have an on-premise planning background.
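To give an idea of the shape of the source file, the following sketch generates a cut-down account metadata file with a handful of the standard Planning import format properties; the property columns and member names are just for illustration.

import csv

# Illustrative only: a minimal account metadata file
header = ["Account", "Parent", "Alias: Default", "Data Storage", "Account Type"]
rows = [
    ["Revenue", "Income Statement", "Total Revenue", "Store Data", "Revenue"],
    ["Rev_Product", "Revenue", "Product Revenue", "Store Data", "Revenue"],
]
with open("account_metadata.csv", "w", newline="") as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(header)
    writer.writerows(rows)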

To test out the functionality I am going to go through the process of loading a couple of standard dimensions, a custom one and a Smart List.

To ensure I am testing with valid metadata I took exports of the Account, Entity, Product and Smart List dimensions from the Vision sample application, and edited the exported CSV files to include a selection of the most important properties.


I confirmed the files were valid by loading the CSV files back into the planning application, clearing all the members first. After successfully testing I cleared down the dimensions so they are empty again.

In theory, because I am using the same source files, I shouldn’t encounter any problems loading them through Data Management; I say in theory because reality can be totally different.

In Data Management, if you attempt to add a new application you will see there is a new option available called “Dimensions”.


After selecting “Dimensions” a new window will be displayed where a registered application can be selected and a prefix entered.


The prefix is optional but will probably be a good idea if you plan on carrying out multiple dimension builds with different properties enabled.

Six dimension type target applications will be automatically created; these are Account, Entity, Scenario, Version, Custom and Smart List.


The format for the name of the dimension application will be:

“Prefix” + “Application Name – ” + “Dimension Type”

In the dimension details, where you would be used to seeing dimension names, there is a list of dimension properties.


This is the same concept as in one of my previous blogs, where I created a custom application to build Essbase dimensions.

By default, the properties that are enabled are the dimension member (in this case Account), Parent and Alias.

The properties that are selected will be available in the import format, so you should select only the ones you are going to use. It is possible to create additional dimension applications using the prefix option, so you can have metadata builds with different properties enabled.

If you enable a property and save, the data table column name will be populated.


The column names don’t really make that much sense for metadata because they are the same ones that are used for data loading; they are just the names of the columns in the Data Management relational database tables and not something you should get too concerned about. The main thing is that once a property is enabled it is mapped to a table column name.

Each dimension type will have a different set of properties, so for example the Entity dimension will have a property called “Base Currency” because it is only applicable to Entities.

If a property is not in the list and is required then it is possible to manually add one.


The source file I am going to use to load the accounts metadata is:


The properties were enabled to match those in the file.


In the dimension application options there are properties available to define the dimension name and refresh the database; these settings can also be applied at load rule level. The dimension name only really needs to be set if you are using a custom dimension or want to override the dimension that the metadata will be loaded to.


On to the import format: the target dimension was selected and the file type set as “Delimited – All Data Type”, which is a requirement for loading metadata, probably so it switches the load over to the OLU.


The properties that were enabled are made available as targets in the import format mappings.


I mapped each column of the source file to the relevant target property. The data target can be ignored as it is not required; it appears as standard when using an all data type load.

The location is the same concept as with data loading and the import format was selected.


I created a new category called “Metadata” just to make the POV more meaningful; an existing one could have been used.


I didn’t opt for creating a new period mapping and stuck with an existing mapping.


I could have created a period mapping just for loading metadata, but I thought that by going down the route of loading by period the data would be stored and not overwritten each period; this could be useful if you want to view the metadata that was loaded against a given period.

Each of the dimension properties will be available to apply mappings to.


The mappings operate in the same way as with data loading, so it should not be anything new to you.

For this first example, I stuck with like for like mappings as the source file is in the correct format.


When creating a data load rule against a standard dimension you will see that the plan type is automatically set as “REGULAR”.


I uploaded and selected the source file.

In the rule target options, there are the same properties that are available at target application dimension level.


The dimension name can be left blank unless it needs to be overridden.

There is also the option to refresh the database, meaning that after the metadata is loaded to the target application it will be pushed down to Essbase; you would usually set this to yes if you plan to load data after the metadata load.

If you plan on loading multiple dimensions it is probably best to set the refresh option to yes for the last metadata load only; otherwise you will be refreshing after each load, which is definitely not efficient.

Now that everything is in place it is time to import the metadata into the workbench.


The mapping validation failed, and here lies the first problem: there are rows of metadata where the source will be null, which should be perfectly acceptable. In the above example the validation failed due to there being no member alias.

For metadata you would want to keep the target mapping as null where the source is null, but as Data Management has been primarily designed to load data, a target mapping value is usually required and validation fails if one doesn’t exist.

Whether this is the correct way or not, I created a SQL mapping to map null values to a space; a space should be ignored when loading to a target application.


After adding this mapping, the validation issue was fixed.


The workbench will have columns for source/target data which can be ignored as they only apply to loading all data type data.


On to the export which will load the metadata to the target application.


Next problem, the export failed.

If any rejections occur in the load to the target application the process will fail, although the valid records will still be loaded.


The process log contained the following information.

Possible circular reference detected, aborting sort and continuing with Load.  76 possible circular reference records found.
com.hyperion.planning.InvalidMemberException: The member 0001 does not exist for the specified cube or you do not have access to it.
com.hyperion.planning.InvalidMemberException: The member 0002 does not exist for the specified cube or you do not have access to it.
……… and so on
Outline data store load process finished. 266 data records were read, 266 data records were processed, 194 were accepted for loading (verify actual load with Essbase log files), 72 were rejected.


So for some reason a circular reference was found; this stopped the sorting of the records, which subsequently meant records were rejected because parent members did not exist.

I had a look at the file that Data Management produced and tried to load against the target application; it was the same as my original source file except for the order of the records.

I tested loading the file that Data Management produced directly into the planning application using the import functionality and it failed with the same type of errors; I then loaded my original source file and it imported perfectly.

It looks like the issue is down to a combination of the order of the members in the file that is being created by DM and that there are shared members involved.

The account dimension that was built looks like:


If I load the source file (not the one DM produces) directly to planning then it looks like:


Not only is there a problem with members being rejected, the order of the members is different.

Basically what happens is Data Management executes the following SQL to produce the file that is then loaded to the application.

DEBUG [AIF]: SELECT ACCOUNT,ENTITY,ICP,UD1,UD2,UD3,UD4,UD5,UD6,UD7,UD8
 FROM AIF_HS_BALANCES
 WHERE LOADID = 31
 ORDER BY 2,1

The order is defined by parent then member, in ascending order, so it looks like it doesn’t matter what order your source file is in, the target order will not match it, and there doesn’t currently appear to be any way to change this. It may not be an issue but it is something to be aware of.

Update: This issue should now have been resolved, as the data is ordered by the DATAKEY column, so the order matches the order in which it was inserted.

DEBUG [AIF]: SELECT ACCOUNT,ENTITY,ICP,UD1,UD2,UD3,UD4,UD5,UD6,UD7,UD8
 FROM AIF_HS_BALANCES
 WHERE LOADID = 31
ORDER BY DATAKEY

If you plan on loading formulas then you will probably hit issues if the length is over 80 characters; when the source file is loaded into the repository, be prepared for the following type of failure and error message:

ERROR [AIF]: EPMFDM-140274:Message - ORA-12899: value too large for column "DRMQHJQKBLPQ"."TDATASEG_T"."UD2" (actual: 1173, maximum: 80)

Using the import metadata feature in the planning application there is no problem loading large formulas.

Also, if the formula is split over multiple lines in the source file:


If it is loaded directly through the import metadata feature in the planning application the formula will load successfully; if the same source file is loaded through Data Management, the formula will be split over multiple records and end up completely messed up.


My assumption is that to be able to load formulas the character length will need to be under 80 and the formula will need to be on a single row.
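Until that changes, a quick pre-check of the source file can save a failed load; the sketch below simply flags formulas longer than 80 characters or spread over multiple lines, and the file name and column names are assumptions for this example.

import csv

# Illustrative pre-check before loading formulas through Data Management
with open("account_metadata.csv", newline="") as csv_file:
    for row in csv.DictReader(csv_file):
        formula = row.get("Formula") or ""
        if len(formula) > 80 or "\n" in formula or "\r" in formula:
            print("Check member {0}: formula is {1} characters".format(row["Account"], len(formula)))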

Moving on to the entity dimension, I followed the same process and besides the null source value mapping issue the metadata loaded successfully.


This produced the following dimension hierarchy in the planning application.


If I load the source file directly to planning then again you see there is a difference in the ordering.


The way the metadata has been ordered also caused an issue, with shared members occurring before base members.


Now for a custom dimension, which in my example is Product; I started out with the following source file:


In my source file, I have some level 0 members which have no parent and the aim is to map these to a parent named “P_Unmapped”.

Once again, the process for setting up the integration is the same. There is a slight difference with the target plan type name in the data load rule: the name will be automatically set as “CUSTOM”.


As this is a custom dimension and there could be multiple, the target dimension name needs to be defined in the load rule target options.


I added a SQL mapping to map the null parent members and assign them to “P_Unmapped”.


The null parent members were correctly mapped after importing the source file.


The export was successful and the hierarchy was built as expected.


So how about Smart Lists? Well, the concept is the same, and I started out with the following source file, which contained the definition for one Smart List.


The file could have contained multiple Smart List definitions; I just went for one for demonstration purposes.

The data load rule will have a default target plan type set as “SMARTLIST”.


There is no need to set the dimension name in the target options of the rule.

Besides having to map the null source properties there were no other differences to mention.


The Smart List was created in the planning application with valid entries.


If you want the display order to be by ID then it might be worth adding the entry ID property to the target dimension settings and including it in the source file; this way the order will be the way you want it.

Finally, I want to briefly cover off deleting members. If you look at the properties available in the dimension details of a target dimension application you will see there is a property called “Operation”.


This property controls whether the record being loaded performs an insert/update or a delete; the possible values for this property are:
  • update
  • delete level 0
  • delete idescendants
  • delete descendants
Usually you would not pay attention to the operation property, as if it is not specified the default is to update; if you want to delete members you can use one of the other values.

The aim of this next example is to use Data Management to delete the descendants of “P_Unmapped” in the product dimension.


For some reason, the custom dimension does not include the operation property by default so I added a new property.

So that I wouldn’t affect my original product metadata load, I added a new dimension application with a prefix and enabled only the “Parent” and “Operation” properties.


The source file contains the member and the operation to delete descendants.
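As an illustration only, the file can be as simple as the following; the column layout is an assumption based on the member and Operation properties being the ones enabled.

import csv

# Illustrative delete file: one record asking for the descendants of
# P_Unmapped to be removed
with open("product_delete.csv", "w", newline="") as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(["Product", "Operation"])
    writer.writerow(["P_Unmapped", "delete descendants"])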


The remaining setup steps are the same, so there is no need to go through them again; the full process was successful.


In the target application, the descendants of “P_Unmapped” were deleted.


In terms of automation the same applies as for loading data: the rules can be scheduled in Data Management, and if you want multiple rules to be run they can be added to a batch.

Uploading the source file and running the rule or batch can be carried out using EPM Automate; alternatively, the REST API can achieve the same results.
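As a rough sketch only, running one of the rules over REST looks much the same as running a data load rule; the instance URL, credentials, rule name, periods and file name below are all placeholders.

import requests

# Illustrative only: run a Data Management rule via the REST jobs resource
url = "https://cloud-instance.oraclecloud.com/aif/rest/V1/jobs"
payload = {
    "jobType": "DATARULE",
    "jobName": "Account Metadata Load",      # the metadata load rule
    "startPeriod": "Jan-18",
    "endPeriod": "Jan-18",
    "importMode": "REPLACE",
    "exportMode": "STORE_DATA",
    "fileName": "inbox/account_metadata.csv"
}
response = requests.post(url, json=payload, auth=("user@domain.com", "password"))
response.raise_for_status()
print(response.json())   # job id and status, which can then be polled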

Well, that concludes my first look into loading metadata through Data Management. I know I have experienced some issues with the functionality, but that could be because it is so new and I have not had the time to investigate; if anything changes or is factually incorrect I will update this post.