Saturday, 2 December 2017

EPM Cloud – Data Management – Loading metadata is here

Well, it is certainly a piece of functionality that has been on the roadmap for a long time, but it has finally arrived in the 17.11 EPM Cloud release.

I must admit I was quite surprised that the functionality has been added, as the Enterprise Data Management Cloud Service offering will be released in the very near future. EDMCS will include application adaptors that will allow metadata integration with EPM Cloud, so I thought Oracle would try to push everyone down that route for metadata, but clearly not.

First of all, loading metadata is now available for any of the EPM cloud services that are built on the planning and budgeting platform, so this means it is supported for the following:
  • Oracle Planning and Budgeting Cloud Service
  • Oracle Enterprise Planning and Budgeting Cloud Service
  • Oracle Financial Consolidation and Closing Cloud Service
  • Oracle Tax Reporting Cloud Service 
The only supported source in this release is a flat file, and loading metadata is available for standard, custom and Smart List dimensions.

The format for the metadata is based on the same properties as importing metadata directly into a target EPM Cloud application, so there should not be much of a learning curve. Behind the scenes the outline load utility (OLU) loads the metadata, which should be familiar if you have an on-premise planning background.

To test out the functionality I am going to go through the process of loading a couple of standard dimensions, a custom one and a Smart List.

To ensure I am testing with valid metadata I took exports of the Account, Entity, Product and Smart List dimensions from the Vision sample application, then edited the exported CSV files down to a selection of the most important properties.
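To give an idea of the layout, a cut-down example of an account metadata file would look something like the following; the members and property values here are purely illustrative and not the exact Vision export:

Account,Parent,Alias: Default,Data Storage,Account Type
4110,Gross Margin,Sales Revenue,Store,Revenue
5800,Total Expenses,Other Expenses,Store,Expense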


I confirmed the files were valid by loading them back into the planning application, having cleared all the members first. After testing successfully I cleared down the dimensions so they were empty again.

In theory, because I am using the same source files I shouldn’t encounter any problems loading them through Data Management; I say in theory because reality can be totally different.

In Data Management, if you attempt to add a new application you will see there is a new option available called “Dimensions”.


After selecting “Dimensions” a new window will be displayed where a registered application can be selected and a prefix entered.


The prefix is optional but is probably a good idea if you plan on carrying out multiple dimension builds with different properties enabled.

Six dimension type target applications will be automatically created, these are Account, Entity, Scenario, Version, Custom and Smart List.


The format for the name of the dimension application will be:

“Prefix” + “Application Name” + “ – ” + “Dimension Type”
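For example, registering the Vision application with a prefix of “MD” would give target application names along the lines of “MDVision – Account”, “MDVision – Entity” and so on.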

In the dimension details, where you would be used to seeing dimension names, there is a list of dimension properties.


This is the same concept as in one of my previous blogs where I created a custom application to build Essbase dimensions.

By default, the enabled properties are the dimension member, Parent and Alias, so in this case Account, Parent and Alias.

The properties that are selected will be available in the import format, so you should select only the ones you are going to use; it is possible to create additional dimension applications using the prefix option so you can have metadata builds with different properties enabled.

If you enable a property and save, the data table column name will be populated.


The column names don’t really make much sense for metadata because they are the same ones that are used for data loading; they are just the names of the columns in the Data Management relational database tables and not something you should get too concerned about. The main thing is that once a property is enabled it is mapped to a table column name.

Each dimension type will have a different set of properties, so for example the Entity dimension will have a property called “Base Currency” because it is only applicable to Entities.

If a property is not in the list and is required then it is possible to manually add one.


The source file I am going to use to load the account metadata is:


The properties were enabled to match those in the file.


In the dimension application options there are properties available to define the dimension name and refresh the database; these settings can also be applied at load rule level. The dimension name only really needs to be set if you are using a custom dimension or want to override the dimension that the metadata will be loaded to.


On to the import format: the target dimension was selected and the file type set as “Delimited – All Data Type”, which is a requirement for loading metadata, probably so that it switches the load over to the OLU.


The properties that were enabled are made available as targets in the import format mappings.


I mapped each column of the source file to the relevant target property; the data target can be ignored as it is not required, it just appears as standard when using an all data type format.

The location is the same concept as with data loading and the import format was selected.


I created a new category called “Metadata” just to make the POV more meaningful, an existing one could have been used.


I didn’t opt for creating a new period mapping and stuck with an existing mapping.


I could have created a period mapping just for loading metadata, but I thought that by going down the route of loading by period the data would be stored and not overwritten each period, which could be useful if you want to view the metadata that was loaded against a given period.

Each of the dimension properties will be available to apply mappings to.


The mappings operate in the same way as with data loading, so it should not be anything new to you.

For this first example, I stuck with like for like mappings as the source file is in the correct format.


When creating a data load rule against a standard dimension you will see that the plan type is automatically set as “REGULAR”.


I uploaded and selected the source file.

In the rule target options, there are the same properties that are available at target application dimension level.


The dimension name can be left blank unless it needs to be overridden.

There is also the option to refresh the database, meaning that after the metadata is loaded to the target application it will be pushed down to Essbase; you would usually set this to Yes if you plan to load data after the metadata load.

If you plan on loading multiple dimensions it is probably best to only set the refresh option to Yes for the last metadata load, otherwise you will be refreshing after each load, which is definitely not efficient.

Now that everything is in place it is time to import the metadata into the workbench.


The mapping validation failed and here lies the first problem: there are rows of metadata where the source will be null, which should be perfectly acceptable. In the above example the validation failed due to there being no member alias.

For metadata, you would want to keep the target mapping as null where the source is null, but as Data Management has been primarily designed to load data, a target mapping value is usually required and validation fails if one doesn’t exist.

Whether this is the correct way or not, I created a SQL mapping to map null values to a space, as a space should be ignored when loading to a target application.
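For anyone who hasn’t used them, a SQL mapping is typically defined as a “Like” mapping with the source set to “*”, the target value set to #SQL and a script containing a SQL expression. A rough sketch of the kind of script that pushes nulls to a space is below; the UD3 column is purely illustrative, as the actual column depends on which data table column the property has been assigned to:

UD3X =
CASE
  WHEN UD3 IS NULL THEN ' '
  ELSE UD3
END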


After adding this mapping, the validation issue was fixed.


The workbench will have columns for source/target data which can be ignored as they only apply to loading all data type data.


On to the export which will load the metadata to the target application.


Next problem, the export failed.

If any rejections occur in the load to the target application the process will fail, although the valid records will still be loaded.


The process log contained the following information.

Possible circular reference detected, aborting sort and continuing with Load.  76 possible circular reference records found.
com.hyperion.planning.InvalidMemberException: The member 0001 does not exist for the specified cube or you do not have access to it.
com.hyperion.planning.InvalidMemberException: The member 0002 does not exist for the specified cube or you do not have access to it.
……… and so on
Outline data store load process finished. 266 data records were read, 266 data records were processed, 194 were accepted for loading (verify actual load with Essbase log files), 72 were rejected.


So for some reason a circular reference was detected, which stopped the sorting of the records and subsequently meant records were rejected because parent members did not yet exist.

I had a look at the file that Data Management produced and tried to load against the target application; the file was the same as my original source file except for the order of the records.

I tested loading the file that Data Management produced directly into the planning application using the import functionality and it failed with the same type of errors; I then loaded my original source file and it imported perfectly.

It looks like the issue is down to a combination of the order of the members in the file being created by Data Management and the fact that there are shared members involved.

The account dimension that was built looks like:


If I load the source file (not the one DM produces) directly to planning then it looks like:


Not only is there a problem with members being rejected, the order of the members is different.

Basically what happens is Data Management executes the following SQL to produce the file that is then loaded to the application.

DEBUG [AIF]: SELECT ACCOUNT,ENTITY,ICP,UD1,UD2,UD3,UD4,UD5,UD6,UD7,UD8
 FROM AIF_HS_BALANCES
 WHERE LOADID = 31
 ORDER BY 2,1

The order is defined by parent then member, in ascending order, so it doesn’t matter what order your source file is in: the target order will not match it, and it doesn’t look like there is currently any way to change this. It may not be an issue but it is something to be aware of.

If you plan on loading formulas then you will probably hit issues if the length is over 80 characters; when the source file is loaded into the repository be prepared for the following type of failure and error message:

ERROR [AIF]: EPMFDM-140274:Message - ORA-12899: value too large for column "DRMQHJQKBLPQ"."TDATASEG_T"."UD2" (actual: 1173, maximum: 80)

Using the import metadata feature in the planning application there is no problem loading large formulas.

Also, if the formula is split over multiple lines in the source file:


If it is loaded directly through the import metadata feature in the planning application the formula will load successfully, but if the same source file is loaded through Data Management the formula will be split over multiple records and be completely messed up.


My assumption is that to be able to load formulas the character length will need to be under 80 and the formula will have to be on a single row.

Moving on to the entity dimension, I followed the same process and besides the null source value mapping issue the metadata loaded successfully.


This produced the following dimension hierarchy in the planning application.


If I load the source file directly to planning then again you see there is a difference in the ordering.


Because of the way the metadata has been ordered it also caused an issue with shared members occurring before base members.


Now for a custom dimension, which in my example is product; I started out with the following source file:


In my source file I have some level 0 members which have no parent, and the aim is to map these to a parent named “P_Unmapped”.

Once again the process for setting up the integration is the same; there is a slight difference with the target plan type name in the data load rule, which will be automatically set as “CUSTOM”.


As this is a custom dimension and there could be multiple of them, the target dimension name needs to be defined in the load rule target options.


I added a SQL mapping to map the null parent members and assign them to “P_Unmapped”.
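This follows the same #SQL pattern as the alias mapping shown earlier, just assigning the default parent instead of a space; again the column name is only illustrative:

ENTITYX =
CASE
  WHEN ENTITY IS NULL THEN 'P_Unmapped'
  ELSE ENTITY
END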


The null parent members were correctly mapped after importing the source file.


The export was successful and the hierarchy was built as expected.


So how about Smart Lists? Well, the concept is the same and I started out with the following source file, which contained the definition for one Smart List.


The file could have contained multiple Smart List definitions, I just went for one for demonstration purposes.

The data load rule will have a default target plan type set as “SMARTLIST”


There is no need to set the dimension name in the target options of the rule.

Besides having to map the null source properties there were no other differences to mention.


The Smart List was created in the planning application with valid entries.


If you want the display order to be by ID then it might be worth adding the entry ID property to the target dimension settings and including it in the source file; this way the order will be the way you want it.

Finally, I want to briefly cover off deleting members. If you look at the properties available in the dimension details of a target dimension application you will see there is a property called “Operation”.


This property controls whether the record being loaded performs an insert/update or delete, the possible values for this property are:
  • update
  • delete level 0
  • delete idescendants
  • delete descendants
Usually you would not pay attention to the operation property as, if not specified, the default is to update; if you want to delete members you can use one of the other values.

The aim of this next example is to use Data Management to delete the descendants of “P_Unmapped” in the product dimension.


For some reason, the custom dimension does not include the operation property by default so I added a new property.

So that I wouldn’t affect my original product metadata load, I added a new dimension application with a prefix and enabled only the “Parent” and “Operation” properties.


The source file contains the member and the operation to delete descendants.
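As an illustration, the file only needs a couple of columns, something along the lines of:

Product,Operation
P_Unmapped,delete descendants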


The remaining set-up steps are the same so there is no need to go through them again; the full process was successful.


In the target application, the descendants of “P_Unmapped” were deleted.


In terms of automation the same applies as with loading data: the rules can be scheduled in Data Management, and if you want multiple rules to be run they can be added to a batch.

Uploading the source file and running the rule or batch can be carried out using EPM Automate; alternatively the REST API can achieve the same results.
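As a rough sketch, assuming a rule named MD_ACCOUNT and a Data Management folder of inbox/metadata (both made-up names), an EPM Automate flow to upload a file and run the rule could look something like the following; the period names need to match whatever period mappings you are using:

epmautomate login user.name password https://instance.oraclecloud.com identitydomain
epmautomate uploadfile accounts.csv inbox/metadata
epmautomate rundatarule MD_ACCOUNT Nov-17 Nov-17 REPLACE STORE_DATA inbox/metadata/accounts.csv
epmautomate logout

If the rules have been grouped into a batch then runbatch can be used instead of calling rundatarule for each one.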

Well, that concludes my first look into loading metadata through Data Management. I know I have experienced some issues with the functionality, but that could be because it is so new and I have not had the time to investigate; if anything changes or is factually incorrect I will update this post.

Thursday, 30 November 2017

FDMEE/Data Management – common questions part 2

Moving on to the second part, where I am going to cover a few of the common questions and misunderstandings around the different load methods in FDMEE/Data Management, mainly concentrating on the “All data types” methods.

The different load methods can be defined at application or data load rule level, for EPM cloud the methods currently available are:


Unless you have a specific requirement to load non-numeric data it is best to just stick with the “Numeric Data Only” method. I have seen numerous posts in the Oracle forums where numeric data was being loaded but one of the all data type methods had been selected; this can cause problems due to the way all data type loads operate, which I will get on to later.

If you are not aware, the numeric data only method loads data behind the scenes using an Essbase data load rule; for the cloud a file is created during the export stage, an Essbase load rule is generated and the data is loaded using the file and rule.

The all data type method uses the outline load utility, which you may be familiar with if you have been involved with on-premise planning; data is loaded through the planning layer instead of directly to Essbase, which allows non-numeric data to be loaded.

I have already covered the different all data type load methods in detail in previous blog posts; the “All data types with auto-increment of line item” method used to be named “All data types in Planning File Format” and changed name without warning (this is the cloud, remember).


You can read the two posts I put together about this method here and here; they were written when the old naming convention was still in use, but this should not be a problem as the functionality is exactly the same, just the name has changed.

I also put together a post around loading non-numeric data with the “All data types with security” which you can read about here.

To complicate matters further, when loading data using this method as a non-administrator the user’s security is enforced and a REST API comes into play instead of the OLU. Luckily I put together three posts which try to explain how the security operates with cloud and on-premise for both Planning and Essbase; to start to understand the security mystery go here.

If you compare the load options currently available in on-premise FDMEE to the cloud then you will see they differ.


The numeric loads can be defined to load data using either a file-based Essbase data load rule, which is the same as the cloud, or an Essbase SQL-based load rule which loads the data directly from the FDMEE database repository.

There is only one all data type load method available and it does not include the security option.

I suspect these options will be aligned to the cloud in the next patch release, if this is the case I will update this post.

Back to EPM Cloud, there are a few strange functionality quirks when selecting one of the all data type load methods; these may well be bugs and go away over time, but at the moment here are some of the things to watch out for.

If you set one of the all data type load methods at target application level:


When you create a load rule, the rule will default to an all data type method and there is no option to override with “Numeric Data Only”.


So it is best to keep the target application load method at the default value of “Numeric Data Only”, as this way it is possible to select any of the methods when defining a load rule.

Another strange one is if you have a load rule set to a load method of “Numeric Data Only”


Then change to one of the all data type methods.


It is not possible to switch it back to numeric data only, the only values available are the all data type load methods.


If you try and enter “Numeric Data Only” manually and save then you get hit with an error.


If you try and use migration to import the rules back in, it doesn’t seem to overwrite the property. Yes, you can delete the rule and recreate it, but remember that if you delete a rule where data has already been loaded, the data could be deleted as well.

I do consider them to be bugs and if they are ever addressed and fixed I will update this post.

Right, on to an issue that I have seen raised a few times which is usually down to the load method selected:

“I am selecting the export method of “Add Data” but the data is being replaced in the target application”

To understand the problem, I think it is worth going through some examples of how the functionality works between numeric and all data type load methods.

To start with let us take a numeric type load with a simple source file that contains one row of data.


The import format was set to type “Delimited – Numeric Data” and the rule load method defined as “Numeric Data Only”.


The data was imported and validated in the workbench.

To load the data to the target application there are four export modes available.


The documentation provides a clear definition of what these options do; what it doesn’t mention is that they relate to the numeric data only load method.

Store Data — Inserts the value from the source or file into the target application, replacing any value that currently exists.

Replace Data — Clears all data for the POV in the target, and then loads from the source or file. For example, when you have a year of data in your Planning application but are only loading a single month, this option clears the entire year before performing the load.

Add Data — Adds the value from the source or file to the value in the target application. For example, when you have 100 in the source, and 200 in the target, then the result is 300.

Subtract Data — Subtracts the value in the source or file from the value in the target application. For example, when you have 300 in the target, and 100 in the source, then the result is 200.

Before loading the data, I created a form to match the POV and entered a value.


The data was then loaded and the export mode selected was “Add Data”


As you would expect the data value loaded was added to the existing data.


In the process logs you can see that an export file and Essbase data load rule are created.

INFO  [AIF]: Creating data file: /u03/inbox/outbox/Vision_1448.dat
INFO  [AIF]: Data file creation completed: 1
DEBUG [AIF]: Created rule file: AIF0286
INFO  [AIF]: Saved rule file to essbase server

The data is then loaded directly to Essbase using the data file and load rule.

If you have worked with Essbase you will know that the export mode options for replace, add and subtract match those that can be set in an Essbase data load rule.


If I run an export again and change the mode to “Store Data”.


This time the data in the target application will be replaced with that from the data load.


It all makes perfect sense, so now let us switch over to an all data type load; the import format was updated to a type of “Delimited – All Data Type” and the load method in the rule set to “All data types with security”.


The data value in the target application was set back to 500.


The data is reloaded into the workbench and the export mode set to “Add Data”; this is done with an administrator account, which means the load should be carried out by the OLU.


The process logs highlight the load method is now using the outline load utility (OLU).

DEBUG [AIF]: Overrode info.loadMethod for the admin user: OLU

Outline data store load process finished. 1 data record was read, 2 data records were processed, 2 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

Reloading the planning form shows that the data has been replaced instead of added to.


The reason for this is that it is simply the way the OLU operates, and to be honest the all data type method is really aimed at loading non-numeric data such as text, dates or Smart Lists; if you think about it, it just wouldn’t make sense trying to add non-numeric data.

How about if the process is repeated but this time with a non-administrator account? This should mean the REST API will be the method for loading the data (currently only EPM Cloud supports this).

Run the export again with export mode set to “Add Data”.


In the process log you can see that because it is an all data type load and a non-admin user, the load is carried out with the REST API; if you want to understand more about this method and its limitations then refer to my “diving into the Essbase and Planning security mystery” posts.

DEBUG [AIF]: LOAD_METHOD:HPL, LOAD_TYPE:DATA, EXPORT_MODE:ADD_DATA
DEBUG [AIF]: Overrode info.loadMethod for the non-admin user: REST
DEBUG [AIF]: No need to connect to Essbase server.
DEBUG [AIF]: requestUrl: http://localhost:9000/HyperionPlanning/rest/v3/applications/Vision/plantypes/Plan1/importdataslice
INFO  [AIF]: Number of rows loaded: 1, Number of rows rejected: 0

Refreshing the data form confirms the data has been added to the existing value.


With the REST API method, Data Management will generate a file containing JSON that is read and passed as the payload when calling the REST resource.

An example of the JSON created to load the data above is:

{
   "aggregateEssbaseData":true,
   "dateFormat":"MM-DD-YYYY",
   "cellNotesOption":"Skip",
   "dataGrid":{
      "columns":[
         [
            "Actual"
         ]
      ],
      "rows":[
         {
            "headers":[
               "1110",
               "110",
               "Working",
               "BaseData",
               "P_000",
               "FY17",
               "Nov"
            ],
            "data":[
               "400"
            ]
         }
      ]
   }
}

The REST resource has a parameter called “aggregateEssbaseData”; the value is set to true when the export mode of “Add Data” is selected, and if “Store Data” is selected then the value is set to false.
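To put the parameter into context, here is a minimal sketch in Python of calling the same REST resource directly; the instance URL and credentials are placeholders, and the payload is simply the JSON shown above with the parameter set to false to mimic “Store Data”:

import json
import requests

# illustrative placeholders only; the URL path mirrors the one in the process log
URL = ("https://instance.oraclecloud.com/HyperionPlanning/rest/v3/"
       "applications/Vision/plantypes/Plan1/importdataslice")

payload = {
    # True behaves like "Add Data", False like "Store Data"
    "aggregateEssbaseData": False,
    "dateFormat": "MM-DD-YYYY",
    "cellNotesOption": "Skip",
    "dataGrid": {
        "columns": [["Actual"]],
        "rows": [{
            "headers": ["1110", "110", "Working", "BaseData", "P_000", "FY17", "Nov"],
            "data": ["400"]
        }]
    }
}

response = requests.post(URL, auth=("user.name", "password"),
                         headers={"Content-Type": "application/json"},
                         data=json.dumps(payload))
print(response.status_code)
print(response.json())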

The documentation on the REST API has some important information to consider when loading text and numeric data together using this method.

“aggregateEssbaseData which has a true or false value. If true, the values being saved will be added to the existing values. Only numeric values can be aggregated. Cells with Smart list, Text and Date data types will be rejected. If false, the data values will be overwritten.”

If you set the export mode to add data and there is a mixture of numeric and non-numeric data, the non-numeric data should be rejected; to prove this I added a row to the source to load text to a text type member.


If there is no existing data for the text member then the export is successful, if there is existing data then the export fails.


The process log confirms the numeric data was loaded and the text data was rejected.

INFO  [AIF]: Number of rows loaded: 1, Number of rows rejected: 1
INFO  [AIF]: List of rejected cells: ["[Actual, Text Example, 110, Working, BaseData, P_000, FY17, Nov]"]
ERROR [AIF]: The data load process has failed.

So if you are loading numeric data that needs to be added to existing data then carefully consider the outcome when selecting a load method.

There are also differences between the load methods when loading multiple records against the same POV, I will go through an example to demonstrate this.

This time I have added an additional row to the source file which is against the same member combination as the previous row.

The load method has been set to “Numeric Data Only”


The data is summed before it is loaded to the target application which you can see by switching the view in the UI to target.


The data export file that is created before being loaded to the target application also confirms this.


As expected, the data in the target application matches the summed value, as the “Store Data” option was selected.


If we repeat the process using the “All data types with security” load method with an admin user then it acts differently.


Checking the form after a successful load reveals that only one data value has been loaded.


The data export file contains only one value.


This is because when using the all data type method, instead of the values being summed the maximum value is loaded to the target and replaces any existing value.

If the process is repeated using a non-admin account so the load method switches to using the REST API the functionality acts in the same way and the maximum value will be loaded.

Another reason to stick with the “Numeric Data Only” load method when loading numerical data.


On to the final question which is:

“Is mapping data values possible with the all data type?”

Currently it is not possible with on-premise, though I am sure that will change in a future patch. It wasn’t possible in EPM Cloud either until a while back, when a data dimension appeared from nowhere in the usual cloud type way; I don’t remember reading about it in the monthly update documentation.

If you look at the available dimensions in cloud there is a “Data” dimension available.


The data dimension is visible whether it is a numeric data only load or an all data type load, but it only relates to the latter; if you do try to use it for numeric data only loads then it will be ignored.

It can also be ignored for all data type loads if no mappings are required; if no mappings are applied the source will be mapped to the target automatically.

For on-premise you are going to be disappointed until it is updated.


There is nothing that different about using the data dimension for mappings, but it is worth putting together a quick demonstration of it in action.

I am going to go through a simple example of loading Smart List data; I created a new Smart List based on members in a hierarchy.


The name and label are automatically generated; the name is created by prefixing an underscore to the original member name and the label is a combination of member name and alias.

The Smart List was applied against two account members and a form created to check the output from the data management load.


The source file is extremely basic with the two account members and data value for the Smart List.


The import format was set as a file type of “Delimited – All Data Type”.

The data load rule load method was defined as “All data types with security”.


For this first example, no mappings were added to the data dimension.


The process log shows the data was loaded successfully as there were no rejections:

Outline data store load process finished. 2 data records were read, 3 data records were processed, 3 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

The data form shows that the values have been loaded though they don’t match any of the Smart List entries because they have not been mapped correctly.


In the above example the values loaded are numeric, so they still get loaded to Essbase even if they are mapped incorrectly. The next test was to try to load invalid Smart List entries with text data like this:


I cleared out the data entries in the form and ran the export again.

The load looks to be successful from the log as it records that data was loaded:

Outline data store load process finished. 2 data records were read, 3 data records were processed, 3 were accepted for loading (verify actual load with Essbase log files), 0 were rejected.

In reality no data will have been loaded, as it did not match any Smart List entries, which means there was no numerical ID associated with it to load to Essbase.


Moving on, back to the original source file but this time I will add a mapping to the data dimension.


I have used the format function to prefix the source value with an underscore so it matches the format for the name of the Smart List entry.


Refreshing the form shows that the data has been mapped correctly this time as the Smart List entries are displayed with their associated label.


It is also possible to map the data to the Smart List label instead of the name.


The data will be mapped successfully in the target application with either the name or the label of the Smart List entry.


I think that covers as much as I need to on all data type mappings as they operate in pretty much the same way as standard dimension mappings.

As always I hope you found this post useful, until next time..