My first FDMEE data load in EPM 11.2.2


Oracle Hyperion Financial Data Quality Management Enterprise Edition (FDMEE) is a data integration tool that helps transform data and metadata from a variety of sources into a consumable format for Hyperion products like Planning, HFM and Essbase.

In earlier versions, Oracle EPM included:

  • Classic FDM (Financial Data Quality Management): A general data integration tool to integrate from various sources to EPM apps.
  • ERPi (Enterprise Resource Planning Integrator): Another data integration tool, designed specifically for integrating data and metadata from ERP systems (E-Business Suite, PeopleSoft, SAP, etc.) to EPM applications using ODI.
Starting with Oracle EPM 11.1.2.3, classic FDM and ERPi were combined into a single product called FDMEE. It includes the functions of both classic FDM and ERPi: it uses the classic FDM interface, with ODI running behind the scenes.
🔎 Please note: In FDMEE, import scripts support Jython only. VB mapping scripts are no longer supported, so you will have to rewrite them as Jython or SQL mapping scripts (or replace them with multi-dimensional mappings).
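To give a feel for what such a rewrite might involve, here is a minimal sketch of the kind of logic a VB mapping script could be rewritten as. In an actual FDMEE Jython mapping script the source row and result are exposed through FDMEE-specific variables (noted in the comments); the rule, member names, and prefixes below are invented for illustration, so this is a sketch of the pattern, not a drop-in script.

```python
# Sketch of the logic a Jython mapping script might replace a VB script with.
# In FDMEE itself, the source row is typically read via a row object and the
# result assigned to a result variable; here the logic is wrapped in a plain
# function so it can run standalone. All rules and names are hypothetical.

def map_entity(source_entity):
    """Derive a target entity member from a source entity code."""
    # Example rule: the first two characters of the source code identify
    # a region, which becomes a prefix on the target member.
    region_by_prefix = {"10": "NA", "20": "EMEA", "30": "APAC"}
    region = region_by_prefix.get(source_entity[:2], "OTHER")
    return region + "_" + source_entity

print(map_entity("10Sales"))  # NA_10Sales
print(map_entity("99Misc"))   # OTHER_99Misc
```

The same branching logic could equally be expressed as a SQL mapping script or broken up into multi-dimensional mappings, as the note above suggests.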

In this blog, let’s take a look at how to load and validate data in FDMEE and then export the data to other EPM products, such as Planning.

Before using FDMEE, we must register the source system from which you want to import data, and then register the target application (such as Planning) to which you want to load the data from one or more source systems.

SYSTEM SETTINGS
First, let's use system settings to update system-level profiles that apply to the entire system.
On the Setup tab, under Configure, select System Settings:
Select Profile Type as File.
Define path for application root folder, in this case, the path is D:\Oracle\Middleware\user_projects\FDMEE.
The Application Root Folder identifies the root directory of the FDMEE application. This folder is located on the FDMEE server and functions as the root folder for all FDMEE activities. Based on this parameter, FDMEE saves log files, generated files and reports to the appropriate folder under this root directory.
When you select the File profile type, the System Setting screen displays the Create Application Folders button. This feature instructs the system to create a folder structure in the path specified in this field. The folder structure is (with sub-folders in each):
  • data (contains a copy of each file that is loaded by the system. It is also the root folder for the scripts directory)
  • inbox (if application folders are not used, then location folders are created in the top-level 'inbox' folder. When application folders are used, then location folders are created in the related application folder.)
  • outbox (stores export files created by FDMEE, any error log files from a target application, drill region load files, and drill load logs from the target)
Click Create Application Folders button, then FDMEE will create the folders and provide a confirmation as below:
Click on Save. It will give you a confirmation that the system settings are saved.
Below folders are created in the FDMEE server:
Create Location Folder instructs the system to create a location folder in the inbox when a location is created. Available values are Yes or No.
Archive Mode specifies whether archived files are copied or moved to the archive location. Enter Copy, Move, or None. If you select Copy, then the file is left in the inbox. If you select Move, then the file is copied to the archive folder and deleted from the inbox.
Batch Size specifies the number of rows read at a time from the file to memory. This parameter is mainly used for performance. When data is loaded, this setting determines how many records are stored in the cache.
Encrypted Password Folder specifies the directory where the files that store passwords in encrypted form are located.

Select the Profile Type as ODI. The fields will already be populated.
Click on Check ODI Connection button. It should give you a confirmation as below:
This should be enough for us to proceed further. The POV profile type specifies certain defaults that take precedence when no equivalent settings exist in Application Settings or User Settings. Leave the Other profile type settings as is.

SOURCE SYSTEM
Let's set up a Source System. On the Setup tab, under Register, select Source System. Click on Add. Since I will be loading from a file, I filled in the details below:
The ODI Context Code refers to the context defined in Oracle Data Integrator. A context groups the source and target connection information.
The default context code is GLOBAL.
The Drill-Through URL identifies the URL to use for drilling through. I'm skipping this for now.
The newly created source system will look something like this:

TARGET APPLICATION
Let's register a target application. On the Setup tab, under Register, select Target Application.
Select Add.
You will see two options:
  • Local: This application type refers to a local EPM application (on-premise deployment) in the current service.
  • Cloud: This feature enables you to adapt cloud deployments into your existing EPM portfolio.
Select Local.
Choose the Type as Planning.
Choose the desired application name. I'll choose POC.
Click OK.
In the Application Details grid for the target application, specify the dimension details. The Dimension Details tab shows the dimensions in the Planning application. In this case, POC application has 6 dimensions, which are Account, Entity, Period, Scenario, Version and Year.
Select the Target Dimension Class for each dimension that is not defined in the application. The dimension class is a property that is defined by the dimension type. For example, if you have a Period dimension, the dimension class is also “Period.” For Essbase applications, you must specify the appropriate dimension class for Account, Scenario, and Period.
In Data Table Column Name, specify the name of the column in the staging table (TDATASEG) where the dimension value is stored. DO NOT specify this for the Period, Scenario, and Year dimensions, as those will not be populated in the TDATASEG table.
In Sequence, specify the order in which the maps are processed.
For example, when Account is set to 1, Product is set to 2, and Entity is set to 3, then FDMEE first processes the mapping for Account dimension, followed by Product, and then by Entity.
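The ordering described above can be sketched in a couple of lines. This is just an illustration of how the Sequence numbers determine processing order, using the dimensions from the example:

```python
# Sequence numbers assigned to each dimension, as in the example above.
dimensions = {"Entity": 3, "Account": 1, "Product": 2}

# FDMEE processes the maps in ascending sequence order.
processing_order = [name for name, seq in sorted(dimensions.items(),
                                                 key=lambda kv: kv[1])]
print(processing_order)  # ['Account', 'Product', 'Entity']
```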
Click on Save. Your changes will be saved and you will be able to see the target application.
Select Refresh Metadata. This retrieves information about the target application, such as the number of dimensions and their types and names: what you see in Dimension Details.
Once refreshed, you will get a confirmation like the below:
Next, select Refresh Members. It refreshes dimension members from the target application. Once you click it, a process will be initiated.
You can check the completion status in the Process Details under Workflow tab:
If the Create Drill Region is checked, then you will be able to drill through to the source file or system (drill through is a feature in FDMEE that allows users to see where the data is coming from).
Let's run through some important Application Options:
Load Method: Specify the method for loading data from the TDATASEG staging table to Essbase. Below are the options:
  • File: Data is written to a data file in the outbox directory of the application (defined in System Settings). The file name is in the format <APPLICATION NAME>_<PROCESS_ID>.dat. It is then loaded to Essbase.
  • SQL: Uses SQL Method to load data.
Batch Size: Specify the batch size used to write data to file. The default size is 10,000.
Drill region: Select Yes to create a drill region. A drillable region is created to use the drill through feature.
Check Entity Calculation Method: Specifies the calculation method for check entities.
Available methods are:
  • dynamic: Check entity data is calculated based on the data at retrieval time. If set to dynamic, then the default Essbase calculation is executed.
  • calculation script: Check entity data is calculated based on a predefined calculation script.
Purge Data File: When a file based data load to Essbase is successful, specify whether to delete the data file from the application 'outbox' directory. Select Yes to delete the file, or No to retain the file.
Member name may contain comma: To export a dimension member name containing a comma to Planning, select Yes. Otherwise, select No.

Calculation Scripts: When loading data to Oracle Essbase or Oracle Hyperion Planning, FDMEE enables you to run custom calculation scripts. A calculation script is a series of calculation commands, equations, and formulas that enable you to define calculations other than those defined by the database outline. Calculation scripts are defined in Essbase and Planning. I'm skipping this step as of now.
Target applications are stored in the AIF_TARGET_APPLICATIONS table in FDMEE schema.

APPLICATION SETTINGS
On the Setup tab, under Configure, select Application Settings.
Pick the target application, then define the target application folder where the Planning application is going to store data files.
Click on the Create Application Folders button, then FDMEE will create three folders automatically: data, inbox, and outbox.
Click Save. Folders are created in the FDMEE server:
You can change the Log Level as per your requirements. The range of log levels in FDMEE is 0-5, with 0 producing no output and 5 generating the most granular information available. The size of the log file depends heavily on the defined log level. If you don't specify it here, this setting is picked up from System Settings.

IMPORT FORMAT
Import Formats determine which fields (columns) are extracted from the source system and how the data is stored in the FDMEE staging table. An import format defines the mapping between the source system segments and the target dimensions; basically, it defines the layout of the source data.
In this blog, I will use a comma delimited file. The following columns are included in the file:
Account, Entity, Period, Scenario, Version, Year, Amount.
The data file looks like this:
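The screenshot of the data file is not reproduced here, so as an illustration, here are a couple of hypothetical rows in that layout (member names and amounts are invented), parsed with Python's csv module the way FDMEE would read the delimited columns:

```python
import csv
import io

# Hypothetical rows matching the column layout above:
# Account, Entity, Period, Scenario, Version, Year, Amount
sample_file = io.StringIO(
    "1001,E100,Jan,Actual,Working,FY20,2500\n"
    "1002,E100,Feb,Actual,Working,FY20,3100\n"
)

columns = ["Account", "Entity", "Period", "Scenario", "Version", "Year", "Amount"]
rows = [dict(zip(columns, rec)) for rec in csv.reader(sample_file)]

print(rows[0]["Account"], rows[0]["Amount"])  # 1001 2500
```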
In the import format, we will define the location of these columns and map them to dimensions in the target application.
On the Setup tab, under Integration Setup, select Import Format. Click Add to add an import format.
Name: Name of the import format.
Source: Type of source. In this case, my source is file.
File Type: I have chosen 'Delimited - Numeric Data'. Below are the available types:
Drill URL: Enter the URL to use for drilling through when using this import format. I am not using drill-through functionality in this blog.
Description: Give a description of the import format.
Target: Select any EPM application as a target.
File Delimiter: Specify the delimiter. 
Click on Save.
The import format gets saved. 
In the Mappings grid of the import format, you map the columns in the source data file to the dimensions in the target application. I will be defining one-to-one mappings. 
The following figure shows a one-to-one mapping between segments in a source system and dimensions in a target EPM application.
Each member in the source is created as a single member in the target dimension. You will get the below screen:
You will not see Period, Scenario, and Year in the above screen because we did not specify the 'Data Table Column Name' for these dimensions when registering the target application. I have updated the source fields and the respective field numbers against the target dimensions.
“Period” dimensions are supported as columns in a data file. If you have data for multiple periods in a single file, then you can include the year and period on each row of the data file that gets loaded to the target application. You load a period as a column from a data file by defining the load definition through the import format, and the data load rule.
From the Import Format Detail Mapping grid, select the source column and click Add. Select Source Period Row then click on Year.
Map the source field.
Next, we will add a row for period:
Finally, the mappings will look like this:
You can also use the Build Format (import format builder). Click on it and select the data file to be loaded. 
The file contents are shown on the Import Format Builder screen.
Select the text to import. Here I selected 1001. In Assign selected text as Source Dimension Name, select N to specify the source dimension name yourself on the 'Enter Dimension Mapping Detail' screen.
Select Y to use the selected text as the source dimension name and a one-to-one mapping between the source dimension and target dimension members. I selected N.
Click Assign Dimension. You will get the below window:
In Source Dimension Name, specify the name of source dimension to which to assign the selected text.
In Select Target Dimension, select the target dimension to which to map the source dimension.
In Selection Field Number, specify the field number from the file to import (defaults to the field number from the file when text is selected).
Click OK. When you assign or change source dimension information, the information is shown in the Add New Mapping summary grid.
Do this for all the fields in the source file. The information is shown in the Add New Mapping summary grid. Click on OK.
NOTE: You might get this error while adding the mappings:
The import format UI in FDMEE is extremely buggy and often fails with the above error while saving the mappings. To get around this, try closing the FDMEE window, reopening it, and adding ONE MAP AT A TIME, starting with Account.
Note: You cannot delete an import format if it's tagged to a location. You need to either remove the tagging or delete the location first.

LOCATION
After defining the import format, create a location. You define locations to specify where to load data to a target system. That is, a location is the level at which a data load is executed in FDMEE. A location is associated with one source system. Each location is assigned an import format, one or more Data Load Rules, and a Data Load Mapping. Additionally, locations enable you to use the same import format for more than one target application where the dimensionality of the target applications is the same.
Under Integration Setup, select Location.
Specify the target application:
Click Add to add a location.
Name: Enter the location name.
Import Format: Enter the import format or click on the magnifying glass to select it.
Source is automatically selected as File.
Target is automatically selected as POC. This is because the import format is already tagged to POC target application.
🔒Since this is a very basic load, I am not using the other fields as of now. But it's good to have an idea of the below fields.
Parent Location: Parent mappings are used to share mappings with other locations. Enter mappings at the parent location, and the related locations can use the same mappings. Multiple locations can share a parent. This feature is useful when multiple locations use one chart of accounts. Changes to a child or parent mapping table apply to all child and parent locations. I'm leaving it blank because this is my first location.
Data Value: Specify the extra dimension that is used only for integration with multiple dimension target systems.
Logic Account Group: Logic Accounts generate additional accounts, or data, that is not derived from the source. They are generated during the import step and can reside either in the target application or in FDMEE. Logic accounts are dynamically generated accounts used to calculate values that are not provided in source files. Logic groups are associated with a target application. (The logic group list of values is filtered in the location user interface based on the target application in which it was created.) Like all other source accounts, logic accounts can be mapped and loaded into target systems.

Click Save. The location is saved.
TPOVPARTITION table (in FDMEE schema) contains the names of the locations and the “key” (PARTITIONKEY column) to join to the TDATAMAP table.

PERIOD MAPPING
You need to define period mappings in FDMEE to map your source system data to Period dimension members in the EPM target application. You have the flexibility to use various kinds of calendars (for example, monthly, weekly, or daily) based on your business and statutory requirements. In your EPM system, you can also use different calendars, based on your application requirements (for example, different levels of periods). Because FDMEE extracts the Enterprise Resource Planning (ERP) source system data to the target EPM application, establish the mapping relationship by defining a period mapping between the ERP source system periods and the target EPM application periods.
You can define period mappings in three ways:
  • Global Mapping: You define a global mapping in cases where you do not have many target applications getting data from multiple source systems with different types of source calendars. Use a global mapping to ensure that various periods are accommodated in the individual mapping. As a first step, define a global mapping.
  • Application Mapping: If you have multiple target applications, getting data from various source systems with complex period types, you can create application mappings in addition to global mappings. When you define an application mapping, you can modify the Target Period Month as necessary.
  • Source Mapping: Specifies source period mapping for adapter based integrations.
Under Integration Setup, select Period Mapping. Click Add and enter the period maps.
In this blog, I have added Global Mapping for FY20 only:
Select Application Mapping. Cross check the target application name:
Click Add. You will get the below window:
Select Jan. You'll get the below window:
Fill the pending details as shown below:
Do it for all the months.
Next, select Source Mapping. Select source system as file.
For E-Business Suite, Fusion, and PeopleSoft source systems, you can select explicit or adjustment mappings. For all other systems, you can select only an explicit mapping.
Click on Add. You'll get the below window:
Select Jan. You'll get the below window:
Fill the details as shown below:
The GLOBAL period maps are stored in the TPOVPERIOD table.
FDMEE allows you to update one row at a time. You can also update the same from this backend table through insert statements. Note: PERIODKEY and PRIORPERIODKEY need to be cast as DATE. Example below:
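The original post showed the insert statement as a screenshot, so here is a sketch that rebuilds the idea: a small Python helper that generates an Oracle-style INSERT with the key columns wrapped in TO_DATE. The column list (PERIODKEY, PRIORPERIODKEY, PERIODDESC, YEARTARGET, PERIODTARGET) is an assumption for illustration; verify your TPOVPERIOD table's actual columns before running anything like this against the FDMEE schema.

```python
# Builds an Oracle-style INSERT for TPOVPERIOD. Column names are assumed
# for illustration -- check the actual table definition in your FDMEE schema.

def tpovperiod_insert(period_key, prior_key, desc, year_target, period_target):
    """Return an INSERT statement with the two key columns cast via TO_DATE."""
    return (
        "INSERT INTO TPOVPERIOD "
        "(PERIODKEY, PRIORPERIODKEY, PERIODDESC, YEARTARGET, PERIODTARGET) "
        "VALUES ("
        "TO_DATE('%s','YYYY-MM-DD'), "
        "TO_DATE('%s','YYYY-MM-DD'), "
        "'%s', '%s', '%s')"
        % (period_key, prior_key, desc, year_target, period_target)
    )

sql = tpovperiod_insert("2020-01-31", "2019-12-31", "Jan-20", "FY20", "Jan")
print(sql)
```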

CATEGORY MAPPING
In FDMEE, when you set up a target application, a target dimension class needs to be specified for each dimension. The target dimension class is a property defined by the dimension type. For example, if you have a Period dimension, the dimension class is “Period”; for an Account dimension, the dimension class should be “Account”; the Entity dimension takes the “Entity” class, and the Scenario dimension takes the “Scenario” class.
Category Mapping is used to categorize and map source system data to the target application's Scenario dimension members. Once the Scenario dimension is set to the “Scenario” class, it will not show up in the data load mapping. Instead, it is set in the data load rule definition and determined by the category mapping.
On the Setup tab, and then under Integration Setup, select Category Mapping.
Select Global Mapping. Click Add.
I am not specifying anything in the application mappings. But if you want, you can add one as below:
Similar to the Period mappings, where Global mappings are stored in a table called TPOVPERIOD, the Global Category mappings are stored in a table called TPOVCATEGORY as shown in the next snapshot:
Note: CATEFREQ (Category Frequency) can have four values: Y, Q, M, and D, for Yearly, Quarterly, Monthly, and Daily.

DATA LOAD MAPPING
Create data load mappings in FDMEE to map source dimension members to their corresponding target application dimension members. Define the set of mappings for each combination of location, period, and category to which you want to load data.
On Workflow tab, under Data Load, select Data Load Mapping.
Select the Point of View (POV) to define the data load mappings for a specific location, period, and category. At the bottom of the page, click the current location name and define the POV in the Select Point of View dialog box:
Click OK.
On the Data Load Mapping page, define the mappings for each dimension. From the Dimensions drop-down, select the dimension that you want to map. Click on Account dimension.
Since my accounts are exactly the same in the file and the target application, I could specify a '*' mapping in the 'Like' mappings section. But here I will show how to create an 'Explicit' mapping.
Click on Explicit tab. Click on Add.
Source Value: Specify the source dimension member to map to the target dimension member.
Target Value: Specify the target dimension member to map the source dimension member to.
Apply to Rule: This is optional. This is to apply the mapping only to a specific data rule in the location.
Do this for all other required dimensions. In my example, I had to do for Entity & Version. 
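To make the Explicit vs. Like distinction concrete, here is a simplified sketch of how a single dimension's member could be resolved: an Explicit map wins over a catch-all Like '*' map (FDMEE evaluates Explicit mappings before Like mappings). The members and maps here are invented for illustration, and real FDMEE mapping evaluation also involves Between, In, and Multi-Dimension types that this sketch leaves out.

```python
import fnmatch

# Hypothetical maps for the Account dimension.
explicit_maps = {"1001": "CashOnHand"}
like_maps = [("*", "*")]  # Like '*' -> '*' passes the source member through

def resolve(member):
    """Resolve a source member: Explicit first, then Like patterns."""
    if member in explicit_maps:
        return explicit_maps[member]
    for pattern, target in like_maps:
        if fnmatch.fnmatch(member, pattern):
            return member if target == "*" else target
    return None  # unmapped -> shows up as a validation error in the workbench

print(resolve("1001"))  # CashOnHand (explicit map wins)
print(resolve("2001"))  # 2001 (falls through to the Like '*' map)
```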
Mappings are stored in TDATAMAP table:

DATA LOAD RULE
Create a data load rule for a specific location and category to define what you want to import from the source system. Data load rules are defined for locations that you have already set up. Data load rules are specific to locations. You can create multiple data load rules for a target application so that you can import data from multiple sources into a target application. The data load rule is created once but used each time there is a transfer.
On the Workflow tab, under Data Load, select Data Load Rule
From the POV Bar, select the location to use for the data load rule. Data load rules are processed within the context of a point of view. The default point of view is selected automatically. The information for the point of view is shown in the POV bar at the bottom of the screen.
Click Add.
Name: Enter the name of the data load rule.
Category: I have just one category. The categories listed here are those that you created in the category mapping section.
Period Mapping Type: Select the period mapping type for each data rule:
  • Default: The Data Rule uses the Period Key and Prior Period Key defined in FDMEE to determine the source periods mapped to each FDMEE period included in a Data Rule execution.
  • Explicit: The Data Rule uses the Explicit period mappings defined in FDMEE to determine the source periods mapped to each FDMEE period included in a data load rule execution. 
Import Format: Specify the import format.
Description: Enter description for the data load rule.
Target Plan Type: Specify the target plan type where data will be loaded to.
File Name: Specify the data file to be loaded.
Click on Save. Your changes will be saved.
With the rule selected, click Execute. In the Execute Rule dialog box, define the options for running the rule. You will get the below window:
Import from Source: FDMEE imports the data from the source system, performs the necessary transformations, and exports the data to the FDMEE staging table.
Select this option only when:
  • You are running a data load rule for the first time.
  • Your data in the source system changed. For example, if you reviewed the data in the staging table after the export, and it was necessary to modify data in the source system.
In many cases, source system data may not change after you import the data from the source the first time. In this case, it is not necessary to keep importing the data if it has not changed.
Recalculate: When the source system data has changed, you need to recalculate the data (remap all imported source data using the current mapping table and to recreate all logic accounts).
Export To Target: Exports the data to the target application. Select this option after you have reviewed the data in the staging table and you want to export it to the target application.
Execute Check: Generates the data, and then runs the Check Report.
Start Period: Select the beginning period of the POV from which to import the data from the source system.
End Period: Select the ending period of the POV to which to import the data from the source system.
Import Mode: Select the method for loading data:
  • Append: Existing rows for the POV remain the same, but new rows are appended to the POV (that is, appends the new rows in TDATASEG). For example, a first time load has 100 rows and second load has 50 rows. In this case, FDMEE appends the 50 rows to TDATASEG. After this load, the row total for the POV is 150.
  • Replace: Replaces the rows in the POV with the rows in the load file (that is, replaces the rows in TDATASEG). For example, a first time load has 100 rows, and a second load has 70 rows. In this case, FDMEE first removes the 100 rows, and loads the 70 rows to TDATASEG. After this load, the row total for the POV is 70.
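The row arithmetic in those two examples can be sketched as a tiny simulation of the POV's row set, assuming the simplified semantics described above:

```python
# Append keeps the existing POV rows and adds the new ones;
# Replace first removes the existing rows for the POV.

def load(existing_rows, new_rows, mode):
    if mode == "Append":
        return existing_rows + new_rows
    elif mode == "Replace":
        return list(new_rows)
    raise ValueError("unknown import mode: " + mode)

first_load = ["row%d" % i for i in range(100)]
print(len(load(first_load, ["r"] * 50, "Append")))   # 150
print(len(load(first_load, ["r"] * 70, "Replace")))  # 70
```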
Export Mode: This option is only available while using the Export to Target option.
  • Store Data: Inserts the data from the source or file into the target application. This replaces any value that currently exists.
  • Add Data: Adds the value from the source or file to the value that exists in the target application. For example, if you have 100 in the source, and 200 in the target, then the result is 300.
  • Subtract Data: Subtracts the value in the source or file from the value that exists in the target application. For example, if you have 300 in the target, and 100 in the source, then the result is 200.
  • Override All Data: Clears all data in the target, and then loads from the source or file. For example, if you have a year of data in your Planning application, but are only loading a single month, this option clears the entire year before performing the load.
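The first three export modes can likewise be sketched as operations on a single target cell, using the numbers from the examples above (Override All Data clears the whole target region rather than a single cell, so it is not modeled here):

```python
# Per-cell sketch of the Export Mode behaviors described above.

def export(target_value, source_value, mode):
    if mode == "Store Data":
        return source_value                  # replace whatever is in the target
    elif mode == "Add Data":
        return target_value + source_value   # accumulate into the target
    elif mode == "Subtract Data":
        return target_value - source_value   # subtract source from target
    raise ValueError("unknown export mode: " + mode)

print(export(200, 100, "Add Data"))       # 300
print(export(300, 100, "Subtract Data"))  # 200
print(export(200, 100, "Store Data"))     # 100
```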
Data load rules are stored in the AIF_BALANCE_RULES table:
Let's run the DLR. Click Run. You will get the below message:
Then wait for a couple of minutes. When it is done and successful, a green check mark will show up.
To verify the status of the process, there are two ways to check the log file:
1. On the Workflow tab, under Monitor, select Process Details. A green check mark indicates success for the Process ID.
You can click on Show to check the log file. This information is stored in the AIF_PROCESSES table in FDMEE schema.
Below are the process steps.
This information is stored in the AIF_PROCESS_DETAILS table in FDMEE schema.
2. In the FDMEE server, go to the application folder, under outbox\logs, find the related log file. 

DATA LOAD WORKBENCH
Let's check the import results. Under Data Load, select Data Load Workbench.
This tab displays the imported data.
The above data is for the Period 'Jan'. See the POV bar at the bottom of the page:
Now, let's change the period to 'Feb'. Below is the 'Feb' data:
At the top of the page, the Import and Validate tasks display a gold fish, which indicates a successful import and validation.
During validation, FDMEE applies data load mappings to map source members to target members. When validation errors occur during the Validate task, a Validation Errors tab displays (next to Load Data tab) the un-mapped source members.
When that happens, you will get a red cross mark and a grey fish:
You must review the un-mapped source members and then, in the Data Load Mapping task, adjust the mappings to correct all errors. After correcting the errors, execute the data load rule again and check the results in Data Load Workbench.
The workbench data is stored in the TDATASEG table in the FDMEE schema:
The TDATASEG table stores the data loaded by the user, along with the transformation between the source dimension members and the results of the mapping process. TDATASEG holds the actual data, while TDATASEG_T is a staging table that holds the data until a successful data execution (import), after which the data is copied to TDATASEG. Data is dropped from TDATASEG_T after a successful import. TDATASEG_T is also used for transforming data from source to target during a data rule execution.
When importing data, the mapping used to transform data is archived in the TDATAMAPSEG table for each point of view (POV).
If you use a large number of maps, this table grows in size over a period of time and can create a performance issue during the data import process. To improve performance, archive the data from the TDATAMAPSEG table.

After the source data has passed the validation process, use the Export option to export data to a target application. Select this option after you have reviewed the data in the data grid and are sure you want to export it to the target application. Click on Export. You will get the below window:
I will go with 'Store Data' option. 
Execution Mode: Select the mode for exporting the source data to the target application:
  • Online: ODI processes the data in sync mode (immediate processing).
  • Offline: ODI processes the data in async mode (runs in background).
I went with 'Offline' option. Got the below message:
Checked in Process Details, it was successful. 
Below are the process steps:
Now the Export fish turns golden:
Now change the POV period to 'Jan' and load the data:
After exporting data to the target system, execute the Check step to display the Check report for the current POV. If check report data does not exist for the current POV, a blank page is displayed.
FDMEE creates a '.dat' file (in 'outbox' directory) and then loads the data to Planning via a Load Rule that is dynamically created unless specified in the Load Rule Name field in Application Options of target application.

So, that's how I loaded a very simple 'multi-period' data file.
You can verify the data through a very simple form in Planning (this is a sample form with different data):


NOTE:
Please check the ODI connectivity prior to executing your DLR.
Check the ODI connectivity in Setup > Configure > System Settings > Profile Type: ODI
Next, go to the ODI console (Master Repository):
Under Browse, go to Topology, then Physical Architecture and then expand Agents.
Click on OracleDIAgent. You will get a test connection button. 
Click it, you should get a positive connectivity test:

That's all for this blog!
