SAP BusinessObjects Data Services 4.0
October 2013
English

Extension Guide for SAP Best Practices for Data Migration

SAP AG
Dietmar-Hopp-Allee 16
69190 Walldorf
Germany

Copyright © 2018 SAP SE or an SAP affiliate company. All rights reserved. No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP SE or an SAP affiliate company. SAP and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP SE (or an SAP affiliate company) in Germany and other countries. Please see http://global.sap.com/corporate-en/legal/copyright/index.epx#trademark for additional trademark information and notices. Some software products marketed by SAP SE and its distributors contain proprietary software components of other software vendors. National product specifications may vary. These materials are provided by SAP SE or an SAP affiliate company for informational purposes only, without representation or warranty of any kind, and SAP SE or its affiliated companies shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP SE or SAP affiliate company products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty. In particular, SAP SE or its affiliated companies have no obligation to pursue any course of business outlined in this document or any related presentation, or to develop or release any functionality mentioned therein. This document, or any related presentation, and SAP SE’s or its affiliated companies’ strategy and possible future developments, products, and/or platform directions and functionality are all subject to change and may be changed by SAP SE or its affiliated companies at any time for any reason without notice. The information in this document is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. All forward-looking statements are subject to various risks and uncertainties that could cause actual results to differ materially from expectations. Readers are cautioned not to place undue reliance on these forward-looking statements, which speak only as of their dates, and they should not be relied upon in making purchasing decisions.


Icons

The following icons mark special passages and transform references in this guide:
• Caution
• Example
• Note
• Recommendation
• Syntax
• Query Transform
• Validate Transform
• Merge Transform

Typographic Conventions

Type Style       Description

Example text     Words or characters that appear on the screen. These include field names, screen titles, pushbuttons as well as menu names, paths and options. Cross-references to other documentation.

Example text     Emphasized words or phrases in body text, titles of graphics and tables.

EXAMPLE TEXT     Names of elements in the system. These include report names, program names, transaction codes, table names, and individual key words of a programming language, when surrounded by body text, for example, SELECT and INCLUDE.

Example text     Screen output. This includes file and directory names and their paths, messages, source code, names of variables and parameters as well as names of installation, upgrade and database tools.

EXAMPLE TEXT     Keys on the keyboard, for example, function keys (such as F2) or the ENTER key.

Example text     Exact user entry. These are words or characters that you enter in the system exactly as they appear in the documentation.

<Example text>   Variable user entry. Pointed brackets indicate that you replace these words and characters with appropriate entries.


Contents

Extension Guide for SAP Best Practices for Data Migration ... 5
1 Purpose ... 5
2 Preparation ... 5
2.1 Prerequisites ... 5
3 Reuse in Data Services ... 5
3.1 Dataflows ... 6
3.2 Functions ... 6
4 Adding New Segments ... 6
4.1 Adding a new Migration Cockpit segment ... 6
4.2 Adding a new IDoc segment ... 25
5 Changing Rules ... 42
5.1 Validation Rule Changes ... 43
5.1.1 Mandatory ... 44
5.1.2 Format ... 44
5.1.3 Lookup ... 50
5.2 Enrichment Rule Changes ... 53
5.2.1 Predefined ... 54
5.2.2 User-Specific ... 61
6 Extending the Reconciliation Job ... 61
6.1 Overview ... 61
6.2 Prerequisites ... 62
6.3 Adding a New Column to an Existing Object ... 62
6.4 Adding a New Segment to an Existing Object ... 65
6.5 Adding a New Object ... 68
7 Extending the Migration Services Application and the Data Profiling Facility (including new lookup tables) ... 69
7.1 Customer Identification Numbers Example ... 70


Extension Guide for SAP Best Practices for Data Migration

1 Purpose

This guide provides the information you need to extend and modify the Data Services content that is an integral part of the SAP Best Practices Data Migration solution. It does not cover modifying or enhancing the Migration Cockpit on the SAP application side of the solution.

2 Preparation

2.1 Prerequisites

Before following the instructions in this guide, complete all steps from the SAP Best Practices Data Migration Quick Guide and the SAP Best Practices Data Migration Business Process Documentation guides for every object you intend to modify or enhance.

3 Reuse in Data Services

Data Services is an object-based tool: most components within Data Services are reusable and can also be replicated. All objects visible in the local object library are reusable and can be dragged onto and associated with objects other than their original parent. For example, a Workflow can be associated with many Jobs, and likewise a Dataflow can be associated with many Workflows. Tables can be used in multiple Dataflows, as can file formats. This functionality is very useful and reduces the development and maintenance cost of the solution deployed within the toolset.

One thing to consider, though, is that this reuse of objects means a change to a child object can affect multiple parent objects. Consider an example: create a Dataflow called DF_Test1 and two Workflows called WF_Test1 and WF_Test2, then associate DF_Test1 with both Workflows by dragging and dropping it onto the Design workspace of each Workflow. If you now change DF_Test1, the change affects both Workflows. This may be the desired behavior, but in certain situations you may not want to affect both parent processes. In that case, use the Replicate functionality, which is also available for most objects in the local object library. By replicating an object and associating the replica, rather than the original, with the parent object, you make a copy of the original that can be changed without affecting the other process flow.

The table below defines which objects can be replicated or reused.

Object Type        Reuse                                                          Replication
Project            No                                                             No
Job                Yes, within multiple projects                                  Yes
Workflow           Yes, within both Jobs and other Workflows                      Yes
Dataflow           Yes, within Jobs, Workflows, and other Dataflows               Yes
Tables             Yes, within Dataflows                                          No
File Formats       Yes, within Dataflows                                          Yes
Custom Functions   Yes, within Scripts, Dataflows, and Conditionals               Yes
                   (Ifthenelse and While)

3.1 Dataflows

Reusing Dataflows is particularly useful when you want to preserve the original mapping Dataflow and create a copy of it for each client implementation. This way you can keep templates set up for specific clients or source applications where you have already performed the mapping exercise, while maintaining a clean copy to revert to if needed.

3.2 Functions

If an existing custom function already contains a section of the syntax you need, you can replicate it. This preserves the original while saving you the development time of writing a complete new function.

4 Adding New Segments

The pre-delivered data migration content within Data Services has been built using the SAP Best Practices development content and contains all the data fields required for a standard implementation of the All-in-One solution for a specified country. When developing the content, we applied the rule that if any field of a segment was required for the SAP Best Practices development content, the complete segment was included in the Data Services content. For example, if the Customer Name field was required in the Customer General Data segment of the Customer object, then all fields of the General Data segment of the Customer object were included.

The data migration content offers two approaches to loading data into the SAP system: IDoc and Migration Cockpit. Migration Cockpit applies only to the business partner jobs Job_DM_Customer_MC, Job_DM_Customer_Ext_MC, Job_DM_Vendor_MC, and Job_DM_Vendor_Ext_MC; IDoc is used for the other objects. Since partners or customers may want to include additional segments to extend the Best Practices content coverage, this guide explains how to perform this work. The following sections detail the process of modifying the Data Services content so that these additional segments can be extracted, validated, enriched, and loaded to the SAP application.

4.1 Adding a new Migration Cockpit segment

The Migration Cockpit application is updated periodically: new object segments can be added, and the structure of existing segments can change. The following section describes the process required to add a new segment to the Data Services object processing job (Job_DM_<Object>_MC). In this guide, we assume that we are adding a segment that already exists in the standard Migration Cockpit structure, but one that we have not delivered as part of the standard Best Practices content. For this example, we will work with the Identification Numbers segment of the Customer object.

Procedure

1. First, examine the Migration Cockpit object to find the staging table name of the segment that you wish to add to the standard content.
2. Log on to the Migration Cockpit by entering transaction code LTMC in the SAP system.
3. Find the migration project you created and click the project name, then click the object name Customer. Examine the newly added segment, for example Identification Numbers, and note the staging table name of the Identification Numbers segment: /1LT/DSQKX003304.

4. Open Data Services and select the Datastores tab in the Local Object Library.
5. Expand the DS_HANA_MC Datastore, right-click the Tables node, and choose Import By Name.
6. Enter the Identification Numbers staging table name /1LT/DSQKX003304 and choose Import.
7. Expand the Tables node, open the table /1LT/DSQKX003304 by double-clicking its name, and examine the fields that make up the segment.
8. The fields we require for this segment are:
• KUNNR – Varchar (80) – Customer Number*
• TYPE – Varchar (80) – Identification Type*
• IDNUMBER – Varchar (60) – Identification Number*


9. Additionally, we must include the Migration Cockpit process flag field /1LT/PROCESSED so that the Migration Cockpit knows how to process the uploaded records. Its possible values are:
• '' – unprocessed (this is the value we should assign to the field)
• 'P' – processed
• 'E' – processed with error
10. Now that we know which segment we are going to add, we can start making the required changes to the legacy migration object Job. In this case, we will work on the Job_DM_Customer_MC Job.
11. First, create the Identification Numbers Excel data as the source. Locate the source data folder, for example C:\Migration_S4HANA\Source_Files_DI_IN, and open the source file CustomerMC.xls.
12. Add a sheet named Identification Numbers, following the format of the existing segment sheets; a sketch of the layout follows this step.
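A minimal sketch of the new sheet, assuming the same seven header rows as the delivered segment sheets (step 50 below sets Skip First to 7 for this reason); the sample data row is purely hypothetical:

   Sheet: Identification Numbers (rows 1–7: header information, as in the existing sheets)

   KUNNR       TYPE         IDNUMBER
   CUST0001    ID_TYPE_01   1234567890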

13. In the Local Object Library, select the Formats tab, right-click the Excel Workbooks node, and choose New.
14. Enter the format name MC_CUSTOMERIDENTIFICATIONNUMBERS and add the Identification Numbers fields as below:
• KUNNR – Varchar (255) – Customer Number*
• TYPE – Varchar (255) – Identification Type*
• IDNUMBER – Varchar (255) – Identification Number*

15. Enter the format values as below:
• Directory – $G_SourceData_Path
• File Name – CustomerMC.xls
• Worksheet – Identification Numbers


16. Choose OK to save the format.
17. Open the project DM_BPDM_IDOC so that all of the object Jobs are available in the Project Area of the Data Services Designer interface.
18. Select the Job Job_DM_Customer_MC in the Project Area.
19. The first modification we need to make to the Job is to add the segment processing and Migration Cockpit segment population variables, as we have for the other segments we process.
20. With Job_DM_Customer_MC selected and the Job open in the Design window, choose Tools → Variables from the Designer menu.
21. The Variables and Parameters dialog box is displayed.
22. Right-click the Global Variables segment in the Variables and Parameters window and choose Insert to create a new global variable.
23. Repeat step 22 to create a second new global variable, as we need one for the segment processing variable and one for the Migration Cockpit segment population variable.
24. Navigate to the bottom of the Variables and Parameters window and double-click the first newly created variable. The Global Variable Properties dialog box appears.
25. Change the name to follow the standards we use for segment processing variables for the Customer Master Job, for example $G_CMIdentNumbers_Req, set the Data type field to Varchar(1), and set the default value to 'Y'.
26. Repeat step 25, but call the variable $G_CMIdentNumbers_MC_Req and set the Data type and default value as in step 25.
27. You have now created the required variables for adding the new segment.
28. Choose Project – Save All from the menu to ensure that your changes are saved.


29. From the Design window, double-click the script SCR_Initialise to open it for modification. Scroll to the top of the script if it has not opened at the beginning and locate the variable definition sections for the segment processing and Migration Cockpit population variables, which look as follows:

   # Choose which map, validate, and enrichment steps to execute during the processing of the Job,
   # default is all steps set to 'Y'
   $G_CMGeneralData_Req = nvl( upper($G_CMGeneralData_Req) , 'Y');
   $G_CMBPRoles_Req = nvl( upper($G_CMBPRoles_Req) , 'Y');
   …

   # Choose which segment staging tables to populate in the creation of object
   $G_CMGeneralData_MC_Req = nvl( upper($G_CMGeneralData_MC_Req) , 'Y');
   $G_CMBPRoles_MC_Req = nvl( upper($G_CMBPRoles_MC_Req) , 'Y');
   …

Add the new variable $G_CMIdentNumbers_Req to the top section, just before the $G_GenerateMC_Req variable definition, using the same syntax as the other segment processing variables:

   $G_CMIdentNumbers_Req = nvl( upper($G_CMIdentNumbers_Req) , 'Y');

30. Repeat step 29, but this time place the variable definition for $G_CMIdentNumbers_MC_Req in the staging table population section, with the following syntax:

   $G_CMIdentNumbers_MC_Req = nvl( upper($G_CMIdentNumbers_MC_Req) , 'Y');

31. In the print statements section of SCR_Initialise for these variable types, add the respective print statements to record the settings of the variables at runtime in the Data Services Trace log:

   print('INFO - Customer Master Identification Numbers section variable $G_CMIdentNumbers_Req set to: '|| $G_CMIdentNumbers_Req );
   print('INFO - Customer Master Identification Numbers MC section variable $G_CMIdentNumbers_MC_Req set to: '|| $G_CMIdentNumbers_MC_Req );
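After both edits, the top section of SCR_Initialise would read roughly as follows (a sketch; the surrounding lines are abbreviated, and the exact form of the $G_GenerateMC_Req definition is assumed to follow the same nvl pattern):

   $G_CMGeneralData_Req = nvl( upper($G_CMGeneralData_Req) , 'Y');
   …
   # new segment processing variable, inserted just before $G_GenerateMC_Req
   $G_CMIdentNumbers_Req = nvl( upper($G_CMIdentNumbers_Req) , 'Y');
   $G_GenerateMC_Req = nvl( upper($G_GenerateMC_Req) , 'Y');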

32. We have now modified the script SCR_Initialise and completed the variable definitions. Close the script window in the Design workspace to return to the Job level view of the Customer Master object.
33. We now need to add a new segment processing conditional to the top level of the Job.
34. Select the Conditional object on the tool palette on the right-hand side of the Design window by clicking it, and then click in the Design workspace. This places a new Conditional object on the Job structure as shown below.
35. Name the new Conditional CustomerIdentNumbers_S_CUST_IDENT_Required.
36. Delete the line linking the CustomerContactPerson_S_CUST_CONT_Required Conditional and the ProfileMappingTables_Required Conditional, and join in the new Conditional object you just created, as shown below.

37. Double-click the new Conditional object and, in the If statement, enter $G_CMIdentNumbers_Req = 'Y'. This means the Then section of the conditional is executed only when the variable is set to 'Y'; otherwise the Else section is executed.

38. For ease of use, and to show how objects can be cut and pasted within Data Services, we will use this functionality, along with the replication functionality, to speed up the next steps of development.
39. Within the Conditional object we want to place three scripts and three Dataflows, as shown in the screenshot below (taken from the Tax Numbers segment).

40. Open the Tax Numbers conditional, right-click the SCR_SectionRequired script, and choose Copy. Then open the new Conditional object we created and paste the object by right-clicking in the Then window of the conditional, as shown below.

41. Repeat the process in step 40 for each of the scripts in the Tax Numbers conditional until all three scripts are positioned in the new conditional.
42. Open the SCR_SectionRequired and SCR_SectionNotRequired scripts and change the print statements to refer to the Identification Numbers segment we are now working on, as sketched below.
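The exact wording of these print statements is not prescribed; a minimal sketch (illustrative only, not the delivered text):

   # SCR_SectionRequired
   print('INFO - Customer Master Identification Numbers section is required and will be processed');

   # SCR_SectionNotRequired
   print('INFO - Customer Master Identification Numbers section is not required and will be skipped');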

43. Next, we need to create the new Dataflows required for mapping, validation, and enrichment of our new segment. For one of these objects we will use the replication functionality rather than starting from scratch; the other Dataflows we will create new, to show both aspects of this functionality.
44. For the mapping Dataflow, we will replicate an existing Dataflow. In the Local Object Library, select the Dataflows tab, expand the Dataflows section, and find the DF_DM_CustomerTaxNumbers_Map Dataflow object.
45. Right-click the object and choose Replicate. This creates a new object called Copy_1_DF_DM_CustomerTaxNumbers_Map. Rename it DF_DM_CustomerIdentNumbers_Map and drag this new Dataflow onto the conditional area after the SCR_SectionRequired script.
46. For the other two Dataflows, we will create new ones. Choose the Dataflow icon on the Tool Palette to the right of the Design workspace, then click in the Then window of the conditional to drop in two Dataflows. Name them DF_DM_CustomerIdentNumbers_Validate and DF_DM_CustomerIdentNumbers_Enrich respectively, and join the objects together for sequential processing.
47. Double-click the mapping Dataflow and delete the source Excel format and the target template table.
48. In the Local Object Library, drag the Excel format MC_CUSTOMERIDENTIFICATIONNUMBERS onto the mapping Dataflow design area and connect it to the Qry_BestPractices query object.

49. Choose the template table icon in the Tool Palette and click in the Dataflow design window to place a target template table on the Dataflow. Name the template table CUST_MC_IDENT_NUMBERS_MAP and ensure that it is associated with the DS_STG_OBJ_INIT Datastore. The Dataflow should look like the screen below.

50. Open the Excel format MC_CUSTOMERIDENTIFICATIONNUMBERS by double-clicking it and change the Skip First attribute to 7 to skip the header information in the Identification Numbers Excel sheet. Click the Back icon in the toolbar to return to the mapping Dataflow design window.

51. Open the Qry_BestPractices query object by double-clicking it. Delete all of the Tax Numbers fields in the Schema Out window, then drag all columns from the Schema In window to the Schema Out window. Rename the fields by prefixing the following symbols:
• * – the field is mandatory
• $ – the field has a default value
• + – the field has a lookup conversion

52. Click the Back icon in the toolbar, and then open the Qry_AllFields query object by double-clicking it.
53. In the Schema Out window on the right-hand side of the query object, select all fields from the *+KUNNR field to the *TAXNUM field inclusive. Right-click the selected field set and choose Delete. We keep the SOURCE_ID, DI_JOB_RUN_ID, DI_SAMPLE, DI_OWNER, DI_GENERATED_ROW_NUM, and DI_TIMESTAMP fields, as these are reused for this segment.

54. Drag all the fields in the Schema In window onto the SOURCE_ID field in the Schema Out window and choose Insert Above. The following screen shows how the Query should look:

55. Close the Query window, return to the Dataflow level view, and choose Validation – Validate – All Objects in View from the menu. If you have any errors, resolve them now and then save all objects from the Project menu.
56. Close the Dataflow and return to the conditional level view. Double-click the validation Dataflow DF_DM_CustomerIdentNumbers_Validate to open it.
57. In the validation Dataflow, drag the newly created template table CUST_MC_IDENT_NUMBERS_MAP from the DS_STG_OBJ_INIT Datastore (expand the Datastore and then the Template Tables header in the Datastores tab of the Local Object Library) onto the Design workspace of the Dataflow.
58. Create two additional template tables called CUST_MC_IDENT_NUMBERS_VAL and CUST_MC_IDENT_NUMBERS_INV in the Dataflow using the Tool Palette, with both tables associated with the DS_STG_OBJ_INIT Datastore.
59. Place the following items on the Dataflow:
• two Query transforms
• three Validation transforms
• two Merge transforms
All of these transforms can be dragged from the Transforms tab of the Local Object Library. Once they are dropped onto the Dataflow, name the transforms as shown below and connect the objects.

Ensure that the validation pass outputs are all connected to the MergeValid transform and all of the validation fail outputs are connected to the MergeInvalid transform.
60. Open the MapLegacy transform by double-clicking it, and drag all columns from the Schema In window to the Schema Out window as shown below.

61. Close the MapLegacy transform and open the Qry_SelectDistinct transform. Repeat the process in step 60 and drag all columns from the Schema In window to the Schema Out window. In this case, we need to ensure we get only a distinct output, as the validation transforms would otherwise create duplicate records: choose the SELECT tab and select the Distinct rows checkbox.
62. Close the Qry_SelectDistinct transform.
63. Open the Validate_Lookups transform. In the Schema In window, select the field *+KUNNR and choose the Add button. Enter the Name and Description, check Enabled and Ignore if NULL, and choose Column Validation. It should look as below:

64. From the dropdown list, choose the column on which to perform the lookup validation; the first is MapLegacy."*+KUNNR". Then choose Custom Condition in the Condition dropdown list. Highlight the contents of the custom condition box, delete it, and choose the Smart Editor button.
65. We validate that the customer number in the Identification Numbers segment exists in the General Data records by entering the following text in the Smart Editor:

   sql('DS_STG_OBJ_INIT','select count(*) from [db_owner(\'DS_STG_OBJ_INIT\', \'DS_OWNER\')].CUST_MC_GENERAL_DATA_MAP where "*+KUNNR" =\''|| MapLegacy."*+KUNNR"||'\'' ) > 0

66. Choose OK, and then choose OK again to complete this field rule.
67. The second field is *+TYPE: the identification type is validated by checking whether the value can be found in the lookup table LKP_CRM_BP_ID_CATEG.
68. Repeat step 63 for the field *+TYPE. In the Column Validation block, choose the column MapLegacy."*+TYPE" and the condition Exists In Table, and choose the browse button.
69. In the Input Parameter window, choose the Datastore DS_STG_MGMT_LKP, choose the table LKP_CRM_BP_ID_CATEG, and then choose the field LEGACY_ID_CATEG.
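As an aside, the same existence check can also be written as a Custom Condition with lookup_ext, the technique used for the IDoc variant later in this guide. A sketch, reusing the lookup parameters from the enrichment step below and assuming MapLegacy."*+TYPE" as the input column:

   lookup_ext([DS_STG_MGMT_LKP.DS_OWNER.LKP_CRM_BP_ID_CATEG,'PRE_LOAD_CACHE','MAX'],
              [ID_CATEG],[NULL],
              [LEGACY_ID_CATEG,'=',MapLegacy."*+TYPE"]) is not null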

70. Choose OK. When complete, the string DS_STG_MGMT_LKP.DS_OWNER.LKP_CRM_BP_ID_CATEG.LEGACY_ID_CATEG should appear in the box to the right of Exists in table.
71. This completes the lookup validations, as only the customer number and identification type columns have respective lookup tables. Section 7.1 of this document provides an example of changing the respective files in the Job_DM_Lookups_Initialise Job.
72. Moving on to the mandatory column validations: close the Validate_Lookups transform and open Validate_Mandatory_Columns. We need to validate that all three columns (customer number, identification type, and identification number) are populated with values.
73. Select the column *+KUNNR in the Schema In window and choose the Add button. Check Enabled, uncheck Ignore if NULL, and choose the condition IS NOT NULL from the dropdown list. It should look as below:

74. Repeat step 73 for the *+TYPE and *IDNUMBER columns.
75. Close the Validation transform Validate_Mandatory_Columns and open Validate_Format.
76. For the format checks, we only need to validate the format of columns that are not lookups, which in this case means only the *IDNUMBER column.
77. Select the *IDNUMBER column in the Schema In window and select the Enabled checkbox. Because the mandatory validation has already been done in Validate_Mandatory_Columns, also check the Ignore if NULL checkbox so that no format validation failure is created for NULL values coming through the mapping phase.
78. Choose the column MapLegacy."*IDNUMBER", choose Custom Condition from the dropdown list, and enter the following expression in the custom condition box:

   length( MapLegacy."*IDNUMBER" ) <= 60

This checks that the value passed into the column is no longer than 60 characters, the allowed limit of the target staging table field. You can use any Data Services function you like to validate the format of the field, including the is_valid functions. The only rule is that the function or expression must evaluate to True or False.
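For example, a stricter rule that accepts only digits could be sketched with the match_regex function (illustrative only; the delivered content checks only the length):

   match_regex( MapLegacy."*IDNUMBER" , '[0-9]{1,60}' , NULL ) = 1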

79. This completes the validation transforms and the validation Dataflow.
80. Choose Project – Save All.
81. Close the Dataflow and return to the conditional level object.
82. Double-click the SCR_DeleteFailedRecords script.
83. Enter the following text, replacing the existing text:

   # Delete the records from the _VALID table that exist in the _INVALID table
   sql('DS_STG_OBJ','delete from [db_owner(\'DS_STG_OBJ\', \'DS_OWNER\')].CUST_MC_IDENT_NUMBERS_VAL where DI_GENERATED_ROW_NUM in (select distinct DI_GENERATED_ROW_NUM from [db_owner(\'DS_STG_OBJ\', \'DS_OWNER\')].CUST_MC_IDENT_NUMBERS_INV)');

This deletes any records in the _VAL table that failed one of the validations but passed another, so that those records are not incorrectly processed.
84. Close the script and double-click the DF_DM_CustomerIdentNumbers_Enrich Dataflow to open it.
85. Drag the CUST_MC_IDENT_NUMBERS_VAL template table from the DS_STG_OBJ_INIT Datastore in the Datastores tab of the Local Object Library onto the Dataflow. This creates the table as a source for the Dataflow.
86. Drag a Query transform onto the Dataflow from the Transforms tab, then select the template table icon on the Tool Palette. Click in the Dataflow and call the template table CUST_MC_IDENT_NUMBERS_ENR. The Dataflow should look like the following screen.

87. Double-click the Qry_Enrich query to open it.
88. We now need to apply the enrichment required for the segment. First, select all of the columns in the Schema In window and drag them to the Schema Out window. For each column in the Schema Out window, change the name to remove any special characters (that is, *, +, or $) and change the data type to that required in the staging table. In this case:

• KUNNR is VARCHAR (80)
• TYPE is VARCHAR (80)
• IDNUMBER is VARCHAR (60)

89. For the KUNNR and IDNUMBER columns, we do not need to do anything else: this is a straight mapping, and we have already validated that the data values fit the columns. If a column is not mandatory and no value comes from the source data, assign an empty string rather than NULL when uploading to the staging table, because NULL values cause errors in the staging table. Such a mapping looks like:

   nvl( CUST_MC_IDENT_NUMBERS_VAL."*+KUNNR" ,'')

90. For the TYPE column, we need to perform the lookup that translates the legacy value to the newly assigned SAP identification type value. Choose the Functions… button on the Mapping tab, select Lookup Functions on the left and Lookup_ext on the right, and enter the information as shown below.

The resulting syntax in the Mapping tab is as below:

   lookup_ext([DS_STG_MGMT_LKP.DS_OWNER.LKP_CRM_BP_ID_CATEG,'PRE_LOAD_CACHE','MAX'],
              [ID_CATEG],[NULL],
              [LEGACY_ID_CATEG,'=',CUST_MC_IDENT_NUMBERS_VAL."*+TYPE"])
   SET ("run_as_separate_process"='no', "output_cols_info"='')

91. Choose Finish and ensure that all the enrichment rules are valid by choosing Validation – Validate from the menu. If there are any errors, resolve them and then choose Project – Save All. The enrichment query should look as follows.

92. Close the Qry_Enrich transform and close the Dataflow to return to the conditional level. You have now completed the Data Services content to map, validate, and enrich the new Identification Numbers segment for Customer data migration. We now need to ensure that this data is included in the upload to the staging tables.
93. Close the conditional level object and return to the Job level. Open the CustomerMaster_GenerateMC conditional and double-click the Workflow WF_DM_CustomerMaster_GenerateMC to open it. Then open the Dataflow DF_DM_CustomerMaster_CreateMC.
94. You can see the enriched data tables from each of the required segments being used to load the respective staging tables.
95. In the Datastores tab of the Local Object Library, drag the CUST_MC_IDENT_NUMBERS_ENR template table from the DS_STG_OBJ_INIT Datastore onto the Dataflow as a source, and the staging table /1LT/DSQKX003304 from the DS_HANA_MC Datastore as a target.
96. Drag a Query transform onto the Dataflow from the Transforms tab and name it IDENTIFICATIONNUMBERS. Connect these three objects as shown in the following screen.

97. Open the query IDENTIFICATIONNUMBERS by double-clicking it, and drag the fields in the Schema In window onto the corresponding fields in the Schema Out window. For the Schema Out field /1LT/PROCESSED, assign the value ' ' in the Mapping tab.
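In outline, the output mappings of this query are straight moves plus the constant process flag (a sketch):

   KUNNR          <-  CUST_MC_IDENT_NUMBERS_ENR.KUNNR
   TYPE           <-  CUST_MC_IDENT_NUMBERS_ENR.TYPE
   IDNUMBER       <-  CUST_MC_IDENT_NUMBERS_ENR.IDNUMBER
   /1LT/PROCESSED <-  ' '   (record not yet processed by the Migration Cockpit)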

98. Close the IDENTIFICATIONNUMBERS transform and return to the Dataflow design area.
99. Open the staging table /1LT/DSQKX003304 by double-clicking it and change the Mode from Append to Truncate in the Bulk Loader Options tab.

100. Close the staging table and choose Validation – Validate – All Objects in View from the menu to check that everything is correct. If not, correct the validation errors and repeat the process. Warnings are acceptable, as they result from data type conversions that Data Services applies automatically.
101. Choose Project – Save All.
102. The Job is now complete and ready for deployment.

4.2 Adding a new IDoc segment

The following section describes the process required to add a new segment to the Data Services object processing job (Job_DM_<Object>_IDOC). Creating a new segment is necessary when attributes beyond the standard SAP Best Practices delivered content must be migrated to the target SAP application. The need for new segments can arise from populating segments specific to the customer's deployment of SAP that differ from the standard Data Migration content, or from adding a brand new segment to the IDoc to cover customization of the target SAP application. In this guide, we assume that we are adding a segment that already exists in the standard IDoc structure, but one that we have not delivered as part of the standard Best Practices content. For this example, we will work on the Customer Master object.

Procedure

1. First, examine the IDoc structure to find the segment metadata that we wish to add to the standard content.
2. Open Data Services and select the Datastores tab in the Local Object Library.
3. Expand the DS_SAP Datastore and then the IDOCs subfolder. Double-click the DEBMAS06 IDoc, or right-click the IDoc and choose Open.
4. Expand the IDoc structure for the E1KNA1M segment by clicking the + icon next to the segment header, as shown in the screenshot below.
5. Expand the segment E1KNASM – Master customer master additional EU tax number, which we use for this example, and examine the fields that make up the segment.
6. The fields we require for this segment are:
• LAND1 – Varchar (3) – Country Key
• STCEG – Varchar (20) – VAT Registration Number

7. Additionally, we must include the legacy customer number field so that we can join this segment data to the correct customer basic segment parent data.
8. Now that we know which segment we are going to add, we can start making the required changes to the legacy migration object Job. In this case, we will work on the Job_DM_CustomerMaster_IDOC Job.
9. Open the project DM_BPDM_IDOC so that all of the object Jobs are available in the Project Area of the Data Services Designer interface.
10. Select the Job Job_DM_CustomerMaster_IDOC in the Project Area.
11. The first modification we need to make to the Job is to add the segment processing and IDoc segment population variables, as we have for the other segments we process.

12. With Job_DM_CustomerMaster_IDOC selected and the Job open in the Design window, choose Tools → Variables from the Designer menu.
13. The Variables and Parameters dialog box is displayed.
14. Right-click the Global Variables segment in the Variables and Parameters window and choose Insert to create a new global variable.
15. Repeat step 14 to create a second new global variable, as we need one for the segment processing variable and one for the IDoc segment population variable.
16. Navigate to the bottom of the Variables and Parameters window and double-click the first newly created variable. The Global Variable Properties dialog box appears.
17. Change the name to follow the standards we use for segment processing variables for the Customer Master Job, for example $G_CMEUTaxNumbers_Req, set the Data type field to Varchar(1), and set the default value to 'Y'.
18. Repeat step 17, but call the variable $G_CMEUTaxNumbers_IDoc_Req and set the Data type and default value as in step 17.
19. You have now created the required variables for adding the new segment.
20. Choose Project – Save All from the menu to ensure that your changes are saved.
21. From the Design window, double-click the script SCR_Initialise to open it for modification. Scroll to the top of the script if it has not opened at the beginning and locate the variable definition sections for the segment processing and IDoc population variables, which look as follows:

   # Choose which map, validate, and enrichment steps to execute during the processing of the
   # Job, default is all steps set to 'Y'
   $G_CMBasicData_Req = nvl( upper($G_CMBasicData_Req) , 'Y');
   $G_CMSalesData_Req = nvl( upper($G_CMSalesData_Req) , 'Y');
   …

   # Choose which segments to populate in the creation of the IDoc
   $G_CMBasicData_IDoc_Req = nvl( upper($G_CMBasicData_IDoc_Req) , 'Y');
   $G_CMSalesData_IDoc_Req = nvl( upper($G_CMSalesData_IDoc_Req) , 'Y');
   …

Add the new variable $G_CMEUTaxNumbers_Req to the top section, just before the $G_GenerateIDOC_Req variable definition, using the same syntax as the other segment processing variables:

   $G_CMEUTaxNumbers_Req = nvl( upper($G_CMEUTaxNumbers_Req) , 'Y');

22. Repeat step 21, but this time place the variable definition for $G_CMEUTaxNumbers_IDoc_Req in the IDoc population section, with the following syntax:

   $G_CMEUTaxNumbers_IDoc_Req = nvl( upper($G_CMEUTaxNumbers_IDoc_Req) , 'Y');

23. In the print statements section of SCR_Initialise for these variable types, add the respective print statements to record the settings of the variables at runtime in the Data Services Trace log:

   print('INFO - Customer Master EU Tax Numbers section variable $G_CMEUTaxNumbers_Req set to: '|| $G_CMEUTaxNumbers_Req );
   print('INFO - Customer Master EU Tax Numbers IDoc segment variable $G_CMEUTaxNumbers_IDoc_Req set to: '|| $G_CMEUTaxNumbers_IDoc_Req );

24. We have now modified the script SCR_Initialise and completed the variable definitions. Close the script window in the Design workspace to return to the Job level view of the Customer Master object.
25. We now need to add a new segment processing conditional to the top level of the Job.
26. Select the Conditional object on the tool palette on the right-hand side of the Design window by clicking it, and then click in the Design workspace. This places a new Conditional object on the Job structure as shown below.

27. Name the new Conditional CustomerMasterEUTaxNumbers_E1KNASM_Required.
28. Delete the line linking the CustomerMasterCreditManagementControllingArea… Conditional and the CustomerMaster_GenerateIDOC Conditional, and join in the new Conditional object you just created, as shown below.

29. Double-click the new Conditional object and, in the If statement, enter $G_CMEUTaxNumbers_Req = 'Y'. This means the Then section of the conditional is executed only when the variable is set to 'Y'; otherwise the Else section is executed.
30. For ease of use, and to show how objects can be cut and pasted within Data Services, we will use this functionality, along with the replication functionality, to speed up the next steps of development.

31. Within the Conditional object we want to place three scripts and three Dataflows, as shown in the screenshot below (taken from the Credit Management Controlling Area object).

32. Open the Credit Management Controlling Area conditional, right-click the SCR_SectionRequired script, and choose Copy. Then open the new Conditional object we created and paste the object by right-clicking in the Then window of the conditional, as shown below.

33. Repeat the process in step 32 for each of the scripts in the Credit Management Controlling Area conditional until all three scripts are positioned in the new conditional.

34. Open the SCR_SectionRequired and SCR_SectionNotRequired scripts and change the print statements to refer to the EU Tax Numbers segment we are now working on.
35. Next, we need to create the new Dataflows required for mapping, validation, and enrichment of our new segment. For one of these objects we will use the replication functionality rather than starting from scratch; the other Dataflows we will create new, to show both aspects of this functionality.
36. For the mapping Dataflow, we will replicate an existing Dataflow. In the Local Object Library, select the Dataflows tab, expand the Dataflows section, and find the DF_DM_CustomerMasterCredMngmntCntrlArea_Map Dataflow object.
37. Right-click the object and choose Replicate. This creates a new object called Copy_1_DF_DM_CustomerMasterCredMngmntCntrlArea_Map. Rename it DF_DM_CustomerMasterEUTaxNumbers_Map and drag this new Dataflow onto the conditional area after the SCR_SectionRequired script.

38. For the other two Dataflows, we will create new ones. Choose the Dataflow icon on the Tool Palette to the right of the Design workspace, then click in the Then window of the conditional to drop in two Dataflows. Name them DF_DM_CustomerMasterEUTaxNumbers_Validate and DF_DM_CustomerMasterEUTaxNumbers_Enrich respectively, creating the structure shown for the conditional, and join the objects together for sequential processing.
39. Double-click the mapping Dataflow and delete the target template table. Choose the template table icon in the Tool Palette and click in the Dataflow design window to place a target template table on the Dataflow. Name the template table CUST_MAST_EU_TAX_NO_MAP and ensure that it is associated with the DS_STG_OBJ_INIT Datastore. The Dataflow should look like the screen below.


40. Open the Qry_Map query object by double-clicking it. We now need to modify the columns available for mapping for the new segment we are creating.
41. In the Schema Out window on the right-hand side of the query object, select all fields from the *$+CREDIT_CONTROL_AREA field to the DATE_MONITORING field inclusive. Right-click the selected field set and choose Delete. We keep the *+LEGACY_CUSTOMER_NUMBER_1, DI_GENERATED_ROW_NUM, and DI_TIMESTAMP fields, as these are reused for this segment. The following screen shows how the Query should look after the deletion of the columns.


42. We now need to add the two new fields required for the EU Tax Numbers segment: Country Key and VAT Registration Number.

Values

Name

*+COUNTRY_KEY

Data type

Varchar

Length

255

Description

Country Key – KNAS.LAND1

44. Choose OK. Repeat the process this time by right-clicking the newly created field to add the other required field and enter VAT_REGISTRATION_NO in the Name field. 45. Choose OK. 46. The two new fields should now be visible in the Schema Out window. 47. To complete the mapping Dataflow, you need to set the default mapping for columns to NULL so that the Dataflow would validate it. This can be done by selecting each field and then typing NULL into the Mapping tab or you can select both fields and right-click Properties and then set the mapping value to NULL for both fields at once. Ensure that the other properties are unchecked as we only want to set the mapping values for all selected columns. This is a quick way of setting and resetting properties for multiple columns. 48. Close the Query window and return to the Dataflow level view and then choose Validation – Validate – All objects in view from the menu. If you have any errors, resolve these now and then save all objects from the Project menu.

© SAP SE

Page 32 of 71

Extension Guide for SAP Best Practices for Data Migration 49. Close the Dataflow and return to the conditional level view. Double-click the validate Dataflow DF_DM_CustomerMasterEUTaxNumbers_Validate to open it. 50. In the validate Dataflow, drag the newly created template table CUST_MAST_EU_TAX_NO_MAP from the DS_STG_OBJ_INIT Datastore by expanding the Datastore and then template tables headers in the Datastore tab of the Local Object Library onto the Design workspace of the Dataflow. 51. Create two additional template tables called CUST_MAST_EU_TAX_NO_VALID and CUST_MAST_EU_TAX_NO_INVALID in the Dataflow using the Tools Palette with both tables associated with the DS_STG_OBJ_INIT Datastore. 52. Place the following items on the Dataflow: 

two Query transforms



three Validation transforms and



two Merge transforms

All of these transforms can be dragged from the Transforms tab of the Local Object Library. Once they are dropped on to the Dataflow, name the transforms as shown below and connect the objects.

Ensure that the validation pass outputs are all connected to the MergeValid transform and all of the validation fail outputs are connected to the MergeInvalid transform. 53. To open the Qry_Map transform, double-click it, and drag all columns from the Schema In window to the Schema Out window as shown below.

© SAP SE

Page 33 of 71

Extension Guide for SAP Best Practices for Data Migration

54. Close the Qry_Map transform and open the Qry_SelectDistinct transform. Repeat the process in step 54 and drag all columns from the Schema In window to the Schema Out window. In this case, we need to ensure we only get a distinct output, as the validation transforms would create duplicate records. Hence, we need to choose the SELECT tab and select the distinct rows checkbox. 55. Close the Qry_SelectDistinct transform. 56. Open the Validate_Lookups transform. In the Schema In window, select the *+LEGACY_CUSTOMER_NUMBER_1 and check the Enable validation checkbox, choose the custom condition button. Highlight the contents of the custom condition box and delete it. Choose the Functions… button and then choose the lookup functions from the lookup categories box and then select Lookup_Ext from the function name box. Choose Next. 57. We are going to validate that the legacy customer number has been assigned a new SAP number in the MGMT_NUMBER_ALLOCATION table and we would perform a lookup on this table. Enter the information into the Lookup_Ext dialog box as shown below by either entering or dragging the fields from the Input schema and selecting the respective fields in the Condition and Output segments. The lookup should look like the dialog screen shown below, the lookup table is the MGMT_NUMBER_ALLOCATION table, the columns to join on are the OBJECT and LEGACY_OBJECT_NUMBER columns and the return column is the OBJECT_NUMBER column.


58. Choose Finish to return to the Validation transform and enter is not null after the lookup_ext function syntax in the custom condition box. This completes the check for the legacy customer number field (the finished condition is sketched after step 63).
59. Select the *+COUNTRY_KEY field in the Schema In window and select Enable validation.
60. For this field we can use the standard Exists in table validation, because we only need to join to the relevant lookup table on a single field, the legacy country key column (which is why we could not use it for the customer number above).
61. Choose the Exists in table button and then the dropdown arrow to the right of this selection. Choose the DS_STG_MGMT_LKP Datastore, select the LKP_COUNTRY table as the relevant lookup table, and then choose the LEGACY_COUNTRY column.
62. When complete, the string DS_STG_MGMT_LKP.DBO.LKP_COUNTRY.LEGACY_COUNTRY should appear in the Exists in table box.
63. This completes the lookup validations, as only the legacy customer number and country columns have respective lookup tables. Section 7.1 of this document provides an example of changing the respective files in the Job_DM_Lookups_Initialise Job.
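The finished custom condition from steps 56–58 would read roughly as follows (a sketch assembled from the lookup parameters above and the enrichment syntax later in this section; the exact generated text may differ):

   lookup_ext([DS_STG_MGMT_LKP.DBO.MGMT_NUMBER_ALLOCATION,'PRE_LOAD_CACHE','MAX'],
              [OBJECT_NUMBER],[NULL],
              [OBJECT,'=','CUSTOMER',
               LEGACY_OBJECT_NUMBER,'=',Qry_Map."*+LEGACY_CUSTOMER_NUMBER_1"]) is not null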

64. Moving on to the mandatory column validations: close the Validate_Lookups transform and open Validate_Mandatory_Columns. We need to validate that both the legacy customer number and country key columns are populated with values.
65. Select the column *+LEGACY_CUSTOMER_NUMBER_1 in the Schema In window and select the Enable validation checkbox. From the available conditions in the validation dropdown, choose IS NOT NULL.
66. Repeat step 65 for the *+COUNTRY_KEY column.
67. Close the Validation transform Validate_Mandatory_Columns and open Validate_Format.
68. For the format checks, we only need to validate the format of columns that are not lookups, which in this case means only the VAT_REGISTRATION_NO column.
69. Select the VAT_REGISTRATION_NO column in the Schema In window and select the Enable validation checkbox. Because the field is optional, also check the Do not validate when NULL checkbox so that no validation failure is created for NULL values coming through the mapping phase.
70. Choose the Custom Condition button and enter the following expression in the custom condition box:

   length( Qry_Map.VAT_REGISTRATION_NO ) <= 20

This checks that the value passed into the column is no longer than 20 characters, the allowed limit of the SAP target field. You can use any Data Services function you like to validate the format of the field, including the is_valid functions. The only rule is that the function or expression must evaluate to True or False.
71. This completes the validation transforms and the validation Dataflow.
72. Choose Project – Save All.
73. Close the Dataflow and return to the conditional level object.
74. Double-click the SCR_DeleteFailedRecords script.
75. Enter the following text, replacing the existing text:

   # Delete the records from the _VALID table that exist in the _INVALID table
   sql('DS_STG_OBJ','delete from CUST_MAST_EU_TAX_NO_VALID where DI_GENERATED_ROW_NUM in (select distinct DI_GENERATED_ROW_NUM from CUST_MAST_EU_TAX_NO_INVALID)');

This deletes any records in the _VALID table that failed one of the validations but passed another, so that those records are not incorrectly processed.
76. Close the script and double-click the DF_DM_CustomerMasterEUTaxNumbers_Enrich Dataflow to open it.
77. Drag the CUST_MAST_EU_TAX_NO_VALID template table from the DS_STG_OBJ_INIT Datastore in the Datastores tab of the Local Object Library onto the Dataflow. This creates the table as a source for the Dataflow.
78. Drag a Query transform onto the Dataflow from the Transforms tab, then select the template table icon on the Tool Palette. Click in the Dataflow and call the template table CUST_MAST_EU_TAX_NO_ENRICHED. The Dataflow should look like the following screen.


79. Double-click the Qry_Enrich query to open it.
80. We now need to apply the enrichment required for the segment. First, select all of the columns in the Schema In window and drag them to the Schema Out window. For each column in the Schema Out window, change the name to remove any special characters (that is, *, +, or $) and change the data type to the format that the IDoc expects in SAP. In this case:

• LEGACY_CUSTOMER_NUMBER_1 is VARCHAR (10)
• COUNTRY_KEY is VARCHAR (3)
• VAT_REGISTRATION_NO is VARCHAR (20)

81. For the VAT_REGISTRATION_NO column, we do not need to do anything else: this is a straight mapping, and we have already validated that the data values fit the column.
82. For the LEGACY_CUSTOMER_NUMBER_1 column, we need to perform the lookup that translates the legacy value to the newly assigned SAP customer number. We do this with the same lookup that we performed in the validation Dataflow. Select the column in the Schema Out window, choose the Functions… button on the Mapping tab (which opens the Lookup_ext – Select Parameters screen), and enter the information as shown below.


The resulting syntax in the Mapping tab is as below:

   lookup_ext([DS_STG_MGMT_LKP.DBO.MGMT_NUMBER_ALLOCATION,'PRE_LOAD_CACHE','MAX'],
              [OBJECT_NUMBER],[NULL],
              [OBJECT,'=','CUSTOMER',
               LEGACY_OBJECT_NUMBER,'=',CUST_MAST_EU_TAX_NO_VALID."*+LEGACY_CUSTOMER_NUMBER_1"])
   SET ("run_as_separate_process"='no', "output_cols_info"='')

83. Select the COUNTRY_KEY column in the Schema Out window and again use the lookup_ext function, this time to replace the legacy value with the required SAP value that is mapped in the LKP_COUNTRY table and maintained by the Migration Services application. Choose the Mapping tab if it is not already selected, choose the Functions… button, select the Lookup_ext function, and enter the information as below.

© SAP SE

Page 38 of 71

Extension Guide for SAP Best Practices for Data Migration Lookup_ext function and enter the information as below.

84. Choose Finish and ensure that all the enrichment rules are valid by choosing Validation – Validate from the menu. If there are any errors, resolve them and then choose Project – Save all. The enrichment query should look as follows.


85. Close the Query transform and close the Dataflow to return to the Conditional level. You have now completed the Data Services content to map, validate, and enrich the new EU Tax Numbers segment for Customer Master Data Migration. We now need to ensure that this data is included in the build of the IDoc before transferring to SAP.

86. Close the conditional level object and return to the Job level. Now open the CustomerMaster_Generate IDOC conditional and then double-click the Workflow WF_DM_CustomerMaster_Generate IDOC to open it. Open the Dataflow DF_DM_CustomerMaster_Create IDOC.

87. You can see the enriched data tables from each of the required segments being used to create the IDoc structure in the Qry_Nest_IDoc query transform. The nested IDoc structure is then sent to SAP via the IDoc Message Target object on the right of the Dataflow.

88. To add in our new segment, we need to drag the CUST_MAST_EU_TAX_NO_ENRICHED template table onto the Dataflow, followed by a Query transform, which we call E1KNASM.

89. Connect the CUST_MAST_EU_TAX_NO_ENRICHED table to the Query E1KNASM and then connect the Query to the Qry_Nest_IDoc object.

90. Navigate to the Datastores tab of the Local Object Library and expand the DS_SAP Datastore. Expand the IDoc node and double-click the DEBMAS06 IDoc (Customer Master) to open it.

91. Navigate down the IDoc structure by expanding the E1KNA1M segment, then locate and expand the E1KNASM segment we have been working on. Select the columns LAND1 and STCEG, right-click, and choose Copy.

92. Close the IDoc window and return to the Dataflow. Open the Query E1KNASM and in the Schema Out window, right-click and choose Paste. This copies the two fields to the target structure of the Query transform.


93. Drag the COUNTRY_KEY column from the Schema In window onto the LAND1 field in the Schema Out window to map the column.

94. Repeat the above step for VAT_REGISTRATION_NO to the STCEG field in the Schema Out window.

95. For the last column, we need to bring the LEGACY_CUSTOMER_NUMBER_1 field across so that we attach the segment to the correct Customer Master record in the IDoc. Drag the column across to the Schema Out window onto the LAND1 field and then choose Insert Above. This creates the column in the Schema Out window. Now double-click the LEGACY_CUSTOMER_NUMBER_1 field in the Schema Out window and change the name to KUNNR (Customer Number).

96. You have now pulled across all the required fields and renamed them to the expected names for IDoc population. Close the Query and return to the Dataflow level.

97. Return to the IDoc metadata by expanding the DS_SAP Datastore and then the IDoc node and open the DEBMAS06 IDoc. Navigate down to the E1KNASM segment again, but this time copy the whole segment as shown below.

98. Close the IDoc metadata and return to the Dataflow. Now open the Qry_Nest_IDoc Query and expand the Qry_Nest_IDoc schema in the Schema Out window. Right-click the E1KNA1M segment and choose Make Current. Right-click the E1KNA1M segment again and choose Paste. This places the E1KNASM segment at the bottom of the nested structure under the Customer Master Basic segment E1KNA1M.

99. You have now added the segment to the target schema and just need to apply the Mapping, From clause, and Where clause correctly to transfer the new segment's data to the IDoc.

100. First, let us set the From clauses correctly for the IDoc. When adding a new segment or schema to a Nested Relational Data Model (NRDM) as we have here, Data Services automatically adds the schema to the From clause at the top level, which is incorrect as this could cause Cartesian products to be generated.


101. Double-click the Qry_Nest_IDoc schema to make it the current focus (you can also right-click and choose Make Current). Now choose the From clause tab in the bottom half of the window, right-click the E1KNASM schema, which has been added to the From clause incorrectly, and choose Delete. This should leave you with just the E1KNA1M schema in the From clause, which is correct as we want one IDoc for each Customer Master Basic record.

102. Navigate down to the newly created schema E1KNASM in the Schema Out window, right-click the schema, and choose Make Current. Now select the From tab if it is not already open and delete the current entry. Drag the schema E1KNASM from the Schema In window to the From clause box to assign the E1KNASM input schema to the E1KNASM output schema. This means that we can now map fields from this input schema into the target structure. Note that we can also refer to any higher-level schema fields, as they are automatically available to the lower-level or child schemas, in this case E1KNA1M.

103. Expand the E1KNASM schema in the Schema In window and drag the columns LAND1 and STCEG across to the matching fields in the Schema Out window. Select the MSGFN column in the Schema Out window, then select the Mapping tab and enter NULL, as we do not need to pass a value in here.

104. You have now applied the Mapping and From clause correctly; the only thing left is to apply a Where clause so that the data is correctly associated with the parent Customer Master Basic record.

105. Select the Where clause tab and enter the following text, which joins the lower-level segment to the higher-level segment using the customer number field:

E1KNASM.KUNNR = E1KNA1M.KUNNR

106. Close the query and choose Validation – Validate – All Objects in View from the menu to check that everything is correct. If not, correct the validation errors and repeat the process. Warnings are ok to proceed with, as these are due to data type conversions that would be applied by Data Services.

107. Choose Project – Save All.

108. The Job is now complete and ready for deployment. You now need to replace the Row_Generation transform in the mapping Dataflows with your data source and map your legacy data from the legacy application, spreadsheets, or flat files.

5 Changing Rules

The requirement to add or change validation and enrichment rules in Data Services could depend on a number of factors. These include, but are not limited to, the following:

• Modification of existing validation rules due to changes to the SAP Best Practices configuration for a specific partner or client.
• Addition of new validations into the required migration output, whether for formatting, mandatory checks, lookup purposes, or further business logic validation.
• Further validation or enrichment of data due to industry-specific functionality requirements.

There are two specific areas in changing rules:

• Validation rule changes – changes to the validation rules that ensure that only valid data is passed through to the staging table.
• Enrichment rule changes – changes to the enrichment rules that are applied to the valid data to transfer it into the staging table with the correct values and in the required format.

These changes are detailed in the following sections.

5.1 Validation Rule Changes

The validation of the migration data is performed within the DF_DM__Validate Dataflows; an example is shown below. There are three types of validation that we perform within the Validation Dataflow:

• Validate_Lookups – performs checks to see that legacy values have been mapped to the new SAP values for supporting configuration data. Additionally, this validation is used to check that parent records exist where required, to ensure referential integrity. For example, a Customer Credit Limit record must have a valid Customer Master record for it to successfully load. This validation is signified with a + in the Lookup Required column of the mapping template spreadsheets (provided as part of SAP Note 1710056).

• Validate_Mandatory_Columns – performs checks to ensure that columns that require a value are populated, for example, Customer Name and Country. This is signified with a * in the System Required column of the mapping template spreadsheets.

• Validate_Format – performs checks to see that certain data fields are correctly formatted to successfully load the legacy data into SAP, for example, zip codes and streets. The formats are specified in the business content and mapping template spreadsheets.

When creating new validation logic, ensure that the validation is correctly assigned to one of the above categories as the name of the validation transform is appended to the data records in error (DI_ERRORCOLUMNS), to provide additional information to the data owner/steward who is assigned to resolve the issues.

The following table explains the predelivered validation rules and how they are defined by the prefix values in the mapping template:

Symbol Combination – Validation Required

* – Mandatory column validation check required. Deselect the Do not validate when NULL checkbox as this is a mandatory column and must be populated.

$ – If a value is passed in then a format validation may be required; otherwise no further validation is required. This symbol specifies that an enrichment rule exists. Select the Do not validate when NULL checkbox to ensure that a null value is not considered a validation failure.

+ – If a value is passed into the column then perform a lookup validation to ensure it can be converted from the legacy value to a valid SAP value. Use the Do not validate when NULL checkbox to ensure that a null value is not considered a validation failure.

*$ – If a value is passed into the column then a format validation check may be required. Check the Do not validate when NULL checkbox to ensure that a null value is not considered a validation failure.

*+ – A value must be passed into the column and the value must exist in the relevant lookup table as a mapping between the legacy value and the SAP value. Therefore a mandatory validation check and a lookup validation check should be performed. Do not check the Do not validate when NULL checkbox as this is a mandatory column and must be populated.

$+ – If a value is passed into the column then perform a lookup validation to ensure it can be converted from the legacy value to a valid SAP value. Use the Do not validate when NULL checkbox to ensure that a null value is not considered a validation failure, as an enrichment rule would replace the null value.

*$+ – If a value is passed into the column then perform a lookup validation to ensure it can be converted from the legacy value to a valid SAP value. Use the Do not validate when NULL checkbox to ensure that a null value is not considered a validation failure, as the enrichment rule would replace the null with a default value.

5.1.1 Mandatory

Some columns of data that are passed through to the new SAP application must always be populated with a value (mandatory), such as Customer Name and Country. These are signified by the use of the * symbol as a prefix to the column name in the mapping templates and Dataflows. To check that a mandatory column has been passed a value, open the validation transform Validate_Mandatory_Columns and select the column you require to add the mandatory check to in the Schema In window. Select the Enable Validation checkbox. Select the IS NOT NULL validation check in the first of the Condition dropdown boxes.

When specifying a validation rule you are always providing the pass criteria rather than specifying what causes the record to fail, for example, a mandatory column validation rule would always be IS NOT NULL as the validation rule.
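Where a plain IS NOT NULL check is not strict enough, for example when legacy extracts deliver blank strings rather than true NULLs, the pass criteria can be expressed as a custom condition instead. A minimal sketch, assuming a hypothetical CUSTOMER_NAME column arriving from the Qry_Map query:

Qry_Map.CUSTOMER_NAME is not null and length(rtrim_blanks(Qry_Map.CUSTOMER_NAME)) > 0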

5.1.2 Format

Columns of data that are passed through to the new SAP application may need to be passed in a specific format but do not have any further enrichment rules applied to them. For these columns we need to ensure that the format of the data is valid for transferring into the target SAP field. To check that a legacy value has been passed in or transformed correctly, open the validation transform Validate_Format and select the column you require to add the format check to in the Schema In window. Select the Enable Validation checkbox and then specify the format check either as one of the available predefined validations (In, Between, Matches Pattern, Custom Validation Function) or as your own rule in the Custom condition box. The rule could be a simple length check to ensure that the data would fit into the target column (Customer Name), or could be far more complex, using matching criteria including regular expression matching for an attribute like Zip Code / Post Code. For simplicity, we recommend using the Data Services scripting language directly in the Custom Condition box, as seen in the screenshot below.

For more complex rules, we would recommend writing custom functions to handle the validation so that they can be reused in multiple validations without requiring rewriting.

A section of the VAL_ZipCode function:

# Required Vars
if ($ZipCode is null) return 0;
if ($LegacyCountryKey is null) return 0;

# Lookup Country Key
$sapCountryKey = lookup_ext([DS_STG_MGMT_LKP.DBO.LKP_COUNTRY,'PRE_LOAD_CACHE','MAX'],[COUNTRY],[NULL],[LEGACY_COUNTRY,'=',$LegacyCountryKey]) SET ("run_as_separate_process"='no', "output_cols_info"='');
if ($sapCountryKey is null) return 0;

# Validation
if (not $sapCountryKey in ('DE', 'US', 'CA')) return 1;
if ($sapCountryKey IN ('DE', 'US')) return match_regex($ZipCode, '^[0-9]{5}$', NULL);
if ($sapCountryKey = 'CA') return match_regex($ZipCode, '^[A-Z]{1}[0-9]{1}[A-Z]{1} [0-9]{1}[A-Z]{1}[0-9]{1}$', NULL);
return 0;
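To use such a custom function in the Validate_Format transform, the custom condition simply calls it and checks the result. A minimal sketch, assuming hypothetical ZIP_CODE and COUNTRY_KEY columns arriving from the Qry_Map query:

VAL_ZipCode(Qry_Map.ZIP_CODE, Qry_Map.COUNTRY_KEY) = 1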

Refer to the Data Services Scripting Language section of the Data Services Reference Guide, which can be found as part of the Data Services Technical Manuals. All of the Data Services documentation, including the technical manuals, can be accessed either under C:\Program Files\Business Objects\BusinessObjects Data Services\Doc\Books\en or via the Help menu within the Data Services tool (on the Data Services menu bar, choose Help → Technical Manuals).

All custom functions delivered for the purpose of validation as part of the SAP Best Practices Data Migration offering are listed in the following table. All these functions are validated against the values loaded via the Data Services framework and the values supplied by the SAP Best Practices framework (or as extended in the target SAP system).

VAL_ActivityTypeLookup
Parameters: Legacy Activity Type (Input 1); Boolean 1 or 0 (Return)
Description: Looks up and validates the Legacy Activity Type value as follows: 1. Check the values created from loading Activity Types in the valid Activity Types table. If not found then: 2. Check the Activity Type lookup values extracted from the SAP check table.

VAL_BatchManagement
Parameters: Batch Management Value (Input 1); Material Type (Input 2); Boolean 1 or 0 (Return)
Description: Checks the Batch Management value against the Material Type to check that the required business rules are applied. If the SAP Material Type is 'DIEN' then the Batch Management value cannot be 'X'.

VAL_CharacteristicName
Parameters: Legacy Characteristic Name (Input 1); Boolean 1 or 0 (Return)
Description: Looks up and validates the Legacy Characteristic Name value as follows: 1. Check the values created from loading Characteristics; if not found then: 2. Check the Characteristic Name lookup values extracted from the SAP check table.

VAL_ClassName_Lookup
Parameters: Legacy Class Name (Input 1); Boolean 1 or 0 (Return)
Description: Looks up and validates the Legacy Class Name value as follows: 1. Check the values created from loading Classes; if not found then: 2. Check the Class Name lookup values extracted from the SAP check table.

VAL_CustomerPreviousRecNo_Format
Parameters: Legacy Record Number (Input 1); Legacy Customer Account Group (Input 2); Default Customer Account Group (Input 3); Boolean 1 or 0 (Return)
Description: Checks the legacy customer record number passed in to see if it is in the right format for transferring to SAP. Depending on the customer account group number range assignment, this format check would either see whether the value can fit in the KNB1.ALTKN field (that is, 10 characters long) or whether the value is within the correct range of values to be used as the new SAP customer number. For more information on this functionality, check the Customer Master BPD.

VAL_CustomerPreviousRecNo_Lookup
Parameters: Legacy Record Number (Input 1); Lookup Type (Input 2); Boolean 1 or 0 (Return)
Description: If Lookup Type is 'EXT' then look up and validate the Legacy Record Number from the Number Allocation Control Table MGMT_NUMBER_ALLOCATION. Else if Lookup Type is 'SAP' then look up and validate the Legacy Record Number from the SAP Customer Number Lookup table LKP_CUSTOMER_NUMBER.

VAL_HCMPreviousRecNo_Format
Parameters: Legacy Record Number (Input 1); Number Range (Input 2); Default Number Range (Input 3); Boolean 1 or 0 (Return)
Description: Checks that the HCM record number has been passed in in the correct format. If the number is not being used as the new SAP reference number then the validation would always return 1. If the number is being used as the new SAP reference number then the function checks that the number is within the valid range as specified in the number range assigned to the HCM objects.

VAL_InspectionMethod
Parameters: Legacy Inspection Method (Input 1); Boolean 1 or 0 (Return)
Description: Validates the Legacy Inspection Method within the LKP_INSPECTION_METHOD table (that is, you have already migrated the Inspection Methods to SAP), and if it does not find the value here then checks the INSPECTION_METHOD_VALID table, which is created when loading the Inspection Method object through Data Services.

VAL_MaterialLegacyNumber_Lookup
Parameters: Legacy Record Number (Input 1); Lookup Type (Input 2); Boolean 1 or 0 (Return)
Description: If Lookup Type is 'EXT' then look up and validate the Legacy Record Number from the Number Allocation Control Table MGMT_NUMBER_ALLOCATION. Else if Lookup Type is 'SAP' then look up and validate the Legacy Record Number from the SAP Material Number Lookup table LKP_MATERIAL_NUMBER.

VAL_MaterialPreviousRecNo_Format
Parameters: Legacy Record Number (Input 1); Legacy Material Type (Input 2); Internal or External Numbering (Input 3); Default Int or Ext Numbering (Input 4); Boolean 1 or 0 (Return)
Description: Similar in functionality to the Customer format checking function. This function checks that the legacy material number is in the correct format for the specified material type number range definition. Material number ranges can be set up as both Internal and External, and the last two input parameters tell the function whether it is verifying against the internal or external number range. If the format is valid, it returns 1; if not, 0. It uses the MGMT_NUMBER_RANGES table to define the valid values for external number range checks.

VAL_NumberRangeAssignment
Parameters: Object (Input 1); Group (Input 2); Group Default (Input 3); Int or Ext (Input 4); Int or Ext Default (Input 5); Boolean 1 or 0 (Return)
Description: Checks to see if the number range assignment chosen for the material number migration process exists in the MGMT_NUMBER_RANGES table and is correctly specified as either Internal (INT) or External (EXT).

VAL_Region
Parameters: Legacy Region (Input 1); Legacy Country (Input 2); Boolean 1 or 0 (Return)
Description: Looks up the corresponding SAP value for the Legacy Country, then looks up the SAP Region value based on the Legacy Region and SAP Country values from the Country Region lookup table.

VAL_VendorLegacyNumber_Lookup
Parameters: Legacy Record Number (Input 1); Lookup Type (Input 2); Boolean 1 or 0 (Return)
Description: If Lookup Type is 'EXT' then look up and validate the Legacy Record Number from the Number Allocation Control Table MGMT_NUMBER_ALLOCATION. Else if Lookup Type is 'SAP' then look up and validate the Legacy Record Number from the SAP Vendor Account Number Lookup table LKP_VENDOR_ACCOUNT_NUMBER.

VAL_VendorPreviousRecNo_Format
Parameters: Legacy Record Number (Input 1); Legacy Vendor Account Group (Input 2); Default Vendor Account Group (Input 3); Boolean 1 or 0 (Return)
Description: Checks the legacy vendor record number passed in to see if it is in the right format for transferring to SAP. Depending on the vendor account group number range assignment, this format check will either see whether the value can fit in the LFB1.ALTKN field (that is, 10 characters long) or whether the value is within the correct range of values to be used as the new SAP vendor number. For more information on this functionality, check the Vendor Master BPD.

VAL_ZipCode
Parameters: Zip Code (Input 1); Legacy Country Key (Input 2); Boolean 1 or 0 (Return)
Description: Checks the Zip Code value against the Country Key to ensure that the required business rules are applied. Looks up the corresponding SAP value for the Legacy Country Key. If the SAP Country Key is not 'DE', 'US', or 'CA', then no validation is performed. Else the match_regex function is applied to validate the Zip Code format. For 'US' and 'DE': match_regex($ZipCode, '^[0-9]{5}$', NULL). For 'CA': match_regex($ZipCode, '^[A-Z]{1}[0-9]{1}[A-Z]{1} [0-9]{1}[A-Z]{1}[0-9]{1}$', NULL).

5.1.3 Lookup

Some columns of data that are passed through to the new SAP application have supporting SAP Configuration Data. This means that the values passed to SAP must match the values of the supporting configuration data already in the SAP system. For example, if we are to load a customer record with a specific company code, the company code must exist in the configuration table T001 (Company Codes) before it would be accepted by SAP as a valid record. These columns are denoted in the business content and mapping spreadsheets, as well as the mapping Dataflow, with a preceding '+' symbol to signify the lookup. To check that a lookup column has been passed a valid value, open the validation transform Validate_Lookups and in the Schema In window, select the column you require to add the lookup check to. Check the Enable Validation checkbox and select the Exists in table Condition. Choose the dropdown arrow next to the Exists in table box to open the Exists in table lookup wizard. The following procedure shows the process of selecting the Datastore, Table, and then Column that contains the value to look up against:

• First, select the Datastore from the available options (always choose DS_STG_MGMT_LKP).
• Then select the lookup table that contains the lookup mapping you wish to check.
• Then choose the column that contains the legacy value that would be translated to the SAP value.

Once you have gone through this process, the screen returns to the Validate_Lookups screen and the Exists in table box is populated.

When specifying a lookup against a table already defined as part of the business content, you would always choose the DS_STG_MGMT_LKP datastore, then the lookup table you require LKP_ and then select the legacy value column LEGACY_.
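Where the Exists in table wizard is too restrictive, for example when the lookup must match on more than one column, an equivalent check can be written as a custom condition using lookup_ext. A minimal sketch, assuming a hypothetical LKP_COMPANY_CODE lookup table and a COMPANY_CODE column in the Qry_Map schema:

lookup_ext([DS_STG_MGMT_LKP.DBO.LKP_COMPANY_CODE,'PRE_LOAD_CACHE','MAX'], [COMPANY_CODE],[NULL],[LEGACY_COMPANY_CODE,'=',Qry_Map.COMPANY_CODE]) is not null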

5.1.3.1 Adding New Lookups

If a new lookup is required where the lookup table does not exist in the standard delivered content, a number of additional steps are needed to create the new lookup table. New lookups could be required on standard SAP fields not populated as part of the predelivered SAP Best Practices Data Migration solution, or due to the addition of new custom fields (Z fields) that are specific to the customer or partner implementation. The following instructions detail how the standard content is built and delivered within Data Services to ensure better performance and functionality.

For users upgrading from a prior version of the DM content, note that the process for creation and maintenance of the lookup tables has been changed significantly. The creation and management of the lookup tables are controlled by the following four jobs within the project DM_BPFDM_Lookups:

1. Job_DM_Lookups_Initialise – Performs initial setup and creation of all required lookup and management tables in the staging area
2. Job_DM_Lookups_SAPToFile – Extracts the latest lookup table data from the SAP target system into local text files
3. Job_DM_Lookups_FileToApp – Uploads the default lookup table data in the provided text files to the staging area
4. Job_DM_Lookups_SAPToApp – Extracts lookup table data from the SAP target system into the staging area directly

The entire process is now data driven and depends upon the following configuration files, which by default can be found in the c:\migration directory after installation:

1. LOOKUPS.csv – Describes the general information about each lookup table
2. VIEW_TO_DF_NAME.csv – Describes the hierarchy mapping relationship between business contents and IDoc jobs
3. LOOKUP_WHERE_USED.csv – Describes where the lookup table would be used
4. LOOKUP_TAB_STRUCTURE.csv – Details the structure of each lookup table

To add a new lookup table, all we now need to do is open each of these files, add new entries to the bottom of the file, and then run the above jobs to populate the data. Once the files have been updated with the information about the lookup tables, we need to do one of the following. Run the following jobs as before; this would re-create and reload the data for all tables, so it may take some time:


1. Job_DM_Lookups_Initialise
2. Job_DM_Lookups_FileToApp
3. Job_DM_Lookups_SAPToApp

Or, if you want to just create the new lookup tables, perform the following process:

In the following section, you would need to modify the Global variables for the job at runtime.

1. When you execute the job, you would see the Execution Properties window.
2. Choose the Global Variable tab.
3. Here you can set all the variables as defined below; they would apply just for the current run of the job.
4. Run Job_DM_Lookups_Initialise to import the new lookup table with the following settings for Global Variables:
a. $G_MGMTtables_Create = 'N'
b. $G_PopulateLookups = 'Y'
c. $G_PopulateViewToDF = 'Y'
d. $G_PopulateLookupWhereUsed = 'Y'
e. $G_PopulateLookupTableStructure = 'Y'
f. $G_Lookuptable_CreateImport = 'Y'
g. $G_LookuptableName_RecreateReimport = 'LKP_TABLE1,LKP_TABLE2,…' (you can input many lookup tables separated by ',')
h. $G_MigrationServices_ProfilingConfig = 'Y' (if necessary)
5. Create a new lookup runtime file that contains all new lookups, like LOOKUPS_RUNTIME_NEW.csv, whose format is the same as the LOOKUPS_RUNTIME.csv file.
6. Run Job Job_DM_Lookups_SAPToFile with the following settings for Global Variables:
a. $G_GetDataFromObjectList = 'N'
b. $G_GetDataFromTXTFile = 'Y'
c. $G_Runtime_TxtFile_Name = 'LOOKUPS_RUNTIME_NEW.csv'
d. $G_GetDataFromVariable = 'N'
7. Run Job Job_DM_Lookups_FileToApp with the same Global Variable settings as in step 6.
8. Run Job Job_DM_Lookups_SAPToApp with the same Global Variable settings as in step 6.

The layouts for the various files are documented in detail in DM_BPD_Lookup_Content.docx.


5.2 Enrichment Rule Changes

The enrichment of the legacy data to the standards required by the target SAP application is performed in the DF_DM__Enrich Dataflows. The enrichment can take the form of lookups, data transformations, and default value assignments. For certain objects requiring external or internal number generation and assignment (Customer, Material, Vendor, and HCM), it can also include this functionality. A standard enrich Dataflow would look like the following screenshot.

The source of the Dataflow is the valid records table containing the legacy records that passed the validation checks. The source table then connects to the Qry_Enrich Query transform, where the transformation and enrichment of the data is performed. The second Query may not always be present for simpler enrichment processes; in the case of Customer it applies the internal or external number generation required before writing the data to the _ENRICHED table. In a number of the objects, there are additional Query transforms between the Qry_Enrich and the target table; these provide additional enrichment and transformation functionality. An example would be where the enrichment relies on another value being enriched first before it can be transformed. In three of the objects (Material, Customer, and Vendor), an additional enrichment mechanism is required to handle the need for external or internal number generation and assignment. This external numbering involves a number of additional steps within the object processing Job.

Ensure that you have run the job Job_DM_Lookups_FileToApplicationPopulation_Initial with the global variable $G_CreateNumberAllocationTable set to 'Y', as this creates the required table to hold the external/internal numbering allocation records. You must only run this job once, as Job_DM_Lookups_SAPToApplicationPopulation_Delta provides the delta functionality. The first time that any of the Material, Customer, or Vendor Jobs are run, you should set the $G_ResetNumbering variable to 'Y'. By setting this global variable to 'Y' when loading the master-level data for each of these objects, you delete the contents of the MGMT_NUMBER_ALLOCATION table, which stores the legacy to SAP number conversions. If you are going to run the processing Job a number of times and wish to maintain the external number allocated to the valid legacy records for each execution, then the variable $G_ResetNumbering should be set to 'N' for further executions. If you wish to reallocate new external/internal numbers to the legacy records, then you can leave the value as 'Y' and this regenerates the full number range for the object on each execution run. The number range allocation process is described in more detail within the Material, Customer, Vendor, and HCM BPD documents. We predeliver a file (DEFAULT_NUMBER_RANGES.txt) as part of the Best Practice Lookup files, which contains the standard number ranges as set up on the Best Practice Data Migration SAP deployment. If you want to update the contents of the MGMT_NUMBER_RANGES table, and therefore change the way the number allocation process is set up, modify the number ranges assigned to the specific object within the SAP target application. The Delta Lookup Job would pick up these number range changes and populate them into the MGMT_NUMBER_RANGES table. Within the Qry_Enrich transform and the following enrichment transforms, there are several prebuilt functions and mapping expressions, which we detail below. The list below provides an insight into the predelivered functionality that can be used as templates for further enhancements. For a list of all functions, refer to the Data Services Reference Guide within Data Services (on the Data Services menu bar, choose Help → Technical Manuals).
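Before deciding whether to reset the numbering, it can help to check what has already been allocated. The following is a minimal script sketch, assuming the MGMT_NUMBER_ALLOCATION table is reachable through the DS_STG_MGMT_LKP Datastore as elsewhere in this guide:

# Print how many legacy-to-SAP number conversions are currently stored
print('Allocated numbers: ' || sql('DS_STG_MGMT_LKP','select count(*) from MGMT_NUMBER_ALLOCATION'));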

5.2.1 Predefined

Predefined enrichment and transformation functions exist in two main categories: mappings written directly within the Query transforms, and Custom Functions. Enrichment performed directly in the mapping panel of the Query transform tends to be less complex and requires simpler syntax. More complex transformations and enrichment are performed using custom functions. Another benefit of the use of custom functions is that they can be shared among other objects via the Local Object Library.

5.2.1.1 Mapping Enrichment and Transformation

Below are some examples of transformations and enrichment rules performed directly within the column mapping.

1. The first mapping deals with the VALID_FROM date field in the Customer Master object. The statement can be broken down into three components: the ifthenelse function requires three input parameters – the condition to be evaluated, the true output expression, and the false output expression. The condition in this case evaluates whether the field value is not null, that is, populated with a value. If the field is not null (True), then the date is transformed from the specified input format, defined by the $G_Date_Format global variable, to the format required by SAP ('YYYYMMDD'); this is performed by the embedded to_char() and to_date() functions. If the field is null, then the output is left as null, as this is not a mandatory field.

ifthenelse( CUSTOMER_MASTER_VALID.VALID_FROM is not null, to_char( to_date( CUSTOMER_MASTER_VALID.VALID_FROM, $G_Date_Format), 'YYYYMMDD'), null)

2. The next mapping, although looking more complex, is similar to the first. The main difference is that the ifthenelse statement contains a more complex lookup_ext function call. This time the CUSTOMER_ACCOUNT_GROUP field is evaluated to see if it is null. If it is null, then a default value is assigned with the use of the global variable $G_Customer_Account_Group_Default. If a value is found in the field, then a lookup is performed to translate the value from the legacy value to the required SAP value.


ifthenelse( CUSTOMER_MASTER_VALID."*$+CUSTOMER_ACCOUNT_GROUP" is null, $G_Customer_Account_Group_Default, lookup_ext([DS_STG_MGMT_LKP.DBO.LKP_CUSTOMER_ACCOUNT_GROUP,'PRE_LOAD_CACHE','MAX'], [CUSTOMER_ACCOUNT_GROUP],[NULL],[LEGACY_CUSTOMER_ACCT_GRP,'=',CUSTOMER_MASTER_VALID."*$+CUSTOMER_ACCOUNT_GROUP"]) SET ("run_as_separate_process"='no', "output_cols_info"='') )

For all functions, you can use the Functions wizard (choose the Functions button above the mapping box) to create the correct syntax, so you do not need to know the syntax of the specific functions.

3. When the column being enriched is a mandatory lookup but without an enrichment rule, then the following mapping would be used. This lookup_ext syntax is created via the lookup_ext wizard. This enrichment takes the value passed in and looks for the corresponding SAP value in the lookup table LKP_COST_ELEMENT_CATEGORY.

lookup_ext([DS_STG_MGMT_LKP.DBO.LKP_COST_ELEMENT_CATEGORY,'PRE_LOAD_CACHE','MAX'], [COST_ELEMENT_CATEGORY],[NULL],[LEGACY_COST_ELEMENT_CATEGORY,'=',COST_ELEMENT_VALID."*+COST_ELEMENT_CATEGORY"]) SET ("run_as_separate_process"='no', "output_cols_info"='')

4. Simple syntax functions can also be used, for example (see the combined sketch after this list):

• substr(column, start position, number of characters) – extracts the number of characters specified in the third parameter from the column specified in the first parameter, starting at the position specified in the second parameter.
• to_date(column, format) – turns the input Varchar field into a date value using the input date format specified. This format can be a variable or a fixed format (like 'YYYYMMDD' or 'DD/MM/YYYY').
• to_char(column, format) – turns a date back into a Varchar field using the format specified as the output format.
• Concatenation of strings is performed by the use of the || operator, for example, 'FirstName '||'LastName' becomes 'FirstName LastName'.
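A minimal sketch combining the simple functions above, assuming a hypothetical ORDERS_VALID.CREATED_ON column that arrives as 'DD/MM/YYYY' text (possibly followed by a timestamp) and must be reformatted to the 'YYYYMMDD' format SAP expects:

to_char( to_date( substr(ORDERS_VALID.CREATED_ON, 1, 10), 'DD/MM/YYYY'), 'YYYYMMDD')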

5. Embedding Custom Functions within the mapping syntax is performed either by using the function wizard (the Functions button above the mapping box) or by entering the name of the function you wish to call. An example of the combination of both mapping and custom functions would be as follows. The custom function is highlighted in bold and is placed within the standard functions being used in the mapping. There is no difference between calling standard functions and calling the custom delivered functions.

ifthenelse( Qry_Enrich.TAX_CATEGORY is null, ENR_TaxCategory(Qry_Enrich.COMPANY_CODE), lookup_ext([DS_STG_MGMT_LKP.DBO.LKP_TAX_CATEGORY,'PRE_LOAD_CACHE','MAX'],[TAX_CATEGORY],[NULL],[LEGACY_TAX_CATEGORY,'=', Qry_Enrich.TAX_CATEGORY]) SET ("run_as_separate_process"='no', "output_cols_info"='') )

5.2.1.2 Custom Functions

The following is a list of example custom functions that are used to transform and enrich the data being passed through the tool. All enrichment functions predelivered by the SAP Best Practices for Data Migration project are prefixed with ENR_.

1. ENR_Decimal_Conversion(Input String, Decimal Separator) – This function converts the input string passed from the legacy application into the required decimal format by removing any thousand separators and changing ',' decimal separators into '.', providing the decimal data in the correct format for loading via LSMW into the SAP application.

return(ifthenelse($P_DefaultDecimalSeperator = '.', replace_substr($P_Instring,',',''), replace_substr(replace_substr($P_Instring,'.',''),',','.')));

2. ENR_Assign_Object_Number(Assignment Type, Group Value, Legacy Record Number) – This function is the most complex of the delivered enrichment functions, as it handles all external object numbering for Customer, Vendor, and HCM. The function is long and only a section of the total function is displayed below. For each of the specific objects, the function checks to see if a duplicate record with the same legacy number has already been assigned a value. If it has, the same number is assigned to this object and returned to the calling process; if the number is not found, then a new number is assigned, which differs depending on the Assignment Type (such as Customer and Material) and the group value being passed (such as Customer Account Group and Material Type). The newly assigned number is then returned to the calling process.

# Check to see if any of the input parameters are missing
if ( $P_Object is null or $P_Group_Value is null or $P_Legacy_Record_Number is null)
begin
raise_exception( 'ERROR - Input parameters missing object\\group\\legacy record number: ' || $P_Object ||'\\'|| $P_Group_Value ||'\\'|| $P_Legacy_Record_Number );
end

# Check to see if a number has already been allocated due to
# duplicates in the source dataset
# If it has, return the already assigned object number
$L_DuplicateRecordNo = lookup_ext([DS_STG_MGMT_LKP.DBO.MGMT_NUMBER_ALLOCATION,'NO_CACHE','MAX'],[OBJECT_NUMBER],[NULL],[OBJECT,'=',$P_Object,GROUP_VALUE,'=',$P_Group_Value,LEGACY_OBJECT_NUMBER,'=',$P_Legacy_Record_Number]) SET ("run_as_separate_process"='no', "output_cols_info"='');

if ( $L_DuplicateRecordNo is not null )
begin
return ( $L_DuplicateRecordNo );
end

# If no duplicate record number is found then proceed to assign
# a new object number


$L_Dummy = lookup_ext([DS_STG_MGMT_LKP.DBO.MGMT_NUMBER_RANGES,'NO_CACHE','MAX'],[NUMBER_RANGE_KEY,IS_NUMERIC,FROM_NUMBER,TO_NUMBER,EXTERNAL_NUMBERING_INDICATOR],[NULL,NULL,NULL,NULL,NULL],[OBJECT,'=',$P_Object,GROUP_VALUE,'=',$P_Group_Value],[],[$L_NumberRangeKey,$L_IsNumeric,$L_FromNumber,$L_ToNumber,$L_ExtNumberInd]) SET ("run_as_separate_process"='no', "output_cols_info"='');

# Check to see if the number range has been allocated; if not, raise an error
if ( $L_NumberRangeKey is null )
begin
raise_exception( 'ERROR - No number range allocated for object\\group combination: ' || $P_Object ||'\\'|| $P_Group_Value || ' - Please assign object\group to number range and rerun');
end

# For the customer account groups that require external or internal number allocation
if ( $L_ExtNumberInd = 'INT' )
begin
…
RETURN($L_Object_Number);

The other custom functions predelivered by the SAP Best Practices for Data Migration offering are listed in the table below.

ENR_Assign_Material_Object_Number
Parameters: Internal or External (Input 1); Group Value (Input 2); Legacy Record Number (Input 3); SAP Object Number (Return)
Description: Allocates a new number from the allocated number range if using an Internal number range (INT), or returns the legacy record number, which is already validated as the new SAP record number, for an external number range definition (EXT).

ENR_Assign_Object_Number
Parameters: Assignment_Type (Input 1); Group_Value (Input 2); Legacy_Record_Number (Input 3); SAP Object Number (Return)
Description: Used within the Customer, Vendor, Material, and HCM objects to assign the correct SAP object number to the legacy record number passed in. The Assignment_Type parameter expects a value of 'CUSTOMER', 'VENDOR' or 'HCM'. Group_Value is set dependent on the object (Customer = Customer Account Group, Vendor = Vendor Account Group, and HCM = specified number range key). The Legacy_Record_Number is set to the legacy record number passed with the source data.

VAL_BaseUoM
Parameters: Legacy Base UoM (Input 1); Legacy Material Type (Input 2); SAP UoM (Return)
Description: Returns the required SAP Base UoM value dependent on the parameters passed in, the Legacy UoM and the Material Type. If the Legacy UoM is not specified then the Material Type defines the Base UoM.

ENR_Decimal_Conversion
Parameters: InString (Input 1); DefaultDecimalSeparator (Input 2); Formatted Decimal String (Return)
Description: Converts the input string containing a decimal value in string representation and changes it to the format required by SAP, that is, no thousand separators and a decimal place of '.'. The second parameter states what decimal separator is used in the input string. The Default Decimal Separator global variable value is set to '.'.

ENR_IncoTermsPart2
Parameters: LegacyIncoTermPart1 (Input 1); LegacyIncoTermPart2 (Input 2); Formatted IncoTermPart2 value (Return)
Description: Takes the input legacy values for Incoterms Part 1 and Part 2 and applies the business rules specified for SAP Best Practices to return the required Incoterms Part 2 value that SAP requires.

VAL_KeyForTaskListUsage
Parameters: Legacy Work Center Category (Input 1); Legacy Key for Task List (Input 2); Default Task List Value (Input 3); SAP Task List Value (Return)
Description: Looks up the SAP Work Center Category. If the Task List Usage Value is NULL and the SAP Work Center Category is not equal to '0002' or '0004', then return the Default Task List Value. Else look up the SAP Task List Usage value and return it.

VAL_ProcurementType
Parameters: Legacy Material Number (Input 1); Procurement Type (Return)
Description: Looks up the SAP Material Type from the Number Allocation Control Table based on the given Legacy Material Number. If the SAP Material Type is equal to one of ('ROH', 'DIEN', 'HAWA', 'LEIH'), then return 'F' for Procurement Type. Else if the SAP Material Type is equal to one of ('FERT', 'HALB'), then return 'X' for Procurement Type. Else return blank.

VAL_SLocForExtProcurement
Parameters: Legacy Plant (Input 1); Storage Location (Return)
Description: Returns the Storage Location based on the SAP Plant value. Looks up the corresponding SAP value for the Legacy Plant value. If the SAP Plant is '1000' then return '1050' for Storage Location. Else if the SAP Plant is '1100' then return '1130' for Storage Location. Else return blank.

VAL_StandardValueKey
Parameters: Legacy Work Center Category (Input 1); Legacy Standard Value Key (Input 2); Default Standard Value Key (Input 3); SAP Standard Value Key (Return)
Description: Looks up the SAP Work Center Category. If the Standard Value Key is NULL and the SAP Work Center Category is not equal to '0002' or '0004', then return the Default Standard Value Key. Else look up the SAP Standard Value Key and return it.

ENR_TaxJurisdiction
Parameters: LegacyTaxJurisdiction (Input 1); LegacyCountry (Input 2); LegacyRegion (Input 3); Tax Jurisdiction (Return)
Description: Evaluates whether the legacy Tax Jurisdiction code has been passed and, if not, evaluates the country and region values to assign the correct Tax Jurisdiction.

VAL_TransportationGroup
Parameters: Legacy Transportation Group (Input 1); Legacy Material Type (Input 2); SAP Transportation Group (Return)
Description: Checks the Transportation Group value against the Material Type to ensure that the required business rules are applied. Looks up the corresponding SAP values for the Legacy Transportation Group and Legacy Material Type. If the SAP Material Type is 'DIEN', then the SAP Transportation Group must be NULL. Else if the SAP Material Type is not equal to 'DIEN' and the SAP Transportation Group is NULL, then default the SAP Transportation Group to '0001'. Else return the SAP Transportation Group.

VAL_ValuationClass
Parameters: Legacy Material Number (Input 1); Valuation Class (Return)
Description: Looks up the SAP Material Type from the Number Allocation Control Table based on the given Legacy Material Number. If the SAP Material Type is 'FERT' then return '7920' for Valuation Class. Else if 'HALB', return '7900'. Else if 'DIEN', return '3200'. Else if 'HAWA', return '3100'. Else if 'ROH', return '3000'. Else return blank.
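As a usage illustration for one of the simpler functions above, the following mapping sketch calls ENR_Decimal_Conversion on a hypothetical ORDERS_VALID.NET_PRICE column whose legacy format uses ',' as the decimal separator, so that a value like '1.234,56' becomes '1234.56':

ENR_Decimal_Conversion(ORDERS_VALID.NET_PRICE, ',')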

5.2.2 User-Specific

For additional function syntax and usage information, refer to the Data Services Reference Guide via the Help menu within the Data Services tool (on the Data Services menu bar, choose Help → Technical Manuals) for examples and descriptions of all functions available within Data Services. Refer to the Scripting Language documentation within the same guide, which covers the creation of your own custom functions and scripting.
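As a starting point for your own user-specific functions, the following is a minimal custom-function sketch in the Data Services scripting language. The function name VAL_MaxLength and its parameters $P_InString and $P_MaxLen are hypothetical; following the conventions in this guide, a validation function returns 1 for pass and 0 for fail:

# Optional columns pass when no value is supplied
if ($P_InString is null) return 1;
# Pass only when the value fits within the given maximum length
return ifthenelse(length($P_InString) <= $P_MaxLen, 1, 0);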

6 Extending the Reconciliation Job

6.1 Overview

If you have made any changes or extensions to the Best Practice set of objects and fields, then you may want to also reconcile these additional objects and columns. The following section of this guide details these scenarios. There are a number of possible scenarios here:

1. Adding a new column to an existing segment
2. Adding a new segment to an existing object
3. Adding a new object

Clearly each of these increases in terms of complexity and the amount of work required. We look at each in turn.


It is important that you complete the changes to the load of the objects before updating the reconciliation code, so that the new table structures are created and populated.

6.2 Prerequisites

Ensure that you have completed the installation processes contained in the Data Migration Quick Guide to deploy the content before commencing this section of the guide.

6.3 Adding a New Column to an Existing Object

We now consider adding a new column to the reconciliation. For example, suppose your source table/file for Customer also includes information about the Title for the customer, that is, Mr or Mrs, and you wish to include this as a field for transferring to SAP.

1. First, identify the name of the new column in the target SAP table. We can do this by using the mapping document (Customer_IDoc.xls). On the second tab (Customer), there is a list of all possible fields. The column we are about to add is called ANRED on table KNA1 and is in the Basic Data segment.

If you are adding a new column that is not included in the mapping spreadsheet, then you would need to follow the example in section 6.4 below.

2. Access the Data Services Designer by choosing the following navigation options:

Database type / Database server name / Database name: Microsoft SQL Server / \SQLEXPRESS / DM_REPO_ERP
User name / Password: DM_REPO_ERP / <specified password>

3. Open the Project DM_BPDM_IDOC. Select the Job Job_DM_Reconcile in the Project Area window.

4. Navigate to the Object Work Flow and then to the segment Data Flow that you need to work on by expanding the Job structure within the Project Area window.

5. We can identify the Data Flow we need: we know that it is the Customer Work Flow, and we know that within this we are interested in the Basic Data segment. Therefore we need to open the Data Flow DF_DM_Reconcile_CUST_MAST_BASIC_DATA by clicking it in the Project Area window. The DF_DM_Reconcile_CUST_MAST_BASIC_DATA – Data Flow screen is displayed.

6. We need to make minor changes to all the objects in the Data Flow. We work from top left to bottom right (from the sources to the target).

7. Open the Query Q_Get_Source_Data.

8. We can see the column we are looking for in the Schema In window by expanding the list of columns in the CUST_MAST_BASIC_DATA_ENRICHED table schema.


We use the %_ENRICHED tables as the source for reconciliation, as each is the last staging table before we create and send the IDoc and therefore contains the data once validated and enriched.

9. Copy the column using drag and drop onto the Schema Out window. In this case we want to follow the column order, so insert it after the CUSTOMER_NUMBER_1 column.

10. In the mapping box, we need to add some code to ensure that we remove all preceding and trailing white space, so in this case we need to change the basic mapping
CUST_MAST_BASIC_DATA_ENRICHED.TITLE
to
ltrim_blanks_ext(rtrim_blanks_ext(CUST_MAST_BASIC_DATA_ENRICHED.TITLE))

11. Once completed, choose the Back button.

12. You need to check that you have not made a mistake. Check the symbol to the left of the column name in the output schema. If it is a light blue triangle then it is ok; if it is a red cross then you need to check what you have entered in the mapping.

13. Now open the ABAP Data Flow Reconcile_KNA1.

14. It is a basic flow (Source, Query, and Data Transport), but note that some may appear more complex as they have joins between a number of SAP source tables. The basic structure here is a Query to select the required columns from the SAP table and a Data Transport. This is a mechanism whereby Data Services sends an ABAP query to the SAP system, which runs the query and produces a text file. Data Services then transfers the file using either FTP or a shared folder (defined in the Datastore) to the Job Server and makes it available to the parent Data Flow as a source dataset. We need to edit the query and add the column we identified earlier.

15. Scroll down through the source schema columns and then copy ANRED using drag and drop onto the output schema. Again we try to keep the column order true, so drop it between TELFX and LZONE.

16. That is all we need to do here, so choose the Back button twice to return to the basic segment Data Flow and open the Query Q_Get_SAP_Columns.


17. Here we need to copy the ANRED column using drag and drop from the Schema In window to the Schema Out window, inserting it between TELFX and LZONE again to maintain the column order. Once more, we also need to remove preceding and trailing white space, so edit the mapping as shown in the diagram above.

If the column you are adding is a primary key column, then you would need to open the properties for the column and deselect the primary key box.

18. Once completed, choose the Back button and check that the column has a blue triangle by it once again. Then choose Back again to return to the Data Flow and open the Q_Outer_Join Query by double-clicking its icon.

19. We need to copy the TITLE column using drag and drop from the Q_Get_Source_Data input schema and place it in the correct order in the output schema as above. Note that the respective columns from the two source schemas are placed next to each other, so the Q_Get_SAP_Columns.ANRED column goes just below the TITLE column.

20. Choose the Back button and open the Pivot transform.

21. Make sure that the Pivot set is set to 1 and then copy TITLE using drag and drop above the CUSTOMER_ACCOUNT_GROUP column. Then switch to the second Pivot set, drop ANRED above the KTOKD column, and choose the Back button. All we need to do now is validate and then run the job to check that it is working. To validate it (this ensures that we have not coded in any problems or issues), choose the Validate All button, or choose Validation – Validate – All Objects in View from the menu bar.


22. You should see only Warnings (yellow triangles). If the Output window opens on the Warnings tab, then there are no errors and the job is ready to go. If errors do appear, then double-click each error and resolve the issue.

23. You can now run the Job as usual, although you would need to make sure that you have rerun your new data through the main DM jobs to be able to reconcile the new column.

6.4 Adding a New Segment to an Existing Object

For the purposes of this explanation, we work through the Customer EU Tax Numbers example used elsewhere and explain how to include this segment in the Reconciliation job.

1. Access the Data Services Designer by choosing the following navigation options:

Database type / Database server name / Database name: Microsoft SQL Server / \SQLEXPRESS / DM_REPO_ERP
User name / Password: DM_REPO_ERP / <specified password>

2. Open the Project DM_BPDM_IDOC and select the Job Job_DM_Reconcile by clicking it in the Project window.

3. Open the Reconcile_Customer section of the Job.

4. Next, navigate to the Data Flow tab in the Local Object Library window (defaults to bottom left) and scroll down to the DF_DM_Reconcile_CUST_MAST_BANK_DETAILS Data Flow. Right-click and select Replicate. It does not really matter which Dataflow we pick; there are just fewer items to rename if we use a Customer-based one.

5. You would see a new Data Flow named Copy_1_DF_DM_Reconcile_CUST_MAST_BANK_DETAILS. We need to rename this to follow the naming convention used, which is DF_DM_Reconcile_XXX, where XXX is the name of the Enriched object table without the _ENRICHED at the end. For our example the enriched table name is CUST_MAST_EU_TAX_NO_ENRICHED and therefore the new Data Flow name should be DF_DM_Reconcile_CUST_MAST_EU_TAX_NO. Right-click the new Dataflow, select Rename, and change the name to the new one.

6. We need to include the new Data Flow in the Job, so copy the Data Flow into the project window using drag and drop, dropping it on the right-hand side close to DF_DM_Reconcile_CUST_MAST_TAX_INDICATORS. Connect this Data Flow to the new one.

7. To open the new Data Flow, double-click it; the DF_DM_Reconcile_CUST_MAST_EU_TAX_NO – Data Flow screen appears.

8. We need to make two major changes, as highlighted above. We need to delete the old source table (CUST_MAST_BANK_DETAILS_ENRICHED), so right-click it and select Delete. Then we need to insert the new one, so choose the Datastore tab in the Local Object Library, and expand the DS_STG_OBJ Datastore and then the Template Tables group. Copy the required table using drag and drop onto the Design Workspace area and select Make Source. Join the table to the Query Q_Get_Source_Data.


If you cannot find the new table in the list of available tables, then you would need to import it. Right-click the Datastore name and select Import By Name. Enter the table name and select OK.

9. The second major change to be made is to create a new source ABAP Data Flow to get the data from SAP. Go to the Dataflow tab, scroll to the bottom of the list, and expand the ABAP Dataflow tree. Select any of the ABAP Dataflows (however, do note that some have joins in them, so stick to basic ones like AUFK), right-click Reconcile_AUFK, and select Replicate. The new object would be called Copy_1_Reconcile_AUFK.

10. Right-click the new Data Flow object and select Properties. Change the name to the new one. (In this case we know the target table in SAP for this segment is KNAS, but you would have to speak with the SAP team to find this out as part of the mapping exercise.) In our example the new name is Reconcile_KNAS.

11. Select the Options tab, and change all occurrences of AUFK to KNAS in the ABAP file name, ABAP program name, and Job name fields.

12. This naming convention is important; any changes could result in errors when the job runs. When you have finished, choose OK.

13. In the Design Workspace window (main window), delete the ABAP Data Flow Reconcile_KNBK, copy the new one in using drag and drop, and connect it to the Query Q_Get_SAP_Columns.

14. Open the new ABAP Dataflow so that you have a screen similar to that in the following screenshot.

15. Then open the Data Transport and change the File Name to RECON_KNAS.dat – again the naming convention is important, so as not to overwrite other extract files.


16. Choose the Back button and delete the source table AUFK from the ABAP Data Flow. Go to the Datastore tab of the Local Object Library, right-click the DS_SAP Datastore, select Import By Name, and enter KNAS. Then open the DS_SAP Datastore tree, scroll down to the KNAS table, and drag and drop it onto the Design Workspace window. Connect it to the Query object.
17. Open the Query object, then select and delete the existing columns from the output schema. In the input schema, select the required columns (KUNNR, LAND1, and STCEG for this example) and drag and drop them to the output schema.
18. Choose the Back button twice to return to the parent Data Flow.
19. Open the query Q_Get_Source_Data and decide which columns you wish to reconcile. As for the other Best Practice objects, it is normal to reconcile only the columns of high importance. Delete all columns from the output schema by selecting all four columns, right-clicking, and selecting Delete.
20. Drag the required columns from the input schema to the output schema. Ensure that you include the primary key, in this case LEGACY_CUSTOMER_NUMBER_1.
21. To ensure an equal comparison between these column values and the ones from SAP, we have to trim all excess white space from the values. In the Mapping tab for COUNTRY_KEY and VAT_REGISTRATION_NO, wrap each mapping in the trim functions, that is, add rtrim_blanks_ext(ltrim_blanks_ext( before the column name and )) after it.
22. LEGACY_CUSTOMER_NUMBER_1 is different. In SAP it is left-padded with zeros, so we need to reflect this by changing the mapping to lpad(LEGACY_CUSTOMER_NUMBER_1, 10, '0'). A combined sketch of the three resulting mappings follows this step list.
23. Choose the Back button and open the query Q_Get_SAP_Columns. Select the four columns in the output schema and delete them. Drag and drop all three columns from the input schema to the output schema. Once again we need to edit each mapping to include the function calls that trim white space.
24. Choose the Back button to return to the Data Flow level and open the query Q_Outer_Join by double-clicking its icon.
25. First, delete all the columns in the output schema that have a red cross to their left. You should then be left with just two columns.
26. Next, we need to identify the primary key for the object. In this case it is a concatenated key of LEGACY_CUSTOMER_NUMBER_1 and COUNTRY_KEY, so drag and drop these two columns from the Q_Get_Source_Data query in the input schema to the top of the output schema.
27. We will use the primary key columns to join the two source tables later. Now we need to copy over the data values, so drag and drop VAT_REGISTRATION_NO to the output schema.
28. Then drag and drop the corresponding SAP source column as shown below.
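Putting steps 20 to 22 together, the three mappings in Q_Get_Source_Data would end up looking roughly like this (a sketch only; the input schema name CUST_MAST_EU_TAX_NO_ENRICHED is assumed from this example and may differ in your Data Flow):

   LEGACY_CUSTOMER_NUMBER_1:  lpad(CUST_MAST_EU_TAX_NO_ENRICHED.LEGACY_CUSTOMER_NUMBER_1, 10, '0')
   COUNTRY_KEY:               rtrim_blanks_ext(ltrim_blanks_ext(CUST_MAST_EU_TAX_NO_ENRICHED.COUNTRY_KEY))
   VAT_REGISTRATION_NO:       rtrim_blanks_ext(ltrim_blanks_ext(CUST_MAST_EU_TAX_NO_ENRICHED.VAT_REGISTRATION_NO))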


In our example, we have only one data column to be compared. If you have more, add them to the output schema in pairs, that is, one from the ENRICHED table source followed by one from the SAP source.
29. When you have finished copying over the pairs, select the Where tab.
30. The key for the join here needs to be changed as follows:
Q_Get_Source_Data.LEGACY_CUSTOMER_NUMBER_1 = Q_Get_SAP_Columns.KUNNR and Q_Get_Source_Data.COUNTRY_KEY = Q_Get_SAP_Columns.LAND1
31. Choose the Back button to return to the Data Flow and open the Pivot transform object. We need to make a number of changes here.
32. The non-pivot columns need to be changed: delete the current value CUSTOMER_NUMBER_1 and drag and drop LEGACY_CUSTOMER_NUMBER_1 and COUNTRY_KEY from the input schema.
33. Next, delete the columns BANK_COUNTRY_KEY, BANK_NUMBER, and BANK_ACCOUNT_NUMBER from the Pivot Columns and drag and drop the VAT_REGISTRATION_NUMBER column from the input schema.
34. Select Pivot Set 2, delete the columns BANKS, BANKL, and BANKN, and drag and drop the column STCEG from the input schema.
35. Choose the Back button to return to the Data Flow.
36. Next, open the query Q_Format_Columns.
37. Here we need to change the mapping of both the SOURCE_TABLE column (change it to CUST_MAST_EU_TAX_NO_ENRICHED) and the TARGET_TABLE column (change it to KNAS). In this case we also need to change the ROW_ID mapping, as it represents the primary key of the source ENRICHED table; change it to the following:
Pivot.LEGACY_CUSTOMER_NUMBER_1 || Pivot.COUNTRY_KEY
38. The final step is to choose the Back button and then perform a Validate All. If you only see yellow warnings, the Data Flow is OK and you can perform a test run of the entire Customer object.
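For reference, after step 37 the three changed mappings in Q_Format_Columns would read along the following lines (a sketch; that SOURCE_TABLE and TARGET_TABLE are mapped as quoted string literals is an assumption based on the table names being fixed values):

   SOURCE_TABLE:  'CUST_MAST_EU_TAX_NO_ENRICHED'
   TARGET_TABLE:  'KNAS'
   ROW_ID:        Pivot.LEGACY_CUSTOMER_NUMBER_1 || Pivot.COUNTRY_KEY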

6.5 Adding a New Object
To add a new object, we need to perform a number of steps that are similar to adding a new group of segments. Therefore, for each specific segment you want to add for the new object, you need to follow the instructions in this section and then the instructions in section 6.4: Adding a New Segment to an Existing Object.

1. Open the job Job_DM_Reconcile and then open the Variables and Parameters window (via Tools / Variables on the menu bar). We first need to add a new global variable, so right-click Global Variables and select Insert.
2. Scroll to the bottom of the list and double-click $NewJobGlobalVariable1. Change the name to $G_Reconcile_XXXX, where XXXX is your new object name, for example $G_Reconcile_CostCentre. Set the data type to varchar(1) and the default value to 'Y'.

We use Cost Center as the example from here on; substitute the name of the object you are adding.
3. Choose OK when done and close the Variables and Parameters window.
4. Now open the Work Flow WF_DM_Reconcile.
5. We need to add a new Work Flow, so go to the Work Flow tab and scroll down to WF_DM_Reconcile_Customer. Right-click it and select Replicate. It does not matter which of the reconciliation Work Flows you replicate, but in this example we have been working with Customer to date.
6. You will see a new Work Flow named Copy_1_WF_DM_Reconcile_Customer. Right-click it, select Rename, and change the name to WF_DM_Reconcile_CostCentre. Then drag and drop it onto the Design Workspace at the end of the current chain of Work Flow objects and connect it to the end.
7. Open the new Work Flow and rename the Conditional from C_Reconcile_Customer to C_Reconcile_CostCentre. Then open the Conditional.
8. We have to change the name of the global variable that the Conditional checks, from $G_Reconcile_Customer to $G_Reconcile_CostCentre (see the sketch after this list). This enables us to control at runtime which objects are reconciled.
9. The next step is to delete all the existing Data Flows here, as they relate to the Customer object. You then need to add a new Data Flow, as described in the previous section, for each segment or ENRICHED table of your new object.
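As a sketch of step 8, the If expression of the Conditional C_Reconcile_CostCentre would then read as follows (assuming the delivered Conditionals simply compare the global variable with 'Y', in line with the default value set in step 2):

   $G_Reconcile_CostCentre = 'Y'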

7 Extending the Migration Services Application and the Data Profiling Facility (including new lookup tables)
The Migration Services application (and the data profiling facility) relies on a number of management tables to work out which objects and segments you have installed and which lookup tables are associated with each job, segment, and column. Therefore, if you add a new job or segment, you have to update these tables for the Migration Services application to be able to see them. To do this, you need to add additional rows of data to the following files (which by default are in the directory c:\migration for an ERP installation):


• LOOKUPS.csv – defines all of the possible lookup tables

• VIEW_TO_DF_NAME.csv – defines all of the views and where they are used by Data Flows in Data Services

• LOOKUP_WHERE_USED.csv – defines where each lookup table is used by a Data Services Data Flow

• LOOKUP_TAB_STRUCTURE.csv – defines the structure and source of a specific lookup table

The document DM_BPD_Lookup_Content.doc (available with SAP Note 1710056) discusses each of these tables in more detail and explains how to load and use the new objects and configurations. In brief, you need to add any new lookup tables to the LOOKUPS.csv, LOOKUP_WHERE_USED.csv, and LOOKUP_TAB_STRUCTURE.csv files and then run the job Job_DM_Lookups_Initialise with the correct global variables set (see section 5.1 of that document for example usage). If you are adding new segments (or new jobs), you also need to add rows to VIEW_TO_DF_NAME.csv before you run Job_DM_Lookups_Initialise.

7.1 Customer Identification Numbers Example
For the example specified in section 4: Adding a New Migration Cockpit Segment, we make the following changes to the files specified above:

• LOOKUPS.csv – No changes required, as this new segment only references the existing lookup table LKP_CRM_BP_ID_CATEG.


• VIEW_TO_DF_NAME.csv – Add the following entry as a new row at the end of the existing csv file:
  ID =
  VIEW_NAME = Customer Master MC Identification Numbers
  DF_NAME = DF_DM_CustomerIdentNumbers_Validate
  OBJECT_NAME = Customer_MC
  JOB_NAME = Job_DM_Customer_MC
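Purely as an illustration, the new row might then look like this (a sketch: the delimiter and column order are assumed to match the existing rows in your csv file, and <next ID> stands for the next sequential ID value):

   <next ID>,Customer Master MC Identification Numbers,DF_DM_CustomerIdentNumbers_Validate,Customer_MC,Job_DM_Customer_MC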

If you want the new segment lookups to appear together with the other segments of the related object, that is, all customer segments together, insert the record next to the other related entries and increment the existing ID numbers so that they run sequentially.

• LOOKUP_WHERE_USED.csv – Add the following entry as a row inserted after the other Customer Master lookup entries:
  OBJECT_NAME = Customer_MC
  DF_NAME = DF_DM_CustomerIdentNumbers_Validate
  LKPTAB_NAME = LKP_CRM_BP_ID_CATEG
  SOURCE_TABLE_COLUMN = CUST_MC_IDENT_NUMBERS_MAP;*+TYPE
  COMMENTS =
  RELEASED_ON = (in the format YYYY.MM.DD HH24:MI:SS)
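Again purely as an illustration, assuming the same comma-delimited layout as the surrounding rows (the timestamp is a hypothetical value in the stated format, and COMMENTS is left empty):

   Customer_MC,DF_DM_CustomerIdentNumbers_Validate,LKP_CRM_BP_ID_CATEG,CUST_MC_IDENT_NUMBERS_MAP;*+TYPE,,2013.10.01 00:00:00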


• LOOKUP_TAB_STRUCTURE.csv – No changes need to be made to this file in this example, as we are not adding a new lookup table, only referencing an existing one, LKP_CRM_BP_ID_CATEG.
