MDM Developers


About this blog

The MDM Developers Community is open to any developer using IBM's Master Data Management software who wants to connect with other developers, stay posted on the latest MDM technical resources, learn how to get the most out of MDM tools, ask questions, and share tips and answers to technical problems.


Purging suspect duplicate tasks for a list of parties in IBM Stewardship Center

Posted by geeta pulipaty

Author: Pavan Kumar Patil

Product: InfoSphere Master Data Management
Component: IBM Stewardship Center
Version: 11.5.0

Other prerequisite software: IBM Business Process Manager 8.5.6, IBM Process Designer 8.5.6, and IBM Stewardship Center 11.5.0 installed and configured.

IBM Stewardship Center provides the ability to perform reactive data stewardship activities around Physical MDM. When a suspect is created in MDM, a notification is sent to BPM and a Suspected Duplicate Process (SDP) task is created in the BPM Process Portal for a data steward to act upon. The default ISC implementation creates all SDP tasks for a single group called DataStewardGroup.

This document explains how to delete the suspect tasks for a given list of parties, for example because those tasks are no longer valid after other collapses in MDM.

 

1. Open the Physical MDM Suspected Duplicates process application in editable mode in IBM Process Designer, or create a new process application and follow the rest of the steps.

2. In the process application, create a new exposed process value (EPV) named “PartyIdListEpv”. To do so, select Data -> Exposed Process Value.

image

 

image

 

3. Add an Exposed Process Value variable named “partyIdList”.

image

 

4. Create a new “General System Service” from the Implementation option in the left-side pane and name it “Delete Process Instance”.

5. Go to the Variables tab, click “Link EPV”, and select “PartyIdListEpv”.

6. Go to the Diagram tab inside the service.

7. Create a new server script by dragging the Server Script box from the right-side pane and name it “Get all active SDP tasks and terminate those mentioned in the list”.

8. In the Implementation section of the Properties tab, add the code below:

   var processName = "Resolve Suspected Duplicate process";

   // Read the comma-separated party ID list from the linked EPV.
   var partyIdStore = tw.epv.PartyIdListEpv.partyIdList + "";
   var partyIdArray = partyIdStore.split(",");

   // Search condition: process name.
   var conProcessType = new TWSearchCondition();
   conProcessType.column = new TWSearchColumn();
   conProcessType.column.name = TWSearchColumn.ProcessColumns.Name;
   conProcessType.column.type = TWSearchColumn.Types.Process;
   conProcessType.operator = TWSearchCondition.Operations.Equals;
   conProcessType.value = processName;

   // Search condition: only active process instances.
   var conStatus = new TWSearchCondition();
   conStatus.column = new TWSearchColumn();
   conStatus.column.name = TWSearchColumn.ProcessInstanceColumns.Status;
   conStatus.column.type = TWSearchColumn.Types.ProcessInstance;
   conStatus.operator = TWSearchCondition.Operations.Equals;
   conStatus.value = "Active";

   // Execute the search; it returns all active SDP process instances.
   var search = new TWSearch();
   search.conditions = new Array(conProcessType, conStatus);
   var processInstances = search.executeForProcessInstances();

   // Terminate every instance whose partyId appears in the EPV list.
   for (var j = 0; j < partyIdArray.length; j++) {
       for (var i = 0; i < processInstances.length; i++) {
           var instanceId = processInstances[i].businessData.get("partyId");
           if (instanceId === partyIdArray[j]) {
               processInstances[i].abort();
               break;
           }
       }
   }

image
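If the EPV value is entered with spaces after the commas (for example “123, 456”), the exact string comparison above will not match. The snippet below is a minimal, hypothetical variant of the same loop, assuming the processInstances result from the search above and the standard BPM server-side log.info() JavaScript API; it trims whitespace from each ID and logs how many instances were terminated. It is a sketch, not part of the shipped ISC implementation.

   // Hypothetical variant: tolerate spaces around the comma-separated party IDs
   // and log the number of terminated instances.
   var rawList = tw.epv.PartyIdListEpv.partyIdList + "";
   var partyIds = rawList.split(",");
   var terminated = 0;
   for (var j = 0; j < partyIds.length; j++) {
       // Strip surrounding whitespace, e.g. " 456" -> "456".
       var targetId = partyIds[j].replace(/^\s+|\s+$/g, "");
       for (var i = 0; i < processInstances.length; i++) {
           if (processInstances[i].businessData.get("partyId") === targetId) {
               processInstances[i].abort();
               terminated++;
               break;
           }
       }
   }
   log.info("Terminated " + terminated + " SDP process instance(s).");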

 

9. Create a “Heritage Human Service” from the User Interface option in the left-side pane.

10. Name it “Delete Admin Service”.

11. Go to the Variables tab, click “Link EPV”, and select “PartyIdListEpv”.

12. On the Overview tab, set “Exposed to” to “All Users” and “Exposed as” to “Administration Service”. Attach a nested service here that points to "Delete Process Instance" and connect it to the start and end points. Refer to the screenshots below.

image

 

image

 

13. Save all.

14. If you are using a Process Server environment, create a new snapshot from Process Designer and install it on the Process Server so that the new administrative service is available to users in the Process Admin console. If you are using a Process Center environment, all changes are available in the Tip version of the snapshot.

15. To set the list of party IDs, go to the Process Admin console, expand the “Admin Tools” option in the left-side pane, and click “Manage EPV”. From the snapshot dropdown select Physical MDM Suspected Duplicates, and from the name dropdown select “PartyIdListEpv”. Set the value of “partyIdList” to a comma-separated string of party IDs (for example, 1234,5678). These are the party IDs for which suspect tasks are checked and removed if present.

16. In the same Process Admin console, expand the "Physical MDM Suspected Duplicates" option in the left-hand pane and click the "Delete Admin Service" option. This step may take some time depending on the number of tasks in the system. Once done, it gives a message saying "The service has finished".

 

Note: These steps work for any version of IBM Stewardship Center from 11.5.0 onward.

 

Setting up security in MDM CE

Posted by Andy Ousterhout

I have spent the last three years playing with MDM CE and trying to figure out how to make this powerful product easy to use and understand. I've finally gotten around to how one can set up an easy-to-use security framework that can actually simplify the application for users. After much talk with Anup Gandhi, our Chief Engineer, and Bryce Crapse, who is part of IBM's MDM Services organization, we have come up with an approach that frankly makes this fairly easy... or at least you will let me know if you find it so.

  1. Create a role called "ACG Creation" which is used only for creating these ACGs. Do this from the Admin UI (which is what we are calling the UI used in version 11.6 to maintain the system, and the only UI in earlier versions) by navigating to:
    • Data Model
    • Security
    • Role
  2. The starting point is to set up an ACG that will be related to each object, and to name them clearly. I recommend naming the ACGs with the pattern [Object Name] [Object Type]. So the catalog with the name "Products" would have an associated ACG called "Product Catalog ACG". Create them under:
    • Data Model
    • Security
    • Attribute Control Groups
    • Attribute Control Group Console
  3. Assign each ACG to the ACG Creation Role, and select this and nothing else:
    • Catalog List if of type Catalog
    • Hierarchy List if of type Hierarchy
    • Collaboration List if of type Collaboration Area
  4. Assign the related object to the ACG
    • Data Model
    • Security
    • Attribute Control Groups
    • Object to Attribute Control Group Mapping
  5. Create each role and select the ACGs related to the objects that the role should have access to, then select the additional access privileges if the related object is a Catalog or Hierarchy. The related privileges for Collaboration Areas are defined by the workflow and workflow steps. In addition, in order for the person to be able to select from associated lookup tables, the following settings in the default ACG will also need to be set:
    • Select Catalog List & View
    • Select List View
  6. As part of Version 11.6, we introduced a Business User Experience that leverages these same ACGs, Roles, and Users to control object access, and introduced a JSON configuration to simplify access to various buttons on the screen. I'll create a separate blog on this, but suffice it to say that the JSON links a Role in CE with the screen layout in the Business UX.
  7. Create your Users and assign them the appropriate roles, including one or more of the roles related to the Business UX, and all should be well with the world.

Note that this format has the embedded concept of not including the object type in the object name. Since object names are currently what gets displayed to users, I am not currently aware of why a business user needs to know the difference between a catalog, hierarchy, item or category... but that is for another blog.

 

Let me know what you think, and please add your other ideas on how to simplify. And please use the forums here to ask me questions. I will do my best to get you answers.

 

Regards,

Andy Ousterhout

Senior Offering Manager

IBM Master Data Management Collaborative Edition

The “env” command does not exist on an Ubuntu machine, causing IOException: Cannot run program “env”: error=2. Find the fix here.

Posted by RakhiMaheshwari

Issue:

For Master Data Management v11.6.0.x and Reference Data Management v11.6.0.x on an Ubuntu machine, the exception below may be noticed while running the madconfig.sh utility.

Buildfile: build.xml
java.io.IOException: Cannot run program "env": error=2, No such file or directory

 

Reason:

This exception occurs because the ‘env’ program is not installed on the Ubuntu machine.

This can be confirmed by executing the command env from the shell:

<MDM-INSTALL-DIR>/mds/scripts$ env

The program 'env' is currently not installed. You can install it by typing:
sudo apt-get install coreutils.

 

Solution:

To install the "env" program on the Ubuntu box, follow the steps below:

  1. Run the command below:

sudo apt-get install coreutils

  2. Verify madconfig.sh:

                 <MDM-INSTALL-DIR>/mds/scripts/madconfig.sh

                  Or

                <RDM-INSTALL-DIR>/mds/scripts/madconfig.sh

Purging all MDM Suspect Duplicate tasks in IBM Stewardship Center

Posted by geeta pulipaty

Author: Geetha S Pullipaty

Product: InfoSphere Master Data Management
Component: Data Stewardship and Governance
Version: 11.5.0

Other prerequisite software: IBM Business Process Manager 8.5.6, IBM Process Designer 8.5.6, and IBM Stewardship Center 11.5.0 installed and configured.

IBM Stewardship Center provides the ability to perform reactive data stewardship activities around Physical MDM. When a suspect is created in MDM, a notification is sent to BPM and a Suspected Duplicate Process (SDP) task is created in the BPM Process Portal for a data steward to act upon. The default ISC implementation creates all SDP tasks for a single group called DataStewardGroup.

This document explains how to delete all of the suspect tasks that are created.

1. Open the Physical MDM Suspected Duplicates process application in IBM Process Designer.

2. Create a new “General System Service” from the Implementation option in the left-side pane.

3. Name the service “Delete Process Instance”.

4. Go to the Diagram tab inside the service.

5. Create a new server script by dragging the Server Script box from the right-side pane and name it “Get all active SDP tasks and terminate them”.

6. In the Implementation section of the Properties tab, add the code below:

   var processName = "Resolve Suspected Duplicate process";

   // Search condition: process name.
   var conProcessType = new TWSearchCondition();
   conProcessType.column = new TWSearchColumn();
   conProcessType.column.name = TWSearchColumn.ProcessColumns.Name;
   conProcessType.column.type = TWSearchColumn.Types.Process;
   conProcessType.operator = TWSearchCondition.Operations.Equals;
   conProcessType.value = processName;

   // Search condition: only active process instances.
   var conStatus = new TWSearchCondition();
   conStatus.column = new TWSearchColumn();
   conStatus.column.name = TWSearchColumn.ProcessInstanceColumns.Status;
   conStatus.column.type = TWSearchColumn.Types.ProcessInstance;
   conStatus.operator = TWSearchCondition.Operations.Equals;
   conStatus.value = "Active";

   // Execute the search; it returns all active SDP process instances.
   var search = new TWSearch();
   search.conditions = new Array(conProcessType, conStatus);
   var processInstances = search.executeForProcessInstances();

   // Terminate each active process instance.
   for (var i = 0; i < processInstances.length; i++) {
       var instanceId = processInstances[i].id;
       processInstances[i].abort();
   }

 

  image
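Before running the admin service against a busy system, it can help to confirm what would be terminated. The snippet below is a minimal, hypothetical “dry run” variant of the same script, assuming the search object built above and the standard BPM server-side log.info() JavaScript API; it only logs the matching instances instead of aborting them. It is a sketch, not part of the shipped ISC implementation.

   // Hypothetical dry run: list the active SDP instances without aborting them.
   var processInstances = search.executeForProcessInstances();
   log.info("Active SDP process instances found: " + processInstances.length);
   for (var i = 0; i < processInstances.length; i++) {
       log.info("Would terminate process instance with id " + processInstances[i].id);
   }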

 

7. Create a “Heritage Human Service” from the User Interface option in the left-side pane.

8. Name it “Delete Admin Service”.

9. On the Overview tab, set “Exposed to” to “All Users” and “Exposed as” to “Administration Service”. Refer to the screenshot below.

image

 

10. In the diagram, add a new Nested Service and name it “Delete Process Instance”.

11. In its implementation, attach the nested service created in step 3.

image

 

12. Go to the Process Admin Console and expand the “Physical MDM Suspected Duplicates” option in the left-side pane.

Note: If you are using a Process Server environment, create a new snapshot from Process Designer and install it on the Process Server so that the new administrative service is available to users in the Process Admin console.

13. Click the “Delete Admin Service” option. This step may take some time depending on the number of tasks in the setup.

14. The screen below is displayed, which means all the SDP tasks are terminated.

image

15. Go to the WebSphere Application Server profile bin directory, which is something similar to “C:\IBM\WebSphere\profiles\AppSrv01\bin”.

16. Execute the commands below, adjusted for your configuration, to clear all the deleted tasks from BPM. For details on running these commands, refer to https://www.ibm.com/support/knowledgecenter/en/SSFTBX_8.5.6/com.ibm.wbpm.ref.doc/topics/rref_BPMProcessInstancesCleanup.html

wsadmin -conntype SOAP -port <soap_port> -host <hostname> -user admin -password admin -lang jython

AdminTask.BPMProcessInstancesCleanup('[-containerAcronym MDMSDP -containerSnapshotAcronym <acronym_snapshot_name> -instanceStatus ALL]')

17. After this step, all the SDP tasks are removed.

MDM Post Install Configuration Targets - modify_default_queues

Posted by A Chitra

Target modify_default_queues
Modifies queue names in the Java Messaging Services configuration of the application server on which MDM is installed.

 

When to use:
When MDM is configured to use WebSphere MQ (WMQ), certain default queue names are used. To modify this configuration to use custom queue names, the post-configuration target modify_default_queues can be used.

 

Inputs obtained:
The target reads the below properties from file <MDM_INSTALL_DIR>/properties/post_config.properties
# The MQ queue name for AsynchronousWorkQueue, the default name is MDM.ASYNCHRONOUS.WORK.
mqAsynchronousWorkQueue =
# The MQ queue name for Customer Completed Work, the default name is CUSTOMER.COMPLETED.WORK
mqCustomerCompletedWork =
# The MQ queue name for Customer Integration, the default name is CUSTOMER.INTEGRATION
mqCustomerIntegration =
# The MQ queue name for Customer Scheduled Work, the default name is CUSTOMER.SCHEDULED.WORK
mqCustomerScheduledWork =
# The MQ queue name for Customer Tail, the default name is CUSTOMER.TAIL
mqCustomerTail =
# The MQ queue name for EMQUEUE, the default name is EMQUEUE
mqEMQUEUE =
# The MQ queue name for MDM Change Broadcast Queue, the default name is MDM.BROADCAST
mqMDMChangeBroadcastQueue =
# The MQ queue name for MDM Messaging Backout Queue, the default name is MDM.MESSAGING.BACKOUT
mqMDMMessagingBackoutQueue =
# The MQ queue name for MDM Messaging Failed Response Queue, the default name is MDM.MESSAGING.FAILED_RESPONSE
mqMDMMessagingFailedResponseQueue =
# The MQ queue name for MDM Messaging Request Queue, the default name is MDM.MESSAGING.REQUEST
mqMDMMessagingRequestQueue =
# The MQ queue name for MDM Messaging Successful Response Queue, the default name is MDM.MESSAGING.SUCCESSFUL_RESPONSE
mqMDMMessagingSuccessfulResponseQueue =
# The MQ queue name for MDS Queue, the default name is MDS.QUEUE
mqMDSQueue =
# The MQ queue name for Flex Queue, the default name is FLEX.QUEUE
mqFlexQueue =

Please fill in the above file before invoking the target.

Properties pertaining to MDM application server are read from <MDM_INSTALL_DIR>/properties/mdm_install.properties file.

 

Invocation:

  • When the operating system is Windows:
    • Go to <MDM_INSTALL_DIR>/mds/scripts
    • Invoke madconfig modify_default_queues
  • In other operating systems
    • Go to <MDM_INSTALL_DIR>/mds/scripts
    • Invoke ./madconfig.sh modify_default_queues

 

Tasks performed:

  1. Modifies the queue names in the Java Messaging Services (JMS) configuration in the application server based on inputs provided in <MDM_INSTALL_DIR>/properties/post_config.properties

 

Logs:

Logs can be found at

  • <MDM_INSTALL_DIR>/logs/madconfig/java

 

Available in:

  • MDM v11.5
  • MDM v11.6

MDM Post Install Configuration Targets - switch_to_mq

Posted by A Chitra

Target switch_to_mq
Uninstalls the configuration related to WebSphere Embedded Messaging (WEM) and configures the application server to use WebSphere MQ queues.

 

When to use:
When MDM has been installed using WebSphere Embedded Messaging (WEM) and has to be configured to use WebSphere MQ (WMQ), the target switch_to_mq can be used.

 

Inputs obtained:
The target reads the below properties from file <MDM_INSTALL_DIR>/properties/post_config.properties
# MQ server host name
messagingHost=<MQ-host>
# MQ server listener port
messagingPort=1414
# MDM user name to access to MQ server
messagingUser=<MQ-user>
# MDM user password to access to MQ server
messagingPassword=<MQ-user-password>
# The MQ Queue Manager name for MDM server
messagingQueueManager=<MDM-QMGR>
# The MQ server connection channel name for MDM server
messagingChannel=<MDM-SVR-CHANNEL>
# The MQ queue transport type for MDM server, CLIENT or BINDING
messagingTransport=CLIENT
# The MQ server home path
messagingHomePath=/usr/mqm

Please fill in the above file before invoking the target.

Properties pertaining to MDM application server are read from <MDM_INSTALL_DIR>/properties/mdm_install.properties file.
Please note that the default queue names are used during configuration.  The default queue names can be found in file <MDM_INSTALL_DIR>/properties/post_config.properties

 

Invocation:

  • When the operating system is Windows:
    • Go to <MDM_INSTALL_DIR>/mds/scripts
    • Invoke madconfig switch_to_mq
  • In other operating systems
    • Go to <MDM_INSTALL_DIR>/mds/scripts
    • Invoke ./madconfig.sh switch_to_mq

 

Tasks performed:

  1. Removes WEM related configuration from the application server on which MDM is installed.
  2. Configures the server to use WMQ by obtaining input from <MDM_INSTALL_DIR>/properties/post_config.properties

 

Logs:

Logs can be found at

  • <MDM_INSTALL_DIR>/logs/madconfig/java

 

Available in:

  • MDM v11.5
  • MDM v11.6

 

Modify MDM and RDM installation.

Posted by RaghuVenkatesh

From MDM v11.6 onwards, the Modify option for MDM and RDM from Installation Manager is not supported.

To support the modify scenario, new post-configuration targets Modify_MDM and Modify_RDM were introduced for MDM and RDM respectively. These targets help users add or remove UIs from the current installation.

During the installation, an option to include or skip UI installation is provided on all of the UI panels, as shown below.

 

Business Administration panel:

image

Inspector panel:

image

In the above installation scenario, only the Inspector UI is selected and the remaining UIs are skipped.

After the extraction phase completes in Installation Manager, the target Configure_MasterDataManagement or Configure_ReferenceDataManagement is invoked for MDM or RDM respectively.

Now, to add or remove UIs, the user has to invoke the Modify_MDM target from the <MDM_HOME>/mds/scripts directory. This target provides an interactive way of adding or removing UIs.

image

Adding UI to MDM installation:

If the user chooses the Add option, the target lists the UIs that are already installed and the UIs that are available to install.

image

Once the user selects a particular UI, the target asks for the input values for its installation. The user is also given the option to use the default values that were used for the MDM operational server install (except passwords). To use the default values, simply press the Enter key at the prompt.

image

After taking the input values, the target performs a WebSphere Application Server status check against the user input. When the entries are valid, deployment of the UI starts and completes successfully. All the input values are updated in the property file <MDM_INSTALL_DIR>/properties/mdm_install.properties and will be used for post-configuration processes. Passwords are not saved.

image

Removing UI from MDM installation:

At the operation type selection, if the user chooses the remove UI option, the target shows the UIs that are part of the MDM installation, as below.

image

Once a particular UI is selected for removal, the relevant passwords are expected to be entered at the prompt.

image

A WebSphere Application Server status check is performed, and the target uninstalls the UI if the requirement is met.

image

Note: For the RDM modify scenario, the target to be used is Modify_RDM. The available UI is the Business Administration UI.

 

Does Configure_MasterDataManagement fail with Return Code: -1073741515? Find the fix here

Posted by A Chitra

For Master Data Management v11.5.0 and Master Data Management v11.6.0 installations on Windows, installation of Microsoft Visual C++ is a prerequisite.

 

Issue

The exception below will be noticed in <MDM_INSTALL_DIR>/logs/madconfig/Configure_MasterDataManagement_<TIMESTAMP>.log when Microsoft Visual C++ is not installed.

test_datasource:

Running ODBC SQL statement [select 1 from sysibm.sysdummy1;]...
Executing C:\Program Files\IBM\MDM\mds\bin\madsql.exe
Result: -1073741515
Return Code: -1073741515, Time elapsed: 0.053 sec

The error occurs because certain libraries that come with Microsoft Visual C++ are missing.

 

Solution

1. Download Microsoft Visual C++ from https://www.microsoft.com/en-gb/download/details.aspx?id=30679 and install it.

2. Ensure that the file msvcp110.dll is present on the system.

3. Open a new command prompt

4. Go to <MDM_INSTALL_DIR>/mds/scripts

5. Invoke madconfig Configure_MasterDataManagement

MDM Post Install Configuration Targets - install_new_db_lang_code

Posted by A Chitra

Target install_new_db_lang_code

Inserts MDM gold data corresponding to the locales provided as input.

When to use:

During MDM installation, the list of locales (Code Languages) for which MDM gold data is to be inserted into the database is obtained as input. When MDM gold data has to be inserted for one or more additional locales after MDM installation and configuration, this target can be used.

Invocation:

When the operating system is Windows:

  • Go to <MDM_INSTALL_DIR>/mds/scripts
  • Invoke madconfig install_new_db_lang_code

In other operating systems

  • Go to <MDM_INSTALL_DIR>/mds/scripts
  • Invoke ./madconfig.sh install_new_db_lang_code

Inputs obtained:

  • The database password
  • Comma separated list of one or more locales

Other details on database configuration are obtained from mdm_install.properties found in <MDM_INSTALL_DIR>/properties folder.  (In MDM v11.5, other details are obtained from <MDM_INSTALL_DIR>/properties/db.properties)

Tasks performed:

  • Inserts rows in tables that have locale specific gold data, corresponding to the locales provided by the user

Logs:

Logs can be found at

  • <MDM_INSTALL_DIR>/logs/madconfig/java

Available in:

  • MDM v11.5
  • MDM v11.6

MDM Post Install Configuration Targets - redeploy_user_interface

Posted by A Chitra

Target redeploy_user_interface

Uninstalls and installs a specified user interface application.

Invocation:

When the operating system is Windows:

  • Go to <MDM_INSTALL_DIR>/mds/scripts
  • Invoke madconfig redeploy_user_interface

In other operating systems

  • Go to <MDM_INSTALL_DIR>/mds/scripts
  • Invoke ./madconfig.sh redeploy_user_interface

Inputs obtained:

  • The user interface that has to be redeployed (amongst the deployed user interfaces)
  • The WebSphere Application Server administrator user password corresponding to the server in which the UI is installed
  • The WebSphere Application Server administrator user password corresponding to the server in which MDM is installed

Other details on application configuration are obtained from mdm_install.properties found in <MDM_INSTALL_DIR>/properties folder.  (In MDM v11.4 FP3 and MDM v11.5, other details are obtained from the properties file corresponding to the UI in directory <MDM_INSTALL_DIR>/properties)

Tasks performed:

  • Uninstalls the web application (UI) specified by the user
  • Starts the application server on which the web application has to be installed.
  • Configures the web application based on details from the properties file and user input
  • Installs the web application on the server
  • Restarts the server on which the web application has been deployed

Logs:

Logs can be found at

  • <MDM_INSTALL_DIR>/logs/madconfig
  • <MDM_INSTALL_DIR>/logs/madconfig/java

Available in:

  • MDM v11.4 FP3
  • MDM v11.5
  • MDM v11.6

Note:

The properties file(s) that contains configuration details for the web application can be found at:

  • Business Administration UI: <WAS_PROFILE_HOME>/installedApps/<CELL>/ba-App-<INSTANCE_ID>.ear/propertiesUI.jar
  • Web Reports: <WAS_PROFILE_HOME>/installedApps/<CELL>/webreports-App-<INSTANCE_ID>.ear/webreports.war/WEB-INF/classes/webreports.properties
  • Inspector: <WAS_PROFILE_HOME>/installedApps/<CELL>/inspector-App-<INSTANCE_ID>.ear/inspector.war/WEB-INF/classes/inspector.properties
  • Enterprise Viewer: <WAS_PROFILE_HOME>/installedApps/<CELL>/enterpriseviewer-App-<INSTANCE_ID>.ear/viewer.war/WEB-INF/classes/ContextManager.prop

 

 

MDM Post Install Configuration Targets - redeploy_native_ear

Posted by A Chitra

Target redeploy_native_ear

Uninstalls and installs the native engine, which is the core virtual MDM component.

When to use:

In cases where native engine deployment failed during MDM configuration.

Invocation:

When the operating system is Windows:

  • Go to <MDM_INSTALL_DIR>/mds/scripts
  • Invoke madconfig redeploy_native_ear

In other operating systems

  • Go to <MDM_INSTALL_DIR>/mds/scripts
  • Invoke ./madconfig.sh redeploy_native_ear

Inputs obtained:

  • The WebSphere Application Server administrator user password
  • The database password

Other details on application configuration are obtained from mdm_install.properties found in <MDM_INSTALL_DIR>/properties folder.  (In MDM v11.4 FP3 and MDM v11.5, other details are obtained from <MDM_INSTALL_DIR>/properties/app.properties)

Tasks performed:
The native engine is the core virtual MDM component and has certain configuration files, including the ODBC data source configuration file.
This target:

  • Provides details on the WebSphere Application Server environment
  • Obtains user consent to proceed with native ear redeployment
  • Checks the server status
  • Uninstalls the native engine
  • Installs the native engine and generates the configuration files

Logs:

Logs can be found at

  • <MDM_INSTALL_DIR>/logs/madconfig
  • <MDM_INSTALL_DIR>/logs/madconfig/java

Available in:

  • MDM v11.4 FP3
  • MDM v11.5
  • MDM v11.6

Note:

  • The configuration files can be found at <WAS_PROFILE_DIR>/installedApps/<CELL>/MDM-native-<INSTANCE_ID>.ear/native.war/conf folder.
  • Please restart the application server after invoking redeploy_native_ear.

MDM Post Install Configuration Targets - redeploy_EBA

Posted by A Chitra

Target redeploy_EBA

Uninstalls and installs the Enterprise Business Application (EBA) and performs security configuration for the EBA.

Invocation:

When the operating system is Windows:

  • Go to <MDM_INSTALL_DIR>/mds/scripts
  • Invoke madconfig redeploy_EBA

In other operating systems

  • Go to <MDM_INSTALL_DIR>/mds/scripts
  • Invoke ./madconfig.sh redeploy_EBA

Inputs obtained:

  • The WebSphere Application Server administrator user password
  • MDM Administrator user password

Other details on application configuration are obtained from mdm_install.properties found in <MDM_INSTALL_DIR>/properties folder.  (In MDM v11.4 FP3 and MDM v11.5, other details are obtained from <MDM_INSTALL_DIR>/properties/app.properties)

Tasks performed:
The MDM Operational Server is a Business Level Application that comprises the EBA and a jar file containing crucial property files that determine the MDM application's functionality.
This target:

  • Provides details on the WebSphere Application Server environment
  • Obtains user consent to proceed with OSGI bundles redeployment
  • Checks the server status
  • Uninstalls the jar file that holds properties files
  • Uninstalls the enterprise business application
  • Installs the enterprise business application
  • Installs the jar file that contains property files
  • Performs security configuration for the enterprise business application
  • Restarts the application server

Logs:

Logs can be found at

  • <MDM_INSTALL_DIR>/logs/madconfig
  • <MDM_INSTALL_DIR>/logs/madconfig/java

Available in:

  • MDM v11.4 FP3
  • MDM v11.5
  • MDM v11.6

 

MDM Post Install Configuration Targets - recreate_database

Posted by A Chitra

Target recreate_database

Helps you obtain a vanilla instance of the MDM database with the gold data.

When to use:

  • In test instances, where test data has to be removed and the instance can be re-used.
  • In cases where database load failed during MDM configuration, due to issues like missing tablespace(s)

Please Note:

All tables in the schema in which MDM is configured will be dropped. A vanilla instance of the MDM database will be created, containing the database objects and gold data that come with the MDM installer. Customizations and rows inserted on a functional MDM instance will be lost. Hence, it is advised to use this target in development and test instances only.

Invocation:

When the operating system is Windows:

  • Go to <MDM_INSTALL_DIR>/mds/scripts
  • Invoke madconfig recreate_database

In other operating systems

  • Go to <MDM_INSTALL_DIR>/mds/scripts
  • Invoke ./madconfig.sh recreate_database

Inputs obtained:

  • The database password
  • The WebSphere Application Server administrator user password

Other details on database configuration are obtained from mdm_install.properties found in <MDM_INSTALL_DIR>/properties folder.  (In MDM v11.4 FP3 and MDM v11.5, other details are obtained from <MDM_INSTALL_DIR>/properties/db.properties)

Tasks performed:

  • Obtains user consent to proceed with database recreation
  • Stops the server
  • Drops all the triggers, foreign keys, indexes, sequences, and tables in the given database schema
  • Removes and creates the ODBC Data Source
  • Creates Virtual MDM database objects and populates Virtual MDM data
  • Creates Physical MDM database objects and populates gold data
  • Updates the Configuration tables with the instance details
  • Starts the server

Please Note: This target does not drop or create tablespaces

Logs:

Logs can be found at

  • <MDM_INSTALL_DIR>/logs/madconfig
  • <MDM_INSTALL_DIR>/logs/madconfig/java
  • <MDM_INSTALL_DIR>/logs/database
  • <MDM_INSTALL_DIR>/mds/log

Available in:

  • MDM v11.4 FP3
  • MDM v11.5
  • MDM v11.6

 

 

MDM Standard Edition with external LDAP

Posted by geeta pulipaty

Configuring MDM Standard Edition for external LDAP

 

  1. Configurations to be done on the WAS Admin console of MDM.
  2. Configurations to be done on the imm file.

 

Configurations to be done on WAS Admin console:

When we use an external LDAP, we have to create federated repositories on MDM WAS. The reason for this is that default groups like mdm_admin, mdm_all_cvws, etc. are created during MDM installation.

The following are the steps to configure MDM for an external LDAP using the WebSphere Application Server Administrative Console.

  1. Open the WAS administrative console of MDM.
  2. Navigate to Security -> Global security. image
  3. Select “Configure” for Federated repositories; the screen will look like this. image
  4. Click on “Manage repositories” link under section “Related Items”.
  5. Add a new LDAP repository. The details of the LDAP repository to be given are in the screenshot below. You need to specify the primary host name and port number for the LDAP that you want to use. Click Apply once the details are given, and this new LDAP repository is added to the list of repositories for this WAS. image image
  6. Now go back to Global security -> Federated repositories. Here, click “Add repositories (LDAP, custom, etc)…” image
  7. It lists the new LDAP repository we have created. Select that repository. We also need to give the unique distinguished name of the base entry in this repository. For the sample LDIF file we have, this would be the string “cn=localhost”. All users and groups from this LDIF file are created under this base entry. It could be different based on your LDAP. image
  8. Click Apply and save the configuration.
  9. Now when we go back to Federated repositories, it looks like this. image
  10. Save all the configurations.
  11. We will now define how to identify groups uniquely in our LDIF file. A snapshot of the way groups are defined in our LDIF file is here: image
  12. We now have to define this in our LDAP configuration in WAS. Go to the following link: image
  13. We have to make sure the Object classes property is set to an appropriate value based on your LDIF or LDAP configuration. In this example it had to be “groupOfUniqueNames”.
  14. We will now define how to get the relationship between users and groups uniquely in our LDIF file. Go to the following link: Global security > Federated repositories > Manage repositories > LDAP1 > Group attribute definition > Member attributes. If not already present, create a new member attribute with “Name of member attribute” set to “uniquemember” and Object class set to “groupOfUniqueNames”. This is specifically for the LDIF configuration shown here, where a user is associated with a group by the uniquemember/groupOfUniqueNames combination.
  15. Save all the configurations and restart the MDM server.

Configurations to be done on IMM in MDM Workbench

  1. Open the imm file and, in the Groups tab, create a new group with the same name as the one in the external LDAP. In our example it is “Group1”.
  2. In all the corresponding sub-sections, i.e., Attributes/Sources, Relationship Attributes, Composite Views, Interactions, and Operations, you need to give the appropriate access for this group.
  3. As an example, I will give users of this group read/write access to legal name and the ability to run a memput interaction. Screenshots for the same:
  4. image image
  5. image
  6. Save the imm file and deploy the hub configuration.
  7. Now when we use SOAP UI to do a memput for the PerLegalName attribute with users present in this group, the transaction is successful. Any other user from the LDAP does not have access, and MDM fails the transaction with an error.

    Tip: To give users from an LDAP group the ability to log in to Inspector, the interactions to which access must be granted are APPGETINFO, DICGET, and USRGETINFO. Once this is done, save the imm file and redeploy the hub configuration.

    Note: If you create a user and group on WAS instead of in the external LDAP, then in addition to making the imm configuration changes, you would need to add the user entry to the mpi_usrhead table.

 

Does getParty transaction retrieve values for attributes added through an Extension?

Posted by A Chitra

Inquiry transactions for certain business objects in MDM use Optimized Transparent SQLs (OTS) to improve performance. These transactions include getParty, getPerson, getOrganization, getContract, and getProductInstance. The OTS feature can be leveraged by setting the value of the property optimized.sql to true in TCRM.properties in com.ibm.mdm.server.resources.properties.jar. By default, this value is set to true.

The SQLs to retrieve the business object and child business objects can be found in table INQLVLQUERY. 

When extensions are added to the business objects, the SQLs to obtain the business objects have to be regenerated so that the new attributes that are part of the extension are also retrieved.  The SQLs in the INQLVLQUERY table can be regenerated using the updateInqLevel transaction.

<?xml version="1.0" encoding="ASCII"?>
<DWLAdminService xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="DWLAdminService.xsd">
    <RequestControl>
        <requestID>100012</requestID>
        <DWLControl>
            <requesterName>cusadmin</requesterName>
            <requesterLanguage>100</requesterLanguage>          
        </DWLControl>
    </RequestControl>
    <DWLTx>
        <DWLTxType>updateInqLevel</DWLTxType>
        <DWLTxObject>DWLInqLevelBObj</DWLTxObject>
        <DWLObject>
            <DWLInqLevelBObj>
                <InquiryLevelId>1008</InquiryLevelId>
                <Description>Level 4 Person Objects(REGENERATED)</Description>
                <InquiryLevelLastUpdateDate>2005-04-13 15:11:28.185003</InquiryLevelLastUpdateDate>
                <GenerateQuery>Y</GenerateQuery>
                <BusinessTxType>44</BusinessTxType>
            </DWLInqLevelBObj>
        </DWLObject>
    </DWLTx>
</DWLAdminService>


When this transaction is executed with the GenerateQuery value set to YES (Y), the query corresponding to the given inquiry level and business transaction gets regenerated in the INQLVLQUERY table, such that it can return Attributes added using Extensions.

The list of Inquiry Levels(InquiryLevelId) and Business Transaction Type Code (BusinessTxType) for which OTS has to be regenerated can be found by executing the below SQL:

select distinct inqlvlquery.inqlvl_id, business_tx_tp_cd from inqlvl, inqlvlquery where inqlvl.inqlvl_id = inqlvlquery.inqlvl_id and description like '%<Business Object>%'

For example, for Person object, the below query can be executed:
select distinct inqlvlquery.inqlvl_id, business_tx_tp_cd from inqlvl, inqlvlquery where inqlvl.inqlvl_id = inqlvlquery.inqlvl_id and description like '%Person%'

Links:
Customizing InfoSphere MDM Server using pluggable business object query and pluggable SQL framework

Installing IBM InfoSphere MDM v11.6 on Oracle using different user name and schema name

Posted by A Chitra

Oracle database automatically creates a schema when a user is created. When a user logs in, the default schema used is the one with the same name as the user.

In order for InfoSphere Master Data Management v11.6 to use a schema that is not the same as the user name, the logon trigger given below has to be created and certain privileges have to be granted to the schema name (user).

 

image

Logon Trigger

The Oracle native driver does not provide a property to specify the schema name when the schema name is not the same as the user name. Hence a trigger has to be executed when the schema that has to be used is different from the user name.

CREATE OR REPLACE TRIGGER LOGON_TRIGGER
  AFTER LOGON ON DATABASE
BEGIN
    IF (USER IN ('<USER>')) THEN
      EXECUTE IMMEDIATE 'ALTER SESSION SET CURRENT_SCHEMA = <SCHEMA>';
    END IF;
EXCEPTION
  WHEN OTHERS
    THEN NULL;
END LOGON_TRIGGER;
/

In the above trigger, the placeholders <USER> and <SCHEMA> have to be replaced with the appropriate values.

 

Privileges to be granted

The create schema script that comes with MDM grants the below privileges to the user:

GRANT CREATE SESSION TO <SCHEMA>;
GRANT UNLIMITED TABLESPACE TO <SCHEMA>;
GRANT CREATE SEQUENCE TO <SCHEMA>;
GRANT CREATE ANY SYNONYM TO <SCHEMA>;
GRANT CREATE TABLE TO <SCHEMA>;
GRANT CREATE TRIGGER TO <SCHEMA>;
GRANT CREATE TYPE TO <SCHEMA>;
GRANT CREATE VIEW TO <SCHEMA>;
GRANT SELECT ANY TABLE TO <SCHEMA>;
GRANT IMP_FULL_DATABASE TO <SCHEMA>;
GRANT SELECT ANY DICTIONARY TO <SCHEMA>;
GRANT RESOURCE TO <SCHEMA>;
GRANT CONNECT TO <SCHEMA>;
GRANT CREATE SNAPSHOT TO <SCHEMA>;

In addition to the above privileges and access to tablespaces, the below privileges have to be granted:

GRANT SELECT ANY SEQUENCE TO <SCHEMA>;
GRANT ANALYZE ANY TO <SCHEMA>;
GRANT LOCK ANY TABLE TO <SCHEMA>;

The purpose for each Privilege is explained below:

GRANT SELECT ANY SEQUENCE TO <SCHEMA>;

This privilege is required to access sequences that are created during RDM installation.

GRANT ANALYZE ANY TO <SCHEMA>;

This privilege is required to execute the SQLs found in Insensitive_search_enabled.sql, which executes statements similar to the one below:

EXEC dbms_stats.gather_table_stats(ownname => '<SCHEMA>', tabname => 'HIERARCHY' ,method_opt => 'for all indexed columns size auto');

GRANT LOCK ANY TABLE TO <SCHEMA>;

This statement is required to obtain a lock on the SIB tables and to execute verify_install which obtains locks on certain SE tables.

 

Add Indian language support to Party Maintenance application of IBM Stewardship Center

Posted by geeta pulipaty

  1. Log in to IBM Process Designer with the appropriate credentials.
  2. Open the MDM Stewardship toolkit and open the resource bundle named “CoachLabels”. It already has the list of strings used in the UI and the translated text for the default languages supported by BPM and MDM.
  3. You can add a new Indian locale that is supported by this editor, such as Marathi, Punjabi, Gujarati, Tamil, Kannada, or Telugu. For example, let's add the locale Telugu (India) to the list of locales.
  4. Once the locale is added, you need to put in the translated text for each string in that locale.
  5. Save once the translated text for all the strings has been provided for the newly added locale.
  6. Take a snapshot of the MDM Stewardship toolkit and upgrade the dependency in MDM Party Maintenance.
  7. Create a new user using the Process Admin console of BPM (for example, user name DSUserTelugu).
  8. For the newly created user, set the locale in the Process Admin Console to te-IN and update it. Screenshot for the same:

image

 

  9. Now log in with user “DSUserTelugu” in Process Portal; you should be able to see the localized strings in Telugu for this user on the Search dashboard.

 

Note: The Localization Resource editor should show the full list of locales that exist in the JRE, but some languages are definitely missing. For example, Hindi is not present in the list for selecting and adding translated text. In such cases, the workaround is to follow the steps below instead of steps 3, 4, and 5 in the list above.

  • The Localization Resource editor not only allows adding locales and keys one by one, but also allows exporting and importing a zip with multiple *.properties files.
  • Export the localization resource “CoachLabels” from the MDM Stewardship toolkit as a zip file to your local filesystem.
  • Let's say you want to add Hindi (locale code: hi_IN) support for this localization resource.
  • Add a new file called CoachLabels_hi_IN.properties to this zip file and copy the contents from CoachLabels_en.properties into it.
  • After copying the contents, translate the values to Hindi, keeping the keys the same in this new property file (the keys are what the coaches use to look up the translated text; see the sketch after this list).
  • Import this zip file back into the MDM Stewardship toolkit and you will see the new Hindi locale appear with the translated text.
  • Proceed with the rest of steps 6 to 9 as mentioned above to use Hindi as the locale for a user. The locale to be set for the user in the Process Admin Console in this case would be “hi_IN”.
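For reference, this is roughly how a CoachLabels entry is consumed once the locale is in place. The snippet below is a minimal, hypothetical sketch that assumes the standard IBM BPM tw.resource lookup for localization resources and the log.info() server-script API; “searchTitle” is a made-up key name, so substitute a key that actually exists in the CoachLabels bundle.

   // Hypothetical lookup: resolves the key against the locale of the current user
   // (for example te-IN or hi_IN), falling back to the default language when no
   // translation exists.
   var label = tw.resource.CoachLabels.searchTitle;
   log.info("Label resolved for the current user's locale: " + label);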

Inspector Error: , already exists and INSERT_ONLY.

Posted by Doug Cowie

The error ", already exists and INSERT_ONLY" may seem a bit cryptic. If you have ever encountered it on the add a record page in Inspector the good news is that it is very simple to fix. While it may look like there is a clash with the , the problem is in fact due to clashing memrecnos.

It may be that data has been bulk loaded, but the source sequence identifier settings have not taken this into account.

Firstly work out which memrecnos are already in use using the search by Source:ID option. Increase the ID in blocks until there is no data being returned.

In the workbench, go to the Source Sequence Identifiers window, connect the server and edit the entry for the source you are trying to add data too. Set the Next Identifier value to the next free value, and that will resolve the issue.

Steps to connect two MDM environments to single Process Center environment

Posted by geeta pulipaty

Author: Geetha S Pullipaty

Product: InfoSphere Master Data Management
Component: Data Stewardship and Governance
Version: 11.5.0

Other prerequisite software: IBM Business Process Manager 8.5.6, IBM Process Designer 8.5.6, and IBM Stewardship Center 11.5.0 installed, with all process applications imported into BPM and the MDM connection detail EPVs set for all of them.

This blog provides detailed steps for connecting two MDM instances to a single Process Center environment for the IBM Stewardship Center (ISC) component. It describes the manual steps and does not run any scripts. It also does not explain all the steps required for installing and configuring ISC, only the additional steps needed when two MDM instances are involved.

This document assumes that you have not done any configuration on the MDM or BPM WAS consoles for configuring ISC. If you have, please delete those additional configurations.

 

Let's say you have two MDM instances, MDM1 and MDM2, and a single Process Center instance, BPM1.

 

Steps to be done on WebSphere Administrative Console of MDM1.

1.    Create an Alias Destination on MDM Server's SIBus as below:
        
        a) Navigate to Service Integration  -> Buses  -> <BUS Name>
        b) Click the <Bus name> link to open the configurations of the Bus.
        c) Click on the Destination link to open the list of destinations.
        d) Click on New and Select Alias and click Next
        
        Provide the Destination attribute values:
            Identifier: MDMBPMAlias
            Bus: Select the SIbus of MDM environment. Note: The Bus name in typical MDM installation will be MDM.SIB.server1
            Target identifier: Select Other, please specify from the drop down list.  Specify the identifier of the BPM environment target. Note: Target identifier name in a typical Process center installation will be  eventqueueDestination.SingleCluster
            Target bus: Select Other, please specify from the drop down list.  Specify the bus name of the BPM environment. Note: Target bus name In a typical Process center installation will be BPM.ProcessCenter.Bus
            After providing the detail click Next
            Click Finish.
            Save the configurations

            image
            
            

        

 

2.    Create JMS Queue on MDM Environment as below:
        a) Navigate to Resources  -> JMS  -> Queues
        b) Select the scope.  If it is a Standalone MDM server set it to Node, Server (Example, Node=mdmNode, Server=server1).  If MDM installed in cluster, set the scope to Cluster
        c) Click New
        d) Select Default messaging provider and click OK
        e) Set the values of the attribute as below:
            Name: MDMBPMQueue
            JNDI name: notification/MDMBPMQueue
            Bus name: Select the SIbus of MDM environment. Note: The Bus name in typical MDM installation will be MDM.SIB.server1
            Queue name:   MDMBPMAlias  
        f) After setting the attribute values, Click OK and Save the configurations.

      image

 

3.    Create JMS Queue Connection Factory on MDM Environment as below:
        a) Navigate to Resources  -> JMS  -> Queue connection factories
        b) Select the scope.  If it is a Standalone MDM server set it to Node, Server (Example, Node=mdmNode, Server=server1).  If MDM installed in cluster, set the scope to Cluster
        c) Click New
        d) Select Default messaging provider and click OK
        e) Set the values of the attribute as below:
            Name: MDMBPMQueueConnectionFactory
            JNDI name: notification/MDMBPMQueueConnectionFactory
            Bus name: Select the SIbus of MDM environment. Note: The Bus name in typical MDM installation will be MDM.SIB.server1
            Provider endpoints: It is generally of the format “host:port:Inbound Transport Chain”. You can look at the existing queue connection factories of MDM and get this value.
        f) After setting the attribute values, Click OK and Save the configurations.

image

 

4.    Creation of Foreign Bus Connection on MDM Environment as below:
        a) Navigate to Service Integration  -> Buses  -> <BUS Name>. Note: The Bus name in typical MDM installation will be MDM.SIB.server1
        b) Click the <Bus name> link to open the configurations of the Bus.
        c) Click on the Foreign Bus Connection link to open the list of destinations.
        d) Click New…
        e) Select Direct Connection and click Next
        f) Select Service integration bus and click Next
        g) Select the messaging engine on MDM environment from the list and click Next
        h) Set the values of the attribute as below:
            Name of service Integration bus to connect to (the foreign bus): Set the value to the name of the SIBus in BPM environment. Note: The Bus name in typical BPM Process center  installation will be BPM.ProcessCenter.Bus
            Gateway messaging engine in the foreign bus: Set the value to the name of the messaging engine of BPM environment. Note: The messaging name in typical BPM Process center  installation will be SingleCluster.000-BPM.ProcessCenter.Bus
            Service integration bus link name: MDM_BPM_LINK_ONE
            Bootstrap service integration bus provider endpoints: Set the value as <bpmNodeHostName>:<port>:<protocol>   (For example mdmdemowin:7278:BootstrapBasicMessaging)  Note: If the BPM cluster has multiple nodes provide the endpoints for all the nodes as comma separated list.
        i) After setting the values click Next
        j) Review the summary and click Finish and Save the configurations

image

 

Steps to be done on WebSphere Administrative Console of MDM2

 

1.    Create a new Service Integration Bus on MDM2. The reason for this is that both MDM environments are identical and have the same SIBus name; connecting back from BPM would be an issue if BPM had to connect to two SIBuses with the same name.
        a) Navigate to Service Integration  -> Buses  -> <BUS Name>
        b) Click New..
        c) Give a name to the new SIBus to be created. For example MDM.SIB.NewBPMSer

 

2.    For the new SIBus created, add a bus member. This is required to create a messaging engine for the new bus.
        a) Navigate to Service Integration  -> Buses  -> <BUS Name>
        b) Click the <Bus name> link to open the configurations of the Bus. Here click the SIBus created during step 1.
        c) Click Bus members and Click on Add.
        d) Choose the server or cluster to add to the new bus. If MDM is installed on a server, select the server option and the corresponding server; if it is installed on a cluster, select the cluster option. Click Next.
        e) If the server option is selected, select the option “Data store” as the type of message store. Click Next.
        f) If the server option is selected, select the option “Create default data source with generated JNDI name” in the properties of the data store. Click Next.
        g) Leave performance parameters as defaults and click Next.
        h) Review the details and finish the configuration.


3. Follow steps 1 to 4 in the previous section. Wherever the SIBus name is required or has to be selected, select the new bus created in step 1 of this section. Please note that while creating the foreign bus connection, provide the link name “MDM_BPM_LINK_TWO” instead of the “MDM_BPM_LINK_ONE” mentioned in the previous section.

 

Steps to be done on Websphere Administrative console of BPM

1.    Creation of Foreign Bus Connection on BPM Environment as below:
        a) Navigate to Service Integration  -> Buses  -> <BUS Name>
        b) Click the <Bus name> link to open the configurations of the Bus.
        c) Click on the Foreign Bus Connection link to open the list of foreign bus connections.
        d) Click New…
        e) Select Direct Connection and click Next
        f) Select Service integration bus and click Next
        g) Select the messaging engine on BPM environment from the list. Provide Inbound user id as admin user id of BPM and click Next
        h) Set the values of the attribute as below:
            Name of service Integration bus to connect to (the foreign bus): Set the value to the name of the SIBus in MDM1 instance.
            Gateway messaging engine in the foreign bus: Set the value to the name of the messaging engine of MDM1 instance
            Service integration bus link name: MDM_BPM_LINK_ONE
            Bootstrap service integration bus provider endpoints: Set the value as <mdmInstance1HostName>:<port>:<protocol>   (For example mdmdemowin:7276:BootstrapBasicMessaging)
            Note: If the MDM cluster has multiple nodes provide the endpoints for all the nodes as comma separated list.
        i) After setting the values click Next
        j) Review the summary and click Finish and Save the configurations

image

 

Repeat the same steps, but this time give the details of the MDM2 instance. The values to be given in this case are:

    Name of service Integration bus to connect to (the foreign bus): Set the value to the name of the new SIBus created during step 1 in previous section in MDM2 instance.
    Gateway messaging engine in the foreign bus: Set the value to the name of the messaging engine associated with the new SIBus created during step 1 in previous section in MDM2 instance.
    Service integration bus link name: MDM_BPM_LINK_TWO
    Bootstrap service integration bus provider endpoints: Set the value as <mdmInstance2HostName>:<port>:<protocol>.
        If the MDM cluster has multiple nodes provide the endpoints for all the nodes as comma separated list.

So, by the end of these steps, you will have two foreign bus connections created on BPM: MDM_BPM_LINK_ONE pointing to the MDM1 instance and MDM_BPM_LINK_TWO pointing to the new SIBus created on the MDM2 instance.

 

Steps to be done on BPM Process Center console

 

1.    Login to BPM Process Center Console with admin access
2.    All the process applications for ISC are available at the installed location “<ISC_INSTALLED_FOLDER>/mdmg/processes”.
3.    If not already imported, you need to import all of them using the Process Center console.
4.    Create two snapshots “TESTSNAPSHOTONE” and “TESTSNAPSHOTTWO” for both MDM Party Maintenance and Physical MDM Suspected Duplicates process applications. Our focus is mainly on testing only these two.
5.     Make all these 4 new snapshots Active.

 

Steps to be done on BPM Process Admin Console

 

1.    Login to BPM Process Admin Console with admin access.
2.    Go to Admin Tools > Manage EPVs
3.    Select MDM Party Maintenance (TESTSNAPSHOTONE) and select the EPV MDM_Connection_Details. Set the details of the MDM1 instance here. For simplicity, provide the non-secure port (for example 9080) and leave the usessl option unchecked, that is, set to false.
4.    Select MDM Party Maintenance (TESTSNAPSHOTTWO) and select the EPV MDM_Connection_Details. Set the details of the MDM2 instance here. For simplicity, provide the non-secure port (for example 9080) and leave the usessl option unchecked, that is, set to false.
5.    Select Physical MDM Suspected Duplicates (TESTSNAPSHOTONE) and select the EPV MDM_Connection_Details. Set the details of the MDM1 instance here. For simplicity, provide the non-secure port (for example 9080) and leave the usessl option unchecked, that is, set to false.
6.    Select Physical MDM Suspected Duplicates (TESTSNAPSHOTTWO) and select the EPV MDM_Connection_Details. Set the details of the MDM2 instance here. For simplicity, provide the non-secure port (for example 9080) and leave the usessl option unchecked, that is, set to false.

 

SQLs to be run on MDM databases

 

1. Run these SQLs on the database connected to MDM1 instance

UPDATE CONFIGELEMENT SET VALUE = 'true', LAST_UPDATE_DT = CURRENT_TIMESTAMP WHERE NAME = '/IBM/DWLCommonServices/Notifications/enabled';
 

UPDATE CDEVENTDEFTP SET ENABLE_NOTIFY='Y', LAST_UPDATE_DT=CURRENT_TIMESTAMP WHERE (LANG_TP_CD=100 AND EVENTDEF_TP_CD=8);
update configelement set value='true', last_update_dt=current_timestamp where name='/IBM/Party/SuspectProcessing/enabled';

INSERT INTO BPMNOTIFICATIONTYPE VALUES ('ntem','MDMSDP', 'TESTSNAPSHOTONE', null, 'MDM_EVT_SDP', null, null, null, null, CURRENT_TIMESTAMP, null);
INSERT INTO BPMNOTIFICATIONTYPE VALUES ('nt1','MDMSDP', 'TESTSNAPSHOTONE', null, 'MDM_EVT_SDP', null, null, null, null, CURRENT_TIMESTAMP, null);
INSERT INTO BPMNOTIFICATIONTYPE VALUES ('nt2','MDMSDP', 'TESTSNAPSHOTONE', null, 'MDM_EVT_SDP', null, null, null, null, CURRENT_TIMESTAMP, null);
INSERT INTO BPMNOTIFICATIONTYPE VALUES ('nt3','MDMSDP', 'TESTSNAPSHOTONE', null, 'MDM_EVT_SDP', null, null, null, null, CURRENT_TIMESTAMP, null);
INSERT INTO BPMNOTIFICATIONTYPE VALUES ('ntpe','MDMHDQ', 'TESTSNAPSHOTONE', null, 'PERSIST_ENTITY', null, null, null, null, CURRENT_TIMESTAMP, null);

 

2. Run these SQLs on the database connected to MDM2 instance.

UPDATE CONFIGELEMENT SET VALUE = 'true', LAST_UPDATE_DT = CURRENT_TIMESTAMP WHERE NAME = '/IBM/DWLCommonServices/Notifications/enabled';
 

UPDATE CDEVENTDEFTP SET ENABLE_NOTIFY='Y', LAST_UPDATE_DT=CURRENT_TIMESTAMP WHERE (LANG_TP_CD=100 AND EVENTDEF_TP_CD=8);
 

update configelement set value='true', last_update_dt=current_timestamp where name='/IBM/Party/SuspectProcessing/enabled';

INSERT INTO BPMNOTIFICATIONTYPE VALUES ('ntem','MDMSDP', 'TESTSNAPSHOTTWO', null, 'MDM_EVT_SDP', null, null, null, null, CURRENT_TIMESTAMP, null);
INSERT INTO BPMNOTIFICATIONTYPE VALUES ('nt1','MDMSDP', 'TESTSNAPSHOTTWO', null, 'MDM_EVT_SDP', null, null, null, null, CURRENT_TIMESTAMP, null);
INSERT INTO BPMNOTIFICATIONTYPE VALUES ('nt2','MDMSDP', 'TESTSNAPSHOTTWO', null, 'MDM_EVT_SDP', null, null, null, null, CURRENT_TIMESTAMP, null);
INSERT INTO BPMNOTIFICATIONTYPE VALUES ('nt3','MDMSDP', 'TESTSNAPSHOTTWO', null, 'MDM_EVT_SDP', null, null, null, null, CURRENT_TIMESTAMP, null);
INSERT INTO BPMNOTIFICATIONTYPE VALUES ('ntpe','MDMHDQ', 'TESTSNAPSHOTTWO', null, 'PERSIST_ENTITY', null, null, null, null, CURRENT_TIMESTAMP, null);

 

 

If you give the snapshots different names in the Process Center Console, make sure to change these SQL statements accordingly. Make sure all the statements run successfully.
If duplicate-key errors come up while inserting data into the BPMNOTIFICATIONTYPE table, delete the existing entries causing the conflict and run the inserts again.

 

Restart all the instances: MDM1, MDM2 and BPM. You need to restart the application servers, node agents and deployment managers.

 

 

MDM JVM coredumps during startup/shutdown on Red Hat Linux

ycastane 110000J38U | | Comment (1) | Visits (2799)


Problem(Abstract)

When using InfoSphere Master Data Management (MDM), customers see coredumps being created when an application server is started or stopped in an environment with multiple application servers.

Symptom

There are several use cases that can lead to this behavior; all involve a WebSphere Application Server topology with more than one JVM, in a clustered or non-clustered environment.

Some examples below:

- JVMs recycle by themselves with no apparent cause
- JVM1 is up and running, JVM2 is started and causes JVM1 to crash
- JVM1 and JVM2 are up and running, runbatch.sh process is started and causes JVM1 to crash
- JVM1 and JVM2 are up and running, if a user attempts to stop JVM1 or JVM2, the JVM that is being stopped crashes

 

Cause

This issue has been identified by Red Hat Linux Support as defect Bug 1327623: replacing a .so that was opened and closed leads to a segfault on the next dlopen/dlsym.

 

Diagnosing the problem

The javacore and heapdumps will have two things in common:

1. The crash occurs in the ld-linux-x86-64.so.2 library (Red Hat Linux), which is called from a dlsym() call in the JVM
2. A process tries to access the libMAD.so library (MDM) from an existing JVM, causing it to crash

Contact IBM Support if you suspect you are experiencing this issue. Ensure that the MustGather: Crash on Linux is followed to collect all required information and that the core has been processed through jextract.

 

Resolving the problem

A workaround is available to provide immediate relief for this issue.

Follow the steps below:

1. Stop any JVM that has the core MDM application installed

2. Locate the libMAD.so library and set its immutable attribute

Example: /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/expandedBundles/com.ibm.mdm.mds.jni.app_11.4.0.FP00IF000_20140924-0832/com.ibm.mdm.mds.jni_11.4.0.FP00IF000_20140924-0832.jar/linux64

Make sure to locate the library in the expandedBundles directory.

3. As root, run the command below:
chattr +i /some/path/libMAD.so

Note:
chattr (change attribute) is a Linux command-line utility used to set or unset certain attributes on a file, protecting important files and folders against accidental deletion or modification.
The defect seems to involve the opening and closing of libMAD.so; setting this attribute prevents that activity, which in turn prevents the crash.

4. Restart JVMs and test

 

Related information

Bug 1327623 - replacing .so which was opened and closed
MustGather: Crash on Linux

Create SDP tasks by fetching data from Suspect table of MDM

geeta pulipaty 270000GUE0 | | Comment (1) | Visits (4751)


 

Author: Geetha S Pullipaty

Product: Infosphere Master Data Management.
Component: Data Stewardship and Governance.
Version: 11.5.0

Other prerequisite software: IBM Business Process Manager 8.5.6 , IBM Process Designer 8.5.6, IBM Stewardship Center 11.5.0 installed and configured.

 

IBM Stewardship Center provides the ability to perform proactive data stewardship activities around Physical MDM. When a suspect gets created on MDM, a notification is sent to BPM and a Suspected Duplicate task is created on the BPM Process Portal for a data steward to act upon. This can be disabled if the customer has a huge number of suspects (especially during initial load), if the task creation rate is slowing down the process, or if the customer already has suspects and wants to create tasks for them in BPM.

 

This document explains how to create SDP tasks on BPM Process Portal by fetching suspect data from MDM database.

 

Steps to be done on the WebSphere administrative console of BPM

1. Create a new Data Source to point to MDM DB.

image

 

image

 

Make sure to give the data source JNDI name as “jdbc/MDMDB”. A different name can be used, but you would then need to change it in the process application as well.

 

image

 

image

 

 

 

 

Make sure to give the correct database name, server name and port number for the MDM database.

image

 

Click Finish and save the configuration.

For the newly created data source, now go to “JAAS - J2C authentication data” as shown below.

image

 

image

Create a new entry with the alias MDM_DB_Alias, and the user ID and password set to the credentials of the MDM database.

 

image

 

 

 

Click Apply and save the configuration.

Now apply this alias as the authentication alias for XA recovery and as the component-managed authentication alias for the new MDMDB data source.

 

image

 

 

Click apply and save the configuration.

Do a test connection for the newly created data source and, if required, synchronize the changes to all nodes. The test connection for this new data source should be successful.

 

Restart the BPM server after all these changes are done and saved.

 

Steps to be done on IBM Process designer and Process Admin console

 

1. Import the process application into BPM using IBM Process designer or Process Center Console.

SDP_Create_Tasks - V1.2.twx

 

2. Provide correct values for "PHYSICAL_SDP_PROCESS_APP_SNAPSHOT" and "MDM_DATA_SOURCE_JNDI_NAME" in the Process Admin console under Manage EPVs, with the snapshot drop-down set to "SDP Create tasks". See the screenshot below.

image

 

3. Run the service "Create SDP tasks" from the same Process Admin console. It is available as the last item in the left-hand menu, as shown in the previous screenshot.

 

This service creates SDP tasks for all the suspects fetched from MDM. It first checks whether a task already exists for a party ID; if not, it creates a new Suspected Duplicate task. This needs to be customized if a customer wants only a subset of the suspect list, and so on. The SDP tasks thus created are available on the BPM Process Portal for a data steward to act upon.

Configure IBM Stewardship Center with MQ as the Messaging Engine

geeta pulipaty 270000GUE0 | | Visits (3880)


 

Author: Geetha S Pullipaty

Product: Infosphere Master Data Management.
Component: Data Stewardship and Governance.
Version: 11.5.0

Other prerequisite software: IBM Business Process Manager 8.5.6 , IBM Process Designer 8.5.6, IBM Stewardship Center 11.5.0 installed and configured.

 

Communication from MDM to BPM for IBM Stewardship Center is always via messaging. When specific events happen on MDM, BPM must be notified of them so that it can create a new task. BPM can only listen to messages that are put on its event queue, so we create a link between MDM and BPM. MDM can be installed with MQ as the messaging provider, whereas BPM recommends using only the WAS default messaging provider. To be able to send messages, a customer needs to perform these steps:

  1. Configuration steps on MQ associated with MDM installation.
  2. Configuration steps on WAS Admin console associated with MDM installation.
  3. Configuration steps on WAS Admin console associated with BPM installation.

Configuration steps on MQ using MQ explorer

1. Create a new sender channel under the queue manager that is created by default for the MDM installation.

image

 

 2. Create a receiver channel under the queue manager that is created by default for MDM.

image

 

Channel name : Could be anything. In our setup we named it BPMReceiver.

Transmission protocol : TCP

3. Create a local queue with usage type Transmission under the queue manager that is created by default for MDM.

 

image

 

Queue name : The Service Integration Bus name of BPM. In our setup the SIBus name of the BPM server is BPM.ProcessCenter.Bus, hence that name is used.

Scope : Queue manager

Usage : Transmission

4. Create a remote queue under the queue manager that is created by default for MDM.

 

image

 

 

Queue name : Could be anything. In our setup we named it EVENTRQ.

Remote queue manager : The Service Integration Bus name of BPM. In our setup the SIBus name of the BPM server is BPM.ProcessCenter.Bus, hence that name is used.

Transmission queue : The same name given to the queue in the previous step, which is the Service Integration Bus name of BPM. In our setup it is “BPM.ProcessCenter.Bus”.

Remote queue : The destination on the BPM Service Integration Bus to which messages are to be sent. Look for the destination name that starts with eventqueueDestination in the list of destinations under the SIBus of the BPM server.

image

 

5. Start the channels created in steps 1 and 2. The sender channel created in step 1 should be in the running state.

This completes the list of steps to be done in WebSphere MQ Explorer. Next we will look at the steps to be done on the MDM and BPM installations.

Configuration steps on Administrative Console of WAS where MDM is installed.

1. Create a new Queue Connection Factory with following details.

image

 

  1. Navigate to Resources -> JMS - >Queue Connection Factories. Select New to create a new one.
  2. Create it at the same scope as existing Queue Connection Factories.
  3. Select JMS resource provider : WebSphere MQ messaging provider
  4. Name : MDMBPMQueueConnectionFactory
  5. JNDI name : notification/MDMBPMQueueConnectionFactory
  6. Select ‘Enter all the required information into this wizard’
  7. Queue manager or queue sharing group name: The name of the queue manager created by the MDM installation. In our setup it is QM.E001.
  8. Hostname: The hostname of the machine where MQ is running and where the queue manager was created by the MDM installation. In our setup it is localhost.
  9. Port: The port on which the queue manager created by the MDM installation is listening. In our setup it is 1414.

2. Create a new Queue with following details

image

 

  1. Navigate to Resources -> JMS - >Queues . Select New to create a new one.
  2. Create it at the same scope as existing Queues.
  3. Select JMS resource provider : WebSphere MQ messaging provider
  4. Name : MDMBPMQueue
  5. JNDI name : notification/MDMBPMQueue
  6. Queue name : It should be the same as the remote queue name created in step 4 of the previous section. In our setup it is EVENTRQ.
  7. Queue manager or Queue sharing group name : The name of the queue manager created by the MDM installation. In our setup it is QM.E001.
  8. Save all and restart WAS for these configurations to take effect.

Configuration steps on Administrative Console of WAS where BPM is installed

1.Create a new foreign bus connection to the existing Service Integration Bus of BPM.

 

image

 

  1. Navigate to Service integration > Buses > BUS_NAME > Foreign bus connections -> New
  2. Bus connection type : Direct connection
  3. Foreign bus type : WebSphere MQ
  4. Messaging engine to host the connection : Leave the default one listed.
  5. Virtual queue manager name: Name of the SIBus of BPM. In our setup it is BPM.ProcessCenter.Bus
  6. Foreign bus name: The name of the queue manager created by the MDM installation. In our setup it is QM.E001
  7. MQ link name : Could be anything. In our setup we gave it as MDM_BPM_LINK
  8. Enable Service integration bus to WebSphere MQ message flow : Select this checkbox if not already done.
  9. WebSphere MQ receiver channel name : Name of the receiver channel created on step 2 of section 1 in this document. In our setup it is BPMRECEIVER
  10. Hostname: The hostname of the machine where the MQ associated with MDM is running and where its queue managers were created by the MDM installation. In our setup it is localhost.
  11. Port: Port number where the Queue Manager of MQ created by MDM installation is listening. In our setup it is 1414.
  12. Enable WebSphere MQ to Service integration bus message flow : Select this checkbox if not already done.
  13. WebSphere MQ sender channel name: Name of the sender channel created on step 1 of section 1 in this document. In our setup it is BPMSENDER
  14. SIB inbound user ID : An admin user ID that has the ability to put messages on the BPM queue. This could be the default admin user created with the BPM installation. In our setup it is “admin”.
  15. Save all and restart WAS for these configurations to take effect.

 

Dynamically routing SDP tasks in IBM Stewardship Center

geeta pulipaty 270000GUE0 | | Visits (4706)


Author: Geetha S Pullipaty

Product: Infosphere Master Data Management.
Component: Data Stewardship and Governance.
Version: 11.5.0

Other prerequisite software: IBM Business Process Manager 8.5.6 , IBM Process Designer 8.5.6, IBM Stewardship Center 11.5.0 installed and configured.

 

IBM Stewardship Center provides the ability to perform proactive data stewardship activities around Physical MDM. When a suspect gets created on MDM, a notification is sent to BPM and a Suspected Duplicate task is created on the BPM Process Portal for a data steward to act upon. The ISC default implementation creates all SDP tasks for a single group called DataStewardGroup.

 

This document explains how to create tasks to different data steward groups based on certain data on the notification message.

 

1. Open Physical MDM Suspected Duplicates process application in editable mode using IBM Process Designer.

 

2. Add an input variable called "partySourceId" to the process "Resolve Suspected Duplicate process".

image

 

3. Create two groups called DSSourceOneGroup and DSSourceTwoGroup, each with two users. DSSourceOneGroup has the users DSSourceOneUser1 and DSSourceOneUser2; DSSourceTwoGroup has the users DSSourceTwoUser1 and DSSourceTwoUser2. Users and groups are created using the Process Admin console.

4. Create two teams named "PSDP Source One DS Team" and "PSDP Source Two DS Team" and associate each team with one of the groups created in the previous step: "PSDP Source One DS Team" has the group “DSSourceOneGroup” and "PSDP Source Two DS Team" has the group “DSSourceTwoGroup” associated.

image

 

image

 

 

This step is done in the Physical MDM Suspected Duplicates process application using IBM Process Designer.

5. Change the assignments for the step called "Suspected Duplicate" in the process "Resolve Suspected Duplicate process".

 

image

 

So if partySourceId is 1, the task is assigned to the team called "PSDP Source One DS Team", which internally assigns it to the first group and the two users associated with it. Similarly, if partySourceId is 2, the task is assigned to the second data steward team and group.

 

6. Add the following script to the step called "Create Task" in the service "Process Suspected Duplicate Event Message".

 

image

 

 

So when we run with partySourceId set to 1, the task gets created for the users of the group “DSSourceOneGroup”, and with 2 it gets assigned to the users of the group “DSSourceTwoGroup”. This can be verified by logging into Process Portal with different users from the two groups that we created.

 

 

This is just to show the concept of dynamically routing tasks. In any customer scenario we need to identify the attributes on which the routing has to be based. This attribute information also needs to be available in the message or notification sent from MDM to BPM, which may require customizing the notification classes on MDM so that the required information is included in the message.

Customizing label of a relationship link in IBM Stewardship Center

Darren Sullivan 2700057QUX | | Comment (1) | Visits (5167)


Author: Geetha S Pullipaty

Product: Infosphere Master Data Management.
Component: Data Stewardship and Governance.
Version: 11.5.0

Other prerequisite software: IBM Business Process Manager 8.5.6 , IBM Process Designer 8.5.6, IBM Stewardship Center 11.5.0 installed and configured.
IBM Stewardship Center provides the ability to perform proactive data stewardship activities around Physical MDM with a dashboard called “SEARCH”. It allows you to search for an entity, select one among the various search results, and view or update details about that entity such as addresses, names, and so on.
It also provides a graphical view of the relationships of the party with other parties, in a tab named “Explore” on the Edit Party screen.
This document concentrates on getting the required labels on this graphical view for custom-added relationships on MDM.

Problem: I added a new relationship type with type code “300000” in the CDRELTP table of MDM, and I have two parties with a relationship created between them. The Explore tab at this point shows the relationship link without the required label.

1. Login to IBM Process Designer with required credentials.

2. Provide this user edit access to the MDM Application Toolkit in the list of toolkits.

image

Make sure “Allow users to update toolkit” checkbox is checked.

3. Open MDM Application toolkit in designer.

image
4. Go to the service “MDM Hierarchy Localization Service” under the User Interface category.

5. Modify the script “Load Label Map”.

6. Add the following line of code at the end of the script.
    tw.local.labels.put("PARTYREL_300000",”CEO”);
Note : If you need to do localization for this label , you can add it in the Localization Resource “MDM_Hierarchy_Resource” and use it accordingly in the script as
tw.local.labels.put("PARTYREL_300000", tw.resource.MDM_Hierarchy_Resource.CEO_Relationship_Label);

7. Once these changes are done, save all the modifications.

8. Take a new snapshot of MDM Application toolkit

image

9. Upgrade the dependency of MDM Application toolkit in MDM Stewardship Toolkit and create a new snapshot of MDM Stewardship Toolkit.

10. Upgrade the dependencies of MDM Stewardship Toolkit and MDM Application Toolkit in MDM Party Maintenance process application.

Please make sure not to miss step 9.

11. Reload the Search dashboard from Process Portal, search for the same record, and go to the edit page. The Explore tab now looks like this:

image



Tags:  application-toolkit mdm stewardship governance techtip data-stewardship

Configuration and Deployment of External Rules in MDM Workbench 11.4

bakleks 270007PVJ3 | | Visits (6683)


IBM InfoSphere MDM provides a set of out-of-the-box entity processing rules, like 'partyMatch' or 'collapseParties'. These rules are extensible, and this blog entry will walk through the process of extending one of them, 'collapsePartiesWithRules'.

Assuming that a development project has already been created in the workspace, code for the project has been generated and the setup SQL scripts have been run, create a new package in the project's 'src' folder (for example: 'com.ibm.mdm.customRules'). Within that package create a new Java class (for example: 'CustomCollapse.java'). As we are looking at extending 'collapsePartiesWithRules', the Java class should extend the 'CollapsePartiesWithRules' class and its 'execute' method should be overridden.

Within the new class the default 'collapse' can be used by calling the 'collapseObjectsSurvivingRules()' method on the TCRM component and supplying the parties that need collapsing inside a 'Vector' object. Additional behaviour can be defined as well.
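
To make this concrete, here is a minimal sketch of what such a class might look like. It is not taken from the product: the exact 'execute' signature and the way the TCRM component is obtained vary by release, so treat the method parameters and the 'getTCRMComponent()' call below as assumptions and copy the real signature from the 'CollapsePartiesWithRules' base class in your generated project.

package com.ibm.mdm.customRules;

import java.util.Vector;

import com.dwl.tcrm.externalrule.CollapsePartiesWithRules;

public class CustomCollapse extends CollapsePartiesWithRules {

    // The parameter and return types here are placeholders; override the
    // execute method exactly as it is declared in the base class.
    public Object execute(Object input) throws Exception {
        // The rule input is assumed to carry the parties that need collapsing.
        Vector partiesToCollapse = (Vector) input;

        // Any additional behaviour (filtering, auditing, logging) goes here.

        // Delegate to the default collapse by calling
        // collapseObjectsSurvivingRules() on the TCRM party component, as
        // described above. getTCRMComponent() is a hypothetical accessor;
        // use whatever handle to the component your generated code exposes.
        // getTCRMComponent().collapseObjectsSurvivingRules(partiesToCollapse);

        return partiesToCollapse;
    }
}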

The created package and Java class should then be added to the 'blueprint.xml' file present in the development project as follows:

 

<?xml version="1.0" encoding="UTF-8"?>

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" default-activation="eager">

<bean id="RuleLocatorBean" scope="singleton" class="com.dwl.base.externalrule.RuleLocatorImpl">

<property name="bpBundle" ref="blueprintBundle"/>

<argument>

<list>

<bean class="com.ibm.mdm.customRules.CustomCollapse"/>

</list>

</argument>

</bean>

<service id="ExternalRuleLocatorService" ref="RuleLocatorBean" interface="com.dwl.base.externalrule.RuleLocator" ranking="50">

<service-properties>

<entry key="rule.java.impl">

<list>

<value>com.ibm.mdm.customRules.CustomCollapse</value>

</list>

</entry>

</service-properties>

</service>

</blueprint>

 

The following packages also need to be added to the 'manifest.mf' file of the project to export the package that contains the external rule class:

 

Export-Package: com.ibm.mdm.customRules
Import-Package: com.dwl.base.externalrule

 

The below packages also need to be added to the 'compositebundle.mf' file so that the rule framework can find the new class:

 

CompositeBundle-ExportService: com.dwl.base.externalrule.RuleLocator
Export-Package: com.ibm.mdm.customRules
Import-Package: com.dwl.tcrm.externalrule

 

Export the 'CBA' using the wizard and deploy it to the server (instructions for one of the approaches to deploying a CBA can be found here).

To make the system use the modified rule, an update to the database needs to be run. Search the 'JAVAIMPL' table for an entry with the 'JAVA_CLASSNAME' of 'com.dwl.tcrm.externalrule.CollapsePartiesWithRules' and note the rule id ('1038' in this case). Run the following SQL to update the class name to the updated rule:

 

UPDATE SCHEMA.JAVAIMPL SET JAVA_CLASSNAME = 'com.ibm.mdm.customRules.CustomCollapse', LAST_UPDATE_DT = CURRENT_TIMESTAMP WHERE EXT_RULE_IMPL_ID = 1038;

 

Restart the server.

The next step would be to re-configure the Optimized Transparent SQL (OTS) queries. OTS queries allow the 'SELECT' statements used to retrieve data from the database to be customized and optimized for a given deployment.

The 'INQLVL' table defines the set of OTS capable entities and their associated inquiry levels. Depending on the types of the objects that will be collapsed, the appropriate 'GROUP_NAME' should be looked up. In this example we will be working with Organizations.

 

image

 

The values to note are 'INQLVL_ID' and 'INQLVL'. It's worth noting that the 'SELECT' statements themselves are not stored in this table, but rather are contained within the 'INQLVLQUERY' table.

 

image

 

Each combination of Entity-Inquiry Level contains one or more 'SELECT' statements within the table. These statements are also associated with a 'BUSINESS_TX_TP_CD' value (32) which is derived from the 'CDBUSINESSTXTP' table and defines the associated query transaction.

The 'CDINQLVLQUERYTP' table defines the possible type code values for entries in the 'INQLVLQUERY' table.

 

image

 

'CDBUSINESSTXTP' table contains a list of available transactions and associated 'BUSINESS_TX_TP_CD' values.

 

image

 

The OTS queries need to be re-generated and updated to include the extended fields after the customized CBA has been deployed, otherwise those fields will be omitted from the response. The 'add' and 'update' transactions are not affected as they are not query transactions. There exists an 'updateInqLevel' transaction to re-build these queries.

For each Inquiry Level Id resolved above, a single 'updateInqLevel' transaction needs to be run against the server. The following fields need to be filled out: InquiryLevelId (from the 'INQLVL' table), InquiryLevelLastUpdateDate (from the 'INQLVL' table), GenerateQuery (set to 'Y' to rebuild the queries) and BusinessTxType (from the 'INQLVLQUERY' table). So for Organization (inquiry level 3) the following transaction will be used:

 

<?xml version="1.0" encoding="UTF-8"?>

<DWLAdminService xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="DWLAdminService.xsd">

<RequestControl>

<requestID>1000</requestID>

<DWLControl>

<requesterName>cusadmin</requesterName>

<requesterLanguage>100</requesterLanguage>

</DWLControl>

</RequestControl>

<DWLTx>

<DWLTxType>updateInqLevel</DWLTxType>

<DWLTxObject>DWLInqLevelBObj</DWLTxObject>

<DWLObject>

<DWLInqLevelBObj>

<InquiryLevelId>1012</InquiryLevelId>

<InquiryLevelLastUpdateDate>2005-04-13 11:11:28.205001</InquiryLevelLastUpdateDate>

<GenerateQuery>Y</GenerateQuery>

<BusinessTxType>32</BusinessTxType>

</DWLInqLevelBObj>

</DWLObject>

</DWLTx>

</DWLAdminService>

 

The response produced will contain two unusual characteristics. Here is what it will look like:

 

<?xml version="1.0" encoding="UTF-8"?>

<DWLAdminService xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="DWLAdminResponse.xsd">

<ResponseControl>

<ResultCode>SUCCESS</ResultCode>

<ServiceTime>1108</ServiceTime>

<DWLControl>

<requesterLanguage>100</requesterLanguage>

<requesterLocale>en</requesterLocale>

<requesterName>mdmadmin</requesterName>

<requestID>1000</requestID>

<userRole>mdm_admin</userRole>

<requesterTimeZone>America/New_York</requesterTimeZone>

</DWLControl>

</ResponseControl>

<TxResponse>

<RequestType>updateInqLevel</RequestType>

<TxResult>

<ResultCode>SUCCESS</ResultCode>

</TxResult>

<ResponseObject>

<DWLInqLevelBObj>

<InquiryLevelId>1013</InquiryLevelId>

<Application>TCRM</Application>

<GroupName>Organization</GroupName>

<InquiryLevel>4</InquiryLevel>

<CumulativeIndicator>Y</CumulativeIndicator>

<Description>Level 4 Organization Objects</Description>

<InquiryLevelLastUpdateDate>2005-04-13 11:11:28.205003</InquiryLevelLastUpdateDate>

<DWLStatus>

<Status>5</Status>

<DWLError>

<ComponentType>99</ComponentType>

<ErrorMessage>The data submitted already exists on the database; no update applied.</ErrorMessage>

<ErrorType>DRECERR</ErrorType>

<LanguageCode>100</LanguageCode>

<ReasonCode>603</ReasonCode>

<Severity>5</Severity>

<SeverityValue>Warning</SeverityValue>

</DWLError>

</DWLStatus>

</DWLInqLevelBObj>

</ResponseObject>

</TxResponse>

</DWLAdminService>

 

Because the transaction does not make any changes to the 'INQLVL' table itself, the last update date does not change and the transaction reports a warning stating that the data has not been updated. However, the contents of the 'INQLVLQUERY' table show that the changes have been applied: both the queries and their last update dates have changed.

Custom rules should now work and transactions implementing these rules should return the contents of the default and extended fields.

MDM AE pMDM with RESTful web services

Dany Drouin 270004VXKT | | Comments (7) | Visits (8839)


 

MDM AE pMDM with RESTful web services



New in MDM v11.4 is the ability to submit REST requests to the service controller.
No more complex SOAP client code or, worse, EJB remote calls!


image

Previously, interactions with the MDM Operational Server were possible with EJB/RMI, JMS, JAX-WS and JAX-RPC (deprecated). We have now added JAX-RS to the mix.

The accepted payload types are application/xml and application/json. JSON support was added in v11.4 FP1.

It is important to note that all REST interactions use a single RESTful service, “MDMWSRESTful”, which accepts only the PUT method and is accessed via the URI http://server:port/com.ibm.mdm.server.ws.restful/resources/MDMWSRESTful.

The same XML request/response payload used for EJB/RMI is used for REST interactions.
ASI (Adaptive Service Interface) can be used in conjunction with the RESTful service. This is particularly useful if a simplified XML/JSON structure is required for interactions or if the MDM solution must adapt to a certain XML industry standard such as IFW, ACCORD, and NIEM.
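
Because everything goes through this single PUT resource, any HTTP client can drive it. As a quick illustration before the Wink example below, here is a minimal sketch using only JDK classes (Java 9 or later for readAllBytes); the host, port and credentials are placeholders for your environment:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MdmRestPutSample {

    public static String callMdm(String xmlPayload) throws Exception {
        // Placeholder host, port and credentials.
        URL url = new URL("http://myserver.com:9080/com.ibm.mdm.server.ws.restful/resources/MDMWSRESTful");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");            // the service accepts PUT only
        conn.setDoOutput(true);

        String basic = Base64.getEncoder()
                .encodeToString("mdmadmin:mdmadmin".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + basic);
        conn.setRequestProperty("Content-Type", "application/xml");
        conn.setRequestProperty("Accept", "application/xml");

        try (OutputStream out = conn.getOutputStream()) {
            out.write(xmlPayload.getBytes(StandardCharsets.UTF_8));
        }
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}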

For the full list of capabilities and supported request headers consult the following documentation link:

https://www-01.ibm.com/support/knowledgecenter/SSWSR9_11.4.0/com.ibm.mdmhs.dev.platform.doc/concepts/c_rest_web_services.html

 

 

Interacting with MDMRESTful service



Using Apache Wink (or any other REST client API of your choice), you can submit a request to the MDM Hub Operational Server with minimal code.

Here’s a sample client leveraging Apache Wink demonstrating an MDM RESTful call:

ClientConfig config = new ClientConfig();
// setup basic authentication
BasicAuthSecurityHandler basicAuthHandler = new BasicAuthSecurityHandler();
basicAuthHandler.setUserName("mdmadmin");
basicAuthHandler.setPassword("mdmadmin");
basicAuthHandler.setSSLRequired(false);
config.handlers(basicAuthHandler);

RestClient rc = new RestClient(config);

Resource r =  rc.resource("http://myserver.com:9080/com.ibm.mdm.server.ws.restful/resources/MDMWSRESTful");

// optional MDM headers
r.header("TargetApplication", "");
r.header("RequestType", "");
r.header("Parser", "");
r.header("ResponseType", "");
r.header("Constructor", "");
r.header("OperationType", "");
r.header("ASI_Request", "");
r.header("ASI_Response", "");        
// submit put request of the xml payload.
String response = r.contentType("application/xml")
                   .accept("application/xml")
                   .put(String.class, requestPayload);

The above code submits an MDM XML payload and expects an XML response back.

This is determined by the ‘Content-Type’ and ‘Accept’ HTTP header properties.
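
To exchange JSON instead (supported from v11.4 FP1), only the media types on the same Wink Resource need to change; jsonRequestPayload below is assumed to hold a JSON request body such as the getParty example shown further down:

// Reusing the Resource 'r' built above; only the media types change.
String jsonResponse = r.contentType("application/json")
                       .accept("application/json")
                       .put(String.class, jsonRequestPayload);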

Here’s a look at a getParty xml payload and response:

Request XML:

<TCRMService xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns="http://www.ibm.com/mdm/schema" 
              xsi:schemaLocation="http://www.ibm.com/mdm/schema MDMDomains.xsd">
    <RequestControl>
        <requestID>100187</requestID>
        <DWLControl>
            <requesterName>cusadmin</requesterName>
            <requesterLocale>en</requesterLocale>           
        </DWLControl>
    </RequestControl>
    <TCRMInquiry>
        <InquiryType>getParty</InquiryType>
        <InquiryParam>
            <tcrmParam name="PartyId">1</tcrmParam>
            <tcrmParam name="PartyType">P</tcrmParam>
            <tcrmParam name="InquiryLevel">1</tcrmParam>
        </InquiryParam>
    </TCRMInquiry>
</TCRMService>

Response XML (portion of the response has been trimmed):

<TCRMService
    xmlns="http://www.ibm.com/mdm/schema"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.ibm.com/mdm/schema MDMDomains.xsd">
    <ResponseControl>
        <ResultCode>SUCCESS</ResultCode>
        <ServiceTime>123</ServiceTime>
        <DWLControl>
            <requesterName>mdmadmin</requesterName>
            <requesterLanguage>100</requesterLanguage>
            <requesterLocale>en</requesterLocale>
            <userRole>mdm_admin</userRole>
            <requestID>100187</requestID>
        </DWLControl>
    </ResponseControl>
    <TxResponse>
        <RequestType>getParty</RequestType>
        <TxResult>
            <ResultCode>SUCCESS</ResultCode>
        </TxResult>
        <ResponseObject>
            <TCRMPersonBObj>
                <PartyId>1</PartyId>
                <DisplayName>Jane Doh</DisplayName>
                …
                <DWLStatus>
                    <Status>0</Status>
                </DWLStatus>
            </TCRMPersonBObj>
        </ResponseObject>
    </TxResponse>
</TCRMService>

The same request/response as JSON, using application/json as both Content-Type and Accept:

Request JSON:

{
    "TCRMService": {
        "@schemaLocation": "http:\/\/www.ibm.com\/mdm\/schema MDMDomains.xsd",
        "RequestControl": {
            "requestID": 604157,
            "DWLControl": {
                "requesterName": "cusadmin",
                "requesterLocale": "en"
            }
        },
        "TCRMInquiry": {
            "InquiryType": "getParty",
            "InquiryParam": {
                "tcrmParam": [{
                    "@name": "PartyId",
                    "$": "1"
                },
                {
                    "@name": "PartyType",
                    "$": "P"
                },
                {
                    "@name": "InquiryLevel",
                    "$": "1"
                }]
            }
        }
    }
}

Response JSON (full response is shown):

{
    "TCRMService": {
        "@schemaLocation": "http://www.ibm.com/mdm/schema MDMDomains.xsd",
        "ResponseControl": {
            "ResultCode": "SUCCESS",
            "ServiceTime": "86",
            "DWLControl": {
                "requesterName": "mdmadmin",
                "requesterLanguage": "100",
                "requesterLocale": "en",
                "userRole": "mdm_admin",
                "requestID": "604157"
            }
        },
        "TxResponse": {
            "RequestType": "getParty",
            "TxResult": {
                "ResultCode": "SUCCESS"
            },
            "ResponseObject": {
                "TCRMPersonBObj": {
                    "PartyId": "1",
                    "DisplayName": "Jane Doh",
                    "PreferredLanguageType": "100",
                    "PreferredLanguageValue": "English",
                    "ComputerAccessType": "1",
                    "ComputerAccessValue": "14.4K Baud",
                    "PartyType": "P",
                    "CreatedDate": "2015-09-10 15:16:47.583",
                    "SinceDate": "2015-09-10 00:00:00.0",
                    "StatementFrequencyType": "1",
                    "StatementFrequencyValue": "Annually",
                    "ClientStatusType": "1",
                    "ClientStatusValue": "Active",
                    "AlertIndicator": "N",
                    "SolicitationIndicator": "N",
                    "ConfidentialIndicator": "N",
                    "ClientPotentialType": "1",
                    "ClientPotentialValue": "Client",
                    "ClientImportanceType": "4",
                    "ClientImportanceValue": "Medium",
                    "DoNotDeleteIndicator": "1",
                    "PartyLastUpdateDate": "2015-09-10 15:16:50.138",
                    "PartyLastUpdateUser": "mdmadmin",
                    "PartyLastUpdateTxId": "616244191260573356",
                    "PersonPartyId": "1",
                    "BirthDate": "1966-08-25 00:00:00.0",
                    "BirthPlaceType": "1",
                    "BirthPlaceValue": "Afghanistan",
                    "GenderType": "M",
                    "UserIndicator": "N",
                    "AgeVerifiedWithType": "2",
                    "AgeVerifiedWithValue": "Passport",
                    "HighestEducationType": "5",
                    "HighestEducationValue": "Master Degree",
                    "CitizenshipType": "1",
                    "CitizenshipValue": "Afghanistan",
                    "NumberOfChildren": "3",
                    "MaritalStatusType": "2",
                    "MaritalStatusValue": "Single",
                    "PartyActiveIndicator": "Y",
                    "PersonLastUpdateDate": "2015-09-10 15:16:50.31",
                    "PersonLastUpdateUser": "mdmadmin",
                    "PersonLastUpdateTxId": "616244191260573356",
                    "TCRMPartyAddressBObj": [
                        {
                            "PartyAddressIdPK": "826144191261145264",
                            "PartyId": "1",
                            "AddressId": "828144191261121794",
                            "AddressUsageType": "1",
                            "AddressUsageValue": "Primary Residence",
                            "StartDate": "2001-06-11 00:00:00.0",
                            "AddressGroupLastUpdateDate": "2015-09-10 15:16:51.591",
                            "AddressGroupLastUpdateUser": "mdmadmin",
                            "AddressGroupLastUpdateTxId": "616244191260573356",
                            "LocationGroupLastUpdateDate": "2015-09-10 15:16:51.451",
                            "LocationGroupLastUpdateUser": "mdmadmin",
                            "LocationGroupLastUpdateTxId": "616244191260573356",
                            "TCRMAddressBObj": {
                                "AddressIdPK": "828144191261121794",
                                "ResidenceType": "2",
                                "ResidenceValue": "Detached House",
                                "AddressLineOne": "120 Richmond St",
                                "City": "Toronto",
                                "ZipPostalCode": "M5A 1P4",
                                "ResidenceNumber": "789",
                                "ProvinceStateType": "108",
                                "ProvinceStateValue": "ON",
                                "CountyCode": "1",
                                "CountryType": "31",
                                "CountryValue": "Canada",
                                "LatitudeDegrees": "180",
                                "LongitudeDegrees": "90",
                                "AddressLastUpdateDate": "2015-09-10 15:16:51.216",
                                "AddressLastUpdateUser": "mdmadmin",
                                "AddressLastUpdateTxId": "616244191260573356",
                                "DWLStatus": {
                                    "Status": "0"
                                }
                            },
                            "DWLStatus": {
                                "Status": "0"
                            }
                        }
                    ],
                    "TCRMPartyIdentificationBObj": [
                        {
                            "IdentificationIdPK": "826744191261084265",
                            "PartyId": "1",
                            "IdentificationType": "1",
                            "IdentificationValue": "Social Security Number",
                            "IdentificationNumber": "1245",
                            "IdentificationStatusType": "2",
                            "IdentificationStatusValue": "Active",
                            "IdentificationExpiryDate": "2005-08-11 23:59:59.0",
                            "StartDate": "2002-02-02 00:00:00.0",
                            "PartyIdentificationLastUpdateDate": "2015-09-10 15:16:50.841",
                            "PartyIdentificationLastUpdateUser": "mdmadmin",
                            "PartyIdentificationLastUpdateTxId": "616244191260573356",
                            "DWLStatus": {
                                "Status": "0"
                            }
                        }
                    ],
                    "TCRMPersonNameBObj": [
                        {
                            "PersonNameIdPK": "822844191261056191",
                            "NameUsageType": "1",
                            "NameUsageValue": "Legal",
                            "PrefixType": "12",
                            "PrefixValue": "Miss",
                            "GivenNameOne": "Jane",
                            "StdGivenNameOne": "JANE",
                            "LastName": "Doh",
                            "StdLastName": "DOH",
                            "PersonPartyId": "1",
                            "StartDate": "2002-02-02 00:00:00.0",
                            "PersonNameLastUpdateDate": "2015-09-10 15:16:50.56",
                            "PersonNameLastUpdateUser": "mdmadmin",
                            "PersonNameLastUpdateTxId": "616244191260573356",
                            "LastUpdatedBy": "mdmadmin",
                            "LastUpdatedDate": "2015-09-10 15:16:50.56",
                            "DWLStatus": {
                                "Status": "0"
                            }
                        }
                    ],
                    "DWLStatus": {
                        "Status": "0"
                    }
                }
            }
        }
    }
}


How does MDM handle the JSON requests/responses?


 

The default MDM JSON model is actually based on the core XML schema model (MDMCommon.xsd and MDMDomains.xsd).  Internally, MDM will validate the JSON using these schemas. 

We use a “Mapped notation” API to build the JSON. A couple of things to note about this implementation:

  • XML elements that contain nested elements will be mapped to JSON objects (i.e. <TCRMPersonBObj> will be "TCRMPersonBObj": { … })
  • XML element values will be mapped to a JSON name/value pair (i.e. <GivenNameOne>Jane</GivenNameOne> will be "GivenNameOne": "Jane")
  • XML element attributes become JSON name/value pairs within the JSON object (i.e. <tcrmParam name="PartyId">1</tcrmParam> will be "tcrmParam": { "@name": "PartyId", "$": "1" }). Note that the attribute is prefixed with “@”. The XML element value is mapped to a name/value pair using the name “$”. The “@” and “$” characters distinguish the attribute value from the element’s value.
  • XML array elements: when an XML document contains multiple nested elements of the same name, they will automatically be mapped to a JSON array. (i.e.
    <TCRMPersonNameBObj>
       <PersonNameIdPK>822844191261056191</PersonNameIdPK>
       <NameUsageType>1</NameUsageType>
    ...
    </TCRMPersonNameBObj>
    <TCRMPersonNameBObj>
       <PersonNameIdPK>821544198626606646</PersonNameIdPK>
       <NameUsageType>2</NameUsageType>
    ...
    </TCRMPersonNameBObj> will be

"TCRMPersonNameBObj": [ {
    "PersonNameIdPK": "822844191261056191",
    "NameUsageType": "1",
  ...
},
{
    "PersonNameIdPK": "821544198626606646",
    "NameUsageType": "2",
  ...
}]
)

There are instances where we need to force the return of a JSON array. This is the case for list-type elements in MDM, or for results which include multiple elements. To control this behavior, you can list the elements which need to be forced to a JSON array in the Configuration Management element “/IBM/DWLCommonServices/Restful/JsonResponse/ArrayObjects” (a comma-separated value list).
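
To illustrate what this means for client code, here is a minimal sketch (using the Jackson library, which is my own choice and not something MDM ships or mandates) that navigates the mapped-notation getParty response shown above and copes with an element that may arrive either as a single JSON object or as a JSON array:

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class MappedNotationReader {

    // 'json' is the response body returned by the MDMWSRESTful PUT call.
    public static void printPerson(String json) throws Exception {
        JsonNode person = new ObjectMapper().readTree(json)
                .path("TCRMService").path("TxResponse")
                .path("ResponseObject").path("TCRMPersonBObj");

        System.out.println("DisplayName: " + person.path("DisplayName").asText());

        // TCRMPartyAddressBObj may come back as a single object or as an array,
        // unless it is forced to an array via the ArrayObjects configuration
        // element mentioned above.
        JsonNode addresses = person.path("TCRMPartyAddressBObj");
        if (addresses.isArray()) {
            for (JsonNode address : addresses) {
                System.out.println("AddressId: " + address.path("AddressId").asText());
            }
        } else if (!addresses.isMissingNode()) {
            System.out.println("AddressId: " + addresses.path("AddressId").asText());
        }
    }
}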

 

Don’t want to write any code to test your MDM services?



Using the Firefox/Chrome POSTMAN REST client extension, you can submit your requests directly to the backend server securely.
image


Enter REST URL as http://localhost:9080/com.ibm.mdm.server.ws.restful/resources/MDMWSRESTful

Choose “PUT” as the HTTP method
Set headers: Content-Type and Accept to either application/xml or application/json
Set authorization by supplying credentials using Basic Auth tab.


Another great tool is SOAPUI. This tool can be used to fully automate your functional and regression testing by submitting REST requests.  For more info on this tool visit http://www.soapui.org/.


Another option is using cURL (http://curl.haxx.se/).
cURL is a command-line tool for transferring data using URL syntax.

image

curl --user "mdmadmin:mdmadmin" -X PUT \
     -H "Content-Type: application/xml" \
     -H "Accept: application/xml" \
     -d @getParty.xml "http://localhost:9080/com.ibm.mdm.server.ws.restful/resources/MDMWSRESTful"

 

 



Tags:  jax-rs pmdm webservices restful mdm rest

Getting started with the new MDM Application Toolkit Hierarchy coach view

Doug Cowie 270005CYF0 | | Visits (4398)


From version 11.4 FixPack 3 the MDM Application Toolkit has a new hierarchy widget, which replaces the now deprecated MDM Tree coach view.

This new widget, the MDM Hierarchy coach view, uses the latest in web visualisation technology to render hierarchies in BPM coaches. As well as using this new technology, the MDM Hierarchy coach view also has a new method of interacting with the MDM operational server.

 

To highlight some of the new features, this post presents a step-by-step guide of how to get up and running with the new MDM Hierarchy coach view. I will assume a degree of familiarity with IBM BPM, in particular Process Designer.

 

Set-up:

  • Before opening BPM ensure the MDM operational server has a hierarchy, containing at least one hierarchy node.
  • Create a new Process App, I have called mine Hierarchy Demo.
  • Open the new Process App in IBM Process Designer
  • Import the MDM Application Toolkit
  • Log in to Process Admin in a browser, expand Admin Tools and click Manage EPVs. In the snapshot drop-down, select the Process App you have just created. In the name drop-down, select MDM_Connection_Details. Configure the EPV values in the variable list with your MDM server’s details.
  • Create a Client-Side Human Service; I’ve called mine Demo Hierarchy
  • Open up the default coach, created in the Client-Side Human Service

 

Step 1: Drag and drop the MDM Hierarchy coach view from the palette onto the canvas; it is listed under the MDMAT grouping.

Switch to the configuration tab. You will notice that most of the fields have default values. In the rootNodeId field, enter the values for the hierarchy and a node in the hierarchy, in the format <hierarchyID/hierarchyNodeID>.

image

Step 2: Press the “Run” button in BPM. This will launch a browser, showing the coach you have just created. The hierarchy will be visible, and should render data if it has been set up correctly.

image
 

 

That is all that is required to get the MDM Hierarchy coach view up and running.

The coach view has a set of other configuration options; please see the documentation for more details.

 

The MDM Hierarchy coach view can be augmented by connecting it to a set of other coach views, which provide pop-up dialogs with additional behaviour that complements the hierarchy. These are the MDM Hierarchy Dialog Add, the MDM Hierarchy Dialog Details, the MDM Hierarchy Dialog Error and MDM Hierarchy Dialog MultiParent.

 

While each of these coach views can be added independently, the instructions below will guide you through adding them all.

 

Step 1: Adding the other coach views.

Drag and drop the MDM Hierarchy Dialog Add, the MDM Hierarchy Dialog Details, the MDM Hierarchy Dialog Error and MDM Hierarchy Dialog MultiParent on to the canvas that contains the MDM Hierarchy coach view.

image

 

Step 2: Create a new MDM_Hierarchy_Event_Framework for all of the widgets to use.

Switch to the variables tab. Create a new private variable and call it “events”. Change the variable type to MDM_Hierarchy_Event_Framework. This variable is used by the MDM Hierarchy coach view to communicate with the other coach views; it provides a hot-swappable mechanism so that the out-of-the-box dialogs can be replaced with custom dialogs that make use of the event framework.

 

Step 3: Configure all of the widgets to use the same, shared event framework. Switch back to the Coaches view. For each coach, select it on the canvas, then select the Configuration tab at the bottom.

Locate the EventFramework configuration option; click the purple button next to the label. Then click the Select button to the right hand side. Find the variable you created in Step 2 (events) and select it. Do this for each of the widgets.

image

 

Step 4: Configure the visibility for each of the dialog coach views; this step should not be performed on the MDM Hierarchy coach view.

Select the coach view, then click the Visibility tab at the bottom. Leave the source as “Value”, then press the purple button next to the Visibility label. Press the Select button, expand the events variable, expand the appropriate event, then select the visibility entry. Each of the dialogs should be configured against its specific event: the MDM Hierarchy Dialog Add should use the addNode event; the MDM Hierarchy Dialog Details should use the nodeDetails event; the MDM Hierarchy Dialog MultiParent should use the multiParent event; the MDM Hierarchy Dialog Error should use the error event.

image

 

Step 5: Click the “Run” button in BPM.

 

The tree now has additional behaviour. If you right-click on a node, a pop-up dialog should appear that displays additional data about the node. The add button on this dialog launches the add dialog, which can be used to add nodes into the hierarchy. If a node has multiple parents in the hierarchy, an icon indicating this is displayed to the right of the node; clicking that icon launches the MultiParent dialog, which allows users to re-focus the hierarchy on the different parent nodes.

 

This brief post has demonstrated how to use the new MDM Hierarchy and associated coach views. Future posts will explore more advanced topics, such as replacing the Ajax service which supplies the data to the hierarchy, and how to create custom widgets that use the event framework.

 



Tags:  techtip mdm bpm application-toolkit hierarchy

Failed to connect to the JMX port on server

S Eggleston 2700002CDU | | Comment (1) | Visits (10822)


Failed to connect to the JMX port on server

When you first connect from MDM Workbench to WebSphere Application Server (AppServer) where MDM Server is installed, for example to deploy a configuration project or to run a virtual job, you might see this error:

Job Manager Error - Failed to connect to the JMX port on server

There can be several reasons why the connection might fail. For background, here is the stack you are relying on when you connect to the JMX port.

image

In order for the JMX port connection to be successful, you need every component in this diagram to be in a fully functioning healthy state. And yes, that means there are a lot of places you can check! As a result, it's not practical here to explain every possible area to review, but this should give you some idea of where to start investigating.

To begin, cut the problem in half: there is a message associated with the blueprint virtual bridge. Look for this message, and it will help you decide whether the problem is more likely a runtime issue (below and to the right of the blueprint virtual bridge component in the diagram) or a configuration issue.

1. Look for virtual bridge messages

On the Application Server where MDM is hosted, open SystemOut.log or HPEL logs: if possible restart the AppServer first to make sure you have startup messages:

Success scenario

When the MBean starts successfully, you will see messages like these:

  • RMIConnectorC A ADMC0026I: The RMI Connector is available at port <xxxx>
  • JMXConnectors I ADMC0058I: The JMX JSR160RMI connector is available at port <xxxx>
  • BlueprintCont I org.apache.aries.blueprint.container.BlueprintContainerImpl doRun Bundle com.ibm.mdm.server.virtualbridge is waiting for dependencies [(objectClass=com.ibm.mdm.server.config.api.ConfigManager), (objectClass=com.initiatesystems.hub.mpinet.MPINetProtocolLogic)]

Note that these messages will only appear on startup, so they may not be visible if the logs have wrapped

If you have these success messages the Blueprint virtual bridge is available for JMX requests, and everything to the right of the diagram (MPIJNI, JMS, databases, filesystems) is healthy.

In this case the likely cause of the problems is to the left of the diagram, and probably relates to a configuration issue. More information is available in section 3. When the virtual bridge has started successfully

Failure scenario

When the MBean has not started, you see messages like this:

  • BlueprintCont E org.apache.aries.blueprint.container.BlueprintContainerImpl$1 run Unable to start blueprint container for bundle com.ibm.mdm.server.virtualbridge due to unresolved dependencies [(objectClass=com.ibm.mdm.server.config.api.ConfigManager), (objectClass=com.initiatesystems.hub.mpinet.MPINetProtocolLogic)]
  • This message identifies a problem with the JMX MBean listener, but does not in itself identify the root cause: to find the source, look for other messages in the logs with earlier timestamps.

If you have these failure messages the Blueprint virtual bridge is not available. More information is available in the next section, 2. When the virtual bridge has not started

No messages found

If you don't find any messages relating to com.ibm.mdm.server.virtualbridge, the most likely reason is that there were messages when the server started, but the logs have since wrapped and the virtual bridge messages are no longer present. The recommended action is to restart the server and collect the new logs.

2. When the virtual bridge has not started

When the blueprint virtual bridge has not started, the next step is to investigate potential runtime issues in one or more of the components on the right side of the diagram.

  1. Look for database errors: search the Application Server logs for
    • SQL
    • DSRA
  2. Look for JMS errors: the JMX MBean for the virtual engine relies on the SIBus messaging engine; search the Application Server logs for
    • CWSIS

Note that you can choose whether you use a datastore or filestore for the messaging engine data store: the default is datastore (database).

There may also be file system errors; these will usually be reported by the component that depends on the file system, for example the database or the JMS filestore.

In many cases you will be able to find technotes or other links on the internet with information about how to resolve the errors, or if not, contact IBM support and provide the logs that show the errors.

These related links have information about resolving blueprint errors:

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/W2c3ebf603f05_4460_8e8b_a26780b35b45/page/Troubleshooting%20guidance%20for%20InfoSphere%20MDM%20installations

http://www-01.ibm.com/support/docview.wss?uid=swg21509668

3. When the virtual bridge has started successfully

Once you have found the success message (com.ibm.mdm.server.virtualbridge is waiting for dependencies), the next step is to investigate the configuration in both WebSphere Application Server and the MDM Workbench.

Review the server logs for authorization errors

On the Application Server where MDM is hosted, open SystemOut.log or HPEL logs. Look for errors that reference one or more of:

  • LDAP
  • LTPA
  • SECJ

Errors with any of these codes suggest that you need to revisit the security configuration in the WebSphere Application Server administrative console and check the user ID and password settings in the Workbench client. Review the error messages; in many cases you will be able to find technotes or other resources on the internet with information about how to resolve them, or if not, contact IBM Support and provide the logs that show them.

Review the firewall settings

Verify that you can ping from the Workbench machine to the machine that hosts WebSphere Application Server and MDM Server, using your preferred ping tool.

Optionally you can use "Test Connection" from the MDM Workbench, although note that in an ND configuration this tool only checks the deployment manager (dmgr), so it may not reflect the status of the actual server where MDM is hosted.

If you cannot connect to the target MDM server, the JMX connection will not work; contact your networking support team to make sure the network is available and, if necessary, that the appropriate firewall ports are opened.

Review the port and host configuration

  1. In the WebSphere Application Server administrative console
    1. Go to Servers -> Server Types -> WebSphere application servers
    2. Select the server where the MDM runtime is installed
    3. Scroll down to the ports section and open it (see the screenshot below)
    4. Make a note of the ports for
      • BOOTSTRAP_ADDRESS
      • SOAP_CONNECTOR_ADDRESS
      • ORB_LISTENER_ADDRESS (if not 0)
  2. In the Workbench
    1. Go to the Servers tab (you may need to add it from Window -> Show View)
    2. In the Overview panel, under Server select Manually provide connection settings
    3. Set the RMI port to BOOTSTRAP_ADDRESS
    4. Set SOAP port to SOAP_CONNECTOR_ADDRESS
    5. Test these settings initially without any IPC port configuration
    6. If you are still not able to connect, also configure the IPC port with the ORB_LISTENER_ADDRESS and retest

Screen shot of the WebSphere Application Server ports in the Administration Console


Tags:  jmx virtual-mdm techtip virtual

NoClassDefFoundError when starting BPM on linux

Doug Cowie 270005CYF0 | | Visits (5650)

Tweet

Have you ever tried to start a BPM server on Linux only to be greeted by the following incomprehensible error?

java.lang.NoClassDefFoundError: org.eclipse.emf.ecore.EFactory
        at java.lang.ClassLoader.defineClassImpl(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:275)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:540)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:451)
        at java.net.URLClassLoader.access$300(URLClassLoader.java:79)
        at java.net.URLClassLoader$ClassFinder.run(URLClassLoader.java:1038)
        at java.security.AccessController.doPrivileged(AccessController.java)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:429)
        at com.ibm.ws.bootstrap.ExtClassLoader.findClass(ExtClassLoader.java)
        at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:703)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:682)
        at com.ibm.ws.bootstrap.ExtClassLoader.loadClass(ExtClassLoader.java)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:665)
        at java.lang.J9VMInternals.verifyImpl(Native Method)
        at java.lang.J9VMInternals.verify(J9VMInternals.java:94)
        at java.lang.J9VMInternals.verify(J9VMInternals.java:92)
        at java.lang.J9VMInternals.initialize(J9VMInternals.java:171)
        at org.eclipse.hyades.logging.events.cbe.impl.EventFactoryContext.<init>(EventFactoryContext.java:82)
        at org.eclipse.hyades.logging.events.cbe.impl.EventFactoryContext.getInstance(EventFactoryContext.java:122)
        at com.ibm.ejs.ras.Tr.<clinit>(Tr.java:316)
        at java.lang.J9VMInternals.initializeImpl(Native Method)
        at java.lang.J9VMInternals.initialize(J9VMInternals.java:237)
        at com.ibm.ws.management.tools.AdminTool.<clinit>(AdminTool.java:66)
        at java.lang.J9VMInternals.initializeImpl(Native Method)
        at java.lang.J9VMInternals.initialize(J9VMInternals.java:237)
        at java.lang.J9VMInternals.initialize(J9VMInternals.java:204)
        at java.lang.J9VMInternals.initialize(J9VMInternals.java:204)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
        at java.lang.reflect.Method.invoke(Method.java:611)
        at com.ibm.ws.bootstrap.WSLauncher.main(WSLauncher.java:280)
Caused by: java.lang.ClassNotFoundException: org.eclipse.emf.ecore.EFactory
        at java.net.URLClassLoader.findClass(URLClassLoader.java:434)
        at com.ibm.ws.bootstrap.ExtClassLoader.findClass(ExtClassLoader.java)
        at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:703)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:682)
        at com.ibm.ws.bootstrap.ExtClassLoader.loadClass(ExtClassLoader.java)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:665)
        ... 33 more

 

Well the good news is, it's a really trivial fix. Your ulimit is set too low!

DOH!

Try ulimit -n 8196

And then re-run startServer, startNode, startManager, BPMConfig, or however you normally start BPM.

To stop this happening again you could consider changing the default setting; this will depend on which particular distribution of Linux or flavour of UNIX you're using.



Tags:  linux ulimit techtip bpm

MDM Tech Talk - Stewardship and Governance

jaylimburn 2700028UUJ | | Comments (2) | Visits (5032)

Tweet

 

You may have seen the recent tech talks that the team here have been producing for our clients. In these tech talks an IBM expert will talk through a specific MDM topic in great detail sharing the deep expertise of the architects and developers that are living and breathing the technology. These tech talks are provided for free and just require a simple registration process to allow you to attend. All sessions are recorded and replays will be available shortly afterwards.

 

One area of keen interest to our clients has been the Stewardship and Governance capabilities provided by MDM, specifically the IBM Stewardship Center that was released in MDM 11.3. So it falls to me to host the next MDM tech talk on June 23rd. In this session I will be discussing the new capabilities offered by the IBM Stewardship Center, and how we are changing the game for stewardship teams looking to evolve their organization to be more reactive to data quality events, engaging line of business users to provide input to data quality issues, and adding advanced business rules and intelligence to automate events from across the entire data quality landscape.

 

A one hour tech talk is nowhere near enough time to do such a broad and important area justice; however, we will spend some time up front explaining IBM's perspective on Information Governance and how IBM's InfoSphere portfolio provides the market leading integrated suite of comprehensive governance capabilities that can flex to suit your specific industry requirements. We will dive into the IBM Stewardship Center and its comprehensive workflow engine, providing collaboration and orchestration across the enterprise, and touch on the MDM Application Toolkit, a suite of accelerators designed and built by some of our development ninjas to make creating custom governance workflows a quick and easy experience... and if we have time we may even have a live demo of the latest version of the Stewardship Center. During the session the live chat will be open, allowing you to ask questions, and I will have a team of experts ready to respond in real time.

 

If your organization is trying to address the growing focus on Information Governance, if you are trying to figure out how to make your stewardship organization more efficient, or you just want to take a look at one of the coolest new features from the MDM team, then don't miss the Master Data Stewardship & Governance tech talk on June 23rd at 10am Eastern. Register here

 

For a list of all upcoming events see the Tech Talks Wiki... http://bit.ly/MDMTechTalks

 



Tags:  stewardship web-conference governance mdm webinar tech-talk webconf

New IBM Redbook - Designing and Operating a Data Reservoir

jaylimburn 2700028UUJ | | Visits (5992)

Tweet

image

The team here are pleased to announce that after many nights researching, writing and editing, a new IBM Redbook is due to be published entitled 'Designing and Operating a Data Reservoir'.

The book takes you through the steps an organization should go through when designing and building a data reservoir solution. In this book we base the scenario around a pharmaceutical company; however, the discussions, principles and patterns included in the book are relevant for any industry.

A data reservoir provides a platform to share trusted, governed data across an organization. It empowers users to engage in the sharing and reuse of data to ensure that an organization can fully leverage its most important asset: data. A data reservoir allows for the collection of vast sets of data that can be curated, shaped and monitored to allow advanced analytics to be constructed, offering new insights to an organization about its data.

See this blog post for more background on the need for a data reservoir:

5 Things to Know About a Data Reservoir

The new IBM Redbook has been authored by thought leaders in the data management space and will be available as a full IBM Redbook publication shortly. In the meantime, the draft is available directly from the IBM Redbooks website:

Designing and Operating a Data Reservoir

We'd love to hear your feedback on the book and would be keen to hear your stories around data reservoir solutions.

Make Data Work

Jay



Tags:  redbooks redbook data-reservoir

How Are We Doing?

Laura Ellis 2700000J4U | | Visits (3587)

Tweet

We have one thing on our mind - making you happy!

 

image

 

Please take a few minutes to complete our survey and let us know how we're doing.  Your satisfaction is our number one priority.  If there is something we can do to make your work life easier, let us know!

 

IBM InfoSphere Master Data Management Survey

 

Photo © Alexander Henning Drachmann (CC BY-SA 2.0)



Tags:  survey

Upgrading to MDM v11 - live webinar

jtonline 110000B6Y8 | | Comment (1) | Visits (5939)

Tweet

image

The MDM Developers community now has a new Events section. The first event will be a live webinar on Upgrading to MDM v11 presented by Lena Woolf:

"In this presentation we will perform a comprehensive overview of the MDM v11 upgrade process. We will begin the session by reviewing how MDM v11 differs from our previous releases of MDM Advanced Edition and MDM Standard Edition from a technical perspective. Next we will outline the upgrade process and considerations on a high level. Finally we will discuss case based nuances to consider when planning your upgrade effort."

For more information, including how to register, have a look at the full event details.

Photo © Håkan Dahlström (CC BY 2.0) 



Tags:  mdm-11 upgrade webconf web-conference webinar mdm

Automated Builds of Physical MDM Customisations

bakleks 270007PVJ3 | | Comments (2) | Visits (10464)

Tweet

This document outlines how Physical MDM customisations can be built from source artefacts in an automated build and test system. This document does not aim to be a complete guide on this topic, but rather to point the way to how some detailed steps can be implemented using examples.

 

1. Overview

The MDM Advanced or Standard editions both include the MDM Workbench. In version 11.0 and beyond the MDM Workbench is used by solution developers to create artefacts which customise the MDM solution for the physical, virtual and hybrid implementation styles. These source code artefacts are typically built into a Composite Bundle Archive (CBA) and deployed to WebSphere where they augment the functionality already available in the MDM Server Enterprise Business Application (EBA).

A good practice amongst MDM solution developers is to create an automated build process such that customisation source code is checked-in to a code control repository, and an automated build process takes those source files and builds the CBA ready for deploying onto post-build test systems, placing built artefacts into a second repository or shared file system.

Some automated systems take this “build” concept further, by automating the deployment of such built artefacts to test systems, which in turn report back on the “health” of the build, how many tests passed and failed, and generally quickly provide valuable feedback to developers on whether recent changes broke the solution or not. Project managers overseeing such projects are able to reduce project risk by adopting this continuous delivery process, and changes to MDM solutions become more reliable and safer as a result.

To add MDM solutions to such a continuous build environment it is necessary to: 

  1. Identify the pieces of the solution which represent the “source code” for the solution.
  2. Create a source code repository into which the identified “source code” can be checked. Git is a popular choice here, though there are many alternatives such as Rational Team Concert from IBM, CVS, SVN, etc.
  3. Recognize when a consistent set of code has been checked-in, at which point a “build” is started.
  4. Create a build environment.
  5. The build environment often “boot-straps” itself via a simple initial script which checks out the rest of the build scripts, which in turn build the artefacts from solution developers.
  6. The build scripts check out the artefacts from code control to the local file system.
  7. The source artefacts are processed, transforming them into built artefacts.
  8. The build process often executes “unit tests” to further validate that the solution artefacts are healthy and do what they are expected to do.
  9. Built artefacts are published to a repository which versions every build and against which build metadata can be gathered and reported. Such build logs, unit test results, and results of other tests indicate the “health” of the build.
  10. If the build is considered “good” then further automation can be added to deploy the built solution to a test environment, with higher-level tests (functional and end-to-end system tests) exercising the solution further. Such tests can also report back to the build repository on the health of each build.

 

This article is mostly concerned with step #7 – building source artefacts.

 

2. Materials and prerequisites

This article is accompanied by a collection of example scripts. They are not intended to be used directly, but rather as an example of how you may wish to implement your own automated build process.

The current solution consists of four main files: 

  • mdm_wb_build.xml

  • mdm_wb_build.properties

  • mdm_wb_build_report.xsl

  • mdm_wb_build_inside_eclipse.xml

 

In order for the scripts to work, the machine running the scripts needs to have the following products installed:

  • Rational Application Developer (RAD) or Rational Software Architect (RSA)

  • WebSphere Application Server (WAS) with a profile set up

  • MDM Workbench (the scripts accommodate version 11.0 and above)

 

mdm_wb_build.properties contains a list of properties that need to be defined before running the scripts (an illustrative example follows the list):

  • “InputFolder” defines the folder containing the code to be imported/built.

  • “OutputFolderPrefix” property defines the folder to which the projects will be copied and CBAs exported. Currently the script generates a time-stamp that gets added to the defined output folder prefix.

  • “AntContribHome” should be set to the folder containing “ant-contrib.jar” which provides some of the functionality used within the scripts.

  • “EclipseHome” should be set to the installation folder of RAD or RSA.

  • “WASHome” should be set to the installation folder of WAS, and both “WASRuntimeTypeId” and “WASRuntimeId” should be specified as well.
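
For reference, a filled-in mdm_wb_build.properties might look like the following sketch; every value shown is an illustrative assumption and must be replaced with the paths and identifiers for your own environment:

# Folder containing the checked-out source projects to import and build
InputFolder=C:/build/input
# Prefix for the per-build output folder (the script appends a time-stamp)
OutputFolderPrefix=C:/build/output/mdm_build_
# Folder containing ant-contrib.jar
AntContribHome=C:/build/tools/ant-contrib
# Installation folder of RAD or RSA
EclipseHome=C:/IBM/SDP
# Installation folder of WebSphere Application Server
WASHome=C:/IBM/WebSphere/AppServer
# Identifiers of the target WAS runtime (environment specific)
WASRuntimeTypeId=<your WAS runtime type id>
WASRuntimeId=<your WAS runtime id>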

 

To run the Ant scripts the user needs to run mdm_wb_build.xml as a build file.

The script contains only the “runBuild” target.

The target checks that necessary properties, such as Eclipse Home, date and time stamps and output folder prefix are set. Provided these properties do exist, it creates a folder based on OutputFolderPrefix and date and time, within which “logs”, “CBAExport” and “workspace” folders are created.

The logs folder contains “MDM_WB_BUILD_log.html” and “MDM_WB_BUILD_log.xml”. The XML file stores debug information. The HTML file contains general information on whether the main steps were successful, for example: 

checkPropertiesExist: BUILD SUCCESSFUL

createServerRuntimeAndTargetPlatform: BUILD SUCCESSFUL

generateDevProject: BUILD SUCCESSFUL

workspaceBuild: BUILD SUCCESSFUL

exportCBA: BUILD SUCCESSFUL

End of report.

 

The CBAExport folder contains all of the exported CBAs.

The workspace folder contains a local copy of build artefacts.

After the directories have been created, the script checks which operating system it is running on, sets the isLinux or isWindows property to “true” as appropriate, and calls either runAnt.sh or runAnt.bat to run a headless Eclipse process. The relevant file (either the batch or the shell script) should be available by default in the bin directory of the Eclipse installation directory.

The runAnt script then sets up the log files, environment variables and runs a second script “mdm_wb_build_inside_eclipse.xml” inside a headless RAD/RSA environment.

 

3. Step breakdown of the automated build and test system

Given that automated building and testing of MDM solutions is a worthwhile goal, the following sections provide guidance in the areas where actions specific to the MDM tools and development/build environment are necessary, and present points of discussion where choices exist.

 

3.1 Identify the pieces of the solution which represent the “source code” for the solution.

The source code for an MDM solution will be made up of a collection of Eclipse projects and their contents: MDM development, MDM configuration, MDM hybrid mapping, MDM service tailoring, MDM custom interface, MDM metadata and other MDM-specific project types. CBA projects will add to the list.

MDM Development projects contain a “module.mdmxmi” file, which contains a model of the customizations which the project aims to create. This file should always be considered to be source code.

At some point the mdmxmi file will be used to generate Java, XML, SQL and other file artefacts, and there are a few different approaches you can take for these files:

The current solution is to only consider files which have been manually changed as “source code”, and “generate artefacts” from the mdmxmi model as part of the automated build process itself. This approach demands that the MDM workbench tools are installed as part of the build environment, because the “generate artefacts” process that turns .mdmxmi files into other artefacts will be a necessary part of the build process.

A project “MDMSharedResources” in the workspace can be considered “source code”, and it is recommended that this project is checked into the code control system, together with its .project file and .mdmxmi file. The other contents of this project can generally be excluded from check-in, as the artefacts in this project will often be re-created by the automated build system.

 

3.2 Create a source code repository.

There are many choices regarding which product to use as a source code repository and covering them is not the aim of this document.

 

3.3 Recognize when a consistent set of code has been checked-in, at which point a “build” is started.

This event may be triggered manually, automated overnight, or whenever a change-set is delivered to the code stream. The capturing of this event is often specific to the code control system being used, though some solution teams augment this by adding a web page that enables build requests to be manually requested.

 

3.4 Create a build environment.

A build environment should include RAD (or RSA) which can be called in a “headless” manner such that functionality within RAD can be used without a user-interface being present.

MDM Workbench will be required in addition to RAD to perform a complete build of “module.mdmxmi” files.

For the list of platforms that MDM Workbench v11.0 and onwards support – refer to the product release documentation.

 

3.5. The build environment “boot-straps” itself.

A small script is responsible for “boot-strapping” the process by checking out the other build scripts, which in turn build the artefacts from solution developers.

 

3.6 The build scripts check out the artefacts from code control to the local file system.

These actions are specific to the code control system so will not be discussed further here.

 

3.7. The source artefacts are processed, transforming them into built artefacts.

This phase of the automated system typically consists of a hierarchy of Ant files which decomposes the overall build process into many smaller steps and “Ant targets”. The Maven framework is a common choice of technology to oversee this phase.

These Ant files can be categorized into two types:

  1. Run outside of the RAD environment;
  2. Run inside a “headless” RAD instance.

 

For a detailed walkthrough of specific implementations of the build process refer to the Ant scripts provided with this blog entry.

 

3.8. The build process often executes “unit tests” to further validate that the solution artefacts are healthy and do what they are expected to do.

The tools and approaches used to execute unit tests vary widely depending on technology choice. Simple Java JUnit tests offer one simple solution, which can be invoked with scripts once the tests and tested code are built.

 

3.9 Built artefacts are published to a repository.

The publish process versions every build, so that build metadata can be gathered and reported against it. Build logs, unit test results and the results of other tests indicating the “health” of the build are gathered and published to the repository as well.

Products such as Rational Asset Manager can be used here, or for a really basic solution a simple shared folder on a network drive may suffice.

 

3.10 Automated deploy and test health of overall build.

If the build is considered “good” then further automation can be added to deploy the built solution to a test environment, with higher-level tests (functional and end-to-end system tests) exercising the solution further. Such tests can also report back to the build repository on the health of each build.

The automated deployment of entire systems for testing is often one of the most complex areas of the whole continuous delivery process. Products such as IBM UrbanCode Deploy (UCD) can be used for this stage of the process, though for some environments a set of (reasonably complex) scripts might be sufficient.

For the MDM pieces, we are mostly concerned with deploying and un-deploying SQL scripts, deploying/un-deploying CBA files, and starting and stopping the MDM solution and WebSphere server(s).

Prior to deploying extensions to the server, it is often necessary to modify the database. This is possible using the SQL scripts found in the MDMSharedResources project in the built workspace. Rollback scripts in the same location should be applied once testing is complete to reset the database back to a known good state.

For CBA deployment, Jython scripts can be used to manipulate the WebSphere server. Detailed documentation of these steps can be found in the WebSphere documentation.

Additional documentation on adding the CBAs to the WebSphere local bundle repository, extending the MDM server EBAs, starting and stopping the WAS server is available on the MDM Developer Site.

 

Sample scripts and projects are available for download.



Tags:  cba osgi build mdm-workbench ant mdm rad techtip

2015 Preview

jtonline 110000B6Y8 | | Visits (4163)

Tweet

2014 was a great year for the MDM Developers community, with a growing knowledge base on our wiki, new videos on our YouTube channel, and nine more blog posts. A huge thank you to everyone who has contributed to the MDM Developers community over the last year. We're already well in to the new year so, rather than looking back, I'd like to share a quick look ahead at what's in store for 2015.

The biggest news is that there are now seven more people committed to supporting the community, so look forward to seeing an even more active community this year! For example, there should be more MDM Collaborative Edition resources coming soon. It's my great pleasure to introduce Bob Maddison, Chitra Iyer, Dmitry Drinfeld, Doug Cowie, Mohana Kera, Nic Townsend and Shweta Shandilya...

 

imageBob Maddison

Bob Maddison is our most experienced MDM Workbench developer. He is a senior software engineer based at the IBM Hursley Laboratory. Over the past twenty years he has been involved with a number of major IBM products including CICS, Java Virtual Machines and more recently with development of tooling for Master Data Management.
 

image
Chitra Iyer

"I am Chitra Ananthanarayanan.  After completing Masters in Computer Applications in 2004, I joined IBM.  I am working for Master Data Management since 2007.  I enjoy solving interesting puzzles that come up during product development through code, and learning new programming languages and databases.  Recently IBM Bluemix keeps me busy, and is helping me to learn services - Cloudant, Secure Connectors and Watson Cognitive applications.  I have started blogging on Big Data and on Bluemix.

I am fascinated to note that over the past ten years, the advent of applications for almost all services, has helped almost everyone save a lot of time,  Of these, the mobile applications deserve a special mention.  The direction in which the industry is moving now, to gather information through these applications and leverage it, will provide varied benefits.

I prefer to see the small businesses make a presence in the Internet and am sure this will happen quite soon.  

Big Data and Analytics interests me and I read posts on it, some of these posts are from links in Twitter.  I am available in Twitter @chitraaiyer"
 

image
Dmitry Drinfeld

"I've started my MDM journey as an intern back in 2010 and liked it so much that I could not bear to be away from it for too long, so I returned to the team as a full time developer in 2012. Since then I've been fortunate to work on major features of the product and provide continuous support to multiple high profile clients as a lab advocate. The topics of MDM, secure engineering and big data are near and dear to my heart and I am always ready to share my expertise and learn new things! Beyond that I've been interested in all things tech ever since I can remember. Everything from wearable tech to mainframes fascinates me. And the true magic unveils when you start to understand the inner workings and complexities behind it all. One thing I've always found to be extremely important in which ever task I am undertaking is the presence of a challenge. Luckily, with our ever evolving technology those are abundant :) ."
 

image
Doug Cowie

Doug Cowie is a software engineer on the MDM Application Toolkit team at IBM's Hursley labs. He joined IBM after finishing his Master's degree at the University of Bristol, and has previously worked on the User Interface Generator and other projects.
 

image
Mohana Kera

Mohana Kera has more than 15 years of in-depth experience in architecture, design, development and implementation of software solutions. He has been working at IBM for the last 10 years, primarily on InfoSphere MDM Collaborative Edition(MDM CE). Prior to that, he has worked on developing Business Process Management solutions. He has in-depth experience with all aspects of MDM CE and was the development lead for Java API interfaces of the system. He has published several technotes, developerWorks articles, provided customer training on Java APIs, and has actively worked in developing the content for IBM Knowledge Center around MDM CE Java APIs. He is passionate about latest technology trends, blogging, social media and an active user of the social media.
Twitter handle: @mohkera
LinkedIn: Mohana Kera
 

image
Nic Townsend

Q. How long have you been working with MDM?
A: 3.5 years

Q: What did you do before that?
A: Studying CompSci at University of Warwick

Q: What inspires you in your work?
A: Being able to architect solutions as a team rather than receiving a strict specification.

Q: What are you passionate about?
A: Mobile, IoT and wearables

Q: If you were stuck on a technology deprived island, what single technology could you not live without?
A: Smartphone (assuming signal reaches the island, and it has electricity)

Q: What book are you currently reading?
A: Games People Play by Eric Berne

Q: What has been your biggest surprise you have witnessed in the technology industry?
A: Smartphones becoming cloud based for applications

Q: Is there any technology that you think should get more respect and adoption but does not?
A: VR (oculus rift) etc with haptic feedback for more than just gaming

Q: What is your favorite technology that fizzled or failed to live up to the hype?
A: Kinect/PS Eye for gaming

Q: Any new technologies that you think are about to break into the big time?
A: Embedded/tattoo tech

Q: What future technology would make your life easier?
A: Driverless cars

Q: How do you prefer to find answers to your questions?
A: Google - typically then linking to stack overflow/wikipedia etc.

Q: What are some of your favorite websites/feeds/twitter accounts to follow?
A: Gizmodo, Wired, Engadget

Q: Where can we find you on Twitter, Linkedin, Facebook, and other social spaces?
A: @nict0wnsend
 

image
Shweta Shandilya

Shweta Shandilya is a development manager working in IBM's India Software Lab having joined IBM through the Trigo acquisition in 2004. Shweta leads a team of install developers and blogs about technology, MDM, and big data.

 

Something to look out for later in the year is a live video tech talk. The exact details, including the topic, are still to be confirmed but it should be a great opportunity for more interaction with the community.

As usual, there will be more articles and blog posts over the year but if there is anything else you would like to see, please let us know. Or, better yet, please join in and get involved- it's your community!

 

 



Tags:  tech-talk 2015 profiles

Avoiding pitfalls when using MDM Data Objects in BPM

Doug Cowie 270005CYF0 | | Visits (4228)

Tweet

Occasionally, when creating a Business Process with the MDM Application Toolkit, the server may return errors which at first glance seem a bit cryptic. When running against Virtual MDM you may sometimes come across errors which look like the following: IxnMemPut did not succeed. Reason: member XXX, unable to insert/update/delete member data.

In some cases this can be caused by the way BPM initialises variables as it serialises between Java objects and JavaScript objects for use in Coaches. For example, if I have an un-initialised (null) object before I display it on a coach (that is, it is bound to a Coach View), then when I check it after the coach it will have been initialised. The problem is in fact a bit more convoluted than this with nested complex objects, as the level to which they are initialised increases with every coach the object is used in. For example, if I have a heavily nested data type and an instance of it is used in 3 coaches: after the first coach the top level is initialised; after the second coach the top level and the children of the top level are initialised; and after the third coach the top level, its children and their children will be initialised. In JSON this would look like:

Person : { Name : { First : String, Last : String }, Address : { Line1 : String } }
If we perform a search and get back person = { Name : { First : "John", Last : "Rogers" } }, we select this entry and move to the next coach.
In the next coach we will have person = { Name : { First : "John", Last : "Rogers" }, Address : {} }. If we leave this coach and send data up to the server,
we will in fact be sending person = { Name : { First : "John", Last : "Rogers" }, Address : { Line1 : "" } }. With this, we may get an error (if the field is mandatory), or we may overwrite data on the server with blank fields.
 

This can be a problem in the Application Toolkit, for example when creating a simple search and update process. If I retrieve a sparsely populated object from the server and pass it into a coach, any unused or empty fields will now be initialised; if I pass this object to several more coaches before sending it back to the server as an update, then several levels of initialisation may have occurred in any nested complex objects. In Virtual MDM this may give you an error if one of those initialised fields is required, as the engine will check to make sure you are not trying to set it to a blank or empty value. If you don't get an error, that does not mean everything is okay; it could be that data on the server that was not originally retrieved has now been overwritten by empty values.

To make sure this doesn't happen, it is important to follow BPM Coach design best practice and always use a display object when displaying an object in a coach. This means creating a data type (the display object) that contains only the fields you wish to display on screen, and copying the values into an instance of this new data type from the object retrieved from the server. This also offers a performance enhancement, as the whole object from the server is not passed back and forth between BPM and the web browser.
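
As a minimal sketch of this pattern, the server scripts below copy only the displayed fields between the retrieved object and a display object, before and after the coach. The variable and type names (tw.local.person, tw.local.personDisplay, the PersonDisplay business object and its fields) are illustrative assumptions rather than types shipped with the toolkit:

// Server script before the coach: copy only the fields the coach needs
tw.local.personDisplay = new tw.object.PersonDisplay(); // hypothetical display business object
tw.local.personDisplay.firstName = tw.local.person.Name.First;
tw.local.personDisplay.lastName = tw.local.person.Name.Last;

// Server script after the coach: copy any edits back onto the full object
tw.local.person.Name.First = tw.local.personDisplay.firstName;
tw.local.person.Name.Last = tw.local.personDisplay.lastName;
// Fields that were never displayed (for example Address) are left untouched,
// so they are never sent back to the server as empty values.

Because only the display object is bound to the Coach Views, the parts of the retrieved object that the user never sees are never initialised by the coach serialisation.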



Tags:  application-toolkit bpm techtip mdm coaches

Scripts to setup WAS Profile for MDM v10.1

IanDallas 120000B89A | | Visits (4436)

Tweet

Scripts to set up a WAS Profile for MDM v10.1. This is done by the DEST tool, but it is useful to have the scripts for automation purposes.

a)
Commands to delete/create the Profile if not there/want to test from fresh:

manageprofiles.bat -delete -profileName AppSrv01
rmdir C:\IBM\WebSphere\AppServer\profiles\AppSrv01
manageprofiles.bat -create -profileName AppSrv01 -profilePath C:\IBM\WebSphere\AppServer\profiles\AppSrv01 -templatePath C:\IBM\WebSphere\AppServer\profileTemplates\default

b)
Command to set up the WAS Profile with MDM artifacts, i.e. SIB, JDBC, queues, etc.
 
C:\IBM\WEBSPH~1\APPSER~1\java\jre\bin\java.exe -classpath C:\IBM\SDP\plugins\MDMDevEnvTooling\autotooling.jar com.ibm.mdm.setup.tooling.utils.ConfigWASStarter C:\IBM\SDP\plugins db2admin db2admin MDM101 C:\IBM\SQLLIB\java C:\WORKSP~1 DB2 AppSrv01 IBM-b65cd2b4faaNode01Cell IBM-b65cd2b4faaNode01 C:\IBM\WEBSPH~1\APPSER~1 ibm-b65cd2b4faa true null C:\workspaceMDM101\.metadata\.plugins\com.ibm.mdm.config.external.automation null null C:\WORKSP~1\.metadata\.plugins\com.ibm.mdm.config.external.automation\config_WAS_2014-04-30-15-08-59.log custSetupAppServer.py custSetupClasspath.py MDMServer 1010 1 50000 server1  

Not tested, but this is Oracle equivalent

C:\<WAS home>\java\jre\bin\java.exe -classpath C:\IBM\SDP\plugins\MDMDevEnvTooling\autotooling.jar com.ibm.mdm.setup.tooling.utils.ConfigWASStarter C:\IBM\SDP\plugins <ora user> <ora password> MDM101 <ORA PATH TO JDBC> C:\WORKSP~1 ORACLE AppSrv01 <localhost name>Node01Cell <localhost name>Node01 C:\IBM\WEBSPH~1\APPSER~1 <localhost name> true null C:\<workspace home>\.metadata\.plugins\com.ibm.mdm.config.external.automation null null C:\WORKSP~1\.metadata\.plugins\com.ibm.mdm.config.external.automation\config_WAS_2014-04-30-15-08-59.log custSetupAppServer.py custSetupClasspath.py MDMServer 1010 1 <ORA port #> server1  

c)
command to deploy MDM.ear file to WAS Profile
 
C:\IBM\SDP\plugins\MDMDevEnvTooling\controller\deploy.bat AppSrv01 IBM-b65cd2b4faaNode01Cell IBM-b65cd2b4faaNode01 C:\IBM\WEBSPH~1\APPSER~1 true C:\IBM\SDP\plugins C:\Users\IBM_AD~1\AppData\Local\Temp\DEST\ear\MDM.ear MDMServer C:\workspaceMDM101\.metadata\.plugins\com.ibm.mdm.config.external.automation\scripts\custDeployApp.py C:\workspaceMDM101\.metadata\.plugins\com.ibm.mdm.config.external.automation\scripts\uninstallApplication.py DB2UDBNT_V82_1 server1

 

d)
To set WAS Variable for Oracle path:

cd C:\IBM\WebSphere\AppServer\bin
wsadmin -f setWASVariable.py


File setWASVariable.py:


import sys
global AdminConfig
global AdminTask
 
#AdminTask.setVariable('[ -scope Server=server1 -variableName DB2_JDBC_DRIVER_PATH -variableValue c:\\ibm\\sqllib1]')
AdminTask.setVariable('[ -scope Server=server1 -variableName ORACLE_JDBC_DRIVER_PATH -variableValue c:\\oracle\\appl1]')
AdminConfig.save()
 
print ""
print "  Done setting up environment."
print ""
 



Tags:  mdm-server websphere dest mdm

Customising BPM Process Applications using CSS files

Nic Townsend 2700051ED4 | | Visits (9209)

Tweet

MDM v11.3 leverages IBM BPM technology to provide a data stewardship framework under a single UI. However, it may be desirable to modify this UI to match the look and feel of your existing solutions. While BPM Process Designer allows you to restructure page layout with the Coach designer, the best way to modify the look and feel of a Coach is with CSS.

BPM provides three ways to alter CSS out of the box from within Process Designer:

  1. Supplying a CSS file as an attached script, or editing the inline CSS section of a Coach View
  2. Adding a <style> HTML attribute on a Coach View inside a Coach
  3. Supplying CSS as a Custom HTML fragment inside a Coach

However, there are occasions where these approaches are not suitable:

  1. You may not own the Coach View so cannot directly update the attached scripts or inline CSS
  2. You want to use the CSS across multiple Coach Views rather than scoping it to the single instance of the Coach View. Or you want to customise multiple Coach Views without manually adding <style> attributes to each Coach View.
  3. You do not want to have to maintain multiple copies of the same CSS file inside each of the Custom HTML fragments.

The ideal way to insert CSS into a Coach would be to load the CSS as a managed file in BPM - that way you only need to edit the managed file, and all Coaches that reference the CSS would use the latest version (pending updated snapshots). Unfortunately, BPM does not offer this mechanism out of the box.

Update 05/11 - If you upload a HTML file to BPM that consists of <style> tags wrapping the CSS, you can use the "Managed File" option of the Custom HTML component to load the "HTML" CSS into the Coach. However, this does not work if the HTML file is inside a .zip archive, or if the CSS needs to reference local resources.

A documented Javascript function in BPM holds the key: com_ibm_bpm_coach.getManagedAssetUrl. This function allows you to get a URL for a managed file. The function call is: com_ibm_bpm_coach.getManagedAssetUrl(filePath, assetType, projectAcronym);

 

filePath

This is the name of the file you want a URL for. This can take two forms:

  • Simplest case, the name of the managed file directly - "styling.css"
  • If the managed file is an archive, you can supply the archive name with the internal path appended - "styling.zip/css/styling.css"
assetType

Either web, server, or design - this is the asset type for the managed file. BPM has utility statics for this: 

  • com_ibm_bpm_coach.assetType_WEB
  • com_ibm_bpm_coach.assetType_SERVER
  • com_ibm_bpm_coach.assetType_DESIGN
projectAcronym

The short name for the process app/toolkit where the managed file is stored.

 

By using this method call, you can get a relative URL to the managed file in BPM - eg /teamworks/webasset/2064.146ca826-2525-45e7-bcf5-b68b4c46eadc/J/styling.zip/css/coachstyling.css.

Using the above technique, I created an HTML file containing a <script> that creates a link to the URL for a CSS file in the document's <head> element. I then used the "Managed File" option on a Custom HTML component to load the script into a Coach. This meant that my CSS file was referenced inside the <head> element.
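
A minimal sketch of such a script is shown below; the archive path and the project acronym passed to getManagedAssetUrl are illustrative assumptions and need to be replaced with the names used in your own process app or toolkit:

<script>
// Build a URL for the CSS file held as a managed file (the names here are examples only)
var cssUrl = com_ibm_bpm_coach.getManagedAssetUrl(
        "styling.zip/css/coachstyling.css",
        com_ibm_bpm_coach.assetType_WEB,
        "MYAPP");

// Append a <link> element to the document's <head> so the CSS applies to the whole Coach
var link = document.createElement("link");
link.rel = "stylesheet";
link.type = "text/css";
link.href = cssUrl;
document.getElementsByTagName("head")[0].appendChild(link);
</script>

Upload the HTML file containing this script as a web managed file, then reference it from the "Managed File" option of a Custom HTML component on the Coach.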

You can use this principle to load your own custom CSS files into a Coach. Custom CSS files can be used to override BPM Coach Views or Coaches, or alternatively CSS files can be used to override the MDM Coach Views supplied with MDM V11 onwards.

As an example, I present an updated solution to creating a custom coach view to load CSS tree icons. Rather than producing a custom Coach View to wrap the MDM Tree and load tree icons, the HTML script can be used to the same effect. You simply write an HTML script to link to the URL for the CSS file that references the MDM Tree icons, and insert the HTML as a Custom HTML component on the same Coach as the MDM Tree. The advantage of this solution is that you do not have to create a new Coach View for every set of icons; you just use a new HTML script to link to each CSS file required for the page.

Developing behavior extensions for InfoSphere MDM v11

Dmitry Drinfeld 270005DFVF | | Comment (1) | Visits (8518)

Tweet

Developing behavior extensions for InfoSphere MDM v11

Special thanks to Stephanie Hazlewood for providing guidance as well as content for some of the sections of this article!

Executive Summary

Many established organizations end up having unmanaged master data. It may be the result of mergers and acquisitions, or due to the independent maintenance of information repositories siloed by line of business (LOB). In either situation, the result is the same: useful information that could be shared and consistently maintained is not. Unmanaged master data leads to data inconsistency and inaccuracy. IBM Master Data Management (MDM), specifically the physical MDM set of capabilities, allows the enterprise to create a single, trusted record for a party. Similar capabilities exist for mastering product and account information. It can be integrated with content management systems and can also support a co-existence style of master data management, sometimes referred to as hybrid MDM, where linkages between matched records mastered and maintained in various source systems are created in virtual MDM and then persisted in physical MDM.

One of the most fundamental extension mechanisms of InfoSphere MDM allows for the modification of service behavior. These extensions are commonly referred to as behavior extensions and the incredible flexibility they provide allows for an organization to implement their own “secret sauce” to the over 700 business services provided out of the box with InfoSphere MDM.  The purpose of this tutorial is to introduce you to behavior extensions and guide you through the implementation, testing, packaging and deployment of these extensions.  You will be introduced to the Open Service Gateway initiative (OSGi)-based extension approach introduced in the InfoSphere MDM Workbench as of version 11.

 

Introduction

With the release of InfoSphere MDM v11, we adopt the OSGi specification, which allows, amongst many other things, extensions to be deployed in a more flexible and modular way. This document will describe a real client behavior extension scenario and step you through the following required steps:

-        Extension scenario outline.

-        Creation of the extension project.

-        Development of the extension code.

-        Deployment of the extension onto the MDM server.

-        Testing deployed code using remote debugging.

We will then conclude this document with the summary of what you have learned.

 

Extension scenario

It is often necessary to customize an MDM implementation in order to meet your solution requirements. One of the extension capabilities InfoSphere MDM provides is the ability to add business rules or logic to a particular out-of-the-box service. These types of extensions are referred to as behavior extensions, as they ultimately change the behavior of a service. In this tutorial we will create a behavior extension to the searchPerson transaction.

The searchPerson transaction is used to retrieve information about a person when provided with a set of search criteria. You can filter the result set to the active, inactive or all records retrieved by these criteria. It is important to note that this particular search transaction uses exact match and wildcard characters to retrieve the result set. There are separate APIs available for probabilistic searching; this service is not one of them.

Sometimes the searchPerson transaction response may contain duplicate parties. For example, if a party contains both legal and business names which are identical, and the searchPerson transaction uses last name as the criterion, the parent object will be returned twice in the response, as it will be matched by both of the names. While this behavior is acceptable in some circumstances, some cases might require more filtering before the response is returned. In order to do so, we will create a behavior extension which will be responsible for processing the transaction output and removing any duplicate records from the result set. The InfoSphere MDM Workbench provides us with exactly the right tools to quickly create and deploy such an extension.

 

Creating extension project

First, create the extension project structure using the wizards provided by MDM Workbench. Go to File -> New -> Other… and search for Development Project wizard:

image

If you cannot find the Development Project wizard in the list, chances are the Workbench has not been installed; please verify using IBM Installation Manager.

When creating your project, make sure to specify unique project and package names in order to avoid conflicts with existing ones:

image

Make sure to choose the correct server runtime for your projects, as well as a unique name for the CBA project:

image

Note: You are allowed to choose from the existing CBAs. A single CBA can contain multiple development project bundles.

Click Finish and wait for the wizard to generate the required assets.

At this point, what we have is a skeletal InfoSphere MDM Development project that contains all of the basic facilities to help us create the desired extension. The next step is to create the extension assets and there are two ways of doing so: either by using the behavior extension wizard, or by using the model editor.

Creating a behavior extension using the extension wizard

You can create an extension using a wizard in the MDM Workbench, much like the one used to create a development project:

1.   Open Behavior Extension wizard by going to File -> New -> Other… -> Behavior Extension, located under Master Data Management -> Extension folders

image

2.   Once in the wizard, select the development project to place the extension under:  image  

Note: A development project can contain multiple extensions of various types underneath it. You might choose to use development projects to logically group extensions having a similar purpose or type, or to facilitate parallel development activities.

3.   Within the next window, choose a name and a description for your behavior extension, and choose a Java class name for your extension. This is the class that we will be populating with custom logic in order to achieve the desired behavior. Alternatively, if you need to use an IBM Operational Decision Manager (ODM, previously known as ILOG) rule, specify the associated parameter. ILOG/ODM rule creation is not covered as part of this tutorial, as we will implement the extension as a Java extension.

image

4.     Within the “Specify details of the trigger” pane, you need to specify the following parameters:

a. Trigger type:

                                    i.  ‘Action’ will cause the behavior extension to trigger whenever the chosen transaction is run by itself or as a part of another transaction. ‘Actions’ are executed at the component level.

                                    ii.  On the other hand, if you are looking to trigger the extension only on a specific standalone transaction event (otherwise known as a controller level transaction), select the ‘Transaction’ trigger type.

                                    iii. The ‘Action Category’ trigger type executes the behavior extension on various data actions, such as add, update, view or all, for extensions to be executed at the component level.

                                    iv. The ‘Transaction Category’ trigger type will kick off the behavior extension when a transaction of the specified type is executed, namely inquiry, persistence or all.

          b. When to trigger:

                                    i. ‘Trigger before’ will cause the behavior extension to fire before the work of the transaction is carried out. Sometimes you will hear this referred to as a preExecute extension. It is typically used when some sort of preparation procedure has to be executed before the rest of the transaction is carried out. An example of such a scenario would be preparing data within the business object that is being persisted.

                                    ii. ‘Trigger after’ will cause the behavior extension to run after the transaction work has been carried out. Sometimes you will hear this referred to as a postExecute extension. It is typically used in scenarios where the logic implemented in the behavior extension depends on the result of the transaction. Normally any sort of asynchronous notification would be placed in a post behavior extension, as there would be no way to roll it back in the case of transaction failure if it were sent before the transaction is executed.

          c.  The ‘Priority’ parameter indicates the order in which this behavior extension will be triggered. The lower the priority number, the higher the priority. That is, a behavior extension with priority 1 would execute first, followed by behavior extensions with priority 2, 3 or 4, in that order.

In our scenario we are looking to filter the response of a specific transaction, namely searchPerson. Therefore we set the trigger type to ‘Transaction’ with a value of searchPerson. Since we are filtering the response of the transaction, we have to trigger our behavior extension after the transaction has gone through and the response has become available. Lastly, in our particular example priority does not play a special role, so we will leave it at the default of ‘1’.

image

5.   After the above configuration is done, click Next and review the chosen parameters. Note that there is a checkbox at the top of the dialog, allowing you to generate the code based on the specified parameters immediately. For the purposes of this tutorial leave it checked and click Finish.  The workbench will generate all of the required assets for you.

 

Creating a behavior extension using the model editor

If you have already used the wizard approach above to create the behavior extension, please feel free to skip ahead to the section titled “Review your generated extension code” that follows.

This section describes how to generate a behavior extension using the model editor.  To do so, the following steps will guide you through this process:

1.   Go to the development project you created earlier and open the module.mdmxmi file under the root folder of the project. Select the model tab within the opened view.

2.   Right click PartySearchBehaviorExtension folder, then select New -> Behavior Extension:

image

 

3.   Name the behavior extension that has been created as a result ‘PartySearchDuplicateFilter’, and provide appropriate documentation:
image

4.   Now we will create a transaction event definition under behavior extension. Right-click the behavior extension, then select New - > Transaction Event.

 

5.   Once the transaction event has been created, specify the appropriate properties:

 

a.   Because this event is triggered on the searchPerson transaction, PersonSearchEvent is an appropriate name. Recall that sometimes the “trigger before” behavior is referred to as a “preExecute” extension.

b.   Because the ‘Pre’ checkbox stands for preExecute (more specifically, the behavior extension gets executed before the rest of the transaction), leave it unchecked. Similar to the wizard configuration, leave the priority as ‘1’, since the priority of execution does not affect this behavior extension.

c.   Finally, select searchPerson as the transaction of choice by clicking Edit… -> Party -> CoreParty -> searchPerson.

image

After all of the above configurations are done and reviewed, go ahead and click Generate Code under the Model Actions section of the view, telling workbench to generate configured assets.

 

Review your generated extension code

Once either of the above methods is used, let us review the generated assets:

  • Under bestpractices.demo.behavior, we find the PersonSearchDuplicateFilter class, which is the actual behavior extension Java class that, once configured, will be executed at runtime. We will shortly be providing the implementation that filters out duplicate person objects from the response of the searchPerson transaction.
  • In OSGI-INF/blueprint/blueprint-generated.xml we can see the OSGi service definition, listing the bestpractices.demo.behaviour.PersonSearchDuplicateFilter class as the extension service.
  • Finally, in the resources/sql/<db_type>/PartySearchBehaviorExtension_MetaData_DB2.sql file, we find the configurations we’ve specified regarding behavior extension execution:

o    EXTENSIONSET table record defines the behavior extension, its associated class bestpractices.demo.behaviour.PersonSearchDuplicateFilter and its priority of ‘1’:

image

 

o    CDCONDITIONVALTP defines a new condition of transaction name being equal to searchPerson.

o    EXTSETCONDVAL connects the above CDCONDITIONVALTP record to the behavior extension record from EXTENSIONSET. Additionally, another EXTSETCONDVAL record connects the CDCONDITIONVALTP with id ‘9’, which stands for execution of the behavior extension after the transaction.

 

Let us now move on to developing the extension code required to filter out duplicate person records from the result set returned by the searchPerson transaction.

 

Develop the extension code

The behavior extension skeleton and supporting configuration assets have now been generated. You add your custom logic, or behavior change, in the execute method of the PersonSearchDuplicateFilter class. The objective is simple: we need to go through all of the partySearchResult objects returned by the service, and if the same person object repeats multiple times throughout the result vector, we want to remove it. The following code will achieve just that:

 

    public void execute(ExtensionParameters params)
    {
        // Only work with vectors in the response
        if(params.getWorkingObjectHierarchy() instanceof Vector)
        {
            // Get the response object hierarchy
            Vector partySearchResultList =
               (Vector)params.getWorkingObjectHierarchy();

            // Iterate through the party search result
            // objects to find duplicates
            Iterator listIterator =
               partySearchResultList.iterator();

            // We will keep the party ids of objects we've already
            // processed to identify the duplicates
            Vector partyIdList = new Vector();
            while(listIterator.hasNext())
            {
                Object o = listIterator.next();
                if(o instanceof TCRMPersonSearchResultBObj)
                {
                    TCRMPersonSearchResultBObj personSearchResultBObj =
                          (TCRMPersonSearchResultBObj)o;
                    String partyId = personSearchResultBObj.getPartyId();

                    // If the party id has not been seen yet, this person
                    // object is not a duplicate, otherwise - remove it from
                    // the response
                    if(partyIdList.contains(partyId))
                        listIterator.remove();
                    else
                        partyIdList.add(partyId);
                }
            }
        }

        System.out.println("PartySearchBehaviorExtension has finished executing.");
    }

 

Note: The above implementation is not pagination-friendly; pagination will not be covered as part of this tutorial.

 

When you compile the code above, you will notice that some of the classes cannot be resolved and have to be imported. You cannot simply import TCRMPersonSearchResultBObj, because the package containing this business object is not imported by the bundle we are working with. To fix that, open BundleContent -> META-INF -> MANIFEST.MF and go to the Dependencies tab:

image

Add com.dwl.tcrm.coreParty.component to the list of imported packages. You should now be able to import TCRMPersonSearchResultBObj in the behavior extension class and resolve the dependency error.
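
If you prefer to edit the manifest source directly rather than using the Dependencies tab, the resulting Import-Package header would look roughly like the following sketch. The <existing imported packages> placeholder stands for whatever packages your generated manifest already lists, and any version constraints are omitted for brevity:

Import-Package: <existing imported packages>,
 com.dwl.tcrm.coreParty.component

Note that OSGi manifest continuation lines must begin with a single space.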

After recompiling the projects, you will notice that the PartySearchBehaviorExtensionCBA now contains an error:

image

This error occurs because the composite bundle that contains the PartySearchBehaviorExtension bundle does not import com.dwl.tcrm.coreParty.component. To resolve the error, open the manifest file of the CBA, which should be located under the root folder of the project, and add com.dwl.tcrm.coreParty.component to its list of imported packages. Recompile, and the error should be resolved.

Now that all compilation problems have been resolved, we are ready to deploy our extension onto the server.

 

Deploying your new behavior extension to MDM

Once the implementation of the behavior extension has been developed, we are ready to deploy it onto the server. There are two steps involved in the deployment:

-        Deploying code to the server.

-        Executing generated SQLs to insert required metadata.

 

Deploying code to the server

Our customized behavior extension can be deployed to the server as a Composite Bundle Archive (CBA) as follows:

1.   Make sure that the customized code has been built and then export the CBA containing the behavior extension by right clicking the CBA project and selecting ‘Export… -> OSGi Composite Bundle (CBA)’.

2.   In the opened view, select PartySearchBehaviorExtension as the bundle to include. Click ‘Browse…’ and navigate to a selected export location and click ‘Save’. If you do not explicitly provide the file name, the wizard will generate the appropriate name automatically.

3.   Click ‘Finish’. The CBA containing the behavior extension has now been exported to selected location.

4.   At this point, we will assume that the MDM instance is up and running. Open the WebSphere Administrative Console; we want to import our new CBA into the internal bundle repository. To do so, go to Environment -> OSGi bundle repositories -> Internal bundle repository. In the opened view, click New…, choose Local file system, and specify the location of the CBA we exported above. Save your progress.

5.   Once the CBA has been imported, attach this new bundle to the MDM application. Go to Applications -> Application Types -> Business-level applications and choose the MDM application from the opened view. In the next view, open the MDM .eba file.

6.   We are now looking at the properties of the MDM Enterprise Bundle Archive (EBA). In order to attach our CBA, go to the Additional Properties section and select Extensions for this composition unit.

7.   If this is the first extension that you’ve deployed on your instance, the list of attached extensions will be empty. Click Add…, check the CBA we imported above, then click Add. Wait for the addition to complete and save your changes.

8.   You may think that we are done here, but not quite. We’ve only updated the definition of the EBA deployment by adding our extension. The MDM OSGi application itself has not been updated and even if you restart the server, your new behavior extension will not be picked up. So you must update the MDM application to the latest deployment by returning to the EBA properties view.

image

Before we attached our extension, the button shown above was grayed out and the comment stated that the application is up to date. Since we have now updated our application with a new extension bundle, we need to update it to the latest deployment. Go ahead and click the Update to latest deployment … button.

9.   In the next view, you can see that the PartySearchBehaviorExtensionCBA that we’ve attached to the MDM EBA will now be deployed:

image

At this point, scroll down and click Ok to proceed. It may take several minutes depending on your system hardware.

10.  WebSphere will now take you through three views offering summaries of the deployment and several customization options. There is no need to customize anything; click Next three times, followed by Finish. The application will then update, which may take 5 – 10 minutes depending on the underlying hardware. Once it is complete, save your changes. The MDM application has now been updated to the latest deployment, which includes our extension.

Now we need to deploy our custom metadata to the database.  This metadata will govern the behavior of our extension in ways discussed above.

 

Deploy metadata onto the MDM database

As mentioned earlier in this tutorial, the Workbench generates database scripts that insert the required configuration into the metadata tables of the MDM repository. This metadata is generated based on the parameters we provided for our behavior extension in the Creating extension project section. To deploy this metadata to the database, run the database scripts listed under the resources -> sql folder that are appropriate for your database type. If you later need to remove the extension from the server, run the rollback scripts provided in the same folder.
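
For example, on DB2 the generated metadata script could be run from a DB2 Command Window along the following lines; the database name and user below are placeholders for illustration, so substitute the values for your own MDM database and run the commands from the folder containing the generated script:

db2 connect to <MDM_DB> user <db_user>
db2 -td; -f PartySearchBehaviorExtension_MetaData_DB2.sql

The -td; option sets the statement terminator to a semicolon; adjust it if your script uses a different terminator.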

Note: If any portion of the script fails, investigate the error, because it may leave the extension unusable. Potential causes include residual data from a previous extension (the rollback script was not run when that extension was removed), an incorrect database schema, and so on.

Once the scripts have run successfully, your behavior extension is fully deployed. Restart your WebSphere server so that the new metadata is picked up the next time the application runs.

 

Testing deployed code using remote debugging

Now that all aspects of the behavior extension have been deployed, we are ready to test it out! To do that, run a searchPerson transaction. Make sure there is at least one person in the database so that the search yields a successful result and triggers the new extension; this test shows us that the extension has been deployed successfully. Once the transaction returns successfully, open the SystemOut.log of the WebSphere server, located under the logs folder of the WebSphere profile where the MDM application is deployed. If the extension has deployed correctly, then because of the following line in our custom code:

System.out.println("PartySearchBehaviorExtension has finished executing.");

you should see this message in the logs:

[6/17/14 13:24:59:816 EDT] 000001b3 SystemOut     O PartySearchBehaviorExtension has finished executing.

Note: The log message is there for testing purposes only and, depending on how heavily the behavior extension is used, can significantly impede performance. For that reason, make sure to remove such debugging messages or move them to a fine logging level before going into production, for example:

logger.finest("PartySearchBehaviorExtension has finished executing.");
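
A minimal sketch of that guarded-logging approach, assuming plain java.util.logging; the class and method names below are illustrative only, your project may use a different logging facility, and in practice the guarded call would sit at the end of the execute method:

import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggingSketch {

    // Illustrative only: a logger named after the extension class
    private static final Logger logger =
        Logger.getLogger("bestpractices.demo.behaviour.PersonSearchDuplicateFilter");

    public void logCompletion() {
        // Guard the call so the message is only built when FINEST logging is enabled
        if (logger.isLoggable(Level.FINEST)) {
            logger.finest("PartySearchBehaviorExtension has finished executing.");
        }
    }
}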

 

Configuring WebSphere Application Server debug mode

To observe the behavior of the extension more closely, put the WebSphere server into debug mode and connect the MDM Workbench to it so that you can debug the code step by step. To put your server in debug mode:

1.   Go to the WebSphere Application Server administrative console and navigate to Servers -> Server Types -> WebSphere application servers -> <Name of your instance>.

2.   Once in the server configuration view, take a look at Server Infrastructure section and navigate to Java and Process Management -> Process definition.

3.   In the Additional Properties section, select Java Virtual Machine.

4.   In the Java Virtual Machine view, scroll down to the Debug Mode checkbox, check it, and provide the following settings in the Debug arguments text box:

-Dcom.ibm.ws.classloader.j9enabled=true -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777

Note that ‘7777’ is the debug port to which the MDM Workbench will connect. Make sure this port does not conflict with any other assigned ports on the server, and set it accordingly.

5.   Save the configuration and restart your server. It is now running in debug mode. Note: If you later observe unexpected performance degradation and no longer require debug mode, take the server out of debug mode using the same steps.

 

Configuring MDM Workbench for remote debugging

Once the server is running in debug mode, we can go back to the MDM Workbench and configure it for debugging:

1.   In MDM Workbench, go to Run -> Debug Configurations.

2.   Within the Debug Configurations window, double click Remote Java Application. This will create a new Remote Java Application profile.

image

3.   When configuring the Remote Java Application, let’s name the configuration ‘MDM Local Instance Debug’. The Project setting does not play a role; you can leave it empty or accept whatever default value is populated. Connection Type should remain ‘Standard (Socket Attach)’. Lastly, Connection Properties should reflect the host of the MDM instance and the debug port we chose above.

image

          We will not cover other tabs because the configuration we’ve done so far is sufficient.

4.   Once configuration is complete, hit Apply followed by Debug in order to attach to the MDM instance. The attach process may take a little bit of time depending on the environment. Once it is complete, go to the Debug perspective of your environment. In the debug view, you should observe the connected MDM instance if the attach was successful:

image

You can see above that the instance is available along with all of the threads.

5.   Finally, set a breakpoint at the beginning of the behavior extension’s execute method and observe the breakpoint being hit once we run a searchPerson transaction:

image

 

6.   If you have multiple TCRMPersonSearchResultBObj objects coming back in the response, step through and observe the duplicates being removed.

As a last point, note that we can debug both local and remote instances as described above, using Eclipse’s Remote Java Application debug capabilities. 

 

Conclusion

In this tutorial we’ve gone through the steps of creating, configuring, deploying and testing a basic yet realistic behavior extension scenario for InfoSphere MDM.

We’ve covered two ways in which an extension template can be created: while the wizard option is straightforward and is preferable for a novice or a simple extension scenario, the model editor allows for more flexibility.

We’ve taken a look at the various configurations that apply to a behavior extension and outlined their effects on its execution. Additionally, we’ve covered the assets that get generated as a result of the configuration.

For the development step, we’ve created and analyzed the implementation of our behavior extension.

And finally, we’ve deployed, tested and debugged our behavior extension to make sure it performs as expected.

All of the above steps constitute a complete development process of an MDM Server behavior extension.

 

Related materials

InfoSphere Master Data Management operational server v11.x OSGi best practices and troubleshooting guide

How to deploy Composite Business Archives (CBA) to WebSphere



Tags:  mdm-server mdm preview mdm-workbench osgi article behavior-extension mdm11

Fast path to installation: Cookbooks for v11.3

vwilburn 100000F865 | | Visits (6580)

Tweet

If you're looking for a fast path to installation, try one of our brief cookbooks. Each cookbook limits its pages to a specific scenario for Standard and Advanced Editions. That means each cookbook excludes superfluous pages and speeds you onto a simplified install experience. The following cookbooks cover key MDM scenarios:

  • Quick POC or development environment with the Workbench
  • DB2, WebSphere clustering, and WebSphere MQ messaging
  • Oracle, WebSphere clustering, and WebSphere default messaging


The cookbooks' worksheets help you to prepare for and document your installation. Verification steps ensure that your installation is functional and ready to go. Example values give you a starting point as you work through each aspect of installation. Additionally, these cookbooks have diagrams and screen captures for key aspects of installation, for example:

imageimage
 

 

As always, if your environment or scenario is more complex than the cookbook ones, you can find the details in the InfoSphere MDM Standard and Advanced Installation Guide. That guide also includes troubleshooting, fix pack, and client installation topics. You can also quickly search across all the MDM product docs in IBM Knowledge Center.



Tags:  pdf docs documentation mdm11.3 mdm cookbooks

Testing MDM web services

tgarrard 120000GH07 | | Comments (2) | Visits (11532)

Tweet

As of version 10.1, MDM is secured by default. This means that using the Web Service Explorer to test your web service will not be possible. Whilst there are many web service testing tools out there, like SOAPUI, there is one included within the MDM Workbench that you can use: the Generic Service Client. The following steps detail how to invoke the required MDM web service:

  1. To test the web service you need to select the WSDL that contains the operation you want to test (N.B. all MDM web services can be found in the project CustomerResources in 10.1 and MDMSharedResources from v11 onwards).
  2. Right-click the WSDL and select Web Services -> Test with Generic Service Client

This will then open the following editor:

image

 

  3. Select your operation and fill in the required fields for the message (requestID, requesterName and requesterLocale, plus your transaction specifics)
  4. Modify the service binding, i.e. your host/port: select the Transport tab and specify the new bindings for the URL

image

 

  5. Specify your security details. To do this, select the Request Stack tab and select the "Override Stack" radio button. Then click the Add button and select "User Name Token" from the drop-down. You should be presented with the following:

image

 

  6. Fill in the Name and Password settings
  7. Finally, click the Invoke button next to Edit Request

The responses are saved and can be rerun, but if you want more functionality you'll probably need to look at Rational Performance Tester.



Tags:  mdm-workbench techtip security wsdl webservices mdm

InfoSphere Master Data Management operational server v11.x OSGi best practices and troubleshooting guide

Dany Drouin 270004VXKT | | Visits (8321)

Tweet

InfoSphere Master Data Management operational server v11.x OSGi best practices and troubleshooting guide

 

Note: This preview only covers the initial set up of the MDM Workbench. The full developerWorks article has now been published and contains these additional topics:

  • Bundle project build path
  • Exporting services from composite bundles
  • Deploying a CBA composition unit extension
  • Class loaders explained… cyclical dependencies
  • Troubleshooting

 

Introduction

 

The goal of this article is to show best practices for developing with the InfoSphere MDM operational server. We will discuss common OSGi patterns and troubleshooting, including failures and their resolution, as well as how best to deploy MDM composite bundle archive (CBA) extensions.

 

The InfoSphere MDM v11 operational server is based on an enterprise OSGi architecture, which is modular in nature. The benefits of a modular application design include reducing complexity, reducing time to delivery, and increasing serviceability. The Java EE infrastructure leveraged in previous versions of InfoSphere MDM had limited ability to enforce or encourage a modular design.

The advantage of a modular MDM application is that customizations can be deployed without having to alter the core MDM application. Instead, customizations are attached to the core MDM application in the form of extensions. This is done using composite bundles, or CBA files.

image

 

Optimal workspace operational server configurations

 

The InfoSphere MDM Workbench is a tool that supports development of customizations and extensions to MDM operational server. The MDM Workbench allows you to define the desired data model and transactions and then generates the code required to implement the MDM Server extensions.

When using workbench to build and deploy your MDM customizations and extensions, there are a few workspace configurations to consider for achieving the best performance and development experience.

 

  1. Workbench workspace server definition settings

    Publishing CBAs from the MDM Workbench using the "Run server with resources within the workspace" option, sometimes called a "loose config" option, is not desired because it might cause the workspace and the server deployment to be out of sync. Use of this option often results in CBA deployment errors where the MDM EBA application fails to start. To resolve this, you must manually remove the CBA deployment using the WebSphere Application Server administrative console to return the application back to its original state.

    The optimal configuration is to use the "Run server with resources on Server" publishing option. Although this option will be slower, it is more stable. This option is a true reflection of a deployment to the production environment because the CBA is physically built, packaged, and deployed in the internal bundle repository of WebSphere Application Server.

    The second server definition option is to avoid publishing changes automatically to the server. Since publishing can be a time-consuming operation, we want to avoid doing it for every small change we make in the workspace. The option to publish only when we have accumulated sufficient changes is ideal.

    -Double-click (or right-click and select Open) to open the server’s definition.

    image







    image
    Note: Un-deploy any previously deployed CBA assets from the server prior to switching the publishing settings.


    Finally, we recommend leaving “Start server with a generated script” unchecked to allow proper MDM logging to occur.
    image









     

  2. Run server in debug mode

    Avoid republishing the CBA and bundle changes to the server for simple Java class changes.  We can opt to run the server in “debug mode” and leverage the hot swap of Java code (also known as the “Hot Method Replace” feature).  When the application server runs in debug mode, hot method replace allows most application changes to be picked up automatically without requiring a republish, application restart, or server restart.  There are cases where we cannot rely on hot method replace alone: application structure changes such as OSGi blueprint changes, manifest changes, CBA manifest changes, and code refactoring will still require a republish.
     
  3. Remove legacy applications and services

    If your MDM implementation doesn’t require the use of the Legacy UIs and/or legacy JAX-RPC web services, you can choose to uninstall these components from the application server.  This will increase the startup performance of the application server and also reduce memory consumption. 

    The MDM JAX-RPC web services are deployed as a CBA extension.  Note that the JAX-RPC web services have been deprecated in InfoSphere MDM v10 and replaced by the new JAX-WS specification.  It is recommended to migrate your existing JAX-RPC web services implementation to the new specifications but this may not always be possible.

    Uninstalling JAX-RPC web services is as simple as removing the CBA extension from the composition unit of the InfoSphere MDM operational server. 

    1. In the WebSphere Application Server Administrative Console, navigate to Applications --> Application Types --> Business-level applications, and select MDM-Operational-server-EBA-E001. Then select com.ibm.mdm.hub.server.app_E001-0001.eba. See below:

    image





























    2. Select ‘Extensions for this composition unit’.
    3. Select the ‘com.ibm.mdm.server.jaxrpcws.cba’ CBA currently attached to the EBA and click Remove.
    4. Save changes to master configuration file.
    5. Return to the previous screen and click ‘Update to latest deployment…’.
    6. Save changes to master configuration file.

     
  4. Apply WebSphere interim fixes for performance improvements

    See the following InfoSphere MDM technote for more info.
    http://www-01.ibm.com/support/docview.wss?uid=swg21651235

 

 

Read more...



Tags:  classloader cba mdm11 best-practices osgi mdm-workbench troubleshooting mdmserver deployment error mdm

Deploying an MDM Virtual project with sample data using MDM Workbench

Nic Townsend 2700051ED4 | | Comment (1) | Visits (9005)

Tweet
This blog post details how to use the template models provided in MDM Workbench to create and deploy a new Virtual Configuration Project to an Operational Server, and then deploy the sample data supplied in the template model.

Creating a new Virtual Configuration Project

  1. Open MDM Workbench.
  2. Click File > New > Other > Master Data Management > Configuration Project.
  3. Click Next.
  4. Enter a name for the new Configuration Project. For this blog post I have named mine "Party".
  5. Click Next.
  6. Select a template to use for the project. For this blog post I selected the Virtual > Party template.
  7. Click Finish. Workbench will create a new project using the name you supplied. It can be viewed in Package Explorer.

Creating a connection to the Operational Server

  1. Switch to the MDM Configuration perspective in Workbench.
  2. Click the Servers tab in the bottom pane of Workbench, right-click the white space in the Servers window and select New > Server.
  3. Select WebSphere Application Server 8.5 as the server type.
  4. Enter the server's hostname/IP address in Server's host name.
  5. Enter an identifiable name for the server in Server name.
  6. Make sure the Server runtime environment is set to WebSphere Application Server v8.5.
  7. Click Next.
  8. If the Operational Server is on the same machine as Workbench, ensure the correct profile name is selected and click Automatically determine connection settings.
  9. If the Operational Server is a remote machine, click Manually provide connection settings.
    1. You need to supply the correct port numbers for the Operational Server.
      1. IPC is the IPC_CONNECTOR_ADDRESS.
      2. RMI is the BOOTSTRAP_ADDRESS.
      3. SOAP is the SOAP_CONNECTOR_ADDRESS.
  10. If security is enabled on the Operational Server, tick Security is enabled on this server.
    1. Enter the username and password to connect to the Operational Server.
  11. Enter the Application server name for the Operational Server.
  12. Click Finish.    

Deploy the new Configuration Project:

  1. At the top of Workbench, click Master Data Management > Deploy Configuration Project.
  2. Select the "Party" project you have just created.
  3. Select the Server you have just defined the connection for.
  4. Leave the check boxes unchanged.
  5. Click Finish and wait for the wizard to deploy the project to the Operational Server.

Processing and loading the sample data:

  1. Inside the Configuration Project there is a DemoData_config folder. This needs to be copied into the work directory on the Operational Server. This will be located under <WAS_install>/profiles/<AppSvr01 or Node01>/installedApps/<Cell name>/MDM-native*.ear/native.war/work/<Workbench Configuration Project Name>/work
  2. In the DemoData_config folder there will be two types of files: *-mpxdata-input.cfg and *.unl. Make a note of all the file names. The "Party" project I created has four files - org-mpxdata-input.cfg, orgdata.unl, per-mpx-data-input.cfg, persondata.unl.
  3. In Workbench, click Master Data Management > New Job Set.
  4. Select the "Party" project
  5. Leave Job Set Template as None
  6. Select the correct Server
  7. Click Next
  8. Click the add button (green plus sign), and select Bulk Tools > Derive Data and Create UNLs (mpxdata)
  9. Click OK.
  10. Under the Input tab, enter DemoData_Config/orgdata.unl in the Input File field. If your Configuration project does not have orgdata.unl, enter a valid *.unl file from the DemoData_config folder. This field will look in the Operational Server's work directory (which is why we copied the folder into the work directory on the Operational Server)
  11. Leave the rest of the tab unchanged.
  12. Click the Input Config tab.
  13. Click the Import From File button. This will open a File Browser window folder structure at the "Party" project root. Open the DemoData_Config folder and select the org-mpxdata-input.cfg file.
  14. Click the Options tab
  15. Select MEMCOMPUTE under Mode. Leave everything else unchanged.
  16. Click Finish. This should start the job running. Select the Jobs tab in Workbench and wait for the job to finish. Once the job finishes, right click on the job and select Run Jobset Again. This will load the Job Configuration window again.
  17. Click the Options tab and select MEMPUT. Make sure that at the bottom of the window, Mem mode is complete, and Put type is insert_update.
  18. Click Finish. Select the Jobs tab in Workbench and wait for the job to finish.
  19. Repeat steps 3-18 for the remaining *.unl with corresponding *-mpxdata-input.cfg files.


Comparing member records:

  1. Once you have finished running the mpxdata jobs,  click Master Data Management > New Job Set.
  2. Click the add button, and select Bulk Tools > Compare Members in Bulk (mpxcomp).
  3. Under entity type, I selected mdmper - this will differ if you did not choose the Party template.
  4. Click the Options tab, at the bottom of the window select Match, link, and search for the Comparison mode.
  5. Click Finish. Select the Jobs tab in Workbench and wait for the job to finish.
  6. Repeat steps 1-5 for the remaining entity types - I only had mdmorg left to do.


Enabling entity linkage:

  1. Once you have finished running the mpxcomp jobs,  click Master Data Management > New Job Set.
  2. Click the add button, and select Bulk Tools > Link Entities (mpxlink).
  3. Under entity type, I selected mdmper - this will differ if you did not choose the Party template.
  4. On the Inputs and Outputs tab check the Generate BXM output option. Leave the rest of the tab unchanged.
  5. Click Finish. Select the Jobs tab in Workbench and wait for the job to finish.
  6. Repeat steps 1-5 for the remaining entity types - I only had mdmorg left to do.

 

Conclusion

After completing these steps, you will have deployed a new Configuration Project to your Operational Server, and you will have populated the server with the sample data from the project template. To confirm this, perform a search against your Operational Server and you should have entities returned.



Tags:  mdm11 virtual initiate techtip sample configuration mdm mdm-workbench mds

What is IBM Knowledge Center?

vwilburn 100000F865 | | Visits (5583)

Tweet

Currently in an open beta until February 2014, IBM Knowledge Center gathers all IBM product information into one easy-to-access place. IBM Knowledge Center provides an improved search experience that supports saving your searches for future use, and capturing personalized collections of just the information you need.

Search across the whole library, or limit your search to the products and versions you care about. You can use filters to narrow your search results. You can apply multiple search filters to get specific results for what you need. More of what you need and less of what you don’t need.

Once you have the search results, you can assemble those topics and not lose them. Collect your favorite topics (like collecting your favorite shells, pins, or stamps). You get more bang for your buck if you put single topics in your collections – skip all the navigation. Saving single topics allows for easier retrieval at a later time. Organize just the topics you need for viewing and printing. Build custom PDFs that are tailored for your needs, with only the topics that you are interested in.

Put the power of information to work for you. Try IBM Knowledge Center today. After you try IBM Knowledge Center, please take a short survey and let us know what you think.



Tags:  kc knowledge-center mdm docs knowledge_center documentation

Manually installing a version 11 development and test environment

jtonline 110000B6Y8 | | Comments (22) | Visits (17030)

Tweet

There is a typical workstation install that automatically sets up a full development and test environment, described in the 'Full development environment install' section of the Installing MDM Workbench v11 post; however, this approach has some limitations. For example, it will only work if there is no trace of the products it installs already on your machine, there are restrictions on which features can be installed, it is not possible to provide passwords, and so on.

The alternative is to manually install the MDM Workbench for MDM configuration and development, and an Operational Server for test purposes. In this example I will install a full development and test environment on Windows, using a DB2 database. The instructions below assume that you do not have any of the prerequisite software installed but, if you do, just skip the relevant steps.

To avoid problems with path lengths, special characters, or Windows virtualised directories, I installed all the software under a C:\IBM directory.

This blog post is accompanied by a series of videos on YouTube.

 

Downloading and extracting install images

These are all the install images I downloaded. See the Download IBM InfoSphere Master Data Management version 11.0 document for more part numbers and information about downloading from Passport Advantage. I extracted the downloads into the folder structure described in the Setting up the installation media topic, with a couple of additional directories as required. If you don't already have anything that can extract .tar.gz files, you can use the unpack option in Download Director:

image

 

Important: If you are about to install MDM but downloaded the install images before 17th October 2013, you must download the product refresh first.

Important: The workbench install will fail if the .tar.gz install images are extracted using WinZip. So far it looks like the Download Director unpack option, WinRAR, and 7-Zip all work but please leave a comment if you have problems with any unzip tools and I'll update the list.

 

IBM Installation Manager V1.6.0

This is required to install everything except DB2.

Part number: CIM7CML

 

DB2 Enterprise Server Edition V10.1

I used fix pack 2 to install DB2, available via the DB2 fix pack download page, rather than installing the GA version and upgrading. Alternatively, you could use the following part.

Part number: CI6WEML

 

Installation Startup Toolkit

This provides the scripts required to create an MDM database.

Part number: CIR9WML

 

Master Data Management Standard & Advanced Edition

This is the actual MDM Operational Server install.

Part numbers: CIR9NML, CIR9PML, CIR9QML, CIR9RML, CIR9SML

 

Master Data Management Workbench Standard & Advanced Edition

This is the Rational based workbench used to configure and develop MDM solutions.

Part numbers: CIR9TML, CIR9UML, CIR9VML

 

Rational Application Developer for WebSphere Software V8.5.1

I installed the workbench into Rational Application Developer, but you could use Rational Software Architect for WebSphere Software instead. In either case you need at least version 8.5.1; however, there is a known problem with version 8.5.5.

Part numbers: CIE5FML, CIE5GML, CIE5HML, CIE5IML

 

WebSphere Application Server V8.5.0.2

The minimum version required is 8.5.0 fix pack 2, otherwise the install verification tests will fail. I downloaded WebSphere Application Server Fix Pack 8.5.0.2 from Fix Central. There are known problems if you are planning to use version 8.5.5.

Part numbers: CI6XNML, CI6XPML, CI6XQML

 

Installing Installation Manager

I ran install.exe to install Installation Manager in GUI mode. After installing Installation Manager you can add the required repositories individually before you run each install, as I did in the video series, or you can add all the repositories in one go as follows.

Create a repository.config file in the directory where you extracted the install images. Copy and paste in this content:

LayoutPolicy=Composite
LayoutPolicyVersion=0.0.0.1
repository.url.mdm=./MDM/disk1
repository.url.mdmst=./MDMST/disk1
repository.url.mdmwb=./MDMWB/disk1
repository.url.rad=./RAD/disk1
repository.url.was=./WAS
repository.url.wasfp=./WASFP

Edit any paths based on the directories you used before saving the file. Now you can add this single repository using the Installation Manager repository preferences and all the packages will show up on the install page.

For more information about Installation Manager, see the Installation Manager 1.6 documentation.

Note: you may have seen a suggestion to alter Installation Manager's agent data location using the cic.appDataLocation configuration setting; however, it is not typically necessary, or a good idea, to change this setting.

 

Installing the workbench

Installing the workbench is straightforward once you've added the Rational Application Developer and workbench repositories to Installation Manager. Pick a suitable install location, for example C:\IBM\SDP, and you can accept the defaults for everything else.

In the MDM Workbench v11 Installation video I chose to install a few additional features from Rational Application Developer, for example 'JSF' for customising UIs. Depending on what work you'll be doing, you may want to do the same, and you can always use Installation Manager to add extra features later on if required.

 

Installing Operational Server prereqs

Before installing the MDM operational server, you need to install DB2 Enterprise Server Edition version 10.1 and WebSphere Application Server 8.5.0.2. In addition, the Installation Startup Toolkit provides the database scripts you'll need to create an MDM database.

You can watch how to run these installs in the MDM v11 Test Environment: Installing Prereq Software video.

 

DB2

Important: you must install DB2 in a directory called SQLLIB, otherwise the operational server install will not work. For example, I installed DB2 to C:\IBM\SQLLIB.

I accepted most of the defaults in the DB2 install wizard, except that I chose not to enable email notifications or operating system security since this is for a development environment.

 

WebSphere Application Server and Installation Startup Toolkit

Both of these are installed using Installation Manager so I installed them at the same time. (You could even install them at the same time as the workbench to save time.)

Important: you must install fix pack 2 for WebSphere Application Server 8.5 otherwise the MDM install verification tests will fail.

I changed the install locations, to C:\IBM\AppServer and C:\IBM\MDMStartupKit respectively, but I accepted the defaults for everything else.

 

Preparing to install the Operational Server

There are several advantages to manually installing a development and test environment; however, the biggest disadvantage compared to a typical install is that the installer does not create an MDM database or WebSphere profile for you. Instead, you have to prepare the database and prepare the application server before starting the install.

These are the steps I followed, which are covered in the MDM v11 Test Environment: Preparing to Install video.

 

Edit SQL files

There are a couple of SQL files provided in the startup toolkit for creating an MDM database on DB2:

  • CoreData\Full\DB2\Standard\ddl\CreateDB.sql
  • CoreData\Full\DB2\Standard\ddl\CreateTS.sql

Both these files contain placeholders which need to be replaced with suitable values before use. These are the values I used:

 

Placeholder Value
<DBNAME> MDMDB
<TERRITORY> US
<DBUSER> db2admin
<TABLE_MDS4K> TBS4K
<TABLE_SPACE> TBS8K
<TABLE_SPMDS> TBS16K
<INDEX_SPACE> INDEXSPACE1
<LONG_SPACE> LONGSPACE1
<TABLE_SPPMD> EMESPACE1
<TABLE_SPPMI> EMESPACE2

 

Notes: Authority will be granted to the user specified by the <DBUSER> value, so this should be different from the user running the scripts. The database name is easy to specify in the installer, but here I used the default. The tablespace names need to match the settings used by the installer, and the easiest way to do that for a development environment is to use the values shown above.

The following PowerShell command will fill in the placeholders and I ran it for CreateDB.sql and CreateTS.sql rather than editing the files by hand:

powershell -command "(Get-Content C:\IBM\MDMStartupKit\CoreData\Full\DB2\Standard\ddl\CreateDB.sql) | Foreach-Object {$_ -replace '<DBNAME>','MDMDB' -replace '<TERRITORY>','US' -replace '<DBUSER>','db2admin' -replace '<TABLE_MDS4K>','TBS4K' -replace '<TABLE_SPACE>','TBS8K' -replace '<TABLE_SPMDS>','TBS16K' -replace '<INDEX_SPACE>','INDEXSPACE1' -replace '<LONG_SPACE>','LONGSPACE1' -replace '<TABLE_SPPMD>','EMESPACE1' -replace '<TABLE_SPPMI>','EMESPACE2' } | Set-Content C:\temp\CreateDB.sql"

 

Create database

After editing the SQL files, I ran them using this command in a DB2 Command Window:

db2 -v -td; -f C:\temp\CreateDB.sql

And the same for CreateTS.sql.
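
For completeness, and assuming you wrote the edited copy of CreateTS.sql to the same temp directory as CreateDB.sql, the equivalent command would be:

db2 -v -td; -f C:\temp\CreateTS.sql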

 

Create application server profile

I used the advanced option when creating an application server profile using the Profile Management Tool. I chose not to install the default application, gave the profile a meaningful name, and picked the Development tuning setting. Administrative security must be enabled for MDM, and the advantage of creating the profile yourself is that you get to choose the username and password. If you run the Profile Management Tool as administrator, you will also be given the option to run the server process as a Windows service, which isn't necessary for a development environment.

Important: When creating a profile for use with the MDM Workbench, make sure you create it in the default location with a directory name that matches the profile name.

 

Set DB2_JDBC_DRIVER_PATH WebSphere variable

After creating the application server profile you need to set the DB2_JDBC_DRIVER_PATH variable. The value needs to be the path containing the directory where you installed DB2. I installed DB2 in C:\IBM\SQLLIB, so on my system the DB2_JDBC_DRIVER_PATH variable should be set to C:\IBM.

Important: Do not follow the instructions in the description for this WebSphere variable: the operational server install requires a non-standard setting.

 

Installing the Operational Server

Now everything should be ready for a successful MDM install; however, the operational server installer does not do many checks, so there is still a chance that you could encounter problems. When problems do occur, the install will either roll back, requiring a reinstall, or keep going without reporting any issues until the install verification tests fail. This makes it difficult to track down problems, so it's worth taking time to make sure everything is configured correctly before starting the install.

In the MDM v11 Test Environment: Installing the Operational Server video I checked the following prereqs that are prone to error.

 

Manual pre-install checks

First I checked that DB2 was installed in a directory called SQLLIB using the db2level command. Next I checked that the required table spaces had all been created using the command below:

db2 -td; CONNECT TO MDMDB && db2 -td; SELECT VARCHAR(TS.TBSPACE,20) AS TABLESPACE, TS.PAGESIZE, VARCHAR(BP.BPNAME,20) AS BUFFERPOOL FROM SYSCAT.TABLESPACES TS, SYSCAT.BUFFERPOOLS BP WHERE TS.BUFFERPOOLID = BP.BUFFERPOOLID

Finally I checked the DB2_JDBC_DRIVER_PATH variable using wsadmin:

wsadmin -lang jython -c "AdminTask.showVariables('[-scope Node -variableName DB2_JDBC_DRIVER_PATH]')"

This should be the parent directory of SQLLIB, which I checked first.

 

Running the install

The MDM Operational Server install is another Installation Manager based install, so add the MDM repository if you haven't already and choose the install option. I changed the installation directory to C:\IBM\MDM to avoid any confusion with the missing space in the default, 'ProgramFiles'.

In the video I chose to install the Business Administration UI feature, and there are other features you may want to select. The remaining panels cover the configuration settings required to set up MDM.

Note: There are several buttons that test connections and retrieve details from the application server during the configuration process which do not report progress. It can look like the install has stopped responding, but if you use a slightly long single click and wait a few minutes, you should see the status message change when the processing has completed.

 

Configuration settings

On the Database Configuration page, the Database home setting must match the value of the DB2_JDBC_DRIVER_PATH WebSphere variable, for example C:\IBM on my machine. The rest of the settings on this page should match the MDM database you created earlier.

Pick suitable values for your requirements on the History Configuration page.

Select the Base Edition option on the WebSphere Application Server Configuration page. The settings should match the application server profile you created earlier. You'll need to specify the correct SOAP port setting since the default value is wrong. This is likely to be 8880 if the MDM profile was the first profile you created, but you can check using the AboutThisProfile.txt file in the profile logs directory.

Pick suitable values for your requirements on the Application Configuration page.

There may be further configuration panels depending on the features you selected, for example I needed to complete the Business Administration UI settings for my install. There are a collection of worksheets describing all the configuration settings in the Information Center.

When you've completed all the configuration panels you can start the install. This will take some time to complete.

 

Troubleshooting

If the install finishes with no errors and there are no errors reported for the install verification tests, the install was successful. If the install fails for some reason, the installation troubleshooting topic will help you debug known install problems. There are several logs which may help with this process, including Installation Manager logs, application server logs, MDM install logs and install verification logs. For example:

  • C:\ProgramData\IBM\Installation Manager\logs
  • C:\IBM\AppServer\profiles\MdmDev01\logs\server1\SystemOut.log
  • C:\IBM\AppServer\profiles\MdmDev01\logs\server1\SystemErr.log
  • C:\IBM\MDM\mds\logs
  • C:\IBM\MDM\logs\database
  • C:\IBM\MDM\IVT\testCases\xml\response
  • C:\IBM\MDM\IVT\testCases\xml_virtual\response

The exact paths may be different on your system if you chose to install in different locations.

 

Feedback

I hope you find this post useful but if you do spot any errors or omissions, please leave a comment below. Any hints and tips based on your own install experiences would also be great. If you're having problems installing, the best place to ask questions is on the MDM forum.

 

Related information

For an up-to-date* list of install related information, see the following wiki page:

  • Installing MDM Version 11

(* Please update it if it's not up-to-date!)

 

Updates:

  • added 'Related information' section (20th January 2014)
  • added warning regarding WinZip issue (11th February 2014)


Tags:  mdm11 install dest techtip installation mdm-workbench mdm-server

MDM Tech TV: Watch what's happening with InfoSphere MDM

vwilburn 100000F865 | | Visits (6820)

Tweet

Skip the fluff and go deep into the technical details of InfoSphere MDM. In a series we're calling MDM Tech TV, several IBM engineers catch you up on technical topics. These informal video demonstrations give you a quick taste of various features.  

Watch these videos now: MDM Tech TV
 

 

InfoSphere QualityStage Address Standardization with the MDM Workbench image

  • Deepa Balraj takes you into the MDM Workbench where she shows how you can use the address standardization features of InfoSphere QualityStage.
  • You'll see the callout handler, the returned standardized address, the payload attribute, and the preconfigured tables for this handler. More details are in the docs.
     

InfoSphere Reference Data Management: Role-Based Security and Access image

  • Sushain Pandit demonstrates how to configure users and groups within InfoSphere MDM Reference Data Management and WebSphere Application Server.
  • You'll see the users and groups in the WebSphere Administrative Console. Then see how the properties files control groups, role mappings, and permissible states/actions for entities and roles.  More details are in the docs.
     

Batch Activity Dashboard for Hybrid Data Governance in InfoSphere MDM image

  • Mike Adams and Gary Chen show the Batch Activity Dashboard where they view and manage batch tasks for data governance. The video features linking and persisting entities for a hybrid MDM implementation.
  • You'll see how to use the Batch Activity Dashboard to create, schedule, and manage MDM batch tasks such as suspect processing, name and address standardization, and entity persistence.  More details are in the docs.
     

The three new videos complement the existing MDM videos:

  • MDM Workbench: Create and deploy a virtual configuration
    • Mike Cobbett creates and deploys a virtual configuration in the MDM Workbench. More details are in the docs.
  • MDM & Information Server Integration
    • Lena Woolf follows the path of MDM information with InfoSphere Metadata Workbench. See her demonstrate the MDM integration with Metadata Workbench by using the export feature in the MDM Workbench. More details are in the docs.
  • Plus several earlier demos of the MDM Workbench, the Application Toolkit, and the 11.0 installation.


Watch for new videos in the MDM Tech TV series, coming soon!
 



Tags:  standardization videos mdmtechtv mdm-workbench quality-stage youtube hybrid hybrid-mdm rdm video governance

Handling No Results Found using the Application Toolkit MDM Search Integration Service

Dave_Kelsey 100000BGBR | | Visits (5975)

Tweet

No results found is a common response to a search request, but how do you detect this in your BPM process?

A “No Results Found” situation causes an exception to be thrown, which means you can detect it using an intermediate error event, but you need to be able to handle real errors as well as no results found, which is a special kind of exception.

Here we have a simple example of a search human service which applies equally to Physical and Virtual MDM Server.

image

 

There is an intermediate error event attached to the do search nested service that calls the MDM Search integration service. The decision gateway determines whether the error is a no results found or not.

To determine how to configure the decision gateway, we need to look at the format of the exception we get when a no results found is generated; this differs slightly between virtual and physical MDM Server.

Handling a Virtual MDM Server response

A sample of part of the xml format of the exception is shown here

<error type="mdm.bpm.exceptions.MDMBPMOperationException" description="MDMBPMOperationException">
  <cause type="java.lang.Throwable" description="cause"></cause>
  <errors type="java.util.ArrayList" description="ArrayList" />
  <localizedMessage type="java.lang.String" description="String">CDKAT0107E IxnMemSearch did not succeed. Reason: no candidates found.</localizedMessage>
  <message type="java.lang.String" description="String">CDKAT0107E IxnMemSearch did not succeed. Reason: no candidates found.</message>
  <reasonCode type="java.lang.String" description="String">ENOREC</reasonCode>

Here, it is the reason code that we are interested in. So the decision gateway configuration looks like this:

image

 

The decision logic being

tw.system.error.xpath("error/reasonCode").item(0).getText() == 'ENOREC'

extracts the reason code and checks for ENOREC.

Handling Physical MDM Response

The implementation remains the same, but the test within the decision gateway needs to be altered because the location of the reason code is different from the virtual case. In this example, the reason code that is checked applies to a Party or Person search; however, it is possible that the reason code will be different for different types of physical search.

A sample of part of the xml format of the exception is shown here

<error type="mdm.bpm.exceptions.MDMBPMOperationException" description="MDMBPMOperationException">
  <cause type="java.lang.Throwable" description="cause"></cause>
  <errors type="java.util.ArrayList" description="ArrayList">
    <element type="mdm.bpm.exceptions.MDMBPMResponseMessage" description="MDMBPMResponseMessage">
      <errType type="java.lang.String" description="String">READERR</errType>
      <errorMessage type="java.lang.String" description="String">No record is found.</errorMessage>
      <mdmThrowable type="java.lang.String" description="String"></mdmThrowable>
      <reasonCode type="java.lang.String" description="String">794</reasonCode>
    </element>
  </errors>
  <localizedMessage type="java.lang.String" description="String">CDKAT0130E Transaction searchPerson did not complete successfully.</localizedMessage>
  <message type="java.lang.String" description="String">CDKAT0130E Transaction searchPerson did not complete successfully.</message>
  <reasonCode type="java.lang.String" description="reasonCode"></reasonCode>

 

Here we see that the reason code we are interested in is nested under the <errors> tag within an <element>.

image

 

So in this case we need the decision logic to be

tw.system.error.xpath("error/errors/element/reasonCode").item(0).getText() == '794'

 



Tags:  search bpm application mdm-application-toolkit toolkit techtip mdmat

Latest Resources for MDM 11 developers

ThomasRogers 270006MA78 | | Visits (5383)

Tweet

Whether you're already familiar with the Workbench or completely new to it, there are a lot of changes in v11. So why not check out the MDM Workbench Development tutorial for a detailed step-by-step guide to developing physical customizations in the new release? It's a great resource if you're looking for more insight into what the MDM Workbench offers, going through the stages of designing a model and then deploying it to the MDM Server.

There's also a major new addition to the wiki: if you're looking for more information on installing MDM on Linux, head over to Installing InfoSphere MDM Version 11 on Linux using the custom installation process for a step-by-step guide to the whole process.



Tags:  mdm-workbench mdm11 install installation tutorial mdm-server

Going out of Style

jtonline 110000B6Y8 | | Visits (5019)

Tweet

image

There's a new series of blog posts just starting that will describe how fitting a specific, inflexible MDM implementation style might cause problems, and how a more adaptive style of MDM could help. The first in the series starts off by reviewing the traditional MDM styles:

MDM Styles... Going out of Style

If you haven't already visited the MDM Best Practices community, there is a growing collection of best practice papers which are worth checking out.



Tags:  physical infosphere virtual virtual-mdm adaptive-mdm physical-mdm adaptive hybrid style mdm hybrid-mdm hub registry centralized best-practices

Introduce yourself

jtonline 110000B6Y8 | | Comments (5) | Visits (8888)

Tweet

From an early MDM Workbench news site, the MDM Developers community has evolved and grown to a group of over 200 members, and it would be great to take a break from the usual posts and forum discussions to find out more about some of you with a quick blog interview. Whether you're a new member or a long term contributor, please say hi and tell us a little about yourself.

Feel free to leave a comment and answer any of the following questions that resonate with you, or add your own questions instead. This is just a casual blog interview and meant to be more like a real world conversation, rather than a formal resume or biography!

For fun, and a bit of encouragement, I have a few limited edition MDM Developer community stickers to give away!

image

Here are a few questions to get you started:

  • How long have you been working with MDM?
  • What did you do before that?
  • What inspires you in your work?
  • What are you passionate about?
  • If you were stuck on a technology deprived island, what single technology could you not live without?
  • What book are you currently reading?
  • What has been your biggest surprise you have witnessed in the technology industry?
  • Is there any technology that you think should get more respect and adoption but does not?
  • What is your favorite technology that fizzled or failed to live up to the hype?
  • Any new technologies that you think are about to break into the big time?
  • What future technology would make your life easier?
  • How do you prefer to find answers to your questions?
  • How are you using social networking today?
  • How could you see yourself using it in 5 years?
  • What are some of your favorite websites/feeds/twitter accounts to follow?
  • What publications / websites do you read / visit?
  • Where can we find you on Twitter, LInkedin, Facebook, and other social spaces?
  • Are you planning to attend any events where you are likely to meet other MDM developers?

Update: Unfortunately no one replied in time to claim the ticket that prompted this blog post. Luckily if you're one of the first to reply, you could still get one of these, much more exclusive, MDM Developers stickers!

image

Photo © Alexander Henning Drachmann (CC BY-SA 2.0)
 



Tags:  community

MDM Application Toolkit and Product domain soft specs - Tips!

jaylimburn 2700028UUJ | | Comments (2) | Visits (7744)

Tweet

MDM Application Toolkit for Product Domain

I recently had to build a product bundling process for a demo using BPM and the MDM Application Toolkit (MDMAT). Having built many business processes over the past 2 years using data from InfoSphere MDM, I realized this was going to be the first one that I would build against the product domain of the physical engine. Using the MDMAT against the Party domain is pretty darn easy, and very quickly a rich process can be built that interacts with MDM's library of web services for many different types of processes. How useful would it be for me when operating against the Product domain, especially when a good chunk of my data was stored in Product domain XML soft specs? Well, I'm pleased to say it was also very straightforward. I've written some notes below that will hopefully allow others to find it just as easy to use the MDMAT against the product domain.

The Challenge

The process was to execute a search against the MDM product domain using some pre-defined criteria that would allow me to pull back all products that met certain conditions. In this case, it was to retrieve a list of products that were within the 'Mobile Phone' category of the 'Channels' hierarchy, were aimed at a 'Market Segment' that was 'Affluent', had an 'Effective Date' before today's date, and had an 'Expiry Date' after today's date. This would allow me to show currently active offers on the mobile channel for Affluent customers. The 'Market Segment', 'Effective Date' and 'Expiry Date' attributes were all stored as attributes within an XML spec called 'Offer Attributes'. From the search results that came back from MDM, I also needed to pull out some additional attributes that were stored within another XML spec called 'Channel Mobile Phone'; these attributes were named 'MobilePhoneHeadline' and 'MobilePhoneSalesText'. All of this had to be built in a small amount of time AND display on a mobile device. Fortunately I knew that with the MDMAT and BPM I at least had a chance of pulling this off pretty easily.

The Solution

Whenever I build a business process I start by defining the variables that I will need. Since BPM applications are data-driven, I find it helpful to define the data upfront and then worry about wiring it into a process at a later stage. Using the MDM Workbench I exported my MDM WSDL and imported it into Process Designer. This gives me access to my MDM Product business objects within BPM, allowing me to easily construct a ProductSearchBObj object with the criteria I need to execute my search and also create a ProductSearchResultBObj object to hold the results that are returned from the search. In total I decided I needed 4 business objects:

 

Object Name | Object Type | Imported from | Purpose
ProductSearch | ProductSearchBObj | MDM Workbench | Holds Product search criteria
ProductSearchResults | ProductSearchResultsBObj | MDM Workbench | Holds Product search result data
MDMConnection | MDM_Connection (from MDMAT) | MDM Application Toolkit | Holds MDM server connection credentials
displayObject | DisplayObject | Manually created | Used by the UI controls to display data

With the objects defined I could move on to define my process flow. I created a very simple flow to suit the requirements as seen below:

image

I would first use the 'Configure Spec Search Criteria' node to execute a script to populate the ProductSearch object with the criteria I needed. I would then configure the 'Retrieve all Offers' node to use the MDM Application Toolkit's Physical MDM Txn service to execute a search and return a list of ProductSearchResults objects. I figured that there would be some simple scripting required to extract the information I would need from the search results' XML specs, so I created a 'Populate Display Object' node to define a script that would allow me to extract the spec values from the XML and pass them into the displayObject. The 'Display All Offers' coach then displays the list of displayObjects in a table, and once a user has made a selection the 'View Offer Details' coach displays details for that offer.

With my objects defined and my process defined, all I had to do was a little bit of scripting to firstly populate my search and then extract my search results to populate my displayObject. (I had already populated my MDMConnection object with my MDM server's credentials and configured the 'Retrieve all Offers' node to use the MDM Application Toolkit's Physical MDM Txn service to call the MDM 'searchProductInstance' service and pass in my ProductSearch object.)

Populating the Search

I wrote a simple script in my 'Configure Spec Search Criteria' node to pass in the search criteria. I won't include the full script here, but all I had to do was create an instance of a ProductSearch object and set the following attributes (a rough sketch of what such a script might look like follows the table):

 

Attribute | Value
ProductSearch/EntityCategorySearchBObj/CategoryName | 'Mobile Phone'
ProductSearch/EntityCategorySearchBObj/CategoryHierarchyName | 'Channels'
Spec Value instance 1
ProductSearch/SpecValueSearchBObj[0]/Path | '/OfferAttributes/MarketSegment'
ProductSearch/SpecValueSearchBObj[0]/SpecId | '1000100'
ProductSearch/SpecValueSearchBObj[0]/SpecValueSearchCriteriaBObj/Value | 'Affluent'
Spec Value instance 2
ProductSearch/SpecValueSearchBObj[1]/Path | '/OfferAttributes/EffectiveDate'
ProductSearch/SpecValueSearchBObj[1]/SpecId | '1000100'
ProductSearch/SpecValueSearchBObj[1]/SpecValueSearchCriteriaBObj/Value | currentTime
Spec Value instance 3
ProductSearch/SpecValueSearchBObj[2]/Path | '/OfferAttributes/EndDate'
ProductSearch/SpecValueSearchBObj[2]/SpecId | '1000100'
ProductSearch/SpecValueSearchBObj[2]/SpecValueSearchCriteriaBObj/Value | currentTime
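Purely as an illustration, a server script along the following lines could populate those values. Treat this as a sketch rather than a copy of my actual script: it assumes the ProductSearchBObj, EntityCategorySearchBObj, SpecValueSearchBObj and SpecValueSearchCriteriaBObj types came in from the WSDL import with exactly these names and nesting, that tw.local.ProductSearch is declared with type ProductSearchBObj, and that a simple string representation of today's date is acceptable for the date spec values.

   // Sketch only: type and attribute names depend on your own WSDL import.
   // currentTime stands in for today's date in whatever format your spec values use.
   var currentTime = String(new Date());

   tw.local.ProductSearch = new tw.object.ProductSearchBObj();

   // Category criteria: products in the 'Mobile Phone' category of the 'Channels' hierarchy.
   tw.local.ProductSearch.EntityCategorySearchBObj = new tw.object.EntityCategorySearchBObj();
   tw.local.ProductSearch.EntityCategorySearchBObj.CategoryName = "Mobile Phone";
   tw.local.ProductSearch.EntityCategorySearchBObj.CategoryHierarchyName = "Channels";

   // Spec value criteria: market segment, effective date and end date.
   tw.local.ProductSearch.SpecValueSearchBObj = new tw.object.listOf.SpecValueSearchBObj();

   function addSpecCriterion(index, path, value) {
       var criterion = new tw.object.SpecValueSearchBObj();
       criterion.Path = path;
       criterion.SpecId = "1000100";
       criterion.SpecValueSearchCriteriaBObj = new tw.object.SpecValueSearchCriteriaBObj();
       criterion.SpecValueSearchCriteriaBObj.Value = value;
       tw.local.ProductSearch.SpecValueSearchBObj[index] = criterion;
   }

   addSpecCriterion(0, "/OfferAttributes/MarketSegment", "Affluent");
   addSpecCriterion(1, "/OfferAttributes/EffectiveDate", currentTime);
   addSpecCriterion(2, "/OfferAttributes/EndDate", currentTime);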

When passed into the 'Retrieve all Offers' node, my search criteria successfully result in the products I am interested in being returned as a list of ProductSearchResultBObjs within my ProductSearchResults object. The final stage is to extract the attributes I need from the results to populate my displayObject.

Extracting the spec values and populating the display object

Up until now everything I had done was pretty similar to other processes I had built. This final piece was the most challenging, in that I had never extracted values from an XML spec within a business process before. Looking at my ProductSearchResults object I drilled down into the XMLSpec attribute and noticed that there were no attributes defined within it to store the spec values that were returned from my search. That is because the import of the WSDL into Process Designer cannot recognize the type as XML. To solve this problem I manually created objects that represent the XML spec contents. I added a 'ChannelMobilePhone' object that contained two String attributes, and an 'OfferAttributes' object that contained three String attributes, as per my MDM XML specs. I knew that the MDM Application Toolkit uses the name of the attribute to work out how to populate the object inside the process, so by creating the attributes to match the XML spec they would be populated with the values from the XML spec. When I was done my XML spec object looked like this: image

With my spec values now populated inside my ProductSearchResults object, all that was left was to use a script to iterate through the results and populate my displayObject. This was pretty standard stuff; I just had to ensure I included plenty of null checks in the script, but that was as complex as it got (a rough sketch of this kind of iteration follows below). On running my process the results were successfully returned from MDM and the spec values were added to the extra attributes I had defined within my XMLSpec attribute. Finally, the values from the XML spec were extracted and displayed in my coach screens.
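Purely as an illustration, the iteration could look something like the sketch below. Again, treat it as a sketch: it assumes the individual results are reachable as a list I'll call tw.local.results (the exact nesting inside the ProductSearchResults object depends on how the WSDL was imported), that each result exposes my manually created ChannelMobilePhone object under its XMLSpec attribute, and that the output is a list of my manually created DisplayObject type with headline and salesText attributes; all of those names are mine, not part of the toolkit.

   // Sketch only: variable and attribute names are illustrative.
   tw.local.displayObjects = new tw.object.listOf.DisplayObject();

   if (tw.local.results != null) {
       for (var i = 0; i < tw.local.results.listLength; i++) {
           var result = tw.local.results[i];

           // Skip any result that did not come back with the spec values we need.
           if (result == null || result.XMLSpec == null || result.XMLSpec.ChannelMobilePhone == null) {
               continue;
           }

           var spec = result.XMLSpec.ChannelMobilePhone;
           var display = new tw.object.DisplayObject();
           display.headline = (spec.MobilePhoneHeadline != null) ? spec.MobilePhoneHeadline : "";
           display.salesText = (spec.MobilePhoneSalesText != null) ? spec.MobilePhoneSalesText : "";

           // Append to the end of the display list.
           tw.local.displayObjects[tw.local.displayObjects.listLength] = display;
       }
   }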

This ended up being a bit of a longer blog post than I intended (sorry JT), but hopefully it will give you a good starting point for using the MDMAT with the product domain. I really enjoyed building this process (and writing this article) as it showed me how cool the MDMAT is for helping me build MDM-centric business processes. The ability to build processes against MDM without worrying about the connection or any complexity in calling MDM web services saves a huge amount of time, and with a little bit of script I was able to leverage the value of MDM's XML specs. If you want more information, drop me an email. I'd love to hear what you are doing. 



Tags:  mdm specs mdm-application-toolkit pim bpm techtip product mdmat

WAS logging in the MDM Application Toolkit

Nic Townsend 2700051ED4 | | Visits (5341)

Tweet

The MDM Application Toolkit consists of two components: a REST server and an IBM Business Process Manager toolkit.

When using the toolkit, you may wish to enable logging for these components. By default, the components log anything at error severity or greater. To modify the logging, log on to the WebSphere Application Server administrative console for the environment where the REST server and BPM have been deployed.

 


Changing log levels

On the left menu, click Troubleshooting > Logs and trace. Click the server or dmgr you wish to enable logging for, then click Change log detail levels.

image

Click the Runtime tab and expand All Components.

image

The REST server component logs under the package com.ibm.im.*. The BPM toolkit component logs under mdm.bpm.*.

Note: the BPM integration services supplied in the toolkit also log to the mdm.bpm.* package. However, each integration service will not appear in the log component list until you have started a human service that uses that integration service.

image
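Purely as an illustration, if you wanted fine-grained output from both packages mentioned above, the resulting log detail level string would look something like this (standard WebSphere trace specification syntax; choose the levels that suit your needs):

   *=info: com.ibm.im.*=all: mdm.bpm.*=all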

 


Log files

The BPM InfoCentre page shows that the output file is not the WAS default SystemOut.log / SystemErr.log file. As a result, the MDM Application Toolkit deployed to BPM will output to trace.log unless tracing is enabled. The REST server will always output to SystemOut and SystemErr.log depending on detail level.

image

Default locations are ${SERVER_LOG_ROOT}/trace.log, ${SERVER_LOG_ROOT}/SystemOut.log and ${SERVER_LOG_ROOT}/SystemErr.log



Tags:  logs bpm techtip mdmat logging mdm-application-toolkit

Want to know about MDM Workflow? We have the answers

jaylimburn 2700028UUJ | | Visits (9059)

Tweet

Rarely do I talk to a customer about MDM without the role of Workflow being discussed within the first 10 minutes. Correct usage of Workflow when interacting with Master Data is important both for an effective Data Stewardship strategy and to ensure that Master Data is served up to lines of business in an organized manner. IBM InfoSphere MDM provides robust Workflow capabilities out of the box that can be utilized across many domains, addressing workflow requirements for master data stewardship and governance as well as enterprise-wide consumption.

Recently IBM has published some excellent information to help you understand how you can utilize MDM Workflow to improve your master data quality and enhance your enterprise processes. Check out the links below to understand how MDM Workflow can help you and your business.

 

IBM developerWorks explains the MDM Workflow capabilities of the InfoSphere MDM platform to Enterprise Architects:

InfoSphere MDM for master data governance with MDM workflow

 

IBM DataMag article explains how workflow with MDM can enhance your business processes:

No More Excuses: Improving Business Processes and Decisions



Tags:  data-stewardship data-magazine governance workflow mdm

Master(ing) new terms in InfoSphere MDM, V11

vwilburn 100000F865 | | Visits (5888)

Tweet

In Version 11 of InfoSphere MDM, some big changes happened. One change that might leave you scratching your head is the addition of new and changed terms for some familiar components. We also have a couple new components, so those might be unfamiliar too. Let's take a quick walk through the changed terms to get you started.

 

Product names

The first thing that you'll notice is an emphasis on capabilities rather than product names. You might not see these familiar product names anymore:

InfoSphere MDM Server

Initiate Master Data Service (MDS)

Other Initiate product names

 

Instead, you’ll see references to technical capabilities that those products achieve:

 

Technical capability | Previous product name
virtual MDM | Initiate Master Data Service
physical MDM | InfoSphere MDM Server
hybrid MDM | InfoSphere MDM Server and Initiate Master Data Service

 

You might be wondering what exactly these technical capability terms mean. You can use virtual, physical, and hybrid MDM to manage your master data, whether you store that data in a distributed fashion, in a centralized repository, or in a combination of both.

The following definitions show the differences and the relationships among the technical capabilities:

virtual MDM

The management of master data where master data is created in a distributed fashion on source systems and remains fragmented across those systems with a central "indexing" service.

physical MDM

The management of master data where master data is created in (or loaded into), stored in, and accessed from a central system.

hybrid MDM

The management of master data where a coexistence implementation style combines physical and virtual technologies.


For more details and a diagram, see the comparison of virtual, physical, and hybrid MDM capabilities.

 

Server and engine terms

Another new area that you’ll notice is a unified server, which is referred to by one common term:

 

Earlier terms | Current term
InfoSphere MDM Server, Initiate Master Data Service, MDM Hub, MDM Server, master data engine | MDM operational server

 

The former InfoSphere MDM Server and the former Initiate Master Data Service are combined to share a single infrastructure in the application server. That single infrastructure is called the MDM operational server or operational server for short. The operational server is the software that provides services for managing and taking action on master data. The operational server includes the data models, business rules, and functions that support entity management, security, auditing, and event detection. For detailed descriptions and diagrams, see the architecture and concepts topic.

 

Records, member records, and entities

Finally, the concepts of entities and records were clarified:

 

entity

A single unique object in the real world that is being mastered. Examples of an entity are a single person, single product, or single organization.

record

The storage representation of a row of data.

member record

The representation of the entity as it is stored in individual source systems. Information for each member record is stored as a single record or a group of records across related database tables.

 

Depending on your implementation style, these concepts reflect the technical capabilities of virtual, physical, and hybrid MDM. For example, an entity in virtual MDM is assembled dynamically based on the member records by using linkages and then is stored in the MDM database. Conversely, an entity in physical MDM is based on matching records from the source systems that are merged to form the single entity. For details, see the diagrams and definitions for these concepts.

 

I’ll leave a discussion of hybrid MDM to a future article. If you’d like to read some conceptual topics about hybrid MDM now, see its technical overview.

 

Some helpful links:

  • Mapping of earlier terms to current terms
  • Portfolio-wide glossary
  • IBM terms and definitions
  • What else is new in V11


Tags:  glossary virtual-mdm mdm member-record record mdm11 hub entity mds operational-server physical-mdm initiate hybrid-mdm mdm-server terms

Installing MDM Workbench v11

jtonline 110000B6Y8 | | Comments (9) | Visits (18908)