Dany Drouin
MDM AE pMDM with RESTful web services
Previously, interactions with the MDM Operational Server were possible with EJB/RMI, JMS, JAX-WS and JAX-
The accepted payload types are application/xml and application/json. JSON support was added in v11.4 FP1.
It is important to note that all REST interactions use a single RESTful service, “MDMWSRESTful”, which accepts the PUT method type only and is accessed via URI http
The same xml request/response payload used for EJB/RMI is used for REST interactions.
For the full list of capabilities and supported request headers consult the following documentation link:
Interacting with MDMRESTful service
Here’s a sample client leveraging Apache Wink demonstrating an MDM RESTful call:
The above code submits an MDM XML payload and expects an XML response in return.
This is determined by the ‘Content-type’ and ‘Accept’ http header properties.
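As a hedged sketch of the same interaction without Apache Wink, the call can be made with Node's built-in `fetch` (Node 18+). The endpoint URL below is a placeholder, not the real MDM URI, and the `mdmadmin` credentials are the defaults used later in this post; only the PUT method and the Content-Type/Accept headers are taken from the text above.

```javascript
// Placeholder endpoint -- substitute the real MDMWSRESTful URI from the docs.
const ENDPOINT = "https://mdm-host:9443/mdm-restful-path";

// The media type used for both Content-Type and Accept decides whether the
// server parses and returns XML or JSON.
function mediaType(useJson) {
  return useJson ? "application/json" : "application/xml";
}

async function invokeMdm(payload, useJson) {
  const res = await fetch(ENDPOINT, {
    method: "PUT", // MDMWSRESTful accepts PUT only
    headers: {
      "Content-Type": mediaType(useJson),
      "Accept": mediaType(useJson),
      "Authorization":
        "Basic " + Buffer.from("mdmadmin:mdmadmin").toString("base64"),
    },
    body: payload,
  });
  return res.text();
}

console.log(mediaType(false)); // application/xml
console.log(mediaType(true));  // application/json
```

Switching a client between XML and JSON is then a single flag, since both headers always carry the same media type.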
Here’s a look at a getParty xml payload and response:
The same request/response in JSON, using application/json as both Content-Type and Accept:
The default MDM JSON model is actually based on the core XML schema model (MDMCommon.xsd and MDMDomains.xsd). Internally, MDM will validate the JSON using these schemas.
We use a “mapped notation” API to build the JSON. A couple of things to note about this implementation:
Don’t want to write any code to test your MDM services?
Choose “PUT” as the HTTP method
curl --user "mdmadmin:mdmadmin" -X PUT
If you've used previous versions of the workbench, one of the first changes you'll hit is that you no longer need to run the developer environment setup tool when you create a new workspace. In version 11, no projects need to be imported into the workspace, and you use the same installer to set up a local test server on your development machine as you would to install a production system.
Full development environment install
If you have a completely clean machine, the simplest way to get started is to use the workbench typical install. This will install DB2, Rational Application Developer, and WebSphere Application Server, along with MDM Server and the workbench, i.e. everything you need for a full development and test environment in one go. Here's how to get everything ready to run a typical install...
Firstly, you'll need to download all the typical install images. The following part numbers are required for a full MDM Workbench v11 typical install:
CIM6NEN, CIM6PEN, CIR9NML, CIR9PML, CIR9QML, CIR9RML, CIR9SML, CIR9TML, CIR9UML, CIR9VML, CIE5FML, CIE5GML, CIE5HML, CIE5IML, CI6XNML, CI6XPML, CI6XQML
Important: If you are about to install MDM but downloaded the install images before 17th October 2013, you must download the product refresh first.
Once you have all the install images downloaded, the contents must be extracted into a specific dire
After extracting all the install images, open the install launchpad, which you can find in the MDM\disk1 directory (there are 32 and 64 bit versions). The typical workbench install link is right at the bottom of the launchpad:
When the install starts, you should be able to click through all the panels without changing anything:
Make sure you confirm that the IVT tests pass at the end of the install and, if they did, you're ready to start developing for MDM v11!
Note: you should change the defa
Workbench only install
If you don't want a local server to test changes on, installing the workbench is much quicker, since DB2, WebSphere Application Server and the server install are not required. In this case, you'll only need the following part numbers:
CIM7CML, CIR9TML, CIR9UML, CIR9VML, CIE5FML, CIE5GML, CIE5HML, CIE5IML
The launchpad doesn't support this scenario, so you have to install Installation Manager manually, add
Alternatively, you can use the Installation Manager command line to install Rational Application Developer and the workbench in one step. For example, assuming you extract the install images in the same structure as for a typical install:
imcl install com.
A typical install is ideal for demos or evaluating MDM, but to set up developer environments I would recommend installing manually. You'll also need to do this if the typical install does not support your environment. The following blog post describes the manual install process:
There is also a wiki page with an up-to-date* list of install related information.
(* Please update it if it's not up-to-date!)
Doug Cowie
From version 11.4 FixPack 3 the MDM Application Toolkit has a new hierarchy widget, which replaces the now deprecated MDM Tree coach view.
This new widget, the MDM Hierarchy coach view, uses the latest in web visualisation technology to render hierarchies in BPM coaches. As well as using this new technology, the MDM Hierarchy coach view has a new method of interacting with the MDM operational server.
To highlight some of the new features, this post presents a step-by-step guide of how to get up and running with the new MDM Hierarchy coach view. I will assume a degree of familiarity with IBM BPM, in particular Process Designer.
Step 1: Drag and drop the MDM Hierarchy coach view from the palette onto the canvas; it is listed under the MDMAT grouping.
Switch to the configuration tab. You will notice that most of the fields have default values. In the rootNodeId field enter the values for the hierarchy and a node in the hierarchy in the format <hie
Step 2: Press the “Run” button in BPM. This will launch a browser, showing the coach you have just created. The hierarchy will be visible, and should render data if it has been set up correctly.
That is all that is required to get the MDM Hierarchy coach view up and running.
The coach view has a set of other configuration options; please see the documentation for more details.
The MDM Hierarchy coach view can be augmented by connecting it to a set of other coach views, which provide pop-up dialogs with additional behaviour that complements the hierarchy. These are the MDM Hierarchy Dialog Add, the MDM Hierarchy Dialog Details, the MDM Hierarchy Dialog Error and MDM Hierarchy Dialog MultiParent.
While each of these coach views can be added independently, the instructions below will guide you through adding them all.
Step 1: Adding the other coach views.
Drag and drop the MDM Hierarchy Dialog Add, the MDM Hierarchy Dialog Details, the MDM Hierarchy Dialog Error and MDM Hierarchy Dialog MultiParent on to the canvas that contains the MDM Hierarchy coach view.
Step 2: Create a new MDM_
Switch to the variables tab. Create a new Private variable, call it “events”. Change the Variable Type to MDM_
Step 3: Configure all of the widgets to use the same, shared event framework. Switch back to the Coaches view. For each coach, select it on the canvas, then select the Configuration tab at the bottom.
Locate the EventFramework configuration option; click the purple button next to the label. Then click the Select button to the right hand side. Find the variable you created in Step 2 (events) and select it. Do this for each of the widgets.
Step 4: Configure the visibility for each of the dialog coach views; this step should not be performed on the MDM Hierarchy coach view.
Select the coach view, then click the Visibility tab at the bottom. Leave source as “Value”, then press the purple button next to the Visibility label. Press the Select button, then expand the events variable, expand the appropriate event, then select the visibility entry. Each of the different dialogs should be configured against its specific event: the MDM Hierarchy Dialog Add should be configured to use the addNode event; the MDM Hierarchy Dialog Details should be configured to use the nodeDetails event; the MDM Hierarchy Dialog MultiParent should be configured to use the multiParent event; the MDM Hierarchy Dialog Error should be configured to use the error event.
Step 5: Click the “Run” button in BPM.
The tree now has additional behaviour: if you right-click on a node, a pop-up dialog appears that displays additional data about the node. The add button on this dialog launches the add dialog, which can be used to add nodes into the hierarchy. If a node has multiple parents in the hierarchy, an icon indicating this is displayed to the right of the node; clicking that icon launches the MultiParent dialog, which allows users to re-focus the hierarchy on the different parent nodes.
This brief post has demonstrated how to use the new MDM Hierarchy and associated coach views. Future posts will explore more advanced topics, such as replacing the Ajax service that supplies the data to the hierarchy, and creating custom widgets that use the event framework.
This document outlines how Physical MDM customisations can be built from source artefacts in an automated build and test system. This document does not aim to be a complete guide on this topic, but rather to point the way to how some detailed steps can be implemented using examples.
The MDM Advanced or Standard editions both include the MDM Workbench. In version 11.0 and beyond the MDM Workbench is used by solution developers to create artefacts which customise the MDM solution for the physical, virtual and hybrid implementation styles. These source code artefacts are typically built into a Composite Bundle Archive (CBA) and deployed to WebSphere where they augment the functionality already available in the MDM Server Enterprise Business Application (EBA).
A good practice amongst MDM solution developers is to create an automated build process such that customisation source code is checked-in to a code control repository, and an automated build process takes those source files and builds the CBA ready for deploying onto post-build test systems, placing built artefacts into a second repository or shared file system.
Some automated systems take this “build” concept further by automating the deployment of built artefacts to test systems, which in turn report back on the “health” of the build and how many tests passed and failed, quickly providing valuable feedback to developers on whether recent changes broke the solution. Project managers overseeing such projects are able to reduce project risk by adopting this continuous delivery process, and changes to MDM solutions become more reliable and safer as a result.
To add MDM solutions to such a continuous build environment it is necessary to:
This article is mostly concerned with step #7 – building source artefacts.
2. Materials and prerequisites
This article is accompanied by a collection of example scripts. We do not intend that these are used directly, but as an example of how you may wish to implement your own automated build process.
The current solution consists of four main files:
In order for the scripts to work, the machine running the scripts needs to have the following products installed:
To run the Ant scripts the user needs to run mdm_wb_build.xml as a build file.
The script contains only the “runBuild” target.
The target checks that necessary properties, such as Eclipse Home, date and time stamps and output folder prefix are set. Provided these properties do exist, it creates a folder based on OutputFolderPrefix and date and time, within which “logs”, “CBAExport” and “workspace” folders are created.
The logs folder contains “MDM
generateDevProject: BUILD SUCCESSFUL
workspaceBuild: BUILD SUCCESSFUL
exportCBA: BUILD SUCCESSFUL
End of report.
The CBAExport folder contains all of the exported CBAs.
The workspace folder contains a local copy of build artefacts.
After the directories have been created, the script checks which operating system it is running on and sets the isLinux or isWindows property to “true” as appropriate and calls either runAnt.sh or runAnt.bat to run a headless Eclipse process. The relevant file (either the batch or the shell script) should be available by default in the bin directory in the Eclipse installation directory.
The runAnt script then sets up the log files, environment variables and runs a second script “mdm
3. Step breakdown of the automated build and test system
Given that automated building and testing of MDM solutions is a worthwhile goal, the following sections provide some guidance in some of these areas where actions specific to the MDM tools and development/build environment are necessary, and some points of discussion are presented where choices exist.
3.1 Identify the pieces of the solution which represent the “source code” for the solution.
The source code for an MDM solution will be made up of a collection of Eclipse projects and their contents: MDM development, MDM configuration, MDM hybrid mapping, MDM service tailoring, MDM custom interface, MDM metadata and other MDM-specific project types. CBA projects will add to the list.
MDM Development projects contain a “module.mdmxmi” file, which contains a model of the customizations which the project aims to create. This file should always be considered to be source code.
At some point the mdmxmi file will be used to generate Java, XML, SQL and other file artefacts, and there are a few different approaches you can take for these files:
The current solution is to only consider files which have been manually changed as “source code”, and “generate artefacts” from the mdmxmi model as part of the automated build process itself. This approach demands that the MDM workbench tools are installed as part of the build environment, because the “generate artefacts” process that turns .mdmxmi files into other artefacts will be a necessary part of the build process.
A project “MDM
3.2 Create a source code repository.
There are many choices regarding which product to use as a source code repository and covering them is not the aim of this document.
3.3 Recognize when a consistent set of code has been checked-in, at which point a “build” is started.
This event may be triggered manually, automated overnight, or whenever a change-set is delivered to the code stream. The capturing of this event is often specific to the code control system being used, though some solution teams augment this by adding a web page that enables build requests to be manually requested.
3.4 Create a build environment.
A build environment should include RAD (or RSA) which can be called in a “headless” manner such that functionality within RAD can be used without a user-interface being present.
MDM Workbench will be required in addition to RAD to perform a complete build of “module.mdmxmi” files.
For the list of platforms that MDM Workbench v11.0 and onwards support – refer to the product release documentation.
3.5. The build environment “boot-straps” itself.
A small script is responsible for “boot-strapping” the process by checking out the other build scripts, which in turn build the artefacts from solution developers.
3.6 The build scripts check out the artefacts from code control to the local file system.
These actions are specific to the code control system so will not be discussed further here.
3.7. The source artefacts are processed, transforming them into built artefacts.
This phase of the automated system typically consists of a hierarchy of Ant files which decomposes the overall build process into many smaller steps and “Ant targets”. The Maven framework is a common choice of technology to oversee this phase.
These Ant files can be categorized into two types:
For a detailed walkthrough of specific implementations of the build process refer to the Ant scripts provided with this blog entry.
3.8. The build process often executes “unit tests” to further validate that the solution artefacts are healthy and do what they are expected to do.
The tools and approaches used to execute unit tests vary widely dependent on technology choice. Simple Java JUnit tests offer one simple solution, which can be invoked with scripts once the tests and tested code are built.
3.9 Built artefacts are published to a repository.
The publish process versions every build, so that build metadata can be gathered and reported against it. Build logs, unit test results and the results of other tests indicating the “health” of the build are gathered and published to the repository as well.
Products such as Rational Asset Manager can be used here, or for a really basic solution a simple shared folder on a network drive may suffice.
3.10 Automated deploy and test health of overall build.
If the build is considered “good” then further automation can be added to deploy the built solution to a test environment, with higher-level tests (functional and end-to-end system tests) exercising the solution further. Such tests can also report back to the build repository on the health of each build.
The automated deployment of entire systems for testing is often one of the most complex areas of the whole continuous development process. Products such as IBM UrbanCode Deploy (UCD) can be used for this stage of the process, though for some environments a set of (reasonably complex) scripts might be sufficient.
For the MDM pieces, we are mostly concerned with deploying and un-deploying SQL scripts, depl
Prior to deploying extensions to the server, it is often necessary to modify the database. This is possible using the SQL scripts found in the MDMSharedResources project in the built workspace. Rollback scripts in the same location should be applied once testing is complete to reset the database back to a known good state.
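As a small illustrative sketch of the deploy step, a script applying such SQL files typically splits them into individual statements before handing each to a database driver. The `;` terminator and `--` comment style are assumptions here; check the actual scripts in the MDMSharedResources project before relying on this.

```javascript
// Illustrative only: split a SQL script into statements for execution one
// at a time. Assumes ';' terminators and '--' line comments.
function splitStatements(script) {
  const noComments = script
    .split("\n")
    .filter(line => !line.trim().startsWith("--")) // drop comment lines
    .join("\n");
  return noComments
    .split(";")
    .map(s => s.trim())
    .filter(s => s.length > 0);
}

const script =
  "CREATE TABLE T1 (ID INT);\n-- add a sample row\nINSERT INTO T1 VALUES (1);";
console.log(splitStatements(script).length); // 2
```

The same splitter can be pointed at the rollback scripts after testing, so the database returns to a known good state.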
For CBA deployment, Jython scripts can be used to manipulate the WebSphere server. Detailed documentation of these steps can be found in the WebSphere documentation.
This blog post details how to use the template models provided in MDM Workbench to create and deploy a new Virtual Configuration Project to an Operational Server, and then deploy the sample data supplied in the template model.
Creating a new Virtual Configuration Project
Creating a connection to the Operational Server
Deploy the new Configuration Project:
Processing and loading the sample data:
Mike Cobbett
MDM Workbench v11 is here !
With huge pride, we have just shipped version eleven of MDM, including the new unified MDM Workbench v11. It's been nearly two years in the making, and represents the biggest change to the MDM tooling in recent years. In this article we outline these changes and give readers familiar with previous MDM tools a gentle introduction to what they can expect when they get their hands on the new tools.
The main changes made for v11 workbench can broadly be categorized under the titles unification, simplification and integration.
Unification is a drive to combine the tools from the v10.1 standard edition (formerly Initiate tools) and the tools from the v10.1 advanced edition into a single set of tools which run in the same Rational Application Developer (RAD) environment. Where the tools were inconsistent or overlapped, we adopted a common approach to make sure both sets of tools work together in a unified tooling environment.
An example of unification: Consistent use of the perspectives, showing the new MDM development perspective.
We want all the tools to be simpler. We aim to cut the time it takes to get value out of the MDM platform, automating where possible to relieve solution developers of repetitive tasks and reducing the amount of knowledge needed to get something working.
Toward this goal the workbench has made these changes:
An example of the way version 11 is simpler can be seen by comparing a version 10.1 workspace against a version 11 workspace:
"No man is an island" as the saying goes, and the same is true for products. MDM tools now play a wider role in enabling the ingestion and distribution of information in an MDM solution.
Enhancements in this area include:
For example: Our list of export wizards has been expanded to help push MDM metadata to more remote systems.
In summary, we hope you like the changes we've made to the tools, and hope you find that creating, configuring and developing an MDM solution is now quicker and easier than ever before.
For more detailed information, and a complete treatment of the MDM version 11 release, please refer to the info
Author: Geetha S Pullipaty
Product: InfoSphere Master Data Management.
Other prerequisite software: IBM Business Process Manager 8.5.6, IBM Process Designer 8.5.6, and IBM Stewardship Center 11.5.0 installed, with all process applications imported into BPM and the EPVs for MDM connection details set for all of them.
This blog provides detailed steps for connecting two MDM instances to a single Process Center environment for the IBM Stewardship Center (ISC) component. It covers the manual steps and does not run any scripts. It also does not explain all the steps required for installing and configuring ISC, only the additional steps needed when two MDM instances are involved.
This document assumes that you haven’t done any configuration on the MDM or BPM WAS consoles for ISC. If you have, please delete those additional configurations first.
Let’s say you have two MDM instances, MDM1 and MDM2, and a single Process Center instance, BPM1.
Steps to be done on WebSphere Administrative Console of MDM1.
1. Create an Alias Destination on MDM Server's SIBus as below:
2. Create JMS Queue on MDM Environment as below:
3. Create JMS Queue Connection Factory on MDM Environment as below:
4. Create a Foreign Bus Connection on the MDM environment as below:
Steps to be done on WebSphere Administrative Console of MDM2
1. Create a new Service Integration Bus on MDM2. This is needed because both MDM environments are identical and have the same SIBus name; connecting back from BPM would be an issue if BPM had to connect to two SIBuses with the same name.
2. Add a Bus member to the new SIBus. This is required to create a messaging engine for the new bus.
Steps to be done on Websphere Administrative console of BPM
1. Create a Foreign Bus Connection on the BPM environment as below:
Repeat the same steps but this time giving details of MDM2 instance. The values to be given in this case are
Name of service integration bus to connect to (the foreign bus): set the value to the name of the new SIBus created during step 1 of the previous section on the MDM2 instance.
By the end of these steps you will have two foreign bus connections created on BPM: MDM_BPM_LINK_ONE pointing to the MDM1 instance and MDM_BPM_LINK_TWO pointing to the new SIBus created on the MDM2 instance.
Steps to be done on BPM Process Center console
1. Login to BPM Process Center Console with admin access
Steps to be done on BPM Process Admin Console
1. Login to BPM Process Admin Console with admin access.
SQLs to be run on MDM databases
1. Run these SQLs on the database connected to MDM1 instance
UPDATE CONFIGELEMENT SET VALUE = 'true', LAST_UPDATE_DT = CURRENT_TIMESTAMP WHERE NAME = '/IB
UPDATE CDEVENTDEFTP SET ENABLE_NOTIFY='Y', LAST
2. Run these SQLs on the database connected to MDM2 instance.
UPDATE CONFIGELEMENT SET VALUE = 'true', LAST_UPDATE_DT = CURRENT_TIMESTAMP WHERE NAME = '/IB
UPDATE CDEVENTDEFTP SET ENABLE_NOTIFY='Y', LAST
update configelement set value='true', last
If you give snapshot names in the Process Center Console that are different from these, make sure to change the SQLs as well. Make sure all the SQLs run successfully.
Restart all the instances: MDM1, MDM2 and BPM. You need to restart the application servers, node agents and deployment managers.
Geeta Pulipaty
Author: Geetha S Pullipaty
Product: InfoSphere Master Data Management.
Other prerequisite software: IBM Business Process Manager 8.5.6, IBM Process Designer 8.5.6, and IBM Stewardship Center 11.5.0 installed and configured.
IBM Stewardship Center provides the ability to perform proactive data stewardship activities around Physical MDM. When a suspect is created in MDM, a notification is sent to BPM and a Suspected Duplicate task is created in the BPM Process Portal for a data steward to act upon. The default ISC implementation creates all SDP tasks for a single group called DataStewardGroup.
This document explains how to create tasks to different data steward groups based on certain data on the notification message.
1. Open Physical MDM Suspected Duplicates process application in editable mode using IBM Process Designer.
2. Add one input variable called "partySourceId" in process "Resolve Suspected Duplicate process".
3. Create two groups called DSSourceOneGroup and DSSourceTwoGroup, each with two users. DSSourceOneGroup has users DSSourceOneUser1 and DSSourceOneUser2; DSSourceTwoGroup has users DSSourceTwoUser1 and DSSourceTwoUser2. Users and groups are created using the Process Admin Console.
4. Create two teams with names "PSDP Source One DS Team" and "PSDP Source Two DS Team" and associate each team with one group created in previous step. PSDP Source One DS Team has group “DSSourceOneGroup” and "PSDP Source Two DS Team" has group “DSSourceTwoGroup” associated.
This step is done in the Physical MDM Suspected Duplicates process application using IBM Process Designer.
5. Change the assignments part for the step called "Suspected Duplicate" in process "Resolve Suspected Duplicate process".
So if partySourceId is 1, we assign the task to the team called "PSDP Source One DS Team", which internally assigns it to the first group and the two users associated with it. Similarly, if partySourceId is 2, the task is assigned to the second data steward team and group.
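The routing decision can be sketched as the kind of JavaScript used in Process Designer. The team names follow the earlier steps; the fallback team name is an assumption, and in a real process application the input would arrive as tw.local.partySourceId rather than a function argument.

```javascript
// Sketch of the team-assignment logic from step 5. In Process Designer this
// would read tw.local.partySourceId; written as a plain function here so it
// can run standalone.
function teamFor(partySourceId) {
  if (partySourceId == 1) return "PSDP Source One DS Team";
  if (partySourceId == 2) return "PSDP Source Two DS Team";
  return "Physical MDM Data Steward Team"; // assumed default team name
}

console.log(teamFor(1)); // PSDP Source One DS Team
console.log(teamFor(2)); // PSDP Source Two DS Team
```

Any unrecognised source falls through to a default team, so tasks are never left unassigned.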
6. Add the following script to step called "Create Task" on the service "Process Suspected Duplicate Event Message"
When we run with partySourceId set to 1, the task is created for users of the group “DSSourceOneGroup”, and with 2 it is assigned to users of the group “DSSourceTwoGroup”. This can be checked by logging into the Process Portal as different users from the two groups we created.
This is just to show the concept of dynamically routing tasks. In any customer scenario we need to identify the attributes on which the routing should be based. This attribute information also needs to be available in the message or notification sent from MDM to BPM, which may require customizing the notification classes on MDM so that the required information is included in the message.
Failed to connect to the JMX port on server
When you first connect from MDM Workbench to WebSphere Application Server (AppServer) where MDM Server is installed, for example to deploy a configuration project or to run a virtual job, you might see this error:
Job Manager Error - Failed to connect to the JMX port on server
There can be several reasons why the connection might fail. For background, here is the stack you rely on when you connect to the JMX port.
In order for the JMX port connection to be successful, you need every component in this diagram to be in a fully functioning healthy state. And yes, that means there are a lot of places you can check! As a result, it's not practical here to explain every possible area to review, but this should give you some idea of where to start investigating.
To begin, cut the problem in half: there is a message associated with the blueprint virtual bridge. Look for this, and it will help you decide whether the problem is more likely to be a runtime issue (below and to the right of the blueprint virtual bridge component) or a configuration issue.
1. Look for virtual bridge messages
On the Application Server where MDM is hosted, open SystemOut.log or the HPEL logs; if possible, restart the AppServer first to make sure you have the startup messages:
When the MBean starts successfully, you will see messages like these:
Note that these messages only appear on startup, so they may not be visible if the logs have wrapped.
If you see these success messages, the Blueprint virtual bridge is available for JMX requests, and everything to the right of the diagram (MPIJNI, JMS, databases, filesystems) is healthy.
In this case the likely cause of the problem is to the left of the diagram, and probably relates to a configuration issue. More information is available in section 3, “When the virtual bridge has started successfully”.
When the MBean has not started, you see messages like this:
If you see these failure messages, the Blueprint virtual bridge is not available. More information is available in the next section, “When the virtual bridge has not started”.
No messages found
If you don't find any messages relating to com.
2. When the virtual bridge has not started
When the blueprint virtual bridge has not started, the next step is to investigate potential runtime issues in one or more of the components on the right side of the diagram.
Note that you can choose whether you use a datastore or filestore for the messaging engine data store: the default is datastore (database).
There may be file system errors; these will usually be reported by the component that depends on the file system, for example the database or the JMS filestore.
In many cases you will be able to find technotes or other links on the internet with information about how to resolve the errors, or if not, contact IBM support and provide the logs that show the errors.
These related links have information about resolving blueprint errors:
3. When the virtual bridge has started successfully
Once you have found the success message, the next step is to investigate the configuration in both WebSphere Application Server and MDM Workbench.
Review the server logs for authorization errors
On the Application Server where MDM is hosted, open SystemOut.log or HPEL logs. Look for errors that reference one or more of:
Errors with any of these codes suggest that you need to revisit the security configuration in the WebSphere Application Server administrative console, and check the userid and password settings in the workbench client. Review the error messages; in many cases you will be able to find technotes or other links on the internet with information about how to resolve them, or if not, contact IBM support and provide the logs that show them.
Review the firewall settings
Verify that you can ping from the Workbench machine to the machine that hosts WebSphere Application Server and MDM Server, using your preferred ping tool.
Optionally you can use "Test Connection" from MDM Workbench, although note that in an ND configuration this tool only checks the dmgr, so it may not reflect the status of the actual server where MDM is hosted.
If you cannot connect to the target MDM server, the JMX connection will not work, and you need to contact your networking support team to make sure the network is available and, if necessary, that the appropriate firewall ports are opened.
Review the port and host configuration
Nic Townsend
MDM v11.3 leverages IBM BPM technology to provide a data stewardship framework under a single UI. However, it may be desirable to modify this UI to match the look and feel of your existing solutions. While BPM Process Designer allows you to restructure page layout with the Coach designer, the best way to modify the look and feel of a Coach is with CSS.
BPM provides three ways to alter CSS out of the box from within Process Designer:
However, there are occasions where these approaches are not suitable:
The ideal way to insert CSS into a Coach would be to load the CSS as a managed file in BPM - that way you only need to edit the managed file, and all Coaches that reference the CSS would use the latest version (pending updated snapshots). Unfortunately, BPM does not offer this mechanism out of the box.
Update 05/11 - If you upload an HTML file to BPM that consists of <style> tags wrapping the CSS, you can use the "Managed File" option of the Custom HTML component to load the "HTML" CSS into the Coach. However, this does not work if the HTML file is inside a .zip archive, or if the CSS needs to reference local resources.
By using this method call, you can get a relative URL to the managed file in BPM - eg
Using the above technique, I created a HTML file containing a <script> that would create a link to the URL for a CSS file in the document's <head> element. I then used the "Managed File" option on a Custom HTML component to load the script into a Coach. This meant that my CSS file was referenced inside the <head> element.
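As a sketch of that technique (the managed-file URL below is a placeholder, not the real BPM path), the script loaded by the Custom HTML component boils down to building a stylesheet link and appending it to the document head:

```javascript
// Build the markup for a stylesheet <link> pointing at a managed CSS file.
// The URL passed in would come from the BPM managed-file lookup described
// above; the value used here is only an illustrative placeholder.
function buildCssLink(cssUrl) {
  return '<link rel="stylesheet" type="text/css" href="' + cssUrl + '">';
}

var cssUrl = 'mycustom/coach-overrides.css'; // placeholder managed-file URL
console.log(buildCssLink(cssUrl));

// Inside the Coach, the same effect is achieved with DOM calls:
//   var link = document.createElement('link');
//   link.rel = 'stylesheet';
//   link.href = cssUrl;
//   document.getElementsByTagName('head')[0].appendChild(link);
```

Because the link lands in the document's <head>, the browser applies the managed CSS on top of the Coach's default styles.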
You can use this principle to load your own custom CSS files into a Coach. Custom CSS files can be used to override BPM Coach Views or Coaches, or alternatively CSS files can be used to override the MDM Coach Views supplied with MDM V11 onwards.
As an example, I present an updated solution to:
Dave_Kelsey 100000BGBR Visits (3080)
No results found is a common response to a search request, but how do you detect this in your BPM process?
A “No Results Found” situation causes an exception to be thrown, which means you can detect it using an intermediate error event; but you need to be able to handle real errors as well as “no results found”, which is a special kind of exception.
Here we have a simple example of a search human service which applies equally to Physical and Virtual MDM Server.
There is an intermediate error event attached to the "do search" nested service that calls the MDM Search integration service. The gateway decision determines whether it is a "no results found" or not.
To determine how to configure the decision gateway we need to have a look at the format of the exception we get when a no results found is generated and this differs slightly between virtual and physical MDM Server.
Handling a Virtual MDM Server response
A sample of part of the xml format of the exception is shown here
Here is the reason code we are interested in, so the decision gateway configuration looks like this:
The decision logic extracts the reason code and checks for ENOREC.
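As a hedged sketch of that logic (the <ReasonCode> element name and the error-variable wiring are assumptions based on the sample payload, not a confirmed schema), the gateway condition reduces to pulling the reason code out of the exception XML and comparing it with ENOREC:

```javascript
// Returns true when the virtual MDM exception payload carries the
// "no records found" reason code ENOREC. The <ReasonCode> element name
// is an assumption based on the sample payload shown above.
function isNoResultsFound(errorXml) {
  var match = /<ReasonCode>([^<]*)<\/ReasonCode>/.exec(errorXml);
  return match !== null && match[1] === 'ENOREC';
}

// In the decision gateway this would be applied to the variable bound to
// the intermediate error event (the variable name is implementation-specific).
```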
Handling Physical MDM Response
The implementation remains the same, but the test within the decision gateway needs to be altered because the location of the reason code differs from virtual MDM. In this example the reason code that is checked is applicable to a Party or Person search; however, it is possible that the reason code will be different for different types of physical search.
A sample of part of the xml format of the exception is shown here
Here we see that the reason code we are interested in is nested under the <errors> tag, within a child element.
So in this case we need the decision logic to be
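As with the virtual case, this can be sketched as a small helper (the nesting under <errors> and the element name inside it are assumptions from the sample, and the reason-code value to compare against depends on the type of physical search):

```javascript
// Extracts the reason code nested under the <errors> tag of a physical MDM
// exception payload; element names are assumptions based on the sample above.
function physicalReasonCode(errorXml) {
  var errors = /<errors>([\s\S]*?)<\/errors>/.exec(errorXml);
  if (errors === null) return null;
  var reason = /<ReasonCode>([^<]*)<\/ReasonCode>/.exec(errors[1]);
  return reason === null ? null : reason[1];
}

// The gateway condition then compares the extracted code against the
// "no results" value documented for your particular search transaction.
```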
From an early MDM Workbench news site, the MDM Developers community has evolved and grown to a group of over 200 members, and it would be great to take a break from the usual posts and forum discussions to find out more about some of you with a quick blog interview. Whether you're a new member or a long term contributor, please say hi and tell us a little about yourself.
Feel free to leave a comment and answer any of the following questions that resonate with you, or add your own questions instead. This is just a casual blog interview and meant to be more like a real world conversation, rather than a formal resume or biography!
For fun, and a bit of encouragement, I have a few limited edition MDM Developer community stickers to give away!
Here are a few questions to get you started:
Update: Unfortunately no one replied in time to claim the ticket that prompted this blog post. Luckily if you're one of the first to reply, you could still get one of these, much more exclusive, MDM Developers stickers!
MDM Application Toolkit for Product Domain
I recently had to build a product bundling process for a demo using BPM and the MDM Application Toolkit (MDMAT). Having built many business processes over the past 2 years using data from InfoSphere MDM, I realized this was going to be the first one that I was to build against the product domain of the physical engine. Using the MDMAT against the Party domain is pretty darn easy, and very quickly a rich process can be built that interacts with MDM's library of web services for many different types of processes. How useful would it be for me when operating against the Product domain, especially when a good chunk of my data was stored in Product domain XML soft specs? Well, I'm pleased to say it was also very straightforward. I've written some notes below that will hopefully allow others to also find it just as easy to use the MDMAT against the product domain.
The process was to execute a search against the MDM product domain using some pre-defined criteria that would allow me to pull back all products that met a certain criteria. In this case it was to retrieve a list of products that were within the 'Mobile Phone' category of the 'Channels' hierarchy, were aimed at a 'Market Segment' that was 'Affluent', had an 'Effective Date' before today's date and an 'Expiry Date' that was after today's date. This would allow me to show currently active offers on the mobile channel for Affluent customers. The 'Market Segment', 'Effective Date' and 'Expiry Date' attributes were all stored as attributes within an XML spec called 'Offer Attributes'. In the search results that come back from MDM I also needed to pull out some additional attributes that were stored within another XML spec called 'Channel Mobile Phone'.
Whenever I build a business process I first start by defining the variables that I will need. Since BPM applications are data driven, I find it helpful to define the data upfront and then worry about wiring them into a process at a later stage. Using the MDM Workbench I exported my MDM WSDL and imported it into Process Designer. This gives me access to my MDM Product business objects within BPM, allowing me to easily construct a ProductSearchBObj object with the criteria I need to execute my search.
With the objects defined I could move on to define my process flow. I created a very simple flow to suit the requirements as seen below:
I would first use the 'Configure Spec Search Criteria' node to execute a script to populate the ProductSearch object with the criteria I needed. I would then configure the 'Retrieve all Offers' node to use the MDM Application Toolkit's Physical MDM Txn service to execute a search and return the list of matching products.
With my objects defined and my process defined, all I had to do was a little bit of scripting to firstly populate my search and then extract my search results to populate my displayObject. (I had already populated my MDMConnection object with my MDM server's credentials and configured the 'Retrieve all Offers' node to use the MDM Application Toolkit's Physical MDM Txn service to call an MDM search transaction.)
Populating the Search
I wrote a simple script in my 'Configure Spec Search Criteria' object to pass in the search criteria. I won't include the full script here, but all I had to do was create an instance of a ProductSearch object and set the following attributes:
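As a plain-JavaScript sketch of that script (in Process Designer this would use tw.object business objects, and the attribute names below are illustrative placeholders rather than the real ProductSearchBObj fields):

```javascript
// Sketch of populating the product search criteria described above.
// Attribute names are placeholders; map them to the real search object
// fields exposed by your imported MDM business objects.
function buildProductSearch(today) {
  return {
    hierarchyName: 'Channels',        // hierarchy to search within
    category: 'Mobile Phone',         // category node of that hierarchy
    specName: 'Offer Attributes',     // XML spec holding the criteria below
    marketSegment: 'Affluent',
    effectiveBefore: today,           // offer already effective
    expiresAfter: today               // offer not yet expired
  };
}

var criteria = buildProductSearch(new Date().toISOString().slice(0, 10));
```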
When passed into the 'Retrieve all Offers' node, my search criteria successfully results in a list of the products that I am interested in being returned.
Extracting the spec values and populating the display object
Up until now everything I had done was pretty similar to other processes I had built; this final piece was the most challenging, in that I had never extracted values from an XML spec within a business process before.
With my spec values now extracted, I could populate my display object.
This ended up being a bit of a longer blog post than I intended (sorry JT), but hopefully it will provide you a good starter in using the MDMAT for the product domain. I really enjoyed building this process (and writing this article) as it showed me how cool the MDMAT is for helping me to build MDM-centric business processes. The ability to build processes against MDM and not worry about the connection and any complexity in calling MDM Web Services saves a huge amount of time, and with a little bit of script I was able to leverage the value of MDM's XML specs. If you want more information, drop me an email. I'd love to hear what you are doing.
jaylimburn 2700028UUJ Visits (3515)
I mentioned in my developerWorks status in the middle of last year that I was working on a Redbook on MDM and BPM integration. Well, I am pleased to announce that this Redbook has now finally been published!
The Redbook, titled 'Aligning MDM and BPM for Master Data Governance, Stewardship and Enterprise processes', provides a detailed insight into the business benefits of running MDM and BPM projects side by side. It explains how combining IBM InfoSphere MDM and IBM Business Process Manager provides a platform for Master Data Governance and Stewardship.
We'd love to hear what you think....
geeta pulipaty 270000GUE0 Visits (318)
Note: The Localization Resource editor should show the full list of locales existing in the JRE, but some languages are definitely missing. For example, Hindi is not present in the list, so you cannot select it and add translated text. In such cases, the workaround is to follow these steps instead of steps 3, 4 and 5 in the list above.
When using InfoSphere Master Data Management (MDM), customers may see core dumps being created when an application server is started or stopped in an environment with multiple application servers.
There are several use cases that can lead to this behavior, all involve a WebSphere Application Server topology of more than one JVM in a clustered or non-clustered environment.
- JVMs recycle by themselves with no apparent cause
This issue has been identified by Red Hat Linux Support as a defect: Bug 1327623 - replacing a .so that was opened and closed leads to a segfault on the next dlopen/dlsym.
Diagnosing the problem
The javacore and heapdumps will have two things in common:
1. The crash occurring in the ld-linux loader library
Resolving the problem
In order to provide immediate relief for this issue, a workaround is available. Follow the steps below:
1. Stop any JVM that has the core MDM application installed.
2. Locate the libMAD.so library and set its attribute to IMMUTABLE. Ensure you locate the library in the expandedBundles directory.
3. As root, run the command below:
geeta pulipaty 270000GUE0 Visits (1164)
Author: Geetha S Pullipaty
Product: InfoSphere Master Data Management.
Other prerequisite software: IBM Business Process Manager 8.5.6, IBM Process Designer 8.5.6, IBM Stewardship Center 11.5.0 installed and configured.
Communication from MDM to BPM for IBM Stewardship Center is always via messaging. When specific events happen on MDM, BPM must be notified so that it can create a new task. BPM can only listen to messages that are put in its event queue, so to make this possible we create a link between MDM and BPM. MDM can be installed with MQ as its messaging provider, while BPM recommends using only the WebSphere Application Server default messaging provider. To be able to send messages, a customer needs to follow these steps:
Configuration steps on MQ using MQ explorer
1. Create a new Sender channel under the Queue manager that is created by default for the MDM installation.
2. Create a Receiver channel under the Queue manager that is created by default for MDM.
Channel name : Could be anything. In our setup we named it BPMReceiver.
Transmission protocol : TCP
3. Create a local queue with usage type Transmission under the Queue manager that is created by default for MDM.
Queue name : Service Integration Bus name of BPM. In our setup the SIBus name of BPM server is BPM.
Scope : Queue manager
Usage : Transmission
4. Create a remote queue under the Queue manager that is created by default for MDM.
Queue name : Could be anything. In our setup we named it EVENTRQ.
Remote queue manager : Service Integration Bus name of BPM. In our setup the SIBus name of BPM server is BPM.
Transmission queue : Same name as given for the queue name in the previous step. It is the same as the Service Integration Bus name of BPM; in our setup it is “BPM”.
Remote queue : Destination name for which messages are to be sent on BPM Service Integration bus. Check for destination name that starts with even
5. Start the channels created in steps 1 and 2. The Sender channel created in step 1 should be in running state.
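For reference, the MQ Explorer steps above can equally be scripted with runmqsc against the queue manager created for MDM. The receiver channel, transmission queue and remote queue names below follow the setup described above (BPMReceiver, a transmission queue named after the BPM SIBus "BPM", and EVENTRQ), while the sender channel name, connection name and remote destination are placeholders you must adjust for your environment:

```mqsc
* Sender channel to the BPM SIBus (channel name and CONNAME are placeholders)
DEFINE CHANNEL('MDM.TO.BPM') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('bpmhost(7276)') XMITQ('BPM')
* Receiver channel, as named in step 2
DEFINE CHANNEL('BPMReceiver') CHLTYPE(RCVR) TRPTYPE(TCP)
* Transmission queue named after the BPM SIBus (step 3)
DEFINE QLOCAL('BPM') USAGE(XMITQ)
* Remote queue pointing at the BPM event destination (RNAME is a placeholder)
DEFINE QREMOTE('EVENTRQ') RQMNAME('BPM') XMITQ('BPM') +
       RNAME('event.destination.name')
* Start the sender channel (step 5)
START CHANNEL('MDM.TO.BPM')
```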
This completes the list of steps to be done in WebSphere MQ Explorer. Now we will look at the steps to be done on the MDM and BPM installations.
Configuration steps on Administrative Console of WAS where MDM is installed.
1. Create a new Queue Connection Factory with following details.
2. Create a new Queue with following details
Configuration steps on Administrative Console of WAS where BPM is installed
1. Create a new foreign bus connection to the existing Service Integration Bus of BPM.
You may have seen the recent tech talks that the team here have been producing for our clients. In these tech talks an IBM expert will talk through a specific MDM topic in great detail sharing the deep expertise of the architects and developers that are living and breathing the technology. These tech talks are provided for free and just require a simple registration process to allow you to attend. All sessions are recorded and replays will be available shortly afterwards.
One area of keen interest to our clients has been concerning the Stewardship and Governance capabilities provided by MDM, specifically the IBM Stewardship Center, that was released in MDM 11.3. So it falls to me to host the next MDM tech talk on June 23rd. In this session I will be discussing the new capabilities offered by the IBM Stewardship Center, how we are changing the game for stewardship teams looking to evolve their organization to be more reactive to data quality events, engaging line of business users to provide input to data quality issues and adding advanced business rules and intelligence to automate events from across the entire data quality landscape.
A one-hour tech talk is nowhere near enough time to do such a broad and important area justice. However, we will spend some time up front explaining IBM's perspective on Information Governance and how IBM's InfoSphere portfolio provides the market-leading integrated suite of comprehensive governance capabilities that can flex to suit your specific industry requirements. We will dive into the IBM Stewardship Center and its comprehensive workflow engine, providing collaboration and orchestration across the enterprise, and touch on the MDM Application Toolkit, a suite of accelerators designed and built by some of our development ninjas to make creating custom governance workflows a quick and easy experience... and if we have time we may even have a live demo of the latest version of the Stewardship Center. During the session the live chat will be open, allowing you to ask questions, and I will have a team of experts ready to respond in real time.
If your organization is trying to address the growing focus on Information Governance, if you are trying to figure out how to make your Stewardship organization more efficient, or you just want to take a look at one of the coolest new features from the MDM team, then don't miss this tech talk.
jaylimburn 2700028UUJ Visits (3161)
The team here are pleased to announce that after many nights researching, writing and editing, a new IBM Redbook is due to be published entitled 'Designing and Operating a Data Reservoir'.
The book takes you through the steps an organization should go through when designing and building a data reservoir solution. In this book we base the scenario around a pharmaceutical company; however, the discussions, principles and patterns included in the book are relevant for any industry.
A data reservoir provides a platform to share trusted, governed data across an organization. It empowers users to engage in the sharing and reuse of data to ensure that an organization can fully leverage their most important asset: data. A data reservoir allows for the collection of vast sets of data that can be curated, shaped and monitored, allowing advanced analytics to be constructed that offer new insights to an organization about their data.
See this blog post for more background on the need for a data reservoir:
The new IBM Redbook has been authored by thought leaders in the data management space and will be available as a full IBM Redbook publication shortly. In the meantime the draft is available directly from the IBM Redbooks website:
We'd love to hear your feedback on the book and would be keen to hear your stories around data reservoir solutions.
Make Data Work
Developing behavior extensions for InfoSphere MDM v11
Special thanks to Stephanie Hazlewood for providing guidance as well as content for some of the sections of this article!
Many established organizations end up having unmanaged master data. It may be the result of mergers and acquisitions or due to the independent maintenance of information repositories siloed by line of business (LOB) information. In either situation, the result is the same – useful information that could be shared and consistently maintained is not. Unmanaged master data leads to data inconsistency and inaccuracy.
One of the most fundamental extension mechanisms of InfoSphere MDM allows for the modification of service behavior. These extensions are commonly referred to as behavior extensions and the incredible flexibility they provide allows for an organization to implement their own “secret sauce” to the over 700 business services provided out of the box with InfoSphere MDM. The purpose of this tutorial is to introduce you to behavior extensions and guide you through the implementation, testing, packaging and deployment of these extensions. You will be introduced to the Open Service Gateway initiative (OSGi)-based extension approach introduced in the InfoSphere MDM Workbench as of version 11.
With the release of InfoSphere MDM v11, we adopt the OSGi specification, which allows, amongst many other things, extensions to be deployed in a more flexible and modular way. This document will describe a real client behavior extension scenario and step you through all of the following required steps:
- Extension scenario outline.
- Creation of the extension project.
- Development of the extension code.
- Deployment of the extension onto the MDM server.
- Testing deployed code using remote debugging.
We will then conclude this document with the summary of what you have learned.
It is often necessary to customize an MDM implementation in order to meet your solution requirements. One of the extension capabilities InfoSphere MDM provides is the ability to add business rules or logic to a particular out-of-the-box service. These types of extensions are referred to as behavior extensions, as they ultimately change the behavior of a service. In this tutorial we will create a behavior extension to the searchPerson transaction.
The searchPerson transaction is used to retrieve information about a person when provided with a set of search criteria. You can filter out the result set by active, inactive or all records that get retrieved by these criteria. Important to note is that this particular search transaction uses exact match and wildcard characters to retrieve the result set. There are separate APIs available for probabilistic searching – this service is not one of them.
Sometimes, the searchPerson transaction response may contain duplicate parties. For example, if a party contains both legal and business names which are identical, and the searchPerson transaction uses last name as its criterion, the parent object will be returned twice in the response, as it will be matched by both of the names. While this behavior is acceptable in some circumstances, some cases might require more filtering of the result set before it is returned. To do so, we will create a behavior extension which will be responsible for processing the transaction output and removing any duplicate records from the result set. The InfoSphere MDM Workbench provides us with exactly the right tools to quickly create and deploy such an extension.
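The filtering idea itself is simple; as a language-neutral sketch (shown here in JavaScript with a made-up partyId field, whereas the actual extension is written in Java later in this tutorial), keep the first result seen for each party id and drop the rest:

```javascript
// Remove duplicate search results, keeping the first occurrence per party id.
function removeDuplicateParties(results) {
  var seen = {};
  return results.filter(function (result) {
    if (seen[result.partyId]) {
      return false; // already returned this party - drop the duplicate
    }
    seen[result.partyId] = true;
    return true;
  });
}
```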
Creating extension project
First, create the extension project structure using the wizards provided by the MDM Workbench. Go to File -> New -> Other… and search for the Development Project wizard:
If you cannot find the Development Project wizard within the list, chances are the Workbench has not been installed; please verify using IBM Installation Manager.
When creating your project, make sure to specify unique project and package names in order to avoid conflicts with existing ones:
Make sure to choose the correct server runtime for your projects, as well as a unique name for the CBA project:
Note: You are allowed to choose from the existing CBAs. A single CBA can contain multiple development project bundles.
Click Finish and wait for the wizard to generate the required assets.
At this point, what we have is a skeletal InfoSphere MDM Development project that contains all of the basic facilities to help us create the desired extension. The next step is to create the extension assets and there are two ways of doing so: either by using the behavior extension wizard, or by using the model editor.
Creating a behavior extension using the extension wizard
You can create an extension using a wizard in the MDM Workbench, much like the one used to create a development project:
1. Open Behavior Extension wizard by going to File -> New -> Other… -> Behavior Extension, located under Master Data Management -> Extension folders
Note: A development project can contain multiple extensions of various types underneath it. You might choose to use development projects to logically group extensions that have a similar purpose or type, or to facilitate parallel development activities.
3. Within the next window, choose a name and a description for your behavior extension, then choose a Java class name for it. This is the class that we will populate with custom logic in order to achieve the desired behavior. Alternatively, if you need to use an IBM Operational Decision Manager (ODM, previously known as ILOG) rule, specify the associated parameter. ILOG/ODM rule creation is not covered as part of this tutorial, as we will implement the extension as a Java extension.
4. Within the “Specify details of the trigger” pane, you need to specify the following parameters:
a. Trigger type:
i. ‘Action‘ will cause the behavior extension to trigger whenever the chosen transaction is run, either by itself or as part of another transaction. ‘Actions’ are executed at the component level.
ii. On the other hand, if you are looking to trigger the extension only on a specific standalone transaction event (otherwise known as a controller-level transaction), select the ‘Transaction’ trigger type.
iii. The ‘Action Category’ trigger type executes the behavior extension on various data actions (add, update, view or all) for extensions to be executed at the component level.
iv. The ‘Transaction Category’ trigger type will kick off the behavior extension when a transaction of the specified type is executed, namely inquiry, persistence or all.
b. When to trigger:
i. ‘Trigger before’ will cause the behavior extension to fire before the work of the transaction is carried out. Sometimes you will hear this referred to as a preExecute extension. It is typically used when some sort of preparation procedure has to be executed before the rest of the transaction is carried out. An example of such a scenario would be preparing data within the business object that is being persisted.
ii. ‘Trigger after’ will cause the behavior extension to run after the transaction work has been carried out. Sometimes you will hear this referred to as a postExecute extension. It is typically used in scenarios where the logic implemented in the behavior extension depends on the result of the transaction. Normally any sort of asynchronous notification would be placed in a post behavior extension; if it were sent before the transaction executed, there would be no way to roll it back in case of transaction failure.
c. ‘Priority’ parameter indicates the order in which this behavior extension will be triggered. The lower the priority number, the higher the priority. That is, a behavior extension with priority 1 would execute first followed by behavior extension with priority 2, 3 or 4 in that order.
In our scenario we are looking to filter the response of a specific transaction, namely searchPerson. Therefore we set the trigger type to be ‘Transaction’ with the value searchPerson. Since we are filtering the response of the transaction, we have to trigger our behavior extension after the transaction has gone through and the response has become available. Lastly, in our particular example priority does not play a special role, so we will leave it at the default of ‘1’.
5. After the above configuration is done, click Next and review the chosen parameters. Note that there is a checkbox at the top of the dialog, allowing you to generate the code based on the specified parameters immediately. For the purposes of this tutorial leave it checked and click Finish. The workbench will generate all of the required assets for you.
Creating a behavior extension using the model editor
If you have used the wizard approach above to create the behavior extension already, please feel free to skip ahead to the section titled “Review your generated extension code” that follows.
This section describes how to generate a behavior extension using the model editor. To do so, the following steps will guide you through this process:
1. Go to the development project you created earlier and open the module.mdmxmi file under the root folder of the project. Select the model tab within the opened view.
2. Right-click the model and select New -> Behavior Extension.
4. Now we will create a transaction event definition under behavior extension. Right-click the behavior extension, then select New - > Transaction Event.
5. Once the transaction event has been created, specify the appropriate properties:
a. Because this event is triggered on the searchPerson transaction, PersonSearchEvent is an appropriate name.
b. The ‘Pre’ checkbox stands for preExecute; when checked, the behavior extension gets executed before the rest of the transaction (recall that this “trigger before” behavior is sometimes referred to as a “preExecute” extension). Since we want to run after the transaction, leave it unchecked. Similar to the wizard configuration, leave priority as ‘1’, since priority of execution does not affect this behavior extension.
c. Finally, select searchPerson as the transaction of choice by clicking Edit… -> Party -> CoreParty -> searchPerson.
After all of the above configurations are done and reviewed, go ahead and click Generate Code under the Model Actions section of the view, telling the workbench to generate the configured assets.
Review your generated extension code
Whichever of the above methods was used, let us review the generated assets:
o The EXTENSIONSET table record defines the behavior extension and its associated implementation class.
o CDCONDITIONVALTP defines a new condition: the transaction name being equal to searchPerson.
o EXTSETCONDVAL connects the above CDCONDITIONVALTP record to the behavior extension record from EXTENSIONSET. Additionally, another EXTSETCONDVAL record connects the CDCONDITIONVALTP record with id ‘9’, which stands for executing the behavior extension after the transaction.
Let us now move on to developing the extension code required to filter out duplicate person records from the result set returned by the searchPerson transaction.
Develop the extension code
The behavior extension skeleton and supporting configuration assets have now been generated. You add your custom logic, or behavior change, in the execute method of the generated extension class.
public void execute(ExtensionParameters extensionParameters) {
    // (Class and accessor names below follow the standard MDM extension
    // framework; use the names generated for your project.)
    Object working = extensionParameters.getWorkingObject();
    // Only work with vectors in the response
    if (!(working instanceof DWLResponse)
            || !(((DWLResponse) working).getData() instanceof Vector)) {
        return;
    }
    // Get the response object hierarchy
    Vector searchResults = (Vector) ((DWLResponse) working).getData();
    // Iterate through the party search result
    // objects to find duplicates
    Iterator listIterator = searchResults.iterator();
    // We will keep the party ids of objects we've already
    // processed to identify the duplicates
    Vector partyIdList = new Vector();
    while (listIterator.hasNext()) {
        Object o = listIterator.next();
        if (o instanceof TCRMPersonSearchResultBObj) {
            String partyId = ((TCRMPersonSearchResultBObj) o).getPartyId();
            // If the party id has not been seen yet, this person
            // object is not a duplicate, otherwise - remove it from
            // the response
            if (partyIdList.contains(partyId)) {
                listIterator.remove();
            } else {
                partyIdList.add(partyId);
            }
        }
    }
}
Note: The above implementation is not pagination friendly and pagination will not be covered as a part of this tutorial.
Once you have compiled the code above, you will notice that some of the classes are not found and have to be imported. You cannot simply import the TCRM classes as in a plain Java project; because the extension is an OSGi bundle, the packages that contain these classes must also be listed in the Import-Package section of the bundle's MANIFEST.MF.
After recompiling the projects again, you may notice that some types still cannot be resolved. This error occurs because the composite bundle that contains them is not yet referenced by the project; once that reference is in place, the remaining compilation errors clear.
Now that all compilation problems have been resolved, we are ready to deploy our extension onto the server.
Deploying your new behavior extension to MDM
Once the implementation of the behavior extension has been developed, we are ready to deploy it onto the server. There are two steps involved in the deployment:
- Deploying the code to the server.
- Executing the generated SQL scripts to insert the required metadata.
Deploying code to the server
Our customized behavior extension can be deployed to the server as a Composite Bundle Archive (CBA) as follows:
1. Make sure that the customized code has been built and then export the CBA containing the behavior extension by right clicking the CBA project and selecting ‘Export… -> OSGi Composite Bundle (CBA)’.
2. In the opened view, select the CBA to export and specify the destination location.
3. Click ‘Finish’. The CBA containing the behavior extension has now been exported to selected location.
4. At this point, we will assume that the MDM instance is up and running. Let’s open the WebSphere Administrative Console. We are looking to import our new CBA into the internal bundle repository. To do so go to Environment -> OSGi bundle repositories -> Internal bundle repository. In the opened view, click New…, choose Local file system and specify the location of the CBA we’ve exported above. Save your progress.
5. Once the CBA has been imported, attach this new bundle to the MDM application. Go to Applications -> Application Types -> Business-level applications. Choose the MDM application from the opened view. In the next view, open the MDM .eba file.
6. We are now looking at the properties of the MDM Enterprise Bundle Archive (EBA). In order to attach our CBA, go to Additional Properties section and select Extensions for this composition unit.
7. If this is the first extension that you’ve deployed on your instance, the list of attached extensions will be empty. Let’s now click Add…, and check the CBA we’ve imported above, then click Add. Wait for the addition to complete. Save your changes.
8. You may think that we are done here, but not quite. We've only updated the definition of the EBA deployment by adding our extension. The MDM OSGi application itself has not been updated, and even if you restart the server, your new behavior extension will not be picked up. So you must update the MDM application to the latest deployment by returning to the EBA properties view.
Before we attached our extension, the button shown above was grayed out, and the comment stated that the application is up to date. But since we've updated our application with a new extension bundle, we need to update it to the latest deployment. Go ahead and click the Update to latest deployment … button.
9. In the next view, you can see that the newly attached extension CBA is included in the deployment.
At this point, scroll down and click Ok to proceed. It may take several minutes depending on your system hardware.
10. At this point, WebSphere will take you through three views, offering multiple information summaries of the deployments and several customization options. There is no need to customize anything; go ahead and click Next three times, followed by Finish. The application will then update. It may take some time; please allow 5 - 10 minutes to complete depending on the underlying hardware. Once it is complete, save your changes. At this point, the MDM application has been updated to the latest deployment, which includes our extension.
Now we need to deploy our custom metadata to the database. This metadata will govern the behavior of our extension in ways discussed above.
Deploy metadata onto the MDM database
As mentioned earlier in this tutorial, the Workbench generates database scripts that insert the required configuration into the metadata tables of the MDM repository. This metadata is generated from the parameters we provided for our behavior extension in the Creating extension project section. To deploy it to the database, run the scripts under the resources -> sql folder that are appropriate for your database type. If you later need to remove the extension from the server, run the rollback scripts provided in the same folder.
Note: If any portion of the script fails, investigate the error, because it may render the extension unusable. Possible causes include residual data from a previous extension (rollback was not run when that extension was removed), an incorrect database schema, and so on.
Once the scripts have run successfully, your behavior extension has been deployed. Restart your WebSphere server so that the new metadata is picked up the next time the application runs.
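As a rough sketch of this step, the loop below prints the DB2 command that would apply each generated script. The folder layout shown is illustrative only; use the scripts the Workbench actually generated for your database type.

```shell
# Dry run: print the DB2 command for each generated insert script.
# "resources/sql/db2" is an assumed path; adjust for your project.
for f in resources/sql/db2/*.sql; do
  echo db2 -tvf "$f"   # remove "echo" to execute, within a connected DB2 session
done
```

Removing the `echo` prefix executes each script for real; you would first connect to your MDM database with the DB2 command line processor.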
Testing deployed code using remote debugging
Now that every piece of the behavior extension has been deployed, we are ready to test it. To do so, run a searchPerson transaction. The database must contain at least one person so that the search returns a result and triggers the new extension; this test will show us that the extension is successfully deployed. Once the transaction returns successfully, open the SystemOut.log of the WebSphere server, located under the log folder of the WebSphere profile where the MDM application is deployed. Because of the following line in our custom code:
You should see this message in the logs:
[6/17/14 13:24:59:816 EDT] 000001b3 SystemOut O Part
Note: The log message is there for testing purposes only and, depending on how heavily the behavior extension is used, can significantly impede performance. For that reason, make sure to remove such debugging messages, or demote them to a fine logging level, before going into production. For example:
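As an illustrative sketch (the class name and message below are placeholders, not the tutorial's generated code), java.util.logging can demote the test message to FINE, where the default INFO level filters it out:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: replace a System.out test message with FINE-level logging
// so that it is suppressed unless fine-grained tracing is enabled.
public class ExtensionLoggingSketch {
    private static final Logger logger =
            Logger.getLogger(ExtensionLoggingSketch.class.getName());

    public static void main(String[] args) {
        // Instead of: System.out.println("Behavior extension executed");
        logger.fine("Behavior extension executed");

        // With the default INFO level, FINE records are filtered out,
        // so isLoggable(Level.FINE) reports false here:
        System.out.println(logger.isLoggable(Level.FINE));
    }
}
```

Under WebSphere, such FINE records appear in the trace output only when a matching fine trace level is enabled for the logger.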
Configuring WebSphere Application Server debug mode
To observe the behavior of our extension more closely, we will put the WebSphere server into debug mode and connect the MDM Workbench to it so that we can step through our code. To put your server in debug mode:
1. Go to the WebSphere Application Server administrative console, and navigate to Servers -> Server Types -> WebSphere application servers -> <Name of your instance>.
2. Once in the server configuration view, take a look at Server Infrastructure section and navigate to Java and Process Management -> Process definition.
3. In the Additional Properties section, select Java Virtual Machine.
4. In the Java Virtual Machine view, scroll down to the Debug Mode checkbox, check it, and provide the debug settings in the Debug arguments textbox:
Note that ‘7777’ is the debug port to which the MDM Workbench will connect. Make sure this port does not conflict with any other assigned ports on the server, and set it accordingly.
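A typical JDWP debug argument string for this textbox, assuming port 7777, is:

```
-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777
```

Here `suspend=n` lets the server start without waiting for a debugger to attach.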
5. Save the configuration and restart your server; it is now running in debug mode. Note: If you later observe unexpected performance degradation and no longer need debug mode, take the server out of debug mode using the same steps.
Configuring MDM Workbench for remote debugging
Once the server is running in debug mode, we can go back to the MDM Workbench and configure it for debugging:
1. In MDM Workbench, go to Run -> Debug Configurations.
2. Within the Debug Configurations window, double click Remote Java Application. This will create a new Remote Java Application profile.
3. When configuring the Remote Java Application, let’s name our configuration ‘MDM Local Instance Debug’. The Project setting does not matter; you can leave it empty or accept the default value. Connection Type should remain ‘Standard (Socket Attach)’. Lastly, Connection Properties should reflect the host of the MDM instance and the debug port we chose above.
We will not cover the other tabs, because the configuration done so far is sufficient.
4. Once configuration is complete, click Apply followed by Debug to attach to the MDM instance. The attach may take some time depending on the environment. Once it completes, switch to the Debug perspective. If the attach was successful, the debug view shows the connected MDM instance:
You can see above that the instance is available along with all of the threads.
5. Finally set a break point at the beginning of the behavior extension execute method and observe this breakpoint getting engaged once we run a searchPerson transaction:
6. If you have multiple TCRM
Finally, note that we can debug both local and remote instances as described above, using Eclipse’s Remote Java Application debug capabilities.
In this tutorial we’ve gone through the steps of creating, configuring, deploying and testing a basic yet realistic behavior extension scenario for InfoSphere MDM.
We’ve covered two ways in which an extension template can be created: while the wizard option is straightforward and is preferable for a novice or a simple extension scenario, the model editor allows for more flexibility.
We’ve taken a look at the various configurations that apply to a behavior extension and outlined their effects on its execution. Additionally, we’ve covered the assets that get generated as a result of the configuration.
For the development step, we’ve created and analyzed the implementation of our behavior extension.
And finally, we’ve deployed, tested and debugged our behavior extension to make sure it performs as expected.
All of the above steps constitute a complete development process of an MDM Server behavior extension.