XML mapping in WebSphere Integration Developer V7, Part 1
Using the Mapping Editor to develop maps
An XML map exists to transform a source XML document into a target XML document. From the mappings you create in the Mapping Editor, an XSL file is generated to perform the actual XML transformations at runtime. In WebSphere Integration Developer, there are two main usages of XML maps:
- When building a mediation flow, there are instances where the output message structure from one primitive does not match the input message structure for the next primitive. The messages pass through a mediation flow as Service Message Objects (SMO), which have a consistent XML structure. In this case, you can use a mapping to convert the message from one structure to another.
- When building a process flow, there are also cases where a variable in the process flow needs to be converted from one type to another. You can also use an XML map within a process flow to convert and manipulate variables.
When creating a mapping, the goal is to build a mapping that will produce a complete and valid target XML document. The produced XML document needs to be complete in that it contains all the expected data, and it needs to be valid in that it matches its corresponding schema.
Part 2 of this series, Working with complex XML structures in the Mapping Editor, explains more advanced XML mapping topics.
The Mapping Editor
The Mapping Editor makes populating a target XML document with data from a source XML document an intuitive and visual task. In general, the source XML structure is shown on the left hand side and the target XML structure on the right hand side, as shown in Figure 1. By drawing connections between the source and target fields, you create mappings between the source and target elements and attributes. A properties view below the editing area allows you to refine and customize the mappings that you create within the Mapping Editor. The Mapping Editor also contains a test view that allows you to associate sample input XML and see the output XML of a map immediately.
Figure 1. The XML Mapping Editor
Where to create maps
Maps are generally created within a Mediation Module project for use within a particular Mediation Flow or within a Module project for use within a particular Business Process Flow. They can also be created in a Library project, which will make them available for re-use within any map file residing in a module that is dependent on the Library.
Within a Mediation Flow, map files are created using the XSL Transformation primitive. Each XSL Transformation primitive is associated with a map file that describes the transformations to be made to the input XML. The map file is created in accordance with information specified when you implement the primitive.
Within a Business Process Flow, map files are created using the Data Map basic action.
While working on a mapping, you may find that you are constantly mapping the same two element types. In that case, you may find it useful to create a Submap that can then be re-used whenever mapping those two types. A Submap is nothing more than a regular map file that you can call from other map files using the Submap transform. You can create a Submap from within the Mapping editor or you can also create it by right-clicking in the Business Integration view and selecting New > XML Map.
Choosing a mapping root
The mapping root is the top level input and output pair within the mapping. Generally, the input is based on one XML Schema type and the output is based on some other XML Schema type.
When creating maps within a Mediation Flow, a mapping root is required. The mapping root determines which part of the primitive input message is used as the mapping input and which part of the primitive output message is used as the mapping output. To choose a mapping root, first determine what part of the target document needs to be changed and then determine where the data that will populate the target document is going to come from. Once you know what areas of the document you are going to work with, you can choose an appropriate root for your map. An XML map that is used within a mediation flow can only have a single input or source and a single output or target. In the case of mediation flows, the messages are SMO messages that are broken up as follows:
You can choose any one section of the SMO as a root or you can choose the root of the SMO if you need more than one of the sections. In most cases, your changes will occur in only the body of the SMO and the body is an appropriate root for your mapping. However, there are times when you may require data from the context or headers to properly populate your target body. There are also times when you may want to update the target context or target headers. In these cases, you will need to use the root of the SMO "/" as your root for the mapping. Using "/" as the root allows access to all areas of the source and target SMO. Once a SMO mapping has been created with the chosen root, you cannot change the root.
When working in the Mapping Editor, you will associate elements and attributes of the source schema with those of the target. Once you create an association between the source and target, the association is called a transformation or a mapping. Each mapping can have a single refinement to indicate what type of mapping it is. This section describes the refinements.
Move is the most basic refinement. It takes a simple or complex field on the source side and moves it unchanged from the source to the target. The only change made to the copied field will be to namespace prefixes to ensure they are valid in the target document. When you want to copy something from the source to the target without altering it, Move is the recommended refinement.
The Convert refinement is used to do simple conversions between simple data types. An example usage of the Convert refinement is to convert a Boolean value (true or false) to an int value (1 or 0). Another example usage is to extract a specific type of value from a string. For example, consider a person's age: one business object might express age as a string and another as an int. In this case, you can use a Convert to ensure that the string age, which is expected, but not required, to be an int, is actually an int in the target. In cases where the string cannot be converted to an integer, a user-specified default int is placed in the target field.
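The Convert behavior for the age example can be sketched in plain code. This is only an illustration of the fallback behavior described above, not the editor's generated XSL; the function name and default value are hypothetical.

```python
def convert_age(age_string, default=0):
    # Convert a string field to an int; when the string cannot be
    # converted, the user-specified default int is used instead
    # (illustrative sketch, not the editor's actual implementation).
    try:
        return int(age_string)
    except ValueError:
        return default

print(convert_age("42"))           # 42
print(convert_age("unknown", -1))  # -1
```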
The Assign refinement is used when you want to assign a constant value to a target element or attribute. Assign is only available for assigning a value to simple type fields, such as string and int. To create an Assign mapping:
- Select the target element or attribute to assign a value.
- Right-click the target element or attribute and select Create Transform.
- Use the Properties view to assign the desired value to the target element or attribute.
A Local map is a tool for organizing a mapping file. It allows you to nest mappings for complex types so that the top level mapping does not become cluttered with too much detail. It is important to realize that creating a Local map between a source and a target does not perform any data transformations on its own. Nothing will move from the source to the target until you go inside the Local map and create mappings using refinements such as Move. Local map is used as a container mapping to localize nested mappings (such as Move), which perform the actual transformations.
A Local map contains a single input field and a single output field. In cases where multiple input fields are required, use a Merge mapping instead of a Local map; it behaves similarly but accepts multiple inputs.
Once you have created a Local map between a source and target, you can double-click the Local map refinement to navigate into the map. Once inside the Local map, you can create the child mappings. You can navigate out of the Local map, back to the parent mapping, by using the "Up a level" icon in the top right corner of the mapping area. While inside a Local map, a gray background indicates that you are working within a nested mapping.
A Local map is not reusable. In cases where you are mapping source and target types that you know will be mapped the same way in other maps, consider using a Submap that you can reuse and share among many mapping files. See the Submap section of this article for more information about creating Submaps.
A Merge refinement is similar to a Local map in the sense that it is a container for nesting other mappings. Unlike a Local map, Merge supports multiple source inputs. This allows you to take data from two different source fields and merge them into a single target field. Merge is also used when working with arrays. For more information, see Working with arrays in Part 2.
A Submap refinement is a mapping between two specific types that is stored in a separate file. A Submap is a root mapping in a regular map file, which you can reference from any other map file making it ideal for reuse. Since Submaps are designed for re-use, we recommend that you store Submaps in libraries where they can be easily shared and reused amongst dependent modules. In cases where you have two different complex types that are frequently associated with each other for mapping purposes, a Submap is a good way to create a re-useable mapping between the types.
If you have already created a local map and afterwards you decide that you want the local map to be reusable, you can refactor the contents of the local map into a submap. To refactor a local map to a submap, right-click the local map and select Refactor to Submap.
In some cases, you may find that you cannot create a Submap for a desired type because the type is not defined in an XSD file. This can be the case if the type is defined in a WSDL file. The Submap creation wizard will not allow you to create a Submap with a non XSD defined type as the input or output. In this case, you can refactor the type out of the WSDL file by doing the following:
- In the Business Integration view, locate the desired type in the Data Types category of the module or referenced library project.
- Right-click the type and select Refactor > Extract In-lined Business Objects.
After extracting the desired type, you can create a Submap using the extracted type as an input or output. The Submap refinement is not available when working with local elements or anonymous types. In the case of local elements or anonymous types, reusable mappings are not an option at this time.
Tip: In cases where there are many maps and submaps within a module or library, you can use the Data Map Catalog to view a detailed summary of available maps. To view the Data Map Catalog, select a project in the Business Integration view, right-click the project and select Open Data Map Catalog.
There are a few common built-in functions that you can use within the Mapping Editor, such as Concat, Normalize, and Substring. In addition to these, there are over 60 XPath and EXSLT Java™ functions that you can easily use to transform data. The following sections explain some of the built-in functions. The XPath and EXSLT Java Functions section explains how to use the other functions.
The Concat function will concatenate two or more strings from the source into a single string value on the target side. The built-in Concat function supports the specification of a prefix, postfix, and delimiters through its property page.
To achieve the required output in the example shown in Table 1, you can use a Concat transform with the following properties:
- Specify the input order so that cityName is the first input and countryName is second input.
- Use a delimiter of "," on the cityName input.
Table 1. Example use case data
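The Concat behavior described above can be sketched in plain code. The input values are assumed for illustration and the helper is hypothetical; the built-in transform itself is configured through its property page rather than written by hand.

```python
def concat(inputs, prefix="", postfix="", delimiters=None):
    # Join the input strings in order, inserting each input's
    # delimiter (if one is specified) after that input.
    delimiters = delimiters or {}
    parts = []
    for i, value in enumerate(inputs):
        parts.append(value)
        if i in delimiters and i < len(inputs) - 1:
            parts.append(delimiters[i])
    return prefix + "".join(parts) + postfix

# cityName as the first input, countryName second,
# with a delimiter of "," on the cityName input.
print(concat(["Toronto", "Canada"], delimiters={0: ","}))  # Toronto,Canada
```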
The Normalize function will move a string from the source to the target and make the following modifications to the string during the move:
- Remove leading and trailing white-space.
- Replace sequences of white-space within the string with a single space.
To achieve the required output in the example shown in Table 2, you can use a Normalize transform.
Table 2. Normalize example
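The Normalize behavior matches the XPath normalize-space() semantics and can be sketched in plain code (the input value is assumed for illustration):

```python
def normalize(s):
    # Remove leading and trailing white-space and collapse each
    # internal run of white-space to a single space.
    return " ".join(s.split())

print(normalize("  New   York \n City "))  # New York City
```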
The Substring function will extract text from a source string. The Substring function uses a delimiter and an index to determine what text to extract from the incoming source string. Based on the specified delimiter, the source string is divided into sections. The index identifies which section of the divided string you want to use. By default the index is 0, indicating that the first section will be used. For example, suppose that you want to extract the city name from the string "Toronto, Canada".
You can use the Substring function by specifying "," (comma) as the delimiter. By default, the 0 index returns the first section of the separated string, which is Toronto in this example.
To extract the country name from the same string, specify ", " (space after the comma) as the delimiter and set the index to 1, which causes the second section of the divided string to be used, which is Canada in this example.
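The delimiter-and-index behavior of Substring can be sketched in plain code (a minimal illustration, not the editor's implementation):

```python
def substring(source, delimiter, index=0):
    # Divide the source string into sections on the delimiter and
    # return the section at the given index (0 = first section).
    return source.split(delimiter)[index]

print(substring("Toronto, Canada", ","))      # Toronto
print(substring("Toronto, Canada", ", ", 1))  # Canada
```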
In cases where you are not guaranteed a space after the comma in your source string, you can use a Custom refinement with an XPath expression.
XPath and EXSLT Java functions
You can use XPath and EXSLT Java functions to easily manipulate, evaluate, and format data. There are a number of such functions supported in the Mapping Editor, which are grouped under String, Math, Boolean, and Date and Time categories.
These functions operate on or return strings. The complete list of string functions supported in the Mapping Editor is: contains, format-number, local-name, name, namespace-uri, starts-with, string, string-length, substring, substring-after, substring-before, system-property, translate, align, array concat, and padding.
As an example to illustrate one of these functions, consider the array concat function with respect to Table 3. The array concat function takes an array of strings as an input and returns a concatenation of the strings into a single string output.
Table 3. Array concat example
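In plain code, the array concat behavior amounts to joining an array of strings into one string (the input values are assumed for illustration):

```python
def array_concat(strings):
    # Concatenate an array of strings into a single string output.
    return "".join(strings)

print(array_concat(["Toronto", ", ", "Canada"]))  # Toronto, Canada
```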
Once an array concat function is created within the Mapping Editor, the General tab within the Properties View contains a table to specify input parameters to the function. The values of these parameters are XPath expressions and can make use of transform variables representing the input connections. By default, the input connection variables are used as the inputs to a function; however, these may be edited if desired.
The General tab and table of input parameters are common across all other XPath and EXSLT functions. Some functions contain optional input parameters, which you can add or remove as needed.
These functions operate on or return numeric values. The complete list of math functions supported in the Mapping Editor is: ceiling, count, floor, number, round, sum, abs, acos, asin, atan, atan2, constant, cos, exp, log, max, min, power, random, sin, sqrt, and tan.
These functions operate on or return Boolean values. The complete list of Boolean functions supported in the Mapping Editor is: boolean, false, lang, not, and true.
Date and time functions
These functions operate on or return date and time values. The complete list of date and time functions supported in the Mapping Editor is: date, dateTime, dayAbbreviation, dayInMonth, dayInWeek, dayInYear, dayName, dayOfWeekInMonth, format-date, hourInDay, leapYear, minuteInHour, monthAbbreviation, monthInYear, monthName, secondInMinute, time, weekInYear, and year.
You can obtain the current date and time by using the dateTime function with no inputs to the transform. You can also use the other date and time functions with no inputs to obtain the current values. For example, using the dayAbbreviation function with no inputs on a Wednesday will produce a target value of "Wed".
The Format date function converts a given dateTime according to a specified pattern, as shown in Table 4.
Table 4. Format date function example
The General tab within the Properties View contains properties needed to configure the Format date function. The Date and time input parameter is an XPath expression, which by default, uses the variable representing the input connection of the function. Alternatively, you can edit this to input a user-defined XPath expression.
A date picker is also available within the General tab of the Format date function properties, which can be applied as the input date and time. Choosing a date generates an XPath literal expression as input to the function representing the chosen date and time.
The Format date function also contains a pattern to specify how the given date and time will be formatted. There are a number of predefined patterns that can be chosen from the combo box or a user defined pattern can be entered. Also, if the pattern is specified within a source element defined in the input data structure, the XPath variable used to represent the input connection can also be used as the pattern.
Finally, an example field is provided for the Format date function to show what to expect as the output with the information provided.
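The effect of a Format date pattern can be sketched with an equivalent in plain code. The date value and patterns are assumed for illustration, and Python strftime symbols are used here in place of the editor's Java-style pattern symbols (such as yyyy-MM-dd):

```python
from datetime import datetime

dt = datetime(2010, 3, 15, 9, 30)

# Equivalent of a "yyyy-MM-dd" style pattern
print(dt.strftime("%Y-%m-%d"))      # 2010-03-15

# Equivalent of an "EEE, dd MMM yyyy" style pattern
print(dt.strftime("%a, %d %b %Y"))  # Mon, 15 Mar 2010
```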
You can use a Lookup refinement to find a value based on a key. The idea is that the source input of the lookup is used as a key. The lookup will use the key to retrieve a value for the target. A lookup uses a lookup engine to associate the key and value. The following lookup engines are provided in the Mapping Editor:
- Comma Separated Value file lookup: The CSV (comma separated value) lookup engine will use a comma separated value file to perform a lookup. Each line of the CSV file is considered a new entry. The first line of the CSV file can be the column headings for the file. When doing a lookup in a CSV file, the properties allow you to specify which column is used as the key column and which is used as the value column. You can identify columns using indexes starting at 0 or using the column heading name. To understand the columns in a CSV file, consider the following data:
State Name, Abbreviation, Old GPO, FIPS
Alabama, AL, Ala., 01
Alaska, AK, Alaska, 02
Arizona, AZ, Ariz, 04
Arkansas, AR, Ark., 05
If using the above CSV file to retrieve a state name based on a state abbreviation, you can set up the lookup to use column 1 as the key column and column 0 as the value column. Or, you can also set up the lookup to use the “Abbreviation” column as the key column and “State Name” as the value column. In both cases, the key AK returns the value of Alaska.
To use the CSV lookup, the CSV file must have a *.csv file extension. The CSV file must be located in the same project as the map file, or in a dependent project.
- Properties file lookup: A properties file lookup uses a file in the format of "key=value" to perform a lookup.
- Relationship lookup: A relationship uses an existing static relationship to perform a lookup. Static relationships are defined in WebSphere Integration Developer to describe static associations. For example, a relationship might be defined to associate state name with state abbreviations. Once the relationship has been defined, a lookup within a map can use the relationship to retrieve a value. The properties allow you to define which role in the relationship is used as the key and which role is used as the value.
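Using the state data above, the CSV lookup behavior can be sketched in plain code. The helper is hypothetical and simply illustrates the key-column/value-column resolution, including column selection by 0-based index or by heading name:

```python
import csv
import io

CSV_DATA = """State Name, Abbreviation, Old GPO, FIPS
Alabama, AL, Ala., 01
Alaska, AK, Alaska, 02
Arizona, AZ, Ariz, 04
Arkansas, AR, Ark., 05
"""

def csv_lookup(data, key, key_col, value_col):
    # Each line is one entry; the first line holds column headings.
    rows = [[c.strip() for c in r] for r in csv.reader(io.StringIO(data))]
    headings, entries = rows[0], rows[1:]
    # Columns can be identified by 0-based index or by heading name.
    if isinstance(key_col, str):
        key_col = headings.index(key_col)
    if isinstance(value_col, str):
        value_col = headings.index(value_col)
    for row in entries:
        if row[key_col] == key:
            return row[value_col]
    return None

print(csv_lookup(CSV_DATA, "AK", 1, 0))                          # Alaska
print(csv_lookup(CSV_DATA, "AK", "Abbreviation", "State Name"))  # Alaska
```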
The following example illustrates the use of a Lookup mapping. Let's consider the CSV file shown in Figure 2.
Figure 2. Example CSV file
In this example, the input file will contain a state abbreviation and the output will require the full state name as shown in Figure 3.
Figure 3. Lookup mappings
The properties for the above lookup are shown in Figure 4.
Figure 4. Lookup properties
The result of the above map is shown in Figure 5.
Figure 5. Mapping results
It is also possible to contribute a custom lookup engine. The Creating a custom lookup topic in the product help explains the process of defining a custom lookup engine.
There are often times when transformations within a map depend on conditions. Conditions determine whether a particular mapping will or will not occur at runtime, thereby providing a way to control the logical flow of mappings. The Mapping Editor supports such conditional logic on mappings through the If, Else if, and Else transformations. These transformations group a set of mappings based on a given XPath expression condition. Alternatively, the XML Mapping Editor also supports conditions on a single mapping by directly supplying an XPath expression or using a static Java method call. Finally, there are implicit conditions automatically applied to transforms connected to optional source data. Any transformation operating over optional source data will only execute if the source data exists at runtime; otherwise, the transformation will not occur.
If, Else if, and Else refinements
If, Else if, and Else refinements are conditional transforms used to control the flow of mappings within a map. Each of these refinements is a container transformation, meaning that it needs to contain nested mappings to provide any useful implementation. Since the If, Else if, and Else refinements are container transforms, they provide a means to group a set of desired mappings based on a known condition.
Conditions are supplied on both the If and Else if refinements by assigning an XPath expression to them from the Condition tab within the Properties view. The Else refinement can only exist with an associated If or Else if refinement and does not have any conditions on it. See the XPath conditions section to understand how such expressions are evaluated.
Creating an If refinement within the Mapping Editor is similar to creating other transforms. Simply create a connection between a source element and target element and update the transform type to be an If refinement. Once an If refinement is created, you can add an Else if or Else refinement by selecting the If refinement and invoking the "Add Else if" or "Add Else" actions from the hover action bar or context menu. Once an Else if or Else refinement is created, you must add a target connection to provide the context of what that refinement will populate in the output. Similarly, you can optionally add a source connection to provide the input context of the refinement. Unlike other refinements, If, Else if, and Else refinements can each have multiple input and output connections, which provide the basis of what nested mappings will operate on.
If, Else if, and Else refinements are grouped together to illustrate their association with each other. When a map contains such groupings of If, Else if, or Else refinements, only one or none of these refinements in the group will get executed at runtime based on the supplied conditions. Only the first If or Else if refinement with a condition, which evaluates to true, will get executed. Otherwise, an Else refinement will get executed if all other associated conditions fail within the grouping. Once one of these refinements is executed, all of the nested mappings associated with the conditional refinement will be performed during runtime.
Caution must be taken when creating target connections on a grouping of If, Else if, and Else refinements due to restrictions that are imposed on these refinements:
- All target connections within the grouping of If, Else if, and Else refinements must target sibling elements on the target data structure. Failure to do so will result in an error message as follows: "All of the transformations in the Else if transformation group do not target elements that are siblings of each other". To correct this problem, make sure all target connections within the conditional group of refinements are all siblings of each other.
- Unrelated transform outputs cannot exist within the output scope of the conditional transformation group. Unrelated transforms are those that are not contained within the If, Else if, or Else refinements. The output scope of the conditional transformation group is defined as being between the top-most connected output element and the bottom-most connected output element. Failure to abide by this restriction will result in the following error message: "The Move transformation exists outside the output scope of the conditional transformations, which is not valid". To correct this problem, place the unrelated transformation within one of the conditional transformations (or duplicate it in all conditional transformations if it applies in all cases). Figure 6 illustrates an example of a conditional grouping scope.
Figure 6. Conditional mapping
Notice the output scope of the conditional transformation group is between element "field1" and element "field5". No other unrelated transforms can be involved within this highlighted region unless they are contained within one of the If, Else if, or Else refinements.
Single transform conditions
Sometimes only a simple condition is needed on a single transform to perform the desired mapping. In such cases, you can apply an XPath expression on a single transform without the need of an If refinement. Much like adding XPath expressions on If and Else if refinements, you can add a condition on a single transform using the Condition tab within the Properties view.
To better understand single transform conditions and the implicit conditions built into the Mapping Editor, consider a scenario where you want to use the value of one input field if it is present in the source, and otherwise use the value of a different input field:
- An optional string input for a specific recommendation.
- A required string input for a general recommendation.
- A single recommendation as specific as possible. When the specific recommendation is present, that is the value. Otherwise, the general recommendation is used.
In this case, you might consider doing this:
- Create a Move refinement between the input specific recommendation and the output recommendation. When the specific recommendation is present, the Move occurs and the value is set correctly. Because a condition is automatically generated for optional inputs, no additional conditions need to be added to this transform.
- Create a Move refinement between the input general recommendation and the output recommendation. Condition the Move so that it executes only when the specific recommendation is not present in the input.
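The intent of these two conditioned Move transforms can be sketched in plain code. The field names are taken from the scenario, the sample values are hypothetical, and the None check stands in for the XPath existence test on the optional input:

```python
def choose_recommendation(specific, general):
    # Use the optional specific recommendation when it is present;
    # otherwise fall back to the required general recommendation.
    return specific if specific is not None else general

print(choose_recommendation("Pack for rain", "Check the forecast"))  # Pack for rain
print(choose_recommendation(None, "Check the forecast"))             # Check the forecast
```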
Another way to use conditions is to apply a filter to a list of repeatable elements. For more information, see Filtering arrays in Part 2.
You can write conditions for an If, Else if, or other refinements using XPath expressions. When the XPath expression is evaluated, the value of the condition is determined based on the return value of the expression, as shown in Table 5.
Table 5. XPath Condition evaluation criteria
| Expression Return Type | True Result | False Result |
| --- | --- | --- |
| node-set | non-empty node-set | empty node-set |
| string | a string that is one or more characters in length | a zero-length string |
| number | any number other than zero (negative or positive) | zero |
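The evaluation rules in Table 5 can be sketched in plain code, using a Python list to stand in for a node-set (an illustration of the rules only, not the XSLT engine itself):

```python
def xpath_truth(value):
    # node-set -> true when non-empty; string -> true when its
    # length is one or more; number -> true when non-zero.
    if isinstance(value, list):   # stands in for a node-set
        return len(value) > 0
    if isinstance(value, str):
        return len(value) > 0
    return value != 0

print(xpath_truth([]), xpath_truth(""), xpath_truth(0))       # False False False
print(xpath_truth(["n"]), xpath_truth("x"), xpath_truth(-3))  # True True True
```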
While writing conditions using XPath, you can use relative or absolute paths. Inputs to the transform are available as variables and are inserted into the expressions. You can use content assist in the condition input area to view a list of available variables. You can also use content assist in the condition input area to make use of XPath functions. To invoke content assist in the condition input area, use CTRL+Space.
When using relative paths, you currently cannot reference above the context node using syntax such as ../<element name>. In cases where you need to access information that is not accessible within one of the input values, use an absolute path. For an example of using variables, the content assist in Figure 7 shows that cityName is available as a variable:
Figure 7. XPath content assistant
After you click the cityName variable in the list above, "$cityName" is inserted into the entry field and then you can continue to create a condition such as $cityName = "Toronto".
To understand the basics of using conditions, consider this example. A travel company is offering customers advice about their travel destinations in the form of one of the following tips:
- Weather will be hot -- bring your sunscreen.
- Weather will be cold -- bring your parka.
The tip depends on the temperature of the chosen destination. For destinations where the temperature is above 0°C, the hot weather tip is given. Otherwise, the cold weather tip is given. You can condition the mappings like this:
- Create an If refinement from the source celsiusTemperature element, which targets the advisory element on the output.
- Add the following XPath expression as the condition on the If refinement: $celsiusTemperature > 0.
- Add an Else refinement to the If refinement and target the advisory element on the output.
- Add a nested Assign transform within the If refinement with the value "Weather will be hot -- bring your sunscreen."
- Add a nested Assign transform within the Else refinement with the value "Weather will be cold -- bring your parka."
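The If/Else grouping above maps to straightforward conditional logic, sketched here in plain code (an illustration of the runtime behavior, not the generated XSL):

```python
def advisory(celsius_temperature):
    # If condition: $celsiusTemperature > 0 -> hot weather tip;
    # otherwise the Else refinement produces the cold weather tip.
    if celsius_temperature > 0:
        return "Weather will be hot -- bring your sunscreen."
    return "Weather will be cold -- bring your parka."

print(advisory(25))   # Weather will be hot -- bring your sunscreen.
print(advisory(-10))  # Weather will be cold -- bring your parka.
```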
You can also build conditions using calls to static Java methods. When using Java method calls, the return value of the method determines the condition result as shown in Table 6.
Table 6. Java condition evaluation criteria
| Method Return Type | True Condition | False Condition |
| --- | --- | --- |
| org.w3c.dom.NodeList | a non-empty node list | an empty node list or null |
| String | a String that is one or more characters in length | a zero-length String |
| int | any number other than zero (negative or positive) | zero |
To write a Java method to compute the logic for the condition:
- Create the new static Java method in an existing Java project or create a new Java project to store the method within.
- Make sure that your mapping project or one of its library dependencies contains a dependency to your Java project.
- Click the Java Imports property page and click the Add button. Next, specify the prefix and the associated Java Class.
- In the condition entry field, use the content assist (available by pressing CTRL+Space) to insert the appropriate method call.
For example, suppose you have the following:
Parameters: cityName (of type String)
The condition might look like this:
Tips and tricks
This section contains some tips and tricks to help make XML mapping an easier task. The tips in this section include:
- Tips for working faster: Match mapping, content assist.
- Tips for working with large schemas: Scrolling, filtering, navigation trail.
- Other tips: Test map view, sorting transforms, viewing model groups, moving mappings.
You can use match mapping when the source and target are similar or contain similar sections. Match mapping always attempts to match whatever is showing on the source side of the editor with whatever is on the target side of the editor. It is not dependent on a selection. A match is made when an element or attribute on the source side has the same name as an element or attribute on the target side. The types of the matched elements must be similar for the match to produce useful mappings. If the matched types are complex types, a container map is used to match them at the top level, and then match mapping will continue within the container mapping until no more matches are found. To invoke match mapping, use the "Map source to target based on the name and types" toolbar button. Match mapping will always use container mappings (for example, Local map, or For each in the case of arrays) to map complex types rather than using a Move, which makes it easier to tweak the mappings that are created.
For example, if a Move transform had been used to map the body on the source to the body on the target, and you wanted to make a minor adjustment to something in the body, you would need to delete the Move transform and create individual transforms at a more granular level. With match mapping, however, container maps are used, and drilling into those container maps lets you get to the exact location where the changes need to be made. In this way, only the transforms that correspond to the minor adjustments need to change.
When the source and target have similar fields of the same type, but may not have the same name, you can use Local maps to match these similar types, and then use match mapping within the Local maps that you create. For example, suppose we have the following source and target as shown in Figure 8.
Figure 8. Mapping editor before match mapping is invoked
Invoking match mapping at this level will not find any matches and will display an error message indicating that no matches were found for the "getWeather" element in the target. However, the GetWeather element in the target is looking for a cityName and a countryName, which will come from the first element in the destinations array on the source side. We can reset the starting point for the match mapping by creating a Local map between the destinations element in the source and the GetWeather element in the target, as shown in Figure 9.
Figure 9. Mapping editor after match mapping is invoked
From within the Local map, invoking match mapping will map the CityName and CountryName.
Content assist and XPath Expression Builder
Some of the entry fields used within the property pages of the Mapping Editor provide content assist to help when syntax is important. If content assist is available in an entry field, a light-bulb icon appears next to the entry field, and the hover help on the light-bulb indicates whether content assist is available. To invoke content assist, press CTRL+Space.
When editing an XPath expression, the content assist includes a menu item called "Insert Simple XPath Expression". Clicking the menu option displays the "Simple XPath Expression Builder" dialog, which assists in creating simple XPath expressions, as shown in Figure 10.
Figure 10. Simple XPath expression builder
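If you want to experiment outside the tooling with the kind of location path the builder produces, the JDK's built-in XPath 1.0 engine can evaluate it directly. The sample document and path below are invented for illustration; they are not taken from the article's schemas:

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class SimpleXPathDemo {

    // Evaluates a simple location path, similar in shape to what the
    // Simple XPath Expression Builder generates, against an XML string.
    static String firstCity(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));
        return XPathFactory.newInstance().newXPath()
                .evaluate("/destinations/destination[1]/cityName", doc);
    }

    public static void main(String[] args) throws Exception {
        // Invented sample document; not from the article.
        String xml = "<destinations><destination>"
                   + "<cityName>Paris</cityName>"
                   + "<countryName>France</countryName>"
                   + "</destination></destinations>";
        System.out.println(firstCity(xml)); // prints "Paris"
    }
}
```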
When creating a transform or mapping in the editor, you typically locate a source input field, click it, hold the mouse button down, drag a connection over to the target field, and release the mouse button to complete the action. This method is quick and easy, but it prevents you from using the editor's scroll bars at the same time, which makes it hard to work with large schemas where the source and target fields of an intended transformation are not both visible at once. You can use the following methods to create transformations between a source and a target that are not visible at the same time.
- Hover over the source input field and single-click the drag handle that appears, without holding down the mouse button, as shown in Figure 11.
Figure 11. Drag handle
This initiates the connection creation process while still allowing you to use the scroll bars. While the connection wire is active, you can also initiate scrolling by hovering near the editor edge and pausing briefly.
- Select the source input field without holding down the mouse button. Scroll to the appropriate matching field in the target, hold the CTRL key and click the target field to select it. Finally, right-click on the target field and select Create Transform. A transform between the selected source and target will be created and you can then customize the transform as desired.
- Select the source input field, right-click, and select Create Connection. Once the connection wire appears, you can use the scrollbars at the same time to find the target field and complete the transformation. While the connection is being created, you can also initiate scrolling by hovering near the editor edge and pausing briefly.
In cases where the input or output of a mapping contains many fields, finding and navigating to a particular field can be tedious. A filter is provided on both the source and target columns to make finding such fields easier. Once a filter is applied to a source or target column, the column shows only the fields whose names contain the filter text. To use a filter, enter the filter text in the following location and press Enter.
Figure 12. Mapping editor source and target filters
To clear a filter, click <Show all> at the bottom of the column or clear the filter text.
As you create nested mappings such as Local maps, you can find yourself deep within an element structure, wondering where exactly you came from. At the top of the editor is a navigation trail that shows which elements you are nested in, as shown in Figure 13. Clicking any part of the navigation trail quickly brings the editor view to the associated level.
Figure 13. Mapping editor navigation trail
Test map view
While working in the Mapping Editor, you can use the Test Map view to test your mapping transforms as you create them. For details on the Test Map view, see Problem determination.
Sorting transforms

By default, the Mapping Editor sorts the transforms column based on the source input, meaning that the transformations align themselves with their associated source input fields. In some cases, it may be more convenient to have the transform column sorted by the target outputs. Because this aligns each transformation closest to its target output, it provides a quick way to see what type of transformation is operating on a particular target. To switch the sorting method, right-click in the mapping editor area and use the "Sort Transforms" menu action, or click the desired sort method button on the local toolbar (by source or by target).
Viewing model groups
By default, the Mapping Editor uses a simplified view of the XML inputs and outputs, so some information that is in the schema is not shown. For example, a choice defined in the schema is shown as a flat list in the mapping editor. To see such model group information in the XML Mapping Editor, edit the XML Mapping preferences to show the groups by clicking the Preferences button on the Mapping Editor local toolbar.
In addition to the details shown in the Mapping Editor window, you can also see general information about the selected element in the properties view as well.
Moving mappings

As of Version 7.0, the Mapping Editor supports cutting, copying, and pasting mappings. The cut, copy, and paste actions are useful in the following cases:
- A map becomes disorganized because too few container mappings are being used. In this case, you can create a new container mapping, cut (CTRL+X) the existing mappings that are outside it, and paste (CTRL+V) them onto the new container mapping. The pasted mappings are then moved inside the new container mapping. For more information about organizing maps, see Organizing mappings.
- An existing mapping needs to be moved from one target to another. In some cases, you may have already created a mapping to a certain target and then realize that you meant to create that mapping from the same source but to a different target with the same type. In that case, you can cut (CTRL+X) the existing map, select the correct target field, and then select paste (CTRL+V).
- An existing mapping needs to be duplicated for a different source and target. If you have already created a mapping between a source and target, then you decide that you want to create the same mapping between a different source and target that have the same types as the original source and target, you can copy (CTRL+C) the existing mapping, select the new desired source and target elements (hold the CTRL key to multi-select), then invoke paste (CTRL+V).
- A mapping from one file needs to be duplicated in another file. As long as the source and target types are the same, you can copy a mapping from one file to another using copy and paste. Start by copying (CTRL+C) the mapping in one file, go to the new file, select the new source and target fields (using the CTRL key to multi-select), and then invoke the paste action (CTRL+V).
Problem determination

While creating a map, it is a good idea to test the map periodically to ensure that it produces the desired results.
To test a map in the tooling, the first step is to create or select an input document to test. The easiest way to accomplish this is to use the input document generation tools that are included in the Mapping Editor. These tools will create a basic input document based on the input XML schema. Once the input document has been generated, you can start testing immediately and can later enhance the input by adding additional elements and values. Use the following methods to create input documents:
- While using the Create New XML Mapping wizard from an XSL Transformation primitive in a mediation flow, you can check the Create a sample XML input file for testing the XML Map checkbox.
- From within the Mapping Editor, you can click the Associate XML files toolbar button, and then use the Generate Input button to create and associate a new sample input file.
The Generate Sample Input File button is disabled in cases where the root input of the map is not a global element. Maps whose inputs are not global elements can only be tested by calling them from another map.
- From within the Mapping Editor, you can click the Test Map toolbar button to open the Test Map view. From the Test Map view, click the Associate XML files toolbar button and then use the Generate Input button to create and associate a new sample input file.
As an alternative to creating a sample input file, if you already have a sample input file, you can associate the existing XML input file with the map file using the Associate XML files toolbar action. Currently, input files are generally saved in the same project as the maps they apply to and show up in the Business Integration view under the Transformations > Data Map Test Data category.
Iterative testing using the Test Map view
Once you have a sample input file associated with the map file, you can use the Test Map view to test the map. You can use this method of testing while you are still developing your mappings: the view performs transformations on the input document using whatever transformations you have implemented in your mapping file, even if you have not yet saved the changes. This allows you to ensure that the transforms you create have the desired result before committing changes to your mapping file. To open the Test Map view, use the Test Map action on the mapping editor toolbar.
While using the Test Map view, you can refresh the output at any time by clicking the Run transformations button on the Test Map view toolbar. You can also make updates to the input XML file from the Test Map view and save those changes by clicking the Save selected input file toolbar button. Changes to input XML files can also be committed to a new input file by using the Save selected input file as toolbar button.
The Test Map view provides two methods for viewing the input and output XML. The visual tab represents the XML in a tree format, allowing you to navigate and edit the values easily; it also provides some validation, such as verifying whether the value assigned to a simple type is valid.
Testing maps using the Integration Test Client
In addition to the Test Map view, you can also test a completed XML Map using the Integration Test Client. You can save tests that you create in the test client and re-run them multiple times. You can also debug mapping transformations while testing with the Integration Test Client.
To test a map file using the integration test client, right-click an XML map file in the Business Integration view and select Test. You can also select an XSL Transformation node in a mediation flow and right-click to invoke the Test XML Map action.
To debug a map while using the integration test client, select the Stop for debug before transformations checkbox in the Invoke XML Map Event detailed properties before running the event. You can also set breakpoints on the transformations in the Mapping Editor by right-clicking the transformation and selecting an action from the "Debug" menu.
Testing maps on the server
When using the above methods of testing, the maps are tested locally in the tooling. There may be cases where a map executes successfully in the tooling but behaves incorrectly when running on the server. In these cases, you will need to debug the problem on the server.
Testing an XML map on the server is accomplished by testing the component that contains the XML Map to be tested. In the Assembly Editor, right-click the mediation flow or business process flow component that contains the map to be tested and invoke the Integration Test Client using Test Component or Test Component in Isolation.
To understand how an XSL Transformation primitive in a mediation flow is executed at runtime, consider the following scenario shown in Figure 14.
Figure 14. XSLT primitive
1. The SMO input object is received by the "SortDestinations" XSL Transformation primitive.
2. The SMO input is serialized into XML for processing.
3. The XML input is transformed using the XSL file associated with the primitive, and an XML output is produced.
4. The output XML is de-serialized into the SMO output object.
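Outside the server, the transformation step of this sequence can be sketched with the JDK's standard XSLT API. The stylesheet and input message below are invented placeholders rather than a real SMO or a generated map; they only illustrate XML-in, XSL applied, XML-out:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XslPrimitiveSketch {

    // A trivial invented stylesheet standing in for the XSL file
    // that the Mapping Editor would generate for the primitive.
    static final String XSL =
        "<xsl:stylesheet version='1.0' "
      + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:template match='/order'>"
      + "<confirmation><orderId><xsl:value-of select='id'/></orderId>"
      + "</confirmation></xsl:template>"
      + "</xsl:stylesheet>";

    // Applies the stylesheet to serialized input XML and returns the
    // output XML (which the runtime would then de-serialize into the SMO).
    static String transform(String inputXml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSL)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(inputXml)),
                    new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Invented input message standing in for the serialized SMO.
        System.out.println(transform("<order><id>42</id></order>"));
    }
}
```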
Fine-grained trace in the Integration Test Client allows you to see the inputs and outputs of each XSL Transformation primitive in the mediation flow. In most cases, if an XML map works during local testing and fails at runtime, the problem is in step 4 above, the conversion of the output XML to the SMO object. This is the point where an inconsistency between the output XML and the schema will cause a failure. To help diagnose and fix these types of problems, you can turn on server tracing, which allows you to examine the messages.
Note: Currently, fine-grained trace is only available within a Mediation Flow component and is not available in a Business Process Flow.
To turn on server tracing:
- In the Servers view, while the server is running, right-click the applicable server and select Administration > Run administrative console.
- Log in to the administrative console (admin is the default ID and password).
- In the Troubleshooting section, select Logs and Trace.
- In the Logging and Tracing area, click the server name, such as server1.
- Click Change Log Detail Levels.
- In the Change Log Detail Levels entry field, add the following text to whatever text already exists in the entry field:
For example, if the entry field contains this before making a change:
The entry field will look like this after the changes:
- Click OK.
- Click Save to save changes and promote the configuration changes to the server.
- Log out of the administrative console.
- Restart the server.
Once the server tracing has been enabled, rerun your test and then review the trace.log file. To determine the location of the trace.log file:
- Open the Server Logs view.
- In the Server Logs view toolbar, click the Load server console or log button.
- Select Load from server log directory.
- Click Browse.
- Check the checkbox for the trace.log file and click OK.
- Note the location of the trace.log file and click Cancel.
Note: Although you can use the viewer to view the trace.log file, finding the required trace information is easier when using a text file editor to view the log file.
When you open the trace.log file, you can locate serialized SMO instances
by searching for the string:
You can also view the raw XML data before and after a transformation primitive by looking for the following strings:
- Serialized DataObject prior to transformation
- Result of transformation
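Because trace.log files can be large, a trivial filter that keeps only the lines containing these markers can speed up inspection. The sketch below uses invented sample content in place of a real log file; in practice you would read trace.log line by line:

```java
import java.util.ArrayList;
import java.util.List;

public class TraceScan {

    // Returns only the lines containing one of the two marker strings
    // that precede the raw XML entries in trace.log.
    static List<String> markerLines(List<String> lines) {
        List<String> hits = new ArrayList<>();
        for (String line : lines) {
            if (line.contains("Serialized DataObject prior to transformation")
                    || line.contains("Result of transformation")) {
                hits.add(line);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // Invented stand-in for trace.log content; in practice, read the
        // file line by line (for example, with java.nio.file.Files.lines).
        List<String> sample = List.of(
            "... Serialized DataObject prior to transformation <smo>...</smo>",
            "... an unrelated trace entry",
            "... Result of transformation <out>...</out>");
        markerLines(sample).forEach(System.out::println);
    }
}
```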
Another option for debugging is to set a breakpoint on the XSL transformation primitive in the Mediation Flow Editor. The debugger will allow inspection of SMO values both before and after the XSL transformation primitive at runtime.
Organizing mappings

When working in the Mapping Editor, there are things you can do to make your maps easier to navigate, understand, and maintain. One technique for implementing a map is to look at the fields in the target one by one. For each field in the target, consider:
- Does the target field need to be populated? (Is it required?)
- If the target field needs to be populated, where will the data come from?
- If the target is going to be populated from data that is in the source XML, where in the source is the data located?
- Is the data source of the same type as the target data type? If the types are the same, and the data does not need to be modified in any way, consider using a Move transform.
- If the data from the source and the target are not the same type or manipulation of the data is required, consider using a Local map or a Submap to map the source and target. By using a Local map or a Submap, you can nest the details of the mapping inside a child transform, leaving less clutter and confusion at the root level of your mapping file.
- Will there be a future requirement to map the source and target types in another mapping or multiple times in the current mapping? If you determine that these types are frequently mapped together, consider using a Submap which is re-usable. Otherwise, a Local map is the way to go.
- If there are multiple source inputs required to populate a single target, consider using a Merge which is a specialized Local map that accepts multiple inputs.
- Within a Submap or Local map, do many of the source and target fields have the same name and types? If there is a lot of similarity between the source and target, consider using Match Mapping to map the similar fields as a starting point for further customizations.
- For complex types that are the target of a mapping, ensure that none of the target type's children are targets of any mappings other than those defined within a container mapping on the parent. For example, suppose that you wanted to copy all the context information from the source to the target, but you also wanted to add something from the body to the context or correlation as well. The correct way to accomplish this is shown in Figure 15.
Figure 15. Correct way to handle nested mappings
Figure 16 is incorrect because there is a Move on context and another Move to context or correlation.
Figure 16. Incorrect way to handle nested mappings
Figure 17 is also wrong because the Move to context or correlation is not within the Local map defined on context.
Figure 17. Incorrect way to handle nested mappings
Even if the Local map on context does not map the correlation element, the Move is still wrong, because it must be within the Local map.
Once you have determined the best type of mapping for each target field at the root level, you can use the same process to create mappings within the Local map, Merge, and Submap transforms that you have created.
While organizing mappings, the following actions might be helpful:
- Cut, copy, and paste: See Moving mappings for more information.
- Refactor to submap: If you have already created a Local map and later decide it would be better as a reusable Submap, right-click the existing Local map and invoke the Refactor to submap action.
Migrating maps to Integration Developer
In most cases, a .map file that was created in a previous version of Integration Developer will work correctly in newer versions of Integration Developer. In some cases, a warning message may appear indicating that the XSL file associated with a given map was generated using an earlier version. In those cases, we recommend regenerating the associated XSL file, either by using the quick fix associated with the warning message in the Problems view, or, when the .map file is open in the Mapping Editor, by using the "Generate XSLT script" button on the Mapping Editor local toolbar.
One exception to the general migration rule is that the format of XML maps changed significantly between Integration Developer V6.0.2 and Integration Developer V6.1. A tool that migrates Integration Developer V6.0.2 XML mapping files (.xmx) to the newer Integration Developer V6.1.2 and later format (.map) is provided in Integration Developer. After importing projects containing *.xmx files, you will receive warnings in the Problems view. To launch the migration tool, right-click on a warning message and select Quick Fix, or try to open a *.xmx file in Integration Developer V6.1.2 or later.
Once the automatic migration has completed, you will still need to test your migrated map file to ensure that it produces the desired results. In V6.0.2, empty elements were created for required elements in the target XML even if those elements were not mapped. In V6.1.2 and later, these empty elements are no longer created unless you explicitly create a mapping. Because of this and other differences between the two versions, we recommend that you test your maps after migration.
Part 2 of this series, Working with complex XML structures in the Mapping Editor, explains more advanced XML mapping topics.
In this article, you learned how to create, build, and test XML maps using WebSphere Integration Developer V7, as well as how to create simple mappings using a variety of refinements available within the XML Mapping editor. You now have enough skills to design, develop, and deploy XML maps within WebSphere Integration Developer.