
developerWorks > Rational >
An introduction to Model-Driven Architecture (MDA)
The MDA Toolkit for IBM Rational XDE Java
Lessons in the design and application of MDA solutions
The Rational Edge
Part II: Lessons from the design and use of an MDA toolkit

Level: Introductory

Alan W. Brown, Distinguished engineer, IBM Rational
Jim Conallen, IP Development, IBM Rational

15 Apr 2005

from The Rational Edge: Over the past two years, the role of model-driven design in improving the productivity and quality of enterprise application development has been widely discussed, yet few documented experiences with the use of MDA styles of development are available. This paper provides a set of practical lessons derived from the design and use of an MDA toolkit at IBM. It highlights the key lessons learned from specific MDA practices, and it offers some observations on the MDA approach in general, including a brief discussion of follow-on work in progress.

Software development is, ideally, an iterative process of understanding, discovery, and design. Through the application of iterative and incremental development techniques, we increase our understanding of the problems that a software system is being created to solve. We use this understanding to design and deliver solutions that address the concerns of a defined set of stakeholders. Throughout this process, many different kinds of information must be captured, analyzed, refined, and communicated. Models, supported by modeling techniques and tools, help to enable this process.

The Unified Modeling Language (UML) is the standard modeling notation for software-intensive systems.1 Originally conceived over a decade ago as an integration of the most successful modeling ideas of the time, the UML is widely used and is supported by more than a dozen different product offerings. Its evolution is managed through a standards process governed by the Object Management Group (OMG).

One of the reasons for the success of UML is its flexibility. It helps software development teams create a set of models representing both the problem domain and solution domain. The UML can capture and relate multiple perspectives on these domains; it enables system modeling at different levels of abstraction, and encourages the partitioning of models into manageable pieces as required for shared and iterative development approaches. In addition, relationships between model elements can be maintained across modeling perspectives and levels of abstraction, and specialized semantics can be placed on model elements through built-in UML extension mechanisms (e.g., stereotypes and tagged values bundled into UML profiles).

As a consequence, UML is often the basis for system and software design approaches that encourage model-centric styles of development. Users of UML for modeling are supported by well-established methods that offer a set of best practices for creating, evolving, and refining models described in UML. One of the most well-known is the IBM Rational Unified Process®, or RUP®. RUP describes a development process that helps organizations successfully apply a model-driven development (MDD) approach.2 It introduces important practical techniques to the creation, evolution, and maintenance of models, focusing on how teams of practitioners working on large-scale development efforts can reduce technical and developmental risk in a project, produce models of high fidelity, and ensure that the different models are appropriately synchronized.

Organizations have been successfully using UML in the context of RUP-based best practices for some time. Surveys indicate that almost 40 percent of developers use some modeling approach based on UML, and the market-leading tool supporting UML, IBM Rational Rose, has cumulatively sold in excess of 250,000 licenses.3

As investment in these models has increased, some organizations have begun automating many of the model transformation aspects of an MDD approach. In effect, they have been using the UML / RUP / Rose combination as a platform on which to build their own model-driven approaches -- writing RoseScript to manipulate models in IBM Rational Rose, or building external utilities for manipulation of the models that IBM Rational Rose externalizes in the eXtensible Markup Language (XML) Model Interchange format, XMI. In some cases, customers have significant investment in these layers as a way of capturing their organization's best practices for modeling and model transformation.

A particular approach to MDD has been standardized by the OMG. The Model-Driven Architecture (MDA) concept that the OMG has defined focuses on creating models using UML, and transforming those models between different levels of abstraction. This has led to products that support the creation, management, and sharing of such transformations. Commercial products such as Codagen and arcStyler, and open source efforts such as AndroMDA and OpenMDX, augment the capabilities of modeling tools, such as IBM Rational Rose, to support MDA and ease the creation, management, and application of these transformations.4

In Part I of this paper, published in the February 2004 issue of The Rational Edge, we discussed how modeling is used in industry today and the relevance of MDA to today's systems. Here, in Part II, we examine model-driven development approaches from the perspective of the design and use of an MDA toolkit at IBM. We describe the rationale for introducing this toolkit, explore its key capabilities, and highlight the most important lessons from applying a model-driven style of development with this toolkit in a number of customer situations. In Part III next month, we will discuss how traditional software lifecycle practices adapt to MDA approaches and make several general observations on the successful application of model-driven approaches in practice. Throughout the paper, we assume a general familiarity with the OMG's MDA approach and a background in modeling with UML. Many introductions to these topics are available elsewhere.

The MDA Toolkit for IBM Rational XDE Java
With the 2002 introduction of IBM Rational XDE, IBM Rational's next-generation modeling environment, it was important that we also offer appropriate support for model-driven styles of development. Since its introduction, the principal mechanism for producing custom automation in IBM Rational XDE has been the executable patterns mechanism. This mechanism allows developers to create and design executable patterns as visual models. A pattern developer uses IBM Rational XDE to create diagrams and model elements within a "pattern" element. The pattern, itself a stereotyped UML model element, can then be executed in other parts of the model or in other models. The mechanism essentially makes the specified part of the target model look like the pattern.

This type of transformation engine inherently involves a declarative style. That is, within a diagram a structure of model elements is constructed. Elements and properties of those elements are parameterized. Conditional and procedural elements in the pattern are expressed with code in special callouts. The callouts are simply callback functions that are executed during special steps in the pattern's execution.

Early problems and conclusions
Our initial attempts at using the patterns mechanism to implement large-scale MDA-style transformations, where entire models are processed in a batch-like operation, were disappointing. Although the patterns were complete and fully functional, we found in practice that performance suffered, especially as more algorithmic complexity was added to the transformations. This reinforced our emerging opinion that declarative-style transformation mechanisms, such as the executable patterns mechanism in IBM Rational XDE, although relatively easy to define and understand, were not suited to the large-scale MDA-style transformations we were trying to build.

We concluded that the patterns mechanism is best-suited to small-scale transformations on discrete, individually selected model elements. Patterns are also most efficient when the transformation mapping is mostly declarative. The more algorithmically complex the transformation mapping is, the less suited a visual, executable pattern mechanism is. What we needed was a more scalable approach to implementing MDA-style transformations, an approach that could handle the increased algorithmic complexity, and could also scale to support transformations involving large models.

These driving factors led to the creation of the MDA Toolkit for IBM Rational XDE Java. This toolkit's main goal is to make it practical to develop large-scale MDA-style transformations that use and manipulate UML models created with IBM Rational XDE.

Key aspects of the MDA Toolkit
The MDA Toolkit for IBM Rational XDE Java provides a framework for a transformation author to develop and deploy MDA-style transformations involving UML models.5 Toolkit-created transformations are inherently procedural since they are, for the most part, Java code. Unlike the visual, executable patterns mechanism, most of the work of creating an MDA Toolkit transformation lies in writing the code that directly manipulates the models and other artifacts that participate in the transformation. This approach to transformation development is simple and efficient for handling transformation mappings of arbitrary algorithmic complexity.

The toolkit provides a special application programming interface (API), layered on top of the base model-access API, that provides many features commonly used in MDA transformations. Most UML model APIs provide only the most basic model-access functionality (get, create, and update model elements and properties). In our experience, when developing MDA transformations that manipulate UML models, additional higher-level functionality is key to making the transformations robust, scalable, and maintainable. The MDA Toolkit provides an API for model access (an MDA API) that includes this high-level functionality, such as:

  • Intelligent and deep copying of model elements. Copying model elements (especially when the elements can be containers or when they have relationships to other elements outside of the container) requires many discrete decisions. Optional parameters to a copy method determine how relationships and sub-elements are copied. Relationships can be copied relative to the elements on their ends, or one end can be copied while the other remains fixed (for example, a relationship to a shared framework object). The toolkit's "intelligent copy" methods offer a callback mechanism to help resolve names.
  • Language-specific behavior. Techniques for comparing elements, such as names or operation signatures, can vary with the target language. For example, in different target programming languages, an operation that searches for or creates named model elements requires a target-language-specific technique for handling upper and lowercase characters. Similarly, when comparing operations for languages like C++ or Java, the operation name alone is insufficient; in this case, operations are compared with each other by their signatures, which only recognize the list of argument types, not names or return values. The ability to have, by default, the API for model access understand and behave with these rules makes working with the model significantly easier.
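The language-specific comparison rules described above can be sketched as follows. This is a hypothetical illustration of Java-style signature matching (operation name plus ordered parameter types, ignoring parameter names and return type), not the toolkit's actual API:

```java
import java.util.List;

// Hypothetical sketch: matching operations the way a Java-aware model API
// might, i.e., by name plus the ordered list of parameter types, ignoring
// parameter names and the return type.
public class SignatureMatcher {
    // Builds a canonical signature key such as "save(Customer,boolean)".
    public static String signatureKey(String name, List<String> paramTypes) {
        return name + "(" + String.join(",", paramTypes) + ")";
    }

    // Two operations are considered the same when their signature keys match.
    public static boolean sameOperation(String name1, List<String> types1,
                                        String name2, List<String> types2) {
        return signatureKey(name1, types1).equals(signatureKey(name2, types2));
    }
}
```

Building this rule into the model-access layer means every lookup and comparison behaves consistently for the target language, rather than each transformation re-implementing it.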

A principal design goal of the MDA API was to develop an inherent behavior in the API that supports the idea of iterative and repeated transformation invocation. We have found that most uses of an MDA transformation occur in a process where the transformation is re-executed multiple times in an iterative development process. Therefore, it is critical that any artifacts modified by the transformation preserve all information that is not directly linked to the originating elements of the transformation. For example, a transformation that converts an abstract UML class to a Java class, where the source file already exists and contains a number of helper attributes and methods, should not remove any existing code during repeated transformations.
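This "update, don't recreate" behavior can be sketched with a toy model. The set-based merge below is an invented illustration of the principle, not the toolkit's implementation:

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of re-runnable transformation behavior: the
// transformation only adds or refreshes the members it owns, leaving any
// hand-written members of the target class untouched across repeated runs.
public class IdempotentTransform {
    // The target "class" is modeled simply as an ordered set of member names.
    public static Set<String> applyTo(Set<String> existingMembers,
                                      Set<String> generatedMembers) {
        Set<String> result = new LinkedHashSet<>(existingMembers); // preserve hand-written code
        result.addAll(generatedMembers); // set semantics: re-running adds no duplicates
        return result;
    }
}
```

Running the transform a second time over its own output changes nothing, which is exactly the property an iterative process needs.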

The role of UML profiles
The MDA Toolkit includes tools for creating and refreshing UML profiles. We have found that custom UML profiles are used in most transformations that involve UML models. Profiles play two different roles in a typical MDA transformation. They can act as a semantic profile, enabling the model to accurately and appropriately capture and express specific information. This is the classical use of a UML profile: to define a set of semantics with which to interpret a model. However, they can also be used as a marking mechanism. Marking a model, in OMG MDA terminology, means tagging it with extra information that is used by tools but is not appropriate to capture in the model's semantics. This information lies outside the domain and abstraction level of the model content itself, yet it is necessary for an automated transformation to complete its tasks.
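Reading a mark might look like the following sketch; the profile and tag names here are invented for illustration and are not part of any real profile:

```java
import java.util.Map;

// Hypothetical sketch of "marking": a transformation consults a tag value
// that a marking profile attached to a model element. The analysis model
// itself says nothing about persistence; only the mark does.
public class Marking {
    // Tag values are modeled as a simple map from qualified tag name to value.
    public static boolean isPrimaryKey(Map<String, String> tagValues) {
        return "true".equals(tagValues.getOrDefault("dbProfile.primaryKey", "false"));
    }
}
```

The point of the separation is that removing the marking profile leaves the semantic content of the model intact; only the transformation loses its hints.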

The toolkit includes an authoring and a runtime component. The authoring component enables UML profile creation and includes a project creation wizard (with sample code); it is required only on those workstations used to create new custom MDA transformations. The runtime includes the MDA API, which provides the high-level MDA functionality on top of the basic model-access API, and is a shared component required on every machine that invokes MDA transformations.

Packaging and delivering MDA Toolkit transformations
Transformations created with the toolkit are packaged as Eclipse plugins, making the developed transformations easy to deploy to development teams. The toolkit insulates the transformation developer from most of the details of creating Eclipse plugins, so he or she can focus on developing the transformation itself.

The toolkit manages most of the details of the Eclipse plugin project, including the user interface and cached preferences, thus freeing the transformation developer to focus on implementing the actual transformation logic. Once a transformation and any required custom UML profiles are created and ready for deployment, they are packaged as an Eclipse feature and made available on an internal intranet or Web site. Each developer installs the desired transformations and dependent plugins. Once these are installed, the developer has a new menu item and a set of default preferences that, when invoked, prompt the developer for the required parameters before invoking the transformation itself.

All the detailed transformation logic in the mapping is encoded in and executed with the downloaded transformation. This alone is not sufficient to ensure that any given transformation is going to be used correctly. Therefore, most well-constructed transformations will include new sections to be added to the Eclipse online help. These sections should document when, why, and how a transformation should be invoked. Of course, in every environment where automation is being introduced the process should be updated and communicated accordingly.

The MDA Toolkit for IBM Rational XDE Java was made available in late 2003, and has been used in a variety of customer situations. Our experiences in the design and application of that toolkit have reinforced many of our earlier findings, and highlighted a number of critical lessons for anyone interested in the practical application of MDA.

Lessons in the design and application of MDA solutions
Over the past few years, many organizations have been using model-driven development, including model transformations. In fact, the authors of this paper have been directly and indirectly involved in developing models for large-scale systems, building model transformations, designing MDA tools, and using MDA tools on several customer engagements. From these experiences we can begin to distill a set of best practices for developing MDA-style solutions.

While each customer situation has its own particular concerns, we believe that the following steps should be followed consistently when creating MDA solutions:

  1. Examine the models currently used in the development process, and the semantic connections between elements across abstraction boundaries.
  2. Identify candidate transformations for automation.
  3. Specify (document) the transformation requirements.
  4. Create necessary UML profiles.
  5. Develop the transformation code.
  6. Draft usage documents, then package and deploy.

These steps form the basis of various MDA projects, and when coupled with typical iterative design and risk-reducing development practices, they offer a robust guide to the MDA approach. Most importantly, from our experiences applying these steps, a set of heuristics can be distilled that offer a set of lessons for those practicing MDA. We now examine those lessons in detail.

Semantic model connections

Lesson 1. Develop transformations only when the semantic connections between the model elements are well understood.

Before any attempt to introduce automation in the development process, you must acquire a full and thorough understanding of all the models used and managed in the process. Too often, development efforts begin with the creation of useless models, just because the untailored development process or method states that they are needed. Unless the models being developed during a software project provide clear and useful information to the effort, they should not be created or maintained. On the other hand, many projects are often missing some key models and abstractions that connect various parts of the system. Whenever it is apparent that a significant amount of human interpretation and creativity is employed in any particular part of a project, there might be a need for a formal model to capture the thought processes and design decisions made during those activities.

The semantic connections between elements in models across abstraction boundaries are of particular interest in the context of MDA. Most of the practical activity around MDA today involves the automation of model transformations, particularly transformations between models at higher levels of abstraction and those at lower levels. Typically, connections like this "fan out"; that is, a single model element at a high level of abstraction (e.g., use case) is connected to multiple model elements (e.g., boundaries, entities, and controllers) in lower-level models.

The traceability of these connections is important for several reasons, the most important of which is to support automated transformations. But it also has important significance in iterative development environments where any given transformation is likely to be invoked many times. In our experience, a transformation is rarely a simple matter of moving information from one input model to one output model. More typically, there is one primary input source, with a few extra general-purpose parameters and a set of output models. For example, in a typical J2EE system, an analysis model contains information that is transformed into a database design, a Java interface and object design, a set of JavaServer Pages (JSP), and a number of configuration or deployment descriptors. All these more detailed models must be synchronized to execute properly. Therefore, it is often most convenient to develop a single transformation that manages the transfer of information from the primary high-level source model to the set of low-level models, rather than developing separate transformations for each combination of input and output models. It is this ability to coordinate the semantic content across various low-level models that makes MDA an attractive technology.
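A fan-out transformation of this kind can be sketched as a single routine that derives all the coordinated downstream names from one source element. The naming conventions below are invented examples, not any organization's actual standards:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a single transformation fanning out from one
// analysis-model entity to several coordinated downstream artifacts.
// Keeping the derivations in one place keeps the outputs synchronized.
public class FanOutTransform {
    public static Map<String, String> transform(String entityName) {
        Map<String, String> outputs = new LinkedHashMap<>();
        outputs.put("javaClass", entityName);                        // object design
        outputs.put("table", "T_" + entityName.toUpperCase());       // database design
        outputs.put("jsp", entityName.toLowerCase() + "_edit.jsp");  // JSP page
        outputs.put("ejbName", entityName + "EJB");                  // deployment descriptor entry
        return outputs;
    }
}
```

Because every downstream name is computed from the same source element in the same run, a rename in the analysis model propagates consistently to all four artifacts.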

Furthermore, an important property of a transformation in this scenario is the developer's ability to execute it yet only update those elements of downstream models that are directly dependent on the upstream ones. Any additional information captured in the detailed models that was not generated by the transformation in the first place should remain intact. Additionally, model elements created by the transformation should not be duplicated in the downstream models during each invocation; rather, they should only be updated (or created if not present in the model). The MDA Toolkit API is structured so that transformations inherently follow these design principles.

Identify candidate transformations

Lesson 2. Not all semantic connections make for good MDA transformations.

When tight semantic connections are identified between elements in models, they should be examined to see if the rules governing their relationships are suitable for automation. A suitable transformation can only be implemented when these rules are clear and unambiguous. They may be large and complicated, or they can be trivial and simple; in either case, it may be appropriate to investigate authoring a transformation to implement them. However, if the rules that define these connections, and subsequently, the rules for constructing new elements in the other models, require developer experience or judgment, then automation can be easily ruled out.

Another reason to rule out automation is the inability to programmatically access the necessary elements of the models themselves. For example, most transformation steps that involve the reading of natural language documents, regardless of the level of formality in them, are typically not suited to MDA-style automation. To illustrate, we usually find that transforming a use case specification document into an analysis model is generally not practical. However, transforming a UML sequence diagram, or an activity diagram that accompanies a use case, might be suitable for automation (given the rigor with which it was constructed).

Document transformation requirements

Lesson 3. Writing MDA transformations should be treated as a software development project itself.

The most useful transformations automate tasks that are either too tedious or too complex for most developers to implement consistently and reliably. MDA automation ensures consistency and, in most cases, significant time savings. It is not surprising that most successful transformations are in fact non-trivial examples of software. When you are considering the use of MDA automation, and the creation of non-trivial transformations, you should treat this as a software development project in its own right.

A transformation, especially one created with the MDA Toolkit, is an example of custom software. The requirements should be clearly understood and examined both semantically and technically (see Lesson 1). The bulk of the requirements specification is contained in the mapping document. The mapping document describes in detail the semantic connections between the modeling elements in the various models participating in the transformation. Other pertinent requirements may include performance, security, scalability, etc.

Most of the transformations that we were involved with were implemented with a number of classes, and as a result, underwent an analysis and a design phase. Other practical issues, such as testing, deployment, and training, also have their parallels in the classic software development process. While most typical MDA transformations do not require a large team to implement, treating them as an independent software development effort will ensure completeness and quality.

When specifying the requirements for a transformation implemented with the MDA Toolkit, you should consider three distinct behaviors:

  • Validation of input parameters (and their content) for required and consistent information
  • Execution of the core transformation logic / mapping
  • Verification of the semantic connections between the model elements that participated in the transformation

The execution of the transformation's logic and the modification of the downstream models represent the most important work of the entire transformation. However, the validation of input and verification of successful completion are also important in an iterative development environment.
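These three behaviors can be sketched as a skeleton; the method names and structure below are illustrative, not the MDA Toolkit's actual API:

```java
// Hypothetical skeleton of the three behaviors a toolkit transformation
// specifies: validate inputs, execute the core mapping, verify the result.
public abstract class TransformationSkeleton {
    public abstract boolean validate();  // parameters and profiles present and consistent?
    public abstract void execute();      // core transformation logic / mapping
    public abstract boolean verify();    // semantic connections still intact?

    // Run validation first; on bad input, skip execution entirely to
    // preserve artifact integrity, then verify after executing.
    public final boolean run() {
        if (!validate()) return false;
        execute();
        return verify();
    }
}
```

Separating the phases this way lets validation fail fast before a long-running execution, and lets verification be re-run on its own after downstream edits.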

Lesson 4. Validate the integrity of all parameters and artifacts participating in a transformation before executing it.

Since almost anything can be invoked and executed in a transformation created through the MDA Toolkit, there is no inherent transactional process monitoring. If a transformation begins but aborts before completion, there are no facilities to guarantee that all input artifacts are reset to their previous state. This is because the MDA Toolkit does not limit or restrict the types of artifacts that can participate in a transformation. It is perfectly possible for the invocation of a transformation to invoke external Web services or modify artifacts in a permanent way.

It is generally up to the transformation developer to ensure the integrity of the artifacts that are manipulated in a transformation. As a result, it may be necessary to ensure that the input set of artifacts is in a known state (i.e., all required parameters are specified, expected profiles are applied to the models, and expected content in the models is verified) before the transformation takes place. This is also useful when the transformation takes a long time to execute (it is not unusual for a particularly complex transformation that depends on a large number of resources to run for hours). If the parameters (or any of the participating artifacts) can be determined to be invalid early, this saves user time and preserves artifact integrity.

Lesson 5. A verification specification is required to maintain the transformation's integrity in light of downstream changes.

When the transformation becomes part of an iterative development process, it is likely that even after a transformation is executed, the artifacts, both input and output, will be manually updated. This follows a general principle of model-driven development: changes are first introduced to the system in the models with the most appropriate level of abstraction, regardless of what that abstraction level might be. For example, a Customer entity described in a high-level analysis model might not contain information about persistence strategy (optimistic, pessimistic, etc.). So changes to this strategy need not be introduced into the analysis model, but rather in the design model where they would have the most impact. Conversely, the addition of a new key attribute to the Customer class would probably require a change directly to the high-level analysis model. When appropriate, and after a possible evaluation, this change would be reflected as appropriate in lower-level models, possibly as the result of an automated MDA transformation.

Because iterative development processes encourage the evolution of models throughout the development process, an important feature of a good automated transformation is the ability to analyze the current state of artifacts and compare them to what would be expected if the transformation were to execute with them. This step is encouraged in MDA Toolkit transformations with a separate and distinct verification step. The verification step is typically run separately, after the downstream artifacts have not only undergone the transformation, but undergone subsequent modification as well. In a typical scenario, a developer would use a transformation to update or create a set of models or code with new information in the abstract model. Then, those downstream models and artifacts might undergo further refinement that is not related to any semantic information managed by the transformation process. During this refinement, it is possible (although undesirable) for changes to be made that break the expected semantic connections between the models. Explicitly executing the verification of a transformation will produce a report to the developer of any breaks in the semantic mapping established by the original transformation. These may then be addressed appropriately by the developer, resulting in either a change to the upstream models, downstream models, or both.

Lesson 6. In most MDA situations, the model-to-model mappings are complex and require careful design and implementation.

The core logic in a transformation usually expresses the algorithm in which one set of model elements is transformed into another set. In a simple declarative-style mapping, the connections are relatively simple and straightforward, and there is little ambiguity. In most large-scale MDA-style transformations that we have seen, the mappings are not always so simple. Often an element in the upstream model will map to one configuration of elements under a complex set of conditions that often involve other upstream model elements, connections, and stereotypes and tag values of various UML profiles.

Take as a simple example a persistent entity called Address that defines a number of attributes, one of which is tagged as a primary key type (in the marking profile). Mapping an abstract class such as this into a Java object and database design is relatively straightforward (see Figure 1). A Java object is created, with the attributes mirrored and the data types corrected. Getters and setters are created as well. In the database design, a table and columns are created with the names adjusted to the organization's standards. The primary key (identified by a tag in the marking profile) is set. The data types are converted with some help from the marking profile. The marking profile can also be used to determine nullability and other typical database properties.
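The name adjustments this mapping implies can be sketched as simple derivations. The conventions shown (a T_ table prefix, uppercase column names) are assumed examples rather than any specific organization's standards:

```java
// Hypothetical sketch of the naming rules behind the Figure 1 mapping:
// attribute names become getter/setter names on the Java side, and class
// and attribute names become table and column names on the database side.
public class NameMapper {
    private static String capitalize(String s) {
        return Character.toUpperCase(s.charAt(0)) + s.substring(1);
    }
    public static String getterFor(String attribute) {
        return "get" + capitalize(attribute);
    }
    public static String setterFor(String attribute) {
        return "set" + capitalize(attribute);
    }
    public static String tableFor(String className) {
        return "T_" + className.toUpperCase();
    }
    public static String columnFor(String attribute) {
        return attribute.toUpperCase();
    }
}
```

Each operation in the object model and each column in the data model then traces back mechanically to exactly one attribute in the abstract model, which is what makes this case easy to automate.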

Figure 1: Mapping a simple entity to object and data design models

This simple example illustrates mapping a single class in the abstract model to a single class in the object model and a single table in the data model. Attributes in the abstract model map one-for-one to attributes in the object model and columns in the data model. Operations in the object model all trace back to exactly one attribute in the abstract model. In another example, however, the mapping is not so straightforward. Take as a second example a set of classes participating in a hierarchy in the analysis model (Figure 2).

Figure 2: Specialization in the analysis model

The classes RestrictedProduct and CommercialProduct are specializations of Product. The Product class also identifies two attributes (code and supplier) as the object identifier, or primary key; however, this is often identified with a tag value or stereotype, which may not appear rendered in a diagram. Using the organization's naming conventions and attribute-mapping strategies, there are still three very different ways in which this set of classes can be transformed into a database model. We refer to the three basic strategies as "Roll Up," "Roll Down," and "Separate Tables," and these are supported by IBM Rational XDE's Data Modeler, as shown in Figure 3.

Figure 3: Roll up generalization strategy

In the Roll Up strategy, all the attributes of all the subclasses are rolled up into the one main table, and a new "type" table, T_PRODUCT_TYPE, is created by taking the base class name and appending the _TYPE suffix. The columns of this type table are predefined by the mapping and are always the same; the table simply provides an extensible means to add new types easily.

In the Roll Down strategy, illustrated in Figure 4, all concrete classes are assigned their own unique table, where columns in the base class are duplicated across all the tables.

Figure 4: Roll down generalization strategy

Finally, in the Separate Tables strategy, illustrated in Figure 5, every class is mirrored by a table, and identifying relationships are created between the base class and its subclasses so that base class attributes captured in the base class table can be accessed from the corresponding rows in the child tables. In all three strategies, the tables use the composite key implied by the base class.

Figure 5: Separate tables generalization strategy
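The three strategies can be made concrete with illustrative DDL for the Product hierarchy. The table and column names below follow the T_ convention used in the example but are assumptions, not the figures' exact output:

```java
// Sketch of the DDL each strategy might produce for the Product
// hierarchy. All names and types are illustrative.
public class GeneralizationStrategies {

    // Roll Up: one main table carries all subclass columns; a separate
    // type table provides an extensible list of product types.
    public static final String ROLL_UP =
        "CREATE TABLE T_PRODUCT_TYPE (TYPE_ID INTEGER NOT NULL PRIMARY KEY,"
        + " NAME VARCHAR(40));"
        + " CREATE TABLE T_PRODUCT (CODE VARCHAR(20) NOT NULL,"
        + " SUPPLIER VARCHAR(20) NOT NULL, TYPE_ID INTEGER,"  // plus all subclass columns
        + " PRIMARY KEY (CODE, SUPPLIER));";

    // Roll Down: each concrete class gets its own table, and the base
    // class columns (including the composite key) repeat in each.
    public static final String ROLL_DOWN =
        "CREATE TABLE T_RESTRICTED_PRODUCT (CODE VARCHAR(20) NOT NULL,"
        + " SUPPLIER VARCHAR(20) NOT NULL, PRIMARY KEY (CODE, SUPPLIER));"
        + " CREATE TABLE T_COMMERCIAL_PRODUCT (CODE VARCHAR(20) NOT NULL,"
        + " SUPPLIER VARCHAR(20) NOT NULL, PRIMARY KEY (CODE, SUPPLIER));";

    // Separate Tables: every class gets a table; the child tables reuse
    // the base table's composite key as an identifying relationship.
    public static final String SEPARATE_TABLES =
        "CREATE TABLE T_PRODUCT (CODE VARCHAR(20) NOT NULL,"
        + " SUPPLIER VARCHAR(20) NOT NULL, PRIMARY KEY (CODE, SUPPLIER));"
        + " CREATE TABLE T_RESTRICTED_PRODUCT (CODE VARCHAR(20) NOT NULL,"
        + " SUPPLIER VARCHAR(20) NOT NULL, PRIMARY KEY (CODE, SUPPLIER),"
        + " FOREIGN KEY (CODE, SUPPLIER) REFERENCES T_PRODUCT);";
}
```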

From this example, it is clear that the mapping strategy is no longer simple, and that it is possible for some elements in the abstract model to map simultaneously to different elements in the database design. Also, in the Roll Up case all three analysis classes map to a pair of tables, with only one of them sharing a common name.

The target object model might also require that a single attribute serve as the object key (as in J2EE). The resulting transformation into the Java object model is illustrated in Figure 6. Because two attributes make up the object key, a new key class is created, and a directional association is added to the object design in addition to the getters and setters.

Figure 6: Java object design with generalization and composite keys
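A generated key class for the code/supplier composite identifier might look like the following sketch. The class name and the equals/hashCode contract are assumptions in the usual J2EE primary-key-class style, not the transformation's verbatim output:

```java
import java.util.Objects;

// Hypothetical key class generated for Product, whose object
// identifier is the composite of code and supplier.
public class ProductKey {
    private final String code;
    private final String supplier;

    public ProductKey(String code, String supplier) {
        this.code = code;
        this.supplier = supplier;
    }

    public String getCode() { return code; }
    public String getSupplier() { return supplier; }

    // Two keys are equal when both parts of the composite match.
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ProductKey)) return false;
        ProductKey k = (ProductKey) o;
        return code.equals(k.code) && supplier.equals(k.supplier);
    }

    @Override
    public int hashCode() { return Objects.hash(code, supplier); }
}
```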

Even this simple scenario illustrates the potential complexity of real-life mappings. The scenario is further complicated because the rules for determining the mapping strategy depend on a combination of tag values and analysis model configurations (e.g., use Roll Up when there are three or fewer classes, substitute composite keys with a single auto-generated integer key unless an override tag value is set to false, and so on).
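Rules of this kind might be sketched as follows. The three-class threshold is taken from the parenthetical example above; everything else (names, the fallback strategy) is illustrative:

```java
// A sketch of strategy-selection logic like that described above.
// Real transformations would also consult tag values and other model
// configuration; this reduces the decision to hierarchy size only.
public class StrategyRules {
    public enum Strategy { ROLL_UP, ROLL_DOWN, SEPARATE_TABLES }

    // Use Roll Up for small hierarchies (three or fewer classes);
    // the Separate Tables fallback here is an assumption.
    public static Strategy chooseStrategy(int classesInHierarchy) {
        return classesInHierarchy <= 3 ? Strategy.ROLL_UP
                                       : Strategy.SEPARATE_TABLES;
    }
}
```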

In the micro view, none of these issues is insurmountable, and each case, separately examined, makes perfect sense, often corresponding to what designers and developers have been doing manually for years. However, when collected into a single transformation, one that needs to coordinate the structure and content of multiple downstream models, the entangled and interwoven logic often makes it difficult to express the transformation simply in a declarative style. In these situations, the transformation is best expressed and implemented algorithmically, which is the default usage style of the MDA Toolkit.

Lesson 7. Transformations can be expressed declaratively or imperatively. In general, the imperative approach is more adaptable when describing complex transformations.

While the MDA Toolkit naturally emphasizes an imperative style of thinking about an implementation, it also allows XDE patterns to invoke, or support, declarative routines. The MDA Toolkit lets transformation developers create reference models with pre-defined sets of model elements that can be selectively copied to a target model and altered during the copy process.

Figure 7 highlights a fragment of a reference model that contains a number of classes and relationships. Some of the classes have attributes and operations. The operations have complete code templates associated with them that are used to generate complete method bodies in the code. The model element names are invalid Java identifiers, as this set of classes is not expected to be used to generate code directly. Rather, this set of classes will be copied into a code model, and during the copy the element names will be updated with the actual names of classes that appear in the transformation's input model.

Figure 7: A pattern of model elements in a reference model

The overall process of the transformation is to process an input model (a Platform-Independent Model, or PIM) and look for classes tagged «managed class». This marking in the PIM indicates that the class should be transformed into a set of classes in the target Platform-Specific Model, or PSM. For each managed class in the PIM, the set of classes in Figure 7 is copied into the appropriate location in the PSM, and during the copy the names are modified with information from the originating PIM class.

Figure 8 shows the results of transforming one class in the PIM stereotyped «managed class» into the PSM. The result is the creation of four new classes, whose names are based on the originating PIM class. The transformation not only copies over the structure and class properties from the reference model fragment, but also augments the newly created classes with the PIM class' attributes and operations as appropriate. Getters and setters are also created in the PSM classes as appropriate.

Figure 8: Applying a reference-model-based pattern

This approach to transformation combines a declarative, visual style of pattern definition with the underlying control of code. Callback methods can be registered and are invoked during the copy process, giving the developer an opportunity to specify and refine the target names of all elements being copied. After the information in the reference model has been copied, the code completes the transformation by copying all public attributes and operations of the PIM classes and by creating getters and setters.
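The rename callback can be pictured as follows. The placeholder token %PIM% and the method name are hypothetical; the real toolkit operates on model elements, not plain strings:

```java
// Sketch of a rename callback: template element names in the reference
// model carry a placeholder token (here %PIM%, an invalid Java
// identifier, so the template cannot accidentally generate code) that
// is replaced with the PIM class name during the copy.
public class RenameCallback {
    private static final String TOKEN = "%PIM%";

    // Invoked for each element copied from the reference model.
    public static String targetName(String templateName, String pimClassName) {
        return templateName.replace(TOKEN, pimClassName);
    }
}
```

For example, copying a template class named %PIM%Manager for the PIM class Customer would yield a PSM class named CustomerManager.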

Create UML profiles

Lesson 8. UML profiles can be used to manage model markups as part of the MDA transformation.

UML profiles are used in two distinct ways. First, the traditional role of a UML profile is to extend the semantics of UML for a particular domain or method, enabling UML to model that domain effectively. IBM Rational's Business Modeling Profile and Data Modeling Profile are examples of UML profiles that enable IBM Rational XDE to model business processes and logical/physical database designs, respectively. This type of usage can be thought of as a semantic profile, necessary for the model itself to capture and express detail for a particular level of abstraction and domain.

Profiles are also a useful mechanism for managing model markings, a necessary element of the MDA transformation process. Marking is the MDA step in which additional information, outside the semantic scope of the model itself, is added to a model solely for later use by automation. Marking up a model is usually done just before a transformation is invoked. A model can, of course, be marked as it is developed; however, the developer creating the model typically does not have the role or skills needed to judge the significance and purpose of the markings themselves.

For example, suppose a simple abstract entity model (part of the traditional analysis model) is developed in UML by a set of analysts. This model defines a number of persistent entities in the proposed system, including their attributes and relationships to each other, and this information is transformed into a logical (or physical) database design. A problem occurs, however, because the information captured in the analysis model is insufficient to complete a working data model. For instance, a String attribute of an entity could be implemented as a CHAR, VARCHAR, or TEXT column, and column length is not information normally captured in an analysis model.
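To make the gap concrete, a transformation might consult marking tags like these. In this sketch tag lookup is reduced to plain parameters, and the type rules (the 255-character VARCHAR cutoff, the fallback) are illustrative assumptions:

```java
// Sketch of using marking-profile information (a length tag, reduced
// here to a parameter) to complete a column definition that the
// analysis model alone cannot determine.
public class ColumnMapper {
    public static String columnType(String umlType, int length) {
        if ("String".equals(umlType)) {
            // An illustrative rule: short strings become VARCHAR,
            // anything longer becomes TEXT.
            return length <= 255 ? "VARCHAR(" + length + ")" : "TEXT";
        }
        if ("Integer".equals(umlType)) return "INTEGER";
        return "VARCHAR(" + length + ")"; // assumed fallback
    }
}
```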

Transformations going from models at high levels of abstraction to models at lower levels of abstraction, and hence more detail, often suffer from this problem of insufficient or incomplete information. Even in the simple case of mapping a UML association to a Java field, it is unclear how associations with a multiplicity of "many" should be implemented. The semantics of sets, lists, and maps are beyond the semantic scope of most UML analysis models, but are critical within the scope of the implementing Java class.
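This decision, too, can be driven by a marking. The following sketch uses a plain string tag to choose the collection semantics; the tag names and default are assumptions:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashSet;

// Sketch: a "many" association has no single Java realization, so a
// marking (here a string tag, purely illustrative) selects the
// collection semantics for the generated field.
public class AssociationMapper {
    public static Collection<Object> newCollection(String collectionTag) {
        switch (collectionTag) {
            case "set":  return new LinkedHashSet<>(); // unique ends
            case "list": return new ArrayList<>();     // ordered, allows duplicates
            default:     return new ArrayList<>();     // an assumed default
        }
    }
}
```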

UML profiles come to the rescue here by providing a mechanism through which models and model elements can be augmented with information, albeit beyond the model's official semantic scope, to be used by transformations. A marking profile, distinct and separate from any applied semantic profiles, can be applied to a model; it defines stereotypes and tag values that contain information used only during the transformation. A data modeling profile might contain tags that describe a column's length or precision, and a Java profile might contain information on how an association is implemented in Java.

The MDA Toolkit functionality for creating UML profiles is itself an example of an MDA transformation. Profiles are created by first building a UML model and stereotyping the classes and attributes in it that will be used to generate the actual profile file. This stereotyping of the profile model's elements "marks" the model with the information the transformation needs to build the profile file, which is then registered with IBM Rational XDE and can subsequently be applied to any UML model opened by IBM Rational XDE.

Develop the transformation

Lesson 9. All transformations should be created such that they can participate in an iterative process, and hence be repeatedly applied without loss of unrelated information.

A critical design goal of any transformation is its ability to participate in an iterative development process. While one-time-only transformations may be easier to develop, our experience has been that they see very little use relative to the effort required to create them. Transformations that can be continually applied to evolving development lifecycle artifacts tend to be far more valuable and worth the effort to develop.

Developing MDA Toolkit transformations that participate in an iterative development process requires attention throughout design and implementation. The MDA API that provides the primary means of model access and manipulation is designed to promote this style of transformation. For example, when creating an attribute on a class, the createAttribute() method of the MdaClass object will by default first check whether an attribute of that name already exists; if it does, the method simply returns that instance rather than creating a duplicate. Interestingly, this is not the default behavior of most UML access APIs, because the UML specification itself does not require unique attribute names, so in theory a UML API should permit duplicates. However, most of our transformations target a real-life implementation language like Java or C++, where such duplicate-permitting behavior is not desired.
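The create-or-return behavior can be sketched with a plain map standing in for the MDA API's model store. All names here are illustrative, not the toolkit's actual API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of idempotent attribute creation: calling createAttribute()
// twice with the same name yields the same instance, so a
// transformation can be re-run without producing duplicates.
public class MdaClassSketch {
    private final Map<String, Attribute> attributes = new LinkedHashMap<>();

    public static class Attribute {
        public final String name;
        Attribute(String name) { this.name = name; }
    }

    // Returns the existing attribute if one with this (case-sensitive)
    // name is already in the container; otherwise creates it.
    public Attribute createAttribute(String name) {
        return attributes.computeIfAbsent(name, Attribute::new);
    }
}
```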

The MDA Toolkit API thus has an implementation bias. When it compares class names and attribute names, it looks for uniqueness within the container and uses case-sensitive comparisons. When comparing operations, it performs a case-sensitive name match along with a comparison of input argument data types, and it ignores return types. As a result, the MDA Toolkit API makes it significantly easier for a transformation developer targeting Java or C++ implementations to develop iteration-friendly transformations.
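The operation-comparison rule can be sketched as follows. This is a simplified illustration that compares strings; the real API compares model elements:

```java
import java.util.List;

// Sketch of the matching rule described above: two operations are the
// same when the (case-sensitive) name and the list of input argument
// types match; return types are deliberately ignored.
public class OperationMatcher {
    public static boolean sameOperation(String name1, List<String> argTypes1,
                                        String name2, List<String> argTypes2) {
        return name1.equals(name2) && argTypes1.equals(argTypes2);
    }
}
```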

The overall process of developing an MDA Toolkit transformation is little more than an exercise in developing Java code, and in particular developing an Eclipse Java Plugin. Fortunately, the toolkit wizard provides most of the code that Eclipse requires and lets the transformation developer focus on implementing the transformation logic.

Lesson 10. Good transformations implement discrete validation, transformation, and verification procedures.

When the toolkit wizard creates a project, it provides stubs or sample code for three key methods, as shown in Table 1.

Table 1: Three key methods to be used in a transformation project

	public boolean validate() throws Exception { ... }

	public boolean transform() throws Exception { ... }

	public boolean verify() throws Exception { ... }

The most important of these is the transform() method, which is invoked when the developer runs the transformation. It is in this method that the primary logic of the transformation is encoded. Any parameter values specified by the developer invoking the transformation are obtained with simple method calls and are typically translated into something more meaningful, such as MDA API references. The logic of the transformation is most often delegated to helper methods and other custom classes that perform the real work. During the transformation, it is often useful to write status messages to the visible log so the developer can monitor progress.

In the sample implementation shown in Table 2, analysis model and design model filenames are specified as parameters. Parameter values are accessed by calling a super class method. These are then converted to MdaModel element references. In this example, the analysis model is scanned for all classes that are stereotyped "PersistentObject" (in a custom profile), and that are not sub-classes of another class, since entire hierarchies are processed in a single function. Most of the work is coordinated by the processPersistenceObject() helper method, which in turn uses several delegate classes to perform the bulk of the transformation work.

Table 2: Sample implementation in which the analysis model is scanned for all classes that are stereotyped "PersistentObject"

The validate() method is called before the transform() method; if it returns false, the transformation is aborted before transform() runs. In the implementation sample shown in Table 3, the analysis model is checked to ensure that it exists and that it has the custom profile applied to it. The design model is checked to ensure that it exists and that it is an IBM Rational XDE code model, capable of round-trip engineering Java code.

Table 3: The analysis model is checked for existence and for the custom profile, and the design model is checked to ensure it is capable of round-trip engineering Java code

Depending on the desired level of verification, the verify() method can be as complex as the transform() method itself, and it will typically use helper methods and classes similar to the actual transform code. In the example shown in Table 4, the same code is used to identify elements in the analysis model that should have been transformed.

Table 4: The verify() method will typically use helper methods and classes similar to the actual transform code
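The three-phase flow can be sketched as a simple driver. Method names follow the stubs discussed above; the driver itself is illustrative, and the checked exceptions declared by the real stubs are omitted for brevity:

```java
// Sketch of the validate / transform / verify lifecycle: validation
// gates the transformation, and verification runs afterward.
public abstract class TransformationSketch {
    public abstract boolean validate();
    public abstract boolean transform();
    public abstract boolean verify();

    // Run the three phases in order, aborting if validation fails.
    public final boolean run() {
        if (!validate()) return false;   // abort before transforming
        if (!transform()) return false;
        return verify();
    }
}
```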

The development and testing of an MDA transformation proceeds like any Eclipse plugin project. A transformation developer can debug using the Eclipse runtime workbench, which essentially starts a new instance of the Eclipse and IBM Rational XDE shells in debug mode, enabling step-by-step tracing of the transformation code.

Deploy the transformation

Lesson 11. Use of the Eclipse plugin architecture greatly simplifies the task of installing and upgrading MDA transformations.

When a transformation has been developed, the next task is deploying it and educating a potentially large development team. Fortunately, the Eclipse Update Manager provides a convenient mechanism for installing plugins into an Eclipse shell. Using the built-in Eclipse Plugin Development Environment (PDE) functions, the transformation can be packaged and placed on an internal HTTP site from which developers can easily download and install MDA Toolkit transformations and other dependent plugins.

Once the transformation is installed on a developer's workstation, its menu item can be activated from any perspective. The menu item prompts the developer for the transformation's parameters. The developer can run all three functions (validate, transform, and verify) together or run them individually. The results appear in the updated models and artifacts and in the log window.

Lesson 12. Each MDA transformation must be well documented, providing samples, guidance, and support information.

Providing the transformation functionality to a developer is not sufficient. Whenever significant new functionality, such as an MDA transformation, is provided to a development team, the team should be educated in its usage. Since most valuable transformations perform relatively complex work, the decisions about when to use a transformation and exactly what to supply as parameters may be equally complex.

Updating the development process and providing online documentation to accompany the transformation should be considered essential to success. In the Eclipse environment, documentation is easily inserted into the rest of the online help, and is typically part of the transformation plugin itself or a dependent plugin that is installed with the transformation.

In Part III next month...
Building solutions using MDA approaches requires changes to the development process. While our experience has been that many of the current best practices for enterprise software development are still applicable, there are some important changes to those practices as a result of taking a more model-driven perspective to the development process. To explore this topic, next month we will conclude this three-part introduction to MDA with a look at the well-known Rational Unified Process and consider the way that process is interpreted and executed on an MDA project.

1 See J. Rumbaugh, G. Booch, I. Jacobson, The Unified Modeling Language Reference Manual, Second Edition, Addison-Wesley, 2004.

2 See P. Kruchten, The Rational Unified Process: An Introduction, Addison-Wesley, 1998, and P. Kroll and P. Kruchten, The Rational Unified Process Made Easy: A Practitioner's Guide to the RUP, Addison-Wesley, 2004.

3 According to Evans Data Corp., North American Development Survey: Volume 1, Response to question on "Use of UML in Application Design," Fall 2003.

4 For each of these products, see the following Websites: Codagen --; ArcStyler --; AndroMDA --; openMDX --

5 Further details on the MDA Toolkit for IBM Rational XDE, including access to the download, are available at

About the authors
Alan Brown is responsible for the technical strategy behind IBM Rational desktop products. He is also a key member of the leadership team responsible for aligning Rational tools with products from across IBM that compose the IBM Software Development platform. In addition, he has been responsible for guiding the vision and strategy for the company's model-driven development tools.

He earned the title of Distinguished Engineer for his contribution to IBM Rational desktop products, as well as for his broader contributions to the future of the software industry. For more than a decade, Alan has been an industry thought leader, directing the evolution of the developer experience through his books, papers, and numerous interactions with top IBM Rational clients. For more information about his work and ideas, visit his Web site at

Alan Brown received his PhD from the University of Newcastle-upon-Tyne in 1988.

Jim Conallen is a software engineer in the IBM Rational Development Accelerators group, where he is actively involved in the area of asset-based development and the Reusable Asset Specification (RAS). Jim is a frequent conference speaker and article writer. His areas of expertise include Web application development, where he developed the Web Application Extension for UML (WAE), an extension to the UML that allows developers to model Web-centric architectures with UML at appropriate levels of abstraction and detail. This work served as the basis for Rational Rose and XDE Web Modeling functionality.

Jim has authored two editions of the book Building Web Applications with UML, the first focusing on Microsoft's Active Server Pages and the latest on J2EE technologies. He can be reached via e-mail.
