Governing and managing enterprise models: Part 1. Introduction and concepts

This is Part 1 of a multipart article series about governance and management of enterprise models -- models that have value across an entire company, region, or division. This first part introduces the topic.


Nick Norris, Rational Solution Architect, IBM

Nick Norris is a business and technical professional who has worked in software engineering, architecture, technical sales, consulting, and marketing, the last seven years with IBM Rational software. He currently works as an IBM Rational Solution Architect.

Kim Letkeman, Development Lead, Modeling Compare Support, IBM Rational

Kim joined IBM in 2003 with 24 years in large financial and telecommunications systems development. He is the development lead for the Rational Model-Driven Development Platform. His responsibilities include UML and EMF compare support, integrations with ClearCase, CVS, Jazz and RAM, domain modeling, patterns, transform core technology, transform authoring for both model to text and model to model transformations, and test automation.

developerWorks Contributing author

13 January 2009



Governing and managing enterprise models: Series

Part 3 of this series comprises four white papers that can be downloaded from the series landing page.

This article outlines a recommended approach for governing and managing a set of evolving, under-development enterprise models. The models are evolving because they are being used and reused as the basis for doing model-driven development. This approach provides guidelines and procedures that are known to work with acquired exemplar models, such as the IBM® Industry Models (IFW Banking and IAA Insurance models, for example).

Although intended as a complete workflow for enterprise model management and governance, these procedures can be used in any situation where there are parallel streams of work containing models that are under active development or being used for active development, and where there is a desire to maintain the integrity of the models.

Related article

This article may be read as a companion to the IBM® Redpaper titled Building Service-Oriented Banking Solutions with the IBM Banking Industry Models and Rational SDP, which details high-value model reuse scenarios for IBM's banking industry models (IFW) and the IBM® Rational® Software Delivery Platform (SDP) for clients and partners.

This article defines specific functional roles to make responsibilities, intents, and purposes clear. For example, model managers and practitioners are referred to by role, even though the same people might perform both roles at different times. It is necessary, however, to ensure that these roles are performed independently. For example, the model manager role may be performed by anyone "wearing the model manager hat," but that person should be thinking only in terms of what is best for the models.

Most importantly, these procedures are meant as guidelines to be adapted to local governance and management cultures. These guidelines should not be considered cast in stone. Be aware, though, that moving away from these relatively formal procedures toward less formal procedures does increase risk to corporate assets such as the enterprise models.


The approach described in this series is designed to support an enterprise or portion of an enterprise that is using models to help describe and control its evolution. These procedures specifically address the need to maintain the integrity of the models in a particular context of use -- that is, controlling changes being made in parallel by multiple teams and practitioners to the model semantics, design points, and structure. These could be models that the enterprise has developed or a framework of models that it has purchased, such as the IBM Industry Models. The IBM Industry Models serve as a concrete example of a seed -- a starting point -- for an enterprise-wide set of models that are used and reused as the basis for doing model-driven development. The remainder of this section introduces those models.

The IBM industry enterprise models are a collection of interrelated models that address key aspects of the analysis and design of business, software (services), and data domains and applications or service-oriented solutions for industries such as banking (IFW) and insurance (IAA). Of particular interest for this article are the IFW and IAA Process and Integration Models, which are delivered as IBM® Rational® Software Architect and IBM® Rational® Software Modeler UML 2 models (for Unified Modeling Language 2.0). These can be used as the basis for creating reusable analysis and design descriptions of an enterprise's processes and supporting software services and data. (See Figure 1.)

Figure 1. IFW model architecture

It is important to note that the IBM Industry Models do not provide a predefined solution. Rather, they are platform-independent models that abstract the specifics of any single organization in any one problem domain. The models provide blueprints and standards upon which specific processes and underlying services and data may be constructed.

A primary goal for the IBM Industry Models, and for enterprise modeling in general, is the encouragement and enablement of reuse of consistent definitions, associated artifacts, and related implementations, such as software services. This partially answers the question "Why do enterprise modeling and its governance and management matter?" Enterprise modeling is a foundation for establishing an enterprise service-oriented architecture (SOA), model-driven development (MDD), and model-driven architecture (MDA).

Evolution, rather than revolution, of enterprise models is critically important to model management. This controlled evolution is practical because modeling tools such as Rational Software Architect can perform accurate and precise model compare-and-merge operations across different versions of these models. Model differences can be individually or collectively accepted or rejected when merging new model elements or features into the enterprise models.

This fundamental model management mechanism also allows model changes to be propagated downward to and harvested upward from projects that are responsible for addressing specific business goals and to effectively serve as agents of enterprise model change. Projects can thus work in parallel and remain isolated from each other’s daily changes without extensive coordination. This mechanism protects each project’s and individual’s productivity while, at the same time, ensuring that they all are continuously contributing to a controlled evolution of the enterprise models.

Although the IBM Industry Models are not referenced directly in the first two parts of this series, all information in these articles is directly relevant to governing and managing their use in an enterprise that wants to practice parallel model-driven development.

Fundamentals of model management

Model management is a general term that encompasses, or is related to, several key concerns, each with its own implied procedures and processes:

  • Governance: The context within which model management occurs, including the set of policies specifying, among other concerns, the who, what, when, and how of managing, measuring, and controlling the physical assets (models) and the changes that may (or may not) be made to them. Also implied is the need to maintain history for all changes for audit and management purposes.
  • Persistence: Storage of the enterprise assets (models) in a safe, accessible, and secure facility.
  • Versioning: Tracking and controlling all changes as successive generations, or versions, of the models.
  • Publishing: There are at least two types of publishing that are relevant:
    • Publishing models as reusable enterprise assets: Making the right version of the models available to others at the right time in the development cycle. A reusable asset repository in conjunction with a configuration management repository should support and, indeed, enforce this. An example of this would be publishing to IBM® Rational® Asset Manager a version of Rational Software Architect models that are under IBM® Rational® ClearCase® configuration management control.
    • Web publishing: Making the models available for read-only viewing as a set of Web pages.
  • Comparing and merging: Comparing, examining, and possibly merging the differences between two versions of the same model or its constituent parts, or both; accepting or rejecting changes between versions individually or in groups. It is critically important to note that the common ancestor of the two versions of the model being compared also must be involved in the comparison to provide sufficient context to resolve conflicts. Reasons for this are described in Comparing and merging UML models in IBM Rational Software Architect: Part 2. Merging models using "compare with each other" by Kim Letkeman.
  • Reporting: Making the model available in an alternative form that can be perused by using a generally available browser interface. This can also encompass reporting of model issues, such as UML 2 violations (also called model validation) and reporting of security violations, where an exception to the security procedures is found. For example, a situation might arise where a model manager harvests changes into the enterprise models without approval, and the reporting mechanism catches this exception to policy. This type of reporting fits within the more general model governance area.

Although this article addresses all of these concerns to some extent, it focuses primarily on model governance and version control, because these define the procedures needed to successfully manage the use of the IBM industry or, more generally, enterprise models in teams of all sizes.

Introduction to team development

This section answers, at least in part, this question: "Why is all of this model management and governance necessary?"

All nontrivial software is developed and delivered by teams of modelers, analysts, and developers working in parallel across a project lifecycle, from inception through transition and delivery. The need for teams arises from the simple fact that software development effort and complexity scale nonlinearly. It is obvious that applications that can be developed by one or two people (for example, the kind you might write in school) need little management or coordination, because there is no significant complexity with which to be concerned.

However, even doubling the number of components more than doubles the complexity, because each component’s inherent complexity interacts with others to spiral the overall complexity ever upward. Couple this with accidental complexity -- the complexity that arises from practitioners’ choices rather than anything inherent in the problem -- and it is no wonder that the search for a "magic bullet" continues.

Short of finding this magic bullet, the best available and vetted solution to the complexity problem is to model the business context and software services before implementation to maximize business relevance, to squeeze out excessive complexity, and to maximize reuse as much as possible. This is where the IBM Industry Models are an exceptionally strong exemplar, because they model generalized, consolidated, inter-related best practices harvested from work spanning two decades with industry-leading organizations.

IBM Industry Models

When used with the IBM Industry Models, the IBM Rational Software Delivery Platform (including Rational Software Architect, Rational Software Modeler, and the change and configuration management tools, ClearCase, IBM® Rational® ClearQuest and IBM® Rational® Team Concert) can help organizations, teams, and practitioners master this inherent complexity and avoid accidental complexity. The procedures defined in this article provide a stable and scalable means for evolving and using the IBM Industry Models or, indeed, any enterprise-class set of models.

Parallel development

Team members must have work areas with some level of isolation from each other to achieve and maintain high levels of productivity. Too much interaction and too much coordination between team members can easily bring progress to a virtual halt.

However, as shown in Figure 2, there must be an appropriate level of change coordination so that parallel updates are detected and content is successfully merged to avoid loss of one set of changes or corruption of the final artifact.

Figure 2. Unmanaged parallel development, data loss risk

Common methods for detection and coordination of parallel development are discussed in the following sections.


Luck

When tooling does not explicitly support team or parallel development, or when procedures are essentially ad hoc, practitioners are expected to coordinate all changes outside of the context of the tools and repositories in use. As mentioned earlier, this is easily done when there are only two people involved, with only one communication path to manage. However, as the team grows, the communication paths grow nonlinearly. By the time there are five team members, the number of communication paths has grown rather dramatically to 10. With just two more team members, the number of communication paths between team members has more than doubled, to 21!
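The growth in communication paths is simply the number of distinct pairs of team members, C(n, 2) = n(n-1)/2. A quick, purely illustrative Python sketch confirms the figures above:

```python
from math import comb

def communication_paths(team_size):
    """Pairwise communication paths in a team: C(n, 2) = n * (n - 1) / 2."""
    return comb(team_size, 2)

for n in (2, 5, 7, 10):
    print(f"{n} team members -> {communication_paths(n)} paths")
# 2 -> 1, 5 -> 10, 7 -> 21, 10 -> 45
```

By ten team members the count is already 45, which is why purely informal coordination stops scaling so quickly.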

Key conclusion:
"Luck" will not hold as the team size scales up. More formal processes become necessary as teams grow beyond a very small size.

Manual process

Formal manual procedures and methods can be created and strictly followed to prevent accidental damage to the integrity of the models. This will typically involve tightly controlled access for creation of editable versions of the model, with strong management roles involved at every stage of model use and development. An example of this is to send a copy of the models to all modelers on Monday morning and then to collect and merge all of the work back together on Friday afternoon during the regularly scheduled model management sessions.

Self-governance is not allowable under these rules, and the models are generally very safe. Productivity takes a hit because of the intervention of management at every point in the workflow, but this is acceptable when one considers the risk inherent in the alternative "luck" method. Notice that the level of effort and process required by the manual process is much higher than for the other methods. Note also that "big bang" merges tend to be large and risky, because so much change has accumulated that overlapping changes are a virtual certainty. Parallel development benefits from frequent harvesting of individual practitioner changes into the master project models for two reasons:

  • Practitioners are working on more current versions of the models, which can help reduce duplication and rework.
  • Conflicts happen less frequently and, when they do occur, the changes needing to be merged are smaller and less risky.

Key conclusion:
Manual procedures do not support frequent merging without a dramatic increase in risk and level of effort.


Automated process

Tooling that manages the parallel development and control of assets with little or no intervention from the practitioners or management provides the highest degree of governance and security, while still maintaining the desired level of productivity for the team. Because the tooling performs all of the necessary parallel change detection and coordinates the merging of parallel versions with tracking of the results, there is no need for extra processes at each step. Risk to the artifacts essentially drops to zero, because previous versions are always recoverable in the event of a serious process error.

Practitioners can choose when to propagate others' work into their workspaces and when to harvest their changes back up to the project models. This freedom promotes high productivity, which is known to increase quality (as documented in studies by Tom DeMarco and Tim Lister).

Key conclusion:
Automated team development solutions enable the highest level of productivity, while still ensuring controlled evolution of the models. Practitioners are able to concentrate on coordination at the conceptual and domain modeling levels and use the tooling to do the heavy lifting for management of model artifacts and coordination of changes.

Definitions and glossary

Before proceeding any further, it is necessary to define a few terms for use throughout the remainder of this article. Without these terms, explanations will become cumbersome.

  • Model layers: Figure 3 shows the typical layering of models that have been externally acquired and propagated to the various layers within the enterprise. Note that these layers capture models that are under development. These relationships exist at specific points in time and can change (for example, projects are started and eventually completed).
Figure 3. Enterprise models-under-development hierarchy
Exemplar > enterprise > project models
  • Exemplar models: This layer can contain any set of externally acquired exemplar models. An example is the IBM Industry Models for banking (IFW) or insurance (IAA). "Externally acquired" can mean purchased from a third party or developed outside of a team or a division. The key point is that these models can be reacquired in a new version sometime in the future. This layer is generally identical to the incoming models, without any changes. Periodic updates typically contain only those changes that provide high value to clients and customers of the derivative models.
  • Enterprise models: This layer contains the model versions that are considered the master set of models within the enterprise. They were seeded from externally acquired exemplar models; therefore, they are considered to be and modeled as descendants of those models. Note that it is possible for the enterprise models to exist in multiple layers. This might happen when it is desirable to centralize the enterprise models but necessary to maintain permanent variance in one or more divisions, business units or regions. This should be viewed as a form of inheritance at the artifact level.
  • Project models: This layer contains models that descend from the enterprise models and contain transient differences. They exist to provide a platform for the evolution of the models as required by a specific set of business requirements or goals that the project is charged with fulfilling. All such changes are intended to become a part of the enterprise models or to be removed from the project models at some point. Otherwise, analysis and design choices that are optimal only for a specific project would marginalize the reusability of the models across the enterprise. This would lead to a proliferation of project-specific models and derivative artifacts that, in turn, undermines a fundamental value proposition of enterprise models and assets, especially in SOA. The interplay between enterprise models and project models drives many of the procedures documented in this article.
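The layering above can be pictured as a simple parent-child tree, where each layer is seeded from its parent. The following sketch is purely illustrative; the class and layer names are invented, not part of any IBM tooling:

```python
class ModelLayer:
    """Illustrative layer node: each layer is seeded from its parent layer."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []

    def seed_child(self, name):
        """Seed a descendant layer from this layer's models."""
        child = ModelLayer(name, parent=self)
        self.children.append(child)
        return child

    def ancestry(self):
        """Layer names from the root exemplar down to this layer."""
        layer, path = self, []
        while layer is not None:
            path.append(layer.name)
            layer = layer.parent
        return path[::-1]

exemplar = ModelLayer("exemplar models")
enterprise = exemplar.seed_child("enterprise models")
project_a = enterprise.seed_child("project A models")

print(" > ".join(project_a.ancestry()))
# exemplar models > enterprise models > project A models
```

Each project layer can trace its lineage back to the exemplar seed, which mirrors the Figure 3 hierarchy.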

General glossary

  • Stream: A logical group of model files. A stream exists for each unique set of IBM models, enterprise models, and project models. A child stream descends from a parent stream, and any stream above the current stream in the hierarchy is considered an ancestor stream. The major streams are called integration streams, indicating that they are integration points for inputs from above or below. Minor streams exist for practitioners and are called development streams, indicating that their purpose is day-to-day development and modeling.
  • Repository: A single database that contains one or more streams of models.
  • Harvest: The act of capturing changes from a descendant model. Two excellent examples are:
    • Capturing a practitioner's changes in the project's integration stream
    • Capturing a project's changes in the enterprise integration stream

Whether this is performed by the descendant stream managers (pushing changes upward) or by the ancestor stream managers (pulling changes upward) is a policy decision.

  • Propagate: The act of moving changes into descendant streams. These are three excellent examples:
    • Propagating new features from an update of the acquired exemplar models
    • Propagating harvested changes from one project to all other projects
    • Propagating other practitioner changes to all project practitioners

Again, it is a policy decision as to whether this is performed by the ancestor stream managers (pushing changes downward) or by the descendant stream managers (pulling changes downward). It is also a policy decision as to when each project must conform to the enterprise models and for how long a project may remain at variance to the enterprise. The same policy decision would be made for the project-to-practitioner relationship.


Model management roles

Each model artifact must be managed as a critical resource or corporate asset to ensure its integrity. However, given that project business goals may at times conflict or compete with enterprise goals, it is necessary to clearly define model management roles and to separate them from one another.

A specific role must be assumed whenever decisions are required that affect a specific model. For example, when considering whether to adopt a project's proposed changes into the enterprise models, it is necessary to assume the role of the enterprise model manager to properly assess the impact and value of the proposed changes with respect to the enterprise's goals, needs, and concerns, as opposed to an individual project's objectives, needs, and concerns.

The following roles are referenced throughout the remainder of this article:

  • Practitioner: An individual modeler, business analyst, or developer within a project. This is a precise way of referring to the role that performs day-to-day work on development artifacts such as models or source code. This role creates changes that may eventually be harvested into the project models and then into the enterprise-level models. These changes can be further propagated to all other project models if desired by the model managers. The term practitioner is used where modeler could be construed as a reference to the modeling application itself.
  • Model manager: A role that is responsible for a specific set of models, usually residing in a conceptual stream. This role verifies the value and quality of incoming changes from above or below and ensures adherence to local policies by practitioners.
  • Enterprise model manager: This role is responsible for the evolution and management of the enterprise models, including the acceptance and rejection of project-specific changes and of changes brought into the enterprise by updates to the acquired exemplar models.
  • Project model manager: The project model manager is responsible for the evolution and management of one project-specific set of models, according to the business goals that spawned the project in the first place. This role works with the enterprise model manager and potentially other project model managers to establish or coordinate the timing of changes propagating from the enterprise models according to local policy. This role would do well to ensure that project changes are usable at the enterprise level to keep projects running smoothly. To allow changes that are known to be unacceptable at the enterprise level is to condone wasted work, because the project will eventually be forced to roll back those changes.
  • Repository administrator: This role is responsible for the physical management of the repository and can be combined with any other role.

Each model management role is quite distinct from the others. It is important to note that a single person can perform more than one role and that more than one person can fulfill a single role. For example, a domain architect could work individually to control the integrity of her domain (for example, acting as an enterprise model manager responsible for the "product" domain). But she might also work with a group of people (a set of domain architects, each an expert in and responsible for the integrity of a specific subset or domain of the entire set of enterprise models) who act collectively as "enterprise model managers" to ensure consistent evolution and integrity of all the domains captured in the enterprise models.

Introduction to model differences

This section answers the question "How are models compared and merged?"

Generally speaking, a model is composed of semantic and notational information. Semantic information defines the meaning of any given portion of the model. For example, a class describes a physical object in the real world. So does an actor, although an actor is presumed to be a human or a machine performing a specific role in a process. After they are documented, these semantic elements are potentially useful to any model in the same domain.

Notation, on the other hand, describes how a class or an actor is represented on a diagram. Specifically, it can describe:

  • Style information such as font, font color, line color, and so on
  • Location information, such as the x and y coordinates on a diagram
  • Relationships or connections between elements using edges
  • Comments, text, geometric shapes and other diagram-local annotations

A semantic item may appear on any number of diagrams, so notation rarely has any concrete meaning to an application. Nonetheless, its appropriate use can significantly enhance the clarity of relationships between semantic elements.

A difference is generated for each change that has been made to an element within one or both of the two versions of the model being compared. Minimizing the quantity of generated differences is important, because it reduces the number of differences that the model management function must assess to ensure model integrity. For example, gratuitously moving elements around on diagrams (just to tweak the look) is generally a bad idea, because every movement of an element, no matter how small, translates into a difference every time that the models are propagated to another level.

Another term that is often used in this context is delta, which describes a difference visible in a model compare-merge editor.


Now you might wonder: How does the model comparison subsystem in an automated model change management tool match objects to each other so that we don't see a huge number of false differences? This is accomplished through the use of identity. Each element, whether it is semantic or notational, is assigned an identifier (ID) when first created. This ID is immutable -- that is, it can never change.

Thus, for example, adding attributes or operations to a class will always be shown as "add deltas" to the correct class, because the class is known by the same identifier in both generations of the model. However, if a class is deleted and then added back, or restored, what looks like a non-change to the practitioner actually creates a new version of the class that will be rendered in a comparison as a "delete" difference for the old class and an "add" difference for the new class. In other words, these are not the same element even though they look the same and have identical characteristics.
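To see why a delete-and-recreate produces two deltas, consider this minimal sketch. It assumes models are flat {id: element} maps (a drastic simplification of the actual Rational Software Architect storage format) and matches elements strictly by immutable ID:

```python
import uuid

def diff(old, new):
    """Compare two {id: element} maps and return (kind, element_id) deltas."""
    deltas = []
    for eid in old.keys() - new.keys():
        deltas.append(("delete", eid))
    for eid in new.keys() - old.keys():
        deltas.append(("add", eid))
    for eid in old.keys() & new.keys():
        if old[eid] != new[eid]:
            deltas.append(("change", eid))
    return deltas

# Version 1 holds a class known by an immutable, generated ID.
v1 = {str(uuid.uuid4()): {"kind": "class", "name": "Account"}}

# The class is deleted and recreated with identical content: it gets a NEW ID.
v2 = {str(uuid.uuid4()): {"kind": "class", "name": "Account"}}

print(sorted(kind for kind, _ in diff(v1, v2)))
# ['add', 'delete'] -- two deltas, even though the elements look identical
```

Because the IDs differ, the comparison cannot know the two classes are "the same," so it reports a delete and an add rather than no change at all.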

Two solutions for the delete-add problem

  • Reinstate the old version of the class with a merge session between the older and the latest versions of the model. This will show a delete delta for the old class, an add delta for the new class, plus whatever legitimate differences already existed between the two. The deletion of the old class and the addition of the new class can both be rejected and all other changes accepted, effectively reinstating the missing class.
  • Repair the model after the fact. If there are many such accidental changes, the model alignment tooling in IBM Rational software Version 7 enables like classes to be realigned. Model alignment can see that these classes have the same name and will reapply the previous identity, effectively restoring ancestry.

Comparisons that go across the point in time of the deletion and the addition will always show the delete and add differences, and these will mask subsequent changes to their content, because changes to contained data are relevant only for elements with the same identity. This issue reinforces, again, the point that it is important for model evolution to take place in an orderly fashion. Deleting an object accidentally and then replacing it with a similar object will lead to unacceptable, superfluous differences from generation to generation of the enterprise models.


Model security

Security for the enterprise models comes in two flavors: monitored and enforced.

With monitored security, the models all reside in the same repository, and a model manager uses reports and queries to determine after the fact whether an access violation has occurred. For example, some software configuration management repositories track every change by a specific modeler; thus, it is easy to review violations in detail and accept or reject (reverse) changes from that activity.

Furthermore, security is enhanced by the definition and application of policies for governance of the models. Such policies include how long a project may remain at variance, how often a project must be reconciled with the enterprise models, who decides what can be harvested into the enterprise models, and who performs the actual harvest and propagate operations at the project or practitioner levels.

Enforced security, on the other hand, prevents unauthorized people from making changes by separating enterprise models from project models and by separating project models from each other. It provides separate authorization privileges for each. Thus, management retains control over who can access each specific set of models, which prevents potential security violations. The cost of this benefit is much higher process overhead, which almost negates the value of the integrated and automated processes described here.

This article focuses only on monitored security procedures.

Model ancestry and types of model change contributors

Model ancestry is used for comparison purposes when parallel development requires merging two sets of changes into a single final version of the model. Rather than compare the two changed models to each other, the tooling compares each changed model to their common ancestor. It then compares the two delta sets to create a list of conflicts that must be resolved by accepting one or the other incoming change.

The use of a common model ancestor in model comparisons and merges is an extremely important (and rather subtle) process to understand. It is necessary because the goal is for models to evolve in a team environment, with one version of a model becoming the parent, or ancestor, of multiple new versions or generations. Two or more modelers must periodically synchronize their working environments with the repository, preferably at the beginning of each working cycle (daily, weekly -- the period is typically established by policy). Later, when the second and any subsequent set of changes is delivered back into the repository, parallel changes are detected automatically and merge sessions are launched.

The repository tooling should coordinate with Rational Software Architect to load the three contributing model versions and provide a clear list of changes for each. The fourth item, the merged model, is the result of the session:

  • Ancestor: Also called the common ancestor, and sometimes the base model. This is the nearest ancestor from which the lineage of both parallel changed models can be traced.
  • Remote: The last changed model that was successfully delivered into the repository. The remote has already had all previous parallel versions from the common ancestor merged into it, so the only requirement is to merge the latest remote version's change set with the local version's change set to create the newest version of the model.
  • Local: The version of the model in the workspace, currently being delivered.
  • Merged: The merged model is equivalent to the common ancestor plus accepted changes from the remote model and accepted changes from the local model. It becomes the next version of the model in the repository, and it is a descendant of all three contributors (ancestor, remote, and local).
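The mechanics of combining two delta sets against a common ancestor can be sketched in a few lines. This is a simplified illustration, assuming each model version is a flat mapping of element IDs to property values; real tooling such as the Rational Software Architect compare support works on structured deltas, but the principle is the same.

```python
def deltas(ancestor, changed):
    """Elements whose value differs between the ancestor and a changed model."""
    keys = set(ancestor) | set(changed)
    return {k: changed.get(k) for k in keys if ancestor.get(k) != changed.get(k)}

def three_way_merge(ancestor, remote, local):
    """Merge two change sets against a common ancestor.

    Returns (merged, conflicts); conflicts lists elements changed on both
    sides to different values, which must be resolved by a person.
    """
    remote_delta = deltas(ancestor, remote)
    local_delta = deltas(ancestor, local)
    merged = dict(ancestor)
    conflicts = []
    for key in set(remote_delta) | set(local_delta):
        if key in remote_delta and key in local_delta:
            if remote_delta[key] == local_delta[key]:
                merged[key] = remote_delta[key]   # identical change on both sides
            else:
                conflicts.append(key)             # needs a human decision
        elif key in remote_delta:
            merged[key] = remote_delta[key]
        else:
            merged[key] = local_delta[key]
    return merged, conflicts

# Illustrative model elements (not from any real industry model)
ancestor = {"Account.balance": "int", "Customer.name": "string"}
remote   = {"Account.balance": "long", "Customer.name": "string"}
local    = {"Account.balance": "decimal", "Customer.name": "string",
            "Customer.email": "string"}
merged, conflicts = three_way_merge(ancestor, remote, local)
print(conflicts)   # Account.balance was changed differently on both sides
```

Note that the non-conflicting local addition (`Customer.email`) flows into the merged result automatically; only the doubly-changed element requires a resolution.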

The parallel development pattern

An appropriately functioning repository for enterprise model management has automated parallel change detection and related tooling that is able to automatically launch merge sessions with the appropriate contributors. This enables a parallel development pattern that is repeated throughout the rest of these procedures.

The two key operations in this pattern are propagation and harvesting.

  • Changes propagate downward in the hierarchy: from acquired exemplar models to the enterprise, from the enterprise to projects, and from projects to practitioners.
  • Changes are harvested upward in the hierarchy: from practitioners to projects, and from projects to the enterprise.
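The two operations can be sketched as movements between parent and child streams. This is a hypothetical illustration, assuming a stream is just a named container of model versions; the `Stream` class and its methods are illustrative, not part of any real repository API.

```python
class Stream:
    """A layer in the model hierarchy (exemplar, enterprise, project, practitioner)."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent   # next level up in the hierarchy
        self.models = {}

    def propagate(self):
        """Pull the parent's models down into this stream."""
        if self.parent:
            self.models.update(self.parent.models)

    def harvest(self, model_name):
        """Push one of this stream's models up into the parent."""
        if self.parent:
            self.parent.models[model_name] = self.models[model_name]

enterprise = Stream("enterprise")
project = Stream("projectA", parent=enterprise)
practitioner = Stream("alice", parent=project)

enterprise.models["Party"] = "v1"
project.propagate()                  # enterprise -> project
practitioner.propagate()             # project -> practitioner
practitioner.models["Party"] = "v2"  # local change by the practitioner
practitioner.harvest("Party")        # practitioner -> project
project.harvest("Party")             # project -> enterprise
print(enterprise.models["Party"])
```

A real repository would of course run the three-way merge described earlier at each harvest rather than overwrite; the sketch only shows the direction of movement.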

Figure 4 illustrates how a single project with two practitioners would look during any one change cycle (timing to be established by local policy).

Figure 4. Parallel development pattern

"Push" versus "pull" harvesting and propagation

It's all a matter of perspective and policy.

As defined previously in the glossary, harvesting is an upward movement of changes; that is, changes made at lower levels in the model layer hierarchy are moved and captured within a higher-level stream. Propagation is a downward movement of changes; that is, changes are disseminated from a higher-level stream to lower-level, descendant streams.

In the typical execution of the parallel development pattern for practitioners, the owner of a practitioner development stream performs all of these operations. Within a project, allowing practitioners to push changes up to the project model integration stream and to pull others' changes down from that stream to their own development streams provides a high level of independence and control, which maintains high productivity.

However, local policy may contradict this execution style for the pattern. An example of this is a policy that only the project model manager may perform harvest and propagate operations. The overhead of such a policy is quite high, but so is the control that it offers.

The same choice must be made when defining policies to control the application of the pattern to the enterprise-to-project model boundary. The procedures at this level would probably benefit from the higher level of control, because the end result is going to be changes to the enterprise models.

In most cases, the practitioner will likely control the process at the project-to-practitioner interface, whereas the enterprise model manager will control the process at the enterprise-to-project interface. This comes down to local tolerance for centralizing the process and accepting the reduction in productivity that centralization inherently brings.

The key point here is that policies must be set and documented.

Maintaining model ancestry

A critical point is that every artifact in the enterprise and project streams must have an ancestor. Also, every pair of artifacts must have a common ancestor. It must always be possible to perform a three-way merge during harvesting to combine two sets of changes into a single change set.

Recalling Figure 3, exemplar models, enterprise models, and project models are related as follows, with ancestors flowing downward to descendants:

Exemplar > enterprise > project models

Each subsequent harvest from parallel versions must result in a three-way compare/merge using the common ancestor and the latest delivered version along with the local version of the models (See Figure 5).

Figure 5. Three-way compare/merge across the Enterprise model and descendant Project model streams

For these assertions to hold, model ancestry must be maintained throughout the repository and under all update scenarios. This ancestry must never be broken, because that would cause the model management procedures to fail.

More specifically, each model artifact has such relationships in the repository. When reading this article, it is a good idea to think in terms of a single model file or artifact when contemplating propagation and harvesting, because any high-quality repository will handle all of the necessary coordination of multiple artifacts during harvest and propagation operations.

An enterprise-level version of an artifact will have its ancestor either within the enterprise layer (a previous enterprise version of the artifact) or in the layer above, as an acquired exemplar version of the artifact. The same relationship continues all the way down to the practitioner level, which is not shown in Figure 3.
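The ancestry invariant described above can be checked mechanically: every version must trace back to a root (an acquired exemplar version), and any pair of versions must share a common ancestor for a three-way merge to be possible. The following sketch assumes each artifact version simply records its predecessor; version names are illustrative.

```python
def lineage(version, parent_of):
    """Chain of ancestors from a version back to its root."""
    chain = [version]
    while version in parent_of:
        version = parent_of[version]
        chain.append(version)
    return chain

def common_ancestor(a, b, parent_of):
    """Nearest version that appears in both lineages, or None if ancestry is broken."""
    ancestors_of_a = set(lineage(a, parent_of))
    for v in lineage(b, parent_of):
        if v in ancestors_of_a:
            return v
    return None

# exemplar -> enterprise v1 -> two parallel project versions
parent_of = {
    "enterprise-v1": "exemplar-v1",
    "projectA-v1": "enterprise-v1",
    "projectB-v1": "enterprise-v1",
}
print(common_ancestor("projectA-v1", "projectB-v1", parent_of))
```

If `common_ancestor` ever returns `None` for two versions in the repository, the ancestry chain has been broken and the merge procedures can no longer be trusted.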


Each of the practices and procedures described here must be governed to provide a clear picture of the state of the enterprise models at any point in time. For this purpose, you can think of model governance as the context and policies related to the application of the model management practices and procedures.


Model governance policies need to be applied at model layer boundaries, for example, between the enterprise and project model streams.

While examining the detailed scenarios, it is worth noting that they are controlled or, more accurately, tuned for the enterprise by defining local policies that address key aspects of model management concerns. Examples of key policy issues follow.

Push versus pull harvesting and propagation

This policy addresses the concerns about who has the authority to move model changes back and forth across model layer boundaries.

At various points in the evolution of the enterprise models, changes to the exemplar, enterprise, or project models will necessitate a propagation or harvest operation. The need for such an operation is brought about by the recognition that there is something valuable to move into another stream.

A good example of this is the need to harvest a useful new feature from a project's models. In a push model, the project model manager pushes the changes up to the enterprise level. In a pull model, the enterprise model manager pulls the changes from the project stream into the enterprise stream.

Notice that the repository operation is identical in each case; it is just the individual role performing the operation that changes. In fact, the role that chooses or decides what to accept is more important than the role that performs the physical action. These are examples of these policies:

  • Which role will approve changes for the enterprise models?
    • Enterprise model manager, committee of model managers
  • Which role will harvest to the enterprise models?
    • Enterprise model manager, project model manager
  • Which role will propagate to the project models?
    • Enterprise model manager, project model manager

The same policy questions apply at the project-to-practitioner stream interface.
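Once answered, these policy questions amount to a simple lookup that can be checked before any cross-boundary operation is performed. The encoding below is hypothetical; the role names, operation names, and the particular assignments are illustrative answers, not prescriptions from any real tool.

```python
# Assumed policy answers: which roles may perform which operation
# against which target layer. All names here are illustrative.
POLICY = {
    ("harvest", "enterprise"): {"enterprise model manager"},
    ("propagate", "project"): {"enterprise model manager", "project model manager"},
    ("harvest", "project"): {"project model manager", "practitioner"},
    ("propagate", "practitioner"): {"practitioner"},
}

def authorized(role, operation, target_layer):
    """True if the documented policy allows this role to perform the operation."""
    return role in POLICY.get((operation, target_layer), set())

print(authorized("enterprise model manager", "harvest", "enterprise"))  # allowed
print(authorized("practitioner", "harvest", "enterprise"))              # violation
```

Recording a table like this (rather than leaving it implicit) also makes access violations queryable after the fact, as discussed later under metrics and reporting.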

Project model variance from enterprise model

This policy addresses the concerns about what in a descendant stream may be different, and how long it may remain different from its ancestor.

By definition, project models are always somewhat at variance with the enterprise models. Periodically, important new features will be harvested into the enterprise models as the result of project work. This is, in fact, the primary mechanism for model evolution at the enterprise level.

The enterprise must establish policies that answer these questions:

  • Can a project remain at variance after a harvest or propagation?
  • For how long can a project remain at variance with the enterprise before being forced into conformance?
    • A week, a month, no limit?
  • How often will project variance be reviewed?
    • Weekly, biweekly, monthly, never? ("never" is a bad idea, because the enterprise model manager should always know the state of the projects)
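A variance review under such a policy is a straightforward report. The sketch below assumes a 30-day limit and records only the date each project last conformed; both the limit and the project names are illustrative policy values, not recommendations.

```python
from datetime import date

VARIANCE_LIMIT_DAYS = 30   # assumed policy: a month at most

def overdue_projects(variance_since, today):
    """Projects whose variance from the enterprise models exceeds the limit."""
    return sorted(
        name for name, since in variance_since.items()
        if (today - since).days > VARIANCE_LIMIT_DAYS
    )

# Date each project last conformed to the enterprise models (illustrative)
variance_since = {
    "payments": date(2009, 1, 2),
    "lending":  date(2008, 11, 15),
}
print(overdue_projects(variance_since, date(2009, 1, 13)))
```

Running such a report on the review schedule set by policy gives the enterprise model manager the always-current view of project state that the questions above call for.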

Life cycle of project and practitioner models

This policy addresses concerns about when (timing) and how often (frequency) these model management practices and procedures are applied.

Projects are most often created to fulfill a business objective. A team is formed to explore options until a solution is found. Policies are needed to govern and answer these questions:

  • When and how often are changes harvested?
    • For example, weekly on Fridays at noon, all of a project's practitioner changes to a model are harvested, and changes are compared and validated to create a new version of the project model.
    • Or is it done biweekly, monthly, or ad hoc? Empirical evidence shows weekly or biweekly to be optimal in some organizations.
  • When and how often are enterprise model changes propagated to all projects?
    • For example, weekly at the start of business on Monday, the new version of the project model is propagated to each of the project's practitioners.
  • Do projects survive beyond the solution of the specific business goal for which they were created? That is, does the project continue to be assigned new business goals, or does it end when it has fulfilled its original charter, with its models and generated artifacts living on in the enterprise stream? The issue here is management and administration overhead.
    • A project is typically dissolved upon completion of its goals.
  • Are harvest and propagation cycles synchronized between projects? Or can each project establish its own schedule for synchronization?
    • Synchronized means that all projects must propagate on the same intervals; unsynchronized means that projects choose their own timing.


It must be clear, for example, when a project has "checked in" its models and they are ready for harvesting into the enterprise stream. Metrics, reports, and queries can be used to assess the state of the models, as well as to track access violations.

The use of a state-aware change management application greatly improves the quality of the process for the creation and management of projects and the movement of features between enterprise and project streams.

Proposed work or changes can be created and managed by change management applications, such as Rational ClearQuest, using a standard or custom schema for enhancement requests and problem records. These records proceed through various state transitions (for example: submitted > assigned > resolved > closed) until the issue is resolved or the goal has been achieved.

A key feature of applications such as ClearQuest and ClearCase is their strong query capability. It is possible, for example, to query the database to determine:

  • All changes that went into the models between any two dates or in a specific release
  • Who created the changes that were in the models and when they were approved by the model managers
  • Whether an access violation occurred (the wrong person performed a harvest operation, for example)
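The state transitions and queries described above can be sketched together. This is a minimal illustration using the transition chain named earlier (submitted > assigned > resolved > closed, with reopening for rework); the record class and its fields are hypothetical, not the actual ClearQuest schema.

```python
from datetime import date

# Allowed state transitions for a change record (illustrative)
ALLOWED = {
    "submitted": {"assigned"},
    "assigned": {"resolved"},
    "resolved": {"closed", "assigned"},   # reopen sends it back for rework
    "closed": set(),
}

class ChangeRecord:
    def __init__(self, record_id, submitted_on):
        self.record_id = record_id
        self.state = "submitted"
        self.history = [("submitted", submitted_on)]

    def transition(self, new_state, on):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state
        self.history.append((new_state, on))

def closed_between(records, start, end):
    """Records closed in the given date range -- a typical audit query."""
    return [r.record_id for r in records
            if any(s == "closed" and start <= d <= end for s, d in r.history)]

rfe = ChangeRecord("RFE-101", date(2008, 12, 1))
rfe.transition("assigned", date(2008, 12, 2))
rfe.transition("resolved", date(2008, 12, 20))
rfe.transition("closed", date(2009, 1, 5))
print(closed_between([rfe], date(2009, 1, 1), date(2009, 1, 31)))
```

Because every transition is dated and retained in the history, the release-scoped and audit queries listed above fall out naturally from the record structure.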


When a business goal or need is identified, a requirement record can be created to capture that goal. Business goals, objectives, and needs (business "drivers" in the current lingo) motivate work to determine the best way to ensure their fulfillment. This work often includes commissioning a project to analyze and, most likely, customize the enterprise models. At this point, the model management procedures described in this series of articles take effect.

Requirements can be tracked as requests for enhancement (RFEs) or release requirements (RRs) in a state-aware change management software application such as ClearQuest. Requirements can also be much more than a state-aware record with attributes. Requirements can be described by words or images or both. Tools such as IBM® Rational® RequisitePro® can capture this sort of additional requirements information and can be integrated with ClearQuest for cohesive requirements management.


After a preliminary analysis has taken place, a set of suggested enhancements is captured and codified within a change management system as a series of RFE or task records that are assigned to a business analyst or modeler. Progress on this change request work can be tracked by periodic updates (daily or weekly) to the enhancement request record until the enhancement is resolved by the practitioner. This is a signal to the model manager that the work is completed and can be reviewed.

After it is reviewed, an enhancement can be accepted and the RFE can be closed, or it can be reopened and rework begun. This cycle continues until the work is accepted or scrapped entirely. When it is closed, the model manager or practitioner can harvest the reviewed and validated changes into the project stream.


When a problem is discovered in the models, it can be immediately logged into the problem tracking (change management) application as a defect. These are tracked in much the same way as enhancements. Many of these can be quickly fixed and closed by the model manager; however, when several defects cluster in an area that is under development, the defect reports can be assigned to the practitioner who handles that area, and then the defects can be resolved during the normal course of the enhancement work.

Logging changes

Rational Software Architect compare-and-merge tooling allows a detailed list of deltas, with their final resolutions, to be saved at the end of any merge session, just before the changes are committed to the repository. This log can be saved in the repository with annotations for this purpose. An audit trail and an accurate historical record are vital links in the governance chain.


Periodically, the enterprise model manager will want to run queries that track all model updates to determine whether boundaries have been violated. Policy enforcement and reporting are also important: for example, reporting how long each project has been at variance with the enterprise models.

History and tracking

Any competent version control and enhancement tracking system will maintain full history information so that queries can be run at any point in the future to determine where errors have crept in and to audit the models and procedures against policy.

A useful procedure is the logging of every change to the enterprise models. Also important is the logging of every rejected change to the enterprise models, so that disputes and confusion can be avoided down the road.
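The logging practice described above amounts to recording every delta that reaches (or is turned away from) the enterprise models together with its final resolution. The log format below is illustrative only; real tooling would persist this alongside the repository.

```python
merge_log = []

def log_resolution(delta_id, description, accepted, resolved_by):
    """Record the final resolution of one delta from a merge session."""
    merge_log.append({
        "delta": delta_id,
        "description": description,
        "accepted": accepted,
        "resolved_by": resolved_by,
    })

# Illustrative entries from a single enterprise-level merge session
log_resolution("d1", "Add Customer.email attribute", True, "enterprise model manager")
log_resolution("d2", "Rename Party to Actor", False, "enterprise model manager")

rejected = [e["delta"] for e in merge_log if not e["accepted"]]
print(rejected)
```

Keeping the rejected changes in the same log as the accepted ones is what makes later disputes resolvable from the record: the log shows not only what changed, but what was deliberately turned away and by whom.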


Formal procedures for governance and management enable an enterprise of any size to perform reliable and efficient parallel development on software models. Read Part 2 of this series for detailed, tool-agnostic procedures that expand on the introductory material in this article.

Subsequent articles in this series will describe the support for these procedures and practices in enterprise-class change and configuration management tooling, such as IBM Rational ClearCase and ClearQuest, in conjunction with modeling and model management tooling like that provided by the Rational Software Architect products.


