June 15, 2022 By Balakrishnan Sreenivasan 8 min read

Facing challenges and preparing for organizational readiness.

In the previous two blog posts of this series (see Part 1 and Part 2), we covered various aspects of establishing an enterprise-level framework for domain-driven design (DDD)-based modernization into a composable IT ecosystem, along with a systematic way to modernize applications and services. However, enterprises must acknowledge the significant challenges they will face at different levels and establish an executable, step-by-step plan to address them. Part 3 of this series examines those execution challenges and potential approaches to addressing them, based on learnings from client engagements.

The previous blog posts assume a “happy path” where everything falls in line with the strategy: every squad is composed of people with the right skills, and the various domain-aligned product squads operate independently while still knowing how to collaborate. Unfortunately, that’s not the case in most enterprises. Let’s examine the challenges an organization is likely to face during this process:

A big bang driven by excitement: a recipe for failure

Given the excitement around the “shiny new object” (the composable IT ecosystem) and the immense value it brings to the table, we have seen significant interest across the board in enterprises. In theory, end-to-end composability happens only when all parts of the enterprise move into that model, and this is where reality must prevail. There will simply be too many takers for this model, and peer pressure adds to it. IT teams embrace it readily, considering the skill transformation they will go through, the shininess of the model and the market relevance they stand to gain.

This potentially leads to a big-bang approach of starting too many initiatives across the enterprise to embrace the composable IT model. Transformation to a composable, domain-driven IT ecosystem needs to start in a calibrated way, through step-by-step demonstration of value. Enterprises can achieve this by focusing on one value lever, with business functions (and associated IT capabilities) moving into the model and demonstrating value while embracing the new operating model. The most difficult part is choosing the right MVP candidate (and the next set of candidates) and managing the excitement in a sustainable way.

Evolving domain model, org transformation and change

As one can imagine, domain models are central to the entire program, and changes to them can have a much larger impact on such transformation programs. It is essential to customize an industry-standard domain model to the needs of the enterprise, with deep business involvement.

It is not a bad idea to establish organizational structures around value streams, which are higher-order elements above domains and products. Too many domain-based sub-organizations will result in a complex, challenging execution ecosystem with too many leaders as stakeholders, while too few domains will lose the purpose of the model and tend to drive monolithic thinking.

IT leaders will have to move to a capability- or service-based measurement model, away from application-oriented measurements. The notion of applications is not going away, so it is important to accommodate the shift from an application-oriented model to a composable capabilities-and-services model. Every product leader should be able to demonstrate progress and performance through the capabilities and services they have built and deployed (for consumption by applications within or outside their domains), including consumption-related performance metrics. There needs to be a funding linkage to this metrics model to balance the needs of each product team based on what each team must deliver.

Also, from an organizational-change perspective, it is important to focus on enablement of each layer of the organization — from value-stream leaders to product owners to architects, developers, etc. A systematic enablement program that validates the learning and ensures hands-on, side-by-side learning along with experts is critical to the success of the program. Tracking the learning progress (coverage and depth) is important to ensure individuals are really “ready.”

Change becomes impossible if the IT metrics for each of the leaders are not transformed to reflect the composable IT ecosystem model (e.g., shifting focus from applications to capabilities/capability components (services) owned, deployed and operated at desired SLAs, etc.) and the funding model is not aligned along these lines.

Focus more on value and less on transforming the entire IT ecosystem

First and foremost, modernization initiatives typically use “value”-driven levers to identify candidate applications and services for modernization. Based on experience, it is important to focus on outcomes and resist the urge to eliminate all technical debt from the enterprise in one go.

It is important to establish a value vs. effort view of the various applications and services being modernized and look for value streams that benefit from the modernization. It is best to choose value streams that deliver the maximum impact and establish the modernization scope along the various intersecting application capabilities and services. Successful engagements have always focused on modernizing a set of capabilities impacting one or more important business levers (revenue, customer service, etc.), for example, modernizing 20+ user journeys (and their associated applications and services) at a large UK bank, or modernizing IT capabilities supporting crew functions at a European airline.
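One lightweight way to build such a value vs. effort view is to score each candidate and rank the portfolio. The sketch below is illustrative only: the candidate names, the 1–5 scales and the ranking rule are assumptions, not a prescribed scoring model.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    value: int   # business-lever impact, 1-5 (revenue, customer service, ...)
    effort: int  # modernization effort, 1-5 (dependencies, data, skills)

def prioritize(candidates):
    """Rank candidates: highest business value first, lowest effort as tiebreaker."""
    return sorted(candidates, key=lambda c: (-c.value, c.effort))

# Hypothetical backlog of modernization candidates
backlog = [
    Candidate("payments-journey", value=5, effort=3),
    Candidate("crew-scheduling", value=4, effort=2),
    Candidate("legacy-reporting", value=2, effort=4),
]

for c in prioritize(backlog):
    print(c.name, c.value, c.effort)
```

In practice the scoring would be richer (regulatory pressure, skills availability, dependency depth), but even a simple ranked view like this helps resist the urge to modernize everything at once.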

Control on modernization scope and efforts

Modernization programs should “identify and refine” their processes in alignment with the modernization scope and identify the in-scope applications and services. It is easy to get into an “iceberg” situation, where modernizing one service drives the need to modernize the entire dependency tree underneath it, so it is important to manage the scope of modernization with a clear focus on value.
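One way to keep the “iceberg” in check is to walk a service's dependency tree only to a fixed depth and explicitly defer everything below it to a later iteration. The dependency graph and the depth cap below are hypothetical; a real program would weigh value per dependency rather than depth alone.

```python
from collections import deque

# Hypothetical dependency graph: service -> services it depends on
deps = {
    "orders": ["inventory", "pricing"],
    "inventory": ["warehouse"],
    "pricing": ["tax-engine"],
    "warehouse": ["erp-core"],
    "tax-engine": [],
    "erp-core": ["mainframe-batch"],
    "mainframe-batch": [],
}

def modernization_scope(root, max_depth):
    """Walk the dependency tree breadth-first, but cap the depth so one
    service does not pull its entire 'iceberg' into scope; anything past
    the cap is deferred (e.g., wrapped or stubbed) to a later iteration."""
    in_scope, deferred = set(), set()
    queue = deque([(root, 0)])
    while queue:
        node, depth = queue.popleft()
        if node in in_scope or node in deferred:
            continue
        if depth > max_depth:
            deferred.add(node)
            continue  # do not descend into deferred subtrees
        in_scope.add(node)
        for dep in deps.get(node, []):
            queue.append((dep, depth + 1))
    return in_scope, deferred

scope, later = modernization_scope("orders", max_depth=1)
```

The deferred set becomes an explicit backlog item rather than silent scope creep, which keeps the iteration focused on value.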

It is also important to align the processes to their respective domains/products. This is quite a challenging effort, for many technical and non-technical reasons. Every domain/product team would like to own as many capabilities and services as possible. Data also becomes a significant element of the ownership discussion: it is important to consider the data owned and managed by the respective processes and ensure alignment across product teams.

The biggest challenge presents itself with aggregates, where everyone wants to copy and own data simply because their processing needs differ from those of the data-owning products. There also must be recognition of the needs of the business when making these decisions: there are situations where data is needed to perform necessary analysis, and that is not necessarily an indication of data ownership. These issues take much longer to resolve, and this is where a reference domain model, including guidance and a decision matrix, becomes important.

Decomposing applications to capabilities for multiple domains and building them needs a significant level of coordination across domains

As we saw, a well-institutionalized domain-driven design (DDD) model (e.g., practices, a core team, DDD facilitation skills) and cloud-native services work in tandem to help modernize monolithic applications into a composable set of capabilities/capability components (microservices) owned by the appropriate product teams. While it is easy to design such a decomposed view, building it will require several product owners to align timelines and resolve numerous design conflicts.
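A simple artifact that makes this coordination burden visible is a decomposition map from monolith modules to target capabilities and owning domains. The modules, capability names and domains below are hypothetical, purely to illustrate how many product teams one application can fan out to.

```python
# Hypothetical decomposition map: monolith module -> (capability, owning domain)
decomposition = {
    "AccountMgmt":   ("account-servicing", "customer-domain"),
    "PaymentsBatch": ("payment-execution", "payments-domain"),
    "FeeCalc":       ("pricing", "payments-domain"),
    "Statements":    ("statement-generation", "customer-domain"),
}

def owners_to_align(modules):
    """List the distinct product teams whose roadmaps must be aligned
    before the selected monolith modules can be fully decomposed."""
    return sorted({domain for _, domain in (decomposition[m] for m in modules)})

print(owners_to_align(["AccountMgmt", "PaymentsBatch", "FeeCalc"]))
```

Even this toy view shows that decomposing three modules already requires two product teams to align timelines, which is exactly where the coordination effort concentrates.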

These product teams are expected to be independent, and this is where a significant amount of collaboration and roadmap alignment needs to happen before an application can be completely decomposed and modernized. Since enterprise priorities change over time, constant reprioritization of activities becomes challenging for each team. One will notice that the speed of modernization is much higher for applications and capabilities (including services) that are well contained within a single domain.

When an application is decomposed into capabilities owned by many product teams, execution complexity creeps in (e.g., conflicting priorities, resource challenges, roadmap alignment issues and design challenges in accommodating multiple consumers of capabilities). One way to address this is to build capabilities end-to-end in a one-team model: a single team, with squads drawing SMEs and developers from the respective domains, builds the services and other dependent capabilities together.

It is also far more pragmatic for different product teams to come together to build several application capabilities and services jointly, and subsequently move them to the appropriate day-2 model in later iterations. This approach introduces minimal disruption across the enterprise and helps address various organizational-readiness challenges (e.g., funding, people/skills and getting the roadmap right). The biggest impact is the business risk introduced by the day-2 operating model, whose teams need to be significantly reskilled and readied to operate in a composable IT ecosystem.

The figure above provides a way to build capabilities and associated capability components (microservices) in a much more integrated/one-team model and subsequently have them moved to the end-state operating model.

It is extremely important to have a good handle on the backlog of open items and technical-debt items coming out of design and implementation activities that can be worked on in subsequent roadmap iterations, more so for the various compromises made to keep progressing.

Day-2 support model challenges

Continuing from the challenges of multi-domain applications, where capabilities are deployed and managed by the respective product teams, we now look at the challenges imposed by the need for a different day-2 operating model. Traditionally, an application team is in full control of the application's code base and data; in the composable IT ecosystem, this becomes decentralized, with distributed ownership. Teams at the forefront (the frontend of the application) need to understand this model and operate accordingly, as do the various product teams building, deploying and operating capabilities. Incident management/ITSM processes now need to accommodate distributed squads supporting different capabilities (piece-parts) of a given application.
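The core of that ITSM change is routing: incidents must flow to the squad that owns the affected capability rather than to a single application support team. The registry and squad names below are hypothetical, a minimal sketch of capability-based routing with a triage fallback.

```python
# Hypothetical ownership registry: capability -> supporting squad
ownership = {
    "checkout-ui": "storefront-squad",
    "payment-execution": "payments-squad",
    "inventory-lookup": "inventory-squad",
}

def route_incident(incident):
    """Route an incident to the squad that owns the affected capability.
    Unknown capabilities fall back to a triage queue instead of the old
    application-level support team."""
    return ownership.get(incident["capability"], "triage-queue")

print(route_incident({"capability": "payment-execution"}))
print(route_incident({"capability": "legacy-report"}))
```

The hard part in practice is not the lookup but keeping the ownership registry accurate as capabilities move between squads, which is why a maturity period before cutover matters.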

It takes a certain degree of maturity in operations processes, tooling and squad skill levels to operate in that model, with the right routing and segregation of incidents. Moving applications into a composable IT ecosystem without fully readying support teams, processes, tooling and skill levels poses a significant business risk in terms of the supportability of the capabilities. It is best to perform a staggered move, with specific capabilities moved to separate product teams or squads after an adequate maturity period.

Measuring success at intermediate points is key

While the larger success of modernization to a composable IT ecosystem lies in the business seeing the results (e.g., rapid innovation, improved customer service, etc.), it is important to also measure early progress indicators. These could include the number of products implementing the model and the number of squads (or product teams) self-sufficient in the required skills (e.g., the ability to perform DDD, DevOps readiness, foundation platform readiness, etc.). They can also include the incremental capabilities deployed, and at what velocity. One should also keep the backlog of technical debt and design compromises manageable, with an inclusive design authority governing those decisions.
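These early indicators can be rolled up into a small readiness scorecard. The sketch below is an assumption about what such a roll-up might look like; the field names (`ddd`, `devops`, `platform`) and thresholds are illustrative, not a standard model.

```python
def readiness_metrics(squads, capabilities_deployed, weeks):
    """Early progress indicators: the share of squads that are fully
    self-sufficient (DDD, DevOps and platform readiness) and the
    capability deployment velocity over the measurement window."""
    ready = [s for s in squads if all(s[k] for k in ("ddd", "devops", "platform"))]
    return {
        "squads_ready_pct": round(100 * len(ready) / len(squads), 1),
        "capabilities_per_week": round(capabilities_deployed / weeks, 2),
    }

# Hypothetical snapshot: two squads, 12 capabilities shipped in 8 weeks
m = readiness_metrics(
    [{"ddd": True, "devops": True, "platform": True},
     {"ddd": True, "devops": False, "platform": True}],
    capabilities_deployed=12, weeks=8)
print(m)
```

Tracking these numbers at every intermediate point makes the long journey measurable instead of relying solely on end-state business outcomes.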


The evolution of cloud has opened up a plethora of possibilities for enterprises to exploit, making the composable IT ecosystem a reality. The emergence of proven practices such as domain-driven design, DevOps and site reliability engineering has made full-stack squads a reality, enabling independent product teams that can build end-to-end capabilities and services without layers of IT getting involved (as in traditional IT ecosystems).

Enterprises embarking on modernization initiatives to transform their IT ecosystem into a composable model need to recognize the quantum of change and operating-model transformation involved across the enterprise, and think it through pragmatically. It is important to establish a modernization roadmap and scope defined by the business levers impacted.

Enterprises need to recognize the fact that clarity on domains and processes will evolve with time, and there needs to be room for changes. While value streams and the lowest unit of such an organization — like products and product teams — are not likely to change that often, intermediate organizational constructs do change significantly.

Initial steps should focus on identifying a smaller subset of products (or domains) to pilot and demonstrate success. Learnings should be fed back to refine the roadmap, plans and operating model. Moving to a composable IT ecosystem is a long journey, and measuring success at every intermediate step is key. Too much framework or too little could pose significant challenges, ranging from analysis paralysis to chaos. Therefore, a first-pass framework needs to be in place quickly, while focused pilot/MVP initiatives are run to test and refine it. The framework should, and will, evolve over time based on real execution experiences (e.g., process overlaps learned from decomposing applications, domain-model refinements based on gaps, etc.).

