June 15, 2022 By Balakrishnan Sreenivasan 8 min read

Facing challenges and preparing for organizational readiness.

In the previous two blog posts of this series (see Part 1 and Part 2), we saw various aspects of establishing an enterprise-level framework for domain-driven design (DDD)-based modernization into a composable IT ecosystem and a systematic way to modernize applications and services. However, enterprises must acknowledge the significant challenges they will face at different levels and establish an executable, step-by-step plan to address them. Part 3 of this blog series examines the execution challenges enterprises will encounter and potential approaches to address them, based on learnings from client engagements.

The previous blog posts establish a “happy path” where everything falls in line per strategy and every squad is composed of people with the right skills, with various domain-aligned product squads operating independently while still knowing how to collaborate. Unfortunately, that’s not the case in most enterprises. There are several challenges an organization is likely to face during this process; let’s examine each of them:

A big bang driven by excitement: a recipe for failure

Given the excitement around the “shiny new object” (the composable IT ecosystem) and the immense value it brings to the table, we have seen significant interest across the board in enterprises. In theory, end-to-end composability happens only when all parts of the enterprise move into that model, and this is where reality must prevail. There will simply be too many takers for this model, and peer pressure adds to it. IT teams embrace the model readily, considering the skill transformation they will undergo, the shine of the new model and the market relevance they stand to gain.

This potentially leads to a big-bang approach of starting too many initiatives across the enterprise to embrace the composable IT model. Transformation to a composable, domain-driven IT ecosystem needs to start in a calibrated way, through step-by-step demonstration of value. Enterprises can achieve this by focusing on one value lever, with the relevant business functions (and associated IT capabilities) moving into the model and demonstrating value while embracing the new operating model. The most difficult part is choosing the right MVP candidate (and the next set of candidates) and managing the excitement in a sustainable way.

Evolving domain model, org transformation and change

As one can imagine, domain models are central to the entire program, and changes to them can have an outsized impact on such transformation programs. It is essential to customize an industry-standard domain model to the needs of the enterprise, with deep business involvement.

It is not a bad idea to establish organizational structures around value streams, which are higher-order elements above domains and products. While too many domain-based sub-organizations will result in a very complex, challenging execution ecosystem with too many leaders as stakeholders, too few domains defeat the purpose and tend to drive monolithic thinking.

IT leaders will have to move to a capability- or service-based measurement model, moving away from application-oriented measurements. Since the notion of an application is not going away either, it is important to accommodate the shift from an application-oriented model to a composable capabilities-and-services model. Every product leader should be able to demonstrate progress and performance through the capabilities and services their teams have built and deployed (for consumption by applications within or outside their domains), including consumption-related performance metrics. There also needs to be a funding linkage to this metrics model to balance the needs of each product team based on what it needs to deliver.
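As a rough illustration (the post prescribes no schema, so every field and function name here is an assumption), a capability-level scorecard of this kind might look like the following Python sketch:

```python
from dataclasses import dataclass

@dataclass
class CapabilityMetrics:
    """Illustrative capability-level scorecard; field names are assumptions."""
    name: str
    owning_team: str
    deployed: bool            # built and deployed for consumption
    consumers: int            # applications/domains consuming the capability
    availability_slo: float   # e.g., 0.999
    monthly_calls: int        # consumption-related performance metric

def portfolio_summary(capabilities: list[CapabilityMetrics]) -> dict:
    """Aggregate the figures a product leader would report on."""
    deployed = [c for c in capabilities if c.deployed]
    return {
        "capabilities_deployed": len(deployed),
        "total_consumers": sum(c.consumers for c in deployed),
        "total_monthly_calls": sum(c.monthly_calls for c in deployed),
    }
```

Tying funding to an aggregate like this keeps the measurement conversation anchored on capabilities and services rather than applications.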

Also, from an organizational-change perspective, it is important to focus on enabling each layer of the organization, from value-stream leaders to product owners, architects and developers. A systematic enablement program that validates the learning and ensures hands-on, side-by-side learning alongside experts is critical to the success of the program. Tracking learning progress (coverage and depth) is important to ensure individuals are really “ready.”

Change becomes impossible if the IT metrics for each leader are not transformed to reflect the composable IT ecosystem model (e.g., shifting focus from applications to the capabilities and capability components, or services, that are owned, deployed and operated at desired SLAs) and if the funding model is not aligned along the same lines.

Focus more on value and less on transforming the entire IT ecosystem

First and foremost, modernization initiatives should use value-driven levers to identify candidate applications and services for modernization. Based on experience, it is important to focus on outcomes and resist the urge to eliminate all technical debt from the enterprise in one go.

It is important to establish a value-versus-effort view of the applications and services being modernized and look for value streams that benefit from the modernization. It is best to choose the value streams that deliver maximum impact and establish the modernization scope along the intersecting application capabilities and services. Successful engagements have always focused on modernizing a set of capabilities that impact one or more important business levers (revenue, customer service, etc.): for example, modernizing 20+ user journeys (and the associated applications and services) at a large UK bank, or modernizing IT capabilities impacting crew functions at a European airline.
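To make the value-versus-effort view concrete, here is a minimal, hypothetical Python sketch that ranks modernization candidates by impact per unit of effort (the scoring scale and candidate names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate application/service for modernization (illustrative)."""
    name: str
    value: float   # estimated business impact on a lever such as revenue (1-10)
    effort: float  # estimated modernization effort, e.g., person-months

def prioritize(candidates: list[Candidate]) -> list[Candidate]:
    """Rank candidates by value density: highest impact per unit effort first."""
    return sorted(candidates, key=lambda c: c.value / c.effort, reverse=True)

journeys = [
    Candidate("customer-onboarding", value=9, effort=6),
    Candidate("crew-scheduling", value=7, effort=3),
    Candidate("legacy-reporting", value=2, effort=8),
]
for c in prioritize(journeys):
    print(f"{c.name}: value/effort = {c.value / c.effort:.2f}")
```

Even a crude ranking like this forces the scope discussion toward the value streams with the best return, rather than toward whatever is loudest or oldest.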

Control on modernization scope and efforts

Modernization programs should “identify and refine” their processes in alignment with the modernization scope and identify in-scope applications and services. It is easy to get into an “iceberg” situation, where modernizing one service drives the need to modernize the entire dependency tree underneath it; it is important to manage the scope of modernization with a clear focus on value.
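One simple way to keep the iceberg in check is to make the scope cut explicit, for example by bounding how deep into the dependency tree the program will go. The sketch below is an illustration under that assumption, not a prescribed method: it walks a service dependency graph breadth-first and stops at a chosen depth, so anything deeper is wrapped or stubbed rather than modernized.

```python
from collections import deque

def scoped_dependencies(deps: dict[str, list[str]],
                        root: str, max_depth: int) -> set[str]:
    """Breadth-first walk of a service dependency graph, cut off at max_depth,
    so modernizing one service doesn't silently pull in the whole iceberg."""
    in_scope, frontier = {root}, deque([(root, 0)])
    while frontier:
        service, depth = frontier.popleft()
        if depth == max_depth:
            continue  # deeper dependencies stay out of scope for this iteration
        for dep in deps.get(service, []):
            if dep not in in_scope:
                in_scope.add(dep)
                frontier.append((dep, depth + 1))
    return in_scope
```

The depth limit is a stand-in for whatever value-based cutoff the program actually applies; the point is that the cut is made deliberately and recorded, not discovered mid-build.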

It is also important to align the processes to the respective domains and products. This is quite a challenging effort, for many technical and non-technical reasons. Every domain or product team would like to own as many capabilities and services as possible, and data becomes a significant element of the ownership discussion. It is important to establish which data is owned and managed by which processes and ensure alignment across product teams.

The biggest challenge presents itself with aggregates, where everyone wants to copy and own data simply because their processing needs differ from those of the data-owning products. There must also be recognition of the needs of the business when making these decisions: there are situations where data is needed to perform necessary analysis, which is not necessarily an indication of data ownership. These issues take much longer to resolve, and this is where a reference domain model, including guidance and a decision matrix, becomes important.
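A decision matrix can be encoded as simply as a few ordered rules. The sketch below uses assumed rules purely to illustrate the kind of guidance a reference domain model might carry; the actual rules must come from the enterprise's own design authority.

```python
def data_access_decision(is_system_of_record: bool,
                         needs_write: bool,
                         needs_local_copy: bool) -> str:
    """Toy decision matrix for aggregate/data-ownership questions.
    The rules are illustrative assumptions, not a standard."""
    if is_system_of_record:
        return "own: master the data and publish it for consumers"
    if needs_write:
        return "escalate: writes outside the owning product need design-authority review"
    if needs_local_copy:
        return "replicate: consume a read-only copy or event stream; ownership stays put"
    return "query: call the owning product's API; no copy needed"
```

Writing the rules down, even this crudely, turns a recurring turf argument into a repeatable decision with a documented escape hatch (the escalation path).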

Decomposing applications into capabilities across multiple domains, and building them, requires significant coordination across domains

As we saw, a well-institutionalized domain-driven design (DDD) model (e.g., practices, a core team, DDD facilitation skills) and cloud-native services work in tandem to help modernize monolithic applications into a composable set of capabilities and capability components (microservices) owned by the appropriate product teams. While it is easy to design such a decomposed view, building it requires several product owners to align timelines and resolve design conflicts.
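As a toy illustration of why the coordination load grows, consider a decomposition map from monolith modules to owning domains (all names invented; a real map comes out of DDD workshops): every module set that spans domains adds product owners who must align.

```python
# Hypothetical decomposition map: monolith module -> owning domain and capability.
decomposition = {
    "order-entry":      {"domain": "sales",     "capability": "order-capture"},
    "order-pricing":    {"domain": "pricing",   "capability": "quote-engine"},
    "order-fulfilment": {"domain": "logistics", "capability": "fulfilment-orchestrator"},
}

def owners_to_align(modules: list[str]) -> set[str]:
    """Domains whose product owners must align timelines for these modules."""
    return {decomposition[m]["domain"] for m in modules if m in decomposition}

# Two modules already span two domains (order may vary when printed).
print(owners_to_align(["order-entry", "order-pricing"]))
```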

These product teams are expected to be independent, and this is where a significant amount of collaboration and roadmap alignment needs to happen before the application can be completely decomposed and modernized. Since enterprise priorities change over time, it becomes challenging for each team to undertake constant reprioritization of activities. One will notice that the speed of modernization is much higher for applications and capabilities (including services) that are well contained within a single domain.

When an application is decomposed into capabilities owned by many product teams, execution complexity creeps in (e.g., conflicting priorities, resource challenges, roadmap alignment issues, design challenges in accommodating multiple consumers of capabilities, etc.). One way to address this is to develop capabilities end to end: one team builds the services and other dependent capabilities together, with squads drawing SMEs, developers and other roles from the relevant domains.

It is also far more pragmatic for different product teams to come together to build several application capabilities and services jointly and subsequently move them to the appropriate day-2 model in later iterations. This approach introduces minimal disruption across the enterprise and helps address various organizational-readiness challenges (e.g., funding, people and skills, getting the roadmap right, etc.). The biggest impact is the business risk introduced by the day-2 operating model, which requires teams to be significantly reskilled and readied to operate in a composable IT ecosystem.

The figure above provides a way to build capabilities and associated capability components (microservices) in a much more integrated, one-team model and subsequently move them to the end-state operating model.

It is extremely important to have a good handle on the backlog of open items and technical debt coming out of design and implementation activities that can be worked on in subsequent roadmap iterations, especially the compromises made in order to keep making progress.

Day-2 support model challenges

Continuing with the challenges imposed by multi-domain applications, where capabilities are deployed and managed by the respective product teams, we now look at the challenges imposed by the need for a different day-2 operating model. Traditionally, application teams are in full control of the code base and data of the application; in the composable IT ecosystem, this becomes decentralized, with distributed ownership. Teams at the forefront (the frontend of the application) need to understand this model and operate accordingly, as do the various product teams building, deploying and operating capabilities. Incident management/ITSM processes now need to accommodate distributed squads supporting different capabilities (piece parts) of a given application.
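At its simplest, the routing problem reduces to maintaining an authoritative capability-to-squad registry that the ITSM tooling can consult. A minimal, hypothetical sketch (the registry entries are invented):

```python
# Hypothetical capability -> owning squad registry for incident routing.
ownership = {
    "payments-api":   "payments-squad",
    "customer-360":   "customer-domain-squad",
    "web-storefront": "frontend-squad",
}

def route_incident(affected_capability: str) -> str:
    """Route an incident to the squad owning the affected capability;
    fall back to a triage queue when ownership is unclear."""
    return ownership.get(affected_capability, "triage-queue")

print(route_incident("payments-api"))     # payments-squad
print(route_incident("unknown-service"))  # triage-queue
```

The fallback queue matters: early in the transition, many incidents will land on capabilities whose ownership has not yet been settled.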

It takes a certain degree of maturity in operations processes, tooling and squad skill levels to operate in that model with the right routing and segregation of incidents. Moving applications into a composable IT ecosystem model without fully readying support teams, processes, tooling and skill levels could pose a significant risk to the business in terms of the supportability of the capabilities. It is best to perform a staggered move to the composable IT ecosystem model, with specific capabilities moved to separate product teams or squads after an adequate maturity period.

Measuring success at intermediate points is key

While the larger success of modernization to a composable IT ecosystem lies in the business seeing results (e.g., rapid innovation, improved customer service, etc.), it is important to also measure early progress indicators. These could include the number of products implementing the model and the number of squads (or product teams) that are self-sufficient in terms of skill needs (e.g., the ability to perform DDD, DevOps readiness, foundation platform readiness, etc.), as well as the incremental capabilities deployed and the velocity at which they ship. One should also keep the backlog of technical debt and design compromises at a manageable level, governed by an inclusive design authority.
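Early indicators like squad self-sufficiency become easy to track once the readiness criteria are explicit. A small illustrative sketch (the checks and squad data are invented for the example):

```python
# Assumed readiness criteria; a real program would define its own list.
readiness_checks = ("ddd_skills", "devops_ready", "platform_ready")

def squad_ready(squad: dict) -> bool:
    """A squad counts as self-sufficient only when every readiness check passes."""
    return all(squad.get(check, False) for check in readiness_checks)

squads = [
    {"name": "payments", "ddd_skills": True, "devops_ready": True,  "platform_ready": True},
    {"name": "crew-ops", "ddd_skills": True, "devops_ready": False, "platform_ready": True},
]
ready = sum(squad_ready(s) for s in squads)
print(f"{ready} of {len(squads)} squads self-sufficient")
```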

Conclusion

The evolution of cloud has opened up a plethora of possibilities for enterprises to exploit, and this makes the composable IT ecosystem a reality. The emergence of proven practices such as domain-driven design, DevOps and site reliability engineering has made full-stack squads a reality, enabling independent product teams that can build end-to-end capabilities and services without the layers of IT involvement we have seen in traditional IT ecosystems.

Enterprises embarking on modernization initiatives to transform their IT ecosystem into a composable model need to recognize the quantum of change and operating-model transformation across the enterprise and think it through pragmatically. It is important to establish a roadmap and a modernization scope defined by the business levers impacted.

Enterprises need to recognize that clarity on domains and processes will evolve with time, and there needs to be room for changes. While value streams and the lowest units of such an organization, such as products and product teams, are not likely to change often, intermediate organizational constructs do change significantly.

Initial steps should focus on identifying a smaller subset of products (or domains) to pilot and demonstrate success. Learnings should be fed back to refine the roadmap, plans and operating model. Moving to a composable IT ecosystem is a long journey, and measuring success at every intermediate stage is key. Too much framework or too little could pose significant challenges, ranging from analysis paralysis to chaos. Therefore, a first-pass framework needs to be in place quickly, while focused pilot/MVP initiatives are run to test and refine it. The framework should, and will, evolve over time, based only on real execution experiences (e.g., process overlaps learned from decomposing applications, domain model refinements based on gaps, etc.).

