May 16, 2022 By Balakrishnan Sreenivasan 8 min read

Modernizing applications and services to a composable IT ecosystem.

In Part 1 of this blog series, we saw various aspects of establishing an enterprise-level framework for domain-driven design (DDD)-based modernization into a composable IT ecosystem. Once the base framework is established, teams can focus on modernizing their applications in alignment with the framework.

Typically, teams undertake a two-pronged approach to modernize applications and services. The first step is to enumerate and scope the processes (mentioned in Part 1) and use them to conduct DDD-based event-storming sessions that identify the various capability components (i.e., microservices). The second step is to decompose the applications, align them with the appropriate products (within domains) and map each capability to the capability components that realize it. An iterative execution roadmap is built based on dependencies (capabilities and services), and execution typically starts with a minimum viable product (MVP). This blog post, Part 2 of the series, details this approach for modernizing applications to the composable model. Part 3 of the series will look at the challenges teams face and how to prepare the organization.

Decomposing applications and services to capabilities: Overview

So far, the discussion has been about laying the groundwork for composable IT capabilities, which essentially includes organization structure alignment, process scoping by domains and a broad set of products. The next step is to focus on modernizing current applications and data into composable IT ecosystem capabilities.

The following diagram illustrates how different layers of the system are supported by different IT teams in the traditional model versus how the capabilities are built and managed in a composable model. Essentially, a monolith application is decomposed into a set of capabilities and appropriately built and managed by squads based on domain alignment:

While this model is challenging to implement, the value achieved outweighs the challenges:

  • The model provides the best alignment with business domains and the most flexible and agile IT model, significantly improving time to market.
  • The product-centric model drives clarity of ownership and gives squads independence, helping drive an engineering culture across the IT organization.
  • Domain alignment promotes a high degree of reuse of capabilities, reducing enterprise-wide duplication of capabilities (both application and data).
  • This model helps build deeper domain and functional skills in squads and promotes end-to-end ownership and a continuous improvement culture, which, in turn, accelerates adoption of SRE practices.

The following diagram depicts the two major areas addressed in this blog: the domain-driven design (DDD)-based decomposition of applications and services, and the building and deployment of capabilities:

Domain-driven decomposition of applications and services

Most enterprises have several applications and a set of application services (legacy SOAP services/APIs, integration/messaging services, etc.). The applications have evolved over time into monoliths, while the services have evolved in their own way to meet the demands of consumers. The problem most enterprises are trying to solve is to contain the scope of transformation to the capabilities offered by existing applications and services rather than drive a blue-sky approach.

The following are key steps involved in decomposing applications and services in each domain.

Step 0: Map applications and services to domains per the context and usage scenario

When enterprises have existing applications and services, it is important to establish clear owners for them. Based on my experience, it is a good idea to distinguish between the personas using an application and the primary domain that would be expected to offer the application or service to consumers. Once the primary domain associated with an application or service is identified, end-to-end consumer ownership lies with the organization that owns that domain.
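As a minimal sketch of this ownership decision (the application, service, domain and persona names below are hypothetical), the mapping can be captured as a simple inventory that records the primary domain for each existing asset:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssetOwnership:
    """Records the primary (owning) domain for an existing application or service."""
    asset: str           # application or service name
    asset_type: str      # "application" or "service"
    primary_domain: str  # domain that offers this asset to consumers
    consumers: tuple     # personas or systems that use the asset

# Hypothetical inventory; real entries come from the enterprise portfolio.
inventory = [
    AssetOwnership("OrderPortal", "application", "Order Management", ("CSR", "Customer")),
    AssetOwnership("GetCustomerSOAP", "service", "Customer", ("OrderPortal", "Billing")),
]

def owners_by_domain(items):
    """Group assets under the domain that owns them end to end."""
    grouped = {}
    for item in items:
        grouped.setdefault(item.primary_domain, []).append(item.asset)
    return grouped

print(owners_by_domain(inventory))
```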

Step 1: Bottom-up analysis of applications to identify and map capabilities to business process level 3 or deeper (as appropriate)

Once applications and services are mapped to primary (owning) domains, they are then decomposed into capabilities. In general, capabilities are expressed in business terms, and they mostly map to level 3 or slightly deeper:

Applications offer a set of capabilities, and those capabilities could belong to different domains based on the business process and bounded-context alignment. As applications are decomposed into capabilities, the capabilities are also mapped to their respective domains to identify who builds and manages (owns) them. While mapping capabilities to domains, it is also important to understand which of the existing services and data each capability maps to. This is critical for establishing input-scope (process and boundary) guidance for the event-storming workshops and for identifying services.
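A minimal sketch of what such a capability record might capture is shown below (the names are illustrative, not taken from any real portfolio); the existing services and data fields become the input-scope guidance for the event-storming workshops:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    """A business capability surfaced by decomposing an application (level 3 or deeper)."""
    name: str
    source_application: str
    owning_domain: str
    process_level: str                                      # e.g., "L3" or "L4"
    existing_services: list = field(default_factory=list)   # legacy SOAP/APIs it maps to today
    existing_data: list = field(default_factory=list)       # legacy tables/entities it touches

# Hypothetical example: one capability carved out of a monolithic order portal.
check_credit = Capability(
    name="Check customer credit",
    source_application="OrderPortal",
    owning_domain="Customer",
    process_level="L3",
    existing_services=["GetCustomerSOAP"],
    existing_data=["CUSTOMER", "CREDIT_LIMIT"],
)
```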

Step 2: Event-storming and identification of capabilities (services) from business process

There are excellent articles and technique papers on domain-driven design and event storming (e.g., “Event-driven solution implementation methodology” and “DDD approach” by Jerome Boyer and team), and I suggest going through them to understand how event storming is done, the taxonomy followed and so on. The idea here is to ensure that the processes and capabilities enabled via the applications in Step 1 drive the scope for event storming. The following are key activities performed in this step (a minimal sketch of the resulting artifacts follows the list):

  • Establish a set of domain events and the actors (persona or system) triggering them, and relate the events (as appropriate) into flows.
  • Review and align (or discard, if appropriate) events to the input scope of the event storming (from the application and service decomposition step).
  • Elaborate the event(s) as a combination of policy, data (business entity, value objects, etc.), command, business rules, actor (or persona), external systems, etc. into one or more flows.
  • Establish aggregate boundaries by analyzing the entities and values in terms of how they establish their context and identify potential services. While the aggregates are typically microservices, the data associated with them forms the bounded context.
  • One could establish user stories based on interactions between elements of each of the flows (it is important to identify user stories to completely implement the flows).
  • Iterate through the above to elaborate/refine each of the flows to such an extent that one can identify the initial set of services to build and likely capabilities to realize.
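As a rough illustration of the taxonomy that comes out of these activities (domain events, commands and candidate aggregates), the following sketch uses hypothetical names; in practice these artifacts live on the event-storming board rather than in code:

```python
from dataclasses import dataclass

@dataclass
class DomainEvent:
    name: str          # past-tense business fact, e.g., "OrderPlaced"
    triggered_by: str  # actor (persona or system) or command that raises it

@dataclass
class Command:
    name: str          # e.g., "PlaceOrder"
    actor: str
    emits: list        # DomainEvents the command produces

@dataclass
class Aggregate:
    """Candidate microservice: an aggregate with its entities and value objects."""
    name: str
    entities: list
    commands: list
    events: list

# Hypothetical flow fragment from an event-storming session.
order_placed = DomainEvent("OrderPlaced", triggered_by="PlaceOrder")
place_order = Command("PlaceOrder", actor="Customer", emits=[order_placed])
order_aggregate = Aggregate(
    name="Order",
    entities=["Order", "OrderLine"],
    commands=[place_order],
    events=[order_placed],
)
```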

Step 3: Map application capabilities to services

On one side, we have a set of capabilities identified by decomposing the applications; on the other, we have a set of microservices elaborated via event storming. It is important to ensure each capability is mapped to its respective services (or aggregates) so that the capabilities (or requirements) can be realized. The detailed operations, including the data needed (and Swagger definitions), are defined after this mapping based on the consumption needs of each service:
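As a minimal illustration of this mapping and the gap check it enables (the capability and service names are hypothetical), consider the following sketch; the detailed operation contracts are derived afterwards from consumption needs:

```python
# Hypothetical mapping of application capabilities to the services (aggregates)
# identified during event storming; a capability left unmapped cannot yet be realized.
capability_to_services = {
    "Check customer credit": ["CustomerProfileService", "CreditService"],
    "Place order": ["OrderService"],
    "Track shipment": [],  # gap: no realizing service identified yet
}

def unmapped_capabilities(mapping):
    """Return capabilities with no realizing service, flagging gaps before build-out."""
    return [cap for cap, services in mapping.items() if not services]

print(unmapped_capabilities(capability_to_services))  # -> ['Track shipment']
```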

Step 4: Iteration planning

It is also important to establish capability dependencies (with regard to the data and services needed to realize them) in such a way that one can figure out how to sequence the build-out so that capabilities build on top of one another. In most cases, the dependencies are more complex than a simple sequence, but this analysis helps design the coexistence solution needed to build and deploy capabilities:

The sequenced capabilities are bucketed into a set of iterations and are continuously refined with every iteration. Because establishing a detailed iteration plan for the entire system up front results in waterfall thinking and heavy analysis effort, only a high-level roadmap is prepared, based on groups of capabilities, high-level dependencies and approximate t-shirt sizes. As iterations progress, the number of squads and the capabilities developed are realigned (or accelerated) based on the velocity achieved versus the desired velocity.
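As a simple sketch of how this dependency-based sequencing might be supported (the capability names and dependencies are illustrative), the following layers capabilities so that nothing is scheduled before the capabilities it builds on:

```python
from collections import defaultdict

def plan_iterations(dependencies):
    """Bucket capabilities into iterations so that each capability lands no earlier
    than the capabilities it depends on (a simple layered topological sort).
    `dependencies` maps a capability to the capabilities it needs first."""
    iteration = {}

    def depth(cap, seen=()):
        if cap in seen:
            raise ValueError(f"Cyclic dependency involving {cap}")
        if cap not in iteration:
            deps = dependencies.get(cap, [])
            iteration[cap] = 1 + max((depth(d, seen + (cap,)) for d in deps), default=0)
        return iteration[cap]

    for cap in dependencies:
        depth(cap)

    buckets = defaultdict(list)
    for cap, it in iteration.items():
        buckets[it].append(cap)
    return dict(sorted(buckets.items()))

# Hypothetical dependency graph: capability -> capabilities it builds on.
deps = {
    "Customer profile": [],
    "Credit check": ["Customer profile"],
    "Place order": ["Customer profile", "Credit check"],
}
print(plan_iterations(deps))
# -> {1: ['Customer profile'], 2: ['Credit check'], 3: ['Place order']}
```

In practice, the buckets produced this way are only a starting point; they are refined every iteration as velocity and coexistence constraints become clearer.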

When building an iterative, incremental roadmap, one must think through coexistence because it is a foundational ingredient for success. Coexistence implies the ability for legacy and modernized capabilities to operate side by side, with the goal of strangling legacy capabilities over time while ensuring that the consumer ecosystem(s) of those legacy capabilities are not disrupted immediately and are given enough time to move towards the modernized domain capabilities. A well-crafted coexistence model allows for uni- or bi-directional data synchronization and/or wrapper APIs that cross-leverage functionality not yet modernized, and it needs careful architecture consideration of both functional and non-functional aspects.
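The following is a minimal sketch of such a coexistence facade in the spirit of the strangler pattern (the service functions and routing table are assumptions for illustration): consumers keep calling a single entry point while modernized capabilities are switched in one at a time, with data synchronization handled separately:

```python
# Capabilities already migrated to modernized domain services (illustrative).
MODERNIZED = {"check_credit"}

def legacy_check_credit(customer_id):
    """Stand-in for the existing monolith/legacy service call."""
    return {"source": "legacy-monolith", "customer": customer_id, "approved": True}

def modern_check_credit(customer_id):
    """Stand-in for the modernized domain capability (e.g., a credit microservice)."""
    return {"source": "credit-service", "customer": customer_id, "approved": True}

def check_credit(customer_id):
    """Route to the modernized service when available; otherwise fall back to legacy.
    Data written on either side would additionally need uni- or bi-directional sync."""
    if "check_credit" in MODERNIZED:
        return modern_check_credit(customer_id)
    return legacy_check_credit(customer_id)

print(check_credit("C-1001"))
```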

Build and deploy capabilities, services and day-2 operations

Modernizing applications and services into a product-aligned, capability-based model is about the respective squads building capabilities per the product alignment. The capabilities built by multiple product squads are composed at the experience layer (originally the applications) to ensure consistency for users.
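As a minimal sketch of this experience-layer composition (the capability functions below are hypothetical stand-ins for calls to services owned by different product squads), a single user-facing view is assembled from independently built capabilities:

```python
def get_customer_profile(customer_id):
    """Stand-in for a capability owned by the Customer domain squad."""
    return {"id": customer_id, "name": "Acme Corp"}

def get_open_orders(customer_id):
    """Stand-in for a capability owned by the Order Management domain squad."""
    return [{"order": "O-1", "status": "SHIPPED"}]

def customer_overview(customer_id):
    """Compose one consistent view for the user from independently built capabilities."""
    return {
        "profile": get_customer_profile(customer_id),
        "orders": get_open_orders(customer_id),
    }

print(customer_overview("C-1001"))
```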

Squads follow a typical cloud-native development model (based on DevOps and SRE practices) to build and deploy capabilities for consumption. As capabilities and services are developed, their consumption needs are validated and improved continuously (mostly with each iteration).

While domain-driven design (DDD) helps identify capabilities that are common across applications, building coexistence code while incrementally modernizing capabilities results in stickiness of the modernized capability to the legacy application and data until the entire capability (including services and data) is modernized. Therefore, the premise of reusability of capabilities and capability components (microservices) needs to be calibrated and governed until the desired level of reusability is achieved. This also contributes to the complexity of day-2 operations, where the monolithic legacy application, services and data sit on one side and distributed, product-led squads support modernized capabilities on the other.

It is important to understand that the day-2 operations model shifts considerably from the traditional monolith-based support model. Product teams collaborate to align the build-out of capabilities that need to be integrated to compose the target application, which means their iteration plans must be continuously aligned. The day-2 support model for composable applications is different because each capability is supported by its respective squad. Incident management and ITSM processes must be restructured to suit a product- and services-squad model.

Also, the tendency of teams to monitor and manage dependent capabilities (as they did in the old monolith model) must be managed through clearly articulated boundaries for capabilities and capability components. One must also upskill the teams so that they embrace cloud-native models. This is a fundamental change in the day-2 support model, and it takes a significant amount of organizational readiness to move to it.

Managing such programs needs a multi-pronged approach to minimize cross-domain chatter, prioritization challenges and complex dependency issues. SAFe (Scaled Agile Framework) is probably one of the best models for executing such programs. One aspect of program management is to keep a razor focus on the applications and services being modernized and to measure how much has been modernized on a continuous basis; another is to identify complex reusable capabilities and build them via vertically integrated squads (across products) to accelerate progress. Keeping a critical mass of squads working in an application-centric way to build out the capabilities (even if they are owned by other domains) is critical: it ensures knowledge of the current application and data is leveraged to the fullest and that what is being modernized has functional parity with what exists today while meeting desired SLA levels.

Conclusion

While domain-driven design (DDD) helps establish a disciplined approach to decomposing applications and services and an overall design that is well aligned with business domains and processes, it is also important to ensure purity does not get in the way of progress. “Iterate-iterate-iterate” is the mantra, and success depends on how quickly teams can build, learn and refine on a continuous basis. Success also depends on business and SME participation in the above design exercises, without which there will be a tendency to reproduce existing capabilities with minimal transformation.

If you haven’t already, make sure you read Part 1 of this blog series: “Domain-Driven Modernization of Enterprises to a Composable IT Ecosystem: Part 1 – Establishing a framework for a composable IT ecosystem.”
