Optimize CQRS pattern

Optimize CQRS to deliver core systems integration.

Overview

The Optimize CQRS pattern is an IBM Z® specialization and optimization of Command Query Responsibility Segregation (CQRS) for when systems of record are on IBM® z/OS®. Trends across many industries are creating the need to transform business processes to real-time or near-real-time responsiveness, even for use cases that traditionally are satisfied with latent information. To deliver relevant information to business processes in real time, many organizations are implementing event-based architectures and backbones across their enterprises.

The following business needs drive this optimized CQRS solution pattern:

  • The need for real-time information flow at scale for business operations
  • The need to decouple from specific core systems-of-record contexts
  • The need to mitigate business impact to systems of record from increasing volume and the unpredictability of query activity
  • The need to extend integration to enterprise event-based architecture patterns

In particular, performance and latency challenges arise in event-based architectures that try to flow all data events in real time at the large throughput and scale associated with high-volume systems of record.
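For context, the command/query split that CQRS names can be sketched in a few lines of Java. Everything in this sketch (the class, the bank-account domain, the method names) is an illustrative assumption, not part of any IBM product:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal CQRS sketch: commands mutate the write side and emit changes;
// queries read only from a separately maintained, query-optimized view.
public class CqrsSketch {
    // Command side: append-only record of business changes (the system of record).
    static final List<String[]> commandLog = new ArrayList<>();
    // Query side: a read model kept current by a projection of the changes.
    static final Map<String, Long> balanceView = new HashMap<>();

    // Command handler: records the change, then projects it into the read model.
    public static void deposit(String account, long amount) {
        commandLog.add(new String[] { account, Long.toString(amount) });
        applyToReadModel(account, amount);
    }

    // Projection: folds change events into the query-optimized view.
    static void applyToReadModel(String account, long amount) {
        balanceView.merge(account, amount, Long::sum);
    }

    // Query handler: reads only from the read model, never from the command side.
    public static long balance(String account) {
        return balanceView.getOrDefault(account, 0L);
    }
}
```

In the optimized pattern that this article describes, the read model is the intra-day cache and the projection is fed by change capture from z/OS systems of record.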

Solution and pattern for IBM Z®

The approach in this pattern is to first create an optimized aggregation of the relevant information, as opposed to moving all of the raw data from systems of record. Then, you surface that information through various standards-based interfaces, including through event-based mechanisms such as Kafka. This approach is materialized for IBM® z/OS® through the IBM Z® Digital Integration Hub, which has three fundamental technical components:

  • A low-latency, high-throughput intra-day cache that is built on a fast, flexible, Java®-based in-memory compute runtime and database for storing the composed or aggregated information. This in-memory database and runtime also integrates with enterprise-wide Kafka event architectures.
  • Applications that both prime and keep the intra-day caches updated at low latency. The IBM Z® Digital Integration Hub provides tooling to significantly accelerate the creation of these Java-based applications. The applications can be enhanced for use case specificity or used as templates for subsequent use cases.
  • Approaches for integration with core systems of record, such as CICS® application exits, application events, and the z/OS log stream.
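The intra-day cache's role, holding a composed aggregate per business entity rather than raw records, can be sketched in plain Java. This is a hand-rolled illustration under assumed names, not the Digital Integration Hub API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an intra-day cache: raw change events from the system of
// record are folded into one aggregated entry per account.
public class IntraDayCache {
    // Composed view: the aggregate that queries read, not the raw data.
    public static final class Position {
        public final long balance;
        public final int tradeCount;
        Position(long balance, int tradeCount) {
            this.balance = balance;
            this.tradeCount = tradeCount;
        }
    }

    private final Map<String, Position> cache = new ConcurrentHashMap<>();

    // Fold one raw change event into the aggregate for that account.
    public void applyTrade(String account, long delta) {
        cache.merge(account, new Position(delta, 1),
            (old, inc) -> new Position(old.balance + inc.balance,
                                       old.tradeCount + inc.tradeCount));
    }

    // Low-latency query served entirely from memory.
    public Position lookup(String account) {
        return cache.getOrDefault(account, new Position(0, 0));
    }
}
```

The design choice the pattern emphasizes is visible here: queries never touch the raw data or the system of record, only the already-composed aggregate.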

The following diagram depicts a technical component overview:

Advantages

Implementing an optimized CQRS pattern for data-related event interaction with systems of record provides several advantages:

  • Real-time flow of information (currency): Because all of the interaction is memory-to-memory between systems of record and the intra-day caches, the information can be current at very low latency, within a fraction of a second of when the system of record produces it.
  • Self-serve operations: You can significantly reduce the need for custom data extractions and custom data event creation by using built-in Kafka connectors for the intra-day caches.
  • Cost optimization: Inquiry-oriented interaction can use IBM Z specialty engines and avoid recomputing information that is already current in the intra-day caches.
  • Scalability: Large processing throughput can be handled while low latency is still maintained.
  • Workload isolation: The caches absorb spiky and unpredictable inquiry-based workloads, which mitigates the impact to core systems of record.
  • Increased governance and security: Raw data is aggregated at the source before integration with enterprise-wide event infrastructures.

Considerations

When you implement an optimized CQRS pattern, take several considerations into account. For example, evaluate whether your use cases benefit from real-time information flow at scale, whether the information that your use cases need originates on core systems of record that run on IBM® z/OS®, and which options exist for keeping the information in the intra-day caches current. Also consider how to link the IBM Z Digital Integration Hub optimized CQRS structure to broader enterprise-wide architectures through standards-based interfaces.

The use cases that benefit most from an optimized CQRS implementation with the IBM Z Digital Integration Hub share the following characteristics:

  • More business benefit is realized when real-time, low-latency information is shared on the eventing infrastructure.
  • The information that needs to flow originates from core systems of record on z/OS.
  • Secure, highly governed raw data is on z/OS and you can realize business benefit by providing composed or aggregated information versus raw data.
  • A business benefit exists in having an easily accessible and available ‘replay’ of the data-related events that come from the mainframe because the shared information is preserved in an in-memory database.
  • The use case targets information flow for business application purposes instead of machine-learning or AI training purposes.

Reflecting changing information is a key part of the optimized CQRS pattern. A few viable options exist depending on your use case:

  • CICS exits, application event processing, or both to capture changes and write them to the log stream on z/OS or directly to the caches. This action is done asynchronously so as not to impact the online transaction processing (OLTP) application. This option likely has the lowest latency, but some exit processing needs to be created. The IBM Z Digital Integration Hub includes samples that can accelerate implementation of such exits.
  • Organizations that already use IBM CDC tools can either use existing captures or establish new CDC pipelines for the raw data changes, targeting IBM® MQ on z/OS. Applications that populate the caches can then consume the changes through the Java® Message Service (JMS). With this option, you don't need application exits, but the information is likely to be more latent than with the CICS exits option. You must also maintain a new CDC pipeline, and the added captures can potentially impact OLTP.
  • Periodic queries of data that is persisted to disk, which can be useful for data that infrequently changes, is not transactional in nature, and when the data doesn’t have proprietary compression routines.
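The asynchronous character of the first option (the transaction path only records the change; a separate consumer applies it to the cache) can be sketched with standard Java concurrency utilities. All names here are illustrative assumptions, not exit or product code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of asynchronous change capture: the OLTP path only enqueues a
// change record (analogous to an exit writing asynchronously to a log
// stream); a consumer drains the queue into the in-memory cache.
public class AsyncChangeFeed {
    public static final class Change {
        final String key;
        final long value;
        Change(String key, long value) { this.key = key; this.value = value; }
    }

    private final BlockingQueue<Change> feed = new LinkedBlockingQueue<>();
    private final Map<String, Long> cache = new ConcurrentHashMap<>();

    // Transaction path: an O(1) enqueue, so no cache work happens inline
    // and the OLTP application is not impacted.
    public void capture(String key, long value) {
        feed.add(new Change(key, value));
    }

    // Consumer path: in practice a dedicated thread loops on this.
    public int drainOnce() {
        List<Change> batch = new ArrayList<>();
        feed.drainTo(batch);
        for (Change c : batch) cache.put(c.key, c.value);
        return batch.size();
    }

    public Long read(String key) { return cache.get(key); }
}
```

The same decoupling is what keeps the exit-based option low impact: the only cost on the transaction path is recording the change.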

Considerations for connecting to enterprise-wide event structures:

  • In some cases, customers might want to flow the events from the intra-day IBM Z Digital Integration Hub caches to an enterprise-wide architecture for event processing or mediation.
  • With IBM Z Digital Integration Hub, Kafka topics can be automatically configured to be updated whenever information in the caches changes. Those topics can be consumed by cloud-based or distributed infrastructure.
  • When you design recovery for the broader Kafka eventing structure, put optimizations in place to use the information that is in the IBM Z Digital Integration Hub caches instead of relying on the replay logic within Kafka. Depending on the volume of updates, this action can provide a recovery-time benefit.
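The "topics updated whenever cache information changes" behavior can be modeled as a listener registered on the cache, which is the role a Kafka connector plays in the product. This is a simplified stand-in with assumed names; in a real deployment the listener would be a Kafka producer sending to a topic:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch: every cache update is pushed to registered listeners, the way
// a connector would publish each change to a Kafka topic.
public class EventedCache {
    private final Map<String, String> cache = new HashMap<>();
    private final List<BiConsumer<String, String>> listeners = new ArrayList<>();

    // Register a change listener (a Kafka producer, in the real pattern).
    public void onChange(BiConsumer<String, String> listener) {
        listeners.add(listener);
    }

    public void put(String key, String value) {
        cache.put(key, value);
        // Publish the change to every listener after the cache is updated.
        for (BiConsumer<String, String> l : listeners) l.accept(key, value);
    }

    public String get(String key) { return cache.get(key); }
}
```

Because the cache itself retains the current state, a recovering downstream consumer can be reloaded from the cache rather than replaying the topic, which is the recovery-time optimization the last bullet describes.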

Contributors

Mythili Venkatakrishnan
Distinguished Engineer, IBM Z Financial Services Sector CTO IBM