
Publishing to Data Streams with z/TPF support for Java

  

Many z/TPF customers have scenarios where their z/TPF system needs to connect to databases and data streams on other platforms.  For example, your z/TPF system might need to read reference data from a remote relational database and store a local copy for use during transaction processing.  In another scenario, your z/TPF system might need to update remote non-relational databases that feed analytics or machine learning platforms with availability data or customer profiles.  Finally, your z/TPF system might need to publish transaction histories or system statistics to data streams for use by monitoring or analytics platforms.

 

With z/TPF support for Java™, you can use existing Java packages to easily connect your z/TPF system directly to databases and data streams on other platforms.  Java provides a rich ecosystem of software packages, and many of these packages include connectors and clients that you can use to connect to servers running on other systems.  In addition, Java is designed to run across a wide variety of platforms, including z/TPF, without requiring any software changes.  With z/TPF support for Java, you can use many of these existing Java packages as-is to connect your z/TPF system to other systems, read from and write to remote databases, and publish and consume data streams.
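As a minimal sketch of the database case, the following Java class reads reference data over JDBC, which is part of the standard Java class libraries.  The database URL, table, and credentials are hypothetical placeholders, and the sketch assumes that the JDBC driver for your database (PostgreSQL is used here only as an example) is on the classpath; any JDBC-compliant database could be substituted.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReferenceDataReader {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; any JDBC-compliant driver works the same way.
        String url = "jdbc:postgresql://refdata.example.com:5432/reference";

        try (Connection conn = DriverManager.getConnection(url, "tpfuser", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT code, description FROM currency_codes")) {

            // Each row could be written to a local copy for use during transaction processing.
            while (rs.next()) {
                System.out.println(rs.getString("code") + " - " + rs.getString("description"));
            }
        }
    }
}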

 

The z/TPF publish to data streaming platform driver for Java demonstrates one example of how Java can be used to easily connect your z/TPF system to other systems.  In this example, the Apache Kafka Java package is used to publish z/TPF business event messages directly from z/TPF to a Kafka topic on a distributed Kafka cluster.  Apache Kafka is a distributed streaming platform designed to publish and consume streams of data, such as z/TPF business events.
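As a minimal sketch of how the Kafka Java client is configured, the class below builds a producer that connects to a distributed Kafka cluster.  The broker host names and the class name are hypothetical placeholders; the configuration itself uses the standard Apache Kafka producer properties.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class EventProducerFactory {
    // Hypothetical broker addresses; in practice these point at your distributed Kafka cluster.
    private static final String BOOTSTRAP_SERVERS =
            "kafka1.example.com:9092,kafka2.example.com:9092";

    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all waits for the full in-sync replica set, trading some latency for durability.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        return new KafkaProducer<>(props);
    }
}

Because the Kafka client is plain Java, this configuration is the same on z/TPF as it would be on any other platform.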

 

Without z/TPF support for Java, customers typically sent z/TPF business events to intermediate systems over WebSphere MQ or TCP/IP connections.  The intermediate systems received the events from the z/TPF system and used native Kafka APIs to publish the events to Kafka topics on other systems.  By using the Apache Kafka Java package on z/TPF, your z/TPF system can communicate directly with the distributed Kafka cluster, without any intermediate messaging or systems between z/TPF and the Kafka cluster.  A direct-connect solution between z/TPF and the desired systems means you can skip the intermediate systems, resulting in a more reliable solution with fewer components to manage and maintain.  In addition, because the Apache Kafka packages are written in Java, they run on z/TPF without requiring any code changes or porting effort.

 

The z/TPF publish to data streaming platform driver for Java provides a working example that creates a signal event (a type of z/TPF business event), formats the event as a JSON document, and calls a Java service running on the current z/TPF processor to publish the JSON document to a Kafka topic on a distributed Kafka cluster.  More information about this driver, along with the download package, is available on the z/TPF publish to data streaming platform driver for Java download page.
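To illustrate the publish step itself, the sketch below sends a JSON-formatted business event to a Kafka topic.  The topic name and class names (including the EventProducerFactory from the earlier sketch) are hypothetical, and error handling is reduced to a simple callback; the driver itself is described on the download page.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SignalEventPublisher {
    public static void publish(KafkaProducer<String, String> producer, String jsonEvent) {
        // Hypothetical topic name; the value is the business event formatted as a JSON document.
        ProducerRecord<String, String> record =
                new ProducerRecord<>("ztpf.business.events", jsonEvent);

        // send() is asynchronous; the callback reports where the event landed or why it failed.
        producer.send(record, (metadata, exception) -> {
            if (exception != null) {
                System.err.println("Publish failed: " + exception.getMessage());
            } else {
                System.out.println("Published to partition " + metadata.partition()
                        + " at offset " + metadata.offset());
            }
        });
    }
}

In practice, the producer would be created once and reused for many events rather than being built for each publish call.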

 

For another example of how you can take advantage of Java on your z/TPF system and add new business logic in Java to your existing z/TPF applications, see the z/TPF rules engine driver blog entry.  Additional information about the rules engine driver, along with the download package, is available on the z/TPF rules engine driver for Java download page.

 

For more information about calling Java application services from your z/TPF applications or calling z/TPF application services from Java, see Use Java on the z/TPF system in the z/TPF product documentation in IBM Knowledge Center.