Split
Use a Split node when you need to split CSV or JSON data processing, or when a map can run in burst mode to process data in batches. This is typically needed when data processing becomes excessively time consuming.
Processing can become time consuming when you send large volumes of data to the flow input terminal, or memory intensive when complex data validation is performed. The Split node can be used for these and other data processing tasks that can run in parallel.
The Split node splits data records into batches. The batches are processed in parallel and passed to the downstream nodes for further processing. Using the Split node can improve the performance and scalability of your flow design.
The Split node can consume data that a map provides, or data that it reads directly from the flow input terminal or from a file.
If a map provides the data, the map must run in burst mode. Burst mode is configured in the map input card settings: the ‘Fetch as’ property must be set to ‘burst’. The ‘Fetch as’ property can be set to ‘burst’ only if the adapter supports burst mode; adapters that support burst mode include FILE, REST, and all messaging adapters such as Kafka, JMS, and MQ. The ‘Fetch unit’ property controls how many records are processed in each batch. If the fetch unit is not set, it defaults to 0, which means fetch all records in a single batch; set the fetch unit to a value greater than 0 to split the data into multiple batches.
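The following sketch (plain Python, not the flow designer's API; all names are hypothetical and for illustration only) shows the general idea behind the fetch unit setting: records are grouped into batches of the configured size and each batch is processed in parallel, while a fetch unit of 0 keeps all records in a single batch.

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_batches(records, fetch_unit=0):
    """Yield batches of records; a fetch unit of 0 means fetch all records as one batch."""
    records = list(records)
    if fetch_unit <= 0:
        yield records
        return
    for start in range(0, len(records), fetch_unit):
        yield records[start:start + fetch_unit]

def process_batch(batch):
    # Stand-in for the downstream processing each batch would receive.
    return [record.upper() for record in batch]

records = ["r1", "r2", "r3", "r4", "r5"]

# With a fetch unit of 2, the records are split into batches of two
# and the batches are processed in parallel.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(process_batch, split_into_batches(records, fetch_unit=2)))

print(results)  # [['R1', 'R2'], ['R3', 'R4'], ['R5']]
```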