Asynchronous High-Volume Data Processing for Loyalty Point Allocation
You can create a Neo dataflow with a Kafka block to handle bulk data processing asynchronously.
Asynchronous processing allows the system to handle tasks without blocking or delaying others while waiting for results. It enables multiple tasks to run in parallel or to be processed sequentially as they arrive.
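As a minimal, library-free sketch of this idea (the queue and event shape here are hypothetical, not part of Neo), a producer can hand work off to a queue and return immediately while a background loop drains it:

```typescript
// Hypothetical sketch of asynchronous hand-off: the caller enqueues an
// event and returns at once; a background loop processes events in order.
interface PassengerEvent {
  pnr: string;          // booking reference (illustrative field)
  flightNumber: string;
}

const queue: PassengerEvent[] = [];

// Producer side: accept a request without waiting for downstream processing.
function acceptRequest(event: PassengerEvent): void {
  queue.push(event);    // hand off and return immediately
}

// Consumer side: drain the queue sequentially as events arrive.
setInterval(() => {
  const next = queue.shift();
  if (next) {
    console.log(`Processing ${next.pnr} on flight ${next.flightNumber}`);
  }
}, 100);

acceptRequest({ pnr: "ABC123", flightNumber: "AI202" });
```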
Example Scenario
Requirement
An airline brand uses a Passenger Service System (PSS) to send post-flight passenger details to the Capillary system. This data is sent in bulk every hour, reaching up to 1,000 requests per minute. Due to the large volume of data, real-time processing isn’t feasible. The system needs a way to process bulk data asynchronously, validate it, and allocate loyalty points to eligible passengers using Capillary's transaction APIs.
Solution
To address the challenge of processing high-volume post-flight passenger data asynchronously and allocating loyalty points, the following solution is in place:
Step One - Create a Neo Dataflow to Post Data into a Kafka Topic
Create a Neo dataflow with a Kafka block to receive post-flight data from the PSS. To access the dataflow, ensure you have access to the DocDemo org (100737) and to Neo.
In this setup, the Neo dataflow acts as the Kafka producer, pushing the incoming PSS passenger data into a Kafka topic.
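The Kafka block handles the publish internally; as a rough sketch of the producer role it plays, the equivalent logic with the kafkajs client looks like this (broker address, topic name, and payload fields are assumptions):

```typescript
import { Kafka } from "kafkajs";

// Sketch of the producer role the dataflow's Kafka block plays.
// Broker address, topic name, and payload fields are assumptions.
const kafka = new Kafka({
  clientId: "pss-post-flight-producer",
  brokers: ["kafka-broker:9092"],
});
const producer = kafka.producer();

async function publishPassengerEvent(event: {
  pnr: string;
  flightNumber: string;
  flightDate: string;
}): Promise<void> {
  await producer.connect();
  await producer.send({
    topic: "pss-post-flight",          // assumed topic name
    messages: [
      {
        key: event.pnr,                // unique identifier for the event
        value: JSON.stringify(event),  // payload pushed into the topic
      },
    ],
  });
  await producer.disconnect();
}

publishPassengerEvent({
  pnr: "ABC123",
  flightNumber: "AI202",
  flightDate: "2024-05-01",
}).catch(console.error);
```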
Step Two - Create a Neo Dataflow to Validate and Transform Data to Invoke the Add Transaction API
Create a Neo dataflow to validate the input and transform it into the required payload for calling the Add Transaction API.
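A minimal sketch of what that validate-and-transform step might look like, assuming illustrative input fields and a simplified payload (consult the Add Transaction API reference for the exact schema):

```typescript
// Hypothetical shape of one PSS record; field names are assumptions.
interface PssRecord {
  pnr: string;
  mobile: string;
  flightNumber: string;
  fareAmount: number;
}

// Validate the input, collecting every problem rather than failing fast.
function validate(record: PssRecord): string[] {
  const errors: string[] = [];
  if (!record.pnr) errors.push("pnr is required");
  if (!record.mobile) errors.push("mobile is required");
  if (!(record.fareAmount > 0)) errors.push("fareAmount must be positive");
  return errors;
}

// Transform a valid record into a simplified Add Transaction payload.
// The real payload must follow the Add Transaction API schema.
function toTransactionPayload(record: PssRecord) {
  return {
    type: "REGULAR",                        // assumed transaction type
    customer: { mobile: record.mobile },
    billNumber: record.pnr,
    billAmount: String(record.fareAmount),
  };
}

const record: PssRecord = {
  pnr: "ABC123",
  mobile: "919900000000",
  flightNumber: "AI202",
  fareAmount: 4500,
};

const errors = validate(record);
if (errors.length === 0) {
  console.log(JSON.stringify(toTransactionPayload(record)));
} else {
  console.error("Validation failed:", errors);
}
```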
Step Three - Use a Connect+ Template to Pull Data from the Kafka Topic and Invoke the Neo Dataflow for Validation and Transformation
Use the Connect+ template, Ingest Kafka Stream in API, to pull data from the Kafka topic, validate and process it, and allocate loyalty points. The template consists of the following blocks (a sketch of the equivalent end-to-end logic follows the list):
- Connect-to-source-kafka: Connects to the Kafka brokers and pulls data from the configured topic.
- neo-Transformer: Invokes the Neo dataflow created in step two to validate the input and transform it into the required payload for calling the Add Transaction API.
- Connect-to-destination: Sends the transformed data to the Add Transaction API to allocate loyalty points.
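Connect+ wires these blocks together for you; purely as an illustration of the equivalent consumer logic, a kafkajs sketch is shown below. The broker, topic, group ID, and API endpoint are assumptions, and the real neo-Transformer step invokes the step-two dataflow rather than an inline function:

```typescript
import { Kafka } from "kafkajs";

// Illustrative consumer mirroring the three template blocks; Connect+
// implements this internally. All connection details are assumptions.
const kafka = new Kafka({
  clientId: "connect-plus-consumer",
  brokers: ["kafka-broker:9092"],
});
const consumer = kafka.consumer({ groupId: "loyalty-point-allocation" });

const ADD_TRANSACTION_URL = "https://<cluster-url>/v2/transactions"; // replace with your cluster URL

async function run(): Promise<void> {
  // Connect-to-source-kafka: connect to the brokers and pull from the topic.
  await consumer.connect();
  await consumer.subscribe({ topic: "pss-post-flight", fromBeginning: false });

  await consumer.run({
    eachMessage: async ({ message }) => {
      const record = JSON.parse(message.value?.toString() ?? "{}");

      // neo-Transformer: in the template, this step invokes the step-two
      // Neo dataflow; an inline stand-in is used here for illustration.
      const payload = {
        type: "REGULAR",
        customer: { mobile: record.mobile },
        billNumber: record.pnr,
        billAmount: String(record.fareAmount),
      };

      // Connect-to-destination: call the Add Transaction API
      // (uses the global fetch available in Node 18+).
      await fetch(ADD_TRANSACTION_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });
    },
  });
}

run().catch(console.error);
```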
Troubleshooting
| Error | Description |
|---|---|
| Header validation issues | Ensure the headers are present, non-empty, and in the correct format. |
| Data pushing to Kafka fails | Ensure the Kafka block is configured with the required key and value. |
FAQs
- What is Kafka used for in this Neo dataflow?
  Answer: Kafka handles bulk data processing asynchronously by enabling message publishing and consumption between Neo dataflows and Connect+.
- Why is Kafka suitable for bulk processing scenarios?
  Answer: Kafka can absorb high-volume bursts of data and supports asynchronous processing, making it ideal for scenarios where real-time processing is not feasible.
- What happens if the Kafka block configuration is incorrect?
  Answer: Messages fail to publish or are sent to the wrong topic.
- What should I include in the Kafka message key and value?
  Answer: The key is a unique identifier for the event within the topic, and the value contains the information to send to Kafka.
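For example, a message for one passenger event might be keyed and shaped like this (all field names and values are hypothetical):

```typescript
// Hypothetical message: the key uniquely identifies the event within the
// topic; the value carries the passenger data sent to Kafka.
const message = {
  key: "ABC123-AI202-2024-05-01",
  value: JSON.stringify({
    pnr: "ABC123",
    flightNumber: "AI202",
    flightDate: "2024-05-01",
    mobile: "919900000000",
    fareAmount: 4500,
  }),
};
console.log(message);
```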