IBM Support

QRadar: Troubleshooting Pipeline NATIVE_To_MPC messages on Console only



Events are being dropped on Console with Pipeline NATIVE_To_MPC messages


A common cause of this issue is rules that need to be tuned in the QRadar deployment.
If a rule takes too long to execute, it can cause a performance issue. The end result may be events being dropped or routed directly to storage.

Examples of Expensive Rules are:

  • Payload related tests using Pattern or Curly regex-based calls.
    The order in which these tests occur in the rule can make the difference between an efficient rule and an expensive one.
  • Host and Port Profile tests are expensive, especially if the asset and port vulnerability databases are large.
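
The ordering point can be illustrated outside QRadar with an ordinary grep pipeline: the cheap fixed-string filter runs first, so the more expensive regex only sees lines that already matched. The sample log lines are invented for the demo; the same principle applies to the order of tests inside a QRadar rule.

```shell
# Cheap test first: -F does a fixed-string match, so the regex in the
# second grep is only applied to the one line that survives the filter.
printf '%s\n' \
  'sshd[100]: Failed password for root from 10.0.0.5 port 22' \
  'cron[200]: session opened for user root' \
  'sshd[101]: Accepted password for admin from 10.0.0.6 port 22' |
  grep -F 'Failed password' |
  grep -oE 'from [0-9.]+'
# prints: from 10.0.0.5
```

Reversing the order (regex first) gives the same answer but applies the expensive pattern to every line, which is the shape of an expensive rule.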

Diagnosing The Problem

Events are being dropped by the pipeline on the Console, with the following messages:

[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][ -] [-/- -] ---- PIPELINE STATUS -- Initiated From: NATIVE_To_MPC
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][ -] [-/- -] MPC (Filters: 0.00 pc) (Queues: 1.83 pc) (Sources: 0.00 pc)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][ -] [-/- -] 100.00 pc - Queue:Processor1 (250/250)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][ -] [-/- -] 100.00 pc - Queue:from_EP_via_NATIVECOMMS (250/250)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][ -] [-/- -] EP (Filters: 0.44 pc) (Queues: 23.10 pc) (Sources: 0.00 pc)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][ -] [-/- -] 0.61 pc - Filter:CRE EP (609/100000)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][ -] [-/- -] 72.80 pc - Queue:Processor3 (182/250)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][ -] [-/- -] 100.00 pc - Queue:NATIVE_To_MPC (25000/25000)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][ -] [-/- -] EC (Filters: 0.00 pc) (Queues: 0.00 pc) (Sources: 0.00 pc)
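
When reviewing these PIPELINE STATUS blocks by hand, it can help to filter for queues at or near capacity. A minimal sketch using sample lines in the format shown above; the 90 percent threshold is an arbitrary choice for the demo.

```shell
# Print only Queue entries whose fill percentage (field 1) is >= 90.
# Field 4 is the component name, e.g. Queue:NATIVE_To_MPC.
printf '%s\n' \
  '0.61 pc - Filter:CRE EP (609/100000)' \
  '72.80 pc - Queue:Processor3 (182/250)' \
  '100.00 pc - Queue:NATIVE_To_MPC (25000/25000)' |
  awk '$1 >= 90 && $4 ~ /^Queue:/ {print $4, $1 "%"}'
# prints: Queue:NATIVE_To_MPC 100.00%
```

Against a live system, the same awk filter could be fed from something like `grep PipelineStatusMonitor /var/log/qradar.log` (log path assumed to be the standard QRadar location).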

The qradar.log file shows messages such as:

[ecs-ep] [8c229b95-8625-4d34-bf76-fd8976cd98d7/SequentialEventDispatcher] com.q1labs.sem.monitors.ECSQueueMonitor: [WARN] [NOT:0060005100][ -] [-/- -]ECS Queue Monitor has detected a total of 18784693 dropped event(s). 61198 event(s) were dropped in the last 60 seconds. EP Queues: 61198 dropped event(s). MPC Queues: 0 dropped event(s).
[ecs-ep] [8c229b95-8625-4d34-bf76-fd8976cd98d7/SequentialEventDispatcher] com.q1labs.sem.monitors.ECSQueueMonitor: [WARN] [NOT:0000004000][ -] [-/- -]EP Queue [NATIVE_To_MPC] has detected 61198 dropped event(s) in the last 60 seconds and is at 89 percent capacity
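
To pull just the per-interval drop count out of an ECSQueueMonitor warning (for example, to judge whether drops are ongoing or a one-time spike), a small sketch using the message text shown above:

```shell
# Isolate the "N event(s) were dropped" figure, then strip it down to
# the bare number. The message string is taken from the log excerpt.
msg='ECS Queue Monitor has detected a total of 18784693 dropped event(s). 61198 event(s) were dropped in the last 60 seconds.'
echo "$msg" |
  grep -oE '[0-9]+ event\(s\) were dropped' |
  grep -oE '^[0-9]+'
# prints: 61198
```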

Resolving The Problem

What is the findExpensiveCustomRules script?
The findExpensiveCustomRules script queries the QRadar data pipeline and reports the processing statistics from the Custom Rules Engine (CRE). The script enables performance metrics and collects statistics on how many events hit each rule, how long each rule takes to process, and the total and average execution times. When the script completes, it turns off these performance metrics. The findExpensiveCustomRules script is a useful tool for creating on-demand reports on rule performance; it is not a tool for tracking historical rule data in QRadar. The script is typically run when users begin to see events dropped, or routed to storage, between components in QRadar.

Part 1: How to run the findExpensiveCustomRules script

    1. Using SSH, log in to the QRadar Console as the root user.
    2. Optional. Open an SSH session to the QRadar appliance where the ecs-ep process runs. The following appliance types run ecs-ep, and the log files show the hostname of the appliance that is reporting the issue:
      - QRadar 16xx Event Processor appliances
      - QRadar 17xx Flow Processor appliances
      - QRadar 18xx Combination Event/Flow appliances
      - QRadar 21xx Log Manager appliance
      - QRadar 31xx Consoles
    3. Run the findExpensiveCustomRules script, review the report for any rules that are expensive, and tune them as required:

      /opt/qradar/support/findExpensiveCustomRules.sh -d /root

      Note: The -d option specifies the directory where the script writes its output file.

    4. Use WinSCP or an equivalent tool to move the CustomRule-{timestamp} file to your local laptop or workstation.
    5. Use 7-zip or another compression utility to extract the CustomRule-yyyy-mm-dd-seconds.tar.gz file to a .tar file.
    6. Extract the .tar file a second time to access the Expensive Custom Rules report text file. The output contains two files and a reports folder.
    7. Open CustomRule-yyyy-mm-dd-seconds.txt file in any spreadsheet program as a CSV file.
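
If you are extracting on a Linux or macOS workstation instead, steps 5 and 6 can be combined: tar handles the gzip layer and the tar layer in one command. A runnable sketch with a synthetic archive (the sample directory and file names are invented; substitute your real CustomRule-yyyy-mm-dd-seconds.tar.gz):

```shell
# Build a stand-in archive so the snippet runs anywhere.
mkdir -p demo
printf 'RuleName,AverageExecutionTime\n' > demo/CustomRule-sample.txt
tar -czf CustomRule-sample.tar.gz -C demo CustomRule-sample.txt
rm -r demo

# One command replaces the two-step gunzip-then-untar extraction.
tar -xzf CustomRule-sample.tar.gz
head -1 CustomRule-sample.txt
# prints: RuleName,AverageExecutionTime
```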
Part 2: What to look for in the CustomRule report
    1. Sort the AverageExecutionTime and AverageTestTime columns to look for large values. This identifies which rules take more time, on average, to run than others. Expensive rules sort to the top and are typically an order of magnitude larger than rules that run efficiently. Look for values of 0.01 or larger; these are potentially expensive rules that require review.
    2. Alternatively, review the TotalExecutionTime column to find rules that take much longer to complete than other rules.
    3. Typically, rules that use 'Payload Contains' or 'Payload Matches REGEX' tests are expensive.
    4. Review the output of this report to match rule names from Dashboard system notifications to the execution time for each specific rule in QRadar.
    5. If AverageExecutionTime is high but the event count is low, the rule might not be the cause of the issue.
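
The sorting and thresholding in steps 1 and 2 can also be done from the command line. A sketch with an invented three-column report; check the header row of your own report for the actual column order.

```shell
# Keep rows whose AverageExecutionTime (column 3) is >= 0.01, then
# sort them numerically, highest first. Sample data is invented.
printf '%s\n' \
  'RuleName,EventCount,AverageExecutionTime' \
  'Cheap Rule,90000,0.0002' \
  'Payload Regex Rule,85000,0.0450' \
  'Rarely Hit Rule,12,0.0300' |
  awk -F, 'NR > 1 && $3 + 0 >= 0.01 {print $3, $1}' |
  sort -rn
# prints the two suspect rules, most expensive first:
#   0.0450 Payload Regex Rule
#   0.0300 Rarely Hit Rule
```

Note how "Rarely Hit Rule" illustrates step 5: a high average time with only 12 events is less likely to be the cause of pipeline drops.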

Part 3: What to do next
    1. Find the rule in the Offenses tab and disable it. Do not modify or delete it until it is proven to be the problem rule.
    2. Verify in the Dashboard notifications that the warnings have subsided.
    3. If the notifications are still occurring, recheck the CustomRule report to see whether any other entries look suspicious.
    4. If the rule proves to be the problem, either modify it to be less expensive or delete it.

      Note: The sequence in which a rule's tests are laid out can make a difference in performance. Limit the data you search as much as you can before adding a payload test.
Part 4: Alternate issues that can cause Native_to_MPC error messages
  • Verify whether any rules are configured as "Global" rules. In some cases, Global rules can cause excessive events to be processed by the Console, causing the MPC queue to max out. In this case, change any Global rules to Local rules, if possible.
  • Verify how many managed hosts are in the deployment. When there are several managed hosts in the deployment and the Console has only a 1K EPS license, a 5K EPS Console license might be required. In this case, request the 5K EPS Console license from the licensing team.



Document Information

Modified date:
10 May 2019