
Manta Log Viewer

Manta Log Viewer is a module in the Manta Admin UI web application that serves as a tool to view, browse, and filter logs and, especially, normalized errors, from all IBM Manta Data Lineage applications. This module can be found under the Log Viewer tab in Manta Admin UI.

Note that Manta Log Viewer is in feature preview mode and does not include a complete set of logs at the moment. For complete logs, review the file-based logs in Manta Server, CLI, and Service Utility.

Supported Technologies

Currently, the following Manta Data Lineage components are supported and produce logs compatible with all features of Manta Log Viewer.

Log Viewer UI Components

The following subsections describe specific components of the GUI and their functionality.

Source Selection

A log source is a single Manta Data Lineage component or part of a Manta component that is able to generate logs collected and stored by Manta Log Viewer. The sources that can currently provide logs to the log viewer are Workflow Execution, Admin UI, and Orchestration API.


Each source provides specific filtering criteria. When Workflow Execution is selected, a dropdown for selecting a specific workflow execution and a tree menu with the executed scenario appear below the Source dropdown. When Admin UI or Orchestration API is selected as the source, filtering is done by date.

Source: Admin UI, Orchestration API

When this source is selected, a date picker component is provided below the Source dropdown. This component is for choosing a singular date or a range of dates from which logs and errors should be sourced.

Filtering by a single date means sourcing logs from a specific timeframe: 12:00 AM on that date to 12:00 AM the next day.

Filtering by a date range means that only logs from 12:00 AM on the start date to 12:00 AM on the day after the end date are returned.

Reset Button

At the bottom of the date picker component, there is a Reset button. Clicking on this button will reset the component to its default state, which selects the past week starting from the current day.

Context/Input of the Log

Each log belongs to a certain context or input. Logs from Admin UI or Orchestration API can have various contexts.

Source: Workflow Execution

The components described in this subsection are only visible when the value in the Source dropdown is Workflow Execution.

Workflow Executions

On the left side of the screen, there is a collapsible menu with all the executions that are currently stored in the Manta Log Viewer repository. At the top of the menu is a dropdown containing all workflow executions. A workflow execution is either an execution of a defined workflow in Manta Process Manager or an execution of various Manta Data Lineage scenarios using a batch of bash scripts.

In the dropdown menu, the user will see the name of the workflow and the start time of the execution. The name of the workflow is either the name that was previously defined in Manta Process Manager or the default value APPLICATION RUN, which is reserved for script-based executions.


For example, the dropdown might list three executions: two APPLICATION RUNs (i.e., script-based) and one custom workflow from Manta Process Manager.

Scenario Executions

Below the Search by Execution text field is a menu that can be used to filter errors according to the level of the scenario execution. Scenario executions are stored in a tree-like structure where each level provides a different level of filtration. Here are definitions for each of the levels.

Clicking on any level of the menu will filter out logs according to the selection. When a level is selected, all the levels above it are automatically selected; for example, when Analysis is selected, the connection and technology parameters are also used in the filtration.

Each level has statistics about the errors belonging to that particular level. The first value is the number of warnings, the second value is the number of errors, and the third value is the number of fatal errors.


Errors from scenarios related to the extraction of the dev1.properties connection for Oracle

Below the workflow execution dropdown is a field providing full-text searches over all levels of execution. This full-text search also works for partial words and will work for any level; for example, when extra is input, all Extractions are returned as well as all other fields that may contain that character sequence.


Only levels containing extra and their parent levels are shown.

Views

There are two possible views for browsing logs. The first is Group by Error Type, which is selected by default when the log viewer is launched, and the second is Group by Input.

Group by Error Type

The Group by Error Type view serves to present individual errors that occurred during the selected executions, regardless of which input they belong to.

In the center of the screen is the main table containing the actual grouped log records. The logs (as in the log files) are grouped by Issue Type. An issue type is a generalized error log that usually has the same cause and always the same solution and only differs in specific parameters (such as the name of the file or URL). Each issue type belongs in an Issue Category. An issue category is a group of issue types with similar characteristics; for example, DATAFLOW_INPUT_ERRORS will contain all the errors that affect dataflow inputs.

Main Table

The main table has six columns: Issue Category, Issue Type, I (impact), S (severity), A (application), and the number of log entries (not labeled in the column header), which indicates how many specific entries the row groups. The columns can be sorted by clicking on the column name in the table header. The first two columns were defined above. The impact column describes the impact that an error has on the resulting data lineage; the common value is UNDEFINED, which means that it is not certain that the error has no impact, but it is highly likely that it doesn't. The severity column contains the severity of the grouped log records:

WARN means that something unusual happened that may or may not lead to an error.

ERROR means that an erroneous state was reached but the application recovered from it.

FATAL means that an erroneous state was reached and the application was not able to recover from it.

Each row in the table has a downward-pointing arrow that expands the given row.


The severity, impact, and application column values are represented by icons. The exact meaning of an icon is explained in the tooltip text.


Expanded Row

Once the button for expanding a row is clicked, a new table appears. This new table contains specific log records (i.e., warn, error, or fatal logs from a common log file) with a timestamp and possibly an object (e.g., a script file) that was assigned to the log. If the value is empty, no object was assigned to that particular log. The number of log records shown can be chosen using Items Per Page at the bottom. Each log record can be selected; its metadata then appears in the panel to the right of the error table. If a row with a selected log is collapsed again, the log is automatically deselected.


Group by Input

The Group by Input view serves to present all inputs that produced an error during processing, either during extraction, analysis, or export.

Main Table

In the center of the screen is the main table containing the actual inputs and the number of specific errors that occurred during processing. This view works on an inverse basis to the Group by Error Type view. In the Group by Error Type view, there were specific errors and once expanded each affected input appeared, whereas in the Group by Input view, there are affected inputs and once expanded all errors affecting this particular input appear.


When the input name is too long, a tooltip appears showing the full path. When the input path is highlighted and copied, the full path is copied regardless of the ellipsis at the beginning.

Expanded Row

When expanding a row representing a grouped input, script, or job, the application provides grouped errors as they are defined in the Group by Error Type view. Only the errors that occurred when processing this particular input are returned. The rows are then further expandable, where the application can return either Issue Subtypes (discussed further below), if present, or specific logs.

Subtypes

An Issue Subtype is an optional error specification that is used for additional dynamic grouping based on the processed data. The subtypes may not be constant, and they will likely be different with different data. The purpose of this additional "dynamic" grouping is to further specify errors that we are forced to define more generally since they depend on the processed data and there may be an infinite number of different occurrences. A significant use of this grouping is for parsing errors, where the parser may produce errors for an infinite number of constructs or tokens. Using the Issue Subtype, the more abstract parsing error may thus be further grouped based on the erroneous tokens, such as the subtype for SELECT or Missing ';'.

Issue Subtypes are presented in the log viewer after expanding a grouped error in both the Group by Error Type and Group by Input (Script/Job) views. The subtypes may contain unprintable characters (the names are generated from the data, so if the data contains an unprintable character, it may end up in the subtype name); such characters are replaced with the character "�".


For example, a parsing error might have multiple subtypes, such as No viable alternative: ';', No viable alternative: 'EXEC', Unsupported command: ALTER QUEUE, or No viable alternative: 'INSERT'. Subtypes are sorted by the number of occurrences by default.

Subtypes are presented in the GUI as an inner table with the name of the subtype and the number of occurrences being the only columns in the table. If an error has no valid subtypes, no inner table is displayed and the log records are displayed directly.

For the Group by Input (Script/Job) view the behavior is similar. If a grouped error of a particular input has valid subtypes, an inner table with Issue Subtypes that is exactly the same as the inner table in the Group by Error Type view is displayed.


Log Record Attributes

On the right side of the log record table is a panel presenting metadata about the selected log record. The following attributes are presented.

At the bottom of the panel is the View Log Details button, which opens a new modal window.



Log Detail

The Log Detail modal window appears when the View Log Details button is pressed and simulates a log file view of the particular scenario execution in which the error log was generated. A specific log record must be selected for the modal window to appear. The window presents the error log and a few logs before and after it for context, sorted by timestamp. The log message for which the details were loaded is highlighted in red. Using the Load Earlier and Load Later buttons, it is possible to navigate back and forth in the log file.

Filter

At the top of the table are five fields that filter errors using various parameters. Errors in the table are updated after each change to the filter. The Reset Filters button in the upper-right corner of the page will return the filters to their default state.

The full-text Search field accepts input from users and searches the user messages and technical messages in all logs that fit the currently selected filter settings.

The full-text search engine also accepts partial words or regexes.

The full-text search engine does not accept values with leading wildcards such as "*mssql".


Issue Categories

The Issue Categories field is a selection field where multiple values can be chosen. Only errors that belong to the selected categories will appear in the error table. It can also be used to search for a particular category by inputting its name or part of it. If no category is selected, errors from all categories are shown.


Issue Types

The Issue Types field is also a selection field where multiple values can be chosen. It contains all distinct issue category + issue type pairs. Only the errors that fit the selection will appear in the error table. This field is searchable as well. If no issue type is selected, errors of all types are shown.


Severity

The Severity selection field is a multiple selection field containing combinations of severity and impact values that are also displayed in the error table. The following options are possible.

This filter combines the values of error table columns with the headers S (severity) and I (impact).

The values are mapped to icons (each of which are also described in a tooltip).

Impacts are displayed as arrow icons in order of importance: SCENARIO > SINGLE_INPUT > SCRIPT > UNDEFINED > DIAGNOSTIC.


Applications

The Application selection field is a multiple selection field that contains all the distinct applications that currently have logs stored in the Manta Log Viewer repository. Only logs from the selected applications will appear. If no application is selected, errors from all applications are shown.


Only logs from Manta Data Lineage Server are shown.

First Issue Only

The First Issue Only checkbox makes it possible to filter out any duplicate information and, therefore, helps find the root cause of the issue faster. It displays only the first error for each input/script since many of the subsequent errors may simply be a consequence of the first error, stemming from something such as a failed recovery.


Log Package Export

To access this feature, you must have the ROLE_LOG_VIEWER_EXPORT user role.

Manta Log Viewer gives users the option to export logs from selected executions to make it easier to deal with any issues that require assistance from support. The export creates a ZIP archive containing a dump of selected data from the logging database and, optionally, human-readable text log files generated from the selected data. The export can be initiated by clicking on the Export Logs button in the lower-left corner of the Log Viewer page. (To see this button, the user must be logged in and have the required user role.) Note that this may be a lengthy operation, especially for larger databases.



Once the log export process has been initiated, a modal window will appear in which the export will be further specified. The selection mirrors the selection of sources from the main screen. The user may choose to export logs from any source or to export everything that is currently stored in the logging database. Each source will have a further specification mirroring the selection from the main screen. For example, when a user wishes to export logs from an Admin UI source, then a field for selecting a date range appears. When a user wishes to export logs from a Workflow Execution, then it is necessary to select a specific workflow execution to export. If the user wants to export everything, then it is also necessary to select a date range. Only logs from that range will be exported. Both the start and end dates are inclusive (i.e., if 2021/06/10 to 2021/06/13 is selected, then the logs from 2021/06/10 12:00 AM to 2021/06/14 12:00 AM are exported).

The user also has the option to generate human-readable, textual logs by checking the appropriate checkbox.


When exporting from Admin UI, the user has to select a date range for the logs to be exported.


When exporting a workflow execution, the user has to select a specific workflow execution to export.


When exporting all the data stored in the logging database, a date range has to be selected.


After the selection is finalized, clicking the Download button will trigger the download of the generated ZIP archive. Pop-ups need to be enabled for the Manta Service Utility web page (localhost:<port>) for a successful download.


Log Package Import

To access this feature, you must have the ROLE_LOG_VIEWER_IMPORT user role.

Manta Log Viewer gives the user the option to import logs that were previously exported from the same or a different instance of the log viewer. Importing a log package may be useful when log data needs to be kept for later analysis without being stored permanently in the logging database. Only valid packages generated by the same or a different instance of Manta Log Viewer can be imported.

Importing a log package will truncate all the data stored in Manta Log Viewer.

Importing a log package will truncate all the data currently stored in the logging database. If the data is important, back it up first, for example, by exporting the required executions or all of the data.

The import can be initiated by clicking on the Import Logs button in the lower-left corner of the Log Viewer page. (To see this button, the user must be logged in and have the required user role.) Note that this may be a lengthy operation, especially for larger log packages.

This feature is primarily intended for Manta support teams, but it can also be useful to the end user for reviewing older Manta logs or logs from a different Manta instance.


Once the log import process has been initiated, a modal window appears with an interface for selecting the file that should be uploaded. This can be done either by dragging and dropping the log package in the designated area or by selecting the file in the browse window by clicking on Browse. Only ZIP files are accepted; any other files, as well as ZIP files that have been tampered with, will be rejected by the application.

Once a file is selected, it can be uploaded by clicking the Upload button.


Automated Pruning

Manta Log Viewer and its log repository provide background functionality to prune old and outdated logs and errors. By default, logs expire after exactly 14 days. Once the workflow execution expires, its logs are pruned in batches in the background and it is no longer accessible from the front end.

Changing the Pruning Delay

The default value can be changed in the Manta Admin UI configuration by following these steps.

  1. Open the following screen in Manta Admin UI: Configurations > Admin UI > System > Log Viewer.

  2. Change the value of logviewer.pruneafter to the desired value in days. The value can be a decimal; for example, setting the value to 0.5 will cause the log viewer to prune logs that are older than 12 hours.

Changing the Pruned Batch Size

The default value can be changed in the Manta Admin UI configuration by following these steps.

  1. Open the following screen in Manta Admin UI: Configurations > Admin UI > System > Log Viewer.

  2. Change the value of logviewer.prune.batchSize to the desired size. A smaller batch is pruned more quickly on its own, but more batches are needed, so pruning everything takes more time (better for slower systems and those without very large numbers of logs). A larger batch takes longer to prune on its own, but fewer batches are needed to prune everything (better for faster systems and those with larger numbers of logs).

Changing the Pruning Interval

The default value can be changed in the Manta Admin UI configuration by following these steps.

  1. Open the following screen in Manta Admin UI: Configurations > Admin UI > System > Log Viewer.

  2. Change the value of logviewer.prune.interval to the desired value in seconds. The default value is two minutes, which is optimal.
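For illustration only, if these Log Viewer settings are kept in a Java properties file (for example, the logViewer.properties file referenced in the ActiveMQ Artemis troubleshooting section below; this is an assumption, as the exact storage location may differ between versions), the pruning-related entries might look like the following sketch. The batch size shown is a hypothetical placeholder; the other two values are the documented defaults.

# Prune logs older than 14 days (decimal values are allowed, e.g., 0.5 = 12 hours)
logviewer.pruneafter=14
# Hypothetical batch size; tune it to your system as described above
logviewer.prune.batchSize=1000
# Run the pruning job every 120 seconds (two minutes)
logviewer.prune.interval=120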

Repository Size Limit

The Manta Log Viewer database has an internal size limit to prevent uncontrolled growth that could lead to a full hard drive. There are two size limits, which are handled differently once reached.

Soft Size Limit

The soft size limit is a customizable size limit; when it is reached, the log viewer repository triggers internal processes that prune older logs to make space for new ones. If an execution is ongoing, the log viewer will continue to accept logs and delete old ones based on the following rules.

The default soft size limit is 50GB.

The real-world behavior of this limit is that the size of the database keeps increasing once it is reached (if the execution is still ongoing). Once the execution is finished, the database will compact and the size will approach or drop below the soft limit again.

Changing the Soft Size Limit

  1. Open the following screen in Manta Admin UI: Configurations > Admin UI > System > Log Viewer.

  2. Change the value of logviewer.repository.softSizeLimit to the desired value in gigabytes. The given number can be a decimal. The default value is 50GB.

Hard Size Limit

The hard size limit is a customizable limit; when it is reached, the Log Viewer repository will not accept any more logs and all incoming logs are discarded. The fact that this limit has been reached is logged in a service utility log file (<MANTA_HOME>/serviceutility/logs/manta-admin-gui.log). To get below the limit once again:

The default value is 70GB.

Changing the Hard Size Limit

  1. Open the following screen in Manta Admin UI: Configurations > Admin UI > System > Log Viewer.

  2. Change the value of logviewer.repository.hardSizeLimit to the desired value in gigabytes. The given number can be a decimal. The default value is 70GB.
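As with the pruning settings, the following is a purely illustrative sketch of what the size limit entries might look like in a properties file; the values shown are the documented defaults (50 GB soft, 70 GB hard), and decimals are allowed.

# Soft limit: start pruning older logs once the repository reaches about 50 GB
logviewer.repository.softSizeLimit=50
# Hard limit: stop accepting new logs once the repository reaches 70 GB
logviewer.repository.hardSizeLimit=70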

Additional Customization for Handling Size Limits

These additional properties may, in certain cases, make the handling of limits more effective.

Connection Idle Timeout

The idle timeout setting specifies how long a connection is persisted before it is eligible for deletion from the connection pool. If there are currently no incoming logs or there is no usage of the log viewer, the connections to the database are evicted. Having no active connections helps the H2 database used by the log viewer to delete unreachable data. A lower timeout may hamper the performance of the log viewer due to the frequent need to create new connections instead of reusing them.

Changing the Connection Idle Timeout Property

  1. Open the following screen in Manta Admin UI: Configurations > Admin UI > System > Log Viewer.

  2. Change the value of logviewer.repository.connectionIdleTimout to the desired value in milliseconds. The default value is 10 seconds.

Time between Eviction Runs

The connection pool periodically checks whether any of the connections have been idle for too long and evicts those that have from the pool. More frequent runs of this check may provide more effective size limiting, but they can also hamper performance due to the frequent need to create new connections instead of reusing them.

Changing the Eviction Runs Property

  1. Open the following screen in Manta Admin UI: Configurations > Admin UI > System > Log Viewer.

  2. Change the value of logviewer.repository.timeBetweenEvictionRuns to the desired value in milliseconds. The default value is five seconds.

Max Compact Time

The compact time is the length of time the H2 database spends compacting the database file. (When old data is removed from the database, it is not physically removed right away; that only happens during compaction.) This property sets the maximum amount of time that the H2 database spends on this activity. Increasing this value may impact performance while the database is busy compacting, but it may make the size limit handling more effective.

Changing the Max Compact Time Property

  1. Open the following screen in Manta Admin UI: Configurations > Admin UI > System > Log Viewer.

  2. Change the value of logviewer.repository.maxCompactTime to the desired value in milliseconds. The default value is five seconds.
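Again as an illustrative sketch only, the three properties described in this section might appear in a properties file as follows; the values shown are the documented defaults, expressed in milliseconds.

# Evict a pooled connection after it has been idle for 10 seconds
logviewer.repository.connectionIdleTimout=10000
# Check for idle connections every 5 seconds
logviewer.repository.timeBetweenEvictionRuns=5000
# Spend at most 5 seconds compacting the database
logviewer.repository.maxCompactTime=5000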

ActiveMQ Artemis

ActiveMQ Artemis is a messaging platform used by Manta Log Viewer to process logs from all Manta Data Lineage applications. Its instance runs embedded in Manta Admin UI.

The messaging broker listens on one port that is set during installation. This can be changed in the artemis.properties file in the mantaflow/serviceutility/webapps/manta-admin-gui/WEB-INF/conf/ and mantaflow/cli/scenarios/manta-dataflow-cli/conf/ directories. (If the files do not exist, they can be created and entries that should be changed can be copied from the default configuration.)
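As a minimal sketch, an artemis.properties override in either of these directories might contain just the port entries shown below. The property names and the example values 61616 and 61617 are taken from the broker error messages later in this section; the actual ports are set during installation and may differ in your environment.

# Port the embedded ActiveMQ Artemis broker listens on for secured connections
artemis.server.port=61616
# Port used for unsecured consumer connections
artemis.consumer.port=61617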

Common Issues

This section should help troubleshoot some common issues that may occur while using ActiveMQ Artemis.

Unsuccessful Broker Startup

The most common reason for an unsuccessful startup of an ActiveMQ Artemis broker is that the port configured to be used by this broker is currently being used by another broker. The fact that this error has occurred can be confirmed by checking the log file at mantaflow/serviceutility/logs/manta-admin-gui.log. The log will contain at least one of the following messages.

2021-04-18 18:24:08.767 [main] ERROR eu.profinit.manta.admin.gui.log.viewer.logic.broker.ArtemisBroker [Context: Manta Admin UI startup - 2021-04-18T18:24:08.673+0200]
ARTEMIS_BROKER_ERRORS UNAVAILABLE_PORT
User message: A secured port with value 61616 is unavailable because another process is using it. Broker will not start and will not accept any logs.
Technical message: Unable to create a secured port in property 'artemis.server.port' with value 61616 because another process is using it. Broker will not start and will not accept any logs.
Solution: Please verify that no other process is running on ports defined in <MANTA_HOME>/serviceutility/webapps/manta-admin-gui/WEB-INF/classes/logViewer.properties. Furthermore, please verify that no other Manta application (as well as Service Utility) is not running on these ports. If all of these ports are different and this error still persists, please contact Manta Support at portal.getmanta.com and submit a support bundle/log export.
Impact: UNDEFINED


2021-04-18 18:24:08.831 [main] ERROR eu.profinit.manta.admin.gui.log.viewer.logic.broker.ArtemisBroker [Context: Manta Admin UI startup - 2021-04-18T18:24:08.673+0200]
ARTEMIS_BROKER_ERRORS UNAVAILABLE_PORT
User message: A unsecured port with value 61617 is unavailable because another process is using it. Broker will not start and will not accept any logs.
Technical message: Unable to create a unsecured port in property 'artemis.consumer.port' with value 61617 because another process is using it. Broker will not start and will not accept any logs.
Solution: Please verify that no other process is running on ports defined in <MANTA_HOME>/serviceutility/webapps/manta-admin-gui/WEB-INF/classes/logViewer.properties. Furthermore, please verify that no other Manta application (as well as Service Utility) is not running on these ports. If all of these ports are different and this error still persists, please contact Manta Support at portal.getmanta.com and submit a support bundle/log export.
Impact: UNDEFINED

There are two options for dealing with this issue.

Change the ActiveMQ Artemis Port

If there is a process that you wish to keep running on the port set during installation, you will have to change the configured port for the broker. This can be done either manually in the configuration files (only for advanced users) or by reinstalling and choosing a different port.

To change the port manually:

  1. Open (or create) the artemis.properties files in the mantaflow/serviceutility/manta-admin-ui-dir/conf/ and mantaflow/cli/scenarios/manta-dataflow-cli/conf/ directories in a text editor.

  2. Add/change the property artemis.server.port to the desired value.

It is strongly recommended that non-advanced users change this port using the installation module.

Kill the Process Blocking Port

If you wish to use the configured port, the processes that are currently blocking this port will need to be killed. Sometimes just restarting the machine can solve this issue.

To kill a process manually in Windows (for advanced users):

  1. Press the Windows key + r.

  2. Type cmd and press Enter.

  3. Type netstat -ano | findstr "<PORT>" into the command line. Replace <PORT> with the artemis.server.port property value. Press Enter.

  4. You should see one or more rows of text with five columns of values. The last column represents the ID of the process currently blocking this port. If the command did not yield any results, try restarting the machine and running Manta Admin UI again; the issue might be solved by a restart.

  5. Type taskkill /F /PID <PID> in the command line. Replace <PID> with the value yielded in the previous steps. Press Enter. This will kill the process that is currently blocking the port.
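Putting steps 3 through 5 together, a hypothetical session might look like the following; the port 61616, the PID 12345, and the netstat output line are placeholders for illustration only.

netstat -ano | findstr "61616"
  TCP    0.0.0.0:61616    0.0.0.0:0    LISTENING    12345
taskkill /F /PID 12345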

To kill the process manually in Linux:

  1. Press CTRL + ALT + T.

  2. Type netstat -anp | grep ":<PORT>" in the shell. Replace <PORT> with the artemis.server.port property value. (The -p flag is needed to display process IDs; you may need to run the command with sudo to see processes owned by other users.) Press Enter.

  3. You should see one or more rows of output. The last column shows the process that is currently blocking this port, in the form <PID>/<program name>. If the command did not yield any results, try restarting the machine and running Manta Admin UI again; the issue might be resolved by a restart.

  4. Type kill -9 <PID> in the shell. Replace <PID> with the value yielded in the previous steps. Press Enter. This will kill the process that is currently blocking the port.
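A similarly hypothetical Linux session for steps 2 through 4, using the same placeholder port and PID:

netstat -anp | grep ":61616"
tcp6       0      0 :::61616        :::*        LISTEN      12345/java
kill -9 12345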

The Broker Is Running but Does Not Receive Any Logs

This issue can manifest as not being able to see logs from previously or currently run scans in Manta Log Viewer. The reasons why something like this may occur vary, so the basic steps for self-troubleshooting should be:

Check the Manta CLI Log

  1. Navigate to the mantaflow/cli/log/logging.log file and open it in a text editor.

  2. Verify that there are no errors regarding the connection.

  3. If there are, proceed to the next section.

Check the Manta Admin UI Log

  1. Navigate to the mantaflow\serviceutility\logs\manta-admin-gui.log file and open it in a text editor.

  2. Verify that there are no errors or warnings regarding ActiveMQ Artemis; if there are, they will contain guidance on how to resolve them.

    1. This can be done by searching for the phrase "org.apache.activemq.artemis" in the file.

Check the Available Disk Space

Verify that the usage rate for the hard drive that you have Manta Data Lineage installed on is 90% or less. If less than 10% of the space is available, the logs will not be accepted. This error should be logged in the Manta CLI log and the Manta Admin UI log.

By default, the log viewer will not accept any new logs if the disk is more than 90% full. This is a safety measure for cases when there is a heavier message load on the embedded ActiveMQ Artemis instance, during which incoming messages may be stored on the disk rather than in memory. If the available disk space drops below 10%, the risk of completely filling up the disk is greater. The error will appear in the log files as follows.

2021-10-25 11:06:17.923 [Thread-1 (ActiveMQ-server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@4b816904)] WARN  org.apache.activemq.artemis.core.server AMQ222210: Free storage space is at 490.1GB of 506.3GB total. Usage rate is 96.8% which is beyond the configured <max-disk-usage>. System will start blocking producers.
2021-10-25 11:06:18.023 [main] ERROR eu.profinit.manta.connector.jms.MantaJmsConnector [Context: Manta Admin UI startup - 2021-10-25T11:06:11.880+0200] An unrecoverable Artemis error - ADDRESS_FULL - occurred in the producer. The connection will not be re-established for this thread.
2021-10-25 11:06:18.024 [main] ERROR eu.profinit.manta.connector.jms.MantaJmsConnector [Context: Manta Admin UI startup - 2021-10-25T11:06:11.880+0200] Message producer for MantaJmsDestination{name='loggingQueue', type=QUEUE} has encountered an unrecoverable error. The connection will not be recovered for this thread.
javax.jms.JMSException: AMQ219058: Address "loggingQueue" is full. Message encode size = 2,066B
    at org.apache.activemq.artemis.core.client.impl.ClientProducerCreditsImpl.afterAcquired(ClientProducerCreditsImpl.java:57) ~[artemis-core-client-2.18.0.jar:2.18.0]
    at org.apache.activemq.artemis.core.client.impl.AbstractProducerCreditsImpl.acquireCredits(AbstractProducerCreditsImpl.java:79) ~[artemis-core-client-2.18.0.jar:2.18.0]
    at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.sendRegularMessage(ClientProducerImpl.java:294) ~[artemis-core-client-2.18.0.jar:2.18.0]
    at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.doSend(ClientProducerImpl.java:268) ~[artemis-core-client-2.18.0.jar:2.18.0]
    at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:143) ~[artemis-core-client-2.18.0.jar:2.18.0]
    at org.apache.activemq.artemis.core.client.impl.ClientProducerImpl.send(ClientProducerImpl.java:125) ~[artemis-core-client-2.18.0.jar:2.18.0]
    at org.apache.activemq.artemis.jms.client.ActiveMQMessageProducer.doSendx(ActiveMQMessageProducer.java:483) ~[artemis-jms-client-2.18.0.jar:2.18.0]
    at org.apache.activemq.artemis.jms.client.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:193) ~[artemis-jms-client-2.18.0.jar:2.18.0]
    at eu.profinit.manta.connector.jms.MantaJmsConnector.sendMessageWithRecovery(MantaJmsConnector.java:191) ~[manta-connector-jms-34-MINOR-20211020.091842-5.jar:?]
    at eu.profinit.manta.platform.logging.appender.LogViewerManager.send(LogViewerManager.java:86) ~[manta-platform-logging-appender-34-MINOR-20211020.091842-5.jar:?]
    at eu.profinit.manta.platform.logging.appender.LogViewerAppender.append(LogViewerAppender.java:57) ~[manta-platform-logging-appender-34-MINOR-20211020.091842-5.jar:?]
    at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156) ~[log4j-core-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:129) ~[log4j-core-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:120) ~[log4j-core-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84) ~[log4j-core-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:543) ~[log4j-core-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:502) ~[log4j-core-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:485) ~[log4j-core-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:460) ~[log4j-core-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:82) ~[log4j-core-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.core.Logger.log(Logger.java:161) ~[log4j-core-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(AbstractLogger.java:2198) ~[log4j-api-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.spi.AbstractLogger.logMessageTrackRecursion(AbstractLogger.java:2152) ~[log4j-api-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2135) ~[log4j-api-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:2022) ~[log4j-api-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1891) ~[log4j-api-2.13.1.jar:2.13.1]
    at org.apache.logging.log4j.spi.AbstractLogger.info(AbstractLogger.java:1280) ~[log4j-api-2.13.1.jar:2.13.1]
    at eu.profinit.manta.platform.logging.api.logging.Logger.info(Logger.java:605) ~[manta-platform-logging-api-34-MINOR-20211020.091842-5.jar:?]
    at eu.profinit.manta.connector.http.client.AbstractHttpsProvider.getMergedTrustManager(AbstractHttpsProvider.java:343) ~[manta-connector-http-34.0.0.jar:?]
    at eu.profinit.manta.connector.http.client.AbstractHttpsProvider.generateContextWithTrustStore(AbstractHttpsProvider.java:192) ~[manta-connector-http-34.0.0.jar:?]
    at eu.profinit.manta.configuration.logic.factories.FlowServerRequestFactory.afterPropertiesSet(FlowServerRequestFactory.java:169) ~[manta-admin-gui-configuration-logic-34-MINOR-20211020.091842-5.jar:?]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1858) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1795) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:594) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:516) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:324) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:322) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1307) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1227) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:886) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:790) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:228) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1361) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1208) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:556) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:516) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:324) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:322) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1307) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1227) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:886) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:790) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.autowireConstructor(ConstructorResolver.java:228) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.autowireConstructor(AbstractAutowireCapableBeanFactory.java:1361) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1208) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:556) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:516) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:324) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:322) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:276) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1307) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1227) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.resolveFieldValue(AutowiredAnnotationBeanPostProcessor.java:657) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:640) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:119) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessProperties(AutowiredAnnotationBeanPostProcessor.java:399) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1425) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:593) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:516) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:324) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:322) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:202) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:409) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1341) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1181) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:556) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:516) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:324) ~[spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) [spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:322) [spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:207) [spring-beans-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.context.support.AbstractApplicationContext.initMessageSource(AbstractApplicationContext.java:733) [spring-context-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:539) [spring-context-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:401) [spring-web-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:292) [spring-web-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:103) [spring-web-5.2.13.RELEASE.jar:5.2.13.RELEASE]
    at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4766) [catalina.jar:9.0.52]
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5230) [catalina.jar:9.0.52]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) [catalina.jar:9.0.52]
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:726) [catalina.jar:9.0.52]
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:698) [catalina.jar:9.0.52]
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:696) [catalina.jar:9.0.52]
    at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:1024) [catalina.jar:9.0.52]
    at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1911) [catalina.jar:9.0.52]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75) [tomcat-util.jar:9.0.52]
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:123) [?:?]
    at org.apache.catalina.startup.HostConfig.deployWARs(HostConfig.java:825) [catalina.jar:9.0.52]
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:475) [catalina.jar:9.0.52]
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1618) [catalina.jar:9.0.52]
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:319) [catalina.jar:9.0.52]
    at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123) [catalina.jar:9.0.52]
    at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423) [catalina.jar:9.0.52]
    at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366) [catalina.jar:9.0.52]
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:946) [catalina.jar:9.0.52]
    at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:835) [catalina.jar:9.0.52]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) [catalina.jar:9.0.52]
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1396) [catalina.jar:9.0.52]
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1386) [catalina.jar:9.0.52]
    at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
    at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75) [tomcat-util.jar:9.0.52]
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:145) [?:?]
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:919) [catalina.jar:9.0.52]
    at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:263) [catalina.jar:9.0.52]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) [catalina.jar:9.0.52]
    at org.apache.catalina.core.StandardService.startInternal(StandardService.java:432) [catalina.jar:9.0.52]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) [catalina.jar:9.0.52]
    at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:927) [catalina.jar:9.0.52]
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183) [catalina.jar:9.0.52]
    at org.apache.catalina.startup.Catalina.start(Catalina.java:772) [catalina.jar:9.0.52]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:78) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:567) ~[?:?]
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:345) [bootstrap.jar:9.0.52]
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:476) [bootstrap.jar:9.0.52]
Caused by: org.apache.activemq.artemis.api.core.ActiveMQAddressFullException: AMQ219058: Address "loggingQueue" is full. Message encode size = 2,066B
    ... 136 more

To change the property:

  1. Open (or create) the artemis.properties files found in <MANTA_HOME>/serviceutility/manta-admin-ui-dir/conf/ and mantaflow/cli/scenarios/manta-dataflow-cli/conf/ in a text editor.

  2. Change/add the property artemis.max.disk.usage to the desired value (a value of 100 is allowed). This value sets the maximum percentage of disk space that can be used.
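For illustration only, raising the threshold in the artemis.properties files might look like the entry below; 95 is an arbitrary example value, not a recommendation.

# Keep accepting logs until the disk is 95% full
artemis.max.disk.usage=95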

An Unknown Issue

If the log files give no clear reason why the log viewer is not receiving any logs, contact our support team and attach the log files mentioned above (the Manta CLI log and the Manta Admin UI log).

Recovering a Corrupted Logging Database

The logging database may become corrupted due to external circumstances; for example, this can occur when a process is killed while performing a crucial IO operation over the database (typically a larger one). Therefore, the validity of the database is checked each time Admin UI is started up. If the database is found to be corrupted, it is backed up to mantaflow/serviceutility/webapps/manta-admin-gui/WEB-INF/classes/manta_logs_backup.h2.db. There can be multiple backups.

Note that this method is not 100% effective and sometimes the database is unrecoverable.
Caution should be exercised regarding available disk space while performing the recovery. Expect the generated SQL script to be similar in size to the backed up database.

The database can be recovered using a recovery tool provided by the H2 database. To use it in this context, follow these steps.

  1. Shut down the Manta Admin UI service or application, if it is currently running.

  2. Open the terminal and navigate to the mantaflow/serviceutility/webapps/manta-admin-gui/WEB-INF/classes/ folder.

  3. Delete the following files in the folder mantaflow/serviceutility/webapps/manta-admin-gui/WEB-INF/classes/: manta_logs.h2.db, manta_logs.trace.db, and manta_logs.lock.db (if present).

  4. Execute the following command in the open terminal: java -cp ../lib/h2-1.4.200.jar org.h2.tools.Recover. This will create recovery SQL scripts for each database in that folder.

  5. Execute the following command in the terminal: java -cp ../lib/h2-1.4.200.jar org.h2.tools.RunScript -url jdbc:h2:split:<ABSOLUTE_PATH_TO_THIS_FOLDER>manta_logs;MV_STORE=FALSE; -script manta_logs_backup.h2.sql -checkResults. Replace <ABSOLUTE_PATH_TO_THIS_FOLDER> with the absolute path to the …/classes/ folder, for example, "C:\mantaflow\serviceutility\webapps\manta-admin-gui\WEB-INF\classes\". A consolidated example of steps 4 and 5 is shown after these steps.

  6. The command should create a new manta_logs database with the recovered logs.

  7. Start Manta Admin UI.
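For reference, assuming a hypothetical default installation under C:\mantaflow, the commands from steps 4 and 5 might look like the following; the path is an example only and must match your environment, and the JDBC URL is quoted here so that the semicolons survive on the command line.

cd C:\mantaflow\serviceutility\webapps\manta-admin-gui\WEB-INF\classes
java -cp ../lib/h2-1.4.200.jar org.h2.tools.Recover
java -cp ../lib/h2-1.4.200.jar org.h2.tools.RunScript -url "jdbc:h2:split:C:\mantaflow\serviceutility\webapps\manta-admin-gui\WEB-INF\classes\manta_logs;MV_STORE=FALSE;" -script manta_logs_backup.h2.sql -checkResults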