Troubleshooting

To identify and resolve problems with Maximo® Monitor, you can use the troubleshooting and support information.

The Maximo Application Suite help guide is available.

Self-help

Before you report your problem to IBM Support, see whether the following support options resolve your problem:
  • Ensure that your service is available and is not undergoing any maintenance work.
  • For Maximo Application Suite Dedicated, the named customer contacts for your service offering receive notifications for any planned maintenance that impacts the availability of your service.
  • Search the IBM Support Community knowledge bases and forums for answers to your question or issue.
  • See Troubleshooting Maximo Application Suite issues.
Review the following table for common issues and troubleshooting details:
Table 1. Common issues and troubleshooting details
A custom function is not in the functions catalog
To troubleshoot the issue, complete the following steps:
  1. Ensure that you registered the function.
  2. If you developed the function locally, ensure that you pushed your code to the external repository before you registered the function.
  3. Check whether you renamed, deleted, or moved your custom function.
Maximo Monitor does not validate that your custom function code is error-free when you register it.
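
For example, a minimal registration from a local Python environment, assuming the iotfunctions library and a credentials file that you downloaded from Maximo Monitor, might look like the following sketch. The module path, class name, and file name are illustrative.

# Minimal registration sketch; module path, class name, and file name are illustrative.
import json
from iotfunctions.db import Database
from custom.hello_world import HelloWorldTransformer  # your custom function class

with open('monitor-credentials.json', encoding='utf-8') as file:
    credentials = json.load(file)

db = Database(credentials=credentials)

# Registration stores the function metadata in the catalog. Maximo Monitor does
# not validate that the code in the external repository is error-free.
db.register_functions([HelloWorldTransformer])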

A function is not available for streaming data metrics

In Maximo Monitor 8.9, streaming data metrics do not support all functions and do not support any custom functions.

In Maximo Monitor 8.10 and later, streaming data metrics do not support all functions and support custom functions only for ONNX models.

A custom function is in the catalog but Maximo Monitor is unable to access the function in the external repository
If Maximo Monitor is unable to download a custom function, it excludes the metric that uses the function from the pipeline.
To troubleshoot the issue, complete the following steps:
  1. Ensure that the repository is externally accessible.
  2. Ensure that the PACKAGE_URL value is set up. You can update the PACKAGE_URL in your custom function, for example, PACKAGE_URL = 'git+https://github.com/jones/function@prod'. Maximo Monitor uses the PACKAGE_URL value to locate the code in the external repository. A module-level sketch is shown after these steps.
  3. If you developed the function locally, push your code to the external repository before you register the function.
  4. Register the function again.
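
As a sketch, PACKAGE_URL is typically declared as a module-level variable in the Python module that contains your custom function so that Maximo Monitor can install the code from the external repository when the pipeline runs. The repository URL, module name, and class in the following example are illustrative.

# custom/hello_world.py (illustrative module in your external repository)
from iotfunctions.base import BaseTransformer

# Maximo Monitor uses this URL to pip install the module.
# The branch name after the @ symbol must exist in the repository.
PACKAGE_URL = 'git+https://github.com/jones/function@prod'

class HelloWorldTransformer(BaseTransformer):
    def __init__(self, input_item, output_item):
        super().__init__()
        self.input_item = input_item
        self.output_item = output_item

    def execute(self, df):
        # Copy the input data item to the output data item.
        df[self.output_item] = df[self.input_item]
        return df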
Error in the processing of a function
If an error occurs, an analysis stopped message is displayed on the Data tab of the device type. Review the message that is displayed in the pipeline log file and identify the function that caused the error.
  1. Click the warning icon and then click Show log. You can download the log to view it offline. The log shows the function that the pipeline was running when an error occurred, for example, The pipeline failed at the “PythonExpression” stage.
  2. Find the calculation that uses the function. Scroll through the calculated data items to see whether any of these data items display an error. If none display an error, check the configuration of each data item until you find one that uses the function.
  3. Fix the error in your local code and push the updated code to your repository.
  4. Register the function again.
The following common Python errors can occur; a short snippet after this list shows how some of them arise:
Key error
A function tried to access a key that does not exist in a dictionary or in a data frame. The missing key is typically the name of a data item, which indicates that the error is in the calculation of that data item.
Index error
A function tried to access an index that is outside the bounds of a list. A common index error results from an operation in a custom function that is cutting up a data frame.
Name error
A function cannot find the identifier. For example, a device ID is not found. If you are referring to data items in a pandas expression, refer to the data item name by using a quoted string, for example, 'pressure'.
Attribute error
A function tried to access an attribute of a device that does not exist.
Type error
An argument to the function is not of the correct type. For example, in your code, a string is used instead of an integer.
Syntax error
An incorrect statement exists, for example, a statement that adds an extra bracket in the code.
Value error
An argument to a function is of the correct type but the value is invalid. For example, the value is an empty string.
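
As a short illustration, each of the following lines raises one of these errors when it runs. The data item names are illustrative.

import pandas as pd

df = pd.DataFrame({'pressure': [1.2, 3.4], 'temperature': [20.0, 21.0]})

df['speed']             # Key error: the data item 'speed' does not exist in the data frame
df['pressure'].iloc[5]  # Index error: index 5 is outside the bounds of a two-row column
df['pressure'] + 'bar'  # Type error: a string is combined with a numeric column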
Pipeline does not appear to start
If more than 5 minutes pass and no data is processed, the last run of the pipeline might still be in progress. A pipeline is scheduled to start every 5 minutes, but the pipeline does not start until the previous run is complete. Depending on the amount of data that is processed and the type of calculations, the pipeline might take 1 to 2 hours to complete. Alternatively, you might see no data for your calculation because the calculation was not triggered. For example, if you use an alert built-in function to generate a new data item when a threshold is exceeded, the data item is not generated unless the condition is met.
Pipeline does not appear to start for a hierarchy node
If more than 5 minutes pass and no data is processed, check whether the calculation includes a data item of a lower-level hierarchy node or device as input to the calculation. If the data item includes a grain that is not available on the hierarchy node, the calculation does not produce any data.

To troubleshoot this issue, add the grain to the hierarchy node.

  1. In a device type, click the gear icon and then click Manage grains.
  2. Click Add new.
  3. Configure a grain. Use the same time basis as the input data item.
  4. Click Create.
A custom function does not run in the function pipeline
When this issue occurs, you might see a message that is similar to the following message in the pipeline log files:
2020-05-19 08:33:15 AM [DEBUG  ] analytics_service.catalog._install : running pip install for url git+https://github.com/jones/function.git@ 
2020-05-19 08:33:16 AM [WARNING] analytics_service.catalog._install : pip install for url git+https://github.com/jones/function.git@ failed: 
 Collecting git+https://github.com/jones/function.git@
ERROR: The URL 'git+https://github.com/jones/function.git@' has an empty revision (after @) which is not supported. Include a revision after @ or remove @ from the URL.

This issue can occur if the pip installation fails. A pip installation requires that you specify a branch after the @ symbol or that you remove the @ symbol to use the default branch. Edit your custom function and register the function again, or ask your application administrator to change the function URL in the database.

For example, change PACKAGE_URL = 'git+https://github.com/jones/function@' to PACKAGE_URL = 'git+https://github.com/jones/function@prod', where prod is the name of the branch where your code is located.

To use the default branch, use the following PACKAGE_URL value:

PACKAGE_URL = 'git+https://github.com/jones/function'
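
To verify the URL before you register the function again, you can run the same style of installation locally, for example, pip install "git+https://github.com/jones/function@prod". If the URL or the branch is not valid, pip reports the error locally instead of in the pipeline log file.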
Data is not flowing in from a device
The external device is using an invalid device token, or the device is no longer working and is not sending data. For more information about adding a token, see Registering devices in the IoT tool. You can also check the device for condition or environmental issues.
A calculated metric is not generating a value

Typically, when the pipeline does not generate data for a calculated metric, no data is flowing into Maximo Monitor. If the pipeline does not receive data, it does not run, produce a log file, or display an error. The pipeline also might not complete a calculation, or it might fail, if the function is not correctly configured or if a code or configuration error exists in a custom function.

If you find issues with your calculated metric calculations, check that data is flowing into Maximo Monitor or the IoT tool and then review the pipeline.
Cannot drill down from a table in a summary dashboard to an asset dashboard
If no drill down is available from the table and the device ID in the table is not hyperlinked, a device dashboard might not be configured.

Alternatively, ensure that the device ID is included in the table column configuration and in the groupBy section of the table's JSON configuration. Adding the device ID to the configuration provides the dashboard with the device ID that it requires for drill down.

To troubleshoot the issue, complete the following steps:
  1. Verify that a device dashboard is configured for the device type. For more information, see Configuring a device dashboard.
  2. Export the .json configuration for the summary dashboard.
  3. Review the cards.content and cards.dataSource sections of the dashboard configuration to confirm that the device ID is shown as a column in the table. You can add last as the aggregator.
  4. Review the cards.dataSource.groupBy attribute of the table .json and ensure that the device ID is added. If the device ID does not exist, add the device ID to groupBy, for example, "groupBy": ["deviceID"]. A configuration fragment that shows both settings follows these steps.
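
For example, a fragment of the exported table card configuration that contains the device ID both as a column and in groupBy might look like the following sketch. The metric name, attribute ID, and labels are illustrative.

"content": {
  "columns": [
    { "dataSourceId": "deviceID", "label": "Device ID" },
    { "dataSourceId": "pressure_last", "label": "Pressure" }
  ]
},
"dataSource": {
  "attributes": [
    { "attribute": "pressure", "id": "pressure_last", "aggregator": "last" }
  ],
  "groupBy": ["deviceID"]
}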

Consider the following information when you are troubleshooting issues for metrics:

  • A function pipeline for a device type is scheduled to run every 5 minutes. The pipeline waits for the previous run of the pipeline to complete before it starts the next run. If the pipeline is processing a large volume of data, it might take longer to run. For example, if the current run of the pipeline takes 1 hour to complete, the next run does not start for at least 1 hour. Make a note of the average length of time that the pipeline takes to run for your device type.
  • If no new input data exists, the pipeline starts but does not produce new output. The pipeline still produces a log file that is available from Cloud Object Storage.
    Tip: Check the log file for a message similar to the following message:
    No data retrieved from all sources. Exiting pipeline execution.
  • You might see the same error across several device types if the error is in a function that is used across the types.
  • Ensure that the names of your custom functions are unique for your tenant.
  • Typically, you can ignore the line TypeError: execute() got an unexpected keyword argument 'start_ts' in the log file, for example:
    Traceback (most recent call last):
    File "/opt/conda/lib/python3.6/site-packages/iotfunctions/pipeline.py", line 3830, in _execute_stage
      newdf = stage.execute(df=df,start_ts=start_ts,end_ts=end_ts,entities=entities)
    TypeError: execute() got an unexpected keyword argument 'start_ts'
  • Ensure that your custom function does not cut up the data frame. In the following example, the (_timestamp) index is removed from the data frame in a custom function, but the index is still used in the calculation. As a result, the returned data frame no longer has an index.
    def execute(self, df):
          df.reset_index(inplace=True)  # Moves the index into data frame columns
          yesterday = dt.datetime.utcnow() - dt.timedelta(days=1)
          yesterday_values_hour = df.loc[(df['_timestamp'].dt.date == yesterday.date()) & (df['_timestamp'].dt.hour == yesterday.hour)][self.input_item]
          df[self.output_item] = df[self.input_item] - yesterday_values_hour.mean()
          return df  # The returned data frame no longer contains the index.
    To avoid index problems, you can save the index columns and then set the index again, for example:
    def execute(self, df):
          index_columns = df.index.names  # Save the index for future use.
          df.reset_index(inplace=True)  # Moves the index into data frame columns
          yesterday = dt.datetime.utcnow() - dt.timedelta(days=1)
          yesterday_values_hour = df.loc[(df['_timestamp'].dt.date == yesterday.date()) & (df['_timestamp'].dt.hour == yesterday.hour)][self.input_item]
          df[self.output_item] = df[self.input_item] - yesterday_values_hour.mean()
          df.set_index(keys=index_columns, inplace=True)  # Set the index again.
          return df
    Alternatively, you can copy the data frame to another temporary data frame, for example:
    def execute(self, df):
          df_temp = df.copy()  # Create a copy of the data frame. Note: This might take more time depending on the number of records.
          df_temp.reset_index(inplace=True)  # Moves the index into data frame columns
          yesterday = dt.datetime.utcnow() - dt.timedelta(days=1)
          yesterday_values_hour = df_temp.loc[(df_temp['_timestamp'].dt.date == yesterday.date()) & (df_temp['_timestamp'].dt.hour == yesterday.hour)][self.input_item]
          df[self.output_item] = df[self.input_item] - yesterday_values_hour.mean()
          return df
  • To determine where an issue might exist, check that your data is flowing into Maximo Monitor or the IoT tool and then check whether the pipeline is generating calculations.
    • To check that data is flowing into Maximo Monitor, in a device type, on the Data tab, view the data that is generated for your metrics.
    • To check that data is flowing into the IoT tool, in the tool, select a device and review its recent events.
    • To check that the pipeline is generating calculations, in a device type, on the Data tab, select a calculated metric and verify that data is generated for the metric.