Built-in functions
Built-in functions are available from the function catalog. You can use these functions in your calculations for creating alerts, collecting data, cleansing data, filtering data, transforming data, summarizing data, detecting anomalies, administering data, or troubleshooting. The catalog also contains any custom functions that you created and registered with Maximo® Monitor. To view the Python code for a function, see the function class in the bif.py module of IoT Functions. Built-in functions from the bif.py module in IoT Functions are registered by default.
In Maximo Monitor 8.10 and later, streaming data metrics support custom functions only for ONNX models.
In Maximo Monitor 8.9, streaming data metrics do not support custom functions.
| Name | Type | Purpose | Supported metrics | Description |
|---|---|---|---|---|
| AggregateTimeInState | Transformer | Transform | Batch data metrics | Calculate the aggregate amount of time that a device spends in a specific state. For example, calculate the total number of seconds that a device is offline per day. Use the output of the PrepareTimeInState function to specify what is considered a state change for a device. For more information, see Tutorial: Calculating the time spent in a specific state. |
| AggregateWithCalculation | Transformer | Transform | Batch data metrics | Create an aggregation by using an expression on a data item. |
| AggregateWithExpression | Transformer | Transform | Batch data metrics | Create an aggregation by using an expression. The calculation is evaluated for each data item that is selected. The resulting data item is available as a pandas series. Refer to the pandas series by using the "x" local variable. The expression must return a scalar value, for example, x.max() - x.min() |
| AnomalyDetector | Transformer | Anomaly detector | This function is not available in Maximo Monitor 8.9 or later. | Predict the value of one or more data items based on the values of dependent variables. The function uses a regression model to learn how the dependent variables influence the target variables. The function signals an anomaly whenever the behavior of a target variable is not consistent with its predicted behavior, given the current values of its dependent variables. The output of the function includes the predicted values and alerts for each predicted value. When you configure the function, select the variables whose values you want to predict in targets. Select the dependent variables in features. Set an absolute threshold value in threshold. For more information, see Detecting anomalies. |
| AnomalyGeneratorExtremeValue | Transformer | Anomaly simulator | Batch data metrics | Take an input data item and replace some data points with extreme values to simulate anomalies. For example, add some extreme values to your temperature data item to create a temperature_with_extremes output item. Try some of the anomaly detectors from the catalog on the output. In the input_item field, choose the metric that you want to use as the basis for the simulated metric. In the factor field, specify how frequently you want to add an extreme value. In the size field, specify how extreme you want the simulated value to be. For more information, see Simulating anomalies. |
| AnomalyGeneratorFlatline | Transformer | Anomaly simulator | Batch data metrics | Take an input data item and alter the values to simulate some flatline conditions. For example, add some flatline conditions to your temperature data item to create a temperature_with_flatlines output item. Try some of the anomaly detectors from the catalog on the output. In the input_item field, choose the metric that you want to use as the basis for the simulated metric. In the factor field, specify how frequently you want to add flatline conditions. In the width field, specify how wide each flatline condition should be. For more information, see Simulating anomalies. |
| AnomalyGeneratorNoData | Transformer | Anomaly simulator | Batch data metrics | Take an input data item and alter the values to simulate some conditions where no data is seen. For example, add some gaps to your temperature data item to create a temperature_with_gaps output item. Try the NoDataAnomalyScore detector from the catalog on the output to detect the gaps. In the input_item field, choose the metric that you want to use as the basis for the simulated metric. In the factor field, specify how frequently you want to add gaps to the data set. In the width field, specify how wide each gap should be. For more information, see Simulating anomalies. |
| ActivityDuration | Data source | Collect data | Batch data metrics | Import data from an activity table and merge it with data in the data lake. For example, you might store information about scheduled maintenance activity in an activity table in the data lake. When you configure the function, specify the table name, for example, maintenance_activity. Specify one or more activity codes, for example, scheduled_maint, unscheduled_maint, firmware_upgrade, testing. Note: The table must include the required columns. The function returns an activity duration for each activity code. For more information, see Adding data from other sources. |
| AlertExpression | Transformer | Alert | | For more information, see Alerts. |
| AlertLowValue | Transformer | Alert | | For more information, see Alerts. |
| AlertHighValue | Transformer | Alert | | For more information, see Alerts. |
| AlertOutOfRange | Transformer | Alert | | For more information, see Alerts. |
| ArithmeticOperator | Transformer | Operator | Streaming data metrics | Applies the specified operator to the fields. |
| Coalesce | Transformer | Filter | Batch data metrics | Return the first non-null value from a list of data items. |
| CoalesceDimension | Transformer | Filter | Batch data metrics | Return the first non-null value from a list of data items. |
| ConditionalItems | Transformer | Filter | Batch data metrics | Return a value of null unless a condition is met. For example, return the temperature value when the temperature is greater than 40; otherwise, return null. Define the conditional expression by using pandas syntax. In the conditional_expression field, enter: `df['temperature']>40` |
| Count | Aggregator | Summarize | | Count the number of values for the specified data item. When you configure the function, specify the data item to count in the source field. Optionally, specify that a specific number of values must exist before the count is performed. For example, if you set min_count to 5, a count is performed if there are at least five values that are not set to N/A. The default value is 1. |
| COPODAnomalyScoreJava | COPOD analysis | All | Streaming data metrics | For more information, see Unsupervised anomaly detectors. |
| DatabaseLookup | Data source | Collect data | Batch data metrics | Perform a lookup on non-time variant data in a table in the data lake. When you configure the function, specify the table name, the items to look up, and the key value that maps data items for the device type to the data in the table. For example, you might want to look up EmployeeCount and Country from a Company table and you use the country_code field as the key. For more information, see Adding data from other sources. |
| DataQualityChecks | Aggregator | Anomaly detector | Batch data metrics | Perform data quality analysis on input data. Determine whether the input data from your sensors has quality issues that might impact the quality of the data in downstream calculations. When you configure the function, select the time series that you want to check in 'input_item' and select the data quality checks to run. The checks are grouped by their output data type. For more information, see Detecting anomalies. |
| DateDifference | Transformer | Transform | Batch data metrics | Calculate the difference between two dates in days. For example, if you configure a shift start_date and a shift end_date, use this function to calculate the number of days in the shift. Note: Timestamp is used if no date is specified. |
| DateDifferenceConstant | Transformer | Transform | This function is not available in Maximo Monitor 8.9 or later. | Calculate the difference between two dates in days, where one of the dates is specified by using a constant. For example, you might want to track the number of days until a planned shutdown date. When you configure the function, select evt_timestamp in date_1 and specify the planned_shutdown constant in the date_constant field. Note: For date_1, timestamp is used if no date is specified. |
| DateDifferenceReference | Transformer | Transform | This function is not available in Maximo Monitor 8.9 or later. | Calculate the difference between two dates in days, where one date is specified by using a data item and the other date is specified by using a reference date in the format DD/MM/YYYY hh:mm. For example, you might want to track the number of days until a planned shutdown on 24 October at 13:00. When you configure the function, select evt_timestamp in date_1 and set the reference date to 24/10/2019 13:00. Note: For date_1, timestamp is used if no date is specified. |
| DeleteInputData | Transformer | Administer | Batch data metrics | Remove values that are older than a specified number of days from data items. Use this function to clean up your data. When you configure the function, select one or more data items to clean up from dummy_items. Specify the number of days of data to remove in the older_than_days field. For example, to remove values older than 100 days for a data item that is named average_speed, set older_than_days to 100 and set dummy_items to average_speed. |
| DistinctCount | Aggregator | Summarize | Batch data metrics | Count the number of discrete values for the specified data items. When you configure the function, specify the data item to count in the source field. Optionally, specify that a specific number of values must exist before the count is performed. For example, if you set min_count to 5, a distinct count is performed if there are at least five values that are not set to N/A. The default value is 1. |
| DropNull | Transformer | Cleanse | Batch data metrics | Drop all rows that have null values. Apply to all data items of a device type. Specify which data items you want to exclude from this action. |
| EntityDataGenerator | Transformer | Simulate | Batch data metrics | Create sample devices by using the specified device IDs. Generate random data for time-series data items. Restriction: You can use the function only with the sample device types. Generate random data where the columns already exist. When you configure the function, specify the device IDs in the ids field. In the parameters field, specify the time series data to add and its frequency. You can drop existing input tables and generate new data for each run of the function. For example: `{ "freq": "5min", "data_item_mean": { "torque": 12, "load": 375, "load_rating": 400, "speed": 3, "travel_time": 1, "cost": 22 }, "drop_existing": true }` |
| EntityFilter | Transformer | Filter | Batch data metrics | Filter the data item calculations to retrieve data only for the specified entities. For example, to retrieve data for entities 73000 and 73001 only, in the entity_list field, enter 73000 and 73001. |
| FastMCDAnomalyScoreJava | Transformer | Anomaly detector | Streaming data metrics | For more information, see Unsupervised anomaly detectors. |
| FFTbasedGeneralizedAnomalyScore | Transformer | Anomaly detector | Batch data metrics | Batch data metrics can also use the FFTbasedGeneralizedAnomalyScoreV2 function. For more information, see Unsupervised anomaly detectors. |
| Filter | Transformer | Filter | Batch data metrics | Filter data items by using a Python expression. Define the expression by using pandas syntax. Reference a data item by using the format `df['data_item']` in your expression. Specify the data items to keep when the expression evaluates to true. For example, keep the speed data item when distance is equal to `2`. In the expression field, specify `df['distance']==2`. In the filtered_sources field, select speed. |
| First | Aggregator | Summarize | | Identify the first value for a data item. |
| GBMRegressor | Transformer | Anomaly detector | Batch data metrics | For more information, see Supervised anomaly detectors. |
| GeneralizedAnomalyScore | Transformer | Anomaly detector | Batch data metrics | Batch data metrics can also use the GeneralizedAnomalyScoreV2 function. For more information, see Unsupervised anomaly detectors. |
| GetEntityData | Transformer | Summarize | Batch data metrics | Get time series data from an entity type. Provide the table name for the entity type and specify the key column to use for mapping the source entity type to the destination. For example, you can add temperature sensor data to a location entity type by selecting location_id as the mapping key on the source entity type. This function is experimental. |
| IdentifyShiftFromTimestamp | Transformer | Summarize | Batch data metrics | Identifies the shift that was active when data was received by using the timestamp on the data. |
| IfThenElse | Transformer | Filter | Batch data metrics | If a conditional expression returns true, return the value of another expression as the value of the new data item. Note: In the expression fields, you must specify an expression rather than a simple variable. For example, specifying `df['status']=="offline"` is valid but simply specifying "offline" is not valid. Example 1: If speed is greater than 2, you want to adjust the speed value by a factor of 0.9. You might set the output parameter to adjusted_speed. You must specify the data type of the output data item. For example, for adjusted_speed, set the data type to number. Example 2: If speed is greater than 2, you want to set a new data item. |
| InvokeWatsonStudio | Transformer | Anomaly detector | Batch data metrics | Develop, train, and test a classification or regression model by using Watson Studio. Store the model in Watson Studio or in the Maximo Monitor database. Then, use the InvokeWatsonStudio function to call the model as part of the function pipeline to score data or make predictions. Specify the connection parameters to the model in Watson Studio by using a .json document. When you configure the function, select the data items to use in your model. Specify the name of the global constant that stores the connection parameters. For more information, see Training models externally. |
| IsolationForestAnomalyScoreJava | Transformer | Anomaly detector | Streaming data metrics | For more information, see Unsupervised anomaly detectors. |
| KMeansAnomalyScore | Transformer | Anomaly score | See Description column | Batch data metrics can also use this function. |
| KNNKDEAnomalyScoreJava | Transformer | Anomaly score | Streaming data metrics | For more information, see Unsupervised anomaly detectors. |
| Last | Aggregator | Summarize | | Identify the last value for the specified data item. |
| LoadTableAndConcat | Transformer | Transform | Batch data metrics | Create a new data item by using an expression. |
| MatrixProfileAnomalyScore | Aggregator | Summarize | See Description column | Batch data metrics can use the MatrixProfileAnomalyScore function. Streaming data metrics use the MatrixProfileAnomalyScoreJava function. For more information, see Unsupervised anomaly detectors. |
| Maximum | Aggregator | Summarize | | Identify the maximum value for a data item. |
| Mean | Aggregator | Summarize | | Identify the mean value for a data item. |
| Median | Aggregator | Summarize | Batch data metrics | Identify the median value for a data item. |
| MergeByFirstValid | Transformer | Transform | Batch data metrics | Create a new data item by merging multiple data items; the first valid value is used. |
| Minimum | Aggregator | Summarize | | Identify the minimum value for a data item. |
| NewColFromCalculation | Transformer | Transform | Batch data metrics | Create a new data item by using a pandas expression. For example, to create a new distance data item, enter `df['speed'] * df['travel_time']`. |
| NewColFromScalarSql | Transformer | Transform | Batch data metrics | Create a new data item by using a scalar SQL query. The query returns a single value. For example, to create a new plant_1_total data item, you might enter `SELECT COUNT(Alerts) FROM Robot_entity_plant_1`. |
| NewColFromSql | Transformer | Transform | Batch data metrics | Create new data items by joining the results of an SQL query. |
| NoDataAnomalyScore | Transformer | Anomaly detector | Batch data metrics | For more information, see Unsupervised anomaly detectors. |
| OccupancyCount | Aggregator | Aggregate | Batch data metrics | Determine the maximum occupancy count per grain interval. |
| OccupancyDuration | Aggregator | Aggregate | Batch data metrics | Calculate the occupancy duration per grain interval. The unit of the result is given by the granularity of input. |
| OccupancyCountByBusinessUnit | Transformer | Transform | Batch data metrics | Weight the occupancy count according to the assignments of business units. |
| OccupancyFrequencyRate | Transformer | Transform | Batch data metrics | Determine the ratio between occupancy duration and availability per day in percent. |
| OccupancyRate | Transformer | Transform | Batch data metrics | Determine the ratio between occupancy count and capacity in percent. |
| PackageInfo | Transformer | Administer | Batch data metrics | Display version information for packages. When you configure the function, specify the packages to check. For example, future, requests, sklearn, pandas. Optionally, install a package if the package is not installed. |
| PrepareTimeInState | Transformer | Transform | Batch data metrics | The PrepareTimeInState function identifies a change in state for a device. Create a condition that represents a state change. For example, for a running_status metric, set the condition to "=='running'" or for a temperature metric, set the condition to ">= 37". The function determines when the devices move in or out of this state. Use the output of this function in the AggregateTimeInState function to calculate the total amount of time that the device spends in the state. Tip: You don't need to save the output in the database because it is used in another function. PrepareTimeInState produces a comma-delimited string with a state change value and a Unix epoch timestamp. Leaving the state is represented by -1, no change by 0, and entering the state by 1. For example, "0,1638810253" means no change has occurred at the timestamp 1638810253. For more information, see Tutorial: Calculating the time spent in a specific state. |
| Product | Aggregator | Summarize | Batch data metrics | Multiply the values of the data set to return the product. Select a data item to use in the calculation. |
| PythonExpression | Transformer | Transform | Batch data metrics | Create a new data item from an expression that includes other data items. Define the expression by using pandas syntax. When you configure the function, enter or paste a pandas expression into the expression field. For example, `df['torque']*df['load']`. For more information, see Using expressions. |
| PythonFunction | Transformer | Transform | Batch data metrics | Run a simple function that you paste into the function field. The function must be called 'f'. The function accepts df (a pandas DataFrame) and parameters (a dict that you can use to externalize the configuration of the function) as inputs. The function can return a DataFrame, Series, NumPy array, or scalar value. For more information, see Using simple functions. |
| RaiseError | Transformer | Troubleshoot | Batch data metrics | Stop the pipeline after a specific data item is calculated. When you configure the function, enter the data item in the 'halt_after' field. After the function runs, review any error message that is displayed on the UI. The function is useful for troubleshooting issues. |
| RandomChoiceString | Transformer | Simulate | Batch data metrics | Generate random categorical values. Add the strings to domain_of_values. Optionally, you can set the probability of each string occurring. Assign probabilities in the same order as the strings. The sum of the probabilities must equal 1. In the following example, London has a higher probability of being assigned than New York: Probabilities: 0.920, 0.080 Domain of values: London, New York |
| RandomDiscreteNumeric | Transformer | Simulate | Batch data metrics | Create some random numeric values. Use the function to simulate metrics. Add discrete numbers to domain_of_values. Optionally, you can set the probability of each number occurring. Assign probabilities in the same order as the numbers. The sum of the probabilities must equal 1. In the following example, 211 has a higher probability of being assigned than 352: Probabilities: 0.920, 0.080 Domain of values: 211, 352 |
| RandomNoise | Transformer | Simulate | Batch data metrics | Add random noise to one or more input data items based on a standard deviation value. |
| RandomNormal | Transformer | Simulate | Batch data metrics | Generate a set of normally distributed random numbers. Specify a mean value and the standard deviation for the data set. |
| RandomNull | Transformer | Simulate | Batch data metrics | Insert some null values randomly into the data set for one or more data items. |
| RandomUniform | Transformer | Simulate | Batch data metrics | Create a set of uniformly distributed random numbers. Use the function to create a repeatable sequence of numbers. Specify the minimum value and the maximum value. |
| Reidentify | Transformer | Transform | Streaming data metrics | Re-emit the input metric as a new metric of this device or hierarchy resource. |
| RobustThreshold | Transformer | Anomaly detector | Streaming data metrics | Provide outliers that are based on either quantile or interquartile range and anomalies that are based on median absolute deviation, and return the results in two Boolean output columns. |
| SaliencybasedGeneralizedAnomalyScore | Transformer | Anomaly detector | Batch data metrics | Batch data metrics can also use the SaliencybasedGeneralizedAnomalyScoreV2 function. For more information, see Unsupervised anomaly detectors. |
| SaveCosDataFrame | Transformer | Transform | This function is not available in Maximo Monitor 8.9 or later. | Save data items to Cloud Object Storage. Specify a file name. The data items are saved to Cloud Object Storage by using the file name. Note: The Cloud Object Storage bucket is identified from your credentials. After the pipeline runs, verify that the file is available in Cloud Object Storage. |
| SCDLookup | Data source | Collect data | Batch data metrics | Perform a database lookup for the value of a dimension in a slowly changing dimension (SCD) table. For example, look up an operator name. Specify the table name. The table must have a start_date, end_date, device_ID, and an SCD property. Such tables do not update as frequently as time series data. The value of a dimension tends to be relatively static. The previous lookup value is assumed to be valid until the next lookup. For more information, see Adding data from other sources. |
| ShiftCalendar | Data source | Collect data | Batch data metrics | Generate data for a shift calendar by using a shift_definition. Add a shift_definition in the form of a dictionary that is keyed on shift_id. The dictionary contains a tuple with the start and end hours of the shift expressed as numbers. For more information, see Adding data from other sources. |
| Sleep | Transformer | Troubleshoot | Batch data metrics | Pause the pipeline for the specified number of seconds for the device type. |
| StandardDeviation | Aggregator | Summarize | | Calculate the standard deviation of the values in the data set. Select a data item to use in the calculation. |
| SpectralAnomalyScore | Spectral analysis | Anomaly score | Batch data metrics | For more information, see Unsupervised anomaly detectors. |
| SpectralAnomalyScoreExt | Spectral analysis | Flat lines | Batch data metrics | For more information, see Unsupervised anomaly detectors. |
| SplitDataByActiveShifts | Transformer | Transform | Batch data metrics | Identifies the shift that was active when data was received by using the timestamp on the data. |
| Sum | Aggregator | Summarize | | Calculate the sum of all values in the data set. Select a data item to use in the calculation. |
| TimestampCol | Transformer | Summarize | Batch data metrics | Deliver a data item containing the timestamp. |
| TraceConstants | Transformer | Summarize | Batch data metrics | Write the values of available constants to the trace. |
| Variance | Aggregator | Summarize | | Calculate the variance of the data set. Variance is the average of the squared differences from the mean. Select a data item to use in the calculation. |
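Several of the transformers in the table (PythonExpression, NewColFromCalculation, Filter, and ConditionalItems) evaluate pandas-syntax expressions against a DataFrame named df. The following sketch shows what those expressions do in plain pandas, outside the Monitor pipeline; the column names and metric values are invented for illustration:

```python
import pandas as pd

# Hypothetical device metrics standing in for pipeline data.
df = pd.DataFrame({
    "torque": [10.0, 12.0, 14.0],
    "load": [300.0, 375.0, 410.0],
    "distance": [1, 2, 2],
    "speed": [3.0, 5.0, 7.0],
    "temperature": [35.0, 42.0, 48.0],
})

# PythonExpression / NewColFromCalculation: derive a new item from others.
df["torque_load"] = df["torque"] * df["load"]

# Filter: keep the 'speed' item only where the expression is true.
mask = df["distance"] == 2
filtered_speed = df.loc[mask, "speed"]

# ConditionalItems: return the value when the condition holds, else null (NaN).
df["hot_temperature"] = df["temperature"].where(df["temperature"] > 40)
```

Each expression references data items through the `df['data_item']` form that the catalog entries describe.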
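The AggregateWithExpression entry binds each selected data item to the local variable x as a pandas Series and requires a scalar result. A minimal stand-alone illustration of the x.max() - x.min() example, using hypothetical temperature readings grouped into daily intervals:

```python
import pandas as pd

# Hypothetical readings, one aggregation group per day.
readings = pd.DataFrame({
    "day": ["mon", "mon", "tue", "tue"],
    "temperature": [35.0, 48.0, 40.0, 42.0],
})

# The expression must return a scalar per group; here, the daily range.
daily_range = readings.groupby("day")["temperature"].agg(lambda x: x.max() - x.min())
```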
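The IfThenElse example (adjust the speed by a factor of 0.9 when it is greater than 2) can be sketched in pandas as a conditional replacement; the speed values here are made up:

```python
import pandas as pd

df = pd.DataFrame({"speed": [1.0, 4.0, 8.0]})

# Keep the original speed where the condition is false; otherwise apply the
# true-expression (speed * 0.9), mirroring the adjusted_speed example.
df["adjusted_speed"] = df["speed"].where(df["speed"] <= 2, df["speed"] * 0.9)
```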
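The PythonFunction entry expects a function named f that accepts the DataFrame and a parameters dict. A minimal sketch of such a function; the factor parameter and speed column are invented for illustration:

```python
import pandas as pd

def f(df, parameters=None):
    # 'parameters' externalizes the configuration, as the catalog entry describes.
    parameters = parameters or {}
    factor = parameters.get("factor", 1.0)
    # Return a pandas Series; a DataFrame, NumPy array, or scalar also works.
    return df["speed"] * factor

df = pd.DataFrame({"speed": [3.0, 5.0]})
adjusted = f(df, {"factor": 0.5})
```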
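The comma-delimited encoding that PrepareTimeInState produces ("state change,epoch", with 1 for entering the state, -1 for leaving it, and 0 for no change) and the way AggregateTimeInState totals it can be re-created in plain Python. The readings, timestamps, and condition below are hypothetical:

```python
# Re-creation of the PrepareTimeInState encoding: for each reading, emit
# "<change>,<epoch>" where change is 1 (entered the state), -1 (left it),
# or 0 (no change).
def encode_state_changes(readings, condition):
    encoded, previous = [], False
    for epoch, value in readings:
        current = condition(value)
        change = int(current) - int(previous)
        encoded.append(f"{change},{epoch}")
        previous = current
    return encoded

# In the spirit of AggregateTimeInState: total seconds spent in the state.
def time_in_state(encoded):
    total, entered = 0, None
    for item in encoded:
        change, epoch = (int(part) for part in item.split(","))
        if change == 1:
            entered = epoch
        elif change == -1 and entered is not None:
            total += epoch - entered
            entered = None
    return total

readings = [(1638810000, "running"), (1638810100, "running"),
            (1638810250, "offline"), (1638810300, "running")]
encoded = encode_state_changes(readings, lambda v: v == "running")
```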
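The weighting scheme in RandomChoiceString and RandomDiscreteNumeric (probabilities listed in the same order as the values, summing to 1) behaves like weighted sampling in the Python standard library:

```python
import random

random.seed(7)  # fixed seed so the sketch is repeatable

# Probabilities: 0.920, 0.080 -> London is chosen far more often.
values = random.choices(["London", "New York"], weights=[0.920, 0.080], k=1000)
london_share = values.count("London") / len(values)
```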
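A shift_definition for the ShiftCalendar entry is a dictionary keyed on shift_id whose values are (start_hour, end_hour) tuples expressed as numbers. The shift IDs and hours below are invented; one common convention, assumed here, lets a night shift that crosses midnight run its end hour past 24:

```python
# Hypothetical shift_definition: dict keyed on shift_id, values are
# (start_hour, end_hour) tuples expressed as numbers.
shift_definition = {
    "1": (5.5, 14),   # early shift: 05:30 - 14:00
    "2": (14, 21),    # late shift:  14:00 - 21:00
    "3": (21, 29.5),  # night shift: 21:00 - 05:30 the next day
}

def shift_for_hour(hour, shifts):
    """Return the shift_id whose interval contains the given hour of day."""
    for shift_id, (start, end) in shifts.items():
        # The second test catches early-morning hours of a shift that
        # started the previous evening (end hour > 24).
        if start <= hour < end or start <= hour + 24 < end:
            return shift_id
    return None
```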