Preprocessing and post-processing

Note: Starting in Maximo® Visual Inspection 8.7, custom models are not supported and custom scripts cannot be uploaded. Custom models are still supported in Maximo Visual Inspection 8.6 and earlier versions. REST APIs still support the retrieval of any existing custom scripts that you uploaded to Maximo Visual Inspection 8.6 and earlier versions.

You can upload customizations that run operations before and after each inference operation, with no manual intervention.

  • custom.py Template
  • Optional requirements.txt file
  • Deploying a model with preprocessing, post-processing, or both
Note: Action detection models are not supported.

custom.py Template

Use the custom.py template to create a single Python file that is named custom.py. The file can contain instructions for preprocessing, post-processing, or both. The required Python version depends on the model type:

  • Python 2 for High resolution, Anomaly optimized, and custom models.
  • Python 3 for Faster R-CNN, GoogLeNet, SSD, YOLO v3, Tiny YOLO v3, and Detectron2 models.
Note: Starting in Maximo Visual Inspection 9.1, Single Shot Detector (SSD) models are no longer supported for model training. However, you can continue to import and deploy SSD models in Maximo Visual Inspection and Maximo Visual Inspection Edge.
Important: This file must be packaged in a .zip file, with custom.py at the top level of the archive.

Other files, such as extra Python files, shell scripts, and images, can also be included in the .zip file. If the customization script requires Python modules other than the modules that are built into Python, you can create a requirements.txt file that lists the modules to be installed by using pip.
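For example, you can build a correctly structured archive with Python's standard zipfile module. The following is a minimal sketch; the file names are assumptions for illustration:

import zipfile

# Package the customization. The arcname values control where each file
# lands inside the archive; custom.py must be at the top level.
with zipfile.ZipFile("my_customization.zip", "w") as zf:
    zf.write("custom.py", arcname="custom.py")                    # required, top level
    zf.write("requirements.txt", arcname="requirements.txt")      # optional
    zf.write("helpers/filters.py", arcname="helpers/filters.py")  # extra module

The template contains the following information: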

  • class CustomInference - The only Python class in the file. It must be named CustomInference and holds the "pre" and "post" callouts.
  • onPreProcessing - If defined, this function must be in CustomInference. This function is called before inference is run on the image.
  • onPostProcessing - If defined, this function must be in CustomInference. This function is called after inference is run on the image.
class CustomInference:
    # Callout for inference preprocessing. Called before the actual
    # inference on the "image".
    #
    # Input:
    #    image:   Image, represented as a NumPy array, that inference
    #             is to be performed on.
    #
    #    params:  Additional parameters, passed as a list of key/value
    #             pairs.
    #
    # Output:
    #    image:   The image, represented as a NumPy array, that is to
    #             be used for inference. This array may have been
    #             manipulated in this function, or it may be the exact
    #             same NumPy array.
    #
    def onPreProcessing(self, image, params):

        return image

    # Callout for inference post-processing. Called after inference has
    # been run on the image.
    #
    # Input:
    #    image:   Image, represented as a NumPy array, that inference
    #             was performed on.
    #
    #    results: JSON of the inference results. The JSON depends on
    #             the type of inference.
    #
    #    params:  Additional parameters, passed as a list of key/value
    #             pairs.
    #
    # Output:
    #    results: A JSON object that is a copy of the original
    #             inference results. If the callout intends to return
    #             additional information, that information can be
    #             returned in the JSON results under the key "user".
    #
    def onPostProcessing(self, image, results, params):

        return results
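
For example, the following is a minimal sketch of a completed custom.py. The brightness normalization and the detection counting are illustrative assumptions, not required behavior, and the "classified" key is assumed here for an object detection result; the actual keys depend on the inference type:

import numpy as np

class CustomInference:
    # Example preprocessing: stretch the image contrast before inference.
    def onPreProcessing(self, image, params):
        max_val = image.max()
        if max_val > 0:
            # Rescale pixel values so that the brightest pixel is 255.
            image = (image.astype(np.float32) * (255.0 / max_val)).astype(np.uint8)
        return image

    # Example post-processing: report the number of detections under the
    # "user" key so that it is returned with the inference results.
    def onPostProcessing(self, image, results, params):
        # "classified" is an assumed key for object detection results;
        # the actual structure depends on the inference type.
        detections = results.get("classified", [])
        results["user"] = {"num_detections": len(detections)}
        return results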

Optional requirements.txt file

The following sample file shows the contents of a requirements.txt file:

sseclient==0.0.19
tflearn==0.3.2
keras==2.2.4

Deploying a model with preprocessing, post-processing, or both

To deploy a model that uses extra processing, you upload the custom .zip file, and then specify it during deployment.

  1. Go to the Custom artifacts page and upload the .zip file that contains custom.py. For artifact type, select Custom inference script.
  2. Go to the model you want to deploy and click Deploy model.
  3. Toggle Advanced deployment on.
  4. For Custom inference script, select the inference script that you want to use, specify what you want done with the inference results, and click Deploy.
    Note:
    • Inference results can be saved even if you do not choose a custom inference script.
    • Inference results for videos are not saved.