Testing a model

After deploying your model, you should test it against other images and videos to make sure that it works as expected.

Procedure

  1. Click Deployed Models from the menu.
  2. Click the deployed model that you want to test. The Deployed model page opens for that model.
  3. Use the Test Model area to upload images and videos, one at a time. If you provide a DICOM image, it is converted to PNG before inferencing.
    If you are testing an action detection model, optionally set the following values:
    Generate annotated video
    Select this option if you want to export the results, including the annotated video. If you are testing with a video, choose whether to annotate objects with dots or bounding boxes. For videos with multiple, moving objects, dots are recommended.
    Minimum action duration (frames)
    Specify the minimum number of consecutive frames in which an action must be detected, at a confidence at or above the specified Confidence threshold, for the action to be included in the inference results. For example, if this value is set to 20 frames and the confidence threshold is 60%, only actions that span at least 20 consecutive frames with a confidence of at least 60% are returned.
    Confidence threshold
    Specify the minimum confidence level for returned actions. For example, if you set the value to 60%, only actions that have a confidence level of at least 60% are returned.
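
    The way these two settings combine can be easier to see in code. The following Python sketch is illustrative only and is not the product's implementation; the Detection structure and the sample values are assumptions made for this example.

        # Illustrative sketch: how "Minimum action duration (frames)" and
        # "Confidence threshold" might combine to filter detected actions.
        # The Detection structure and sample values are assumptions for this
        # example, not the product's actual data model.
        from dataclasses import dataclass

        @dataclass
        class Detection:
            action: str
            start_frame: int
            end_frame: int
            confidence: float  # 0.0 - 1.0

        MIN_DURATION_FRAMES = 20      # Minimum action duration (frames)
        CONFIDENCE_THRESHOLD = 0.60   # Confidence threshold of 60%

        def keep(d: Detection) -> bool:
            duration = d.end_frame - d.start_frame + 1
            return duration >= MIN_DURATION_FRAMES and d.confidence >= CONFIDENCE_THRESHOLD

        detections = [
            Detection("wave", 100, 130, 0.82),  # kept: 31 frames at 82%
            Detection("wave", 200, 210, 0.90),  # dropped: only 11 frames
            Detection("jump", 300, 340, 0.45),  # dropped: confidence below 60%
        ]
        for d in filter(keep, detections):
            print(f"{d.action}: frames {d.start_frame}-{d.end_frame}, {d.confidence:.0%}")
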
  4. Review the results, which are shown at the bottom of the window.
    If you used an image to test an image classification model
    The test result displays the uploaded picture with the resulting heat map overlaid, and gives the classification and the confidence of that classification. Multiple classes are returned, listed in decreasing order of confidence. The heat map is for the highest-confidence classification and can help you determine whether the model correctly learned the features of that classification. To hide classes with a lower confidence level, use the Confidence threshold slider.

    The red areas of the heat map correspond to the parts of the picture that have the highest relevance. Use the slider to change the opacity of the heat map. Because the heat map is a square, the test image is compressed into a square. This might cause the image to look distorted, but it reliably shows the areas that the algorithm identified as relevant.
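
    As a rough illustration of what the preview does, the following sketch resizes a test image to a square and blends a heat map over it at a chosen opacity. It assumes that Pillow, NumPy, and Matplotlib are installed; the file names and the randomly generated heat map are placeholders, and this is not the product's rendering code.

        # Rough illustration: compress an image into a square and blend a heat
        # map over it at a chosen opacity. The relevance values are random
        # placeholders; in the tool, they reflect how relevant each region is
        # to the highest-confidence classification.
        import numpy as np
        from PIL import Image
        from matplotlib import cm

        SIZE = 224        # square size that the image is compressed into
        OPACITY = 0.5     # equivalent of the opacity slider (0.0 - 1.0)

        image = Image.open("test_image.jpg").convert("RGB").resize((SIZE, SIZE))

        relevance = np.random.rand(SIZE, SIZE)                           # placeholder scores
        heat_rgb = (cm.jet(relevance)[:, :, :3] * 255).astype(np.uint8)  # red = most relevant
        heatmap = Image.fromarray(heat_rgb)

        blended = Image.blend(image, heatmap, alpha=OPACITY)
        blended.save("heatmap_overlay.png")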

    If you used an image to test an object detection model
    The identified objects are labeled in the image, with the calculated precision.
    If you used a video to test an object detection model
    Before you provided the video, you selected whether to annotate objects with dots or bounding boxes. The video is processed, and then the processed video is displayed, with a list of all of the detected objects on the right. As you watch the processed video, the identified objects are labeled as they appear. Each object is labeled with a dot at its center or with a bounding box, depending on the annotation type that you selected, and the object name is displayed next to the annotation. Polygon annotations are not used in the video object test, even if the model is trained for segmentation.

    If you click an object in the list, the video moves to the point where that object appears. Processing the video might take a while, depending on its size.

    The inference might take a long time to complete; however, you can run multiple inferences simultaneously. Additionally, you do not have to stay on the deployed model details page. If you leave the page, a notification window opens, where you can watch the progress. Clicking the link in this window loads the inference results section in the deployed model details page.

    To download the result, click Export result in the Results section. A ZIP file is downloaded to your system. This file contains the original video, a JSON file that contains the result information, and the processed video with object labels added as annotations. For an example of inspecting the downloaded file programmatically, see the sketch that follows the note below.

    When you close the results area for an inference, the results are not removed. They are saved for seven days, unless you delete them. To access the results of previous inferences, click Results history in the Test Model section of the Deployed Models page. You can open or delete any of the saved results.

    Note: The video object preview does not support non-ASCII labels. This is a limitation of the module that generates the displayed label from the label name. A non-ASCII label is rendered as a string of question marks: ?????.
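
    If you want to work with the exported results programmatically, a sketch like the following lists the contents of the ZIP file and loads its JSON result file. The archive name is a placeholder, and the structure of the JSON file is not described here, so the sketch only prints the top-level entries that it finds; check your own export for the actual file names and fields.

        # Sketch: list the contents of an exported result ZIP file and load
        # the JSON result file that it contains. The archive name is a
        # placeholder, and nothing is assumed about the JSON structure beyond
        # it being valid JSON.
        import json
        import zipfile

        ARCHIVE = "exported_result.zip"   # placeholder name for the downloaded ZIP file

        with zipfile.ZipFile(ARCHIVE) as zf:
            print("Archive contents:")
            for name in zf.namelist():
                print(" ", name)

            # Load the first JSON file found in the archive.
            json_names = [n for n in zf.namelist() if n.lower().endswith(".json")]
            if json_names:
                with zf.open(json_names[0]) as fh:
                    result = json.load(fh)
                keys = list(result) if isinstance(result, dict) else range(len(result))
                print(f"Loaded {json_names[0]} with top-level entries: {list(keys)}")
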
    If you used a video to test an action detection model
    The video is processed. Then, as you watch the processed video, the identified actions are listed, along with their confidence and their start and end times, as they occur in the video. Processing the video might take a while, depending on its size.

    The inference might take a long time to complete; however, you can run multiple inferences simultaneously. Additionally, you do not have to stay on the deployed model details page. If you leave the page, a notification window opens, where you can watch the progress. Clicking the link in this window loads the inference results section in the deployed model details page.

    The identified actions are grouped by action tag. To see individual actions that were discovered, expand the action tag. Clicking on an action moves the video preview to the start of that action.

    To download the result, click Export result in the Results section. A ZIP file is downloaded to your system. This file contains the original video, a CSV file that contains the result information, and, if the Generate annotated video option was selected when the inference was started, the processed video with action labels added as annotations. For an example of reading the CSV file programmatically, see the sketch at the end of this step.

    When you close the results area for an inference, the results are not removed. They are saved for seven days, unless you delete them. To access the results of previous inferences, click Results history in the Test Model section of the Deployed Models page. You can open or delete any of the saved results.
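
    If you need the action results outside of the user interface, a sketch like this one reads the CSV file from the exported ZIP file and groups the rows by an action tag column. The archive name and the column name are assumptions made for illustration; inspect the header row of your exported CSV file for the real field names.

        # Sketch: read the CSV result file from an exported ZIP file and group
        # the rows by action tag. The archive name and the tag column name are
        # assumptions; check the header row of your own export.
        import csv
        import io
        import zipfile
        from collections import defaultdict

        ARCHIVE = "exported_action_result.zip"   # placeholder name for the downloaded ZIP file
        TAG_COLUMN = "action"                    # assumed name of the column that holds the action tag

        with zipfile.ZipFile(ARCHIVE) as zf:
            csv_names = [n for n in zf.namelist() if n.lower().endswith(".csv")]
            if not csv_names:
                raise SystemExit("No CSV file found in the archive.")
            with zf.open(csv_names[0]) as fh:
                rows = list(csv.DictReader(io.TextIOWrapper(fh, encoding="utf-8")))

        grouped = defaultdict(list)
        for row in rows:
            grouped[row.get(TAG_COLUMN, "unknown")].append(row)

        for tag, actions in grouped.items():
            print(f"{tag}: {len(actions)} detected action(s)")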

  5. If you are satisfied with the results, the model is ready to be used in production. Otherwise, you can refine the model by following the instructions in the Refining a model topic.