Testing models

After you deploy a model, test it against other images and videos to make sure that it works as expected.

Procedure

  1. Click Deployed Models from the menu.
  2. Click the deployed model that you want to test. The model opens in the Deployed model page.
  3. Use the Test Model area to upload images and videos, one at a time. If you provide a DICOM image, it is converted to PNG before inferencing.
    • Generate annotated video: Select this option if you want to export the results, including the annotated video. If you are testing with a video, select whether to annotate it with dots or bounding boxes. For videos that contain multiple objects or moving objects, select dots.
    • Minimum action duration (frames): Specify the minimum number of consecutive frames in which an action must be detected, with a confidence higher than the specified Confidence threshold, for the action to be identified in the inference. For example, if this duration is set to 20 frames and the confidence threshold is 60%, the only actions that are returned are at least 20 frames long and have a confidence level of at least 60%. A sketch of this filtering rule follows the procedure.
    • Confidence threshold: Specify the minimum confidence level for returned actions. For example, if you set the value to 60%, only actions with a confidence of at least 60% are returned.
  4. Review the results.
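
The following Python sketch illustrates the filtering rule that the Minimum action duration and Confidence threshold settings describe. The segment structure and field names are assumptions for the example; the product applies this logic internally when it returns actions.

    def filter_actions(segments, min_frames=20, confidence_threshold=0.60):
        """Keep only action segments that span enough consecutive frames and
        meet the confidence threshold. Each segment is a dict such as
        {"label": "wave", "start_frame": 10, "end_frame": 42, "confidence": 0.87}."""
        kept = []
        for seg in segments:
            duration = seg["end_frame"] - seg["start_frame"] + 1  # consecutive frames
            if duration >= min_frames and seg["confidence"] >= confidence_threshold:
                kept.append(seg)
        return kept

    # Example: only the first segment satisfies both conditions.
    detections = [
        {"label": "wave", "start_frame": 100, "end_frame": 140, "confidence": 0.91},
        {"label": "wave", "start_frame": 300, "end_frame": 310, "confidence": 0.95},  # too short
        {"label": "jump", "start_frame": 500, "end_frame": 560, "confidence": 0.40},  # low confidence
    ]
    print(filter_actions(detections))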

Results

If you used an image to test an image classification model

The test result displays the uploaded picture with the resultant heat map overlaid and gives the classification and the confidence of the classification. Multiple classes are returned, in decreasing order of confidence. The heat map is for the highest confidence classification and can help you determine whether the model correctly learned the features of that classification. To hide classes that have a lower confidence level, use the Confidence threshold slider.

The red areas of the heat map correspond to the areas of the picture that are most relevant to the classification. Use the slider to change the opacity of the heat map. Because the heat map is a square, the test image is compressed into a square. This compression might cause the image to look distorted, but it shows you the areas that the algorithm identified as relevant.
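
To illustrate how a square heat map can be blended over a test image at an adjustable opacity, the following Python sketch resizes the image to the heat map grid and alpha-blends the two. The file names, the 224 x 224 grid size, and the use of Pillow and NumPy are assumptions for the example; the product renders the overlay for you.

    import numpy as np
    from PIL import Image

    def overlay_heatmap(image_path, heatmap, opacity=0.5, size=224):
        """Resize the image to the square heat map grid and alpha-blend them."""
        img = Image.open(image_path).convert("RGB").resize((size, size))
        img = np.asarray(img, dtype=np.float32)

        # Map heat map values (0..1) to a simple red intensity image.
        heat = np.zeros_like(img)
        heat[..., 0] = np.asarray(heatmap, dtype=np.float32) * 255.0

        blended = (1.0 - opacity) * img + opacity * heat
        return Image.fromarray(blended.astype(np.uint8))

    # Example with a random heat map standing in for the model output.
    fake_heatmap = np.random.rand(224, 224)
    overlay_heatmap("test_image.jpg", fake_heatmap, opacity=0.4).save("overlay.png")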

If you used an image to test an object detection model

The identified objects are labeled in the image with the calculated confidence score.

If the object detection model that you are testing is an Anomaly optimized model, additional results are displayed for anomalous objects, such as scratched, dented, or chipped objects. For example, if the model detects car doors and you test the model by uploading an image of a scratched car door, two results are displayed. The first result identifies the door by using the object's label, for example car_door. The second result indicates that the door has an anomaly by prefixing anomaly_ to the object's label, for example anomaly_car_door. Both results include confidence scores. The bounding box for the detected anomaly is enclosed by the bounding box for the identified object.
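
The following Python sketch illustrates this relationship between the two results. The result values and the (xmin, ymin, xmax, ymax) box format are assumptions for the example.

    def box_contains(outer, inner):
        """Return True if the inner box lies entirely within the outer box."""
        return (outer[0] <= inner[0] and outer[1] <= inner[1]
                and outer[2] >= inner[2] and outer[3] >= inner[3])

    # Hypothetical results for a scratched car door.
    car_door = {"label": "car_door", "confidence": 0.94, "box": (120, 80, 620, 540)}
    anomaly = {"label": "anomaly_car_door", "confidence": 0.81, "box": (300, 210, 380, 260)}

    # The anomaly bounding box should sit inside the detected object's box.
    print(box_contains(car_door["box"], anomaly["box"]))  # True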

If you used a video to test an object detection model

Before you provide the video, you select whether to annotate it with dots or bounding boxes. The video is processed, then the processed video is displayed with a list of all of the detected objects. As you watch the processed video, the identified objects are labeled as they appear. Each object is annotated with either a dot at its center or a bounding box, and the object's name is displayed next to the annotation. Polygon annotations are not used in the video object test, even if the model is trained for segmentation.

Clicking an object in the list moves the video to the point where that object appears. Processing the video might take a while, depending on its size.

The inference might take a long time to complete. However, you can run multiple inferences simultaneously. Additionally, you do not have to stay on the deployed model details page. If you leave the page, a notification window opens where you can watch the progress. Clicking the link in this window opens the inference results section of the deployed model details page.

To download the result, click Export result in the Results section. A .zip file is downloaded to your system. This file contains the original video, a JSON file that contains the result information, and the processed video with object labels added as annotations.
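
If you want to work with the exported results programmatically, a Python sketch along the following lines can read the result information from the archive. The exact file names inside the .zip file and the JSON structure are assumptions for the example; inspect the file that you download.

    import json
    import zipfile

    with zipfile.ZipFile("inference_result.zip") as archive:
        # Find the JSON file that holds the result information.
        json_name = next(n for n in archive.namelist() if n.endswith(".json"))
        with archive.open(json_name) as f:
            results = json.load(f)

    # For example, count how many times each label was detected (keys assumed).
    counts = {}
    for detection in results.get("detections", []):
        counts[detection["label"]] = counts.get(detection["label"], 0) + 1
    print(counts)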

When you close the results area for an inference, the results are not removed. They are saved for seven days, unless you delete them. To access the results of previous inferences, click Results history in the Test Model section of the Deployed Models page. You can open or delete any of the saved results.

The video object preview does not support non-ASCII labels. This is a limitation of the module that generates the displayed label from the label name, which converts a non-ASCII label into a label that consists entirely of question marks ("?????").

If you used a video to test an action detection model

The video is processed, then the processed video is displayed. As you watch the processed video, the identified actions are listed, along with their confidence and start and end times, as they occur in the video. Processing the video might take a while, depending on its size.

The inference might take a long time to complete. However, you can run multiple inferences simultaneously. Additionally, you do not have to stay on the deployed model details page. If you leave the page, a notification window opens where you can watch the progress. Clicking the link in this window opens the inference results section of the deployed model details page.

The identified actions are grouped by action tag. To see individual actions that were discovered, expand the action tag. Clicking an action moves the video preview to the start of that action.

To download the result, click Export result in the Results section. A .zip file is downloaded to your system. This file contains the original video, a CSV file that contains the result information, and, if you selected the option to generate the annotated video when you started the inference, the processed video with action labels added as annotations.
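
As with the video object results, you can work with the exported action results programmatically. The following Python sketch groups exported actions by tag, assuming the CSV file was extracted from the .zip file. The column names used here are assumptions for the example; check the header row of the file that you download.

    import csv
    from collections import defaultdict

    # Collect (start_time, end_time, confidence) tuples for each action tag.
    groups = defaultdict(list)
    with open("action_results.csv", newline="") as f:
        for row in csv.DictReader(f):
            groups[row["label"]].append(
                (float(row["start_time"]), float(row["end_time"]), float(row["confidence"]))
            )

    for label, actions in groups.items():
        print(label, len(actions), "detected segments")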

When you close the results area for an inference, the results are not removed. They are saved for seven days, unless you delete them. To access the results of previous inferences, click Results history in the Test Model section of the Deployed Models page. You can open or delete any of the saved results.

What to do next

If you are satisfied with the results, the model is ready to be used in production. Otherwise, you can refine the model by following the instructions in Refining a model.