What's new

The following functions, features, and support have been added for PowerAI Vision Version 1.1.5:

Integration with IBM® Visual Inspector
Visual Inspector is a native iOS/iPadOS mobile app that brings the capabilities of PowerAI Vision to the edge and rapidly enables visual inspections on mounted or handheld devices. Visual Inspector uses the models trained on PowerAI Vision and performs inferencing using the integrated camera on an iOS/iPadOS device. The app can run models remotely or can use Core ML models that are exported from PowerAI Vision, which enables local inferencing on-device without requiring network connectivity. For details, see Integrating with IBM Visual Inspector.
Support for DICOM images
The DICOM format is a widely used standard for processing medical images. PowerAI Vision can process images in DICOM format without any prior processing or conversion. When you upload a DICOM image or use it for inferencing, PowerAI Vision converts it to PNG format.
OpenShift support
You can install PowerAI Vision on an OpenShift cluster.
Integration with Maximo® Asset Monitor
Maximo Asset Monitor is a cloud service that enables users to remotely monitor devices at the edge. For example, it can help you detect manufacturing irregularities and take action. This integration allows PowerAI Vision to send inference results to the Maximo Asset Monitor cloud platform for further analysis. See Integrating PowerAI Vision Training and Inference with Maximo Asset Monitor for more information.
SSD model support
Single Shot Detector (SSD) models are now available for training in PowerAI Vision. SSD models are suitable for real-time inference but are not as accurate as Faster R-CNN. For more information, see Training a model.
GoogLeNet and tiny YOLO V2 model Core ML support
GoogLeNet and tiny YOLO V2 models can be enabled for Core ML support. To enable Core ML support when training an appropriate model, click Training options, then select Enable Core ML. You can download the Core ML assets from the Deployed Models page. For details, see Training a model.
Add pre- and post-processing customizations
You can upload customizations that enable you to perform operations before and after each inference operation with no manual intervention. For details, see Pre and post processing.
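The shape of these customizations is defined by PowerAI Vision itself; purely as an illustration of the idea, the following Python sketch wraps an inference call with a pre-step and a post-step. All function names and the result format here are hypothetical, not the actual customization interface.

```python
# Hypothetical sketch of pre- and post-processing around an inference call.
# `preprocess`, `postprocess`, and the result dicts are illustrative only.

def preprocess(image_bytes):
    """Example pre-step: return the image unchanged (resizing or
    normalization could happen here)."""
    return image_bytes

def postprocess(results, min_confidence=0.5):
    """Example post-step: drop detections below a confidence threshold."""
    return [r for r in results if r.get("confidence", 0.0) >= min_confidence]

def infer_with_hooks(image_bytes, run_inference, min_confidence=0.5):
    """Run the pre-step, the supplied inference function, then the post-step."""
    prepared = preprocess(image_bytes)
    results = run_inference(prepared)
    return postprocess(results, min_confidence)
```

With a stub inference function returning two detections at confidences 0.9 and 0.3, only the first survives the post-step.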
Use videos to test all object detection models
You can now use videos to test all models trained for object detection. For information about testing models, see Testing a model.
TensorRT support
Single Shot Detector and Faster R-CNN models are now enabled for TensorRT. TensorRT maximizes inference performance on edge devices, speeding up inferences and delivering low latency across a variety of networks. When a model is enabled for TensorRT, downloadable TensorRT assets are generated. You can download the TensorRT assets from the Deployed Models page. For details, see Training a model.
Set model status
You can set a model as "Production" or "Rejected". Other models are considered "Untested". If you are using project groups with the production work flow, you can use scripts to work with the latest Production or Untested model.
Project groups
Project groups allow a user to group data sets and trained models together. PowerAI Vision lets you easily view and work with the assets associated with a project group. You can optionally use an API to enable production work flow for a project group. For more information, see Creating and working with project groups.
Production work flow
You can optionally use the production work flow with project groups to quickly and easily work with the most recent deployed model of each status. For example, you can use a script to perform inferences on the newest untested deployed model without knowing the model ID. See Production work flow for more information.
Auto deploy
If production work flow is turned on for a project group, you can also turn on auto deploy. When auto deploy is turned on, PowerAI Vision automatically deploys models in the following situations:
  • The most recently trained model with "Untested" (unmarked) status is automatically deployed. If you undeploy that model, another Untested model is not automatically deployed until a new one is trained.
  • The latest trained model with "Production" status is automatically deployed. If you undeploy that model or change its status to "Rejected", the latest trained model with Production status is deployed, if one exists.
For more information, see Automatically deploying the newest model.
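The selection rules above can be sketched as a small Python function that picks the models auto deploy would keep deployed from a list of model records. The dict fields `status` and `trained_at` are hypothetical and do not reflect PowerAI Vision's actual data model.

```python
# Illustrative sketch of the auto deploy selection rules; field names
# are hypothetical, not PowerAI Vision's real data model.

def newest(models, status):
    """Return the most recently trained model with the given status, or None."""
    candidates = [m for m in models if m["status"] == status]
    return max(candidates, key=lambda m: m["trained_at"], default=None)

def auto_deploy_targets(models):
    """Pick the models auto deploy keeps deployed: the newest Untested
    model and the newest Production model, when each exists."""
    targets = []
    for status in ("Untested", "Production"):
        model = newest(models, status)
        if model is not None:
            targets.append(model)
    return targets
```

Note that Rejected models never qualify, matching the behavior described above when a Production model is re-marked as Rejected.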
Inference history is available for all video testing
The inference results for video testing are saved for seven days unless you delete them. To access the results of previous inferences, click Results history in the Test Model section of the Deployed Models page. You can open or delete any of the saved results.
Python 3 support for custom models
Custom models must support Python 3. Trained custom models from releases prior to Version 1.1.5 will not work if they support only Python 2. For more information about custom models, see Preparing a model that will be used to train data sets in PowerAI Vision and Preparing a model that will be deployed in PowerAI Vision.
PyTorch custom model support
Imported custom models can now be PyTorch or TensorFlow based.
Auto labeling improvements
Auto labeling has been changed in several ways:
  • You can set the confidence threshold for adding labels.
  • Images with auto labels are marked in data sets.
  • You can save or reject individual auto labels.
  • You can view and filter on the confidence value for automatically added labels, which makes it easier to quickly reject labels that are likely incorrect or accept labels that are likely accurate.
For information, see Automatically labeling objects.
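The confidence-based review of auto labels can be pictured as a simple triage that splits labels into likely-accept, needs-review, and likely-reject groups. The thresholds and label format in this Python sketch are illustrative only, not part of the PowerAI Vision interface.

```python
# Hypothetical triage of automatically added labels by confidence value;
# the thresholds and the label dict shape are illustrative only.

def triage_auto_labels(labels, accept_at=0.9, reject_at=0.3):
    """Split auto labels into ones to accept, review by hand, or reject."""
    accept, review, reject = [], [], []
    for label in labels:
        confidence = label["confidence"]
        if confidence >= accept_at:
            accept.append(label)
        elif confidence < reject_at:
            reject.append(label)
        else:
            review.append(label)
    return accept, review, reject
```

Raising `accept_at` or `reject_at` makes the triage stricter, pushing more labels into the review or reject groups.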
Improved metrics support
The Usage Metrics page now includes more detail. Inferences are broken down by model, then are further broken down to give you more detailed information about the processes being run. Information about each file has also been added. For more information, see Monitoring usage metrics.
New metrics added
The following metrics were added. They are not tracked for data sets or models created prior to Version 1.1.5.
Total files
The number of files the user created by uploading, cloning, augmenting, or capturing video frames.
Exports
The number of data sets and models that have been exported.
See Monitoring usage metrics for details.
CISO code signing
You can verify the downloaded install tar file by using the CISO code signing service. For details, see Installing PowerAI Vision stand-alone, Installing PowerAI Vision with IBM Cloud Private, or Installing Inference Server.
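The CISO code signing service performs cryptographic signature verification, which is not reproduced here. Purely as a generic stand-in illustrating the idea of integrity-checking a downloaded archive against a published digest (not the CISO process itself), a Python sketch:

```python
# Generic integrity check for a downloaded archive: compare its SHA-256
# digest against a published value. This is NOT the CISO code signing
# process, only an illustration of download verification in general.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, expected_digest):
    """Return True if the file's digest matches the published value."""
    return sha256_of(path) == expected_digest
```

Chunked reading keeps memory use flat even for a multi-gigabyte install tar file.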
Exported models are not encrypted
PowerAI Vision no longer encrypts exported models. See Importing, exporting, and downloading PowerAI Vision information for details.
Containers will run with a non-root user ID
All containers will run with a non-root user ID, which impacts install and upgrade. See Installing, upgrading, and uninstalling PowerAI Vision.
User interface improvements
The following changes have been made to the user interface:
Drag and drop videos when testing
You can upload videos for testing either by navigating to the file or by drag and drop.
Updated look and feel of Training page
The Training button and Advanced settings options were moved to the top toolbar. Additionally, the options are now on descriptive cards instead of radio buttons.
Preview contrast and brightness
The settings panel in the Label Objects page lets you apply a Preview contrast and Preview brightness filter to the image being edited. This is particularly useful for grayscale images, such as typical DICOM images. The filter is temporary, to make labeling easier, and does not modify the original image in any way.
Custom models renamed to Custom assets
The Custom models tab has been renamed Custom assets, and when you import a Custom model, you can specify the framework (TensorFlow or PyTorch).