Managing data for model evaluations

To enable model evaluations, you must prepare your model data for logging so that insights can be generated.

Provide your model data in a supported format. Your model transactions are processed and logged in the data mart, the logging database that stores the data that is used for model evaluations. The following sections describe the types of data that are logged for model evaluations:

Training data

You must provide training data to generate the statistics that you need to configure model evaluations. Training data contains labeled feature columns that are measured to determine their impact on model outcomes and a prediction column that contains the outcome that the model is trained to predict. The following example shows training data from the German Credit Risk dataset:

CSV file of training data
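As a minimal sketch of what such training data looks like, the following builds a few labeled rows in CSV form. The column names (`CheckingStatus`, `LoanDuration`, `CreditHistory`, `LoanAmount`, `Risk`) follow the German Credit Risk sample but are illustrative assumptions here, as is the `training_data_csv` helper:

```python
import csv
import io

# Labeled feature columns plus the prediction (label) column, here "Risk".
FIELDS = ["CheckingStatus", "LoanDuration", "CreditHistory", "LoanAmount", "Risk"]
ROWS = [
    {"CheckingStatus": "0_to_200", "LoanDuration": 31,
     "CreditHistory": "credits_paid_to_date", "LoanAmount": 1889, "Risk": "No Risk"},
    {"CheckingStatus": "less_0", "LoanDuration": 18,
     "CreditHistory": "outstanding_credit", "LoanAmount": 3693, "Risk": "Risk"},
]

def training_data_csv(fields, rows):
    """Serialize labeled training rows to CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(training_data_csv(FIELDS, ROWS))
```

The header row carries the column names that the evaluation service needs to map feature and prediction columns.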

To enable model evaluations, you must connect your training data in a format that can be processed. For more information, see Managing training data.

Payload data

Payload data contains the input and output transactions for your deployment. To configure evaluations, the payload data from your model is stored in a payload logging table. The payload logging table contains the feature and prediction columns that exist in your training data, and a prediction probability column that contains the model's confidence in its prediction. The table also includes timestamp and ID columns that identify each scoring request that you send, as shown in the following example:

Python SDK sample output of payload logging table
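To illustrate the shape of one row in the payload logging table, the sketch below assembles a record with feature columns, the model's prediction and probability, and the ID and timestamp columns. All field names here are illustrative assumptions, not the data mart's actual schema:

```python
import datetime
import uuid

def make_payload_record(features, prediction, probability):
    """Assemble an illustrative payload-logging record: input features,
    model output, confidence, plus an ID and timestamp per scoring request."""
    return {
        "scoring_id": str(uuid.uuid4()),  # unique ID for this scoring request
        "scoring_timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        **features,                        # same feature columns as the training data
        "prediction": prediction,          # model output
        "probability": probability,        # model's confidence in the prediction
    }

record = make_payload_record(
    {"CheckingStatus": "less_0", "LoanDuration": 18}, "Risk", 0.82)
print(record)
```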

You must send scoring requests to provide a log of your model transactions. For more information, see Managing payload data.
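A scoring request body is typically JSON that pairs feature field names with transaction values. The `input_data` with `fields`/`values` shape below mirrors common scoring APIs but is an assumption here, and sending the request to an endpoint is omitted:

```python
import json

def build_scoring_request(fields, values):
    """Build an illustrative JSON scoring-request body (fields/values shape)."""
    return json.dumps({"input_data": [{"fields": fields, "values": values}]})

body = build_scoring_request(
    ["CheckingStatus", "LoanDuration"],
    [["less_0", 18], ["0_to_200", 31]],  # two transactions in one request
)
print(body)
```

Each inner list in `values` is one transaction, so a single request can log multiple model transactions at once.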

Feedback data

Feedback data is labeled data that matches the structure of your training data and includes known model outcomes. These known outcomes are compared to your model predictions to measure the accuracy of your model. Upload feedback data regularly to continuously measure the accuracy of your model predictions. For more information, see Managing feedback data.
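To show how feedback data supports accuracy measurement, the sketch below compares known outcomes from feedback rows against the model's logged predictions. The helper name and the flat list shapes are assumptions for illustration:

```python
def accuracy_from_feedback(feedback_labels, predictions):
    """Fraction of predictions that match the known outcomes in feedback data."""
    if len(feedback_labels) != len(predictions):
        raise ValueError("feedback and prediction counts must match")
    matches = sum(1 for label, pred in zip(feedback_labels, predictions)
                  if label == pred)
    return matches / len(feedback_labels)

# Known outcomes from feedback data vs. logged model predictions
labels = ["Risk", "No Risk", "No Risk", "Risk"]
preds = ["Risk", "No Risk", "Risk", "Risk"]
print(accuracy_from_feedback(labels, preds))  # → 0.75
```

Uploading fresh feedback batches and recomputing this ratio over time is what lets the accuracy measurement stay current.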

Learn more