Controlling tuning availability in IBM watsonx.ai
Because tuning foundation models demands significant compute resources, you might want to disable a tuning method that is available in Tuning Studio to free up resources. You can enable a specific tuning method later, when you're ready to tune a foundation model.
Before you begin
The IBM watsonx.ai service must be installed.
Before you enable tuning, be sure that you have the necessary resources available to support prompt tuning in the Tuning Studio.
- For details about the overall resources that are required for the service, see Hardware requirements.
- There must be one available GPU for the model that you plan to prompt tune. For more information about GPU requirements per foundation model, see Foundation models in IBM watsonx.ai.
- For details about the system requirements for various tuning methods, see Planning for foundation model tuning in IBM watsonx.ai.
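You can check the GPU requirement from the cluster itself before you enable tuning. The following is a minimal sketch, assuming GPUs are exposed through the standard `nvidia.com/gpu` extended resource name (the `oc` command is shown in a comment; the sample output below is illustrative, not taken from a real cluster):

```shell
# On a live cluster, list allocatable GPUs per node with:
#   oc get nodes -o custom-columns='NODE:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu'
# Sample output in that shape (illustrative only):
sample='NODE       GPU
worker-0   2
worker-1   0'

# Sum the GPU column, skipping the header line.
total=$(printf '%s\n' "$sample" | awk 'NR>1 {sum += $2} END {print sum}')
echo "total allocatable GPUs: $total"
```

If the total is zero, free up or add a GPU before you enable a tuning method.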
Procedure
You can control tuning availability by enabling or disabling the settings for a specific tuning method. You can control the availability of the following tuning methods:
- Prompt tuning

  Warning: Prompt tuning is deprecated in version 5.2.0 and will be removed in a future release.

  - You must be an instance administrator.
  - You can disable prompt tuning after you install the service by patching the deployment with the following command:

```
oc patch watsonxai watsonxai-cr \
  --namespace=${PROJECT_CPD_INST_OPERANDS} \
  --type=merge \
  --patch='{"spec":{"tuning_disabled": true}}'
```

  - You can re-enable prompt tuning if you previously disabled it by patching the deployment with the following command:

```
oc patch watsonxai watsonxai-cr \
  --namespace=${PROJECT_CPD_INST_OPERANDS} \
  --type=merge \
  --patch='{"spec":{"tuning_disabled": false}}'
```
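After the disable patch is applied, the relevant portion of the `watsonxai-cr` custom resource looks like the following fragment (a sketch that shows only the field set by the patch; all other `spec` fields are omitted):

```yaml
# Fragment of the watsonxai-cr custom resource after the disable patch.
# Only the field touched by the patch is shown here.
spec:
  tuning_disabled: true
```

You can read the live value back with `oc get watsonxai watsonxai-cr --namespace=${PROJECT_CPD_INST_OPERANDS} -o jsonpath='{.spec.tuning_disabled}'`.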
- Full fine tuning

  - You must be a cluster administrator.
  - Set the wml-cr to maintenance mode with the following command:

```
oc patch wmlbase wml-cr \
  --namespace=${PROJECT_CPD_INST_OPERANDS} \
  --type=merge \
  --patch='{"spec":{"ignoreForMaintenance": true}}'
```

  - Update the training configuration with the following command to disable full fine tuning:

```
oc patch cm wmltrainingconfigmap \
  --namespace=${PROJECT_CPD_INST_OPERANDS} \
  --type=merge \
  --patch='{"service":{"fine_tuning": {"full.enabled": false}}}'
```

  - Note the names of the training pods by running the following command:

```
oc get pods | grep wmltraining
```

    Restart the training pods by using the pod names:

```
oc delete pod <training-pod-name>
```

  - Note the names of the Watson Studio pods by running the following command:

```
oc get pods | grep portal-ml-dl
```

    Restart the Watson Studio pods by using the pod names:

```
oc delete pod <studio-pod-name>
```
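The two restart steps above follow the same pattern: list the pods, filter by name, and delete the matches so they are recreated. The following is a minimal sketch of that filtering, run against sample `oc get pods` output (illustrative only; on a live cluster you would pipe the real command output through the same filter):

```shell
# Sample `oc get pods` output (illustrative, not from a real cluster).
sample='NAME                      READY   STATUS    RESTARTS   AGE
wmltraining-6f9c7-abcde   1/1     Running   0          2d
portal-ml-dl-5d8b1-fghij  1/1     Running   0          2d
other-service-12345       1/1     Running   0          2d'

# Extract the training pod names, as in: oc get pods | grep wmltraining
training_pods=$(printf '%s\n' "$sample" | grep wmltraining | awk '{print $1}')
echo "$training_pods"

# On a live cluster, each extracted name would then be restarted with:
#   oc delete pod <training-pod-name>
```

The same filter with `grep portal-ml-dl` yields the Watson Studio pod names for the second restart step.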
What to do next
To get started with tuning foundation models, see Tuning Studio.