Shutting down and restarting custom foundation model deployment
You can shut down and restart the runtime pods for your custom foundation model deployment in the Red Hat OpenShift AI container platform.
Shutting down and restarting operators
To shut down and restart your custom foundation model deployment from the Red Hat OpenShift AI container platform, you must shut down and restart the Watson Machine Learning, watsonx.ai, and watsonx.ai Inferencing Foundation Model (IFM) operators. For more information, see Shutting down and restarting services.
When you shut down the Watson Machine Learning operator, you cannot deploy custom foundation models, prompt tunes, prompt templates, or machine learning or deep learning frameworks. Additionally, you cannot train prompt tune models or make predictions for machine learning or deep learning models.
When you shut down the watsonx.ai and watsonx.ai IFM operators, you cannot perform inferencing by using the foundation models provided by IBM. In addition, you cannot perform inferencing for your deployed assets, such as custom foundation model deployments, prompt tune deployments, and prompt template deployments, or train prompt tune models.
Order of shutting down operators for custom foundation model deployments
You must shut down the Watson Machine Learning, watsonx.ai, and watsonx.ai Inferencing Foundation Model (IFM) operators in the following order:
- watsonx.ai
- watsonx.ai IFM
- Watson Machine Learning
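The shutdown order above can be scripted by scaling each operator deployment to zero replicas. The following sketch prints the `oc scale` commands rather than running them; the deployment names and namespace are placeholders, not the actual names in your cluster, so substitute the values from your installation:

```shell
#!/bin/sh
# Placeholder namespace and operator deployment names -- replace with the
# actual values from your cluster before running these commands.
OPERATOR_NS="ibm-operators"
SHUTDOWN_ORDER="watsonx-ai-operator watsonx-ai-ifm-operator wml-operator"

# Print the scale-down commands in the required shutdown order:
# watsonx.ai first, then watsonx.ai IFM, then Watson Machine Learning.
for op in $SHUTDOWN_ORDER; do
  echo "oc scale deployment $op --replicas=0 -n $OPERATOR_NS"
done
```

Printing the commands first (a dry run) lets you confirm the order and the names before applying any change to the cluster.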
Order of restarting operators for custom foundation model deployments
You must restart the Watson Machine Learning, watsonx.ai, and watsonx.ai Inferencing Foundation Model (IFM) operators in the following order:
- Watson Machine Learning
- watsonx.ai IFM
- watsonx.ai
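The restart order is the reverse of the shutdown order, and can be sketched the same way by scaling each operator deployment back up to one replica. As before, the commands are printed rather than executed, and the namespace and deployment names are placeholders to be replaced with the actual values from your installation:

```shell
#!/bin/sh
# Placeholder namespace and operator deployment names -- replace with the
# actual values from your cluster before running these commands.
OPERATOR_NS="ibm-operators"
RESTART_ORDER="wml-operator watsonx-ai-ifm-operator watsonx-ai-operator"

# Print the scale-up commands in the required restart order:
# Watson Machine Learning first, then watsonx.ai IFM, then watsonx.ai.
for op in $RESTART_ORDER; do
  echo "oc scale deployment $op --replicas=1 -n $OPERATOR_NS"
done
```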
Parent topic: Deploying custom foundation models