Python library
You can inference and tune foundation models in IBM watsonx.ai programmatically by using the Python library.
See The ibm-watsonx-ai Python library.
You can also work with watsonx.ai foundation models from third-party tools, including LangChain and LlamaIndex, which are described later in this topic.
Learn from available sample notebooks
Sample notebooks are available that you can use as a guide when you create your own notebooks for common tasks, such as inferencing or tuning a foundation model.
See the Python sample notebooks GitHub repository.
Using the Python library from your IDE
The ibm-watsonx-ai Python library is available on PyPI at https://pypi.org/project/ibm-watsonx-ai/.
You can install the ibm-watsonx-ai Python library in your integrated development environment by using the following command:
pip install ibm-watsonx-ai
If you already have the library installed, include the -U option to pick up any updates and work with the latest version of the library:
pip install -U ibm-watsonx-ai
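To confirm which version of the library is installed in your environment, you can query the package metadata with the Python standard library. This is a minimal sketch; it assumes only that the package was installed with the name shown above:

from importlib.metadata import version

# Print the installed version of the ibm-watsonx-ai package
print(version("ibm-watsonx-ai"))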
The Python library depends on several prerequisite packages for working with AI models. To manage them, you can use the wx_ai_samples_conda_env_RT24_1.yml file to set up a conda environment in the integrated development environment that you use for coding.
For more information about using a conda configuration file to set up a virtual environment, see the documentation for your development tool.
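For example, with conda installed, commands like the following create and activate an environment from the configuration file. This is a sketch that assumes the file is in your current directory; the environment name is defined inside the file itself:

conda env create -f wx_ai_samples_conda_env_RT24_1.yml
conda activate <environment-name>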
Using the Python library from the watsonx.ai lightweight engine
You can use the Python library to work with foundation models that you deploy from the watsonx.ai lightweight engine.
When you use the library from the watsonx.ai lightweight engine, you do not pass the project_id or space_id value to the Credentials object. Instead, you initialize an APIClient object and use it for authentication.
For example, the following code shows how you might specify credentials for a request that uses foundation models from a full installation:
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import Embeddings
from ibm_watsonx_ai.foundation_models.utils.enums import EmbeddingTypes

embedding = Embeddings(
    model_id=EmbeddingTypes.IBM_SLATE_30M_ENG,
    params=embed_params,  # embedding parameters that you define elsewhere
    credentials=Credentials(
        url="URL",
        username="user",
        password="***",
        instance_id="openshift",
        version="5.0",
    ),
    project_id="*****",
)
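After the Embeddings object is initialized, you can generate vectors for input text. As a minimal sketch, the library's embed_documents method returns one embedding vector per input string; the input texts here are hypothetical:

# Generate embedding vectors for a list of input strings
vectors = embedding.embed_documents(
    texts=["A foundation model is a large AI model.", "Embeddings map text to vectors."]
)
print(len(vectors), len(vectors[0]))  # number of inputs, embedding dimension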
The following code shows how to authenticate a request that uses foundation models from the watsonx.ai lightweight engine:
from ibm_watsonx_ai import APIClient, Credentials
from ibm_watsonx_ai.foundation_models import Embeddings
from ibm_watsonx_ai.foundation_models.utils.enums import EmbeddingTypes

client = APIClient(
    credentials=Credentials(
        url="URL",
        username="user",
        password="***",
        instance_id="openshift",
        version="5.0",
    )
)

embeddings = Embeddings(model_id=EmbeddingTypes.IBM_SLATE_30M_ENG, api_client=client)
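The resulting Embeddings object works the same way as in the full-installation example. For instance, the library's embed_query method returns the embedding vector for a single string; this short sketch uses a hypothetical input:

# Generate the embedding vector for one query string
vector = embeddings.embed_query(text="What is watsonx.ai?")
print(len(vector))  # embedding dimension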
Working with LangChain from a notebook
LangChain is a framework that developers can use to create applications that incorporate large language models. LangChain can be useful when you want to link two or more functions together. For example, you can use LangChain as part of a retrieval-augmented generation (RAG) task.
For more information, see LLMs > IBM watsonx.ai in the LangChain documentation.
To learn more, use one of the sample RAG notebooks that use LangChain. See RAG examples.
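As a minimal sketch, the langchain-ibm integration package provides a WatsonxLLM class that wraps a watsonx.ai foundation model for use in LangChain chains. The model ID, URL, and project ID shown here are placeholder assumptions:

from langchain_ibm import WatsonxLLM

# Wrap a watsonx.ai foundation model as a LangChain LLM
llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",  # placeholder model ID
    url="URL",                               # your watsonx.ai endpoint
    project_id="*****",                      # placeholder project ID
)
response = llm.invoke("Explain retrieval-augmented generation in one sentence.")
print(response)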
Working with LlamaIndex functions from a notebook
LlamaIndex is a framework for building large language model applications. You can use LlamaIndex functions, such as its text-to-SQL or Pandas DataFrame capabilities, in applications that you build with watsonx.ai foundation models.
For more information, see LLMs > IBM watsonx.ai.
You can work with LlamaIndex functions from a notebook in watsonx.ai.
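For example, the llama-index-llms-ibm integration package provides a WatsonxLLM class that makes a watsonx.ai foundation model available to LlamaIndex. This is a minimal sketch with placeholder credentials and a hypothetical prompt:

from llama_index.llms.ibm import WatsonxLLM

# Wrap a watsonx.ai foundation model as a LlamaIndex LLM
llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",  # placeholder model ID
    url="URL",                               # your watsonx.ai endpoint
    project_id="*****",                      # placeholder project ID
)
response = llm.complete("Generate a SQL query that counts rows in a table named orders.")
print(response)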
Prerequisites
To get started with the Python library, you first need credentials and a project ID or deployment ID.
Learn more
- Getting foundation model information
- Inferencing a foundation model
- Inferencing a foundation model by using a prompt template
- Tuning a foundation model
- Converting text to text embeddings
Parent topic: Coding generative AI solutions