Coding generative AI solutions

IBM watsonx.ai provides REST APIs that support programmatic tasks for working with foundation models. The same capabilities are exposed through a Python library and a Node.js package, which you can use to work with foundation models in your generative AI applications.
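For example, a text-generation call to the REST API takes a JSON body that carries the model ID, the input prompt, a project scope, and optional generation parameters. The following sketch assembles such a body with only the Python standard library; the field names and parameter values shown are illustrative assumptions based on common watsonx.ai usage, not the authoritative schema, so check the REST API reference before relying on them.

```python
import json

# Illustrative sketch: assemble the JSON body for a watsonx.ai
# text-generation request. Field names ("model_id", "input",
# "project_id", "parameters") follow common watsonx.ai usage;
# verify them against the REST API reference.
def build_generation_body(model_id, prompt, project_id, max_new_tokens=200):
    return {
        "model_id": model_id,      # e.g. "ibm/granite-13b-instruct-v2"
        "input": prompt,           # the prompt text the model completes
        "project_id": project_id,  # scopes the request to your project
        "parameters": {
            "decoding_method": "greedy",       # deterministic decoding
            "max_new_tokens": max_new_tokens,  # cap on generated tokens
        },
    }

body = build_generation_body(
    model_id="ibm/granite-13b-instruct-v2",
    prompt="Explain retrieval-augmented generation in one sentence.",
    project_id="<your-project-id>",
)
print(json.dumps(body, indent=2))
```

The body would then be sent as a POST request with your bearer token; the Python library and Node.js SDK wrap this same call so that you do not construct the payload by hand.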

Tasks that you can do programmatically

You can use the watsonx.ai REST API, Python library, or Node.js SDK to do the following tasks programmatically:

Table 1. Tasks you can do programmatically in watsonx.ai
| Task | Python library | REST API |
| --- | --- | --- |
| Get details about the available foundation models | Get model specs | List the supported foundation models |
| Check the tokens that a model calculates for a prompt | Tokenize built-in foundation models | Text tokenization |
| Get a list of available custom foundation models | Custom models | Retrieve the deployments (use the type=custom_foundation_model parameter) |
| Inference a foundation model | Generate text | Text generation |
| Configure AI guardrails when inferencing a foundation model | Removing harmful content | Use the moderations field to apply filters to foundation model input and output; see Infer text |
| Prompt-tune a foundation model | See the documentation | See the documentation |
| Inference a tuned foundation model | Generate text | Infer text |
| List all prompt templates | List all prompt templates | Get a prompt template |
| List the deployed prompt templates | List deployed prompt templates | List the deployments (use the type=prompt_template parameter) |
| Inference a foundation model by using a prompt template | Prompt Template Manager | Infer text |
| Vectorize text | Embed documents | Text embedding |
| Extract text from documents | Text Extractions | Text extraction |
| Integrate with LangChain | IBM extension in LangChain | — |
| Integrate with LlamaIndex | IBM LLMs in LlamaIndex; IBM embeddings in LlamaIndex | — |
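As an example of the AI guardrails task above, a moderations field can be attached to the same generation request body to filter hateful, abusive, or profane (HAP) content in the prompt and the model output. The nested structure and threshold value below are assumptions sketched for illustration; consult the Infer text REST reference for the exact schema.

```python
import json

# Illustrative sketch: attach AI guardrails to a text-generation
# request via the "moderations" field. The nested shape shown here
# (hap -> input/output -> enabled/threshold) is an assumption based
# on common watsonx.ai usage; verify it against the REST API docs.
body = {
    "model_id": "ibm/granite-13b-instruct-v2",
    "input": "Summarize the customer feedback in two sentences.",
    "project_id": "<your-project-id>",
    "moderations": {
        "hap": {  # hate/abuse/profanity filter
            "input": {"enabled": True, "threshold": 0.75},   # filter the prompt
            "output": {"enabled": True, "threshold": 0.75},  # filter the response
        }
    },
}
print(json.dumps(body["moderations"], indent=2))
```

When a filter triggers, the service can mask or reject the flagged text instead of returning it, so the threshold trades recall of harmful content against false positives on benign text.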


Parent topic: Developing generative AI solutions