watsonx Developer Hub

Text generation


Overview

Text generation is the process of automatically producing coherent and meaningful text, which can be in the form of sentences, paragraphs, or even entire documents. The goal is to create text that is not only grammatically correct but also contextually appropriate and engaging for the intended audience.

Text generation is a versatile capability with applications across many domains, including:

  • Blog posts and articles
  • News articles and reports
  • Social media posts
  • Product descriptions and reviews
  • Creative writing
  • Language translation
  • Text summaries
  • Virtual assistant interactions
  • Storytelling and narrative generation

Example

You can generate text in IBM watsonx.ai by prompting foundation models programmatically with the API or SDKs.

The following example uses the ibm/granite-13b-instruct-v2 foundation model, which works well with text generation. For information about the full set of supported IBM and third-party foundation models, see Models.

Replace {token}, {watsonx_ai_url}, and {project_id} with your information.

curl -X POST \
-H 'Authorization: Bearer {token}' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
--data-raw '{
  "input": "How far is Paris from Bangalore?",
  "parameters": {
    "max_new_tokens": 100,
    "time_limit": 10000
  },
  "model_id": "ibm/granite-13b-instruct-v2",
  "project_id": "{project_id}"
}' \
"{watsonx_ai_url}/ml/v1/text/generation?version=2024-05-31"
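The same request can be assembled programmatically before sending it with any HTTP client. A minimal sketch in Python; the helper name `build_generation_request` is illustrative, not part of the watsonx API, and the `{token}`, `{watsonx_ai_url}`, and `{project_id}` placeholders are yours to fill in:

```python
import json

def build_generation_request(token, watsonx_ai_url, project_id, prompt):
    """Assemble the URL, headers, and JSON body for the text generation endpoint."""
    url = f"{watsonx_ai_url}/ml/v1/text/generation?version=2024-05-31"
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    body = {
        "input": prompt,
        "parameters": {"max_new_tokens": 100, "time_limit": 10000},
        "model_id": "ibm/granite-13b-instruct-v2",
        "project_id": project_id,
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_generation_request(
    "{token}", "{watsonx_ai_url}", "{project_id}",
    "How far is Paris from Bangalore?",
)
```

The returned triple can then be passed to the HTTP client of your choice, for example `requests.post(url, headers=headers, data=payload)`.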

Streaming

You can also enable streaming, which returns generated text incrementally as it is produced, when using the SDK for Node.js or Python.

try {
  // generateTextStream resolves to an async iterable of generated text chunks
  const stream = await watsonxAIService.generateTextStream(params);

  for await (const chunk of stream) {
    console.log(chunk);
  }
} catch (err) {
  console.warn(err);
}
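The same consume-as-you-go pattern applies in the Python SDK, where the stream is an iterable of text chunks. A minimal sketch of the consumption loop, using a stand-in generator in place of a live call such as `model.generate_text_stream(prompt)`:

```python
def fake_text_stream():
    """Stand-in for a live model stream; yields text chunks as they would arrive."""
    yield from ["Paris ", "is roughly ", "7,800 km ", "from Bangalore."]

# Print each chunk as it arrives and accumulate the full response.
full_text = ""
for chunk in fake_text_stream():
    print(chunk, end="", flush=True)
    full_text += chunk
```

With a real model, replace `fake_text_stream()` with the SDK's streaming call; the loop body stays the same.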

Removing harmful content

watsonx.ai can filter harmful content, such as hate, abuse, and profanity (HAP), and detect personally identifiable information (PII) in foundation model input and output. To enable these filters with default settings when you use the Python library, include the guardrails parameter in the request:

response = model.generate(prompt, guardrails=True)

The following code example shows how to enable and configure the filters.

from ibm_watsonx_ai.metanames import GenTextModerationsMetaNames

guardrails_hap_params = {
    GenTextModerationsMetaNames.INPUT: False,
    GenTextModerationsMetaNames.THRESHOLD: 0.45
}
guardrails_pii_params = {
    GenTextModerationsMetaNames.INPUT: False,
    GenTextModerationsMetaNames.OUTPUT: True,
    GenTextModerationsMetaNames.MASK: {"remove_entity_value": True}
}

response = model.generate(prompt,
    guardrails=True,
    guardrails_hap_params=guardrails_hap_params,
    guardrails_pii_params=guardrails_pii_params)
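In the example above, THRESHOLD sets how sensitive the HAP filter is (content scoring above the threshold is flagged), and MASK with remove_entity_value controls whether detected PII is replaced in the output rather than merely reported. A toy illustration of these two ideas only; this is not the SDK's implementation, and the scores and entities are invented:

```python
def apply_filters(text, hap_score, pii_spans, threshold=0.45, mask_pii=True):
    """Toy filter: suppress text whose HAP score exceeds the threshold,
    and mask any detected PII spans with a placeholder."""
    if hap_score > threshold:
        return None  # flagged as harmful; text is withheld
    if mask_pii:
        for span in pii_spans:
            text = text.replace(span, "****")
    return text

# PII detected but HAP score below threshold: text passes with PII masked.
print(apply_filters("Contact me at jane@example.com", 0.10, ["jane@example.com"]))
# HAP score above the 0.45 threshold: text is suppressed.
print(apply_filters("some harmful text", 0.90, []))
```

A lower threshold makes the filter stricter; raising it lets more borderline content through.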