Adding a generative AI task to a service flow

Generative AI tasks use watsonx.ai to help you create distinctive and engaging content.

Generative AI tasks produce text that is based on your specific input, enabling the creation of a wide range of content. For instance, they can assist in generating product descriptions, blog posts, or social media updates. This feature improves efficiency by automating content generation, enabling your team to dedicate more time to other essential tasks.

How do generative AI tasks work?

Generative AI tasks use a combination of machine learning algorithms and natural language processing (NLP) to understand your input and generate the desired output. The following sequence shows a basic outline of the process, and a minimal code sketch follows the list:
  1. You provide input, such as a prompt or a description of what you want to generate.
  2. AI uses its machine learning algorithms to analyze the input and identify the key concepts and patterns.
  3. AI then uses its NLP capabilities to generate the output text, based on the input and the identified patterns.
  4. AI refines the output until it meets the required specifications and is ready to be used.
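As a minimal sketch of this flow, the following Python example sends a prompt to the watsonx.ai text generation REST API and reads the generated text. The endpoint URL, the placeholder credentials (IAM_TOKEN, PROJECT_ID), and the chosen model ID are illustrative assumptions; check the watsonx.ai API reference for the exact request format in your environment.

import requests

# Illustrative values; replace with your region, credentials, and project.
API_URL = "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-29"
IAM_TOKEN = "<your IBM Cloud IAM bearer token>"
PROJECT_ID = "<your watsonx.ai project ID>"

payload = {
    "model_id": "ibm/granite-13b-instruct-v2",  # any supported text generation model
    "project_id": PROJECT_ID,
    # The prompt combines an instruction with the input to analyze.
    "input": "Write a short product description for a stainless steel water bottle.",
    # Parameters control how much text the model generates.
    "parameters": {"max_new_tokens": 150, "temperature": 0.7},
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {IAM_TOKEN}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# The generated text is returned in the first element of "results".
print(response.json()["results"][0]["generated_text"])
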
Generative AI uses large language models (LLMs), or foundation models, trained on extensive text data, such as books, articles, and websites, to understand and generate human-like text. These models generate responses that are similar in style and content to the input they receive, or as instructed by the prompt, with various models offering different specializations. To view the complete list of foundation models supported by watsonx.ai, see Foundation models in watsonx.ai. Not all foundation models in watsonx.ai are supported by generative AI tasks, which support only text generation models.
Note: Models are regularly updated and improved, with older ones deprecated and eventually withdrawn. The model list changes over time, and you might need to update your generative AI task with a different model if your initially chosen model is withdrawn.

For more information about the foundation model lifecycle, see Foundation model lifecycle, which shows when current models are planned to be deprecated, which models have been withdrawn, and which alternatives to use. A short sketch for checking the current model list follows.
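For example, the following sketch queries the watsonx.ai foundation model specifications endpoint so that you can confirm whether the model that your generative AI task uses is still offered, and in which lifecycle phase it is. The endpoint path and the response field names (resources, model_id, lifecycle) are assumptions based on the public watsonx.ai API reference; verify them for your region.

import requests

# Illustrative endpoint; adjust the region and version date for your environment.
SPECS_URL = "https://us-south.ml.cloud.ibm.com/ml/v1/foundation_model_specs?version=2023-05-29"

response = requests.get(SPECS_URL, timeout=60)
response.raise_for_status()

for model in response.json().get("resources", []):
    # Each lifecycle entry typically names a phase such as "available",
    # "deprecated", or "withdrawn", together with its start date.
    phases = [(p.get("id"), p.get("start_date")) for p in model.get("lifecycle", [])]
    print(model.get("model_id"), phases)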

What are the capabilities of generative AI tasks?

The following basic examples illustrate the types of tasks that generative AI can perform:
  • Classification
    Example: Classify a customer review as positive, neutral, or negative.
  • Extraction
    Example: Extract details from a customer's complaint.
  • Generation
    Example: Draft a professional email.
  • Answering questions
    Example: What is the value of pi?
  • Summarizing
    Example: Summarize a meeting transcript.
  • Code generation and conversion
    Examples: Convert 70 degrees Fahrenheit to Celsius. Write a Java method that calculates the value of pi.
  • Translation
    Example: Translate text from French, German, Italian, or Spanish to English.

The potential applications of generative AI are vast and go far beyond these examples; you can adapt these capabilities to your specific needs. A few example prompts follow.
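To make these capabilities concrete, the following prompt templates show how a few of the task types above might be phrased. The wording and the {transcript} placeholder are example assumptions, not a prescribed format; you would tune them for the model that your task uses.

# Classification: ask the model to label a customer review.
CLASSIFY_REVIEW = (
    "Classify the sentiment of the customer review as positive, neutral, or negative.\n"
    "Review: The checkout was slow, but support resolved my issue quickly.\n"
    "Sentiment:"
)

# Extraction: pull structured details out of a complaint.
EXTRACT_COMPLAINT = (
    "Extract the product name, the problem, and the requested action from the complaint.\n"
    "Complaint: My kettle stopped heating after two weeks; I would like a replacement.\n"
    "Product:"
)

# Summarizing: condense a transcript that is supplied at run time.
SUMMARIZE_TRANSCRIPT = (
    "Summarize the following meeting transcript in three bullet points.\n"
    "Transcript: {transcript}\n"
    "Summary:"
)

print(SUMMARIZE_TRANSCRIPT.format(transcript="We agreed to ship the release on Friday."))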

What steps are required to implement a generative AI task?

The following steps outline the process for creating a generative AI task, and a small prompt sketch follows the list:

  1. Create a prompt to send to the watsonx.ai provider using a specific large language model (LLM).
  2. Include instructions and input for the LLM to analyze.
  3. Use variables from the service flow to make the prompt reusable in your library.
  4. Control the amount of data returned in the generated output.
  5. Refine the prompt to achieve better results that match your intended task.
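The following sketch illustrates steps 3 and 4: a prompt template that is filled with variables passed in from the service flow, plus generation parameters that cap how much text is returned. The variable names (customer_name, order_summary), the build_prompt helper, and the parameter values are hypothetical and shown only for illustration.

# Reusable prompt template; the placeholders are filled from service flow variables.
PROMPT_TEMPLATE = (
    "You are a customer service assistant.\n"
    "Write a short, polite order status update email.\n"
    "Customer name: {customer_name}\n"
    "Order summary: {order_summary}\n"
    "Email:"
)

def build_prompt(customer_name: str, order_summary: str) -> str:
    """Fill the template with values passed in from the service flow."""
    return PROMPT_TEMPLATE.format(customer_name=customer_name, order_summary=order_summary)

# Generation parameters that limit the amount of data returned in the output.
GENERATION_PARAMETERS = {
    "max_new_tokens": 200,   # upper bound on the length of the response
    "min_new_tokens": 30,
    "stop_sequences": ["\n\n"],
}

print(build_prompt("Dana Kim", "Order 8841, two items, shipped on June 3"))
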
For more information about how to construct prompts, refer to the following documentation:

Is the output response from a generative AI task reliable?

The quality of the response from a generative AI task depends on the training of the large language model and the construction of the prompt, including the examples provided. Without human oversight, there is a risk of encountering AI hallucinations: outputs that are not based on training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. For more information, see What are AI hallucinations?

What to do next