Adding a generative AI task to a service flow
Generative AI tasks produce text that is based on your specific input, enabling the creation of a wide range of content. For instance, they can assist in generating product descriptions, blog posts, or social media updates. This feature improves efficiency by automating content generation, enabling your team to dedicate more time to other essential tasks.
How do generative AI tasks work?
- You provide input, such as a prompt or a description of what you want to generate.
- The AI analyzes the input to identify key concepts and patterns.
- The AI then uses its natural language processing (NLP) capabilities to generate the output text, based on the input and the identified patterns.
- The AI refines the output until it meets the required specifications and is ready to be used.
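The flow above can be sketched as a thin wrapper around a model call. In this minimal Python sketch, `model` is a stand-in for the real LLM endpoint and the refinement step is reduced to simple post-processing; both are illustrative assumptions, not the actual service implementation:

```python
from typing import Callable

def run_generative_task(prompt: str, model: Callable[[str], str]) -> str:
    """Send a prompt to a model and lightly refine the raw output.

    `model` is a placeholder for the real LLM call; in practice this would
    be a request to the provider's generation endpoint.
    """
    raw_output = model(prompt)    # the model analyzes the input and generates text
    refined = raw_output.strip()  # placeholder for the refinement step
    return refined

# Usage with a trivial stand-in model:
echo_model = lambda p: f"  Generated text for: {p}  "
print(run_generative_task("Draft a product description", echo_model))
```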
For more information about the foundation model lifecycle, see Foundation model lifecycle, which shows when current models are planned to be deprecated, which models have been withdrawn, and which alternatives to use.
What are the capabilities of generative AI tasks?
- Classification
  Example: Classify a customer review as positive, neutral, or negative.
- Extraction
  Example: Extract details from a customer's complaint.
- Generation
  Example: Draft a professional email.
- Answering questions
  Example: What is the value of PI?
- Summarizing
  Example: Summarize a meeting transcript.
- Code generation and conversion
  Examples: Convert 70 degrees Fahrenheit to Celsius. Write a Java method that calculates the value of PI.
- Translation
  Example: Translate text from French, German, Italian, or Spanish to English.
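The example prompts above can be kept as reusable templates keyed by capability. This is a minimal sketch; the dictionary layout and `build_prompt` helper are illustrative conventions, not a product API:

```python
# Example prompt templates for each capability, taken from the list above.
CAPABILITY_PROMPTS = {
    "classification": "Classify this customer review as positive, neutral, or negative: {text}",
    "extraction": "Extract the key details from this customer complaint: {text}",
    "generation": "Draft a professional email about: {text}",
    "question_answering": "What is the value of PI?",
    "summarization": "Summarize this meeting transcript: {text}",
    "code": "Write a Java method that calculates the value of PI.",
    "translation": "Translate this text from French to English: {text}",
}

def build_prompt(capability: str, text: str = "") -> str:
    """Fill the selected template with input from the service flow."""
    return CAPABILITY_PROMPTS[capability].format(text=text)

print(build_prompt("classification", "Great product, fast shipping!"))
```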
The potential applications of generative AI are vast and go far beyond these examples. There are numerous possibilities to align with your specific needs.
What steps are required to implement a generative AI task?
The following steps outline the process for creating a generative AI task:
- Create a prompt to send to the watsonx.ai provider using a specific large language model (LLM).
- Include instructions and input for the LLM to analyze.
- Use variables from the service flow to make the prompt reusable in your library.
- Control the amount of data returned in the generated output.
- Train the prompt to achieve better results that match your intended task.
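The steps above can be sketched as a reusable prompt that takes variables from the service flow and caps the amount of generated output. The parameter names here (`model_id`, `max_new_tokens`) mirror common LLM generation APIs but are assumptions for illustration, not the exact watsonx.ai interface:

```python
from string import Template

# Instructions plus input, with $variables supplied by the service flow
# so the prompt is reusable across requests.
PROMPT = Template(
    "Summarize the following customer message in one sentence.\n"
    "Customer name: $customer_name\n"
    "Message: $message"
)

def build_request(customer_name: str, message: str,
                  model_id: str = "example-llm",   # assumed identifier
                  max_new_tokens: int = 100) -> dict:
    """Assemble a request payload; max_new_tokens limits the data returned."""
    return {
        "model_id": model_id,
        "input": PROMPT.substitute(customer_name=customer_name, message=message),
        "parameters": {"max_new_tokens": max_new_tokens},
    }

req = build_request("Ana", "My order arrived late and the box was damaged.")
print(req["input"])
```

Lowering `max_new_tokens` is one way to control the amount of data returned in the generated output, as described in the steps above.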
Is the output response from a generative AI task reliable?
The quality of the response from a generative AI task depends on the training of the large language model and on the construction of the prompt, including the examples provided. Without human oversight, there is a risk of AI hallucinations: outputs that are not based on training data, are incorrectly decoded by the transformer, or do not follow any identifiable pattern. For more information, see What are AI hallucinations?