Creating generative AI
To build a project, you might need to analyze or generate specific content based on the context of the project itself. You can specify what needs to be analyzed and produced by using the prompt editor. You can then use the content within skills or a broader project.
When you use skills to complete your tasks, some skills automate those tasks, such as creating a custom quotation, getting it approved, and notifying customers. In this process, you might need to understand or produce contextual information.
For example:
- Generate more information about the target customer when the quotation is created.
- Generate an explanation of why specific terms and conditions are added to the quotation.
- Generate email text that is sent to the customer with the appropriate information and tone.
All these examples are contextual and might vary from one execution to the next. Each skill execution must take this context into account to analyze or generate the information.
When you need to specify and verify the analysis or output of contextual content, you can create prompts in the prompt editor to generate outputs by using large language models (LLMs). You can also provide inputs and expected outputs as examples to improve the quality of the generation with few-shot prompting. You can then publish and use the prompt as a skill.
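With few-shot prompting, sample input and output pairs are placed before the live input so that the model can imitate the demonstrated pattern. The following Python sketch illustrates the idea; the instruction text, the example pair, and the final prompt layout are illustrative assumptions, not the editor's internal format:

```python
# A minimal sketch of few-shot prompting: sample input/output pairs precede
# the new input so that the model can imitate the demonstrated pattern.
# The instruction and examples below are illustrative only.
instruction = "Write a short, polite email that sends a quotation to a customer."

examples = [
    ("Quotation Q-1001 for ACME Corp, total 12,500 USD",
     "Dear ACME Corp, please find attached quotation Q-1001 for a total of 12,500 USD."),
]

new_input = "Quotation Q-2002 for Globex Inc, total 8,900 USD"

# Assemble the prompt: instruction, then each example pair, then the new input.
parts = [instruction]
for sample_input, sample_output in examples:
    parts.append(f"Input: {sample_input}\nOutput: {sample_output}")
parts.append(f"Input: {new_input}\nOutput:")
prompt = "\n\n".join(parts)

print(prompt)  # This is the text that an LLM would complete with the new email.
```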
The following steps outline how to generate context by creating prompts.
Creating a generative AI skill
- From the menu, click Skill studio.
- On the Skill studio page, click Create, and select Project.
- In the New project window, name your project, describe your project, and click Create.
- Choose the Generative AI skill type.
- Name and describe your skill, and click Create.
The prompt editor opens.
Skills can be private or public. You can exclude a skill from being visible in the catalog after the project is published by marking it as private in your list of skills. To make a skill private or public, click the corresponding icon next to the skill name in your list.
Creating prompts
In the prompt editor, you can create a prompt and generate an output.
- From the Model pull-down menu, select a model.
All models come with different characteristics as described in the following table:
Table 1. List of supported models

| Model | Provider | Maximum tokens (input only) | Maximum tokens (input + output) | Description | Supported tasks |
| --- | --- | --- | --- | --- | --- |
| granite-13b-chat-v2 | IBM | 8191 | 8192 | This model is optimized for dialog use cases and works well with virtual agent and chat applications. | classification, extraction, generation, question answering, summarization |
| granite-13b-instruct-v2 | IBM | 8191 | 8192 | This model was trained with high-quality finance data and is a top-performing model on finance tasks. Evaluated financial tasks include providing sentiment scores for stock and earnings call transcripts, classifying news headlines, extracting credit risk assessments, summarizing financial long-form text, and answering financial or insurance-related questions. | classification, extraction, generation, question answering, summarization |
| granite-20b-multilingual | IBM | 4096 | 8192 | This model is based on the IBM Granite Instruct foundation model and is trained to understand and generate text in English, German, Spanish, French, and Portuguese. | classification, extraction, generation, question answering, summarization |
| llama-2-13b-chat | Meta | 4095 | 4096 | The Llama 2 Chat model is provided by Meta on Hugging Face. The fine-tuned model is useful for chat generation. The model is pretrained with publicly available online data and fine-tuned with reinforcement learning from human feedback. | classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization |
| llama-2-70b-chat | Meta | 4095 | 4096 | The Llama 2 Chat model is provided by Meta on Hugging Face. The fine-tuned model is useful for chat generation. The model is pretrained with publicly available online data and fine-tuned with reinforcement learning from human feedback. | classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization |
| llama-3-70b-instruct | Meta | 4096 | 8192 | Meta Llama 3 foundation models are accessible, open large language models that are built with Meta Llama 3 and provided by Meta on Hugging Face. The Llama 3 foundation models are instruction fine-tuned language models that can support various use cases. | classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization |
| llama-3-8b-instruct | Meta | 4096 | 8192 | Meta Llama 3 foundation models are accessible, open large language models that are built with Meta Llama 3 and provided by Meta on Hugging Face. The Llama 3 foundation models are instruction fine-tuned language models that can support various use cases. | classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization |
| mixtral-8x7b-instruct-v01 | Mistral AI | 16384 | 32768 | The mixtral-8x7b-instruct-v01 foundation model is provided by Mistral AI. It is a pretrained generative sparse mixture-of-experts network that groups the model parameters and, for each token, chooses a subset of groups (referred to as experts) to process the token. As a result, each token has access to 47 billion parameters but uses only 13 billion active parameters for inferencing, which reduces costs and latency. | classification, code, extraction, generation, retrieval-augmented generation, summarization, translation |
- Create a prompt.
- (Optional) Enter an imperative statement in the Context pane.
- (Optional) In the Prompt input pane, enter the text that you want the model to answer.
- (Optional) Add variables.
  Variables are used as inputs in generative AI skills. At least one variable must be defined, even if it is not used.
  - Click New variable in the Variables pane.
  - Name the variable and enter its default value. Variable names and values must be strings.
  - Insert the variable in your prompt in the Prompt input or Context pane. The variable name must be surrounded by double curly brackets, for example: {{topic}}. A sketch after these steps shows how such placeholders are resolved.
  - You can insert variables automatically in the following ways:
    - Click the Add variable icon in the pane, and select a variable from the list.
    - Press Ctrl+Space in the pane, and select a variable.
- (Optional) Set tokens in the Parameters pane to constrain the length of generated outputs.
  Tip: A token is a collection of characters that has semantic meaning for a model. The words in your prompt text are converted into tokens before they are processed by the LLM. The raw result from a model is also tokens, which are converted back into words to be displayed as the result.
  - The minimum and maximum values are set to 1 and 50 by default.
  - The minimum value cannot be 0.
  - The limit of the maximum value varies depending on the model that you selected.
- (Optional) Add training examples. See Adding training examples.
- Click Generate.
- Evaluate the result in the Generated output pane.
  Click the Raw prompt icon to see the raw prompt. In View raw prompt, you can see the context, the prompt input, and the training examples that are used to obtain the generated output.
  You can also click the Save as example icon to save the prompt input and generated output as a training example.
- If necessary, adjust your prompt to get better results.
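The following sketch pulls these steps together: {{variable}} placeholders in the context and prompt input are resolved with runtime values before the text is sent to the model, and the token parameters bound the length of the output. The resolve_variables helper and the commented generate call are illustrative assumptions, not the product's API:

```python
import re

def resolve_variables(template: str, variables: dict[str, str]) -> str:
    """Replace {{name}} placeholders with runtime values (illustrative helper)."""
    def replace(match: re.Match) -> str:
        return variables[match.group(1)]  # KeyError signals an undefined variable
    return re.sub(r"\{\{(\w+)\}\}", replace, template)

context = "You are an assistant that writes clear, customer-facing text."
prompt_input = "Explain the payment terms of quotation {{quotation_id}} to {{customer}}."

prompt = resolve_variables(
    context + "\n\n" + prompt_input,
    {"quotation_id": "Q-2002", "customer": "Globex Inc"},
)
print(prompt)

# The Parameters pane bounds the generated length in tokens. A hypothetical
# client call that applies the default bounds might look like this:
# result = client.generate(model="granite-13b-chat-v2", prompt=prompt,
#                          min_new_tokens=1, max_new_tokens=50)
```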
Adding training examples
You can add examples to the prompt to improve the precision, quality, and stability of the output generated with your prompt.
Specify one or more pairs of sample input and corresponding output.
- In the Training example pane, click New example.
- Enter input and expected output.
- Click Generate to test your prompt. Check whether the generated output improves; one way to do so is sketched after these steps.
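A lightweight way to check whether the examples help is to compare the generated output with the expected output for a few test inputs, with and without the examples. A minimal sketch follows; the generate stub and the similarity measure are illustrative assumptions that you would replace with real calls to your prompt:

```python
from difflib import SequenceMatcher

def generate(prompt_input: str, use_examples: bool) -> str:
    # Stub: replace with a real call to your prompt, with and without
    # training examples, to compare the two configurations.
    return "This quotation is valid for 30 days from the issue date."

test_cases = [
    ("Quotation includes a 30-day validity clause.",
     "This quotation is valid for 30 days from the issue date."),
]

for prompt_input, expected in test_cases:
    for use_examples in (False, True):
        output = generate(prompt_input, use_examples)
        # Crude string similarity; a higher score suggests output closer to expected.
        score = SequenceMatcher(None, output, expected).ratio()
        print(f"examples={use_examples} similarity={score:.2f}")
```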
Deleting a generative AI skill
To delete a generative AI skill, click the Delete icon in the skill.
- When you delete a generative AI skill, the activities, skills, and assistants that use it might no longer work.
- After deletion, make sure to complete the following actions:
  - Manually replace or remove the generative AI skill in the activities, skills, and assistants that use it.
  - If the generative AI skill was previously shared, share your changes after the deletion so that the deletion is propagated.
  - If the generative AI skill was published to the skill catalog, delete it from the skill catalog: go to Skill studio and delete the skill.
What to do next
When you finish creating your generative AI skill, you can use it in the following ways:
- You can share your changes, create a version of your project, and publish it. For more information, see Sharing changes and Publishing projects.
- You can use the skill in decisions:
- To use it in a decision model, see Building decision models.
- To use it in a ruleflow model, see Building ruleflow models.
- You can use the skill in workflows:
- To use it in a workflow, see Creating workflows.