Creating generative AI

To build a project, you might need to analyze or generate specific content based on the context of the project itself. You can specify what needs to be analyzed and produced by using the prompt editor. You can then use the content within skills or a broader project.

When you use skills to complete your tasks, some skills automate tasks such as creating a custom quotation, getting it approved, and notifying customers. In this process, you might need to understand or produce contextual information.

For example:

  • Generate more information about the target customer when the quotation is created.
  • Generate an explanation of why specific terms and conditions are added to the quotation.
  • Generate email text that is sent to the customer with the appropriate information and tone.

All these examples are contextual and might vary from one execution to the next. Each skill execution must take this context into account to analyze or generate the information.

When you need to specify and verify the analysis or output of contextual content, you can create prompts to generate outputs by using large language models (LLMs) in the prompt editor. You can also provide example inputs and expected outputs to improve generation performance with few-shot prompting. You can then publish the prompt and use it as a skill.

The following steps outline how to generate context by creating prompts.

Creating a generative AI skill

  1. From the hamburger side menu, click Skill studio.
  2. In the Skill studio page, click Create, and select Project.
  3. In the New project window, name your project, describe your project, and click Create.
  4. Choose the Generative AI skill type.
  5. Name and describe your skill, and click Create.

The prompt editor opens.

Skills can be private or public. You can prevent a skill from appearing in the catalog after the project is published by marking it private in your list of skills. Click the Private skill or Public skill icon next to the skill name in your list to toggle the skill between private and public.

Creating prompts

In the prompt editor, you can create a prompt and generate an output.

  1. From the Model pull-down menu, select a model.

    All models come with different characteristics as described in the following table:

    Table 1. List of supported models
    Model: granite-13b-chat-v2
      Provider: IBM
      Maximum tokens context (input only): 8191
      Maximum tokens context (input + output): 8192
      Description: This model is optimized for dialog use cases and works well with virtual agent and chat applications.
      Supported tasks: classification, extraction, generation, question answering, summarization

    Model: granite-13b-instruct-v2
      Provider: IBM
      Maximum tokens context (input only): 8191
      Maximum tokens context (input + output): 8192
      Description: This model was trained with high-quality finance data and is a top-performing model on finance tasks. Evaluated financial tasks include providing sentiment scores for stock and earnings call transcripts, classifying news headlines, extracting credit risk assessments, summarizing financial long-form text, and answering financial or insurance-related questions.
      Supported tasks: classification, extraction, generation, question answering, summarization

    Model: granite-20b-multilingual
      Provider: IBM
      Maximum tokens context (input only): 4096
      Maximum tokens context (input + output): 8192
      Description: This model is based on the IBM Granite Instruct foundation model and is trained to understand and generate text in English, German, Spanish, French, and Portuguese.
      Supported tasks: classification, extraction, generation, question answering, summarization

    Model: llama-2-13b-chat
      Provider: Meta
      Maximum tokens context (input only): 4095
      Maximum tokens context (input + output): 4096
      Description: The Llama 2 Chat model is provided by Meta on Hugging Face. The fine-tuned model is useful for chat generation. The model is pretrained with publicly available online data and fine-tuned with reinforcement learning from human feedback.
      Supported tasks: classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization

    Model: llama-2-70b-chat
      Provider: Meta
      Maximum tokens context (input only): 4095
      Maximum tokens context (input + output): 4096
      Description: The Llama 2 Chat model is provided by Meta on Hugging Face. The fine-tuned model is useful for chat generation. The model is pretrained with publicly available online data and fine-tuned with reinforcement learning from human feedback.
      Supported tasks: classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization

    Model: llama-3-70b-instruct
      Provider: Meta
      Maximum tokens context (input only): 4096
      Maximum tokens context (input + output): 8192
      Description: Meta Llama 3 foundation models are accessible, open large language models that are built with Meta Llama 3 and provided by Meta on Hugging Face. The Llama 3 foundation models are instruction fine-tuned language models that can support various use cases.
      Supported tasks: classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization

    Model: llama-3-8b-instruct
      Provider: Meta
      Maximum tokens context (input only): 4096
      Maximum tokens context (input + output): 8192
      Description: Meta Llama 3 foundation models are accessible, open large language models that are built with Meta Llama 3 and provided by Meta on Hugging Face. The Llama 3 foundation models are instruction fine-tuned language models that can support various use cases.
      Supported tasks: classification, code, extraction, generation, question answering, retrieval-augmented generation, summarization

    Model: mixtral-8x7b-instruct-v01
      Provider: Mistral AI
      Maximum tokens context (input only): 16384
      Maximum tokens context (input + output): 32768
      Description: The mixtral-8x7b-instruct-v01 foundation model, provided by Mistral AI, is a pretrained generative sparse mixture-of-experts network that groups the model parameters and, for each token, chooses a subset of groups (referred to as experts) to process the token. As a result, each token has access to 47 billion parameters but uses only 13 billion active parameters for inferencing, which reduces costs and latency.
      Supported tasks: classification, code, extraction, generation, retrieval-augmented generation, summarization, translation
  2. Create a prompt.

    1. (Optional) Enter an imperative statement in the Context pane.

    2. (Optional) Enter the text that you want the model to respond to in the Prompt input pane.

    3. (Optional) Add variables.

      Variables are used as inputs in generative AI skills. At least one variable must be defined even if it is not used.

      1. Click New variable in the Variables pane.
      2. Name the variable, and enter its default value. Variable names and values must be strings.
      3. Insert the variable in your prompt in the Prompt input or Context pane.
        • The variable name must be surrounded by double curly brackets, for example: {{topic}}.
        • You can automatically insert variables in the following ways:
          • Click the Add variable icon in the pane, and select a variable from the list.
          • Press Ctrl+Space in the pane, and select a variable.
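The double-curly-bracket placeholder syntax can be sketched with a small substitution routine. This is an illustrative Python sketch, not the product's implementation; the function name and error handling are assumptions:

```python
import re

def fill_prompt(template: str, variables: dict[str, str]) -> str:
    """Replace each {{name}} placeholder with its string value.

    Raises KeyError for an undefined placeholder, mirroring the rule
    that every variable used in the prompt must be defined.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"Undefined variable: {name}")
        return variables[name]

    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

prompt = fill_prompt(
    "Write a short email about {{topic}} for {{customer}}.",
    {"topic": "contract renewal", "customer": "Acme Corp"},
)
# prompt == "Write a short email about contract renewal for Acme Corp."
```

The default values that you enter in the Variables pane play the role of the dictionary here: they fill the placeholders when no runtime input is supplied.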
    4. (Optional) Set tokens in the Parameters pane to constrain the length of generated outputs.

      Tip: A token is a collection of characters that has semantic meaning for a model. The words in your prompt text are converted into tokens before they are processed by the LLM. The raw result from a model is also tokens; the output tokens are converted back into words to be displayed as the result.
      • The minimum and maximum values are set to 1 and 50 by default.
      • The minimum value cannot be 0.
      • The limit of the maximum value varies depending on the model that you selected.
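The budget arithmetic behind these limits can be sketched as follows. This is a hedged illustration, not the product's validation logic; real models count subword tokens, so actual counts differ:

```python
def check_token_budget(prompt_tokens: int, max_new_tokens: int,
                       input_limit: int, total_limit: int) -> bool:
    """Return True if a request fits a model's context limits.

    input_limit: maximum tokens for the input alone
    total_limit: maximum tokens for input plus generated output
    """
    if prompt_tokens > input_limit:
        return False
    return prompt_tokens + max_new_tokens <= total_limit

# Using the granite-13b-chat-v2 limits from Table 1 (8191 input, 8192 total):
print(check_token_budget(8000, 100, 8191, 8192))  # True: 8100 <= 8192
print(check_token_budget(8150, 100, 8191, 8192))  # False: 8250 > 8192
```

This is why a long prompt, many training examples, or a high maximum output setting can each push a request over a model's limit.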
    5. (Optional) Add training examples. See Adding training examples.

  3. Click Generate.

  4. Evaluate the result in the Generated output pane.

    Click the Raw prompt icon to see the raw prompt. In View raw prompt, you can see the context, the prompt input, and the training examples that are used to obtain the generated output.

    You can also click the Save as example icon to save the prompt input and generated output as a training example.

  5. Adjust your prompt to get better results if necessary.

Adding training examples

You can add examples to the prompt to improve the precision, quality, and stability of the output generated with your prompt.

Specify one or more pairs of sample input and corresponding output.

  1. In the Training example pane, click New example.
  2. Enter input and expected output.
  3. Click Generate to test your prompt. Check if the generated output is improved.
Tip: In general, the more input/output pairs you provide, the better your results. However, too many examples can consume token space within both the maximum input tokens allowed by the model and the overall maximum tokens allowed for input plus generated output.
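The way training examples end up in the raw prompt can be sketched as a simple concatenation of context, example pairs, and the new input. The labels and layout below are illustrative assumptions, not the editor's exact raw-prompt format:

```python
def build_raw_prompt(context: str, examples: list[tuple[str, str]],
                     prompt_input: str) -> str:
    """Assemble a few-shot prompt: context, then input/output example
    pairs, then the new input awaiting a generated output."""
    parts = [context]
    for example_in, example_out in examples:
        parts.append(f"Input: {example_in}\nOutput: {example_out}")
    parts.append(f"Input: {prompt_input}\nOutput:")
    return "\n\n".join(parts)

raw = build_raw_prompt(
    "Classify the sentiment of the customer message.",
    [("Thanks, the quote looks great!", "positive"),
     ("These terms are unacceptable.", "negative")],
    "I appreciate the quick turnaround.",
)
```

Because every example is appended to the prompt, each pair you add consumes part of the model's input token budget, which is why the tip above warns against adding too many.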

Deleting a generative AI skill

To delete a generative AI skill, click the Delete icon in the skill.

Important:
  • When you delete a generative AI skill, the activities, skills, and assistants that use it might no longer work.
  • After deletion, make sure to complete the following actions:
    • Manually replace or remove the generative AI skill from activities, skills, and assistants that use it.
    • If the generative AI skill was previously shared, share your changes after the deletion so that the skill deletion is propagated.
    • If the generative AI skill was published to the skill catalog, delete it from the skill catalog: go to Skill studio and delete the skill.

What to do next

When you finish creating your generative AI skill, you can use it in the following ways:


Parent topic:

Building projects