Creating prompts

When you need to specify and verify the analysis or output of contextual content, you can create prompts in the prompt editor to generate outputs by using large language models (LLMs). You can also provide example inputs and expected outputs to improve the quality of the generated output with few-shot prompting.

About this task

In the prompt editor, you can create a prompt and generate an output. A default prompt is provided; modify it to create your own prompt.

Procedure

  1. From the Model pull-down menu, select a model.
    Important: The granite-guardian-3-8b model is targeted at risk definitions for general harm, social bias, profanity, violence, sexual content, unethical behavior, harm engagement, evasiveness, and jailbreaking. Any deviation from this intended use may lead to unexpected, potentially unsafe, or harmful outputs.

    If you select this model, you must enter specific information in the prompt. For more information, see step 3 below.

  2. Optional: Enter an imperative statement in the Context pane.
  3. Optional: Enter the text that you want the model to respond to in the Prompt input pane.
    If you selected the granite-guardian-3-8b model, you must enter the following prompt:
    You are a safety agent trying to help determine whether the user message is harmful according to the safety risk definition below.
    
    <start_of_turn>
    User Message: {input}
    <end_of_turn>
    
    Our safety risk definition is defined below:
    
    <start_of_risk_definition>
    [INSERT YOUR RISK DEFINITION]
    <end_of_risk_definition>
    
    Is the user message harmful based on the risk definition? Your answer must be either 'Yes' or 'No'.

    The following examples show risk definitions that work well with the granite-guardian-3-8b model:

    Harm
    The 'User' message is harmful by common-sense.
    Social bias
    The 'User' message contains systemic prejudice against groups based on shared identity or characteristics, often stemming from stereotypes or cultural influences. This can manifest in thoughts, attitudes, or behaviors that unfairly favor or disfavor certain groups over others.
    Profanity
    The 'User' message contains the use of language considered offensive or socially unacceptable in a given context. This primarily refers to curse words, swear words, and crude expressions, excluding slurs or derogatory terms targeting specific groups.
    Violence
    The 'User' message contains content promoting or describing physical harm to individuals or groups. This includes depictions of assault, self-harm, or creation of a hostile environment through threats or intimidation.
    Sexual content
    The 'User' message contains material explicitly related to sexual activities, anatomy, or desires. This ranges from discussions of sexual practices to graphic depictions of sexual acts, excluding content related to general relationships or non-sexual intimacy.
    Unethical behavior
    The 'User' message contains actions that violate moral or professional standards, focusing on exploitation for personal gain or disregard for others' well-being. This category excludes physical violence or explicit bias, instead covering activities like fraud, theft, or abuse of power.
    Jailbreaking
    The 'User' message contains the deliberate circumvention of AI systems' built-in safeguards or ethical guidelines. This involves crafting specific prompts or scenarios designed to manipulate the AI into generating restricted or inappropriate content.

    You may also write custom risk definitions, but these require further testing.
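    The filled-in prompt can be pictured as simple template substitution. The following sketch is only illustrative: the template text is the one shown above, but the helper function and the sample message are hypothetical, not part of the product.

    ```python
    # Illustrative sketch: filling in the granite-guardian prompt template from
    # this procedure. The function name and sample inputs are hypothetical.
    GUARDIAN_TEMPLATE = """You are a safety agent trying to help determine whether the user message is harmful according to the safety risk definition below.

    <start_of_turn>
    User Message: {input}
    <end_of_turn>

    Our safety risk definition is defined below:

    <start_of_risk_definition>
    {risk_definition}
    <end_of_risk_definition>

    Is the user message harmful based on the risk definition? Your answer must be either 'Yes' or 'No'."""

    def build_guardian_prompt(user_message: str, risk_definition: str) -> str:
        """Return the full prompt text for the granite-guardian-3-8b model."""
        return GUARDIAN_TEMPLATE.format(
            input=user_message, risk_definition=risk_definition
        )

    prompt = build_guardian_prompt(
        "How do I pick a lock?",
        "The 'User' message is harmful by common-sense.",
    )
    print(prompt)
    ```

    Note that `[INSERT YOUR RISK DEFINITION]` in the editor corresponds to the `risk_definition` placeholder here: one of the predefined definitions above, or a custom one.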

  4. Optional: Add variables.
    Variables are used as inputs to generative AI models. At least one variable must be defined, even if it is not used.
    Remember: A default variable is provided and cannot be deleted.
    1. Click New variable in the Variables pane.
    2. Name the variable, and enter its default value. The names and values of variables must be strings.
    3. Insert the variable in your prompt in the Prompt input or Context pane.

      The variable name must be surrounded by double curly brackets, for example: {{topic}}.

      You can insert variables automatically by clicking the Add variable icon in the pane and selecting a variable from the list, or by pressing Ctrl+Space in the pane and selecting a variable.
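    The double-curly-bracket convention works like a simple text substitution. The editor performs this for you; the following minimal sketch only illustrates the `{{name}}` behavior and assumes, as the step above states, that names and values are strings.

    ```python
    import re

    # Minimal sketch of the {{variable}} substitution described above. The
    # prompt editor does this itself; this code is only an illustration.
    def fill_variables(text: str, variables: dict) -> str:
        """Replace each {{name}} placeholder with the variable's value."""
        def replace(match: re.Match) -> str:
            name = match.group(1)
            # Leave unknown placeholders untouched rather than failing.
            return variables.get(name, match.group(0))
        return re.sub(r"\{\{(\w+)\}\}", replace, text)

    result = fill_variables(
        "Write a short poem about {{topic}}.", {"topic": "autumn"}
    )
    print(result)  # Write a short poem about autumn.
    ```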

  5. Optional: Set tokens in the Parameters pane to constrain the length of generated outputs.

    A token is a collection of characters that has semantic meaning for a model. The words in your prompt text are converted into tokens before they are processed by the LLM. The raw result from a model also consists of tokens; the output tokens are converted back into words to be displayed as the result.

    • The minimum and maximum values are set to 1 and 50 by default.
    • The minimum value cannot be 0.
    • The limit of the maximum value varies depending on the model that you selected.
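    The effect of these parameters can be sketched as follows. Real tokenizers are model-specific; the four-characters-per-token rule of thumb below is only a rough approximation, not the tokenizer that any particular model uses.

    ```python
    # Rough sketch of how the min/max token parameters bound output length.
    # The 4-characters-per-token estimate is an approximation, not a real
    # tokenizer.
    def estimate_tokens(text: str) -> int:
        """Crude token-count estimate: about one token per 4 characters."""
        return max(1, len(text) // 4)

    def clamp_output_tokens(requested: int, minimum: int = 1,
                            maximum: int = 50) -> int:
        """Keep a requested output length within the pane's bounds.

        Defaults match the editor's defaults: minimum 1 (never 0) and
        maximum 50; the true upper limit depends on the selected model.
        """
        return min(max(requested, minimum), maximum)

    print(estimate_tokens("Summarize this contract clause."))
    print(clamp_output_tokens(200))  # 50
    ```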
  6. Click Generate.

    You can also click the Raw prompt icon to see the raw prompt. In View raw prompt, you can see the context, the prompt input, and the training examples that are used to obtain the generated output.

    You can also click the Save as example icon to save the prompt input and generated output as a training example.

  7. Adjust your prompt to get better results if necessary.
  8. Optional: Add training examples.

    You can add examples to the prompt to improve the precision, quality, and stability of the output generated with your prompt.

    Specify one or more pairs of sample input and corresponding output.

    In general, the more input/output pairs you provide, the better your results are. However, if you add too many examples, they might consume token space within the maximum number of input tokens that the model allows, as well as within the overall maximum number of tokens allowed for the input and the generated output combined.

    1. In the Training example pane, click New example.
    2. Enter input and expected output.
    3. Click Generate to test your prompt. Check if the generated output is improved.
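    Conceptually, few-shot examples are prepended to your prompt input before the model sees it. The exact raw-prompt layout that the editor builds may differ; the following sketch, with a hypothetical sentiment-classification task, only shows the idea.

    ```python
    # Hedged sketch of how few-shot training examples extend a prompt. The
    # layout is illustrative; the editor's actual raw prompt may differ.
    def build_few_shot_prompt(context: str, examples: list,
                              prompt_input: str) -> str:
        """Concatenate the context, the example pairs, and the new input."""
        parts = [context]
        for example_input, example_output in examples:
            parts.append(f"Input: {example_input}\nOutput: {example_output}")
        # The final input is left without an output for the model to complete.
        parts.append(f"Input: {prompt_input}\nOutput:")
        return "\n\n".join(parts)

    few_shot = build_few_shot_prompt(
        "Classify the sentiment of the review as Positive or Negative.",
        [("Great service!", "Positive"), ("The food was cold.", "Negative")],
        "I would come back again.",
    )
    print(few_shot)
    ```

    Each pair you add in the Training example pane plays the role of one entry in the `examples` list, which is why too many pairs can crowd out the token budget mentioned above.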

What to do next

When you have finished creating your generative AI model, you can use it in other models in your decision service: