Anthropic

Anthropic models, such as Claude, prioritize safety and helpfulness, employing techniques such as Constitutional AI to align model behavior with human values. They are designed for reliable and interpretable outputs, focusing on conversational AI and complex reasoning tasks. Anthropic emphasizes transparency and controllability, aiming to build AI systems that are both powerful and beneficial.

Instrumenting the Anthropic Application

To instrument the Anthropic application, complete the following steps:

Make sure that your environment meets all the prerequisites. For more information, see Prerequisites.

  1. To install dependencies for Anthropic, run the following command:

    pip3 install anthropic==0.37.1
    
  2. Export the following credential to access the Anthropic models used in the sample application.

    export ANTHROPIC_API_KEY=<anthropic-api-key>
    

    To create an API key to access the Anthropic API, or to use an existing one, see Anthropic.

  3. Run the following code to generate an Anthropic sample application:

    import os
    import time
    import random
    import anthropic
    from traceloop.sdk import Traceloop
    from traceloop.sdk.decorators import workflow
    
    anthropic_client = anthropic.Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
    
    Traceloop.init(app_name="anthropic_chat_service", disable_batch=True)
    
    @workflow(name="claude_streaming_ask")
    def ask_workflow():
        models = ["claude-3-sonnet-20240229", "claude-3-opus-20240229"] 
        mod = random.choice(models)
        questions = ["What is AIOps?", "What is GitOps?"]
        question = random.choice(questions)
    
        # Claude 3 models are served through the Messages API; the legacy
        # Text Completions API (HUMAN_PROMPT/AI_PROMPT) does not support them.
        stream = anthropic_client.messages.create(
            model=mod,
            max_tokens=1024,
            stream=True,
            messages=[{"role": "user", "content": question}]
        )
    
        for event in stream:
            if event.type == "content_block_delta":
                print(event.delta.text, end="")
    
    
    for i in range(10):
        ask_workflow()
        time.sleep(3)
    
  4. Execute the following command to run the application:

    python3 ./<anthropic-sample-application>.py
    
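Before you run the application, you can fail fast if the API key from step 2 is not exported, instead of waiting for an authentication error from the API. The following is a minimal sketch; the `require_api_key` helper is hypothetical and not part of the Anthropic SDK:

```python
import os

def require_api_key(var="ANTHROPIC_API_KEY"):
    """Return the named API key from the environment, or raise a clear
    error so a missing key is caught before any API call is made."""
    key = os.getenv(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set; export it before running the sample application."
        )
    return key
```

You can call this helper once at startup, before `anthropic.Anthropic(...)` is constructed, so misconfiguration surfaces immediately.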

After you configure monitoring, Instana collects traces and metrics from the sample application.

To view the traces collected from LLM, see Create an application perspective for viewing traces.

To view the metrics collected from LLM, see View metrics.

Adding LLM Security

When Personally Identifiable Information (PII) is exposed to LLMs, it can lead to serious security and privacy risks, such as violations of contractual obligations, increased chances of data leakage, or a data breach. For more information, see LLM security.