Amazon Bedrock

Amazon Bedrock provides access to a selection of high-performing foundation models (FMs) from leading AI companies through a single API, which enables seamless integration into applications. Amazon Bedrock is serverless and scales automatically, so developers can build generative AI applications without managing infrastructure. Bedrock also supports customization and fine-tuning, so models can be adapted to specific use cases and data.
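For example, you can browse the available models through the Bedrock control-plane API. The following is a minimal sketch that assumes your AWS credentials are already configured and that you use a region where Bedrock is available:

    import boto3

    # The "bedrock" client exposes control-plane operations; the
    # "bedrock-runtime" client that is used later handles inference.
    bedrock = boto3.client(service_name="bedrock", region_name="us-east-1")

    # List the foundation models that are available to the account.
    response = bedrock.list_foundation_models()
    for model in response["modelSummaries"]:
        print(model["providerName"], model["modelId"])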

Instrumenting the Amazon Bedrock Application

To instrument the Amazon Bedrock application, complete the following steps:

Make sure that your environment meets all the prerequisites. For more information, see Prerequisites.

  1. To install the dependencies for Amazon Bedrock and the Traceloop SDK that the sample application uses, run the following command:

    pip3 install boto3==1.35.81 traceloop-sdk
    
  2. Export the following credentials to access the Amazon Bedrock models used in the sample application.

    export AWS_ACCESS_KEY_ID=<access-key-id>
    export AWS_SECRET_ACCESS_KEY=<secret-access-key>
    

    To create an access key to use with the Amazon Bedrock API, or to reuse an existing one, see Amazon IAM. To verify that the exported credentials resolve correctly, see the sketch after this procedure.

  3. Create a Bedrock sample application by using the following code:

    import json
    import logging

    import boto3
    from traceloop.sdk import Traceloop
    from traceloop.sdk.decorators import task, workflow

    # Initialize the Traceloop SDK so that the Bedrock calls are traced.
    Traceloop.init(app_name="bedrock_chat_service")

    # Bedrock runtime client that is used for model inference.
    bedrock_runtime = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")

    logger = logging.getLogger(__name__)

    @task(name="joke_creation")
    def create_joke():
        logger.warning("Calling Amazon Bedrock in create_joke")
        express_prompt = "Tell me a joke"
        # Request body for the Amazon Titan Text Express model.
        body = json.dumps({
            "inputText": express_prompt,
            "textGenerationConfig": {
                "maxTokenCount": 128,
                "stopSequences": [],
                "temperature": 0,
                "topP": 0.9
            }
        })
        response = bedrock_runtime.invoke_model(
            body=body,
            modelId="amazon.titan-text-express-v1",
            accept="application/json",
            contentType="application/json"
        )
        # The response body is a stream; read it and parse the JSON payload.
        response_body = json.loads(response.get('body').read())
        output_text = response_body.get('results')[0].get('outputText')
        # Titan Text often prefixes the completion with a newline; drop the
        # leading line if one is present.
        text = output_text.split('\n', 1)[-1]
        return text.strip()

    @workflow(name="joke_generator")
    def joke_workflow():
        print(create_joke())

    logger.warning("Calling joke_workflow ...")
    joke_workflow()
    
  4. Execute the following command to run the application:

    python3 ./<bedrock-sample-application>.py
    

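If the application fails with an authentication error, you can confirm that the exported credentials resolve correctly before you rerun it. The following is a minimal sketch that assumes the default boto3 credential chain; get_caller_identity is a standard AWS STS call:

    import boto3

    # Resolve credentials through the default chain (environment variables,
    # shared configuration, and so on) and confirm that they are valid.
    sts = boto3.client("sts")
    identity = sts.get_caller_identity()
    print("Authenticated as:", identity["Arn"])
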
After you configure monitoring, Instana collects traces and metrics from the sample application.

To view the traces collected from LLM, see Create an application perspective for viewing traces.

To view the metrics collected from LLM, see View metrics.
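To make the collected traces easier to filter, you can optionally attach association properties to the spans that the workflow emits. The following is a minimal sketch that assumes the sample application from step 3; set_association_properties is part of the Traceloop SDK, and the property names are only illustrative:

    from traceloop.sdk import Traceloop

    # Attach key-value pairs to the spans that are emitted by the next
    # workflow run; the property names here are examples.
    Traceloop.set_association_properties({
        "user_id": "user-123",
        "session_id": "session-456"
    })
    joke_workflow()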

Adding LLM Security

When Personally Identifiable Information (PII) is exposed to LLMs, it can lead to serious security and privacy risks, such as violations of contractual obligations and an increased chance of data leakage or a data breach. For more information, see LLM security.
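One common mitigation is to redact obvious PII from prompts before they are sent to the model. The following is a minimal, illustrative sketch; the regular expressions and the redact_pii helper are hypothetical, and a production setup would rely on a dedicated PII-detection library or the controls that are described in LLM security:

    import re

    # Illustrative patterns only; real PII detection needs a dedicated library.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    }

    def redact_pii(prompt: str) -> str:
        # Replace each matched pattern with a placeholder before the prompt
        # is sent to the LLM.
        for label, pattern in PII_PATTERNS.items():
            prompt = pattern.sub(f"<{label}>", prompt)
        return prompt

    print(redact_pii("Contact john.doe@example.com or 555-123-4567."))
    # Prints: Contact <EMAIL> or <PHONE>.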