OpenAI
OpenAI models, which include GPT-3.5 and GPT-4, are transformer-based neural networks that are trained on vast datasets of text and code. These models can understand and generate natural language for a wide range of tasks. They use deep learning techniques for contextual awareness and pattern recognition to support tasks such as text completion, translation, and conversational AI. You can access these models through an API that gives developers a flexible platform for integrating advanced AI capabilities into various applications.
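As a rough illustration of what the API consumes, a Chat Completions request is a JSON body with a model name and a list of role-tagged messages. The following sketch builds that shape with only the standard library; the model name and question are placeholders, not requirements of the API:

import json

# Minimal sketch of the JSON body that the Chat Completions endpoint
# (POST /v1/chat/completions) expects. The model name and the question
# are placeholder values.
payload = {
    "model": "gpt-4-turbo-preview",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is AIOps?"},
    ],
}

print(json.dumps(payload, indent=2))

The SDK used later in this section builds and sends this body for you; the sketch only shows the structure that travels over the wire.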
Instrumenting the OpenAI Application
To instrument the OpenAI application, complete the following steps:
1. Make sure that your environment meets all the prerequisites. For more information, see Prerequisites.
2. To install the dependencies for OpenAI, run the following command:
   pip3 install openai==1.58.1
3. Export the following credentials to access the OpenAI models that are used in the sample application:
   export OPENAI_API_KEY=<openai-api-key>
   To create an API key to access the OpenAI API, or to use an existing one, see OpenAI.
4. Create an OpenAI sample application with the following code:
   import os
   import random
   import time

   from openai import OpenAI
   from traceloop.sdk import Traceloop
   from traceloop.sdk.decorators import workflow

   client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
   Traceloop.init(app_name="openai_chat_service", disable_batch=True)

   @workflow(name="streaming_ask")
   def ask_workflow():
       models = ["gpt-3.5-turbo", "gpt-4-turbo-preview"]
       mod = random.choice(models)
       questions = ["What is AIOps?", "What is GitOps?"]
       question = random.choice(questions)
       stream = client.chat.completions.create(
           model=mod,
           messages=[{"role": "user", "content": question}],
           stream=True,
       )
       for part in stream:
           print(part.choices[0].delta.content or "", end="")

   for i in range(10):
       ask_workflow()
       time.sleep(3)
5. Execute the following command to run the application:
   python3 ./<openai-sample-application>.py
After you configure monitoring, Instana collects the following traces and metrics from the sample application:
To view the traces collected from the LLM, see Create an application perspective for viewing traces.
To view the metrics collected from the LLM, see View metrics.
Adding LLM Security
When Personally Identifiable Information (PII) is exposed to LLMs, it can lead to serious security and privacy risks, such as violations of contractual obligations and an increased chance of data leakage or a data breach. For more information, see LLM security.
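One common mitigation is to redact PII from prompts before they are sent to the model. The following is a minimal sketch of that idea; the patterns, labels, and the redact_pii helper are illustrative assumptions, not part of the OpenAI SDK or Instana, and real deployments typically use a dedicated PII-detection service rather than a few regular expressions:

import re

# Illustrative patterns only; a production system needs far broader
# coverage. Patterns are applied in order, so the more specific SSN
# pattern runs before the general phone pattern.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace common PII patterns with placeholder tokens before the
    text is sent to an LLM."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 555-123-4567."
print(redact_pii(prompt))
# Contact Jane at <EMAIL> or <PHONE>.

In the sample application above, such a helper would be applied to the question string before it is passed to client.chat.completions.create, so that only the redacted text leaves your environment.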