DeepSeek
DeepSeek AI pushes the boundaries of AI technology by providing open-weight models that can be integrated into diverse workflows, from conversational AI to software development. You can use Instana to monitor DeepSeek AI by collecting traces, metrics, and logs that provide real-time insights into model performance, efficiency, and cost.
Instrumenting a DeepSeek Application
Before proceeding, make sure that your environment meets all the prerequisites. For more information, see Prerequisites.
DeepSeek models are available across various LLM providers. To instrument the DeepSeek model available in Groq, complete the following steps:
- Run the following command to install the required dependencies:
pip3 install groq==0.18.0
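The sample application in the following steps also imports the Traceloop SDK. If it is not already installed as part of the prerequisites, you might also need to install it; the following command is a suggestion, pin the version that matches your environment:
pip3 install traceloop-sdk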
- Export the following credential to access the DeepSeek model used in the sample application:
export GROQ_API_KEY=<groq-api-key>
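Depending on how your environment is configured as part of the prerequisites, the Traceloop SDK might also need to be told where to send telemetry. The following exports are an illustrative assumption for an OTLP endpoint that accepts insecure HTTP on the default port; replace the placeholder with the endpoint that applies to your setup:
export TRACELOOP_BASE_URL=<otlp-endpoint>:4318
export OTEL_EXPORTER_OTLP_INSECURE=true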
- Save the following code as a DeepSeek sample application:
from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow
from groq import Groq

Traceloop.init(app_name="DeepSeek_App")

client = Groq()

questions = [
    "How does transfer learning improve AI model performance?",
    "What are the challenges in scaling LLMs for enterprise use?",
    "Why is tokenization important in natural language processing?",
    "How does few-shot learning enhance AI adaptability?",
]

@workflow(name="deepseek_workflow")
def process_questions():
    for question in questions:
        response = client.chat.completions.create(
            messages=[{"role": "user", "content": question}],
            model="deepseek-r1-distill-llama-70b",
        )
        result = (
            response.choices[0]
            .message.content.replace("<think>", "")
            .replace("</think>", "")
        )
        print(f"Question: {question}")
        print("=" * 100)
        print(f"Answer: {result}")
        print("=" * 100)

process_questions()
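If you want finer-grained spans inside the workflow trace, the Traceloop SDK also provides a task decorator. The following sketch is illustrative only; the ask_question function and its task name are assumptions, not part of the sample application:

from traceloop.sdk.decorators import task

@task(name="ask_question")
def ask_question(question):
    # Each call is recorded as a task span nested under the
    # deepseek_workflow trace, so individual questions can be compared.
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": question}],
        model="deepseek-r1-distill-llama-70b",
    )
    return response.choices[0].message.content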
- Run the application by using the following command:
python3 ./<deepseek-sample-application>.py
To view the traces collected from the LLM, see Create an application perspective for viewing traces.
To view the metrics collected from the LLM, see View metrics.
Adding LLM Security
Exposing Personally Identifiable Information (PII) to LLMs can lead to serious security and privacy risks, such as violating contractual obligations, increased chances of data leakage, or a data breach. For more information, see LLM security.