When organizations introduce AI agents, the expectation goes beyond generating responses. Organizations need agents that interpret data consistently, respect established business guidance and communicate insights in a form that supports informed decisions. This tutorial explores a structured approach to achieving that outcome by combining deterministic data processing and few-shot prompting within IBM watsonx Orchestrate®.
In this tutorial, you will learn how to build a Sales Intelligence Orchestrator with IBM watsonx Orchestrate and its Agent Development Kit (ADK). The agent uses a LangChain-powered prompt compilation tool to apply few-shot prompting and deliver insights grounded in enterprise knowledge documents. It combines deterministic Python logic with few-shot prompting to guide LLM reasoning, a lightweight and highly controllable approach to prompt engineering.
The agent works with a synthetic sales dataset covering four regions across five weeks. It deterministically computes key metrics in Python such as revenue attainment, conversion rate and pipeline coverage. It then packages these metrics into a few‑shot prompt and passes them to the large language model (LLM), which returns insights with recommended actions.
You will build the entire pipeline from scratch, going from a raw dataset to a fully deployed conversational AI agent, using Python, the watsonx Orchestrate CLI and PowerShell. All required files for this tutorial, including the sales dataset, knowledge documents and pipeline code, are available in the IBM GitHub repository. You can download them directly from there before starting.
This tutorial requires:
An IBM Cloud account with a provisioned watsonx Orchestrate instance
Python installed on your machine
The tutorial files (sales dataset, knowledge documents and pipeline code) from the IBM GitHub repository
Sign in to watsonx Orchestrate through IBM Cloud and open the watsonx Orchestrate UI. From the profile menu, open Settings and click API details to create a new API key, then copy and save it. This is the only credential you need for this tutorial: watsonx Orchestrate manages authentication centrally, removing the need for any external credential such as an OpenAI API key.
In this step, you will create a local Python environment to run the ADK CLI, import tools and create agents. Navigate to the directory where you want to build your project and create a virtual environment:
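A minimal sketch of the command (assuming `python` is on your PATH; some systems name it `python3`):

```shell
python -m venv .venv
```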
This command creates an isolated Python environment in a .venv folder. Activate it:
On Windows:
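In PowerShell, run the activation script inside the environment folder:

```shell
.\.venv\Scripts\Activate.ps1
```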
On macOS and Linux:
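Source the activation script:

```shell
source .venv/bin/activate
```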
When the virtual environment is active, install the watsonx Orchestrate ADK by running this command:
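Assuming the ADK's PyPI package name (verify against the official installation documentation for your version):

```shell
pip install ibm-watsonx-orchestrate
```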
The full ADK installation steps are available in the official documentation.
Add your watsonx Orchestrate environment to the ADK by running the following command. Replace <YOUR_WATSONX_ORCHESTRATE_URL> with the instance URL found in your API details settings:
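A sketch of the command, assuming the ADK's `--name` and `--url` options (the environment name is your choice; check `orchestrate env add --help` for your version):

```shell
orchestrate env add --name my-wxo-env --url <YOUR_WATSONX_ORCHESTRATE_URL>
```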
Then, activate the environment:
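Using the same environment name you chose when adding it (here, the assumed `my-wxo-env`):

```shell
orchestrate env activate my-wxo-env
```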
When prompted, enter your watsonx Orchestrate API key created in Step 1.
Note: If you want to run everything locally with the developer edition instead of a cloud instance, you can activate the built-in local environment. This switches the ADK to the default local Orchestrate environment, which is useful for testing without a cloud connection.
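Assuming the ADK names its built-in environment `local` (verify in the ADK documentation):

```shell
orchestrate env activate local
```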
You will build the following project structure in this tutorial:
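A possible layout (the project root name is illustrative; the folder and file names come from the steps that follow):

```
sales_intelligence_orchestrator/
├── analysis_engine/
│   ├── sales_analysis_pipeline.py
│   └── sales_metrics.csv
├── business_context/
│   ├── sales_performance_guide.docx
│   └── sales_action_guidelines.docx
├── orchestrator/
│   └── sales_intelligence_orchestrator_agent.yaml
├── .env
└── requirements.txt
```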
Now, create the main project folder and all required subdirectories with the following commands:
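These commands work in both PowerShell and a POSIX shell (the project root name is illustrative):

```shell
mkdir sales_intelligence_orchestrator
cd sales_intelligence_orchestrator
mkdir analysis_engine
mkdir business_context
mkdir orchestrator
```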
Each folder serves a specific purpose:
analysis_engine contains the Python file and the sales metrics CSV. All data science logic, metric computation, few-shot prompt compilation and the tool definition live together in sales_analysis_pipeline.py, keeping the structure simple and self-contained.
business_context contains the enterprise knowledge documents that define performance standards and escalation guidelines for the agent. These files are uploaded directly through the watsonx Orchestrate UI in a later step.
orchestrator contains the YAML agent definition that configures the agent’s instructions, tool bindings, knowledge documents, guardrails and response format.
Create an .env file at the root of your project folder with the following commands. Replace the placeholder values with your actual credentials:
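Assuming the variable names used by the ADK (confirm them against the ADK documentation for your setup), the .env file might look like:

```
WO_DEVELOPER_EDITION_SOURCE=orchestrate
WO_INSTANCE=<YOUR_WATSONX_ORCHESTRATE_URL>
WO_API_KEY=<YOUR_API_KEY>
```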
In a later step, the watsonx Orchestrate server start command loads this file through its -e .env flag. The server requires it to pick up the correct instance and API key at startup.
Install the required Python packages with the following commands:
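At minimum, the pipeline needs LangChain for prompt compilation (add any further packages your own version of the pipeline uses):

```shell
pip install langchain
```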
Create a requirements.txt file at the project root:
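A minimal requirements.txt, matching the LangChain dependency above (pin versions as needed for your environment):

```
langchain
```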
The pip install command installs the packages into your local virtual environment, so the pipeline code runs correctly during development. The requirements.txt file is used by the ADK to install the same dependencies into the tool’s execution environment when it is imported into watsonx Orchestrate.
LangChain is used here exclusively as a prompt compiler through its PromptTemplate class, not as an LLM chain or agent framework. There is no LLMChain, ChatOpenAI or ChatPromptTemplate. watsonx Orchestrate handles all language model execution.
Create a file named sales_metrics.csv inside the analysis_engine folder. Keeping the dataset next to the pipeline code ensures that the pipeline can find it through a package-relative path without requiring any special configuration. The agent's reasoning is grounded in two knowledge documents that outline performance criteria and escalation procedures.
sales_performance_guide.docx defines how sales metrics are interpreted for leadership reporting. sales_action_guidelines.docx defines the standard actions, escalation paths and review triggers that follow performance insights. It specifies severity levels (low, medium, high) and maps them to operational responses such as monitoring, regional manager notification or leadership escalation. Place both files in the business_context folder.
These documents are then uploaded directly through the watsonx Orchestrate UI. Open the watsonx Orchestrate UI, navigate to the Manage agents section from the left sidebar and scroll down to the Knowledge section. Upload both documents there; once uploaded, they are referenced in the agent YAML under the knowledge field.
Create a file named sales_analysis_pipeline.py inside the analysis_engine folder. This file contains the metric computation, forecasting logic, few-shot prompt compilation and the @tool decorated function that provides everything to the watsonx Orchestrate agent.
The file is organized into five sections:
Data loading: reads sales_metrics.csv through a path relative to the file itself, making sure it works correctly both locally and inside the watsonx Orchestrate cloud sandbox.
Few-shot prompt: returns a list of examples embedded directly in the file. Each example pairs a structured input block containing region metrics with a model output that demonstrates the correct interpretation tone, severity assignment and action orientation. The examples cover the full range of performance scenarios (stable, underperforming, strong and mixed signals), so the agent reasons consistently regardless of what the data shows.
This approach mirrors what LangChain’s FewShotPromptTemplate and example_prompt pattern achieve, but implemented directly in Python for simplicity and ADK compatibility. This design forms the core of the prompt engineering strategy: rather than relying on zero‑shot reasoning or fine‑tuning, few‑shot examples guide the agent to interpret and communicate each type of result.
Metric computation: performs all deterministic calculations including attainment percentage, conversion rate, pipeline coverage, trend direction and a linear revenue forecast, returning them as a structured JSON-compatible dictionary.
Prompt compilation: assembles all region metric blocks and appends them as a new input after the few-shot examples, using LangChain’s PromptTemplate with input_variables. It then produces the final prompt string ready for the agent to interpret.
Tool definition: exposes the compiled prompt to the watsonx Orchestrate environment through the @tool decorator, making the function discoverable and callable by the agent at run time.
Here is the code to be copied into the file sales_analysis_pipeline.py:
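The full file ships in the IBM GitHub repository. The following is a condensed, hypothetical sketch of its five sections, not the repository code: the CSV column names (region, week, revenue, target, leads, conversions, pipeline_value), the few-shot wording, the trend/forecast heuristics and the import fallbacks are all illustrative assumptions.

```python
import csv
from pathlib import Path

try:  # LangChain is used purely as a prompt compiler (PromptTemplate only)
    from langchain_core.prompts import PromptTemplate
except ImportError:  # minimal stand-in so this sketch runs without LangChain
    class PromptTemplate:
        def __init__(self, input_variables, template):
            self.template = template
        def format(self, **kwargs):
            return self.template.format(**kwargs)

try:  # the ADK decorator that exposes the function to watsonx Orchestrate
    from ibm_watsonx_orchestrate.agent_builder.tools import tool
except ImportError:  # no-op fallback for running the module outside the ADK
    def tool(fn):
        return fn

# --- 1. Data loading: resolve the CSV relative to this file ---
DATA_PATH = Path(__file__).parent / "sales_metrics.csv"

def load_sales_data(path=DATA_PATH):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

# --- 2. Few-shot examples: input/output pairs that fix tone and severity ---
FEW_SHOT_EXAMPLES = [
    {
        "input": "Region: EMEA | Attainment: 96.0% | Conversion: 12.5% | Coverage: 3.1x | Trend: improving",
        "output": "EMEA is on track (severity: low). Maintain current cadence and monitor weekly.",
    },
    {
        "input": "Region: APAC | Attainment: 71.0% | Conversion: 8.2% | Coverage: 1.4x | Trend: declining",
        "output": "APAC is underperforming (severity: high). Escalate to leadership and review pipeline generation.",
    },
]

# --- 3. Metric computation: deterministic, no LLM involved ---
def compute_region_metrics(rows):
    revenue = [float(r["revenue"]) for r in rows]
    target = sum(float(r["target"]) for r in rows)
    leads = sum(int(r["leads"]) for r in rows)
    conversions = sum(int(r["conversions"]) for r in rows)
    pipeline = sum(float(r["pipeline_value"]) for r in rows)
    deltas = [b - a for a, b in zip(revenue, revenue[1:])]
    avg_delta = sum(deltas) / len(deltas) if deltas else 0.0
    return {
        "attainment_pct": round(100 * sum(revenue) / target, 1),
        "conversion_pct": round(100 * conversions / leads, 1),
        "pipeline_coverage": round(pipeline / target, 2),
        "trend": "improving" if avg_delta > 0 else "declining" if avg_delta < 0 else "flat",
        "revenue_forecast": round(revenue[-1] + avg_delta, 2),  # simple linear projection
    }

# --- 4. Prompt compilation: few-shot examples + fresh metrics via PromptTemplate ---
TEMPLATE = PromptTemplate(
    input_variables=["examples", "metrics"],
    template=(
        "You are a sales intelligence analyst. Follow the style of these examples:\n\n"
        "{examples}\n\nNow analyze:\n{metrics}\nOutput:"
    ),
)

def compile_prompt(all_metrics):
    examples = "\n\n".join(
        f"Input: {e['input']}\nOutput: {e['output']}" for e in FEW_SHOT_EXAMPLES
    )
    metrics = "\n".join(
        f"Region: {region} | Attainment: {m['attainment_pct']}% | "
        f"Conversion: {m['conversion_pct']}% | Coverage: {m['pipeline_coverage']}x | "
        f"Trend: {m['trend']}"
        for region, m in all_metrics.items()
    )
    return TEMPLATE.format(examples=examples, metrics=metrics)

# --- 5. Tool definition: the agent calls this before every analytical answer ---
@tool
def compile_sales_analysis_prompt() -> str:
    """Compute sales metrics for every region and return a compiled few-shot prompt."""
    by_region = {}
    for row in load_sales_data():
        by_region.setdefault(row["region"], []).append(row)
    all_metrics = {region: compute_region_metrics(r) for region, r in by_region.items()}
    return compile_prompt(all_metrics)
```

The deterministic sections (1 and 3) never touch the LLM; only the compiled string from section 4 reaches the model, which keeps the numeric results reproducible.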
Create a file named sales_intelligence_orchestrator_agent.yaml inside the orchestrator folder. This YAML file is the agent’s complete definition as it configures the model, instructions, tool bindings, knowledge documents, guardrails and structured response format.
Here is the content to be copied into the YAML file:
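The complete definition is in the repository files. A hedged sketch of its likely shape follows, based on the ADK's native-agent schema; the model identifier, knowledge base name and instruction wording are placeholders to adapt to your instance:

```yaml
spec_version: v1
kind: native
name: Sales_Intelligence_Orchestrator
description: >
  Analyzes regional sales performance and returns executive-ready insights
  with severity levels and recommended actions.
llm: watsonx/<MODEL_ID_AVAILABLE_IN_YOUR_INSTANCE>
style: default
instructions: >
  Always call compile_sales_analysis_prompt before answering any analytical
  question. Interpret the compiled metrics using the uploaded knowledge
  documents, assign a severity level (low, medium or high) and recommend
  actions that follow the escalation guidelines.
tools:
  - compile_sales_analysis_prompt
knowledge_base:
  - <YOUR_KNOWLEDGE_BASE_NAME>
```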
Start the local watsonx Orchestrate server with the .env file created in Step 6. Keep this running in a separate command window throughout the remaining steps.
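Run the command with the -e flag pointing at the .env file:

```shell
orchestrate server start -e .env
```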
Import sales_analysis_pipeline.py so that watsonx Orchestrate registers it as an executable capability. Run this command from the root of your project directory:
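A sketch of the import command, assuming the ADK's python tool kind and a requirements flag (check `orchestrate tools import --help` for your version):

```shell
orchestrate tools import -k python -f analysis_engine/sales_analysis_pipeline.py -r requirements.txt
```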
Then, import the agent YAML:
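Pointing the ADK at the YAML definition created earlier:

```shell
orchestrate agents import -f orchestrator/sales_intelligence_orchestrator_agent.yaml
```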
Run the following command to open the watsonx Orchestrate chat UI in your browser:
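Assuming the ADK's chat subcommand:

```shell
orchestrate chat start
```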
Then, in the chat UI, select the Sales_Intelligence_Orchestrator agent from the agent selector.
You can ask questions in natural language to verify that the agent is correctly invoking the tool, computing metrics and returning grounded insights. The agent calls compile_sales_analysis_prompt before every analytical response, calculates the metrics and then returns an executive-ready insight based on the business context documents. The responses include severity levels and specific recommended actions, in accordance with the sales_action_guidelines document.
Every user input initiates the full tool-calling flow: the agent breaks down the question, identifies the appropriate parameters and calls the tool, then receives the compiled prompt as a JSON snippet.
Here are sample questions that cover the full range of the agent’s workflow and capabilities:
Give me a sales performance summary for all regions.
How is APAC performing this quarter?
Which region has the highest conversion rate?
How can APAC improve its conversion rate?
Summarize pipeline health across all regions.
Rank all regions by revenue attainment.
In this tutorial, you built a fully deployed sales orchestrator agent with watsonx Orchestrate and its ADK. Compared to approaches that rely on retrieval-augmented generation (RAG), embeddings, fine-tuning OpenAI models or building custom chatbots, this pattern is leaner, easier to maintain and more predictable.
The few-shot examples act as a stable reasoning contract between the dataset and the large language model, one that you can iterate and refine without touching the underlying model or retraining anything.
This approach can be extended to other business domains such as pipeline risk scoring, customer health monitoring and operations dashboards. It applies anywhere structured data needs to be transformed into reliable, leadership‑ready insights through a conversational AI agent interface.