Few-shot prompting with LangChain

Introduction

When organizations introduce AI agents, the expectation goes beyond generating responses. Organizations need agents that interpret data consistently, respect established business guidance and communicate insights in a form that supports informed decisions. This tutorial explores a structured approach to achieving that outcome by combining deterministic data processing and few-shot prompting within IBM watsonx Orchestrate®.

Use case: Sales Intelligence Orchestrator

In this tutorial, you will learn how to build a Sales Intelligence Orchestrator with IBM watsonx Orchestrate and its Agent Development Kit (ADK). The agent uses a LangChain-powered prompt compilation tool to apply few-shot prompting and deliver insights grounded in enterprise knowledge documents. It combines deterministic Python logic with few-shot prompting to guide LLM reasoning, a lightweight and highly controllable approach to prompt engineering.

The agent works with a synthetic sales dataset covering four regions across five weeks. It deterministically computes key metrics in Python such as revenue attainment, conversion rate and pipeline coverage. It then packages these metrics into a few‑shot prompt and passes them to the large language model (LLM), which returns insights with recommended actions.
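The arithmetic behind these metrics is simple. As a quick illustration, this standalone sketch (with made-up numbers, not drawn from the tutorial dataset) shows how attainment, conversion rate and pipeline coverage are derived:

```python
# Illustrative only: the deterministic metrics are plain ratios.
revenue, target, pipeline = 460_000, 500_000, 1_350_000
deals_closed, opportunities = 23, 100

attainment_pct = round(revenue / target * 100, 1)        # revenue vs target
conversion_rate_pct = round(deals_closed / opportunities * 100, 1)
pipeline_coverage = round(pipeline / target, 2)          # pipeline vs target

print(attainment_pct, conversion_rate_pct, pipeline_coverage)  # 92.0 23.0 2.7
```

Computing these values in Python, rather than asking the LLM to do arithmetic, is what keeps the numbers deterministic; the model only interprets them.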

You will build the entire pipeline from scratch, going from a raw dataset to a fully deployed conversational AI agent. The process uses Python and the watsonx Orchestrate CLI, with all build and deployment commands run in PowerShell. All required files for this tutorial, including the sales dataset, knowledge documents and pipeline code, are available in the IBM GitHub repository. You can download them directly from there before starting.

Prerequisites

This tutorial requires:

  • Python 3.11 or later installed on your system
  • A watsonx Orchestrate account. A free 30-day trial account is sufficient. You can create one through IBM Cloud®
  • A watsonx Orchestrate API key from the Orchestrate user interface (UI)
  • The watsonx Orchestrate ADK installed on your system

Steps

Step 1: Sign in to watsonx Orchestrate

Sign in to watsonx Orchestrate through IBM Cloud and open the watsonx Orchestrate UI. From the profile menu, open Settings, click API details, create a new API key, copy it and save it somewhere safe. This is the only credential you need for this tutorial: watsonx Orchestrate manages authentication centrally, so no external credential such as an OpenAI API key is required.


Step 2: Set up a local development environment

In this step, you will create a local Python environment to run the ADK CLI, import tools and create agents. Navigate to the directory where you want to build your project and create a virtual environment:

python -m venv .venv

This step creates an isolated Python environment in a .venv folder. Activate it:

On Windows:

.\.venv\Scripts\activate

On macOS and Linux:

source ./.venv/bin/activate

Step 3: Install watsonx Orchestrate ADK

When the virtual environment is active, install the watsonx Orchestrate ADK by running this command.

pip install --upgrade ibm-watsonx-orchestrate

The full ADK installation steps are available in the official documentation.

Step 4: Configure the watsonx Orchestrate environment

Add your watsonx Orchestrate environment to the ADK by running the following command. Replace <YOUR_WATSONX_ORCHESTRATE_URL> with the instance URL found in your API details settings:

orchestrate env add `
  -n sales-intel-env `
  -u <YOUR_WATSONX_ORCHESTRATE_URL>

Then, activate the environment:

orchestrate env activate sales-intel-env

When prompted, enter your watsonx Orchestrate API key created in Step 1.

Note: If you want to run everything locally with the developer edition instead of a cloud instance, you can activate the built-in local environment. This switches the ADK to the default local Orchestrate environment, which is useful for testing without a cloud connection.

orchestrate env activate local

Step 5: Set up the project folder structure

You will build the following project structure in this tutorial:

sales_intelligence_orchestrator/
.env
requirements.txt
analysis_engine/
    sales_analysis_pipeline.py
    sales_metrics.csv
business_context/
    sales_action_guidelines.docx
    sales_performance_guide.docx
orchestrator/
    sales_intelligence_orchestrator_agent.yaml

Now, create the main project folder and all required subdirectories with the following commands:

mkdir sales_intelligence_orchestrator
cd sales_intelligence_orchestrator
mkdir orchestrator, analysis_engine, business_context

Each folder serves a specific purpose:

analysis_engine contains the Python file and the sales metrics CSV. All data science logic, metric computation, few-shot prompt compilation and the tool definition live together in sales_analysis_pipeline.py, keeping the structure simple and self-contained.

business_context contains the enterprise knowledge documents that define performance standards and escalation guidelines for the agent. These files are uploaded directly through the watsonx Orchestrate UI in a later step.

orchestrator contains the YAML agent definition that configures the agent’s instructions, tool bindings, knowledge documents, guardrails and response format.

Step 6: Create the .env file

Create a .env file at the root of your project folder with the following content. Replace the placeholder values with your actual credentials:

WO_DEVELOPER_EDITION_SOURCE=orchestrate
WO_ENV=sales-intel-env
WO_INSTANCE=https://api.dl.watson-orchestrate.ibm.com/instances/<INSTANCE_ID>
WO_API_KEY=<YOUR_API_KEY>

In a later step, the watsonx Orchestrate server start command loads this file through the -e .env flag. The server requires this file to load the correct instance URL and API key at startup.

Step 7: Install project dependencies

Install the required Python packages with the following command:

python -m pip install langchain-core pandas

Create a requirements.txt file at the project root:

langchain-core
pandas

The pip install command installs the packages into your local virtual environment, so the pipeline code runs correctly during development. The requirements.txt file is used by the ADK to install the same dependencies into the tool’s execution environment when it is imported into watsonx Orchestrate.

LangChain is used here exclusively as a prompt compiler through its PromptTemplate class, not as an LLM chain or agent framework. There is no LLMChain, ChatOpenAI or ChatPromptTemplate. watsonx Orchestrate handles all language model execution.

Step 8: Prepare the sales data and add the business context documents

Create a file named sales_metrics.csv inside the analysis_engine folder. Placing it there lets the pipeline locate it through a package-relative path without any special configuration. The agent's reasoning is grounded by two knowledge documents that outline performance criteria and escalation procedures.
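The actual dataset ships in the GitHub repository. If you want to experiment before downloading it, a synthetic stand-in with the required columns can be generated like this (all row values below are invented for illustration; the real dataset covers four regions across five weeks):

```python
from pathlib import Path
import pandas as pd

# Hypothetical sample rows with the columns the pipeline expects.
sample = pd.DataFrame({
    "week": [1, 2, 1, 2],
    "region": ["Europe", "Europe", "APAC", "APAC"],
    "revenue_usd": [92_000, 95_000, 78_000, 81_000],
    "target_usd": [100_000, 100_000, 100_000, 100_000],
    "pipeline_usd": [270_000, 265_000, 220_000, 225_000],
    "conversion_rate": [0.23, 0.24, 0.18, 0.19],
    "deals_closed": [12, 13, 9, 10],
})

out_dir = Path("analysis_engine")
out_dir.mkdir(exist_ok=True)
sample.to_csv(out_dir / "sales_metrics.csv", index=False)
```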

sales_performance_guide.docx defines how sales metrics are interpreted for leadership reporting. sales_action_guidelines.docx defines the standard actions, escalation paths and review triggers that follow performance insights. It specifies severity levels (low, medium, high) and maps them to operational responses such as monitoring, regional manager notification or leadership escalation. Place both files in the business_context folder.

Upload these documents through the watsonx Orchestrate UI: open the UI, navigate to the Manage agents section and open the Knowledge section from the left sidebar. Upload both documents there. Once uploaded, they are referenced in the agent YAML under the knowledge field.


Step 9: Build the sales analysis pipeline and tool

Create a file named sales_analysis_pipeline.py inside the analysis_engine folder. This file contains the metric computation, forecasting logic, few-shot prompt compilation and the @tool decorated function that provides everything to the watsonx Orchestrate agent.

The file is organized into five sections:

Data loading: reads sales_metrics.csv through a path relative to the file itself, making sure it works correctly both locally and inside the watsonx Orchestrate cloud sandbox.

Few-shot prompt: returns a list of examples embedded directly in the file. Each example pairs a structured input block containing region metrics with a model output that demonstrates the correct interpretation tone, severity assignment and action orientation. The examples cover the full range of performance scenarios (stable, underperforming, strong and mixed signals), so the agent reasons consistently regardless of what the data shows.

This approach mirrors what LangChain’s FewShotPromptTemplate and example_prompt pattern achieve, but implemented directly in Python for simplicity and ADK compatibility. This design forms the core of the prompt engineering strategy: rather than relying on zero‑shot reasoning or fine‑tuning, few‑shot examples guide the agent to interpret and communicate each type of result.

Metric computation: performs all deterministic calculations including attainment percentage, conversion rate, pipeline coverage, trend direction and a linear revenue forecast, returning them as a structured JSON-compatible dictionary.

Prompt compilation: assembles all region metric blocks and appends them as a new input after the few-shot examples, using LangChain's PromptTemplate and its input_variables. It then produces the final prompt string ready for the agent to interpret.

Tool definition: exposes the compiled prompt to the watsonx Orchestrate environment through the @tool decorator, making the function discoverable and callable by the agent at run time.

Here is the code to be copied into the file sales_analysis_pipeline.py:

import pandas as pd
from typing import List, Dict, Optional
from pathlib import Path
from langchain_core.prompts import PromptTemplate
from ibm_watsonx_orchestrate.agent_builder.tools import tool
BASE_DIR = Path(__file__).parent
DATASET_PATH = BASE_DIR / "sales_metrics.csv"
DEFAULT_WEEKS_REMAINING = 7

# Data loading
def load_dataset() -> pd.DataFrame:
    """
    Load sales metrics from CSV packaged alongside this file.
    Uses __file__-relative path so it works in the ADK cloud sandbox.
    """
    if not DATASET_PATH.exists():
        raise FileNotFoundError(f"Required dataset not found: {DATASET_PATH}")
    df = pd.read_csv(DATASET_PATH)
    required_columns = [
        "week",
        "region",
        "revenue_usd",
        "target_usd",
        "pipeline_usd",
        "conversion_rate",
        "deals_closed",
    ]
    missing = [c for c in required_columns if c not in df.columns]
    if missing:
        raise ValueError(f"Dataset missing required columns: {missing}")
    return df

# Few-shot prompt
def load_few_shot_prompt() -> str:
    """
    Returns the embedded few-shot examples used to guide agent reasoning.
    Each example pairs a structured metrics input with a model output that
    demonstrates correct interpretation tone, severity assignment, and
    action orientation. This prompt engineering approach produces consistent,
    policy-aligned responses across all user inputs without requiring fine-tuning or embeddings.
    """
    return """You are the Sales Intelligence Orchestrator.

Your responsibility is to interpret pre-computed sales metrics and generate 
executive-ready insights that support operational decision-making.

You MUST follow these constraints:

- You MUST NOT perform numerical calculations
- You MUST NOT invent benchmarks, thresholds, or explanations 
- You MUST rely only on provided computed metrics and contextual guidance
- You MUST focus on interpretation, risk assessment, and action orientation

Guiding principles:
- Evaluate performance holistically, not by a single metric
- Distinguish short-term fluctuations from sustained issues
- Escalate only when supported by multiple reinforcing indicators
- Prefer measured responses over aggressive escalation when signals are mixed
- Use clear, concise executive language suitable for leadership audiences

Avoid:
- Repeating raw metrics verbatim
- Speculative causes (e.g., market conditions, customer behavior)
- Overstating risk when recovery signals exist
- Suggesting actions not justified by the data

Below are examples of correct reasoning, tone, and decision posture.
---
Example 1 — Stable Performance with Minor Variability

Input:
Region: Europe
Periods analyzed: 5 weeks
Computed metrics:
- Average revenue vs target: 92%
- Conversion rate: 23%
- Pipeline coverage: 2.7x target
- Trend: Flat revenue with mild week-to-week volatility

Output:
Europe is slightly below target but operating within acceptable performance bounds.
Conversion efficiency and pipeline coverage remain sufficient to support stable 
execution, and short-term volatility does not indicate structural risk.
No escalation is required; continued monitoring is appropriate.
Severity: Low

---

Example 2 — Sustained Underperformance with Limited Recovery

Input:
Region: APAC
Periods analyzed: 5 weeks
Computed metrics:
- Average revenue vs target: 78%
- Conversion rate: 18%
- Pipeline coverage: 2.2x target
- Trend: Decline followed by partial recovery

Output:
APAC is materially underperforming against revenue targets, driven by persistently 
low conversion efficiency and insufficient pipeline coverage. Although early recovery 
signals are present, performance remains unstable and exposed to continued risk.
Targeted intervention is required to improve execution and strengthen pipeline 
fundamentals.
Severity: High

---

Example 3 — Strong and Consistent Execution

Input:
Region: North America
Periods analyzed: 5 weeks
Computed metrics:
- Average revenue vs target: 98%
- Conversion rate: 27%
- Pipeline coverage: 3.1x target
- Trend: Consistent growth

Output:
North America is performing strongly and remains on track to meet revenue objectives.
High conversion efficiency and robust pipeline coverage indicate effective and 
consistent sales execution. No corrective action is required.
Severity: Low

---


Example 4 — Mixed Signals Requiring Cautious Interpretation

Input:
Region: LATAM
Periods analyzed: 5 weeks
Computed metrics:
- Average revenue vs target: 89%
- Conversion rate: 22%
- Pipeline coverage: 2.6x target
- Trend: Gradual improvement after early decline

Output:

LATAM continues to operate below target, but recent performance trends indicate
gradual improvement. Conversion efficiency is acceptable and pipeline coverage is
sufficient to support further recovery. Performance should be monitored closely,
with escalation considered only if progress stalls.
Severity: Medium

---

Instructions for New Inputs
When generating insights:
1. Assess overall performance relative to expectations
2. Identify reinforcing risk or strength indicators
3. Interpret trends with attention to recovery signals
4. Recommend an appropriate level of operational response
5. Assign a severity level: Low, Medium, or High

Do not repeat metric values verbatim.
Do not infer causes beyond the provided data.
Do not escalate unless justified by sustained or compounding indicators."""


# Metric computation (deterministic)

def compute_region_metrics(df: pd.DataFrame, region: str, weeks_remaining: int) -> Dict:
    region_df = df[df["region"] == region].copy()

    total_revenue = region_df["revenue_usd"].sum()
    total_target = region_df["target_usd"].sum()

    avg_revenue_vs_target_pct = round(
        (total_revenue / total_target) * 100, 1
    )
    avg_conversion_rate_pct = round(
        region_df["conversion_rate"].mean() * 100, 1
    )
    avg_pipeline_coverage = round(
        region_df["pipeline_usd"].mean() / region_df["target_usd"].mean(), 2
    )

    revenue_by_week = region_df.sort_values("week")["revenue_usd"].tolist()
    periods = len(revenue_by_week)

    if revenue_by_week[-1] > revenue_by_week[0]:
        trend = "Improving trend over time"
    elif revenue_by_week[-1] < revenue_by_week[0]:
        trend = "Declining trend over time"
    else:
        trend = "Flat performance trend"

    weekly_improvement = round(
        (revenue_by_week[-1] - revenue_by_week[0]) / (periods - 1), 0
    ) if periods > 1 else 0

    projected_revenue = round(
        revenue_by_week[-1] + (weekly_improvement * weeks_remaining), 0
    )
    avg_weekly_target = round(total_target / periods, 0)
    gap = round(projected_revenue - avg_weekly_target, 0)

    if gap >= 0:
        gap_closure_forecast = f"ON TRACK — projected to meet target (surplus: ${gap:,.0f}/week)"
    else:
        gap_closure_forecast = f"AT RISK — projected to fall short of target by ${abs(gap):,.0f}/week"

    return {
        "region": region,
        "periods_analyzed": periods,
        "average_revenue_vs_target_pct": avg_revenue_vs_target_pct,
        "average_conversion_rate_pct": avg_conversion_rate_pct,
        "pipeline_coverage": avg_pipeline_coverage,
        "trend": trend,
        "weekly_improvement_usd": int(weekly_improvement),
        "projected_revenue_usd": int(projected_revenue),
        "avg_weekly_target_usd": int(avg_weekly_target),
        "weeks_remaining": weeks_remaining,
        "gap_closure_forecast": gap_closure_forecast,
    }


# Prompt compilation (LangChain as prompt compiler only)

def compile_sales_prompt(
    analysis_scope: str,
    regions: Optional[List[str]] = None,
    weeks_remaining: Optional[int] = None,
) -> Dict:
    """
    Compile a few-shot sales analysis prompt for watsonx Orchestrate.
    Includes linear revenue forecast for each region.
    Does NOT execute an LLM — returns compiled prompt string only.

    Args:
        analysis_scope: Description of what is being analyzed.
        regions: Optional list of region names to filter. If None, all regions are analyzed.
        weeks_remaining: Weeks left in the quarter for forecasting. Defaults to 7 if not provided.
    """
    df = load_dataset()

    if regions:
        df = df[df["region"].isin(regions)]

    if df.empty:
        raise ValueError(f"No data available for regions: {regions}")

    weeks = weeks_remaining if weeks_remaining is not None else DEFAULT_WEEKS_REMAINING
    few_shot_prompt = load_few_shot_prompt()

    # LangChain PromptTemplate is used as a prompt compiler only.
    # It fills the input_variables placeholder with the computed
    # region metrics block and returns the final prompt string.
    prompt = PromptTemplate(
        template=few_shot_prompt + "\n\nInput:\n{input}\n\nOutput:",
        input_variables=["input"],
    )

    compiled_inputs = []
    for region in df["region"].unique():
        metrics = compute_region_metrics(df, region, weeks)
        block = (
            f"Region: {metrics['region']}\n"
            f"Periods analyzed: {metrics['periods_analyzed']} weeks\n"
            f"Computed metrics:\n"
            f"- Average revenue vs target: {metrics['average_revenue_vs_target_pct']}%\n"
            f"- Conversion rate: {metrics['average_conversion_rate_pct']}%\n"
            f"- Pipeline coverage: {metrics['pipeline_coverage']}x target\n"
            f"- Trend: {metrics['trend']}\n"
            f"Forecast ({metrics['weeks_remaining']} weeks remaining):\n"
            f"- Weekly revenue improvement: ${metrics['weekly_improvement_usd']:,}/week\n"
            f"- Projected revenue at quarter-end: ${metrics['projected_revenue_usd']:,}\n"
            f"- Weekly target: ${metrics['avg_weekly_target_usd']:,}\n"
            f"- Gap closure forecast: {metrics['gap_closure_forecast']}"
        )
        compiled_inputs.append(block)

    # Iterate over all region blocks and join them into a single input string
    final_prompt = prompt.format(
        input="\n\n---\n\n".join(compiled_inputs)
    )

    return {
        "analysis_scope": analysis_scope,
        "regions": regions or list(df["region"].unique()),
        "weeks_remaining": weeks,
        "compiled_prompt": final_prompt,
    }


# Watsonx Orchestrate tool definition

@tool
def compile_sales_analysis_prompt(
    analysis_scope: str,
    regions: Optional[List[str]] = None,
    weeks_remaining: Optional[int] = None,
) -> dict:
    """
    Compile a few-shot sales analysis prompt using real sales metrics data.
    Computes revenue attainment, conversion rate, pipeline coverage, trend,
    and a linear revenue forecast for each region.

    Use this tool whenever the user asks about:
    - Sales performance, attainment, or executive summaries
    - Region comparisons or rankings
    - Underperforming or at-risk regions
    - Pipeline health or conversion rates
    - Revenue forecasts or quarter-end projections
    - Escalation priorities or recommended actions

    Args:
        analysis_scope: Brief description of what is being analyzed.
                        Examples: "all regions", "APAC recovery trend", "regions at risk"
        regions: Optional list of specific regions to analyze.
                 Valid values: "North America", "Europe", "APAC", "LATAM"
                 If not provided, all regions are analyzed.
        weeks_remaining: Number of weeks left in the current quarter for forecasting.
                         If not specified by the user, defaults to 7.

    Returns:
        A dictionary containing:
        - analysis_scope: the scope provided
        - regions: list of regions analyzed
        - weeks_remaining: weeks used for the forecast
        - compiled_prompt: the full few-shot prompt string for the agent to interpret
    """
    return compile_sales_prompt(
        analysis_scope=analysis_scope,
        regions=regions,
        weeks_remaining=weeks_remaining,
    )
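Before importing the tool, you can sanity-check the deterministic trend and forecast arithmetic in isolation. This standalone sketch mirrors the logic of compute_region_metrics on an inline weekly revenue series (the values are illustrative):

```python
# Mirrors the trend/forecast arithmetic in compute_region_metrics,
# using an inline weekly revenue series instead of the CSV.
revenue_by_week = [80_000, 84_000, 83_000, 88_000, 92_000]
weeks_remaining = 7
periods = len(revenue_by_week)

# Trend: compare the last observed week with the first.
trend = ("Improving trend over time" if revenue_by_week[-1] > revenue_by_week[0]
         else "Declining trend over time" if revenue_by_week[-1] < revenue_by_week[0]
         else "Flat performance trend")

# Linear forecast: average weekly change, projected over remaining weeks.
weekly_improvement = round((revenue_by_week[-1] - revenue_by_week[0]) / (periods - 1), 0)
projected_revenue = round(revenue_by_week[-1] + weekly_improvement * weeks_remaining, 0)

print(trend, weekly_improvement, projected_revenue)
```

If the numbers look sensible here, the same logic will behave identically inside the Orchestrate sandbox, because the tool performs no model-dependent computation.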

 

Step 10: Define the agent YAML

Create a file named sales_intelligence_orchestrator_agent.yaml inside the orchestrator folder. This YAML file is the agent's complete definition: it configures the model, instructions, tool bindings, knowledge documents, guardrails and structured response format.

Here is the content to be copied into the YAML file:

spec_version: v1
kind: native
name: Sales_Intelligence_Orchestrator

description: >

  An enterprise AI agent that collaborates with a LangChain-powered
  prompt compilation tool to analyze sales performance. The agent
  executes compiled few-shot prompts using a managed LLM, grounds
  reasoning with enterprise sales knowledge, and orchestrates
  follow-up actions across sales workflows.

instructions: |

  You are the Sales Intelligence Orchestrator.
  You are a conversational, enterprise-grade sales analysis agent.
  Users may ask questions in natural language (for example:
  “Which regions are underperforming?” or
  “Summarize sales performance for APAC and Europe”).

  Your responsibilities are to:
  - Interpret the user’s intent from their user input
  - Determine the appropriate analysis scope
  - Identify relevant regions if mentioned (otherwise analyze all regions)
  - Invoke the approved compile_sales_analysis_prompt tool with explicit parameters
  - Execute the compiled prompt using the configured language model
  - Ground all interpretations using the attached sales knowledge documents
  - Respond in clear, executive-ready language

  Strict rules:
  - Do NOT perform numerical calculations
  - Do NOT invent metrics or benchmarks
  - Do NOT assume regions unless explicitly stated or required for completeness
  - Always invoke the tool before answering analytical questions
  - Treat tool outputs as authoritative

  If the user question is ambiguous:
  - Ask a brief clarification question before invoking the tool
 
  When performance indicates risk or underperformance:
  - Reference the Sales Performance Guide
  - Reference the Sales Action Guidelines
  - Select the lowest justified severity level

model: gpt-oss-120b

tools:
  - compile_sales_analysis_prompt

knowledge:
  documents:
    - sales_performance_guide.docx
    - sales_action_guidelines.docx

response_format:
  type: structured
  fields:
    summary:
      type: string
      description: Executive-ready performance summary
    key_findings:
      type: array
      items:
        type: string
      description: Key insights derived from the analysis
    severity:
      type: string
      enum: [Low, Medium, High]
      description: Overall performance severity
    recommended_actions:
      type: array
      items:
        type: string
      description: Actions to be orchestrated based on severity

guardrails:
  disallowed_behaviors:
    - numerical_computation
    - speculative_reasoning
    - unsupported_assumptions

Step 11: Start the watsonx Orchestrate server

Start the local watsonx Orchestrate server with the .env file created in Step 6. Keep this running in a separate command window throughout the remaining steps.

orchestrate server start -e .env -l

Step 12: Import the Python tool and the agent

Import sales_analysis_pipeline.py so that watsonx Orchestrate registers it as an executable capability. Run this command from the root of your project directory:

orchestrate tools import `
  --kind python `
  --file analysis_engine/sales_analysis_pipeline.py `
  --package-root . `
  --requirements-file requirements.txt

Then, import the agent YAML:

orchestrate agents import `
  --file orchestrator\sales_intelligence_orchestrator_agent.yaml

Step 13: Test the agent in watsonx Orchestrate UI

Run the following command to open the watsonx Orchestrate chat UI in your browser:

orchestrate chat start

In the chat UI, select the Sales_Intelligence_Orchestrator agent from the agent selector.


You can ask questions in natural language to verify that the agent correctly invokes the tool, computes metrics and returns grounded insights. The agent calls compile_sales_analysis_prompt before every analytical response, computes the metrics and then returns an executive-ready insight based on the business context documents. The responses include severity levels and specific recommended actions, in accordance with sales_action_guidelines.docx.

Every user input initiates the full tool-calling flow: the agent breaks down the question, identifies the appropriate parameters and calls the tool, then receives the compiled prompt as a JSON payload.


Here are sample questions that cover the full range of the agent’s workflow and capabilities:

  1. Give me a sales performance summary for all regions.

  2. How is APAC performing this quarter?

  3. Which region has the highest conversion rate?

  4. How can APAC improve its conversion rate?

  5. Summarize pipeline health across all regions.

  6. Rank all regions by revenue attainment.

Conclusion

In this tutorial, you built a fully deployed sales orchestrator agent with watsonx Orchestrate and its ADK. Compared to approaches that rely on retrieval augmented generation (RAG), embeddings, fine-tuning OpenAI models or building custom chatbots, this pattern is leaner, easier to maintain and more predictable.

The few-shot examples act as a stable reasoning contract between the dataset and the large language model, one that you can iterate and refine without touching the underlying model or retraining anything.

This approach can be extended to other business domains such as pipeline risk scoring, customer health monitoring and operations dashboards. It applies anywhere structured data needs to be transformed into reliable, leadership‑ready insights through a conversational AI agent interface.

Author

Jobit Varughese

Technical Content Writer

IBM
