Oversee a prior art search AI agent with human-in-the-loop by using LangGraph and watsonx.ai

Author

Anna Gutowska

AI Engineer, Developer Advocate

IBM

In this tutorial, you will implement human-in-the-loop as the feedback mechanism for your agentic system built with LangGraph and watsonx.ai®. Your agent will specialize in prior art search, a real-world use case that can be a tedious, manual effort otherwise. Your agent will use the Google Patents API through SerpAPI to examine patents and provide feedback on patent suggestions. The large language model (LLM) of choice will be open source IBM® Granite®.

The emergence of agentic AI has inspired developers to shift their focus and efforts from basic LLM chatbots to automation. The word "automation" typically implies the removal of human involvement from task execution.1 Would you trust an AI agent to decide critical life choices pertaining to your personal finances, for example? Many of us would not. What if a measure of human oversight could provide the end user with this missing confidence? This safeguard can take the form of human intervention, known as human-in-the-loop.

Human-in-the-loop

Human-in-the-loop (HITL) is an architectural pattern in which human feedback is required to guide the decision-making of an LLM application and provide supervision. Within the realm of artificial intelligence, HITL signifies the presence of human intervention at some stage in the AI workflow. This method assures precision, safety and accountability.

Because LangGraph persists the execution state, humans can asynchronously review and update graph states. By checkpointing the state after each step, LangGraph preserves context and can pause the workflow until human feedback is received.

In this tutorial, we will experiment with the two HITL approaches in LangGraph.

  1. Static interrupts: Editing the graph state directly at predetermined points before or after a specific node is executed. This approach requires the interrupt_before or interrupt_after parameters to be set to a list of node names when compiling the state graph.

  2. Dynamic interrupts: Interrupting a graph and awaiting user input from within a node based on the graph's current state. This approach requires the use of LangGraph's interrupt function.
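Conceptually, pausing a workflow until human feedback arrives and then resuming it with that feedback resembles a Python generator that yields control and is resumed with a sent value. This is only a rough analogy for illustration, not LangGraph's implementation:

```python
# Analogy only: a generator "pauses" until a value is sent back,
# much like a LangGraph interrupt pauses a node until human input resumes it.
def workflow():
    draft = "Find patents for self-driving cars"
    # Pause here and surface a prompt (like LangGraph's interrupt()).
    feedback = yield f"Review this query: {draft!r}"
    # Resume with the human's revision (like Command(resume=...)).
    final_query = feedback or draft
    yield f"Searching patents for: {final_query!r}"

run = workflow()
prompt = next(run)                      # workflow pauses, surfacing a prompt
result = run.send("quantum computing")  # human feedback resumes execution
print(prompt)
print(result)
```

Both HITL approaches below build on this same pause-and-resume idea; they differ only in where the pause point is declared.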

Prerequisites

1. You need an IBM Cloud® account to create a watsonx.ai project.

2. Several Python versions can work for this tutorial. At the time of publishing, we recommend downloading Python 3.13, the latest version.

Steps

Step 1. Set up your environment.

While you can choose from several tools, this tutorial walks you through how to set up an IBM account to use a Jupyter Notebook.

  1. Log in to watsonx.ai by using your IBM Cloud account.

  2. Create a watsonx.ai project.

    You can get your project ID from within your project. Click the Manage tab. Then, copy the project ID from the Details section of the General page. You need this ID for this tutorial.

  3. Create a Jupyter Notebook.

    This step opens a Jupyter Notebook environment where you can copy the code from this tutorial. Alternatively, you can download this notebook to your local system and upload it to your watsonx.ai project as an asset. This tutorial is also available on GitHub.

Step 2. Set up a watsonx.ai Runtime instance and API key.

  1. Create a watsonx.ai Runtime service instance (select your appropriate region and choose the Lite plan, which is a free instance).

  2. Generate an API Key.

  3. Associate the watsonx.ai Runtime service instance to the project that you created in watsonx.ai.

Step 3. Install and import relevant libraries and set up your credentials.

We need a few libraries and modules for this tutorial. Make sure to import the following ones; if any are not installed, a quick pip install resolves the problem.

%pip install --quiet -U langgraph langchain-ibm langgraph_sdk langgraph-prebuilt google-search-results

Restart the kernel and import the following packages.

import getpass
import uuid

from ibm_watsonx_ai import APIClient, Credentials
from ibm_watsonx_ai.foundation_models.moderations import Guardian
from IPython.display import Image, display
from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage, AIMessage
from langchain_ibm import ChatWatsonx
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, END, StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import tools_condition, ToolNode
from langgraph.types import interrupt, Command
from serpapi.google_search import GoogleSearch
from typing_extensions import TypedDict
from typing import Annotated

To set our credentials, we need the WATSONX_APIKEY and WATSONX_PROJECT_ID that you generated in the previous steps. We will also set the WATSONX_URL to serve as the API endpoint.

To access the Google Patents API, we also need a SERPAPI_API_KEY. You can generate a free key by logging in to your SerpApi account or registering for one.

WATSONX_APIKEY = getpass.getpass("Please enter your watsonx.ai Runtime API key (hit enter): ")
WATSONX_PROJECT_ID = getpass.getpass("Please enter your project ID (hit enter): ")
WATSONX_URL = getpass.getpass("Please enter your watsonx.ai API endpoint (hit enter): ")
SERPAPI_API_KEY = getpass.getpass("Please enter your SerpAPI API key (hit enter): ")

Before we initialize our LLM, we use the Credentials class to encapsulate our API credentials.

credentials = Credentials(url=WATSONX_URL, api_key=WATSONX_APIKEY)

Step 4. Instantiate the chat model

To be able to interact with all resources available in watsonx.ai Runtime, you need to set up an APIClient. Here, we pass in our credentials and WATSONX_PROJECT_ID.

client = APIClient(credentials=credentials, project_id=WATSONX_PROJECT_ID)

For this tutorial, we will be using the ChatWatsonx wrapper to set up our chat model. This wrapper simplifies the integration of tool calling and chaining. We encourage you to use the API references in the ChatWatsonx official docs for further information. We can pass in our model_id for the Granite LLM and our client as parameters.

Note, if you use a different API provider, you will need to change the wrapper accordingly.

model_id = "ibm/granite-3-3-8b-instruct"
llm = ChatWatsonx(model_id=model_id, watsonx_client=client)

Step 5. Define the patent scraper tool

AI agents use tools to fill information gaps and return relevant information. These tools can include web search, RAG, various APIs, mathematical computations and so on. With the Google Patents API through SerpAPI, we can define a tool for scraping patents. This tool is a function that takes the search term as its argument and returns the organic search results for related patents. The GoogleSearch wrapper requires parameters such as the search engine (google_patents in our case), the search term and the SERPAPI_API_KEY.

def scrape_patents(search_term: str):
    """Search for patents about the topic.

    Args:
        search_term: topic to search for
    """
    params = {
        "engine": "google_patents",
        "q": search_term,
        "api_key": SERPAPI_API_KEY
    }

    search = GoogleSearch(params)
    results = search.get_dict()
    return results["organic_results"]

Next, let’s bind the LLM to the scrape_patents  tool by using the bind_tools  method.

tools = [scrape_patents]
llm_with_tools = llm.bind_tools(tools)

Step 6. First HITL approach: Static interrupts

LangGraph agent graphs are composed of nodes and edges. Nodes are functions that relay, update, and return information. How do we keep track of this information between nodes? Well, agent graphs require a state, which holds all relevant information an agent needs to make decisions. Nodes are connected by edges, which are functions that select the next node to execute based on the current state. Edges can either be conditional or fixed.
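Abstractly, this node-edge-state flow can be sketched in plain Python. The following is a conceptual model for illustration only, not LangGraph's API; the node names, edge routing and state keys here are simplified stand-ins for what we build below:

```python
# Conceptual model of a node-edge-state flow -- not LangGraph's actual API.
# Nodes are functions that read and update a shared state dict;
# edges are functions that pick the next node based on the current state.

def moderate(state):
    state["verdict"] = "inappropriate" if "malware" in state["query"] else "safe"
    return state

def answer(state):
    state["response"] = f"Results for: {state['query']}"
    return state

def block(state):
    state["response"] = "Blocked due to inappropriate content."
    return state

nodes = {"moderate": moderate, "answer": answer, "block": block}
# Conditional edge: route out of "moderate" based on the verdict in the state;
# None marks the end of the graph.
edges = {
    "moderate": lambda s: "answer" if s["verdict"] == "safe" else "block",
    "answer": lambda s: None,
    "block": lambda s: None,
}

def run(state, entry="moderate"):
    node = entry
    while node is not None:
        state = nodes[node](state)   # execute the node, updating the state
        node = edges[node](state)    # let the edge choose the next node
    return state

print(run({"query": "self-driving cars"})["response"])
```

With this mental model in place, we can define the actual LangGraph state and nodes.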

Let’s start with creating an AgentState  class to store the context of the messages from the user, tools and the agent itself. Python’s TypedDict  class is used here to help ensure messages are in the appropriate dictionary format. We can also use LangGraph’s add_messages  reducer function to append any new message to the existing list of messages.

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
    moderation_verdict: str  # set by the guardian node, read by the conditional edge

Next, define the call_llm  function that makes up the assistant  node. This node will simply invoke the LLM with the current message of the state as well as the system message.

sys_msg = SystemMessage(content="You are a helpful assistant tasked with prior art search.")

def call_llm(state: AgentState):
    return {"messages": [llm_with_tools.invoke([sys_msg] + state["messages"])]}

Next, we can define the guardian_moderation  function that makes up the guardian  node. This node is designed to moderate messages by using a guardian system, to detect and block unwanted or sensitive content. First, the last message is retrieved. Next, a dictionary named detectors  is defined that contains the detector configurations and their threshold values. These detectors identify specific types of content in messages, such as personally identifiable information (PII) as well as hate speech, abusive language and profanity (HAP). Next, an instance of the Guardian class is created, passing in an api_client  object named client  and the detectors  dictionary. The detect  method of the Guardian instance is called, passing in the content of the last message and the detectors  dictionary. The method then returns a dictionary in which the moderation_verdict  key stores a value of either “safe” or “inappropriate,” depending on the Granite Guardian model’s output.

def guardian_moderation(state: AgentState):
    message = state["messages"][-1]
    detectors = {
        "granite_guardian": {"threshold": 0.4},
        "hap": {"threshold": 0.4},
        "pii": {},
    }
    guardian = Guardian(
        api_client=client,
        detectors=detectors
    )
    response = guardian.detect(
        text=message.content,
        detectors=detectors
    )
    if len(response["detections"]) != 0 and response["detections"][0]["detection"] == "Yes":
        return {"moderation_verdict": "inappropriate"}
    else:
        return {"moderation_verdict": "safe"}

Now, let’s define the block_message  function to serve as a notification mechanism, informing the user that their input query contains inappropriate content and has been blocked.

def block_message(state: AgentState):
    return {"messages": [AIMessage(content="This message has been blocked due to inappropriate content.")]}

We can now put all of these functions together by adding the corresponding nodes and connecting them with edges that define the flow of the graph.

The graph starts at the guardian  node, which calls the guardian_moderation  method to detect harmful content before it reaches the LLM and the API. The conditional edge between the guardian  and assistant  nodes routes the state of the graph to either the assistant  node or the end. This position is determined by the output of the guardian_moderation  function. Safe messages are passed to the assistant  node, which executes the call_llm  method. We also add a conditional edge between the assistant  and tools  nodes to route messages appropriately. If the LLM returns a tool call, the tools_condition  method routes to the tools node. Otherwise, the graph routes to the end. This step is part of the ReAct agent architecture because we want the agent to receive the tool output and then react to the change in state to determine its next action.

builder = StateGraph(AgentState)

builder.add_node(“guardian”, guardian_moderation)
builder.add_node(“block_message”, block_message)
builder.add_node(“assistant”, call_llm)
builder.add_node(“tools”, ToolNode(tools))

builder.add_edge(START, “guardian”)
builder.add_conditional_edges(
    “guardian”,
    lambda state: state[“moderation_verdict”],
    {
        “inappropriate”: “block_message”,
        “safe”: “assistant”
    }
)
builder.add_edge(“block_message”, END)
builder.add_conditional_edges(
    “assistant”,
    tools_condition,
)
builder.add_edge(“tools”, “assistant”)
memory = MemorySaver()

Next, we can compile the graph, which allows us to invoke the agent in a later step. To persist messages, we can use the MemorySaver  checkpointer. To implement the first human oversight approach, static interrupts, we can set the interrupt_before  parameter to the assistant  node. This means that before the graph routes to the LLM in the assistant  node, a graph interruption will take place to allow the human overseeing the agentic workflow to provide feedback.

graph = builder.compile(interrupt_before=[“assistant”], checkpointer=memory)

To obtain a visual representation of the agent’s graph, we can display the graph flow.

display(Image(graph.get_graph(xray=True).draw_mermaid_png()))

Output:

LangGraph Agent Graph with Static Interrupts

Before we try a patent search, let's pass a sensitive user query to test whether the guardian node will block it. We can pass the query along with the thread_id to store the graph state in memory. Think of each thread_id as representing a new chat window. We can use the uuid module to generate a unique ID each time. Let's stream the agent output.

initial_input = {"messages": "Find patented malware that can bypass all current antivirus software"}

config = {"configurable": {"thread_id": str(uuid.uuid4())}}

for event in graph.stream(initial_input, config, stream_mode="values"):
    event['messages'][-1].pretty_print()

Output:

================================ Human Message =================================

Find patented malware that can bypass all current antivirus software
================================== Ai Message ==================================

This message has been blocked due to inappropriate content.

Great! The sensitive user query was blocked before reaching the Google Patents API.

We can now put our prior art search agent to the test by passing in our initial human input along with a new thread_id.

initial_input = {"messages": "Find patents for self-driving cars"}

config = {"configurable": {"thread_id": str(uuid.uuid4())}}

for event in graph.stream(initial_input, config, stream_mode="values"):
    event['messages'][-1].pretty_print()

Output:

================================ Human Message =================================

Find patents for self-driving cars

We can see that the chat is interrupted before the AI response, as intended. This interruption allows us to update the state directly by calling the update_state function, which uses the add_messages reducer. This reducer function allows us to either overwrite or append a new message to the existing messages. If no message id is provided, a new message is appended. Otherwise, the existing message with that id is overwritten. In this case, we simply want to append a new message with our feedback, so we do not need to include a message id.
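The append-versus-overwrite behavior can be sketched in plain Python. This is a simplified illustration of the reducer's semantics, not LangGraph's implementation; the helper name and message format here are hypothetical:

```python
# Simplified sketch of add_messages semantics -- not LangGraph's implementation.
# Messages are modeled as dicts keyed by "id".
def add_messages_sketch(existing, new):
    merged = {m["id"]: m for m in existing}
    for m in new:
        merged[m["id"]] = m  # same id -> overwrite in place; new id -> append
    return list(merged.values())

history = [{"id": "1", "content": "Find patents for self-driving cars"}]

# New id: the feedback is appended as a second message.
history = add_messages_sketch(
    history, [{"id": "2", "content": "No, find quantum computing patents."}])

# Existing id: the first message is overwritten, and the count stays the same.
history = add_messages_sketch(
    history, [{"id": "1", "content": "Find patents for drones"}])
```

With that behavior in mind, we append our feedback to the running graph's state: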

graph.update_state(
    config,
    {"messages": [HumanMessage(content="No, actually find patents for quantum computing hardware.")], 
     "moderation_verdict": "safe"},
)

updated_state = graph.get_state(config).values

for m in updated_state['messages']:
    m.pretty_print()

Output:

================================ Human Message =================================

Find patents for self-driving cars
================================ Human Message =================================

No, actually find patents for quantum computing hardware.

We can see that the human message was correctly appended. Now, let's stream the agent responses once more.

Note: The tool output has been redacted for brevity.

for event in graph.stream(None, config, stream_mode="values"):
    event['messages'][-1].pretty_print()

Output:

================================ Human Message =================================

No, actually find patents for quantum computing hardware.
================================== Ai Message ==================================
Tool Calls:
  scrape_patents (chatcmpl-tool-185d0d41d090465e98c5f05e23dfdfa2)
 Call ID: chatcmpl-tool-185d0d41d090465e98c5f05e23dfdfa2
  Args:
    search_term: quantum computing hardware
================================= Tool Message =================================      
Name: scrape_patents

[{"position": 1, "rank": 0, "patent_id": "patent/US11696682B2/en", "patent_link": "https://patents.google.com/patent/US11696682B2/en", "serpapi_link": "https://serpapi.com/search.json?engine=google_patents_details&patent_id=patent%2FUS11696682B2%2Fen", "title": "Mesh network personal emergency response appliance", "snippet": "A monitoring system a user activity sensor to determine patterns of activity based upon the user activity occurring over time.", "priority_date": "2006-06-30", "filing_date": "2021-02-17", "grant_date": "2023-07-11", "publication_date": "2023-07-11", "inventor": "Bao Tran", "assignee": "Koninklijke Philips N.V.", "publication_number": "US11696682B2", "language": "en"

...

[REDACTED]

Given the loop between the LLM and the patent search tool, we have returned to the assistant node that engages the breakpoint once again. Because we want to proceed, we simply pass None.

for event in graph.stream(None, config, stream_mode="values"):
    event['messages'][-1].pretty_print()

Output:

================================= Tool Message =================================      
Name: scrape_patents

[{"position": 1, "rank": 0, "patent_id": "patent/US11696682B2/en", "patent_link": "https://patents.google.com/patent/US11696682B2/en", "serpapi_link": "https://serpapi.com/search.json?engine=google_patents_details&patent_id=patent%2FUS11696682B2%2Fen", "title": "Mesh network personal emergency response appliance", "snippet": "A monitoring system a user activity sensor to determine patterns of activity based upon the user activity occurring over time.", "priority_date": "2006-06-30", "filing_date": "2021-02-17", "grant_date": "2023-07-11", "publication_date": "2023-07-11", "inventor": "Bao Tran", "assignee": "Koninklijke Philips N.V.", "publication_number": "US11696682B2", "language": "en"

...

[REDACTED]
================================== Ai Message ==================================

Here are patents related to quantum computing hardware:

1. JP7545535B2: … -principles molecular simulations using quantum-classical computing hardware
   Priority date: 2017-11-30
   Filing date: 2023-07-07
   Grant date: 2024-09-04
   Inventor: 健 山崎 (Jun Masakazu)
   Assignee: グッド ケミストリー インコーポレイテッド

2. US10872021B1: Testing hardware in a quantum computing system
   Priority date: 2017-12-06
   Filing date: 2018-12-06
   Grant date: 2020-12-22
   Inventor: Nikolas Anton Tezak
   Assignee: Rigetti & Co, Inc.

3. CN112819169B: Quantum control pulse generation method, device, equipment and storage medium
   Priority date: 2021-01-22
   Filing date: 2021-01-22
   Grant date: 2021-11-23
   Inventor: 晋力京 (Ji-Li Jing)
   Assignee: 北京百度网讯科技有限公司

4. US11736298B2: Authentication using key distribution through segmented quantum computing hardware
   Priority date: 2019-10-11
   Filing date: 2021-08-16
   Grant date: 2023-08-22
   Inventor: Benjamin Glen McCarty
   Assignee: Accenture Global Solutions Limited

5. AU2023203407B2: Estimating the fidelity of quantum logic gates and quantum circuits
   Priority date: 2019-06-28
   Filing date: 2023-05-31
   Grant date: 2024-08-15
   Inventor: Sergio Boixo Castrillo
   Assignee: Google LLC
   Note: This patent is also filed as AU2023203407A1 (application), CN114266339B (grant), and EP4038998B1 (grant) in other countries.

6. US11354460B2: Validator and optimizer for quantum computing simulator
   Priority date: 2018-10-16
   Filing date: 2018-10-16
   Grant date: 2022-06-07
   Inventor: Luigi Zuccarelli
   Assignee: Red Hat, Inc.

7. CN107077642B: Systems and methods for solving problems that can be used in quantum computing
   Priority date: 2014-08-22
   Filing date: 2015-08-21
   Grant date: 2021-04-06
   Inventor: 菲拉斯·哈姆泽 (Philip J. Haussler)
   Assignee: D-波系统公司

8. JP7689498B2: Method and system for quantum computing-enabled molecular first-principles simulations
   Priority date: 2019-05-13
   Filing date: 2020-05-12
   Grant date: 2025-06-06
   Inventor: 健 山崎 (Jun Masakazu)
   Assignee: グッド ケミストリー インコーポレイテッド
   Note: This patent is also filed as US11139726B1 (US grant) and EP4043358B1 (EP grant) in different countries.

9. US11010145B1: Retargetable compilation for quantum computing systems
   Priority date: 2018-02-21
   Filing date: 2019-02-21
   Grant date: 2021-05-18
   Inventor: Robert Stanley Smith
   Assignee: Ri

Great! Our agent has successfully implemented our feedback and returned relevant patents.

Step 7. Second HITL approach: Dynamic interrupts

As an alternative to using static breakpoints, we can incorporate human feedback by pausing the graph from within a node by using LangGraph's interrupt function. We can build a human_in_the_loop node that enables us to directly update the state of the graph as part of the flow rather than pausing at predetermined points.

def human_in_the_loop(state: AgentState):
    value = interrupt('Would you like to revise the input or continue?')
    return {"messages": value}

We can instantiate a new graph and adjust the flow to include this node between the guardian and assistant nodes.

new_builder = StateGraph(AgentState)

new_builder.add_node("guardian", guardian_moderation)
new_builder.add_node("block_message", block_message)
new_builder.add_node("human_in_the_loop", human_in_the_loop)
new_builder.add_node("assistant", call_llm)
new_builder.add_node("tools", ToolNode(tools))

new_builder.add_edge(START, "guardian")
new_builder.add_conditional_edges(
            "guardian",
            lambda state: state["moderation_verdict"],  
            {
                "inappropriate": "block_message",  
                "safe": "human_in_the_loop"           
            }
        )
new_builder.add_edge("block_message", END)
new_builder.add_edge("human_in_the_loop", "assistant")
new_builder.add_conditional_edges(
    "assistant",
    tools_condition,
)
new_builder.add_edge("tools", "assistant")

memory = MemorySaver()

new_graph = new_builder.compile(checkpointer=memory)
display(Image(new_graph.get_graph().draw_mermaid_png()))

Output:

LangGraph Agent Graphs with Dynamic Interrupts

Great! Let's pass in our initial input to start the agent workflow.

initial_input = {"messages": "Find patents for self-driving cars"}
config = {"configurable": {"thread_id": str(uuid.uuid4())}}
new_graph.invoke(initial_input, config=config) 

Output:

{'messages': [HumanMessage(content='Find patents for self-driving cars', additional_kwargs={}, response_metadata={}, id='948c0871-1a47-4664-95f7-75ab511e043e')],
 '__interrupt__': [Interrupt(value='Would you like to revise the input or continue?', id='8d6cf9e82f9e3de28d1f6dd3ef9d90aa')]}

As you can see, the graph is interrupted and we are prompted to either revise the input or continue. Let's revise the input and resume the agent workflow by using LangGraph's Command class. This action updates the state as if it came from the human_in_the_loop node.

for event in new_graph.stream(Command(resume="Forget that. Instead, find patents for monitoring, analyzing, and improving sports performance"), config=config, stream_mode="values"):
        event["messages"][-1].pretty_print()

Output:

================================ Human Message =================================

Find patents for self-driving cars
================================ Human Message =================================

Forget that. Instead, find patents for monitoring, analyzing, and improving sports performance
================================== Ai Message ==================================
Tool Calls:
  scrape_patents (chatcmpl-tool-a8e347e5f0b74fd2bd2011954dedc6ae)
 Call ID: chatcmpl-tool-a8e347e5f0b74fd2bd2011954dedc6ae
  Args:
    search_term: monitoring, analyzing, and improving sports performance
================================= Tool Message =================================
Name: scrape_patents

[{"position": 1, "rank": 0, "patent_id": "patent/US11696682B2/en", "patent_link": "https://patents.google.com/patent/US11696682B2/en", "serpapi_link": "https://serpapi.com/search.json?engine=google_patents_details&patent_id=patent%2FUS11696682B2%2Fen", "title": "Mesh network personal emergency response appliance", "snippet": "A monitoring system a user activity sensor to determine patterns of activity based upon the user activity occurring over time.", "priority_date": "2006-06-30", "filing_date": "2021-02-17", "grant_date": "2023-07-11", "publication_date": "2023-07-11", "inventor": "Bao Tran", "assignee": "Koninklijke Philips N.V.", "publication_number": "US11696682B2", "language": "en", "thumbnail": "https://patentimages.storage.googleapis.com/dd/39/a4/021064cf6a4880/US11696682-20230711-D00000.png", "pdf": "https://patentimages.storage.googleapis.com/b3/ce/2a/b85df572cd035c/US11696682.pdf", "figures": [{"thumbnail": "https://patentimages.storage.googleapis.com/21/15/19/5061262f67d7fe/US11696682-20230711-D00000.png", "full": "https://patentimages.storage.googleapis.com/08/62/a3/037cf62a2bebd0/US11696682-20230711-D00000.png"}
... 
[REDACTED]
================================== Ai Message ==================================

Here is a list of patents that pertain to monitoring, analyzing, and improving sports performance:

1. **Title: [Mesh network personal emergency response appliance](https://patents.google.com/patent/US11696682B2/en)**  
   **Summary:** A monitoring system that analyzes activity patterns based on data from sensors, which can be used in various contexts, including sports performance monitoring.
   **Country status:** US - Active

2. **Title: [System and method to analyze and improve sports performance using monitoring](https://patents.google.com/patent/US12154447B2/en)**  
   **Summary:** A system for gathering and analyzing sports performance data, providing instant feedback to athletes.
   **Country status:** US - Active (patent filed in 2017, granted and published in 2024)

3. **Title: [Multi-sensor monitoring of athletic performance](https://patents.google.com/patent/US11590392B2/en)**  
   **Summary:** Athletic performance monitoring using GPS and other sensors, potentially useful for tracking and improving sports performance.
   **Country status:** US - Active

4. **Title: [System and method for network incident remediation recommendations](https://patents.google.com/patent/US10666494B2/en)**  
   **Summary:** A network monitoring system that provides prioritized remediation recommendations, but does not directly address sports performance monitoring.
   **Country status:** US - Active

5. **Title: [Physiological monitoring methods](https://patents.google.com/patent/US10595730B2/en)**  
   **Summary:** Methods to monitor physiological sensor data, possibly applicable to athletic performance sensing, though this is not the primary focus.
   **Country status:** US - Active

6. **Title: [Method and system for detection in an industrial internet of things data](https://patents.google.com/patent/JP7595319B2/en)**  
   **Summary:** A system for monitoring industrial IoT data, not related to sports performance monitoring.
   **Country status:** JP - Active

7. **Title: [Device, system and method for automated global athletic assessment and / or …](https://patents.google.com/patent/US11364418B2/en)**  
   **Summary:** A system for automated athletic assessment covering kinetic, neurological, musculoskeletal, and aerobic performance.
   **Country status:** US - Active

8. **Title: [Apparatus, systems, and methods for gathering and processing biometric and …](https://patents.google.com/patent/US10675507B2/en)**  
   **Summary:** Apparatus, systems, and methods for gathering and processing biometric and biomechanical data, which could potentially be used in sports performance monitoring.
   **Country status:** US - Active

9. **Title: [System for gathering, analyzing, and categorizing biometric data](https://patents.google.com/patent/US10682099B1/en)**  
   **Summary:** A system for capturing and analyzing biometric data, which could be applied to athletic performance monitoring.
   **Country status:** US - Active

10. **Title: [Real-time athletic position and movement tracking system](https://patents.google.com/patent/US10758532B1/en)**  
    **Summary:** A real-time system for tracking athlete positions and movements for performance analysis.
    **Country status:** US - Active

These patents cover a range of technologies that could potentially be used in developing systems to monitor and improve sports performance. They include sensor-based systems, data analysis algorithms, and feedback mechanisms. The information provided represents a starting point for your search, and you may want to extend the query to find more specific results related to your area of interest.

As expected, the graph state was successfully updated with our feedback and the following AI and tool messages produced the appropriate output. Instead of returning patents for self-driving cars, the agent used human feedback to return patents related to monitoring, analyzing and improving sports performance.

Summary

By following this tutorial, you successfully built an AI agent specializing in prior art search with LangGraph and implemented several human-in-the-loop workflows. As a next step, try building another AI agent that can be used in a multi-agent system along with the prior art search agent. Perhaps this secondary agent can synthesize the information retrieved from the prior art search agent to then formulate a report that compares your patent proposal to existing ones. Make it your own!

Footnotes

1. Wang, Ge. "Humans in the Loop: The Design of Interactive AI Systems." Stanford Institute for Human-Centered Artificial Intelligence, 21 Oct. 2019, hai.stanford.edu/news/humans-loop-design-interactive-ai-systems.