AI agents base their actions on the information that they perceive. However, they often lack the full knowledge required to tackle every subtask within a complex goal. To bridge this gap, they turn to available tools such as external datasets, web searches, APIs and even other agents.
Once the missing information is gathered, the agent updates its knowledge base and engages in agentic reasoning. This process involves continuously reassessing its plan of action and making self-corrections, which enables more informed and adaptive decision-making.
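In code, that gather-reassess-self-correct loop might look something like the minimal sketch below. Everything in it is hypothetical: the `TOOLS` registry, the scripted `call_llm` stand-in and the decision format are illustrative assumptions, not a real agent framework.

```python
# A minimal agentic loop: reassess the plan, call a tool to fill a knowledge
# gap, fold the result into the knowledge base, repeat. All names are
# hypothetical stand-ins, not a real framework API.

TOOLS = {
    "dataset_query": lambda q: f"[rows matching {q!r}]",        # external dataset
    "ask_agent":     lambda q: f"[specialist answer to {q!r}]", # another agent
}

# Scripted decisions standing in for real LLM calls: gather, gather, finish.
SCRIPT = [
    {"action": "dataset_query", "input": "daily weather reports, Greece"},
    {"action": "ask_agent",     "input": "what makes ideal surf conditions?"},
    {"action": "finish",        "answer": "best week identified"},
]

def call_llm(goal: str, knowledge: list) -> dict:
    """Stand-in for the model: pick the next step given the goal and
    everything learned so far."""
    return SCRIPT[len(knowledge)]

def run_agent(goal: str, max_steps: int = 10) -> str:
    knowledge = []                              # working knowledge base
    for _ in range(max_steps):
        decision = call_llm(goal, knowledge)    # reassess the plan each step
        if decision["action"] == "finish":
            return decision["answer"]
        # Bridge the knowledge gap with a tool call, then loop and self-correct.
        knowledge.append(TOOLS[decision["action"]](decision["input"]))
    return "step budget exhausted"

print(run_agent("Predict the best surf week in Greece next year"))
```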
To illustrate this process, imagine a user planning a vacation. The user tasks an AI agent with predicting which week of the coming year would likely have the best weather for a surfing trip in Greece.
Because the LLM at the core of the agent does not specialize in weather patterns, it cannot rely solely on its internal knowledge. The agent therefore gathers information from an external database containing daily weather reports for Greece over the past several years.
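A retrieval step like this one could be implemented as a simple query against the dataset. The sketch below assumes a local SQLite file with a `daily_reports` table; the path and schema are illustrative assumptions, not part of the original scenario.

```python
# Hypothetical query against the external weather database. The file path
# and the daily_reports schema are illustrative assumptions.
import sqlite3

def load_weather_history(db_path: str = "greece_weather.db") -> list[dict]:
    """Fetch several years of daily weather reports for Greece."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row   # rows behave like dicts
    rows = conn.execute(
        "SELECT date, tide_m, sunshine_hours, rain_mm FROM daily_reports"
    ).fetchall()
    conn.close()
    return [dict(r) for r in rows]
```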
Despite acquiring this new information, the agent still cannot determine what weather conditions are optimal for surfing, so it creates the next subtask. For this subtask, the agent communicates with an external agent that specializes in surfing. Let’s say that in doing so, it learns that high tides and sunny weather with little to no rain provide the best surfing conditions.
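For the agent to act on the specialist’s answer, the reply has to be distilled into something machine-usable. One plausible encoding, with threshold values that are purely illustrative:

```python
# What the agent learns from the surfing specialist, distilled into
# thresholds it can apply to weather records. The numbers are assumptions.
SURF_CRITERIA = {
    "min_tide_m": 1.5,          # high tides
    "min_sunshine_hours": 8,    # sunny weather
    "max_rain_mm": 1.0,         # little to no rain
}
```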
The agent can now combine the information it has learned from its tools to identify patterns: it can predict which week next year in Greece will likely have high tides, sunny weather and a low chance of rain. These findings are then presented to the user. This ability to combine information from multiple tools is what allows AI agents to be more general-purpose than traditional AI models.3
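Continuing the sketch and reusing the `SURF_CRITERIA` from above, combining the two sources could be as simple as scoring each calendar week of the historical record against the specialist’s thresholds and picking the winner. The sample records and field names are made up for illustration:

```python
# Combine the weather history with the specialist's criteria: count how many
# days in each ISO week satisfy all thresholds, then pick the best week.
from collections import defaultdict
from datetime import date

def best_surf_week(records: list[dict], criteria: dict) -> int:
    """Return the ISO week number whose days best satisfy the criteria."""
    hits = defaultdict(int)
    for r in records:
        if (r["tide_m"] >= criteria["min_tide_m"]
                and r["sunshine_hours"] >= criteria["min_sunshine_hours"]
                and r["rain_mm"] <= criteria["max_rain_mm"]):
            week = date.fromisoformat(r["date"]).isocalendar().week
            hits[week] += 1
    return max(hits, key=hits.get)

records = [  # tiny made-up sample of daily reports
    {"date": "2024-07-15", "tide_m": 1.8, "sunshine_hours": 11, "rain_mm": 0.0},
    {"date": "2024-07-16", "tide_m": 1.7, "sunshine_hours": 10, "rain_mm": 0.2},
    {"date": "2024-05-02", "tide_m": 1.1, "sunshine_hours": 6,  "rain_mm": 4.0},
]
print(best_surf_week(records, SURF_CRITERIA))  # -> 29 (mid-July)
```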