An augmented LLM is a language model enhanced with tool access, extending its ability to complete complex tasks. By leveraging external functionality, the model can understand and respond to queries it could not handle alone. Think of it as a supercharged assistant that calls on various tools rather than relying solely on its pre-trained knowledge.
In agentic LLM applications, tools are invoked via function calls: the LLM specifies a function and its parameters, and the application executes the call on its behalf. A tool can be any function you can define and describe, which means tools can drastically extend the capabilities of off-the-shelf LLMs.
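As a minimal sketch of what "definable, describable" means in practice, here is a plain Python function paired with the kind of JSON-schema description an application might send to the model. The function name, description text, and schema field names are illustrative; each provider's API has its own exact format.

```python
def get_weather(city: str) -> str:
    """Return a short weather summary for a city (stubbed for illustration)."""
    return f"Sunny in {city}"

# The description the application would send alongside each request,
# telling the model what the tool does and what parameters it takes:
weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a given city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Paris'"}
        },
        "required": ["city"],
    },
}
```

The model never runs `get_weather` itself; it only sees the description and emits a request that the application maps back to the real function.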
Two specialized tool categories in agentic applications are data retrieval and persistent memory management. Retrieval systems can be vector databases, web search, keyword search, or even SQL queries. Memory management preserves state and user information in an external data store, outside the LLM's context.
The flow of function calling in LLM-based applications starts with the application providing the available tools with each API request. The LLM selects a tool and specifies the function and parameters, the application executes the function, and the results are then integrated into the LLM's response.
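The loop above can be sketched end to end. The model is mocked here so the example is self-contained; `mock_llm`, `run_turn`, and the response format are all illustrative, not any provider's actual API.

```python
import json

# Tools the application makes available on each request.
TOOLS = {"add": lambda a, b: a + b}

def mock_llm(prompt: str, tools: dict) -> dict:
    # Stand-in for a real model call: pretend the model chose "add".
    return {"tool": "add", "arguments": json.dumps({"a": 2, "b": 3})}

def run_turn(prompt: str) -> str:
    call = mock_llm(prompt, TOOLS)        # 1. model selects a tool and parameters
    args = json.loads(call["arguments"])  # 2. application parses the arguments
    result = TOOLS[call["tool"]](**args)  # 3. application executes the function
    return f"The answer is {result}."     # 4. result integrated into the reply

run_turn("What is 2 + 3?")  # -> "The answer is 5."
```

In a real system, step 4 would send the tool result back to the model for a second completion rather than formatting the string directly.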
For AI agents, well-detailed tool descriptions are crucial. The field of "Agent-Computer Interaction" studies how to design interfaces that give LLMs clear instructions on tool usage, improving their performance and decision-making.
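To illustrate the difference a description makes, compare a sparse tool definition with a detailed one. Both are hypothetical examples of the schema style shown earlier, not any particular provider's format.

```python
# Sparse: the model must guess when to call this and what to pass.
sparse = {"name": "search", "description": "search"}

# Detailed: states purpose, when to use it, what it returns, and
# gives an example argument -- all cues the model can act on.
detailed = {
    "name": "search_docs",
    "description": (
        "Search the product documentation. Use this when the user asks "
        "about features or configuration. Returns up to 5 excerpts."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Plain-language query, e.g. 'reset password'",
            }
        },
        "required": ["query"],
    },
}
```

The detailed version costs a few extra tokens per request but tends to pay for itself in fewer wrong or malformed tool calls.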