Tools
Tools represent specific functions or capabilities that you want your LLM agent to be able to invoke. Agentic provides a fluent API via the Tool class to define these capabilities clearly.
Defining a Tool
You use the static Tool.make() method and chain configuration methods to define a tool.
Configuration Methods
.make(name: string): (Static) Starts defining a tool with a unique name. Names may only contain letters, numbers, and underscores (^[a-zA-Z0-9_]+$).

.description(description: string): Sets the human-readable description of the tool's purpose. This is crucial for the LLM to understand when to use the tool. Alias: .desc().

.param(name: string, description: string, options?: ToolParamOptions | string): Defines a parameter the tool accepts.
name: The parameter name (sent to the LLM).
description: Explanation of the parameter for the LLM.
options: Either a simple type string (e.g., "string", "number", "boolean"; defaults to "string") or an object:
type?: string: Parameter type (default: "string").
required?: boolean: Whether the parameter is required (default: true).
enum?: Array<string | number | boolean>: An array of allowed values.

.raw(description?: string): Specifies that the tool accepts raw text input provided between the LLM's function call tags (e.g., <fn...>RAW TEXT). The description explains what this raw input represents. If defined, the raw text is available in the action function's params object under the key raw (params.raw).

.action(callback: ToolActionCallback): Defines the asynchronous function to execute when the LLM calls the tool.
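The configuration methods above can be combined into a single fluent chain. The sketch below includes a minimal stand-in Tool class (so it runs self-contained) whose surface mirrors the methods described here; the real Agentic class may differ in internals, and the get_weather tool itself is hypothetical.

```typescript
// Minimal stand-in for Agentic's Tool class, included only so this
// example is self-contained. The real implementation is provided by
// the library; this mirrors the documented fluent surface.
type ToolParamOptions = {
  type?: string;
  required?: boolean;
  enum?: Array<string | number | boolean>;
};

class Tool {
  name = "";
  descriptionText = "";
  params: Array<{ name: string; description: string; options: ToolParamOptions }> = [];
  actionFn?: (params: Record<string, unknown>) => Promise<unknown>;

  static make(name: string): Tool {
    // Names are restricted to letters, numbers, and underscores.
    if (!/^[a-zA-Z0-9_]+$/.test(name)) throw new Error(`invalid tool name: ${name}`);
    const t = new Tool();
    t.name = name;
    return t;
  }
  description(d: string): this {
    this.descriptionText = d;
    return this;
  }
  param(name: string, description: string, options: ToolParamOptions | string = "string"): this {
    // A string option is shorthand for { type: <string> }; type defaults to "string".
    const opts = typeof options === "string" ? { type: options } : options;
    this.params.push({ name, description, options: opts });
    return this;
  }
  action(fn: (params: Record<string, unknown>) => Promise<unknown>): this {
    this.actionFn = fn;
    return this;
  }
}

// Hypothetical weather-lookup tool built with the fluent API.
const weather = Tool.make("get_weather")
  .description("Look up the current weather for a city.")
  .param("city", "City name to look up") // defaults: type "string", required
  .param("units", "Temperature units", {
    type: "string",
    required: false,
    enum: ["celsius", "fahrenheit"],
  })
  .action(async (params) => {
    // A real action would call a weather API here.
    return { city: params.city, tempC: 21 };
  });
```

Each configuration method returns the tool instance, which is what makes the chained style possible.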
: Defines the asynchronous function to execute when the LLM calls the tool.
Tool Execution Flow
You define Tools and add them to the Conversation's content.
The Conversation formats the tool definitions and includes them in the system prompt sent to the LLM.
The LLM decides to call a tool and responds with a specially formatted block (e.g., <fn_...>...</fn_...>).
The Conversation parses this block during the send() stream processing.
A tool.calling event is yielded.
The corresponding Tool's .action() function is invoked with the parsed parameters and raw input (if any).
The .action() function executes asynchronously.
Once the .action() Promise resolves, its result is captured.
A tool event containing the result is yielded by the send() stream.
The Conversation automatically formats the result and includes it in the next message sent to the LLM, allowing it to continue the conversation based on the tool's output.
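The flow above can be sketched as a loop over the events yielded by send(). The event shapes below are inferred from this section (a tool.calling event before the action runs, a tool event once its Promise resolves) and are assumptions; the real payloads may differ, and fakeStream is a stand-in so the example runs self-contained.

```typescript
// Hypothetical event shapes yielded by Conversation.send(), inferred
// from the execution flow above; real payloads may differ.
type SendEvent =
  | { type: "tool.calling"; tool: string; params: Record<string, unknown> }
  | { type: "tool"; tool: string; result: unknown }
  | { type: "text"; delta: string };

// Collect tool results from a send() stream, logging each call.
async function collectToolResults(stream: AsyncIterable<SendEvent>): Promise<unknown[]> {
  const results: unknown[] = [];
  for await (const ev of stream) {
    if (ev.type === "tool.calling") {
      // Yielded before the tool's .action() function is invoked.
      console.log(`calling ${ev.tool} with`, ev.params);
    } else if (ev.type === "tool") {
      // Yielded once the action's Promise has resolved; the library
      // also feeds this result back to the LLM automatically.
      results.push(ev.result);
    }
  }
  return results;
}

// Stand-in stream so the example is self-contained.
async function* fakeStream(): AsyncGenerator<SendEvent> {
  yield { type: "tool.calling", tool: "get_weather", params: { city: "Paris" } };
  yield { type: "tool", tool: "get_weather", result: { tempC: 21 } };
}
```

Note that your code only observes these events; parsing the function-call block, invoking the action, and returning the result to the LLM are all handled by the Conversation.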
Best Practices
Clear Names & Descriptions: Make tool names and descriptions unambiguous so the LLM knows exactly what each tool does and when to use it.
Precise Parameters: Define only the necessary parameters. Clearly describe each one. Use required: true/false appropriately. Use enum where applicable to restrict choices.
Handle Errors Gracefully: Inside your .action() function, catch potential errors and return a meaningful { error: "..." } object instead of letting the Promise reject uncaught. This gives the LLM context about the failure.
Keep Actions Focused: Tool actions should ideally perform a single, well-defined task.
Security: Be extremely cautious if a tool executes code, interacts with file systems, or calls external APIs. Always sanitize inputs and consider sandboxing or rate limiting.
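The error-handling practice above can be sketched as follows. lookupUser is a hypothetical action callback; the point is the shape: failures are caught and returned as a structured { error: "..." } object rather than letting the Promise reject, so the LLM gets context about what went wrong.

```typescript
// Hypothetical tool action: validates input, does its work, and
// converts any failure into an { error } result instead of rejecting.
async function lookupUser(params: Record<string, unknown>): Promise<unknown> {
  try {
    if (typeof params.id !== "string") {
      throw new Error("missing required parameter: id");
    }
    // A real implementation would query a database or API here.
    return { id: params.id, name: "Ada" };
  } catch (err) {
    // Returned (not thrown) so the LLM can read the failure and react,
    // e.g. by re-asking the user for the missing parameter.
    return { error: err instanceof Error ? err.message : String(err) };
  }
}
```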