Tools

Tools represent specific functions or capabilities that you want your LLM agent to be able to invoke. Agentic provides a fluent API via the Tool class to define these capabilities clearly.

Defining a Tool

You use the static Tool.make() method and chain configuration methods to define a tool.

import { Tool } from '@obayd/agentic';

const webSearchTool = Tool.make("web_search") // Unique name (letters, numbers, underscore)
    .description("Performs a web search for a given query.") // What the tool does
    .param("query", "The search query string.", { required: true }) // Define a required parameter
    .param("num_results", "Number of results to return.", { type: "number", required: false }) // Optional number param
    .param("region", "Search region.", { enum: ["US", "EU", "ANY"], required: false }) // Optional enum param
    .action(async (params, conversationInstance, ...args) => {
        // Action to perform when the tool is called
        console.log(`Searching web for '${params.query}'...`);
        // --- Actual search logic here ---
        await new Promise(resolve => setTimeout(resolve, 100)); // Simulate API call
        return [ // Return results (JSON serializable)
            { title: "Result 1", url: "...", snippet: "..." },
            { title: "Result 2", url: "...", snippet: "..." },
        ];
    });

const codeExecTool = Tool.make("execute_code")
    .description("Executes a snippet of Python code.")
    .param("timeout", "Execution timeout in seconds.", { type: "number", required: false })
    .raw("The Python code snippet to execute.") // Define that this tool takes raw text input
    .action(async (params) => {
        const code = params.raw; // Access raw input via params.raw
        const timeout = params.timeout ?? 10; // Access regular params (?? keeps an explicit 0, unlike ||)
        console.log(`Executing code with timeout ${timeout}s:\n${code}`);
        // --- Actual code execution logic (sandboxed!) ---
        await new Promise(resolve => setTimeout(resolve, 150));
        if (code.includes("error")) {
             return { error: "Simulated execution error." }; // Return an error object
        }
        return { output: "Code executed successfully. Output: ...", status: "success" };
    });

Configuration Methods

  • .make(name: string): (Static) Starts defining a tool with a unique name. Names should only contain letters, numbers, and underscores (^[a-zA-Z0-9_]+$).

  • .description(description: string): Sets the human-readable description of the tool's purpose. This is crucial for the LLM to understand when to use the tool. Alias: .desc().

  • .param(name: string, description: string, options?: ToolParamOptions | string): Defines a parameter the tool accepts.

    • name: The parameter name (sent to the LLM).

    • description: Explanation of the parameter for the LLM.

    • options: Configures the parameter's type and constraints. Defaults to "string".

      • Can be a simple type string (e.g., "string", "number", "boolean").

      • Or an object:

        • type?: string: Parameter type (default: "string").

        • required?: boolean: Is the parameter required? (default: true).

        • enum?: Array<string | number | boolean>: An array of allowed values.

  • .raw(description?: string): Specifies that the tool accepts raw text input provided between the LLM's function call tags (e.g., <fn...>RAW TEXT). The description explains what this raw input represents. If defined, the raw text will be available in the action function's params object under the key raw (params.raw).

  • .action(callback: ToolActionCallback): Defines the asynchronous function to execute when the LLM calls the tool.
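
The two forms of the options argument are equivalent ways of saying the same thing. The helper below is purely illustrative (normalizeParamOptions is not part of @obayd/agentic); it sketches how the string shorthand maps onto the full options object, given the defaults described above:

// Hypothetical helper (not part of @obayd/agentic) illustrating how the
// string shorthand for `options` corresponds to the equivalent object form.
function normalizeParamOptions(options = "string") {
    if (typeof options === "string") {
        // .param("q", "desc", "number") is shorthand for { type: "number" }
        return { type: options, required: true };
    }
    return {
        type: options.type ?? "string",       // default type: "string"
        required: options.required ?? true,   // default required: true
        ...(options.enum ? { enum: options.enum } : {}),
    };
}

console.log(normalizeParamOptions("number"));
// → { type: "number", required: true }
console.log(normalizeParamOptions({ required: false, enum: ["US", "EU"] }));
// → { type: "string", required: false, enum: ["US", "EU"] }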

Tool Execution Flow

  • You define Tools and add them to the Conversation's content.

  • The Conversation formats the tool definitions and includes them in the system prompt sent to the LLM.

  • The LLM decides to call a tool and responds with a specially formatted block (e.g., <fn_...>...</fn_...>).

  • The Conversation parses this block during the send() stream processing.

  • A tool.calling event is yielded.

  • The corresponding Tool's .action() function is invoked with the parsed parameters and raw input (if any).

  • The .action() function executes asynchronously.

  • Once the .action() Promise resolves, its result is captured.

  • A tool event containing the result is yielded by the send() stream.

  • The Conversation automatically formats the result and includes it in the next message sent to the LLM, allowing it to continue the conversation based on the tool's output.
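
The middle steps of this flow (yield tool.calling, run the action, yield the result) can be sketched as a simplified simulation. The event names follow this page (tool.calling, tool), but the event shapes and the runToolCall generator itself are illustrative assumptions, not the library's actual internals:

// Illustrative simulation of steps 5-9 of the flow above.
// NOTE: runToolCall and these event object shapes are assumptions for the
// sake of the sketch; they are not @obayd/agentic's real implementation.
async function* runToolCall(tool, params) {
    // Step 5: a tool.calling event is yielded before the action runs.
    yield { type: "tool.calling", name: tool.name, params };
    // Steps 6-8: invoke the tool's action and await its result.
    const result = await tool.action(params);
    // Step 9: yield a tool event containing the result.
    yield { type: "tool", name: tool.name, result };
}

// Usage with a minimal stand-in tool object:
const echoTool = {
    name: "echo",
    action: async ({ text }) => ({ echoed: text }),
};

(async () => {
    for await (const event of runToolCall(echoTool, { text: "hi" })) {
        console.log(event.type, JSON.stringify(event));
    }
})();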

Best Practices

  • Clear Names & Descriptions: Make tool names and descriptions unambiguous so the LLM knows exactly what each tool does and when to use it.

  • Precise Parameters: Define only the necessary parameters. Clearly describe each one. Use required: true/false appropriately. Use enum where applicable to restrict choices.

  • Handle Errors Gracefully: Inside your .action() function, catch potential errors and return a meaningful { error: "..." } object instead of letting the Promise reject uncaught. This gives the LLM context about the failure.

  • Keep Actions Focused: Tool actions should ideally perform a single, well-defined task.

  • Security: Be extremely cautious if a tool executes code, interacts with file systems, or calls external APIs. Always sanitize inputs and consider sandboxing or rate limiting.
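
The error-handling practice above can be applied with a small wrapper around an action callback. This is a sketch (safeAction is not part of @obayd/agentic): it catches anything the action throws and converts it into an { error: "..." } result the LLM can reason about:

// Hypothetical wrapper (not part of @obayd/agentic): turns thrown errors
// into an { error: "..." } return value instead of an unhandled rejection.
function safeAction(action) {
    return async (params, ...rest) => {
        try {
            return await action(params, ...rest);
        } catch (err) {
            return { error: err instanceof Error ? err.message : String(err) };
        }
    };
}

// Usage: wrap a risky action when defining the tool's behavior.
const riskyAction = async ({ url }) => {
    if (!/^https?:\/\//.test(url)) throw new Error(`Invalid URL: ${url}`);
    return { status: "fetched" };
};
const wrapped = safeAction(riskyAction);
// wrapped({ url: "nope" }) resolves to { error: "Invalid URL: nope" }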
