# Toolpacks
Toolpacks provide a way to group related tools together. This is useful for organizing your tools and controlling their availability within a conversation. Instead of listing every single tool directly in the system prompt, you can expose logical groups that the LLM can choose to activate.
## Defining a Toolpack
Similar to `Tool`, you use a fluent API starting with `Toolpack.make()`.
```javascript
import { Tool, Toolpack } from '@obayd/agentic';

// Define some tools first
const searchFilesTool = Tool.make("search_files")
  .description("Searches the user's files by keyword.")
  .param("keyword", "Search keyword", { required: true })
  .action(async (params) => `File A matching ${params.keyword}`);

const readFileTool = Tool.make("read_file")
  .description("Reads the content of a specific file.")
  .param("filename", "The name of the file to read.", { required: true })
  .action(async (params) => ({ content: `Content of ${params.filename}...` }));

// Create a Toolpack and add the tools
const fileManagementPack = Toolpack.make("file_management") // Unique name
  .description("Tools for searching and reading user files.") // Description of the pack
  .add(searchFilesTool, readFileTool /* ... */); // Add a single tool, or multiple tools

// You can also chain tool definitions directly
const webToolsPack = Toolpack.make("web_tools")
  .description("Tools for interacting with the web.")
  .add(
    Tool.make("browse_page")
      .description("Gets the content of a webpage.")
      .param("url", "URL to browse", { required: true })
      .action(async (params) => ({ content: `Content of ${params.url}...` }))
  );
```
## Configuration Methods
- `.make(name: string)`: (Static) Starts defining a toolpack with a unique name (letters, numbers, underscores).
- `.description(description: string)`: Sets a description for the entire toolpack, explaining the capabilities it provides. Alias: `.desc()`.
- `.addTool(tool: Tool)`: Adds a single, already-defined `Tool` instance to the pack.
- `.addTools(tools: Tool[])`: Adds an array of `Tool` instances to the pack.
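To make the chaining behaviour concrete, here is a minimal self-contained sketch of the fluent-builder pattern behind an API like this. `MiniToolpack` and everything inside it are illustrative assumptions, not the library's actual implementation:

```javascript
// Illustrative sketch only -- NOT the library's implementation.
// Mimics the fluent API shape described above.
class MiniToolpack {
  constructor(name) {
    this.name = name;
    this.packDescription = "";
    this.tools = [];
  }
  static make(name) {
    // Mirrors the documented rule: letters, numbers, underscores.
    if (!/^[A-Za-z0-9_]+$/.test(name)) {
      throw new Error(`Invalid toolpack name: ${name}`);
    }
    return new MiniToolpack(name);
  }
  description(text) {
    this.packDescription = text;
    return this; // returning `this` is what makes chaining work
  }
  addTool(tool) {
    this.tools.push(tool);
    return this;
  }
  addTools(tools) {
    this.tools.push(...tools);
    return this;
  }
}

const pack = MiniToolpack.make("file_management")
  .description("Tools for searching and reading user files.")
  .addTools([{ name: "search_files" }, { name: "read_file" }]);

console.log(pack.name, pack.tools.length); // file_management 2
```

Each configuration method returns the builder itself, which is why the calls can be stacked into a single expression.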
## Using Toolpacks in Conversation
When you include a `Toolpack` instance in your `conversation.content()` definition, the framework handles it differently than a direct `Tool`:

1. **Automatic `enable_toolpack` tool:** If any `Toolpack` is defined in the content, `@obayd/agentic` automatically adds a built-in tool named `enable_toolpack`.
2. **Toolpack listing in prompt:** The system prompt lists the available toolpacks (along with their descriptions and whether they are currently enabled or disabled) instead of initially listing every tool within those packs.
3. **Enabling workflow:**
   1. The LLM decides it needs a capability provided by a disabled toolpack (based on the pack's description).
   2. The LLM calls the `enable_toolpack` tool, providing the `pack_name` parameter.
   3. The `Conversation` executes this internal tool, adding the specified pack name to the `conversation.enabledToolpacks` set.
   4. A success message is returned to the LLM in the next turn.

**Crucially:** The LLM must wait for this success message before attempting to use tools from the newly enabled pack.
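The enabling workflow can be sketched as a small state model. Everything here (`availablePacks`, `handleEnableToolpack`, the result shape) is a hypothetical illustration of the bookkeeping described above, not the library's actual code:

```javascript
// Illustrative sketch of the enabling workflow -- not the library's code.
// Models the conversation-side state: a set of enabled pack names plus a
// handler for the built-in enable_toolpack tool call.
const availablePacks = new Map([
  ["file_management", "Tools for searching and reading user files."],
  ["web_tools", "Tools for interacting with the web."],
]);
const enabledToolpacks = new Set(); // empty unless packs are pre-enabled

function handleEnableToolpack(params) {
  const { pack_name } = params;
  if (!availablePacks.has(pack_name)) {
    return { ok: false, message: `Unknown toolpack: ${pack_name}` };
  }
  enabledToolpacks.add(pack_name);
  // This result is what the LLM sees on its next turn; it must wait for
  // it before calling any tool from the newly enabled pack.
  return { ok: true, message: `Toolpack "${pack_name}" enabled.` };
}

const result = handleEnableToolpack({ pack_name: "file_management" });
console.log(result.message);                          // Toolpack "file_management" enabled.
console.log(enabledToolpacks.has("file_management")); // true
```

Validating `pack_name` before mutating the set matters: an LLM can hallucinate a pack name, and returning an error message lets it self-correct on the following turn.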
## Using Enabled Tools

Once a toolpack is enabled (either initially via the `initialEnabledToolpacks` option or via the `enable_toolpack` tool), its contained tools become available for the LLM to call directly in subsequent turns, just like tools added directly to the content.
```javascript
import { Conversation, Tool, Toolpack } from '@obayd/agentic';

// ... (llmCallback and fileManagementPack definitions from above) ...

const conversation = new Conversation(llmCallback);

conversation.content([
  "You are an assistant that can manage files.",
  fileManagementPack // Include the toolpack
]);

async function run() {
  // Turn 1: the user asks to search files. Expect the LLM to call
  // enable_toolpack(pack_name="file_management"); the conversation loop
  // internally sends the enable_toolpack result back to the LLM.
  const stream1 = conversation.send("Search my files for 'report'");
  for await (const event of stream1) {
    // ... process events ...
  }

  // Turn 2: having received the success message, the LLM can now call
  // search_files. (Depending on the model and prompting, it may first
  // reply with a confirmation like "Okay, I've enabled the file tools.
  // Now searching..." before issuing the call.) The stream then
  // typically yields:
  //   1. a tool.calling event for search_files
  //   2. a tool event with the search_files results
  //   3. an assistant event with the final answer using those results
}

run();
```
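To illustrate how tool availability resolves once packs are enabled, here is a self-contained sketch; `callableTools` and the data shapes are hypothetical helpers, not the library's API:

```javascript
// Illustrative sketch -- not the library's code. Shows how the set of
// callable tools could be derived each turn: directly added tools are
// always available, while pack tools require their pack to be enabled.
const directTools = [{ name: "get_time" }];
const packs = [
  { name: "file_management", tools: [{ name: "search_files" }, { name: "read_file" }] },
  { name: "web_tools", tools: [{ name: "browse_page" }] },
];
const enabled = new Set(["file_management"]);

function callableTools(directTools, packs, enabled) {
  const fromPacks = packs
    .filter((p) => enabled.has(p.name))   // keep only enabled packs
    .flatMap((p) => p.tools);             // flatten their tools
  return [...directTools, ...fromPacks];
}

const names = callableTools(directTools, packs, enabled).map((t) => t.name);
console.log(names); // [ 'get_time', 'search_files', 'read_file' ]
```

Note that `browse_page` is absent: the `web_tools` pack was never enabled, so its tools stay hidden from the LLM even though the pack is defined.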
## Benefits of Toolpacks

- **Scalability:** Keeps the initial system prompt cleaner when you have many tools.
- **Organization:** Groups related functionality logically.
- **Control:** Allows capabilities to be enabled and disabled dynamically during a conversation.
- **Contextual relevance:** The LLM only activates tools when they are needed for a specific task group.