# Home

## Introduction to Agentic

Welcome to the documentation for `@obayd/agentic`!

`@obayd/agentic` is a powerful, lightweight framework designed to streamline the development of LLM-powered agents in Node.js environments. It simplifies function calling (tool usage), conversation management, and asynchronous streaming, letting you focus on building intelligent agent behavior.

### What Problem Does It Solve?

Building sophisticated LLM agents often involves several challenges:

1. **Function Calling Logic:** Defining functions (tools) the LLM can call, parsing the LLM's requests, executing the functions, and feeding the results back requires significant boilerplate.
2. **Conversation Flow:** Managing the back-and-forth between the user, the LLM, and tool executions can become complex, especially with multiple potential tool calls per turn.
3. **Streaming Responses:** Handling streamed responses from LLMs effectively to provide a responsive user experience while also parsing for potential function calls requires careful state management.
4. **Prompt Engineering:** Crafting effective system prompts that instruct the LLM on how and when to use available tools is crucial but can be tedious.
5. **LLM Agnosticism:** Integrating with different LLM providers often means adapting to slightly different API request/response formats.

`@obayd/agentic` addresses these challenges by providing a structured and intuitive API.

### Key Features

* **✨ Fluent Tool Definition:** Define tools with parameters, descriptions, required flags, enums, and even raw text input using a clean, chainable API (`Tool.make().description().param()...`).
* **📦 Toolpacks:** Group related tools into `Toolpack`s. Enable or disable entire sets of tools dynamically within a conversation using the built-in `enable_toolpack` tool.
* **🌊 Streaming First:** Designed around async generators for handling LLM responses and tool events. Process information as it arrives for a more interactive feel.
* **🔌 Flexible LLM Integration:** Connect to virtually any LLM that supports function calling and streaming responses by providing a simple `llmCallback` async generator function. A utility (`fetchResponseToStream`) is included for common SSE formats.
* **🗣️ Automated Conversation Management:** The `Conversation` class manages message history, builds system prompts incorporating tool definitions, parses LLM responses for tool calls, executes tools, and formats results back for the LLM.
* **⚙️ Dynamic System Prompts:** Define system prompt content using a mix of static strings, `Tool` instances, `Toolpack` instances, or even asynchronous functions that return content dynamically based on the conversation state.
* **🔒 Type-Safe:** Comes with comprehensive TypeScript declaration files (`.d.ts`) for robust type checking and excellent developer experience (autocompletion!) in both TypeScript and JavaScript projects.
* **☀️ Zero-Dependency:** No external dependencies, making it easy to integrate into your existing project.
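To make the LLM integration point above concrete, here is a minimal, self-contained sketch of the async-generator shape that an `llmCallback` is described as following. Note the function name `mockLlmCallback`, the `messages` parameter, and the yielded chunk format are illustrative assumptions for this sketch, not the library's actual signature; consult the API reference for the real contract.

```javascript
// Illustrative sketch only: we assume the callback receives the message
// history and yields text chunks as the LLM streams them. This mock yields
// canned chunks so the async-generator shape can be seen without a real LLM.
async function* mockLlmCallback(messages) {
  const reply = "Hello! How can I help you today?";
  // Stream the reply in small slices, as a real streaming endpoint would.
  for (let i = 0; i < reply.length; i += 8) {
    yield reply.slice(i, i + 8);
  }
}

// Consuming the stream: collect chunks as they arrive.
async function collect(messages) {
  let text = "";
  for await (const chunk of mockLlmCallback(messages)) {
    text += chunk;
  }
  return text;
}
```

In a real integration, the callback would forward the conversation to your provider's streaming endpoint (the bundled `fetchResponseToStream` utility is documented as handling common SSE formats) and yield the decoded chunks instead of canned text.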

Ready to build your agent? Head over to the [Getting Started](/agentic/getting-started.md) guide!


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://agentic.gitbook.io/agentic/home.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.
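Because the question is passed as a query-string value, it must be URL-encoded. A small sketch of building a well-formed request URL with the standard `URL` API (the question text here is purely illustrative):

```javascript
// Build the documentation query URL. URLSearchParams handles the URL
// encoding of the question string automatically.
function buildAskUrl(question) {
  const url = new URL("https://agentic.gitbook.io/agentic/home.md");
  url.searchParams.set("ask", question);
  return url.toString();
}

const askUrl = buildAskUrl("How do I define a tool with an enum parameter?");
```

The resulting URL can then be fetched with any HTTP client, e.g. `fetch(askUrl)`.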

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
