# Streaming & Events
A core design principle of `@obayd/agentic` is its streaming-first approach. When you call `conversation.send()`, you don't get a single, complete response. Instead, you get an async generator that yields events representing the different stages of the conversation turn. This allows you to process information incrementally, providing a more responsive user experience.
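For example, a minimal consumption loop might look like the following sketch. How the `conversation` instance is constructed and the message passed to `send()` are placeholders here, not prescribed by this page:

```ts
// Minimal sketch: stream assistant text to stdout as it arrives.
// Assumes an existing `conversation` instance and an async context
// (an ES module top level or an async function).
for await (const event of conversation.send("Hello!")) {
  if (event.role === "assistant") {
    // `content` is a chunk, not the full response; print it as it comes.
    process.stdout.write(event.content);
  }
}
```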
## The Event Stream
Iterating over the async generator returned by `conversation.send()` yields `ConversationEvent` objects. Each event object has a `role` property indicating its nature, along with other event-specific properties.
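Conceptually, the yielded events form a discriminated union along these lines. This is an illustrative sketch assembled from the property lists below, not the library's actual type declarations:

```ts
// Illustrative sketch only; see the property lists below for details.
type ContentPart = { type: "text"; text: string }; // simplified

type ConversationEvent =
  | { role: "assistant"; content: string }
  | {
      role: "tool.generating";
      callId: string;
      name: string;
      params: Record<string, any>;
      rawChunk: string;
      raw: string;
    }
  | {
      role: "tool.calling";
      callId: string;
      name: string;
      params: Record<string, any>;
      raw: string | null;
    }
  | {
      role: "tool";
      callId: string;
      name: string;
      params: Record<string, any>;
      raw: string | null;
      result: any;
      content: ContentPart[];
      error?: any;
    }
  | { role: "error"; type: "error"; content: string };
```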
## Event Types
Here are the different types of events you can receive:
### `assistant`

- `role: 'assistant'`
- `content: string`: A chunk of text generated by the assistant. You typically concatenate these chunks to display the full response progressively, as sketched below.
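For instance, you might accumulate chunks into a buffer and re-render on each update. In this sketch, `userMessage` and `renderMessage` are hypothetical placeholders for your own input and UI code:

```ts
// Hypothetical UI helper; replace with your own rendering logic.
declare function renderMessage(text: string): void;

// Sketch: build up the full assistant reply chunk by chunk.
let fullText = "";
for await (const event of conversation.send(userMessage)) {
  if (event.role === "assistant") {
    fullText += event.content; // append the newly received chunk
    renderMessage(fullText);   // repaint with the text so far
  }
}
```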
### `tool.generating`

- `role: 'tool.generating'`
- `callId: string`: The unique ID for this tool call attempt.
- `name: string`: The name of the tool being called.
- `params: Record<string, any>`: The parsed parameters for the tool.
- `rawChunk: string`: A chunk of the raw text input being generated by the LLM between the `<fn...>` tags (for tools defined with `.raw()`).
- `raw: string`: The accumulated raw text input received so far for this specific call.

**Purpose:** Allows you to optionally show that the LLM is "thinking" about the raw input for a tool, even before the full call is finalized. Often not needed for direct display, but see the sketch below if you want it.
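If you do want to surface it, note that `raw` already contains everything received so far, so a preview can simply be replaced rather than appended to. In this sketch, `updateToolPreview` is a hypothetical UI helper and `ConversationEvent` is the illustrative type from above:

```ts
// Hypothetical UI helper; replace with your own rendering logic.
declare function updateToolPreview(callId: string, text: string): void;

// Sketch: preview the raw input as the LLM writes it (for `.raw()` tools).
function onToolGenerating(event: ConversationEvent) {
  if (event.role === "tool.generating") {
    // `raw` already holds everything received so far: replace, don't append.
    updateToolPreview(event.callId, `${event.name}: ${event.raw}`);
  }
}
```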
### `tool.calling`

- `role: 'tool.calling'`
- `callId: string`: The unique ID for this tool call.
- `name: string`: The name of the tool being called.
- `params: Record<string, any>`: The final parsed parameters provided by the LLM.
- `raw: string | null`: The complete raw text input provided by the LLM (if the tool uses `.raw()`), otherwise `null`.

**Purpose:** Signals that the framework has parsed a complete tool call request from the LLM and is about to execute the corresponding tool's `.action()` function. You might use this to display a "Calling tool..." message, as sketched below.
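For example, using the illustrative `ConversationEvent` type from above:

```ts
// Sketch: announce a tool call as soon as the framework has parsed it.
function onToolCalling(event: ConversationEvent) {
  if (event.role === "tool.calling") {
    console.log(`Calling tool "${event.name}"...`, event.params);
  }
}
```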
### `tool` (Result)

- `role: 'tool'`
- `callId: string`: The unique ID matching the `tool.calling` event.
- `name: string`: The name of the tool that was executed.
- `params: Record<string, any>`: The parameters the tool was called with.
- `raw: string | null`: The raw input the tool was called with.
- `result: any`: The raw, JSON-serializable value returned by the tool's `.action()` function.
- `content: ContentPart[]`: The normalized result content, suitable for displaying in the history (usually `[{ type: 'text', text: '...' }]`). See `utils.normalizeToolResult`.
- `error?: any`: Present if the tool's action returned an `{ error: ... }` object or if normalization failed.

**Purpose:** Signals that a tool has finished executing. The result is what the framework automatically formats and sends back to the LLM in the next turn. You might use this event to log the outcome or display a confirmation, as sketched below.
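For example, again a sketch against the illustrative `ConversationEvent` type:

```ts
// Sketch: log the outcome of a finished tool call.
function onToolResult(event: ConversationEvent) {
  if (event.role === "tool") {
    if (event.error !== undefined) {
      console.warn(`Tool "${event.name}" (${event.callId}) failed:`, event.error);
    } else {
      console.log(`Tool "${event.name}" returned:`, event.result);
    }
  }
}
```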
### `error`

- `type: 'error'`
- `role: 'error'`
- `content: string`: An error message describing a failure during LLM streaming, response parsing, or internal `Conversation` processing (distinct from tool execution errors, which are part of the `tool` event).

**Purpose:** Indicates a problem within the `@obayd/agentic` processing loop or the LLM connection itself.
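A sketch of handling this, kept separate from tool-level errors:

```ts
// Sketch: surface stream-level failures distinctly from tool errors.
function onConversationError(event: ConversationEvent) {
  if (event.role === "error") {
    console.error("Conversation failed:", event.content);
    // A real UI might offer a retry affordance here.
  }
}
```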
## Why Streaming Matters

- **Responsiveness:** Users see the assistant's text appear incrementally, rather than waiting for the entire response.
- **Real-time feedback:** You can react the moment a tool call is initiated (`tool.calling`) or its result arrives (`tool`), updating the UI accordingly.
- **Efficiency:** Processing happens as data arrives, potentially overlapping LLM generation time with tool execution time in some scenarios (though the current loop processes tool results after the LLM stream ends).
By handling these events appropriately, you can build sophisticated and interactive agent interfaces.