# Utilities

`@obayd/agentic` includes a few utility functions, primarily for internal use but potentially helpful for users, especially when implementing the `llmCallback`.

## fetchResponseToStream

This is the utility you are most likely to use directly. It's designed to simplify processing streaming responses from LLM APIs that use the **Server-Sent Events (SSE)** protocol, which many providers do (e.g., OpenAI, and Anthropic in some modes).
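For reference, an OpenAI-style SSE response body is a sequence of `data:` lines, each carrying a JSON chunk. The exact fields vary by provider; this is an illustrative example, not a spec:

```text
data: {"choices":[{"delta":{"content":"Hel"}}]}

data: {"choices":[{"delta":{"content":"lo"}}]}

data: [DONE]
```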

**Signature:**

```typescript
import { fetchResponseToStream } from '@obayd/agentic';

declare function fetchResponseToStream(
    response: Response // Standard Fetch API Response object
): AsyncGenerator<string>;
```

### Functionality:

* Takes a standard `Response` object obtained from a `fetch` call.
* Checks if the response was successful (`response.ok`). Throws an error with status and body text if not.
* Checks if the response has a body (`response.body`). Throws if not.
* Reads the response body as a stream (`ReadableStream`).
* Decodes the stream chunks using `TextDecoder`.
* Splits the decoded text by lines, handling potential partial lines across chunks.
* Looks for lines starting with `data:`.
* Attempts to JSON-parse the content following `data:`.
* Specifically looks for text content within the parsed JSON at the path `choices[0].delta.content` (common in OpenAI-like streams).
* If found, yields the text chunk. If JSON parsing fails, or that path isn't present, it currently yields the raw content after `data:` as a fallback.
* Stops when the stream ends or a `data: [DONE]` message is encountered.
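The per-line logic above can be sketched as follows. This is a simplified illustration, not the library's actual implementation; `parseSseLine` is a hypothetical helper name:

```javascript
// Hypothetical sketch of the per-line SSE handling described above.
function parseSseLine(line) {
    const trimmed = line.trim();
    // Only `data:` lines carry payload; ignore everything else.
    if (!trimmed.startsWith('data:')) return null;
    const data = trimmed.slice('data:'.length).trim();
    if (data === '[DONE]') return { done: true };
    try {
        const parsed = JSON.parse(data);
        const textChunk = parsed?.choices?.[0]?.delta?.content;
        // Fall back to the raw data when the expected path is missing.
        return { done: false, text: typeof textChunk === 'string' ? textChunk : data };
    } catch {
        // Not valid JSON: yield the raw content as-is.
        return { done: false, text: data };
    }
}
```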

### Usage Example (in llmCallback):

```javascript
import { fetchResponseToStream } from '@obayd/agentic';

async function* llmCallback(messages, options) {
    // ... setup fetch request ...
    const response = await fetch(API_ENDPOINT, { /* ... headers, body ... */ });

    // Directly yield from the helper generator
    yield* fetchResponseToStream(response);
}
```

**Note:** If your LLM API uses a different streaming format (e.g., newline-delimited JSON or a custom binary format), `fetchResponseToStream` will likely not parse it correctly, and you will need to implement your own stream-parsing logic within your `llmCallback`.
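As a rough sketch of such custom parsing, here is a hypothetical helper for newline-delimited JSON (NDJSON). It is not part of `@obayd/agentic`, and the `text` field name is an assumption; adapt it to your provider's schema:

```javascript
// Hypothetical NDJSON equivalent of fetchResponseToStream (not part of the library).
// Assumes each line is a JSON object like {"text": "..."}.
async function* ndjsonToStream(response) {
    if (!response.ok) throw new Error(`HTTP ${response.status}: ${await response.text()}`);
    if (!response.body) throw new Error('Response has no body');
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    let buffer = '';
    while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split('\n');
        buffer = lines.pop(); // keep any partial trailing line for the next chunk
        for (const line of lines) {
            if (!line.trim()) continue;
            const parsed = JSON.parse(line);
            if (typeof parsed.text === 'string') yield parsed.text;
        }
    }
    // Flush any complete final line left in the buffer.
    if (buffer.trim()) {
        const parsed = JSON.parse(buffer);
        if (typeof parsed.text === 'string') yield parsed.text;
    }
}
```

You would then `yield* ndjsonToStream(response)` inside your `llmCallback`, just as with the SSE helper.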
