Messages and Streaming
Send messages within a chat session and receive AI Representative responses via Server-Sent Events (SSE). The streaming approach delivers tokens as they are generated, providing a responsive, real-time experience.
Send a message
`/api/v1/chat/sessions/{session_id}/messages`

Send a message and receive a streaming AI response via SSE.
Request body
| Field | Type | Required | Description |
|---|---|---|---|
| `content` | string | Required | The message text to send to the AI Representative. |
| `uiContext` | object | Optional | Context about the visitor's current UI state. |
| `uiContext.currentPage` | string | Optional | The page the visitor is currently viewing. |
| `uiContext.selectedElements` | string[] | Optional | UI elements the visitor has interacted with. |
| `uiContext.formData` | object | Optional | Any form data present on the current page. |
| `scanContext` | object | Optional | Context from how the visitor arrived (QR scan, link click, etc.). |
| `scanContext.tabTitle` | string | Optional | Browser tab title at the time of the scan. |
| `scanContext.pageUrl` | string | Optional | URL the visitor was on when initiating the chat. |
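A request body combining these fields might look like the following (all values are illustrative):

```json
{
  "content": "How much does the Pro plan cost?",
  "uiContext": {
    "currentPage": "/pricing",
    "selectedElements": ["pro-plan-card"],
    "formData": {}
  },
  "scanContext": {
    "tabTitle": "Pricing",
    "pageUrl": "https://example.com/pricing"
  }
}
```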
Handling SSE events
The response stream uses the Server-Sent Events protocol. Each event is a line prefixed with `data:` followed by a JSON payload.
Event types
| Event | Description |
|---|---|
| `token` | A text token from the AI response. Append it to the message display. |
| `done` | Stream complete. Contains the full assembled message and metadata. |
| `error` | An error occurred during generation. Contains error details. |
Example SSE stream
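A stream for a short response might look like this. The payload shapes are illustrative; field names beyond the event type are assumptions:

```
data: {"type": "token", "content": "Our"}
data: {"type": "token", "content": " Pro plan"}
data: {"type": "token", "content": " starts at $29/mo."}
data: {"type": "done", "message": "Our Pro plan starts at $29/mo.", "metadata": {"tokensUsed": 412}}
```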
UI context improves responses
Passing uiContext helps the AI Representative give more relevant answers. For example, if a visitor is on a pricing page, the AI can proactively address cost questions.
Compact a session
`/api/v1/chat/sessions/{session_id}/compact`

Summarize and compress a long session to free context space. Returns an SSE stream.
Long conversations can exceed the AI context window. Compaction summarizes older messages into a concise recap while preserving the most recent exchanges. The response is streamed via SSE.
Context usage
`/api/v1/chat/sessions/{session_id}/context-usage`

Get token usage statistics for a session.
Use this to monitor how much context a session is consuming and decide when to compact.
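One way to combine the two endpoints is to poll context usage and trigger compaction once a session passes a threshold. A sketch of that decision logic, assuming the usage endpoint reports a token count and a context limit (the field names and the 80% threshold are assumptions):

```python
def should_compact(tokens_used: int, context_limit: int, threshold: float = 0.8) -> bool:
    """Return True once the session has consumed the given fraction of its context window."""
    return tokens_used >= context_limit * threshold

# e.g. with a 128,000-token window, compaction triggers at 102,400 tokens used
```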
Execute a UI command
`/api/v1/chat/command`

Execute a UI command triggered by the AI Representative (e.g. navigate, open modal).
Some AI responses include structured commands that the client UI should execute. This endpoint processes those commands server-side when needed.
UI commands are typically handled client-side. Use this endpoint only when your integration requires server-side command processing.
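Since commands are usually handled client-side, a small dispatcher may be all an integration needs. The command shape (a `type` plus a `payload`) and the command names below are assumptions based on the examples above, not a documented schema:

```python
from typing import Any, Callable

# Registry mapping command types to client-side handlers (handlers are hypothetical).
handlers: dict[str, Callable[[dict[str, Any]], str]] = {
    "navigate": lambda p: f"navigating to {p['url']}",
    "open_modal": lambda p: f"opening modal {p['id']}",
}

def execute_command(command: dict[str, Any]) -> str:
    """Dispatch a structured UI command from the AI response to its handler."""
    handler = handlers.get(command["type"])
    if handler is None:
        raise ValueError(f"unknown command type: {command['type']}")
    return handler(command.get("payload", {}))
```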