AI Representative

Ask (Streaming)

Send a question to an AI Representative and receive a real-time streamed response.

Ask an AI Representative (Streaming)

The streaming ask endpoint sends a question to a professional's AI Representative and returns the response as a real-time Server-Sent Events (SSE) stream. This is the primary endpoint for building interactive chat experiences.

POST
/api/v1/public/ai/ask
Public

Ask a question and receive a streaming SSE response

Request body

profileId (string, Required)

The unique identifier of the professional whose AI Representative you want to ask.

question (string, Required)

The question to ask the AI Representative. Maximum 1,000 characters.

mode (string, Optional)

Conversation mode. Use 'public' for visitor interactions or 'owner' for the profile owner interacting with their own AI Representative. Defaults to 'public'.

Allowed values: public, owner

visitorFingerprint (string, Optional)

A unique identifier for the visitor. Used for rate limiting and conversation continuity. Generate a stable fingerprint per device or browser session.

sessionId (string, Optional)

Session identifier for grouping related questions into a single conversation thread.

conversationHistory (array, Optional)

Previous messages in the conversation for multi-turn context. Each entry has a role field and a content field.

role (string, Required)

The message author. Either 'user' or 'assistant'.

content (string, Required)

The message text.

postSlug (string, Optional)

If the question relates to a specific blog post, include the post slug for additional context.

exchangeCount (number, Optional)

The number of question-answer exchanges so far in this session. Used for rate limit tracking.
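The schema above can be sketched as a small request-body builder. The field names come from the reference; the function name, signature, and the length check mirroring the stated 1,000-character limit are illustrative, not part of the API.

```python
def build_ask_request(profile_id, question, mode="public",
                      visitor_fingerprint=None, session_id=None,
                      conversation_history=None, post_slug=None,
                      exchange_count=None):
    """Assemble the JSON body for POST /api/v1/public/ai/ask.

    Only profileId and question are required; optional fields are
    omitted entirely when unset rather than sent as null.
    """
    if len(question) > 1000:
        raise ValueError("question exceeds the 1,000-character limit")
    body = {"profileId": profile_id, "question": question, "mode": mode}
    if visitor_fingerprint:
        body["visitorFingerprint"] = visitor_fingerprint
    if session_id:
        body["sessionId"] = session_id
    if conversation_history:
        body["conversationHistory"] = conversation_history
    if post_slug:
        body["postSlug"] = post_slug
    if exchange_count is not None:
        body["exchangeCount"] = exchange_count
    return body
```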

SSE event types

The response is a Server-Sent Events stream. Each event has a type field that tells you what kind of data it contains.

Event         Description
connected     Connection established. The stream is ready.
meta          Metadata about the AI Representative (name, profile info).
post_context  Blog post context loaded, if postSlug was provided.
text          A chunk of the AI Representative's response. Concatenate these chunks to build the full answer.
error         An error occurred during generation. Contains an error message.
done          The response is complete. Includes suggested follow-up questions.
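A minimal parser for the stream, assuming each SSE `data:` line carries a JSON object whose type field matches the table above; the exact wire framing (comments, keep-alive lines) may differ from this sketch.

```python
import json

def parse_sse_stream(lines):
    """Yield (event_type, payload) tuples from raw SSE lines.

    Skips blank keep-alive lines and SSE comments, and assumes each
    data: line holds a JSON object with a `type` field.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # comment, blank line, or other SSE field
        payload = json.loads(line[len("data:"):].strip())
        yield payload.get("type"), payload
```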

Code examples

Basic streaming request
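A sketch of a basic streaming request using only the Python standard library. The host in URL is a placeholder for your deployment, and the assumption that text events carry their chunk in a `content` field is ours, not confirmed by the reference above.

```python
import json
import urllib.request

# Hypothetical base URL; substitute your deployment's host.
URL = "https://example.com/api/v1/public/ai/ask"

def stream_ask(profile_id, question):
    """Yield parsed SSE event dicts from the streaming ask endpoint."""
    body = json.dumps({"profileId": profile_id, "question": question,
                       "mode": "public"}).encode("utf-8")
    req = urllib.request.Request(
        URL, data=body,
        headers={"Content-Type": "application/json",
                 "Accept": "text/event-stream"})
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            line = raw.decode("utf-8").strip()
            if line.startswith("data:"):
                yield json.loads(line[len("data:"):].strip())

def collect_answer(events):
    """Concatenate `text` events into the full answer, stopping at `done`."""
    parts = []
    for event in events:
        if event.get("type") == "text":
            parts.append(event.get("content", ""))
        elif event.get("type") == "done":
            break
    return "".join(parts)
```

Usage would be `collect_answer(stream_ask("prof_123", "What do you do?"))`.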

Multi-turn conversation

To maintain conversation context across multiple questions, pass the conversationHistory and sessionId fields.
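One way to carry that context forward, assuming history entries are ordered oldest-first; the helper name and the sample values are illustrative.

```python
def with_history(profile_id, question, history, session_id):
    """Build a follow-up request carrying prior turns for context.

    `history` is a list of {"role": ..., "content": ...} dicts,
    oldest first, with role either "user" or "assistant".
    """
    return {
        "profileId": profile_id,
        "question": question,
        "sessionId": session_id,
        "conversationHistory": history,
    }

# After each exchange, append both sides so the next call sees them.
history = [
    {"role": "user", "content": "What services do you offer?"},
    {"role": "assistant", "content": "I focus on brand strategy."},
]
body = with_history("prof_123", "How much does that cost?", history, "sess_1")
```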

Rate limiting

The ask endpoint is rate-limited per profile and visitor fingerprint combination. When you exceed the limit, the stream returns an error event with details about when you can retry.

Rate limit behavior

Rate limits vary with the profile owner's plan; free-plan profiles allow fewer questions per visitor. The done event includes a remaining_questions field so you can show users how many questions they have left.
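A client-side sketch for tracking the quota signals described above. The shapes of the done and error payloads (`remaining_questions`, a `message` field on errors) are assumptions for illustration.

```python
def update_quota_state(event, state):
    """Update client-side quota state from a parsed stream event.

    `state` is a plain dict: `remaining` drives a "N questions left"
    banner, `exchanges` counts completed question-answer turns, and
    `last_error` records a mid-stream failure such as a rate limit.
    """
    etype = event.get("type")
    if etype == "done":
        state["remaining"] = event.get("remaining_questions")
        state["exchanges"] = state.get("exchanges", 0) + 1
    elif etype == "error":
        state["last_error"] = event.get("message")
    return state
```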

Error handling

Errors can arrive in two ways:

  1. HTTP errors (4xx, 5xx) before the stream starts, returned as standard JSON error responses.
  2. Stream errors during generation, sent as SSE events with type: "error".

Always handle both cases in your integration.

HTTP Status   Meaning
400           Invalid request body (missing profileId or question)
404           Profile not found
429           Rate limit exceeded
500           Internal server error
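The two failure channels can be funneled into one user-facing message. The status texts below mirror the table above; the assumption that a stream error event carries its detail in a `message` field is ours.

```python
def classify_failure(http_status=None, stream_event=None):
    """Map both error channels onto a single message, or None if OK.

    Pass `http_status` when the request fails before the stream
    starts, or `stream_event` when an SSE error event arrives
    mid-generation.
    """
    http_messages = {
        400: "Invalid request body (missing profileId or question)",
        404: "Profile not found",
        429: "Rate limit exceeded",
        500: "Internal server error",
    }
    if http_status is not None:
        return http_messages.get(http_status, f"HTTP {http_status}")
    if stream_event and stream_event.get("type") == "error":
        return stream_event.get("message", "stream error")
    return None
```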