AI Representative

Ask an AI Representative (Synchronous)

The synchronous ask endpoint sends a question to a professional's AI Representative and returns the complete answer in a single JSON response. No streaming, no event parsing. This is the simplest way to interact with an AI Representative programmatically.

POST /api/v1/public/ai/ask-sync (Public)

Ask a question and receive a complete JSON response

When to use ask-sync vs ask

Use ask-sync for MCP integrations, backend scripts, CLI tools, and any context where you want the full answer at once. Use the streaming ask endpoint for interactive chat UIs where you want to display the response as it generates.

Request body

profileId (string, required)

The unique identifier of the professional whose AI Representative you want to ask.

question (string, required)

The question to ask the AI Representative. Maximum 1,000 characters.
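Both fields can be checked client-side before sending, so a malformed request never costs a round trip. A minimal sketch in Python (the function name is illustrative, not part of the API):

```python
def validate_ask_request(profile_id: str, question: str) -> dict:
    """Check the required fields and the documented 1,000-character
    limit, then return the JSON-serializable request body."""
    if not profile_id:
        raise ValueError("profileId is required")
    if not question:
        raise ValueError("question is required")
    if len(question) > 1000:
        raise ValueError("question exceeds the 1,000-character maximum")
    return {"profileId": profile_id, "question": question}
```

A question of exactly 1,000 characters passes; 1,001 raises a ValueError before any network I/O happens.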

Response

answer (string, required)

The AI Representative's complete response to your question.

suggested_questions (string[], required)

An array of follow-up questions the AI Representative suggests based on the conversation context.

remaining_questions (number, required)

The number of questions the visitor has remaining before hitting the rate limit.

Code examples
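A minimal request using only the Python standard library. The base URL is a placeholder; substitute your deployment's host:

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder; use your deployment's host

def build_ask_request(profile_id: str, question: str) -> urllib.request.Request:
    """Build the POST request for the synchronous ask endpoint."""
    payload = json.dumps({"profileId": profile_id, "question": question})
    return urllib.request.Request(
        f"{BASE_URL}/api/v1/public/ai/ask-sync",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask_sync(profile_id: str, question: str) -> dict:
    """Send the question and return the complete parsed JSON response."""
    with urllib.request.urlopen(build_ask_request(profile_id, question)) as resp:
        return json.load(resp)
```

The returned dict carries the `answer`, `suggested_questions`, and `remaining_questions` fields shown in the example response below.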

Example response

{
  "answer": "I specialize in residential real estate across the greater Miami area. My services include buyer representation, listing and marketing homes for sale, investment property analysis, and relocation assistance. I also offer free market reports for any neighborhood you are interested in.",
  "suggested_questions": [
    "What neighborhoods do you specialize in?",
    "Can you help with investment properties?",
    "How do I get a free market report?"
  ],
  "remaining_questions": 8
}

MCP integration

The synchronous endpoint is ideal for MCP (Model Context Protocol) integrations. MCP clients like Claude Desktop and Cursor can call this endpoint directly to get answers from AI Representatives without handling streaming.

Rate limiting

This endpoint shares the same rate limits as the streaming ask endpoint. The remaining_questions field in the response tells you how many questions the visitor has left.

Rate limit handling

When the rate limit is exceeded, the endpoint returns a 429 status code with a JSON body containing a retryAfter field (in seconds). Implement exponential backoff or respect the retryAfter value in your integration.
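One way to honor retryAfter, sketched with a pure helper that falls back to exponential backoff (1s, 2s, 4s, ...) when the field is absent; the function names are illustrative:

```python
import json
import time
import urllib.error
import urllib.request

def backoff_delay(error_body: dict, attempt: int) -> float:
    """Prefer the server's retryAfter hint (seconds); otherwise
    back off exponentially based on the zero-indexed attempt number."""
    return float(error_body.get("retryAfter", 2 ** attempt))

def post_with_retry(req: urllib.request.Request, max_attempts: int = 3) -> dict:
    """Send the request, sleeping between attempts when the server answers 429.
    Any other HTTP error, or a 429 on the final attempt, is re-raised."""
    for attempt in range(max_attempts):
        try:
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
        except urllib.error.HTTPError as err:
            if err.code != 429 or attempt == max_attempts - 1:
                raise
            time.sleep(backoff_delay(json.load(err), attempt))
```

With the error body shown below, `backoff_delay({"retryAfter": 30}, 0)` waits the server-specified 30 seconds instead of the default 1 second.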

Error responses

HTTP Status   Meaning
400           Invalid request body (missing profileId or question)
404           Profile not found
429           Rate limit exceeded
500           Internal server error

All error responses follow this shape:

{
  "error": "Rate limit exceeded",
  "retryAfter": 30
}