Ask an AI Representative (Synchronous)
The synchronous ask endpoint sends a question to a professional's AI Representative and returns the complete answer in a single JSON response. No streaming, no event parsing. This is the simplest way to interact with an AI Representative programmatically.
`/api/v1/public/ai/ask-sync`
Ask a question and receive a complete JSON response.
When to use ask-sync vs ask
Use ask-sync for MCP integrations, backend scripts, CLI tools, and any context where you want the full answer at once. Use the streaming ask endpoint for interactive chat UIs where you want to display the response as it generates.
Request body
- `profileId` (string, required): The unique identifier of the professional whose AI Representative you want to ask.
- `question` (string, required): The question to ask the AI Representative. Maximum 1,000 characters.
Response
- `answer` (string, required): The AI Representative's complete response to your question.
- `suggested_questions` (string[], required): An array of follow-up questions the AI Representative suggests based on the conversation context.
- `remaining_questions` (number, required): The number of questions the visitor has remaining before hitting the rate limit.
Code examples
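A minimal TypeScript client sketch. The base URL is a placeholder and no auth header is shown because this page does not document one; the endpoint path, field names, and the 1,000-character limit come from this page.

```typescript
// Minimal ask-sync client. BASE_URL is a placeholder for your deployment.
const BASE_URL = "https://your-deployment.example.com";

interface AskSyncResponse {
  answer: string;
  suggested_questions: string[];
  remaining_questions: number;
}

// Enforce the documented 1,000-character question limit before sending.
function buildAskSyncBody(profileId: string, question: string): string {
  if (question.length > 1000) {
    throw new Error("question exceeds the 1,000-character maximum");
  }
  return JSON.stringify({ profileId, question });
}

async function askSync(
  profileId: string,
  question: string,
): Promise<AskSyncResponse> {
  const res = await fetch(`${BASE_URL}/api/v1/public/ai/ask-sync`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildAskSyncBody(profileId, question),
  });
  if (!res.ok) {
    throw new Error(`ask-sync failed: HTTP ${res.status}`);
  }
  return (await res.json()) as AskSyncResponse;
}
```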
Example response
```json
{
  "answer": "I specialize in residential real estate across the greater Miami area. My services include buyer representation, listing and marketing homes for sale, investment property analysis, and relocation assistance. I also offer free market reports for any neighborhood you are interested in.",
  "suggested_questions": [
    "What neighborhoods do you specialize in?",
    "Can you help with investment properties?",
    "How do I get a free market report?"
  ],
  "remaining_questions": 8
}
```
MCP integration
The synchronous endpoint is ideal for MCP (Model Context Protocol) integrations. MCP clients like Claude Desktop and Cursor can call this endpoint directly to get answers from AI Representatives without handling streaming.
Rate limiting
This endpoint shares the same rate limits as the streaming ask endpoint. The `remaining_questions` field in the response tells you how many questions the visitor has left.
Rate limit handling
When the rate limit is exceeded, the endpoint returns a 429 status code with a JSON body containing a `retryAfter` field (in seconds). Respect the `retryAfter` value, or fall back to exponential backoff, in your integration.
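One way to honor `retryAfter` with an exponential-backoff fallback. This is a sketch; the helper names are ours, not part of the API.

```typescript
// Delay (ms) before the next retry: prefer the server's retryAfter
// (seconds), otherwise back off exponentially: 1s, 2s, 4s, ...
function backoffDelayMs(retryAfter: number | undefined, attempt: number): number {
  return typeof retryAfter === "number" ? retryAfter * 1000 : 1000 * 2 ** attempt;
}

// Wrap any request function and retry on 429 up to maxRetries times.
async function withRateLimitRetry(
  doRequest: () => Promise<Response>,
  maxRetries = 3,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const res = await doRequest();
    if (res.status !== 429 || attempt >= maxRetries) return res;
    // The 429 body looks like { "error": ..., "retryAfter": seconds }.
    const body = await res.json().catch(() => ({}));
    await new Promise((r) => setTimeout(r, backoffDelayMs(body.retryAfter, attempt)));
  }
}
```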
Error responses
| HTTP Status | Meaning |
|---|---|
| 400 | Invalid request body (missing `profileId` or `question`) |
| 404 | Profile not found |
| 429 | Rate limit exceeded |
| 500 | Internal server error |
All error responses follow this shape:
```json
{
  "error": "Rate limit exceeded",
  "retryAfter": 30
}
```