
Chat & Streaming

The chat API uses Server-Sent Events (SSE) to stream agent responses in real time. You send a message via POST and receive a stream of events as the agent generates its response.

POST /chat/{agent_id}

Send a message to an agent and receive the response as an SSE stream.

Prerequisites: The agent must have deployment_status: "running".

Request body:

| Field | Type | Required | Description |
|---|---|---|---|
| message | string | Yes | The user message (1-32,000 characters) |
```json
{
  "message": "What is the capital of France?"
}
```

Response: 200 OK with Content-Type: text/event-stream

Each event is a JSON object on a data: line:

```
data: {"type": "text", "content": "The capital"}
data: {"type": "text", "content": " of France is Paris."}
data: {"type": "done"}
```
| Type | Fields | Description |
|---|---|---|
| text | content | A chunk of the agent's text response |
| tool_use | name, input | Agent is calling a tool |
| tool_result | name, output | Tool execution result |
| file | path, url | Agent created or referenced a file |
| error | message | An error occurred during generation |
| done | (none) | Generation complete (stream ends) |
| keepalive | (none) | Heartbeat to prevent connection timeout |
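Each frame can be parsed defensively before dispatching on type. A minimal sketch (parseEventLine is an illustrative helper, not part of any SDK):

```javascript
// Parse one line of the SSE body into an event object, or null for
// non-data lines (blank separators between events, comment lines).
function parseEventLine(line) {
  if (!line.startsWith("data: ")) return null;
  try {
    return JSON.parse(line.slice("data: ".length));
  } catch {
    return null; // Tolerate a truncated frame instead of crashing
  }
}
```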

Since SSE via POST requires a fetch-based approach (the EventSource API only supports GET), use ReadableStream:

```javascript
async function sendMessage(agentId, message, accessToken) {
  const res = await fetch(
    `https://api.liberclaw.ai/api/v1/chat/${agentId}`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${accessToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ message }),
    }
  );
  if (!res.ok) {
    const error = await res.json();
    throw new Error(error.error.message);
  }

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop(); // Keep incomplete line in buffer

    for (const line of lines) {
      if (!line.startsWith("data: ")) continue;
      const data = JSON.parse(line.slice(6));

      switch (data.type) {
        case "text":
          process.stdout.write(data.content); // Or append to UI
          break;
        case "tool_use":
          console.log(`Tool: ${data.name}(${JSON.stringify(data.input)})`);
          break;
        case "error":
          console.error("Agent error:", data.message);
          break;
        case "done":
          console.log("\n--- Done ---");
          return;
      }
    }
  }
}
```
curl:

```sh
curl -N -X POST https://api.liberclaw.ai/api/v1/chat/550e8400-e29b-41d4-a716-446655440000 \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{"message": "What is the capital of France?"}'
```

The `-N` flag disables curl's output buffering so events appear in real time.


GET /chat/{agent_id}/stream

If the connection drops during an active generation, reconnect to receive the remaining events. Returns an SSE stream with the same event types as the POST endpoint.

This endpoint only returns data if the agent is actively generating a response for the current user.

JavaScript:

```javascript
async function reconnect(agentId, accessToken) {
  const res = await fetch(
    `https://api.liberclaw.ai/api/v1/chat/${agentId}/stream`,
    {
      headers: { Authorization: `Bearer ${accessToken}` },
    }
  );
  // Process the SSE stream the same way as sendMessage
}
```

GET /chat/{agent_id}/active

Check if the agent is currently generating a response for the current user.

Response:

```json
{
  "active": true,
  "user_message": "What is the capital of France?"
}
```

Use this after page reload to determine whether to call the reconnect endpoint.
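Putting the two endpoints together, a reload handler might look like this sketch (shouldReconnect and resumeIfActive are illustrative names, not part of the API):

```javascript
// Decide from the /active response whether to attach to the stream.
// Pure helper so the branch is easy to test.
function shouldReconnect(activeResponse) {
  return activeResponse.active === true;
}

async function resumeIfActive(agentId, accessToken) {
  const base = "https://api.liberclaw.ai/api/v1";
  const headers = { Authorization: `Bearer ${accessToken}` };

  const res = await fetch(`${base}/chat/${agentId}/active`, { headers });
  const status = await res.json();
  if (!shouldReconnect(status)) return false;

  // Generation in progress: attach and drain the remaining events
  // with the same reader loop used in sendMessage.
  const stream = await fetch(`${base}/chat/${agentId}/stream`, { headers });
  // ...process stream.body here
  return true;
}
```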


GET /chat/{agent_id}/history

Retrieve the conversation history from the agent VM.

Response:

```json
{
  "messages": [
    { "role": "user", "content": "Hello!" },
    { "role": "assistant", "content": "Hi! How can I help you?" }
  ]
}
```

Returns {"messages": []} if the agent has no VM or is unreachable.
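Fetching and rendering the history might look like this sketch (renderHistory and fetchHistory are illustrative helpers, not part of the API):

```javascript
// Turn the history payload into printable lines; also handles the
// unreachable-VM case where messages is [].
function renderHistory(payload) {
  return payload.messages.map((m) => `${m.role}: ${m.content}`);
}

async function fetchHistory(agentId, accessToken) {
  const res = await fetch(
    `https://api.liberclaw.ai/api/v1/chat/${agentId}/history`,
    { headers: { Authorization: `Bearer ${accessToken}` } }
  );
  const payload = await res.json();
  for (const line of renderHistory(payload)) console.log(line);
  return payload.messages;
}
```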


DELETE /chat/{agent_id}

Clear all conversation history for this agent.

Response: 204 No Content

curl:

```sh
curl -X DELETE https://api.liberclaw.ai/api/v1/chat/550e8400-e29b-41d4-a716-446655440000 \
  -H "Authorization: Bearer <access_token>"
```

GET /chat/{agent_id}/pending

Retrieve proactive messages from the agent (e.g., heartbeat updates, background task results).

Response:

```json
{
  "messages": [
    {
      "content": "I finished processing your request.",
      "source": "heartbeat"
    }
  ]
}
```
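A simple poller might look like the sketch below; pollPending and groupBySource are illustrative names, and the 10-second interval is an arbitrary choice, not an API requirement:

```javascript
// Group pending messages by their source (e.g. "heartbeat") so the
// UI can render them separately. Pure helper, easy to test.
function groupBySource(messages) {
  const groups = {};
  for (const msg of messages) {
    (groups[msg.source] ??= []).push(msg.content);
  }
  return groups;
}

async function pollPending(agentId, accessToken, intervalMs = 10000) {
  const url = `https://api.liberclaw.ai/api/v1/chat/${agentId}/pending`;
  const headers = { Authorization: `Bearer ${accessToken}` };
  setInterval(async () => {
    const res = await fetch(url, { headers });
    const { messages } = await res.json();
    for (const [source, contents] of Object.entries(groupBySource(messages))) {
      console.log(`[${source}]`, contents.join("\n"));
    }
  }, intervalMs);
}
```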

A typical client flow:

1. Check for active generation — GET /chat/{agent_id}/active
   - If active: true, reconnect with GET /chat/{agent_id}/stream
2. Send message — POST /chat/{agent_id} with the user's input
3. Process SSE stream — handle text, tool_use, error, and done events
4. Check pending — optionally poll GET /chat/{agent_id}/pending for proactive messages
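The steps above can be sketched as a single handler; chatFlow and chatEndpoint are illustrative names, not part of the API:

```javascript
// Pick the endpoint for steps 1-2: resume the in-flight stream if one
// exists, otherwise POST a new message.
function chatEndpoint(agentId, isActive) {
  return isActive ? `/chat/${agentId}/stream` : `/chat/${agentId}`;
}

async function chatFlow(agentId, message, accessToken) {
  const base = "https://api.liberclaw.ai/api/v1";
  const headers = { Authorization: `Bearer ${accessToken}` };

  // 1. Check for an in-flight generation.
  const activeRes = await fetch(`${base}/chat/${agentId}/active`, { headers });
  const { active } = await activeRes.json();

  // 2. Resume it (GET) or send the new message (POST).
  const res = await fetch(`${base}${chatEndpoint(agentId, active)}`, {
    method: active ? "GET" : "POST",
    headers: active
      ? headers
      : { ...headers, "Content-Type": "application/json" },
    body: active ? undefined : JSON.stringify({ message }),
  });

  // 3. Process res.body with the same reader loop as sendMessage;
  // 4. optionally poll /pending afterwards for proactive messages.
  return res;
}
```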