# Agent Overview
A LiberClaw agent is a FastAPI application running on its own Aleph Cloud virtual machine. Each agent has persistent storage, a set of tools, and a connection to LibertAI for inference. Agents are fully autonomous — they can execute code, browse the web, manage files, and spawn background subagents.
## Architecture

```
User (App / Telegram)
        |
        v
LiberClaw API (auth, proxy, usage tracking)
        |
        v
Agent VM (FastAPI on Aleph Cloud)
 ├── Tools (bash, files, web, spawn)
 ├── Memory (MEMORY.md + daily notes)
 ├── Skills (SKILL.md per skill)
 └── SQLite (conversation history)
        |
        v
LibertAI API (OpenAI-compatible inference)
```

Users never talk to agent VMs directly. The LiberClaw API authenticates requests, translates JWT tokens to per-agent Bearer tokens, and proxies chat messages as SSE streams.
## Agent lifecycle

### 1. Create

The user provides a name, system prompt, and model selection through the app or API. A database record is created, and a unique shared secret is generated for agent authentication.
### 2. Deploy

Deployment runs as a background task:
- CRN selection — The deployer queries Aleph Cloud for available Compute Resource Nodes (CRNs), scores them by load and responsiveness, and excludes any that have been blacklisted for recent failures.
- Instance creation — An Aleph Cloud VM instance is created on the selected CRN with a Debian 12 root filesystem. If creation fails, the deployer retries on up to 5 different CRNs automatically.
- SSH deployment — Once the VM boots (up to 5 minutes), the deployer connects via SSH, uploads the agent code as a tarball, installs dependencies, writes the `.env` configuration, and sets up a systemd service.
- Caddy HTTPS — A Caddy reverse proxy is configured on the VM to provide automatic Let's Encrypt TLS. The agent becomes reachable at its assigned domain.
- Health check — The deployer polls the agent's `/health` endpoint until it returns successfully, confirming the agent is ready.
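The retry-across-CRNs and health-polling behavior above can be sketched as follows. This is an illustrative sketch, not the actual deployer code: `create_instance`, `check`, and the parameter names are stand-ins for the real Aleph Cloud client.

```python
import time

def deploy_with_retries(crns, create_instance, max_attempts=5):
    """Try candidate CRNs in score order until one accepts the instance.

    `crns` is assumed pre-sorted by load/responsiveness score, with
    blacklisted nodes already filtered out.
    """
    last_error = None
    for crn in crns[:max_attempts]:
        try:
            return create_instance(crn)   # returns the new VM handle
        except Exception as exc:          # creation failed on this CRN
            last_error = exc
    raise RuntimeError(f"all {max_attempts} CRN attempts failed") from last_error

def wait_for_health(check, timeout=300, interval=5, sleep=time.sleep):
    """Poll the agent's /health endpoint (via `check`) until it succeeds
    or the boot window (up to 5 minutes) elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        sleep(interval)
    return False
```

Injecting `sleep` as a parameter keeps the polling loop testable without real delays.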
### 3. Ready

The agent is now live. It accepts chat messages via the LiberClaw API proxy, runs an agentic loop (inference + tool execution), and streams responses back as SSE events.
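A client consuming the proxied stream parses standard SSE frames. The sketch below is a minimal, assumption-laden parser (it handles only `event:` and `data:` fields and the blank line that ends a frame); the event names match the agent's event set, but the exact payload shapes are illustrative.

```python
def parse_sse(stream_lines):
    """Yield (event, data) pairs from an iterable of SSE text lines."""
    event, data = "message", []
    for line in stream_lines:
        if line == "":                       # blank line ends the frame
            if data:
                yield event, "\n".join(data)
            event, data = "message", []
        elif line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())

# Each event type (text, tool_use, file, error, done) arrives as its own frame.
frames = [
    "event: text", 'data: {"delta": "Hello"}', "",
    "event: done", "data: {}", "",
]
events = list(parse_sse(frames))
```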
### 4. Ongoing operations

- Heartbeat — If configured, the agent checks `workspace/HEARTBEAT.md` at regular intervals (default 30 minutes) and executes any instructions found there.
- Subagents — The agent can spawn background workers for parallel tasks (up to 5 concurrent).
- Memory — The agent reads and writes persistent memory files across conversations.
- Repair/redeploy — If an agent becomes unhealthy, the LiberClaw API can trigger a repair (re-deploy to the same or a different CRN).
## Core agentic loop

When the agent receives a message, it runs an iterative tool-use loop:
1. Build the system prompt (static prefix + dynamic memory/skills context)
2. Load conversation history from SQLite, compacting if needed
3. Call LibertAI inference with the message history and tool definitions
4. If the model returns tool calls, execute each tool and append results
5. Repeat from step 3 until the model returns a text response with no tool calls
6. Stream all events (text, tool_use, file, error, done) back to the client via SSE
The loop runs for up to 50 iterations by default (`max_tool_iterations`). Inference failures are retried up to 2 additional times with a 5-second delay between attempts.
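Under the iteration and retry limits above, the loop can be sketched as follows. `call_inference` and `run_tool` are stand-ins for the real inference client and tool dispatcher, and the reply dict shape is an assumption made for illustration.

```python
import time

def agentic_loop(messages, call_inference, run_tool,
                 max_tool_iterations=50, retries=2, retry_delay=5,
                 sleep=time.sleep):
    """Iterate inference + tool execution until a plain text reply."""
    for _ in range(max_tool_iterations):
        for attempt in range(retries + 1):      # 1 try + up to 2 retries
            try:
                reply = call_inference(messages)
                break
            except Exception:
                if attempt == retries:
                    raise
                sleep(retry_delay)              # 5-second delay between attempts
        if not reply.get("tool_calls"):         # no tool calls -> final answer
            return reply["text"]
        for call in reply["tool_calls"]:        # execute each tool and
            result = run_tool(call)             # append its result to history
            messages.append({"role": "tool", "content": result})
    raise RuntimeError("max_tool_iterations exceeded")
```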
## Authentication

Every agent has a shared secret (set at deploy time). All API requests except `/health` require an `Authorization: Bearer <token>` header. The agent stores a SHA-256 hash of the secret and uses constant-time comparison for verification.
The LiberClaw API holds the plaintext secrets (encrypted at rest with Fernet) and injects the correct Bearer token when proxying requests.
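The hash-and-compare scheme can be sketched with the standard library; `hmac.compare_digest` provides the constant-time comparison. Function names are illustrative.

```python
import hashlib
import hmac

def hash_secret(secret: str) -> str:
    """Produce the SHA-256 digest stored as AGENT_SECRET_HASH."""
    return hashlib.sha256(secret.encode()).hexdigest()

def verify_bearer(token: str, stored_hash: str) -> bool:
    """Constant-time check of the presented Bearer token against the stored hash."""
    presented = hashlib.sha256(token.encode()).hexdigest()
    return hmac.compare_digest(presented, stored_hash)
```

Comparing fixed-length digests (rather than raw secrets) keeps the comparison time independent of the attacker-controlled input length.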
## Configuration

Agents are configured via environment variables written to the VM's `.env` file during deployment. See Agent API for the full endpoint reference and Context & Compaction for prompt configuration details.
| Variable | Default | Description |
|---|---|---|
| `AGENT_NAME` | `"Agent"` | Display name |
| `SYSTEM_PROMPT` | `"You are a helpful assistant."` | Custom instructions |
| `MODEL` | `"hermes-3-8b-tee"` | LibertAI model ID |
| `LIBERTAI_API_KEY` | (required) | API key for inference |
| `AGENT_SECRET_HASH` | (required) | SHA-256 hash of the shared secret |
| `WORKSPACE_PATH` | `/opt/baal-agent/workspace` | Persistent workspace directory |
| `MAX_TOOL_ITERATIONS` | `50` | Max tool calls per turn |
| `MAX_HISTORY` | `100` | Messages loaded from history |
| `HEARTBEAT_INTERVAL` | `1800` | Seconds between heartbeat checks (`0` = disabled) |
| `MAX_CONTEXT_TOKENS` | `0` | Context window override (`0` = auto-detect) |
| `GENERATION_RESERVE` | `4096` | Tokens reserved for model output |
| `INFERENCE_TIMEOUT` | `180` | Seconds before inference times out |
| `TELEGRAM_BOT_TOKEN` | `""` | Optional Telegram bot token for direct messaging |
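Reading a subset of these variables with their defaults might look like the following sketch (the `AgentConfig` class is illustrative, not the agent's actual settings object):

```python
import os
from dataclasses import dataclass

@dataclass
class AgentConfig:
    agent_name: str
    model: str
    max_tool_iterations: int
    heartbeat_interval: int

    @classmethod
    def from_env(cls, env=os.environ):
        """Build a config from environment variables, applying the
        documented defaults when a variable is unset."""
        return cls(
            agent_name=env.get("AGENT_NAME", "Agent"),
            model=env.get("MODEL", "hermes-3-8b-tee"),
            max_tool_iterations=int(env.get("MAX_TOOL_ITERATIONS", "50")),
            heartbeat_interval=int(env.get("HEARTBEAT_INTERVAL", "1800")),
        )
```

Passing `env` explicitly makes the loader easy to test without mutating the process environment.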