OpenClaw works well when you are running a single agent in a single repo from a single terminal. Problems usually start once teams begin running multiple agents across separate repos or sessions. You end up managing a handful of disconnected sessions across different repositories, each running its own shell scripts, none of them aware of what the others are doing. Context gets lost between reboots, and nobody has a clear picture of what the agents accomplished overnight.
Paperclip adds an orchestration and governance layer above OpenClaw. It assigns work through tickets, enforces budget limits, runs agents in heartbeat cycles, and routes higher-risk decisions to human approval. The building blocks are companies, roles, reporting lines, budgets, and audit logs.
This guide explains how the integration works, which configuration fields matter most, and how to test the end-to-end path before you automate it.
Why OpenClaw Alone Stops Being Enough in Multi-Agent Operations
Standalone OpenClaw becomes harder to manage once teams move beyond one or two agents. The main problem is that each agent instance runs in its own terminal, with its own context window, against its own slice of the codebase. No instance knows what the others are doing. In practice, this leads to three concrete problems.
- Duplicate and conflicting work
Two agents working on the same service can produce incompatible changes to shared files because neither has visibility into the other's session. You catch this at code review or, worse, at deploy time.
- Cost exposure
An agent stuck in a retry loop or exploring a dead-end implementation path will burn through API tokens until someone notices. Without budget boundaries at the agent level, a single bad run can generate hundreds of dollars in charges.
- No session continuity across restarts
When a machine reboots or a terminal session drops, the agent loses its working state. You can restart it, but it starts cold, with no memory of what it completed or what decisions it made in prior runs. At scale, operators spend more time re-establishing context than the agents spend executing.
These are infrastructure problems rather than model problems. OpenClaw gives you a capable runtime, but it does not provide coordination across agents, budget controls, or reliable session continuity across runs. That's the layer Paperclip adds.
What Paperclip Adds on Top of OpenClaw

Paperclip acts as a control plane around OpenClaw. It decides what work is assigned, when agents run, how much budget they can use, and which actions need human approval.
Organizational Structure
Paperclip organizes agents into isolated units called Companies. Each company has its own goal set, its own agent roster, and its own budget. Agents within a company carry job titles and reporting lines. A typical setup might assign a CEO agent to define strategy, a CTO agent to break that strategy into technical tasks, and a Founding Engineer agent to execute implementation work.
The isolation boundary matters here. Agents in one company cannot access another company's tasks, sessions, or budget. This lets you run parallel workstreams (say, a product company and an infrastructure company) without cross-contamination, and tear down or restructure one without affecting the other.
Execution Rhythm (Heartbeats)
Paperclip doesn't let agents run continuously. Every agent executes in discrete cycles called heartbeats. During a single heartbeat, the agent receives its current context (active goals, assigned tasks, remaining budget), makes decisions or executes work, and then stops. The next heartbeat triggers a new cycle.
This design solves two problems. Continuous execution makes it difficult to inspect what an agent did or why. Heartbeats produce a clear audit record: one cycle, one set of inputs, one set of outputs. It also eliminates the infinite-loop failure mode where an agent retries a broken task indefinitely. If a heartbeat completes with an error, the operator can inspect the logs and decide whether to re-trigger, reassign, or intervene before the next cycle fires.
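The heartbeat contract described above can be sketched as a single function: one cycle, one set of inputs, one audit record. The record fields are illustrative, not Paperclip's actual log format.

```python
from datetime import datetime, timezone

def run_heartbeat(agent_id, context, execute):
    """Run one discrete heartbeat cycle and return an audit record.

    `execute` is the agent's work function: it takes the context
    (active goals, assigned tasks, remaining budget) and returns an
    output dict. Nothing carries over implicitly between cycles.
    """
    record = {
        "agent_id": agent_id,
        "started_at": datetime.now(timezone.utc).isoformat(),
        "inputs": context,
        "outputs": None,
        "error": None,
    }
    try:
        record["outputs"] = execute(context)
    except Exception as exc:
        # The cycle still terminates on error; an operator inspects the
        # record and decides whether to re-trigger, reassign, or
        # intervene before the next cycle fires.
        record["error"] = str(exc)
    return record
```

Because every cycle produces exactly one record, a failing run is just a record with a populated `error` field rather than a process stuck in a loop.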
Budget Discipline
Unchecked autonomous agents can incur massive token costs in short order. Paperclip enforces hard budget limits at the agent and company levels. If an agent hits 100% of its monthly budget, Paperclip pauses that agent and blocks all future heartbeats. Execution only resumes after a human operator (acting as a board member in Paperclip's governance model) reviews the spend and explicitly re-enables the agent.
The limit is hard-enforced by design. The agent cannot override it, request an increase, or reallocate budget from another agent. This is a deliberate constraint for teams running agents on overnight or weekend schedules, where a stuck loop could otherwise accumulate token costs for hours without anyone watching.
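The enforcement logic amounts to a pre-heartbeat gate. A minimal sketch, assuming a simple spent-versus-limit check and an explicit operator flag (both names are made up for illustration):

```python
def check_budget(spent_usd, limit_usd, resumed_by_operator=False):
    """Decide whether the next heartbeat may fire.

    Mirrors the hard-limit behavior: at or past 100% of the monthly
    budget, all future heartbeats are blocked. The agent has no path
    to override, increase, or reallocate -- only an explicit human
    re-enable (resumed_by_operator) unblocks it.
    """
    if spent_usd >= limit_usd and not resumed_by_operator:
        return {"allowed": False, "reason": "budget_exhausted"}
    return {"allowed": True, "reason": None}
```

The key design point is that the override flag lives outside the agent's reach: nothing the agent emits during a heartbeat can flip it.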
Governance and Approvals
The human operator in the Paperclip ecosystem acts as the "Board of Directors". High-impact actions, such as the CEO's proposed company strategy or the hiring of new agent employees, require explicit board approval. This ensures that autonomous agents operate within boundaries defined by humans, rather than pursuing misaligned objectives.
How the OpenClaw–Paperclip Integration Works
Paperclip connects to OpenClaw through a dedicated HTTP adapter built on top of Paperclip's generic webhook system. The OpenClaw adapter extends the base with two additions: a structured payload format that separates orchestration metadata from execution context, and session tracking that persists agent state across heartbeat cycles.
Request flow
Each heartbeat triggers a POST request from Paperclip to the OpenClaw gateway URL. The payload uses a nested structure. Orchestration metadata sits under a paperclip key. The execution context (goals, task details, prior decisions) sits at the root level alongside it.
The payload shape looks like this:
{
"prompt": "...",
"context": { ... },
"paperclip": {
"runId": "hb_run_abc123",
"agentId": "agent_cto_01",
"taskId": "task_impl_auth_module"
}
}

This separation is important because the OpenClaw runtime can execute the task without parsing Paperclip's orchestration fields. The paperclip key travels through the system for traceability and callback routing, but the agent operates on the root-level context.
Synchronous vs. asynchronous execution
The gateway supports two response patterns, and the choice depends on the expected task duration: it can return the result directly in a synchronous response, or immediately acknowledge the request and deliver the result later through a callback.
For most coding work, you'll use the async pattern. The sync path is primarily useful for decision-making heartbeats where the agent evaluates priorities or reviews task status without executing against the codebase.
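On the Paperclip side, the two patterns reduce to a branch on the response status. This sketch assumes a 200 carries the result inline and a 202 is acknowledgment-only, as described later in the timeout discussion; the field names (`result`, `runId`) follow the payload examples in this guide but are otherwise assumptions.

```python
def handle_gateway_response(status, body, poll_callback=None):
    """Interpret a gateway response under the two patterns.

    Sync: 200 with the result inline. Async: 202 Accepted, with the
    actual result fetched later via the callback path.
    """
    if status == 200:
        return body["result"]                  # sync: result is inline
    if status == 202:
        if poll_callback is None:
            raise ValueError("async response requires a callback poller")
        return poll_callback(body["runId"])    # async: fetch result later
    raise RuntimeError(f"unexpected gateway status {status}")
```

The practical consequence: with async, the initial request only needs to survive long enough for the handshake, which is why the timeout guidance below differs between the two patterns.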
Session persistence
OpenClaw returns a sessionId with each response. Paperclip stores this ID and passes it back on the next heartbeat invocation for that agent, so the OpenClaw runtime can restore the agent's workspace, memory, and file state from the previous cycle.
If a session becomes invalid (the OpenClaw host restarted, the session timed out, or the workspace was manually cleared), the gateway returns an error on the next invocation. Paperclip responds by dropping the stored session data and retrying the heartbeat with a clean context. The agent loses its prior working state in this case, but the heartbeat cycle continues rather than stalling on a dead session reference. Operators can monitor for session reset events in the heartbeat run logs to catch environments that are dropping sessions too frequently.
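The drop-and-retry behavior can be sketched as a small wrapper around the gateway call. `InvalidSession`, the `sessions` store, and the `call_gateway` signature are illustrative stand-ins, not Paperclip internals; only the `sessionId` field comes from the document.

```python
class InvalidSession(Exception):
    """Raised when the stored session is dead (host restart, timeout,
    or a manually cleared workspace)."""

def invoke_with_session_recovery(agent_id, sessions, call_gateway):
    """One heartbeat invocation with session recovery.

    `sessions` maps agent IDs to stored session IDs. On an invalid
    session, drop the dead reference and retry with a clean context
    rather than stalling the heartbeat cycle on it.
    """
    session_id = sessions.get(agent_id)
    try:
        response = call_gateway(agent_id, session_id)
    except InvalidSession:
        sessions.pop(agent_id, None)           # discard the dead session
        response = call_gateway(agent_id, None) # retry cold
    sessions[agent_id] = response["sessionId"]  # store for the next cycle
    return response
```

A session-reset counter around the `except` branch is the natural place to hang the monitoring the paragraph above recommends.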
The Key Configuration Elements: Webhook URL, Auth, Payload, Timeout
Four fields in the adapter configuration control how Paperclip connects to the OpenClaw gateway. Getting any of them wrong typically produces a silent failure: the heartbeat fires, the request fails, and the agent never executes.
A minimal working configuration looks like this:
{
"adapter": "openclaw",
"webhookUrl": "http://127.0.0.1:18789",
"webhookAuthHeader": "Bearer ${secrets.openclaw_gateway_token}",
"payloadTemplate": {},
"timeoutSec": 30
}

Here's what each field controls and where the common mistakes are.
webhookUrl
This is the address of the OpenClaw gateway. For local development, the default is http://127.0.0.1:18789. For production, you'll point this at a remote host, usually over a private network. Tailscale is a common pattern for teams that want to avoid exposing the gateway to the public internet.
The most frequent setup failure is a webhookUrl that resolves correctly from the operator's machine but isn't reachable from the Paperclip server. If your first heartbeat run returns a connection error, verify that the gateway process is running on the target host, listening on the expected port, and that network rules allow inbound traffic from wherever Paperclip is hosted.
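A quick TCP-level check catches the "gateway not listening" case before you dig into logs. This is a generic reachability probe, not a Paperclip or OpenClaw tool; run it from the machine Paperclip is hosted on, since that is the path that actually matters.

```python
import socket

def gateway_listening(host, port, timeout=3.0):
    """Check that something is accepting TCP connections at host:port.

    This verifies reachability from the machine running the check only.
    A True result still doesn't prove the gateway process (vs. some
    other service) is on that port, but a False rules it out fast.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For the default local setup, `gateway_listening("127.0.0.1", 18789)` distinguishes a dead process or firewall block from an application-level failure in one call.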
webhookAuthHeader
The gateway expects an authentication token in the request header. Configure this using Paperclip's secret reference syntax (${secrets.openclaw_gateway_token}), not a raw token string. Paperclip encrypts secret values at rest and redacts them from heartbeat run logs. A hardcoded token in the adapter config will appear in plaintext in every log entry that includes the request payload.
Each agent connecting to a gateway needs a valid token. If you add a new agent and skip this step, the heartbeat will fire but the gateway will reject the request with an authentication error. Check the errorCode field in the heartbeat run record if you see agents failing immediately after provisioning.
payloadTemplate
The payloadTemplate field lets you inject additional root-level fields into every request Paperclip sends to the gateway. The standard payload (execution context plus the paperclip metadata key) ships by default. The template merges on top of it.
Use this when you need the OpenClaw runtime to route or configure execution based on metadata that Paperclip doesn't include natively. A practical example: if you run gateway instances across multiple environments, you can pass a target environment in the template so the runtime selects the correct workspace.
"payloadTemplate": {
"environment": "staging",
"priority": "high"
}

Most teams leave this empty at initial setup and add fields later as their routing needs become more specific.
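The merge itself is straightforward to model. This sketch assembles the standard payload from earlier in this guide and layers the template on top; the shallow-merge, template-wins-on-collision semantics are an assumption about how Paperclip resolves conflicts.

```python
def build_payload(prompt, context, run_id, agent_id, task_id, template=None):
    """Assemble the outbound request body.

    Standard payload first (execution context plus the paperclip
    metadata key), then payloadTemplate fields merged in at the root.
    """
    payload = {
        "prompt": prompt,
        "context": context,
        "paperclip": {"runId": run_id, "agentId": agent_id, "taskId": task_id},
    }
    payload.update(template or {})  # template fields land at the root level
    return payload
```

Note that because template fields land at the root, a template key named `prompt` or `paperclip` would clobber the standard payload under these assumed semantics, which is worth checking before relying on it.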
timeoutSec
This sets how long Paperclip waits for the gateway to respond before marking the heartbeat run as failed. The default of 30 seconds works for synchronous heartbeats (planning, status review, lightweight decisions). For anything involving code execution, 30 seconds is too short.
If you're using the synchronous execution pattern for heavier tasks, increase this to 120–300 seconds depending on the complexity of the work. If you're using the async pattern (202 Accepted with callback), the timeout only applies to the initial handshake, so 30 seconds is usually sufficient. The gateway just needs to acknowledge receipt, not complete the work.
A useful rule of thumb: if you find yourself pushing timeoutSec past 300 seconds to keep sync heartbeats from failing, switch to async. The sync pattern wasn't designed for workloads that take that long, and a dropped connection at the five-minute mark loses the entire result.
How to Test the Integration with Heartbeat Runs and Logs
A manual heartbeat run is the primary way to verify that the full execution path works. Mock payloads and unit tests won't catch the configuration, networking, and session issues that cause failures in production. Run a real heartbeat against a real gateway before you hand off any agent to an automated schedule.
Step 1: Trigger a single heartbeat
Use the Paperclip CLI to fire one heartbeat against the agent you want to test:
paperclip heartbeat run --agent-id agent_cto_01

This command sends the POST request to your configured gateway URL and streams the run output to your terminal. For a synchronous heartbeat, it blocks until the gateway responds. For an async heartbeat, it returns after the 202 handshake and then polls for the callback result.
A passing run looks like this:
[heartbeat] agent_cto_01 | run_id: hb_run_7f3a...
[heartbeat] POST http://127.0.0.1:18789 → 200 OK (4.2s)
[heartbeat] session: ses_abc123 (resumed)
[heartbeat] exitCode: 0
[heartbeat] result: task_impl_auth_module marked complete

The fields to check on a successful run: exitCode: 0 confirms the agent executed without errors. The session line tells you whether the agent started a new session or resumed an existing one. The response time (4.2s in this example) gives you a baseline for timeout tuning.
A failing run surfaces the problem in the errorCode field:
[heartbeat] agent_cto_01 | run_id: hb_run_8b2c...
[heartbeat] POST http://127.0.0.1:18789 → ERROR
[heartbeat] exitCode: 1
[heartbeat] errorCode: openclaw_gateway_unreachable

If your first heartbeat fails, check the errorCode before investigating further. The most common causes on initial setup are an unreachable gateway, a missing or invalid auth token, a timeout, or a pending device pairing; each of these is covered in the failure checklist at the end of this guide.
Step 2: Verify session persistence
A first successful heartbeat confirms connectivity and authentication. A second one helps confirm that session state is actually being carried forward. Run the same command a second time:
paperclip heartbeat run --agent-id agent_cto_01

On the second run, check two things. First, the session line should show resumed with the same session ID from the first run, not new. If the agent started a fresh session, Paperclip either didn't store the session ID from the first run or the gateway invalidated it between cycles. Second, verify in the dashboard that both heartbeat records for this agent show matching values in runtime.sessionId and runtime.sessionParams.
If the session ID doesn't carry over, the most common causes are: the gateway returned a response without a sessionId field (check the full response payload in the heartbeat logs), or a missing agent:{id}: prefix in the session key is causing the gateway to route the session to its default agent instead of the one you specified.
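The prefix issue is easy to check by hand if you can see the session keys. A small sketch, assuming a key scheme of the form agent:{id}:{sessionId} as implied above; the exact format is an assumption.

```python
def session_key(agent_id, session_id):
    """Build the per-agent session key.

    Without the agent:{id}: prefix the gateway routes the session to
    its default agent, which is the identity-mismatch failure described
    above. Exact key format is assumed for illustration.
    """
    return f"agent:{agent_id}:{session_id}"
```

Comparing the keys your gateway actually stores against this shape tells you quickly whether the prefix is being dropped somewhere in the path.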
Step 3: Inspect the full request and response payloads
Heartbeat run logs capture the complete outbound request and the gateway's response. Pull the logs for your test run:
paperclip heartbeat logs --run-id hb_run_7f3a

Compare the outbound request against your adapter configuration. Confirm that the paperclip metadata key contains the correct agentId and taskId, that any payloadTemplate fields merged correctly into the root payload, and that the auth header is present (it will appear as [REDACTED] if you used secret references, which is the expected behavior).
On the response side, verify that the gateway returned a well-formed result with a sessionId and the expected output structure. Malformed responses (missing fields, unexpected nesting) typically indicate a version mismatch between the adapter and the gateway, or a payloadTemplate that's overwriting fields the gateway expects at specific paths.
When to consider the integration stable: your agent completes two consecutive heartbeats with exitCode: 0, resumes the same session on the second run, and the request/response payloads in the logs match your expected configuration. At that point, you can move the agent to an automated heartbeat schedule with reasonable confidence that the execution path is sound.
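The stability criteria can be encoded as a single check over two consecutive run records, which is handy if you script the verification. Field names follow the log examples in this guide; the record structure itself is an assumption.

```python
def integration_stable(first_run, second_run):
    """Apply the stability criteria to two consecutive heartbeat runs:
    both exit cleanly, and the second resumes the first's session."""
    return (
        first_run["exitCode"] == 0
        and second_run["exitCode"] == 0
        and second_run["session"]["status"] == "resumed"
        and first_run["session"]["id"] == second_run["session"]["id"]
    )
```

If this returns False, the failing clause tells you which of the three conditions (clean exits, resumption, matching session IDs) to investigate first.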
Common Failure Points and What to Check First
Even a well-designed integration can fail for routine operational reasons. The following checklist represents the most frequent failure points in production environments:
- Webhook Unreachable: The most common issue for new users. If Paperclip cannot connect to the gateway, check for DNS failures or firewall blocks. In local setups, ensure that the OpenClaw gateway service is actually running and bound to the correct port.
- Authentication Failures: Often caused by a missing or invalid authentication header in the adapter configuration.
- Timeout Problems: If heartbeat logs show a timeout error code, the agent may be taking longer to respond than the timeoutSec limit allows. Increasing the timeout or implementing asynchronous callbacks is the standard fix.
- Session Persistence Issues: If agents seem to "forget" context between heartbeats, verify that OpenClaw is returning a sessionId and that Paperclip is persisting it. In multi-agent setups, a missing agent:{id}: prefix in the session key can cause the gateway to default to a "main" agent, leading to identity mismatches.
- Pairing Required: In some configurations, Paperclip agents hit a silent failure with errorCode: openclaw_gateway_pairing_required. This necessitates running the openclaw devices approve --latest command on the gateway host to authorize the Paperclip instance as a trusted device.
Conclusion
At this point, you should have a working adapter configuration connecting Paperclip to your OpenClaw gateway, at least one agent that completes consecutive heartbeat runs with exitCode: 0 and stable session persistence, and enough familiarity with the heartbeat logs to diagnose the common failure modes when they appear in production.
The integration itself is mechanically simple: a webhook, an auth header, a payload structure, and a timeout. The operational complexity comes from managing session state across agents, setting budget ceilings that match your actual workload patterns, and building enough monitoring around heartbeat runs that you catch failures before they compound. Focus your early production effort there, not on the transport layer.
If you're evaluating whether this architecture fits your team, the honest test is agent count. Teams running one or two visible agents may not need an orchestration layer yet. Once several agents are running across repos, on schedules, and against real budgets, that layer becomes much easier to justify.
