OpenClaw Paperclip Integration: How to Connect, Configure, and Test It

April 9, 2026
Myroslav Budzanivskyi
Co-Founder & CTO


OpenClaw works well when you are running a single agent in a single repo from a single terminal. Problems usually start once teams begin running multiple agents across separate repos or sessions. You end up managing a handful of disconnected sessions across different repositories, each running its own shell scripts, none of them aware of what the others are doing. Context gets lost between reboots, and nobody has a clear picture of what the agents accomplished overnight.

KEY TAKEAWAYS

Single-agent limits appear fast: standalone OpenClaw starts to break down once multiple agents are running across separate repos and sessions.

Paperclip adds control: it layers coordination, budget enforcement, heartbeat-based execution, and human approvals on top of the OpenClaw runtime.

Heartbeats improve oversight: discrete execution cycles make agent activity easier to inspect and reduce the risk of uncontrolled retry loops.

Stability depends on testing: the integration is only considered stable after consecutive successful heartbeats with resumed session state and correct request and response payloads.

Paperclip adds an orchestration and governance layer above OpenClaw. It assigns work through tickets, enforces budget limits, runs agents in heartbeat cycles, and routes higher-risk decisions to human approval. The building blocks are companies, roles, reporting lines, budgets, and audit logs.

This guide explains how the integration works, which configuration fields matter most, and how to test the end-to-end path before you automate it.

Why OpenClaw Alone Stops Being Enough in Multi-Agent Operations

Standalone OpenClaw becomes harder to manage once teams move beyond one or two agents. The main problem is that each agent instance runs in its own terminal, with its own context window, against its own slice of the codebase. No instance knows what the others are doing. In practice, this leads to three concrete problems.

  1. Duplicate and conflicting work 

Two agents working on the same service can produce incompatible changes to shared files because neither has visibility into the other's session. You catch this at code review or, worse, at deploy time.

  2. Cost exposure

An agent stuck in a retry loop or exploring a dead-end implementation path will burn through API tokens until someone notices. Without budget boundaries at the agent level, a single bad run can generate hundreds of dollars in charges.

⚠️

Key risk: a retry loop or dead-end implementation path can keep consuming API tokens until someone notices, which is why this guide treats budget boundaries as an operational requirement.

  3. No session continuity across restarts

When a machine reboots or a terminal session drops, the agent loses its working state. You can restart it, but it starts cold, with no memory of what it completed or what decisions it made in prior runs. At scale, operators spend more time re-establishing context than the agents spend executing.

These are infrastructure problems rather than model problems. OpenClaw gives you a capable runtime, but it does not provide coordination across agents, budget controls, or reliable session continuity across runs. That's the layer Paperclip adds.

What Paperclip Adds on Top of OpenClaw

Paperclip acts as a control plane around OpenClaw, structuring agent organizations, enforcing execution cycles, limiting budgets, and requiring human approval for high-impact decisions.

Paperclip acts as a control plane around OpenClaw. It decides what work is assigned, when agents run, how much budget they can use, and which actions need human approval.

Organizational Structure

Paperclip organizes agents into isolated units called Companies. Each company has its own goal set, its own agent roster, and its own budget. Agents within a company carry job titles and reporting lines. A typical setup might assign a CEO agent to define strategy, a CTO agent to break that strategy into technical tasks, and a Founding Engineer agent to execute implementation work.

The isolation boundary matters here. Agents in one company cannot access another company's tasks, sessions, or budget. This lets you run parallel workstreams (say, a product company and an infrastructure company) without cross-contamination, and tear down or restructure one without affecting the other.

Execution Rhythm (Heartbeats)

Paperclip doesn't let agents run continuously. Every agent executes in discrete cycles called heartbeats. During a single heartbeat, the agent receives its current context (active goals, assigned tasks, remaining budget), makes decisions or executes work, and then stops. The next heartbeat triggers a new cycle.

This design solves two problems. Continuous execution makes it difficult to inspect what an agent did or why. Heartbeats produce a clear audit record: one cycle, one set of inputs, one set of outputs. It also eliminates the infinite-loop failure mode where an agent retries a broken task indefinitely. If a heartbeat completes with an error, the operator can inspect the logs and decide whether to re-trigger, reassign, or intervene before the next cycle fires.
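To make the cycle concrete, here is a minimal sketch of what one heartbeat looks like as a function: context in, work done, stop. The names (`HeartbeatContext`, `run_heartbeat`) are illustrative, not Paperclip's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class HeartbeatContext:
    """Inputs delivered to an agent at the start of a cycle (illustrative)."""
    active_goals: list
    assigned_tasks: list
    remaining_budget_usd: float

@dataclass
class HeartbeatResult:
    exit_code: int
    outputs: list = field(default_factory=list)

def run_heartbeat(ctx: HeartbeatContext, execute) -> HeartbeatResult:
    # One cycle: receive context, do the work, stop. No internal retry loop --
    # a failed cycle surfaces a nonzero exit code for the operator to inspect
    # before the next heartbeat fires.
    try:
        outputs = [execute(task, ctx) for task in ctx.assigned_tasks]
        return HeartbeatResult(exit_code=0, outputs=outputs)
    except Exception:
        return HeartbeatResult(exit_code=1)
```

The point of the shape is the audit record: one cycle maps to one set of inputs and one set of outputs.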

Budget Discipline

Unchecked autonomous agents can incur massive token costs in short order. Paperclip enforces hard budget limits at the agent and company levels. If an agent hits 100% of its monthly budget, Paperclip pauses that agent and blocks all future heartbeats. Execution only resumes after a human operator (acting as a board member in Paperclip's governance model) reviews the spend and explicitly re-enables the agent.

The limit is hard-enforced by design. The agent cannot override it, request an increase, or reallocate budget from another agent. This is a deliberate constraint for teams running agents on overnight or weekend schedules, where a stuck loop could otherwise accumulate token costs for hours without anyone watching.
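The enforcement behavior described above reduces to a simple gate. The sketch below assumes spend is tracked per agent per month; the function name and signature are hypothetical.

```python
def may_run_heartbeat(spent_usd: float, monthly_limit_usd: float,
                      board_reenabled: bool = False) -> bool:
    """Hard budget gate (illustrative of the behavior described above).

    At 100% of the monthly limit the agent is paused and every future
    heartbeat is blocked until a human board member re-enables it.
    The agent itself cannot override or reallocate the limit.
    """
    if spent_usd < monthly_limit_usd:
        return True
    return board_reenabled
```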

Governance and Approvals

The human operator in the Paperclip ecosystem acts as the "Board of Directors". High-impact actions, such as the CEO's proposed company strategy or the hiring of new agent employees, require explicit board approval. This ensures that autonomous agents operate within boundaries defined by humans, rather than pursuing misaligned objectives.

How the OpenClaw–Paperclip Integration Works

Paperclip connects to OpenClaw through a dedicated HTTP adapter built on top of Paperclip's generic webhook system. The OpenClaw adapter extends the base with two additions: a structured payload format that separates orchestration metadata from execution context, and session tracking that persists agent state across heartbeat cycles.

Request flow

Each heartbeat triggers a POST request from Paperclip to the OpenClaw gateway URL. The payload uses a nested structure. Orchestration metadata sits under a paperclip key. The execution context (goals, task details, prior decisions) sits at the root level alongside it.

The payload shape looks like this:

{
  "prompt": "...",
  "context": { ... },
  "paperclip": {
    "runId": "hb_run_abc123",
    "agentId": "agent_cto_01",
    "taskId": "task_impl_auth_module"
  }
}

This separation is important: the OpenClaw runtime can execute the task without parsing Paperclip's orchestration fields. The paperclip key travels through the system for traceability and callback routing, while the agent operates on the root-level context.
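A sketch of how that split can be constructed and consumed; the helper names here are ours, not part of either tool:

```python
def build_heartbeat_payload(prompt, context, run_id, agent_id, task_id):
    # Execution context stays at the root; orchestration metadata is nested
    # under "paperclip" so the runtime can ignore it entirely.
    return {
        "prompt": prompt,
        "context": context,
        "paperclip": {"runId": run_id, "agentId": agent_id, "taskId": task_id},
    }

def execution_view(payload):
    # What the runtime actually operates on: everything except the
    # orchestration key.
    return {k: v for k, v in payload.items() if k != "paperclip"}
```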

Synchronous vs. asynchronous execution

The gateway supports two response patterns, and the choice depends on the expected task duration: it can return either a direct result or an acknowledgment.

| Pattern | Response code | When to use | How it resolves |
| --- | --- | --- | --- |
| Synchronous | 2xx | Short tasks: status checks, strategic reviews, lightweight planning | Gateway returns the result directly in the response body. Heartbeat completes immediately. |
| Asynchronous | 202 Accepted | Long tasks: multi-file implementation, test generation, large refactors | Gateway acknowledges receipt. The agent calls back to Paperclip's completion endpoint with results when finished. |

For most coding work, you'll use the async pattern. The sync path is primarily useful for decision-making heartbeats where the agent evaluates priorities or reviews task status without executing against the codebase.
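The dispatch between the two patterns can be sketched as follows. The status codes come from the table above; the body field names (ackId, result, errorCode) are assumptions for illustration only.

```python
def resolve_heartbeat(status_code: int, body: dict):
    """Sketch of the two response patterns (field names are assumed)."""
    if status_code == 202:
        # Async: the gateway only acknowledged receipt; the real result
        # arrives later via a callback to Paperclip's completion endpoint.
        return ("pending", body.get("ackId"))
    if 200 <= status_code < 300:
        # Sync: the result is in the response body; the heartbeat
        # completes immediately.
        return ("complete", body.get("result"))
    return ("failed", body.get("errorCode"))
```

Note that 202 must be checked before the generic 2xx branch, since 202 is itself a 2xx code.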

Session persistence

OpenClaw returns a sessionId with each response. Paperclip stores this ID and passes it back on the next heartbeat invocation for that agent, so the OpenClaw runtime can restore the agent's workspace, memory, and file state from the previous cycle.

If a session becomes invalid (the OpenClaw host restarted, the session timed out, or the workspace was manually cleared), the gateway returns an error on the next invocation. Paperclip responds by dropping the stored session data and retrying the heartbeat with a clean context. The agent loses its prior working state in this case, but the heartbeat cycle continues rather than stalling on a dead session reference. Operators can monitor for session reset events in the heartbeat run logs to catch environments that are dropping sessions too frequently.
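The resume-or-reset behavior can be modeled with a small helper; `invoke` stands in for the gateway call, and the shape of its return value is an assumption for this sketch:

```python
def next_session_id(stored_session_id, invoke):
    """Resume the stored session; on an invalid session, drop it and retry
    once with a clean context (mirrors the behavior described above).

    `invoke(session_id_or_None)` is a stand-in for the gateway call and
    returns (ok, session_id).
    """
    if stored_session_id is not None:
        ok, sid = invoke(stored_session_id)
        if ok:
            return sid, False  # resumed
    # Stored session missing or invalid: start cold rather than stall.
    ok, sid = invoke(None)
    return sid, True  # reset occurred -- worth counting in run logs
```

The second return value is the "session reset" signal operators would watch for in the heartbeat run logs.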

The Key Configuration Elements: Webhook URL, Auth, Payload, Timeout

Four fields in the adapter configuration control how Paperclip connects to the OpenClaw gateway. Getting any of them wrong typically produces a silent failure: the heartbeat fires, the request fails, and the agent never executes.

A minimal working configuration looks like this:

{
  "adapter": "openclaw",
  "webhookUrl": "http://127.0.0.1:18789",
  "webhookAuthHeader": "Bearer ${secrets.openclaw_gateway_token}",
  "payloadTemplate": {},
  "timeoutSec": 30
}

Here's what each field controls and where the common mistakes are.

webhookUrl

This is the address of the OpenClaw gateway. For local development, the default is http://127.0.0.1:18789. For production, you'll point this at a remote host, usually over a private network. Tailscale is a common pattern for teams that want to avoid exposing the gateway to the public internet.

The most frequent setup failure is a webhookUrl that resolves correctly from the operator's machine but isn't reachable from the Paperclip server. If your first heartbeat run returns a connection error, verify that the gateway process is running on the target host, listening on the expected port, and that network rules allow inbound traffic from wherever Paperclip is hosted.

webhookAuthHeader

The gateway expects an authentication token in the request header. Configure this using Paperclip's secret reference syntax (${secrets.openclaw_gateway_token}), not a raw token string. Paperclip encrypts secret values at rest and redacts them from heartbeat run logs. A hardcoded token in the adapter config will appear in plaintext in every log entry that includes the request payload.

Each agent connecting to a gateway needs a valid token. If you add a new agent and skip this step, the heartbeat will fire but the gateway will reject the request with an authentication error. Check the errorCode field in the heartbeat run record if you see agents failing immediately after provisioning.

payloadTemplate

The payloadTemplate field lets you inject additional root-level fields into every request Paperclip sends to the gateway. The standard payload (execution context plus the paperclip metadata key) ships by default. The template merges on top of it.

Use this when you need the OpenClaw runtime to route or configure execution based on metadata that Paperclip doesn't include natively. A practical example: if you run gateway instances across multiple environments, you can pass a target environment in the template so the runtime selects the correct workspace.

"payloadTemplate": {
  "environment": "staging",
  "priority": "high"
}

Most teams leave this empty at initial setup and add fields later as their routing needs become more specific.
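The merge semantics matter: template fields land at the root and overwrite colliding keys. A minimal sketch, assuming a shallow root-level merge:

```python
def merge_payload_template(standard_payload: dict, template: dict) -> dict:
    # Template fields merge on top of the standard payload at the root level.
    # Caution: a template key that collides with a standard field (including
    # "paperclip") overwrites it -- one source of malformed payloads.
    merged = dict(standard_payload)  # copy; don't mutate the original
    merged.update(template)
    return merged
```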

timeoutSec

This sets how long Paperclip waits for the gateway to respond before marking the heartbeat run as failed. The default of 30 seconds works for synchronous heartbeats (planning, status review, lightweight decisions). For anything involving code execution, 30 seconds is too short.

If you're using the synchronous execution pattern for heavier tasks, increase this to 120–300 seconds depending on the complexity of the work. If you're using the async pattern (202 Accepted with callback), the timeout only applies to the initial handshake, so 30 seconds is usually sufficient. The gateway just needs to acknowledge receipt, not complete the work.

A useful rule of thumb: if you find yourself pushing timeoutSec past 300 seconds to keep sync heartbeats from failing, switch to async. The sync pattern wasn't designed for workloads that take that long, and a dropped connection at the five-minute mark loses the entire result.
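That rule of thumb can be written down as a tiny helper. The thresholds mirror the guidance above; they are editorial guidance, not a Paperclip default.

```python
def recommended_pattern(expected_task_sec: float):
    """Return (pattern, timeoutSec) for an expected task duration.

    Thresholds follow the rule of thumb above, not any tool default.
    """
    if expected_task_sec <= 30:
        return ("sync", 30)          # default timeout is fine
    if expected_task_sec <= 300:
        return ("sync", max(120, int(expected_task_sec)))
    # Past ~300s, sync risks losing the whole result on a dropped connection.
    return ("async", 30)             # timeout only covers the 202 handshake
```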

How to Test the Integration with Heartbeat Runs and Logs

A manual heartbeat run is the primary way to verify that the full execution path works. Mock payloads and unit tests won't catch the configuration, networking, and session issues that cause failures in production. Run a real heartbeat against a real gateway before you hand off any agent to an automated schedule.

Step 1: Trigger a single heartbeat

Use the Paperclip CLI to fire one heartbeat against the agent you want to test:

paperclip heartbeat run --agent-id agent_cto_01

This command sends the POST request to your configured gateway URL and streams the run output to your terminal. For a synchronous heartbeat, it blocks until the gateway responds. For an async heartbeat, it returns after the 202 handshake and then polls for the callback result.

A passing run looks like this:

[heartbeat] agent_cto_01 | run_id: hb_run_7f3a...
[heartbeat] POST http://127.0.0.1:18789 → 200 OK (4.2s)
[heartbeat] session: ses_abc123 (resumed)
[heartbeat] exitCode: 0
[heartbeat] result: task_impl_auth_module marked complete

The fields to check on a successful run: exitCode: 0 confirms the agent executed without errors. The session line tells you whether the agent started a new session or resumed an existing one. The response time (4.2s in this example) gives you a baseline for timeout tuning.

A failing run surfaces the problem in the errorCode field:

[heartbeat] agent_cto_01 | run_id: hb_run_8b2c...
[heartbeat] POST http://127.0.0.1:18789 → ERROR
[heartbeat] exitCode: 1
[heartbeat] errorCode: openclaw_gateway_unreachable

If your first heartbeat fails, check the errorCode before investigating further. The most common codes on initial setup:

| errorCode | What it means | First thing to check |
| --- | --- | --- |
| openclaw_gateway_unreachable | Paperclip couldn't connect to the webhook URL | Gateway process running? Port open? Firewall rules between Paperclip and the host? |
| openclaw_auth_failed | Gateway rejected the authentication header | Token valid? Using ${secrets.name} syntax? Token matches what the gateway expects? |
| openclaw_gateway_timeout | Gateway didn't respond within timeoutSec | Increase timeout, or switch to async if the task is heavy |
| openclaw_gateway_pairing_required | Gateway requires device authorization | Run openclaw devices approve --latest on the gateway host |
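For quick triage, the table can be folded into a lookup. The error codes are the ones listed above; the messages are just reminders, not gateway output.

```python
# First-check lookup for the common setup errors (codes from the table above).
FIRST_CHECKS = {
    "openclaw_gateway_unreachable":
        "Is the gateway process running, the port open, and the firewall clear?",
    "openclaw_auth_failed":
        "Is the token valid and referenced via ${secrets.name} syntax?",
    "openclaw_gateway_timeout":
        "Increase timeoutSec, or switch heavy tasks to the async pattern.",
    "openclaw_gateway_pairing_required":
        "Run `openclaw devices approve --latest` on the gateway host.",
}

def triage(error_code: str) -> str:
    return FIRST_CHECKS.get(error_code,
                            "Unknown code: pull the full heartbeat logs.")
```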

Step 2: Verify session persistence

A first successful heartbeat confirms connectivity and authentication. A second one helps confirm that session state is actually being carried forward. Run the same command a second time:

paperclip heartbeat run --agent-id agent_cto_01

On the second run, check two things. First, the session line should show resumed with the same session ID from the first run, not new. If the agent started a fresh session, Paperclip either didn't store the session ID from the first run or the gateway invalidated it between cycles. Second, verify in the dashboard that both heartbeat records for this agent show matching values in runtime.sessionId and runtime.sessionParams.

If the session ID doesn't carry over, the most common causes are: the gateway returned a response without a sessionId field (check the full response payload in the heartbeat logs), or a missing agent:{id}: prefix in the session key is causing the gateway to route the session to its default agent instead of the one you specified.
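Assuming the agent:{id}: prefix format described above, building the session key explicitly avoids the default-agent routing problem:

```python
def session_key(agent_id: str, session_id: str) -> str:
    # Scope the session to a specific agent. Without the "agent:{id}:" prefix
    # the gateway may route the session to its default agent (key format
    # assumed from the failure mode described above).
    return f"agent:{agent_id}:{session_id}"
```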

Step 3: Inspect the full request and response payloads

Heartbeat run logs capture the complete outbound request and the gateway's response. Pull the logs for your test run:

paperclip heartbeat logs --run-id hb_run_7f3a

Compare the outbound request against your adapter configuration. Confirm that the paperclip metadata key contains the correct agentId and taskId, that any payloadTemplate fields merged correctly into the root payload, and that the auth header is present (it will appear as [REDACTED] if you used secret references, which is the expected behavior).

On the response side, verify that the gateway returned a well-formed result with a sessionId and the expected output structure. Malformed responses (missing fields, unexpected nesting) typically indicate a version mismatch between the adapter and the gateway, or a payloadTemplate that's overwriting fields the gateway expects at specific paths.

When to consider the integration stable: your agent completes two consecutive heartbeats with exitCode: 0, resumes the same session on the second run, and the request/response payloads in the logs match your expected configuration. At that point, you can move the agent to an automated heartbeat schedule with reasonable confidence that the execution path is sound.
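Those stability criteria can be encoded as a check over the last two run records. The field names here are illustrative, not Paperclip's actual schema.

```python
def integration_stable(runs: list) -> bool:
    """Encode the stability criteria above: two consecutive heartbeats with
    exit code 0, and the second run resuming the first run's session.

    Each run is a dict with illustrative keys: exitCode, sessionId, resumed.
    """
    if len(runs) < 2:
        return False
    first, second = runs[-2], runs[-1]
    return (first["exitCode"] == 0
            and second["exitCode"] == 0
            and second["resumed"]
            and second["sessionId"] == first["sessionId"])
```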

Common Failure Points and What to Check First

Even a well-designed integration can fail for routine operational reasons. The following checklist represents the most frequent failure points in production environments:

  1. Webhook Unreachable: The most common issue for new users. If Paperclip cannot connect to the gateway, check for DNS failures or firewall blocks. In local setups, ensure that the OpenClaw gateway service is actually running and bound to the correct port.
  2. Authentication Failures: Often caused by a missing or invalid authentication header in the adapter configuration.
  3. Timeout Problems: If heartbeat logs show a timeout error code, the agent may be taking longer to respond than the timeoutSec limit allows. Increasing the timeout or implementing asynchronous callbacks is the standard fix.
  4. Session Persistence Issues: If agents seem to "forget" context between heartbeats, verify that OpenClaw is returning a sessionId and that Paperclip is persisting it. In multi-agent setups, a missing agent:{id}: prefix in the session key can cause the gateway to default to a "main" agent, leading to identity mismatches.
  5. Pairing Required: In some configurations, Paperclip agents hit a silent failure with errorCode: openclaw_gateway_pairing_required. This necessitates running the openclaw devices approve --latest command on the gateway host to authorize the Paperclip instance as a trusted device.

Conclusion

At this point, you should have a working adapter configuration connecting Paperclip to your OpenClaw gateway, at least one agent that completes consecutive heartbeat runs with exitCode: 0 and stable session persistence, and enough familiarity with the heartbeat logs to diagnose the common failure modes when they appear in production.

The integration itself is mechanically simple: a webhook, an auth header, a payload structure, and a timeout. The operational complexity comes from managing session state across agents, setting budget ceilings that match your actual workload patterns, and building enough monitoring around heartbeat runs that you catch failures before they compound. Focus your early production effort there, not on the transport layer.

If you're evaluating whether this architecture fits your team, the honest test is agent count. Teams running one or two visible agents may not need an orchestration layer yet. Once several agents are running across repos, on schedules, and against real budgets, that layer becomes much easier to justify.

Need to validate whether this setup fits your agent workflow?

Review the architecture with Codebridge →

What is the OpenClaw Paperclip integration?

The integration connects Paperclip to an OpenClaw gateway through a dedicated HTTP adapter so Paperclip can assign work, trigger heartbeat runs, and track agent sessions across cycles.

Why is OpenClaw alone not enough for multi-agent operations?

The article explains that standalone OpenClaw becomes harder to manage once multiple agents are running because work can overlap, token costs can grow without controls, and session continuity is lost after reboots or dropped terminals.

What does Paperclip add on top of OpenClaw?

Paperclip adds orchestration and governance features such as company-level isolation, role structures, heartbeat-based execution, budget enforcement, and human approval for high-impact decisions.

How does the OpenClaw and Paperclip connection work?

Each heartbeat sends a POST request from Paperclip to the OpenClaw gateway URL. The payload separates execution context at the root level from orchestration metadata under a dedicated paperclip key for traceability and routing.

What configuration fields matter most in the integration setup?

The article identifies four key fields in the adapter configuration: webhookUrl, webhookAuthHeader, payloadTemplate, and timeoutSec. These control gateway reachability, authentication, extra request fields, and response timing.

How do you test whether the integration is working correctly?

The primary test is a manual heartbeat run against a real gateway. The article recommends confirming a successful run, then running a second heartbeat to verify that the same session resumes correctly and that the request and response payloads match the expected configuration.

What are the most common OpenClaw Paperclip integration failures?

The article lists five common issues: unreachable webhooks, authentication failures, timeout problems, session persistence issues, and pairing requirements that require device authorization on the gateway host.
