Compare commits

..

2 Commits

Author SHA1 Message Date
dotta
5f80aa9aa0 Remove telemetry fallback and root preflight hooks
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-09 10:53:28 -05:00
dotta
d03efda207 Preflight worktree links and add telemetry fallback 2026-04-09 10:22:34 -05:00
1202 changed files with 12261 additions and 410532 deletions

View File

@@ -154,14 +154,6 @@ Each AGENTS.md body should include not just what the agent does, but how they fi
This turns a collection of agents into an organization that actually works together. Without workflow context, agents operate in isolation — they do their job but don't know what happens before or after them.
Add a concise execution contract to every generated working agent:
- Start actionable work in the same heartbeat and do not stop at a plan unless planning was requested.
- Leave durable progress in comments, documents, or work products with the next action.
- Use child issues for long or parallel delegated work instead of polling agents, sessions, or processes.
- Mark blocked work with the unblock owner and action.
- Respect budget, pause/cancel, approval gates, and company boundaries.
### Step 5: Confirm Output Location
Ask the user where to write the package. Common options:

View File

@@ -105,13 +105,6 @@ Your responsibilities:
- Implement features and fix bugs
- Write tests and documentation
- Participate in code reviews
Execution contract:
- Start actionable implementation work in the same heartbeat; do not stop at a plan unless planning was requested.
- Leave durable progress with a clear next action.
- Use child issues for long or parallel delegated work instead of polling agents, sessions, or processes.
- Mark blocked work with the unblock owner and action.
```
## teams/engineering/TEAM.md

View File

@@ -548,7 +548,7 @@ Import from `@paperclipai/adapter-utils/server-utils`:
### Prompt Templates
- Support `promptTemplate` for every run
- Use `renderTemplate()` with the standard variable set
- Default prompt should use `DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE` from `@paperclipai/adapter-utils/server-utils` so local adapters share Paperclip's execution contract: act in the same heartbeat, avoid planning-only exits unless requested, leave durable progress and a next action, use child issues instead of polling, mark blockers with owner/action, and respect governance boundaries.
- Default prompt: `"You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work."`
### Error Handling
- Differentiate timeout vs process error vs parse failure

View File

@@ -1,230 +0,0 @@
---
name: deal-with-security-advisory
description: >
Handle a GitHub Security Advisory response for Paperclip, including
confidential fix development in a temporary private fork, human coordination
on advisory-thread comments, CVE request, synchronized advisory publication,
and immediate security release steps.
---
# Security Vulnerability Response Instructions
## ⚠️ CRITICAL: This is a security vulnerability. Everything about this process is confidential until the advisory is published. Do not mention the vulnerability details in any public commit message, PR title, branch name, or comment. Do not push anything to a public branch. Do not discuss specifics in any public channel. Assume anything on the public repo is visible to attackers who will exploit the window between disclosure and user upgrades.
***
## Context
A security vulnerability has been reported via GitHub Security Advisory:
* **Advisory:** {{ghsaId}} (e.g. GHSA-x8hx-rhr2-9rf7)
* **Reporter:** {{reporterHandle}}
* **Severity:** {{severity}}
* **Notes:** {{notes}}
***
## Step 0: Fetch the Advisory Details
Pull the full advisory so you understand the vulnerability before doing anything else:
```
gh api repos/paperclipai/paperclip/security-advisories/{{ghsaId}}
```
Read the `description`, `severity`, `cvss`, and `vulnerabilities` fields. Understand the attack vector before writing code.
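If it helps, `gh`'s built-in `--jq` flag can project just those fields (a convenience sketch against the same endpoint as above):
```
gh api repos/paperclipai/paperclip/security-advisories/{{ghsaId}} \
  --jq '{severity: .severity, cvss: .cvss, vulnerabilities: .vulnerabilities}'
```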
## Step 1: Acknowledge the Report
⚠️ **This step requires a human.** The advisory thread does not have a comment API. Ask the human operator to post a comment on the private advisory thread acknowledging the report. Provide them this template:
> Thanks for the report, @{{reporterHandle}}. We've confirmed the issue and are working on a fix. We're targeting a patch release within {{timeframe}}. We'll keep you updated here.
Hand the template to your human, but do not block on their reply; continue with the steps below. The remaining steps use `gh` tools. You do have access and credentials outside of your sandbox, so use them.
## Step 2: Create the Temporary Private Fork
This is where all fix development happens. Never push to the public repo.
```
gh api --method POST \
repos/paperclipai/paperclip/security-advisories/{{ghsaId}}/forks
```
This returns a repository object for the private fork. Save the `full_name` and `clone_url`.
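Rather than copying those fields by hand, you can capture them when you make the call above (a sketch, assuming `jq` is available locally; run the POST only once):
```
response="$(gh api --method POST \
  repos/paperclipai/paperclip/security-advisories/{{ghsaId}}/forks)"
full_name="$(printf '%s' "$response" | jq -r '.full_name')"
clone_url="$(printf '%s' "$response" | jq -r '.clone_url')"
```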
Clone it and set up your workspace:
```
# Clone the private fork somewhere outside ~/paperclip
git clone <clone_url_from_response> ~/security-patch-{{ghsaId}}
cd ~/security-patch-{{ghsaId}}
git checkout -b security-fix
```
**Do not edit `~/paperclip`** — the dev server is running off the `~/paperclip` master branch and we don't want to touch it. All work happens in the private fork clone.
**TIPS:**
* Do not commit `pnpm-lock.yaml` — the repo has actions to manage this
* Do not use descriptive branch names that leak the vulnerability (e.g., no `fix-dns-rebinding-rce`). Use something generic like `security-fix`
* All work stays in the private fork until publication
* CI/GitHub Actions will NOT run on the temporary private fork — this is a GitHub limitation by design. You must run tests locally
## Step 3: Develop and Validate the Fix
Write the patch. Same content standards as any PR:
* It must functionally work — **run tests locally** since CI won't run on the private fork
* Consider the whole codebase, not just the narrow vulnerability path. A patch that fixes one vector but opens another is worse than no patch
* Ensure backwards compatibility for the database, or be explicit about what breaks
* Make sure any UI components still look correct if the fix touches them
* The fix should be minimal and focused — don't bundle unrelated changes into a security patch. Reviewers (and the reporter) should be able to read the diff and understand exactly what changed and why
**Specific to security fixes:**
* Verify the fix actually closes the attack vector described in the advisory. Reproduce the vulnerability first (using the reporter's description), then confirm the patch prevents it
* Consider adjacent attack vectors — if DNS rebinding is the issue, are there other endpoints or modes with the same class of problem?
* Do not introduce new dependencies unless absolutely necessary — new deps in a security patch raise eyebrows
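A minimal local validation pass before pushing might look like the following; these mirror the repo's standard checks, and you should add whatever narrower suites your change touches:
```
pnpm install --frozen-lockfile
pnpm -r typecheck
pnpm test
```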
Push your fix to the private fork:
```
git add -A
git commit -m "Fix security vulnerability"
git push origin security-fix
```
## Step 4: Coordinate with the Reporter
⚠️ **This step requires a human.** Ask the human operator to post on the advisory thread letting the reporter know the fix is ready and giving them a chance to review. Provide them this template:
> @{{reporterHandle}} — fix is ready in the private fork if you'd like to review before we publish. Planning to release within {{timeframe}}.
Proceed to the next step without blocking on the reporter's reply.
## Step 5: Request a CVE
This makes vulnerability scanners (npm audit, Snyk, Dependabot) warn users to upgrade. Without it, nobody gets an automated notification.
```
gh api --method POST \
repos/paperclipai/paperclip/security-advisories/{{ghsaId}}/cve
```
GitHub is a CVE Numbering Authority and will assign one automatically. The CVE may take a few hours to propagate after the advisory is published.
## Step 6: Publish Everything Simultaneously
This all happens at once — do not stagger these steps. The goal is **zero window** between the vulnerability becoming public knowledge and the fix being available.
### 6a. Verify reporter credit before publishing
```
gh api repos/paperclipai/paperclip/security-advisories/{{ghsaId}} --jq '.credits'
```
If the reporter is not credited, add them:
```
gh api --method PATCH \
repos/paperclipai/paperclip/security-advisories/{{ghsaId}} \
--input - << 'EOF'
{
"credits": [
{
"login": "{{reporterHandle}}",
"type": "reporter"
}
]
}
EOF
```
### 6b. Update the advisory with the patched version and publish
```
gh api --method PATCH \
repos/paperclipai/paperclip/security-advisories/{{ghsaId}} \
--input - << 'EOF'
{
"state": "published",
"vulnerabilities": [
{
"package": {
"ecosystem": "npm",
"name": "paperclip"
},
"vulnerable_version_range": "< {{patchedVersion}}",
"patched_versions": "{{patchedVersion}}"
}
]
}
EOF
```
Publishing the advisory does all of the following simultaneously:
* Makes the GHSA public
* Merges the temporary private fork into your repo
* Triggers the CVE assignment (if requested in step 5)
### 6c. Cut a release immediately after merge
```
cd ~/paperclip
git pull origin master
gh release create v{{patchedVersion}} \
--repo paperclipai/paperclip \
--title "v{{patchedVersion}} — Security Release" \
--notes "## Security Release
This release fixes a critical security vulnerability.
### What was fixed
{{briefDescription}} (e.g., Remote code execution via DNS rebinding in \`local_trusted\` mode)
### Advisory
https://github.com/paperclipai/paperclip/security/advisories/{{ghsaId}}
### Credit
Thanks to @{{reporterHandle}} for responsibly disclosing this vulnerability.
### Action required
All users running versions prior to {{patchedVersion}} should upgrade immediately."
```
## Step 7: Post-Publication Verification
```
# Verify the advisory is published and CVE is assigned
gh api repos/paperclipai/paperclip/security-advisories/{{ghsaId}} \
--jq '{state: .state, cve_id: .cve_id, published_at: .published_at}'
# Verify the release exists
gh release view v{{patchedVersion}} --repo paperclipai/paperclip
```
If the CVE hasn't been assigned yet, that's normal — it can take a few hours.
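If you want to wait for it, a small polling sketch (the interval is arbitrary):
```
until cve="$(gh api repos/paperclipai/paperclip/security-advisories/{{ghsaId}} --jq '.cve_id')" \
  && [ "$cve" != "null" ]; do
  sleep 300
done
echo "CVE assigned: $cve"
```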
⚠️ **Human step:** Ask the human operator to post a final comment on the advisory thread confirming publication and thanking the reporter.
Tell the human operator what you did by posting a comment to this task, including:
* The published advisory URL: `https://github.com/paperclipai/paperclip/security/advisories/{{ghsaId}}`
* The release URL
* Whether the CVE has been assigned yet
* All URLs to any pull requests or branches

View File

@@ -177,12 +177,8 @@ real name or email). To find GitHub usernames:
**Never expose contributor email addresses.** Use `@username` only.
Exclude bot accounts (e.g. `lockfile-bot`, `dependabot`) from the list.
Exclude Paperclip founders from the list (e.g. `cryppadotta`, `forgottendev`, `devinfoley`, `sockmonster`, `scotttong`).
List contributors in alphabetical order by GitHub username (case-insensitive).
If there are no contributors left after exclusions, then just skip this section and don't mention it.
Exclude bot accounts (e.g. `lockfile-bot`, `dependabot`) from the list. List contributors
in alphabetical order by GitHub username (case-insensitive).
## Step 6 — Review Before Release

View File

@@ -2,6 +2,3 @@ DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip
PORT=3100
SERVE_UI=false
BETTER_AUTH_SECRET=paperclip-dev-secret
# Discord webhook for daily merge digest (scripts/discord-daily-digest.sh)
# DISCORD_WEBHOOK_URL=https://discord.com/api/webhooks/...

View File

@@ -38,8 +38,6 @@
-
> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and discuss it in `#dev` before opening the PR. Feature PRs that overlap with planned core work may need to be redirected — check the roadmap first. See `CONTRIBUTING.md`.
## Model Used
<!--
@@ -59,7 +57,6 @@
- [ ] I have included a thinking path that traces from project context to this change
- [ ] I have specified the model used (with version and capability details)
- [ ] I have checked ROADMAP.md and confirmed this PR does not duplicate planned core work
- [ ] I have run tests locally and they pass
- [ ] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after screenshots

View File

@@ -14,7 +14,7 @@ permissions:
jobs:
build-and-push:
runs-on: ubuntu-latest
timeout-minutes: 60
timeout-minutes: 30
concurrency:
group: docker-${{ github.ref }}
cancel-in-progress: true

View File

@@ -23,9 +23,7 @@ jobs:
- name: Block manual lockfile edits
if: github.head_ref != 'chore/refresh-lockfile'
run: |
# Diff the PR branch against its merge base so recent base-branch commits
# do not masquerade as changes made by the PR itself.
changed="$(git diff --name-only "${{ github.event.pull_request.base.sha }}...${{ github.event.pull_request.head.sha }}")"
changed="$(git diff --name-only "${{ github.event.pull_request.base.sha }}" "${{ github.event.pull_request.head.sha }}")"
if printf '%s\n' "$changed" | grep -qx 'pnpm-lock.yaml'; then
echo "Do not commit pnpm-lock.yaml in pull requests. CI owns lockfile updates."
exit 1
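For reference, the three-dot range `A...B` diffs `B` against the merge base of `A` and `B`, which is why it ignores commits that landed on the base branch after the PR forked. A sketch of the equivalent two-step form, with `$BASE_SHA` and `$HEAD_SHA` standing in for the workflow's pull request SHAs:
```sh
merge_base="$(git merge-base "$BASE_SHA" "$HEAD_SHA")"
git diff --name-only "$merge_base" "$HEAD_SHA"
```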
@@ -43,20 +41,48 @@ jobs:
node-version: 24
- name: Validate Dockerfile deps stage
run: node ./scripts/check-docker-deps-stage.mjs
- name: Validate release package manifest
run: node ./scripts/release-package-map.mjs check
- name: Verify release package bootstrap for changed manifests
run: |
mapfile -t changed_paths < <(git diff --name-only "${{ github.event.pull_request.base.sha }}...${{ github.event.pull_request.head.sha }}")
PAPERCLIP_RELEASE_BOOTSTRAP_BASE_SHA="${{ github.event.pull_request.base.sha }}" \
node ./scripts/check-release-package-bootstrap.mjs "${changed_paths[@]}"
missing=0
# Extract only the deps stage from the Dockerfile
deps_stage="$(awk '/^FROM .* AS deps$/{found=1; next} found && /^FROM /{exit} found{print}' Dockerfile)"
if [ -z "$deps_stage" ]; then
echo "::error::Could not extract deps stage from Dockerfile (expected 'FROM ... AS deps')"
exit 1
fi
# Derive workspace search roots from pnpm-workspace.yaml (exclude dev-only packages)
search_roots="$(grep '^ *- ' pnpm-workspace.yaml | sed 's/^ *- //' | sed 's/\*$//' | grep -v 'examples' | grep -v 'create-paperclip-plugin' | tr '\n' ' ')"
if [ -z "$search_roots" ]; then
echo "::error::Could not derive workspace roots from pnpm-workspace.yaml"
exit 1
fi
# Check all workspace package.json files are copied in the deps stage
for pkg in $(find $search_roots -maxdepth 2 -name package.json -not -path '*/examples/*' -not -path '*/create-paperclip-plugin/*' -not -path '*/node_modules/*' 2>/dev/null | sort -u); do
dir="$(dirname "$pkg")"
if ! echo "$deps_stage" | grep -q "^COPY ${dir}/package.json"; then
echo "::error::Dockerfile deps stage missing: COPY ${pkg} ${dir}/"
missing=1
fi
done
# Check patches directory is copied if it exists
if [ -d patches ] && ! echo "$deps_stage" | grep -q '^COPY patches/'; then
echo "::error::Dockerfile deps stage missing: COPY patches/ patches/"
missing=1
fi
if [ "$missing" -eq 1 ]; then
echo "Dockerfile deps stage is out of sync. Update it to include the missing files."
exit 1
fi
- name: Validate dependency resolution when manifests change
run: |
changed="$(git diff --name-only "${{ github.event.pull_request.base.sha }}...${{ github.event.pull_request.head.sha }}")"
changed="$(git diff --name-only "${{ github.event.pull_request.base.sha }}" "${{ github.event.pull_request.head.sha }}")"
manifest_pattern='(^|/)package\.json$|^pnpm-workspace\.yaml$|^\.npmrc$|^pnpmfile\.(cjs|js|mjs)$'
if printf '%s\n' "$changed" | grep -Eq "$manifest_pattern"; then
pnpm install --lockfile-only --ignore-scripts --no-frozen-lockfile
@@ -85,88 +111,16 @@ jobs:
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Typecheck workspaces whose build scripts skip TypeScript
run: pnpm run typecheck:build-gaps
- name: Typecheck
run: pnpm -r typecheck
- name: Run general test suites
run: pnpm test:run:general
- name: Verify release registry test coverage
run: pnpm run test:release-registry
- name: Run tests
run: pnpm test:run
- name: Build
run: pnpm build
verify_serialized_server:
name: Verify serialized server suites (${{ matrix.shard_label }})
needs: [policy]
runs-on: ubuntu-latest
timeout-minutes: 20
strategy:
fail-fast: false
matrix:
include:
- shard_index: 0
shard_count: 4
shard_label: 1/4
- shard_index: 1
shard_count: 4
shard_label: 2/4
- shard_index: 2
shard_count: 4
shard_label: 3/4
- shard_index: 3
shard_count: 4
shard_label: 4/4
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 9.15.4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 24
cache: pnpm
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Run serialized server test shard
run: pnpm test:run:serialized -- --shard-index ${{ matrix.shard_index }} --shard-count ${{ matrix.shard_count }}
canary_dry_run:
name: Canary Dry Run
needs: [policy]
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 9.15.4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 24
cache: pnpm
- name: Install dependencies
run: pnpm install --frozen-lockfile
# `release.sh` always executes its Step 2/7 workspace build, even when
# `--skip-verify` bypasses the initial verification gate.
- name: Release canary dry run via release.sh internal build
- name: Release canary dry run
run: |
git checkout -B master HEAD
git checkout -- pnpm-lock.yaml
@@ -195,6 +149,9 @@ jobs:
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Build
run: pnpm build
- name: Install Playwright
run: npx playwright install --with-deps chromium

View File

@@ -50,9 +50,6 @@ jobs:
node-version: 24
cache: pnpm
- name: Validate release package manifest
run: node ./scripts/release-package-map.mjs check
- name: Install dependencies
run: pnpm install --no-frozen-lockfile
@@ -92,9 +89,6 @@ jobs:
node-version: 24
cache: pnpm
- name: Validate release package manifest
run: node ./scripts/release-package-map.mjs check
- name: Install dependencies
run: pnpm install --no-frozen-lockfile
@@ -145,9 +139,6 @@ jobs:
node-version: 24
cache: pnpm
- name: Validate release package manifest
run: node ./scripts/release-package-map.mjs check
- name: Install dependencies
run: pnpm install --no-frozen-lockfile
@@ -186,9 +177,6 @@ jobs:
node-version: 24
cache: pnpm
- name: Validate release package manifest
run: node ./scripts/release-package-map.mjs check
- name: Install dependencies
run: pnpm install --no-frozen-lockfile

5
.gitignore vendored
View File

@@ -1,9 +1,5 @@
node_modules
node_modules/
**/node_modules
**/node_modules/
dist/
ui/storybook-static/
.env
*.tsbuildinfo
drizzle/meta/
@@ -36,7 +32,6 @@ server/src/**/*.d.ts
server/src/**/*.d.ts.map
tmp/
feedback-export-*
diagnostics/
# Editor / tool temp files
*.tmp

View File

@@ -1,3 +1 @@
Dotta <bippadotta@protonmail.com> <34892728+cryppadotta@users.noreply.github.com>
Dotta <bippadotta@protonmail.com> <forgottenrunes@protonmail.com>
Dotta <bippadotta@protonmail.com> <dotta@example.com>
Dotta <bippadotta@protonmail.com> Forgotten <forgottenrunes@protonmail.com>
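This `.mailmap` consolidates Dotta's historical commit identities under one canonical name and email. A quick way to sanity-check the mapping in a local clone (a hypothetical spot check, not part of the change):
```sh
git shortlog -se | grep -i dotta
```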

View File

@@ -108,24 +108,7 @@ Notes:
## 7. Verification Before Hand-off
Default local/agent test path:
```sh
pnpm test
```
This is the cheap default and only runs the Vitest suite. Browser suites stay opt-in:
```sh
pnpm test:e2e
pnpm test:release-smoke
```
Run the browser suites only when your change touches them or when you are explicitly verifying CI/release flows.
For normal issue work, run the smallest relevant verification first. Do not default to repo-wide typecheck/build/test on every heartbeat when a narrower check is enough to prove the change.
Run this full check before claiming repo work done in a PR-ready hand-off, or when the change scope is broad enough that targeted checks are not sufficient:
Run this full check before claiming done:
```sh
pnpm -r typecheck

View File

@@ -51,21 +51,6 @@ All tests must pass before a PR can be merged. Run them locally first and verify
We use [Greptile](https://greptile.com) for automated code review. Your PR must achieve a **5/5 Greptile score** with **all Greptile comments addressed** before it can be merged. If Greptile leaves comments, fix or respond to each one and request a re-review.
## Feature Contributions
We actively manage the core Paperclip feature roadmap.
Uncoordinated feature PRs against the core product may be closed, even when the implementation is thoughtful and high quality. That is about roadmap ownership, product coherence, and long-term maintenance commitment, not a judgment about the effort.
If you want to contribute a feature:
- Check [ROADMAP.md](ROADMAP.md) first
- Start the discussion in Discord -> `#dev` before writing code
- If the idea fits as an extension, prefer building it with the [plugin system](doc/plugins/PLUGIN_SPEC.md)
- If you want to show a possible direction, reference implementations are welcome as feedback, but they generally will not be merged directly into core
Bug fixes, docs improvements, and small targeted changes are still the easiest path to getting merged, and we really do appreciate them.
## General Rules (both paths)
- Write clear commit messages

View File

@@ -1,9 +1,16 @@
# syntax=docker/dockerfile:1.20
FROM node:lts-trixie-slim AS base
ARG USER_UID=1000
ARG USER_GID=1000
RUN apt-get update \
&& apt-get install -y --no-install-recommends ca-certificates gosu curl gh git wget ripgrep python3 \
&& apt-get install -y --no-install-recommends ca-certificates gosu curl git wget ripgrep python3 \
&& mkdir -p -m 755 /etc/apt/keyrings \
&& wget -nv -O/etc/apt/keyrings/githubcli-archive-keyring.gpg https://cli.github.com/packages/githubcli-archive-keyring.gpg \
&& echo "20e0125d6f6e077a9ad46f03371bc26d90b04939fb95170f5a1905099cc6bcc0 /etc/apt/keyrings/githubcli-archive-keyring.gpg" | sha256sum -c - \
&& chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
&& mkdir -p -m 755 /etc/apt/sources.list.d \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" > /etc/apt/sources.list.d/github-cli.list \
&& apt-get update \
&& apt-get install -y --no-install-recommends gh \
&& rm -rf /var/lib/apt/lists/* \
&& corepack enable
@@ -22,7 +29,6 @@ COPY packages/shared/package.json packages/shared/
COPY packages/db/package.json packages/db/
COPY packages/adapter-utils/package.json packages/adapter-utils/
COPY packages/mcp-server/package.json packages/mcp-server/
COPY packages/adapters/acpx-local/package.json packages/adapters/acpx-local/
COPY packages/adapters/claude-local/package.json packages/adapters/claude-local/
COPY packages/adapters/codex-local/package.json packages/adapters/codex-local/
COPY packages/adapters/cursor-local/package.json packages/adapters/cursor-local/
@@ -31,8 +37,6 @@ COPY packages/adapters/openclaw-gateway/package.json packages/adapters/openclaw-
COPY packages/adapters/opencode-local/package.json packages/adapters/opencode-local/
COPY packages/adapters/pi-local/package.json packages/adapters/pi-local/
COPY packages/plugins/sdk/package.json packages/plugins/sdk/
COPY --parents packages/plugins/sandbox-providers/./*/package.json packages/plugins/sandbox-providers/
COPY packages/plugins/paperclip-plugin-fake-sandbox/package.json packages/plugins/paperclip-plugin-fake-sandbox/
COPY patches/ patches/
RUN pnpm install --frozen-lockfile
@@ -52,9 +56,6 @@ ARG USER_GID=1000
WORKDIR /app
COPY --chown=node:node --from=build /app /app
RUN npm install --global --omit=dev @anthropic-ai/claude-code@latest @openai/codex@latest opencode-ai \
&& apt-get update \
&& apt-get install -y --no-install-recommends openssh-client jq \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /paperclip \
&& chown node:node /paperclip

156
README.md
View File

@@ -6,8 +6,7 @@
<a href="#quickstart"><strong>Quickstart</strong></a> &middot;
<a href="https://paperclip.ing/docs"><strong>Docs</strong></a> &middot;
<a href="https://github.com/paperclipai/paperclip"><strong>GitHub</strong></a> &middot;
<a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a> &middot;
<a href="https://x.com/papercliping"><strong>Twitter</strong></a>
<a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a>
</p>
<p align="center">
@@ -157,115 +156,6 @@ Paperclip handles the hard orchestration details correctly.
<br/>
## What's Under the Hood
Paperclip is a full control plane, not a wrapper. Before you build any of this yourself, know that it already exists:
```
┌──────────────────────────────────────────────────────────────┐
│ PAPERCLIP SERVER │
│ │
│ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │Identity & │ │ Work & │ │ Heartbeat │ │Governance │ │
│ │ Access │ │ Tasks │ │ Execution │ │& Approvals│ │
│ └───────────┘ └───────────┘ └───────────┘ └───────────┘ │
│ │
│ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │ Org Chart │ │Workspaces │ │ Plugins │ │ Budget │ │
│ │ & Agents │ │ & Runtime │ │ │ │ & Costs │ │
│ └───────────┘ └───────────┘ └───────────┘ └───────────┘ │
│ │
│ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │ Routines │ │ Secrets & │ │ Activity │ │ Company │ │
│ │& Schedules│ │ Storage │ │ & Events │ │Portability│ │
│ └───────────┘ └───────────┘ └───────────┘ └───────────┘ │
└──────────────────────────────────────────────────────────────┘
▲ ▲ ▲ ▲
┌─────┴─────┐ ┌─────┴─────┐ ┌─────┴─────┐ ┌─────┴─────┐
│ Claude │ │ Codex │ │ CLI │ │ HTTP/web │
│ Code │ │ │ │ agents │ │ bots │
└───────────┘ └───────────┘ └───────────┘ └───────────┘
```
### The Systems
<table>
<tr>
<td width="50%">
**Identity & Access** — Two deployment modes (trusted local or authenticated), board users, agent API keys, short-lived run JWTs, company memberships, invite flows, and OpenClaw onboarding. Every mutating request is traced to an actor.
</td>
<td width="50%">
**Org Chart & Agents** — Agents have roles, titles, reporting lines, permissions, and budgets. Adapter examples match the diagram: Claude Code, Codex, CLI agents such as Cursor/Gemini/bash, HTTP/webhook bots such as OpenClaw, and external adapter plugins. If it can receive a heartbeat, it's hired.
</td>
</tr>
<tr>
<td>
**Work & Task System** — Issues carry company/project/goal/parent links, atomic checkout with execution locks, first-class blocker dependencies, comments, documents, attachments, work products, labels, and inbox state. No double-work, no lost context.
</td>
<td>
**Heartbeat Execution** — DB-backed wakeup queue with coalescing, budget checks, workspace resolution, secret injection, skill loading, and adapter invocation. Runs produce structured logs, cost events, session state, and audit trails. Recovery handles orphaned runs automatically.
</td>
</tr>
<tr>
<td>
**Workspaces & Runtime** — Project workspaces, isolated execution workspaces (git worktrees, operator branches), and runtime services (dev servers, preview URLs). Agents work in the right directory with the right context every time.
</td>
<td>
**Governance & Approvals** — Board approval workflows, execution policies with review/approval stages, decision tracking, budget hard-stops, agent pause/resume/terminate, and full audit logging. You're the board — nothing ships without your sign-off.
</td>
</tr>
<tr>
<td>
**Budget & Cost Control** — Token and cost tracking by company, agent, project, goal, issue, provider, and model. Scoped budget policies with warning thresholds and hard stops. Overspend pauses agents and cancels queued work automatically.
</td>
<td>
**Routines & Schedules** — Recurring tasks with cron, webhook, and API triggers. Concurrency and catch-up policies. Each routine execution creates a tracked issue and wakes the assigned agent — no manual kick-offs needed.
</td>
</tr>
<tr>
<td>
**Plugins** — Instance-wide plugin system with out-of-process workers, capability-gated host services, job scheduling, tool exposure, and UI contributions. Extend Paperclip without forking it.
</td>
<td>
**Secrets & Storage** — Instance and company secrets, encrypted local storage, provider-backed object storage, attachments, and work products. Sensitive values stay out of prompts unless a scoped run explicitly needs them.
</td>
</tr>
<tr>
<td>
**Activity & Events** — Mutating actions, heartbeat state changes, cost events, approvals, comments, and work products are recorded as durable activity so operators can audit what happened and why.
</td>
<td>
**Company Portability** — Export and import entire organizations — agents, skills, projects, routines, and issues — with secret scrubbing and collision handling. One deployment, many companies, complete data isolation.
</td>
</tr>
</table>
<br/>
## What Paperclip is not
| | |
@@ -287,14 +177,6 @@ Open source. Self-hosted. No Paperclip account required.
npx paperclipai onboard --yes
```
That quickstart path now defaults to trusted local loopback mode for the fastest first run. To start in authenticated/private mode instead, choose a bind preset explicitly:
```bash
npx paperclipai onboard --yes --bind lan
# or:
npx paperclipai onboard --yes --bind tailnet
```
If you already have Paperclip configured, rerunning `onboard` keeps the existing config in place. Use `paperclipai configure` to edit settings.
Or manually:
@@ -343,15 +225,11 @@ pnpm dev:once # Full dev without file watching
pnpm dev:server # Server only
pnpm build # Build all
pnpm typecheck # Type checking
pnpm test # Cheap default test run (Vitest only)
pnpm test:watch # Vitest watch mode
pnpm test:e2e # Playwright browser suite
pnpm test:run # Run tests
pnpm db:generate # Generate DB migration
pnpm db:migrate # Apply migrations
```
`pnpm test` does not run Playwright. Browser suites stay separate and are typically run only when working on those flows or in CI.
See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.
<br/>
@@ -365,23 +243,14 @@ See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.
- ✅ Skills Manager
- ✅ Scheduled Routines
- ✅ Better Budgeting
- ✅ Agent Reviews and Approvals
- ✅ Multiple Human Users
- ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
- ⚪ Artifacts & Work Products
- ⚪ Memory / Knowledge
- ⚪ Enforced Outcomes
- ⚪ MAXIMIZER MODE
- ⚪ Deep Planning
- ⚪ Work Queues
- ⚪ Self-Organization
- ⚪ Automatic Organizational Learning
- ⚪ Artifacts & Deployments
- ⚪ CEO Chat
- ⚪ MAXIMIZER MODE
- ⚪ Multiple Human Users
- ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
- ⚪ Cloud deployments
- ⚪ Desktop App
This is the short roadmap preview. See the full roadmap in [ROADMAP.md](ROADMAP.md).
<br/>
## Community & Plugins
@@ -394,12 +263,12 @@ Paperclip collects anonymous usage telemetry to help us understand how the produ
Telemetry is **enabled by default** and can be disabled with any of the following:
| Method | How |
| -------------------- | ------------------------------------------------------- |
| Environment variable | `PAPERCLIP_TELEMETRY_DISABLED=1` |
| Standard convention | `DO_NOT_TRACK=1` |
| CI environments | Automatically disabled when `CI=true` |
| Config file | Set `telemetry.enabled: false` in your Paperclip config |
| Method | How |
|---|---|
| Environment variable | `PAPERCLIP_TELEMETRY_DISABLED=1` |
| Standard convention | `DO_NOT_TRACK=1` |
| CI environments | Automatically disabled when `CI=true` |
| Config file | Set `telemetry.enabled: false` in your Paperclip config |
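For example, a one-off invocation with telemetry disabled via either environment variable (the `run` subcommand appears elsewhere in this repo; adjust to your entry point):
```bash
PAPERCLIP_TELEMETRY_DISABLED=1 npx paperclipai run
# or, using the cross-tool convention:
DO_NOT_TRACK=1 npx paperclipai run
```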
## Contributing
@@ -410,7 +279,6 @@ We welcome contributions. See the [contributing guide](CONTRIBUTING.md) for deta
## Community
- [Discord](https://discord.gg/m4HZY7xNG3) — Join the community
- [Twitter / X](https://x.com/papercliping) — Follow updates and announcements
- [GitHub Issues](https://github.com/paperclipai/paperclip/issues) — bugs and feature requests
- [GitHub Discussions](https://github.com/paperclipai/paperclip/discussions) — ideas and RFCs

View File

@@ -1,97 +0,0 @@
# Roadmap
This document expands the roadmap preview in `README.md`.
Paperclip is still moving quickly. The list below is directional, not promised, and priorities may shift as we learn from users and from operating real AI companies with the product.
We value community involvement and want to make sure contributor energy goes toward areas where it can land.
We may accept contributions in the areas below, but if you want to work on roadmap-level core features, please coordinate with us first in Discord (`#dev`) before writing code. Bugs, docs, polish, and tightly scoped improvements are still the easiest contributions to merge.
If you want to extend Paperclip today, the best path is often the [plugin system](doc/plugins/PLUGIN_SPEC.md). Community reference implementations are also useful feedback even when they are not merged directly into core.
## Milestones
### ✅ Plugin system
Paperclip should keep a thin core and rich edges. Plugins are the path for optional capabilities like knowledge bases, custom tracing, queues, doc editors, and other product-specific surfaces that do not need to live in the control plane itself.
### ✅ Get OpenClaw / claw-style agent employees
Paperclip should be able to hire and manage real claw-style agent workers, not just a narrow built-in runtime. This is part of the larger "bring your own agent" story and keeps the control plane useful across different agent ecosystems.
### ✅ companies.sh - import and export entire organizations
Reusable companies matter. Import/export is the foundation for moving org structures, agent definitions, and reusable company setups between environments and eventually for broader company-template distribution.
### ✅ Easy AGENTS.md configurations
Agent setup should feel repo-native and legible. Simple `AGENTS.md`-style configuration lowers the barrier to getting an agent team running and makes it easier for contributors to understand how a company is wired together.
### ✅ Skills Manager
Agents need a practical way to discover, install, and use skills without every setup becoming bespoke. The skills layer is part of making Paperclip companies more reusable and easier to operate.
### ✅ Scheduled Routines
Recurring work should be native. Routine tasks like reports, reviews, and other periodic work need first-class scheduling so the company keeps operating even when no human is manually kicking work off.
### ✅ Better Budgeting
Budgets are a core control-plane feature, not an afterthought. Better budgeting means clearer spend visibility, safer hard stops, and better operator control over how autonomy turns into real cost.
### ✅ Agent Reviews and Approvals
Paperclip should support explicit review and approval stages as first-class workflow steps, not just ad hoc comments. That means reviewer routing, approval gates, change requests, and durable audit trails that fit the same task model as the rest of the control plane.
### ✅ Multiple Human Users
Paperclip needs a clearer path from solo operator to real human teams. That means shared board access, safer collaboration, and a better model for several humans supervising the same autonomous company.
### ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
We want agents to run in more remote and sandboxed environments while preserving the same Paperclip control-plane model. This makes the system safer, more flexible, and more useful outside a single trusted local machine.
### ⚪ Artifacts & Work Products
Paperclip should make outputs first-class. That means generated artifacts, previews, deployable outputs, and the handoff from "agent did work" to "here is the result" should become more visible and easier to operate.
### ⚪ Memory / Knowledge
We want a stronger memory and knowledge surface for companies, agents, and projects. That includes durable memory, better recall of prior decisions and context, and a clearer path for knowledge-style capabilities without turning Paperclip into a generic chat app.
### ⚪ Enforced Outcomes
Paperclip should get stricter about what counts as finished work. Tasks, approvals, and execution flows should resolve to clear outcomes like merged code, published artifacts, shipped docs, or explicit decisions instead of stopping at vague status updates.
### ⚪ MAXIMIZER MODE
This is the direction for higher-autonomy execution: more aggressive delegation, deeper follow-through, and stronger operating loops with clear budgets, visibility, and governance. The point is not hidden autonomy; the point is more output per human supervisor.
### ⚪ Deep Planning
Some work needs more than a task description before execution starts. Deeper planning means stronger issue documents, revisionable plans, and clearer review loops for strategy-heavy work before agents begin execution.
### ⚪ Work Queues
Paperclip should support queue-style work streams for repeatable inputs like support, triage, review, and backlog intake. That would make it easier to route work continuously without turning every system into a one-off workflow.
### ⚪ Self-Organization
As companies grow, agents should be able to propose useful structural changes such as role adjustments, delegation changes, and new recurring routines. The goal is adaptive organizations that still stay within governance and approval boundaries.
### ⚪ Automatic Organizational Learning
Paperclip should get better at turning completed work into reusable organizational knowledge. That includes capturing playbooks, recurring fixes, and decision patterns so future work starts from what the company has already learned.
### ⚪ CEO Chat
We want a lighter-weight way to talk to leadership agents, but those conversations should still resolve to real work objects like plans, issues, approvals, or decisions. This should improve interaction without changing the core task-and-comments model.
### ⚪ Cloud deployments
Local-first remains important, but Paperclip also needs a cleaner shared deployment story. Teams should be able to run the same product in hosted or semi-hosted environments without changing the mental model.
### ⚪ Desktop App
A desktop app can make Paperclip feel more accessible and persistent for day-to-day operators. The goal is easier access, better local ergonomics, and a smoother default experience for users who want the control plane always close at hand.

View File

@@ -1,8 +0,0 @@
# Security Policy
## Reporting a Vulnerability
Please report security vulnerabilities through GitHub's Security Advisory feature:
[https://github.com/paperclipai/paperclip/security/advisories/new](https://github.com/paperclipai/paperclip/security/advisories/new)
Do not open public issues for security vulnerabilities.

View File

@@ -6,14 +6,13 @@
<a href="#quickstart"><strong>Quickstart</strong></a> &middot;
<a href="https://paperclip.ing/docs"><strong>Docs</strong></a> &middot;
<a href="https://github.com/paperclipai/paperclip"><strong>GitHub</strong></a> &middot;
<a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a> &middot;
<a href="https://x.com/papercliping"><strong>Twitter</strong></a>
<a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a>
</p>
<p align="center">
<a href="https://github.com/paperclipai/paperclip/blob/master/LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue" alt="MIT License" /></a>
<a href="https://github.com/paperclipai/paperclip/stargazers"><img src="https://img.shields.io/github/stars/paperclipai/paperclip?style=flat" alt="Stars" /></a>
<a href="https://discord.gg/m4HZY7xNG3"><img src="https://img.shields.io/discord/000000000?label=discord" alt="Discord" /></a>
<a href="https://discord.gg/m4HZY7xNG3"><img src="https://img.shields.io/badge/discord-join%20chat-5865F2?logo=discord&logoColor=white" alt="Discord" /></a>
</p>
<br/>
@@ -178,14 +177,6 @@ Open source. Self-hosted. No Paperclip account required.
npx paperclipai onboard --yes
```
That quickstart path now defaults to trusted local loopback mode for the fastest first run. To start in authenticated/private mode instead, choose a bind preset explicitly:
```bash
npx paperclipai onboard --yes --bind lan
# or:
npx paperclipai onboard --yes --bind tailnet
```
If you already have Paperclip configured, rerunning `onboard` keeps the existing config in place. Use `paperclipai configure` to edit settings.
Or manually:
@@ -234,15 +225,11 @@ pnpm dev:once # Full dev without file watching
pnpm dev:server # Server only
pnpm build # Build all
pnpm typecheck # Type checking
pnpm test # Cheap default test run (Vitest only)
pnpm test:watch # Vitest watch mode
pnpm test:e2e # Playwright browser suite
pnpm test:run # Run tests
pnpm db:generate # Generate DB migration
pnpm db:migrate # Apply migrations
```
`pnpm test` does not run Playwright. Browser suites stay separate and are typically run only when working on those flows or in CI.
See [doc/DEVELOPING.md](https://github.com/paperclipai/paperclip/blob/master/doc/DEVELOPING.md) for the full development guide.
<br/>
@@ -259,7 +246,7 @@ See [doc/DEVELOPING.md](https://github.com/paperclipai/paperclip/blob/master/doc
- ⚪ Artifacts & Deployments
- ⚪ CEO Chat
- ⚪ MAXIMIZER MODE
- ⚪ Multiple Human Users
- ✅ Multiple Human Users
- ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
- ⚪ Cloud deployments
- ⚪ Desktop App
@@ -279,7 +266,6 @@ We welcome contributions. See the [contributing guide](https://github.com/paperc
## Community
- [Discord](https://discord.gg/m4HZY7xNG3) — Join the community
- [Twitter / X](https://x.com/papercliping) — Follow updates and announcements
- [GitHub Issues](https://github.com/paperclipai/paperclip/issues) — bugs and feature requests
- [GitHub Discussions](https://github.com/paperclipai/paperclip/discussions) — ideas and RFCs

View File

@@ -37,7 +37,6 @@
},
"dependencies": {
"@clack/prompts": "^0.10.0",
"@paperclipai/adapter-acpx-local": "workspace:*",
"@paperclipai/adapter-claude-local": "workspace:*",
"@paperclipai/adapter-codex-local": "workspace:*",
"@paperclipai/adapter-cursor-local": "workspace:*",

View File

@@ -14,7 +14,6 @@ function makeCompany(overrides: Partial<Company>): Company {
issueCounter: 1,
budgetMonthlyCents: 0,
spentMonthlyCents: 0,
attachmentMaxBytes: 10 * 1024 * 1024,
requireBoardApprovalForNewAgents: false,
feedbackDataSharingEnabled: false,
feedbackDataSharingConsentAt: null,

View File

@@ -1,5 +1,5 @@
import { execFile, spawn } from "node:child_process";
import { existsSync, mkdirSync, mkdtempSync, readFileSync, readdirSync, rmSync, writeFileSync } from "node:fs";
import { mkdirSync, mkdtempSync, readFileSync, readdirSync, rmSync, writeFileSync } from "node:fs";
import net from "node:net";
import os from "node:os";
import path from "node:path";
@@ -104,50 +104,20 @@ function writeTestConfig(configPath: string, tempRoot: string, port: number, con
writeFileSync(configPath, `${JSON.stringify(config, null, 2)}\n`, "utf8");
}
interface TestPaperclipEnv {
configPath: string;
paperclipHome: string;
instanceId: string;
shellHome?: string;
}
function createBasePaperclipEnv(options: TestPaperclipEnv) {
function createServerEnv(configPath: string, port: number, connectionString: string) {
const env = { ...process.env };
for (const key of Object.keys(env)) {
if (key.startsWith("PAPERCLIP_")) {
delete env[key];
}
}
env.PAPERCLIP_CONFIG = options.configPath;
env.PAPERCLIP_HOME = options.paperclipHome;
env.PAPERCLIP_INSTANCE_ID = options.instanceId;
env.PAPERCLIP_CONTEXT = path.join(options.paperclipHome, "context.json");
env.PAPERCLIP_AUTH_STORE = path.join(options.paperclipHome, "auth.json");
if (options.shellHome) {
env.HOME = options.shellHome;
}
return env;
}
function createServerEnv(
configPath: string,
port: number,
connectionString: string,
options: Omit<TestPaperclipEnv, "configPath">,
) {
const env = createBasePaperclipEnv({
configPath,
...options,
});
delete env.DATABASE_URL;
delete env.PORT;
delete env.HOST;
delete env.SERVE_UI;
delete env.HEARTBEAT_SCHEDULER_ENABLED;
env.PAPERCLIP_CONFIG = configPath;
env.DATABASE_URL = connectionString;
env.HOST = "127.0.0.1";
env.PORT = String(port);
@@ -160,8 +130,13 @@ function createServerEnv(
return env;
}
function createCliEnv(options: TestPaperclipEnv) {
const env = createBasePaperclipEnv(options);
function createCliEnv() {
const env = { ...process.env };
for (const key of Object.keys(env)) {
if (key.startsWith("PAPERCLIP_")) {
delete env[key];
}
}
delete env.DATABASE_URL;
delete env.PORT;
delete env.HOST;
@@ -208,25 +183,14 @@ async function api<T>(baseUrl: string, pathname: string, init?: RequestInit): Pr
return text ? JSON.parse(text) as T : (null as T);
}
async function runCliJson<T>(
args: string[],
opts: TestPaperclipEnv & { apiBase?: string; includeConfigArg?: boolean },
) {
async function runCliJson<T>(args: string[], opts: { apiBase: string; configPath: string }) {
const repoRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "../../..");
const cliArgs = ["--silent", "paperclipai", ...args];
if (opts.apiBase) {
cliArgs.push("--api-base", opts.apiBase);
}
if (opts.includeConfigArg !== false) {
cliArgs.push("--config", opts.configPath);
}
cliArgs.push("--json");
const result = await execFileAsync(
"pnpm",
cliArgs,
["--silent", "paperclipai", ...args, "--api-base", opts.apiBase, "--config", opts.configPath, "--json"],
{
cwd: repoRoot,
env: createCliEnv(opts),
env: createCliEnv(),
maxBuffer: 10 * 1024 * 1024,
},
);
@@ -271,9 +235,6 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
let configPath = "";
let exportDir = "";
let apiBase = "";
let paperclipHome = "";
let cliShellHome = "";
let paperclipInstanceId = "";
let serverProcess: ServerProcess | null = null;
let tempDb: Awaited<ReturnType<typeof startEmbeddedPostgresTestDatabase>> | null = null;
@@ -281,11 +242,6 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
tempRoot = mkdtempSync(path.join(os.tmpdir(), "paperclip-company-cli-e2e-"));
configPath = path.join(tempRoot, "config", "config.json");
exportDir = path.join(tempRoot, "exported-company");
paperclipHome = path.join(tempRoot, "paperclip-home");
cliShellHome = path.join(tempRoot, "shell-home");
paperclipInstanceId = "company-cli-e2e";
mkdirSync(paperclipHome, { recursive: true });
mkdirSync(cliShellHome, { recursive: true });
tempDb = await startEmbeddedPostgresTestDatabase("paperclip-company-cli-db-");
@@ -300,11 +256,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
["paperclipai", "run", "--config", configPath],
{
cwd: repoRoot,
env: createServerEnv(configPath, port, tempDb.connectionString, {
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
}),
env: createServerEnv(configPath, port, tempDb.connectionString),
stdio: ["ignore", "pipe", "pipe"],
},
);
@@ -330,41 +282,11 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
it("exports a company package and imports it into new and existing companies", async () => {
expect(serverProcess).not.toBeNull();
const cliContext = await runCliJson<{
contextPath: string;
profileName: string;
profile: { apiBase?: string };
}>(
["context", "set", "--profile", "isolation-check", "--api-base", "https://example.test"],
{
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
includeConfigArg: false,
},
);
const expectedContextPath = path.join(paperclipHome, "context.json");
const leakedContextPath = path.join(cliShellHome, ".paperclip", "context.json");
expect(cliContext.contextPath).toBe(expectedContextPath);
expect(cliContext.profileName).toBe("isolation-check");
expect(cliContext.profile.apiBase).toBe("https://example.test");
expect(existsSync(expectedContextPath)).toBe(true);
expect(existsSync(leakedContextPath)).toBe(false);
rmSync(expectedContextPath, { force: true });
expect(existsSync(expectedContextPath)).toBe(false);
const sourceCompany = await api<{ id: string; name: string; issuePrefix: string }>(apiBase, "/api/companies", {
method: "POST",
headers: { "content-type": "application/json" },
body: JSON.stringify({ name: `CLI Export Source ${Date.now()}` }),
});
await api(apiBase, `/api/companies/${sourceCompany.id}`, {
method: "PATCH",
headers: { "content-type": "application/json" },
body: JSON.stringify({ requireBoardApprovalForNewAgents: false }),
});
const sourceAgent = await api<{ id: string; name: string }>(
apiBase,
@@ -376,11 +298,8 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
name: "Export Engineer",
role: "engineer",
adapterType: "claude_local",
adapterConfig: {},
instructionsBundle: {
files: {
"AGENTS.md": "You verify company portability.",
},
adapterConfig: {
promptTemplate: "You verify company portability.",
},
}),
},
@@ -431,13 +350,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"--include",
"company,agents,projects,issues",
],
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
{ apiBase, configPath },
);
expect(exportResult.ok).toBe(true);
@@ -461,13 +374,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"company,agents,projects,issues",
"--yes",
],
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
{ apiBase, configPath },
);
expect(importedNew.company.action).toBe("created");
@@ -486,11 +393,10 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
apiBase,
`/api/companies/${importedNew.company.id}/issues`,
);
const importedMatchingIssues = importedIssues.filter((issue) => issue.title === sourceIssue.title);
expect(importedAgents.map((agent) => agent.name)).toContain(sourceAgent.name);
expect(importedProjects.map((project) => project.name)).toContain(sourceProject.name);
expect(importedMatchingIssues).toHaveLength(1);
expect(importedIssues.map((issue) => issue.title)).toContain(sourceIssue.title);
const previewExisting = await runCliJson<{
errors: string[];
@@ -515,13 +421,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"rename",
"--dry-run",
],
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
{ apiBase, configPath },
);
expect(previewExisting.errors).toEqual([]);
@@ -548,13 +448,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"rename",
"--yes",
],
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
{ apiBase, configPath },
);
expect(importedExisting.company.action).toBe("unchanged");
@@ -572,13 +466,11 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
apiBase,
`/api/companies/${importedNew.company.id}/issues`,
);
const twiceImportedMatchingIssues = twiceImportedIssues.filter((issue) => issue.title === sourceIssue.title);
expect(twiceImportedAgents).toHaveLength(2);
expect(new Set(twiceImportedAgents.map((agent) => agent.name)).size).toBe(2);
expect(twiceImportedProjects).toHaveLength(2);
expect(twiceImportedMatchingIssues).toHaveLength(2);
expect(new Set(twiceImportedMatchingIssues.map((issue) => issue.identifier)).size).toBe(2);
expect(twiceImportedIssues).toHaveLength(2);
const zipPath = path.join(tempRoot, "exported-company.zip");
const portableFiles: Record<string, string> = {};
@@ -601,16 +493,10 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"company,agents,projects,issues",
"--yes",
],
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
{ apiBase, configPath },
);
expect(importedFromZip.company.action).toBe("created");
expect(importedFromZip.agents.some((agent) => agent.action === "created")).toBe(true);
}, 90_000);
}, 60_000);
});

View File

@@ -160,7 +160,6 @@ describe("renderCompanyImportPreview", () => {
path: "COMPANY.md",
name: "Source Co",
description: null,
attachmentMaxBytes: null,
brandColor: null,
logoPath: null,
requireBoardApprovalForNewAgents: false,
@@ -244,7 +243,6 @@ describe("renderCompanyImportPreview", () => {
billingCode: null,
executionWorkspaceSettings: null,
assigneeAdapterOverrides: null,
comments: [],
metadata: null,
},
],
@@ -377,7 +375,6 @@ describe("import selection catalog", () => {
path: "COMPANY.md",
name: "Source Co",
description: null,
attachmentMaxBytes: null,
brandColor: null,
logoPath: "images/company-logo.png",
requireBoardApprovalForNewAgents: false,
@@ -461,7 +458,6 @@ describe("import selection catalog", () => {
billingCode: null,
executionWorkspaceSettings: null,
assigneeAdapterOverrides: null,
comments: [],
metadata: null,
},
],

View File

@@ -1,24 +0,0 @@
import path from "node:path";
import { describe, expect, it } from "vitest";
import { collectEnvLabDoctorStatus, resolveEnvLabSshStatePath } from "../commands/env-lab.js";
describe("env-lab command", () => {
it("resolves the default SSH fixture state path under the instance root", () => {
const statePath = resolveEnvLabSshStatePath("fixture-test");
expect(statePath).toContain(
path.join("instances", "fixture-test", "env-lab", "ssh-fixture", "state.json"),
);
});
it("reports doctor status for an instance without a running fixture", async () => {
const status = await collectEnvLabDoctorStatus({ instance: "fixture-test-missing" });
expect(status.statePath).toContain(
path.join("instances", "fixture-test-missing", "env-lab", "ssh-fixture", "state.json"),
);
expect(typeof status.ssh.supported).toBe("boolean");
expect(status.ssh.running).toBe(false);
expect(status.ssh.environment).toBeNull();
});
});

View File

@@ -1,62 +0,0 @@
import { describe, expect, it } from "vitest";
import { resolveRuntimeBind, validateConfiguredBindMode } from "@paperclipai/shared";
import { buildPresetServerConfig } from "../config/server-bind.js";
describe("network bind helpers", () => {
it("rejects non-loopback bind modes in local_trusted", () => {
expect(
validateConfiguredBindMode({
deploymentMode: "local_trusted",
deploymentExposure: "private",
bind: "lan",
host: "0.0.0.0",
}),
).toContain("local_trusted requires server.bind=loopback");
});
it("resolves tailnet bind using the detected tailscale address", () => {
const resolved = resolveRuntimeBind({
bind: "tailnet",
host: "127.0.0.1",
tailnetBindHost: "100.64.0.8",
});
expect(resolved.errors).toEqual([]);
expect(resolved.host).toBe("100.64.0.8");
});
it("requires a custom bind host when bind=custom", () => {
const resolved = resolveRuntimeBind({
bind: "custom",
host: "127.0.0.1",
});
expect(resolved.errors).toContain("server.customBindHost is required when server.bind=custom");
});
it("stores the detected tailscale address for tailnet presets", () => {
process.env.PAPERCLIP_TAILNET_BIND_HOST = "100.64.0.8";
const preset = buildPresetServerConfig("tailnet", {
port: 3100,
allowedHostnames: [],
serveUi: true,
});
expect(preset.server.host).toBe("100.64.0.8");
delete process.env.PAPERCLIP_TAILNET_BIND_HOST;
});
it("falls back to loopback when no tailscale address is available for tailnet presets", () => {
delete process.env.PAPERCLIP_TAILNET_BIND_HOST;
const preset = buildPresetServerConfig("tailnet", {
port: 3100,
allowedHostnames: [],
serveUi: true,
});
expect(preset.server.host).toBe("127.0.0.1");
});
});

View File

@@ -74,11 +74,6 @@ function createExistingConfigFixture() {
return { configPath, configText: fs.readFileSync(configPath, "utf8") };
}
function createFreshConfigPath() {
const root = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-onboard-fresh-"));
return path.join(root, ".paperclip", "config.json");
}
describe("onboard", () => {
beforeEach(() => {
process.env = { ...ORIGINAL_ENV };
@@ -110,57 +105,4 @@ describe("onboard", () => {
expect(fs.existsSync(`${fixture.configPath}.backup`)).toBe(false);
expect(fs.existsSync(path.join(path.dirname(fixture.configPath), ".env"))).toBe(true);
});
it("keeps --yes onboarding on local trusted loopback defaults", async () => {
const configPath = createFreshConfigPath();
process.env.HOST = "0.0.0.0";
process.env.PAPERCLIP_BIND = "lan";
await onboard({ config: configPath, yes: true, invokedByRun: true });
const raw = JSON.parse(fs.readFileSync(configPath, "utf8")) as PaperclipConfig;
expect(raw.server.deploymentMode).toBe("local_trusted");
expect(raw.server.exposure).toBe("private");
expect(raw.server.bind).toBe("loopback");
expect(raw.server.host).toBe("127.0.0.1");
});
it("supports authenticated/private quickstart bind presets", async () => {
const configPath = createFreshConfigPath();
process.env.PAPERCLIP_TAILNET_BIND_HOST = "100.64.0.8";
await onboard({ config: configPath, yes: true, invokedByRun: true, bind: "tailnet" });
const raw = JSON.parse(fs.readFileSync(configPath, "utf8")) as PaperclipConfig;
expect(raw.server.deploymentMode).toBe("authenticated");
expect(raw.server.exposure).toBe("private");
expect(raw.server.bind).toBe("tailnet");
expect(raw.server.host).toBe("100.64.0.8");
});
it("keeps tailnet quickstart on loopback until tailscale is available", async () => {
const configPath = createFreshConfigPath();
delete process.env.PAPERCLIP_TAILNET_BIND_HOST;
await onboard({ config: configPath, yes: true, invokedByRun: true, bind: "tailnet" });
const raw = JSON.parse(fs.readFileSync(configPath, "utf8")) as PaperclipConfig;
expect(raw.server.deploymentMode).toBe("authenticated");
expect(raw.server.exposure).toBe("private");
expect(raw.server.bind).toBe("tailnet");
expect(raw.server.host).toBe("127.0.0.1");
});
it("ignores deployment env overrides during --yes quickstart", async () => {
const configPath = createFreshConfigPath();
process.env.PAPERCLIP_DEPLOYMENT_MODE = "authenticated";
await onboard({ config: configPath, yes: true, invokedByRun: true });
const raw = JSON.parse(fs.readFileSync(configPath, "utf8")) as PaperclipConfig;
expect(raw.server.deploymentMode).toBe("local_trusted");
expect(raw.server.exposure).toBe("private");
expect(raw.server.bind).toBe("loopback");
expect(raw.server.host).toBe("127.0.0.1");
});
});

View File

@@ -2,25 +2,10 @@ import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { execFileSync } from "node:child_process";
import { randomUUID } from "node:crypto";
import { eq } from "drizzle-orm";
import { afterEach, describe, expect, it, vi } from "vitest";
import {
agents,
authUsers,
companies,
createDb,
issueComments,
issues,
projects,
routines,
routineTriggers,
} from "@paperclipai/db";
import {
copyGitHooksToWorktreeGitDir,
copySeededSecretsKey,
pauseSeededScheduledRoutines,
quarantineSeededWorktreeExecutionState,
readSourceAttachmentBody,
rebindWorkspaceCwd,
resolveSourceConfigPath,
@@ -28,7 +13,6 @@ import {
resolveWorktreeReseedTargetPaths,
resolveGitWorktreeAddArgs,
resolveWorktreeMakeTargetPath,
worktreeRepairCommand,
worktreeInitCommand,
worktreeMakeCommand,
worktreeReseedCommand,
@@ -44,22 +28,9 @@ import {
sanitizeWorktreeInstanceId,
} from "../commands/worktree-lib.js";
import type { PaperclipConfig } from "../config/schema.js";
import {
getEmbeddedPostgresTestSupport,
startEmbeddedPostgresTestDatabase,
} from "./helpers/embedded-postgres.js";
const ORIGINAL_CWD = process.cwd();
const ORIGINAL_ENV = { ...process.env };
const embeddedPostgresSupport = await getEmbeddedPostgresTestSupport();
const itEmbeddedPostgres = embeddedPostgresSupport.supported ? it : it.skip;
const describeEmbeddedPostgres = embeddedPostgresSupport.supported ? describe : describe.skip;
if (!embeddedPostgresSupport.supported) {
console.warn(
`Skipping embedded Postgres worktree CLI tests on this host: ${embeddedPostgresSupport.reason ?? "unsupported environment"}`,
);
}
afterEach(() => {
process.chdir(ORIGINAL_CWD);
@@ -190,9 +161,8 @@ describe("worktree helpers", () => {
).toEqual(["worktree", "add", "-b", "my-worktree", "/tmp/my-worktree", "origin/main"]);
});
it("rewrites auth URLs only when they already include a port", () => {
it("rewrites loopback auth URLs to the new port only", () => {
expect(rewriteLocalUrlPort("http://127.0.0.1:3100", 3110)).toBe("http://127.0.0.1:3110/");
expect(rewriteLocalUrlPort("http://my-host.ts.net:3100", 3110)).toBe("http://my-host.ts.net:3110/");
expect(rewriteLocalUrlPort("https://paperclip.example", 3110)).toBe("https://paperclip.example");
});
@@ -287,138 +257,6 @@ describe("worktree helpers", () => {
expect(full.nullifyColumns).toEqual({});
});
itEmbeddedPostgres("quarantines copied live execution state in seeded worktree databases", async () => {
const tempDb = await startEmbeddedPostgresTestDatabase("paperclip-worktree-quarantine-");
const db = createDb(tempDb.connectionString);
const companyId = randomUUID();
const agentId = randomUUID();
const idleAgentId = randomUUID();
const inProgressIssueId = randomUUID();
const todoIssueId = randomUUID();
const reviewIssueId = randomUUID();
const userIssueId = randomUUID();
try {
await db.insert(companies).values({
id: companyId,
name: "Paperclip",
issuePrefix: "WTQ",
requireBoardApprovalForNewAgents: false,
});
await db.insert(agents).values([
{
id: agentId,
companyId,
name: "CodexCoder",
role: "engineer",
status: "running",
adapterType: "codex_local",
adapterConfig: {},
runtimeConfig: {
heartbeat: { enabled: true, intervalSec: 60 },
wakeOnDemand: true,
},
permissions: {},
},
{
id: idleAgentId,
companyId,
name: "Reviewer",
role: "reviewer",
status: "idle",
adapterType: "codex_local",
adapterConfig: {},
runtimeConfig: { heartbeat: { enabled: false, intervalSec: 300 } },
permissions: {},
},
]);
await db.insert(issues).values([
{
id: inProgressIssueId,
companyId,
title: "Copied in-flight issue",
status: "in_progress",
priority: "medium",
assigneeAgentId: agentId,
issueNumber: 1,
identifier: "WTQ-1",
executionAgentNameKey: "codexcoder",
executionLockedAt: new Date("2026-04-18T00:00:00.000Z"),
},
{
id: todoIssueId,
companyId,
title: "Copied assigned todo issue",
status: "todo",
priority: "medium",
assigneeAgentId: agentId,
issueNumber: 2,
identifier: "WTQ-2",
},
{
id: reviewIssueId,
companyId,
title: "Copied assigned review issue",
status: "in_review",
priority: "medium",
assigneeAgentId: idleAgentId,
issueNumber: 3,
identifier: "WTQ-3",
},
{
id: userIssueId,
companyId,
title: "Copied user issue",
status: "todo",
priority: "medium",
assigneeUserId: "user-1",
issueNumber: 4,
identifier: "WTQ-4",
},
]);
await expect(quarantineSeededWorktreeExecutionState(tempDb.connectionString)).resolves.toEqual({
disabledTimerHeartbeats: 1,
resetRunningAgents: 1,
quarantinedInProgressIssues: 1,
unassignedTodoIssues: 1,
unassignedReviewIssues: 1,
});
const [quarantinedAgent] = await db.select().from(agents).where(eq(agents.id, agentId));
expect(quarantinedAgent?.status).toBe("idle");
expect(quarantinedAgent?.runtimeConfig).toMatchObject({
heartbeat: { enabled: false, intervalSec: 60 },
wakeOnDemand: true,
});
const [inProgressIssue] = await db.select().from(issues).where(eq(issues.id, inProgressIssueId));
expect(inProgressIssue?.status).toBe("blocked");
expect(inProgressIssue?.assigneeAgentId).toBeNull();
expect(inProgressIssue?.executionAgentNameKey).toBeNull();
expect(inProgressIssue?.executionLockedAt).toBeNull();
const [todoIssue] = await db.select().from(issues).where(eq(issues.id, todoIssueId));
expect(todoIssue?.status).toBe("todo");
expect(todoIssue?.assigneeAgentId).toBeNull();
const [reviewIssue] = await db.select().from(issues).where(eq(issues.id, reviewIssueId));
expect(reviewIssue?.status).toBe("in_review");
expect(reviewIssue?.assigneeAgentId).toBeNull();
const [userIssue] = await db.select().from(issues).where(eq(issues.id, userIssueId));
expect(userIssue?.status).toBe("todo");
expect(userIssue?.assigneeUserId).toBe("user-1");
const comments = await db.select().from(issueComments).where(eq(issueComments.issueId, inProgressIssueId));
expect(comments).toHaveLength(1);
expect(comments[0]?.body).toContain("Quarantined during worktree seed");
} finally {
await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
await tempDb.cleanup();
}
}, 20_000);
it("copies the source local_encrypted secrets key into the seeded worktree instance", () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-secrets-"));
const originalInlineMasterKey = process.env.PAPERCLIP_SECRETS_MASTER_KEY;
@@ -512,97 +350,6 @@ describe("worktree helpers", () => {
}
});
itEmbeddedPostgres(
"seeds authenticated users into minimally cloned worktree instances",
async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-auth-seed-"));
const worktreeRoot = path.join(tempRoot, "PAP-999-auth-seed");
const sourceHome = path.join(tempRoot, "source-home");
const sourceConfigDir = path.join(sourceHome, "instances", "source");
const sourceConfigPath = path.join(sourceConfigDir, "config.json");
const sourceEnvPath = path.join(sourceConfigDir, ".env");
const sourceKeyPath = path.join(sourceConfigDir, "secrets", "master.key");
const worktreeHome = path.join(tempRoot, ".paperclip-worktrees");
const originalCwd = process.cwd();
const sourceDb = await startEmbeddedPostgresTestDatabase("paperclip-worktree-auth-source-");
try {
const sourceDbClient = createDb(sourceDb.connectionString);
await sourceDbClient.insert(authUsers).values({
id: "user-existing",
email: "existing@paperclip.ing",
name: "Existing User",
emailVerified: true,
createdAt: new Date(),
updatedAt: new Date(),
});
fs.mkdirSync(path.dirname(sourceKeyPath), { recursive: true });
fs.mkdirSync(worktreeRoot, { recursive: true });
const sourceConfig = buildSourceConfig();
sourceConfig.database = {
mode: "postgres",
embeddedPostgresDataDir: path.join(sourceConfigDir, "db"),
embeddedPostgresPort: 54329,
backup: {
enabled: true,
intervalMinutes: 60,
retentionDays: 30,
dir: path.join(sourceConfigDir, "backups"),
},
connectionString: sourceDb.connectionString,
};
sourceConfig.logging.logDir = path.join(sourceConfigDir, "logs");
sourceConfig.storage.localDisk.baseDir = path.join(sourceConfigDir, "storage");
sourceConfig.secrets.localEncrypted.keyFilePath = sourceKeyPath;
fs.writeFileSync(sourceConfigPath, JSON.stringify(sourceConfig, null, 2) + "\n", "utf8");
fs.writeFileSync(sourceEnvPath, "", "utf8");
fs.writeFileSync(sourceKeyPath, "source-master-key", "utf8");
process.chdir(worktreeRoot);
await worktreeInitCommand({
name: "PAP-999-auth-seed",
home: worktreeHome,
fromConfig: sourceConfigPath,
force: true,
});
const targetConfig = JSON.parse(
fs.readFileSync(path.join(worktreeRoot, ".paperclip", "config.json"), "utf8"),
) as PaperclipConfig;
const { default: EmbeddedPostgres } = await import("embedded-postgres");
const targetPg = new EmbeddedPostgres({
databaseDir: targetConfig.database.embeddedPostgresDataDir,
user: "paperclip",
password: "paperclip",
port: targetConfig.database.embeddedPostgresPort,
persistent: true,
initdbFlags: ["--encoding=UTF8", "--locale=C", "--lc-messages=C"],
onLog: () => {},
onError: () => {},
});
await targetPg.start();
try {
const targetDb = createDb(
`postgres://paperclip:paperclip@127.0.0.1:${targetConfig.database.embeddedPostgresPort}/paperclip`,
);
const seededUsers = await targetDb.select().from(authUsers);
expect(seededUsers.some((row) => row.email === "existing@paperclip.ing")).toBe(true);
} finally {
await targetPg.stop();
}
} finally {
process.chdir(originalCwd);
await sourceDb.cleanup();
fs.rmSync(tempRoot, { recursive: true, force: true });
}
},
30000,
);
it("avoids ports already claimed by sibling worktree instance configs", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-claimed-ports-"));
const repoRoot = path.join(tempRoot, "repo");
@@ -882,7 +629,7 @@ describe("worktree helpers", () => {
}
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 30_000);
}, 20_000);
it("restores the current worktree config and instance data if reseed fails", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-reseed-rollback-"));
@@ -1039,7 +786,7 @@ describe("worktree helpers", () => {
execFileSync("git", ["worktree", "remove", "--force", worktreePath], { cwd: repoRoot, stdio: "ignore" });
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 15_000);
});
it("creates and initializes a worktree from the top-level worktree:make command", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-make-"));
@@ -1075,246 +822,4 @@ describe("worktree helpers", () => {
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 20_000);
it("no-ops on the primary checkout unless --branch is provided", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-repair-primary-"));
const repoRoot = path.join(tempRoot, "repo");
const originalCwd = process.cwd();
try {
fs.mkdirSync(repoRoot, { recursive: true });
execFileSync("git", ["init"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.email", "test@example.com"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.name", "Test User"], { cwd: repoRoot, stdio: "ignore" });
fs.writeFileSync(path.join(repoRoot, "README.md"), "# temp\n", "utf8");
execFileSync("git", ["add", "README.md"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["commit", "-m", "Initial commit"], { cwd: repoRoot, stdio: "ignore" });
process.chdir(repoRoot);
await worktreeRepairCommand({});
expect(fs.existsSync(path.join(repoRoot, ".paperclip", "config.json"))).toBe(false);
expect(fs.existsSync(path.join(repoRoot, ".paperclip", "worktrees"))).toBe(false);
} finally {
process.chdir(originalCwd);
fs.rmSync(tempRoot, { recursive: true, force: true });
}
});
it("repairs the current linked worktree when Paperclip metadata is missing", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-repair-current-"));
const repoRoot = path.join(tempRoot, "repo");
const worktreePath = path.join(repoRoot, ".paperclip", "worktrees", "repair-me");
const sourceConfigPath = path.join(tempRoot, "source-config.json");
const worktreeHome = path.join(tempRoot, ".paperclip-worktrees");
const worktreePaths = resolveWorktreeLocalPaths({
cwd: worktreePath,
homeDir: worktreeHome,
instanceId: sanitizeWorktreeInstanceId(path.basename(worktreePath)),
});
const originalCwd = process.cwd();
try {
fs.mkdirSync(repoRoot, { recursive: true });
execFileSync("git", ["init"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.email", "test@example.com"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.name", "Test User"], { cwd: repoRoot, stdio: "ignore" });
fs.writeFileSync(path.join(repoRoot, "README.md"), "# temp\n", "utf8");
execFileSync("git", ["add", "README.md"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["commit", "-m", "Initial commit"], { cwd: repoRoot, stdio: "ignore" });
fs.mkdirSync(path.dirname(worktreePath), { recursive: true });
execFileSync("git", ["worktree", "add", "-b", "repair-me", worktreePath, "HEAD"], {
cwd: repoRoot,
stdio: "ignore",
});
fs.writeFileSync(sourceConfigPath, JSON.stringify(buildSourceConfig(), null, 2), "utf8");
fs.mkdirSync(worktreePaths.instanceRoot, { recursive: true });
fs.writeFileSync(path.join(worktreePaths.instanceRoot, "marker.txt"), "stale", "utf8");
process.chdir(worktreePath);
await worktreeRepairCommand({
fromConfig: sourceConfigPath,
home: worktreeHome,
noSeed: true,
});
expect(fs.existsSync(path.join(worktreePath, ".paperclip", "config.json"))).toBe(true);
expect(fs.existsSync(path.join(worktreePath, ".paperclip", ".env"))).toBe(true);
expect(fs.existsSync(path.join(worktreePaths.instanceRoot, "marker.txt"))).toBe(false);
} finally {
process.chdir(originalCwd);
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 20_000);
it("creates and repairs a missing branch worktree when --branch is provided", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-repair-branch-"));
const repoRoot = path.join(tempRoot, "repo");
const sourceConfigPath = path.join(tempRoot, "source-config.json");
const worktreeHome = path.join(tempRoot, ".paperclip-worktrees");
const originalCwd = process.cwd();
const expectedWorktreePath = path.join(repoRoot, ".paperclip", "worktrees", "feature-repair-me");
try {
fs.mkdirSync(repoRoot, { recursive: true });
execFileSync("git", ["init"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.email", "test@example.com"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.name", "Test User"], { cwd: repoRoot, stdio: "ignore" });
fs.writeFileSync(path.join(repoRoot, "README.md"), "# temp\n", "utf8");
execFileSync("git", ["add", "README.md"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["commit", "-m", "Initial commit"], { cwd: repoRoot, stdio: "ignore" });
fs.writeFileSync(sourceConfigPath, JSON.stringify(buildSourceConfig(), null, 2), "utf8");
process.chdir(repoRoot);
await worktreeRepairCommand({
branch: "feature/repair-me",
fromConfig: sourceConfigPath,
home: worktreeHome,
noSeed: true,
});
expect(fs.existsSync(path.join(expectedWorktreePath, ".git"))).toBe(true);
expect(fs.existsSync(path.join(expectedWorktreePath, ".paperclip", "config.json"))).toBe(true);
expect(fs.existsSync(path.join(expectedWorktreePath, ".paperclip", ".env"))).toBe(true);
} finally {
process.chdir(originalCwd);
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 20_000);
});
describeEmbeddedPostgres("pauseSeededScheduledRoutines", () => {
it("pauses only routines with enabled schedule triggers", async () => {
const tempDb = await startEmbeddedPostgresTestDatabase("paperclip-worktree-routines-");
const db = createDb(tempDb.connectionString);
const companyId = randomUUID();
const projectId = randomUUID();
const agentId = randomUUID();
const activeScheduledRoutineId = randomUUID();
const activeApiRoutineId = randomUUID();
const pausedScheduledRoutineId = randomUUID();
const archivedScheduledRoutineId = randomUUID();
const disabledScheduleRoutineId = randomUUID();
try {
await db.insert(companies).values({
id: companyId,
name: "Paperclip",
issuePrefix: `T${companyId.replace(/-/g, "").slice(0, 6).toUpperCase()}`,
requireBoardApprovalForNewAgents: false,
});
await db.insert(agents).values({
id: agentId,
companyId,
name: "Coder",
adapterType: "process",
adapterConfig: {},
runtimeConfig: {},
permissions: {},
});
await db.insert(projects).values({
id: projectId,
companyId,
name: "Project",
status: "in_progress",
});
await db.insert(routines).values([
{
id: activeScheduledRoutineId,
companyId,
projectId,
assigneeAgentId: agentId,
title: "Active scheduled",
status: "active",
},
{
id: activeApiRoutineId,
companyId,
projectId,
assigneeAgentId: agentId,
title: "Active API",
status: "active",
},
{
id: pausedScheduledRoutineId,
companyId,
projectId,
assigneeAgentId: agentId,
title: "Paused scheduled",
status: "paused",
},
{
id: archivedScheduledRoutineId,
companyId,
projectId,
assigneeAgentId: agentId,
title: "Archived scheduled",
status: "archived",
},
{
id: disabledScheduleRoutineId,
companyId,
projectId,
assigneeAgentId: agentId,
title: "Disabled schedule",
status: "active",
},
]);
await db.insert(routineTriggers).values([
{
companyId,
routineId: activeScheduledRoutineId,
kind: "schedule",
enabled: true,
cronExpression: "0 9 * * *",
timezone: "UTC",
},
{
companyId,
routineId: activeApiRoutineId,
kind: "api",
enabled: true,
},
{
companyId,
routineId: pausedScheduledRoutineId,
kind: "schedule",
enabled: true,
cronExpression: "0 10 * * *",
timezone: "UTC",
},
{
companyId,
routineId: archivedScheduledRoutineId,
kind: "schedule",
enabled: true,
cronExpression: "0 11 * * *",
timezone: "UTC",
},
{
companyId,
routineId: disabledScheduleRoutineId,
kind: "schedule",
enabled: false,
cronExpression: "0 12 * * *",
timezone: "UTC",
},
]);
const pausedCount = await pauseSeededScheduledRoutines(tempDb.connectionString);
expect(pausedCount).toBe(1);
const rows = await db.select({ id: routines.id, status: routines.status }).from(routines);
const statusById = new Map(rows.map((row) => [row.id, row.status]));
expect(statusById.get(activeScheduledRoutineId)).toBe("paused");
expect(statusById.get(activeApiRoutineId)).toBe("active");
expect(statusById.get(pausedScheduledRoutineId)).toBe("paused");
expect(statusById.get(archivedScheduledRoutineId)).toBe("archived");
expect(statusById.get(disabledScheduleRoutineId)).toBe("active");
} finally {
await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
await tempDb.cleanup();
}
}, 20_000);
});
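
The suite pins down the pause rule: a routine is paused only when it is not already paused or archived and it has at least one enabled `schedule` trigger. A condensed restatement of that predicate (the types are illustrative, matching the statuses and trigger kinds used above):

// Illustrative sketch (not part of the diff): the predicate the suite asserts.
type RoutineStatus = "active" | "paused" | "archived";
type Trigger = { kind: "schedule" | "api"; enabled: boolean };

function shouldPauseOnSeed(status: RoutineStatus, triggers: Trigger[]): boolean {
  return (
    status !== "paused" &&
    status !== "archived" &&
    triggers.some((t) => t.kind === "schedule" && t.enabled)
  );
}
// Of the five fixtures above, only "Active scheduled" satisfies this, hence pausedCount === 1.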

View File

@@ -1,5 +1,4 @@
import type { CLIAdapterModule } from "@paperclipai/adapter-utils";
import { printAcpxStreamEvent } from "@paperclipai/adapter-acpx-local/cli";
import { printClaudeStreamEvent } from "@paperclipai/adapter-claude-local/cli";
import { printCodexStreamEvent } from "@paperclipai/adapter-codex-local/cli";
import { printCursorStreamEvent } from "@paperclipai/adapter-cursor-local/cli";
@@ -15,11 +14,6 @@ const claudeLocalCLIAdapter: CLIAdapterModule = {
formatStdoutEvent: printClaudeStreamEvent,
};
const acpxLocalCLIAdapter: CLIAdapterModule = {
type: "acpx_local",
formatStdoutEvent: printAcpxStreamEvent,
};
const codexLocalCLIAdapter: CLIAdapterModule = {
type: "codex_local",
formatStdoutEvent: printCodexStreamEvent,
@@ -52,7 +46,6 @@ const openclawGatewayCLIAdapter: CLIAdapterModule = {
const adaptersByType = new Map<string, CLIAdapterModule>(
[
acpxLocalCLIAdapter,
claudeLocalCLIAdapter,
codexLocalCLIAdapter,
openCodeLocalCLIAdapter,

View File

@@ -1,21 +1,24 @@
import { inferBindModeFromHost } from "@paperclipai/shared";
import type { PaperclipConfig } from "../config/schema.js";
import type { CheckResult } from "./index.js";
function isLoopbackHost(host: string) {
const normalized = host.trim().toLowerCase();
return normalized === "127.0.0.1" || normalized === "localhost" || normalized === "::1";
}
export function deploymentAuthCheck(config: PaperclipConfig): CheckResult {
const mode = config.server.deploymentMode;
const exposure = config.server.exposure;
const auth = config.auth;
const bind = config.server.bind ?? inferBindModeFromHost(config.server.host);
if (mode === "local_trusted") {
if (bind !== "loopback") {
if (!isLoopbackHost(config.server.host)) {
return {
name: "Deployment/auth mode",
status: "fail",
message: `local_trusted requires loopback binding (found ${bind})`,
message: `local_trusted requires loopback host binding (found ${config.server.host})`,
canRepair: false,
repairHint: "Run `paperclipai configure --section server` and choose Local trusted / loopback reachability",
repairHint: "Run `paperclipai configure --section server` and set host to 127.0.0.1",
};
}
return {
@@ -83,6 +86,6 @@ export function deploymentAuthCheck(config: PaperclipConfig): CheckResult {
return {
name: "Deployment/auth mode",
status: "pass",
message: `Mode ${mode}/${exposure} with bind ${bind} and auth URL mode ${auth.baseUrlMode}`,
message: `Mode ${mode}/${exposure} with auth URL mode ${auth.baseUrlMode}`,
};
}
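
With the bind mode gone, the check keys off the literal host value. A quick sketch of which hosts now satisfy the `local_trusted` gate, using only the three literals `isLoopbackHost` accepts:

// Illustrative sketch (not part of the diff): hosts accepted by the local_trusted gate.
const loopbackHosts = new Set(["127.0.0.1", "localhost", "::1"]);
for (const host of ["127.0.0.1", "LocalHost", "::1", "0.0.0.0", "100.64.0.8"]) {
  const ok = loopbackHosts.has(host.trim().toLowerCase());
  console.log(`${host} -> ${ok ? "pass" : "fail: local_trusted requires loopback host binding"}`);
}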

View File

@@ -3,7 +3,6 @@ import * as p from "@clack/prompts";
import pc from "picocolors";
import { and, eq, gt, isNull } from "drizzle-orm";
import { createDb, instanceUserRoles, invites } from "@paperclipai/db";
import { inferBindModeFromHost } from "@paperclipai/shared";
import { loadPaperclipEnvFile } from "../config/env.js";
import { readConfig, resolveConfigPath } from "../config/store.js";
@@ -41,13 +40,9 @@ function resolveBaseUrl(configPath?: string, explicitBaseUrl?: string) {
if (config?.auth.baseUrlMode === "explicit" && config.auth.publicBaseUrl) {
return config.auth.publicBaseUrl.replace(/\/+$/, "");
}
const bind = config?.server.bind ?? inferBindModeFromHost(config?.server.host);
const host =
bind === "custom"
? config?.server.customBindHost ?? config?.server.host ?? "localhost"
: config?.server.host ?? "localhost";
const host = config?.server.host ?? "localhost";
const port = config?.server.port ?? 3100;
const publicHost = host === "0.0.0.0" || bind === "lan" ? "localhost" : host;
const publicHost = host === "0.0.0.0" ? "localhost" : host;
return `http://${publicHost}:${port}`;
}
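
The simplified fallback now derives the invite base URL from the configured host alone. A sketch of the outcomes for common host values, mirroring the hunk above and assuming no explicit `publicBaseUrl` is set:

// Illustrative sketch (not part of the diff): base-URL fallback after the bind-mode removal.
function baseUrlFor(host: string | undefined, port = 3100): string {
  const resolvedHost = host ?? "localhost";
  // A wildcard bind is not dialable, so advertise localhost instead.
  const publicHost = resolvedHost === "0.0.0.0" ? "localhost" : resolvedHost;
  return `http://${publicHost}:${port}`;
}

console.log(baseUrlFor("0.0.0.0"));      // http://localhost:3100
console.log(baseUrlFor("192.168.1.20")); // http://192.168.1.20:3100
console.log(baseUrlFor(undefined));      // http://localhost:3100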

View File

@@ -61,7 +61,6 @@ interface IssueUpdateOptions extends BaseClientOptions {
interface IssueCommentOptions extends BaseClientOptions {
body: string;
reopen?: boolean;
resume?: boolean;
}
interface IssueCheckoutOptions extends BaseClientOptions {
@@ -242,14 +241,12 @@ export function registerIssueCommands(program: Command): void {
.argument("<issueId>", "Issue ID")
.requiredOption("--body <text>", "Comment body")
.option("--reopen", "Reopen if issue is done/cancelled")
.option("--resume", "Request explicit follow-up and wake the assignee when resumable")
.action(async (issueId: string, opts: IssueCommentOptions) => {
try {
const ctx = resolveCommandContext(opts);
const payload = addIssueCommentSchema.parse({
body: opts.body,
reopen: opts.reopen,
resume: opts.resume,
});
const comment = await ctx.api.post<IssueComment>(`/api/issues/${issueId}/comments`, payload);
printOutput(comment, { json: ctx.json });

View File

@@ -54,7 +54,6 @@ function defaultConfig(): PaperclipConfig {
server: {
deploymentMode: "local_trusted",
exposure: "private",
bind: "loopback",
host: "127.0.0.1",
port: 3100,
allowedHostnames: [],

View File

@@ -73,7 +73,7 @@ export async function dbBackupCommand(opts: DbBackupOptions): Promise<void> {
const result = await runDatabaseBackup({
connectionString: connection.value,
backupDir,
retention: { dailyDays: retentionDays, weeklyWeeks: 4, monthlyMonths: 1 },
retentionDays,
filenamePrefix,
});
spinner.stop(`Backup saved: ${formatDatabaseBackupResult(result)}`);

View File

@@ -1,174 +0,0 @@
import path from "node:path";
import type { Command } from "commander";
import * as p from "@clack/prompts";
import pc from "picocolors";
import {
buildSshEnvLabFixtureConfig,
getSshEnvLabSupport,
readSshEnvLabFixtureStatus,
startSshEnvLabFixture,
stopSshEnvLabFixture,
} from "@paperclipai/adapter-utils/ssh";
import { resolvePaperclipInstanceId, resolvePaperclipInstanceRoot } from "../config/home.js";
export function resolveEnvLabSshStatePath(instanceId?: string): string {
const resolvedInstanceId = resolvePaperclipInstanceId(instanceId);
return path.resolve(
resolvePaperclipInstanceRoot(resolvedInstanceId),
"env-lab",
"ssh-fixture",
"state.json",
);
}
function printJson(value: unknown) {
process.stdout.write(`${JSON.stringify(value, null, 2)}\n`);
}
function summarizeFixture(state: {
host: string;
port: number;
username: string;
workspaceDir: string;
sshdLogPath: string;
}) {
p.log.message(`Host: ${pc.cyan(state.host)}:${pc.cyan(String(state.port))}`);
p.log.message(`User: ${pc.cyan(state.username)}`);
p.log.message(`Workspace: ${pc.cyan(state.workspaceDir)}`);
p.log.message(`Log: ${pc.dim(state.sshdLogPath)}`);
}
export async function collectEnvLabDoctorStatus(opts: { instance?: string }) {
const statePath = resolveEnvLabSshStatePath(opts.instance);
const [sshSupport, sshStatus] = await Promise.all([
getSshEnvLabSupport(),
readSshEnvLabFixtureStatus(statePath),
]);
const environment = sshStatus.state ? await buildSshEnvLabFixtureConfig(sshStatus.state) : null;
return {
statePath,
ssh: {
supported: sshSupport.supported,
reason: sshSupport.reason,
running: sshStatus.running,
state: sshStatus.state,
environment,
},
};
}
export async function envLabUpCommand(opts: { instance?: string; json?: boolean }) {
const statePath = resolveEnvLabSshStatePath(opts.instance);
const state = await startSshEnvLabFixture({ statePath });
const environment = await buildSshEnvLabFixtureConfig(state);
if (opts.json) {
printJson({ state, environment });
return;
}
p.log.success("SSH env-lab fixture is running.");
summarizeFixture(state);
p.log.message(`State: ${pc.dim(statePath)}`);
}
export async function envLabStatusCommand(opts: { instance?: string; json?: boolean }) {
const statePath = resolveEnvLabSshStatePath(opts.instance);
const status = await readSshEnvLabFixtureStatus(statePath);
const environment = status.state ? await buildSshEnvLabFixtureConfig(status.state) : null;
if (opts.json) {
printJson({ ...status, environment, statePath });
return;
}
if (!status.state || !status.running) {
p.log.info(`SSH env-lab fixture is not running (${pc.dim(statePath)}).`);
return;
}
p.log.success("SSH env-lab fixture is running.");
summarizeFixture(status.state);
p.log.message(`State: ${pc.dim(statePath)}`);
}
export async function envLabDownCommand(opts: { instance?: string; json?: boolean }) {
const statePath = resolveEnvLabSshStatePath(opts.instance);
const stopped = await stopSshEnvLabFixture(statePath);
if (opts.json) {
printJson({ stopped, statePath });
return;
}
if (!stopped) {
p.log.info(`No SSH env-lab fixture was running (${pc.dim(statePath)}).`);
return;
}
p.log.success("SSH env-lab fixture stopped.");
p.log.message(`State: ${pc.dim(statePath)}`);
}
export async function envLabDoctorCommand(opts: { instance?: string; json?: boolean }) {
const status = await collectEnvLabDoctorStatus(opts);
if (opts.json) {
printJson(status);
return;
}
if (status.ssh.supported) {
p.log.success("SSH fixture prerequisites are installed.");
} else {
p.log.warn(`SSH fixture prerequisites are incomplete: ${status.ssh.reason ?? "unknown reason"}`);
}
if (status.ssh.state && status.ssh.running) {
p.log.success("SSH env-lab fixture is running.");
summarizeFixture(status.ssh.state);
p.log.message(`Private key: ${pc.dim(status.ssh.state.clientPrivateKeyPath)}`);
p.log.message(`Known hosts: ${pc.dim(status.ssh.state.knownHostsPath)}`);
} else if (status.ssh.state) {
p.log.warn("SSH env-lab fixture state exists, but the process is not running.");
p.log.message(`State: ${pc.dim(status.statePath)}`);
} else {
p.log.info("SSH env-lab fixture is not running.");
p.log.message(`State: ${pc.dim(status.statePath)}`);
}
p.log.message(`Cleanup: ${pc.dim("pnpm paperclipai env-lab down")}`);
}
export function registerEnvLabCommands(program: Command) {
const envLab = program.command("env-lab").description("Deterministic local environment fixtures");
envLab
.command("up")
.description("Start the default SSH env-lab fixture")
.option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
.option("--json", "Print machine-readable fixture details")
.action(envLabUpCommand);
envLab
.command("status")
.description("Show the current SSH env-lab fixture state")
.option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
.option("--json", "Print machine-readable fixture details")
.action(envLabStatusCommand);
envLab
.command("down")
.description("Stop the default SSH env-lab fixture")
.option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
.option("--json", "Print machine-readable stop details")
.action(envLabDownCommand);
envLab
.command("doctor")
.description("Check SSH fixture prerequisites and current status")
.option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
.option("--json", "Print machine-readable diagnostic details")
.action(envLabDoctorCommand);
}
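
For reference, the removed command group wrapped a simple up/status/down lifecycle around the SSH fixture. A sketch of driving it programmatically, assuming the exports above (the import path is hypothetical):

// Illustrative sketch (not part of the diff): lifecycle of the removed SSH env-lab fixture.
// The module path is hypothetical; the command helpers are the ones defined above.
import { envLabUpCommand, envLabStatusCommand, envLabDownCommand } from "./env-lab.js";

await envLabUpCommand({ json: true });     // start the sshd fixture, print host/port/workspace as JSON
await envLabStatusCommand({ json: true }); // re-read state.json and report whether it is running
await envLabDownCommand({ json: true });   // stop the fixture and report { stopped, statePath }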

View File

@@ -3,14 +3,10 @@ import path from "node:path";
import pc from "picocolors";
import {
AUTH_BASE_URL_MODES,
BIND_MODES,
DEPLOYMENT_EXPOSURES,
DEPLOYMENT_MODES,
SECRET_PROVIDERS,
STORAGE_PROVIDERS,
inferBindModeFromHost,
resolveRuntimeBind,
type BindMode,
type AuthBaseUrlMode,
type DeploymentExposure,
type DeploymentMode,
@@ -27,7 +23,6 @@ import { promptLogging } from "../prompts/logging.js";
import { defaultSecretsConfig } from "../prompts/secrets.js";
import { defaultStorageConfig, promptStorage } from "../prompts/storage.js";
import { promptServer } from "../prompts/server.js";
import { buildPresetServerConfig } from "../config/server-bind.js";
import {
describeLocalInstancePaths,
expandHomePrefix,
@@ -51,14 +46,10 @@ type OnboardOptions = {
run?: boolean;
yes?: boolean;
invokedByRun?: boolean;
bind?: BindMode;
};
type OnboardDefaults = Pick<PaperclipConfig, "database" | "logging" | "server" | "auth" | "storage" | "secrets">;
const TAILNET_BIND_WARNING =
"No Tailscale address was detected during setup. The saved config will stay on loopback until Tailscale is available or PAPERCLIP_TAILNET_BIND_HOST is set.";
const ONBOARD_ENV_KEYS = [
"PAPERCLIP_PUBLIC_URL",
"DATABASE_URL",
@@ -68,9 +59,6 @@ const ONBOARD_ENV_KEYS = [
"PAPERCLIP_DB_BACKUP_DIR",
"PAPERCLIP_DEPLOYMENT_MODE",
"PAPERCLIP_DEPLOYMENT_EXPOSURE",
"PAPERCLIP_BIND",
"PAPERCLIP_BIND_HOST",
"PAPERCLIP_TAILNET_BIND_HOST",
"HOST",
"PORT",
"SERVE_UI",
@@ -116,62 +104,29 @@ function resolvePathFromEnv(rawValue: string | undefined): string | null {
return path.resolve(expandHomePrefix(rawValue.trim()));
}
function describeServerBinding(server: Pick<PaperclipConfig["server"], "bind" | "customBindHost" | "host" | "port">): string {
const bind = server.bind ?? inferBindModeFromHost(server.host);
const detail =
bind === "custom"
? server.customBindHost ?? server.host
: bind === "tailnet"
? "detected tailscale address"
: server.host;
return `${bind}${detail ? ` (${detail})` : ""}:${server.port}`;
}
function quickstartDefaultsFromEnv(opts?: { preferTrustedLocal?: boolean }): {
function quickstartDefaultsFromEnv(): {
defaults: OnboardDefaults;
usedEnvKeys: string[];
ignoredEnvKeys: Array<{ key: string; reason: string }>;
} {
const preferTrustedLocal = opts?.preferTrustedLocal ?? false;
const instanceId = resolvePaperclipInstanceId();
const defaultStorage = defaultStorageConfig();
const defaultSecrets = defaultSecretsConfig();
const databaseUrl = process.env.DATABASE_URL?.trim() || undefined;
const publicUrl = preferTrustedLocal
? undefined
: (
process.env.PAPERCLIP_PUBLIC_URL?.trim() ||
process.env.PAPERCLIP_AUTH_PUBLIC_BASE_URL?.trim() ||
process.env.BETTER_AUTH_URL?.trim() ||
process.env.BETTER_AUTH_BASE_URL?.trim() ||
undefined
);
const deploymentMode = preferTrustedLocal
? "local_trusted"
: (parseEnumFromEnv<DeploymentMode>(process.env.PAPERCLIP_DEPLOYMENT_MODE, DEPLOYMENT_MODES) ?? "local_trusted");
const publicUrl =
process.env.PAPERCLIP_PUBLIC_URL?.trim() ||
process.env.PAPERCLIP_AUTH_PUBLIC_BASE_URL?.trim() ||
process.env.BETTER_AUTH_URL?.trim() ||
process.env.BETTER_AUTH_BASE_URL?.trim() ||
undefined;
const deploymentMode =
parseEnumFromEnv<DeploymentMode>(process.env.PAPERCLIP_DEPLOYMENT_MODE, DEPLOYMENT_MODES) ?? "local_trusted";
const deploymentExposureFromEnv = parseEnumFromEnv<DeploymentExposure>(
process.env.PAPERCLIP_DEPLOYMENT_EXPOSURE,
DEPLOYMENT_EXPOSURES,
);
const deploymentExposure =
deploymentMode === "local_trusted" ? "private" : (deploymentExposureFromEnv ?? "private");
const bindFromEnv = parseEnumFromEnv<BindMode>(process.env.PAPERCLIP_BIND, BIND_MODES);
const customBindHostFromEnv = process.env.PAPERCLIP_BIND_HOST?.trim() || undefined;
const hostFromEnv = process.env.HOST?.trim() || undefined;
const configuredBindHost = customBindHostFromEnv ?? hostFromEnv;
const bind = preferTrustedLocal
? "loopback"
: (
deploymentMode === "local_trusted"
? "loopback"
: (bindFromEnv ?? (configuredBindHost ? inferBindModeFromHost(configuredBindHost) : "lan"))
);
const resolvedBind = resolveRuntimeBind({
bind,
host: hostFromEnv ?? (bind === "loopback" ? "127.0.0.1" : "0.0.0.0"),
customBindHost: customBindHostFromEnv,
tailnetBindHost: process.env.PAPERCLIP_TAILNET_BIND_HOST?.trim(),
});
const authPublicBaseUrl = publicUrl;
const authBaseUrlModeFromEnv = parseEnumFromEnv<AuthBaseUrlMode>(
process.env.PAPERCLIP_AUTH_BASE_URL_MODE,
@@ -228,9 +183,7 @@ function quickstartDefaultsFromEnv(opts?: { preferTrustedLocal?: boolean }): {
server: {
deploymentMode,
exposure: deploymentExposure,
bind: resolvedBind.bind,
...(resolvedBind.customBindHost ? { customBindHost: resolvedBind.customBindHost } : {}),
host: resolvedBind.host,
host: process.env.HOST ?? "127.0.0.1",
port: Number(process.env.PORT) || 3100,
allowedHostnames: Array.from(new Set([...allowedHostnamesFromEnv, ...(hostnameFromPublicUrl ? [hostnameFromPublicUrl] : [])])),
serveUi: parseBooleanFromEnv(process.env.SERVE_UI) ?? true,
@@ -267,49 +220,12 @@ function quickstartDefaultsFromEnv(opts?: { preferTrustedLocal?: boolean }): {
},
};
const ignoredEnvKeys: Array<{ key: string; reason: string }> = [];
if (preferTrustedLocal) {
const forcedLocalReason = "Ignored because --yes quickstart forces trusted local loopback defaults";
for (const key of [
"PAPERCLIP_DEPLOYMENT_MODE",
"PAPERCLIP_DEPLOYMENT_EXPOSURE",
"PAPERCLIP_BIND",
"PAPERCLIP_BIND_HOST",
"HOST",
"PAPERCLIP_AUTH_BASE_URL_MODE",
"PAPERCLIP_AUTH_PUBLIC_BASE_URL",
"PAPERCLIP_PUBLIC_URL",
"BETTER_AUTH_URL",
"BETTER_AUTH_BASE_URL",
] as const) {
if (process.env[key] !== undefined) {
ignoredEnvKeys.push({ key, reason: forcedLocalReason });
}
}
}
if (deploymentMode === "local_trusted" && process.env.PAPERCLIP_DEPLOYMENT_EXPOSURE !== undefined) {
ignoredEnvKeys.push({
key: "PAPERCLIP_DEPLOYMENT_EXPOSURE",
reason: "Ignored because deployment mode local_trusted always forces private exposure",
});
}
if (deploymentMode === "local_trusted" && process.env.PAPERCLIP_BIND !== undefined) {
ignoredEnvKeys.push({
key: "PAPERCLIP_BIND",
reason: "Ignored because deployment mode local_trusted always uses loopback reachability",
});
}
if (deploymentMode === "local_trusted" && process.env.PAPERCLIP_BIND_HOST !== undefined) {
ignoredEnvKeys.push({
key: "PAPERCLIP_BIND_HOST",
reason: "Ignored because deployment mode local_trusted always uses loopback reachability",
});
}
if (deploymentMode === "local_trusted" && process.env.HOST !== undefined) {
ignoredEnvKeys.push({
key: "HOST",
reason: "Ignored because deployment mode local_trusted always uses loopback reachability",
});
}
const ignoredKeySet = new Set(ignoredEnvKeys.map((entry) => entry.key));
const usedEnvKeys = ONBOARD_ENV_KEYS.filter(
@@ -323,10 +239,6 @@ function canCreateBootstrapInviteImmediately(config: Pick<PaperclipConfig, "data
}
export async function onboard(opts: OnboardOptions): Promise<void> {
if (opts.bind && !["loopback", "lan", "tailnet"].includes(opts.bind)) {
throw new Error(`Unsupported bind preset for onboard: ${opts.bind}. Use loopback, lan, or tailnet.`);
}
printPaperclipCliBanner();
p.intro(pc.bgCyan(pc.black(" paperclipai onboard ")));
const configPath = resolveConfigPath(opts.config);
@@ -381,7 +293,7 @@ export async function onboard(opts: OnboardOptions): Promise<void> {
`Database: ${existingConfig.database.mode}`,
existingConfig.llm ? `LLM: ${existingConfig.llm.provider}` : "LLM: not configured",
`Logging: ${existingConfig.logging.mode} -> ${existingConfig.logging.logDir}`,
`Server: ${existingConfig.server.deploymentMode}/${existingConfig.server.exposure} @ ${describeServerBinding(existingConfig.server)}`,
`Server: ${existingConfig.server.deploymentMode}/${existingConfig.server.exposure} @ ${existingConfig.server.host}:${existingConfig.server.port}`,
`Allowed hosts: ${existingConfig.server.allowedHostnames.length > 0 ? existingConfig.server.allowedHostnames.join(", ") : "(loopback only)"}`,
`Auth URL mode: ${existingConfig.auth.baseUrlMode}${existingConfig.auth.publicBaseUrl ? ` (${existingConfig.auth.publicBaseUrl})` : ""}`,
`Storage: ${existingConfig.storage.provider}`,
@@ -424,13 +336,7 @@ export async function onboard(opts: OnboardOptions): Promise<void> {
let setupMode: SetupMode = "quickstart";
if (opts.yes) {
p.log.message(
pc.dim(
opts.bind
? `\`--yes\` enabled: using Quickstart defaults with bind=${opts.bind}.`
: "`--yes` enabled: using Quickstart defaults.",
),
);
p.log.message(pc.dim("`--yes` enabled: using Quickstart defaults."));
} else {
const setupModeChoice = await p.select({
message: "Choose setup path",
@@ -459,9 +365,7 @@ export async function onboard(opts: OnboardOptions): Promise<void> {
if (tc) trackInstallStarted(tc);
let llm: PaperclipConfig["llm"] | undefined;
const { defaults: derivedDefaults, usedEnvKeys, ignoredEnvKeys } = quickstartDefaultsFromEnv({
preferTrustedLocal: opts.yes === true && !opts.bind,
});
const { defaults: derivedDefaults, usedEnvKeys, ignoredEnvKeys } = quickstartDefaultsFromEnv();
let {
database,
logging,
@@ -471,19 +375,6 @@ export async function onboard(opts: OnboardOptions): Promise<void> {
secrets,
} = derivedDefaults;
if (opts.bind === "loopback" || opts.bind === "lan" || opts.bind === "tailnet") {
const preset = buildPresetServerConfig(opts.bind, {
port: server.port,
allowedHostnames: server.allowedHostnames,
serveUi: server.serveUi,
});
server = preset.server;
auth = preset.auth;
if (opts.bind === "tailnet" && server.host === "127.0.0.1") {
p.log.warn(TAILNET_BIND_WARNING);
}
}
if (setupMode === "advanced") {
p.log.step(pc.bold("Database"));
database = await promptDatabase(database);
@@ -571,13 +462,7 @@ export async function onboard(opts: OnboardOptions): Promise<void> {
);
} else {
p.log.step(pc.bold("Quickstart"));
p.log.message(
pc.dim(
opts.bind
? `Using quickstart defaults with bind=${opts.bind}.`
: `Using quickstart defaults: ${server.deploymentMode}/${server.exposure} @ ${describeServerBinding(server)}.`,
),
);
p.log.message(pc.dim("Using quickstart defaults."));
if (usedEnvKeys.length > 0) {
p.log.message(pc.dim(`Environment-aware defaults active (${usedEnvKeys.length} env var(s) detected).`));
} else {
@@ -636,7 +521,7 @@ export async function onboard(opts: OnboardOptions): Promise<void> {
`Database: ${database.mode}`,
llm ? `LLM: ${llm.provider}` : "LLM: not configured",
`Logging: ${logging.mode} -> ${logging.logDir}`,
`Server: ${server.deploymentMode}/${server.exposure} @ ${describeServerBinding(server)}`,
`Server: ${server.deploymentMode}/${server.exposure} @ ${server.host}:${server.port}`,
`Allowed hosts: ${server.allowedHostnames.length > 0 ? server.allowedHostnames.join(", ") : "(loopback only)"}`,
`Auth URL mode: ${auth.baseUrlMode}${auth.publicBaseUrl ? ` (${auth.publicBaseUrl})` : ""}`,
`Storage: ${storage.provider}`,
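
Note the behavioral consequence visible across this file and the removed tests earlier in this section: `--yes` quickstart previously pinned the server to loopback even when `HOST=0.0.0.0` or `PAPERCLIP_DEPLOYMENT_MODE=authenticated` were set, whereas after this change those environment values flow straight into the saved config. A sketch of the post-change server defaults, mirroring the hunk above:

// Illustrative sketch (not part of the diff): server defaults now taken directly from the environment.
const server = {
  host: process.env.HOST ?? "127.0.0.1",
  port: Number(process.env.PORT) || 3100,
};
// HOST=0.0.0.0 PORT=4000 -> { host: "0.0.0.0", port: 4000 }; unset -> { host: "127.0.0.1", port: 3100 }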

View File

@@ -1,6 +1,5 @@
import fs from "node:fs";
import path from "node:path";
import { spawnSync } from "node:child_process";
import { fileURLToPath, pathToFileURL } from "node:url";
import * as p from "@clack/prompts";
import pc from "picocolors";
@@ -22,7 +21,6 @@ interface RunOptions {
instance?: string;
repair?: boolean;
yes?: boolean;
bind?: "loopback" | "lan" | "tailnet";
}
interface StartedServer {
@@ -59,7 +57,7 @@ export async function runCommand(opts: RunOptions): Promise<void> {
}
p.log.step("No config found. Starting onboarding...");
await onboard({ config: configPath, invokedByRun: true, bind: opts.bind });
await onboard({ config: configPath, invokedByRun: true });
}
p.log.step("Running doctor checks...");
@@ -148,35 +146,11 @@ function maybeEnableUiDevMiddleware(entrypoint: string): void {
}
}
function ensureDevWorkspaceBuildDeps(projectRoot: string): void {
const buildScript = path.resolve(projectRoot, "scripts/ensure-plugin-build-deps.mjs");
if (!fs.existsSync(buildScript)) return;
const result = spawnSync(process.execPath, [buildScript], {
cwd: projectRoot,
stdio: "inherit",
timeout: 120_000,
});
if (result.error) {
throw new Error(
`Failed to prepare workspace build artifacts before starting the Paperclip dev server.\n${formatError(result.error)}`,
);
}
if ((result.status ?? 1) !== 0) {
throw new Error(
"Failed to prepare workspace build artifacts before starting the Paperclip dev server.",
);
}
}
async function importServerEntry(): Promise<StartedServer> {
// Dev mode: try local workspace path (monorepo with tsx)
const projectRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "../../..");
const devEntry = path.resolve(projectRoot, "server/src/index.ts");
if (fs.existsSync(devEntry)) {
ensureDevWorkspaceBuildDeps(projectRoot);
maybeEnableUiDevMiddleware(devEntry);
const mod = await import(pathToFileURL(devEntry).href);
return await startServerFromModule(mod, devEntry);

View File

@@ -75,6 +75,11 @@ function nonEmpty(value: string | null | undefined): string | null {
return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}
function isLoopbackHost(hostname: string): boolean {
const value = hostname.trim().toLowerCase();
return value === "127.0.0.1" || value === "localhost" || value === "::1";
}
export function sanitizeWorktreeInstanceId(rawValue: string): string {
const trimmed = rawValue.trim().toLowerCase();
const normalized = trimmed
@@ -163,8 +168,7 @@ export function rewriteLocalUrlPort(rawUrl: string | undefined, port: number): s
if (!rawUrl) return undefined;
try {
const parsed = new URL(rawUrl);
// The URL API normalizes default ports like :80/:443 to "", so treat them as stable URLs.
if (!parsed.port) return rawUrl;
if (!isLoopbackHost(parsed.hostname)) return rawUrl;
parsed.port = String(port);
return parsed.toString();
} catch {
@@ -210,8 +214,6 @@ export function buildWorktreeConfig(input: {
server: {
deploymentMode: source?.server.deploymentMode ?? "local_trusted",
exposure: source?.server.exposure ?? "private",
...(source?.server.bind ? { bind: source.server.bind } : {}),
...(source?.server.customBindHost ? { customBindHost: source.server.customBindHost } : {}),
host: source?.server.host ?? "127.0.0.1",
port: serverPort,
allowedHostnames: source?.server.allowedHostnames ?? [],
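
`rewriteLocalUrlPort` now rewrites only loopback URLs, and does so whether or not the original URL carried an explicit port. A sketch of the resulting behavior, importing from the same module path the tests above use:

// Illustrative sketch (not part of the diff): port rewriting under the loopback-only rule.
import { rewriteLocalUrlPort } from "../commands/worktree-lib.js";

console.log(rewriteLocalUrlPort("http://127.0.0.1:3100", 3110)); // "http://127.0.0.1:3110/"
console.log(rewriteLocalUrlPort("http://localhost", 3110));      // loopback without a port is now rewritten too
console.log(rewriteLocalUrlPort("http://my-host.ts.net:3100", 3110)); // unchanged: not a loopback host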

View File

@@ -39,8 +39,6 @@ import {
issues,
projectWorkspaces,
projects,
routines,
routineTriggers,
runDatabaseBackup,
runDatabaseRestore,
createEmbeddedPostgresLogBuffer,
@@ -93,7 +91,6 @@ type WorktreeInitOptions = {
dbPort?: number;
seed?: boolean;
seedMode?: string;
preserveLiveWork?: boolean;
force?: boolean;
};
@@ -127,23 +124,10 @@ type WorktreeReseedOptions = {
fromDataDir?: string;
fromInstance?: string;
seedMode?: string;
preserveLiveWork?: boolean;
yes?: boolean;
allowLiveTarget?: boolean;
};
type WorktreeRepairOptions = {
branch?: string;
home?: string;
fromConfig?: string;
fromDataDir?: string;
fromInstance?: string;
seedMode?: string;
preserveLiveWork?: boolean;
noSeed?: boolean;
allowLiveTarget?: boolean;
};
type EmbeddedPostgresInstance = {
initialise(): Promise<void>;
start(): Promise<void>;
@@ -182,8 +166,6 @@ type CopiedGitHooksResult = {
type SeedWorktreeDatabaseResult = {
backupSummary: string;
pausedScheduledRoutines: number;
executionQuarantine: SeededWorktreeExecutionQuarantineSummary;
reboundWorkspaces: Array<{
name: string;
fromCwd: string;
@@ -191,14 +173,6 @@ type SeedWorktreeDatabaseResult = {
}>;
};
export type SeededWorktreeExecutionQuarantineSummary = {
disabledTimerHeartbeats: number;
resetRunningAgents: number;
quarantinedInProgressIssues: number;
unassignedTodoIssues: number;
unassignedReviewIssues: number;
};
function nonEmpty(value: string | null | undefined): string | null {
return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}
@@ -211,18 +185,6 @@ function isCurrentSourceConfigPath(sourceConfigPath: string): boolean {
return path.resolve(currentConfigPath) === path.resolve(sourceConfigPath);
}
function formatSeededWorktreeExecutionQuarantineSummary(
summary: SeededWorktreeExecutionQuarantineSummary,
): string {
return [
`disabled timer heartbeats: ${summary.disabledTimerHeartbeats}`,
`reset running agents: ${summary.resetRunningAgents}`,
`quarantined in-progress issues: ${summary.quarantinedInProgressIssues}`,
`unassigned todo issues: ${summary.unassignedTodoIssues}`,
`unassigned review issues: ${summary.unassignedReviewIssues}`,
].join(", ");
}
const WORKTREE_NAME_PREFIX = "paperclip-";
function resolveWorktreeMakeName(name: string): string {
@@ -586,46 +548,6 @@ function detectGitBranchName(cwd: string): string | null {
}
}
function validateGitBranchName(cwd: string, branchName: string): string {
const value = nonEmpty(branchName);
if (!value) {
throw new Error("Branch name is required.");
}
try {
execFileSync("git", ["check-ref-format", "--branch", value], {
cwd,
stdio: ["ignore", "pipe", "pipe"],
});
} catch (error) {
throw new Error(`Invalid branch name "${branchName}": ${extractExecSyncErrorMessage(error) ?? String(error)}`);
}
return value;
}
function isPrimaryGitWorktree(cwd: string): boolean {
const workspace = detectGitWorkspaceInfo(cwd);
return Boolean(workspace && workspace.gitDir === workspace.commonDir);
}
function resolvePrimaryGitRepoRoot(cwd: string): string {
const workspace = detectGitWorkspaceInfo(cwd);
if (!workspace) {
throw new Error("Current directory is not inside a git repository.");
}
if (workspace.gitDir === workspace.commonDir) {
return workspace.root;
}
return path.resolve(workspace.commonDir, "..");
}
function resolveRepairWorktreeDirName(branchName: string): string {
const normalized = branchName.trim()
.replace(/[^A-Za-z0-9._-]+/g, "-")
.replace(/-+/g, "-")
.replace(/^[-._]+|[-._]+$/g, "");
return normalized || "worktree";
}
function detectGitWorkspaceInfo(cwd: string): GitWorkspaceInfo | null {
try {
const root = execFileSync("git", ["rev-parse", "--show-toplevel"], {
@@ -849,21 +771,6 @@ export function resolveWorktreeReseedSource(input: WorktreeReseedOptions): Resol
);
}
function resolveWorktreeRepairSource(input: WorktreeRepairOptions): ResolvedWorktreeReseedSource {
const fromConfig = nonEmpty(input.fromConfig);
const fromDataDir = nonEmpty(input.fromDataDir);
const fromInstance = nonEmpty(input.fromInstance) ?? "default";
const configPath = resolveSourceConfigPath({
fromConfig: fromConfig ?? undefined,
fromDataDir: fromDataDir ?? undefined,
fromInstance,
});
return {
configPath,
label: configPath,
};
}
export function resolveWorktreeReseedTargetPaths(input: {
configPath: string;
rootPath: string;
@@ -885,105 +792,6 @@ export function resolveWorktreeReseedTargetPaths(input: {
});
}
function resolveExistingGitWorktree(selector: string, cwd: string): MergeSourceChoice | null {
const trimmed = selector.trim();
if (trimmed.length === 0) return null;
const directPath = path.resolve(trimmed);
if (existsSync(directPath)) {
return {
worktree: directPath,
branch: null,
branchLabel: path.basename(directPath),
hasPaperclipConfig: existsSync(path.resolve(directPath, ".paperclip", "config.json")),
isCurrent: directPath === path.resolve(cwd),
};
}
return toMergeSourceChoices(cwd).find((choice) =>
choice.worktree === directPath
|| path.basename(choice.worktree) === trimmed
|| choice.branchLabel === trimmed
|| choice.branch === trimmed,
) ?? null;
}
async function ensureRepairTargetWorktree(input: {
selector?: string;
seedMode: WorktreeSeedMode;
opts: WorktreeRepairOptions;
}): Promise<ResolvedWorktreeRepairTarget | null> {
const cwd = process.cwd();
const currentRoot = path.resolve(cwd);
const currentConfigPath = path.resolve(currentRoot, ".paperclip", "config.json");
if (!input.selector) {
if (isPrimaryGitWorktree(cwd)) {
return null;
}
return {
rootPath: currentRoot,
configPath: currentConfigPath,
label: path.basename(currentRoot),
branchName: detectGitBranchName(cwd),
created: false,
};
}
const existing = resolveExistingGitWorktree(input.selector, cwd);
if (existing) {
return {
rootPath: existing.worktree,
configPath: path.resolve(existing.worktree, ".paperclip", "config.json"),
label: existing.branchLabel,
branchName: existing.branchLabel === "(detached)" ? null : existing.branchLabel,
created: false,
};
}
const repoRoot = resolvePrimaryGitRepoRoot(cwd);
const branchName = validateGitBranchName(repoRoot, input.selector);
const targetPath = path.resolve(
repoRoot,
".paperclip",
"worktrees",
resolveRepairWorktreeDirName(branchName),
);
if (existsSync(targetPath)) {
throw new Error(`Target path already exists but is not a registered git worktree: ${targetPath}`);
}
mkdirSync(path.dirname(targetPath), { recursive: true });
const spinner = p.spinner();
spinner.start(`Creating git worktree for ${branchName}...`);
try {
execFileSync("git", resolveGitWorktreeAddArgs({
branchName,
targetPath,
branchExists: localBranchExists(repoRoot, branchName),
}), {
cwd: repoRoot,
stdio: ["ignore", "pipe", "pipe"],
});
spinner.stop(`Created git worktree at ${targetPath}.`);
} catch (error) {
spinner.stop(pc.red("Failed to create git worktree."));
throw new Error(extractExecSyncErrorMessage(error) ?? String(error));
}
installDependenciesBestEffort(targetPath);
return {
rootPath: targetPath,
configPath: path.resolve(targetPath, ".paperclip", "config.json"),
label: branchName,
branchName,
created: true,
};
}
function resolveSourceConnectionString(config: PaperclipConfig, envEntries: Record<string, string>, portOverride?: number): string {
if (config.database.mode === "postgres") {
const connectionString = nonEmpty(envEntries.DATABASE_URL) ?? nonEmpty(config.database.connectionString);
@@ -1114,163 +922,6 @@ async function ensureEmbeddedPostgres(dataDir: string, preferredPort: number): P
};
}
export async function pauseSeededScheduledRoutines(connectionString: string): Promise<number> {
const db = createDb(connectionString);
try {
const scheduledRoutineIds = await db
.selectDistinct({ routineId: routineTriggers.routineId })
.from(routineTriggers)
.where(and(eq(routineTriggers.kind, "schedule"), eq(routineTriggers.enabled, true)));
const idsToPause = scheduledRoutineIds
.map((row) => row.routineId)
.filter((value): value is string => Boolean(value));
if (idsToPause.length === 0) {
return 0;
}
const paused = await db
.update(routines)
.set({
status: "paused",
updatedAt: new Date(),
})
.where(and(inArray(routines.id, idsToPause), sql`${routines.status} <> 'paused'`, sql`${routines.status} <> 'archived'`))
.returning({ id: routines.id });
return paused.length;
} finally {
await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
}
}
const EMPTY_SEEDED_WORKTREE_EXECUTION_QUARANTINE_SUMMARY: SeededWorktreeExecutionQuarantineSummary = {
disabledTimerHeartbeats: 0,
resetRunningAgents: 0,
quarantinedInProgressIssues: 0,
unassignedTodoIssues: 0,
unassignedReviewIssues: 0,
};
function isRecord(value: unknown): value is Record<string, unknown> {
return Boolean(value) && typeof value === "object" && !Array.isArray(value);
}
function isEnabledValue(value: unknown): boolean {
return value === true || value === "true" || value === 1 || value === "1";
}
function normalizeWorktreeRuntimeConfig(runtimeConfig: unknown): {
runtimeConfig: Record<string, unknown>;
disabledTimerHeartbeat: boolean;
changed: boolean;
} {
const nextRuntimeConfig = isRecord(runtimeConfig) ? { ...runtimeConfig } : {};
const heartbeat = isRecord(nextRuntimeConfig.heartbeat) ? { ...nextRuntimeConfig.heartbeat } : null;
if (!heartbeat) {
return { runtimeConfig: nextRuntimeConfig, disabledTimerHeartbeat: false, changed: false };
}
const disabledTimerHeartbeat = isEnabledValue(heartbeat.enabled);
if (heartbeat.enabled !== false) {
heartbeat.enabled = false;
nextRuntimeConfig.heartbeat = heartbeat;
return { runtimeConfig: nextRuntimeConfig, disabledTimerHeartbeat, changed: true };
}
return { runtimeConfig: nextRuntimeConfig, disabledTimerHeartbeat: false, changed: false };
}
export async function quarantineSeededWorktreeExecutionState(
connectionString: string,
): Promise<SeededWorktreeExecutionQuarantineSummary> {
const db = createDb(connectionString);
const summary = { ...EMPTY_SEEDED_WORKTREE_EXECUTION_QUARANTINE_SUMMARY };
try {
await db.transaction(async (tx) => {
const seededAgents = await tx
.select({
id: agents.id,
status: agents.status,
runtimeConfig: agents.runtimeConfig,
})
.from(agents);
for (const agent of seededAgents) {
const normalized = normalizeWorktreeRuntimeConfig(agent.runtimeConfig);
const nextStatus = agent.status === "running" ? "idle" : agent.status;
if (normalized.disabledTimerHeartbeat) {
summary.disabledTimerHeartbeats += 1;
}
if (agent.status === "running") {
summary.resetRunningAgents += 1;
}
if (normalized.changed || nextStatus !== agent.status) {
await tx
.update(agents)
.set({
runtimeConfig: normalized.runtimeConfig,
status: nextStatus,
updatedAt: new Date(),
})
.where(eq(agents.id, agent.id));
}
}
const affectedIssues = await tx
.select({
id: issues.id,
companyId: issues.companyId,
status: issues.status,
})
.from(issues)
.where(
and(
sql`${issues.assigneeAgentId} is not null`,
sql`${issues.assigneeUserId} is null`,
inArray(issues.status, ["todo", "in_progress", "in_review"]),
),
);
for (const issue of affectedIssues) {
const nextStatus = issue.status === "in_progress" ? "blocked" : issue.status;
await tx
.update(issues)
.set({
status: nextStatus,
assigneeAgentId: null,
checkoutRunId: null,
executionRunId: null,
executionAgentNameKey: null,
executionLockedAt: null,
executionWorkspaceId: null,
updatedAt: new Date(),
})
.where(eq(issues.id, issue.id));
if (issue.status === "in_progress") {
summary.quarantinedInProgressIssues += 1;
await tx.insert(issueComments).values({
companyId: issue.companyId,
issueId: issue.id,
body:
"Quarantined during worktree seed so copied in-flight work does not auto-run in this isolated instance. " +
"Reassign or unblock here only if you intentionally want the worktree instance to own this task.",
});
} else if (issue.status === "todo") {
summary.unassignedTodoIssues += 1;
} else if (issue.status === "in_review") {
summary.unassignedReviewIssues += 1;
}
}
});
return summary;
} finally {
await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
}
}
async function seedWorktreeDatabase(input: {
sourceConfigPath: string;
sourceConfig: PaperclipConfig;
@@ -1278,7 +929,6 @@ async function seedWorktreeDatabase(input: {
targetPaths: WorktreeLocalPaths;
instanceId: string;
seedMode: WorktreeSeedMode;
preserveLiveWork?: boolean;
}): Promise<SeedWorktreeDatabaseResult> {
const seedPlan = resolveWorktreeSeedPlan(input.seedMode);
const sourceEnvFile = resolvePaperclipEnvFile(input.sourceConfigPath);
@@ -1309,9 +959,8 @@ async function seedWorktreeDatabase(input: {
const backup = await runDatabaseBackup({
connectionString: sourceConnectionString,
backupDir: path.resolve(input.targetPaths.backupDir, "seed"),
retention: { dailyDays: 7, weeklyWeeks: 4, monthlyMonths: 1 },
retentionDays: 7,
filenamePrefix: `${input.instanceId}-seed`,
backupEngine: "javascript",
includeMigrationJournal: true,
excludeTables: seedPlan.excludedTables,
nullifyColumns: seedPlan.nullifyColumns,
@@ -1330,10 +979,6 @@ async function seedWorktreeDatabase(input: {
backupFile: backup.backupFile,
});
await applyPendingMigrations(targetConnectionString);
const executionQuarantine = input.preserveLiveWork
? { ...EMPTY_SEEDED_WORKTREE_EXECUTION_QUARANTINE_SUMMARY }
: await quarantineSeededWorktreeExecutionState(targetConnectionString);
const pausedScheduledRoutines = await pauseSeededScheduledRoutines(targetConnectionString);
const reboundWorkspaces = await rebindSeededProjectWorkspaces({
targetConnectionString,
currentCwd: input.targetPaths.cwd,
@@ -1341,8 +986,6 @@ async function seedWorktreeDatabase(input: {
return {
backupSummary: formatDatabaseBackupResult(backup),
pausedScheduledRoutines,
executionQuarantine,
reboundWorkspaces,
};
} finally {
@@ -1421,8 +1064,6 @@ async function runWorktreeInit(opts: WorktreeInitOptions): Promise<void> {
const copiedGitHooks = copyGitHooksToWorktreeGitDir(cwd);
let seedSummary: string | null = null;
let seedExecutionQuarantineSummary: SeededWorktreeExecutionQuarantineSummary | null = null;
let pausedScheduledRoutineCount: number | null = null;
let reboundWorkspaceSummary: SeedWorktreeDatabaseResult["reboundWorkspaces"] = [];
if (opts.seed !== false) {
if (!sourceConfig) {
@@ -1440,11 +1081,8 @@ async function runWorktreeInit(opts: WorktreeInitOptions): Promise<void> {
targetPaths: paths,
instanceId,
seedMode,
preserveLiveWork: opts.preserveLiveWork,
});
seedSummary = seeded.backupSummary;
seedExecutionQuarantineSummary = seeded.executionQuarantine;
pausedScheduledRoutineCount = seeded.pausedScheduledRoutines;
reboundWorkspaceSummary = seeded.reboundWorkspaces;
spinner.stop(`Seeded isolated worktree database (${seedMode}).`);
} catch (error) {
@@ -1467,16 +1105,6 @@ async function runWorktreeInit(opts: WorktreeInitOptions): Promise<void> {
if (seedSummary) {
p.log.message(pc.dim(`Seed mode: ${seedMode}`));
p.log.message(pc.dim(`Seed snapshot: ${seedSummary}`));
if (opts.preserveLiveWork) {
p.log.warning("Preserved copied live work; this worktree instance may auto-run source-instance assignments.");
} else if (seedExecutionQuarantineSummary) {
p.log.message(
pc.dim(`Seed execution quarantine: ${formatSeededWorktreeExecutionQuarantineSummary(seedExecutionQuarantineSummary)}`),
);
}
if (pausedScheduledRoutineCount != null) {
p.log.message(pc.dim(`Paused scheduled routines: ${pausedScheduledRoutineCount}`));
}
for (const rebound of reboundWorkspaceSummary) {
p.log.message(
pc.dim(`Rebound workspace ${rebound.name}: ${rebound.fromCwd} -> ${rebound.toCwd}`),
@@ -1544,7 +1172,18 @@ export async function worktreeMakeCommand(nameArg: string, opts: WorktreeMakeOpt
throw new Error(extractExecSyncErrorMessage(error) ?? String(error));
}
installDependenciesBestEffort(targetPath);
const installSpinner = p.spinner();
installSpinner.start("Installing dependencies...");
try {
execFileSync("pnpm", ["install"], {
cwd: targetPath,
stdio: ["ignore", "pipe", "pipe"],
});
installSpinner.stop("Installed dependencies.");
} catch (error) {
installSpinner.stop(pc.yellow("Failed to install dependencies (continuing anyway)."));
p.log.warning(extractExecSyncErrorMessage(error) ?? String(error));
}
const originalCwd = process.cwd();
try {
@@ -1561,21 +1200,6 @@ export async function worktreeMakeCommand(nameArg: string, opts: WorktreeMakeOpt
}
}
function installDependenciesBestEffort(targetPath: string): void {
const installSpinner = p.spinner();
installSpinner.start("Installing dependencies...");
try {
execFileSync("pnpm", ["install"], {
cwd: targetPath,
stdio: ["ignore", "pipe", "pipe"],
});
installSpinner.stop("Installed dependencies.");
} catch (error) {
installSpinner.stop(pc.yellow("Failed to install dependencies (continuing anyway)."));
p.log.warning(extractExecSyncErrorMessage(error) ?? String(error));
}
}
type WorktreeCleanupOptions = {
instance?: string;
home?: string;
@@ -1609,14 +1233,6 @@ type ResolvedWorktreeReseedSource = {
label: string;
};
type ResolvedWorktreeRepairTarget = {
rootPath: string;
configPath: string;
label: string;
branchName: string | null;
created: boolean;
};
function parseGitWorktreeList(cwd: string): GitWorktreeListEntry[] {
const raw = execFileSync("git", ["worktree", "list", "--porcelain"], {
cwd,
@@ -3058,7 +2674,10 @@ export async function worktreeMergeHistoryCommand(sourceArg: string | undefined,
}
}
async function runWorktreeReseed(opts: WorktreeReseedOptions): Promise<void> {
export async function worktreeReseedCommand(opts: WorktreeReseedOptions): Promise<void> {
printPaperclipCliBanner();
p.intro(pc.bgCyan(pc.black(" paperclipai worktree reseed ")));
const seedMode = opts.seedMode ?? "full";
if (!isWorktreeSeedMode(seedMode)) {
throw new Error(`Unsupported seed mode "${seedMode}". Expected one of: minimal, full.`);
@@ -3121,20 +2740,11 @@ async function runWorktreeReseed(opts: WorktreeReseedOptions): Promise<void> {
targetPaths,
instanceId: targetPaths.instanceId,
seedMode,
preserveLiveWork: opts.preserveLiveWork,
});
spinner.stop(`Reseeded ${targetEndpoint.label} (${seedMode}).`);
p.log.message(pc.dim(`Source: ${source.configPath}`));
p.log.message(pc.dim(`Target: ${targetEndpoint.configPath}`));
p.log.message(pc.dim(`Seed snapshot: ${seeded.backupSummary}`));
if (opts.preserveLiveWork) {
p.log.warning("Preserved copied live work; this worktree instance may auto-run source-instance assignments.");
} else {
p.log.message(
pc.dim(`Seed execution quarantine: ${formatSeededWorktreeExecutionQuarantineSummary(seeded.executionQuarantine)}`),
);
}
p.log.message(pc.dim(`Paused scheduled routines: ${seeded.pausedScheduledRoutines}`));
for (const rebound of seeded.reboundWorkspaces) {
p.log.message(
pc.dim(`Rebound workspace ${rebound.name}: ${rebound.fromCwd} -> ${rebound.toCwd}`),
@@ -3147,98 +2757,6 @@ async function runWorktreeReseed(opts: WorktreeReseedOptions): Promise<void> {
}
}
export async function worktreeReseedCommand(opts: WorktreeReseedOptions): Promise<void> {
printPaperclipCliBanner();
p.intro(pc.bgCyan(pc.black(" paperclipai worktree reseed ")));
await runWorktreeReseed(opts);
}
export async function worktreeRepairCommand(opts: WorktreeRepairOptions): Promise<void> {
printPaperclipCliBanner();
p.intro(pc.bgCyan(pc.black(" paperclipai worktree repair ")));
const seedMode = opts.seedMode ?? "minimal";
if (!isWorktreeSeedMode(seedMode)) {
throw new Error(`Unsupported seed mode "${seedMode}". Expected one of: minimal, full.`);
}
const target = await ensureRepairTargetWorktree({
selector: nonEmpty(opts.branch) ?? undefined,
seedMode,
opts,
});
if (!target) {
p.log.warn("Current checkout is the primary repo worktree. Pass --branch to create or repair a linked worktree.");
p.outro(pc.yellow("No worktree repaired."));
return;
}
const source = resolveWorktreeRepairSource(opts);
if (!existsSync(source.configPath)) {
throw new Error(`Source config not found at ${source.configPath}.`);
}
if (path.resolve(source.configPath) === path.resolve(target.configPath)) {
throw new Error("Source and target Paperclip configs are the same. Use --from-config/--from-instance to point repair at a different source.");
}
const targetConfig = existsSync(target.configPath) ? readConfig(target.configPath) : null;
const targetEnvEntries = readPaperclipEnvEntries(resolvePaperclipEnvFile(target.configPath));
const targetHasWorktreeEnv = Boolean(
nonEmpty(targetEnvEntries.PAPERCLIP_HOME) && nonEmpty(targetEnvEntries.PAPERCLIP_INSTANCE_ID),
);
if (targetConfig && targetHasWorktreeEnv && opts.noSeed) {
p.log.message(pc.dim(`Target ${target.label} already has worktree-local config/env. Skipping reseed because --no-seed was passed.`));
p.outro(pc.green(`Worktree metadata already looks healthy for ${target.label}.`));
return;
}
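// Worktree config/env already exist, so a reseed of the isolated database is enough.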
if (targetConfig && targetHasWorktreeEnv) {
await runWorktreeReseed({
fromConfig: source.configPath,
to: target.rootPath,
seedMode,
preserveLiveWork: opts.preserveLiveWork,
yes: true,
allowLiveTarget: opts.allowLiveTarget,
});
return;
}
const repairInstanceId = sanitizeWorktreeInstanceId(path.basename(target.rootPath));
const repairPaths = resolveWorktreeLocalPaths({
cwd: target.rootPath,
homeDir: resolveWorktreeHome(opts.home),
instanceId: repairInstanceId,
});
const runningTargetPid = readRunningPostmasterPid(path.resolve(repairPaths.embeddedPostgresDataDir, "postmaster.pid"));
if (runningTargetPid && !opts.allowLiveTarget) {
throw new Error(
`Target worktree database appears to be running (pid ${runningTargetPid}). Stop Paperclip in ${target.rootPath} before repairing, or re-run with --allow-live-target if you want to override this guard.`,
);
}
if (runningTargetPid && opts.allowLiveTarget) {
p.log.warning(`Proceeding even though the target embedded PostgreSQL appears to be running (pid ${runningTargetPid}).`);
}
const originalCwd = process.cwd();
try {
process.chdir(target.rootPath);
await runWorktreeInit({
home: opts.home,
fromConfig: source.configPath,
fromDataDir: opts.fromDataDir,
fromInstance: opts.fromInstance,
seed: opts.noSeed ? false : true,
seedMode,
preserveLiveWork: opts.preserveLiveWork,
force: true,
});
} finally {
process.chdir(originalCwd);
}
}
export function registerWorktreeCommands(program: Command): void {
const worktree = program.command("worktree").description("Worktree-local Paperclip instance helpers");
@@ -3255,7 +2773,6 @@ export function registerWorktreeCommands(program: Command): void {
.option("--server-port <port>", "Preferred server port", (value) => Number(value))
.option("--db-port <port>", "Preferred embedded Postgres port", (value) => Number(value))
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
.option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
.option("--no-seed", "Skip database seeding from the source instance")
.option("--force", "Replace existing repo-local config and isolated instance data", false)
.action(worktreeMakeCommand);
@@ -3272,7 +2789,6 @@ export function registerWorktreeCommands(program: Command): void {
.option("--server-port <port>", "Preferred server port", (value) => Number(value))
.option("--db-port <port>", "Preferred embedded Postgres port", (value) => Number(value))
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
.option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
.option("--no-seed", "Skip database seeding from the source instance")
.option("--force", "Replace existing repo-local config and isolated instance data", false)
.action(worktreeInitCommand);
@@ -3312,25 +2828,10 @@ export function registerWorktreeCommands(program: Command): void {
.option("--from-data-dir <path>", "Source PAPERCLIP_HOME used when deriving the source config")
.option("--from-instance <id>", "Source instance id when deriving the source config")
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: full)", "full")
.option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
.option("--yes", "Skip the destructive confirmation prompt", false)
.option("--allow-live-target", "Override the guard that requires the target worktree DB to be stopped first", false)
.action(worktreeReseedCommand);
worktree
.command("repair")
.description("Create or repair a linked worktree-local Paperclip instance without touching the primary checkout")
.option("--branch <name>", "Existing branch/worktree selector to repair, or a branch name to create under .paperclip/worktrees")
.option("--home <path>", `Home root for worktree instances (env: PAPERCLIP_WORKTREES_DIR, default: ${DEFAULT_WORKTREE_HOME})`)
.option("--from-config <path>", "Source config.json to seed from")
.option("--from-data-dir <path>", "Source PAPERCLIP_HOME used when deriving the source config")
.option("--from-instance <id>", "Source instance id when deriving the source config (default: default)")
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
.option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
.option("--no-seed", "Repair metadata only and skip reseeding when bootstrapping a missing worktree config", false)
.option("--allow-live-target", "Override the guard that requires the target worktree DB to be stopped first", false)
.action(worktreeRepairCommand);
program
.command("worktree:cleanup")
.description("Safely remove a worktree, its branch, and its isolated instance data")

View File

@@ -1,183 +0,0 @@
import { execFileSync } from "node:child_process";
import {
ALL_INTERFACES_BIND_HOST,
LOOPBACK_BIND_HOST,
inferBindModeFromHost,
isAllInterfacesHost,
isLoopbackHost,
type BindMode,
type DeploymentExposure,
type DeploymentMode,
} from "@paperclipai/shared";
import type { AuthConfig, ServerConfig } from "./schema.js";
const TAILSCALE_DETECT_TIMEOUT_MS = 3000;
type BaseServerInput = {
port: number;
allowedHostnames: string[];
serveUi: boolean;
};
export function inferConfiguredBind(server?: Partial<ServerConfig>): BindMode {
if (server?.bind) return server.bind;
return inferBindModeFromHost(server?.customBindHost ?? server?.host);
}
export function detectTailnetBindHost(): string | undefined {
const explicit = process.env.PAPERCLIP_TAILNET_BIND_HOST?.trim();
if (explicit) return explicit;
try {
const stdout = execFileSync("tailscale", ["ip", "-4"], {
encoding: "utf8",
stdio: ["ignore", "pipe", "ignore"],
timeout: TAILSCALE_DETECT_TIMEOUT_MS,
});
return stdout
.split(/\r?\n/)
.map((line) => line.trim())
.find(Boolean);
} catch {
return undefined;
}
}
export function buildPresetServerConfig(
bind: Exclude<BindMode, "custom">,
input: BaseServerInput,
): { server: ServerConfig; auth: AuthConfig } {
const host =
bind === "loopback"
? LOOPBACK_BIND_HOST
: bind === "tailnet"
? (detectTailnetBindHost() ?? LOOPBACK_BIND_HOST)
: ALL_INTERFACES_BIND_HOST;
return {
server: {
deploymentMode: bind === "loopback" ? "local_trusted" : "authenticated",
exposure: "private",
bind,
customBindHost: undefined,
host,
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
},
auth: {
baseUrlMode: "auto",
disableSignUp: false,
},
};
}
export function buildCustomServerConfig(input: BaseServerInput & {
deploymentMode: DeploymentMode;
exposure: DeploymentExposure;
host: string;
publicBaseUrl?: string;
}): { server: ServerConfig; auth: AuthConfig } {
const normalizedHost = input.host.trim();
const bind = isLoopbackHost(normalizedHost)
? "loopback"
: isAllInterfacesHost(normalizedHost)
? "lan"
: "custom";
return {
server: {
deploymentMode: input.deploymentMode,
exposure: input.deploymentMode === "local_trusted" ? "private" : input.exposure,
bind,
customBindHost: bind === "custom" ? normalizedHost : undefined,
host: normalizedHost,
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
},
auth:
input.deploymentMode === "authenticated" && input.exposure === "public"
? {
baseUrlMode: "explicit",
disableSignUp: false,
publicBaseUrl: input.publicBaseUrl,
}
: {
baseUrlMode: "auto",
disableSignUp: false,
},
};
}
export function resolveQuickstartServerConfig(input: {
bind?: BindMode | null;
deploymentMode?: DeploymentMode | null;
exposure?: DeploymentExposure | null;
host?: string | null;
port: number;
allowedHostnames: string[];
serveUi: boolean;
publicBaseUrl?: string;
}): { server: ServerConfig; auth: AuthConfig } {
const trimmedHost = input.host?.trim();
const explicitBind = input.bind ?? null;
if (explicitBind === "loopback" || explicitBind === "lan" || explicitBind === "tailnet") {
return buildPresetServerConfig(explicitBind, {
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
});
}
if (explicitBind === "custom") {
return buildCustomServerConfig({
deploymentMode: input.deploymentMode ?? "authenticated",
exposure: input.exposure ?? "private",
host: trimmedHost || LOOPBACK_BIND_HOST,
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
publicBaseUrl: input.publicBaseUrl,
});
}
if (trimmedHost) {
return buildCustomServerConfig({
deploymentMode: input.deploymentMode ?? (isLoopbackHost(trimmedHost) ? "local_trusted" : "authenticated"),
exposure: input.exposure ?? "private",
host: trimmedHost,
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
publicBaseUrl: input.publicBaseUrl,
});
}
if (input.deploymentMode === "authenticated") {
if (input.exposure === "public") {
return buildCustomServerConfig({
deploymentMode: "authenticated",
exposure: "public",
host: ALL_INTERFACES_BIND_HOST,
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
publicBaseUrl: input.publicBaseUrl,
});
}
return buildPresetServerConfig("lan", {
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
});
}
return buildPresetServerConfig("loopback", {
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
});
}

View File

@@ -8,7 +8,6 @@ import { heartbeatRun } from "./commands/heartbeat-run.js";
import { runCommand } from "./commands/run.js";
import { bootstrapCeoInvite } from "./commands/auth-bootstrap-ceo.js";
import { dbBackupCommand } from "./commands/db-backup.js";
import { registerEnvLabCommands } from "./commands/env-lab.js";
import { registerContextCommands } from "./commands/client/context.js";
import { registerCompanyCommands } from "./commands/client/company.js";
import { registerIssueCommands } from "./commands/client/issue.js";
@@ -51,8 +50,7 @@ program
.description("Interactive first-run setup wizard")
.option("-c, --config <path>", "Path to config file")
.option("-d, --data-dir <path>", DATA_DIR_OPTION_HELP)
.option("--bind <mode>", "Quickstart reachability preset (loopback, lan, tailnet)")
.option("-y, --yes", "Accept quickstart defaults (trusted local loopback unless --bind is set) and start immediately", false)
.option("-y, --yes", "Accept defaults (quickstart + start immediately)", false)
.option("--run", "Start Paperclip immediately after saving config", false)
.action(onboard);
@@ -110,7 +108,6 @@ program
.option("-c, --config <path>", "Path to config file")
.option("-d, --data-dir <path>", DATA_DIR_OPTION_HELP)
.option("-i, --instance <id>", "Local instance id (default: default)")
.option("--bind <mode>", "On first run, use onboarding reachability preset (loopback, lan, tailnet)")
.option("--repair", "Attempt automatic repairs during doctor", true)
.option("--no-repair", "Disable automatic repairs during doctor")
.action(runCommand);
@@ -148,7 +145,6 @@ registerDashboardCommands(program);
registerRoutineCommands(program);
registerFeedbackCommands(program);
registerWorktreeCommands(program);
registerEnvLabCommands(program);
registerPluginCommands(program);
const auth = program.command("auth").description("Authentication and bootstrap utilities");

View File

@@ -1,16 +1,6 @@
import * as p from "@clack/prompts";
import { isLoopbackHost, type BindMode } from "@paperclipai/shared";
import type { AuthConfig, ServerConfig } from "../config/schema.js";
import { parseHostnameCsv } from "../config/hostnames.js";
import { buildCustomServerConfig, buildPresetServerConfig, inferConfiguredBind } from "../config/server-bind.js";
const TAILNET_BIND_WARNING =
"No Tailscale address was detected during setup. The saved config will stay on loopback until Tailscale is available or PAPERCLIP_TAILNET_BIND_HOST is set.";
function cancelled(): never {
p.cancel("Setup cancelled.");
process.exit(0);
}
export async function promptServer(opts?: {
currentServer?: Partial<ServerConfig>;
@@ -18,37 +8,69 @@ export async function promptServer(opts?: {
}): Promise<{ server: ServerConfig; auth: AuthConfig }> {
const currentServer = opts?.currentServer;
const currentAuth = opts?.currentAuth;
const currentBind = inferConfiguredBind(currentServer);
const bindSelection = await p.select({
message: "Reachability",
const deploymentModeSelection = await p.select({
message: "Deployment mode",
options: [
{
value: "loopback" as const,
label: "Trusted local",
hint: "Recommended for first run: localhost only, no login friction",
value: "local_trusted",
label: "Local trusted",
hint: "Easiest for local setup (no login, localhost-only)",
},
{
value: "lan" as const,
label: "Private network",
hint: "Broad private bind for LAN, VPN, or legacy --tailscale-auth style access",
},
{
value: "tailnet" as const,
label: "Tailnet",
hint: "Private authenticated access using the machine's detected Tailscale address",
},
{
value: "custom" as const,
label: "Custom",
hint: "Choose exact auth mode, exposure, and host manually",
value: "authenticated",
label: "Authenticated",
hint: "Login required; use for private network or public hosting",
},
],
initialValue: currentBind,
initialValue: currentServer?.deploymentMode ?? "local_trusted",
});
if (p.isCancel(bindSelection)) cancelled();
const bind = bindSelection as BindMode;
if (p.isCancel(deploymentModeSelection)) {
p.cancel("Setup cancelled.");
process.exit(0);
}
const deploymentMode = deploymentModeSelection as ServerConfig["deploymentMode"];
let exposure: ServerConfig["exposure"] = "private";
if (deploymentMode === "authenticated") {
const exposureSelection = await p.select({
message: "Exposure profile",
options: [
{
value: "private",
label: "Private network",
hint: "Private access (for example Tailscale), lower setup friction",
},
{
value: "public",
label: "Public internet",
hint: "Internet-facing deployment with stricter requirements",
},
],
initialValue: currentServer?.exposure ?? "private",
});
if (p.isCancel(exposureSelection)) {
p.cancel("Setup cancelled.");
process.exit(0);
}
exposure = exposureSelection as ServerConfig["exposure"];
}
const hostDefault = deploymentMode === "local_trusted" ? "127.0.0.1" : "0.0.0.0";
const hostStr = await p.text({
message: "Bind host",
defaultValue: currentServer?.host ?? hostDefault,
placeholder: hostDefault,
validate: (val) => {
if (!val.trim()) return "Host is required";
},
});
if (p.isCancel(hostStr)) {
p.cancel("Setup cancelled.");
process.exit(0);
}
const portStr = await p.text({
message: "Server port",
@@ -62,113 +84,15 @@ export async function promptServer(opts?: {
},
});
if (p.isCancel(portStr)) cancelled();
const port = Number(portStr) || 3100;
const serveUi = currentServer?.serveUi ?? true;
if (bind === "loopback") {
return buildPresetServerConfig("loopback", {
port,
allowedHostnames: [],
serveUi,
});
if (p.isCancel(portStr)) {
p.cancel("Setup cancelled.");
process.exit(0);
}
if (bind === "lan" || bind === "tailnet") {
const allowedHostnamesInput = await p.text({
message: "Allowed private hostnames (comma-separated, optional)",
defaultValue: (currentServer?.allowedHostnames ?? []).join(", "),
placeholder:
bind === "tailnet"
? "your-machine.tailnet.ts.net"
: "dotta-macbook-pro, host.docker.internal",
validate: (val) => {
try {
parseHostnameCsv(val);
return;
} catch (err) {
return err instanceof Error ? err.message : "Invalid hostname list";
}
},
});
if (p.isCancel(allowedHostnamesInput)) cancelled();
const preset = buildPresetServerConfig(bind, {
port,
allowedHostnames: parseHostnameCsv(allowedHostnamesInput),
serveUi,
});
if (bind === "tailnet" && isLoopbackHost(preset.server.host)) {
p.log.warn(TAILNET_BIND_WARNING);
}
return preset;
}
const deploymentModeSelection = await p.select({
message: "Auth mode",
options: [
{
value: "local_trusted",
label: "Local trusted",
hint: "No login required; only safe with loopback-only or similarly trusted access",
},
{
value: "authenticated",
label: "Authenticated",
hint: "Login required; supports both private-network and public deployments",
},
],
initialValue: currentServer?.deploymentMode ?? "authenticated",
});
if (p.isCancel(deploymentModeSelection)) cancelled();
const deploymentMode = deploymentModeSelection as ServerConfig["deploymentMode"];
let exposure: ServerConfig["exposure"] = "private";
if (deploymentMode === "authenticated") {
const exposureSelection = await p.select({
message: "Exposure profile",
options: [
{
value: "private",
label: "Private network",
hint: "Private access only, with automatic URL handling",
},
{
value: "public",
label: "Public internet",
hint: "Internet-facing deployment with explicit public URL requirements",
},
],
initialValue: currentServer?.exposure ?? "private",
});
if (p.isCancel(exposureSelection)) cancelled();
exposure = exposureSelection as ServerConfig["exposure"];
}
const defaultHost =
currentServer?.customBindHost ??
currentServer?.host ??
(deploymentMode === "local_trusted" ? "127.0.0.1" : "0.0.0.0");
const host = await p.text({
message: "Bind host",
defaultValue: defaultHost,
placeholder: defaultHost,
validate: (val) => {
if (!val.trim()) return "Host is required";
if (deploymentMode === "local_trusted" && !isLoopbackHost(val.trim())) {
return "Local trusted mode requires a loopback host such as 127.0.0.1";
}
},
});
if (p.isCancel(host)) cancelled();
let allowedHostnames: string[] = [];
if (deploymentMode === "authenticated" && exposure === "private") {
const allowedHostnamesInput = await p.text({
message: "Allowed private hostnames (comma-separated, optional)",
message: "Allowed hostnames (comma-separated, optional)",
defaultValue: (currentServer?.allowedHostnames ?? []).join(", "),
placeholder: "dotta-macbook-pro, your-host.tailnet.ts.net",
validate: (val) => {
@@ -181,11 +105,15 @@ export async function promptServer(opts?: {
},
});
if (p.isCancel(allowedHostnamesInput)) cancelled();
if (p.isCancel(allowedHostnamesInput)) {
p.cancel("Setup cancelled.");
process.exit(0);
}
allowedHostnames = parseHostnameCsv(allowedHostnamesInput);
}
let publicBaseUrl: string | undefined;
const port = Number(portStr) || 3100;
let auth: AuthConfig = { baseUrlMode: "auto", disableSignUp: false };
if (deploymentMode === "authenticated" && exposure === "public") {
const urlInput = await p.text({
message: "Public base URL",
@@ -205,17 +133,32 @@ export async function promptServer(opts?: {
}
},
});
if (p.isCancel(urlInput)) cancelled();
publicBaseUrl = urlInput.trim().replace(/\/+$/, "");
if (p.isCancel(urlInput)) {
p.cancel("Setup cancelled.");
process.exit(0);
}
auth = {
baseUrlMode: "explicit",
disableSignUp: false,
publicBaseUrl: urlInput.trim().replace(/\/+$/, ""),
};
} else if (currentAuth?.baseUrlMode === "explicit" && currentAuth.publicBaseUrl) {
auth = {
baseUrlMode: "explicit",
disableSignUp: false,
publicBaseUrl: currentAuth.publicBaseUrl,
};
}
return buildCustomServerConfig({
deploymentMode,
exposure,
host: host.trim(),
port,
allowedHostnames,
serveUi,
publicBaseUrl,
});
return {
server: {
deploymentMode,
exposure,
host: hostStr.trim(),
port,
allowedHostnames,
serveUi: currentServer?.serveUi ?? true,
},
auth,
};
}

View File

@@ -2,7 +2,7 @@
Paperclip CLI now supports both:
- instance setup/diagnostics (`onboard`, `doctor`, `configure`, `env`, `allowed-hostname`, `env-lab`)
- instance setup/diagnostics (`onboard`, `doctor`, `configure`, `env`, `allowed-hostname`)
- control-plane client operations (issues, approvals, agents, activity, dashboard)
## Base Usage
@@ -32,12 +32,10 @@ Mode taxonomy and design intent are documented in `doc/DEPLOYMENT-MODES.md`.
Current CLI behavior:
- `paperclipai onboard` and `paperclipai configure --section server` set deployment mode in config
- server onboarding/configure ask for reachability intent and write `server.bind`
- `paperclipai run --bind <loopback|lan|tailnet>` passes a quickstart bind preset into first-run onboarding when config is missing
- runtime can override mode with `PAPERCLIP_DEPLOYMENT_MODE`
- `paperclipai run` and `paperclipai doctor` still do not expose a direct low-level `--mode` flag
- `paperclipai run` and `paperclipai doctor` do not yet expose a direct `--mode` flag
Canonical behavior is documented in `doc/DEPLOYMENT-MODES.md`.
Target behavior (planned) is documented in `doc/DEPLOYMENT-MODES.md` section 5.
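As a sketch of the runtime override listed above (the env var affects only that process; config on disk is untouched):

```sh
PAPERCLIP_DEPLOYMENT_MODE=authenticated pnpm paperclipai run
```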
Allow an authenticated/private hostname (for example custom Tailscale DNS):
@@ -45,15 +43,6 @@ Allow an authenticated/private hostname (for example custom Tailscale DNS):
pnpm paperclipai allowed-hostname dotta-macbook-pro
```
Bring up the default local SSH fixture for environment testing:
```sh
pnpm paperclipai env-lab up
pnpm paperclipai env-lab doctor
pnpm paperclipai env-lab status --json
pnpm paperclipai env-lab down
```
All client commands support:
- `--data-dir <path>`

View File

@@ -27,18 +27,6 @@ pnpm db:migrate
When `DATABASE_URL` is unset, this command targets the current embedded PostgreSQL instance for your active Paperclip config/instance.
Issue reference mentions follow the normal migration path: the schema migration creates the tracking table, but it does not backfill historical issue titles, descriptions, comments, or documents automatically.
To backfill existing content manually after migrating, run:
```sh
pnpm issue-references:backfill
# optional: limit to one company
pnpm issue-references:backfill -- --company <company-id>
```
Future issue, comment, and document writes sync references automatically without running the backfill command.
This mode is ideal for local development and one-command installs.
Docker note: the Docker quickstart image also uses embedded PostgreSQL by default. Persist `/paperclip` to keep DB state across container restarts (see `doc/DOCKER.md`).
@@ -59,11 +47,11 @@ cp .env.example .env
# DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip
```
Run migrations:
Run migrations (once the migration generation issue is fixed) or use `drizzle-kit push`:
```sh
DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip \
pnpm db:migrate
npx drizzle-kit push
```
Start the server:
@@ -100,27 +88,27 @@ postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:
### Configure
For the application runtime, use a direct PostgreSQL connection unless the database client has explicit prepared-statement configuration for your pooling mode:
```sh
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres
```
If you later run the app with a pooled runtime URL, set `DATABASE_MIGRATION_URL` to the direct connection URL. Paperclip uses it for startup schema checks/migrations and plugin namespace migrations, while the app continues to use `DATABASE_URL` for runtime queries:
Set `DATABASE_URL` in your `.env`:
```sh
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:6543/postgres
DATABASE_MIGRATION_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres
```
If your hosted database requires transaction-pooling-only connections, use a direct or session-pooled connection for Paperclip until runtime pooling support is documented in this guide. Do not edit database client source files as part of deployment setup.
If using connection pooling (port 6543), the `postgres` client must disable prepared statements. Update `packages/db/src/client.ts`:
```ts
// Existing imports in client.ts, shown so the snippet is self-contained (paths approximate).
import postgres from "postgres";
import { drizzle as drizzlePg } from "drizzle-orm/postgres-js";
import * as schema from "./schema/index.js";

export function createDb(url: string) {
// Transaction pooling (port 6543) cannot use prepared statements.
const sql = postgres(url, { prepare: false });
return drizzlePg(sql, { schema });
}
```
### Push the schema
```sh
# Use the direct connection (port 5432) for schema changes
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@...5432/postgres \
pnpm db:migrate
npx drizzle-kit push
```
### Free tier limits
@@ -143,22 +131,6 @@ The database mode is controlled by `DATABASE_URL`:
Your Drizzle schema (`packages/db/src/schema/`) stays the same regardless of mode.
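For example, the two modes differ only in whether the variable is set (connection string shown earlier in this guide):

```sh
# embedded PostgreSQL (default): leave DATABASE_URL unset
pnpm dev

# hosted PostgreSQL: point DATABASE_URL at your server
DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip pnpm dev
```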
## Plugin database namespaces
The plugin runtime tracks plugin-owned database namespaces and migrations in `plugin_database_namespaces` and `plugin_migrations`. Hosted deployments that separate runtime and migration connections should set `DATABASE_MIGRATION_URL`; plugin namespace migration work uses the migration connection when present.
## Backups
Paperclip supports automatic and manual logical database backups. These dumps include
non-system database schemas such as `public`, the Drizzle migration journal, and
plugin-owned database schemas. See `doc/DEVELOPING.md` for the current
`paperclipai db:backup` / `pnpm db:backup` commands and backup retention
configuration.
Database backups do not include non-database instance files such as local-disk
uploads, workspace files, or the local encrypted secrets master key. Back those paths
up separately when you need full instance disaster recovery.
## Secret storage
Paperclip stores secret metadata and versions in:

View File

@@ -17,11 +17,6 @@ Paperclip supports two runtime modes:
This keeps one authenticated auth stack while still separating low-friction private-network defaults from internet-facing hardening requirements.
Paperclip now treats **bind** as a separate concern from auth:
- auth model: `local_trusted` vs `authenticated`, plus `private/public`
- reachability model: `server.bind = loopback | lan | tailnet | custom`
## 2. Canonical Model
| Runtime Mode | Exposure | Human auth | Primary use |
@@ -30,15 +25,6 @@ Paperclip now treats **bind** as a separate concern from auth:
| `authenticated` | `private` | Login required | Private-network access (for example Tailscale/VPN/LAN) |
| `authenticated` | `public` | Login required | Internet-facing/cloud deployment |
## Reachability Model
| Bind | Meaning | Typical use |
|---|---|---|
| `loopback` | Listen on localhost only | default local usage, reverse-proxy deployments |
| `lan` | Listen on all interfaces (`0.0.0.0`) | LAN/VPN/private-network access |
| `tailnet` | Listen on a detected Tailscale IP | Tailscale-only access |
| `custom` | Listen on an explicit host/IP | advanced interface-specific setups |
## 3. Security Policy
## `local_trusted`
@@ -52,14 +38,12 @@ Paperclip now treats **bind** as a separate concern from auth:
- login required
- low-friction URL handling (`auto` base URL mode)
- private-host trust policy required
- bind can be `loopback`, `lan`, `tailnet`, or `custom`
## `authenticated + public`
- login required
- explicit public URL required
- stricter deployment checks and failures in doctor
- recommended bind is `loopback` behind a reverse proxy; direct `lan/custom` is advanced
## 4. Onboarding UX Contract
@@ -71,22 +55,14 @@ pnpm paperclipai onboard
Server prompt behavior:
1. quickstart `--yes` defaults to `server.bind=loopback` and therefore `local_trusted/private`
2. advanced server setup asks reachability first:
- `Trusted local` → `bind=loopback`, `local_trusted/private`
- `Private network` → `bind=lan`, `authenticated/private`
- `Tailnet` → `bind=tailnet`, `authenticated/private`
- `Custom` → manual mode/exposure/host entry
3. raw host entry is only required for the `Custom` path
4. explicit public URL is only required for `authenticated + public`
Examples:
```sh
pnpm paperclipai onboard --yes
pnpm paperclipai onboard --yes --bind lan
pnpm paperclipai run --bind tailnet
```
1. ask mode, default `local_trusted`
2. option copy:
- `local_trusted`: "Easiest for local setup (no login, localhost-only)"
- `authenticated`: "Login required; use for private network or public hosting"
3. if `authenticated`, ask exposure:
- `private`: "Private network access (for example Tailscale), lower setup friction"
- `public`: "Internet-facing deployment, stricter security requirements"
4. ask explicit public URL only for `authenticated + public`
`configure --section server` follows the same interactive behavior.
@@ -142,4 +118,3 @@ This prevents lockout when a user migrates from long-running local trusted usage
- implementation plan: `doc/plans/deployment-auth-mode-consolidation.md`
- V1 contract: `doc/SPEC-implementation.md`
- operator workflows: `doc/DEVELOPING.md` and `doc/CLI.md`
- invite/join state map: `doc/spec/invite-flow.md`

View File

@@ -43,19 +43,6 @@ This starts:
`pnpm dev` and `pnpm dev:once` are now idempotent for the current repo and instance: if the matching Paperclip dev runner is already alive, Paperclip reports the existing process instead of starting a duplicate.
Issue execution may also use project execution workspace policies and workspace runtime services for per-project worktrees, preview servers, and managed dev commands. Configure those through the project workspace/runtime surfaces rather than starting long-running unmanaged processes when a task needs a reusable service.
## Storybook
The board UI Storybook keeps stories and Storybook config under `ui/storybook/` so component review files stay out of the app source routes.
```sh
pnpm storybook
pnpm build-storybook
```
These run the `@paperclipai/ui` Storybook on port `6006` and build the static output to `ui/storybook-static/`.
Inspect or stop the current repo's managed dev runner:
```sh
@@ -67,56 +54,18 @@ pnpm dev:stop
Tailscale/private-auth dev mode:
```sh
pnpm dev --bind lan
```
This runs dev as `authenticated/private` with a private-network bind preset.
For Tailscale-only reachability on a detected tailnet address:
```sh
pnpm dev --bind tailnet
```
Legacy aliases still map to the old broad private-network behavior:
```sh
pnpm dev --tailscale-auth
pnpm dev --authenticated-private
```
This runs dev as `authenticated/private` and binds the server to `0.0.0.0` for private-network access.
Allow additional private hostnames (for example custom Tailscale hostnames):
```sh
pnpm paperclipai allowed-hostname dotta-macbook-pro
```
## Test Commands
Use the cheap local default unless you are specifically working on browser flows:
```sh
pnpm test
```
`pnpm test` runs the Vitest suite only. For interactive Vitest watch mode use:
```sh
pnpm test:watch
```
Browser suites stay separate:
```sh
pnpm test:e2e
pnpm test:release-smoke
```
These browser suites are intended for targeted local verification and CI, not the default agent/human test command.
For normal issue work, start with the smallest targeted check that proves the change. Reserve repo-wide typecheck/build/test runs for PR-ready handoff or changes broad enough that narrow checks do not cover the risk.
## One-Command Local Run
For a first-time local install, you can bootstrap and run in one command:
@@ -198,8 +147,6 @@ For `codex_local`, Paperclip also manages a per-company Codex home under the ins
If the `codex` CLI is not installed or not on `PATH`, `codex_local` agent runs fail at execution time with a clear adapter error. Quota polling uses a short-lived `codex app-server` subprocess: when `codex` cannot be spawned, that provider reports `ok: false` in aggregated quota results and the API server keeps running (it must not exit on a missing binary).
Local adapters require their corresponding CLI/session setup on the machine running Paperclip. External adapters are installed through the adapter/plugin flow and should not require hardcoded imports in `server/` or `ui/`.
## Worktree-local Instances
When developing from multiple git worktrees, do not point two Paperclip servers at the same embedded PostgreSQL data directory.
@@ -226,13 +173,9 @@ Seed modes:
- `full` makes a full logical clone of the source instance
- `--no-seed` creates an empty isolated instance
Seeded worktree instances quarantine copied live execution by default for both `minimal` and `full` seeds. During restore, Paperclip disables copied agent timer heartbeats, resets copied `running` agents to `idle`, blocks and unassigns copied agent-owned `in_progress` issues, and unassigns copied agent-owned `todo`/`in_review` issues. This keeps a freshly booted worktree from starting agents for work already owned by the source instance. Pass `--preserve-live-work` only when you intentionally want the isolated worktree to resume copied assignments.
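For example:

```sh
# default: copied live work is quarantined during seeding
pnpm paperclipai worktree init

# opt out only when this worktree should resume the copied assignments
pnpm paperclipai worktree init --preserve-live-work
```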
After `worktree init`, both the server and the CLI auto-load the repo-local `.paperclip/.env` when run inside that worktree, so normal commands like `pnpm dev`, `paperclipai doctor`, and `paperclipai db:backup` stay scoped to the worktree instance.
`pnpm dev` now fails fast in a linked git worktree when `.paperclip/.env` is missing, instead of silently booting against the default instance/port. If that happens, run `paperclipai worktree init` in the worktree first.
Provisioned git worktrees also pause seeded routines that still have enabled schedule triggers in the isolated worktree database by default. This prevents copied daily/cron routines from firing unexpectedly inside the new workspace instance during development without disabling webhook/API-only routines.
Provisioned git worktrees also pause all seeded routines in the isolated worktree database by default. This prevents copied daily/cron routines from firing unexpectedly inside the new workspace instance during development.
That repo-local env also sets:
@@ -241,8 +184,6 @@ That repo-local env also sets:
- `PAPERCLIP_WORKTREE_COLOR=<hex-color>`
The server/UI use those values for worktree-specific branding such as the top banner and dynamically colored favicon.
Authenticated worktree servers also use the `PAPERCLIP_INSTANCE_ID` value to scope Better Auth cookie names.
Browser cookies are shared by host rather than port, so this prevents logging into one `127.0.0.1:<port>` worktree from replacing another worktree server's session cookie.
Print shell exports explicitly when needed:
@@ -283,7 +224,7 @@ paperclipai worktree init --force
Repair an already-created repo-managed worktree and reseed its isolated instance from the main default install:
```sh
cd /path/to/paperclip/.paperclip/worktrees/PAP-884-ai-commits-component
cd ~/.paperclip/worktrees/PAP-884-ai-commits-component
pnpm paperclipai worktree init --force --seed-mode minimal \
--name PAP-884-ai-commits-component \
--from-config ~/.paperclip/instances/default/config.json
@@ -291,33 +232,6 @@ pnpm paperclipai worktree init --force --seed-mode minimal \
That rewrites the worktree-local `.paperclip/config.json` + `.paperclip/.env`, recreates the isolated instance under `~/.paperclip-worktrees/instances/<worktree-id>/`, and preserves the git worktree contents themselves.
For an already-created worktree where you want the CLI to decide whether to rebuild missing worktree metadata or just reseed the isolated DB, use `worktree repair`.
**`pnpm paperclipai worktree repair [options]`** — Repair the current linked worktree by default, or create/repair a named linked worktree under `.paperclip/worktrees/` when `--branch` is provided. The command never targets the primary checkout unless you explicitly pass `--branch`.
| Option | Description |
|---|---|
| `--branch <name>` | Existing branch/worktree selector to repair, or a branch name to create under `.paperclip/worktrees` |
| `--home <path>` | Home root for worktree instances (default: `~/.paperclip-worktrees`) |
| `--from-config <path>` | Source config.json to seed from |
| `--from-data-dir <path>` | Source `PAPERCLIP_HOME` used when deriving the source config |
| `--from-instance <id>` | Source instance id when deriving the source config (default: `default`) |
| `--seed-mode <mode>` | Seed profile: `minimal` or `full` (default: `minimal`) |
| `--no-seed` | Repair metadata only when bootstrapping a missing worktree config |
| `--allow-live-target` | Override the guard that requires the target worktree DB to be stopped first |
Examples:
```sh
# From inside a linked worktree, rebuild missing .paperclip metadata and reseed it from the default instance.
cd /path/to/paperclip/.paperclip/worktrees/PAP-1132-assistant-ui-pap-1131-make-issues-comments-be-like-a-chat
pnpm paperclipai worktree repair
# From the primary checkout, create or repair a linked worktree for a branch under .paperclip/worktrees/.
cd /path/to/paperclip
pnpm paperclipai worktree repair --branch PAP-1132-assistant-ui-pap-1131-make-issues-comments-be-like-a-chat
```
For an already-created worktree where you want to keep the existing repo-local config/env and only overwrite the isolated database, use `worktree reseed` instead. Stop the target worktree's Paperclip server first so the command can replace the DB safely.
**`pnpm paperclipai worktree reseed [options]`** — Re-seed an existing worktree-local instance from another Paperclip instance or worktree while preserving the target worktree's current config, ports, and instance identity.
@@ -421,9 +335,7 @@ If you set `DATABASE_URL`, the server will use that instead of embedded PostgreS
## Automatic DB Backups
Paperclip can run automatic logical database backups on a timer. These backups cover
non-system database schemas, including migration history and plugin-owned database
schemas. Defaults:
Paperclip can run automatic DB backups on a timer. Defaults:
- enabled
- every 60 minutes
@@ -451,10 +363,6 @@ Environment overrides:
- `PAPERCLIP_DB_BACKUP_RETENTION_DAYS=<days>`
- `PAPERCLIP_DB_BACKUP_DIR=/absolute/or/~/path`
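For example (values are illustrative):

```sh
PAPERCLIP_DB_BACKUP_RETENTION_DAYS=14 \
PAPERCLIP_DB_BACKUP_DIR=~/paperclip-backups \
pnpm dev
```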
DB backups are not full instance filesystem backups. For full local disaster
recovery, also back up local storage files and the local encrypted secrets key if
those providers are enabled.
## Secrets in Dev
Agent env vars now support secret references. By default, secret values are stored with local encryption and only secret refs are persisted in agent config.

View File

@@ -23,7 +23,7 @@ Paperclip is the command, communication, and control plane for a company of AI a
- **Track work in real time** — see at any moment what every agent is working on
- **Control costs** — token salary budgets per agent, spend tracking, burn rate
- **Align to goals** — agents see how their work serves the bigger mission
- **Preserve work context** — comments, documents, work products, attachments, and company state stay attached to the work
- **Store company knowledge** — a shared brain for the organization
## Architecture
@@ -36,20 +36,17 @@ The central nervous system. Manages:
- Agent registry and org chart
- Task assignment and status
- Budget and token spend tracking
- Issue comments, documents, work products, attachments, and company state
- Company knowledge base
- Goal hierarchy (company → team → agent → task)
- Heartbeat monitoring — know when agents are alive, idle, or stuck
It also enforces execution-control semantics such as single-assignee issues, atomic checkout and execution locks, blockers, recovery issues, and workspace/runtime controls.
### 2. Execution Services (adapters)
Agents run externally and report into the control plane. Adapters connect different execution environments and define how a heartbeat is invoked, observed, and cancelled:
Agents run externally and report into the control plane. An agent is just Python code that gets kicked off and does work. Adapters connect different execution environments:
- **Local CLI/session adapters** — built-in adapters for tools such as Claude Code, Codex, Gemini, OpenCode, Pi, and Cursor
- **HTTP/process-style adapters** — command or webhook/API integrations for custom runtimes
- **OpenClaw gateway** — integration for OpenClaw-style remote agents
- **External adapter plugins** — dynamically loaded adapters installed outside the core app
- **OpenClaw** — initial adapter target
- **Heartbeat loop** — simple custom Python that loops, checks in, does work
- **Others** — any runtime that can call an API
The control plane doesn't run agents. It orchestrates them. Agents run wherever they run and phone home.

View File

@@ -3,7 +3,7 @@ Use this exact checklist.
1. Start Paperclip in auth mode.
```bash
cd <paperclip-repo-root>
pnpm dev --bind lan
pnpm dev --tailscale-auth
```
Then verify:
```bash

View File

@@ -32,14 +32,12 @@ Then you define who reports to the CEO: a CTO managing programmers, a CMO managi
### Agent Execution
Paperclip supports several ways to run an agent's heartbeat:
There are two fundamental modes for running an agent's heartbeat:
1. **Local CLI/session adapters** — Paperclip starts or resumes local coding-tool sessions such as Claude Code, Codex, Gemini, OpenCode, Pi, and Cursor, then tracks the run.
2. **Run a command** — Paperclip kicks off a process (shell command, Python script, etc.) and tracks it. The heartbeat is "execute this and monitor it."
3. **Fire and forget a request** — Paperclip sends a webhook/API call to an externally running agent. The heartbeat is "notify this agent to wake up." OpenClaw-style hooks work this way.
4. **External adapter plugins** — Paperclip loads adapter packages through the plugin/adapter flow so self-hosted installs can add runtimes without hardcoding them in core.
1. **Run a command** — Paperclip kicks off a process (shell command, Python script, etc.) and tracks it. The heartbeat is "execute this and monitor it."
2. **Fire and forget a request** — Paperclip sends a webhook/API call to an externally running agent. The heartbeat is "notify this agent to wake up." (OpenClaw hooks work this way.)
Agent runs can use project and execution workspaces, managed runtime services such as preview/dev servers, adapter-specific session state, and HTTP/webhook-style execution. We provide sensible defaults, but the adapter is still the boundary: if a runtime can be invoked, observed, and authorized, Paperclip can coordinate it.
We provide sensible defaults — a default agent that shells out to Claude Code or Codex with your configuration, remembers session IDs, runs basic scripts. But you can plug in anything.
### Task Management
@@ -56,7 +54,7 @@ I am researching the Facebook ads Granola uses (current task)
Tasks have parentage. Every task exists in service of a parent task, all the way up to the company goal. This is what keeps autonomous agents aligned — they can always answer "why am I doing this?"
The current issue model includes stable issue identifiers, parent/sub-issues, blockers, a single assignee, comments, issue documents, attachments and work products, and review/approval handoffs. That structure keeps work inspectable by both the board and agents while still allowing agents to decompose work into smaller tasks.
More detailed task structure TBD.
## Principles
@@ -117,7 +115,7 @@ Paperclip's core identity is a **control plane for autonomous AI companies**,
- Do not make the core product a general chat app. The current product definition is explicitly task/comment-centric and “not a chatbot,” and that boundary is valuable.
- Do not build a complete Jira/GitHub replacement. The repo/docs already position Paperclip as organization orchestration, not focused on pull-request review.
- Do not build enterprise-grade RBAC first. Paperclip now has authenticated mode, company memberships, instance roles, and permission grants, but fine-grained enterprise governance should remain secondary to the core company control plane.
- Do not build enterprise-grade RBAC first. The current V1 spec still treats multi-board governance and fine-grained human permissions as out of scope, so the first multi-user version should be coarse and company-scoped.
- Do not lead with raw bash logs and transcripts. Default view should be human-readable intent/progress, with raw detail beneath.
- Do not force users to understand provider/API-key plumbing unless absolutely necessary. There are active onboarding/auth issues already; friction here is clearly real.
@@ -138,14 +136,11 @@ Paperclip's core identity is a **control plane for autonomous AI companies**,
5. **Output-first**
Work is not done until the user can see the result: file, document, preview link, screenshot, plan, or PR.
6. **Execution visibility without log worship**
Active runs, recovery issues, productivity review states, blockers, and work products should be first-class surfaces. Raw transcripts are available when needed, but they are not the primary product surface.
7. **Local-first, cloud-ready**
6. **Local-first, cloud-ready**
The mental model should not change between local solo use and shared/private or public/cloud deployment.
8. **Safe autonomy**
7. **Safe autonomy**
Auto mode is allowed; hidden token burn is not.
9. **Thin core, rich edges**
8. **Thin core, rich edges**
Put optional chat, knowledge, and special surfaces into plugins/extensions rather than bloating the control plane.

View File

@@ -115,6 +115,38 @@ If the first real publish returns npm `E404`, check npm-side prerequisites befor
- The initial publish must include `--access public` for a public scoped package.
- npm also requires either account 2FA for publishing or a granular token that is allowed to bypass 2FA.
### Manual first publish for `@paperclipai/mcp-server`
If you need to publish only the MCP server package once by hand, use:
- `@paperclipai/mcp-server`
Recommended flow from the repo root:
```bash
# optional sanity check: this 404s until the first publish exists
npm view @paperclipai/mcp-server version
# make sure the build output is fresh
pnpm --filter @paperclipai/mcp-server build
# confirm your local npm auth before the real publish
npm whoami
# safe preview of the exact publish payload
cd packages/mcp-server
pnpm publish --dry-run --no-git-checks --access public
# real publish
pnpm publish --no-git-checks --access public
```
Notes:
- Publish from `packages/mcp-server/`, not the repo root.
- If `npm view @paperclipai/mcp-server version` already returns the same version that is in [`packages/mcp-server/package.json`](../packages/mcp-server/package.json), do not republish. Bump the version or use the normal repo-wide release flow in [`scripts/release.sh`](../scripts/release.sh).
- The same npm-side prerequisites apply as above: valid npm auth, permission to publish to the `@paperclipai` scope, `--access public`, and the required publish auth/2FA policy.
## Version formats
Paperclip uses calendar versions:
@@ -143,13 +175,6 @@ This keeps the default install path unchanged while allowing explicit installs w
npx paperclipai@canary onboard
```
The release script now verifies two things after a canary publish:
- the `canary` dist-tag resolves to the version that was just published
- every published internal `@paperclipai/*` dependency referenced by that manifest exists on npm
It also treats `latest -> canary` as a failure by default, because npm metadata can otherwise leave the default install path pointing at an unreleased canary dependency graph. Only pass `./scripts/release.sh canary --allow-canary-latest` when that `latest` behavior is explicitly intended.
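Concretely:

```sh
# normal canary publish; fails if npm leaves `latest` pointing at the canary
./scripts/release.sh canary

# acknowledge an intentional latest -> canary state
./scripts/release.sh canary --allow-canary-latest
```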
### Stable
Stable publishes use the npm dist-tag `latest`.
@@ -176,58 +201,6 @@ That means:
See [doc/RELEASE-AUTOMATION-SETUP.md](RELEASE-AUTOMATION-SETUP.md) for the GitHub/npm setup steps.
## Release enrollment for new public packages
Paperclip does not auto-publish every non-private workspace package anymore.
CI publishing is controlled by [`scripts/release-package-manifest.json`](../scripts/release-package-manifest.json).
When you add a new public package:
1. add it to the manifest and decide whether CI should publish it immediately
2. if CI should publish it, bootstrap the package on npm before merge
3. if CI should not publish it yet, keep `"publishFromCi": false`
4. only enable `"publishFromCi": true` after npm trusted publishing is configured for that package
PR CI now checks changed release-enabled package manifests against npm. That catches a missing first-publish bootstrap before the change reaches `master`.
### One-time bootstrap sequence for a new package
The first publish of a brand-new package still needs one human maintainer with npm write access.
After that, trusted publishing can take over.
Example for `@paperclipai/adapter-acpx-local` from the repo root:
```bash
# safe preview
pnpm run release:bootstrap-package -- @paperclipai/adapter-acpx-local
# one-time first publish from an authenticated maintainer machine
pnpm run release:bootstrap-package -- @paperclipai/adapter-acpx-local --publish --otp 123456
```
The helper script:
- checks that the package does not already exist on npm
- builds the target package unless `--skip-build` is passed
- runs `npm pack --dry-run` in the package directory
- only runs the real `npm publish --access public` when `--publish --otp <code>` is provided
For the real `--publish` step, the maintainer machine must already be authenticated to npm.
If `npm whoami` returns `401`, first run `npm logout --registry=https://registry.npmjs.org/` to clear any stale local auth, then run `npm login` or `npm adduser` locally as an npm org member, and finally rerun the helper.
That local human auth is fine for the one-time bootstrap publish; we just do not want the same auth model inside CI.
The helper now requires `--otp <code>` up front for `--publish`, so it fails before the real publish attempt if the one-time password is missing.
After that first publish succeeds:
1. open `https://www.npmjs.com/package/@paperclipai/adapter-acpx-local`
2. go to `Settings` → `Trusted publishing`
3. add repository `paperclipai/paperclip`
4. set workflow filename to `release.yml`
5. optionally go to `Settings` → `Publishing access` and enable `Require two-factor authentication and disallow tokens`
6. keep `publishFromCi: true` in [`scripts/release-package-manifest.json`](../scripts/release-package-manifest.json)
Once those steps are done, future canary and stable publishes for that package are automated through GitHub OIDC. The manual step is only the first package creation on npm.
## Rollback model
Rollback does not unpublish anything.

View File

@@ -67,27 +67,6 @@ Why:
- the single `release.yml` workflow handles both canary and stable publishing
- GitHub environments `npm-canary` and `npm-stable` still enforce different approval rules on the GitHub side
### 2.2.1. Newly added public packages need a bootstrap phase
Trusted publishing is configured on the npm package itself, not at the repo scope.
That means a brand-new public package must not be auto-enrolled into CI publishing until its npm package exists and its trusted publisher has been configured.
Repo policy:
1. add every non-private package to [`scripts/release-package-manifest.json`](../scripts/release-package-manifest.json)
2. set `"publishFromCi": true` only when CI is expected to publish that package
3. if the package is not ready for CI publishing yet, keep `"publishFromCi": false`
4. complete the package bootstrap before merging any PR that changes a release-enabled new package
Bootstrap sequence for a new package:
1. publish the package once from a trusted maintainer machine using normal npm auth
2. open that package on npm and add the `paperclipai/paperclip` trusted publisher for `.github/workflows/release.yml`
3. rerun or dry-run the release flow as needed to confirm CI publishing now works
4. only then enable `"publishFromCi": true`
PR CI enforces this by checking changed release-enabled package manifests against npm. That keeps `master` canary publishing healthy while preserving the no-long-lived-token model for normal CI releases.
### 2.3. Verify trusted publishing before removing old auth
After the workflows are live:

View File

@@ -63,8 +63,6 @@ It:
- verifies the pushed commit
- computes the canary version for the current UTC date
- publishes under npm dist-tag `canary`
- verifies that `canary` resolves to the just-published version and that published internal dependencies exist on npm
- fails by default if npm leaves `latest` pointing at a canary; use `--allow-canary-latest` only when that state is intentional
- creates a git tag `canary/vYYYY.MDD.P-canary.N`
Users install canaries with:

View File

@@ -1,7 +1,7 @@
# Paperclip V1 Implementation Spec
Status: Implementation contract for first release (V1)
Date: 2026-04-28
Date: 2026-02-17
Audience: Product, engineering, and agent-integration authors
Source inputs: `GOAL.md`, `PRODUCT.md`, `SPEC.md`, `DATABASE.md`, current monorepo code
@@ -37,9 +37,8 @@ These decisions close open questions from `SPEC.md` for V1.
| Visibility | Full visibility to board and all agents in same company |
| Communication | Tasks + comments only (no separate chat system) |
| Task ownership | Single assignee; atomic checkout required for `in_progress` transition |
| Recovery | Liveness/watchdog recovery preserves explicit ownership: retry lost execution continuity where safe, otherwise create visible recovery issues or require human escalation (see `doc/execution-semantics.md`) |
| Agent adapters | Built-in `process`, `http`, local CLI/session adapters, and OpenClaw gateway support; external adapters can also be loaded through the adapter plugin flow |
| Plugin framework | Local/self-hosted early plugin runtime is in scope; cloud marketplace and packaged public distribution remain out of scope |
| Recovery | No automatic reassignment; work recovery stays manual/explicit |
| Agent adapters | Built-in `process` and `http` adapters |
| Auth | Mode-dependent human auth (`local_trusted` implicit board in current code; authenticated mode uses sessions), API keys for agents |
| Budget period | Monthly UTC calendar window |
| Budget enforcement | Soft alerts + hard limit auto-pause |
@@ -74,7 +73,7 @@ V1 implementation extends this baseline into a company-centric, governance-aware
## 5.2 Out of Scope (V1)
- Cloud-grade plugin marketplace/distribution beyond the local/self-hosted plugin runtime
- Plugin framework and third-party extension SDK
- Revenue/expense accounting beyond model/token costs
- Knowledge base subsystem
- Public marketplace (ClipHub)
@@ -124,16 +123,6 @@ Human auth tables (`users`, `sessions`, and provider-specific auth artifacts) ar
- `name` text not null
- `description` text null
- `status` enum: `active | paused | archived`
- `pause_reason` text null
- `paused_at` timestamptz null
- `issue_prefix` text not null
- `issue_counter` int not null
- `budget_monthly_cents` int not null default 0
- `spent_monthly_cents` int not null default 0
- `attachment_max_bytes` int not null
- `require_board_approval_for_new_agents` boolean not null default false
- feedback sharing consent fields
- branding fields such as `brand_color`
Invariant: every business record belongs to exactly one company.
@@ -144,21 +133,15 @@ Invariant: every business record belongs to exactly one company.
- `name` text not null
- `role` text not null
- `title` text null
- `icon` text null
- `status` enum: `active | paused | idle | running | error | pending_approval | terminated`
- `status` enum: `active | paused | idle | running | error | terminated`
- `reports_to` uuid fk `agents.id` null
- `capabilities` text null
- `adapter_type` text; built-ins include `process`, `http`, `claude_local`, `codex_local`, `gemini_local`, `opencode_local`, `pi_local`, `cursor`, and `openclaw_gateway`
- `adapter_type` enum: `process | http`
- `adapter_config` jsonb not null
- `runtime_config` jsonb not null default `{}`; may include Paperclip runtime policy such as `modelProfiles.cheap.adapterConfig` for an optional low-cost model lane that does not change the primary adapter config
- `default_environment_id` uuid fk `environments.id` null
- `context_mode` enum: `thin | fat` default `thin`
- `budget_monthly_cents` int not null default 0
- `spent_monthly_cents` int not null default 0
- pause fields: `pause_reason`, `paused_at`
- `permissions` jsonb not null default `{}`
- `last_heartbeat_at` timestamptz null
- `metadata` jsonb null
Invariants:
@@ -212,7 +195,6 @@ Invariant:
- `id` uuid pk
- `company_id` uuid fk not null
- `project_id` uuid fk `projects.id` null
- `project_workspace_id` uuid fk `project_workspaces.id` null
- `goal_id` uuid fk `goals.id` null
- `parent_id` uuid fk `issues.id` null
- `title` text not null
@@ -220,22 +202,13 @@ Invariant:
- `status` enum: `backlog | todo | in_progress | in_review | done | blocked | cancelled`
- `priority` enum: `critical | high | medium | low`
- `assignee_agent_id` uuid fk `agents.id` null
- `assignee_user_id` text null
- checkout/execution locks: `checkout_run_id`, `execution_run_id`, `execution_agent_name_key`, `execution_locked_at`
- `created_by_agent_id` uuid fk `agents.id` null
- `created_by_user_id` uuid fk `users.id` null
- identifier fields: `issue_number`, `identifier`
- origin fields: `origin_kind`, `origin_id`, `origin_run_id`, `origin_fingerprint`
- `request_depth` int not null default 0
- `billing_code` text null
- `assignee_adapter_overrides` jsonb null
- `execution_policy` jsonb null
- `execution_state` jsonb null
- execution workspace fields: `execution_workspace_id`, `execution_workspace_preference`, `execution_workspace_settings`
- `started_at` timestamptz null
- `completed_at` timestamptz null
- `cancelled_at` timestamptz null
- `hidden_at` timestamptz null
Invariants:
@@ -288,10 +261,10 @@ Invariant: each event must attach to agent and company; rollups are aggregation,
- `id` uuid pk
- `company_id` uuid fk not null
- `type` enum: `hire_agent | approve_ceo_strategy | budget_override_required | request_board_approval`
- `type` enum: `hire_agent | approve_ceo_strategy`
- `requested_by_agent_id` uuid fk `agents.id` null
- `requested_by_user_id` uuid fk `users.id` null
- `status` enum: `pending | revision_requested | approved | rejected | cancelled`
- `status` enum: `pending | approved | rejected | cancelled`
- `payload` jsonb not null
- `decision_note` text null
- `decided_by_user_id` uuid fk `users.id` null
@@ -390,15 +363,6 @@ Operational policy:
- `document_id` uuid fk not null
- `key` text not null (`plan`, `design`, `notes`, etc.)
## 7.16 Current Implementation Addenda
The current implementation includes additional V1-control-plane tables beyond the original February snapshot:
- Issue structure and review: `issue_relations` for blockers, `labels`/`issue_labels`, `issue_thread_interactions`, `issue_approvals`, `issue_execution_decisions`, `issue_work_products`, `issue_inbox_archives`, `issue_read_states`, and issue reference mention indexes.
- Execution and workspace control: `execution_workspaces`, `project_workspaces`, `workspace_runtime_services`, `workspace_operations`, `environments`, `environment_leases`, `agent_task_sessions`, `agent_runtime_state`, `agent_wakeup_requests`, heartbeat events, and watchdog decision tables.
- Plugins and routines: `plugins`, plugin config/state/entities/jobs/logs/webhooks, plugin database namespaces/migrations, plugin company settings, and `routines`.
- Access and operations: company memberships, instance roles, principal permission grants, invites, join requests, board API keys, CLI auth challenges, budget policies/incidents, feedback exports/votes, company skills, sidebar preferences, and company logos.
## 8. State Machines
## 8.1 Agent Status
@@ -431,15 +395,6 @@ Side effects:
- entering `done` sets `completed_at`
- entering `cancelled` sets `cancelled_at`
V1 non-terminal liveness rule:
- agent-owned `todo`, `in_progress`, `in_review`, and `blocked` issues must have a live execution path, an explicit waiting path, or an explicit recovery path
- `in_review` is healthy only when a typed execution participant, pending issue-thread interaction or approval, user owner, active run, queued wake, or explicit recovery issue owns the next action
- a blocked chain is covered only when each unresolved leaf issue is live or explicitly waiting
- when Paperclip cannot safely infer the next action, it surfaces the problem through visible blocked/recovery work instead of silently completing or reassigning work
Detailed ownership, execution, blocker, active-run watchdog, crash-recovery, and non-terminal liveness semantics are documented in `doc/execution-semantics.md`.
## 8.3 Approval Status
- `pending -> approved | rejected | cancelled`
@@ -527,7 +482,6 @@ All endpoints are under `/api` and return JSON.
- `DELETE /issues/:issueId/documents/:key`
- `POST /issues/:issueId/checkout`
- `POST /issues/:issueId/release`
- `POST /issues/:issueId/admin/force-release` (board-only lock recovery)
- `POST /issues/:issueId/comments`
- `GET /issues/:issueId/comments`
- `POST /companies/:companyId/issues/:issueId/attachments` (multipart upload)
@@ -552,8 +506,6 @@ Server behavior:
2. if updated row count is 0, return `409` with current owner/status
3. successful checkout sets `assignee_agent_id`, `status = in_progress`, and `started_at`
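A sketch of that compare-and-swap in Drizzle-flavored TypeScript; the table and column bindings (`issues`, `checkoutRunId`, the lock-free predicate) are illustrative, not the real server code:

```ts
// Sketch of the atomic checkout; table/column bindings are illustrative Drizzle shapes.
import { and, eq, isNull } from "drizzle-orm";

async function checkoutIssue(db: any, issues: any, issueId: string, agentId: string, runId: string) {
  const result = await db
    .update(issues)
    .set({
      assigneeAgentId: agentId,
      status: "in_progress",
      startedAt: new Date(),
      checkoutRunId: runId, // issue-ownership lock for this run
    })
    .where(and(eq(issues.id, issueId), isNull(issues.checkoutRunId)));

  // Zero updated rows means another run already holds the lock: answer 409.
  if ((result.rowCount ?? 0) === 0) return { status: 409 as const };
  return { status: 200 as const };
}
```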
`POST /issues/:issueId/admin/force-release` is an operator recovery endpoint for stale harness locks. It requires board access to the issue company, clears checkout and execution run lock fields, and may clear the agent assignee when `clearAssignee=true` is passed. The route must write an `issue.admin_force_release` activity log entry containing the previous checkout and execution run IDs.
## 10.5 Projects
- `GET /companies/:companyId/projects`
@@ -599,17 +551,6 @@ Dashboard payload must include:
- `422` semantic rule violation
- `500` server error
## 10.10 Current Implementation API Addenda
The current app also exposes V1-supporting surfaces for:
- issue thread interactions (`suggest_tasks`, `ask_user_questions`, `request_confirmation`)
- issue approvals, issue references/search, labels, read state, inbox/archive state, and work products
- execution workspaces, project workspaces, workspace runtime services, and workspace operations
- routines and scheduled/API/webhook triggers
- plugin installation, configuration, state, jobs, logs, webhooks, and plugin database namespace migration
- company import/export preview/apply, feedback export/vote routes, instance backup/config routes, invites, join requests, memberships, and permission grants
## 11. Heartbeat and Adapter Contract
## 11.1 Adapter Interface
@@ -676,7 +617,7 @@ Per-agent schedule fields in `adapter_config`:
- `enabled` boolean
- `intervalSec` integer (minimum 30)
- `maxConcurrentRuns` integer; new agents default to `20`; scheduler clamps configured values to `1..50`
- `maxConcurrentRuns` fixed at `1` for V1
Scheduler must skip invocation when:
@@ -785,14 +726,13 @@ Required UX behaviors:
- Node 20+
- `DATABASE_URL` optional
- if unset, auto-use embedded PostgreSQL under `~/.paperclip/instances/default/db`
- if unset, auto-use PGlite and push schema
## 15.2 Migrations
- Drizzle migrations are source of truth
- local/dev startup applies pending migrations automatically where supported
- `pnpm db:migrate` applies pending migrations manually
- no destructive migration in-place for V1 upgrade path
- provide migration script from existing minimal tables to company-scoped schema
## 15.3 Logging and Audit
@@ -847,8 +787,6 @@ A release candidate is blocked unless these pass:
## 18. Delivery Plan
Current implementation note: the milestones below describe the original V1 sequencing. Several systems originally framed as future work have since shipped or advanced materially, including issue documents/interactions, blockers, routines, execution workspaces, import/export portability, authenticated deployment modes, multi-user basics, and the local/self-hosted plugin runtime.
## Milestone 1: Company Core and Auth
- add `companies` and company scoping to existing entities
@@ -901,7 +839,7 @@ V1 is complete only when all criteria are true:
## 20. Post-V1 Backlog (Explicitly Deferred)
- cloud-grade plugin marketplace/distribution
- plugin architecture
- richer workflow-state customization per team
- milestones/labels/dependency graph depth beyond V1 minimum
- realtime transport optimization (SSE/WebSockets)

Binary files not shown (9 deleted images; sizes 174, 174, 177, 177, 140, 80, 137, 61, 29 KiB)

View File

@@ -1,406 +0,0 @@
# Execution Semantics
Status: Current implementation guide
Date: 2026-04-26
Audience: Product and engineering
This document explains how Paperclip interprets issue assignment, issue status, execution runs, wakeups, parent/sub-issue structure, and blocker relationships.
`doc/SPEC-implementation.md` remains the V1 contract. This document is the detailed execution model behind that contract.
## 1. Core Model
Paperclip separates four concepts that are easy to blur together:
1. structure: parent/sub-issue relationships
2. dependency: blocker relationships
3. ownership: who is responsible for the issue now
4. execution: whether the control plane currently has a live path to move the issue forward
The system works best when those are kept separate.
## 2. Assignee Semantics
An issue has at most one assignee.
- `assigneeAgentId` means the issue is owned by an agent
- `assigneeUserId` means the issue is owned by a human board user
- both cannot be set at the same time
This is a hard invariant. Paperclip is single-assignee by design.
## 3. Status Semantics
Paperclip issue statuses are not just UI labels. They imply different expectations about ownership and execution.
### `backlog`
The issue is not ready for active work.
- no execution expectation
- no pickup expectation
- safe resting state for future work
### `todo`
The issue is actionable but not actively claimed.
- it may be assigned or unassigned
- no checkout/execution lock is required yet
- for agent-assigned work, Paperclip may still need a wake path to ensure the assignee actually sees it
### `in_progress`
The issue is actively owned work.
- requires an assignee
- for agent-owned issues, this is a strict execution-backed state
- for user-owned issues, this is a human ownership state and is not backed by heartbeat execution
For agent-owned issues, `in_progress` should not be allowed to become a silent dead state.
### `blocked`
The issue cannot proceed until something external changes.
This is the right state for:
- waiting on another issue
- waiting on a human decision
- waiting on an external dependency or system when Paperclip does not own a scheduled re-check
- work that automatic recovery could not safely continue
### `in_review`
Execution work is paused because the next move belongs to a reviewer or approver, not the current executor.
An external review service can also be a valid review path when the issue keeps an agent assignee and has an active one-shot monitor that will wake that assignee to check the service later.
### `done`
The work is complete and terminal.
### `cancelled`
The work will not continue and is terminal.
## 4. Agent-Owned vs User-Owned Execution
The execution model differs depending on assignee type.
### Agent-owned issues
Agent-owned issues are part of the control plane's execution loop.
- Paperclip can wake the assignee
- Paperclip can track runs linked to the issue
- Paperclip can recover some lost execution state after crashes/restarts
### User-owned issues
User-owned issues are not executed by the heartbeat scheduler.
- Paperclip can track the ownership and status
- Paperclip cannot rely on heartbeat/run semantics to keep them moving
- stranded-work reconciliation does not apply to them
This is why `in_progress` can be strict for agents without forcing the same runtime rules onto human-held work.
## 5. Checkout and Active Execution
Checkout is the bridge from issue ownership to active agent execution.
- checkout is required to move an issue into agent-owned `in_progress`
- `checkoutRunId` represents issue-ownership lock for the current agent run
- `executionRunId` represents the currently active execution path for the issue
These are related but not identical:
- `checkoutRunId` answers who currently owns execution rights for the issue
- `executionRunId` answers which run is actually live right now
Paperclip already clears stale execution locks and can adopt some stale checkout locks when the original run is gone.
## 6. Parent/Sub-Issue vs Blockers
Paperclip uses two different relationships for different jobs.
### Parent/Sub-Issue (`parentId`)
This is structural.
Use it for:
- work breakdown
- rollup context
- explaining why a child issue exists
- waking the parent assignee when all direct children become terminal
Do not treat `parentId` as execution dependency by itself.
### Blockers (`blockedByIssueIds`)
This is dependency semantics.
Use it for:
- \"this issue cannot continue until that issue changes state\"
- explicit waiting relationships
- automatic wakeups when all blockers resolve
Blocked issues should stay idle while blockers remain unresolved. Paperclip should not create a queued heartbeat run for that issue until the final blocker is done and the `issue_blockers_resolved` wake can start real work.
If a parent is truly waiting on a child, model that with blockers. Do not rely on the parent/child relationship alone.
## 7. Non-Terminal Issue Liveness Contract
For agent-owned, non-terminal issues, Paperclip should never leave work in a state where nobody is responsible for the next move and nothing will wake or surface it.
This is a visibility contract, not an auto-completion contract. If Paperclip cannot safely infer the next action, it should surface the ambiguity with a blocked state, a visible comment, or an explicit recovery issue. It must not silently mark work done from prose comments or guess that a dependency is complete.
An issue is healthy when the product can answer "what moves this forward next?" without requiring a human to reconstruct intent from the whole thread. An issue is stalled when it is non-terminal but has no live execution path, no explicit waiting path, and no recovery path.
The valid action-path primitives are:
- an active run linked to the issue
- a queued wake or continuation that can be delivered to the responsible agent
- a typed execution-policy participant, such as `executionState.currentParticipant`
- a pending issue-thread interaction or linked approval that is waiting for a specific responder
- a one-shot issue monitor (`executionPolicy.monitor.nextCheckAt`) that will wake the assignee for a future check
- a human owner via `assigneeUserId`
- a first-class blocker chain whose unresolved leaf issues are themselves healthy
- an open explicit recovery issue that names the owner and action needed to restore liveness
### Agent-assigned `todo`
This is dispatch state: ready to start, not yet actively claimed.
A healthy dispatch state means at least one of these is true:
- the issue already has a queued wake path
- the issue is intentionally resting in `todo` after a completed agent heartbeat, with no interrupted dispatch evidence
- the issue has been explicitly surfaced as stranded through a visible blocked/recovery path
An assigned `todo` issue is stalled when dispatch was interrupted, no wake remains queued or running, and no recovery path has been opened.
### Agent-assigned `backlog`
This is parked state, not dispatch state.
Assigning an issue normally implies executable intent. When create APIs receive an assignee and no explicit status, Paperclip defaults the issue to `todo` so the assignee has a wake path instead of silently inheriting the unassigned `backlog` default.
An explicit assigned `backlog` issue remains valid when the creator is deliberately parking the work. It must not wake the assignee just because it has an assignee. Paperclip should make that choice visible in activity and UI so operators can distinguish intentional parking from a missed handoff.
An assigned `backlog` issue becomes a liveness problem when another issue is blocked on it and there is no explicit waiting path such as a human owner, active run, queued wake, pending interaction or approval, monitor, or open recovery issue. In that case the blocked parent should surface "blocked by parked work" rather than treating the dependency chain as healthy.
### Agent-assigned `in_progress`
This is active-work state.
A healthy active-work state means at least one of these is true:
- there is an active run for the issue
- there is already a queued continuation wake
- there is an active one-shot monitor that will wake the assignee for a future check
- there is an open explicit recovery issue for the lost execution path
An agent-owned `in_progress` issue is stalled when it has no active run, no queued continuation, and no explicit recovery surface. A still-running but silent process is not automatically stalled; it is handled by the active-run watchdog contract.
### `in_review`
This is review/approval state: execution is paused because the next move belongs to a reviewer, approver, board user, or recovery owner.
A healthy `in_review` issue has at least one valid action path:
- a typed execution-policy participant who can approve or request changes
- a pending issue-thread interaction or linked approval waiting for a named responder
- a human owner via `assigneeUserId`
- an active run or queued wake that is expected to process the review state
- an active one-shot monitor for an external service or async review loop that the assignee owns
- an open explicit recovery issue for an ambiguous review handoff
Agent-assigned `in_review` with no typed participant is only healthy when one of the other paths exists. Assignment to the same agent that produced the handoff is not, by itself, a review path.
An `in_review` issue is stalled when it has no typed participant, no pending interaction or approval, no user owner, no active monitor, no active run, no queued wake, and no explicit recovery issue. Paperclip should surface that state as recovery work rather than silently completing the issue or leaving blocker chains parked indefinitely.
### Issue monitors
An issue monitor is a one-shot deferred action path for agent-owned issues in `in_progress` or `in_review`.
Use a monitor when the current assignee owns a future check against an async system or external service. Examples include Greptile review loops, GitHub checks, Vercel deployments, or provider jobs where the agent should come back later and decide what happens next.
Monitor policy lives under `executionPolicy.monitor` and includes:
- `nextCheckAt`: when Paperclip should wake the assignee
- `notes`: non-secret instructions for what the assignee should check
- `serviceName`: optional non-secret external-service context
- `externalRef`: optional external-service reference input; Paperclip treats it as secret-adjacent, redacts it before persistence/visibility, and omits it from activity and wake payloads
- `timeoutAt`, `maxAttempts`, and `recoveryPolicy`: optional recovery hints for bounded waits
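For concreteness, a monitor policy using those fields might look like this (values illustrative):

```ts
// Illustrative executionPolicy.monitor value using the fields above.
const monitor = {
  nextCheckAt: "2026-04-27T15:00:00Z", // when Paperclip wakes the assignee
  notes: "Check whether the external review finished; re-arm if still pending.",
  serviceName: "greptile",
  externalRef: "review-1234",          // secret-adjacent: redacted before persistence/visibility
  timeoutAt: "2026-04-28T15:00:00Z",   // bounded wait
  maxAttempts: 6,
  recoveryPolicy: "wake_owner" as const,
};
```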
Monitors are not recurring intervals. When a monitor fires, Paperclip clears the scheduled monitor and queues an `issue_monitor_due` wake for the assignee. If the external service is still pending, the assignee must explicitly re-arm the monitor with a new `nextCheckAt`. If the issue moves to `done`, `cancelled`, an invalid status, or a human/unassigned owner, the monitor is cleared.
Because `serviceName` and `notes` remain visible in issue activity and wake context, operators should keep them short and non-secret. Put enough context for the assignee to know what to inspect, but do not include signed URLs, bearer tokens, customer secrets, tenant-private identifiers, or provider links with embedded credentials.
Monitor bounds are enforced. Paperclip rejects attempts to re-arm a monitor whose `timeoutAt` or `maxAttempts` is already exhausted. When a scheduled monitor reaches an exhausted bound at trigger time, Paperclip clears it and follows `recoveryPolicy`: `wake_owner` queues a bounded recovery wake for the assignee, `create_recovery_issue` opens visible recovery work, and `escalate_to_board` records a board-visible escalation comment/activity.
Use `blocked` instead of a monitor when no Paperclip assignee owns a responsible polling path. In that case, name the external owner/action or create first-class recovery/blocker work.
### `blocked`
This is explicit waiting state.
A healthy `blocked` issue has an explicit waiting path:
- first-class blockers exist, and each unresolved leaf has a valid action path under this contract
- the issue is blocked on an explicit recovery issue that itself has a live or waiting path
- the issue is waiting on a pending interaction, linked approval, human owner, or clearly named external owner/action
A blocker chain is covered only when its unresolved leaf is live or explicitly waiting. An intermediate `blocked` issue does not make the chain healthy by itself.
A `blocked` issue is stalled when the unresolved blocker leaf has no active run, queued wake, typed participant, pending interaction or approval, user owner, external owner/action, or recovery issue. In that case the parent should show the first stalled leaf instead of presenting the dependency as calmly covered.
## 8. Crash and Restart Recovery
Paperclip now treats crash/restart recovery as a stranded-assigned-work problem, not just a stranded-run problem.
There are two distinct failure modes.
### 8.1 Stranded assigned `todo`
Example:
- issue is assigned to an agent
- status is `todo`
- the original wake/run died during or after dispatch
- after restart there is no queued wake and nothing picks the issue back up
Recovery rule:
- if the latest issue-linked run failed/timed out/cancelled and no live execution path remains, Paperclip queues one automatic assignment recovery wake
- if that recovery wake also finishes and the issue is still stranded, Paperclip moves the issue to `blocked` and posts a visible comment
This is a dispatch recovery, not a continuation recovery.
### 8.2 Stranded assigned `in_progress`
Example:
- issue is assigned to an agent
- status is `in_progress`
- the live run disappeared
- after restart there is no active run and no queued continuation
Recovery rule:
- Paperclip queues one automatic continuation wake
- if that continuation wake also finishes and the issue is still stranded, Paperclip moves the issue to `blocked` and posts a visible comment
This is an active-work continuity recovery.
## 9. Startup and Periodic Reconciliation
Startup recovery and periodic recovery are different from normal wakeup delivery.
On startup and on the periodic recovery loop, Paperclip now does four things in sequence:
1. reap orphaned `running` runs
2. resume persisted `queued` runs
3. reconcile stranded assigned work
4. scan silent active runs and create or update explicit watchdog review issues
The stranded-work pass closes the gap where issue state survives a crash but the wake/run path does not. The silent-run scan covers the separate case where a live process exists but has stopped producing observable output.
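A sketch of that four-step sequence; the method names are hypothetical, not the real recovery service API:

```ts
// Hypothetical method names; the real recovery service API is not shown here.
async function reconcileOnStartup(recovery: {
  reapOrphanedRunningRuns(): Promise<void>;
  resumeQueuedRuns(): Promise<void>;
  reconcileStrandedAssignedWork(): Promise<void>;
  scanSilentActiveRuns(): Promise<void>;
}): Promise<void> {
  await recovery.reapOrphanedRunningRuns();       // 1. reap orphaned `running` runs
  await recovery.resumeQueuedRuns();              // 2. resume persisted `queued` runs
  await recovery.reconcileStrandedAssignedWork(); // 3. requeue or surface stranded assigned issues
  await recovery.scanSilentActiveRuns();          // 4. create/update watchdog review issues
}
```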
## 10. Silent Active-Run Watchdog
An active run can still be unhealthy even when its process is `running`. Paperclip treats prolonged output silence as a watchdog signal, not as proof that the run is failed.
The recovery service owns this contract:
- classify active-run output silence as `ok`, `suspicious`, `critical`, `snoozed`, or `not_applicable`
- collect bounded evidence from run logs, recent run events, child issues, and blockers
- preserve redaction and truncation before evidence is written to issue descriptions
- create at most one open `stale_active_run_evaluation` issue per run
- honor active snooze decisions before creating more review work
- build the `outputSilence` summary shown by live-run and active-run API responses
Suspicious silence creates a medium-priority review issue for the selected recovery owner. Critical silence raises that review issue to high priority and blocks the source issue on the explicit evaluation task without cancelling the active process.
Watchdog decisions are explicit operator/recovery-owner decisions:
- `snooze` records an operator-chosen future quiet-until time and suppresses scan-created review work during that window
- `continue` records that the current evidence is acceptable, does not cancel or mutate the active run, and sets a 30-minute default re-arm window before the watchdog evaluates the still-silent run again
- `dismissed_false_positive` records why the review was not actionable
Operators should prefer `snooze` for known time-bounded quiet periods. `continue` is only a short acknowledgement of the current evidence; if the run remains silent after the re-arm window, the periodic watchdog scan can create or update review work again.
The board can record watchdog decisions. The assigned owner of the watchdog evaluation issue can also record them. Other agents cannot.
## 11. Auto-Recover vs Explicit Recovery vs Human Escalation
Paperclip uses three different recovery outcomes, depending on how much it can safely infer.
### Auto-Recover
Auto-recovery is allowed when ownership is clear and the control plane only lost execution continuity.
Examples:
- requeue one dispatch wake for an assigned `todo` issue whose latest run failed, timed out, or was cancelled
- requeue one continuation wake for an assigned `in_progress` issue whose live execution path disappeared
- assign an orphan blocker back to its creator when that blocker is already preventing other work
Auto-recovery preserves the existing owner. It does not choose a replacement agent.
### Explicit Recovery Issue
Paperclip creates an explicit recovery issue when the system can identify a problem but cannot safely complete the work itself.
Examples:
- automatic stranded-work retry was already exhausted
- a dependency graph has an invalid/uninvokable owner, unassigned blocker, or invalid review participant
- an active run is silent past the watchdog threshold
The source issue remains visible and blocked on the recovery issue when blocking is necessary for correctness. The recovery owner must restore a live path, resolve the source issue manually, or record the reason it is a false positive.
Instance-level issue-graph liveness auto-recovery is disabled by default. When enabled, its lookback window means "dependency paths updated within the last N hours"; older findings remain advisory and are counted as outside the configured lookback instead of creating recovery issues automatically. This is an operator noise control, not the older staleness delay for determining whether a chain is old enough to surface.
### Human Escalation
Human escalation is required when the next safe action depends on board judgment, budget/approval policy, or information unavailable to the control plane.
Examples:
- all candidate recovery owners are paused, terminated, pending approval, or budget-blocked
- the issue is human-owned rather than agent-owned
- the run is intentionally quiet but needs an operator decision before cancellation or continuation
In these cases Paperclip should leave a visible issue/comment trail instead of silently retrying.
## 12. What This Does Not Mean
These semantics do not change V1 into an auto-reassignment system.
Paperclip still does not:
- automatically reassign work to a different agent
- infer dependency semantics from `parentId` alone
- treat human-held work as heartbeat-managed execution
The recovery model is intentionally conservative:
- preserve ownership
- retry once when the control plane lost execution continuity
- create explicit recovery work when the system can identify a bounded recovery owner/action
- escalate visibly when the system cannot safely keep going
## 13. Practical Interpretation
For a board operator, the intended meaning is:
- agent-owned `in_progress` should mean \"this is live work or clearly surfaced as a problem\"
- agent-owned `todo` should not stay assigned forever after a crash with no remaining wake path
- parent/sub-issue explains structure
- blockers explain waiting
That is the execution contract Paperclip should present to operators.

View File

@@ -22,7 +22,6 @@ The question is not "which memory project wins?" The question is "what is the sm
### Hosted memory APIs
- `mem0`
- `AWS Bedrock AgentCore Memory`
- `supermemory`
- `Memori`
@@ -50,7 +49,6 @@ These emphasize local persistence, inspectability, and low operational overhead.
|---|---|---|---|---|
| [nuggets](https://github.com/NeoVertex1/nuggets) | local memory engine + messaging gateway | topic-scoped HRR memory with `remember`, `recall`, `forget`, fact promotion into `MEMORY.md` | good example of lightweight local memory and automatic promotion | very specific architecture; not a general multi-tenant service |
| [mem0](https://github.com/mem0ai/mem0) | hosted + OSS SDK | `add`, `search`, `getAll`, `get`, `update`, `delete`, `deleteAll`; entity partitioning via `user_id`, `agent_id`, `run_id`, `app_id` | closest to a clean provider API with identities and metadata filters | provider owns extraction heavily; Paperclip should not assume every backend behaves like mem0 |
| [AWS Bedrock AgentCore Memory](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/memory.html) | AWS-managed memory service | explicit short-term and long-term memories, actor/session/event APIs, memory strategies, namespace templates, optional self-managed extraction pipeline | strong example of provider-managed memory with clear scoped ids, retention controls, and standalone API access outside a single agent framework | AWS-hosted and IAM-centric; Paperclip would still need its own company/run/comment provenance, cost rollups, and likely a plugin wrapper instead of baking AWS semantics into core |
| [MemOS](https://github.com/MemTensor/MemOS) | memory OS / framework | unified add-retrieve-edit-delete, memory cubes, multimodal memory, tool memory, async scheduler, feedback/correction | strong source for optional capabilities beyond plain search | much broader than the minimal contract Paperclip should standardize first |
| [supermemory](https://github.com/supermemoryai/supermemory) | hosted memory + context API | `add`, `profile`, `search.memories`, `search.documents`, document upload, settings; automatic profile building and forgetting | strong example of "context bundle" rather than raw search results | heavily productized around its own ontology and hosted flow |
| [memU](https://github.com/NevaMind-AI/memU) | proactive agent memory framework | file-system metaphor, proactive loop, intent prediction, always-on companion model | good source for when memory should trigger agent behavior, not just retrieval | proactive assistant framing is broader than Paperclip's task-centric control plane |
@@ -79,7 +77,6 @@ These differences are exactly why Paperclip needs a layered contract instead of
### 1. Who owns extraction?
- `mem0`, `supermemory`, and `Memori` expect the provider to infer memories from conversations.
- `AWS Bedrock AgentCore Memory` supports both provider-managed extraction and self-managed pipelines where the host writes curated long-term memory records.
- `memsearch` expects the host to decide what markdown to write, then indexes it.
- `MemOS`, `memU`, `EverMemOS`, and `OpenViking` sit somewhere in between and often expose richer memory construction pipelines.
@@ -107,7 +104,6 @@ Paperclip should make plain search the minimum contract and richer outputs optio
### 4. Is memory synchronous or asynchronous?
- local tools often work synchronously in-process.
- `AWS Bedrock AgentCore Memory` is synchronous at the API edge, but its long-term memory path includes background extraction/indexing behavior and retention policies managed by the provider.
- larger systems add schedulers, background indexing, compaction, or sync jobs.
Paperclip needs both direct request/response operations and background maintenance hooks.

View File

@@ -7,10 +7,10 @@ Define a Paperclip memory service and surface API that can sit above multiple me
- company scoping
- auditability
- provenance back to Paperclip work objects
- budget and cost visibility
- budget / cost visibility
- plugin-first extensibility
This plan is based on the external landscape summarized in `doc/memory-landscape.md`, the AWS AgentCore comparison captured in [PAP-1274](/PAP/issues/PAP-1274), and the current Paperclip architecture in:
This plan is based on the external landscape summarized in `doc/memory-landscape.md` and on the current Paperclip architecture in:
- `doc/SPEC-implementation.md`
- `doc/plugins/PLUGIN_SPEC.md`
@@ -19,26 +19,23 @@ This plan is based on the external landscape summarized in `doc/memory-landscape
## Recommendation In One Sentence
Paperclip should add a company-scoped memory control plane with company default plus agent override resolution, shared hook delivery, and full operation attribution, while leaving extraction and storage semantics to built-ins and plugins.
Paperclip should not embed one opinionated memory engine into core. It should add a company-scoped memory control plane with a small normalized adapter contract, then let built-ins and plugins implement the provider-specific behavior.
## Product Decisions
### 1. Memory resolution is company default plus agent override
### 1. Memory is company-scoped by default
Every memory binding belongs to exactly one company.
Resolution order in V1:
That binding can then be:
- company default binding
- optional per-agent override
There is no per-project override in V1.
Project context can still appear in scope and provenance so providers can use it for retrieval and partitioning, but projects do not participate in binding selection.
- the company default
- an agent override
- a project override later if we need it
No cross-company memory sharing in the initial design.
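A sketch of that resolution order, with field names mirroring the `memory_binding_targets` table suggested later in this plan (the function itself is illustrative):

```ts
// Sketch of V1 binding resolution: agent override first, then company default.
interface MemoryBindingTarget {
  targetType: "company" | "agent";
  targetId: string;
  bindingKey: string;
}

function resolveBindingKey(
  targets: MemoryBindingTarget[],
  companyId: string,
  agentId?: string,
): string | undefined {
  const agentOverride = agentId
    ? targets.find((t) => t.targetType === "agent" && t.targetId === agentId)
    : undefined;
  const companyDefault = targets.find((t) => t.targetType === "company" && t.targetId === companyId);
  // Projects do not participate in binding selection in V1.
  return (agentOverride ?? companyDefault)?.bindingKey;
}
```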
### 2. Providers are selected by stable binding key
### 2. Providers are selected by key
Each configured memory provider gets a stable key inside a company, for example:
@@ -47,53 +44,36 @@ Each configured memory provider gets a stable key inside a company, for example:
- `local-markdown`
- `research-kb`
Agents, tools, and background hooks resolve the active provider by key, not by hard-coded vendor logic.
Agents and services resolve the active provider by key, not by hard-coded vendor logic.
### 3. Plugins are the primary provider path
Built-ins are useful for a zero-config local path, but most providers should arrive through the existing Paperclip plugin runtime.
That keeps the core small and matches the broader Paperclip direction that specialized knowledge systems live at the edges.
That keeps the core small and matches the current direction that optional knowledge-like systems live at the edges.
### 4. Paperclip owns routing, provenance, and policy
### 4. Paperclip owns routing, provenance, and accounting
Providers should not decide how Paperclip entities map to governance.
Paperclip core should own:
- binding resolution
- who is allowed to call a memory operation
- which company, agent, issue, project, run, and subject scope is active
- what source object the operation belongs to
- how usage and costs are attributed
- how operators inspect what happened
- which company / agent / project scope is active
- what issue / run / comment / document the operation belongs to
- how usage gets recorded
### 5. Paperclip exposes shared hooks, providers own extraction
Paperclip should emit a common set of memory hooks that built-ins, third-party adapters, and plugins can all use.
Those hooks should pass structured Paperclip source objects plus normalized metadata. The provider then decides how to extract from those objects.
Paperclip should not force one extraction pipeline or one canonical "memory text" transform before the provider sees the input.
### 6. Automatic memory should start narrow, but the hook surface should be general
### 5. Automatic memory should be narrow at first
Automatic capture is useful, but broad silent capture is dangerous.
Initial built-in automatic hooks should be:
Initial automatic hooks should be:
- pre-run hydrate for agent context recall
- post-run capture from agent runs
- optional issue comment capture
- optional issue document capture
- issue comment / document capture when the binding enables it
- pre-run recall for agent context hydration
The hook registry itself should be general enough that other providers can subscribe to the same events without core changes.
### 7. No approval gate for binding changes in the open-source product
For the open-source version, changing memory bindings should not require approvals.
Paperclip should still log those changes in activity and preserve full auditability. Approval-gated memory governance can remain an enterprise or future policy layer.
Everything else should start explicit.
## Proposed Concepts
@@ -103,7 +83,7 @@ A built-in or plugin-supplied implementation that stores and retrieves memory.
Examples:
- local markdown plus semantic index
- local markdown + vector index
- mem0 adapter
- supermemory adapter
- MemOS adapter
@@ -114,15 +94,6 @@ A company-scoped configuration record that points to a provider and carries prov
This is the object selected by key.
### Memory binding target
A mapping from a Paperclip target to a binding.
V1 targets:
- `company`
- `agent`
### Memory scope
The normalized Paperclip scope passed into a provider request.
@@ -134,9 +105,7 @@ At minimum:
- optional `projectId`
- optional `issueId`
- optional `runId`
- optional `subjectId` for external or user identity
- optional `sessionKey` for providers that organize memory around sessions
- optional `namespace` for providers that need an explicit partition hint
- optional `subjectId` for external/user identity
### Memory source reference
@@ -152,36 +121,24 @@ Supported source kinds should include:
- `manual_note`
- `external_document`
### Memory hook
A normalized trigger emitted by Paperclip when something memory-relevant happens.
Initial hook kinds:
- `pre_run_hydrate`
- `post_run_capture`
- `issue_comment_capture`
- `issue_document_capture`
- `manual_capture`
### Memory operation
A normalized capture, record-write, query, browse, get, correction, or delete action performed through Paperclip.
A normalized write, query, browse, or delete action performed through Paperclip.
Paperclip should log every memory operation whether the provider is local, plugin-backed, or external.
Paperclip should log every operation, whether the provider is local or external.
## Required Adapter Contract
The required core should be small enough to fit `memsearch`, `mem0`, `Memori`, `MemOS`, or `OpenViking`, but strong enough to satisfy Paperclip's attribution and inspectability requirements.
The required core should be small enough to fit `memsearch`, `mem0`, `Memori`, `MemOS`, or `OpenViking`.
```ts
export interface MemoryAdapterCapabilities {
profile?: boolean;
browse?: boolean;
correction?: boolean;
asyncIngestion?: boolean;
multimodal?: boolean;
providerManagedExtraction?: boolean;
asyncExtraction?: boolean;
providerNativeBrowse?: boolean;
}
export interface MemoryScope {
@@ -191,8 +148,6 @@ export interface MemoryScope {
issueId?: string;
runId?: string;
subjectId?: string;
sessionKey?: string;
namespace?: string;
}
export interface MemorySourceRef {
@@ -213,34 +168,10 @@ export interface MemorySourceRef {
externalRef?: string;
}
export interface MemoryHookContext {
hookKind:
| "pre_run_hydrate"
| "post_run_capture"
| "issue_comment_capture"
| "issue_document_capture"
| "manual_capture";
hookId: string;
triggeredAt: string;
actorAgentId?: string;
heartbeatRunId?: string;
}
export interface MemorySourcePayload {
text?: string;
mimeType?: string;
metadata?: Record<string, unknown>;
object?: Record<string, unknown>;
}
export interface MemoryUsage {
provider: string;
biller?: string;
model?: string;
billingType?: "metered_api" | "subscription_included" | "subscription_overage" | "unknown";
attributionMode?: "billed_directly" | "included_in_run" | "external_invoice" | "untracked";
inputTokens?: number;
cachedInputTokens?: number;
outputTokens?: number;
embeddingTokens?: number;
costCents?: number;
@@ -248,30 +179,18 @@ export interface MemoryUsage {
details?: Record<string, unknown>;
}
export interface MemoryRecordHandle {
providerKey: string;
providerRecordId: string;
}
export interface MemoryCaptureRequest {
export interface MemoryWriteRequest {
bindingKey: string;
scope: MemoryScope;
source: MemorySourceRef;
payload: MemorySourcePayload;
hook?: MemoryHookContext;
mode?: "capture_residue" | "capture_record";
content: string;
metadata?: Record<string, unknown>;
mode?: "append" | "upsert" | "summarize";
}
export interface MemoryRecordWriteRequest {
bindingKey: string;
scope: MemoryScope;
source?: MemorySourceRef;
records: Array<{
text: string;
summary?: string;
metadata?: Record<string, unknown>;
}>;
export interface MemoryRecordHandle {
providerKey: string;
providerRecordId: string;
}
export interface MemoryQueryRequest {
@@ -283,14 +202,6 @@ export interface MemoryQueryRequest {
metadataFilter?: Record<string, unknown>;
}
export interface MemoryListRequest {
bindingKey: string;
scope: MemoryScope;
cursor?: string;
limit?: number;
metadataFilter?: Record<string, unknown>;
}
export interface MemorySnippet {
handle: MemoryRecordHandle;
text: string;
@@ -306,149 +217,30 @@ export interface MemoryContextBundle {
usage?: MemoryUsage[];
}
export interface MemoryListPage {
items: MemorySnippet[];
nextCursor?: string;
usage?: MemoryUsage[];
}
export interface MemoryExtractionJob {
providerJobId: string;
status: "queued" | "running" | "succeeded" | "failed" | "cancelled";
hookKind?: MemoryHookContext["hookKind"];
source?: MemorySourceRef;
error?: string;
submittedAt?: string;
startedAt?: string;
finishedAt?: string;
}
export interface MemoryAdapter {
key: string;
capabilities: MemoryAdapterCapabilities;
capture(req: MemoryCaptureRequest): Promise<{
records?: MemoryRecordHandle[];
jobs?: MemoryExtractionJob[];
usage?: MemoryUsage[];
}>;
upsertRecords(req: MemoryRecordWriteRequest): Promise<{
write(req: MemoryWriteRequest): Promise<{
records?: MemoryRecordHandle[];
usage?: MemoryUsage[];
}>;
query(req: MemoryQueryRequest): Promise<MemoryContextBundle>;
list(req: MemoryListRequest): Promise<MemoryListPage>;
get(handle: MemoryRecordHandle, scope: MemoryScope): Promise<MemorySnippet | null>;
forget(handles: MemoryRecordHandle[], scope: MemoryScope): Promise<{ usage?: MemoryUsage[] }>;
}
```
This contract intentionally does not force a provider to expose its internal graph, file tree, or ontology. It does require enough structure for Paperclip to browse, attribute, and audit what happened.
This contract intentionally does not force a provider to expose its internal graph, filesystem, or ontology.
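A sketch of core driving the required surface for the two run hooks. `MemoryAdapter`, `MemoryScope`, and the capture shape come from the contract above; the `intent` and `topK` query fields follow this doc's hydration notes, while the source-ref fields, hook id, and `snippets` access are assumptions:

```ts
// Sketch only: query fields and source-ref fields are assumptions over the contract above.
async function hydrateAndCapture(adapter: MemoryAdapter, bindingKey: string, scope: MemoryScope) {
  // pre_run_hydrate: pull a small context bundle before the run starts.
  const bundle = await adapter.query({ bindingKey, scope, intent: "agent_preamble", topK: 5 });

  // ... run the agent with bundle.snippets in the preamble ...

  // post_run_capture: hand structured run residue back to the provider.
  await adapter.capture({
    bindingKey,
    scope,
    source: { kind: "agent_run", id: scope.runId }, // assumed source-ref fields
    payload: { text: "Run summary text", metadata: { outcome: "done" } },
    hook: { hookKind: "post_run_capture", hookId: "hook-1", triggeredAt: new Date().toISOString() },
    mode: "capture_residue",
  });
}
```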
## Optional Adapter Surfaces
These should be capability-gated, not required:
- `browse(scope, filters)` for file-system / graph / timeline inspection
- `correct(handle, patch)` for natural-language correction flows
- `profile(scope)` when the provider can synthesize stable preferences or summaries
- `listExtractionJobs(scope, cursor)` when async extraction needs richer operator visibility
- `retryExtractionJob(jobId)` when a provider supports re-drive
- `sync(source)` for connectors or background ingestion
- `explain(queryResult)` for providers that can expose retrieval traces
- provider-native browse or graph surfaces exposed through plugin UI
## Lessons From AWS AgentCore Memory API
AWS AgentCore Memory is a useful check on whether this plan is too abstract or missing important operational surfaces.
The broad direction still looks right:
- AWS splits memory into a control plane (`CreateMemory`, `UpdateMemory`, `ListMemories`) and a data plane (`CreateEvent`, `RetrieveMemoryRecords`, `GetMemoryRecord`, `ListMemoryRecords`)
- AWS separates raw interaction capture from curated long-term memory records
- AWS supports both provider-managed extraction and self-managed pipelines
- AWS treats browse and list operations as first-class APIs, not ad hoc debugging helpers
- AWS exposes extraction jobs instead of hiding asynchronous maintenance completely
That lines up with the Paperclip plan at a high level: provider configuration, scoped writes, scoped retrieval, provider-managed extraction as a capability, and a browse and inspect surface.
The concrete changes Paperclip should take from AWS are:
### 1. Keep config APIs separate from runtime traffic
The rollout should preserve a clean separation between:
- control-plane APIs for binding CRUD, defaults, overrides, and capability metadata
- runtime APIs and tools for capture, record writes, query, list, get, forget, and extraction status
This keeps governance changes distinct from high-volume memory traffic.
### 2. Distinguish capture from curated record writes
AWS does not flatten everything into one write primitive. It distinguishes captured events from durable memory records.
Paperclip should do the same:
- `capture(...)` for raw run, comment, document, or activity residue
- `upsertRecords(...)` for curated durable facts and notes
That is a better fit for provider-managed extraction and for manual curation flows.
### 3. Make list and browse first-class
AWS exposes list and retrieve surfaces directly. Paperclip should not make browse optional at the portable layer.
The minimum portable surface should include:
- `query`
- `list`
- `get`
Provider-native graph or file browsing can remain optional beyond that.
### 4. Add pagination and cursors for operator inspection
AWS consistently uses pagination on browse-heavy APIs.
Paperclip should add cursor-based pagination to:
- record listing
- extraction job listing
- memory operation explorer APIs
Prompt hydration can continue to use `topK`, but operator surfaces need cursors.
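A sketch of cursor-driven draining over the portable `list` surface; the request and page fields come from the contract above, and the loop shape is an assumption:

```ts
// Drain an operator-facing record listing with cursors instead of topK.
async function listAllRecords(
  adapter: MemoryAdapter,
  bindingKey: string,
  scope: MemoryScope,
): Promise<MemorySnippet[]> {
  const items: MemorySnippet[] = [];
  let cursor: string | undefined;
  do {
    const page: MemoryListPage = await adapter.list({ bindingKey, scope, cursor, limit: 100 });
    items.push(...page.items);
    cursor = page.nextCursor; // undefined once the provider has no more pages
  } while (cursor);
  return items;
}
```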
### 5. Add explicit session and namespace hints
AWS uses `actorId`, `sessionId`, `namespace`, and `memoryStrategyId` heavily.
Paperclip should keep its own control-plane-centric model, but the adapter contract needs obvious places to map those concepts:
- `sessionKey`
- `namespace`
The provider adapter can map them to AWS or other vendor-specific identifiers without leaking those identifiers into core.
### 6. Treat asynchronous extraction as a real operational surface
AWS exposes extraction jobs explicitly. Paperclip should too.
Operators should be able to see:
- pending extraction work
- failed extraction work
- which hook or source caused the work
- whether a retry is available
### 7. Keep Paperclip provenance primary
Paperclip should continue to center:
- `companyId`
- `agentId`
- `projectId`
- `issueId`
- `runId`
- issue comments, documents, and activity as sources
The lesson from AWS is to support clean mapping into provider-specific models, not to let provider identifiers take over the core product model.
## What Paperclip Should Persist
@@ -456,67 +248,39 @@ Paperclip should not mirror the full provider memory corpus into Postgres unless
Paperclip core should persist:
- memory bindings
- company default and agent override resolution targets
- memory bindings and overrides
- provider keys and capability metadata
- normalized memory operation logs
- source references back to issue comments, documents, runs, and activity
- provider record handles returned by operations when available
- hook delivery records and extraction job state
- usage and cost attribution
- source references back to issue comments, documents, runs, and activity
- usage and cost data
For external providers, the actual memory payload can remain in the provider.
For external providers, the memory payload itself can remain in the provider.
## Hook Model
### Shared hook surface
Paperclip should expose one shared hook system for memory.
That same system must be available to:
- built-in memory providers
- plugin-based memory providers
- third-party adapter integrations that want to use memory hooks
### What a hook delivers
Each hook delivery should include:
- resolved binding key
- normalized `MemoryScope`
- `MemorySourceRef`
- structured source payload
- hook metadata such as hook kind, trigger time, and related run id
The payload should include structured objects where possible so the provider can decide how to extract and chunk.
### Initial automatic hooks
### Automatic hooks
These should be low-risk and easy to reason about:
1. `pre_run_hydrate`
1. `pre-run hydrate`
Before an agent run starts, Paperclip may call `query(... intent = "agent_preamble")` using the active binding.
2. `post_run_capture`
After a run finishes, Paperclip may call `capture(...)` with structured run output, excerpts, and provenance.
2. `post-run capture`
After a run finishes, Paperclip may write a summary or transcript-derived note tied to the run.
3. `issue_comment_capture`
When enabled on the binding, Paperclip may call `capture(...)` for selected issue comments.
3. `issue comment / document capture`
When enabled on the binding, Paperclip may capture selected issue comments or issue documents as memory sources.
4. `issue_document_capture`
When enabled on the binding, Paperclip may call `capture(...)` for selected issue documents.
### Explicit hooks
### Explicit tools and APIs
These should be tool-driven or UI-driven first:
These should be tool- or UI-driven first:
- `memory.search`
- `memory.note`
- `memory.forget`
- `memory.correct`
- memory record list and get
- extraction-job inspection
- `memory.browse`
### Not automatic in the first version
@@ -545,69 +309,34 @@ The initial browse surface should support:
- active binding by company and agent
- recent memory operations
- recent write and capture sources
- record list and record detail with source backlinks
- recent write sources
- query results with source backlinks
- extraction job status
- filters by agent, issue, project, run, source kind, and date
- provider usage, cost, and latency summaries
- filters by agent, issue, run, source kind, and date
- provider usage / cost / latency summaries
When a provider supports richer browsing, the plugin can add deeper views through the existing plugin UI surfaces.
## Cost And Evaluation
Paperclip should treat memory accounting as two related but distinct concerns:
Every adapter response should be able to return usage records.
### 1. `memory_operations` is the authoritative audit trail
Paperclip should roll up:
Every memory action should create a normalized operation record that captures:
- binding
- scope
- source provenance
- operation type
- success or failure
- memory inference tokens
- embedding tokens
- external provider cost
- latency
- usage details reported by the provider
- attribution mode
- related run, issue, and agent when available
- query count
- write count
This is where operators answer "what memory work happened and why?"
### 2. `cost_events` remains the canonical spend ledger for billable metered usage
The current `cost_events` model is already the canonical cost ledger for token and model spend, and `agent_runtime_state` plus `heartbeat_runs.usageJson` already roll up and summarize run usage.
The recommendation is:
- if a memory operation runs inside a normal Paperclip agent heartbeat and the model usage is already counted on that run, do not create a duplicate `cost_event`
- instead, store the memory operation with `attributionMode = "included_in_run"` and link it to the related `heartbeatRunId`
- if a memory provider makes a direct metered model call outside the agent run accounting path, the provider must report usage and Paperclip should create a `cost_event`
- that direct `cost_event` should still link back to the memory operation, agent, company, and issue or run context when possible
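A minimal sketch of that attribution rule, using hypothetical input and output shapes:
```ts
// Hypothetical shapes; the real models live in the control plane.
type MemoryUsage = { totalTokens: number; externalCostUsd?: number };

type AttributionResult =
  | { attributionMode: "included_in_run"; heartbeatRunId: string }
  | {
      attributionMode: "direct";
      costEvent: { costUsd: number; memoryOperationId: string };
    };

function attributeMemoryUsage(input: {
  operationId: string;
  usage: MemoryUsage;
  heartbeatRunId?: string; // set when the call ran inside a normal agent heartbeat
}): AttributionResult {
  if (input.heartbeatRunId) {
    // Usage is already counted on the run; do not create a duplicate cost_event.
    return { attributionMode: "included_in_run", heartbeatRunId: input.heartbeatRunId };
  }
  // Direct metered provider call: create a cost_event linked back to the operation.
  return {
    attributionMode: "direct",
    costEvent: {
      costUsd: input.usage.externalCostUsd ?? 0,
      memoryOperationId: input.operationId,
    },
  };
}
```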
### 3. `finance_events` should carry flat subscription or invoice-style costs
If a memory service incurs:
- monthly subscription cost
- storage invoices
- provider platform charges not tied to one request
those should be represented as `finance_events`, not as synthetic per-query memory operations.
That keeps usage telemetry separate from accounting entries like invoices and flat fees.
### 4. Evaluation metrics still matter
Paperclip should record evaluation-oriented metrics where possible:
- recall hit rate
- empty query rate
- manual correction count
- extraction failure count
- per-binding success and failure counts
This is important because a memory system that "works" but silently burns budget or silently fails extraction is not acceptable in Paperclip.
## Suggested Data Model Additions
@@ -615,36 +344,23 @@ At the control-plane level, the likely new core tables are:
- `memory_bindings`
- company-scoped key
- provider id or plugin id
- config blob
- enabled status
- `memory_binding_targets`
- target type (`company`, `agent`, later `project`)
- target id
- binding id
- `memory_operations`
- company id
- binding id
- operation type (`capture`, `record_upsert`, `query`, `list`, `get`, `forget`, `correct`)
- scope fields
- source refs
- usage, latency, and attribution mode
- related heartbeat run id
- related cost event id
- success or error
- `memory_extraction_jobs`
- company id
- binding id
- operation id
- provider job id
- hook kind
- status
- source refs
- error
- submitted, started, and finished timestamps
Provider-specific long-form state should stay in plugin state or the provider itself unless a built-in local provider needs its own schema.
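For orientation only, the field lists above could translate into row shapes along these lines (a sketch, not a migration):
```ts
// Sketch only: column names are assumptions derived from the lists above.
type MemoryBindingRow = {
  id: string;
  companyId: string;
  key: string; // company-scoped key
  providerId: string; // provider id or plugin id
  config: unknown; // config blob
  enabled: boolean;
};

type MemoryOperationRow = {
  id: string;
  companyId: string;
  bindingId: string;
  operationType:
    | "capture"
    | "record_upsert"
    | "query"
    | "list"
    | "get"
    | "forget"
    | "correct";
  scope: Record<string, string>; // scope fields
  sourceRefs: unknown[]; // source refs
  usage: unknown | null; // usage and latency details
  attributionMode: "included_in_run" | "direct" | null;
  heartbeatRunId: string | null; // related heartbeat run id
  costEventId: string | null; // related cost event id
  error: string | null; // success or error
};
```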
@@ -666,46 +382,45 @@ The design should still treat that built-in as just another provider behind the
### Phase 1: Control-plane contract
- add memory binding models and API types
- add company default plus agent override resolution
- add plugin capability and registration surface for memory providers
- add operation logging and usage reporting
### Phase 2: Hook delivery and operation audit
- add shared memory hook emission in core
- add operation logging, extraction job state, and usage attribution
- add direct-provider cost and finance-event linkage rules
### Phase 3: One built-in plus one plugin example
- ship a local markdown-first provider
- ship one hosted adapter example to validate the external-provider path
### Phase 4: UI inspection
- add company and agent memory settings
- add a memory operation explorer
- add record list and detail surfaces
- add source backlinks to issues and runs
### Phase 5: Rich capabilities
- correction flows
- provider-native browse or graph views
- project-level overrides if needed
- evaluation dashboards
- retention and quota controls
## Remaining Open Questions
- Which built-in local provider should ship first: pure markdown, markdown plus embeddings, or a lightweight local vector store?
- How much source payload should Paperclip snapshot inside `memory_operations` for debugging without duplicating large transcripts?
- Should correction flows mutate provider state directly, create superseding records, or both depending on provider capability?
- What default retention and size limits should the local built-in enforce?
- Should project overrides exist in V1 of the memory service, or should we force company default + agent override first?
- Do we want Paperclip-managed extraction pipelines at all, or should built-ins be the only place where Paperclip owns extraction?
- Should memory usage extend the current `cost_events` model directly, or should memory operations keep a parallel usage log and roll up into `cost_events` secondarily?
- Do we want provider install / binding changes to require approvals for some companies?
## Bottom Line
The right abstraction is:
- Paperclip owns bindings, resolution, hooks, provenance, policy, and attribution.
- Providers own extraction, ranking, storage, and provider-native memory semantics.
That gives Paperclip a stable memory service without locking the product to one memory philosophy or one vendor, and it integrates the AWS lessons without importing AWS's model into core.

View File

@@ -1,382 +0,0 @@
# VS Code Task Interoperability Plan
Status: planning only, no code changes
Date: 2026-04-12
Related issue: `PAP-1377`
## Summary
Paperclip should not replace its workspace runtime service model with VS Code tasks.
It should add a narrow interoperability layer that can discover and adopt supported entries from `.vscode/tasks.json`.
The core product model should stay:
- Paperclip owns long-running workspace services and their desired state
- Paperclip shows operators exactly which named thing they are starting or stopping
- Paperclip distinguishes long-running services from one-shot jobs
VS Code tasks should be treated as:
- an import/discovery format for workspace commands
- a convenience for repos that already maintain `tasks.json`
- a partial compatibility layer, not a full execution model
## Current State
The current implementation is already service-oriented:
- project workspaces and execution workspaces can store `workspaceRuntime` config plus `desiredState` and per-service `serviceStates`
- the UI renders one control row per configured service and persists start/stop intent
- the backend supervises long-running local processes, reuses eligible services, and restores desired services on startup
Relevant files:
- `packages/shared/src/types/workspace-runtime.ts`
- `server/src/services/workspace-runtime.ts`
- `server/src/services/project-workspace-runtime-config.ts`
- `ui/src/components/WorkspaceRuntimeControls.tsx`
- `ui/src/pages/ProjectWorkspaceDetail.tsx`
- `ui/src/pages/ExecutionWorkspaceDetail.tsx`
This is directionally correct for Paperclip because it gives the control plane an explicit model for service lifecycle, health, reuse, and restart behavior.
## Problem To Solve
The current UX is still too raw:
- operators have to hand-author runtime JSON
- a workspace can have multiple attached services, but the higher-level intent is not obvious
- start/stop controls are visible in multiple places, which makes it easy to lose track of what is being controlled
- there is no interoperability with repos that already define useful local workflows in `.vscode/tasks.json`
The issue is not that services are the wrong abstraction.
The issue is that the configuration surface is too low-level and Paperclip does not yet leverage existing workspace metadata.
## Recommendation
Keep Paperclip runtime services as the source of truth for service supervision.
Add a new workspace command model above the raw JSON layer, with VS Code task discovery as one input.
The product model should become:
1. `Workspace command`
A named runnable thing attached to a workspace.
2. `Workspace service`
A workspace command that is expected to stay alive and be supervised.
3. `Workspace job`
A workspace command that runs once and exits.
4. `Runtime service instance`
The live process record that already exists today in Paperclip.
In that model, VS Code tasks are a way to populate workspace commands.
Only commands that map cleanly to Paperclip service or job semantics should become runnable in Paperclip.
## Why Not Fully Adopt VS Code Tasks
VS Code tasks are broader than Paperclip runtime services.
They include shell/process tasks, compound tasks, background/watch tasks, presentation settings, extension/task-provider types, variable substitution, and problem-matcher-driven lifecycle.
That creates a bad fit if Paperclip tries to use `tasks.json` as its only runtime model:
- many tasks are one-shot jobs, not long-running services
- some tasks depend on VS Code task providers or editor-only variable resolution
- compound task graphs are useful, but they are not the same thing as a supervised service
- problem matcher readiness is useful metadata, but it is not enough to replace Paperclip's persisted service lifecycle model
The right boundary is interoperability, not replacement.
## Interoperability Contract
Paperclip should support a conservative subset of VS Code tasks and clearly mark unsupported entries.
### Supported in phase 1
- `shell` and `process` tasks with a concrete command Paperclip can resolve
- optional task `options.cwd`
- optional task environment values that can be flattened safely
- task labels and detail text for naming and display
- `dependsOn` for import-time expansion or display-only dependency hints
- background/watch-oriented tasks that can reasonably be treated as long-running services
### Maybe supported in later phases
- grouping and default task metadata for better UX
- selected variable substitution when Paperclip can resolve it safely from workspace context
- mapping task metadata into Paperclip readiness/expose hints
- limited compound-task launch flows
### Not supported initially
- extension-provided task types Paperclip cannot execute directly
- arbitrary VS Code variable substitution semantics
- problem matcher parsing as the main source of service health
- full parity with VS Code task execution behavior
## Long-Running Service Detection
Paperclip needs an explicit classification layer instead of assuming every VS Code task is a service.
Recommended classification:
- `service`
Explicitly marked by Paperclip metadata, or confidently inferred from background/watch task semantics
- `job`
One-shot command expected to exit
- `unsupported`
Present in `tasks.json`, but not safely runnable by Paperclip
The important product decision is that service classification must be visible and editable by the operator.
Inference can help, but it should not be the only source of truth.
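A sketch of that classification pass over parsed `tasks.json` entries; the `paperclip` metadata field is the proposed namespaced extension shown later in this plan, not an existing VS Code feature:
```ts
type VsCodeTask = {
  label?: string;
  type?: string; // "shell" | "process" | extension-provided types
  command?: string;
  isBackground?: boolean; // VS Code's background/watch marker
  paperclip?: { kind?: "service" | "job" }; // hypothetical namespaced metadata
};

type TaskClassification = "service" | "job" | "unsupported";

function classifyTask(task: VsCodeTask): TaskClassification {
  // Only shell/process tasks with a concrete command are runnable by Paperclip.
  const runnable =
    (task.type === "shell" || task.type === "process") &&
    typeof task.command === "string" &&
    task.command.length > 0;
  if (!runnable) return "unsupported";

  // Explicit Paperclip metadata wins over inference.
  if (task.paperclip?.kind) return task.paperclip.kind;

  // Background/watch tasks are service candidates; the operator still confirms.
  return task.isBackground ? "service" : "job";
}
```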
## Proposed Product Shape
### 1. Replace raw-first editing with command-first editing
Project and execution workspace pages should stop making raw runtime JSON the primary editing surface.
Default UI should show:
- workspace commands
- command type: service or job
- source: Paperclip or VS Code
- exact command and cwd
- current state for services
- explicit start, stop, restart, and run-now actions
Raw JSON should remain available behind an advanced section.
### 2. Add VS Code task discovery on workspaces
For a workspace with `cwd`, Paperclip should look for `.vscode/tasks.json`.
The workspace UI should show:
- whether a `tasks.json` file was found
- last parse time
- supported commands discovered
- unsupported tasks with reasons
- whether commands are inherited into execution workspaces
### 3. Make the controlled thing explicit
Start and stop UI should always name the exact entry being controlled.
Examples:
- `Start web`
- `Stop api`
- `Run db:migrate`
Avoid generic workspace-level labels when multiple commands exist.
### 4. Separate services from jobs in the UI
Do not mix one-shot jobs and long-running services into one undifferentiated list.
Recommended sections:
- `Services`
- `Jobs`
- `Unsupported imported tasks`
That resolves the ambiguity called out in the issue.
## Data Model Direction
Do not replace `workspaceRuntime` immediately.
Instead add a higher-level representation that can compile down to the existing runtime-service machinery.
Suggested workspace metadata shape:
```ts
type WorkspaceCommandSource =
| { type: "paperclip" }
| { type: "vscode_task"; taskLabel: string; taskPath: ".vscode/tasks.json" };
type WorkspaceCommandKind = "service" | "job";
type WorkspaceCommandDefinition = {
id: string;
name: string;
kind: WorkspaceCommandKind;
source: WorkspaceCommandSource;
command: string | null;
cwd: string | null;
env?: Record<string, string> | null;
autoStart?: boolean;
serviceConfig?: {
lifecycle?: "shared" | "ephemeral";
reuseScope?: "project_workspace" | "execution_workspace" | "run";
readiness?: Record<string, unknown> | null;
expose?: Record<string, unknown> | null;
} | null;
importWarnings?: string[];
disabledReason?: string | null;
};
```
`workspaceRuntime` can then become a derived or advanced representation for service-type commands until the rest of the system is migrated.
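A sketch of that compile-down step, assuming a hypothetical runtime-service shape that mirrors today's `workspaceRuntime` entries:
```ts
// Hypothetical target shape; the real one lives in workspace-runtime.ts.
type RuntimeServiceConfig = {
  name: string;
  command: string;
  cwd: string | null;
  env: Record<string, string>;
  autoStart: boolean;
};

function toRuntimeServices(
  commands: WorkspaceCommandDefinition[],
): RuntimeServiceConfig[] {
  return commands
    .filter((c) => c.kind === "service" && c.command && !c.disabledReason)
    .map((c) => ({
      name: c.name,
      command: c.command as string,
      cwd: c.cwd,
      env: c.env ?? {},
      autoStart: c.autoStart ?? false,
    }));
}
```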
## VS Code Mapping Rules
Paperclip should map imported tasks with explicit, documented rules.
Recommended rules:
1. A task becomes a `job` by default.
2. A task becomes a `service` only when:
- Paperclip metadata marks it as a service, or
- the task clearly represents a background/watch process and the operator confirms the classification.
3. Unsupported tasks stay visible but disabled.
4. Task labels become default command names.
5. `dependsOn` is preserved as metadata, not silently flattened into hidden behavior.
Paperclip-specific metadata can live in a namespaced field on the imported task definition, for example:
```json
{
"label": "web",
"type": "shell",
"command": "pnpm dev",
"isBackground": true,
"paperclip": {
"kind": "service",
"readiness": {
"type": "http",
"urlTemplate": "http://127.0.0.1:${port}"
},
"expose": {
"type": "url",
"urlTemplate": "http://127.0.0.1:${port}"
}
}
}
```
That gives us interoperability without depending on VS Code-only semantics for service readiness and exposure.
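Put together, the import rules could look like the following sketch, reusing the `WorkspaceCommandDefinition` shape and the classification sketch above. `dependsOn` handling is omitted here; per rule 5 it would be carried along as metadata:
```ts
function importVsCodeTask(
  task: VsCodeTask,
  index: number,
): WorkspaceCommandDefinition {
  const kind = classifyTask(task); // from the classification sketch above
  const label = task.label ?? `task-${index}`; // rule 4: labels become names
  return {
    id: `vscode:${label}`,
    name: label,
    kind: kind === "service" ? "service" : "job", // rules 1 and 2
    source: {
      type: "vscode_task",
      taskLabel: label,
      taskPath: ".vscode/tasks.json",
    },
    command: task.command ?? null,
    cwd: null,
    autoStart: false,
    serviceConfig: kind === "service" ? { lifecycle: "shared" } : null,
    importWarnings: [],
    // Rule 3: unsupported tasks stay visible but disabled.
    disabledReason:
      kind === "unsupported" ? "Task type is not runnable by Paperclip" : null,
  };
}
```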
## Execution Policy
Project workspaces should be the main place where imported commands are discovered and curated.
Execution workspaces should inherit that curated command set by default, with optional issue-level overrides.
Recommended precedence:
1. execution workspace override
2. project workspace command set
3. imported VS Code tasks from the linked workspace
4. advanced raw runtime fallback
This matches the existing direction in `doc/plans/2026-03-10-workspace-strategy-and-git-worktrees.md`.
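A sketch of that precedence as a name-keyed merge, where the earlier, higher-precedence layers win:
```ts
type CommandLayer = Map<string, WorkspaceCommandDefinition>;

// Later layers only fill names the higher-precedence layers left unset.
function resolveCommands(layers: {
  executionOverride: CommandLayer;
  projectCommands: CommandLayer;
  importedVsCodeTasks: CommandLayer;
  rawRuntimeFallback: CommandLayer;
}): CommandLayer {
  const ordered = [
    layers.executionOverride,
    layers.projectCommands,
    layers.importedVsCodeTasks,
    layers.rawRuntimeFallback,
  ];
  const resolved: CommandLayer = new Map();
  for (const layer of ordered) {
    for (const [name, command] of layer) {
      if (!resolved.has(name)) resolved.set(name, command);
    }
  }
  return resolved;
}
```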
## Implementation Plan
### Phase 1: Discovery and read-only visibility
Goal:
show imported VS Code tasks in the workspace UI without changing runtime behavior.
Work:
- parse `.vscode/tasks.json` for project workspaces with local `cwd`
- derive a list of candidate commands plus unsupported items
- show source, label, command, cwd, and classification
- show parse warnings and unsupported reasons
Success condition:
an operator can see what Paperclip would import and why.
### Phase 2: Command model and explicit classification
Goal:
introduce a first-class workspace command layer above raw runtime JSON.
Work:
- add a persisted command definition model in workspace metadata or a dedicated table
- allow operator edits to imported command classification
- separate `service` and `job` in UI
- keep existing runtime-service storage for live supervised processes
Success condition:
the workspace UI is command-first, and raw runtime JSON is advanced-only.
### Phase 3: Service execution backed by existing runtime supervisor
Goal:
run supported imported service commands through the current Paperclip supervisor.
Work:
- compile service commands into the existing runtime service start/stop path
- persist desired state per named command
- keep startup restoration behavior for service commands
- make the active command name explicit everywhere control actions appear
Success condition:
imported service commands behave like native Paperclip services once adopted.
### Phase 4: Job execution and optional dependency handling
Goal:
support one-shot imported commands without pretending they are services.
Work:
- add `Run` actions for jobs
- record output in workspace operations
- optionally support simple `dependsOn` execution for jobs with clear logging
Success condition:
one-shot tasks are runnable, but they are not mixed into the service lifecycle model.
### Phase 5: Adapter and execution workspace integration
Goal:
let agents and issue-scoped workspaces consume the curated command model consistently.
Work:
- expose inherited workspace commands to execution workspaces
- allow issue-level selection of a default service command when relevant
- make service selection explicit in issue and workspace views
Success condition:
agents, operators, and workspaces all refer to the same named commands.
## Non-Goals
- full VS Code task-runner parity
- support for every VS Code task type
- removal of Paperclip's own runtime supervision model
- editor-dependent execution semantics inside the control plane
## Risks
- overfitting Paperclip to VS Code and making the model worse for non-VS-Code repos
- misclassifying watch tasks as durable services
- hiding too much detail and making debugging harder
- allowing imported task graphs to become implicit magic
These risks are manageable if the import layer stays explicit, conservative, and operator-editable.
## Decision
Paperclip should adopt VS Code tasks as an optional workspace command source, not as the canonical runtime model.
The main UX change should be:
- move from raw runtime JSON to named workspace commands
- separate services from jobs
- make the exact controlled command explicit
- let `.vscode/tasks.json` pre-populate those commands when available
## External References
- VS Code tasks documentation: https://code.visualstudio.com/docs/debugtest/tasks
- Existing Paperclip workspace plan: `doc/plans/2026-03-10-workspace-strategy-and-git-worktrees.md`

View File

@@ -10,12 +10,7 @@ It is intentionally narrower than [PLUGIN_SPEC.md](./PLUGIN_SPEC.md). The spec i
- Plugin UI runs as same-origin JavaScript inside the main Paperclip app.
- Worker-side host APIs are capability-gated.
- Plugin UI is not sandboxed by manifest capabilities.
- Plugin database migrations are restricted to a host-derived plugin namespace.
- Plugin-owned JSON API routes must be declared in the manifest and are mounted
only under `/api/plugins/:pluginId/api/*`.
- The host provides a small shared React component kit through
`@paperclipai/plugin-sdk/ui`; use it for common Paperclip controls before
building custom versions.
- `ctx.assets` is not supported in the current runtime.
## Scaffold a plugin
@@ -82,14 +77,11 @@ Worker:
- secrets
- activity
- state
- database namespace via `ctx.db`
- scoped JSON API routes declared with `apiRoutes`
- entities
- projects, project workspaces, and plugin-managed projects
- companies
- issues, comments, namespaced `plugin:<pluginKey>` origins, blocker relations, checkout assertions, assignment wakeups, and orchestration summaries
- agents, plugin-managed agents, and agent sessions
- plugin-managed routines
- goals
- data/actions
- streams
@@ -97,210 +89,6 @@ Worker:
- metrics
- logger
### Plugin database declarations
First-party or otherwise trusted orchestration plugins can declare:
```ts
database: {
migrationsDir: "migrations",
coreReadTables: ["issues"],
}
```
Required capabilities are `database.namespace.migrate` and
`database.namespace.read`; add `database.namespace.write` for runtime mutations.
The host derives `ctx.db.namespace`, runs SQL files in filename order before the
worker starts, records checksums in `plugin_migrations`, and rejects changed
already-applied migrations.
Migration SQL may create or alter objects only inside `ctx.db.namespace`. It may
reference whitelisted `public` core tables for foreign keys or read-only views,
but it may not mutate, alter, drop, or truncate public tables, and it may not
create extensions, triggers, or untrusted languages, or run multi-statement SQL
at runtime. Runtime `ctx.db.query()` is restricted to `SELECT`; runtime
`ctx.db.execute()` is restricted to namespace-local `INSERT`, `UPDATE`, and
`DELETE`.
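A worker-side sketch of that runtime split, assuming a conventional `(sql, params)` signature and a hypothetical namespace-local `notes` table created by a plugin migration:
```ts
// `ctx` is the worker plugin context; the `notes` table is an assumption.
async function addNote(ctx: any, companyId: string, body: string) {
  // execute() is restricted to namespace-local INSERT/UPDATE/DELETE.
  await ctx.db.execute(
    `INSERT INTO ${ctx.db.namespace}.notes (company_id, body) VALUES ($1, $2)`,
    [companyId, body],
  );
  // query() is restricted to SELECT.
  return ctx.db.query(
    `SELECT id, body FROM ${ctx.db.namespace}.notes WHERE company_id = $1`,
    [companyId],
  );
}
```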
### Scoped plugin API routes
Plugins can expose JSON-only routes under their own namespace:
```ts
apiRoutes: [
{
routeKey: "initialize",
method: "POST",
path: "/issues/:issueId/smoke",
auth: "board-or-agent",
capability: "api.routes.register",
checkoutPolicy: "required-for-agent-in-progress",
companyResolution: { from: "issue", param: "issueId" },
},
]
```
The host resolves the plugin, checks that it is ready, enforces
`api.routes.register`, matches the declared method/path, resolves company access,
and applies checkout policy before dispatching to the worker's `onApiRequest`
handler. The worker receives sanitized headers, route params, query, parsed JSON
body, actor context, and company id. Do not use plugin routes to claim core
paths; they always remain under `/api/plugins/:pluginId/api/*`.
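On the worker side, the dispatch target could look like the sketch below. The input field names follow the description above but are assumptions, as is the exact wiring into the plugin definition:
```ts
type PluginApiResponse = {
  status?: number;
  headers?: Record<string, string>;
  body?: unknown;
};

async function onApiRequest(input: {
  routeKey: string;
  params: Record<string, string>;
  query: Record<string, string>;
  body: unknown;
  companyId: string;
}): Promise<PluginApiResponse> {
  if (input.routeKey !== "initialize") {
    return { status: 404, body: { error: "unknown route" } };
  }
  // The host has already enforced auth, company access, and checkout policy.
  return {
    status: 200,
    body: { ok: true, issueId: input.params.issueId, companyId: input.companyId },
  };
}
```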
## Managed Paperclip resources
Plugins that provide durable Paperclip business objects should declare them in
the manifest and let the host create or relink the actual records per company.
Do this for plugin-owned agents, plugin-owned projects, and recurring automation.
Do not hide long-lived work behind private plugin state when it should be visible
to the board, scoped to a company, audited, budgeted, and assigned like normal
Paperclip work.
Use these surfaces:
- Managed agents: declare top-level `agents[]` and require
`agents.managed`. Use this when the plugin provides a named worker the board
should see in the org, budget, pause, invoke, and inspect. Managed agents are
normal Paperclip agents with plugin ownership metadata, not background plugin
workers.
- Managed projects: declare top-level `projects[]` and require
`projects.managed`. Use this when the plugin needs a stable company-scoped
project for its issues, routines, or workspace-oriented UI. Keep plugin work
in a project instead of scattering generated issues across unrelated projects.
- Managed routines: declare top-level `routines[]` and require
`routines.managed`. Use this for scheduled, webhook, or manually triggered
jobs that should create visible Paperclip issues. Prefer managed routines over
plugin `jobs[]` for recurring business work; plugin jobs are for plugin
runtime maintenance that does not need a board-visible task trail.
Managed resources are resolved by stable plugin keys, not hardcoded database
ids. In a worker action or data handler, call `ctx.agents.managed.reconcile()`,
`ctx.projects.managed.reconcile()`, and `ctx.routines.managed.reconcile()` for
the current `companyId`. `reconcile()` creates the missing resource, relinks a
recoverable binding, or returns the existing resource. `reset()` reapplies the
manifest defaults when the operator wants to restore the plugin's suggested
configuration.
Declare dependencies between managed resources with refs. A routine can point
at a managed agent through `assigneeRef` and at a managed project through
`projectRef`. Reconcile the referenced agent and project before reconciling the
routine; if a ref is still missing, the routine resolution reports
`missing_refs` instead of guessing.
```ts
import type { PaperclipPluginManifestV1 } from "@paperclipai/plugin-sdk";
const manifest: PaperclipPluginManifestV1 = {
id: "example.research-plugin",
apiVersion: 1,
version: "0.1.0",
displayName: "Research Plugin",
description: "Creates a managed research agent and scheduled research routine.",
author: "Example",
categories: ["automation"],
capabilities: [
"agents.managed",
"projects.managed",
"routines.managed",
"instance.settings.register",
],
entrypoints: {
worker: "./dist/worker.js",
ui: "./dist/ui",
},
agents: [
{
agentKey: "researcher",
displayName: "Researcher",
role: "research",
title: "Research Agent",
capabilities: "Runs recurring research briefs for this company.",
adapterPreference: ["codex_local", "claude_local", "process"],
instructions: {
content: "Follow the Paperclip heartbeat and produce concise research briefs.",
},
},
],
projects: [
{
projectKey: "research",
displayName: "Research",
description: "Recurring research work created by the Research Plugin.",
status: "in_progress",
},
],
routines: [
{
routineKey: "weekly-brief",
title: "Weekly research brief",
description: "Create a short research brief for the board.",
assigneeRef: { resourceKind: "agent", resourceKey: "researcher" },
projectRef: { resourceKind: "project", resourceKey: "research" },
priority: "medium",
triggers: [
{
kind: "schedule",
label: "Monday morning",
cronExpression: "0 9 * * 1",
timezone: "America/Chicago",
enabled: false,
},
],
},
],
ui: {
slots: [
{
type: "settingsPage",
id: "settings",
displayName: "Research",
exportName: "SettingsPage",
},
],
},
};
export default manifest;
```
In the worker, expose a small setup action or settings-page action that
reconciles the resources for the selected company:
```ts
import { definePlugin } from "@paperclipai/plugin-sdk";
export default definePlugin({
setup(ctx) {
ctx.actions.register("setup-company", async (params) => {
const companyId = String(params.companyId ?? "");
if (!companyId) throw new Error("companyId is required");
const project = await ctx.projects.managed.reconcile("research", companyId);
const agent = await ctx.agents.managed.reconcile("researcher", companyId);
const routine = await ctx.routines.managed.reconcile("weekly-brief", companyId);
return { project, agent, routine };
});
},
});
```
Authoring rules:
- Keep keys stable once published. Renaming `agentKey`, `projectKey`, or
`routineKey` creates a new managed resource from the host's point of view.
- Use managed agents for plugin-provided labor. Use `ctx.agents.invoke()` or
`ctx.agents.sessions` only after you have a real agent id, either selected by
the operator or resolved from `ctx.agents.managed`.
- Use managed routines for recurring or externally triggered work that should
produce tasks. Schedule, webhook, and API triggers are visible routine
triggers, and each run has the normal Paperclip issue/audit trail.
- Use managed projects to keep plugin-generated work organized and to give
project-scoped plugin UI a stable home. For filesystem access inside a
project, still resolve project workspaces through `ctx.projects`.
- Keep defaults conservative. Managed declarations are suggestions owned by the
plugin, but the resulting resources are normal Paperclip records that the
operator can inspect, pause, and adjust.
UI:
- `usePluginData`
@@ -326,187 +114,6 @@ Mount surfaces currently wired in the host include:
- `commentAnnotation`
- `commentContextMenuItem`
## Shared host components
Use shared components from `@paperclipai/plugin-sdk/ui` when the plugin needs a
Paperclip-native control. The host owns the implementation, so plugins inherit
the board's current styling, ordering, recent selections, and dark-mode behavior
without importing `ui/src` internals.
Currently exposed components include:
- `MarkdownBlock` and `MarkdownEditor` for rendered and editable markdown.
- `FileTree` for serializable file and directory trees.
- `IssuesList` for a native company-scoped issue table.
- `AssigneePicker` for the same agent/user selector used in the new issue pane.
Use the controlled `value` format `agent:<id>`, `user:<id>`, or `""`.
- `ProjectPicker` for the same project selector used in the new issue pane.
Use the controlled project id value, or `""` for no project.
- `ManagedRoutinesList` for plugin-owned routine settings pages.
```tsx
import { useState } from "react";
import { AssigneePicker, ProjectPicker } from "@paperclipai/plugin-sdk/ui";
export function PluginAssignmentControls({ companyId }: { companyId: string }) {
const [assignee, setAssignee] = useState("");
const [projectId, setProjectId] = useState("");
return (
<>
<AssigneePicker
companyId={companyId}
value={assignee}
onChange={(value) => setAssignee(value)}
/>
<ProjectPicker
companyId={companyId}
value={projectId}
onChange={setProjectId}
/>
</>
);
}
```
## File and path UI
Plugin UI often needs to render a file tree, accept a folder path, or browse a
project workspace. There are three different surfaces for that, and they map to
different trust and data-flow boundaries. Pick the surface that matches the
data the plugin actually has.
### When to use the shared `FileTree`
Use `FileTree` from `@paperclipai/plugin-sdk/ui` whenever the plugin only needs
to render a serializable file/directory list and react to selection or
expand/collapse. The host owns the implementation, so plugin UI inherits the
board's icons, indent, focus ring, and dark-mode styling without importing host
internals.
```tsx
import { useState } from "react";
import {
  FileTree,
  type FileTreeNode,
} from "@paperclipai/plugin-sdk/ui";
const nodes: FileTreeNode[] = [
{ name: "AGENTS.md", path: "AGENTS.md", kind: "file", children: [] },
{
name: "wiki",
path: "wiki",
kind: "dir",
children: [
{ name: "index.md", path: "wiki/index.md", kind: "file", children: [] },
],
},
];
export function WikiTree() {
const [expanded, setExpanded] = useState<Set<string>>(() => new Set(["wiki"]));
const [selected, setSelected] = useState<string | null>(null);
return (
<FileTree
nodes={nodes}
selectedFile={selected}
expandedPaths={expanded}
onSelectFile={(path) => setSelected(path)}
onToggleDir={(path) =>
setExpanded((current) => {
const next = new Set(current);
next.has(path) ? next.delete(path) : next.add(path);
return next;
})
}
/>
);
}
```
Good fits:
- LLM Wiki page navigation in `packages/plugins/plugin-llm-wiki` builds a
`FileTreeNode[]` from worker query results and renders it through `FileTree`.
- The example `plugin-file-browser-example` lazily fetches a directory's
children through a `loadFileList` action when `onToggleDir` fires, then
merges the children into the local tree state — letting the shared component
handle rendering and selection.
Boundary rules:
- Keep the prop surface serializable (`nodes`, `expandedPaths`, `checkedPaths`,
`fileBadges`, `fileTones`). Do not pass arbitrary render functions across the
plugin/host boundary in v1; the supported escape hatches are
`fileBadges` (status pill keyed by path) and `fileTones` (row tone keyed by
path).
- Do not import the host's `FileTree.tsx` or any `ui/src/*` module. The SDK
declaration is the only supported import path for plugin UI.
- The shared `FileTree` is for rendering and selection. Plugin-specific editors,
ingest flows, query forms, and lint runs stay inside the plugin and do not
belong as `FileTree` props.
### When to declare `localFolders`
When the plugin needs operator-configured filesystem roots — typically for
trusted local plugins like wiki tooling — declare `localFolders[]` on the
manifest and add the `local.folders` capability. The host renders a settings
surface for the operator to set the absolute path, validates the path
server-side (containment, symlinks, required files/directories), and exposes
`ctx.localFolders.readText()` and `ctx.localFolders.writeTextAtomic()` in the
worker.
```ts
export const manifest = {
capabilities: ["local.folders"],
localFolders: [
{
folderKey: "content-root",
displayName: "Content root",
access: "readWrite",
requiredDirectories: ["sources", "pages"],
requiredFiles: ["schema.md"],
},
],
};
```
Use this when:
- The data lives outside any project workspace.
- Reads and writes need company-scoped configuration.
- The operator picks the path once in plugin settings and the worker resolves
files relative to that root.
Do not use `localFolders` to grant the UI direct browser-side access to the
filesystem — there is no such capability. The browser still goes through the
worker via `getData` / `performAction`, and the worker only exposes paths it
chose to expose.
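A worker-side sketch, assuming `readText` and `writeTextAtomic` take a folder key plus a root-relative path; the exact signatures are not pinned down here:
```ts
// Assumed signatures: readText(folderKey, relativePath) and
// writeTextAtomic(folderKey, relativePath, contents).
async function appendChangelog(ctx: any, entry: string) {
  const existing = await ctx.localFolders.readText(
    "content-root",
    "pages/changelog.md",
  );
  await ctx.localFolders.writeTextAtomic(
    "content-root",
    "pages/changelog.md",
    `${existing}\n- ${entry}`,
  );
}
```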
### When to keep worker-mediated project workspace browsing
When the data lives inside an existing project workspace, keep the browsing
flow worker-mediated:
- The worker uses `ctx.projects.listWorkspaces()` to resolve the workspace
path, then reads its filesystem with normal Node APIs.
- The plugin UI calls a `getData` handler for the root listing and an action
for lazy children, then renders them through `FileTree`.
- The worker is the only side that touches the disk. The browser receives a
serializable tree and never sees raw absolute paths it can replay.
The example `plugin-file-browser-example` is the reference for this pattern:
the worker registers `fileList` (data) and `loadFileList` (action) over the
same handler, and the UI uses the action for on-toggle directory loading so the
shared `FileTree` stays the rendering surface.
### Mixing surfaces
A single plugin can use more than one of these. The LLM Wiki uses
`localFolders` for its content root, then renders the resulting page list
through `FileTree`. The file browser example uses `ctx.projects.listWorkspaces`
to pick a workspace and renders its on-disk tree through `FileTree` with lazy
loading. Pick the boundary per data source, not per plugin.
## Company routes
Plugins may declare a `page` slot with `routePath` to own a company route like:

View File

@@ -27,10 +27,7 @@ Current limitations to keep in mind:
- Published npm packages are the intended install artifact for deployed plugins.
- The repo example plugins under `packages/plugins/examples/` are development conveniences. They work from a source checkout and should not be assumed to exist in a generic published build unless they are explicitly shipped with that build.
- Dynamic plugin install is not yet cloud-ready for horizontally scaled or ephemeral deployments. There is no shared artifact store, install coordination, or cross-node distribution layer yet.
- The current runtime ships a small host-provided plugin UI component kit through `@paperclipai/plugin-sdk/ui`, but does not support plugin asset uploads/reads yet. Treat plugin asset APIs as future-scope ideas, not current implementation promises.
- Scoped plugin API routes are JSON-only and must be declared in `apiRoutes`.
They mount under `/api/plugins/:pluginId/api/*`; plugins cannot shadow core
API routes.
In practice, that means the current implementation is a good fit for local development and self-hosted persistent deployments, but not yet for multi-instance cloud plugin distribution.
@@ -627,46 +624,7 @@ Required SDK clients:
Plugins that need filesystem, git, terminal, or process operations handle those directly using standard Node APIs or libraries. The host provides project workspace metadata through `ctx.projects` so plugins can resolve workspace paths, but the host does not proxy low-level OS operations.
## 14.1 Issue Orchestration APIs
Trusted orchestration plugins can create and update Paperclip issues through `ctx.issues` instead of importing server internals. The public issue contract includes parent/project/goal links, board or agent assignees, blocker IDs, labels, billing code, request depth, execution workspace inheritance, and plugin origin metadata.
Origin rules:
- Built-in core issues keep built-in origins such as `manual` and `routine_execution`.
- Plugin-managed issues use `plugin:<pluginKey>` or a sub-kind such as `plugin:<pluginKey>:feature`.
- The host derives the default plugin origin from the installed plugin key and rejects attempts to set `plugin:<otherPluginKey>` origins.
- `originId` is plugin-defined and should be stable for idempotent generated work.
Relation and read helpers:
- `ctx.issues.relations.get(issueId, companyId)`
- `ctx.issues.relations.setBlockedBy(issueId, blockerIssueIds, companyId)`
- `ctx.issues.relations.addBlockers(issueId, blockerIssueIds, companyId)`
- `ctx.issues.relations.removeBlockers(issueId, blockerIssueIds, companyId)`
- `ctx.issues.getSubtree(issueId, companyId, options)`
- `ctx.issues.summaries.getOrchestration({ issueId, companyId, includeSubtree, billingCode })`
Governance helpers:
- `ctx.issues.assertCheckoutOwner({ issueId, companyId, actorAgentId, actorRunId })` lets plugin actions preserve agent-run checkout ownership.
- `ctx.issues.requestWakeup(issueId, companyId, options)` requests assignment wakeups through host heartbeat semantics, including terminal-status, blocker, assignee, and budget hard-stop checks.
- `ctx.issues.requestWakeups(issueIds, companyId, options)` applies the same host-owned wakeup semantics to a batch and may use an idempotency key prefix for stable coordinator retries.
Plugin-originated issue, relation, document, comment, and wakeup mutations must write activity entries with `actorType: "plugin"` and details fields for `sourcePluginId`, `sourcePluginKey`, `initiatingActorType`, `initiatingActorId`, and `initiatingRunId` when a user or agent run initiated the plugin work.
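A sketch of a coordinator-style helper built on those APIs; the wakeup option name is an assumption:
```ts
// `ctx` is the trusted orchestration plugin context.
async function blockAndWake(
  ctx: any,
  companyId: string,
  issueId: string,
  blockerIssueIds: string[],
) {
  // Record the blockers on the issue.
  await ctx.issues.relations.setBlockedBy(issueId, blockerIssueIds, companyId);
  // Ask the host to wake the blocker assignees under normal heartbeat semantics.
  await ctx.issues.requestWakeups(blockerIssueIds, companyId, {
    idempotencyKeyPrefix: `blockers:${issueId}`, // option name is an assumption
  });
}
```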
Scoped API routes:
- `apiRoutes[]` declares `routeKey`, `method`, plugin-local `path`, `auth`,
`capability`, optional checkout policy, and company resolution.
- The host enforces auth, company access, `api.routes.register`, route matching,
and checkout policy before worker dispatch.
- The worker implements `onApiRequest(input)` and returns a JSON response shape
`{ status?, headers?, body? }`.
- Only safe request headers are forwarded; auth/cookie headers are never passed
to the worker.
## 14.2 Example SDK Shape
```ts
/** Top-level helper for defining a plugin with type checking */
@@ -738,24 +696,16 @@ The host enforces capabilities in the SDK layer and refuses calls outside the gr
- `project.workspaces.read`
- `issues.read`
- `issue.comments.read`
- `issue.documents.read`
- `issue.relations.read`
- `issue.subtree.read`
- `agents.read`
- `goals.read`
- `activity.read`
- `costs.read`
- `issues.orchestration.read`
### Data Write
- `issues.create`
- `issues.update`
- `issue.comments.create`
- `issue.documents.write`
- `issue.relations.write`
- `issues.checkout`
- `issues.wakeup`
- `assets.write`
- `assets.read`
- `activity.log.write`
@@ -822,13 +772,6 @@ Minimum event set:
- `issue.created`
- `issue.updated`
- `issue.comment.created`
- `issue.document.created`
- `issue.document.updated`
- `issue.document.deleted`
- `issue.relations.updated`
- `issue.checked_out`
- `issue.released`
- `issue.assignment_wakeup_requested`
- `agent.created`
- `agent.updated`
- `agent.status_changed`
@@ -838,8 +781,6 @@ Minimum event set:
- `agent.run.cancelled`
- `approval.created`
- `approval.decided`
- `budget.incident.opened`
- `budget.incident.resolved`
- `cost_event.created`
- `activity.logged`
@@ -976,23 +917,13 @@ export function DashboardWidget({ context }: PluginWidgetProps) {
The SDK includes a `ui` subpath export that plugin frontends import. This subpath provides:
- **Bridge hooks**: `usePluginData(key, params)`, `usePluginAction(key)`, `useHostContext()`, `useHostNavigation()`
- **Design tokens**: colors, spacing, typography, shadows matching the host theme
- **Shared components**: `MetricCard`, `StatusBadge`, `DataTable`, `LogView`, `ActionBar`, `Spinner`, etc.
- **Type definitions**: `PluginPageProps`, `PluginWidgetProps`, `PluginDetailTabProps`
Plugins are encouraged but not required to use the shared components. A plugin may render entirely custom UI as long as it communicates through the bridge.
`useHostNavigation()` is the supported way for plugin UI to navigate to
Paperclip-internal pages. It exposes `resolveHref(to)`, `navigate(to,
options?)`, and `linkProps(to, options?)`. Plugin links should prefer
`linkProps()` so anchors keep real `href` values for copy-link, modifier-click,
middle-click, and open-in-new-tab behavior while plain left-clicks route through
the host SPA router. The host resolves company-scoped paths against the active
company prefix without double-prefixing already-prefixed paths. Plugin UI should
not use raw same-origin `href`s or `window.location.assign()` for internal
Paperclip navigation because those can force a full document reload.
### 19.0.2 Bundle Isolation
Plugin UI bundles are loaded as standard ES modules, not iframed. This gives plugins full rendering performance and access to the host's design tokens.
@@ -1072,11 +1003,6 @@ The host SDK ships shared components that plugins can import to quickly build UI
| `LogView` | Scrollable log output with timestamps | Webhook deliveries, job output, process logs |
| `JsonTree` | Collapsible JSON tree for debugging | Raw API responses, plugin state inspection |
| `Spinner` | Loading indicator | Data fetch states |
| `FileTree` | Host-styled file/directory tree | Wiki pages, workspace files, import previews |
| `IssuesList` | Host issue list | Plugin pages that need a native issue view |
| `AssigneePicker` | Host assignee picker for agents and board users | Creating issues, assigning routines, filtering work |
| `ProjectPicker` | Host project picker | Creating issues, scoping dashboards, filtering work |
| `ManagedRoutinesList` | Host routine list | Plugin settings pages that manage routines |
Plugins may also use entirely custom components. The shared components exist to reduce boilerplate and keep visual consistency, not to limit what plugins can render.
@@ -1312,8 +1238,6 @@ Plugin-originated mutations should write:
- `actor_type = plugin`
- `actor_id = <plugin-id>`
- details include `sourcePluginId` and `sourcePluginKey`
- details include `initiatingActorType`, `initiatingActorId`, and `initiatingRunId` when a user or agent run triggered the plugin work
## 21.5 Plugin Migrations

View File

@@ -114,14 +114,14 @@ If the connection drops, the UI reconnects automatically.
1. Enable timer wakeups (for example every 300s)
2. Keep assignment wakeups on
3. Use a focused prompt template that tells agents to act in the same heartbeat, leave durable progress, and mark blocked work with an owner/action
4. Watch run logs and adjust prompt/config over time
## 7.2 Event-driven loop (less constant polling)
1. Disable timer or set a long interval
2. Keep wake-on-assignment enabled
3. Use child issues, comments, and on-demand wakeups for handoffs instead of loops that poll agents, sessions, or processes
## 7.3 Safety-first loop

View File

@@ -1,299 +0,0 @@
# Invite Flow State Map
Status: Current implementation map
Date: 2026-04-13
This document maps the current invite creation and acceptance states implemented in:
- `ui/src/pages/CompanyInvites.tsx`
- `ui/src/pages/CompanySettings.tsx`
- `ui/src/pages/InviteLanding.tsx`
- `server/src/routes/access.ts`
- `server/src/lib/join-request-dedupe.ts`
## State Legend
- Invite state: `active`, `revoked`, `accepted`, `expired`
- Join request status: `pending_approval`, `approved`, `rejected`
- Claim secret state for agent joins: `available`, `consumed`, `expired`
- Invite type: `company_join` or `bootstrap_ceo`
- Join type: `human`, `agent`, or `both`
## Entity Lifecycle
```mermaid
flowchart TD
Board[Board user on invite screen]
HumanInvite[Create human company invite]
OpenClawInvite[Generate OpenClaw invite prompt]
Active[Invite state: active]
Revoked[Invite state: revoked]
Expired[Invite state: expired]
Accepted[Invite state: accepted]
BootstrapDone[Bootstrap accepted<br/>no join request]
HumanReuse{Matching human join request<br/>already exists for same user/email?}
HumanPending[Join request<br/>pending_approval]
HumanApproved[Join request<br/>approved]
HumanRejected[Join request<br/>rejected]
AgentPending[Agent join request<br/>pending_approval<br/>+ optional claim secret]
AgentApproved[Agent join request<br/>approved]
AgentRejected[Agent join request<br/>rejected]
ClaimAvailable[Claim secret available]
ClaimConsumed[Claim secret consumed]
ClaimExpired[Claim secret expired]
OpenClawReplay[Special replay path:<br/>accepted invite can be POSTed again<br/>for openclaw_gateway only]
Board --> HumanInvite --> Active
Board --> OpenClawInvite --> Active
Active -->|revoke| Revoked
Active -->|expiresAt passes| Expired
Active -->|bootstrap_ceo accept| BootstrapDone
BootstrapDone --> Accepted
Active -->|human accept| HumanReuse
HumanReuse -->|reuse existing pending request| HumanPending
HumanReuse -->|reuse existing approved request| HumanApproved
HumanReuse -->|no reusable request<br/>create new request| HumanPending
HumanPending -->|board approves| HumanApproved
HumanPending -->|board rejects| HumanRejected
HumanPending --> Accepted
HumanApproved --> Accepted
Active -->|agent accept| AgentPending
AgentPending --> Accepted
AgentPending -->|board approves| AgentApproved
AgentPending -->|board rejects| AgentRejected
AgentApproved -->|createdAgentId + claimSecretHash| ClaimAvailable
ClaimAvailable -->|POST claim-api-key succeeds| ClaimConsumed
ClaimAvailable -->|secret expires| ClaimExpired
Accepted --> OpenClawReplay
OpenClawReplay --> AgentPending
OpenClawReplay --> AgentApproved
```
## Board-Side Screen States
```mermaid
stateDiagram-v2
[*] --> CompanySelection
CompanySelection --> NoCompany: no company selected
CompanySelection --> LoadingHistory: selectedCompanyId present
LoadingHistory --> HistoryError: listInvites failed
LoadingHistory --> Ready: listInvites succeeded
state Ready {
[*] --> EmptyHistory
EmptyHistory --> PopulatedHistory: invites exist
PopulatedHistory --> LoadingMore: View more
LoadingMore --> PopulatedHistory: next page loaded
PopulatedHistory --> RevokePending: Revoke active invite
RevokePending --> PopulatedHistory: revoke succeeded
RevokePending --> PopulatedHistory: revoke failed
EmptyHistory --> CreatePending: Create invite
PopulatedHistory --> CreatePending: Create invite
CreatePending --> LatestInviteVisible: create succeeded
CreatePending --> Ready: create failed
LatestInviteVisible --> CopiedToast: clipboard copy succeeded
LatestInviteVisible --> Ready: navigate away or refresh
}
CompanySelection --> OpenClawPromptReady: Company settings prompt generator
OpenClawPromptReady --> OpenClawPromptPending: Generate OpenClaw Invite Prompt
OpenClawPromptPending --> OpenClawSnippetVisible: prompt generated
OpenClawPromptPending --> OpenClawPromptReady: generation failed
```
## Invite Landing Screen States
```mermaid
stateDiagram-v2
[*] --> TokenGate
TokenGate --> InvalidToken: token missing
TokenGate --> Loading: token present
Loading --> InviteUnavailable: invite fetch failed or invite not returned
Loading --> CheckingAccess: signed-in session and invite.companyId
Loading --> InviteResolved: invite loaded without membership check
Loading --> AcceptedInviteSummary: invite already consumed<br/>but linked join request still exists
CheckingAccess --> RedirectToBoard: current user already belongs to company
CheckingAccess --> InviteResolved: membership check finished and no join-request summary state is active
CheckingAccess --> AcceptedInviteSummary: membership check finished and invite has joinRequestStatus
state InviteResolved {
[*] --> Branch
Branch --> AgentForm: company_join + allowedJoinTypes=agent
Branch --> InlineAuth: authenticated mode + no session + join is not agent-only
Branch --> AcceptReady: bootstrap invite or human-ready session/local_trusted
InlineAuth --> InlineAuth: toggle sign-up/sign-in
InlineAuth --> InlineAuth: auth validation or auth error message
InlineAuth --> RedirectToBoard: auth succeeded and company membership already exists
InlineAuth --> AcceptPending: auth succeeded and invite still needs acceptance
AgentForm --> AcceptPending: submit request
AgentForm --> AgentForm: validation or accept error
AcceptReady --> AcceptPending: Accept invite
AcceptReady --> AcceptReady: accept error
}
AcceptPending --> BootstrapComplete: bootstrapAccepted=true
AcceptPending --> RedirectToBoard: join status=approved
AcceptPending --> PendingApprovalResult: join status=pending_approval
AcceptPending --> RejectedResult: join status=rejected
state AcceptedInviteSummary {
[*] --> SummaryBranch
SummaryBranch --> PendingApprovalReload: joinRequestStatus=pending_approval
SummaryBranch --> OpeningCompany: joinRequestStatus=approved<br/>and human invite user is now a member
SummaryBranch --> RejectedReload: joinRequestStatus=rejected
SummaryBranch --> ConsumedReload: approved agent invite or other consumed state
}
PendingApprovalResult --> PendingApprovalReload: reload after submit
RejectedResult --> RejectedReload: reload after board rejects
RedirectToBoard --> OpeningCompany: brief pre-navigation render when approved membership is detected
OpeningCompany --> RedirectToBoard: navigate to board
```
## Sequence Diagrams
### Human Invite Creation And First Acceptance
```mermaid
sequenceDiagram
autonumber
actor Board as Board user
participant Settings as Company Invites UI
participant API as Access routes
participant Invites as invites table
actor Invitee as Invite recipient
participant Landing as Invite landing UI
participant Auth as Auth session
participant Join as join_requests table
Board->>Settings: Choose role and click Create invite
Settings->>API: POST /api/companies/:companyId/invites
API->>Invites: Insert active invite
API-->>Settings: inviteUrl + metadata
Invitee->>Landing: Open invite URL
Landing->>API: GET /api/invites/:token
API->>Invites: Load active invite
API-->>Landing: Invite summary
alt Authenticated mode and no session
Landing->>Auth: Sign up or sign in
Auth-->>Landing: Session established
end
Landing->>API: POST /api/invites/:token/accept (requestType=human)
API->>Join: Look for reusable human join request
alt Reusable pending or approved request exists
API->>Invites: Mark invite accepted
API-->>Landing: Existing join request status
else No reusable request exists
API->>Invites: Mark invite accepted
API->>Join: Insert pending_approval join request
API-->>Landing: New pending_approval join request
end
```
### Human Approval And Reload Path
```mermaid
sequenceDiagram
autonumber
actor Invitee as Invite recipient
participant Landing as Invite landing UI
participant API as Access routes
participant Join as join_requests table
actor Approver as Company admin
participant Queue as Access queue UI
participant Membership as company_memberships + grants
Invitee->>Landing: Reload consumed invite URL
Landing->>API: GET /api/invites/:token
API->>Join: Load join request by inviteId
API-->>Landing: joinRequestStatus + joinRequestType
alt joinRequestStatus = pending_approval
Landing-->>Invitee: Show waiting-for-approval panel
Approver->>Queue: Review request in Company Settings -> Access
Queue->>API: POST /companies/:companyId/join-requests/:requestId/approve
API->>Membership: Ensure membership and grants
API->>Join: Mark join request approved
Invitee->>Landing: Refresh after approval
Landing->>API: GET /api/invites/:token
API->>Join: Reload approved join request
API-->>Landing: approved status
Landing-->>Invitee: Opening company and redirect
else joinRequestStatus = rejected
Landing-->>Invitee: Show rejected error panel
else joinRequestStatus = approved but membership missing
Landing-->>Invitee: Fall through to consumed/unavailable state
end
```
### Agent Invite Approval, Claim, And Replay
```mermaid
sequenceDiagram
autonumber
actor Board as Board user
participant Settings as Company Settings UI
participant API as Access routes
participant Invites as invites table
actor Gateway as OpenClaw gateway agent
participant Join as join_requests table
actor Approver as Company admin
participant Agents as agents table
participant Keys as agent_api_keys table
Board->>Settings: Generate OpenClaw invite prompt
Settings->>API: POST /api/companies/:companyId/openclaw-invite-prompt
API->>Invites: Insert active agent invite
API-->>Settings: Prompt text + invite token
Gateway->>API: POST /api/invites/:token/accept (agent, openclaw_gateway)
API->>Invites: Mark invite accepted
API->>Join: Insert pending_approval join request + claimSecretHash
API-->>Gateway: requestId + claimSecret + claimApiKeyPath
Approver->>API: POST /companies/:companyId/join-requests/:requestId/approve
API->>Agents: Create agent + membership + grants
API->>Join: Mark request approved and store createdAgentId
Gateway->>API: POST /api/join-requests/:requestId/claim-api-key (claimSecret)
API->>Keys: Create initial API key
API->>Join: Mark claim secret consumed
API-->>Gateway: Plaintext Paperclip API key
opt Replay accepted invite for updated gateway defaults
Gateway->>API: POST /api/invites/:token/accept again
API->>Join: Reuse existing approved or pending request
API->>Agents: Update approved agent adapter config when applicable
API-->>Gateway: Updated join request payload
end
```
## Notes
- `GET /api/invites/:token` treats `revoked` and `expired` invites as unavailable. Accepted invites remain resolvable when they already have a linked join request, and the summary now includes `joinRequestStatus` plus `joinRequestType`.
- Human acceptance consumes the invite immediately and then either creates a new join request or reuses an existing `pending_approval` or `approved` human join request for the same user/email.
- The landing page has two layers of post-accept UI:
- immediate mutation-result UI from `POST /api/invites/:token/accept`
- reload-time summary UI from `GET /api/invites/:token` once the invite has already been consumed
- Reload behavior for accepted company invites is now status-sensitive:
- `pending_approval` re-renders the waiting-for-approval panel
- `rejected` renders the "This join request was not approved." error panel
- `approved` only becomes a success path for human invites after membership is visible to the current session; otherwise the page falls through to the generic consumed/unavailable state
- `GET /api/invites/:token/logo` still rejects accepted invites, so accepted-invite reload states may fall back to the generated company icon even though the summary payload still carries `companyLogoUrl`.
- The only accepted-invite replay path in the current implementation is `POST /api/invites/:token/accept` for `agent` requests with `adapterType=openclaw_gateway`, and only when the existing join request is still `pending_approval` or already `approved`.
- `bootstrap_ceo` invites are one-time and do not create join requests.

View File

@@ -1,30 +0,0 @@
# AWS ECS Fargate deployment environment
# Copy to .env.aws and fill in values before deploying
#
# Secrets (DATABASE_URL, BETTER_AUTH_SECRET, ANTHROPIC_API_KEY, OPENAI_API_KEY,
# GITHUB_TOKEN) are injected via AWS Secrets Manager — do NOT set them here.
# Deployment mode
PAPERCLIP_DEPLOYMENT_MODE=authenticated
PAPERCLIP_DEPLOYMENT_EXPOSURE=public
PAPERCLIP_PUBLIC_URL=https://paperclip.example.com
# Server
HOST=0.0.0.0
PORT=3100
NODE_ENV=production
SERVE_UI=true
# Paperclip paths
PAPERCLIP_HOME=/paperclip
PAPERCLIP_INSTANCE_ID=default
PAPERCLIP_CONFIG=/paperclip/instances/default/config.json
# Auto-apply migrations on startup
PAPERCLIP_MIGRATION_AUTO_APPLY=true
# Enable heartbeat scheduler for remote agents
HEARTBEAT_SCHEDULER_ENABLED=true
# Post-deploy hardening (uncomment after first user signs up)
# PAPERCLIP_AUTH_DISABLE_SIGN_UP=true

View File

@@ -1,90 +0,0 @@
{
"family": "paperclip-server",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "2048",
"memory": "4096",
"executionRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/paperclip-ecs-execution",
"taskRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/paperclip-ecs-task",
"containerDefinitions": [
{
"name": "paperclip-server",
"image": "<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/paperclip-server:latest",
"essential": true,
"portMappings": [
{
"containerPort": 3100,
"protocol": "tcp"
}
],
"environment": [
{ "name": "NODE_ENV", "value": "production" },
{ "name": "HOST", "value": "0.0.0.0" },
{ "name": "PORT", "value": "3100" },
{ "name": "SERVE_UI", "value": "true" },
{ "name": "PAPERCLIP_HOME", "value": "/paperclip" },
{ "name": "PAPERCLIP_INSTANCE_ID", "value": "default" },
{ "name": "PAPERCLIP_CONFIG", "value": "/paperclip/instances/default/config.json" },
{ "name": "PAPERCLIP_DEPLOYMENT_MODE", "value": "authenticated" },
{ "name": "PAPERCLIP_DEPLOYMENT_EXPOSURE", "value": "public" },
{ "name": "PAPERCLIP_PUBLIC_URL", "value": "https://<DOMAIN>" },
{ "name": "PAPERCLIP_MIGRATION_AUTO_APPLY", "value": "true" },
{ "name": "HEARTBEAT_SCHEDULER_ENABLED", "value": "true" }
],
"secrets": [
{
"name": "DATABASE_URL",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/database-url"
},
{
"name": "BETTER_AUTH_SECRET",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/better-auth-secret"
},
{
"name": "ANTHROPIC_API_KEY",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/anthropic-api-key"
},
{
"name": "OPENAI_API_KEY",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/openai-api-key"
},
{
"name": "GITHUB_TOKEN",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/github-token"
}
],
"mountPoints": [
{
"sourceVolume": "paperclip-data",
"containerPath": "/paperclip",
"readOnly": false
}
],
"healthCheck": {
"command": ["CMD-SHELL", "curl -f http://localhost:3100/api/health || exit 1"],
"interval": 30,
"timeout": 5,
"retries": 3,
"startPeriod": 60
},
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/paperclip",
"awslogs-region": "<REGION>",
"awslogs-stream-prefix": "server"
}
}
}
],
"volumes": [
{
"name": "paperclip-data",
"efsVolumeConfiguration": {
"fileSystemId": "<EFS_ID>",
"rootDirectory": "/",
"transitEncryption": "ENABLED"
}
}
]
}

View File

@@ -20,7 +20,6 @@ The `codex_local` adapter runs OpenAI's Codex CLI locally. It supports session p
| `env` | object | No | Environment variables (supports secret refs) |
| `timeoutSec` | number | No | Process timeout (0 = no timeout) |
| `graceSec` | number | No | Grace period before force-kill |
| `fastMode` | boolean | No | Enables Codex Fast mode. Currently supported on `gpt-5.4` only and burns credits faster |
| `dangerouslyBypassApprovalsAndSandbox` | boolean | No | Skip safety checks (dev only) |
## Session Persistence
@@ -31,22 +30,8 @@ Codex uses `previous_response_id` for session continuity. The adapter serializes
The adapter symlinks Paperclip skills into the global Codex skills directory (`~/.codex/skills`). Existing user skills are not overwritten.
## Fast Mode
When `fastMode` is enabled, Paperclip adds Codex config overrides equivalent to:
```sh
-c 'service_tier="fast"' -c 'features.fast_mode=true'
```
Paperclip currently applies that only when the selected model is `gpt-5.4`. On other models, the toggle is preserved in config but ignored at execution time to avoid unsupported runs.
## Managed `CODEX_HOME`
When Paperclip is running inside a managed worktree instance (`PAPERCLIP_IN_WORKTREE=true`), the adapter instead uses a worktree-isolated `CODEX_HOME` under the Paperclip instance so Codex skills, sessions, logs, and other runtime state do not leak across checkouts. It seeds that isolated home from the user's main Codex home for shared auth/config continuity.
## Manual Local CLI
For manual local CLI usage outside heartbeat runs (for example running as `codexcoder` directly), use:
```sh

View File

@@ -203,43 +203,6 @@ export const sessionCodec: AdapterSessionCodec = {
};
```
## Capability Flags
Adapters can declare what "local" capabilities they support by setting optional fields on the `ServerAdapterModule`. The server and UI use these flags to decide which features to enable for agents using the adapter (instructions bundle editor, skills sync, JWT auth, etc.).
| Flag | Type | Default | What it controls |
|------|------|---------|------------------|
| `supportsLocalAgentJwt` | `boolean` | `false` | Whether heartbeat generates a local JWT for the agent |
| `supportsInstructionsBundle` | `boolean` | `false` | Managed instructions bundle (AGENTS.md) — server-side resolution + UI editor |
| `instructionsPathKey` | `string` | `"instructionsFilePath"` | The `adapterConfig` key that holds the instructions file path |
| `requiresMaterializedRuntimeSkills` | `boolean` | `false` | Whether runtime skill entries must be written to disk before execution |
These flags are exposed via `GET /api/adapters` in a `capabilities` object, along with a derived `supportsSkills` flag (true when `listSkills` or `syncSkills` is defined).
### Example
```ts
export function createServerAdapter(): ServerAdapterModule {
return {
type: "my_k8s_adapter",
execute: myExecute,
testEnvironment: myTestEnvironment,
listSkills: myListSkills,
syncSkills: mySyncSkills,
// Capability flags
supportsLocalAgentJwt: true,
supportsInstructionsBundle: true,
instructionsPathKey: "instructionsFilePath",
requiresMaterializedRuntimeSkills: true,
};
}
```
With these flags set, the Paperclip UI will automatically show the instructions bundle editor, skills management tab, and working directory field for agents using this adapter — no Paperclip source changes required.
If capability flags are not set, the server falls back to legacy hardcoded lists for built-in adapter types. External adapters that omit the flags will default to `false` for all capabilities.
## Skills Injection
Make Paperclip skills discoverable to your agent runtime without writing to the agent's working directory:

View File

@@ -124,14 +124,14 @@ If the connection drops, the UI reconnects automatically.
1. Enable timer wakeups (for example every 300s)
2. Keep assignment wakeups on
3. Use a focused prompt template that tells agents to act in the same heartbeat, leave durable progress, and mark blocked work with an owner/action
3. Use a focused prompt template
4. Watch run logs and adjust prompt/config over time
## 7.2 Event-driven loop (less constant polling)
1. Disable timer or set a long interval
2. Keep wake-on-assignment enabled
3. Use child issues, comments, and on-demand wakeups for handoffs instead of loops that poll agents, sessions, or processes
3. Use on-demand wakeups for manual nudges
## 7.3 Safety-first loop

View File

@@ -13,8 +13,6 @@ GET /api/companies/{companyId}/agents
Returns all agents in the company.
This route does not accept query filters. Unsupported query parameters return `400`.
## Get Agent
```

View File

@@ -1,9 +1,9 @@
---
title: Issues
summary: Issue CRUD, checkout/release, comments, documents, interactions, and attachments
summary: Issue CRUD, checkout/release, comments, documents, and attachments
---
Issues are the unit of work in Paperclip. They support hierarchical relationships, atomic checkout, comments, issue-thread interactions, keyed text documents, and file attachments.
Issues are the unit of work in Paperclip. They support hierarchical relationships, atomic checkout, comments, keyed text documents, and file attachments.
## List Issues
@@ -66,8 +66,6 @@ The optional `comment` field adds a comment in the same call.
Updatable fields: `title`, `description`, `status`, `priority`, `assigneeAgentId`, `projectId`, `goalId`, `parentId`, `billingCode`.
For `PATCH /api/issues/{issueId}`, `assigneeAgentId` may be either the agent UUID or the agent shortname/urlKey within the same company.
## Checkout (Claim Task)
```
@@ -121,65 +119,6 @@ POST /api/issues/{issueId}/comments
@-mentions (`@AgentName`) in comments trigger heartbeats for the mentioned agent.
## Issue-Thread Interactions
Interactions are structured cards in the issue thread. Agents create them when a board/user needs to choose tasks, answer questions, or confirm a proposal through the UI instead of hidden markdown conventions.
### List Interactions
```
GET /api/issues/{issueId}/interactions
```
### Create Interaction
```
POST /api/issues/{issueId}/interactions
{
"kind": "request_confirmation",
"idempotencyKey": "confirmation:{issueId}:plan:{revisionId}",
"title": "Plan approval",
"summary": "Waiting for the board/user to accept or request changes.",
"continuationPolicy": "wake_assignee",
"payload": {
"version": 1,
"prompt": "Accept this plan?",
"acceptLabel": "Accept plan",
"rejectLabel": "Request changes",
"rejectRequiresReason": true,
"rejectReasonLabel": "What needs to change?",
"detailsMarkdown": "Review the latest plan document before accepting.",
"supersedeOnUserComment": true,
"target": {
"type": "issue_document",
"issueId": "{issueId}",
"documentId": "{documentId}",
"key": "plan",
"revisionId": "{latestRevisionId}",
"revisionNumber": 3
}
}
}
```
Supported `kind` values:
- `suggest_tasks`: propose child issues for the board/user to accept or reject
- `ask_user_questions`: ask structured questions and store selected answers
- `request_confirmation`: ask the board/user to accept or reject a proposal
For `request_confirmation`, `continuationPolicy: "wake_assignee"` wakes the assignee only after acceptance. Rejection records the reason without waking the assignee; any follow-up happens through a normal comment if the board/user chooses to add one.
### Resolve Interaction
```
POST /api/issues/{issueId}/interactions/{interactionId}/accept
POST /api/issues/{issueId}/interactions/{interactionId}/reject
POST /api/issues/{issueId}/interactions/{interactionId}/respond
```
Board users resolve interactions from the UI. Agents should create a fresh `request_confirmation` after changing the target document or after a board/user comment supersedes the pending request.
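A minimal sketch of calling the reject route from a board-side client; the bearer auth header and the `reason` body field are assumptions:
```ts
// Sketch: reject a pending request_confirmation card. The route is
// documented above; the auth header and body shape are assumptions.
async function rejectConfirmation(
  baseUrl: string,
  apiKey: string,
  issueId: string,
  interactionId: string,
  reason: string,
) {
  const res = await fetch(
    `${baseUrl}/api/issues/${issueId}/interactions/${interactionId}/reject`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      // With rejectRequiresReason: true, a reason is expected here.
      body: JSON.stringify({ reason }),
    },
  );
  if (!res.ok) throw new Error(`reject failed: ${res.status}`);
}
```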
## Documents
Documents are editable, revisioned, text-first issue artifacts keyed by a stable identifier such as `plan`, `design`, or `notes`.

View File

@@ -75,28 +75,11 @@ Fields:
```
PATCH /api/routines/{routineId}
{
"status": "paused",
"baseRevisionId": "{latestRevisionId}"
"status": "paused"
}
```
All fields from create are updatable. `baseRevisionId` is optional for backward compatibility; when provided, stale values return `409 Conflict` with the current revision id. **Agents can only update routines assigned to themselves and cannot reassign a routine to another agent.**
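A minimal sketch of the optimistic-concurrency loop this enables, assuming bearer auth; the revision list's `id` field and the 409 body's `currentRevisionId` field are assumptions:
```ts
// Sketch: pause a routine with baseRevisionId, retrying once on 409.
async function getLatestRevisionId(
  baseUrl: string,
  apiKey: string,
  routineId: string,
): Promise<string> {
  const res = await fetch(`${baseUrl}/api/routines/${routineId}/revisions`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  const revisions = await res.json(); // newest first, per List Revisions below
  return revisions[0].id; // field name is an assumption
}

async function pauseRoutine(baseUrl: string, apiKey: string, routineId: string) {
  let baseRevisionId = await getLatestRevisionId(baseUrl, apiKey, routineId);
  for (let attempt = 0; attempt < 2; attempt++) {
    const res = await fetch(`${baseUrl}/api/routines/${routineId}`, {
      method: "PATCH",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ status: "paused", baseRevisionId }),
    });
    if (res.status !== 409) return res.json();
    // Stale revision: the 409 carries the current revision id per the docs.
    const conflict = await res.json();
    baseRevisionId = conflict.currentRevisionId; // field name is an assumption
  }
  throw new Error("routine update kept conflicting");
}
```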
## List Revisions
```
GET /api/routines/{routineId}/revisions
```
Returns append-only routine definition revisions newest first. Snapshots include routine fields and safe trigger metadata only; webhook secret values and `secretId` are never returned.
## Restore Revision
```
POST /api/routines/{routineId}/revisions/{revisionId}/restore
```
Restores a historical routine definition by creating a new latest revision copied from the selected revision. Historical revision rows, routine run history, and activity history are preserved. If restoring a deleted webhook trigger requires recreating it, the response can include one-time replacement secret material for that trigger.
All fields from create are updatable. **Agents can only update routines assigned to themselves and cannot reassign a routine to another agent.**
## Add Trigger

Binary file not shown (image, 258 KiB)
Binary file not shown (image, 321 KiB)

View File

@@ -89,8 +89,6 @@ Show resolved environment configuration:
pnpm paperclipai env
```
This now includes bind-oriented deployment settings such as `PAPERCLIP_BIND` and `PAPERCLIP_BIND_HOST` when configured.
## `paperclipai allowed-hostname`
Allow a private hostname for authenticated/private mode:

View File

@@ -1,580 +0,0 @@
---
title: AWS ECS Fargate
summary: Deploy Paperclip to AWS using ECS Fargate, RDS Postgres, and EFS
---
Deploy Paperclip to AWS with ECS Fargate (compute), RDS Postgres 17 (database), and EFS (persistent storage). This guide uses the AWS CLI and produces a single-task ECS service behind an ALB with HTTPS.
## Prerequisites
- AWS CLI v2 configured with a profile that has admin-level permissions
- Docker installed locally (for building and pushing the image)
- A registered domain with DNS you control (for the TLS certificate)
- The Paperclip repo cloned locally
Set these shell variables for the rest of the guide:
```bash
export AWS_REGION=us-east-1
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export PAPERCLIP_DOMAIN=paperclip.example.com # your domain
export DB_PASSWORD=$(openssl rand -base64 24 | tr -d '/+=' | head -c 32)
export AUTH_SECRET=$(openssl rand -base64 32)
```
## 1. Create ECR Repository
```bash
aws ecr create-repository \
--repository-name paperclip-server \
--image-scanning-configuration scanOnPush=true \
--region $AWS_REGION
```
## 2. Build and Push Docker Image
```bash
cd /path/to/paperclip
# Authenticate Docker to ECR
aws ecr get-login-password --region $AWS_REGION \
| docker login --username AWS --password-stdin \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
# Build
docker build -t paperclip-server .
# Tag and push
docker tag paperclip-server:latest \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
docker push \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
```
## 3. Networking (VPC, Subnets, Security Groups)
Use the default VPC or create a dedicated one. The guide assumes the default VPC with at least two public subnets in different AZs; the same two subnets are reused below for the ALB, ECS tasks, RDS, and EFS.
```bash
# Get default VPC
VPC_ID=$(aws ec2 describe-vpcs \
--filters Name=isDefault,Values=true \
--query 'Vpcs[0].VpcId' --output text)
# Get two public subnets (for ALB)
SUBNET_IDS=$(aws ec2 describe-subnets \
--filters Name=vpc-id,Values=$VPC_ID \
--query 'Subnets[?MapPublicIpOnLaunch==`true`] | [0:2].SubnetId' \
--output text)
SUBNET_1=$(echo $SUBNET_IDS | awk '{print $1}')
SUBNET_2=$(echo $SUBNET_IDS | awk '{print $2}')
```
Create security groups:
```bash
# ALB security group — inbound HTTPS
ALB_SG=$(aws ec2 create-security-group \
--group-name paperclip-alb \
--description "Paperclip ALB" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
--group-id $ALB_SG \
--protocol tcp --port 443 --cidr 0.0.0.0/0
# Also open port 80 so the ALB can accept HTTP and redirect to HTTPS
aws ec2 authorize-security-group-ingress \
--group-id $ALB_SG \
--protocol tcp --port 80 --cidr 0.0.0.0/0
# ECS task security group — inbound from ALB only
ECS_SG=$(aws ec2 create-security-group \
--group-name paperclip-ecs \
--description "Paperclip ECS tasks" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
--group-id $ECS_SG \
--protocol tcp --port 3100 \
--source-group $ALB_SG
# RDS security group — inbound from ECS only
RDS_SG=$(aws ec2 create-security-group \
--group-name paperclip-rds \
--description "Paperclip RDS" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
--group-id $RDS_SG \
--protocol tcp --port 5432 \
--source-group $ECS_SG
# EFS security group — inbound NFS from ECS only
EFS_SG=$(aws ec2 create-security-group \
--group-name paperclip-efs \
--description "Paperclip EFS" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
--group-id $EFS_SG \
--protocol tcp --port 2049 \
--source-group $ECS_SG
```
## 4. Create RDS Postgres Instance
```bash
# Custom VPCs don't come with a default DB subnet group — create one
# that spans our two subnets so RDS can place the instance.
aws rds create-db-subnet-group \
--db-subnet-group-name paperclip-db-subnet \
--db-subnet-group-description "Paperclip RDS subnets" \
--subnet-ids $SUBNET_1 $SUBNET_2
aws rds create-db-instance \
--db-instance-identifier paperclip-db \
--db-instance-class db.t4g.micro \
--engine postgres \
--engine-version 17 \
--master-username paperclip \
--master-user-password "$DB_PASSWORD" \
--allocated-storage 20 \
--storage-type gp3 \
--vpc-security-group-ids $RDS_SG \
--db-subnet-group-name paperclip-db-subnet \
--no-publicly-accessible \
--backup-retention-period 7 \
--no-multi-az \
--db-name paperclip \
--region $AWS_REGION
# Wait for it to become available (takes 5-10 min)
aws rds wait db-instance-available \
--db-instance-identifier paperclip-db
# Get the endpoint
RDS_ENDPOINT=$(aws rds describe-db-instances \
--db-instance-identifier paperclip-db \
--query 'DBInstances[0].Endpoint.Address' --output text)
DATABASE_URL="postgresql://paperclip:${DB_PASSWORD}@${RDS_ENDPOINT}:5432/paperclip"
```
## 5. Create EFS Filesystem
```bash
EFS_ID=$(aws efs create-file-system \
--performance-mode generalPurpose \
--throughput-mode bursting \
--encrypted \
--tags Key=Name,Value=paperclip-data \
--query 'FileSystemId' --output text)
# Create mount targets in each subnet
for SUBNET in $SUBNET_1 $SUBNET_2; do
aws efs create-mount-target \
--file-system-id $EFS_ID \
--subnet-id $SUBNET \
--security-groups $EFS_SG
done
# Wait for mount targets
aws efs describe-mount-targets --file-system-id $EFS_ID
```
## 6. Store Secrets
```bash
aws secretsmanager create-secret \
--name paperclip/database-url \
--secret-string "$DATABASE_URL"
aws secretsmanager create-secret \
--name paperclip/anthropic-api-key \
--secret-string "YOUR_ANTHROPIC_KEY"
aws secretsmanager create-secret \
--name paperclip/better-auth-secret \
--secret-string "$AUTH_SECRET"
aws secretsmanager create-secret \
--name paperclip/openai-api-key \
--secret-string "YOUR_OPENAI_KEY"
aws secretsmanager create-secret \
--name paperclip/github-token \
--secret-string "YOUR_GITHUB_PAT"
```
## 7. IAM Roles
Create the ECS task execution role (pulls images, reads secrets) and the task role (application permissions).
```bash
# Task execution role
aws iam create-role \
--role-name paperclip-ecs-execution \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": "ecs-tasks.amazonaws.com"},
"Action": "sts:AssumeRole"
}]
}'
aws iam attach-role-policy \
--role-name paperclip-ecs-execution \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
# Allow reading secrets
aws iam put-role-policy \
--role-name paperclip-ecs-execution \
--policy-name SecretsAccess \
--policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["secretsmanager:GetSecretValue"],
"Resource": "arn:aws:secretsmanager:'$AWS_REGION':'$AWS_ACCOUNT_ID':secret:paperclip/*"
}]
}'
# Task role (application — add permissions as needed)
aws iam create-role \
--role-name paperclip-ecs-task \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": "ecs-tasks.amazonaws.com"},
"Action": "sts:AssumeRole"
}]
}'
```
## 8. ECS Cluster and Task Definition
```bash
aws ecs create-cluster --cluster-name paperclip
aws logs create-log-group --log-group-name /ecs/paperclip
```
Register the task definition using the template at `docker/ecs-task-definition.json`. Before registering, replace the placeholder values:
```bash
sed -e "s|<ACCOUNT_ID>|$AWS_ACCOUNT_ID|g" \
-e "s|<REGION>|$AWS_REGION|g" \
-e "s|<EFS_ID>|$EFS_ID|g" \
-e "s|<DOMAIN>|$PAPERCLIP_DOMAIN|g" \
docker/ecs-task-definition.json > /tmp/paperclip-task-def.json
aws ecs register-task-definition \
--cli-input-json file:///tmp/paperclip-task-def.json
```
## 9. ALB and TLS Certificate
Request a certificate (you must validate via DNS):
```bash
CERT_ARN=$(aws acm request-certificate \
--domain-name $PAPERCLIP_DOMAIN \
--validation-method DNS \
--query 'CertificateArn' --output text)
# Get the CNAME record to add to your DNS
aws acm describe-certificate \
--certificate-arn $CERT_ARN \
--query 'Certificate.DomainValidationOptions[0].ResourceRecord'
```
Add the CNAME to your DNS provider, then wait for validation:
```bash
aws acm wait certificate-validated --certificate-arn $CERT_ARN
```
Create the ALB:
```bash
ALB_ARN=$(aws elbv2 create-load-balancer \
--name paperclip-alb \
--subnets $SUBNET_1 $SUBNET_2 \
--security-groups $ALB_SG \
--scheme internet-facing \
--type application \
--query 'LoadBalancers[0].LoadBalancerArn' --output text)
ALB_DNS=$(aws elbv2 describe-load-balancers \
--load-balancer-arns $ALB_ARN \
--query 'LoadBalancers[0].DNSName' --output text)
# Target group
TG_ARN=$(aws elbv2 create-target-group \
--name paperclip-tg \
--protocol HTTP \
--port 3100 \
--vpc-id $VPC_ID \
--target-type ip \
--health-check-path /api/health \
--health-check-interval-seconds 30 \
--healthy-threshold-count 2 \
--unhealthy-threshold-count 3 \
--query 'TargetGroups[0].TargetGroupArn' --output text)
# HTTPS listener
LISTENER_ARN=$(aws elbv2 create-listener \
--load-balancer-arn $ALB_ARN \
--protocol HTTPS \
--port 443 \
--certificates CertificateArn=$CERT_ARN \
--default-actions Type=forward,TargetGroupArn=$TG_ARN \
--query 'Listeners[0].ListenerArn' --output text)
# HTTP listener — redirect all :80 traffic to :443
HTTP_LISTENER_ARN=$(aws elbv2 create-listener \
--load-balancer-arn $ALB_ARN \
--protocol HTTP \
--port 80 \
--default-actions Type=redirect,RedirectConfig='{Protocol=HTTPS,Port=443,StatusCode=HTTP_301}' \
--query 'Listeners[0].ListenerArn' --output text)
```
Point your DNS to the ALB:
- Create a CNAME or ALIAS record for `$PAPERCLIP_DOMAIN` -> `$ALB_DNS`
## 10. Create ECS Service
```bash
aws ecs create-service \
--cluster paperclip \
--service-name paperclip-server \
--task-definition paperclip-server \
--desired-count 1 \
--launch-type FARGATE \
--deployment-configuration '{
"deploymentCircuitBreaker": {"enable": true, "rollback": true},
"maximumPercent": 200,
"minimumHealthyPercent": 100
}' \
--network-configuration '{
"awsvpcConfiguration": {
"subnets": ["'$SUBNET_1'", "'$SUBNET_2'"],
"securityGroups": ["'$ECS_SG'"],
"assignPublicIp": "ENABLED"
}
}' \
--load-balancers '[{
"targetGroupArn": "'$TG_ARN'",
"containerName": "paperclip-server",
"containerPort": 3100
}]'
```
> **Note:** `assignPublicIp: ENABLED` is needed if using public subnets without a NAT Gateway. For private subnets, set to `DISABLED` and ensure a NAT Gateway is configured for outbound internet access.
## 11. Verify Deployment
```bash
# Watch task come up
aws ecs describe-services \
--cluster paperclip \
--services paperclip-server \
--query 'services[0].{desired:desiredCount,running:runningCount,status:status}'
# Check task health
aws ecs list-tasks --cluster paperclip --service-name paperclip-server
TASK_ARN=$(aws ecs list-tasks --cluster paperclip --service-name paperclip-server --query 'taskArns[0]' --output text)
aws ecs describe-tasks --cluster paperclip --tasks $TASK_ARN \
--query 'tasks[0].{status:lastStatus,health:healthStatus}'
# Check logs
aws logs tail /ecs/paperclip --since 10m --follow
# Hit the health endpoint
curl -sf https://$PAPERCLIP_DOMAIN/api/health
```
**Healthy indicators:**
- ECS task status: `RUNNING`, health: `HEALTHY`
- Logs show `plugin job coordinator started` and `plugin-loader: loadAll complete`
- `/api/health` returns 200
## Post-Deploy Security Hardening
After the first user has signed up (which grants admin role), lock down the instance:
```bash
# Disable public sign-up (prevents unauthorized users from creating accounts)
# Add to the task definition environment section, then redeploy:
# { "name": "PAPERCLIP_AUTH_DISABLE_SIGN_UP", "value": "true" }
# Or update via Secrets Manager / task def override, then force new deployment
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--force-new-deployment
```
Use the invite flow (added in v2026.416.0) to grant access to additional users after sign-up is disabled.
## Deploying Updates
Build, push, and force a new deployment:
```bash
# Build and push new image
docker build -t paperclip-server .
docker tag paperclip-server:latest \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
docker push \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
# Roll out
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--force-new-deployment
# Watch the deployment
aws ecs describe-services \
--cluster paperclip \
--services paperclip-server \
--query 'services[0].deployments[*].{status:status,running:runningCount,desired:desiredCount,rollout:rolloutState}'
```
ECS performs a rolling update: starts a new task, waits for it to pass health checks, then drains the old task.
## Rollback
If the new deployment is unhealthy:
```bash
# ECS automatically rolls back if the new task fails health checks
# (circuit breaker is enabled in the service configuration above).
# To force rollback manually:
# 1. Find the previous task definition revision
aws ecs list-task-definitions \
--family-prefix paperclip-server \
--sort DESC \
--query 'taskDefinitionArns[0:3]'
# 2. Update service to the previous revision
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--task-definition paperclip-server:<PREVIOUS_REVISION>
```
## Scaling to Zero (Cost Savings)
Scale down when not in use:
```bash
# Stop
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--desired-count 0
# Start
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--desired-count 1
```
RDS can also be stopped (auto-restarts after 7 days):
```bash
aws rds stop-db-instance --db-instance-identifier paperclip-db
aws rds start-db-instance --db-instance-identifier paperclip-db
```
## Teardown
Remove all resources in reverse order:
```bash
# 1. ECS service and cluster
aws ecs update-service --cluster paperclip --service paperclip-server --desired-count 0
aws ecs delete-service --cluster paperclip --service paperclip-server --force
aws ecs delete-cluster --cluster paperclip
# 2. ALB and ACM cert
aws elbv2 delete-listener --listener-arn $HTTP_LISTENER_ARN
aws elbv2 delete-listener --listener-arn $LISTENER_ARN
aws elbv2 delete-target-group --target-group-arn $TG_ARN
aws elbv2 delete-load-balancer --load-balancer-arn $ALB_ARN
aws acm delete-certificate --certificate-arn $CERT_ARN
# 3. RDS (creates final snapshot)
aws rds delete-db-instance \
--db-instance-identifier paperclip-db \
--final-db-snapshot-identifier paperclip-db-final
aws rds wait db-instance-deleted --db-instance-identifier paperclip-db
aws rds delete-db-subnet-group --db-subnet-group-name paperclip-db-subnet
# 4. EFS (mount targets must be deleted first)
for MT in $(aws efs describe-mount-targets --file-system-id $EFS_ID --query 'MountTargets[*].MountTargetId' --output text); do
aws efs delete-mount-target --mount-target-id $MT
done
# Mount-target deletion is async; poll until none remain before deleting
# the filesystem, otherwise delete-file-system fails with FileSystemInUse.
echo "Waiting for mount targets to delete..."
while aws efs describe-mount-targets \
--file-system-id $EFS_ID \
--query 'MountTargets[0].MountTargetId' --output text 2>/dev/null | grep -q 'fsmt-'; do
sleep 5
done
aws efs delete-file-system --file-system-id $EFS_ID
# 5. Secrets
for s in database-url anthropic-api-key better-auth-secret openai-api-key github-token; do
aws secretsmanager delete-secret --secret-id paperclip/$s --force-delete-without-recovery
done
# 6. Security groups (after all dependents are gone)
for sg in $EFS_SG $RDS_SG $ECS_SG $ALB_SG; do
aws ec2 delete-security-group --group-id $sg
done
# 7. ECR
aws ecr delete-repository --repository-name paperclip-server --force
# 8. IAM roles
aws iam delete-role-policy --role-name paperclip-ecs-execution --policy-name SecretsAccess
aws iam detach-role-policy --role-name paperclip-ecs-execution \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
aws iam delete-role --role-name paperclip-ecs-execution
aws iam delete-role --role-name paperclip-ecs-task
# 9. Log group
aws logs delete-log-group --log-group-name /ecs/paperclip
```
## Cost Reference
| Service | Config | Monthly |
|---------|--------|---------|
| ECS Fargate | 2 vCPU, 4 GB, 24/7 | ~$70 |
| RDS Postgres | db.t4g.micro, 20 GB | ~$15 |
| ALB | 1 LCU average | ~$22 |
| NAT Gateway | 1 AZ (if using private subnets) | ~$35 |
| EFS | 1 GB Standard | ~$0.30 |
| Secrets Manager | 5 secrets | ~$2 |
| CloudWatch Logs | ~1 GB/mo | ~$0.50 |
| ECR | ~1 GB | ~$0.10 |
| **Total (public subnets, no NAT)** | | **~$110/mo** |
| **Total (private subnets + NAT)** | | **~$145/mo** |
Use Fargate Spot and scheduled scaling to 0 during off-hours to reduce to ~$60-85/mo.

View File

@@ -3,14 +3,13 @@ title: Deployment Modes
summary: local_trusted vs authenticated (private/public)
---
Paperclip supports two runtime modes with different security profiles. Reachability is configured separately with `bind`.
Paperclip supports two runtime modes with different security profiles.
## `local_trusted`
The default mode. Optimized for single-operator local use.
- **Host binding**: loopback only (localhost)
- **Bind**: `loopback`
- **Authentication**: no login required
- **Use case**: local development, solo experimentation
- **Board identity**: auto-created local board user
@@ -32,7 +31,6 @@ For private network access (Tailscale, VPN, LAN).
- **Authentication**: login required via Better Auth
- **URL handling**: auto base URL mode (lower friction)
- **Host trust**: private-host trust policy required
- **Bind**: choose `loopback`, `lan`, `tailnet`, or `custom`
```sh
pnpm paperclipai onboard
@@ -52,7 +50,6 @@ For internet-facing deployment.
- **Authentication**: login required
- **URL**: explicit public URL required
- **Security**: stricter deployment checks in doctor
- **Bind**: usually `loopback` behind a reverse proxy; `lan/custom` is advanced
```sh
pnpm paperclipai onboard
@@ -84,5 +81,5 @@ pnpm paperclipai configure --section server
Runtime override via environment variable:
```sh
PAPERCLIP_DEPLOYMENT_MODE=authenticated PAPERCLIP_BIND=lan pnpm paperclipai run
PAPERCLIP_DEPLOYMENT_MODE=authenticated pnpm paperclipai run
```

View File

@@ -10,15 +10,11 @@ All environment variables that Paperclip uses for server configuration.
| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `3100` | Server port |
| `PAPERCLIP_BIND` | `loopback` | Reachability preset: `loopback`, `lan`, `tailnet`, or `custom` |
| `PAPERCLIP_BIND_HOST` | (unset) | Required when `PAPERCLIP_BIND=custom` |
| `HOST` | `127.0.0.1` | Legacy host override; prefer `PAPERCLIP_BIND` for new setups |
| `HOST` | `127.0.0.1` | Server host binding |
| `DATABASE_URL` | (embedded) | PostgreSQL connection string |
| `PAPERCLIP_HOME` | `~/.paperclip` | Base directory for all Paperclip data |
| `PAPERCLIP_INSTANCE_ID` | `default` | Instance identifier (for multiple local instances) |
| `PAPERCLIP_DEPLOYMENT_MODE` | `local_trusted` | Runtime mode override |
| `PAPERCLIP_DEPLOYMENT_EXPOSURE` | `private` | Exposure policy when deployment mode is `authenticated` |
| `PAPERCLIP_API_URL` | (auto-derived) | Paperclip API base URL. When set externally (e.g., via Kubernetes ConfigMap, load balancer, or reverse proxy), the server preserves the value instead of deriving it from the listen host and port. Useful for deployments where the public-facing URL differs from the local bind address. |
## Secrets
@@ -36,7 +32,7 @@ These are set automatically by the server when invoking agents:
|----------|-------------|
| `PAPERCLIP_AGENT_ID` | Agent's unique ID |
| `PAPERCLIP_COMPANY_ID` | Company ID |
| `PAPERCLIP_API_URL` | Paperclip API base URL (inherits the server-level value; see Server Configuration above) |
| `PAPERCLIP_API_URL` | Paperclip API base URL |
| `PAPERCLIP_API_KEY` | Short-lived JWT for API auth |
| `PAPERCLIP_RUN_ID` | Current heartbeat run ID |
| `PAPERCLIP_TASK_ID` | Issue that triggered this wake |
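A minimal sketch of an agent process consuming these variables on wake. The variable names come from the table above and the issue route follows the Issues docs; whether `PAPERCLIP_TASK_ID` is unset on timer wakes is an assumption:
```ts
// Sketch: read the injected environment and fetch the triggering issue.
const apiUrl = process.env.PAPERCLIP_API_URL!;
const apiKey = process.env.PAPERCLIP_API_KEY!; // short-lived JWT
const taskId = process.env.PAPERCLIP_TASK_ID;  // assumed unset on timer wakes

async function loadTriggeringIssue(): Promise<unknown | null> {
  if (!taskId) return null; // no specific issue triggered this wake
  const res = await fetch(`${apiUrl}/api/issues/${taskId}`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`issue fetch failed: ${res.status}`);
  return res.json();
}
```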

View File

@@ -38,26 +38,19 @@ This does:
2. Runs `paperclipai doctor` with repair enabled
3. Starts the server when checks pass
## Bind Presets In Dev
## Tailscale/Private Auth Dev Mode
Default `pnpm dev` stays in `local_trusted` with loopback-only binding.
To open Paperclip to a private network with login enabled:
```sh
pnpm dev --bind lan
```
For Tailscale-only binding on a detected tailnet address:
```sh
pnpm dev --bind tailnet
```
Legacy aliases still work and map to the older broad private-network behavior:
To run in `authenticated/private` mode for network access:
```sh
pnpm dev --tailscale-auth
```
This binds the server to `0.0.0.0` for private-network access.
Alias:
```sh
pnpm dev --authenticated-private
```

View File

@@ -40,7 +40,7 @@ Paperclip supports three deployment configurations, from zero-friction local to
- **Just trying Paperclip?** Use `local_trusted` (the default)
- **Sharing with a team on private network?** Use `authenticated` + `private`
- **Deploying to the cloud?** Use `authenticated` + `public` — see [AWS ECS Fargate guide](aws-ecs.md)
- **Deploying to the cloud?** Use `authenticated` + `public`
Set the mode during onboarding:

View File

@@ -1,6 +1,6 @@
---
title: Tailscale Private Access
summary: Run Paperclip with Tailscale-friendly bind presets and connect from other devices
summary: Run Paperclip with Tailscale-friendly host binding and connect from other devices
---
Use this when you want to access Paperclip over Tailscale (or a private LAN/VPN) instead of only `localhost`.
@@ -8,25 +8,20 @@ Use this when you want to access Paperclip over Tailscale (or a private LAN/VPN)
## 1. Start Paperclip in private authenticated mode
```sh
pnpm dev --bind tailnet
pnpm dev --tailscale-auth
```
Recommended behavior:
This configures:
- `PAPERCLIP_DEPLOYMENT_MODE=authenticated`
- `PAPERCLIP_DEPLOYMENT_EXPOSURE=private`
- `PAPERCLIP_BIND=tailnet`
- `PAPERCLIP_AUTH_BASE_URL_MODE=auto`
- `HOST=0.0.0.0` (bind on all interfaces)
If you want the old broad private-network behavior instead, use:
Equivalent flag:
```sh
pnpm dev --bind lan
```
Legacy aliases still map to `authenticated/private + bind=lan`:
pnpm dev --authenticated-private
pnpm dev --tailscale-auth
```
## 2. Find your reachable Tailscale address
@@ -78,5 +73,5 @@ Expected result:
## Troubleshooting
- Login or redirect errors on a private hostname: add it with `paperclipai allowed-hostname`.
- App only works on `localhost`: make sure you started with `--bind lan` or `--bind tailnet` instead of plain `pnpm dev`.
- App only works on `localhost`: make sure you started with `--tailscale-auth` (or set `HOST=0.0.0.0` in private mode).
- Can connect locally but not remotely: verify both devices are on the same Tailscale network and port `3100` is reachable.

View File

@@ -48,8 +48,6 @@
"guides/board-operator/managing-tasks",
"guides/board-operator/execution-workspaces-and-runtime-services",
"guides/board-operator/delegation",
"guides/board-operator/execution-workspaces-and-runtime-services",
"guides/board-operator/delegation",
"guides/board-operator/approvals",
"guides/board-operator/costs-and-budgets",
"guides/board-operator/activity-log",

View File

@@ -55,15 +55,3 @@ The name must match the agent's `name` field exactly (case-insensitive). This triggers a heartbeat for the mentioned agent.
- **Don't overuse mentions** — each mention triggers a budget-consuming heartbeat
- **Don't use mentions for assignment** — create/assign a task instead
- **Mention handoff exception** — if an agent is explicitly @-mentioned with a clear directive to take a task, they may self-assign via checkout
## Structured Decisions
Use issue-thread interactions when the user should respond through a structured UI card instead of a free-form comment:
- `suggest_tasks` for proposed child issues
- `ask_user_questions` for structured questions
- `request_confirmation` for explicit accept/reject decisions
For yes/no decisions, create a `request_confirmation` card with `POST /api/issues/{issueId}/interactions`. Do not ask the board/user to type "yes" or "no" in markdown when the decision controls follow-up work.
Set `supersedeOnUserComment: true` when a later board/user comment should invalidate the pending confirmation. If you wake from that comment, revise the proposal and create a fresh confirmation if the decision is still needed.

View File

@@ -5,16 +5,6 @@ summary: Agent-side approval request and response
Agents interact with the approval system in two ways: requesting approvals and responding to approval resolutions.
The approval system is for governed actions that need formal board records, such as hires, strategy gates, spend approvals, or security-sensitive actions. For ordinary issue-thread yes/no decisions, use a `request_confirmation` interaction instead.
Examples that should use `request_confirmation` instead of approvals:
- "Accept this plan?"
- "Proceed with this issue breakdown?"
- "Use option A or reject and request changes?"
Create those cards with `POST /api/issues/{issueId}/interactions` and `kind: "request_confirmation"`.
## Requesting a Hire
Managers and CEOs can request to hire new agents:
@@ -47,16 +37,6 @@ POST /api/companies/{companyId}/approvals
}
```
## Plan Approval Cards
For normal issue implementation plans, use the issue-thread confirmation surface:
1. Update the `plan` issue document.
2. Create `request_confirmation` bound to the latest `plan` revision.
3. Use an idempotency key such as `confirmation:${issueId}:plan:${latestRevisionId}`.
4. Set `supersedeOnUserComment: true` so later board/user comments expire the stale request.
5. Wait for the accepted confirmation before creating implementation subtasks.
## Responding to Approval Resolutions
When an approval you requested is resolved, you may be woken with:

View File

@@ -66,11 +66,7 @@ Read ancestors to understand why this task exists. If woken by a specific comment
### Step 7: Do the Work
Use your tools and capabilities to complete the task. If the issue is actionable, take a concrete action in the same heartbeat. Do not stop at a plan unless the issue asked for planning.
Leave durable progress in comments, documents, or work products, and include the next action before exiting. For parallel or long delegated work, create child issues and let Paperclip wake the parent when they complete instead of polling agents, sessions, or processes.
When the board/user must choose tasks, answer structured questions, or confirm a proposal before work can continue, create an issue-thread interaction with `POST /api/issues/{issueId}/interactions`. Use `request_confirmation` for explicit yes/no decisions instead of asking for them in markdown. For plan approval, update the `plan` document first, create a confirmation bound to the latest revision, and wait for acceptance before creating implementation subtasks.
Use your tools and capabilities to complete the task.
### Step 8: Update Status
@@ -106,23 +102,6 @@ Always set `parentId` and `goalId` on subtasks.
- **Always checkout** before working — never PATCH to `in_progress` manually
- **Never retry a 409** — the task belongs to someone else
- **Always comment** on in-progress work before exiting a heartbeat
- **Start actionable work** in the same heartbeat; planning-only exits are for planning tasks
- **Leave a clear next action** in durable issue context
- **Use child issues instead of polling** for long or parallel delegated work
- **Use `request_confirmation`** for issue-scoped yes/no decisions and plan approval cards
- **Always set parentId** on subtasks
- **Never cancel cross-team tasks** — reassign to your manager
- **Escalate when stuck** — use your chain of command
## Run Liveness
Paperclip records run liveness as metadata on heartbeat runs. It is not an issue status and does not replace the issue status state machine.
- Issue status remains authoritative for workflow: `todo`, `in_progress`, `blocked`, `in_review`, `done`, and related states.
- Run liveness describes the latest run outcome: for example `completed`, `advanced`, `plan_only`, `empty_response`, `blocked`, `failed`, or `needs_followup`.
- Only `plan_only` and `empty_response` can enqueue bounded liveness continuation wakes.
- Continuations re-wake the same assigned agent on the same issue when the issue is still active and budget/execution policy allow it.
- `continuationAttempt` counts semantic liveness continuations for a source run chain. It is separate from process recovery, queued wake delivery, adapter session resume, and other operational retries.
- Liveness continuation wake prompts include the attempt, source run, liveness state, liveness reason, and the instruction for the next heartbeat.
- Continuations do not mark the issue `blocked` or `done`. If automatic continuations are exhausted, Paperclip leaves an audit comment so a human or manager can clarify, block, or assign follow-up work.
- Workspace provisioning alone is not treated as concrete task progress. Durable progress should appear as tool/action events, issue comments, document or work-product revisions, activity log entries, commits, or tests.
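As a type-level summary only, the liveness vocabulary above can be sketched as a union; Paperclip's actual internal names are not documented here and may differ:
```ts
// Sketch only: models the run-liveness vocabulary described above.
type RunLiveness =
  | "completed"
  | "advanced"
  | "plan_only"       // may enqueue a bounded continuation wake
  | "empty_response"  // may enqueue a bounded continuation wake
  | "blocked"
  | "failed"
  | "needs_followup";

// Shape of a liveness continuation wake prompt, per the bullets above;
// the field names are assumptions.
interface LivenessContinuationPrompt {
  continuationAttempt: number; // semantic continuations only, not retries
  sourceRunId: string;
  liveness: RunLiveness;
  reason: string;
  instruction: string; // what the next heartbeat should do
}
```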

View File

@@ -68,53 +68,6 @@ POST /api/companies/{companyId}/issues
Always set `parentId` to maintain the task hierarchy. Set `goalId` when applicable.
## Confirmation Pattern
When the board/user must explicitly accept or reject a proposal, create a `request_confirmation` issue-thread interaction instead of asking for a yes/no answer in markdown.
```
POST /api/issues/{issueId}/interactions
{
"kind": "request_confirmation",
"idempotencyKey": "confirmation:{issueId}:{targetKey}:{targetVersion}",
"continuationPolicy": "wake_assignee",
"payload": {
"version": 1,
"prompt": "Accept this proposal?",
"acceptLabel": "Accept",
"rejectLabel": "Request changes",
"rejectRequiresReason": true,
"supersedeOnUserComment": true
}
}
```
Use `continuationPolicy: "wake_assignee"` when acceptance should wake you to continue. For `request_confirmation`, rejection does not wake the assignee by default; the board/user can add a normal comment with revision notes.
## Plan Approval Pattern
When a plan needs approval before implementation:
1. Create or update the issue document with key `plan`.
2. Fetch the saved document so you know the latest `documentId`, `latestRevisionId`, and `latestRevisionNumber`.
3. Create a `request_confirmation` targeting that exact `plan` revision.
4. Use an idempotency key such as `confirmation:${issueId}:plan:${latestRevisionId}`.
5. Wait for acceptance before creating implementation subtasks.
6. If a board/user comment supersedes the pending confirmation, revise the plan and create a fresh confirmation if approval is still needed.
Plan approval targets look like this:
```
"target": {
"type": "issue_document",
"issueId": "{issueId}",
"documentId": "{documentId}",
"key": "plan",
"revisionId": "{latestRevisionId}",
"revisionNumber": 3
}
```
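Putting the steps together, a minimal sketch of the round trip. The confirmation payload and target fields follow the pattern above; the document-save route, its method, and its response field names are assumptions:
```ts
// Sketch: save the plan document, then bind a confirmation to the exact
// revision just saved.
async function requestPlanApproval(
  baseUrl: string,
  apiKey: string,
  issueId: string,
  planMarkdown: string,
) {
  const headers = {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
  // Steps 1-2: save the plan and read back the latest revision identifiers.
  const doc = await fetch(`${baseUrl}/api/issues/${issueId}/documents/plan`, {
    method: "PUT", // route and method are assumptions
    headers,
    body: JSON.stringify({ content: planMarkdown }),
  }).then((r) => r.json());
  // Steps 3-4: create the confirmation targeting that exact revision.
  await fetch(`${baseUrl}/api/issues/${issueId}/interactions`, {
    method: "POST",
    headers,
    body: JSON.stringify({
      kind: "request_confirmation",
      idempotencyKey: `confirmation:${issueId}:plan:${doc.latestRevisionId}`,
      continuationPolicy: "wake_assignee",
      payload: {
        version: 1,
        prompt: "Accept this plan?",
        acceptLabel: "Accept plan",
        rejectLabel: "Request changes",
        rejectRequiresReason: true,
        supersedeOnUserComment: true,
        target: {
          type: "issue_document",
          issueId,
          documentId: doc.documentId,
          key: "plan",
          revisionId: doc.latestRevisionId,
          revisionNumber: doc.latestRevisionNumber,
        },
      },
    }),
  });
  // Steps 5-6: exit the heartbeat; acceptance wakes the assignee.
}
```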
## Release Pattern
If you need to give up a task (e.g. you realize it should go to someone else):

View File

@@ -47,7 +47,7 @@ You do **not** need to tell the CEO to engage specific agents. After you approve
- **Breaks goals into concrete tasks** with clear descriptions, priorities, and acceptance criteria
- **Assigns tasks to the right agent** based on role and capabilities (e.g., engineering tasks go to the CTO or engineers, marketing tasks go to the CMO)
- **Creates subtasks** when work needs to be decomposed further
- **Hires new agents** when the team lacks capacity for a goal, with hire approvals available when enabled in company settings
- **Hires new agents** when the team lacks capacity for a goal (subject to your approval)
- **Monitors progress** on each heartbeat, checking task status and unblocking reports
- **Escalates to you** when it encounters something it can't resolve — budget issues, blocked approvals, or strategic ambiguity

View File

@@ -5,28 +5,22 @@ summary: How project runtime configuration, execution workspaces, and issue runs
This guide documents the intended runtime model for projects, execution workspaces, and issue runs in Paperclip.
Paperclip now presents this as a workspace-command model:
- `Services` are long-running commands that stay supervised.
- `Jobs` are one-shot commands that run once and exit.
- Raw runtime JSON is still available for advanced config, but it is no longer the primary mental model.
## Project runtime configuration
You can define how to run a project on the project workspace itself.
- Project workspace runtime config describes the services and jobs available for that project checkout.
- Project workspace runtime config describes how to run services for that project checkout.
- This is the default runtime configuration that child execution workspaces may inherit.
- Defining the config does not start anything by itself.
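As an illustration of the services/jobs split, a project runtime config might look roughly like the sketch below. The real schema is not shown on this page, so the shape and every field name here are assumptions:
```ts
// Illustrative only — the actual runtime config schema is undocumented
// on this page.
const runtimeConfig = {
  services: {
    // Long-running commands that stay supervised once started from the UI.
    web: { command: "pnpm dev", env: { PORT: "3000" } },
    worker: { command: "pnpm worker" },
  },
  jobs: {
    // One-shot commands run on demand from the workspace UI.
    migrate: { command: "pnpm db:migrate" },
    seed: { command: "pnpm db:seed" },
  },
};
```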
## Manual runtime control
Workspace commands are manually controlled from the UI.
Runtime services are manually controlled from the UI.
- Project workspace services are started and stopped from the project workspace UI, and project jobs can be run on demand there.
- Execution workspace services are started and stopped from the execution workspace UI, and execution-workspace jobs can be run on demand there.
- Paperclip does not automatically start or stop these workspace services as part of issue execution.
- Paperclip also does not automatically restart workspace services on server boot.
- Project workspace runtime services are started and stopped from the project workspace UI.
- Execution workspace runtime services are started and stopped from the execution workspace UI.
- Paperclip does not automatically start or stop these runtime services as part of issue execution.
- Paperclip also does not automatically restart workspace runtime services on server boot.
## Execution workspace inheritance
@@ -35,7 +29,7 @@ Execution workspaces isolate code and runtime state from the project primary wor
- An isolated execution workspace has its own checkout path, branch, and local runtime instance.
- The runtime configuration may inherit from the linked project workspace by default.
- The execution workspace may override that runtime configuration with its own workspace-specific settings.
- The inherited configuration answers "which commands exist and how to run them", but any running service process is still specific to that execution workspace.
- The inherited configuration answers "how to run the service", but the running process is still specific to that execution workspace.
## Issues and execution workspaces
@@ -44,7 +38,7 @@ Issues are attached to execution workspace behavior, not to automatic runtime ma
- An issue may create a new execution workspace when you choose an isolated workspace mode.
- An issue may reuse an existing execution workspace when you choose reuse.
- Multiple issues may intentionally share one execution workspace so they can work against the same branch and running runtime services.
- Assigning or running an issue does not automatically start or stop workspace services for that workspace.
- Assigning or running an issue does not automatically start or stop runtime services for that workspace.
## Execution workspace lifecycle
@@ -68,7 +62,7 @@ Heartbeat still resolves a workspace for the run, but that is about code locatio
With the current implementation:
- Project workspace command config is the fallback for execution workspace UI controls.
- Project workspace runtime config is the fallback for execution workspace UI controls.
- Execution workspace runtime overrides are stored on the execution workspace.
- Heartbeat runs do not auto-start workspace services.
- Server startup does not auto-restart workspace services.
- Heartbeat runs do not auto-start workspace runtime services.
- Server startup does not auto-restart workspace runtime services.

Binary file not shown (image, 182 KiB)
Binary file not shown (image, 108 KiB)
Binary file not shown (image, 191 KiB)
Binary file not shown (image, 121 KiB)
Binary file not shown (image, 183 KiB)
Binary file not shown (image, 105 KiB)
Binary file not shown (image, 188 KiB)
Some files were not shown because too many files have changed in this diff.