Compare commits


24 Commits

Author SHA1 Message Date
Dotta  1ec06d3300  Merge public-gh/master into pap-1469-workspace-nav  2026-04-14 12:54:31 -05:00
Dotta  12e5049e2f  test(server): align execution workspace config expectations  2026-04-14 12:11:56 -05:00
Dotta  54210b428d  docs: document worktree repair  2026-04-14 11:52:02 -05:00
Dotta  27c5cde7ad  Persist collapsed inbox groups  2026-04-14 11:52:02 -05:00
Dotta  ab8682d2aa  Handle slow and stale workspace startup  2026-04-14 11:52:02 -05:00
         Co-Authored-By: Paperclip <noreply@paperclip.ing>
Dotta  1dce216cb3  Restore persisted workspaces before runtime control  2026-04-14 11:52:02 -05:00
Dotta  2083888738  Fix inbox workspace grouping refresh persistence  2026-04-14 11:52:02 -05:00
Dotta  93df2d7e69  Scope issue list preferences by context  2026-04-14 11:52:02 -05:00
Dotta  4694a15217  Persist sidebar order preferences  2026-04-14 11:51:38 -05:00
Dotta  84f0ffb9dd  Rebalance issue filter popover layout  2026-04-14 11:51:38 -05:00
Dotta  3a02a7de49  Fix inbox keyboard navigation across groups  2026-04-14 11:51:38 -05:00
Dotta  d2c68b4893  Share issue group header styling with inbox  2026-04-14 11:51:38 -05:00
Dotta  8fdfd47af3  Add inbox archive undo shortcut  2026-04-14 11:51:38 -05:00
Dotta  360a1198c2  Fix worktree provision fallback seeding  2026-04-14 11:51:38 -05:00
Dotta  99ed100981  Fix workspace runtime control requests  2026-04-14 11:51:38 -05:00
Dotta  e25a58d1a0  Refine workspace command controls  2026-04-14 11:51:38 -05:00
Dotta  335a8786de  fix(ui): make runtime services JSON textarea much taller  2026-04-14 11:51:38 -05:00
         Increased min-height from 144px to 384px on project workspace and from
         128/192px to 256/384px on execution workspace so the JSON field is
         comfortable to edit inline.
         Co-Authored-By: Paperclip <noreply@paperclip.ing>
Dotta  f270f30799  Add mobile collapse for inbox workspace groups  2026-04-14 11:51:37 -05:00
Dotta  dca6823dee  Add workspace grouping to inbox  2026-04-14 11:51:37 -05:00
Dotta  46af1e5623  Improve workspace runtime controls  2026-04-14 11:51:37 -05:00
Dotta  f1c504bc48  Improve mobile project workspace cards  2026-04-14 11:51:37 -05:00
Dotta  2ce37a4f94  fix: validate linked worktrees before reuse  2026-04-14 11:51:37 -05:00
Dotta  3638036230  Refine execution workspace detail layout  2026-04-14 11:48:35 -05:00
         Co-Authored-By: Paperclip <noreply@paperclip.ing>
Dotta  2a932a6db0  Handle invalid workspace runtime shells  2026-04-14 11:48:35 -05:00
1353 changed files with 10687 additions and 426905 deletions


@@ -154,14 +154,6 @@ Each AGENTS.md body should include not just what the agent does, but how they fi
This turns a collection of agents into an organization that actually works together. Without workflow context, agents operate in isolation — they do their job but don't know what happens before or after them.
Add a concise execution contract to every generated working agent:
- Start actionable work in the same heartbeat and do not stop at a plan unless planning was requested.
- Leave durable progress in comments, documents, or work products with the next action.
- Use child issues for long or parallel delegated work instead of polling agents, sessions, or processes.
- Mark blocked work with the unblock owner and action.
- Respect budget, pause/cancel, approval gates, and company boundaries.
### Step 5: Confirm Output Location
Ask the user where to write the package. Common options:


@@ -105,13 +105,6 @@ Your responsibilities:
- Implement features and fix bugs
- Write tests and documentation
- Participate in code reviews
Execution contract:
- Start actionable implementation work in the same heartbeat; do not stop at a plan unless planning was requested.
- Leave durable progress with a clear next action.
- Use child issues for long or parallel delegated work instead of polling agents, sessions, or processes.
- Mark blocked work with the unblock owner and action.
```
## teams/engineering/TEAM.md


@@ -548,7 +548,7 @@ Import from `@paperclipai/adapter-utils/server-utils`:
### Prompt Templates
- Support `promptTemplate` for every run
- Use `renderTemplate()` with the standard variable set
- Default prompt should use `DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE` from `@paperclipai/adapter-utils/server-utils` so local adapters share Paperclip's execution contract: act in the same heartbeat, avoid planning-only exits unless requested, leave durable progress and a next action, use child issues instead of polling, mark blockers with owner/action, and respect governance boundaries.
- Default prompt: `"You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work."`
### Error Handling
- Differentiate timeout vs process error vs parse failure


@@ -177,12 +177,8 @@ real name or email). To find GitHub usernames:
**Never expose contributor email addresses.** Use `@username` only.
Exclude bot accounts (e.g. `lockfile-bot`, `dependabot`) from the list.
Exclude Paperclip founders from the list (e.g. `cryppadotta`, `forgottendev`, `devinfoley`, `sockmonster`, `scotttong`)
List contributors in alphabetical order by GitHub username (case-insensitive).
If there are no contributors left after exclusions, then just skip this section and don't mention it.
Exclude bot accounts (e.g. `lockfile-bot`, `dependabot`) from the list. List contributors
in alphabetical order by GitHub username (case-insensitive).
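The listing rules in this hunk (drop bot accounts, sort case-insensitively by username) can be sketched with standard tools. The usernames below are invented for illustration, not taken from the repository:

```shell
# Hypothetical contributor list: exclude bot accounts, then sort
# case-insensitively (-f folds case) as the instructions require.
printf '%s\n' '@Zed' '@alice' '@dependabot' '@Bob' \
  | grep -v 'bot' \
  | sort -f
# Prints @alice, @Bob, @Zed in that order.
```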
## Step 6 — Review Before Release


@@ -2,6 +2,3 @@ DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip
PORT=3100
SERVE_UI=false
BETTER_AUTH_SECRET=paperclip-dev-secret
# Discord webhook for daily merge digest (scripts/discord-daily-digest.sh)
# DISCORD_WEBHOOK_URL=https://discord.com/api/webhooks/...


@@ -38,8 +38,6 @@
-
> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and discuss it in `#dev` before opening the PR. Feature PRs that overlap with planned core work may need to be redirected — check the roadmap first. See `CONTRIBUTING.md`.
## Model Used
<!--
@@ -59,7 +57,6 @@
- [ ] I have included a thinking path that traces from project context to this change
- [ ] I have specified the model used (with version and capability details)
- [ ] I have checked ROADMAP.md and confirmed this PR does not duplicate planned core work
- [ ] I have run tests locally and they pass
- [ ] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after screenshots


@@ -14,7 +14,7 @@ permissions:
jobs:
build-and-push:
runs-on: ubuntu-latest
timeout-minutes: 60
timeout-minutes: 30
concurrency:
group: docker-${{ github.ref }}
cancel-in-progress: true


@@ -23,9 +23,7 @@ jobs:
- name: Block manual lockfile edits
if: github.head_ref != 'chore/refresh-lockfile'
run: |
# Diff the PR branch against its merge base so recent base-branch commits
# do not masquerade as changes made by the PR itself.
changed="$(git diff --name-only "${{ github.event.pull_request.base.sha }}...${{ github.event.pull_request.head.sha }}")"
changed="$(git diff --name-only "${{ github.event.pull_request.base.sha }}" "${{ github.event.pull_request.head.sha }}")"
if printf '%s\n' "$changed" | grep -qx 'pnpm-lock.yaml'; then
echo "Do not commit pnpm-lock.yaml in pull requests. CI owns lockfile updates."
exit 1
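The comment in this hunk explains why the check uses the three-dot form `base...head`. A scratch-repo sketch (every path and branch name here is made up) shows the difference:

```shell
# Toy repo: a feature branch, plus a commit that lands on main afterwards.
set -e
tmp="$(mktemp -d)" && cd "$tmp"
git init -q -b main repo && cd repo
git config user.email ci@example.invalid
git config user.name ci
echo a > shared && git add shared && git commit -qm base
git checkout -qb feature
echo b > pr-file && git add pr-file && git commit -qm "pr change"
git checkout -q main
echo c > main-only && git add main-only && git commit -qm "landed on main later"
# Two-dot compares the two tips, so main's later commit masquerades
# as a change made by the feature branch:
git diff --name-only main feature      # main-only, pr-file
# Three-dot compares merge-base..feature, so only the PR's own change shows:
git diff --name-only main...feature    # pr-file
```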
@@ -43,20 +41,48 @@ jobs:
node-version: 24
- name: Validate Dockerfile deps stage
run: node ./scripts/check-docker-deps-stage.mjs
- name: Validate release package manifest
run: node ./scripts/release-package-map.mjs check
- name: Verify release package bootstrap for changed manifests
run: |
mapfile -t changed_paths < <(git diff --name-only "${{ github.event.pull_request.base.sha }}...${{ github.event.pull_request.head.sha }}")
PAPERCLIP_RELEASE_BOOTSTRAP_BASE_SHA="${{ github.event.pull_request.base.sha }}" \
node ./scripts/check-release-package-bootstrap.mjs "${changed_paths[@]}"
missing=0
# Extract only the deps stage from the Dockerfile
deps_stage="$(awk '/^FROM .* AS deps$/{found=1; next} found && /^FROM /{exit} found{print}' Dockerfile)"
if [ -z "$deps_stage" ]; then
echo "::error::Could not extract deps stage from Dockerfile (expected 'FROM ... AS deps')"
exit 1
fi
# Derive workspace search roots from pnpm-workspace.yaml (exclude dev-only packages)
search_roots="$(grep '^ *- ' pnpm-workspace.yaml | sed 's/^ *- //' | sed 's/\*$//' | grep -v 'examples' | grep -v 'create-paperclip-plugin' | tr '\n' ' ')"
if [ -z "$search_roots" ]; then
echo "::error::Could not derive workspace roots from pnpm-workspace.yaml"
exit 1
fi
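The grep/sed pipeline above can be exercised against a miniature workspace file. This YAML is invented for the demo (and the `create-paperclip-plugin` filter is omitted for brevity); it is not the repository's real `pnpm-workspace.yaml`:

```shell
# Derive search roots from a toy pnpm-workspace.yaml: strip the "- " list
# markers, drop trailing globs, and filter out dev-only packages.
cat > /tmp/pnpm-workspace.demo.yaml <<'EOF'
packages:
  - server
  - ui
  - packages/*
  - examples/*
EOF
grep '^ *- ' /tmp/pnpm-workspace.demo.yaml \
  | sed 's/^ *- //' \
  | sed 's/\*$//' \
  | grep -v 'examples'
# Prints: server, ui, packages/
```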
# Check all workspace package.json files are copied in the deps stage
for pkg in $(find $search_roots -maxdepth 2 -name package.json -not -path '*/examples/*' -not -path '*/create-paperclip-plugin/*' -not -path '*/node_modules/*' 2>/dev/null | sort -u); do
dir="$(dirname "$pkg")"
if ! echo "$deps_stage" | grep -q "^COPY ${dir}/package.json"; then
echo "::error::Dockerfile deps stage missing: COPY ${pkg} ${dir}/"
missing=1
fi
done
# Check patches directory is copied if it exists
if [ -d patches ] && ! echo "$deps_stage" | grep -q '^COPY patches/'; then
echo "::error::Dockerfile deps stage missing: COPY patches/ patches/"
missing=1
fi
if [ "$missing" -eq 1 ]; then
echo "Dockerfile deps stage is out of sync. Update it to include the missing files."
exit 1
fi
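The awk one-liner in this step isolates a single build stage. A made-up miniature Dockerfile (not the project's real one) makes the mechanics visible:

```shell
# Print only the lines between "FROM ... AS deps" and the next FROM.
cat > /tmp/Dockerfile.demo <<'EOF'
FROM node:lts AS base
RUN corepack enable
FROM base AS deps
COPY package.json ./
RUN pnpm install --frozen-lockfile
FROM base AS build
COPY . .
EOF
awk '/^FROM .* AS deps$/{found=1; next} found && /^FROM /{exit} found{print}' \
  /tmp/Dockerfile.demo
# Prints the COPY and RUN lines of the deps stage only.
```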
- name: Validate dependency resolution when manifests change
run: |
changed="$(git diff --name-only "${{ github.event.pull_request.base.sha }}...${{ github.event.pull_request.head.sha }}")"
changed="$(git diff --name-only "${{ github.event.pull_request.base.sha }}" "${{ github.event.pull_request.head.sha }}")"
manifest_pattern='(^|/)package\.json$|^pnpm-workspace\.yaml$|^\.npmrc$|^pnpmfile\.(cjs|js|mjs)$'
if printf '%s\n' "$changed" | grep -Eq "$manifest_pattern"; then
pnpm install --lockfile-only --ignore-scripts --no-frozen-lockfile
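The `manifest_pattern` regex gates the expensive re-resolution on manifest-shaped paths. Applied to a hypothetical changed-file list:

```shell
# Only package.json (at any depth), pnpm-workspace.yaml, .npmrc, and
# pnpmfile.* count as manifests; lockfile and docs changes do not.
manifest_pattern='(^|/)package\.json$|^pnpm-workspace\.yaml$|^\.npmrc$|^pnpmfile\.(cjs|js|mjs)$'
printf '%s\n' ui/package.json pnpm-lock.yaml .npmrc docs/readme.md \
  | grep -E "$manifest_pattern"
# Prints: ui/package.json and .npmrc
```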
@@ -85,88 +111,16 @@ jobs:
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Typecheck workspaces whose build scripts skip TypeScript
run: pnpm run typecheck:build-gaps
- name: Typecheck
run: pnpm -r typecheck
- name: Run general test suites
run: pnpm test:run:general
- name: Verify release registry test coverage
run: pnpm run test:release-registry
- name: Run tests
run: pnpm test:run
- name: Build
run: pnpm build
verify_serialized_server:
name: Verify serialized server suites (${{ matrix.shard_label }})
needs: [policy]
runs-on: ubuntu-latest
timeout-minutes: 20
strategy:
fail-fast: false
matrix:
include:
- shard_index: 0
shard_count: 4
shard_label: 1/4
- shard_index: 1
shard_count: 4
shard_label: 2/4
- shard_index: 2
shard_count: 4
shard_label: 3/4
- shard_index: 3
shard_count: 4
shard_label: 4/4
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 9.15.4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 24
cache: pnpm
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Run serialized server test shard
run: pnpm test:run:serialized -- --shard-index ${{ matrix.shard_index }} --shard-count ${{ matrix.shard_count }}
canary_dry_run:
name: Canary Dry Run
needs: [policy]
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 9.15.4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 24
cache: pnpm
- name: Install dependencies
run: pnpm install --frozen-lockfile
# `release.sh` always executes its Step 2/7 workspace build, even when
# `--skip-verify` bypasses the initial verification gate.
- name: Release canary dry run via release.sh internal build
- name: Release canary dry run
run: |
git checkout -B master HEAD
git checkout -- pnpm-lock.yaml
@@ -195,6 +149,9 @@ jobs:
- name: Install dependencies
run: pnpm install --frozen-lockfile
- name: Build
run: pnpm build
- name: Install Playwright
run: npx playwright install --with-deps chromium


@@ -50,9 +50,6 @@ jobs:
node-version: 24
cache: pnpm
- name: Validate release package manifest
run: node ./scripts/release-package-map.mjs check
- name: Install dependencies
run: pnpm install --no-frozen-lockfile
@@ -92,9 +89,6 @@ jobs:
node-version: 24
cache: pnpm
- name: Validate release package manifest
run: node ./scripts/release-package-map.mjs check
- name: Install dependencies
run: pnpm install --no-frozen-lockfile
@@ -145,9 +139,6 @@ jobs:
node-version: 24
cache: pnpm
- name: Validate release package manifest
run: node ./scripts/release-package-map.mjs check
- name: Install dependencies
run: pnpm install --no-frozen-lockfile
@@ -186,9 +177,6 @@ jobs:
node-version: 24
cache: pnpm
- name: Validate release package manifest
run: node ./scripts/release-package-map.mjs check
- name: Install dependencies
run: pnpm install --no-frozen-lockfile

.gitignore (vendored, 5 lines changed)

@@ -1,9 +1,5 @@
node_modules
node_modules/
**/node_modules
**/node_modules/
dist/
ui/storybook-static/
.env
*.tsbuildinfo
drizzle/meta/
@@ -36,7 +32,6 @@ server/src/**/*.d.ts
server/src/**/*.d.ts.map
tmp/
feedback-export-*
diagnostics/
# Editor / tool temp files
*.tmp


@@ -108,24 +108,7 @@ Notes:
## 7. Verification Before Hand-off
Default local/agent test path:
```sh
pnpm test
```
This is the cheap default and only runs the Vitest suite. Browser suites stay opt-in:
```sh
pnpm test:e2e
pnpm test:release-smoke
```
Run the browser suites only when your change touches them or when you are explicitly verifying CI/release flows.
For normal issue work, run the smallest relevant verification first. Do not default to repo-wide typecheck/build/test on every heartbeat when a narrower check is enough to prove the change.
Run this full check before claiming repo work done in a PR-ready hand-off, or when the change scope is broad enough that targeted checks are not sufficient:
Run this full check before claiming done:
```sh
pnpm -r typecheck


@@ -51,21 +51,6 @@ All tests must pass before a PR can be merged. Run them locally first and verify
We use [Greptile](https://greptile.com) for automated code review. Your PR must achieve a **5/5 Greptile score** with **all Greptile comments addressed** before it can be merged. If Greptile leaves comments, fix or respond to each one and request a re-review.
## Feature Contributions
We actively manage the core Paperclip feature roadmap.
Uncoordinated feature PRs against the core product may be closed, even when the implementation is thoughtful and high quality. That is about roadmap ownership, product coherence, and long-term maintenance commitment, not a judgment about the effort.
If you want to contribute a feature:
- Check [ROADMAP.md](ROADMAP.md) first
- Start the discussion in Discord -> `#dev` before writing code
- If the idea fits as an extension, prefer building it with the [plugin system](doc/plugins/PLUGIN_SPEC.md)
- If you want to show a possible direction, reference implementations are welcome as feedback, but they generally will not be merged directly into core
Bugs, docs improvements, and small targeted improvements are still the easiest path to getting merged, and we really do appreciate them.
## General Rules (both paths)
- Write clear commit messages


@@ -1,9 +1,16 @@
# syntax=docker/dockerfile:1.20
FROM node:lts-trixie-slim AS base
ARG USER_UID=1000
ARG USER_GID=1000
RUN apt-get update \
&& apt-get install -y --no-install-recommends ca-certificates gosu curl gh git wget ripgrep python3 \
&& apt-get install -y --no-install-recommends ca-certificates gosu curl git wget ripgrep python3 \
&& mkdir -p -m 755 /etc/apt/keyrings \
&& wget -nv -O/etc/apt/keyrings/githubcli-archive-keyring.gpg https://cli.github.com/packages/githubcli-archive-keyring.gpg \
&& echo "20e0125d6f6e077a9ad46f03371bc26d90b04939fb95170f5a1905099cc6bcc0 /etc/apt/keyrings/githubcli-archive-keyring.gpg" | sha256sum -c - \
&& chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
&& mkdir -p -m 755 /etc/apt/sources.list.d \
&& echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" > /etc/apt/sources.list.d/github-cli.list \
&& apt-get update \
&& apt-get install -y --no-install-recommends gh \
&& rm -rf /var/lib/apt/lists/* \
&& corepack enable
@@ -22,19 +29,14 @@ COPY packages/shared/package.json packages/shared/
COPY packages/db/package.json packages/db/
COPY packages/adapter-utils/package.json packages/adapter-utils/
COPY packages/mcp-server/package.json packages/mcp-server/
COPY packages/adapters/acpx-local/package.json packages/adapters/acpx-local/
COPY packages/adapters/claude-local/package.json packages/adapters/claude-local/
COPY packages/adapters/codex-local/package.json packages/adapters/codex-local/
COPY packages/adapters/cursor-cloud/package.json packages/adapters/cursor-cloud/
COPY packages/adapters/cursor-local/package.json packages/adapters/cursor-local/
COPY packages/adapters/gemini-local/package.json packages/adapters/gemini-local/
COPY packages/adapters/openclaw-gateway/package.json packages/adapters/openclaw-gateway/
COPY packages/adapters/opencode-local/package.json packages/adapters/opencode-local/
COPY packages/adapters/pi-local/package.json packages/adapters/pi-local/
COPY packages/plugins/sdk/package.json packages/plugins/sdk/
COPY --parents packages/plugins/sandbox-providers/./*/package.json packages/plugins/sandbox-providers/
COPY packages/plugins/paperclip-plugin-fake-sandbox/package.json packages/plugins/paperclip-plugin-fake-sandbox/
COPY packages/plugins/plugin-llm-wiki/package.json packages/plugins/plugin-llm-wiki/
COPY patches/ patches/
RUN pnpm install --frozen-lockfile
@@ -54,9 +56,6 @@ ARG USER_GID=1000
WORKDIR /app
COPY --chown=node:node --from=build /app /app
RUN npm install --global --omit=dev @anthropic-ai/claude-code@latest @openai/codex@latest opencode-ai \
&& apt-get update \
&& apt-get install -y --no-install-recommends openssh-client jq \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /paperclip \
&& chown node:node /paperclip

README.md (125 lines changed)

@@ -6,8 +6,7 @@
<a href="#quickstart"><strong>Quickstart</strong></a> &middot;
<a href="https://paperclip.ing/docs"><strong>Docs</strong></a> &middot;
<a href="https://github.com/paperclipai/paperclip"><strong>GitHub</strong></a> &middot;
<a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a> &middot;
<a href="https://x.com/papercliping"><strong>Twitter</strong></a>
<a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a>
</p>
<p align="center">
@@ -157,115 +156,6 @@ Paperclip handles the hard orchestration details correctly.
<br/>
## What's Under the Hood
Paperclip is a full control plane, not a wrapper. Before you build any of this yourself, know that it already exists:
```
┌──────────────────────────────────────────────────────────────┐
│ PAPERCLIP SERVER │
│ │
│ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │Identity & │ │ Work & │ │ Heartbeat │ │Governance │ │
│ │ Access │ │ Tasks │ │ Execution │ │& Approvals│ │
│ └───────────┘ └───────────┘ └───────────┘ └───────────┘ │
│ │
│ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │ Org Chart │ │Workspaces │ │ Plugins │ │ Budget │ │
│ │ & Agents │ │ & Runtime │ │ │ │ & Costs │ │
│ └───────────┘ └───────────┘ └───────────┘ └───────────┘ │
│ │
│ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │ Routines │ │ Secrets & │ │ Activity │ │ Company │ │
│ │& Schedules│ │ Storage │ │ & Events │ │Portability│ │
│ └───────────┘ └───────────┘ └───────────┘ └───────────┘ │
└──────────────────────────────────────────────────────────────┘
▲ ▲ ▲ ▲
┌─────┴─────┐ ┌─────┴─────┐ ┌─────┴─────┐ ┌─────┴─────┐
│ Claude │ │ Codex │ │ CLI │ │ HTTP/web │
│ Code │ │ │ │ agents │ │ bots │
└───────────┘ └───────────┘ └───────────┘ └───────────┘
```
### The Systems
<table>
<tr>
<td width="50%">
**Identity & Access** — Two deployment modes (trusted local or authenticated), board users, agent API keys, short-lived run JWTs, company memberships, invite flows, and OpenClaw onboarding. Every mutating request is traced to an actor.
</td>
<td width="50%">
**Org Chart & Agents** — Agents have roles, titles, reporting lines, permissions, and budgets. Adapter examples match the diagram: Claude Code, Codex, CLI agents such as Cursor/Gemini/bash, HTTP/webhook bots such as OpenClaw, and external adapter plugins. If it can receive a heartbeat, it's hired.
</td>
</tr>
<tr>
<td>
**Work & Task System** — Issues carry company/project/goal/parent links, atomic checkout with execution locks, first-class blocker dependencies, comments, documents, attachments, work products, labels, and inbox state. No double-work, no lost context.
</td>
<td>
**Heartbeat Execution** — DB-backed wakeup queue with coalescing, budget checks, workspace resolution, secret injection, skill loading, and adapter invocation. Runs produce structured logs, cost events, session state, and audit trails. Recovery handles orphaned runs automatically.
</td>
</tr>
<tr>
<td>
**Workspaces & Runtime** — Project workspaces, isolated execution workspaces (git worktrees, operator branches), and runtime services (dev servers, preview URLs). Agents work in the right directory with the right context every time.
</td>
<td>
**Governance & Approvals** — Board approval workflows, execution policies with review/approval stages, decision tracking, budget hard-stops, agent pause/resume/terminate, and full audit logging. You're the board — nothing ships without your sign-off.
</td>
</tr>
<tr>
<td>
**Budget & Cost Control** — Token and cost tracking by company, agent, project, goal, issue, provider, and model. Scoped budget policies with warning thresholds and hard stops. Overspend pauses agents and cancels queued work automatically.
</td>
<td>
**Routines & Schedules** — Recurring tasks with cron, webhook, and API triggers. Concurrency and catch-up policies. Each routine execution creates a tracked issue and wakes the assigned agent — no manual kick-offs needed.
</td>
</tr>
<tr>
<td>
**Plugins** — Instance-wide plugin system with out-of-process workers, capability-gated host services, job scheduling, tool exposure, and UI contributions. Extend Paperclip without forking it.
</td>
<td>
**Secrets & Storage** — Instance and company secrets, encrypted local storage, provider-backed object storage, attachments, and work products. Sensitive values stay out of prompts unless a scoped run explicitly needs them.
</td>
</tr>
<tr>
<td>
**Activity & Events** — Mutating actions, heartbeat state changes, cost events, approvals, comments, and work products are recorded as durable activity so operators can audit what happened and why.
</td>
<td>
**Company Portability** — Export and import entire organizations — agents, skills, projects, routines, and issues — with secret scrubbing and collision handling. One deployment, many companies, complete data isolation.
</td>
</tr>
</table>
<br/>
## What Paperclip is not
| | |
@@ -343,15 +233,11 @@ pnpm dev:once # Full dev without file watching
pnpm dev:server # Server only
pnpm build # Build all
pnpm typecheck # Type checking
pnpm test # Cheap default test run (Vitest only)
pnpm test:watch # Vitest watch mode
pnpm test:e2e # Playwright browser suite
pnpm test:run # Run tests
pnpm db:generate # Generate DB migration
pnpm db:migrate # Apply migrations
```
`pnpm test` does not run Playwright. Browser suites stay separate and are typically run only when working on those flows or in CI.
See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.
<br/>
@@ -366,10 +252,10 @@ See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.
- ✅ Scheduled Routines
- ✅ Better Budgeting
- ✅ Agent Reviews and Approvals
- Multiple Human Users
- Multiple Human Users
- ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
- ⚪ Artifacts & Work Products
- ⚪ Memory / Knowledge
- ⚪ Memory & Knowledge
- ⚪ Enforced Outcomes
- ⚪ MAXIMIZER MODE
- ⚪ Deep Planning
@@ -380,8 +266,6 @@ See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.
- ⚪ Cloud deployments
- ⚪ Desktop App
This is the short roadmap preview. See the full roadmap in [ROADMAP.md](ROADMAP.md).
<br/>
## Community & Plugins
@@ -410,7 +294,6 @@ We welcome contributions. See the [contributing guide](CONTRIBUTING.md) for deta
## Community
- [Discord](https://discord.gg/m4HZY7xNG3) — Join the community
- [Twitter / X](https://x.com/papercliping) — Follow updates and announcements
- [GitHub Issues](https://github.com/paperclipai/paperclip/issues) — bugs and feature requests
- [GitHub Discussions](https://github.com/paperclipai/paperclip/discussions) — ideas and RFCs


@@ -1,97 +0,0 @@
# Roadmap
This document expands the roadmap preview in `README.md`.
Paperclip is still moving quickly. The list below is directional, not promised, and priorities may shift as we learn from users and from operating real AI companies with the product.
We value community involvement and want to make sure contributor energy goes toward areas where it can land.
We may accept contributions in the areas below, but if you want to work on roadmap-level core features, please coordinate with us first in Discord (`#dev`) before writing code. Bugs, docs, polish, and tightly scoped improvements are still the easiest contributions to merge.
If you want to extend Paperclip today, the best path is often the [plugin system](doc/plugins/PLUGIN_SPEC.md). Community reference implementations are also useful feedback even when they are not merged directly into core.
## Milestones
### ✅ Plugin system
Paperclip should keep a thin core and rich edges. Plugins are the path for optional capabilities like knowledge bases, custom tracing, queues, doc editors, and other product-specific surfaces that do not need to live in the control plane itself.
### ✅ Get OpenClaw / claw-style agent employees
Paperclip should be able to hire and manage real claw-style agent workers, not just a narrow built-in runtime. This is part of the larger "bring your own agent" story and keeps the control plane useful across different agent ecosystems.
### ✅ companies.sh - import and export entire organizations
Reusable companies matter. Import/export is the foundation for moving org structures, agent definitions, and reusable company setups between environments and eventually for broader company-template distribution.
### ✅ Easy AGENTS.md configurations
Agent setup should feel repo-native and legible. Simple `AGENTS.md`-style configuration lowers the barrier to getting an agent team running and makes it easier for contributors to understand how a company is wired together.
### ✅ Skills Manager
Agents need a practical way to discover, install, and use skills without every setup becoming bespoke. The skills layer is part of making Paperclip companies more reusable and easier to operate.
### ✅ Scheduled Routines
Recurring work should be native. Routine tasks like reports, reviews, and other periodic work need first-class scheduling so the company keeps operating even when no human is manually kicking work off.
### ✅ Better Budgeting
Budgets are a core control-plane feature, not an afterthought. Better budgeting means clearer spend visibility, safer hard stops, and better operator control over how autonomy turns into real cost.
### ✅ Agent Reviews and Approvals
Paperclip should support explicit review and approval stages as first-class workflow steps, not just ad hoc comments. That means reviewer routing, approval gates, change requests, and durable audit trails that fit the same task model as the rest of the control plane.
### ✅ Multiple Human Users
Paperclip needs a clearer path from solo operator to real human teams. That means shared board access, safer collaboration, and a better model for several humans supervising the same autonomous company.
### ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
We want agents to run in more remote and sandboxed environments while preserving the same Paperclip control-plane model. This makes the system safer, more flexible, and more useful outside a single trusted local machine.
### ⚪ Artifacts & Work Products
Paperclip should make outputs first-class. That means generated artifacts, previews, deployable outputs, and the handoff from "agent did work" to "here is the result" should become more visible and easier to operate.
### ⚪ Memory / Knowledge
We want a stronger memory and knowledge surface for companies, agents, and projects. That includes durable memory, better recall of prior decisions and context, and a clearer path for knowledge-style capabilities without turning Paperclip into a generic chat app.
### ⚪ Enforced Outcomes
Paperclip should get stricter about what counts as finished work. Tasks, approvals, and execution flows should resolve to clear outcomes like merged code, published artifacts, shipped docs, or explicit decisions instead of stopping at vague status updates.
### ⚪ MAXIMIZER MODE
This is the direction for higher-autonomy execution: more aggressive delegation, deeper follow-through, and stronger operating loops with clear budgets, visibility, and governance. The point is not hidden autonomy; the point is more output per human supervisor.
### ⚪ Deep Planning
Some work needs more than a task description before execution starts. Deeper planning means stronger issue documents, revisionable plans, and clearer review loops for strategy-heavy work before agents begin execution.
### ⚪ Work Queues
Paperclip should support queue-style work streams for repeatable inputs like support, triage, review, and backlog intake. That would make it easier to route work continuously without turning every system into a one-off workflow.
### ⚪ Self-Organization
As companies grow, agents should be able to propose useful structural changes such as role adjustments, delegation changes, and new recurring routines. The goal is adaptive organizations that still stay within governance and approval boundaries.
### ⚪ Automatic Organizational Learning
Paperclip should get better at turning completed work into reusable organizational knowledge. That includes capturing playbooks, recurring fixes, and decision patterns so future work starts from what the company has already learned.
### ⚪ CEO Chat
We want a lighter-weight way to talk to leadership agents, but those conversations should still resolve to real work objects like plans, issues, approvals, or decisions. This should improve interaction without changing the core task-and-comments model.
### ⚪ Cloud deployments
Local-first remains important, but Paperclip also needs a cleaner shared deployment story. Teams should be able to run the same product in hosted or semi-hosted environments without changing the mental model.
### ⚪ Desktop App
A desktop app can make Paperclip feel more accessible and persistent for day-to-day operators. The goal is easier access, better local ergonomics, and a smoother default experience for users who want the control plane always close at hand.


@@ -6,14 +6,13 @@
<a href="#quickstart"><strong>Quickstart</strong></a> &middot;
<a href="https://paperclip.ing/docs"><strong>Docs</strong></a> &middot;
<a href="https://github.com/paperclipai/paperclip"><strong>GitHub</strong></a> &middot;
<a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a> &middot;
<a href="https://x.com/papercliping"><strong>Twitter</strong></a>
<a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a>
</p>
<p align="center">
<a href="https://github.com/paperclipai/paperclip/blob/master/LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue" alt="MIT License" /></a>
<a href="https://github.com/paperclipai/paperclip/stargazers"><img src="https://img.shields.io/github/stars/paperclipai/paperclip?style=flat" alt="Stars" /></a>
<a href="https://discord.gg/m4HZY7xNG3"><img src="https://img.shields.io/discord/000000000?label=discord" alt="Discord" /></a>
<a href="https://discord.gg/m4HZY7xNG3"><img src="https://img.shields.io/badge/discord-join%20chat-5865F2?logo=discord&logoColor=white" alt="Discord" /></a>
</p>
<br/>
@@ -234,15 +233,11 @@ pnpm dev:once # Full dev without file watching
pnpm dev:server # Server only
pnpm build # Build all
pnpm typecheck # Type checking
pnpm test # Cheap default test run (Vitest only)
pnpm test:watch # Vitest watch mode
pnpm test:e2e # Playwright browser suite
pnpm test:run # Run tests
pnpm db:generate # Generate DB migration
pnpm db:migrate # Apply migrations
```
`pnpm test` does not run Playwright. Browser suites stay separate and are typically run only when working on those flows or in CI.
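This tiering is typically wired up in the root `package.json` scripts; a minimal sketch of how such a split might look (the exact Vitest and Playwright invocations here are assumptions, not copied from the repo):

```json
{
  "scripts": {
    "test": "vitest run",
    "test:watch": "vitest",
    "test:e2e": "playwright test"
  }
}
```

The point of the split is that the default `test` entry never pulls in the browser runner, so the cheap suite stays fast locally while CI can still invoke `test:e2e` explicitly.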
See [doc/DEVELOPING.md](https://github.com/paperclipai/paperclip/blob/master/doc/DEVELOPING.md) for the full development guide.
<br/>
@@ -259,7 +254,7 @@ See [doc/DEVELOPING.md](https://github.com/paperclipai/paperclip/blob/master/doc
- ⚪ Artifacts & Deployments
- ⚪ CEO Chat
- ⚪ MAXIMIZER MODE
- Multiple Human Users
- Multiple Human Users
- ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
- ⚪ Cloud deployments
- ⚪ Desktop App
@@ -279,7 +274,6 @@ We welcome contributions. See the [contributing guide](https://github.com/paperc
## Community
- [Discord](https://discord.gg/m4HZY7xNG3) — Join the community
- [Twitter / X](https://x.com/papercliping) — Follow updates and announcements
- [GitHub Issues](https://github.com/paperclipai/paperclip/issues) — bugs and feature requests
- [GitHub Discussions](https://github.com/paperclipai/paperclip/discussions) — ideas and RFCs

View File

@@ -37,10 +37,8 @@
},
"dependencies": {
"@clack/prompts": "^0.10.0",
"@paperclipai/adapter-acpx-local": "workspace:*",
"@paperclipai/adapter-claude-local": "workspace:*",
"@paperclipai/adapter-codex-local": "workspace:*",
"@paperclipai/adapter-cursor-cloud": "workspace:*",
"@paperclipai/adapter-cursor-local": "workspace:*",
"@paperclipai/adapter-gemini-local": "workspace:*",
"@paperclipai/adapter-opencode-local": "workspace:*",
@@ -50,7 +48,7 @@
"@paperclipai/db": "workspace:*",
"@paperclipai/server": "workspace:*",
"@paperclipai/shared": "workspace:*",
"drizzle-orm": "0.45.2",
"drizzle-orm": "0.38.4",
"dotenv": "^17.0.1",
"commander": "^13.1.0",
"embedded-postgres": "^18.1.0-beta.16",

View File

@@ -14,7 +14,6 @@ function makeCompany(overrides: Partial<Company>): Company {
issueCounter: 1,
budgetMonthlyCents: 0,
spentMonthlyCents: 0,
attachmentMaxBytes: 10 * 1024 * 1024,
requireBoardApprovalForNewAgents: false,
feedbackDataSharingEnabled: false,
feedbackDataSharingConsentAt: null,

View File

@@ -1,5 +1,5 @@
import { execFile, spawn } from "node:child_process";
import { existsSync, mkdirSync, mkdtempSync, readFileSync, readdirSync, rmSync, writeFileSync } from "node:fs";
import { mkdirSync, mkdtempSync, readFileSync, readdirSync, rmSync, writeFileSync } from "node:fs";
import net from "node:net";
import os from "node:os";
import path from "node:path";
@@ -104,50 +104,20 @@ function writeTestConfig(configPath: string, tempRoot: string, port: number, con
writeFileSync(configPath, `${JSON.stringify(config, null, 2)}\n`, "utf8");
}
interface TestPaperclipEnv {
configPath: string;
paperclipHome: string;
instanceId: string;
shellHome?: string;
}
function createBasePaperclipEnv(options: TestPaperclipEnv) {
function createServerEnv(configPath: string, port: number, connectionString: string) {
const env = { ...process.env };
for (const key of Object.keys(env)) {
if (key.startsWith("PAPERCLIP_")) {
delete env[key];
}
}
env.PAPERCLIP_CONFIG = options.configPath;
env.PAPERCLIP_HOME = options.paperclipHome;
env.PAPERCLIP_INSTANCE_ID = options.instanceId;
env.PAPERCLIP_CONTEXT = path.join(options.paperclipHome, "context.json");
env.PAPERCLIP_AUTH_STORE = path.join(options.paperclipHome, "auth.json");
if (options.shellHome) {
env.HOME = options.shellHome;
}
return env;
}
function createServerEnv(
configPath: string,
port: number,
connectionString: string,
options: Omit<TestPaperclipEnv, "configPath">,
) {
const env = createBasePaperclipEnv({
configPath,
...options,
});
delete env.DATABASE_URL;
delete env.PORT;
delete env.HOST;
delete env.SERVE_UI;
delete env.HEARTBEAT_SCHEDULER_ENABLED;
env.PAPERCLIP_CONFIG = configPath;
env.DATABASE_URL = connectionString;
env.HOST = "127.0.0.1";
env.PORT = String(port);
@@ -160,8 +130,13 @@ function createServerEnv(
return env;
}
function createCliEnv(options: TestPaperclipEnv) {
const env = createBasePaperclipEnv(options);
function createCliEnv() {
const env = { ...process.env };
for (const key of Object.keys(env)) {
if (key.startsWith("PAPERCLIP_")) {
delete env[key];
}
}
delete env.DATABASE_URL;
delete env.PORT;
delete env.HOST;
@@ -208,25 +183,14 @@ async function api<T>(baseUrl: string, pathname: string, init?: RequestInit): Pr
return text ? JSON.parse(text) as T : (null as T);
}
async function runCliJson<T>(
args: string[],
opts: TestPaperclipEnv & { apiBase?: string; includeConfigArg?: boolean },
) {
async function runCliJson<T>(args: string[], opts: { apiBase: string; configPath: string }) {
const repoRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "../../..");
const cliArgs = ["--silent", "paperclipai", ...args];
if (opts.apiBase) {
cliArgs.push("--api-base", opts.apiBase);
}
if (opts.includeConfigArg !== false) {
cliArgs.push("--config", opts.configPath);
}
cliArgs.push("--json");
const result = await execFileAsync(
"pnpm",
cliArgs,
["--silent", "paperclipai", ...args, "--api-base", opts.apiBase, "--config", opts.configPath, "--json"],
{
cwd: repoRoot,
env: createCliEnv(opts),
env: createCliEnv(),
maxBuffer: 10 * 1024 * 1024,
},
);
@@ -271,9 +235,6 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
let configPath = "";
let exportDir = "";
let apiBase = "";
let paperclipHome = "";
let cliShellHome = "";
let paperclipInstanceId = "";
let serverProcess: ServerProcess | null = null;
let tempDb: Awaited<ReturnType<typeof startEmbeddedPostgresTestDatabase>> | null = null;
@@ -281,11 +242,6 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
tempRoot = mkdtempSync(path.join(os.tmpdir(), "paperclip-company-cli-e2e-"));
configPath = path.join(tempRoot, "config", "config.json");
exportDir = path.join(tempRoot, "exported-company");
paperclipHome = path.join(tempRoot, "paperclip-home");
cliShellHome = path.join(tempRoot, "shell-home");
paperclipInstanceId = "company-cli-e2e";
mkdirSync(paperclipHome, { recursive: true });
mkdirSync(cliShellHome, { recursive: true });
tempDb = await startEmbeddedPostgresTestDatabase("paperclip-company-cli-db-");
@@ -300,11 +256,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
["paperclipai", "run", "--config", configPath],
{
cwd: repoRoot,
env: createServerEnv(configPath, port, tempDb.connectionString, {
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
}),
env: createServerEnv(configPath, port, tempDb.connectionString),
stdio: ["ignore", "pipe", "pipe"],
},
);
@@ -330,41 +282,11 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
it("exports a company package and imports it into new and existing companies", async () => {
expect(serverProcess).not.toBeNull();
const cliContext = await runCliJson<{
contextPath: string;
profileName: string;
profile: { apiBase?: string };
}>(
["context", "set", "--profile", "isolation-check", "--api-base", "https://example.test"],
{
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
includeConfigArg: false,
},
);
const expectedContextPath = path.join(paperclipHome, "context.json");
const leakedContextPath = path.join(cliShellHome, ".paperclip", "context.json");
expect(cliContext.contextPath).toBe(expectedContextPath);
expect(cliContext.profileName).toBe("isolation-check");
expect(cliContext.profile.apiBase).toBe("https://example.test");
expect(existsSync(expectedContextPath)).toBe(true);
expect(existsSync(leakedContextPath)).toBe(false);
rmSync(expectedContextPath, { force: true });
expect(existsSync(expectedContextPath)).toBe(false);
const sourceCompany = await api<{ id: string; name: string; issuePrefix: string }>(apiBase, "/api/companies", {
method: "POST",
headers: { "content-type": "application/json" },
body: JSON.stringify({ name: `CLI Export Source ${Date.now()}` }),
});
await api(apiBase, `/api/companies/${sourceCompany.id}`, {
method: "PATCH",
headers: { "content-type": "application/json" },
body: JSON.stringify({ requireBoardApprovalForNewAgents: false }),
});
const sourceAgent = await api<{ id: string; name: string }>(
apiBase,
@@ -376,11 +298,8 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
name: "Export Engineer",
role: "engineer",
adapterType: "claude_local",
adapterConfig: {},
instructionsBundle: {
files: {
"AGENTS.md": "You verify company portability.",
},
adapterConfig: {
promptTemplate: "You verify company portability.",
},
}),
},
@@ -431,13 +350,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"--include",
"company,agents,projects,issues",
],
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
{ apiBase, configPath },
);
expect(exportResult.ok).toBe(true);
@@ -461,13 +374,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"company,agents,projects,issues",
"--yes",
],
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
{ apiBase, configPath },
);
expect(importedNew.company.action).toBe("created");
@@ -486,11 +393,10 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
apiBase,
`/api/companies/${importedNew.company.id}/issues`,
);
const importedMatchingIssues = importedIssues.filter((issue) => issue.title === sourceIssue.title);
expect(importedAgents.map((agent) => agent.name)).toContain(sourceAgent.name);
expect(importedProjects.map((project) => project.name)).toContain(sourceProject.name);
expect(importedMatchingIssues).toHaveLength(1);
expect(importedIssues.map((issue) => issue.title)).toContain(sourceIssue.title);
const previewExisting = await runCliJson<{
errors: string[];
@@ -515,13 +421,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"rename",
"--dry-run",
],
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
{ apiBase, configPath },
);
expect(previewExisting.errors).toEqual([]);
@@ -548,13 +448,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"rename",
"--yes",
],
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
{ apiBase, configPath },
);
expect(importedExisting.company.action).toBe("unchanged");
@@ -572,13 +466,11 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
apiBase,
`/api/companies/${importedNew.company.id}/issues`,
);
const twiceImportedMatchingIssues = twiceImportedIssues.filter((issue) => issue.title === sourceIssue.title);
expect(twiceImportedAgents).toHaveLength(2);
expect(new Set(twiceImportedAgents.map((agent) => agent.name)).size).toBe(2);
expect(twiceImportedProjects).toHaveLength(2);
expect(twiceImportedMatchingIssues).toHaveLength(2);
expect(new Set(twiceImportedMatchingIssues.map((issue) => issue.identifier)).size).toBe(2);
expect(twiceImportedIssues).toHaveLength(2);
const zipPath = path.join(tempRoot, "exported-company.zip");
const portableFiles: Record<string, string> = {};
@@ -601,16 +493,10 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"company,agents,projects,issues",
"--yes",
],
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
{ apiBase, configPath },
);
expect(importedFromZip.company.action).toBe("created");
expect(importedFromZip.agents.some((agent) => agent.action === "created")).toBe(true);
}, 90_000);
}, 60_000);
});

View File

@@ -160,7 +160,6 @@ describe("renderCompanyImportPreview", () => {
path: "COMPANY.md",
name: "Source Co",
description: null,
attachmentMaxBytes: null,
brandColor: null,
logoPath: null,
requireBoardApprovalForNewAgents: false,
@@ -244,7 +243,6 @@ describe("renderCompanyImportPreview", () => {
billingCode: null,
executionWorkspaceSettings: null,
assigneeAdapterOverrides: null,
comments: [],
metadata: null,
},
],
@@ -377,7 +375,6 @@ describe("import selection catalog", () => {
path: "COMPANY.md",
name: "Source Co",
description: null,
attachmentMaxBytes: null,
brandColor: null,
logoPath: "images/company-logo.png",
requireBoardApprovalForNewAgents: false,
@@ -461,7 +458,6 @@ describe("import selection catalog", () => {
billingCode: null,
executionWorkspaceSettings: null,
assigneeAdapterOverrides: null,
comments: [],
metadata: null,
},
],

View File

@@ -1,24 +0,0 @@
import path from "node:path";
import { describe, expect, it } from "vitest";
import { collectEnvLabDoctorStatus, resolveEnvLabSshStatePath } from "../commands/env-lab.js";
describe("env-lab command", () => {
it("resolves the default SSH fixture state path under the instance root", () => {
const statePath = resolveEnvLabSshStatePath("fixture-test");
expect(statePath).toContain(
path.join("instances", "fixture-test", "env-lab", "ssh-fixture", "state.json"),
);
});
it("reports doctor status for an instance without a running fixture", async () => {
const status = await collectEnvLabDoctorStatus({ instance: "fixture-test-missing" });
expect(status.statePath).toContain(
path.join("instances", "fixture-test-missing", "env-lab", "ssh-fixture", "state.json"),
);
expect(typeof status.ssh.supported).toBe("boolean");
expect(status.ssh.running).toBe(false);
expect(status.ssh.environment).toBeNull();
});
});

View File

@@ -1,4 +1,3 @@
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it } from "vitest";
@@ -17,14 +16,13 @@ describe("home path resolution", () => {
});
it("defaults to ~/.paperclip and default instance", () => {
const home = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-home-paths-"));
process.env.PAPERCLIP_HOME = home;
delete process.env.PAPERCLIP_HOME;
delete process.env.PAPERCLIP_INSTANCE_ID;
const paths = describeLocalInstancePaths();
expect(paths.homeDir).toBe(home);
expect(paths.homeDir).toBe(path.resolve(os.homedir(), ".paperclip"));
expect(paths.instanceId).toBe("default");
expect(paths.configPath).toBe(path.resolve(home, "instances", "default", "config.json"));
expect(paths.configPath).toBe(path.resolve(os.homedir(), ".paperclip", "instances", "default", "config.json"));
});
it("supports PAPERCLIP_HOME and explicit instance ids", () => {
@@ -36,7 +34,7 @@ describe("home path resolution", () => {
});
it("rejects invalid instance ids", () => {
expect(() => resolvePaperclipInstanceId("bad/id")).toThrow(/Invalid PAPERCLIP_INSTANCE_ID/);
expect(() => resolvePaperclipInstanceId("bad/id")).toThrow(/Invalid instance id/);
});
it("expands ~ prefixes", () => {

View File

@@ -6,7 +6,6 @@ import { onboard } from "../commands/onboard.js";
import type { PaperclipConfig } from "../config/schema.js";
const ORIGINAL_ENV = { ...process.env };
const ORIGINAL_CWD = process.cwd();
function createExistingConfigFixture() {
const root = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-onboard-"));
@@ -86,18 +85,10 @@ describe("onboard", () => {
delete process.env.PAPERCLIP_AGENT_JWT_SECRET;
delete process.env.PAPERCLIP_SECRETS_MASTER_KEY;
delete process.env.PAPERCLIP_SECRETS_MASTER_KEY_FILE;
delete process.env.PAPERCLIP_HOME;
delete process.env.PAPERCLIP_CONFIG;
delete process.env.PAPERCLIP_INSTANCE_ID;
delete process.env.PAPERCLIP_BIND;
delete process.env.PAPERCLIP_BIND_HOST;
delete process.env.PAPERCLIP_TAILNET_BIND_HOST;
delete process.env.HOST;
});
afterEach(() => {
process.env = { ...ORIGINAL_ENV };
process.chdir(ORIGINAL_CWD);
});
it("preserves an existing config when rerun without flags", async () => {
@@ -134,27 +125,6 @@ describe("onboard", () => {
expect(raw.server.host).toBe("127.0.0.1");
});
it("creates instance-root config and data paths for a fresh PAPERCLIP_HOME", async () => {
const home = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-onboard-home-"));
const cwd = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-onboard-cwd-"));
process.chdir(cwd);
process.env.PAPERCLIP_HOME = home;
await onboard({ yes: true, invokedByRun: true });
const instanceRoot = path.join(home, "instances", "default");
const configPath = path.join(instanceRoot, "config.json");
const raw = JSON.parse(fs.readFileSync(configPath, "utf8")) as PaperclipConfig;
expect(raw.database.embeddedPostgresDataDir).toBe(path.join(instanceRoot, "db"));
expect(raw.database.backup.dir).toBe(path.join(instanceRoot, "data", "backups"));
expect(raw.logging.logDir).toBe(path.join(instanceRoot, "logs"));
expect(raw.storage.localDisk.baseDir).toBe(path.join(instanceRoot, "data", "storage"));
expect(raw.secrets.localEncrypted.keyFilePath).toBe(path.join(instanceRoot, "secrets", "master.key"));
expect(fs.existsSync(path.join(instanceRoot, ".env"))).toBe(true);
expect(fs.existsSync(path.join(instanceRoot, "secrets", "master.key"))).toBe(true);
});
it("supports authenticated/private quickstart bind presets", async () => {
const configPath = createFreshConfigPath();
process.env.PAPERCLIP_TAILNET_BIND_HOST = "100.64.0.8";

View File

@@ -1,164 +0,0 @@
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { Command } from "commander";
import { afterEach, beforeEach, describe, expect, it, vi } from "vitest";
const mocks = vi.hoisted(() => ({
scaffoldPluginProject: vi.fn((options: { outputDir: string }) => options.outputDir),
}));
vi.mock("../../../packages/plugins/create-paperclip-plugin/src/index.js", async () => {
const actual =
await vi.importActual<typeof import("../../../packages/plugins/create-paperclip-plugin/src/index.js")>(
"../../../packages/plugins/create-paperclip-plugin/src/index.js",
);
return {
...actual,
scaffoldPluginProject: mocks.scaffoldPluginProject,
};
});
import {
buildPluginInstallRequest,
buildPluginInitNextCommands,
buildPluginInitScaffoldOptions,
registerPluginCommands,
} from "../commands/client/plugin.js";
const tempDirs: string[] = [];
function makeTempDir(): string {
const dir = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-cli-plugin-"));
tempDirs.push(dir);
return dir;
}
afterEach(() => {
while (tempDirs.length > 0) {
const dir = tempDirs.pop();
if (dir) fs.rmSync(dir, { recursive: true, force: true });
}
});
describe("plugin init", () => {
beforeEach(() => {
mocks.scaffoldPluginProject.mockClear();
});
it("maps package name and flags to scaffolder options", () => {
const cwd = path.resolve("/tmp/paperclip-cli-test");
const options = buildPluginInitScaffoldOptions(
"@acme/plugin-linear",
{
output: "plugins",
template: "connector",
category: "automation",
displayName: "Linear Bridge",
description: "Syncs Linear issues",
author: "Acme",
sdkPath: "../paperclip/packages/plugins/sdk",
},
cwd,
);
expect(options).toEqual({
pluginName: "@acme/plugin-linear",
outputDir: path.resolve(cwd, "plugins", "plugin-linear"),
template: "connector",
category: "automation",
displayName: "Linear Bridge",
description: "Syncs Linear issues",
author: "Acme",
sdkPath: "../paperclip/packages/plugins/sdk",
});
});
it("builds exact next commands using the scaffold path", () => {
expect(buildPluginInitNextCommands("/tmp/acme plugin")).toEqual([
"cd '/tmp/acme plugin'",
"pnpm install",
"pnpm dev",
"paperclipai plugin install '/tmp/acme plugin'",
]);
});
it("registers the CLI wrapper and invokes the existing scaffolder", async () => {
const program = new Command();
program.exitOverride();
program.configureOutput({ writeOut: () => {}, writeErr: () => {} });
registerPluginCommands(program);
await program.parseAsync(
[
"plugin",
"init",
"demo-plugin",
"--output",
"/tmp/paperclip-init-output",
"--template",
"workspace",
"--category",
"workspace",
"--display-name",
"Demo Plugin",
"--description",
"Demo description",
"--author",
"Paperclip",
"--sdk-path",
"/repo/packages/plugins/sdk",
],
{ from: "user" },
);
expect(mocks.scaffoldPluginProject).toHaveBeenCalledTimes(1);
expect(mocks.scaffoldPluginProject).toHaveBeenCalledWith({
pluginName: "demo-plugin",
outputDir: path.resolve("/tmp/paperclip-init-output", "demo-plugin"),
template: "workspace",
category: "workspace",
displayName: "Demo Plugin",
description: "Demo description",
author: "Paperclip",
sdkPath: "/repo/packages/plugins/sdk",
});
});
});
describe("plugin install", () => {
it("resolves an existing relative local path to an absolute local install request", () => {
const cwd = makeTempDir();
const pluginDir = path.join(cwd, "demo-plugin");
fs.mkdirSync(pluginDir);
expect(buildPluginInstallRequest("demo-plugin", {}, { cwd })).toEqual({
packageName: pluginDir,
version: undefined,
isLocalPath: true,
});
});
it("keeps an absolute local path absolute and marks it as local", () => {
const pluginDir = path.join(makeTempDir(), "demo-plugin");
fs.mkdirSync(pluginDir);
expect(buildPluginInstallRequest(pluginDir, {}, { cwd: "/" })).toEqual({
packageName: pluginDir,
version: undefined,
isLocalPath: true,
});
});
it("preserves npm package installs when no local path exists", () => {
expect(
buildPluginInstallRequest("@acme/plugin-linear", { version: "1.2.3" }, {
cwd: makeTempDir(),
}),
).toEqual({
packageName: "@acme/plugin-linear",
version: "1.2.3",
isLocalPath: false,
});
});
});

View File

@@ -1,257 +0,0 @@
import { afterEach, beforeEach, describe, expect, it } from "vitest";
import type { Agent, CompanySecret } from "@paperclipai/shared";
import type { PaperclipConfig } from "../config/schema.js";
import { secretsCheck } from "../checks/secrets-check.js";
import {
buildInlineMigrationSecretName,
buildMigratedAgentEnv,
collectInlineSecretMigrationCandidates,
parseSecretsInclude,
toPlainEnvValue,
} from "../commands/client/secrets.js";
function agent(partial: Partial<Agent>): Agent {
return {
id: "agent-12345678",
companyId: "company-1",
name: "Coder",
urlKey: "coder",
role: "engineer",
title: null,
icon: null,
status: "idle",
reportsTo: null,
capabilities: null,
adapterType: "codex_local",
adapterConfig: {},
runtimeConfig: {},
budgetMonthlyCents: 0,
spentMonthlyCents: 0,
pauseReason: null,
pausedAt: null,
permissions: {
canCreateAgents: false,
},
lastHeartbeatAt: null,
metadata: null,
createdAt: new Date("2026-04-26T00:00:00.000Z"),
updatedAt: new Date("2026-04-26T00:00:00.000Z"),
...partial,
};
}
function secret(partial: Partial<CompanySecret>): CompanySecret {
return {
id: "secret-1",
companyId: "company-1",
key: "agent_agent-12_anthropic_api_key",
name: "agent_agent-12_anthropic_api_key",
provider: "local_encrypted",
status: "active",
managedMode: "paperclip_managed",
externalRef: null,
providerConfigId: null,
providerMetadata: null,
latestVersion: 1,
description: null,
lastResolvedAt: null,
lastRotatedAt: null,
deletedAt: null,
createdByAgentId: null,
createdByUserId: null,
createdAt: new Date("2026-04-26T00:00:00.000Z"),
updatedAt: new Date("2026-04-26T00:00:00.000Z"),
...partial,
};
}
function configWithSecretsProvider(provider: PaperclipConfig["secrets"]["provider"]): PaperclipConfig {
return {
$meta: {
version: 1,
updatedAt: "2026-05-02T00:00:00.000Z",
source: "configure",
},
database: {
mode: "embedded-postgres",
embeddedPostgresDataDir: "/tmp/paperclip/db",
embeddedPostgresPort: 55432,
backup: {
enabled: true,
intervalMinutes: 60,
retentionDays: 30,
dir: "/tmp/paperclip/backups",
},
},
logging: {
mode: "file",
logDir: "/tmp/paperclip/logs",
},
server: {
deploymentMode: "local_trusted",
exposure: "private",
host: "127.0.0.1",
port: 3100,
allowedHostnames: [],
serveUi: true,
},
auth: {
baseUrlMode: "auto",
disableSignUp: false,
},
telemetry: {
enabled: true,
},
storage: {
provider: "local_disk",
localDisk: {
baseDir: "/tmp/paperclip/storage",
},
s3: {
bucket: "paperclip",
region: "us-east-1",
prefix: "",
forcePathStyle: false,
},
},
secrets: {
provider,
strictMode: true,
localEncrypted: {
keyFilePath: "/tmp/paperclip/secrets/master.key",
},
},
};
}
describe("secrets CLI helpers", () => {
const originalEnv = { ...process.env };
beforeEach(() => {
process.env = { ...originalEnv };
delete process.env.PAPERCLIP_SECRETS_AWS_REGION;
delete process.env.AWS_REGION;
delete process.env.AWS_DEFAULT_REGION;
delete process.env.PAPERCLIP_SECRETS_AWS_DEPLOYMENT_ID;
delete process.env.PAPERCLIP_SECRETS_AWS_KMS_KEY_ID;
});
afterEach(() => {
process.env = { ...originalEnv };
});
it("parses declaration include filters", () => {
expect(parseSecretsInclude("agents,projects,tasks")).toEqual({
company: false,
agents: true,
projects: true,
issues: true,
skills: false,
});
});
it("detects inline sensitive env values that need migration", () => {
const rows = collectInlineSecretMigrationCandidates(
[
agent({
id: "agent-12345678",
adapterConfig: {
env: {
ANTHROPIC_API_KEY: "sk-ant-test",
GH_TOKEN: {
type: "plain",
value: "ghp-test",
},
PATH: {
type: "plain",
value: "/usr/bin",
},
OPENAI_API_KEY: {
type: "secret_ref",
secretId: "secret-existing",
},
},
},
}),
],
[
secret({
id: "secret-gh-token",
name: buildInlineMigrationSecretName("agent-12345678", "GH_TOKEN"),
}),
],
);
expect(rows).toEqual([
{
agentId: "agent-12345678",
agentName: "Coder",
envKey: "ANTHROPIC_API_KEY",
secretName: "agent_agent-12_anthropic_api_key",
existingSecretId: null,
},
{
agentId: "agent-12345678",
agentName: "Coder",
envKey: "GH_TOKEN",
secretName: "agent_agent-12_gh_token",
existingSecretId: "secret-gh-token",
},
]);
});
it("builds migrated env bindings without preserving secret values", () => {
const next = buildMigratedAgentEnv(
{
ANTHROPIC_API_KEY: "sk-ant-test",
NODE_ENV: {
type: "plain",
value: "development",
},
},
new Map([["ANTHROPIC_API_KEY", "secret-1"]]),
);
expect(next).toEqual({
ANTHROPIC_API_KEY: {
type: "secret_ref",
secretId: "secret-1",
version: "latest",
},
NODE_ENV: {
type: "plain",
value: "development",
},
});
expect(JSON.stringify(next)).not.toContain("sk-ant-test");
});
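Read alongside the assertions above, a plausible sketch of what `buildMigratedAgentEnv` does — this implementation is inferred from the test expectations, not taken from the source:

```typescript
// Hypothetical reconstruction of buildMigratedAgentEnv, inferred from the test above.
type EnvValue =
  | string
  | { type: "plain"; value: string }
  | { type: "secret_ref"; secretId: string; version?: string };

function buildMigratedAgentEnv(
  env: Record<string, EnvValue>,
  secretIdsByKey: Map<string, string>,
): Record<string, EnvValue> {
  const next: Record<string, EnvValue> = {};
  for (const [key, value] of Object.entries(env)) {
    const secretId = secretIdsByKey.get(key);
    // Migrated keys become secret references; the inline value is dropped,
    // never copied, so the plaintext secret cannot leak into the new config.
    next[key] = secretId
      ? { type: "secret_ref", secretId, version: "latest" }
      : value;
  }
  return next;
}
```

The key property the test checks — `JSON.stringify(next)` never containing the original secret value — falls out of replacing rather than wrapping the migrated entries.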
it("reads only explicit plain env values", () => {
expect(toPlainEnvValue("plain-value")).toBe("plain-value");
expect(toPlainEnvValue({ type: "plain", value: "wrapped" })).toBe("wrapped");
expect(toPlainEnvValue({ type: "secret_ref", secretId: "secret-1" })).toBeNull();
});
it("reports the AWS bootstrap config required by doctor", () => {
const result = secretsCheck(configWithSecretsProvider("aws_secrets_manager"));
expect(result.status).toBe("fail");
expect(result.message).toContain("PAPERCLIP_SECRETS_AWS_DEPLOYMENT_ID");
expect(result.repairHint).toContain("AWS SDK default credential chain");
expect(result.repairHint).toContain("Do not store AWS root credentials");
});
it("passes AWS doctor checks when non-secret provider config is present", () => {
process.env.PAPERCLIP_SECRETS_AWS_REGION = "us-east-1";
process.env.PAPERCLIP_SECRETS_AWS_DEPLOYMENT_ID = "prod-us-1";
process.env.PAPERCLIP_SECRETS_AWS_KMS_KEY_ID =
"arn:aws:kms:us-east-1:123456789012:key/test";
process.env.AWS_PROFILE = "paperclip-prod";
const result = secretsCheck(configWithSecretsProvider("aws_secrets_manager"));
expect(result.status).toBe("pass");
expect(result.message).toContain("prod-us-1");
expect(result.message).toContain("AWS_PROFILE/shared config");
});
});

View File

@@ -3,15 +3,11 @@ import os from "node:os";
import path from "node:path";
import { execFileSync } from "node:child_process";
import { randomUUID } from "node:crypto";
import { eq } from "drizzle-orm";
import { afterEach, describe, expect, it, vi } from "vitest";
import {
agents,
authUsers,
companies,
createDb,
issueComments,
issues,
projects,
routines,
routineTriggers,
@@ -20,7 +16,6 @@ import {
copyGitHooksToWorktreeGitDir,
copySeededSecretsKey,
pauseSeededScheduledRoutines,
quarantineSeededWorktreeExecutionState,
readSourceAttachmentBody,
rebindWorkspaceCwd,
resolveSourceConfigPath,
@@ -28,7 +23,6 @@ import {
resolveWorktreeReseedTargetPaths,
resolveGitWorktreeAddArgs,
resolveWorktreeMakeTargetPath,
worktreeRepairCommand,
worktreeInitCommand,
worktreeMakeCommand,
worktreeReseedCommand,
@@ -52,7 +46,6 @@ import {
const ORIGINAL_CWD = process.cwd();
const ORIGINAL_ENV = { ...process.env };
const embeddedPostgresSupport = await getEmbeddedPostgresTestSupport();
const itEmbeddedPostgres = embeddedPostgresSupport.supported ? it : it.skip;
const describeEmbeddedPostgres = embeddedPostgresSupport.supported ? describe : describe.skip;
if (!embeddedPostgresSupport.supported) {
@@ -190,9 +183,8 @@ describe("worktree helpers", () => {
).toEqual(["worktree", "add", "-b", "my-worktree", "/tmp/my-worktree", "origin/main"]);
});
it("rewrites auth URLs only when they already include a port", () => {
it("rewrites loopback auth URLs to the new port only", () => {
expect(rewriteLocalUrlPort("http://127.0.0.1:3100", 3110)).toBe("http://127.0.0.1:3110/");
expect(rewriteLocalUrlPort("http://my-host.ts.net:3100", 3110)).toBe("http://my-host.ts.net:3110/");
expect(rewriteLocalUrlPort("https://paperclip.example", 3110)).toBe("https://paperclip.example");
});
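The renamed test suggests the rewrite now applies to loopback hosts only. A sketch of what `rewriteLocalUrlPort` might look like under that behavior — inferred from the assertions and the test name, not from the actual implementation:

```typescript
// Hypothetical reconstruction of rewriteLocalUrlPort, inferred from the test above.
// Only loopback URLs that carry an explicit port are rewritten; every other URL
// is returned untouched.
function rewriteLocalUrlPort(rawUrl: string, newPort: number): string {
  const url = new URL(rawUrl);
  const loopbackHosts = new Set(["127.0.0.1", "localhost", "[::1]"]);
  if (!loopbackHosts.has(url.hostname) || url.port === "") {
    return rawUrl; // non-loopback host or no explicit port: leave as-is
  }
  url.port = String(newPort);
  return url.toString(); // WHATWG URL serialization adds the trailing "/" path
}
```

Note that rewritten URLs come back with a trailing slash (`http://127.0.0.1:3110/`) because the WHATWG `URL` serializer normalizes the empty path, which matches the expectation in the test.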
@@ -287,138 +279,6 @@ describe("worktree helpers", () => {
expect(full.nullifyColumns).toEqual({});
});
itEmbeddedPostgres("quarantines copied live execution state in seeded worktree databases", async () => {
const tempDb = await startEmbeddedPostgresTestDatabase("paperclip-worktree-quarantine-");
const db = createDb(tempDb.connectionString);
const companyId = randomUUID();
const agentId = randomUUID();
const idleAgentId = randomUUID();
const inProgressIssueId = randomUUID();
const todoIssueId = randomUUID();
const reviewIssueId = randomUUID();
const userIssueId = randomUUID();
try {
await db.insert(companies).values({
id: companyId,
name: "Paperclip",
issuePrefix: "WTQ",
requireBoardApprovalForNewAgents: false,
});
await db.insert(agents).values([
{
id: agentId,
companyId,
name: "CodexCoder",
role: "engineer",
status: "running",
adapterType: "codex_local",
adapterConfig: {},
runtimeConfig: {
heartbeat: { enabled: true, intervalSec: 60 },
wakeOnDemand: true,
},
permissions: {},
},
{
id: idleAgentId,
companyId,
name: "Reviewer",
role: "reviewer",
status: "idle",
adapterType: "codex_local",
adapterConfig: {},
runtimeConfig: { heartbeat: { enabled: false, intervalSec: 300 } },
permissions: {},
},
]);
await db.insert(issues).values([
{
id: inProgressIssueId,
companyId,
title: "Copied in-flight issue",
status: "in_progress",
priority: "medium",
assigneeAgentId: agentId,
issueNumber: 1,
identifier: "WTQ-1",
executionAgentNameKey: "codexcoder",
executionLockedAt: new Date("2026-04-18T00:00:00.000Z"),
},
{
id: todoIssueId,
companyId,
title: "Copied assigned todo issue",
status: "todo",
priority: "medium",
assigneeAgentId: agentId,
issueNumber: 2,
identifier: "WTQ-2",
},
{
id: reviewIssueId,
companyId,
title: "Copied assigned review issue",
status: "in_review",
priority: "medium",
assigneeAgentId: idleAgentId,
issueNumber: 3,
identifier: "WTQ-3",
},
{
id: userIssueId,
companyId,
title: "Copied user issue",
status: "todo",
priority: "medium",
assigneeUserId: "user-1",
issueNumber: 4,
identifier: "WTQ-4",
},
]);
await expect(quarantineSeededWorktreeExecutionState(tempDb.connectionString)).resolves.toEqual({
disabledTimerHeartbeats: 1,
resetRunningAgents: 1,
quarantinedInProgressIssues: 1,
unassignedTodoIssues: 1,
unassignedReviewIssues: 1,
});
const [quarantinedAgent] = await db.select().from(agents).where(eq(agents.id, agentId));
expect(quarantinedAgent?.status).toBe("idle");
expect(quarantinedAgent?.runtimeConfig).toMatchObject({
heartbeat: { enabled: false, intervalSec: 60 },
wakeOnDemand: true,
});
const [inProgressIssue] = await db.select().from(issues).where(eq(issues.id, inProgressIssueId));
expect(inProgressIssue?.status).toBe("blocked");
expect(inProgressIssue?.assigneeAgentId).toBeNull();
expect(inProgressIssue?.executionAgentNameKey).toBeNull();
expect(inProgressIssue?.executionLockedAt).toBeNull();
const [todoIssue] = await db.select().from(issues).where(eq(issues.id, todoIssueId));
expect(todoIssue?.status).toBe("todo");
expect(todoIssue?.assigneeAgentId).toBeNull();
const [reviewIssue] = await db.select().from(issues).where(eq(issues.id, reviewIssueId));
expect(reviewIssue?.status).toBe("in_review");
expect(reviewIssue?.assigneeAgentId).toBeNull();
const [userIssue] = await db.select().from(issues).where(eq(issues.id, userIssueId));
expect(userIssue?.status).toBe("todo");
expect(userIssue?.assigneeUserId).toBe("user-1");
const comments = await db.select().from(issueComments).where(eq(issueComments.issueId, inProgressIssueId));
expect(comments).toHaveLength(1);
expect(comments[0]?.body).toContain("Quarantined during worktree seed");
} finally {
await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
await tempDb.cleanup();
}
}, 20_000);
it("copies the source local_encrypted secrets key into the seeded worktree instance", () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-secrets-"));
const originalInlineMasterKey = process.env.PAPERCLIP_SECRETS_MASTER_KEY;
@@ -512,97 +372,6 @@ describe("worktree helpers", () => {
}
});
itEmbeddedPostgres(
"seeds authenticated users into minimally cloned worktree instances",
async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-auth-seed-"));
const worktreeRoot = path.join(tempRoot, "PAP-999-auth-seed");
const sourceHome = path.join(tempRoot, "source-home");
const sourceConfigDir = path.join(sourceHome, "instances", "source");
const sourceConfigPath = path.join(sourceConfigDir, "config.json");
const sourceEnvPath = path.join(sourceConfigDir, ".env");
const sourceKeyPath = path.join(sourceConfigDir, "secrets", "master.key");
const worktreeHome = path.join(tempRoot, ".paperclip-worktrees");
const originalCwd = process.cwd();
const sourceDb = await startEmbeddedPostgresTestDatabase("paperclip-worktree-auth-source-");
try {
const sourceDbClient = createDb(sourceDb.connectionString);
await sourceDbClient.insert(authUsers).values({
id: "user-existing",
email: "existing@paperclip.ing",
name: "Existing User",
emailVerified: true,
createdAt: new Date(),
updatedAt: new Date(),
});
fs.mkdirSync(path.dirname(sourceKeyPath), { recursive: true });
fs.mkdirSync(worktreeRoot, { recursive: true });
const sourceConfig = buildSourceConfig();
sourceConfig.database = {
mode: "postgres",
embeddedPostgresDataDir: path.join(sourceConfigDir, "db"),
embeddedPostgresPort: 54329,
backup: {
enabled: true,
intervalMinutes: 60,
retentionDays: 30,
dir: path.join(sourceConfigDir, "backups"),
},
connectionString: sourceDb.connectionString,
};
sourceConfig.logging.logDir = path.join(sourceConfigDir, "logs");
sourceConfig.storage.localDisk.baseDir = path.join(sourceConfigDir, "storage");
sourceConfig.secrets.localEncrypted.keyFilePath = sourceKeyPath;
fs.writeFileSync(sourceConfigPath, JSON.stringify(sourceConfig, null, 2) + "\n", "utf8");
fs.writeFileSync(sourceEnvPath, "", "utf8");
fs.writeFileSync(sourceKeyPath, "source-master-key", "utf8");
process.chdir(worktreeRoot);
await worktreeInitCommand({
name: "PAP-999-auth-seed",
home: worktreeHome,
fromConfig: sourceConfigPath,
force: true,
});
const targetConfig = JSON.parse(
fs.readFileSync(path.join(worktreeRoot, ".paperclip", "config.json"), "utf8"),
) as PaperclipConfig;
const { default: EmbeddedPostgres } = await import("embedded-postgres");
const targetPg = new EmbeddedPostgres({
databaseDir: targetConfig.database.embeddedPostgresDataDir,
user: "paperclip",
password: "paperclip",
port: targetConfig.database.embeddedPostgresPort,
persistent: true,
initdbFlags: ["--encoding=UTF8", "--locale=C", "--lc-messages=C"],
onLog: () => {},
onError: () => {},
});
await targetPg.start();
try {
const targetDb = createDb(
`postgres://paperclip:paperclip@127.0.0.1:${targetConfig.database.embeddedPostgresPort}/paperclip`,
);
const seededUsers = await targetDb.select().from(authUsers);
expect(seededUsers.some((row) => row.email === "existing@paperclip.ing")).toBe(true);
} finally {
await targetPg.stop();
}
} finally {
process.chdir(originalCwd);
await sourceDb.cleanup();
fs.rmSync(tempRoot, { recursive: true, force: true });
}
},
30000,
);
it("avoids ports already claimed by sibling worktree instance configs", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-claimed-ports-"));
const repoRoot = path.join(tempRoot, "repo");
@@ -882,7 +651,7 @@ describe("worktree helpers", () => {
}
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 30_000);
}, 20_000);
it("restores the current worktree config and instance data if reseed fails", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-reseed-rollback-"));
@@ -1039,7 +808,7 @@ describe("worktree helpers", () => {
execFileSync("git", ["worktree", "remove", "--force", worktreePath], { cwd: repoRoot, stdio: "ignore" });
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 15_000);
});
it("creates and initializes a worktree from the top-level worktree:make command", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-make-"));
@@ -1075,113 +844,6 @@ describe("worktree helpers", () => {
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 20_000);
it("no-ops on the primary checkout unless --branch is provided", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-repair-primary-"));
const repoRoot = path.join(tempRoot, "repo");
const originalCwd = process.cwd();
try {
fs.mkdirSync(repoRoot, { recursive: true });
execFileSync("git", ["init"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.email", "test@example.com"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.name", "Test User"], { cwd: repoRoot, stdio: "ignore" });
fs.writeFileSync(path.join(repoRoot, "README.md"), "# temp\n", "utf8");
execFileSync("git", ["add", "README.md"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["commit", "-m", "Initial commit"], { cwd: repoRoot, stdio: "ignore" });
process.chdir(repoRoot);
await worktreeRepairCommand({});
expect(fs.existsSync(path.join(repoRoot, ".paperclip", "config.json"))).toBe(false);
expect(fs.existsSync(path.join(repoRoot, ".paperclip", "worktrees"))).toBe(false);
} finally {
process.chdir(originalCwd);
fs.rmSync(tempRoot, { recursive: true, force: true });
}
});
it("repairs the current linked worktree when Paperclip metadata is missing", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-repair-current-"));
const repoRoot = path.join(tempRoot, "repo");
const worktreePath = path.join(repoRoot, ".paperclip", "worktrees", "repair-me");
const sourceConfigPath = path.join(tempRoot, "source-config.json");
const worktreeHome = path.join(tempRoot, ".paperclip-worktrees");
const worktreePaths = resolveWorktreeLocalPaths({
cwd: worktreePath,
homeDir: worktreeHome,
instanceId: sanitizeWorktreeInstanceId(path.basename(worktreePath)),
});
const originalCwd = process.cwd();
try {
fs.mkdirSync(repoRoot, { recursive: true });
execFileSync("git", ["init"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.email", "test@example.com"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.name", "Test User"], { cwd: repoRoot, stdio: "ignore" });
fs.writeFileSync(path.join(repoRoot, "README.md"), "# temp\n", "utf8");
execFileSync("git", ["add", "README.md"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["commit", "-m", "Initial commit"], { cwd: repoRoot, stdio: "ignore" });
fs.mkdirSync(path.dirname(worktreePath), { recursive: true });
execFileSync("git", ["worktree", "add", "-b", "repair-me", worktreePath, "HEAD"], {
cwd: repoRoot,
stdio: "ignore",
});
fs.writeFileSync(sourceConfigPath, JSON.stringify(buildSourceConfig(), null, 2), "utf8");
fs.mkdirSync(worktreePaths.instanceRoot, { recursive: true });
fs.writeFileSync(path.join(worktreePaths.instanceRoot, "marker.txt"), "stale", "utf8");
process.chdir(worktreePath);
await worktreeRepairCommand({
fromConfig: sourceConfigPath,
home: worktreeHome,
noSeed: true,
});
expect(fs.existsSync(path.join(worktreePath, ".paperclip", "config.json"))).toBe(true);
expect(fs.existsSync(path.join(worktreePath, ".paperclip", ".env"))).toBe(true);
expect(fs.existsSync(path.join(worktreePaths.instanceRoot, "marker.txt"))).toBe(false);
} finally {
process.chdir(originalCwd);
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 20_000);
it("creates and repairs a missing branch worktree when --branch is provided", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-repair-branch-"));
const repoRoot = path.join(tempRoot, "repo");
const sourceConfigPath = path.join(tempRoot, "source-config.json");
const worktreeHome = path.join(tempRoot, ".paperclip-worktrees");
const originalCwd = process.cwd();
const expectedWorktreePath = path.join(repoRoot, ".paperclip", "worktrees", "feature-repair-me");
try {
fs.mkdirSync(repoRoot, { recursive: true });
execFileSync("git", ["init"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.email", "test@example.com"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.name", "Test User"], { cwd: repoRoot, stdio: "ignore" });
fs.writeFileSync(path.join(repoRoot, "README.md"), "# temp\n", "utf8");
execFileSync("git", ["add", "README.md"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["commit", "-m", "Initial commit"], { cwd: repoRoot, stdio: "ignore" });
fs.writeFileSync(sourceConfigPath, JSON.stringify(buildSourceConfig(), null, 2), "utf8");
process.chdir(repoRoot);
await worktreeRepairCommand({
branch: "feature/repair-me",
fromConfig: sourceConfigPath,
home: worktreeHome,
noSeed: true,
});
expect(fs.existsSync(path.join(expectedWorktreePath, ".git"))).toBe(true);
expect(fs.existsSync(path.join(expectedWorktreePath, ".paperclip", "config.json"))).toBe(true);
expect(fs.existsSync(path.join(expectedWorktreePath, ".paperclip", ".env"))).toBe(true);
} finally {
process.chdir(originalCwd);
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 20_000);
});
describeEmbeddedPostgres("pauseSeededScheduledRoutines", () => {

@@ -1,9 +1,7 @@
import type { CLIAdapterModule } from "@paperclipai/adapter-utils";
import { printAcpxStreamEvent } from "@paperclipai/adapter-acpx-local/cli";
import { printClaudeStreamEvent } from "@paperclipai/adapter-claude-local/cli";
import { printCodexStreamEvent } from "@paperclipai/adapter-codex-local/cli";
import { printCursorStreamEvent } from "@paperclipai/adapter-cursor-local/cli";
import { printCursorCloudEvent } from "@paperclipai/adapter-cursor-cloud/cli";
import { printGeminiStreamEvent } from "@paperclipai/adapter-gemini-local/cli";
import { printOpenCodeStreamEvent } from "@paperclipai/adapter-opencode-local/cli";
import { printPiStreamEvent } from "@paperclipai/adapter-pi-local/cli";
@@ -16,11 +14,6 @@ const claudeLocalCLIAdapter: CLIAdapterModule = {
formatStdoutEvent: printClaudeStreamEvent,
};
const acpxLocalCLIAdapter: CLIAdapterModule = {
type: "acpx_local",
formatStdoutEvent: printAcpxStreamEvent,
};
const codexLocalCLIAdapter: CLIAdapterModule = {
type: "codex_local",
formatStdoutEvent: printCodexStreamEvent,
@@ -41,11 +34,6 @@ const cursorLocalCLIAdapter: CLIAdapterModule = {
formatStdoutEvent: printCursorStreamEvent,
};
const cursorCloudCLIAdapter: CLIAdapterModule = {
type: "cursor_cloud",
formatStdoutEvent: printCursorCloudEvent,
};
const geminiLocalCLIAdapter: CLIAdapterModule = {
type: "gemini_local",
formatStdoutEvent: printGeminiStreamEvent,
@@ -58,13 +46,11 @@ const openclawGatewayCLIAdapter: CLIAdapterModule = {
const adaptersByType = new Map<string, CLIAdapterModule>(
[
acpxLocalCLIAdapter,
claudeLocalCLIAdapter,
codexLocalCLIAdapter,
openCodeLocalCLIAdapter,
piLocalCLIAdapter,
cursorLocalCLIAdapter,
cursorCloudCLIAdapter,
geminiLocalCLIAdapter,
openclawGatewayCLIAdapter,
processCLIAdapter,

@@ -5,9 +5,6 @@ import type { PaperclipConfig } from "../config/schema.js";
import type { CheckResult } from "./index.js";
import { resolveRuntimeLikePath } from "./path-resolver.js";
const AWS_CREDENTIAL_SOURCE_HINT =
"Provide AWS runtime credentials through the AWS SDK default credential chain: IAM role/workload identity, AWS_PROFILE/SSO/shared credentials, web identity, container/instance metadata, or short-lived shell credentials";
function decodeMasterKey(raw: string): Buffer | null {
const trimmed = raw.trim();
if (!trimmed) return null;
@@ -50,16 +47,13 @@ function withStrictModeNote(
export function secretsCheck(config: PaperclipConfig, configPath?: string): CheckResult {
const provider = config.secrets.provider;
if (provider === "aws_secrets_manager") {
return withStrictModeNote(awsSecretsManagerCheck(), config);
}
if (provider !== "local_encrypted") {
return {
name: "Secrets adapter",
status: "fail",
message: `${provider} is configured, but this build only supports local_encrypted and aws_secrets_manager`,
message: `${provider} is configured, but this build only supports local_encrypted`,
canRepair: false,
repairHint: "Run `paperclipai configure --section secrets` and choose local_encrypted or aws_secrets_manager",
repairHint: "Run `paperclipai configure --section secrets` and set provider to local_encrypted",
};
}
@@ -141,100 +135,12 @@ export function secretsCheck(config: PaperclipConfig, configPath?: string): Chec
};
}
const keyMode = fs.statSync(keyFilePath).mode & 0o777;
const permissionWarning =
(keyMode & 0o077) !== 0
? `; key file permissions are ${keyMode.toString(8)} (run chmod 600 ${keyFilePath})`
: "";
return withStrictModeNote(
{
name: "Secrets adapter",
status: permissionWarning ? "warn" : "pass",
message: `Local encrypted provider configured with key file ${keyFilePath}${permissionWarning}`,
repairHint: permissionWarning
? "Restrict the local encrypted secrets key file to owner read/write permissions"
: undefined,
status: "pass",
message: `Local encrypted provider configured with key file ${keyFilePath}`,
},
config,
);
}
function awsSecretsManagerCheck(): CheckResult {
const missingConfig = missingAwsSecretsManagerConfig();
if (missingConfig.length > 0) {
return {
name: "Secrets adapter",
status: "fail",
message: `AWS Secrets Manager provider is missing non-secret config: ${missingConfig.join(", ")}`,
canRepair: false,
repairHint:
`Set ${missingConfig.join(", ")} in the Paperclip server runtime. ${AWS_CREDENTIAL_SOURCE_HINT}. Do not store AWS root credentials or long-lived IAM user keys in Paperclip secrets.`,
};
}
const staticEnvCredentials =
process.env.AWS_ACCESS_KEY_ID?.trim() && process.env.AWS_SECRET_ACCESS_KEY?.trim();
const credentialSource = detectedAwsCredentialSources().join(", ");
const message =
`AWS Secrets Manager provider configured for deployment ${process.env.PAPERCLIP_SECRETS_AWS_DEPLOYMENT_ID}; ` +
`runtime credentials source: ${credentialSource || "AWS SDK default credential chain"}`;
if (staticEnvCredentials) {
return {
name: "Secrets adapter",
status: "warn",
message,
canRepair: false,
repairHint:
"AWS static environment credentials are visible. Use only short-lived shell credentials locally; prefer IAM role/workload identity for hosted deployments and never store AWS access keys in Paperclip company secrets.",
};
}
return {
name: "Secrets adapter",
status: "pass",
message,
};
}
function missingAwsSecretsManagerConfig(): string[] {
const missing: string[] = [];
if (
!(
process.env.PAPERCLIP_SECRETS_AWS_REGION?.trim() ||
process.env.AWS_REGION?.trim() ||
process.env.AWS_DEFAULT_REGION?.trim()
)
) {
missing.push("PAPERCLIP_SECRETS_AWS_REGION or AWS_REGION/AWS_DEFAULT_REGION");
}
if (!process.env.PAPERCLIP_SECRETS_AWS_DEPLOYMENT_ID?.trim()) {
missing.push("PAPERCLIP_SECRETS_AWS_DEPLOYMENT_ID");
}
if (!process.env.PAPERCLIP_SECRETS_AWS_KMS_KEY_ID?.trim()) {
missing.push("PAPERCLIP_SECRETS_AWS_KMS_KEY_ID");
}
return missing;
}
function detectedAwsCredentialSources(): string[] {
const sources: string[] = [];
if (process.env.AWS_PROFILE?.trim()) sources.push("AWS_PROFILE/shared config");
if (process.env.AWS_ACCESS_KEY_ID?.trim() && process.env.AWS_SECRET_ACCESS_KEY?.trim()) {
sources.push("temporary AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY environment credentials");
}
if (process.env.AWS_WEB_IDENTITY_TOKEN_FILE?.trim() && process.env.AWS_ROLE_ARN?.trim()) {
sources.push("AWS web identity token");
}
if (
process.env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI?.trim() ||
process.env.AWS_CONTAINER_CREDENTIALS_FULL_URI?.trim()
) {
sources.push("AWS container credentials endpoint");
}
if (process.env.AWS_SHARED_CREDENTIALS_FILE?.trim() || process.env.AWS_CONFIG_FILE?.trim()) {
sources.push("custom AWS shared credentials/config file");
}
return sources;
}

@@ -61,7 +61,6 @@ interface IssueUpdateOptions extends BaseClientOptions {
interface IssueCommentOptions extends BaseClientOptions {
body: string;
reopen?: boolean;
resume?: boolean;
}
interface IssueCheckoutOptions extends BaseClientOptions {
@@ -242,14 +241,12 @@ export function registerIssueCommands(program: Command): void {
.argument("<issueId>", "Issue ID")
.requiredOption("--body <text>", "Comment body")
.option("--reopen", "Reopen if issue is done/cancelled")
.option("--resume", "Request explicit follow-up and wake the assignee when resumable")
.action(async (issueId: string, opts: IssueCommentOptions) => {
try {
const ctx = resolveCommandContext(opts);
const payload = addIssueCommentSchema.parse({
body: opts.body,
reopen: opts.reopen,
resume: opts.resume,
});
const comment = await ctx.api.post<IssueComment>(`/api/issues/${issueId}/comments`, payload);
printOutput(comment, { json: ctx.json });

@@ -1,11 +1,5 @@
import path from "node:path";
import { existsSync } from "node:fs";
import { Command, Option } from "commander";
import {
scaffoldPluginProject,
shellQuote,
type ScaffoldPluginOptions,
} from "../../../../packages/plugins/create-paperclip-plugin/src/index.js";
import { Command } from "commander";
import pc from "picocolors";
import {
addCommonClientOptions,
@@ -45,101 +39,28 @@ interface PluginInstallOptions extends BaseClientOptions {
version?: string;
}
interface PluginInstallRequest {
packageName: string;
version?: string;
isLocalPath: boolean;
}
interface PluginUninstallOptions extends BaseClientOptions {
force?: boolean;
}
interface PluginInitOptions extends BaseClientOptions {
output?: string;
template?: ScaffoldPluginOptions["template"];
category?: ScaffoldPluginOptions["category"];
displayName?: string;
description?: string;
author?: string;
sdkPath?: string;
}
interface PluginInitResult {
outputDir: string;
nextCommands: string[];
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
function expandHomePath(packageArg: string): string {
if (!packageArg.startsWith("~")) return packageArg;
const home = process.env.HOME ?? process.env.USERPROFILE ?? "";
return path.resolve(home, packageArg.slice(1).replace(/^[\\/]/, ""));
}
function hasLocalPathSyntax(packageArg: string): boolean {
return (
path.isAbsolute(packageArg) ||
packageArg.startsWith("./") ||
packageArg.startsWith("../") ||
packageArg.startsWith("~") ||
packageArg.startsWith(".\\") ||
packageArg.startsWith("..\\")
);
}
function isExistingRelativePath(
packageArg: string,
cwd: string,
pathExists: (targetPath: string) => boolean,
): boolean {
if (packageArg.trim() === "") return false;
if (hasLocalPathSyntax(packageArg)) return false;
return pathExists(path.resolve(cwd, packageArg));
}
/**
* Resolve a local path argument to an absolute path so the server can find the
* plugin on disk regardless of where the user ran the CLI.
*/
function resolvePackageArg(packageArg: string, isLocal: boolean, cwd = process.cwd()): string {
function resolvePackageArg(packageArg: string, isLocal: boolean): string {
if (!isLocal) return packageArg;
// Already absolute
if (path.isAbsolute(packageArg)) return packageArg;
if (packageArg.startsWith("~")) return expandHomePath(packageArg);
return path.resolve(cwd, packageArg);
}
export function buildPluginInstallRequest(
packageArg: string,
opts: Pick<PluginInstallOptions, "local" | "version"> = {},
deps: { cwd?: string; existsSync?: (targetPath: string) => boolean } = {},
): PluginInstallRequest {
const cwd = deps.cwd ?? process.cwd();
const pathExists = deps.existsSync ?? existsSync;
const isLocal =
opts.local ||
hasLocalPathSyntax(packageArg) ||
(opts.version ? false : isExistingRelativePath(packageArg, cwd, pathExists));
if (isLocal && opts.version) {
throw new Error("--version is only supported for npm package installs, not local plugin paths.");
// Expand leading ~ to home directory
if (packageArg.startsWith("~")) {
const home = process.env.HOME ?? process.env.USERPROFILE ?? "";
return path.resolve(home, packageArg.slice(1).replace(/^[\\/]/, ""));
}
return {
packageName: resolvePackageArg(packageArg, Boolean(isLocal), cwd),
version: opts.version,
isLocalPath: Boolean(isLocal),
};
}
export function renderLocalPluginInstallHint(packagePath: string): string {
return [
pc.dim("Local plugin installs run trusted local code from your machine."),
pc.dim(`Keep ${pc.cyan("pnpm dev")} running in ${packagePath}; Paperclip watches rebuilt dist output and reloads the plugin worker.`),
].join("\n");
return path.resolve(process.cwd(), packageArg);
}
function formatPlugin(p: PluginRecord): string {
@@ -166,58 +87,6 @@ function formatPlugin(p: PluginRecord): string {
return parts.join(" ");
}
function packageToDirName(pluginName: string): string {
return pluginName.replace(/^@[^/]+\//, "");
}
export function buildPluginInitScaffoldOptions(
packageName: string,
opts: PluginInitOptions,
cwd = process.cwd(),
): ScaffoldPluginOptions {
const outputRoot = path.resolve(cwd, opts.output ?? ".");
const outputDir = path.resolve(outputRoot, packageToDirName(packageName));
return {
pluginName: packageName,
outputDir,
template: opts.template,
category: opts.category,
displayName: opts.displayName,
description: opts.description,
author: opts.author,
sdkPath: opts.sdkPath,
};
}
export function buildPluginInitNextCommands(outputDir: string): string[] {
const quotedOutputDir = shellQuote(outputDir);
return [
`cd ${quotedOutputDir}`,
"pnpm install",
"pnpm dev",
`paperclipai plugin install ${quotedOutputDir}`,
];
}
export function renderPluginInitSuccess(result: PluginInitResult): string {
return [
pc.green(`✓ Created plugin scaffold at ${result.outputDir}`),
"",
"Next commands:",
...result.nextCommands.map((command) => ` ${pc.cyan(command)}`),
].join("\n");
}
export function runPluginInitCommand(packageName: string, opts: PluginInitOptions): PluginInitResult {
const scaffoldOptions = buildPluginInitScaffoldOptions(packageName, opts);
const outputDir = scaffoldPluginProject(scaffoldOptions);
return {
outputDir,
nextCommands: buildPluginInitNextCommands(outputDir),
};
}
// ---------------------------------------------------------------------------
// Command registration
// ---------------------------------------------------------------------------
@@ -225,43 +94,6 @@ export function runPluginInitCommand(packageName: string, opts: PluginInitOption
export function registerPluginCommands(program: Command): void {
const plugin = program.command("plugin").description("Plugin lifecycle management");
// -------------------------------------------------------------------------
// plugin init <package-name>
// -------------------------------------------------------------------------
addCommonClientOptions(
plugin
.command("init <packageName>")
.description("Scaffold a local Paperclip plugin project")
.option("--output <dir>", "Directory to create the plugin folder in")
.addOption(
new Option("--template <template>", "Starter template")
.choices(["default", "connector", "workspace", "environment"])
.default("default"),
)
.addOption(
new Option("--category <category>", "Manifest category")
.choices(["connector", "workspace", "automation", "ui", "environment"]),
)
.option("--display-name <name>", "Manifest display name")
.option("--description <description>", "Manifest description")
.option("--author <author>", "Manifest author")
.option("--sdk-path <path>", "Local @paperclipai/plugin-sdk package path")
.action((packageName: string, opts: PluginInitOptions) => {
try {
const result = runPluginInitCommand(packageName, opts);
if (opts.json) {
printOutput(result, { json: true });
return;
}
console.log(renderPluginInitSuccess(result));
} catch (err) {
handleCommandError(err);
}
}),
);
// -------------------------------------------------------------------------
// plugin list
// -------------------------------------------------------------------------
@@ -315,19 +147,31 @@ export function registerPluginCommands(program: Command): void {
try {
const ctx = resolveCommandContext(opts);
const installRequest = buildPluginInstallRequest(packageArg, opts);
// Auto-detect local paths: starts with . or / or ~ or is an absolute path
const isLocal =
opts.local ||
packageArg.startsWith("./") ||
packageArg.startsWith("../") ||
packageArg.startsWith("/") ||
packageArg.startsWith("~");
const resolvedPackage = resolvePackageArg(packageArg, isLocal);
if (!ctx.json) {
console.log(
pc.dim(
installRequest.isLocalPath
? `Installing plugin from local path: ${installRequest.packageName}`
: `Installing plugin: ${installRequest.packageName}${opts.version ? `@${opts.version}` : ""}`,
isLocal
? `Installing plugin from local path: ${resolvedPackage}`
: `Installing plugin: ${resolvedPackage}${opts.version ? `@${opts.version}` : ""}`,
),
);
}
const installedPlugin = await ctx.api.post<PluginRecord>("/api/plugins/install", installRequest);
const installedPlugin = await ctx.api.post<PluginRecord>("/api/plugins/install", {
packageName: resolvedPackage,
version: opts.version,
isLocalPath: isLocal,
});
if (ctx.json) {
printOutput(installedPlugin, { json: true });
@@ -348,10 +192,6 @@ export function registerPluginCommands(program: Command): void {
if (installedPlugin.lastError) {
console.log(pc.red(` Warning: ${installedPlugin.lastError}`));
}
if (installRequest.isLocalPath) {
console.log(renderLocalPluginInstallHint(installRequest.packageName));
}
} catch (err) {
handleCommandError(err);
}

@@ -1,501 +0,0 @@
import { Command } from "commander";
import pc from "picocolors";
import type {
Agent,
AgentEnvConfig,
CompanyPortabilityEnvInput,
CompanyPortabilityExportPreviewResult,
CompanyPortabilityInclude,
CompanySecret,
EnvBinding,
SecretProvider,
SecretProviderDescriptor,
} from "@paperclipai/shared";
import {
addCommonClientOptions,
formatInlineRecord,
handleCommandError,
printOutput,
resolveCommandContext,
type BaseClientOptions,
} from "./common.js";
interface SecretListOptions extends BaseClientOptions {
companyId?: string;
}
interface SecretDeclarationsOptions extends BaseClientOptions {
companyId?: string;
include?: string;
kind?: "all" | "secret" | "plain";
}
interface SecretCreateOptions extends BaseClientOptions {
companyId?: string;
name?: string;
key?: string;
provider?: SecretProvider;
value?: string;
valueEnv?: string;
description?: string;
}
interface SecretLinkOptions extends BaseClientOptions {
companyId?: string;
name?: string;
key?: string;
provider?: SecretProvider;
externalRef?: string;
providerVersionRef?: string;
description?: string;
}
interface SecretDoctorOptions extends BaseClientOptions {
companyId?: string;
}
interface SecretMigrateInlineEnvOptions extends BaseClientOptions {
companyId?: string;
apply?: boolean;
}
interface SecretProviderHealth {
provider: SecretProvider;
status: "ok" | "warn" | "error";
message: string;
warnings?: string[];
backupGuidance?: string[];
details?: Record<string, unknown>;
}
interface SecretProviderHealthResponse {
providers: SecretProviderHealth[];
}
export interface InlineSecretMigrationCandidate {
agentId: string;
agentName: string;
envKey: string;
secretName: string;
existingSecretId: string | null;
}
const SENSITIVE_ENV_KEY_RE =
/(^token$|[-_]?token$|api[-_]?key|access[-_]?token|auth(?:_?token)?|authorization|bearer|secret|passwd|password|credential|jwt|private[-_]?key|cookie|connectionstring)/i;
const DEFAULT_DECLARATION_INCLUDE: CompanyPortabilityInclude = {
company: true,
agents: true,
projects: true,
issues: false,
skills: false,
};
export function parseSecretsInclude(input: string | undefined): CompanyPortabilityInclude {
if (!input?.trim()) return { ...DEFAULT_DECLARATION_INCLUDE };
const values = input.split(",").map((part) => part.trim().toLowerCase()).filter(Boolean);
const include = {
company: values.includes("company"),
agents: values.includes("agents"),
projects: values.includes("projects"),
issues: values.includes("issues") || values.includes("tasks"),
skills: values.includes("skills"),
};
if (!Object.values(include).some(Boolean)) {
throw new Error("Invalid --include value. Use one or more of: company,agents,projects,issues,tasks,skills");
}
return include;
}
export function isSensitiveEnvKey(key: string): boolean {
return SENSITIVE_ENV_KEY_RE.test(key);
}
export function toPlainEnvValue(binding: unknown): string | null {
if (typeof binding === "string") return binding;
if (typeof binding !== "object" || binding === null || Array.isArray(binding)) return null;
const record = binding as Record<string, unknown>;
if (record.type === "plain" && typeof record.value === "string") return record.value;
return null;
}
export function buildInlineMigrationSecretName(agentId: string, key: string): string {
return `agent_${agentId.slice(0, 8)}_${key.toLowerCase()}`;
}
export function collectInlineSecretMigrationCandidates(
agents: Agent[],
existingSecrets: CompanySecret[],
): InlineSecretMigrationCandidate[] {
const secretByName = new Map(existingSecrets.map((secret) => [secret.name, secret]));
const candidates: InlineSecretMigrationCandidate[] = [];
for (const agent of agents) {
const env = asRecord(agent.adapterConfig.env);
if (!env) continue;
for (const [envKey, binding] of Object.entries(env)) {
if (!isSensitiveEnvKey(envKey)) continue;
const plain = toPlainEnvValue(binding);
if (plain === null || plain.trim().length === 0) continue;
const secretName = buildInlineMigrationSecretName(agent.id, envKey);
candidates.push({
agentId: agent.id,
agentName: agent.name,
envKey,
secretName,
existingSecretId: secretByName.get(secretName)?.id ?? null,
});
}
}
return candidates;
}
export function buildMigratedAgentEnv(
env: Record<string, unknown>,
secretIdByEnvKey: Map<string, string>,
): AgentEnvConfig {
const next: AgentEnvConfig = { ...(env as Record<string, EnvBinding>) };
for (const [envKey, secretId] of secretIdByEnvKey) {
next[envKey] = {
type: "secret_ref",
secretId,
version: "latest",
};
}
return next;
}
function asRecord(value: unknown): Record<string, unknown> | null {
if (typeof value !== "object" || value === null || Array.isArray(value)) return null;
return value as Record<string, unknown>;
}
function readValueFromOptions(opts: SecretCreateOptions): string {
if (opts.value !== undefined && opts.valueEnv !== undefined) {
throw new Error("Use only one of --value or --value-env.");
}
if (opts.valueEnv !== undefined) {
const value = process.env[opts.valueEnv];
if (!value) throw new Error(`Environment variable ${opts.valueEnv} is empty or unset.`);
return value;
}
if (opts.value !== undefined) return opts.value;
throw new Error("Secret value is required. Pass --value or --value-env.");
}
function renderDeclaration(input: CompanyPortabilityEnvInput): Record<string, unknown> {
const scope = input.agentSlug
? `agent:${input.agentSlug}`
: input.projectSlug
? `project:${input.projectSlug}`
: "company";
return {
key: input.key,
scope,
kind: input.kind,
requirement: input.requirement,
portability: input.portability,
hasDefault: input.defaultValue !== null && input.defaultValue.length > 0,
description: input.description,
};
}
function renderSecret(secret: CompanySecret): Record<string, unknown> {
return {
id: secret.id,
name: secret.name,
key: secret.key,
provider: secret.provider,
status: secret.status,
managedMode: secret.managedMode,
latestVersion: secret.latestVersion,
externalRef: secret.externalRef ? "yes" : "no",
};
}
function printProviderHealth(rows: SecretProviderHealth[], json: boolean): void {
if (json) {
printOutput(rows, { json: true });
return;
}
if (rows.length === 0) {
printOutput([], { json: false });
return;
}
for (const row of rows) {
console.log(
formatInlineRecord({
id: row.provider,
status: row.status,
message: row.message,
}),
);
for (const warning of row.warnings ?? []) {
console.log(pc.yellow(`warning=${warning}`));
}
const missingConfig = asStringArray(row.details?.missingConfig);
if (missingConfig.length > 0) {
console.log(pc.dim(`missingConfig=${missingConfig.join(",")}`));
}
const credentialSource = typeof row.details?.credentialSource === "string"
? row.details.credentialSource
: null;
if (credentialSource) {
console.log(pc.dim(`credentialSource=${credentialSource}`));
}
const detectedCredentialSources = asStringArray(row.details?.detectedCredentialSources);
if (detectedCredentialSources.length > 0) {
console.log(pc.dim(`detectedCredentialSources=${detectedCredentialSources.join(",")}`));
}
for (const guidance of row.backupGuidance ?? []) {
console.log(pc.dim(`backup=${guidance}`));
}
}
}
function asStringArray(value: unknown): string[] {
return Array.isArray(value)
? value.filter((entry): entry is string => typeof entry === "string" && entry.length > 0)
: [];
}
async function migrateInlineEnv(opts: SecretMigrateInlineEnvOptions): Promise<void> {
const ctx = resolveCommandContext(opts, { requireCompany: true });
const companyId = ctx.companyId!;
const agents = (await ctx.api.get<Agent[]>(`/api/companies/${companyId}/agents`)) ?? [];
const secrets = (await ctx.api.get<CompanySecret[]>(`/api/companies/${companyId}/secrets`)) ?? [];
const candidates = collectInlineSecretMigrationCandidates(agents, secrets);
if (!opts.apply) {
printOutput(
{
apply: false,
agentsToUpdate: new Set(candidates.map((candidate) => candidate.agentId)).size,
secretsToCreate: candidates.filter((candidate) => !candidate.existingSecretId).length,
secretsToRotate: candidates.filter((candidate) => candidate.existingSecretId).length,
candidates,
},
{ json: ctx.json },
);
if (!ctx.json) {
console.log(pc.dim("Re-run with --apply to create/rotate secrets and update agent env bindings."));
}
return;
}
const createdOrRotated = new Map<string, string>();
let createdSecrets = 0;
let rotatedSecrets = 0;
for (const candidate of candidates) {
const agent = agents.find((row) => row.id === candidate.agentId);
const env = asRecord(agent?.adapterConfig.env);
const value = env ? toPlainEnvValue(env[candidate.envKey]) : null;
if (!value) continue;
if (candidate.existingSecretId) {
await ctx.api.post(`/api/secrets/${candidate.existingSecretId}/rotate`, { value });
createdOrRotated.set(`${candidate.agentId}:${candidate.envKey}`, candidate.existingSecretId);
rotatedSecrets += 1;
continue;
}
const created = await ctx.api.post<CompanySecret>(`/api/companies/${companyId}/secrets`, {
name: candidate.secretName,
provider: "local_encrypted",
value,
description: `Migrated from agent ${candidate.agentId} env ${candidate.envKey}`,
});
if (!created) throw new Error(`Secret create returned no data for ${candidate.secretName}`);
createdOrRotated.set(`${candidate.agentId}:${candidate.envKey}`, created.id);
createdSecrets += 1;
}
let updatedAgents = 0;
for (const agent of agents) {
const env = asRecord(agent.adapterConfig.env);
if (!env) continue;
const secretIdByEnvKey = new Map<string, string>();
for (const key of Object.keys(env)) {
const secretId = createdOrRotated.get(`${agent.id}:${key}`);
if (secretId) secretIdByEnvKey.set(key, secretId);
}
if (secretIdByEnvKey.size === 0) continue;
const adapterConfig = {
...agent.adapterConfig,
env: buildMigratedAgentEnv(env, secretIdByEnvKey),
};
await ctx.api.patch(`/api/agents/${agent.id}`, {
adapterConfig,
replaceAdapterConfig: true,
});
updatedAgents += 1;
}
printOutput(
{
apply: true,
updatedAgents,
createdSecrets,
rotatedSecrets,
},
{ json: ctx.json },
);
}
export function registerSecretCommands(program: Command): void {
const secrets = program.command("secrets").description("Secret declaration and provider operations");
addCommonClientOptions(
secrets
.command("list")
.description("List secret metadata for a company")
.requiredOption("-C, --company-id <id>", "Company ID")
.action(async (opts: SecretListOptions) => {
try {
const ctx = resolveCommandContext(opts, { requireCompany: true });
const rows = (await ctx.api.get<CompanySecret[]>(`/api/companies/${ctx.companyId}/secrets`)) ?? [];
printOutput(ctx.json ? rows : rows.map(renderSecret), { json: ctx.json });
} catch (err) {
handleCommandError(err);
}
}),
);
addCommonClientOptions(
secrets
.command("declarations")
.description("List portable env declarations emitted by company export")
.requiredOption("-C, --company-id <id>", "Company ID")
.option("--include <values>", "Comma-separated include set: company,agents,projects,issues,tasks,skills", "company,agents,projects")
.option("--kind <kind>", "Filter declarations: all | secret | plain", "all")
.action(async (opts: SecretDeclarationsOptions) => {
try {
const ctx = resolveCommandContext(opts, { requireCompany: true });
const kind = opts.kind ?? "all";
if (!["all", "secret", "plain"].includes(kind)) {
throw new Error("Invalid --kind value. Use: all, secret, plain");
}
const preview = await ctx.api.post<CompanyPortabilityExportPreviewResult>(
`/api/companies/${ctx.companyId}/exports/preview`,
{ include: parseSecretsInclude(opts.include) },
);
const declarations = (preview?.manifest.envInputs ?? [])
.filter((entry) => kind === "all" || entry.kind === kind);
printOutput(ctx.json ? declarations : declarations.map(renderDeclaration), { json: ctx.json });
} catch (err) {
handleCommandError(err);
}
}),
);
addCommonClientOptions(
secrets
.command("create")
.description("Create a Paperclip-managed secret")
.requiredOption("-C, --company-id <id>", "Company ID")
.requiredOption("--name <name>", "Secret display name")
.option("--key <key>", "Portable secret key")
.option("--provider <provider>", "Secret provider id")
.option("--value <value>", "Secret value")
.option("--value-env <name>", "Read secret value from an environment variable")
.option("--description <text>", "Description")
.action(async (opts: SecretCreateOptions) => {
try {
const ctx = resolveCommandContext(opts, { requireCompany: true });
const created = await ctx.api.post<CompanySecret>(`/api/companies/${ctx.companyId}/secrets`, {
name: opts.name,
key: opts.key,
provider: opts.provider,
value: readValueFromOptions(opts),
description: opts.description,
});
if (!created) throw new Error("Secret create returned no data.");
printOutput(ctx.json ? created : renderSecret(created), { json: ctx.json });
} catch (err) {
handleCommandError(err);
}
}),
);
addCommonClientOptions(
secrets
.command("link")
.description("Link an external provider-owned secret without storing its value in Paperclip")
.requiredOption("-C, --company-id <id>", "Company ID")
.requiredOption("--name <name>", "Secret display name")
.requiredOption("--provider <provider>", "Secret provider id")
.requiredOption("--external-ref <ref>", "Provider secret ARN/name/path/reference")
.option("--key <key>", "Portable secret key")
.option("--provider-version-ref <ref>", "Provider version id or label")
.option("--description <text>", "Description")
.action(async (opts: SecretLinkOptions) => {
try {
const ctx = resolveCommandContext(opts, { requireCompany: true });
const created = await ctx.api.post<CompanySecret>(`/api/companies/${ctx.companyId}/secrets`, {
name: opts.name,
key: opts.key,
provider: opts.provider,
managedMode: "external_reference",
externalRef: opts.externalRef,
providerVersionRef: opts.providerVersionRef,
description: opts.description,
});
if (!created) throw new Error("Secret link returned no data.");
printOutput(ctx.json ? created : renderSecret(created), { json: ctx.json });
} catch (err) {
handleCommandError(err);
}
}),
);
addCommonClientOptions(
secrets
.command("doctor")
.description("Run secret provider health checks through the Paperclip API")
.requiredOption("-C, --company-id <id>", "Company ID")
.action(async (opts: SecretDoctorOptions) => {
try {
const ctx = resolveCommandContext(opts, { requireCompany: true });
const health = await ctx.api.get<SecretProviderHealthResponse>(
`/api/companies/${ctx.companyId}/secret-providers/health`,
);
printProviderHealth(health?.providers ?? [], ctx.json);
} catch (err) {
handleCommandError(err);
}
}),
);
addCommonClientOptions(
secrets
.command("providers")
.description("List configured secret provider descriptors")
.requiredOption("-C, --company-id <id>", "Company ID")
.action(async (opts: SecretDoctorOptions) => {
try {
const ctx = resolveCommandContext(opts, { requireCompany: true });
const rows = (await ctx.api.get<SecretProviderDescriptor[]>(
`/api/companies/${ctx.companyId}/secret-providers`,
)) ?? [];
printOutput(rows, { json: ctx.json });
} catch (err) {
handleCommandError(err);
}
}),
);
addCommonClientOptions(
secrets
.command("migrate-inline-env")
.description("Migrate inline sensitive agent env values into secret references")
.requiredOption("-C, --company-id <id>", "Company ID")
.option("--apply", "Persist changes; default is a dry run", false)
.action(async (opts: SecretMigrateInlineEnvOptions) => {
try {
await migrateInlineEnv(opts);
} catch (err) {
handleCommandError(err);
}
}),
);
}

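The migration path above replaces each inline env value with a `secret_ref` binding pointing at the newly created or rotated secret. A self-contained sketch of that transformation (types simplified from the source; the `EnvBinding` shape here is narrowed to what the example needs):

```typescript
// Simplified sketch of buildMigratedAgentEnv from the file above.
// EnvBinding is reduced to the two shapes this example exercises.
type EnvBinding = string | { type: "secret_ref"; secretId: string; version: string };

function buildMigratedAgentEnv(
  env: Record<string, EnvBinding>,
  secretIdByEnvKey: Map<string, string>,
): Record<string, EnvBinding> {
  const next = { ...env };
  for (const [envKey, secretId] of secretIdByEnvKey) {
    // Each migrated key now references the managed secret instead of an inline value.
    next[envKey] = { type: "secret_ref", secretId, version: "latest" };
  }
  return next;
}

const migrated = buildMigratedAgentEnv(
  { OPENAI_API_KEY: "sk-inline", LOG_LEVEL: "debug" },
  new Map([["OPENAI_API_KEY", "sec_123"]]),
);
console.log(migrated); // OPENAI_API_KEY becomes a secret_ref; LOG_LEVEL is untouched
```

Unmigrated keys pass through unchanged, which is why the dry run reports candidates per env key rather than per agent.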

@@ -1,174 +0,0 @@
import path from "node:path";
import type { Command } from "commander";
import * as p from "@clack/prompts";
import pc from "picocolors";
import {
buildSshEnvLabFixtureConfig,
getSshEnvLabSupport,
readSshEnvLabFixtureStatus,
startSshEnvLabFixture,
stopSshEnvLabFixture,
} from "@paperclipai/adapter-utils/ssh";
import { resolvePaperclipInstanceId, resolvePaperclipInstanceRoot } from "../config/home.js";
export function resolveEnvLabSshStatePath(instanceId?: string): string {
const resolvedInstanceId = resolvePaperclipInstanceId(instanceId);
return path.resolve(
resolvePaperclipInstanceRoot(resolvedInstanceId),
"env-lab",
"ssh-fixture",
"state.json",
);
}
function printJson(value: unknown) {
process.stdout.write(`${JSON.stringify(value, null, 2)}\n`);
}
function summarizeFixture(state: {
host: string;
port: number;
username: string;
workspaceDir: string;
sshdLogPath: string;
}) {
p.log.message(`Host: ${pc.cyan(state.host)}:${pc.cyan(String(state.port))}`);
p.log.message(`User: ${pc.cyan(state.username)}`);
p.log.message(`Workspace: ${pc.cyan(state.workspaceDir)}`);
p.log.message(`Log: ${pc.dim(state.sshdLogPath)}`);
}
export async function collectEnvLabDoctorStatus(opts: { instance?: string }) {
const statePath = resolveEnvLabSshStatePath(opts.instance);
const [sshSupport, sshStatus] = await Promise.all([
getSshEnvLabSupport(),
readSshEnvLabFixtureStatus(statePath),
]);
const environment = sshStatus.state ? await buildSshEnvLabFixtureConfig(sshStatus.state) : null;
return {
statePath,
ssh: {
supported: sshSupport.supported,
reason: sshSupport.reason,
running: sshStatus.running,
state: sshStatus.state,
environment,
},
};
}
export async function envLabUpCommand(opts: { instance?: string; json?: boolean }) {
const statePath = resolveEnvLabSshStatePath(opts.instance);
const state = await startSshEnvLabFixture({ statePath });
const environment = await buildSshEnvLabFixtureConfig(state);
if (opts.json) {
printJson({ state, environment });
return;
}
p.log.success("SSH env-lab fixture is running.");
summarizeFixture(state);
p.log.message(`State: ${pc.dim(statePath)}`);
}
export async function envLabStatusCommand(opts: { instance?: string; json?: boolean }) {
const statePath = resolveEnvLabSshStatePath(opts.instance);
const status = await readSshEnvLabFixtureStatus(statePath);
const environment = status.state ? await buildSshEnvLabFixtureConfig(status.state) : null;
if (opts.json) {
printJson({ ...status, environment, statePath });
return;
}
if (!status.state || !status.running) {
p.log.info(`SSH env-lab fixture is not running (${pc.dim(statePath)}).`);
return;
}
p.log.success("SSH env-lab fixture is running.");
summarizeFixture(status.state);
p.log.message(`State: ${pc.dim(statePath)}`);
}
export async function envLabDownCommand(opts: { instance?: string; json?: boolean }) {
const statePath = resolveEnvLabSshStatePath(opts.instance);
const stopped = await stopSshEnvLabFixture(statePath);
if (opts.json) {
printJson({ stopped, statePath });
return;
}
if (!stopped) {
p.log.info(`No SSH env-lab fixture was running (${pc.dim(statePath)}).`);
return;
}
p.log.success("SSH env-lab fixture stopped.");
p.log.message(`State: ${pc.dim(statePath)}`);
}
export async function envLabDoctorCommand(opts: { instance?: string; json?: boolean }) {
const status = await collectEnvLabDoctorStatus(opts);
if (opts.json) {
printJson(status);
return;
}
if (status.ssh.supported) {
p.log.success("SSH fixture prerequisites are installed.");
} else {
p.log.warn(`SSH fixture prerequisites are incomplete: ${status.ssh.reason ?? "unknown reason"}`);
}
if (status.ssh.state && status.ssh.running) {
p.log.success("SSH env-lab fixture is running.");
summarizeFixture(status.ssh.state);
p.log.message(`Private key: ${pc.dim(status.ssh.state.clientPrivateKeyPath)}`);
p.log.message(`Known hosts: ${pc.dim(status.ssh.state.knownHostsPath)}`);
} else if (status.ssh.state) {
p.log.warn("SSH env-lab fixture state exists, but the process is not running.");
p.log.message(`State: ${pc.dim(status.statePath)}`);
} else {
p.log.info("SSH env-lab fixture is not running.");
p.log.message(`State: ${pc.dim(status.statePath)}`);
}
p.log.message(`Cleanup: ${pc.dim("pnpm paperclipai env-lab down")}`);
}
export function registerEnvLabCommands(program: Command) {
const envLab = program.command("env-lab").description("Deterministic local environment fixtures");
envLab
.command("up")
.description("Start the default SSH env-lab fixture")
.option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
.option("--json", "Print machine-readable fixture details")
.action(envLabUpCommand);
envLab
.command("status")
.description("Show the current SSH env-lab fixture state")
.option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
.option("--json", "Print machine-readable fixture details")
.action(envLabStatusCommand);
envLab
.command("down")
.description("Stop the default SSH env-lab fixture")
.option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
.option("--json", "Print machine-readable stop details")
.action(envLabDownCommand);
envLab
.command("doctor")
.description("Check SSH fixture prerequisites and current status")
.option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
.option("--json", "Print machine-readable diagnostic details")
.action(envLabDoctorCommand);
}


@@ -75,6 +75,11 @@ function nonEmpty(value: string | null | undefined): string | null {
return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}
function isLoopbackHost(hostname: string): boolean {
const value = hostname.trim().toLowerCase();
// The WHATWG URL API keeps brackets on IPv6 hostnames, so match both "[::1]" and "::1".
return value === "127.0.0.1" || value === "localhost" || value === "::1" || value === "[::1]";
}
export function sanitizeWorktreeInstanceId(rawValue: string): string {
const trimmed = rawValue.trim().toLowerCase();
const normalized = trimmed
@@ -163,8 +168,7 @@ export function rewriteLocalUrlPort(rawUrl: string | undefined, port: number): s
if (!rawUrl) return undefined;
try {
const parsed = new URL(rawUrl);
// The URL API normalizes default ports like :80/:443 to "", so treat them as stable URLs.
if (!parsed.port) return rawUrl;
if (!isLoopbackHost(parsed.hostname)) return rawUrl;
parsed.port = String(port);
return parsed.toString();
} catch {

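For reference, the loopback-only guard introduced in this hunk makes `rewriteLocalUrlPort` behave as sketched below (a self-contained sketch mirroring the hunk; the catch-arm fallback returning the original URL is assumed from the truncated context, and `"[::1]"` is matched because the WHATWG URL API keeps brackets on IPv6 hostnames):

```typescript
// Sketch of the loopback-aware port rewrite shown in the hunk above.
function isLoopbackHost(hostname: string): boolean {
  const value = hostname.trim().toLowerCase();
  return value === "127.0.0.1" || value === "localhost" || value === "::1" || value === "[::1]";
}

function rewriteLocalUrlPort(rawUrl: string | undefined, port: number): string | undefined {
  if (!rawUrl) return undefined;
  try {
    const parsed = new URL(rawUrl);
    // The URL API normalizes default ports like :80/:443 to "", so treat them as stable URLs.
    if (!parsed.port) return rawUrl;
    if (!isLoopbackHost(parsed.hostname)) return rawUrl;
    parsed.port = String(port);
    return parsed.toString();
  } catch {
    return rawUrl; // unparseable URLs are passed through untouched (assumed fallback)
  }
}

console.log(rewriteLocalUrlPort("http://127.0.0.1:3000/app", 4100)); // loopback: port rewritten
console.log(rewriteLocalUrlPort("https://example.com:8443/", 4100)); // non-loopback: unchanged
```

Only explicit-port loopback URLs are rewritten, so externally reachable workspace URLs copied into a seeded worktree stay stable.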

@@ -93,7 +93,6 @@ type WorktreeInitOptions = {
dbPort?: number;
seed?: boolean;
seedMode?: string;
preserveLiveWork?: boolean;
force?: boolean;
};
@@ -127,23 +126,10 @@ type WorktreeReseedOptions = {
fromDataDir?: string;
fromInstance?: string;
seedMode?: string;
preserveLiveWork?: boolean;
yes?: boolean;
allowLiveTarget?: boolean;
};
type WorktreeRepairOptions = {
branch?: string;
home?: string;
fromConfig?: string;
fromDataDir?: string;
fromInstance?: string;
seedMode?: string;
preserveLiveWork?: boolean;
noSeed?: boolean;
allowLiveTarget?: boolean;
};
type EmbeddedPostgresInstance = {
initialise(): Promise<void>;
start(): Promise<void>;
@@ -182,8 +168,6 @@ type CopiedGitHooksResult = {
type SeedWorktreeDatabaseResult = {
backupSummary: string;
pausedScheduledRoutines: number;
executionQuarantine: SeededWorktreeExecutionQuarantineSummary;
reboundWorkspaces: Array<{
name: string;
fromCwd: string;
@@ -191,14 +175,6 @@ type SeedWorktreeDatabaseResult = {
}>;
};
export type SeededWorktreeExecutionQuarantineSummary = {
disabledTimerHeartbeats: number;
resetRunningAgents: number;
quarantinedInProgressIssues: number;
unassignedTodoIssues: number;
unassignedReviewIssues: number;
};
function nonEmpty(value: string | null | undefined): string | null {
return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}
@@ -211,18 +187,6 @@ function isCurrentSourceConfigPath(sourceConfigPath: string): boolean {
return path.resolve(currentConfigPath) === path.resolve(sourceConfigPath);
}
function formatSeededWorktreeExecutionQuarantineSummary(
summary: SeededWorktreeExecutionQuarantineSummary,
): string {
return [
`disabled timer heartbeats: ${summary.disabledTimerHeartbeats}`,
`reset running agents: ${summary.resetRunningAgents}`,
`quarantined in-progress issues: ${summary.quarantinedInProgressIssues}`,
`unassigned todo issues: ${summary.unassignedTodoIssues}`,
`unassigned review issues: ${summary.unassignedReviewIssues}`,
].join(", ");
}
const WORKTREE_NAME_PREFIX = "paperclip-";
function resolveWorktreeMakeName(name: string): string {
@@ -586,46 +550,6 @@ function detectGitBranchName(cwd: string): string | null {
}
}
function validateGitBranchName(cwd: string, branchName: string): string {
const value = nonEmpty(branchName);
if (!value) {
throw new Error("Branch name is required.");
}
try {
execFileSync("git", ["check-ref-format", "--branch", value], {
cwd,
stdio: ["ignore", "pipe", "pipe"],
});
} catch (error) {
throw new Error(`Invalid branch name "${branchName}": ${extractExecSyncErrorMessage(error) ?? String(error)}`);
}
return value;
}
function isPrimaryGitWorktree(cwd: string): boolean {
const workspace = detectGitWorkspaceInfo(cwd);
return Boolean(workspace && workspace.gitDir === workspace.commonDir);
}
function resolvePrimaryGitRepoRoot(cwd: string): string {
const workspace = detectGitWorkspaceInfo(cwd);
if (!workspace) {
throw new Error("Current directory is not inside a git repository.");
}
if (workspace.gitDir === workspace.commonDir) {
return workspace.root;
}
return path.resolve(workspace.commonDir, "..");
}
function resolveRepairWorktreeDirName(branchName: string): string {
const normalized = branchName.trim()
.replace(/[^A-Za-z0-9._-]+/g, "-")
.replace(/-+/g, "-")
.replace(/^[-._]+|[-._]+$/g, "");
return normalized || "worktree";
}
function detectGitWorkspaceInfo(cwd: string): GitWorkspaceInfo | null {
try {
const root = execFileSync("git", ["rev-parse", "--show-toplevel"], {
@@ -849,21 +773,6 @@ export function resolveWorktreeReseedSource(input: WorktreeReseedOptions): Resol
);
}
function resolveWorktreeRepairSource(input: WorktreeRepairOptions): ResolvedWorktreeReseedSource {
const fromConfig = nonEmpty(input.fromConfig);
const fromDataDir = nonEmpty(input.fromDataDir);
const fromInstance = nonEmpty(input.fromInstance) ?? "default";
const configPath = resolveSourceConfigPath({
fromConfig: fromConfig ?? undefined,
fromDataDir: fromDataDir ?? undefined,
fromInstance,
});
return {
configPath,
label: configPath,
};
}
export function resolveWorktreeReseedTargetPaths(input: {
configPath: string;
rootPath: string;
@@ -885,105 +794,6 @@ export function resolveWorktreeReseedTargetPaths(input: {
});
}
function resolveExistingGitWorktree(selector: string, cwd: string): MergeSourceChoice | null {
const trimmed = selector.trim();
if (trimmed.length === 0) return null;
const directPath = path.resolve(trimmed);
if (existsSync(directPath)) {
return {
worktree: directPath,
branch: null,
branchLabel: path.basename(directPath),
hasPaperclipConfig: existsSync(path.resolve(directPath, ".paperclip", "config.json")),
isCurrent: directPath === path.resolve(cwd),
};
}
return toMergeSourceChoices(cwd).find((choice) =>
choice.worktree === directPath
|| path.basename(choice.worktree) === trimmed
|| choice.branchLabel === trimmed
|| choice.branch === trimmed,
) ?? null;
}
async function ensureRepairTargetWorktree(input: {
selector?: string;
seedMode: WorktreeSeedMode;
opts: WorktreeRepairOptions;
}): Promise<ResolvedWorktreeRepairTarget | null> {
const cwd = process.cwd();
const currentRoot = path.resolve(cwd);
const currentConfigPath = path.resolve(currentRoot, ".paperclip", "config.json");
if (!input.selector) {
if (isPrimaryGitWorktree(cwd)) {
return null;
}
return {
rootPath: currentRoot,
configPath: currentConfigPath,
label: path.basename(currentRoot),
branchName: detectGitBranchName(cwd),
created: false,
};
}
const existing = resolveExistingGitWorktree(input.selector, cwd);
if (existing) {
return {
rootPath: existing.worktree,
configPath: path.resolve(existing.worktree, ".paperclip", "config.json"),
label: existing.branchLabel,
branchName: existing.branchLabel === "(detached)" ? null : existing.branchLabel,
created: false,
};
}
const repoRoot = resolvePrimaryGitRepoRoot(cwd);
const branchName = validateGitBranchName(repoRoot, input.selector);
const targetPath = path.resolve(
repoRoot,
".paperclip",
"worktrees",
resolveRepairWorktreeDirName(branchName),
);
if (existsSync(targetPath)) {
throw new Error(`Target path already exists but is not a registered git worktree: ${targetPath}`);
}
mkdirSync(path.dirname(targetPath), { recursive: true });
const spinner = p.spinner();
spinner.start(`Creating git worktree for ${branchName}...`);
try {
execFileSync("git", resolveGitWorktreeAddArgs({
branchName,
targetPath,
branchExists: localBranchExists(repoRoot, branchName),
}), {
cwd: repoRoot,
stdio: ["ignore", "pipe", "pipe"],
});
spinner.stop(`Created git worktree at ${targetPath}.`);
} catch (error) {
spinner.stop(pc.red("Failed to create git worktree."));
throw new Error(extractExecSyncErrorMessage(error) ?? String(error));
}
installDependenciesBestEffort(targetPath);
return {
rootPath: targetPath,
configPath: path.resolve(targetPath, ".paperclip", "config.json"),
label: branchName,
branchName,
created: true,
};
}
function resolveSourceConnectionString(config: PaperclipConfig, envEntries: Record<string, string>, portOverride?: number): string {
if (config.database.mode === "postgres") {
const connectionString = nonEmpty(envEntries.DATABASE_URL) ?? nonEmpty(config.database.connectionString);
@@ -1144,133 +954,6 @@ export async function pauseSeededScheduledRoutines(connectionString: string): Pr
}
}
const EMPTY_SEEDED_WORKTREE_EXECUTION_QUARANTINE_SUMMARY: SeededWorktreeExecutionQuarantineSummary = {
disabledTimerHeartbeats: 0,
resetRunningAgents: 0,
quarantinedInProgressIssues: 0,
unassignedTodoIssues: 0,
unassignedReviewIssues: 0,
};
function isRecord(value: unknown): value is Record<string, unknown> {
return Boolean(value) && typeof value === "object" && !Array.isArray(value);
}
function isEnabledValue(value: unknown): boolean {
return value === true || value === "true" || value === 1 || value === "1";
}
function normalizeWorktreeRuntimeConfig(runtimeConfig: unknown): {
runtimeConfig: Record<string, unknown>;
disabledTimerHeartbeat: boolean;
changed: boolean;
} {
const nextRuntimeConfig = isRecord(runtimeConfig) ? { ...runtimeConfig } : {};
const heartbeat = isRecord(nextRuntimeConfig.heartbeat) ? { ...nextRuntimeConfig.heartbeat } : null;
if (!heartbeat) {
return { runtimeConfig: nextRuntimeConfig, disabledTimerHeartbeat: false, changed: false };
}
const disabledTimerHeartbeat = isEnabledValue(heartbeat.enabled);
if (heartbeat.enabled !== false) {
heartbeat.enabled = false;
nextRuntimeConfig.heartbeat = heartbeat;
return { runtimeConfig: nextRuntimeConfig, disabledTimerHeartbeat, changed: true };
}
return { runtimeConfig: nextRuntimeConfig, disabledTimerHeartbeat: false, changed: false };
}
export async function quarantineSeededWorktreeExecutionState(
connectionString: string,
): Promise<SeededWorktreeExecutionQuarantineSummary> {
const db = createDb(connectionString);
const summary = { ...EMPTY_SEEDED_WORKTREE_EXECUTION_QUARANTINE_SUMMARY };
try {
await db.transaction(async (tx) => {
const seededAgents = await tx
.select({
id: agents.id,
status: agents.status,
runtimeConfig: agents.runtimeConfig,
})
.from(agents);
for (const agent of seededAgents) {
const normalized = normalizeWorktreeRuntimeConfig(agent.runtimeConfig);
const nextStatus = agent.status === "running" ? "idle" : agent.status;
if (normalized.disabledTimerHeartbeat) {
summary.disabledTimerHeartbeats += 1;
}
if (agent.status === "running") {
summary.resetRunningAgents += 1;
}
if (normalized.changed || nextStatus !== agent.status) {
await tx
.update(agents)
.set({
runtimeConfig: normalized.runtimeConfig,
status: nextStatus,
updatedAt: new Date(),
})
.where(eq(agents.id, agent.id));
}
}
const affectedIssues = await tx
.select({
id: issues.id,
companyId: issues.companyId,
status: issues.status,
})
.from(issues)
.where(
and(
sql`${issues.assigneeAgentId} is not null`,
sql`${issues.assigneeUserId} is null`,
inArray(issues.status, ["todo", "in_progress", "in_review"]),
),
);
for (const issue of affectedIssues) {
const nextStatus = issue.status === "in_progress" ? "blocked" : issue.status;
await tx
.update(issues)
.set({
status: nextStatus,
assigneeAgentId: null,
checkoutRunId: null,
executionRunId: null,
executionAgentNameKey: null,
executionLockedAt: null,
executionWorkspaceId: null,
updatedAt: new Date(),
})
.where(eq(issues.id, issue.id));
if (issue.status === "in_progress") {
summary.quarantinedInProgressIssues += 1;
await tx.insert(issueComments).values({
companyId: issue.companyId,
issueId: issue.id,
body:
"Quarantined during worktree seed so copied in-flight work does not auto-run in this isolated instance. " +
"Reassign or unblock here only if you intentionally want the worktree instance to own this task.",
});
} else if (issue.status === "todo") {
summary.unassignedTodoIssues += 1;
} else if (issue.status === "in_review") {
summary.unassignedReviewIssues += 1;
}
}
});
return summary;
} finally {
await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
}
}
async function seedWorktreeDatabase(input: {
sourceConfigPath: string;
sourceConfig: PaperclipConfig;
@@ -1278,7 +961,6 @@ async function seedWorktreeDatabase(input: {
targetPaths: WorktreeLocalPaths;
instanceId: string;
seedMode: WorktreeSeedMode;
preserveLiveWork?: boolean;
}): Promise<SeedWorktreeDatabaseResult> {
const seedPlan = resolveWorktreeSeedPlan(input.seedMode);
const sourceEnvFile = resolvePaperclipEnvFile(input.sourceConfigPath);
@@ -1311,7 +993,6 @@ async function seedWorktreeDatabase(input: {
backupDir: path.resolve(input.targetPaths.backupDir, "seed"),
retention: { dailyDays: 7, weeklyWeeks: 4, monthlyMonths: 1 },
filenamePrefix: `${input.instanceId}-seed`,
backupEngine: "javascript",
includeMigrationJournal: true,
excludeTables: seedPlan.excludedTables,
nullifyColumns: seedPlan.nullifyColumns,
@@ -1330,10 +1011,7 @@ async function seedWorktreeDatabase(input: {
backupFile: backup.backupFile,
});
await applyPendingMigrations(targetConnectionString);
const executionQuarantine = input.preserveLiveWork
? { ...EMPTY_SEEDED_WORKTREE_EXECUTION_QUARANTINE_SUMMARY }
: await quarantineSeededWorktreeExecutionState(targetConnectionString);
const pausedScheduledRoutines = await pauseSeededScheduledRoutines(targetConnectionString);
await pauseSeededScheduledRoutines(targetConnectionString);
const reboundWorkspaces = await rebindSeededProjectWorkspaces({
targetConnectionString,
currentCwd: input.targetPaths.cwd,
@@ -1341,8 +1019,6 @@ async function seedWorktreeDatabase(input: {
return {
backupSummary: formatDatabaseBackupResult(backup),
pausedScheduledRoutines,
executionQuarantine,
reboundWorkspaces,
};
} finally {
@@ -1421,8 +1097,6 @@ async function runWorktreeInit(opts: WorktreeInitOptions): Promise<void> {
const copiedGitHooks = copyGitHooksToWorktreeGitDir(cwd);
let seedSummary: string | null = null;
let seedExecutionQuarantineSummary: SeededWorktreeExecutionQuarantineSummary | null = null;
let pausedScheduledRoutineCount: number | null = null;
let reboundWorkspaceSummary: SeedWorktreeDatabaseResult["reboundWorkspaces"] = [];
if (opts.seed !== false) {
if (!sourceConfig) {
@@ -1440,11 +1114,8 @@ async function runWorktreeInit(opts: WorktreeInitOptions): Promise<void> {
targetPaths: paths,
instanceId,
seedMode,
preserveLiveWork: opts.preserveLiveWork,
});
seedSummary = seeded.backupSummary;
seedExecutionQuarantineSummary = seeded.executionQuarantine;
pausedScheduledRoutineCount = seeded.pausedScheduledRoutines;
reboundWorkspaceSummary = seeded.reboundWorkspaces;
spinner.stop(`Seeded isolated worktree database (${seedMode}).`);
} catch (error) {
@@ -1467,16 +1138,6 @@ async function runWorktreeInit(opts: WorktreeInitOptions): Promise<void> {
if (seedSummary) {
p.log.message(pc.dim(`Seed mode: ${seedMode}`));
p.log.message(pc.dim(`Seed snapshot: ${seedSummary}`));
if (opts.preserveLiveWork) {
p.log.warning("Preserved copied live work; this worktree instance may auto-run source-instance assignments.");
} else if (seedExecutionQuarantineSummary) {
p.log.message(
pc.dim(`Seed execution quarantine: ${formatSeededWorktreeExecutionQuarantineSummary(seedExecutionQuarantineSummary)}`),
);
}
if (pausedScheduledRoutineCount != null) {
p.log.message(pc.dim(`Paused scheduled routines: ${pausedScheduledRoutineCount}`));
}
for (const rebound of reboundWorkspaceSummary) {
p.log.message(
pc.dim(`Rebound workspace ${rebound.name}: ${rebound.fromCwd} -> ${rebound.toCwd}`),
@@ -1544,7 +1205,18 @@ export async function worktreeMakeCommand(nameArg: string, opts: WorktreeMakeOpt
throw new Error(extractExecSyncErrorMessage(error) ?? String(error));
}
installDependenciesBestEffort(targetPath);
const installSpinner = p.spinner();
installSpinner.start("Installing dependencies...");
try {
execFileSync("pnpm", ["install"], {
cwd: targetPath,
stdio: ["ignore", "pipe", "pipe"],
});
installSpinner.stop("Installed dependencies.");
} catch (error) {
installSpinner.stop(pc.yellow("Failed to install dependencies (continuing anyway)."));
p.log.warning(extractExecSyncErrorMessage(error) ?? String(error));
}
const originalCwd = process.cwd();
try {
@@ -1561,21 +1233,6 @@ export async function worktreeMakeCommand(nameArg: string, opts: WorktreeMakeOpt
}
}
function installDependenciesBestEffort(targetPath: string): void {
const installSpinner = p.spinner();
installSpinner.start("Installing dependencies...");
try {
execFileSync("pnpm", ["install"], {
cwd: targetPath,
stdio: ["ignore", "pipe", "pipe"],
});
installSpinner.stop("Installed dependencies.");
} catch (error) {
installSpinner.stop(pc.yellow("Failed to install dependencies (continuing anyway)."));
p.log.warning(extractExecSyncErrorMessage(error) ?? String(error));
}
}
type WorktreeCleanupOptions = {
instance?: string;
home?: string;
@@ -1609,14 +1266,6 @@ type ResolvedWorktreeReseedSource = {
label: string;
};
type ResolvedWorktreeRepairTarget = {
rootPath: string;
configPath: string;
label: string;
branchName: string | null;
created: boolean;
};
function parseGitWorktreeList(cwd: string): GitWorktreeListEntry[] {
const raw = execFileSync("git", ["worktree", "list", "--porcelain"], {
cwd,
@@ -3058,7 +2707,10 @@ export async function worktreeMergeHistoryCommand(sourceArg: string | undefined,
}
}
async function runWorktreeReseed(opts: WorktreeReseedOptions): Promise<void> {
export async function worktreeReseedCommand(opts: WorktreeReseedOptions): Promise<void> {
printPaperclipCliBanner();
p.intro(pc.bgCyan(pc.black(" paperclipai worktree reseed ")));
const seedMode = opts.seedMode ?? "full";
if (!isWorktreeSeedMode(seedMode)) {
throw new Error(`Unsupported seed mode "${seedMode}". Expected one of: minimal, full.`);
@@ -3121,20 +2773,11 @@ async function runWorktreeReseed(opts: WorktreeReseedOptions): Promise<void> {
targetPaths,
instanceId: targetPaths.instanceId,
seedMode,
preserveLiveWork: opts.preserveLiveWork,
});
spinner.stop(`Reseeded ${targetEndpoint.label} (${seedMode}).`);
p.log.message(pc.dim(`Source: ${source.configPath}`));
p.log.message(pc.dim(`Target: ${targetEndpoint.configPath}`));
p.log.message(pc.dim(`Seed snapshot: ${seeded.backupSummary}`));
if (opts.preserveLiveWork) {
p.log.warning("Preserved copied live work; this worktree instance may auto-run source-instance assignments.");
} else {
p.log.message(
pc.dim(`Seed execution quarantine: ${formatSeededWorktreeExecutionQuarantineSummary(seeded.executionQuarantine)}`),
);
}
p.log.message(pc.dim(`Paused scheduled routines: ${seeded.pausedScheduledRoutines}`));
for (const rebound of seeded.reboundWorkspaces) {
p.log.message(
pc.dim(`Rebound workspace ${rebound.name}: ${rebound.fromCwd} -> ${rebound.toCwd}`),
@@ -3147,98 +2790,6 @@ async function runWorktreeReseed(opts: WorktreeReseedOptions): Promise<void> {
}
}
export async function worktreeReseedCommand(opts: WorktreeReseedOptions): Promise<void> {
printPaperclipCliBanner();
p.intro(pc.bgCyan(pc.black(" paperclipai worktree reseed ")));
await runWorktreeReseed(opts);
}
export async function worktreeRepairCommand(opts: WorktreeRepairOptions): Promise<void> {
printPaperclipCliBanner();
p.intro(pc.bgCyan(pc.black(" paperclipai worktree repair ")));
const seedMode = opts.seedMode ?? "minimal";
if (!isWorktreeSeedMode(seedMode)) {
throw new Error(`Unsupported seed mode "${seedMode}". Expected one of: minimal, full.`);
}
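`isWorktreeSeedMode` itself lives outside these hunks; a plausible type-guard sketch (hypothetical names, mirroring only the two modes named in the error message above):

```typescript
const WORKTREE_SEED_MODES = ["minimal", "full"] as const;
type WorktreeSeedMode = (typeof WORKTREE_SEED_MODES)[number];

// Hypothetical sketch: narrow an arbitrary string to a supported seed mode.
function isWorktreeSeedMode(value: string): value is WorktreeSeedMode {
  return (WORKTREE_SEED_MODES as readonly string[]).includes(value);
}
```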
const target = await ensureRepairTargetWorktree({
selector: nonEmpty(opts.branch) ?? undefined,
seedMode,
opts,
});
if (!target) {
p.log.warning("Current checkout is the primary repo worktree. Pass --branch to create or repair a linked worktree.");
p.outro(pc.yellow("No worktree repaired."));
return;
}
const source = resolveWorktreeRepairSource(opts);
if (!existsSync(source.configPath)) {
throw new Error(`Source config not found at ${source.configPath}.`);
}
if (path.resolve(source.configPath) === path.resolve(target.configPath)) {
throw new Error("Source and target Paperclip configs are the same. Use --from-config/--from-instance to point repair at a different source.");
}
const targetConfig = existsSync(target.configPath) ? readConfig(target.configPath) : null;
const targetEnvEntries = readPaperclipEnvEntries(resolvePaperclipEnvFile(target.configPath));
const targetHasWorktreeEnv = Boolean(
nonEmpty(targetEnvEntries.PAPERCLIP_HOME) && nonEmpty(targetEnvEntries.PAPERCLIP_INSTANCE_ID),
);
if (targetConfig && targetHasWorktreeEnv && opts.noSeed) {
p.log.message(pc.dim(`Target ${target.label} already has worktree-local config/env. Skipping reseed because --no-seed was passed.`));
p.outro(pc.green(`Worktree metadata already looks healthy for ${target.label}.`));
return;
}
if (targetConfig && targetHasWorktreeEnv) {
await runWorktreeReseed({
fromConfig: source.configPath,
to: target.rootPath,
seedMode,
preserveLiveWork: opts.preserveLiveWork,
yes: true,
allowLiveTarget: opts.allowLiveTarget,
});
return;
}
const repairInstanceId = sanitizeWorktreeInstanceId(path.basename(target.rootPath));
const repairPaths = resolveWorktreeLocalPaths({
cwd: target.rootPath,
homeDir: resolveWorktreeHome(opts.home),
instanceId: repairInstanceId,
});
const runningTargetPid = readRunningPostmasterPid(path.resolve(repairPaths.embeddedPostgresDataDir, "postmaster.pid"));
if (runningTargetPid && !opts.allowLiveTarget) {
throw new Error(
`Target worktree database appears to be running (pid ${runningTargetPid}). Stop Paperclip in ${target.rootPath} before repairing, or re-run with --allow-live-target if you want to override this guard.`,
);
}
if (runningTargetPid && opts.allowLiveTarget) {
p.log.warning(`Proceeding even though the target embedded PostgreSQL appears to be running (pid ${runningTargetPid}).`);
}
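`readRunningPostmasterPid` is not shown in this hunk; a hypothetical sketch of such a guard, assuming `postmaster.pid` stores the server pid on its first line (the PostgreSQL convention):

```typescript
import { existsSync, readFileSync } from "node:fs";

// Hypothetical sketch: read the pid from the first line of postmaster.pid
// and report it only if that process is still alive.
function readRunningPostmasterPid(pidFilePath: string): number | null {
  if (!existsSync(pidFilePath)) return null;
  const firstLine = readFileSync(pidFilePath, "utf8").split("\n")[0]?.trim();
  const pid = Number(firstLine);
  if (!Number.isInteger(pid) || pid <= 0) return null;
  try {
    process.kill(pid, 0); // signal 0 probes for existence without sending a signal
    return pid;
  } catch {
    return null; // stale or inaccessible pid: treated as not running in this sketch
  }
}
```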
const originalCwd = process.cwd();
try {
process.chdir(target.rootPath);
await runWorktreeInit({
home: opts.home,
fromConfig: source.configPath,
fromDataDir: opts.fromDataDir,
fromInstance: opts.fromInstance,
seed: !opts.noSeed,
seedMode,
preserveLiveWork: opts.preserveLiveWork,
force: true,
});
} finally {
process.chdir(originalCwd);
}
}
export function registerWorktreeCommands(program: Command): void {
const worktree = program.command("worktree").description("Worktree-local Paperclip instance helpers");
@@ -3255,7 +2806,6 @@ export function registerWorktreeCommands(program: Command): void {
.option("--server-port <port>", "Preferred server port", (value) => Number(value))
.option("--db-port <port>", "Preferred embedded Postgres port", (value) => Number(value))
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
.option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
.option("--no-seed", "Skip database seeding from the source instance")
.option("--force", "Replace existing repo-local config and isolated instance data", false)
.action(worktreeMakeCommand);
@@ -3272,7 +2822,6 @@ export function registerWorktreeCommands(program: Command): void {
.option("--server-port <port>", "Preferred server port", (value) => Number(value))
.option("--db-port <port>", "Preferred embedded Postgres port", (value) => Number(value))
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
.option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
.option("--no-seed", "Skip database seeding from the source instance")
.option("--force", "Replace existing repo-local config and isolated instance data", false)
.action(worktreeInitCommand);
@@ -3312,25 +2861,10 @@ export function registerWorktreeCommands(program: Command): void {
.option("--from-data-dir <path>", "Source PAPERCLIP_HOME used when deriving the source config")
.option("--from-instance <id>", "Source instance id when deriving the source config")
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: full)", "full")
.option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
.option("--yes", "Skip the destructive confirmation prompt", false)
.option("--allow-live-target", "Override the guard that requires the target worktree DB to be stopped first", false)
.action(worktreeReseedCommand);
worktree
.command("repair")
.description("Create or repair a linked worktree-local Paperclip instance without touching the primary checkout")
.option("--branch <name>", "Existing branch/worktree selector to repair, or a branch name to create under .paperclip/worktrees")
.option("--home <path>", `Home root for worktree instances (env: PAPERCLIP_WORKTREES_DIR, default: ${DEFAULT_WORKTREE_HOME})`)
.option("--from-config <path>", "Source config.json to seed from")
.option("--from-data-dir <path>", "Source PAPERCLIP_HOME used when deriving the source config")
.option("--from-instance <id>", "Source instance id when deriving the source config (default: default)")
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
.option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
.option("--no-seed", "Repair metadata only and skip reseeding when bootstrapping a missing worktree config", false)
.option("--allow-live-target", "Override the guard that requires the target worktree DB to be stopped first", false)
.action(worktreeRepairCommand);
program
.command("worktree:cleanup")
.description("Safely remove a worktree, its branch, and its isolated instance data")

View File

@@ -1,31 +1,32 @@
import os from "node:os";
import path from "node:path";
import {
expandHomePrefix,
resolveDefaultBackupDir as resolveSharedDefaultBackupDir,
resolveDefaultEmbeddedPostgresDir as resolveSharedDefaultEmbeddedPostgresDir,
resolveDefaultLogsDir as resolveSharedDefaultLogsDir,
resolveDefaultSecretsKeyFilePath as resolveSharedDefaultSecretsKeyFilePath,
resolveDefaultStorageDir as resolveSharedDefaultStorageDir,
resolveHomeAwarePath,
resolvePaperclipConfigPathForInstance,
resolvePaperclipHomeDir,
resolvePaperclipInstanceId,
resolvePaperclipInstanceRoot as resolveSharedPaperclipInstanceRoot,
} from "@paperclipai/shared/home-paths";
export {
expandHomePrefix,
resolveHomeAwarePath,
resolvePaperclipHomeDir,
resolvePaperclipInstanceId,
};
const DEFAULT_INSTANCE_ID = "default";
const INSTANCE_ID_RE = /^[a-zA-Z0-9_-]+$/;
export function resolvePaperclipHomeDir(): string {
const envHome = process.env.PAPERCLIP_HOME?.trim();
if (envHome) return path.resolve(expandHomePrefix(envHome));
return path.resolve(os.homedir(), ".paperclip");
}
export function resolvePaperclipInstanceId(override?: string): string {
const raw = override?.trim() || process.env.PAPERCLIP_INSTANCE_ID?.trim() || DEFAULT_INSTANCE_ID;
if (!INSTANCE_ID_RE.test(raw)) {
throw new Error(
`Invalid instance id '${raw}'. Allowed characters: letters, numbers, '_' and '-'.`,
);
}
return raw;
}
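The precedence and validation above can be exercised in isolation; a self-contained sketch (the env lookup is passed in explicitly here so the resolution order is visible):

```typescript
// Standalone sketch of the resolution above: explicit override, then the
// PAPERCLIP_INSTANCE_ID environment value, then the "default" fallback,
// validated by the same character allowlist.
const DEFAULT_INSTANCE_ID = "default";
const INSTANCE_ID_RE = /^[a-zA-Z0-9_-]+$/;

function resolveInstanceId(override?: string, envValue?: string): string {
  const raw = override?.trim() || envValue?.trim() || DEFAULT_INSTANCE_ID;
  if (!INSTANCE_ID_RE.test(raw)) {
    throw new Error(`Invalid instance id '${raw}'.`);
  }
  return raw;
}
```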
export function resolvePaperclipInstanceRoot(instanceId?: string): string {
return resolveSharedPaperclipInstanceRoot({ instanceId });
const id = resolvePaperclipInstanceId(instanceId);
return path.resolve(resolvePaperclipHomeDir(), "instances", id);
}
export function resolveDefaultConfigPath(instanceId?: string): string {
return resolvePaperclipConfigPathForInstance({ instanceId });
return path.resolve(resolvePaperclipInstanceRoot(instanceId), "config.json");
}
export function resolveDefaultContextPath(): string {
@@ -37,23 +38,29 @@ export function resolveDefaultCliAuthPath(): string {
}
export function resolveDefaultEmbeddedPostgresDir(instanceId?: string): string {
return resolveSharedDefaultEmbeddedPostgresDir({ instanceId });
return path.resolve(resolvePaperclipInstanceRoot(instanceId), "db");
}
export function resolveDefaultLogsDir(instanceId?: string): string {
return resolveSharedDefaultLogsDir({ instanceId });
return path.resolve(resolvePaperclipInstanceRoot(instanceId), "logs");
}
export function resolveDefaultSecretsKeyFilePath(instanceId?: string): string {
return resolveSharedDefaultSecretsKeyFilePath({ instanceId });
return path.resolve(resolvePaperclipInstanceRoot(instanceId), "secrets", "master.key");
}
export function resolveDefaultStorageDir(instanceId?: string): string {
return resolveSharedDefaultStorageDir({ instanceId });
return path.resolve(resolvePaperclipInstanceRoot(instanceId), "data", "storage");
}
export function resolveDefaultBackupDir(instanceId?: string): string {
return resolveSharedDefaultBackupDir({ instanceId });
return path.resolve(resolvePaperclipInstanceRoot(instanceId), "data", "backups");
}
export function expandHomePrefix(value: string): string {
if (value === "~") return os.homedir();
if (value.startsWith("~/")) return path.resolve(os.homedir(), value.slice(2));
return value;
}
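The expansion rules above, reproduced as a runnable check: a bare `~` and a `~/` prefix expand against the current home directory; anything else passes through untouched.

```typescript
import os from "node:os";
import path from "node:path";

// Same logic as expandHomePrefix above, duplicated so the check is self-contained.
function expandHomePrefix(value: string): string {
  if (value === "~") return os.homedir();
  if (value.startsWith("~/")) return path.resolve(os.homedir(), value.slice(2));
  return value;
}
```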
export function describeLocalInstancePaths(instanceId?: string) {

View File

@@ -8,7 +8,6 @@ import { heartbeatRun } from "./commands/heartbeat-run.js";
import { runCommand } from "./commands/run.js";
import { bootstrapCeoInvite } from "./commands/auth-bootstrap-ceo.js";
import { dbBackupCommand } from "./commands/db-backup.js";
import { registerEnvLabCommands } from "./commands/env-lab.js";
import { registerContextCommands } from "./commands/client/context.js";
import { registerCompanyCommands } from "./commands/client/company.js";
import { registerIssueCommands } from "./commands/client/issue.js";
@@ -18,7 +17,6 @@ import { registerActivityCommands } from "./commands/client/activity.js";
import { registerDashboardCommands } from "./commands/client/dashboard.js";
import { registerRoutineCommands } from "./commands/routines.js";
import { registerFeedbackCommands } from "./commands/client/feedback.js";
import { registerSecretCommands } from "./commands/client/secrets.js";
import { applyDataDirOverride, type DataDirOptionLike } from "./config/data-dir.js";
import { loadPaperclipEnvFile } from "./config/env.js";
import { initTelemetryFromConfigFile, flushTelemetry } from "./telemetry.js";
@@ -148,9 +146,7 @@ registerActivityCommands(program);
registerDashboardCommands(program);
registerRoutineCommands(program);
registerFeedbackCommands(program);
registerSecretCommands(program);
registerWorktreeCommands(program);
registerEnvLabCommands(program);
registerPluginCommands(program);
const auth = program.command("auth").description("Authentication and bootstrap utilities");

View File

@@ -32,7 +32,7 @@ export async function promptSecrets(current?: SecretsConfig): Promise<SecretsCon
{
value: "aws_secrets_manager" as const,
label: "AWS Secrets Manager",
hint: "requires runtime AWS credentials and provider env config",
hint: "requires external adapter integration",
},
{
value: "gcp_secret_manager" as const,
@@ -84,9 +84,7 @@ export async function promptSecrets(current?: SecretsConfig): Promise<SecretsCon
if (provider !== "local_encrypted") {
p.note(
provider === "aws_secrets_manager"
? "AWS credentials must come from the Paperclip server runtime (IAM role/workload identity, AWS_PROFILE/SSO/shared credentials, or short-lived shell env), not from Paperclip company secrets."
: `${provider} is not fully wired in this build yet. Keep local_encrypted unless you are actively implementing that adapter.`,
`${provider} is not fully wired in this build yet. Keep local_encrypted unless you are actively implementing that adapter.`,
"Heads up",
);
}

View File

@@ -4,5 +4,5 @@
"outDir": "dist",
"rootDir": ".."
},
"include": ["src", "../packages/shared/src", "../packages/plugins/create-paperclip-plugin/src"]
"include": ["src", "../packages/shared/src"]
}

View File

@@ -2,7 +2,7 @@
Paperclip CLI now supports both:
- instance setup/diagnostics (`onboard`, `doctor`, `configure`, `env`, `allowed-hostname`, `env-lab`)
- instance setup/diagnostics (`onboard`, `doctor`, `configure`, `env`, `allowed-hostname`)
- control-plane client operations (issues, approvals, agents, activity, dashboard)
## Base Usage
@@ -45,15 +45,6 @@ Allow an authenticated/private hostname (for example custom Tailscale DNS):
pnpm paperclipai allowed-hostname dotta-macbook-pro
```
Bring up the default local SSH fixture for environment testing:
```sh
pnpm paperclipai env-lab up
pnpm paperclipai env-lab doctor
pnpm paperclipai env-lab status --json
pnpm paperclipai env-lab down
```
All client commands support:
- `--data-dir <path>`
@@ -143,32 +134,6 @@ pnpm paperclipai agent local-cli codexcoder --company-id <company-id>
pnpm paperclipai agent local-cli claudecoder --company-id <company-id>
```
## Secrets Commands
```sh
pnpm paperclipai secrets list --company-id <company-id>
pnpm paperclipai secrets declarations --company-id <company-id> [--include agents,projects] [--kind secret]
pnpm paperclipai secrets create --company-id <company-id> --name anthropic-api-key --value-env ANTHROPIC_API_KEY
pnpm paperclipai secrets link --company-id <company-id> --name prod-stripe-key --provider aws_secrets_manager --external-ref <provider-ref>
pnpm paperclipai secrets doctor --company-id <company-id>
pnpm paperclipai secrets migrate-inline-env --company-id <company-id> [--apply]
```
Secret listing and declarations never print secret values. `create` accepts
`--value-env` so shell history does not capture the value. `link` records
provider-owned references without copying the secret value into Paperclip.
For AWS-backed secrets, `secrets doctor` reports missing non-secret provider
env and the expected AWS SDK runtime credential source; do not store AWS
bootstrap credentials in Paperclip secrets.
Per-company provider vaults (multiple vault instances per provider, default
vault selection, coming-soon GCP/Vault) are configured from the board UI under
`Company Settings → Secrets → Provider vaults` or through
`/api/companies/{companyId}/secret-provider-configs`. There is no CLI surface
for vault management today. See the
[secrets deploy guide](../docs/deploy/secrets.md#provider-vaults) and
[API reference](../docs/api/secrets.md#provider-vaults) for the contract.
## Approval Commands
```sh
@@ -204,28 +169,7 @@ pnpm paperclipai heartbeat run --agent-id <agent-id> [--api-base http://localhos
## Local Storage Defaults
Local Paperclip data lives under the selected instance root. `PAPERCLIP_HOME` chooses the home directory and `PAPERCLIP_INSTANCE_ID` chooses the instance.
```text
~/.paperclip/ # PAPERCLIP_HOME
└── instances/
└── default/ # instance root (PAPERCLIP_INSTANCE_ID)
├── config.json # runtime config
├── .env # instance env file
├── db/ # embedded PostgreSQL data
├── data/
│ ├── storage/ # local_disk uploads
│ └── backups/ # automatic DB backups
├── logs/
├── secrets/
│ └── master.key # local_encrypted master key
├── workspaces/ # default agent workspaces
├── projects/ # project execution workspaces
├── companies/ # per-company adapter homes (e.g. codex-home)
└── codex-home/ # per-instance codex home (when not company-scoped)
```
Default paths for the canonical install:
Default local instance root is `~/.paperclip/instances/default`:
- config: `~/.paperclip/instances/default/config.json`
- embedded db: `~/.paperclip/instances/default/db`

View File

@@ -27,18 +27,6 @@ pnpm db:migrate
When `DATABASE_URL` is unset, this command targets the current embedded PostgreSQL instance for your active Paperclip config/instance.
Issue reference mentions follow the normal migration path: the schema migration creates the tracking table, but it does not backfill historical issue titles, descriptions, comments, or documents automatically.
To backfill existing content manually after migrating, run:
```sh
pnpm issue-references:backfill
# optional: limit to one company
pnpm issue-references:backfill -- --company <company-id>
```
Future issue, comment, and document writes sync references automatically without running the backfill command.
This mode is ideal for local development and one-command installs.
Docker note: the Docker quickstart image also uses embedded PostgreSQL by default. Persist `/paperclip` to keep DB state across container restarts (see `doc/DOCKER.md`).
@@ -59,11 +47,11 @@ cp .env.example .env
# DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip
```
Run migrations:
Run migrations (once the migration generation issue is fixed) or use `drizzle-kit push`:
```sh
DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip \
pnpm db:migrate
npx drizzle-kit push
```
Start the server:
@@ -100,27 +88,27 @@ postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:
### Configure
For the application runtime, use a direct PostgreSQL connection unless the database client has explicit prepared-statement configuration for your pooling mode:
```sh
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres
```
If you later run the app with a pooled runtime URL, set `DATABASE_MIGRATION_URL` to the direct connection URL. Paperclip uses it for startup schema checks/migrations and plugin namespace migrations, while the app continues to use `DATABASE_URL` for runtime queries:
Set `DATABASE_URL` in your `.env`:
```sh
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:6543/postgres
DATABASE_MIGRATION_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres
```
If your hosted database requires transaction-pooling-only connections, use a direct or session-pooled connection for Paperclip until runtime pooling support is documented in this guide. Do not edit database client source files as part of deployment setup.
If using connection pooling (port 6543), the `postgres` client must disable prepared statements. Update `packages/db/src/client.ts`:
```ts
export function createDb(url: string) {
const sql = postgres(url, { prepare: false });
return drizzlePg(sql, { schema });
}
```
### Push the schema
```sh
# Use the direct connection (port 5432) for schema changes
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@...5432/postgres \
pnpm db:migrate
npx drizzle-kit push
```
### Free tier limits
@@ -143,22 +131,6 @@ The database mode is controlled by `DATABASE_URL`:
Your Drizzle schema (`packages/db/src/schema/`) stays the same regardless of mode.
## Plugin database namespaces
The plugin runtime tracks plugin-owned database namespaces and migrations in `plugin_database_namespaces` and `plugin_migrations`. Hosted deployments that separate runtime and migration connections should set `DATABASE_MIGRATION_URL`; plugin namespace migration work uses the migration connection when present.
## Backups
Paperclip supports automatic and manual logical database backups. These dumps include
non-system database schemas such as `public`, the Drizzle migration journal, and
plugin-owned database schemas. See `doc/DEVELOPING.md` for the current
`paperclipai db:backup` / `pnpm db:backup` commands and backup retention
configuration.
Database backups do not include non-database instance files such as local-disk
uploads, workspace files, or the local encrypted secrets master key. Back those paths
up separately when you need full instance disaster recovery.
## Secret storage
Paperclip stores secret metadata and versions in:
@@ -171,8 +143,6 @@ For local/default installs, the active provider is `local_encrypted`:
- Secret material is encrypted at rest with a local master key.
- Default key file: `~/.paperclip/instances/default/secrets/master.key` (auto-created if missing).
- CLI config location: `~/.paperclip/instances/default/config.json` under `secrets.localEncrypted.keyFilePath`.
- Backup/restore requires both the database metadata and the local master key file; either artifact alone is insufficient.
- The server enforces `0600` key file permissions on a best-effort basis, and provider health reports permission warnings.
Optional overrides:
@@ -194,10 +164,5 @@ pnpm paperclipai configure --section secrets
Inline secret migration command:
```sh
pnpm paperclipai secrets migrate-inline-env --company-id <company-id> --apply
# direct database maintenance fallback
pnpm secrets:migrate-inline-env --apply
```
Hosted AWS provider notes live in [SECRETS-AWS-PROVIDER.md](./SECRETS-AWS-PROVIDER.md).

View File

@@ -142,4 +142,3 @@ This prevents lockout when a user migrates from long-running local trusted usage
- implementation plan: `doc/plans/deployment-auth-mode-consolidation.md`
- V1 contract: `doc/SPEC-implementation.md`
- operator workflows: `doc/DEVELOPING.md` and `doc/CLI.md`
- invite/join state map: `doc/spec/invite-flow.md`

View File

@@ -43,19 +43,6 @@ This starts:
`pnpm dev` and `pnpm dev:once` are now idempotent for the current repo and instance: if the matching Paperclip dev runner is already alive, Paperclip reports the existing process instead of starting a duplicate.
Issue execution may also use project execution workspace policies and workspace runtime services for per-project worktrees, preview servers, and managed dev commands. Configure those through the project workspace/runtime surfaces rather than starting long-running unmanaged processes when a task needs a reusable service.
## Storybook
The board UI Storybook keeps stories and Storybook config under `ui/storybook/` so component review files stay out of the app source routes.
```sh
pnpm storybook
pnpm build-storybook
```
These run the `@paperclipai/ui` Storybook on port `6006` and build the static output to `ui/storybook-static/`.
Inspect or stop the current repo's managed dev runner:
```sh
@@ -92,31 +79,6 @@ Allow additional private hostnames (for example custom Tailscale hostnames):
pnpm paperclipai allowed-hostname dotta-macbook-pro
```
## Test Commands
Use the cheap local default unless you are specifically working on browser flows:
```sh
pnpm test
```
`pnpm test` runs the Vitest suite only. For interactive Vitest watch mode use:
```sh
pnpm test:watch
```
Browser suites stay separate:
```sh
pnpm test:e2e
pnpm test:release-smoke
```
These browser suites are intended for targeted local verification and CI, not the default agent/human test command.
For normal issue work, start with the smallest targeted check that proves the change. Reserve repo-wide typecheck/build/test runs for PR-ready handoff or changes broad enough that narrow checks do not cover the risk.
## One-Command Local Run
For a first-time local install, you can bootstrap and run in one command:
@@ -157,27 +119,6 @@ See `doc/DOCKER.md` for API key wiring (`OPENAI_API_KEY` / `ANTHROPIC_API_KEY`)
For a separate review-oriented container that keeps `codex`/`claude` login state in Docker volumes and checks out PRs into an isolated scratch workspace, see `doc/UNTRUSTED-PR-REVIEW.md`.
## Local Instance Layout
Every local install keeps runtime state directly under the selected instance root:
```text
~/.paperclip/instances/default/ # instance root
config.json # runtime config
.env # instance env file
db/ # embedded PostgreSQL data
data/
storage/ # local_disk uploads
backups/ # automatic DB backups
logs/
secrets/master.key # local_encrypted master key
workspaces/<agent-id>/ # default agent workspaces
projects/ # project execution workspaces
companies/<company-id>/codex-home/ # per-company codex_local home
```
`PAPERCLIP_HOME` and `PAPERCLIP_INSTANCE_ID` override the home root and instance id respectively. `paperclipai onboard` echoes the resolved values in its banner (`Local home: <home> | instance: <id> | config: <path>`) so you can confirm where state will land before continuing.
## Database in Dev (Auto-Handled)
For local development, leave `DATABASE_URL` unset.
@@ -185,7 +126,7 @@ The server will automatically use embedded PostgreSQL and persist data at:
- `~/.paperclip/instances/default/db`
Override home or instance:
Override home and instance:
```sh
PAPERCLIP_HOME=/custom/path PAPERCLIP_INSTANCE_ID=dev pnpm paperclipai run
@@ -219,8 +160,6 @@ For `codex_local`, Paperclip also manages a per-company Codex home under the ins
If the `codex` CLI is not installed or not on `PATH`, `codex_local` agent runs fail at execution time with a clear adapter error. Quota polling uses a short-lived `codex app-server` subprocess: when `codex` cannot be spawned, that provider reports `ok: false` in aggregated quota results and the API server keeps running (it must not exit on a missing binary).
Local adapters require their corresponding CLI/session setup on the machine running Paperclip. External adapters are installed through the adapter/plugin flow and should not require hardcoded imports in `server/` or `ui/`.
## Worktree-local Instances
When developing from multiple git worktrees, do not point two Paperclip servers at the same embedded PostgreSQL data directory.
@@ -247,8 +186,6 @@ Seed modes:
- `full` makes a full logical clone of the source instance
- `--no-seed` creates an empty isolated instance
Seeded worktree instances quarantine copied live execution by default for both `minimal` and `full` seeds. During restore, Paperclip disables copied agent timer heartbeats, resets copied `running` agents to `idle`, blocks and unassigns copied agent-owned `in_progress` issues, and unassigns copied agent-owned `todo`/`in_review` issues. This keeps a freshly booted worktree from starting agents for work already owned by the source instance. Pass `--preserve-live-work` only when you intentionally want the isolated worktree to resume copied assignments.
After `worktree init`, both the server and the CLI auto-load the repo-local `.paperclip/.env` when run inside that worktree, so normal commands like `pnpm dev`, `paperclipai doctor`, and `paperclipai db:backup` stay scoped to the worktree instance.
`pnpm dev` now fails fast in a linked git worktree when `.paperclip/.env` is missing, instead of silently booting against the default instance/port. If that happens, run `paperclipai worktree init` in the worktree first.
@@ -262,8 +199,6 @@ That repo-local env also sets:
- `PAPERCLIP_WORKTREE_COLOR=<hex-color>`
The server/UI use those values for worktree-specific branding such as the top banner and dynamically colored favicon.
Authenticated worktree servers also use the `PAPERCLIP_INSTANCE_ID` value to scope Better Auth cookie names.
Browser cookies are shared by host rather than port, so this prevents logging into one `127.0.0.1:<port>` worktree from replacing another worktree server's session cookie.
Print shell exports explicitly when needed:
@@ -301,7 +236,7 @@ paperclipai worktree init --from-data-dir ~/.paperclip
paperclipai worktree init --force
```
Repair an already-created repo-managed worktree and reseed its isolated instance from the main default install. Point `--from-config` at the instance config:
Repair an already-created repo-managed worktree and reseed its isolated instance from the main default install:
```sh
cd /path/to/paperclip/.paperclip/worktrees/PAP-884-ai-commits-component
@@ -442,9 +377,7 @@ If you set `DATABASE_URL`, the server will use that instead of embedded PostgreS
## Automatic DB Backups
Paperclip can run automatic logical database backups on a timer. These backups cover
non-system database schemas, including migration history and plugin-owned database
schemas. Defaults:
Paperclip can run automatic DB backups on a timer. Defaults:
- enabled
- every 60 minutes
@@ -472,10 +405,6 @@ Environment overrides:
- `PAPERCLIP_DB_BACKUP_RETENTION_DAYS=<days>`
- `PAPERCLIP_DB_BACKUP_DIR=/absolute/or/~/path`
DB backups are not full instance filesystem backups. For full local disaster
recovery, also back up local storage files and the local encrypted secrets key if
those providers are enabled.
## Secrets in Dev
Agent env vars now support secret references. By default, secret values are stored with local encryption and only secret refs are persisted in agent config.
@@ -483,7 +412,6 @@ Agent env vars now support secret references. By default, secret values are stor
- Default local key path: `~/.paperclip/instances/default/secrets/master.key`
- Override key material directly: `PAPERCLIP_SECRETS_MASTER_KEY`
- Override key file path: `PAPERCLIP_SECRETS_MASTER_KEY_FILE`
- Back up the key file and database together; either one alone is not enough to restore local encrypted secrets.
Strict mode (recommended outside local trusted machines):
@@ -492,20 +420,12 @@ PAPERCLIP_SECRETS_STRICT_MODE=true
```
When strict mode is enabled, sensitive env keys (for example `*_API_KEY`, `*_TOKEN`, `*_SECRET`) must use secret references instead of inline plain values.
Authenticated deployments default strict mode on unless explicitly overridden.
CLI configuration support:
- `pnpm paperclipai onboard` writes a default `secrets` config section (`local_encrypted`, strict mode off, key file path set) and creates a local key file when needed.
- `pnpm paperclipai configure --section secrets` lets you update provider/strict mode/key path and creates the local key file when needed.
- `pnpm paperclipai doctor` validates secrets adapter configuration, can create a missing local key file with `--repair`, and reports missing AWS Secrets Manager bootstrap env when that provider is selected.
- Provider health is available at `GET /api/companies/:companyId/secret-providers/health` and reports local key permission warnings plus backup guidance.
Per-company provider vaults are configured in the board UI under
`Company Settings → Secrets → Provider vaults`, backed by
`/api/companies/{companyId}/secret-provider-configs`. The CLI does not own
vault lifecycle today. See `docs/deploy/secrets.md` (`Provider Vaults` section)
for the operator model.
- `pnpm paperclipai doctor` validates secrets adapter configuration and can create a missing local key file with `--repair`.
Migration helper for existing inline env secrets:

View File

@@ -23,7 +23,7 @@ Paperclip is the command, communication, and control plane for a company of AI a
- **Track work in real time** — see at any moment what every agent is working on
- **Control costs** — token salary budgets per agent, spend tracking, burn rate
- **Align to goals** — agents see how their work serves the bigger mission
- **Preserve work context** — comments, documents, work products, attachments, and company state stay attached to the work
- **Store company knowledge** — a shared brain for the organization
## Architecture
@@ -36,20 +36,17 @@ The central nervous system. Manages:
- Agent registry and org chart
- Task assignment and status
- Budget and token spend tracking
- Issue comments, documents, work products, attachments, and company state
- Company knowledge base
- Goal hierarchy (company → team → agent → task)
- Heartbeat monitoring — know when agents are alive, idle, or stuck
It also enforces execution-control semantics such as single-assignee issues, atomic checkout and execution locks, blockers, recovery issues, and workspace/runtime controls.
### 2. Execution Services (adapters)
Agents run externally and report into the control plane. Adapters connect different execution environments and define how a heartbeat is invoked, observed, and cancelled:
Agents run externally and report into the control plane. An agent is just Python code that gets kicked off and does work. Adapters connect different execution environments:
- **Local CLI/session adapters** — built-in adapters for tools such as Claude Code, Codex, Gemini, OpenCode, Pi, and Cursor
- **HTTP/process-style adapters** — command or webhook/API integrations for custom runtimes
- **OpenClaw gateway** — integration for OpenClaw-style remote agents
- **External adapter plugins** — dynamically loaded adapters installed outside the core app
- **OpenClaw** — initial adapter target
- **Heartbeat loop** — simple custom Python that loops, checks in, does work
- **Others** — any runtime that can call an API
The control plane doesn't run agents. It orchestrates them. Agents run wherever they run and phone home.

View File

@@ -32,14 +32,12 @@ Then you define who reports to the CEO: a CTO managing programmers, a CMO managi
### Agent Execution
Paperclip supports several ways to run an agent's heartbeat:
There are two fundamental modes for running an agent's heartbeat:
1. **Local CLI/session adapters** — Paperclip starts or resumes local coding-tool sessions such as Claude Code, Codex, Gemini, OpenCode, Pi, and Cursor, then tracks the run.
2. **Run a command** — Paperclip kicks off a process (shell command, Python script, etc.) and tracks it. The heartbeat is "execute this and monitor it."
3. **Fire and forget a request** — Paperclip sends a webhook/API call to an externally running agent. The heartbeat is "notify this agent to wake up." OpenClaw-style hooks work this way.
4. **External adapter plugins** — Paperclip loads adapter packages through the plugin/adapter flow so self-hosted installs can add runtimes without hardcoding them in core.
1. **Run a command** — Paperclip kicks off a process (shell command, Python script, etc.) and tracks it. The heartbeat is "execute this and monitor it."
2. **Fire and forget a request** — Paperclip sends a webhook/API call to an externally running agent. The heartbeat is "notify this agent to wake up." (OpenClaw hooks work this way.)
Agent runs can use project and execution workspaces, managed runtime services such as preview/dev servers, adapter-specific session state, and HTTP/webhook-style execution. We provide sensible defaults, but the adapter is still the boundary: if a runtime can be invoked, observed, and authorized, Paperclip can coordinate it.
We provide sensible defaults — a default agent that shells out to Claude Code or Codex with your configuration, remembers session IDs, runs basic scripts. But you can plug in anything.
### Task Management
@@ -56,7 +54,7 @@ I am researching the Facebook ads Granola uses (current task)
Tasks have parentage. Every task exists in service of a parent task, all the way up to the company goal. This is what keeps autonomous agents aligned — they can always answer "why am I doing this?"
The current issue model includes stable issue identifiers, parent/sub-issues, blockers, a single assignee, comments, issue documents, attachments and work products, and review/approval handoffs. That structure keeps work inspectable by both the board and agents while still allowing agents to decompose work into smaller tasks.
More detailed task structure TBD.
## Principles
@@ -117,7 +115,7 @@ Paperclip's core identity is a **control plane for autonomous AI companies**,
- Do not make the core product a general chat app. The current product definition is explicitly task/comment-centric and “not a chatbot,” and that boundary is valuable.
- Do not build a complete Jira/GitHub replacement. The repo/docs already position Paperclip as organization orchestration, not focused on pull-request review.
- Do not build enterprise-grade RBAC first. Paperclip now has authenticated mode, company memberships, instance roles, and permission grants, but fine-grained enterprise governance should remain secondary to the core company control plane.
- Do not build enterprise-grade RBAC first. The current V1 spec still treats multi-board governance and fine-grained human permissions as out of scope, so the first multi-user version should be coarse and company-scoped.
- Do not lead with raw bash logs and transcripts. Default view should be human-readable intent/progress, with raw detail beneath.
- Do not force users to understand provider/API-key plumbing unless absolutely necessary. There are active onboarding/auth issues already; friction here is clearly real.
@@ -138,14 +136,11 @@ Paperclip's core identity is a **control plane for autonomous AI companies**,
5. **Output-first**
Work is not done until the user can see the result: file, document, preview link, screenshot, plan, or PR.
6. **Execution visibility without log worship**
Active runs, recovery issues, productivity review states, blockers, and work products should be first-class surfaces. Raw transcripts are available when needed, but they are not the primary product surface.
7. **Local-first, cloud-ready**
6. **Local-first, cloud-ready**
The mental model should not change between local solo use and shared/private or public/cloud deployment.
8. **Safe autonomy**
7. **Safe autonomy**
Auto mode is allowed; hidden token burn is not.
9. **Thin core, rich edges**
8. **Thin core, rich edges**
Put optional chat, knowledge, and special surfaces into plugins/extensions rather than bloating the control plane.

View File

@@ -115,6 +115,38 @@ If the first real publish returns npm `E404`, check npm-side prerequisites before
- The initial publish must include `--access public` for a public scoped package.
- npm also requires either account 2FA for publishing or a granular token that is allowed to bypass 2FA.
### Manual first publish for `@paperclipai/mcp-server`
If you need to publish only the MCP server package, `@paperclipai/mcp-server`, once by hand, use the recommended flow from the repo root:
```bash
# optional sanity check: this 404s until the first publish exists
npm view @paperclipai/mcp-server version
# make sure the build output is fresh
pnpm --filter @paperclipai/mcp-server build
# confirm your local npm auth before the real publish
npm whoami
# safe preview of the exact publish payload
cd packages/mcp-server
pnpm publish --dry-run --no-git-checks --access public
# real publish
pnpm publish --no-git-checks --access public
```
Notes:
- Publish from `packages/mcp-server/`, not the repo root.
- If `npm view @paperclipai/mcp-server version` already returns the same version that is in [`packages/mcp-server/package.json`](../packages/mcp-server/package.json), do not republish. Bump the version or use the normal repo-wide release flow in [`scripts/release.sh`](../scripts/release.sh).
- The same npm-side prerequisites apply as above: valid npm auth, permission to publish to the `@paperclipai` scope, `--access public`, and the required publish auth/2FA policy.
## Version formats
Paperclip uses calendar versions:
@@ -143,13 +175,6 @@ This keeps the default install path unchanged while allowing explicit installs w
npx paperclipai@canary onboard
```
The release script now verifies two things after a canary publish:
- the `canary` dist-tag resolves to the version that was just published
- every published internal `@paperclipai/*` dependency referenced by that manifest exists on npm
It also treats `latest -> canary` as a failure by default, because npm metadata can otherwise leave the default install path pointing at an unreleased canary dependency graph. Only pass `--allow-canary-latest` to `./scripts/release.sh canary` when that `latest` behavior is explicitly intended.
### Stable
Stable publishes use the npm dist-tag `latest`.
@@ -176,58 +201,6 @@ That means:
See [doc/RELEASE-AUTOMATION-SETUP.md](RELEASE-AUTOMATION-SETUP.md) for the GitHub/npm setup steps.
## Release enrollment for new public packages
Paperclip does not auto-publish every non-private workspace package anymore.
CI publishing is controlled by [`scripts/release-package-manifest.json`](../scripts/release-package-manifest.json).
When you add a new public package:
1. add it to the manifest and decide whether CI should publish it immediately
2. if CI should publish it, bootstrap the package on npm before merge
3. if CI should not publish it yet, keep `"publishFromCi": false`
4. only enable `"publishFromCi": true` after npm trusted publishing is configured for that package
PR CI now checks changed release-enabled package manifests against npm. That catches a missing first-publish bootstrap before the change reaches `master`.
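A manifest entry for a not-yet-bootstrapped package might look like the following. The shape is illustrative only — the sole field confirmed by this document is the `publishFromCi` flag; see [`scripts/release-package-manifest.json`](../scripts/release-package-manifest.json) for the real schema:

```json
{
  "@paperclipai/my-new-package": {
    "publishFromCi": false
  }
}
```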
### One-time bootstrap sequence for a new package
The first publish of a brand-new package still needs one human maintainer with npm write access.
After that, trusted publishing can take over.
Example for `@paperclipai/adapter-acpx-local` from the repo root:
```bash
# safe preview
pnpm run release:bootstrap-package -- @paperclipai/adapter-acpx-local
# one-time first publish from an authenticated maintainer machine
pnpm run release:bootstrap-package -- @paperclipai/adapter-acpx-local --publish --otp 123456
```
The helper script:
- checks that the package does not already exist on npm
- builds the target package unless `--skip-build` is passed
- runs `npm pack --dry-run` in the package directory
- only runs the real `npm publish --access public` when `--publish --otp <code>` is provided
For the real `--publish` step, the maintainer machine must already be authenticated to npm.
If `npm whoami` returns `401`, first run `npm logout --registry=https://registry.npmjs.org/` to clear any stale local auth, then run `npm login` or `npm adduser` locally as an npm org member, and finally rerun the helper.
That local human auth is fine for the one-time bootstrap publish; we just do not want the same auth model inside CI.
The helper now requires `--otp <code>` up front for `--publish`, so it fails before the real publish attempt if the one-time password is missing.
After that first publish succeeds:
1. open `https://www.npmjs.com/package/@paperclipai/adapter-acpx-local`
2. go to `Settings` → `Trusted publishing`
3. add repository `paperclipai/paperclip`
4. set workflow filename to `release.yml`
5. optionally go to `Settings` → `Publishing access` and enable `Require two-factor authentication and disallow tokens`
6. keep `publishFromCi: true` in [`scripts/release-package-manifest.json`](../scripts/release-package-manifest.json)
Once those steps are done, future canary and stable publishes for that package are automated through GitHub OIDC. The manual step is only the first package creation on npm.
## Rollback model
Rollback does not unpublish anything.

View File

@@ -67,27 +67,6 @@ Why:
- the single `release.yml` workflow handles both canary and stable publishing
- GitHub environments `npm-canary` and `npm-stable` still enforce different approval rules on the GitHub side
### 2.2.1. Newly added public packages need a bootstrap phase
Trusted publishing is configured on the npm package itself, not at the repo scope.
That means a brand-new public package must not be auto-enrolled into CI publishing until its npm package exists and its trusted publisher has been configured.
Repo policy:
1. add every non-private package to [`scripts/release-package-manifest.json`](../scripts/release-package-manifest.json)
2. set `"publishFromCi": true` only when CI is expected to publish that package
3. if the package is not ready for CI publishing yet, keep `"publishFromCi": false`
4. complete the package bootstrap before merging any PR that changes a release-enabled new package
Bootstrap sequence for a new package:
1. publish the package once from a trusted maintainer machine using normal npm auth
2. open that package on npm and add the `paperclipai/paperclip` trusted publisher for `.github/workflows/release.yml`
3. rerun or dry-run the release flow as needed to confirm CI publishing now works
4. only then enable `"publishFromCi": true`
PR CI enforces this by checking changed release-enabled package manifests against npm. That keeps `master` canary publishing healthy while preserving the no-long-lived-token model for normal CI releases.
### 2.3. Verify trusted publishing before removing old auth
After the workflows are live:

View File

@@ -63,8 +63,6 @@ It:
- verifies the pushed commit
- computes the canary version for the current UTC date
- publishes under npm dist-tag `canary`
- verifies that `canary` resolves to the just-published version and that published internal dependencies exist on npm
- fails by default if npm leaves `latest` pointing at a canary; use `--allow-canary-latest` only when that state is intentional
- creates a git tag `canary/vYYYY.MDD.P-canary.N`
Users install canaries with:

View File

@@ -1,368 +0,0 @@
# AWS Secrets Manager Provider
Operational contract for the hosted `aws_secrets_manager` secret provider used by Paperclip Cloud.
## Scope
- Hosted provider for Paperclip-managed secrets when Paperclip Cloud runs on AWS.
- Source of truth for secret values is AWS Secrets Manager, not Postgres.
- Paperclip stores only metadata needed for ownership, bindings, version selection, audit, and runtime resolution.
- AWS provider bootstrap credentials are deployment/runtime credentials, not Paperclip-managed company secrets.
- Remote import for existing AWS secrets is metadata-only. Preview/import uses
AWS inventory metadata and creates Paperclip external references; it does not
copy plaintext into Paperclip.
- Per-company AWS provider vaults (named instances of `aws_secrets_manager`
with their own region, namespace, prefix, KMS key id, and tags) are managed
in the board UI under `Company Settings → Secrets → Provider vaults`. See
[Provider Vaults](../docs/deploy/secrets.md#provider-vaults) for the operator
model and [Provider Vaults API](../docs/api/secrets.md#provider-vaults) for
the routes. The bootstrap trust model in this document still applies — vault
config carries non-sensitive routing metadata only, never AWS credentials.
## Bootstrap Trust Model
The AWS provider has a chicken-and-egg boundary: Paperclip cannot use
`company_secrets` to unlock the AWS provider that stores those secrets. The
initial AWS trust must exist before the Paperclip server starts.
Allowed bootstrap locations:
- Infrastructure IAM or workload identity attached to the Paperclip server
runtime.
- Process environment or orchestrator secret store used to start the Paperclip
server.
- Local AWS SDK sources such as `AWS_PROFILE`, AWS SSO/shared config, web
identity, container metadata, or instance metadata.
- Short-lived shell credentials for local development only.
Do not ask operators to paste AWS root credentials or long-lived IAM user access
keys into the Paperclip board UI. Do not store those bootstrap keys in
`company_secrets`.
## Paperclip Cloud Bootstrap
Paperclip Cloud must provision the AWS backing resources before any board user
can create AWS-backed company secrets:
1. Create or select the deployment KMS key.
2. Create the Paperclip server runtime role for the deployment.
3. Attach a minimum IAM policy scoped to the deployment Secrets Manager prefix
and the configured KMS key.
4. Configure the server runtime with the non-secret provider environment
variables below.
5. Run `paperclipai doctor` or the provider health endpoint from the deployed
runtime and confirm that the provider reports the expected region, prefix,
deployment id, KMS setting, and AWS SDK credential source.
Once this is in place, the board UI can create Paperclip-managed AWS secrets and
Paperclip will write them under the deployment/company namespace.
## Self-Hosted And Local Bootstrap
Self-hosted AWS deployments should use the AWS SDK default credential provider
chain. Preferred sources are role-based:
- EC2 instance profile.
- ECS task role.
- EKS IRSA or another OIDC web identity role.
- AWS SSO/shared config via `AWS_PROFILE`.
Local development can use:
```sh
aws sso login --profile paperclip-dev
AWS_PROFILE=paperclip-dev \
PAPERCLIP_SECRETS_PROVIDER=aws_secrets_manager \
PAPERCLIP_SECRETS_AWS_REGION=us-east-1 \
PAPERCLIP_SECRETS_AWS_DEPLOYMENT_ID=dev-local \
PAPERCLIP_SECRETS_AWS_KMS_KEY_ID=arn:aws:kms:us-east-1:123456789012:key/abcd-... \
pnpm dev
```
Temporary `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY` environment credentials
are acceptable only as a local break-glass or short-lived test source. They
should not be written to Paperclip config, committed to `.env` files, stored in
`company_secrets`, or used as the default Paperclip Cloud bootstrap path.
## Deployment Config
Required environment variables:
```sh
PAPERCLIP_SECRETS_PROVIDER=aws_secrets_manager
PAPERCLIP_SECRETS_AWS_REGION=us-east-1
PAPERCLIP_SECRETS_AWS_DEPLOYMENT_ID=prod-us-1
PAPERCLIP_SECRETS_AWS_KMS_KEY_ID=arn:aws:kms:us-east-1:123456789012:key/abcd-...
```
Optional environment variables:
```sh
PAPERCLIP_SECRETS_AWS_PREFIX=paperclip
PAPERCLIP_SECRETS_AWS_ENVIRONMENT=production
PAPERCLIP_SECRETS_AWS_PROVIDER_OWNER=paperclip
PAPERCLIP_SECRETS_AWS_ENDPOINT=
PAPERCLIP_SECRETS_AWS_DELETE_RECOVERY_DAYS=30
```
Naming convention for Paperclip-managed secrets:
```text
paperclip/{deploymentId}/{companyId}/{secretKey}
```
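The naming convention above can be sketched as a simple composition; the variable values are illustrative (only `prod-us-1` appears in the deployment config example below), and the layout is the documented convention:

```shell
# Compose the documented secret name: paperclip/{deploymentId}/{companyId}/{secretKey}
deployment_id="prod-us-1"     # matches PAPERCLIP_SECRETS_AWS_DEPLOYMENT_ID
company_id="0f3c2c9a"         # illustrative company id
secret_key="anthropic_api_key"
secret_name="paperclip/${deployment_id}/${company_id}/${secret_key}"
echo "$secret_name"
```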
Tag set for Paperclip-managed secrets:
- `paperclip:managed-by=paperclip`
- `paperclip:provider-owner=<owner tag>`
- `paperclip:deployment-id=<deployment id>`
- `paperclip:company-id=<company id>`
- `paperclip:secret-key=<secret key>`
- `paperclip:environment=<environment tag>`
## IAM And KMS Assumptions
Launch posture:
- One Paperclip app role per deployment.
- One deployment-scoped KMS key per deployment at launch.
- Future per-company KMS keys remain compatible because Paperclip stores provider refs and version metadata separately from values.
Minimum IAM boundary:
- Allow `secretsmanager:CreateSecret`, `PutSecretValue`, `GetSecretValue`, and `DeleteSecret`.
- Scope resources to the deployment prefix:
```text
arn:aws:secretsmanager:<region>:<account-id>:secret:paperclip/<deployment-id>/*
```
- Allow `kms:Encrypt`, `kms:Decrypt`, `kms:GenerateDataKey`, and `kms:DescribeKey` for the configured deployment CMK.
- Deny wildcard access outside the deployment prefix.
- Prefer workload identity / role-based auth. Do not store AWS credentials inline in Paperclip config.
Example minimum policy shape:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PaperclipDeploymentSecrets",
"Effect": "Allow",
"Action": [
"secretsmanager:CreateSecret",
"secretsmanager:PutSecretValue",
"secretsmanager:GetSecretValue",
"secretsmanager:DeleteSecret"
],
"Resource": "arn:aws:secretsmanager:<region>:<account-id>:secret:paperclip/<deployment-id>/*"
},
{
"Sid": "PaperclipDeploymentKms",
"Effect": "Allow",
"Action": [
"kms:Encrypt",
"kms:Decrypt",
"kms:GenerateDataKey",
"kms:DescribeKey"
],
"Resource": "arn:aws:kms:<region>:<account-id>:key/<key-id>"
}
]
}
```
Operational expectation:
- Paperclip-managed secrets may be deleted only by Paperclip or an operator with equivalent break-glass access.
- External references may resolve through Paperclip runtime, but Paperclip should not delete the external secret resource.
## Remote Import Inventory IAM
Remote import preview needs one additional AWS permission:
```json
{
"Sid": "PaperclipRemoteSecretInventory",
"Effect": "Allow",
"Action": "secretsmanager:ListSecrets",
"Resource": "*"
}
```
This is intentionally separate from the managed create/rotate/delete policy.
AWS treats `ListSecrets` as an account/Region inventory action; do not treat
secret ARNs, names, tags, or AWS request filters as an IAM boundary for it. Use
`Resource: "*"` and decide whether inventory exposure is acceptable for the AWS
account and Region behind each provider vault.
Remote import preview/import must not call:
- `secretsmanager:GetSecretValue`
- `secretsmanager:BatchGetSecretValue`
- `kms:Decrypt`
Those permissions are only needed later when a bound runtime resolves an
imported external reference. For imported refs, scope read permissions to the
operator-approved external prefixes that Paperclip is allowed to consume:
```json
{
"Sid": "PaperclipResolveImportedExternalReferences",
"Effect": "Allow",
"Action": "secretsmanager:GetSecretValue",
"Resource": [
"arn:aws:secretsmanager:<region>:<account-id>:secret:<approved-external-prefix>/*"
]
}
```
If selected external secrets use customer-managed KMS keys, also grant
`kms:Decrypt` and `kms:DescribeKey` on those keys. Keep managed write/delete
permissions scoped to `paperclip/<deployment-id>/*`; do not broaden them for
remote import.
Safe scoping guidance:
- Prefer one Paperclip runtime role per environment/account.
- Point provider vaults at the intended AWS account and Region instead of a
broad central admin role.
- Enable `ListSecrets` only in accounts where inventory exposure is acceptable.
- Keep preview/import board-only; agent API keys must not call these routes.
- Treat AWS tag/name filters as search UX only, not permission enforcement.
Paperclip also blocks importing refs under its own managed namespace as
external references. Use the Paperclip-managed flow for
`paperclip/{deploymentId}/{companyId}/{secretKey}` resources.
## Existing AWS Secrets
V1 keeps existing AWS Secrets Manager entries as **linked external references**, not adopted
Paperclip-managed resources.
Use the Paperclip-managed flow when Paperclip should create and rotate the value. The AWS
secret name is derived from deployment and company scope:
```text
paperclip/{deploymentId}/{companyId}/{secretKey}
```
Use the external-reference flow when the secret already exists at an operator-owned path such
as:
```text
/paperclip-bench/anthropic_api_key
```
In that mode Paperclip stores only the path or ARN, resolves it at runtime, and records
redacted access events. Operators rotate the actual value in AWS. Update the Paperclip
reference only when the AWS path, ARN, or pinned provider version changes.
Paperclip does not currently offer an "adopt existing AWS secret" flow that takes over future
`PutSecretValue` writes for an arbitrary existing secret. Adding that later requires explicit
confirmation UX, scope validation, expected Paperclip tags, and security/cloud-ops review.
## Data Custody
- Paperclip stores `externalRef`, `providerVersionRef`, provider id, fingerprint hash, status, and binding metadata.
- Paperclip does not store AWS secret plaintext in `company_secret_versions.material`.
- Runtime resolution fetches the value from AWS only when a bound consumer needs it.
## Rotation Runbook
Manual Paperclip-managed rotation:
1. Write the new value through the Paperclip secret rotate flow.
2. Paperclip creates a new AWS secret version with `PutSecretValue`.
3. Paperclip records the new `providerVersionRef` in `company_secret_versions`.
4. Re-run or restart affected workloads that consume `latest`, or pin consumers to a specific Paperclip version before rollout when you need staged release safety.
Guidance:
- Prefer pinned Paperclip secret versions for risky rollouts.
- Treat provider-native automatic rotation as a later enhancement; current V1 flow is explicit create-new-version plus controlled rollout.
## Backup And Restore Runbook
What must survive:
- Paperclip database metadata for secret ownership, bindings, status, and provider version refs.
- AWS Secrets Manager namespace under the configured deployment prefix.
- The configured KMS key and its decrypt permissions.
Restore checklist:
1. Restore Paperclip database metadata.
2. Confirm the same AWS Secrets Manager namespace still exists.
3. Confirm the Paperclip runtime role can call `GetSecretValue` on the restored prefix.
4. Confirm the role still has decrypt access to the CMK referenced by `PAPERCLIP_SECRETS_AWS_KMS_KEY_ID`.
5. Run the live smoke below or a targeted runtime secret resolution test.
## Provider Outage Runbook
Symptoms:
- Secret create/rotate/resolve operations fail with AWS provider errors.
- Agent runs fail before adapter invocation on required secret resolution.
- Remote import preview fails to list AWS inventory.
Immediate actions:
1. Confirm AWS regional health and Secrets Manager availability.
2. Confirm the runtime role still has `GetSecretValue` and KMS decrypt permissions.
3. Check for accidental prefix, region, deployment id, or KMS key config drift.
4. Retry a single resolution after AWS service health is green.
5. If outage persists, pause high-risk runs that require secret access rather than churning retries.
Remote import-specific actions:
- Missing list permission: add `secretsmanager:ListSecrets` with
`Resource: "*"` only when inventory import is approved for that vault's
AWS account and Region.
- Throttling: narrow the search, wait briefly, and retry with backoff. Avoid
full-account enumeration.
- Invalid or stale cursor: refresh the preview and discard the old
`NextToken`.
- Large account: load pages intentionally, keep one in-flight preview request
per vault/search, and do not run background full-account crawls.
- Runtime read failure after import: verify `GetSecretValue` and KMS decrypt
on the selected external secret. Visibility in `ListSecrets` does not prove
read permission.
## Incident Response Runbook
Potential incidents:
- Cross-company access caused by IAM scoping drift.
- KMS policy drift causing decrypt failures or over-broad access.
- Suspected secret exposure in logs, transcripts, or downstream agent output.
Response steps:
1. Stop or pause affected Paperclip runs.
2. Audit recent Paperclip secret access events for impacted secret ids and consumers.
3. Audit AWS CloudTrail for `ListSecrets`, `GetSecretValue`,
`PutSecretValue`, and `DeleteSecret` calls on the relevant vault account,
Region, deployment prefix, and approved external prefixes.
4. Rotate impacted secrets in AWS through Paperclip-managed versioning.
5. Re-scope IAM and KMS policies before resuming normal traffic.
6. If a value may have reached an agent transcript or external system, treat it as exposed and rotate immediately.
## Optional Live Smoke
This is safe to skip locally. Run it only against a dedicated AWS test namespace.
Prerequisites:
- AWS credentials or workload identity with the deployment-scoped IAM permissions above.
- `PAPERCLIP_SECRETS_PROVIDER=aws_secrets_manager`
- The required `PAPERCLIP_SECRETS_AWS_*` environment variables set.
Suggested smoke:
1. Create a test secret through the Paperclip board or API under a throwaway company.
2. Confirm the resulting AWS secret name matches `paperclip/{deploymentId}/{companyId}/{secretKey}`.
3. Rotate the secret once and confirm a new `providerVersionRef` appears in Paperclip metadata.
4. Resolve the secret through a bound runtime path, not by adding a general-purpose reveal endpoint.
5. Delete the throwaway secret and confirm AWS schedules deletion with the configured recovery window.

View File

@@ -1,7 +1,7 @@
# Paperclip V1 Implementation Spec
Status: Implementation contract for first release (V1)
Date: 2026-04-28
Date: 2026-02-17
Audience: Product, engineering, and agent-integration authors
Source inputs: `GOAL.md`, `PRODUCT.md`, `SPEC.md`, `DATABASE.md`, current monorepo code
@@ -37,9 +37,8 @@ These decisions close open questions from `SPEC.md` for V1.
| Visibility | Full visibility to board and all agents in same company |
| Communication | Tasks + comments only (no separate chat system) |
| Task ownership | Single assignee; atomic checkout required for `in_progress` transition |
| Recovery | Liveness/watchdog recovery preserves explicit ownership: retry lost execution continuity where safe, otherwise open visible source-scoped recovery actions by default, use issue-backed recovery only for independent repair work, or require human escalation (see `doc/execution-semantics.md`) |
| Agent adapters | Built-in `process`, `http`, local CLI/session adapters, and OpenClaw gateway support; external adapters can also be loaded through the adapter plugin flow |
| Plugin framework | Local/self-hosted early plugin runtime is in scope; cloud marketplace and packaged public distribution remain out of scope |
| Recovery | No automatic reassignment; work recovery stays manual/explicit |
| Agent adapters | Built-in `process` and `http` adapters |
| Auth | Mode-dependent human auth (`local_trusted` implicit board in current code; authenticated mode uses sessions), API keys for agents |
| Budget period | Monthly UTC calendar window |
| Budget enforcement | Soft alerts + hard limit auto-pause |
@@ -74,7 +73,7 @@ V1 implementation extends this baseline into a company-centric, governance-aware
## 5.2 Out of Scope (V1)
- Cloud-grade plugin marketplace/distribution beyond the local/self-hosted plugin runtime
- Plugin framework and third-party extension SDK
- Revenue/expense accounting beyond model/token costs
- Knowledge base subsystem
- Public marketplace (ClipHub)
@@ -124,16 +123,6 @@ Human auth tables (`users`, `sessions`, and provider-specific auth artifacts) ar
- `name` text not null
- `description` text null
- `status` enum: `active | paused | archived`
- `pause_reason` text null
- `paused_at` timestamptz null
- `issue_prefix` text not null
- `issue_counter` int not null
- `budget_monthly_cents` int not null default 0
- `spent_monthly_cents` int not null default 0
- `attachment_max_bytes` int not null
- `require_board_approval_for_new_agents` boolean not null default false
- feedback sharing consent fields
- branding fields such as `brand_color`
Invariant: every business record belongs to exactly one company.
@@ -144,21 +133,15 @@ Invariant: every business record belongs to exactly one company.
- `name` text not null
- `role` text not null
- `title` text null
- `icon` text null
- `status` enum: `active | paused | idle | running | error | pending_approval | terminated`
- `status` enum: `active | paused | idle | running | error | terminated`
- `reports_to` uuid fk `agents.id` null
- `capabilities` text null
- `adapter_type` text; built-ins include `process`, `http`, `claude_local`, `codex_local`, `gemini_local`, `opencode_local`, `pi_local`, `cursor`, and `openclaw_gateway`
- `adapter_type` enum: `process | http`
- `adapter_config` jsonb not null
- `runtime_config` jsonb not null default `{}`; may include Paperclip runtime policy such as `modelProfiles.cheap.adapterConfig` for an optional low-cost model lane that does not change the primary adapter config
- `default_environment_id` uuid fk `environments.id` null
- `context_mode` enum: `thin | fat` default `thin`
- `budget_monthly_cents` int not null default 0
- `spent_monthly_cents` int not null default 0
- pause fields: `pause_reason`, `paused_at`
- `permissions` jsonb not null default `{}`
- `last_heartbeat_at` timestamptz null
- `metadata` jsonb null
Invariants:
@@ -212,7 +195,6 @@ Invariant:
- `id` uuid pk
- `company_id` uuid fk not null
- `project_id` uuid fk `projects.id` null
- `project_workspace_id` uuid fk `project_workspaces.id` null
- `goal_id` uuid fk `goals.id` null
- `parent_id` uuid fk `issues.id` null
- `title` text not null
@@ -220,22 +202,13 @@ Invariant:
- `status` enum: `backlog | todo | in_progress | in_review | done | blocked | cancelled`
- `priority` enum: `critical | high | medium | low`
- `assignee_agent_id` uuid fk `agents.id` null
- `assignee_user_id` text null
- checkout/execution locks: `checkout_run_id`, `execution_run_id`, `execution_agent_name_key`, `execution_locked_at`
- `created_by_agent_id` uuid fk `agents.id` null
- `created_by_user_id` uuid fk `users.id` null
- identifier fields: `issue_number`, `identifier`
- origin fields: `origin_kind`, `origin_id`, `origin_run_id`, `origin_fingerprint`
- `request_depth` int not null default 0
- `billing_code` text null
- `assignee_adapter_overrides` jsonb null
- `execution_policy` jsonb null
- `execution_state` jsonb null
- execution workspace fields: `execution_workspace_id`, `execution_workspace_preference`, `execution_workspace_settings`
- `started_at` timestamptz null
- `completed_at` timestamptz null
- `cancelled_at` timestamptz null
- `hidden_at` timestamptz null
Invariants:
@@ -288,10 +261,10 @@ Invariant: each event must attach to agent and company; rollups are aggregation,
- `id` uuid pk
- `company_id` uuid fk not null
- `type` enum: `hire_agent | approve_ceo_strategy | budget_override_required | request_board_approval`
- `type` enum: `hire_agent | approve_ceo_strategy`
- `requested_by_agent_id` uuid fk `agents.id` null
- `requested_by_user_id` uuid fk `users.id` null
- `status` enum: `pending | revision_requested | approved | rejected | cancelled`
- `status` enum: `pending | approved | rejected | cancelled`
- `payload` jsonb not null
- `decision_note` text null
- `decided_by_user_id` uuid fk `users.id` null
@@ -390,15 +363,6 @@ Operational policy:
- `document_id` uuid fk not null
- `key` text not null (`plan`, `design`, `notes`, etc.)
## 7.16 Current Implementation Addenda
The current implementation includes additional V1-control-plane tables beyond the original February snapshot:
- Issue structure and review: `issue_relations` for blockers, `labels`/`issue_labels`, `issue_thread_interactions`, `issue_approvals`, `issue_execution_decisions`, `issue_work_products`, `issue_inbox_archives`, `issue_read_states`, and issue reference mention indexes.
- Execution and workspace control: `execution_workspaces`, `project_workspaces`, `workspace_runtime_services`, `workspace_operations`, `environments`, `environment_leases`, `agent_task_sessions`, `agent_runtime_state`, `agent_wakeup_requests`, heartbeat events, and watchdog decision tables.
- Plugins and routines: `plugins`, plugin config/state/entities/jobs/logs/webhooks, plugin database namespaces/migrations, plugin company settings, and `routines`.
- Access and operations: company memberships, instance roles, principal permission grants, invites, join requests, board API keys, CLI auth challenges, budget policies/incidents, feedback exports/votes, company skills, sidebar preferences, and company logos.
## 8. State Machines
## 8.1 Agent Status
@@ -431,16 +395,6 @@ Side effects:
- entering `done` sets `completed_at`
- entering `cancelled` sets `cancelled_at`
V1 non-terminal liveness rule:
- agent-owned `todo`, `in_progress`, `in_review`, and `blocked` issues must have a live execution path, an explicit waiting path, or an explicit recovery path
- `in_review` is healthy only when a typed execution participant, pending issue-thread interaction or approval, user owner, active run, queued wake, or explicit recovery action owns the next action
- a blocked chain is covered only when each unresolved leaf issue is live or explicitly waiting
- when Paperclip cannot safely infer the next action, it surfaces the problem through visible blocked/recovery work instead of silently completing or reassigning work
- explicit recovery actions are the liveness primitive; source-scoped actions are the default form, issue-backed recovery is a fallback for independent repair work or safety boundaries, and comments alone are evidence rather than a healthy liveness path
Detailed ownership, execution, blocker, active-run watchdog, crash-recovery, and non-terminal liveness semantics are documented in `doc/execution-semantics.md`.
## 8.3 Approval Status
- `pending -> approved | rejected | cancelled`
@@ -528,7 +482,6 @@ All endpoints are under `/api` and return JSON.
- `DELETE /issues/:issueId/documents/:key`
- `POST /issues/:issueId/checkout`
- `POST /issues/:issueId/release`
- `POST /issues/:issueId/admin/force-release` (board-only lock recovery)
- `POST /issues/:issueId/comments`
- `GET /issues/:issueId/comments`
- `POST /companies/:companyId/issues/:issueId/attachments` (multipart upload)
@@ -553,8 +506,6 @@ Server behavior:
2. if updated row count is 0, return `409` with current owner/status
3. successful checkout sets `assignee_agent_id`, `status = in_progress`, and `started_at`
`POST /issues/:issueId/admin/force-release` is an operator recovery endpoint for stale harness locks. It requires board access to the issue company, clears checkout and execution run lock fields, and may clear the agent assignee when `clearAssignee=true` is passed. The route must write an `issue.admin_force_release` activity log entry containing the previous checkout and execution run IDs.
## 10.5 Projects
- `GET /companies/:companyId/projects`
@@ -600,17 +551,6 @@ Dashboard payload must include:
- `422` semantic rule violation
- `500` server error
## 10.10 Current Implementation API Addenda
The current app also exposes V1-supporting surfaces for:
- issue thread interactions (`suggest_tasks`, `ask_user_questions`, `request_confirmation`)
- issue approvals, issue references/search, labels, read state, inbox/archive state, and work products
- execution workspaces, project workspaces, workspace runtime services, and workspace operations
- routines and scheduled/API/webhook triggers
- plugin installation, configuration, state, jobs, logs, webhooks, and plugin database namespace migration
- company import/export preview/apply, feedback export/vote routes, instance backup/config routes, invites, join requests, memberships, and permission grants
## 11. Heartbeat and Adapter Contract
## 11.1 Adapter Interface
@@ -677,7 +617,7 @@ Per-agent schedule fields in `adapter_config`:
- `enabled` boolean
- `intervalSec` integer (minimum 30)
- `maxConcurrentRuns` integer; new agents default to `20`; scheduler clamps configured values to `1..50`
- `maxConcurrentRuns` fixed at `1` for V1
Scheduler must skip invocation when:
@@ -786,14 +726,13 @@ Required UX behaviors:
- Node 20+
- `DATABASE_URL` optional
- if unset, auto-use embedded PostgreSQL under `~/.paperclip/instances/default/db`
- if unset, auto-use PGlite and push schema
## 15.2 Migrations
- Drizzle migrations are source of truth
- local/dev startup applies pending migrations automatically where supported
- `pnpm db:migrate` applies pending migrations manually
- no destructive migration in-place for V1 upgrade path
- provide migration script from existing minimal tables to company-scoped schema
## 15.3 Logging and Audit
@@ -848,8 +787,6 @@ A release candidate is blocked unless these pass:
## 18. Delivery Plan
Current implementation note: the milestones below describe the original V1 sequencing. Several systems originally framed as future work have since shipped or advanced materially, including issue documents/interactions, blockers, routines, execution workspaces, import/export portability, authenticated deployment modes, multi-user basics, and the local/self-hosted plugin runtime.
## Milestone 1: Company Core and Auth
- add `companies` and company scoping to existing entities
@@ -902,7 +839,7 @@ V1 is complete only when all criteria are true:
## 20. Post-V1 Backlog (Explicitly Deferred)
- cloud-grade plugin marketplace/distribution
- plugin architecture
- richer workflow-state customization per team
- milestones/labels/dependency graph depth beyond V1 minimum
- realtime transport optimization (SSE/WebSockets)

@@ -1,433 +0,0 @@
# Execution Semantics
Status: Current implementation guide
Date: 2026-04-26
Audience: Product and engineering
This document explains how Paperclip interprets issue assignment, issue status, execution runs, wakeups, parent/sub-issue structure, and blocker relationships.
`doc/SPEC-implementation.md` remains the V1 contract. This document is the detailed execution model behind that contract.
## 1. Core Model
Paperclip separates four concepts that are easy to blur together:
1. structure: parent/sub-issue relationships
2. dependency: blocker relationships
3. ownership: who is responsible for the issue now
4. execution: whether the control plane currently has a live path to move the issue forward
The system works best when those are kept separate.
## 2. Assignee Semantics
An issue has at most one assignee.
- `assigneeAgentId` means the issue is owned by an agent
- `assigneeUserId` means the issue is owned by a human board user
- both cannot be set at the same time
This is a hard invariant. Paperclip is single-assignee by design.
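The invariant above can be expressed as a small guard. A minimal sketch, assuming the camelCase field names this document already uses; the real enforcement lives in the server's schema and write paths.

```typescript
interface IssueAssignee {
  assigneeAgentId: string | null;
  assigneeUserId: string | null;
}

function assertSingleAssignee(issue: IssueAssignee): void {
  // Both fields set at once violates the single-assignee invariant.
  if (issue.assigneeAgentId !== null && issue.assigneeUserId !== null) {
    throw new Error("issue cannot have both an agent and a user assignee");
  }
}
```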
## 3. Status Semantics
Paperclip issue statuses are not just UI labels. They imply different expectations about ownership and execution.
### `backlog`
The issue is not ready for active work.
- no execution expectation
- no pickup expectation
- safe resting state for future work
### `todo`
The issue is actionable but not actively claimed.
- it may be assigned or unassigned
- no checkout/execution lock is required yet
- for agent-assigned work, Paperclip may still need a wake path to ensure the assignee actually sees it
### `in_progress`
The issue is actively owned work.
- requires an assignee
- for agent-owned issues, this is a strict execution-backed state
- for user-owned issues, this is a human ownership state and is not backed by heartbeat execution
For agent-owned issues, `in_progress` should not be allowed to become a silent dead state.
### `blocked`
The issue cannot proceed until something external changes.
This is the right state for:
- waiting on another issue
- waiting on a human decision
- waiting on an external dependency or system when Paperclip does not own a scheduled re-check
- work that automatic recovery could not safely continue
### `in_review`
Execution work is paused because the next move belongs to a reviewer or approver, not the current executor.
An external review service can also be a valid review path when the issue keeps an agent assignee and has an active one-shot monitor that will wake that assignee to check the service later.
### `done`
The work is complete and terminal.
### `cancelled`
The work will not continue and is terminal.
## 4. Agent-Owned vs User-Owned Execution
The execution model differs depending on assignee type.
### Agent-owned issues
Agent-owned issues are part of the control plane's execution loop.
- Paperclip can wake the assignee
- Paperclip can track runs linked to the issue
- Paperclip can recover some lost execution state after crashes/restarts
### User-owned issues
User-owned issues are not executed by the heartbeat scheduler.
- Paperclip can track the ownership and status
- Paperclip cannot rely on heartbeat/run semantics to keep them moving
- stranded-work reconciliation does not apply to them
This is why `in_progress` can be strict for agents without forcing the same runtime rules onto human-held work.
## 5. Checkout and Active Execution
Checkout is the bridge from issue ownership to active agent execution.
- checkout is required to move an issue into agent-owned `in_progress`
- `checkoutRunId` represents issue-ownership lock for the current agent run
- `executionRunId` represents the currently active execution path for the issue
These are related but not identical:
- `checkoutRunId` answers who currently owns execution rights for the issue
- `executionRunId` answers which run is actually live right now
Paperclip already clears stale execution locks and can adopt some stale checkout locks when the original run is gone.
## 6. Parent/Sub-Issue vs Blockers
Paperclip uses two different relationships for different jobs.
### Parent/Sub-Issue (`parentId`)
This is structural.
Use it for:
- work breakdown
- rollup context
- explaining why a child issue exists
- waking the parent assignee when all direct children become terminal
Do not treat `parentId` as execution dependency by itself.
### Blockers (`blockedByIssueIds`)
This is dependency semantics.
Use it for:
- "this issue cannot continue until that issue changes state"
- explicit waiting relationships
- automatic wakeups when all blockers resolve
Blocked issues should stay idle while blockers remain unresolved. Paperclip should not create a queued heartbeat run for that issue until the final blocker is done and the `issue_blockers_resolved` wake can start real work.
If a parent is truly waiting on a child, model that with blockers. Do not rely on the parent/child relationship alone.
## 7. Non-Terminal Issue Liveness Contract
For agent-owned, non-terminal issues, Paperclip should never leave work in a state where nobody is responsible for the next move and nothing will wake or surface it.
This is a visibility contract, not an auto-completion contract. If Paperclip cannot safely infer the next action, it should surface the ambiguity with a blocked state, a visible notice, or an explicit recovery action. It must not silently mark work done from prose comments or guess that a dependency is complete.
An issue is healthy when the product can answer "what moves this forward next?" without requiring a human to reconstruct intent from the whole thread. An issue is stalled when it is non-terminal but has no live execution path, no explicit waiting path, and no recovery path.
The valid action-path primitives are:
- an active run linked to the issue
- a queued wake or continuation that can be delivered to the responsible agent
- a typed execution-policy participant, such as `executionState.currentParticipant`
- a pending issue-thread interaction or linked approval that is waiting for a specific responder
- a one-shot issue monitor (`executionPolicy.monitor.nextCheckAt`) that will wake the assignee for a future check
- a human owner via `assigneeUserId`
- a first-class blocker chain whose unresolved leaf issues are themselves healthy
- an open explicit recovery action that names the owner and action needed to restore liveness
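The primitives above reduce to a simple disjunction. This sketch flattens each primitive to an illustrative boolean; the real liveness service derives these from runs, wakes, policy records, and blocker graphs rather than flags.

```typescript
// One flag per action-path primitive listed above.
interface ActionPaths {
  activeRun: boolean;
  queuedWake: boolean;
  typedParticipant: boolean;
  pendingInteractionOrApproval: boolean;
  armedMonitor: boolean;
  humanOwner: boolean;
  healthyBlockerChain: boolean;
  openRecoveryAction: boolean;
}

// An issue is stalled only when every primitive is absent.
function isStalled(paths: ActionPaths): boolean {
  return !Object.values(paths).some(Boolean);
}
```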
### Explicit recovery actions
An explicit recovery action is a typed liveness repair path for a source issue. It is the recovery primitive; the action can be rendered directly on the source issue or backed by a separate recovery issue when the repair needs its own work item.
A valid recovery action must name:
- the source issue and company
- the recovery kind and idempotency fingerprint
- the recovery owner, plus previous or return owner when ownership may temporarily shift
- the cause, bounded evidence, and next action
- the wake, monitor, timeout, retry, or escalation policy that will move the action forward
- the resolution outcome when closed, such as restored, delegated, false positive, blocked, escalated, or cancelled
A source-scoped recovery action is the default form. Use it when the next safe move is to repair the source issue's liveness directly: restore a wake path, clarify disposition, re-establish a monitor, record a false positive, or delegate real follow-up work from the source issue.
Use an issue-backed recovery action only when the recovery is genuinely independent work or when source-scoped handling would be unsafe or unclear. Examples include:
- long or cross-agent repair work with its own assignee, subtasks, or blockers
- real delegated follow-up that should block the source issue as a first-class dependency
- active-run watchdog work that must observe a still-running source process without interfering with it
- recovery that needs separate review, approval, security handling, or escalation ownership
- cases where source issue ownership cannot be changed or restored safely
A comment or system notice can be evidence for a recovery action, but it is not a recovery action by itself. Comment-only recovery is not a healthy liveness path because it does not define a typed owner, wake or monitor policy, retry bound, timeout, escalation path, or resolution outcome.
### Agent-assigned `todo`
This is dispatch state: ready to start, not yet actively claimed.
A healthy dispatch state means at least one of these is true:
- the issue already has a queued wake path
- the issue is intentionally resting in `todo` after a completed agent heartbeat, with no interrupted dispatch evidence
- the issue has been explicitly surfaced as stranded through a visible blocked/recovery path
An assigned `todo` issue is stalled when dispatch was interrupted, no wake remains queued or running, and no recovery path has been opened.
### Agent-assigned `backlog`
This is parked state, not dispatch state.
Assigning an issue normally implies executable intent. When create APIs receive an assignee and no explicit status, Paperclip defaults the issue to `todo` so the assignee has a wake path instead of silently inheriting the unassigned `backlog` default.
An explicit assigned `backlog` issue remains valid when the creator is deliberately parking the work. It must not wake the assignee just because it has an assignee. Paperclip should make that choice visible in activity and UI so operators can distinguish intentional parking from a missed handoff.
An assigned `backlog` issue becomes a liveness problem when another issue is blocked on it and there is no explicit waiting path such as a human owner, active run, queued wake, pending interaction or approval, monitor, or open recovery action. In that case the blocked parent should surface "blocked by parked work" rather than treating the dependency chain as healthy.
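The create-API default described above can be sketched as one function: an assignee with no explicit status yields `todo`, while explicit `backlog` parking is respected. Names are illustrative, not the actual route signature.

```typescript
type IssueStatus =
  | "backlog" | "todo" | "in_progress" | "in_review" | "done" | "blocked" | "cancelled";

function defaultCreateStatus(
  assigneeAgentId: string | null,
  explicitStatus?: IssueStatus,
): IssueStatus {
  // An explicit status always wins, including deliberate `backlog` parking.
  if (explicitStatus) return explicitStatus;
  // Assignment implies executable intent, so default to `todo` for a wake path.
  return assigneeAgentId !== null ? "todo" : "backlog";
}
```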
### Agent-assigned `in_progress`
This is active-work state.
A healthy active-work state means at least one of these is true:
- there is an active run for the issue
- there is already a queued continuation wake
- there is an active one-shot monitor that will wake the assignee for a future check
- there is an open explicit recovery action for the lost execution path
An agent-owned `in_progress` issue is stalled when it has no active run, no queued continuation, and no explicit recovery surface. A still-running but silent process is not automatically stalled; it is handled by the active-run watchdog contract.
### `in_review`
This is review/approval state: execution is paused because the next move belongs to a reviewer, approver, board user, or recovery owner.
A healthy `in_review` issue has at least one valid action path:
- a typed execution-policy participant who can approve or request changes
- a pending issue-thread interaction or linked approval waiting for a named responder
- a human owner via `assigneeUserId`
- an active run or queued wake that is expected to process the review state
- an active one-shot monitor for an external service or async review loop that the assignee owns
- an open explicit recovery action for an ambiguous review handoff
Agent-assigned `in_review` with no typed participant is only healthy when one of the other paths exists. Assignment to the same agent that produced the handoff is not, by itself, a review path.
An `in_review` issue is stalled when it has no typed participant, no pending interaction or approval, no user owner, no active monitor, no active run, no queued wake, and no explicit recovery action. Paperclip should surface that state as recovery work rather than silently completing the issue or leaving blocker chains parked indefinitely.
### Issue monitors
An issue monitor is a one-shot deferred action path for agent-owned issues in `in_progress` or `in_review`.
Use a monitor when the current assignee owns a future check against an async system or external service. Examples include Greptile review loops, GitHub checks, Vercel deployments, or provider jobs where the agent should come back later and decide what happens next.
Monitor policy lives under `executionPolicy.monitor` and includes:
- `nextCheckAt`: when Paperclip should wake the assignee
- `notes`: non-secret instructions for what the assignee should check
- `serviceName`: optional non-secret external-service context
- `externalRef`: optional external-service reference input; Paperclip treats it as secret-adjacent, redacts it before persistence/visibility, and omits it from activity and wake payloads
- `timeoutAt`, `maxAttempts`, and `recoveryPolicy`: optional recovery hints for bounded waits
Monitors are not recurring intervals. When a monitor fires, Paperclip clears the scheduled monitor and queues an `issue_monitor_due` wake for the assignee. If the external service is still pending, the assignee must explicitly re-arm the monitor with a new `nextCheckAt`. If the issue moves to `done`, `cancelled`, an invalid status, or a human/unassigned owner, the monitor is cleared.
Because `serviceName` and `notes` remain visible in issue activity and wake context, operators should keep them short and non-secret. Put enough context for the assignee to know what to inspect, but do not include signed URLs, bearer tokens, customer secrets, tenant-private identifiers, or provider links with embedded credentials.
Monitor bounds are enforced. Paperclip rejects attempts to re-arm a monitor whose `timeoutAt` or `maxAttempts` is already exhausted. When a scheduled monitor reaches an exhausted bound at trigger time, Paperclip clears it and follows `recoveryPolicy`: `wake_owner` queues a bounded recovery wake for the assignee, `create_recovery_issue` opens visible issue-backed recovery work, and `escalate_to_board` records a board-visible escalation comment/activity.
Use `blocked` instead of a monitor when no Paperclip assignee owns a responsible polling path. In that case, name the external owner/action or create first-class recovery/blocker work.
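The monitor fields and bound enforcement above can be sketched as follows. The interface mirrors this document's field list; the persisted schema may differ in detail, and `canRearm` is an illustrative reduction of the re-arm rejection rule, not the shipped check.

```typescript
interface MonitorPolicy {
  nextCheckAt: string;      // ISO timestamp: when Paperclip wakes the assignee
  notes?: string;           // short, non-secret check instructions
  serviceName?: string;     // optional non-secret external-service context
  externalRef?: string;     // redacted before persistence/visibility
  timeoutAt?: string;       // optional bound on the overall wait
  maxAttempts?: number;     // optional bound on re-arms
  recoveryPolicy?: "wake_owner" | "create_recovery_issue" | "escalate_to_board";
}

// Re-arm is rejected once timeoutAt or maxAttempts is already exhausted.
function canRearm(policy: MonitorPolicy, attemptsSoFar: number, now: Date): boolean {
  if (policy.timeoutAt && new Date(policy.timeoutAt) <= now) return false;
  if (policy.maxAttempts !== undefined && attemptsSoFar >= policy.maxAttempts) return false;
  return true;
}
```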
### `blocked`
This is explicit waiting state.
A healthy `blocked` issue has an explicit waiting path:
- first-class blockers exist, and each unresolved leaf has a valid action path under this contract
- the issue has an explicit recovery action that itself has a live or waiting path
- the issue is waiting on a pending interaction, linked approval, human owner, or clearly named external owner/action
A blocker chain is covered only when its unresolved leaf is live or explicitly waiting. An intermediate `blocked` issue does not make the chain healthy by itself.
A `blocked` issue is stalled when the unresolved blocker leaf has no active run, queued wake, typed participant, pending interaction or approval, user owner, external owner/action, or recovery action. In that case the parent should show the first stalled leaf instead of presenting the dependency as calmly covered.
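The leaf-coverage rule above can be sketched as a recursive search that returns the first stalled leaf, if any. The node shape is illustrative; the real service walks `issue_relations` rather than an in-memory tree.

```typescript
interface BlockerNode {
  resolved: boolean;
  liveOrWaiting: boolean;   // has a valid action path under this contract
  blockers: BlockerNode[];
}

function firstStalledLeaf(node: BlockerNode): BlockerNode | null {
  if (node.resolved) return null;
  const unresolvedChildren = node.blockers.filter((b) => !b.resolved);
  if (unresolvedChildren.length === 0) {
    // This node is an unresolved leaf: healthy only if live or explicitly waiting.
    return node.liveOrWaiting ? null : node;
  }
  for (const child of unresolvedChildren) {
    // Intermediate blocked issues do not make the chain healthy by themselves;
    // only the unresolved leaves decide coverage.
    const stalled = firstStalledLeaf(child);
    if (stalled !== null) return stalled;
  }
  return null;
}
```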
## 8. Crash and Restart Recovery
Paperclip now treats crash/restart recovery as a stranded-assigned-work problem, not just a stranded-run problem.
There are two distinct failure modes.
### 8.1 Stranded assigned `todo`
Example:
- issue is assigned to an agent
- status is `todo`
- the original wake/run died during or after dispatch
- after restart there is no queued wake and nothing picks the issue back up
Recovery rule:
- if the latest issue-linked run failed/timed out/cancelled and no live execution path remains, Paperclip queues one automatic assignment recovery wake
- if that recovery wake also finishes and the issue is still stranded, Paperclip moves the issue to `blocked` and opens or updates an explicit recovery action when a bounded owner/action is known; the visible comment is evidence, not the recovery path by itself
This is a dispatch recovery, not a continuation recovery.
### 8.2 Stranded assigned `in_progress`
Example:
- issue is assigned to an agent
- status is `in_progress`
- the live run disappeared
- after restart there is no active run and no queued continuation
Recovery rule:
- Paperclip queues one automatic continuation wake
- if that continuation wake also finishes and the issue is still stranded, Paperclip moves the issue to `blocked` and opens or updates an explicit recovery action when a bounded owner/action is known; the visible comment is evidence, not the recovery path by itself
This is an active-work continuity recovery.
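Both stranded cases above follow the same one-automatic-retry shape, which can be sketched minimally. The names are illustrative; the real recovery service also checks run outcomes and remaining execution paths before choosing a step.

```typescript
type RecoveryStep = "queue_recovery_wake" | "move_to_blocked_with_recovery_action";

// One automatic wake per stranded issue; a second strand escalates to a
// visible blocked state plus an explicit recovery action.
function nextRecoveryStep(automaticWakeAlreadyUsed: boolean): RecoveryStep {
  return automaticWakeAlreadyUsed
    ? "move_to_blocked_with_recovery_action"
    : "queue_recovery_wake";
}
```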
## 9. Startup and Periodic Reconciliation
Startup recovery and periodic recovery are different from normal wakeup delivery.
On startup and on the periodic recovery loop, Paperclip now does four things in sequence:
1. reap orphaned `running` runs
2. resume persisted `queued` runs
3. reconcile stranded assigned work
4. scan silent active runs and create or update explicit watchdog recovery actions
The stranded-work pass closes the gap where issue state survives a crash but the wake/run path does not. The silent-run scan covers the separate case where a live process exists but has stopped producing observable output.
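The four-step sequence above can be sketched as an ordered pipeline. The step functions are placeholders for the real services, which are asynchronous; this synchronous version only illustrates the ordering contract.

```typescript
function runReconciliation(steps: {
  reapOrphanedRunningRuns: () => void;
  resumePersistedQueuedRuns: () => void;
  reconcileStrandedAssignedWork: () => void;
  scanSilentActiveRuns: () => void;
}): string[] {
  const executed: string[] = [];
  // 1. reap orphaned `running` runs
  steps.reapOrphanedRunningRuns();
  executed.push("reap");
  // 2. resume persisted `queued` runs
  steps.resumePersistedQueuedRuns();
  executed.push("resume");
  // 3. reconcile stranded assigned work
  steps.reconcileStrandedAssignedWork();
  executed.push("stranded");
  // 4. scan silent active runs for watchdog recovery actions
  steps.scanSilentActiveRuns();
  executed.push("watchdog");
  return executed;
}
```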
## 10. Silent Active-Run Watchdog
An active run can still be unhealthy even when its process is `running`. Paperclip treats prolonged output silence as a watchdog signal, not as proof that the run is failed.
The recovery service owns this contract:
- classify active-run output silence as `ok`, `suspicious`, `critical`, `snoozed`, or `not_applicable`
- collect bounded evidence from run logs, recent run events, child issues, and blockers
- preserve redaction and truncation before evidence is written to issue descriptions
- create at most one open watchdog recovery action per run; issue-backed implementations use `stale_active_run_evaluation` issues
- honor active snooze decisions before creating more review work
- build the `outputSilence` summary shown by live-run and active-run API responses
Suspicious silence creates a medium-priority watchdog recovery action for the selected recovery owner. Critical silence raises that recovery action to high priority and, when issue-backed evaluation is needed for correctness, blocks the source issue on the explicit evaluation task without cancelling the active process.
Watchdog decisions are explicit operator/recovery-owner decisions:
- `snooze` records an operator-chosen future quiet-until time and suppresses scan-created review work during that window
- `continue` records that the current evidence is acceptable, does not cancel or mutate the active run, and sets a 30-minute default re-arm window before the watchdog evaluates the still-silent run again
- `dismissed_false_positive` records why the review was not actionable
Operators should prefer `snooze` for known time-bounded quiet periods. `continue` is only a short acknowledgement of the current evidence; if the run remains silent after the re-arm window, the periodic watchdog scan can create or update review work again.
The board can record watchdog decisions. The assigned owner of an issue-backed watchdog evaluation can also record them. Other agents cannot.
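The decision effects above can be sketched as a quiet-window calculation. The 30-minute `continue` re-arm window comes from this document; the function name and shape are assumptions for illustration.

```typescript
type WatchdogDecision = "snooze" | "continue" | "dismissed_false_positive";

function quietUntil(decision: WatchdogDecision, now: Date, snoozeUntil?: Date): Date | null {
  switch (decision) {
    case "snooze":
      // Operator-chosen quiet window; suppresses scan-created review work.
      return snoozeUntil ?? null;
    case "continue":
      // Short acknowledgement of current evidence: default 30-minute re-arm.
      return new Date(now.getTime() + 30 * 60 * 1000);
    case "dismissed_false_positive":
      // Records why the review was not actionable; no quiet window.
      return null;
  }
}
```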
## 11. Auto-Recover vs Explicit Recovery vs Human Escalation
Paperclip uses three different recovery outcomes, depending on how much it can safely infer.
### Auto-Recover
Auto-recovery is allowed when ownership is clear and the control plane only lost execution continuity.
Examples:
- requeue one dispatch wake for an assigned `todo` issue whose latest run failed, timed out, or was cancelled
- requeue one continuation wake for an assigned `in_progress` issue whose live execution path disappeared
- assign an orphan blocker back to its creator when that blocker is already preventing other work
Auto-recovery preserves the existing owner. It does not choose a replacement agent.
### Explicit Recovery Action
Paperclip opens an explicit recovery action when the system can identify a problem but cannot safely complete the work itself.
Examples:
- automatic stranded-work retry was already exhausted
- a dependency graph has an invalid/uninvokable owner, unassigned blocker, or invalid review participant
- an active run is silent past the watchdog threshold
The recovery action stays source-scoped by default. The source issue should show the recovery owner, cause, evidence, next action, and wake or monitor policy in its own thread/detail surface.
Create an issue-backed recovery action only when a separate issue is the right execution object. In that fallback form, the source issue remains visible and is blocked on the recovery issue when blocking is necessary for correctness. The recovery owner must restore a live path, resolve the source issue manually, delegate real follow-up work, or record the reason the signal is a false positive.
Instance-level issue-graph liveness auto-recovery is disabled by default. When enabled, its lookback window means "dependency paths updated within the last N hours"; older findings remain advisory and are counted as outside the configured lookback instead of creating recovery actions automatically. This is an operator noise control, not the older staleness delay for determining whether a chain is old enough to surface.
### Human Escalation
Human escalation is required when the next safe action depends on board judgment, budget/approval policy, or information unavailable to the control plane.
Examples:
- all candidate recovery owners are paused, terminated, pending approval, or budget-blocked
- the issue is human-owned rather than agent-owned
- the run is intentionally quiet but needs an operator decision before cancellation or continuation
In these cases Paperclip should leave a visible issue/comment trail instead of silently retrying.
## 12. What This Does Not Mean
These semantics do not change V1 into an auto-reassignment system.
Paperclip still does not:
- automatically reassign work to a different agent
- infer dependency semantics from `parentId` alone
- treat human-held work as heartbeat-managed execution
The recovery model is intentionally conservative:
- preserve ownership
- retry once when the control plane lost execution continuity
- open an explicit recovery action when the system can identify a bounded recovery owner/action
- escalate visibly when the system cannot safely keep going
## 13. Practical Interpretation
For a board operator, the intended meaning is:
- agent-owned `in_progress` should mean "this is live work or clearly surfaced as a problem"
- agent-owned `todo` should not stay assigned forever after a crash with no remaining wake path
- parent/sub-issue explains structure
- blockers explain waiting
That is the execution contract Paperclip should present to operators.

@@ -1,382 +0,0 @@
# VS Code Task Interoperability Plan
Status: planning only, no code changes
Date: 2026-04-12
Related issue: `PAP-1377`
## Summary
Paperclip should not replace its workspace runtime service model with VS Code tasks.
It should add a narrow interoperability layer that can discover and adopt supported entries from `.vscode/tasks.json`.
The core product model should stay:
- Paperclip owns long-running workspace services and their desired state
- Paperclip shows operators exactly which named thing they are starting or stopping
- Paperclip distinguishes long-running services from one-shot jobs
VS Code tasks should be treated as:
- an import/discovery format for workspace commands
- a convenience for repos that already maintain `tasks.json`
- a partial compatibility layer, not a full execution model
## Current State
The current implementation is already service-oriented:
- project workspaces and execution workspaces can store `workspaceRuntime` config plus `desiredState` and per-service `serviceStates`
- the UI renders one control row per configured service and persists start/stop intent
- the backend supervises long-running local processes, reuses eligible services, and restores desired services on startup
Relevant files:
- `packages/shared/src/types/workspace-runtime.ts`
- `server/src/services/workspace-runtime.ts`
- `server/src/services/project-workspace-runtime-config.ts`
- `ui/src/components/WorkspaceRuntimeControls.tsx`
- `ui/src/pages/ProjectWorkspaceDetail.tsx`
- `ui/src/pages/ExecutionWorkspaceDetail.tsx`
This is directionally correct for Paperclip because it gives the control plane an explicit model for service lifecycle, health, reuse, and restart behavior.
## Problem To Solve
The current UX is still too raw:
- operators have to hand-author runtime JSON
- a workspace can have multiple attached services, but the higher-level intent is not obvious
- start/stop controls are visible in multiple places, which makes it easy to lose track of what is being controlled
- there is no interoperability with repos that already define useful local workflows in `.vscode/tasks.json`
The issue is not that services are the wrong abstraction.
The issue is that the configuration surface is too low-level and Paperclip does not yet leverage existing workspace metadata.
## Recommendation
Keep Paperclip runtime services as the source of truth for service supervision.
Add a new workspace command model above the raw JSON layer, with VS Code task discovery as one input.
The product model should become:
1. `Workspace command`
A named runnable thing attached to a workspace.
2. `Workspace service`
A workspace command that is expected to stay alive and be supervised.
3. `Workspace job`
A workspace command that runs once and exits.
4. `Runtime service instance`
The live process record that already exists today in Paperclip.
In that model, VS Code tasks are a way to populate workspace commands.
Only commands that map cleanly to Paperclip service or job semantics should become runnable in Paperclip.
## Why Not Fully Adopt VS Code Tasks
VS Code tasks are broader than Paperclip runtime services.
They include shell/process tasks, compound tasks, background/watch tasks, presentation settings, extension/task-provider types, variable substitution, and problem-matcher-driven lifecycle.
That creates a bad fit if Paperclip tries to use `tasks.json` as its only runtime model:
- many tasks are one-shot jobs, not long-running services
- some tasks depend on VS Code task providers or editor-only variable resolution
- compound task graphs are useful, but they are not the same thing as a supervised service
- problem matcher readiness is useful metadata, but it is not enough to replace Paperclip's persisted service lifecycle model
The right boundary is interoperability, not replacement.
## Interoperability Contract
Paperclip should support a conservative subset of VS Code tasks and clearly mark unsupported entries.
### Supported in phase 1
- `shell` and `process` tasks with a concrete command Paperclip can resolve
- optional task `options.cwd`
- optional task environment values that can be flattened safely
- task labels and detail text for naming and display
- `dependsOn` for import-time expansion or display-only dependency hints
- background/watch-oriented tasks that can reasonably be treated as long-running services
### Maybe supported in later phases
- grouping and default task metadata for better UX
- selected variable substitution when Paperclip can resolve it safely from workspace context
- mapping task metadata into Paperclip readiness/expose hints
- limited compound-task launch flows
### Not supported initially
- extension-provided task types Paperclip cannot execute directly
- arbitrary VS Code variable substitution semantics
- problem matcher parsing as the main source of service health
- full parity with VS Code task execution behavior
## Long-Running Service Detection
Paperclip needs an explicit classification layer instead of assuming every VS Code task is a service.
Recommended classification:
- `service`
Explicitly marked by Paperclip metadata, or confidently inferred from background/watch task semantics
- `job`
One-shot command expected to exit
- `unsupported`
Present in `tasks.json`, but not safely runnable by Paperclip
The important product decision is that service classification must be visible and editable by the operator.
Inference can help, but it should not be the only source of truth.
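The classification rules above can be sketched as a small function. This is illustrative only (the task shape follows `tasks.json` fields; the `paperclip` metadata field is the namespaced extension proposed later in this plan, not an existing API):

```typescript
// Sketch: classify an imported VS Code task as service / job / unsupported.
type ImportedTask = {
  label: string;
  type?: string; // "shell" | "process" | extension-provided types
  command?: string;
  isBackground?: boolean;
  paperclip?: { kind?: "service" | "job" }; // proposed namespaced metadata
};

type Classification = "service" | "job" | "unsupported";

function classifyTask(task: ImportedTask): Classification {
  // Only shell/process tasks with a concrete command are runnable by Paperclip.
  if (task.type !== "shell" && task.type !== "process") return "unsupported";
  if (!task.command) return "unsupported";
  // Explicit Paperclip metadata wins over inference.
  if (task.paperclip?.kind) return task.paperclip.kind;
  // Background/watch tasks are service *candidates*; the operator still
  // confirms the classification in the UI before it becomes supervised.
  return task.isBackground ? "service" : "job";
}
```

Note that inference only produces a default; the operator-editable classification in the UI remains the source of truth.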
## Proposed Product Shape
### 1. Replace raw-first editing with command-first editing
Project and execution workspace pages should stop making raw runtime JSON the primary editing surface.
Default UI should show:
- workspace commands
- command type: service or job
- source: Paperclip or VS Code
- exact command and cwd
- current state for services
- explicit start, stop, restart, and run-now actions
Raw JSON should remain available behind an advanced section.
### 2. Add VS Code task discovery on workspaces
For a workspace with `cwd`, Paperclip should look for `.vscode/tasks.json`.
The workspace UI should show:
- whether a `tasks.json` file was found
- last parse time
- supported commands discovered
- unsupported tasks with reasons
- whether commands are inherited into execution workspaces
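A discovery payload carrying that information could look like the following sketch. Field names here are hypothetical, not an existing Paperclip API:

```typescript
// Hypothetical per-workspace discovery result for .vscode/tasks.json.
type TaskDiscoveryResult = {
  tasksJsonFound: boolean;
  parsedAt: string | null; // ISO timestamp of the last parse
  supported: { label: string; command: string }[];
  unsupported: { label: string; reason: string }[];
  inheritedByExecutionWorkspaces: boolean;
};

const example: TaskDiscoveryResult = {
  tasksJsonFound: true,
  parsedAt: new Date(0).toISOString(),
  supported: [{ label: "web", command: "pnpm dev" }],
  unsupported: [{ label: "lint:ext", reason: "extension-provided task type" }],
  inheritedByExecutionWorkspaces: true,
};
```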
### 3. Make the controlled thing explicit
Start and stop UI should always name the exact entry being controlled.
Examples:
- `Start web`
- `Stop api`
- `Run db:migrate`
Avoid generic workspace-level labels when multiple commands exist.
### 4. Separate services from jobs in the UI
Do not mix one-shot jobs and long-running services into one undifferentiated list.
Recommended sections:
- `Services`
- `Jobs`
- `Unsupported imported tasks`
That resolves the ambiguity called out in the issue.
## Data Model Direction
Do not replace `workspaceRuntime` immediately.
Instead add a higher-level representation that can compile down to the existing runtime-service machinery.
Suggested workspace metadata shape:
```ts
type WorkspaceCommandSource =
  | { type: "paperclip" }
  | { type: "vscode_task"; taskLabel: string; taskPath: ".vscode/tasks.json" };

type WorkspaceCommandKind = "service" | "job";

type WorkspaceCommandDefinition = {
  id: string;
  name: string;
  kind: WorkspaceCommandKind;
  source: WorkspaceCommandSource;
  command: string | null;
  cwd: string | null;
  env?: Record<string, string> | null;
  autoStart?: boolean;
  serviceConfig?: {
    lifecycle?: "shared" | "ephemeral";
    reuseScope?: "project_workspace" | "execution_workspace" | "run";
    readiness?: Record<string, unknown> | null;
    expose?: Record<string, unknown> | null;
  } | null;
  importWarnings?: string[];
  disabledReason?: string | null;
};
```
`workspaceRuntime` can then become a derived or advanced representation for service-type commands until the rest of the system is migrated.
## VS Code Mapping Rules
Paperclip should map imported tasks with explicit, documented rules.
Recommended rules:
1. A task becomes a `job` by default.
2. A task becomes a `service` only when:
- Paperclip metadata marks it as a service, or
- the task clearly represents a background/watch process and the operator confirms the classification.
3. Unsupported tasks stay visible but disabled.
4. Task labels become default command names.
5. `dependsOn` is preserved as metadata, not silently flattened into hidden behavior.
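The rules above can be sketched as an import mapper. Helper names are illustrative; only the rule numbering maps back to this plan:

```typescript
// Sketch: map an imported VS Code task into a workspace command per the rules above.
type VsCodeTask = {
  label: string;
  type: string;
  command?: string;
  isBackground?: boolean;
  dependsOn?: string[];
};

function importTask(task: VsCodeTask, operatorConfirmedService = false) {
  const runnable =
    (task.type === "shell" || task.type === "process") && Boolean(task.command);
  return {
    id: `vscode:${task.label}`,
    name: task.label, // rule 4: task labels become default command names
    kind:
      task.isBackground && operatorConfirmedService
        ? ("service" as const) // rule 2: service requires operator confirmation
        : ("job" as const), // rule 1: job by default
    source: {
      type: "vscode_task",
      taskLabel: task.label,
      taskPath: ".vscode/tasks.json",
    } as const,
    command: runnable ? task.command! : null,
    dependsOnHint: task.dependsOn ?? [], // rule 5: metadata only, never hidden behavior
    disabledReason: runnable ? null : "unsupported task type or missing command", // rule 3
  };
}
```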
Paperclip-specific metadata can live in a namespaced field on the imported task definition, for example:
```json
{
  "label": "web",
  "type": "shell",
  "command": "pnpm dev",
  "isBackground": true,
  "paperclip": {
    "kind": "service",
    "readiness": {
      "type": "http",
      "urlTemplate": "http://127.0.0.1:${port}"
    },
    "expose": {
      "type": "url",
      "urlTemplate": "http://127.0.0.1:${port}"
    }
  }
}
```
That gives us interoperability without depending on VS Code-only semantics for service readiness and exposure.
## Execution Policy
Project workspaces should be the main place where imported commands are discovered and curated.
Execution workspaces should inherit that curated command set by default, with optional issue-level overrides.
Recommended precedence:
1. execution workspace override
2. project workspace command set
3. imported VS Code tasks from the linked workspace
4. advanced raw runtime fallback
This matches the existing direction in `doc/plans/2026-03-10-workspace-strategy-and-git-worktrees.md`.
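The precedence order can be expressed as a simple merge where higher-precedence layers win per command name. This is a sketch; the layer names are taken from the list above, not from existing code:

```typescript
// Sketch: resolve the effective command set from the four precedence layers.
type CommandMap = Record<string, { command: string }>;

function resolveCommands(layers: {
  executionOverride: CommandMap; // 1. highest precedence
  projectCommands: CommandMap; // 2.
  importedVsCodeTasks: CommandMap; // 3.
  rawRuntimeFallback: CommandMap; // 4. lowest precedence
}): CommandMap {
  // Lower-precedence layers are spread first so higher ones win per name.
  return {
    ...layers.rawRuntimeFallback,
    ...layers.importedVsCodeTasks,
    ...layers.projectCommands,
    ...layers.executionOverride,
  };
}
```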
## Implementation Plan
### Phase 1: Discovery and read-only visibility
Goal:
show imported VS Code tasks in the workspace UI without changing runtime behavior.
Work:
- parse `.vscode/tasks.json` for project workspaces with local `cwd`
- derive a list of candidate commands plus unsupported items
- show source, label, command, cwd, and classification
- show parse warnings and unsupported reasons
Success condition:
an operator can see what Paperclip would import and why.
### Phase 2: Command model and explicit classification
Goal:
introduce a first-class workspace command layer above raw runtime JSON.
Work:
- add a persisted command definition model in workspace metadata or a dedicated table
- allow operator edits to imported command classification
- separate `service` and `job` in UI
- keep existing runtime-service storage for live supervised processes
Success condition:
the workspace UI is command-first, and raw runtime JSON is advanced-only.
### Phase 3: Service execution backed by existing runtime supervisor
Goal:
run supported imported service commands through the current Paperclip supervisor.
Work:
- compile service commands into the existing runtime service start/stop path
- persist desired state per named command
- keep startup restoration behavior for service commands
- make the active command name explicit everywhere control actions appear
Success condition:
imported service commands behave like native Paperclip services once adopted.
### Phase 4: Job execution and optional dependency handling
Goal:
support one-shot imported commands without pretending they are services.
Work:
- add `Run` actions for jobs
- record output in workspace operations
- optionally support simple `dependsOn` execution for jobs with clear logging
Success condition:
one-shot tasks are runnable, but they are not mixed into the service lifecycle model.
### Phase 5: Adapter and execution workspace integration
Goal:
let agents and issue-scoped workspaces consume the curated command model consistently.
Work:
- expose inherited workspace commands to execution workspaces
- allow issue-level selection of a default service command when relevant
- make service selection explicit in issue and workspace views
Success condition:
agents, operators, and workspaces all refer to the same named commands.
## Non-Goals
- full VS Code task-runner parity
- support for every VS Code task type
- removal of Paperclip's own runtime supervision model
- editor-dependent execution semantics inside the control plane
## Risks
- overfitting Paperclip to VS Code and making the model worse for non-VS-Code repos
- misclassifying watch tasks as durable services
- hiding too much detail and making debugging harder
- allowing imported task graphs to become implicit magic
These risks are manageable if the import layer stays explicit, conservative, and operator-editable.
## Decision
Paperclip should adopt VS Code tasks as an optional workspace command source, not as the canonical runtime model.
The main UX change should be:
- move from raw runtime JSON to named workspace commands
- separate services from jobs
- make the exact controlled command explicit
- let `.vscode/tasks.json` pre-populate those commands when available
## External References
- VS Code tasks documentation: https://code.visualstudio.com/docs/debugtest/tasks
- Existing Paperclip workspace plan: `doc/plans/2026-03-10-workspace-strategy-and-git-worktrees.md`


@@ -1,86 +0,0 @@
# Plugin Secret Refs: Company Scope Reintroduction Plan
Date: 2026-04-26
Status: follow-up after fail-closed mitigation
Related issue: PAP-2394
## Current state
`PAP-2394` now fails closed:
- `POST /api/plugins/:pluginId/config` rejects any config containing plugin secret refs.
- `ctx.secrets.resolve()` is disabled for plugin workers.
This removes the release-blocking cross-company exposure path, but it also disables plugin secret-ref support until the runtime carries company scope end to end.
## Vulnerability summary
The original design mixed an instance-global config store with company-scoped secret bindings:
- `server/src/routes/plugins.ts:1898` saved one global plugin config row, then wrote bindings into `company_secret_bindings` grouped by each referenced secret's owning company.
- `packages/db/src/schema/plugin_config.ts:15` stored one config row per plugin, with no company dimension.
- `packages/db/src/schema/company_secret_bindings.ts:5` already modeled bindings as company-scoped.
- `server/src/services/plugin-secrets-handler.ts:212` resolved by `pluginId` + secret UUID, with no active company context from the bridge call.
- `packages/plugins/sdk/src/worker-rpc-host.ts:384` exposed `ctx.config.get()` and `ctx.secrets.resolve()` without a company parameter.
This violated Least Privilege, Complete Mediation, and Secure Defaults.
## Recommended end state
Re-enable plugin secret refs only after both of these are true:
1. Plugin config reads/writes are company-scoped.
2. Runtime secret resolution carries explicit company context and enforces it at resolution time.
## Implementation plan
### 1. Make plugin config company-scoped
- Add `company_id` to `plugin_config`, with a unique index on `(plugin_id, company_id)`.
- Update registry helpers to require `companyId` for `getConfig`, `upsertConfig`, `patchConfig`, and `deleteConfig`.
- Update plugin config routes to require `companyId` and call `assertCompanyAccess(req, companyId)`.
- Keep instance-global plugin lifecycle state separate from company-scoped plugin config.
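The intended access pattern can be sketched with an in-memory stand-in for the `(plugin_id, company_id)` unique key. The helper names echo the registry helpers above, but the real storage layer will differ:

```typescript
// Sketch: company-scoped plugin config keyed by (pluginId, companyId).
type PluginConfigKey = { pluginId: string; companyId: string };

const store = new Map<string, Record<string, unknown>>();
const keyOf = (k: PluginConfigKey) => `${k.pluginId}::${k.companyId}`;

function upsertConfig(k: PluginConfigKey, config: Record<string, unknown>): void {
  // Mirrors the unique index on (plugin_id, company_id): one row per pair.
  store.set(keyOf(k), config);
}

function getConfig(k: PluginConfigKey): Record<string, unknown> | null {
  // Reads always require a companyId; there is no global fallback row.
  return store.get(keyOf(k)) ?? null;
}
```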
### 2. Propagate company context through the worker runtime
- Extend the SDK so `ctx.config.get()` and `ctx.secrets.resolve()` can receive or derive `companyId`.
- Introduce worker request context storage for handlers that already run with company scope:
- `getData`
- `performAction`
- scoped API routes
- tool executions
- environment driver calls
- Fail closed when plugin code tries to read company-scoped config or secrets outside an active company context.
### 3. Rebind secrets by `(companyId, pluginId, configPath)`
- On config save, validate every referenced secret belongs to the authorized company.
- Store bindings only for that company.
- Resolve secrets only by the current company-scoped binding, never by bare plugin ID plus UUID.
- Treat stale bindings as invalid and remove them on config replacement.
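A minimal sketch of the resolution rule, assuming an in-memory binding list (the real store is `company_secret_bindings`; shapes here are illustrative):

```typescript
// Sketch: resolve a secret ref only through a (companyId, pluginId, configPath)
// binding, failing closed when no active company context exists.
type Binding = {
  companyId: string;
  pluginId: string;
  configPath: string;
  secretId: string;
};

function resolveSecretRef(
  bindings: Binding[],
  ctx: { companyId?: string; pluginId: string },
  configPath: string,
): string {
  if (!ctx.companyId) {
    // Fail closed: no active company context, no resolution.
    throw new Error("secret resolution requires an active company context");
  }
  const match = bindings.find(
    (b) =>
      b.companyId === ctx.companyId &&
      b.pluginId === ctx.pluginId &&
      b.configPath === configPath,
  );
  if (!match) throw new Error("no binding for this company-scoped config path");
  return match.secretId; // the caller then decrypts by id under the same scope
}
```

Note the deliberate absence of any lookup by bare plugin ID plus secret UUID; the binding row is the only path to a secret.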
### 4. Prevent cross-company config disclosure
- When returning config to the UI, only materialize the selected company's secret refs.
- Never expose another company's secret UUIDs through the global plugin config surface.
## Required regression coverage
- Company A board user cannot save plugin config that references a Company B secret.
- Company A plugin execution cannot resolve a Company B secret even if the same plugin is configured for Company B.
- Company-scoped config reads only return the selected company's secret bindings.
- Config replacement removes stale bindings for the same `(companyId, pluginId)` target.
- Runtime calls without company context fail closed.
## Migration notes
- Existing `plugin_config` rows need a migration strategy before re-enable.
- Safest default: do not auto-assume a company for historical secret refs.
- Prefer one of:
- explicit admin migration per company, or
- import existing rows as non-secret config only and require re-entry of secret refs.
## Release posture
- Keep plugin secret refs disabled until all steps above land.
- Do not restore the feature behind a soft warning; the insecure path must remain unavailable by default.


@@ -1,135 +0,0 @@
# LLM Wiki Paperclip Asset And Work-Product Security Gate
Status: accepted Phase 5 policy
Date: 2026-05-06
Owner: Security engineering
Scope: Paperclip-derived ingestion into the LLM Wiki before any asset or work-product content indexing ships
## Decision
Phase 5 remains **fail-closed** for Paperclip assets and work products.
- Paperclip-derived **text extraction is allowed only** for issue titles/descriptions, issue comments, and issue documents.
- Paperclip **assets/attachments** and **issue work products** are **metadata-only** in Phase 5.
- **Linked summaries** and **content extraction** for assets/work products are **not approved** in Phase 5.
- No implementation may fetch `/api/assets/:id/content`, dereference a work-product `url`, scrape preview pages, or embed binary/blob content into source bundles or source snapshots.
This keeps the secure path easier than the insecure one and avoids broadening the wiki into a second content-distribution channel.
## Allowed Source Kinds
These source kinds may contribute body text to Paperclip-derived source bundles:
| Source kind | Allowed body fields | Reason |
| --- | --- | --- |
| Issue | `title`, `description`, identifier/status metadata | First-party Paperclip text under company ACL |
| Comment | `body` | First-party Paperclip text under company ACL |
| Document | `body`, `title`, `key`, revision metadata | First-party Paperclip text under company ACL |
## Assets And Work Products
### Assets / attachments
Allowed in Phase 5:
- metadata-only references built from allowlisted structured fields already stored in Paperclip
- recommended fields: `issueId`, `issueCommentId`, `attachmentId`, `assetId`, `originalFilename`, `contentType`, `byteSize`, `sha256`, `createdAt`, `createdByAgentId`, `createdByUserId`
Disallowed in Phase 5:
- fetching asset bytes from `/api/assets/:id/content`
- parsing any blob body, including `text/plain`, `text/markdown`, `application/json`, images, SVG, PDFs, archives, or office formats
- storing `contentPath` in wiki source bundles or source snapshots
- model summarization of attachment bodies
### Work products
Allowed in Phase 5:
- metadata-only references built from allowlisted structured fields already stored in Paperclip
- recommended fields: `issueId`, `workProductId`, `type`, `provider`, `title`, `status`, `reviewState`, `healthStatus`, `externalId`, `isPrimary`, `createdAt`, `updatedAt`
- optional boolean/derived metadata such as `hasUrl: true`
Disallowed in Phase 5:
- fetching or crawling the work-product `url`
- scraping preview pages, artifacts, pull requests, branches, commits, or custom provider targets through the wiki ingestion path
- storing raw `url` values in wiki source bundles or source snapshots
- model-authored linked summaries derived from off-record content
## MIME Allowlists And Size Caps
No MIME allowlist is approved for asset content extraction in Phase 5 because **no asset body extraction is approved at all**.
- Every asset MIME type is treated as opaque for Paperclip-derived indexing.
- Existing upload limits remain storage concerns, not ingestion approvals.
- Work-product destinations are also opaque regardless of MIME type or size.
Any future issue that wants blob parsing must define:
- a positive MIME allowlist
- per-type parser strategy
- per-source size caps
- sandbox/isolation requirements
- prompt-injection handling
- regression tests for refusal paths
## Redaction Rules
Metadata-only means **structured facts only**, not capability-bearing links.
- Do not persist `contentPath` for assets.
- Do not persist raw work-product `url` values.
- Do not persist query strings, fragments, signed URL tokens, or userinfo.
- Prefer stable identifiers (`assetId`, `workProductId`, `externalId`) over links.
This addresses Sensitive Information Disclosure, Unsafe Consumption of APIs, and Insecure Output Handling risks.
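A reference builder that enforces these redaction rules could look like the following sketch. The function and its shape are illustrative; only the field names come from the allowlists above:

```typescript
// Sketch: build a metadata-only work-product reference, dropping the
// capability-bearing url field and keeping only a derived boolean.
function toMetadataOnlyRef(workProduct: {
  issueId: string;
  workProductId: string;
  type: string;
  url?: string; // never persisted into wiki source material
  externalId?: string;
}) {
  return {
    metadata_only: true as const, // explicit marker per the provenance rules
    issueId: workProduct.issueId,
    workProductId: workProduct.workProductId,
    type: workProduct.type,
    externalId: workProduct.externalId ?? null, // stable identifier, not a link
    hasUrl: Boolean(workProduct.url), // derived metadata is allowed; the raw url is not
  };
}
```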
## Provenance Rules
Every metadata-only reference must preserve enough provenance to explain where it came from without reading the underlying content:
- `companyId`
- `issueId`
- attachment/work-product id
- producer identity when available
- timestamps
- an explicit `metadata_only` marker in any future reference/snapshot schema
## Review-Required Behavior
Human review is **not** required for plain metadata-only references that stay inside the allowlisted fields above.
Human review **is required**, with a separate security sign-off issue, before enabling any of the following:
- asset body extraction
- work-product URL fetching
- linked summaries generated from asset/work-product content
- storing raw blob links or raw remote URLs in wiki source material
- non-default-space routing for Paperclip-derived asset/work-product references
## Security Rationale
This gate exists because the current host surfaces have different trust properties:
- issue/comment/document text is first-party Paperclip content already exposed through company-scoped issue/document APIs
- asset content is a blob download surface (`/api/assets/:id/content`) and can carry prompt-injection or parser-risk payloads
- work products can point at arbitrary destinations through `url`, which reintroduces SSRF, token leakage, and prompt-injection risk if dereferenced automatically
Relevant threat classes:
- OWASP LLM Top 10: Prompt Injection, Sensitive Information Disclosure, Insecure Output Handling, Excessive Agency
- OWASP API Top 10: SSRF, Unsafe Consumption of APIs, Broken Object Property Level Authorization
- Saltzer & Schroeder: Least Privilege, Fail Securely, Complete Mediation, Secure Defaults
## Follow-Up Implementation Scope
A follow-up implementation issue is justified only for **metadata-only references**.
That implementation must:
- keep assets/work products out of source-bundle body text
- never fetch blob bytes or remote URLs
- redact capability-bearing link fields
- mark references as `metadata_only`
- ship tests proving source bundles/snapshots never contain `contentPath` or raw work-product `url` fields


@@ -1,136 +0,0 @@
# Local Plugin Development
This is the short happy-path guide for developing a Paperclip plugin from a folder on your machine. You will scaffold a plugin, run it in watch mode, install it into a running Paperclip instance from an absolute local path, and edit code with the plugin worker reloading after each rebuild.
For the full alpha surface — manifest fields, capabilities, managed agents/projects/routines, UI slots, scoped API routes — see [`PLUGIN_AUTHORING_GUIDE.md`](./PLUGIN_AUTHORING_GUIDE.md).
## Prerequisites
- Node.js 22+ and `pnpm`.
- A local Paperclip checkout you can run from source. Local plugin installs read source from disk, so the running server must be able to see the path you give it.
## The five steps
```bash
# 1. Start Paperclip locally
pnpm paperclipai run
# 2. Scaffold a plugin outside the Paperclip repo
paperclipai plugin init @acme/hello-plugin --output ~/dev/paperclip-plugins
# 3. Install dependencies and start the watch build
cd ~/dev/paperclip-plugins/hello-plugin
pnpm install
pnpm dev
# 4. In another terminal, install the plugin from its absolute path
paperclipai plugin install ~/dev/paperclip-plugins/hello-plugin
# 5. Confirm it loaded
paperclipai plugin list
paperclipai plugin inspect acme.hello-plugin
```
That's the loop. The rest of this page explains what each step does and what to expect when you edit code.
### 1. Start Paperclip
```bash
pnpm paperclipai run
```
Paperclip listens on `http://127.0.0.1:3100` by default. The CLI talks to that server, so leave it running.
### 2. Scaffold the plugin
```bash
paperclipai plugin init @acme/hello-plugin --output ~/dev/paperclip-plugins
```
This creates `~/dev/paperclip-plugins/hello-plugin/` with `src/manifest.ts`, `src/worker.ts`, `src/ui/index.tsx`, an esbuild watch config, a Vitest config, and a snapshot of `@paperclipai/plugin-sdk` from your local Paperclip checkout. You can run the package and tests without publishing anything to npm.
Useful flags:
- `--template <default|connector|workspace|environment>` — starter shape.
- `--category <connector|workspace|automation|ui|environment>` — manifest category.
- `--display-name`, `--description`, `--author` — manifest metadata.
- `--sdk-path <absolute-path>` — point at a specific `packages/plugins/sdk` checkout if you have more than one.
When `plugin init` finishes, it prints the next four commands literally. You can copy them.
### 3. Install dependencies and run the watch build
```bash
cd ~/dev/paperclip-plugins/hello-plugin
pnpm install
pnpm dev
```
`pnpm dev` runs `esbuild --watch` against the plugin source and emits `dist/manifest.js`, `dist/worker.js`, and `dist/ui/`. Leave it running. Every time you save, esbuild rebuilds the affected output file.
If your plugin has UI and you want a browser-side dev server with hot module replacement during local UI iteration, run `pnpm dev:ui` in a second terminal. It serves `dist/ui/` on `http://127.0.0.1:4177`. This is optional; Paperclip can load the built UI directly from `dist/ui/` without it.
### 4. Install from the absolute path
```bash
paperclipai plugin install ~/dev/paperclip-plugins/hello-plugin
```
The CLI auto-detects local paths (anything that looks absolute, starts with `./`, `../`, or `~`, or resolves to an existing folder relative to the current directory) and sends `{ isLocalPath: true }` to `POST /api/plugins/install` with the resolved absolute path. If you want to be explicit, pass `--local`.
You will see a confirmation like:
```
Installing plugin from local path: /Users/you/dev/paperclip-plugins/hello-plugin
✓ Installed acme.hello-plugin v0.1.0 (ready)
Local plugin installs run trusted local code from your machine.
Keep `pnpm dev` running in /Users/you/dev/paperclip-plugins/hello-plugin;
Paperclip watches rebuilt dist output and reloads the plugin worker.
```
Relative paths are resolved against the current working directory, so `paperclipai plugin install .` from inside the plugin folder works too.
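The detection heuristic described above can be sketched as follows. This is an approximation of the documented behavior, not the CLI's actual implementation:

```typescript
// Sketch: decide whether an install argument is a local path rather than
// an npm package name, per the documented heuristic.
import { existsSync } from "node:fs";
import { isAbsolute, resolve } from "node:path";

function looksLikeLocalPath(arg: string): boolean {
  if (isAbsolute(arg)) return true;
  if (arg.startsWith("./") || arg.startsWith("../") || arg.startsWith("~")) return true;
  // Fall back: does the argument resolve to an existing folder
  // relative to the current working directory?
  return existsSync(resolve(process.cwd(), arg));
}
```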
### 5. Inspect
```bash
paperclipai plugin list
paperclipai plugin inspect acme.hello-plugin
```
`list` shows plugin key, status, version, and short error. `inspect` prints the same record with the full last error if there is one. Both accept `--json` if you want to script against them.
## Reload semantics, honestly
Paperclip watches the on-disk plugin package after a local install. The watcher targets the runtime entrypoints declared in the package's `paperclipPlugin` field (`dist/manifest.js`, `dist/worker.js`, `dist/ui/`).
What that means in practice:
- **Worker code:** save a `.ts` file → esbuild rewrites `dist/worker.js` → Paperclip debounces ~500ms and restarts the plugin worker. The next worker call uses the new code. There is no in-process hot module replacement for worker code; it is a worker restart.
- **Manifest:** save `src/manifest.ts` → `dist/manifest.js` rewrites → the worker restarts and the host re-reads the manifest.
- **Plugin UI:** save a `.tsx` file → esbuild rewrites `dist/ui/` → Paperclip reloads the UI bundle on its next mount. To get HMR during UI iteration, run `pnpm dev:ui` and point at the dev server with `devUiUrl` in your manifest while developing.
- **Without `pnpm dev`:** the watcher only fires on `dist/*` changes. If you stop the watch build, source edits do not reach Paperclip. Restart `pnpm dev` (or run `pnpm build` once) before expecting changes.
- **`node_modules`, `.git`, `.paperclip-sdk`, and other dotfolders are ignored.** Adding a dependency requires the new code to actually be imported and rebuilt before the worker sees it.
The server never compiles plugin source for you. The package's own build scripts own that step.
## Local path plugins vs npm packages
Both go through the same install endpoint, but they mean different things:
- **Local path plugins are trusted local code.** Paperclip executes worker code from disk under the same trust boundary as the rest of the running instance. This is meant for developing or operating a plugin against a checkout you control. There is no signature check, no sandboxing of worker code, and no provenance metadata beyond the path. Do not install local-path plugins you did not write.
- **npm packages are the deployable artifact.** `paperclipai plugin install @acme/plugin-foo` (optionally `--version 1.2.3`) installs from your configured npm registry, version-pins, and produces an install record that other operators can reproduce. Ship plugins this way.
When you are done iterating locally, publish the package and reinstall the npm-package form so the install reflects what you will ship.
## Common things to do next
- **Restart cleanly:** `paperclipai plugin disable <key>` pauses the plugin without removing it. `paperclipai plugin enable <key>` brings it back. `paperclipai plugin uninstall <key>` removes the install record; add `--force` to also purge plugin state and settings.
- **Browse examples:** `paperclipai plugin examples` lists the bundled example plugins that ship with the repo, each with a ready-to-run `paperclipai plugin install <path>` line.
- **Go deeper:** [`PLUGIN_AUTHORING_GUIDE.md`](./PLUGIN_AUTHORING_GUIDE.md) covers worker capabilities, managed agents/projects/routines, plugin database namespaces, scoped API routes, and the shared UI components in `@paperclipai/plugin-sdk/ui`. [`PLUGIN_SPEC.md`](./PLUGIN_SPEC.md) is the longer-form specification, including future ideas that are not yet implemented.
## Troubleshooting
- **`Plugin install returned no plugin record` or `error` status.** Run `paperclipai plugin inspect <key>` for the last error. The most common causes are (1) the plugin has not built yet — run `pnpm dev` or `pnpm build` first, (2) the `paperclipPlugin` entries in `package.json` point at files that do not exist on disk, or (3) the manifest failed validation. The Paperclip server log has the full validation error.
- **Edits do not seem to reload.** Confirm `pnpm dev` is still running and writing to `dist/`. If you renamed entry files, update the `paperclipPlugin.manifest` / `paperclipPlugin.worker` / `paperclipPlugin.ui` fields in `package.json` so the watcher targets them.
- **Worker restarts but UI is stale.** Hard-reload the page. If you want HMR, run `pnpm dev:ui` and set `devUiUrl` in your manifest to `http://127.0.0.1:4177` during development.
- **Path arguments fail on Windows.** Quote paths that contain spaces, and prefer absolute paths over `~`-prefixed paths in non-bash shells.


@@ -4,31 +4,34 @@ This guide describes the current, implemented way to create a Paperclip plugin i
It is intentionally narrower than [PLUGIN_SPEC.md](./PLUGIN_SPEC.md). The spec includes future ideas; this guide only covers the alpha surface that exists now.
> **New to plugins?** Start with the short [Local Plugin Development guide](./LOCAL_PLUGIN_DEVELOPMENT.md) — it walks the CLI happy path (`plugin init` → `pnpm dev` → `plugin install <path>`) end to end. Come back here for the full manifest surface, worker capabilities, and UI components.
## Current reality
- Treat plugin workers and plugin UI as trusted code.
- Plugin UI runs as same-origin JavaScript inside the main Paperclip app.
- Worker-side host APIs are capability-gated.
- Plugin UI is not sandboxed by manifest capabilities.
- Plugin database migrations are restricted to a host-derived plugin namespace.
- Plugin-owned JSON API routes must be declared in the manifest and are mounted
only under `/api/plugins/:pluginId/api/*`.
- The host provides a small shared React component kit through
`@paperclipai/plugin-sdk/ui`; use it for common Paperclip controls before
building custom versions.
- There is no host-provided shared React component kit for plugins yet.
- `ctx.assets` is not supported in the current runtime.
## Scaffold a plugin
Use the CLI scaffold command:
Use the scaffold package:
```bash
paperclipai plugin init @yourscope/plugin-name --output /absolute/path/to/plugin-repos
pnpm --filter @paperclipai/create-paperclip-plugin build
node packages/plugins/create-paperclip-plugin/dist/index.js @yourscope/plugin-name --output ./packages/plugins/examples
```
That creates `<output>/plugin-name/` with:
For a plugin that lives outside the Paperclip repo:
```bash
pnpm --filter @paperclipai/create-paperclip-plugin build
node packages/plugins/create-paperclip-plugin/dist/index.js @yourscope/plugin-name \
--output /absolute/path/to/plugin-repos \
--sdk-path /absolute/path/to/paperclip/packages/plugins/sdk
```
That creates a package with:
- `src/manifest.ts`
- `src/worker.ts`
@@ -39,13 +42,11 @@ That creates `<output>/plugin-name/` with:
Inside this monorepo, the scaffold uses `workspace:*` for `@paperclipai/plugin-sdk`.
Outside this monorepo, the scaffold snapshots `@paperclipai/plugin-sdk` from the local Paperclip checkout into a `.paperclip-sdk/` tarball so you can build and test a plugin without publishing anything to npm first. Pass `--sdk-path /absolute/path/to/paperclip/packages/plugins/sdk` if you have more than one Paperclip checkout.
Outside this monorepo, the scaffold snapshots `@paperclipai/plugin-sdk` from the local Paperclip checkout into a `.paperclip-sdk/` tarball so you can build and test a plugin without publishing anything to npm first.
## Local development workflow
## Recommended local workflow
See the short [Local Plugin Development guide](./LOCAL_PLUGIN_DEVELOPMENT.md) for the full happy path (`pnpm dev` → `paperclipai plugin install <absolute-path>` → `paperclipai plugin list`) and reload semantics.
Minimum verification from the generated plugin folder:
From the generated plugin folder:
```bash
pnpm install
@@ -54,6 +55,16 @@ pnpm test
pnpm build
```
For local development, install it into Paperclip from an absolute local path through the plugin manager or API. The server supports local filesystem installs and watches local-path plugins for file changes so worker restarts happen automatically after rebuilds.
Example:
```bash
curl -X POST http://127.0.0.1:3100/api/plugins/install \
-H "Content-Type: application/json" \
-d '{"packageName":"/absolute/path/to/your-plugin","isLocalPath":true}'
```
## Supported alpha surface
Worker:
@@ -66,14 +77,11 @@ Worker:
- secrets
- activity
- state
- database namespace via `ctx.db`
- scoped JSON API routes declared with `apiRoutes`
- entities
- projects, project workspaces, and plugin-managed projects
- projects and project workspaces
- companies
- issues, comments, namespaced `plugin:<pluginKey>` origins, blocker relations, checkout assertions, assignment wakeups, and orchestration summaries
- agents, plugin-managed agents, and agent sessions
- plugin-managed routines
- issues and comments
- agents and agent sessions
- goals
- data/actions
- streams
@@ -81,210 +89,6 @@ Worker:
- metrics
- logger
### Plugin database declarations
First-party or otherwise trusted orchestration plugins can declare:
```ts
database: {
  migrationsDir: "migrations",
  coreReadTables: ["issues"],
}
```
Required capabilities are `database.namespace.migrate` and
`database.namespace.read`; add `database.namespace.write` for runtime mutations.
The host derives `ctx.db.namespace`, runs SQL files in filename order before the
worker starts, records checksums in `plugin_migrations`, and rejects changed
already-applied migrations.
Migration SQL may create or alter objects only inside `ctx.db.namespace`. It may
reference whitelisted `public` core tables for foreign keys or read-only views,
but it may not mutate, alter, drop, or truncate public tables, and it may not
create extensions, triggers, or untrusted languages, or run multi-statement SQL
at runtime. Runtime `ctx.db.query()` is restricted to `SELECT`; runtime
`ctx.db.execute()` is restricted to namespace-local `INSERT`, `UPDATE`, and
`DELETE`.
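These restrictions can be sketched in a worker helper. The `PluginDb` type, the `nsTable` helper, and the namespace-local `briefs` table below are illustrative assumptions, not SDK exports:

```ts
// Assumed shape of the runtime database context described above.
type PluginDb = {
  namespace: string;
  query: (sql: string, params?: unknown[]) => Promise<unknown[]>;
  execute: (sql: string, params?: unknown[]) => Promise<void>;
};

// Qualify a table with the host-derived namespace so writes stay
// namespace-local, as the host requires.
function nsTable(db: Pick<PluginDb, "namespace">, table: string): string {
  return `${db.namespace}.${table}`;
}

async function recordBrief(db: PluginDb, companyId: string, title: string) {
  // Allowed at runtime: SELECT, including whitelisted core read tables.
  const issues = await db.query(
    "SELECT id FROM issues WHERE company_id = $1",
    [companyId],
  );
  // Allowed at runtime: namespace-local INSERT (never against public tables).
  await db.execute(
    `INSERT INTO ${nsTable(db, "briefs")} (company_id, title) VALUES ($1, $2)`,
    [companyId, title],
  );
  return issues.length;
}
```

Anything outside these shapes (DDL at runtime, writes to `public`, multi-statement strings) is rejected by the host before it reaches the database.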
### Scoped plugin API routes
Plugins can expose JSON-only routes under their own namespace:
```ts
apiRoutes: [
  {
    routeKey: "initialize",
    method: "POST",
    path: "/issues/:issueId/smoke",
    auth: "board-or-agent",
    capability: "api.routes.register",
    checkoutPolicy: "required-for-agent-in-progress",
    companyResolution: { from: "issue", param: "issueId" },
  },
]
```
The host resolves the plugin, checks that it is ready, enforces
`api.routes.register`, matches the declared method/path, resolves company access,
and applies checkout policy before dispatching to the worker's `onApiRequest`
handler. The worker receives sanitized headers, route params, query, parsed JSON
body, actor context, and company id. Do not use plugin routes to claim core
paths; they always remain under `/api/plugins/:pluginId/api/*`.
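A minimal worker-side handler for the route above might look like the following. The `ApiRequestInput` and `ApiResponse` shapes are assumptions that follow the contract described here, not copied SDK types:

```ts
// Assumed input/response shapes for onApiRequest, following the contract
// described above; field names are illustrative.
type ApiRequestInput = {
  routeKey: string;
  params: Record<string, string>;
  query: Record<string, string>;
  body: unknown;
  companyId: string;
};
type ApiResponse = {
  status?: number;
  headers?: Record<string, string>;
  body?: unknown;
};

// Dispatch on the declared routeKey; the host has already enforced auth,
// capability, company access, and checkout policy before this runs.
async function onApiRequest(input: ApiRequestInput): Promise<ApiResponse> {
  if (input.routeKey !== "initialize") {
    return { status: 404, body: { error: "unknown_route" } };
  }
  return {
    status: 200,
    body: { issueId: input.params.issueId, companyId: input.companyId },
  };
}
```

Because only sanitized headers and parsed JSON reach the worker, the handler never needs to parse raw requests or inspect cookies.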
## Managed Paperclip resources
Plugins that provide durable Paperclip business objects should declare them in
the manifest and let the host create or relink the actual records per company.
Do this for plugin-owned agents, plugin-owned projects, and recurring automation.
Do not hide long-lived work behind private plugin state when it should be visible
to the board, scoped to a company, audited, budgeted, and assigned like normal
Paperclip work.
Use these surfaces:
- Managed agents: declare top-level `agents[]` and require
  `agents.managed`. Use this when the plugin provides a named worker the board
  should see in the org and be able to budget, pause, invoke, and inspect.
  Managed agents are normal Paperclip agents with plugin ownership metadata,
  not background plugin workers.
- Managed projects: declare top-level `projects[]` and require
`projects.managed`. Use this when the plugin needs a stable company-scoped
project for its issues, routines, or workspace-oriented UI. Keep plugin work
in a project instead of scattering generated issues across unrelated projects.
- Managed routines: declare top-level `routines[]` and require
`routines.managed`. Use this for scheduled, webhook, or manually triggered
jobs that should create visible Paperclip issues. Prefer managed routines over
plugin `jobs[]` for recurring business work; plugin jobs are for plugin
runtime maintenance that does not need a board-visible task trail.
Managed resources are resolved by stable plugin keys, not hardcoded database
ids. In a worker action or data handler, call `ctx.agents.managed.reconcile()`,
`ctx.projects.managed.reconcile()`, and `ctx.routines.managed.reconcile()` for
the current `companyId`. `reconcile()` creates the missing resource, relinks a
recoverable binding, or returns the existing resource. `reset()` reapplies the
manifest defaults when the operator wants to restore the plugin's suggested
configuration.
Declare dependencies between managed resources with refs. A routine can point
at a managed agent through `assigneeRef` and at a managed project through
`projectRef`. Reconcile the referenced agent and project before reconciling the
routine; if a ref is still missing, the routine resolution reports
`missing_refs` instead of guessing.
```ts
import type { PaperclipPluginManifestV1 } from "@paperclipai/plugin-sdk";
const manifest: PaperclipPluginManifestV1 = {
  id: "example.research-plugin",
  apiVersion: 1,
  version: "0.1.0",
  displayName: "Research Plugin",
  description: "Creates a managed research agent and scheduled research routine.",
  author: "Example",
  categories: ["automation"],
  capabilities: [
    "agents.managed",
    "projects.managed",
    "routines.managed",
    "instance.settings.register",
  ],
  entrypoints: {
    worker: "./dist/worker.js",
    ui: "./dist/ui",
  },
  agents: [
    {
      agentKey: "researcher",
      displayName: "Researcher",
      role: "research",
      title: "Research Agent",
      capabilities: "Runs recurring research briefs for this company.",
      adapterPreference: ["codex_local", "claude_local", "process"],
      instructions: {
        content: "Follow the Paperclip heartbeat and produce concise research briefs.",
      },
    },
  ],
  projects: [
    {
      projectKey: "research",
      displayName: "Research",
      description: "Recurring research work created by the Research Plugin.",
      status: "in_progress",
    },
  ],
  routines: [
    {
      routineKey: "weekly-brief",
      title: "Weekly research brief",
      description: "Create a short research brief for the board.",
      assigneeRef: { resourceKind: "agent", resourceKey: "researcher" },
      projectRef: { resourceKind: "project", resourceKey: "research" },
      priority: "medium",
      triggers: [
        {
          kind: "schedule",
          label: "Monday morning",
          cronExpression: "0 9 * * 1",
          timezone: "America/Chicago",
          enabled: false,
        },
      ],
    },
  ],
  ui: {
    slots: [
      {
        type: "settingsPage",
        id: "settings",
        displayName: "Research",
        exportName: "SettingsPage",
      },
    ],
  },
};
export default manifest;
```
In the worker, expose a small setup action or settings-page action that
reconciles the resources for the selected company:
```ts
import { definePlugin } from "@paperclipai/plugin-sdk";
export default definePlugin({
  setup(ctx) {
    ctx.actions.register("setup-company", async (params) => {
      const companyId = String(params.companyId ?? "");
      if (!companyId) throw new Error("companyId is required");
      const project = await ctx.projects.managed.reconcile("research", companyId);
      const agent = await ctx.agents.managed.reconcile("researcher", companyId);
      const routine = await ctx.routines.managed.reconcile("weekly-brief", companyId);
      return { project, agent, routine };
    });
  },
});
```
Authoring rules:
- Keep keys stable once published. Renaming `agentKey`, `projectKey`, or
`routineKey` creates a new managed resource from the host's point of view.
- Use managed agents for plugin-provided labor. Use `ctx.agents.invoke()` or
`ctx.agents.sessions` only after you have a real agent id, either selected by
the operator or resolved from `ctx.agents.managed`.
- Use managed routines for recurring or externally triggered work that should
produce tasks. Schedule, webhook, and API triggers are visible routine
triggers, and each run has the normal Paperclip issue/audit trail.
- Use managed projects to keep plugin-generated work organized and to give
project-scoped plugin UI a stable home. For filesystem access inside a
project, still resolve project workspaces through `ctx.projects`.
- Keep defaults conservative. Managed declarations are suggestions owned by the
plugin, but the resulting resources are normal Paperclip records that the
operator can inspect, pause, and adjust.
UI:
- `usePluginData`
@@ -310,187 +114,6 @@ Mount surfaces currently wired in the host include:
- `commentAnnotation`
- `commentContextMenuItem`
## Shared host components
Use shared components from `@paperclipai/plugin-sdk/ui` when the plugin needs a
Paperclip-native control. The host owns the implementation, so plugins inherit
the board's current styling, ordering, recent selections, and dark-mode behavior
without importing `ui/src` internals.
Currently exposed components include:
- `MarkdownBlock` and `MarkdownEditor` for rendered and editable markdown.
- `FileTree` for serializable file and directory trees.
- `IssuesList` for a native company-scoped issue table.
- `AssigneePicker` for the same agent/user selector used in the new issue pane.
Use the controlled `value` format `agent:<id>`, `user:<id>`, or `""`.
- `ProjectPicker` for the same project selector used in the new issue pane.
Use the controlled project id value, or `""` for no project.
- `ManagedRoutinesList` for plugin-owned routine settings pages.
```tsx
import { useState } from "react";
import { AssigneePicker, ProjectPicker } from "@paperclipai/plugin-sdk/ui";

export function PluginAssignmentControls({ companyId }: { companyId: string }) {
  const [assignee, setAssignee] = useState("");
  const [projectId, setProjectId] = useState("");
  return (
    <>
      <AssigneePicker
        companyId={companyId}
        value={assignee}
        onChange={(value) => setAssignee(value)}
      />
      <ProjectPicker
        companyId={companyId}
        value={projectId}
        onChange={setProjectId}
      />
    </>
  );
}
```
## File and path UI
Plugin UI often needs to render a file tree, accept a folder path, or browse a
project workspace. There are three different surfaces for that, and they map to
different trust and data-flow boundaries. Pick the surface that matches the
data the plugin actually has.
### When to use the shared `FileTree`
Use `FileTree` from `@paperclipai/plugin-sdk/ui` whenever the plugin only needs
to render a serializable file/directory list and react to selection or
expand/collapse. The host owns the implementation, so plugin UI inherits the
board's icons, indent, focus ring, and dark-mode styling without importing host
internals.
```tsx
import { useState } from "react";
import {
  FileTree,
  type FileTreeNode,
} from "@paperclipai/plugin-sdk/ui";

const nodes: FileTreeNode[] = [
  { name: "AGENTS.md", path: "AGENTS.md", kind: "file", children: [] },
  {
    name: "wiki",
    path: "wiki",
    kind: "dir",
    children: [
      { name: "index.md", path: "wiki/index.md", kind: "file", children: [] },
    ],
  },
];

export function WikiTree() {
  const [expanded, setExpanded] = useState<Set<string>>(() => new Set(["wiki"]));
  const [selected, setSelected] = useState<string | null>(null);
  return (
    <FileTree
      nodes={nodes}
      selectedFile={selected}
      expandedPaths={expanded}
      onSelectFile={(path) => setSelected(path)}
      onToggleDir={(path) =>
        setExpanded((current) => {
          const next = new Set(current);
          if (next.has(path)) {
            next.delete(path);
          } else {
            next.add(path);
          }
          return next;
        })
      }
    />
  );
}
```
Good fits:
- LLM Wiki page navigation in `packages/plugins/plugin-llm-wiki` builds a
`FileTreeNode[]` from worker query results and renders it through `FileTree`.
- The example `plugin-file-browser-example` lazily fetches a directory's
children through a `loadFileList` action when `onToggleDir` fires, then
merges the children into the local tree state — letting the shared component
handle rendering and selection.
Boundary rules:
- Keep the prop surface serializable (`nodes`, `expandedPaths`, `checkedPaths`,
`fileBadges`, `fileTones`). Do not pass arbitrary render functions across the
plugin/host boundary in v1; the supported escape hatches are
`fileBadges` (status pill keyed by path) and `fileTones` (row tone keyed by
path).
- Do not import the host's `FileTree.tsx` or any `ui/src/*` module. The SDK
declaration is the only supported import path for plugin UI.
- The shared `FileTree` is for rendering and selection. Plugin-specific editors,
ingest flows, query forms, and lint runs stay inside the plugin and do not
belong as `FileTree` props.
### When to declare `localFolders`
When the plugin needs operator-configured filesystem roots — typically for
trusted local plugins like wiki tooling — declare `localFolders[]` on the
manifest and add the `local.folders` capability. The host renders a settings
surface for the operator to set the absolute path, validates the path
server-side (containment, symlinks, required files/directories), and exposes
`ctx.localFolders.readText()` and `ctx.localFolders.writeTextAtomic()` in the
worker.
```ts
export const manifest = {
  capabilities: ["local.folders"],
  localFolders: [
    {
      folderKey: "content-root",
      displayName: "Content root",
      access: "readWrite",
      requiredDirectories: ["sources", "pages"],
      requiredFiles: ["schema.md"],
    },
  ],
};
```
Use this when:
- The data lives outside any project workspace.
- Reads and writes need company-scoped configuration.
- The operator picks the path once in plugin settings and the worker resolves
files relative to that root.
Do not use `localFolders` to grant the UI direct browser-side access to the
filesystem — there is no such capability. The browser still goes through the
worker via `getData` / `performAction`, and the worker only exposes paths it
chose to expose.
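A worker-side sketch against the `content-root` folder declared above; the exact `readText`/`writeTextAtomic` signatures shown are assumptions for illustration, not copied SDK types:

```ts
// Assumed shape of the worker-side folder API described above.
type LocalFolders = {
  readText: (folderKey: string, relativePath: string) => Promise<string>;
  writeTextAtomic: (
    folderKey: string,
    relativePath: string,
    content: string,
  ) => Promise<void>;
};

// Append an entry to pages/index.md inside the configured content root.
// The worker only ever addresses files relative to the folder key; the
// operator-configured absolute root never leaves the host.
async function appendIndexEntry(
  localFolders: LocalFolders,
  title: string,
): Promise<string> {
  const current = await localFolders.readText("content-root", "pages/index.md");
  const next = `${current.trimEnd()}\n- ${title}\n`;
  await localFolders.writeTextAtomic("content-root", "pages/index.md", next);
  return next;
}
```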
### When to keep worker-mediated project workspace browsing
When the data lives inside an existing project workspace, keep the browsing
flow worker-mediated:
- The worker uses `ctx.projects.listWorkspaces()` to resolve the workspace
path, then reads its filesystem with normal Node APIs.
- The plugin UI calls a `getData` handler for the root listing and an action
for lazy children, then renders them through `FileTree`.
- The worker is the only side that touches the disk. The browser receives a
serializable tree and never sees raw absolute paths it can replay.
The example `plugin-file-browser-example` is the reference for this pattern:
the worker registers `fileList` (data) and `loadFileList` (action) over the
same handler, and the UI uses the action for on-toggle directory loading so the
shared `FileTree` stays the rendering surface.
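The on-toggle merge step from that pattern can be sketched as a pure helper; the node shape mirrors the serializable `FileTreeNode` used by the shared component, and the helper name is hypothetical:

```ts
type FileTreeNode = {
  name: string;
  path: string;
  kind: "file" | "dir";
  children: FileTreeNode[];
};

// Return a new tree with `children` attached at the directory whose path
// matches; recurse only into ancestors of the target path.
function mergeChildren(
  nodes: FileTreeNode[],
  dirPath: string,
  children: FileTreeNode[],
): FileTreeNode[] {
  return nodes.map((node) => {
    if (node.path === dirPath) return { ...node, children };
    if (node.kind === "dir" && dirPath.startsWith(node.path + "/")) {
      return { ...node, children: mergeChildren(node.children, dirPath, children) };
    }
    return node;
  });
}
```

The UI calls the `loadFileList` action when `onToggleDir` fires, merges the returned children with a helper like this, and hands the updated `nodes` back to `FileTree`.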
### Mixing surfaces
A single plugin can use more than one of these. The LLM Wiki uses
`localFolders` for its content root, then renders the resulting page list
through `FileTree`. The file browser example uses `ctx.projects.listWorkspaces`
to pick a workspace and renders its on-disk tree through `FileTree` with lazy
loading. Pick the boundary per data source, not per plugin.
## Company routes
Plugins may declare a `page` slot with `routePath` to own a company route like:

View File

@@ -27,10 +27,7 @@ Current limitations to keep in mind:
- Published npm packages are the intended install artifact for deployed plugins.
- The repo example plugins under `packages/plugins/examples/` are development conveniences. They work from a source checkout and should not be assumed to exist in a generic published build unless they are explicitly shipped with that build.
- Dynamic plugin install is not yet cloud-ready for horizontally scaled or ephemeral deployments. There is no shared artifact store, install coordination, or cross-node distribution layer yet.
- The current runtime ships a small host-provided plugin UI component kit through `@paperclipai/plugin-sdk/ui`, but does not support plugin asset uploads/reads yet. Treat plugin asset APIs as future-scope ideas, not current implementation promises.
- Scoped plugin API routes are JSON-only and must be declared in `apiRoutes`.
They mount under `/api/plugins/:pluginId/api/*`; plugins cannot shadow core
API routes.
- The current runtime does not yet ship a real host-provided plugin UI component kit, and it does not support plugin asset uploads/reads. Treat those as future-scope ideas in this spec, not current implementation promises.
In practice, that means the current implementation is a good fit for local development and self-hosted persistent deployments, but not yet for multi-instance cloud plugin distribution.
@@ -627,46 +624,7 @@ Required SDK clients:
Plugins that need filesystem, git, terminal, or process operations handle those directly using standard Node APIs or libraries. The host provides project workspace metadata through `ctx.projects` so plugins can resolve workspace paths, but the host does not proxy low-level OS operations.
## 14.1 Issue Orchestration APIs
Trusted orchestration plugins can create and update Paperclip issues through `ctx.issues` instead of importing server internals. The public issue contract includes parent/project/goal links, board or agent assignees, blocker IDs, labels, billing code, request depth, execution workspace inheritance, and plugin origin metadata.
Origin rules:
- Built-in core issues keep built-in origins such as `manual` and `routine_execution`.
- Plugin-managed issues use `plugin:<pluginKey>` or a sub-kind such as `plugin:<pluginKey>:feature`.
- The host derives the default plugin origin from the installed plugin key and rejects attempts to set `plugin:<otherPluginKey>` origins.
- `originId` is plugin-defined and should be stable for idempotent generated work.
Relation and read helpers:
- `ctx.issues.relations.get(issueId, companyId)`
- `ctx.issues.relations.setBlockedBy(issueId, blockerIssueIds, companyId)`
- `ctx.issues.relations.addBlockers(issueId, blockerIssueIds, companyId)`
- `ctx.issues.relations.removeBlockers(issueId, blockerIssueIds, companyId)`
- `ctx.issues.getSubtree(issueId, companyId, options)`
- `ctx.issues.summaries.getOrchestration({ issueId, companyId, includeSubtree, billingCode })`
Governance helpers:
- `ctx.issues.assertCheckoutOwner({ issueId, companyId, actorAgentId, actorRunId })` lets plugin actions preserve agent-run checkout ownership.
- `ctx.issues.requestWakeup(issueId, companyId, options)` requests assignment wakeups through host heartbeat semantics, including terminal-status, blocker, assignee, and budget hard-stop checks.
- `ctx.issues.requestWakeups(issueIds, companyId, options)` applies the same host-owned wakeup semantics to a batch and may use an idempotency key prefix for stable coordinator retries.
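A sketch of a batched wakeup call; the options shape shown is an assumption consistent with the description above, and the prefix format is hypothetical:

```ts
// Assumed option shape for batch wakeups, following the description above.
type WakeupOptions = { idempotencyKeyPrefix?: string };
type IssuesApi = {
  requestWakeups: (
    issueIds: string[],
    companyId: string,
    options?: WakeupOptions,
  ) => Promise<void>;
};

// Derive a stable prefix so coordinator retries dedupe to the same wakeups.
function wakeupKeyPrefix(pluginKey: string, batchId: string): string {
  return `plugin:${pluginKey}:wakeup:${batchId}`;
}

async function wakeBatch(
  issues: IssuesApi,
  issueIds: string[],
  companyId: string,
) {
  await issues.requestWakeups(issueIds, companyId, {
    idempotencyKeyPrefix: wakeupKeyPrefix("example-plugin", companyId),
  });
}
```

Because the host owns terminal-status, blocker, assignee, and budget checks, the plugin can retry the same batch safely; the stable key prefix keeps retries idempotent.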
Plugin-originated issue, relation, document, comment, and wakeup mutations must write activity entries with `actorType: "plugin"` and details fields for `sourcePluginId`, `sourcePluginKey`, `initiatingActorType`, `initiatingActorId`, and `initiatingRunId` when a user or agent run initiated the plugin work.
Scoped API routes:
- `apiRoutes[]` declares `routeKey`, `method`, plugin-local `path`, `auth`,
`capability`, optional checkout policy, and company resolution.
- The host enforces auth, company access, `api.routes.register`, route matching,
and checkout policy before worker dispatch.
- The worker implements `onApiRequest(input)` and returns a JSON response shape
`{ status?, headers?, body? }`.
- Only safe request headers are forwarded; auth/cookie headers are never passed
to the worker.
## 14.2 Example SDK Shape
## 14.1 Example SDK Shape
```ts
/** Top-level helper for defining a plugin with type checking */
@@ -738,24 +696,16 @@ The host enforces capabilities in the SDK layer and refuses calls outside the gr
- `project.workspaces.read`
- `issues.read`
- `issue.comments.read`
- `issue.documents.read`
- `issue.relations.read`
- `issue.subtree.read`
- `agents.read`
- `goals.read`
- `activity.read`
- `costs.read`
- `issues.orchestration.read`
### Data Write
- `issues.create`
- `issues.update`
- `issue.comments.create`
- `issue.documents.write`
- `issue.relations.write`
- `issues.checkout`
- `issues.wakeup`
- `assets.write`
- `assets.read`
- `activity.log.write`
@@ -822,13 +772,6 @@ Minimum event set:
- `issue.created`
- `issue.updated`
- `issue.comment.created`
- `issue.document.created`
- `issue.document.updated`
- `issue.document.deleted`
- `issue.relations.updated`
- `issue.checked_out`
- `issue.released`
- `issue.assignment_wakeup_requested`
- `agent.created`
- `agent.updated`
- `agent.status_changed`
@@ -838,8 +781,6 @@ Minimum event set:
- `agent.run.cancelled`
- `approval.created`
- `approval.decided`
- `budget.incident.opened`
- `budget.incident.resolved`
- `cost_event.created`
- `activity.logged`
@@ -976,23 +917,13 @@ export function DashboardWidget({ context }: PluginWidgetProps) {
The SDK includes a `ui` subpath export that plugin frontends import. This subpath provides:
- **Bridge hooks**: `usePluginData(key, params)`, `usePluginAction(key)`, `useHostContext()`, `useHostNavigation()`
- **Bridge hooks**: `usePluginData(key, params)`, `usePluginAction(key)`, `useHostContext()`
- **Design tokens**: colors, spacing, typography, shadows matching the host theme
- **Shared components**: `MetricCard`, `StatusBadge`, `DataTable`, `LogView`, `ActionBar`, `Spinner`, etc.
- **Type definitions**: `PluginPageProps`, `PluginWidgetProps`, `PluginDetailTabProps`
Plugins are encouraged but not required to use the shared components. A plugin may render entirely custom UI as long as it communicates through the bridge.
`useHostNavigation()` is the supported way for plugin UI to navigate to
Paperclip-internal pages. It exposes `resolveHref(to)`, `navigate(to,
options?)`, and `linkProps(to, options?)`. Plugin links should prefer
`linkProps()` so anchors keep real `href` values for copy-link, modifier-click,
middle-click, and open-in-new-tab behavior while plain left-clicks route through
the host SPA router. The host resolves company-scoped paths against the active
company prefix without double-prefixing already-prefixed paths. Plugin UI should
not use raw same-origin `href`s or `window.location.assign()` for internal
Paperclip navigation because those can force a full document reload.
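The no-double-prefixing rule can be illustrated with a small stand-in resolver, assuming a `/c/<company>` prefix shape (the host's real resolver is internal and may differ):

```ts
// Resolve an internal path against the active company prefix without
// double-prefixing paths that already carry it (assumed prefix shape).
function resolveHref(companyPrefix: string, to: string): string {
  if (to === companyPrefix || to.startsWith(`${companyPrefix}/`)) {
    return to;
  }
  return `${companyPrefix}${to}`;
}
```

`linkProps(to)` pairs a resolved `href` like this with an `onClick` that routes plain left-clicks through the SPA router, which is why anchors built from it keep modifier-click and open-in-new-tab behavior.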
### 19.0.2 Bundle Isolation
Plugin UI bundles are loaded as standard ES modules, not iframed. This gives plugins full rendering performance and access to the host's design tokens.
@@ -1072,11 +1003,6 @@ The host SDK ships shared components that plugins can import to quickly build UI
| `LogView` | Scrollable log output with timestamps | Webhook deliveries, job output, process logs |
| `JsonTree` | Collapsible JSON tree for debugging | Raw API responses, plugin state inspection |
| `Spinner` | Loading indicator | Data fetch states |
| `FileTree` | Host-styled file/directory tree | Wiki pages, workspace files, import previews |
| `IssuesList` | Host issue list | Plugin pages that need a native issue view |
| `AssigneePicker` | Host assignee picker for agents and board users | Creating issues, assigning routines, filtering work |
| `ProjectPicker` | Host project picker | Creating issues, scoping dashboards, filtering work |
| `ManagedRoutinesList` | Host routine list | Plugin settings pages that manage routines |
Plugins may also use entirely custom components. The shared components exist to reduce boilerplate and keep visual consistency, not to limit what plugins can render.
@@ -1312,8 +1238,6 @@ Plugin-originated mutations should write:
- `actor_type = plugin`
- `actor_id = <plugin-id>`
- details include `sourcePluginId` and `sourcePluginKey`
- details include `initiatingActorType`, `initiatingActorId`, and `initiatingRunId` when a user or agent run triggered the plugin work
## 21.5 Plugin Migrations

Binary file not shown.


Binary file not shown.


Binary file not shown.


View File

@@ -114,14 +114,14 @@ If the connection drops, the UI reconnects automatically.
1. Enable timer wakeups (for example every 300s)
2. Keep assignment wakeups on
3. Use a focused prompt template that tells agents to act in the same heartbeat, leave durable progress, and mark blocked work with an owner/action
3. Use a focused prompt template
4. Watch run logs and adjust prompt/config over time
## 7.2 Event-driven loop (less constant polling)
1. Disable timer or set a long interval
2. Keep wake-on-assignment enabled
3. Use child issues, comments, and on-demand wakeups for handoffs instead of loops that poll agents, sessions, or processes
3. Use on-demand wakeups for manual nudges
## 7.3 Safety-first loop

View File

@@ -1,299 +0,0 @@
# Invite Flow State Map
Status: Current implementation map
Date: 2026-04-13
This document maps the current invite creation and acceptance states implemented in:
- `ui/src/pages/CompanyInvites.tsx`
- `ui/src/pages/CompanySettings.tsx`
- `ui/src/pages/InviteLanding.tsx`
- `server/src/routes/access.ts`
- `server/src/lib/join-request-dedupe.ts`
## State Legend
- Invite state: `active`, `revoked`, `accepted`, `expired`
- Join request status: `pending_approval`, `approved`, `rejected`
- Claim secret state for agent joins: `available`, `consumed`, `expired`
- Invite type: `company_join` or `bootstrap_ceo`
- Join type: `human`, `agent`, or `both`
## Entity Lifecycle
```mermaid
flowchart TD
Board[Board user on invite screen]
HumanInvite[Create human company invite]
OpenClawInvite[Generate OpenClaw invite prompt]
Active[Invite state: active]
Revoked[Invite state: revoked]
Expired[Invite state: expired]
Accepted[Invite state: accepted]
BootstrapDone[Bootstrap accepted<br/>no join request]
HumanReuse{Matching human join request<br/>already exists for same user/email?}
HumanPending[Join request<br/>pending_approval]
HumanApproved[Join request<br/>approved]
HumanRejected[Join request<br/>rejected]
AgentPending[Agent join request<br/>pending_approval<br/>+ optional claim secret]
AgentApproved[Agent join request<br/>approved]
AgentRejected[Agent join request<br/>rejected]
ClaimAvailable[Claim secret available]
ClaimConsumed[Claim secret consumed]
ClaimExpired[Claim secret expired]
OpenClawReplay[Special replay path:<br/>accepted invite can be POSTed again<br/>for openclaw_gateway only]
Board --> HumanInvite --> Active
Board --> OpenClawInvite --> Active
Active -->|revoke| Revoked
Active -->|expiresAt passes| Expired
Active -->|bootstrap_ceo accept| BootstrapDone
BootstrapDone --> Accepted
Active -->|human accept| HumanReuse
HumanReuse -->|reuse existing pending request| HumanPending
HumanReuse -->|reuse existing approved request| HumanApproved
HumanReuse -->|no reusable request<br/>create new request| HumanPending
HumanPending -->|board approves| HumanApproved
HumanPending -->|board rejects| HumanRejected
HumanPending --> Accepted
HumanApproved --> Accepted
Active -->|agent accept| AgentPending
AgentPending --> Accepted
AgentPending -->|board approves| AgentApproved
AgentPending -->|board rejects| AgentRejected
AgentApproved -->|createdAgentId + claimSecretHash| ClaimAvailable
ClaimAvailable -->|POST claim-api-key succeeds| ClaimConsumed
ClaimAvailable -->|secret expires| ClaimExpired
Accepted --> OpenClawReplay
OpenClawReplay --> AgentPending
OpenClawReplay --> AgentApproved
```
## Board-Side Screen States
```mermaid
stateDiagram-v2
[*] --> CompanySelection
CompanySelection --> NoCompany: no company selected
CompanySelection --> LoadingHistory: selectedCompanyId present
LoadingHistory --> HistoryError: listInvites failed
LoadingHistory --> Ready: listInvites succeeded
state Ready {
[*] --> EmptyHistory
EmptyHistory --> PopulatedHistory: invites exist
PopulatedHistory --> LoadingMore: View more
LoadingMore --> PopulatedHistory: next page loaded
PopulatedHistory --> RevokePending: Revoke active invite
RevokePending --> PopulatedHistory: revoke succeeded
RevokePending --> PopulatedHistory: revoke failed
EmptyHistory --> CreatePending: Create invite
PopulatedHistory --> CreatePending: Create invite
CreatePending --> LatestInviteVisible: create succeeded
CreatePending --> Ready: create failed
LatestInviteVisible --> CopiedToast: clipboard copy succeeded
LatestInviteVisible --> Ready: navigate away or refresh
}
CompanySelection --> OpenClawPromptReady: Company settings prompt generator
OpenClawPromptReady --> OpenClawPromptPending: Generate OpenClaw Invite Prompt
OpenClawPromptPending --> OpenClawSnippetVisible: prompt generated
OpenClawPromptPending --> OpenClawPromptReady: generation failed
```
## Invite Landing Screen States
```mermaid
stateDiagram-v2
[*] --> TokenGate
TokenGate --> InvalidToken: token missing
TokenGate --> Loading: token present
Loading --> InviteUnavailable: invite fetch failed or invite not returned
Loading --> CheckingAccess: signed-in session and invite.companyId
Loading --> InviteResolved: invite loaded without membership check
Loading --> AcceptedInviteSummary: invite already consumed<br/>but linked join request still exists
CheckingAccess --> RedirectToBoard: current user already belongs to company
CheckingAccess --> InviteResolved: membership check finished and no join-request summary state is active
CheckingAccess --> AcceptedInviteSummary: membership check finished and invite has joinRequestStatus
state InviteResolved {
[*] --> Branch
Branch --> AgentForm: company_join + allowedJoinTypes=agent
Branch --> InlineAuth: authenticated mode + no session + join is not agent-only
Branch --> AcceptReady: bootstrap invite or human-ready session/local_trusted
InlineAuth --> InlineAuth: toggle sign-up/sign-in
InlineAuth --> InlineAuth: auth validation or auth error message
InlineAuth --> RedirectToBoard: auth succeeded and company membership already exists
InlineAuth --> AcceptPending: auth succeeded and invite still needs acceptance
AgentForm --> AcceptPending: submit request
AgentForm --> AgentForm: validation or accept error
AcceptReady --> AcceptPending: Accept invite
AcceptReady --> AcceptReady: accept error
}
AcceptPending --> BootstrapComplete: bootstrapAccepted=true
AcceptPending --> RedirectToBoard: join status=approved
AcceptPending --> PendingApprovalResult: join status=pending_approval
AcceptPending --> RejectedResult: join status=rejected
state AcceptedInviteSummary {
[*] --> SummaryBranch
SummaryBranch --> PendingApprovalReload: joinRequestStatus=pending_approval
SummaryBranch --> OpeningCompany: joinRequestStatus=approved<br/>and human invite user is now a member
SummaryBranch --> RejectedReload: joinRequestStatus=rejected
SummaryBranch --> ConsumedReload: approved agent invite or other consumed state
}
PendingApprovalResult --> PendingApprovalReload: reload after submit
RejectedResult --> RejectedReload: reload after board rejects
RedirectToBoard --> OpeningCompany: brief pre-navigation render when approved membership is detected
OpeningCompany --> RedirectToBoard: navigate to board
```
## Sequence Diagrams
### Human Invite Creation And First Acceptance
```mermaid
sequenceDiagram
autonumber
actor Board as Board user
participant Settings as Company Invites UI
participant API as Access routes
participant Invites as invites table
actor Invitee as Invite recipient
participant Landing as Invite landing UI
participant Auth as Auth session
participant Join as join_requests table
Board->>Settings: Choose role and click Create invite
Settings->>API: POST /api/companies/:companyId/invites
API->>Invites: Insert active invite
API-->>Settings: inviteUrl + metadata
Invitee->>Landing: Open invite URL
Landing->>API: GET /api/invites/:token
API->>Invites: Load active invite
API-->>Landing: Invite summary
alt Authenticated mode and no session
Landing->>Auth: Sign up or sign in
Auth-->>Landing: Session established
end
Landing->>API: POST /api/invites/:token/accept (requestType=human)
API->>Join: Look for reusable human join request
alt Reusable pending or approved request exists
API->>Invites: Mark invite accepted
API-->>Landing: Existing join request status
else No reusable request exists
API->>Invites: Mark invite accepted
API->>Join: Insert pending_approval join request
API-->>Landing: New pending_approval join request
end
```
### Human Approval And Reload Path
```mermaid
sequenceDiagram
autonumber
actor Invitee as Invite recipient
participant Landing as Invite landing UI
participant API as Access routes
participant Join as join_requests table
actor Approver as Company admin
participant Queue as Access queue UI
participant Membership as company_memberships + grants
Invitee->>Landing: Reload consumed invite URL
Landing->>API: GET /api/invites/:token
API->>Join: Load join request by inviteId
API-->>Landing: joinRequestStatus + joinRequestType
alt joinRequestStatus = pending_approval
Landing-->>Invitee: Show waiting-for-approval panel
Approver->>Queue: Review request in Company Settings -> Access
Queue->>API: POST /companies/:companyId/join-requests/:requestId/approve
API->>Membership: Ensure membership and grants
API->>Join: Mark join request approved
Invitee->>Landing: Refresh after approval
Landing->>API: GET /api/invites/:token
API->>Join: Reload approved join request
API-->>Landing: approved status
Landing-->>Invitee: Opening company and redirect
else joinRequestStatus = rejected
Landing-->>Invitee: Show rejected error panel
else joinRequestStatus = approved but membership missing
Landing-->>Invitee: Fall through to consumed/unavailable state
end
```
### Agent Invite Approval, Claim, And Replay
```mermaid
sequenceDiagram
autonumber
actor Board as Board user
participant Settings as Company Settings UI
participant API as Access routes
participant Invites as invites table
actor Gateway as OpenClaw gateway agent
participant Join as join_requests table
actor Approver as Company admin
participant Agents as agents table
participant Keys as agent_api_keys table
Board->>Settings: Generate OpenClaw invite prompt
Settings->>API: POST /api/companies/:companyId/openclaw-invite-prompt
API->>Invites: Insert active agent invite
API-->>Settings: Prompt text + invite token
Gateway->>API: POST /api/invites/:token/accept (agent, openclaw_gateway)
API->>Invites: Mark invite accepted
API->>Join: Insert pending_approval join request + claimSecretHash
API-->>Gateway: requestId + claimSecret + claimApiKeyPath
Approver->>API: POST /companies/:companyId/join-requests/:requestId/approve
API->>Agents: Create agent + membership + grants
API->>Join: Mark request approved and store createdAgentId
Gateway->>API: POST /api/join-requests/:requestId/claim-api-key (claimSecret)
API->>Keys: Create initial API key
API->>Join: Mark claim secret consumed
API-->>Gateway: Plaintext Paperclip API key
opt Replay accepted invite for updated gateway defaults
Gateway->>API: POST /api/invites/:token/accept again
API->>Join: Reuse existing approved or pending request
API->>Agents: Update approved agent adapter config when applicable
API-->>Gateway: Updated join request payload
end
```
## Notes
- `GET /api/invites/:token` treats `revoked` and `expired` invites as unavailable. Accepted invites remain resolvable when they already have a linked join request, and the summary now includes `joinRequestStatus` plus `joinRequestType`.
- Human acceptance consumes the invite immediately and then either creates a new join request or reuses an existing `pending_approval` or `approved` human join request for the same user/email.
- The landing page has two layers of post-accept UI:
- immediate mutation-result UI from `POST /api/invites/:token/accept`
- reload-time summary UI from `GET /api/invites/:token` once the invite has already been consumed
- Reload behavior for accepted company invites is now status-sensitive:
- `pending_approval` re-renders the waiting-for-approval panel
- `rejected` renders the "This join request was not approved." error panel
- `approved` only becomes a success path for human invites after membership is visible to the current session; otherwise the page falls through to the generic consumed/unavailable state
- `GET /api/invites/:token/logo` still rejects accepted invites, so accepted-invite reload states may fall back to the generated company icon even though the summary payload still carries `companyLogoUrl`.
- The only accepted-invite replay path in the current implementation is `POST /api/invites/:token/accept` for `agent` requests with `adapterType=openclaw_gateway`, and only when the existing join request is still `pending_approval` or already `approved`.
- `bootstrap_ceo` invites are one-time and do not create join requests.
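As an illustrative sketch of the reload-time summary described above — the envelope and any field names beyond `joinRequestStatus`, `joinRequestType`, and `companyLogoUrl` are assumptions, not taken from the route's actual schema — a consumed company invite might resolve as:

```json
{
  "invite": {
    "type": "company_join",
    "companyId": "<company-uuid>",
    "companyLogoUrl": "<logo-url>",
    "joinRequestStatus": "pending_approval",
    "joinRequestType": "human"
  }
}
```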

View File

@@ -1,30 +0,0 @@
# AWS ECS Fargate deployment environment
# Copy to .env.aws and fill in values before deploying
#
# Secrets (DATABASE_URL, BETTER_AUTH_SECRET, ANTHROPIC_API_KEY, OPENAI_API_KEY,
# GITHUB_TOKEN) are injected via AWS Secrets Manager — do NOT set them here.
# Deployment mode
PAPERCLIP_DEPLOYMENT_MODE=authenticated
PAPERCLIP_DEPLOYMENT_EXPOSURE=public
PAPERCLIP_PUBLIC_URL=https://paperclip.example.com
# Server
HOST=0.0.0.0
PORT=3100
NODE_ENV=production
SERVE_UI=true
# Paperclip paths
PAPERCLIP_HOME=/paperclip
PAPERCLIP_INSTANCE_ID=default
PAPERCLIP_CONFIG=/paperclip/instances/default/config.json
# Auto-apply migrations on startup
PAPERCLIP_MIGRATION_AUTO_APPLY=true
# Enable heartbeat scheduler for remote agents
HEARTBEAT_SCHEDULER_ENABLED=true
# Post-deploy hardening (uncomment after first user signs up)
# PAPERCLIP_AUTH_DISABLE_SIGN_UP=true

View File

@@ -1,90 +0,0 @@
{
"family": "paperclip-server",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "2048",
"memory": "4096",
"executionRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/paperclip-ecs-execution",
"taskRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/paperclip-ecs-task",
"containerDefinitions": [
{
"name": "paperclip-server",
"image": "<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/paperclip-server:latest",
"essential": true,
"portMappings": [
{
"containerPort": 3100,
"protocol": "tcp"
}
],
"environment": [
{ "name": "NODE_ENV", "value": "production" },
{ "name": "HOST", "value": "0.0.0.0" },
{ "name": "PORT", "value": "3100" },
{ "name": "SERVE_UI", "value": "true" },
{ "name": "PAPERCLIP_HOME", "value": "/paperclip" },
{ "name": "PAPERCLIP_INSTANCE_ID", "value": "default" },
{ "name": "PAPERCLIP_CONFIG", "value": "/paperclip/instances/default/config.json" },
{ "name": "PAPERCLIP_DEPLOYMENT_MODE", "value": "authenticated" },
{ "name": "PAPERCLIP_DEPLOYMENT_EXPOSURE", "value": "public" },
{ "name": "PAPERCLIP_PUBLIC_URL", "value": "https://<DOMAIN>" },
{ "name": "PAPERCLIP_MIGRATION_AUTO_APPLY", "value": "true" },
{ "name": "HEARTBEAT_SCHEDULER_ENABLED", "value": "true" }
],
"secrets": [
{
"name": "DATABASE_URL",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/database-url"
},
{
"name": "BETTER_AUTH_SECRET",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/better-auth-secret"
},
{
"name": "ANTHROPIC_API_KEY",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/anthropic-api-key"
},
{
"name": "OPENAI_API_KEY",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/openai-api-key"
},
{
"name": "GITHUB_TOKEN",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/github-token"
}
],
"mountPoints": [
{
"sourceVolume": "paperclip-data",
"containerPath": "/paperclip",
"readOnly": false
}
],
"healthCheck": {
"command": ["CMD-SHELL", "curl -f http://localhost:3100/api/health || exit 1"],
"interval": 30,
"timeout": 5,
"retries": 3,
"startPeriod": 60
},
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/paperclip",
"awslogs-region": "<REGION>",
"awslogs-stream-prefix": "server"
}
}
}
],
"volumes": [
{
"name": "paperclip-data",
"efsVolumeConfiguration": {
"fileSystemId": "<EFS_ID>",
"rootDirectory": "/",
"transitEncryption": "ENABLED"
}
}
]
}

View File

@@ -203,43 +203,6 @@ export const sessionCodec: AdapterSessionCodec = {
};
```
## Capability Flags
Adapters can declare what "local" capabilities they support by setting optional fields on the `ServerAdapterModule`. The server and UI use these flags to decide which features to enable for agents using the adapter (instructions bundle editor, skills sync, JWT auth, etc.).
| Flag | Type | Default | What it controls |
|------|------|---------|------------------|
| `supportsLocalAgentJwt` | `boolean` | `false` | Whether heartbeat generates a local JWT for the agent |
| `supportsInstructionsBundle` | `boolean` | `false` | Managed instructions bundle (AGENTS.md) — server-side resolution + UI editor |
| `instructionsPathKey` | `string` | `"instructionsFilePath"` | The `adapterConfig` key that holds the instructions file path |
| `requiresMaterializedRuntimeSkills` | `boolean` | `false` | Whether runtime skill entries must be written to disk before execution |
These flags are exposed via `GET /api/adapters` in a `capabilities` object, along with a derived `supportsSkills` flag (true when `listSkills` or `syncSkills` is defined).
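For illustration, an adapter entry in the `GET /api/adapters` response might carry a `capabilities` object like the following — the surrounding envelope is an assumption, but the flag names match the table above, and `supportsSkills` is the derived flag:

```json
{
  "type": "my_k8s_adapter",
  "capabilities": {
    "supportsLocalAgentJwt": true,
    "supportsInstructionsBundle": true,
    "instructionsPathKey": "instructionsFilePath",
    "requiresMaterializedRuntimeSkills": true,
    "supportsSkills": true
  }
}
```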
### Example
```ts
export function createServerAdapter(): ServerAdapterModule {
return {
type: "my_k8s_adapter",
execute: myExecute,
testEnvironment: myTestEnvironment,
listSkills: myListSkills,
syncSkills: mySyncSkills,
// Capability flags
supportsLocalAgentJwt: true,
supportsInstructionsBundle: true,
instructionsPathKey: "instructionsFilePath",
requiresMaterializedRuntimeSkills: true,
};
}
```
With these flags set, the Paperclip UI will automatically show the instructions bundle editor, skills management tab, and working directory field for agents using this adapter — no Paperclip source changes required.
If capability flags are not set, the server falls back to legacy hardcoded lists for built-in adapter types. External adapters that omit the flags will default to `false` for all capabilities.
## Skills Injection
Make Paperclip skills discoverable to your agent runtime without writing to the agent's working directory:

View File

@@ -124,14 +124,14 @@ If the connection drops, the UI reconnects automatically.
1. Enable timer wakeups (for example every 300s)
2. Keep assignment wakeups on
3. Use a focused prompt template that tells agents to act in the same heartbeat, leave durable progress, and mark blocked work with an owner/action
3. Use a focused prompt template
4. Watch run logs and adjust prompt/config over time
## 7.2 Event-driven loop (less constant polling)
1. Disable timer or set a long interval
2. Keep wake-on-assignment enabled
3. Use child issues, comments, and on-demand wakeups for handoffs instead of loops that poll agents, sessions, or processes
3. Use on-demand wakeups for manual nudges
## 7.3 Safety-first loop

View File

@@ -13,8 +13,6 @@ GET /api/companies/{companyId}/agents
Returns all agents in the company.
This route does not accept query filters. Unsupported query parameters return `400`.
## Get Agent
```

View File

@@ -1,9 +1,9 @@
---
title: Issues
summary: Issue CRUD, checkout/release, comments, documents, interactions, and attachments
summary: Issue CRUD, checkout/release, comments, documents, and attachments
---
Issues are the unit of work in Paperclip. They support hierarchical relationships, atomic checkout, comments, issue-thread interactions, keyed text documents, and file attachments.
Issues are the unit of work in Paperclip. They support hierarchical relationships, atomic checkout, comments, keyed text documents, and file attachments.
## List Issues
@@ -66,8 +66,6 @@ The optional `comment` field adds a comment in the same call.
Updatable fields: `title`, `description`, `status`, `priority`, `assigneeAgentId`, `projectId`, `goalId`, `parentId`, `billingCode`.
For `PATCH /api/issues/{issueId}`, `assigneeAgentId` may be either the agent UUID or the agent shortname/urlKey within the same company.
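For example, a reassignment by shortname (the shortname `release-bot` is hypothetical):

```
PATCH /api/issues/{issueId}
{
  "assigneeAgentId": "release-bot"
}
```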
## Checkout (Claim Task)
```
@@ -121,65 +119,6 @@ POST /api/issues/{issueId}/comments
@-mentions (`@AgentName`) in comments trigger heartbeats for the mentioned agent.
## Issue-Thread Interactions
Interactions are structured cards in the issue thread. Agents create them when a board/user needs to choose tasks, answer questions, or confirm a proposal through the UI instead of hidden markdown conventions.
### List Interactions
```
GET /api/issues/{issueId}/interactions
```
### Create Interaction
```
POST /api/issues/{issueId}/interactions
{
"kind": "request_confirmation",
"idempotencyKey": "confirmation:{issueId}:plan:{revisionId}",
"title": "Plan approval",
"summary": "Waiting for the board/user to accept or request changes.",
"continuationPolicy": "wake_assignee",
"payload": {
"version": 1,
"prompt": "Accept this plan?",
"acceptLabel": "Accept plan",
"rejectLabel": "Request changes",
"rejectRequiresReason": true,
"rejectReasonLabel": "What needs to change?",
"detailsMarkdown": "Review the latest plan document before accepting.",
"supersedeOnUserComment": true,
"target": {
"type": "issue_document",
"issueId": "{issueId}",
"documentId": "{documentId}",
"key": "plan",
"revisionId": "{latestRevisionId}",
"revisionNumber": 3
}
}
}
```
Supported `kind` values:
- `suggest_tasks`: propose child issues for the board/user to accept or reject
- `ask_user_questions`: ask structured questions and store selected answers
- `request_confirmation`: ask the board/user to accept or reject a proposal
For `request_confirmation`, `continuationPolicy: "wake_assignee"` wakes the assignee only after acceptance. Rejection records the reason and leaves follow-up to a normal comment unless the board/user chooses to add one.
### Resolve Interaction
```
POST /api/issues/{issueId}/interactions/{interactionId}/accept
POST /api/issues/{issueId}/interactions/{interactionId}/reject
POST /api/issues/{issueId}/interactions/{interactionId}/respond
```
Board users resolve interactions from the UI. Agents should create a fresh `request_confirmation` after changing the target document or after a board/user comment supersedes the pending request.
## Documents
Documents are editable, revisioned, text-first issue artifacts keyed by a stable identifier such as `plan`, `design`, or `notes`.

View File

@@ -75,28 +75,11 @@ Fields:
```
PATCH /api/routines/{routineId}
{
"status": "paused",
"baseRevisionId": "{latestRevisionId}"
"status": "paused"
}
```
All fields from create are updatable. `baseRevisionId` is optional for backward compatibility; when provided, stale values return `409 Conflict` with the current revision id. **Agents can only update routines assigned to themselves and cannot reassign a routine to another agent.**
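A minimal sketch of the stale-revision case — the error body shape here is a hypothetical illustration; the route only guarantees a `409` response carrying the current revision id:

```json
{
  "error": "Conflict",
  "currentRevisionId": "<latest-revision-uuid>"
}
```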
## List Revisions
```
GET /api/routines/{routineId}/revisions
```
Returns append-only routine definition revisions newest first. Snapshots include routine fields and safe trigger metadata only; webhook secret values and `secretId` are never returned.
## Restore Revision
```
POST /api/routines/{routineId}/revisions/{revisionId}/restore
```
Restores a historical routine definition by creating a new latest revision copied from the selected revision. Historical revision rows, routine run history, and activity history are preserved. If restoring a deleted webhook trigger requires recreating it, the response can include one-time replacement secret material for that trigger.
All fields from create are updatable. **Agents can only update routines assigned to themselves and cannot reassign a routine to another agent.**
## Add Trigger

View File

@@ -1,133 +0,0 @@
---
title: Secrets Remote Import
summary: AWS Secrets Manager metadata-only remote import API
---
Remote import lets the board link existing AWS Secrets Manager entries as
Paperclip `external_reference` secrets without copying plaintext into
Paperclip.
Both routes are board-only and company-scoped. The selected provider vault must
belong to the company, use `aws_secrets_manager`, and have a selectable status
(`ready` or `warning`). Disabled, coming-soon, or cross-company vaults are
rejected.
Remote import is an inventory and metadata workflow: preview calls AWS
`ListSecrets` only, and import stores a Paperclip external reference plus
fingerprint/version metadata. Neither route calls `GetSecretValue` or
`BatchGetSecretValue`, requests `SecretString`, requires KMS decrypt, logs raw
remote metadata, or copies secret plaintext into Paperclip.
## Preview Remote AWS Secrets
```
POST /api/companies/{companyId}/secrets/remote-import/preview
{
"providerConfigId": "<aws-vault-uuid>",
"query": "stripe",
"nextToken": "optional-provider-page-token",
"pageSize": 50
}
```
`query` is optional and is sent to AWS as an inventory filter. Treat it as
non-secret metadata because AWS may record list request parameters in
CloudTrail. `nextToken` is an opaque AWS cursor; pass it back unchanged.
`pageSize` is capped at 100.
Response:
```json
{
"providerConfigId": "<aws-vault-uuid>",
"provider": "aws_secrets_manager",
"nextToken": null,
"candidates": [
{
"externalRef": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/stripe",
"remoteName": "prod/stripe",
"name": "prod/stripe",
"key": "prod-stripe",
"providerVersionRef": null,
"providerMetadata": {
"lastChangedDate": "2026-05-06T00:00:00.000Z",
"hasDescription": true
},
"status": "ready",
"importable": true,
"conflicts": []
}
]
}
```
Candidate `status` values:
- `ready`: no existing exact external reference and no name/key collision.
- `duplicate`: an existing secret already has the exact provider `externalRef`.
- `conflict`: the suggested Paperclip `name` or `key` is already in use.
Conflict `type` values are `exact_reference`, `name`, `key`, and
`provider_guardrail`. AWS refs under Paperclip's own managed namespace are
blocked as external references so one company cannot import another company's
Paperclip-managed AWS secret through a broad runtime role.
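Paging through preview results simply replays the same request with the echoed cursor, for example:

```
POST /api/companies/{companyId}/secrets/remote-import/preview
{
  "providerConfigId": "<aws-vault-uuid>",
  "query": "stripe",
  "nextToken": "<nextToken-from-previous-response>",
  "pageSize": 50
}
```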
## Import Remote AWS Secret References
```
POST /api/companies/{companyId}/secrets/remote-import
{
"providerConfigId": "<aws-vault-uuid>",
"secrets": [
{
"externalRef": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/stripe",
"name": "Stripe production key",
"key": "stripe-production-key",
"description": "Stripe key used by production checkout",
"providerVersionRef": null,
"providerMetadata": {
"lastChangedDate": "2026-05-06T00:00:00.000Z",
"hasDescription": true
}
}
]
}
```
The import response is row-level. Ready rows become active
`external_reference` secrets with version metadata only. Exact-reference
duplicates and name/key conflicts are skipped without failing the whole request.
The `secrets` array accepts 1-100 rows, and the backend re-checks duplicates and
conflicts at submit time.
Each row may include an optional Paperclip `description` entered during review;
blank descriptions are stored as `null`. AWS provider descriptions are not
copied into this field.
```json
{
"providerConfigId": "<aws-vault-uuid>",
"provider": "aws_secrets_manager",
"importedCount": 1,
"skippedCount": 1,
"errorCount": 0,
"results": [
{
"externalRef": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/stripe",
"name": "Stripe production key",
"key": "stripe-production-key",
"status": "imported",
"reason": null,
"secretId": "<paperclip-secret-id>",
"conflicts": []
}
]
}
```
Activity logs record aggregate counts and provider/vault ids only, not remote
secret names, ARNs, tags, or values.
Imported references may still fail later, during bound runtime resolution, if
the Paperclip runtime role can list the AWS secret but lacks
`secretsmanager:GetSecretValue` or the required KMS decrypt permission for that
specific secret.
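A minimal runtime-role policy sketch that would satisfy resolution for one imported reference — the ARNs are placeholders, the KMS statement only applies when the secret uses a customer-managed key, and actual policy requirements depend on the deployment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue"],
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/stripe-*"
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Decrypt"],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/<key-id>"
    }
  ]
}
```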

View File

@@ -25,357 +25,16 @@ POST /api/companies/{companyId}/secrets
The value is encrypted at rest. Only the secret ID and metadata are returned.
To link a provider-owned secret without copying the value into Paperclip, create
an external-reference secret:
```json
{
"name": "prod-stripe-key",
"provider": "aws_secrets_manager",
"managedMode": "external_reference",
"externalRef": "arn:aws:secretsmanager:us-east-1:123456789012:secret:paperclip/prod/stripe",
"providerVersionRef": "version-id-or-label"
}
```
Paperclip stores the provider reference and a non-sensitive fingerprint only.
When the provider is configured, the value is resolved through the server
runtime path, which enforces binding context and records access events.
## Provider Health
## Update Secret
```
GET /api/companies/{companyId}/secret-providers/health
```
Returns provider setup diagnostics, warnings, and local backup guidance. Health
responses must not include secret values or provider credentials.
For `aws_secrets_manager`, an unready health response names the missing
non-secret provider environment variables, the AWS SDK default credential source
expected by the server runtime, and the custody rule that AWS bootstrap
credentials must not be stored in Paperclip `company_secrets`.
The equivalent CLI check is:
```sh
pnpm paperclipai secrets doctor --company-id {companyId}
```
## Provider Vaults
Provider vaults are named, company-scoped configurations that route secret
material to one of the supported provider backends. See the
[secrets deploy guide](/deploy/secrets#provider-vaults) for the operator model
and custody rules.
All routes below require board auth and company access. Mutating routes emit
`secret_provider_config.*` activity-log entries. No route in this surface
returns provider credential values; submitting credential-shaped fields in
`config` is rejected at validation time.
### List Vaults
```
GET /api/companies/{companyId}/secret-provider-configs
```
Returns every vault for the company (including disabled rows for audit), each
with id, provider, displayName, status, isDefault, non-sensitive `config`,
latest health snapshot (`healthStatus`, `healthCheckedAt`, `healthMessage`,
`healthDetails`), `disabledAt`, and audit columns.
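An illustrative single-row response built from the field list above (the exact JSON shape is an assumption):

```json
[
  {
    "id": "<uuid>",
    "provider": "aws_secrets_manager",
    "displayName": "Prod US-East",
    "status": "ready",
    "isDefault": true,
    "config": { "region": "us-east-1" },
    "healthStatus": "ready",
    "healthCheckedAt": "2026-05-06T14:00:00.000Z",
    "healthMessage": "Provider vault is ready to handle managed writes",
    "healthDetails": null,
    "disabledAt": null
  }
]
```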
### Create Vault
```
POST /api/companies/{companyId}/secret-provider-configs
{
"provider": "aws_secrets_manager",
"displayName": "Prod US-East",
"isDefault": true,
"config": {
"region": "us-east-1",
"namespace": "paperclip",
"secretNamePrefix": "paperclip",
"kmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/abcd-...",
"environmentTag": "production"
}
}
```
Per-provider `config` shapes:
- `local_encrypted`: optional `backupReminderAcknowledged: boolean`.
- `aws_secrets_manager`: required `region`; optional `namespace`,
`secretNamePrefix`, `kmsKeyId`, `ownerTag`, `environmentTag`.
- `gcp_secret_manager` (coming soon): optional `projectId`, `location`,
`namespace`, `secretNamePrefix`.
- `vault` (coming soon): optional origin-only HTTPS `address`, `namespace`,
`mountPath`, `secretPathPrefix`. `address` values with embedded credentials,
paths, query strings, or fragments are rejected.
`status` defaults to `ready` for `local_encrypted` and `aws_secrets_manager`,
and to `coming_soon` for `gcp_secret_manager` and `vault`. Coming-soon and
disabled vaults cannot be marked `isDefault`. Setting `isDefault: true` clears
the previous default for the same provider in the same transaction.
### Get Vault
```
GET /api/secret-provider-configs/{id}
```
### Update Vault
```
PATCH /api/secret-provider-configs/{id}
{
"displayName": "Prod US-East-2",
"config": {
"region": "us-east-2",
"kmsKeyId": "arn:aws:kms:us-east-2:123456789012:key/abcd-..."
}
}
```
`config` is replaced wholesale on update — pass the full provider config
payload, not a partial diff. Status transitions for `gcp_secret_manager` and
`vault` are constrained to `coming_soon` and `disabled` until their runtime
modules ship.
### Disable Vault
```
DELETE /api/secret-provider-configs/{id}
```
Soft-deletes the vault: status flips to `disabled`, `isDefault` clears, and
`disabledAt` is stamped. Disabled vaults remain in `GET` results for audit
purposes but are no longer offered in the secret create/rotate flow.
### Set Default
```
POST /api/secret-provider-configs/{id}/default
```
Marks the target vault as the default for its provider family and clears the
previous default. Returns 422 when the target is `coming_soon` or `disabled`.
### Run Health Check
```
POST /api/secret-provider-configs/{id}/health
```
Runs a provider-specific health probe and persists the result on the vault.
Response shape:
```json
{
"configId": "<uuid>",
"provider": "aws_secrets_manager",
"status": "ready" | "warning" | "error" | "coming_soon" | "disabled",
"message": "Provider vault is ready to handle managed writes",
"details": {
"code": "provider_ready",
"message": "...",
"guidance": ["..."]
},
"checkedAt": "2026-05-06T14:00:00.000Z"
}
```
Health responses never include provider credentials or secret values. For AWS
vaults, `details.guidance` may include missing non-secret env names and the
expected AWS SDK credential source; coming-soon vaults always return
`status: "coming_soon"` with `code: "runtime_locked"` and never call into
provider modules.
### Selecting A Vault When Creating Or Rotating Secrets
`POST /api/companies/{companyId}/secrets` and
`POST /api/secrets/{secretId}/rotate` both accept an optional
`providerConfigId` field that pins the secret to a specific vault. When
omitted (or null), the operation runs through the deployment-level provider
configuration — the same path existing installs already use. The board UI
preselects the company's default vault for the chosen provider and submits it
explicitly, so API callers should usually send an explicit `providerConfigId`
as well.
Coming-soon and disabled vaults are rejected with a 422; a vault that does not
match the secret's provider is rejected the same way.
```json
POST /api/companies/{companyId}/secrets
{
"name": "prod-stripe-key",
"provider": "aws_secrets_manager",
"providerConfigId": "<vault-uuid>",
"managedMode": "external_reference",
"externalRef": "arn:aws:secretsmanager:us-east-1:123456789012:secret:paperclip/prod/stripe"
}
```
### Response Redaction Rules
Every route in this surface enforces the same redaction contract:
- Secret values are never returned. The board UI never has a "reveal value"
affordance; resolution happens server-side at runtime under a binding.
- Provider credential values are never accepted, stored, returned, logged, or
echoed in error messages. Submitting credential-shaped fields fails
validation with a non-leaking error.
- Activity log entries record vault id, provider, displayName, status, and
isDefault transitions — never `config` payloads or health detail bodies.
## Remote Import From AWS Secrets Manager
Remote import links existing AWS Secrets Manager entries into Paperclip as
`external_reference` secrets. Import stores provider reference metadata only; it
does not copy the remote secret plaintext into Paperclip.
The routes are board-only and company-scoped. `providerConfigId` must point to
a same-company AWS provider vault with status `ready` or `warning`. Disabled,
coming-soon, non-AWS, and cross-company vaults are rejected. Imported secrets
resolve later through the selected vault, so runtime reads still need
`secretsmanager:GetSecretValue` and any required KMS decrypt permission on the
selected external secret.
### Preview Remote Import Candidates
```
POST /api/companies/{companyId}/secrets/remote-import/preview
{
"providerConfigId": "<aws-vault-uuid>",
"query": "stripe",
"nextToken": "opaque-provider-token",
"pageSize": 50
}
```
`query` is optional and is passed to AWS Secrets Manager inventory filtering.
Treat it as non-secret metadata because AWS may record list request parameters
in CloudTrail. `nextToken` is an opaque AWS cursor; callers must pass it back
unchanged and must not synthesize offsets. `pageSize` is optional, defaults to
50 in the UI, and is capped at 100.
Preview uses AWS `ListSecrets` only. It must not call `GetSecretValue` or
`BatchGetSecretValue`, must not request `SecretString`, and must not require KMS
decrypt. The response contains sanitized metadata for display and conflict
decisions:
```json
{
"providerConfigId": "<aws-vault-uuid>",
"provider": "aws_secrets_manager",
"nextToken": null,
"candidates": [
{
"externalRef": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/stripe",
"remoteName": "prod/stripe",
"name": "prod/stripe",
"key": "prod-stripe",
"providerVersionRef": null,
"providerMetadata": {
"createdDate": "2026-05-06T00:00:00.000Z",
"lastChangedDate": "2026-05-06T00:00:00.000Z",
"hasDescription": true,
"hasKmsKey": true,
"tagCount": 3
},
"status": "ready",
"importable": true,
"conflicts": []
}
]
}
```
Candidate statuses:
- `ready`: the row can be selected for import.
- `duplicate`: a Paperclip secret already links the same canonical provider
reference for the same provider vault.
- `conflict`: the row has a name/key collision or provider guardrail failure.
Conflict types are `exact_reference`, `name`, `key`, and
`provider_guardrail`. AWS refs under Paperclip's own managed namespace are
blocked as external references; use the Paperclip-managed secret flow for those
resources instead.
### Import Selected Remote References
```
POST /api/companies/{companyId}/secrets/remote-import
{
"providerConfigId": "<aws-vault-uuid>",
"secrets": [
{
"externalRef": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/stripe",
"name": "Stripe production key",
"key": "stripe-production-key",
"description": "Stripe key used by production checkout",
"providerVersionRef": null,
"providerMetadata": {
"createdDate": "2026-05-06T00:00:00.000Z"
}
}
]
}
```
The `secrets` array accepts 1-100 rows. Each row may override the suggested
Paperclip `name`, `key`, optional Paperclip `description`,
`providerVersionRef`, and sanitized `providerMetadata`. Blank descriptions are
stored as `null`; AWS provider descriptions are not copied into Paperclip
descriptions. The backend re-checks duplicate refs and name/key conflicts at
submit time; a stale preview does not bypass those checks.
The import response is row-level:
```json
{
"providerConfigId": "<aws-vault-uuid>",
"provider": "aws_secrets_manager",
"importedCount": 1,
"skippedCount": 1,
"errorCount": 0,
"results": [
{
"externalRef": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/stripe",
"name": "Stripe production key",
"key": "stripe-production-key",
"status": "imported",
"reason": null,
"secretId": "<paperclip-secret-id>",
"conflicts": []
}
]
}
```
Row statuses:
- `imported`: Paperclip created an active `external_reference` secret and one
metadata-only version row.
- `skipped`: the row had an exact-reference duplicate or name/key conflict.
- `error`: the provider rejected the reference or the row failed validation.
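When scripting bulk imports, the row-level results can be split by status with `jq` (assumed available); the response shape used here is the one shown above:

```sh
# Summarize an import response by row status; useful for logging which
# rows were imported vs skipped in automation.
response='{"importedCount":1,"skippedCount":1,"errorCount":0,"results":[{"externalRef":"arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/stripe","status":"imported","key":"stripe-production-key","reason":null}]}'
printf '%s' "$response" | jq -r '.results[] | "\(.status)\t\(.key)"'
```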
Activity logs for preview/import store aggregate counts, provider id, and vault
id only. They must not store remote secret names, ARNs, descriptions, tags,
plaintext values, provider credentials, or raw AWS error blobs.
## Rotate Secret
```
POST /api/secrets/{secretId}/rotate
PATCH /api/secrets/{secretId}
{
"value": "sk-ant-new-value..."
}
```
Creates a new version of the secret. Agents referencing `"version": "latest"`
automatically get the new value on next heartbeat. Pin to a specific version
when a bad `latest` rollout would affect many agents at once.
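A rotation call over HTTP can be sketched like this; the bearer-token auth header and the assumption that the rotate route accepts the same `value` body are both things to verify against your deployment:

```sh
# Sketch only: rotate a secret via the documented route.
# PAPERCLIP_API_URL, PAPERCLIP_API_KEY, and SECRET_ID are placeholders.
curl -sf -X POST "$PAPERCLIP_API_URL/api/secrets/$SECRET_ID/rotate" \
  -H "Authorization: Bearer $PAPERCLIP_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"value": "sk-ant-new-value..."}'
```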
## Using Secrets in Agent Config
@@ -393,20 +52,4 @@ Reference secrets in agent adapter config instead of inline values:
}
```
The server resolves and decrypts secret references at runtime, injecting the
real value into the agent process environment. Paperclip's custody guarantees
end at injection: the agent process can read, log, or forward the value, so
treat any secret bound to an agent as exposed to that agent. See the custody
boundaries note in the [secrets deploy guide](/deploy/secrets#custody-boundaries).
## Portability
Company export/import APIs represent agent and project environment requirements
as declarations in the package manifest. Exports omit secret values, secret IDs,
provider references, and encrypted provider material. Use:
```sh
pnpm paperclipai secrets declarations --company-id {companyId}
```
to inspect the declarations that an export would emit before moving a package.

@@ -57,16 +57,6 @@ pnpm paperclipai context set --api-key-env-var-name PAPERCLIP_API_KEY
export PAPERCLIP_API_KEY=...
```
Secret operations are available under `paperclipai secrets`:
```sh
pnpm paperclipai secrets declarations --company-id <company-id> --kind secret
pnpm paperclipai secrets create --company-id <company-id> --name anthropic-api-key --value-env ANTHROPIC_API_KEY
pnpm paperclipai secrets link --company-id <company-id> --name prod-stripe-key --provider aws_secrets_manager --external-ref <provider-ref>
pnpm paperclipai secrets doctor --company-id <company-id>
pnpm paperclipai secrets migrate-inline-env --company-id <company-id> --apply
```
Context is stored at `~/.paperclip/context.json`.
## Command Categories


@@ -67,8 +67,7 @@ Validates:
- Server configuration
- Database connectivity
- Secrets adapter configuration, including AWS Secrets Manager non-secret env
config when selected
- Storage configuration
- Missing key files
@@ -82,13 +81,6 @@ pnpm paperclipai configure --section secrets
pnpm paperclipai configure --section storage
```
`--section secrets` updates the deployment-level provider used as the fallback
for secrets that do not target a specific company vault. Per-company provider
vaults (named instances, default vault selection, multiple vaults per provider,
coming-soon GCP/Vault) live in the board UI under
`Company Settings → Secrets → Provider vaults` and the
`/api/companies/{companyId}/secret-provider-configs` API.
## `paperclipai env`
Show resolved environment configuration:


@@ -1,580 +0,0 @@
---
title: AWS ECS Fargate
summary: Deploy Paperclip to AWS using ECS Fargate, RDS Postgres, and EFS
---
Deploy Paperclip to AWS with ECS Fargate (compute), RDS Postgres 17 (database), and EFS (persistent storage). This guide uses the AWS CLI and produces a single-task ECS service behind an ALB with HTTPS.
## Prerequisites
- AWS CLI v2 configured with a profile that has admin-level permissions
- Docker installed locally (for building and pushing the image)
- A registered domain with DNS you control (for the TLS certificate)
- The Paperclip repo cloned locally
Set these shell variables for the rest of the guide:
```bash
export AWS_REGION=us-east-1
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export PAPERCLIP_DOMAIN=paperclip.example.com # your domain
export DB_PASSWORD=$(openssl rand -base64 24 | tr -d '/+=' | head -c 32)
export AUTH_SECRET=$(openssl rand -base64 32)
```
## 1. Create ECR Repository
```bash
aws ecr create-repository \
--repository-name paperclip-server \
--image-scanning-configuration scanOnPush=true \
--region $AWS_REGION
```
## 2. Build and Push Docker Image
```bash
cd /path/to/paperclip
# Authenticate Docker to ECR
aws ecr get-login-password --region $AWS_REGION \
| docker login --username AWS --password-stdin \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
# Build
docker build -t paperclip-server .
# Tag and push
docker tag paperclip-server:latest \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
docker push \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
```
## 3. Networking (VPC, Subnets, Security Groups)
Use the default VPC or create a dedicated one. The guide assumes the default VPC with public and private subnets in two AZs.
```bash
# Get default VPC
VPC_ID=$(aws ec2 describe-vpcs \
--filters Name=isDefault,Values=true \
--query 'Vpcs[0].VpcId' --output text)
# Get two public subnets (for ALB)
SUBNET_IDS=$(aws ec2 describe-subnets \
--filters Name=vpc-id,Values=$VPC_ID \
--query 'Subnets[?MapPublicIpOnLaunch==`true`] | [0:2].SubnetId' \
--output text)
SUBNET_1=$(echo $SUBNET_IDS | awk '{print $1}')
SUBNET_2=$(echo $SUBNET_IDS | awk '{print $2}')
```
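The lookup above can silently return fewer than two subnets (for example, a VPC with a single public subnet). A guard sketch, not part of the guide's required steps, fails fast before any resources are created:

```bash
# Guard sketch: the ALB and the RDS subnet group both need two distinct
# subnets, so bail out early if the default-VPC lookup came back short.
require_two_subnets() {
  if [ -z "$1" ] || [ -z "$2" ] || [ "$1" = "$2" ]; then
    echo "need two distinct public subnets" >&2
    return 1
  fi
}
# After the lookup: require_two_subnets "$SUBNET_1" "$SUBNET_2" || exit 1
```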
Create security groups:
```bash
# ALB security group — inbound HTTPS
ALB_SG=$(aws ec2 create-security-group \
--group-name paperclip-alb \
--description "Paperclip ALB" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
--group-id $ALB_SG \
--protocol tcp --port 443 --cidr 0.0.0.0/0
# Also open port 80 so the ALB can accept HTTP and redirect to HTTPS
aws ec2 authorize-security-group-ingress \
--group-id $ALB_SG \
--protocol tcp --port 80 --cidr 0.0.0.0/0
# ECS task security group — inbound from ALB only
ECS_SG=$(aws ec2 create-security-group \
--group-name paperclip-ecs \
--description "Paperclip ECS tasks" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
--group-id $ECS_SG \
--protocol tcp --port 3100 \
--source-group $ALB_SG
# RDS security group — inbound from ECS only
RDS_SG=$(aws ec2 create-security-group \
--group-name paperclip-rds \
--description "Paperclip RDS" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
--group-id $RDS_SG \
--protocol tcp --port 5432 \
--source-group $ECS_SG
# EFS security group — inbound NFS from ECS only
EFS_SG=$(aws ec2 create-security-group \
--group-name paperclip-efs \
--description "Paperclip EFS" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
--group-id $EFS_SG \
--protocol tcp --port 2049 \
--source-group $ECS_SG
```
## 4. Create RDS Postgres Instance
```bash
# Create a DB subnet group spanning our two subnets so RDS can place
# the instance (RDS requires an explicit subnet group here).
aws rds create-db-subnet-group \
--db-subnet-group-name paperclip-db-subnet \
--db-subnet-group-description "Paperclip RDS subnets" \
--subnet-ids $SUBNET_1 $SUBNET_2
aws rds create-db-instance \
--db-instance-identifier paperclip-db \
--db-instance-class db.t4g.micro \
--engine postgres \
--engine-version 17 \
--master-username paperclip \
--master-user-password "$DB_PASSWORD" \
--allocated-storage 20 \
--storage-type gp3 \
--vpc-security-group-ids $RDS_SG \
--db-subnet-group-name paperclip-db-subnet \
--no-publicly-accessible \
--backup-retention-period 7 \
--no-multi-az \
--db-name paperclip \
--region $AWS_REGION
# Wait for it to become available (takes 5-10 min)
aws rds wait db-instance-available \
--db-instance-identifier paperclip-db
# Get the endpoint
RDS_ENDPOINT=$(aws rds describe-db-instances \
--db-instance-identifier paperclip-db \
--query 'DBInstances[0].Endpoint.Address' --output text)
DATABASE_URL="postgresql://paperclip:${DB_PASSWORD}@${RDS_ENDPOINT}:5432/paperclip"
```
## 5. Create EFS Filesystem
```bash
EFS_ID=$(aws efs create-file-system \
--performance-mode generalPurpose \
--throughput-mode bursting \
--encrypted \
--tags Key=Name,Value=paperclip-data \
--query 'FileSystemId' --output text)
# Create mount targets in each subnet
for SUBNET in $SUBNET_1 $SUBNET_2; do
aws efs create-mount-target \
--file-system-id $EFS_ID \
--subnet-id $SUBNET \
--security-groups $EFS_SG
done
# Check mount target status (creation is asynchronous)
aws efs describe-mount-targets --file-system-id $EFS_ID
```
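`describe-mount-targets` only lists the targets; it does not wait. A polling sketch in the same style as the teardown loop later in this guide blocks until every mount target reports `available`:

```bash
# Poll until no mount target is still creating; mount targets must be
# `available` before an ECS task can mount EFS.
while aws efs describe-mount-targets \
    --file-system-id $EFS_ID \
    --query 'MountTargets[?LifeCycleState!=`available`] | [0].MountTargetId' \
    --output text | grep -q 'fsmt-'; do
  echo "waiting for EFS mount targets..."
  sleep 5
done
```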
## 6. Store Secrets
```bash
aws secretsmanager create-secret \
--name paperclip/database-url \
--secret-string "$DATABASE_URL"
aws secretsmanager create-secret \
--name paperclip/anthropic-api-key \
--secret-string "YOUR_ANTHROPIC_KEY"
aws secretsmanager create-secret \
--name paperclip/better-auth-secret \
--secret-string "$AUTH_SECRET"
aws secretsmanager create-secret \
--name paperclip/openai-api-key \
--secret-string "YOUR_OPENAI_KEY"
aws secretsmanager create-secret \
--name paperclip/github-token \
--secret-string "YOUR_GITHUB_PAT"
```
## 7. IAM Roles
Create the ECS task execution role (pulls images, reads secrets) and the task role (application permissions).
```bash
# Task execution role
aws iam create-role \
--role-name paperclip-ecs-execution \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": "ecs-tasks.amazonaws.com"},
"Action": "sts:AssumeRole"
}]
}'
aws iam attach-role-policy \
--role-name paperclip-ecs-execution \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
# Allow reading secrets
aws iam put-role-policy \
--role-name paperclip-ecs-execution \
--policy-name SecretsAccess \
--policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["secretsmanager:GetSecretValue"],
"Resource": "arn:aws:secretsmanager:'$AWS_REGION':'$AWS_ACCOUNT_ID':secret:paperclip/*"
}]
}'
# Task role (application — add permissions as needed)
aws iam create-role \
--role-name paperclip-ecs-task \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": "ecs-tasks.amazonaws.com"},
"Action": "sts:AssumeRole"
}]
}'
```
## 8. ECS Cluster and Task Definition
```bash
aws ecs create-cluster --cluster-name paperclip
aws logs create-log-group --log-group-name /ecs/paperclip
```
Register the task definition using the template at `docker/ecs-task-definition.json`. Before registering, replace the placeholder values:
```bash
sed -e "s|<ACCOUNT_ID>|$AWS_ACCOUNT_ID|g" \
-e "s|<REGION>|$AWS_REGION|g" \
-e "s|<EFS_ID>|$EFS_ID|g" \
-e "s|<DOMAIN>|$PAPERCLIP_DOMAIN|g" \
docker/ecs-task-definition.json > /tmp/paperclip-task-def.json
aws ecs register-task-definition \
--cli-input-json file:///tmp/paperclip-task-def.json
```
## 9. ALB and TLS Certificate
Request a certificate (you must validate via DNS):
```bash
CERT_ARN=$(aws acm request-certificate \
--domain-name $PAPERCLIP_DOMAIN \
--validation-method DNS \
--query 'CertificateArn' --output text)
# Get the CNAME record to add to your DNS
aws acm describe-certificate \
--certificate-arn $CERT_ARN \
--query 'Certificate.DomainValidationOptions[0].ResourceRecord'
```
Add the CNAME to your DNS provider, then wait for validation:
```bash
aws acm wait certificate-validated --certificate-arn $CERT_ARN
```
Create the ALB:
```bash
ALB_ARN=$(aws elbv2 create-load-balancer \
--name paperclip-alb \
--subnets $SUBNET_1 $SUBNET_2 \
--security-groups $ALB_SG \
--scheme internet-facing \
--type application \
--query 'LoadBalancers[0].LoadBalancerArn' --output text)
ALB_DNS=$(aws elbv2 describe-load-balancers \
--load-balancer-arns $ALB_ARN \
--query 'LoadBalancers[0].DNSName' --output text)
# Target group
TG_ARN=$(aws elbv2 create-target-group \
--name paperclip-tg \
--protocol HTTP \
--port 3100 \
--vpc-id $VPC_ID \
--target-type ip \
--health-check-path /api/health \
--health-check-interval-seconds 30 \
--healthy-threshold-count 2 \
--unhealthy-threshold-count 3 \
--query 'TargetGroups[0].TargetGroupArn' --output text)
# HTTPS listener
LISTENER_ARN=$(aws elbv2 create-listener \
--load-balancer-arn $ALB_ARN \
--protocol HTTPS \
--port 443 \
--certificates CertificateArn=$CERT_ARN \
--default-actions Type=forward,TargetGroupArn=$TG_ARN \
--query 'Listeners[0].ListenerArn' --output text)
# HTTP listener — redirect all :80 traffic to :443
HTTP_LISTENER_ARN=$(aws elbv2 create-listener \
--load-balancer-arn $ALB_ARN \
--protocol HTTP \
--port 80 \
--default-actions Type=redirect,RedirectConfig='{Protocol=HTTPS,Port=443,StatusCode=HTTP_301}' \
--query 'Listeners[0].ListenerArn' --output text)
```
Point your DNS to the ALB:
- Create a CNAME or ALIAS record for `$PAPERCLIP_DOMAIN` -> `$ALB_DNS`
## 10. Create ECS Service
```bash
aws ecs create-service \
--cluster paperclip \
--service-name paperclip-server \
--task-definition paperclip-server \
--desired-count 1 \
--launch-type FARGATE \
--deployment-configuration '{
"deploymentCircuitBreaker": {"enable": true, "rollback": true},
"maximumPercent": 200,
"minimumHealthyPercent": 100
}' \
--network-configuration '{
"awsvpcConfiguration": {
"subnets": ["'$SUBNET_1'", "'$SUBNET_2'"],
"securityGroups": ["'$ECS_SG'"],
"assignPublicIp": "ENABLED"
}
}' \
--load-balancers '[{
"targetGroupArn": "'$TG_ARN'",
"containerName": "paperclip-server",
"containerPort": 3100
}]'
```
> **Note:** `assignPublicIp: ENABLED` is needed if using public subnets without a NAT Gateway. For private subnets, set to `DISABLED` and ensure a NAT Gateway is configured for outbound internet access.
## 11. Verify Deployment
```bash
# Watch task come up
aws ecs describe-services \
--cluster paperclip \
--services paperclip-server \
--query 'services[0].{desired:desiredCount,running:runningCount,status:status}'
# Check task health
aws ecs list-tasks --cluster paperclip --service-name paperclip-server
TASK_ARN=$(aws ecs list-tasks --cluster paperclip --service-name paperclip-server --query 'taskArns[0]' --output text)
aws ecs describe-tasks --cluster paperclip --tasks $TASK_ARN \
--query 'tasks[0].{status:lastStatus,health:healthStatus}'
# Check logs
aws logs tail /ecs/paperclip --since 10m --follow
# Hit the health endpoint
curl -sf https://$PAPERCLIP_DOMAIN/api/health
```
**Healthy indicators:**
- ECS task status: `RUNNING`, health: `HEALTHY`
- Logs show `plugin job coordinator started` and `plugin-loader: loadAll complete`
- `/api/health` returns 200
## Post-Deploy Security Hardening
After the first user has signed up (which grants admin role), lock down the instance:
```bash
# Disable public sign-up (prevents unauthorized users from creating accounts)
# Add to the task definition environment section, then redeploy:
# { "name": "PAPERCLIP_AUTH_DISABLE_SIGN_UP", "value": "true" }
# Or update via Secrets Manager / task def override, then force new deployment
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--force-new-deployment
```
Use the invite flow (added in v2026.416.0) to grant access to additional users after sign-up is disabled.
## Deploying Updates
Build, push, and force a new deployment:
```bash
# Build and push new image
docker build -t paperclip-server .
docker tag paperclip-server:latest \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
docker push \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
# Roll out
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--force-new-deployment
# Watch the deployment
aws ecs describe-services \
--cluster paperclip \
--services paperclip-server \
--query 'services[0].deployments[*].{status:status,running:runningCount,desired:desiredCount,rollout:rolloutState}'
```
ECS performs a rolling update: starts a new task, waits for it to pass health checks, then drains the old task.
## Rollback
If the new deployment is unhealthy:
```bash
# ECS automatically rolls back if the new task fails health checks
# (circuit breaker is enabled in the service configuration above).
# To force rollback manually:
# 1. Find the previous task definition revision
aws ecs list-task-definitions \
--family-prefix paperclip-server \
--sort DESC \
--query 'taskDefinitionArns[0:3]'
# 2. Update service to the previous revision
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--task-definition paperclip-server:<PREVIOUS_REVISION>
```
## Scaling to Zero (Cost Savings)
Scale down when not in use:
```bash
# Stop
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--desired-count 0
# Start
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--desired-count 1
```
RDS can also be stopped (auto-restarts after 7 days):
```bash
aws rds stop-db-instance --db-instance-identifier paperclip-db
aws rds start-db-instance --db-instance-identifier paperclip-db
```
## Teardown
Remove all resources in reverse order:
```bash
# 1. ECS service and cluster
aws ecs update-service --cluster paperclip --service paperclip-server --desired-count 0
aws ecs delete-service --cluster paperclip --service paperclip-server --force
aws ecs delete-cluster --cluster paperclip
# 2. ALB and ACM cert
aws elbv2 delete-listener --listener-arn $HTTP_LISTENER_ARN
aws elbv2 delete-listener --listener-arn $LISTENER_ARN
aws elbv2 delete-target-group --target-group-arn $TG_ARN
aws elbv2 delete-load-balancer --load-balancer-arn $ALB_ARN
aws acm delete-certificate --certificate-arn $CERT_ARN
# 3. RDS (creates final snapshot)
aws rds delete-db-instance \
--db-instance-identifier paperclip-db \
--final-db-snapshot-identifier paperclip-db-final
aws rds wait db-instance-deleted --db-instance-identifier paperclip-db
aws rds delete-db-subnet-group --db-subnet-group-name paperclip-db-subnet
# 4. EFS (mount targets must be deleted first)
for MT in $(aws efs describe-mount-targets --file-system-id $EFS_ID --query 'MountTargets[*].MountTargetId' --output text); do
aws efs delete-mount-target --mount-target-id $MT
done
# Mount-target deletion is async; poll until none remain before deleting
# the filesystem, otherwise delete-file-system fails with FileSystemInUse.
echo "Waiting for mount targets to delete..."
while aws efs describe-mount-targets \
--file-system-id $EFS_ID \
--query 'MountTargets[0].MountTargetId' --output text 2>/dev/null | grep -q 'fsmt-'; do
sleep 5
done
aws efs delete-file-system --file-system-id $EFS_ID
# 5. Secrets
for s in database-url anthropic-api-key better-auth-secret openai-api-key github-token; do
aws secretsmanager delete-secret --secret-id paperclip/$s --force-delete-without-recovery
done
# 6. Security groups (after all dependents are gone)
for sg in $EFS_SG $RDS_SG $ECS_SG $ALB_SG; do
aws ec2 delete-security-group --group-id $sg
done
# 7. ECR
aws ecr delete-repository --repository-name paperclip-server --force
# 8. IAM roles
aws iam delete-role-policy --role-name paperclip-ecs-execution --policy-name SecretsAccess
aws iam detach-role-policy --role-name paperclip-ecs-execution \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
aws iam delete-role --role-name paperclip-ecs-execution
aws iam delete-role --role-name paperclip-ecs-task
# 9. Log group
aws logs delete-log-group --log-group-name /ecs/paperclip
```
## Cost Reference
| Service | Config | Monthly |
|---------|--------|---------|
| ECS Fargate | 2 vCPU, 4 GB, 24/7 | ~$70 |
| RDS Postgres | db.t4g.micro, 20 GB | ~$15 |
| ALB | 1 LCU average | ~$22 |
| NAT Gateway | 1 AZ (if using private subnets) | ~$35 |
| EFS | 1 GB Standard | ~$0.30 |
| Secrets Manager | 5 secrets | ~$2 |
| CloudWatch Logs | ~1 GB/mo | ~$0.50 |
| ECR | ~1 GB | ~$0.10 |
| **Total (public subnets, no NAT)** | | **~$110/mo** |
| **Total (private subnets + NAT)** | | **~$145/mo** |
Use Fargate Spot and scheduled scaling to 0 during off-hours to reduce to ~$60-85/mo.
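The off-hours idea can be sketched with Application Auto Scaling scheduled actions; the cluster and service names match this guide, and the cron expressions (UTC) are examples to adjust:

```bash
# Register the service with Application Auto Scaling, then pin desired
# count to 0 overnight and back to 1 in the morning.
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/paperclip/paperclip-server \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 0 --max-capacity 1
aws application-autoscaling put-scheduled-action \
  --service-namespace ecs \
  --resource-id service/paperclip/paperclip-server \
  --scalable-dimension ecs:service:DesiredCount \
  --scheduled-action-name paperclip-scale-down \
  --schedule "cron(0 2 * * ? *)" \
  --scalable-target-action MinCapacity=0,MaxCapacity=0
aws application-autoscaling put-scheduled-action \
  --service-namespace ecs \
  --resource-id service/paperclip/paperclip-server \
  --scalable-dimension ecs:service:DesiredCount \
  --scheduled-action-name paperclip-scale-up \
  --schedule "cron(0 13 * * ? *)" \
  --scalable-target-action MinCapacity=1,MaxCapacity=1
```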


@@ -18,7 +18,6 @@ All environment variables that Paperclip uses for server configuration.
| `PAPERCLIP_INSTANCE_ID` | `default` | Instance identifier (for multiple local instances) |
| `PAPERCLIP_DEPLOYMENT_MODE` | `local_trusted` | Runtime mode override |
| `PAPERCLIP_DEPLOYMENT_EXPOSURE` | `private` | Exposure policy when deployment mode is `authenticated` |
| `PAPERCLIP_API_URL` | (auto-derived) | Paperclip API base URL. When set externally (e.g., via Kubernetes ConfigMap, load balancer, or reverse proxy), the server preserves the value instead of deriving it from the listen host and port. Useful for deployments where the public-facing URL differs from the local bind address. |
## Secrets
@@ -36,7 +35,7 @@ These are set automatically by the server when invoking agents:
|----------|-------------|
| `PAPERCLIP_AGENT_ID` | Agent's unique ID |
| `PAPERCLIP_COMPANY_ID` | Company ID |
| `PAPERCLIP_API_URL` | Paperclip API base URL (inherits the server-level value; see Server Configuration above) |
| `PAPERCLIP_API_KEY` | Short-lived JWT for API auth |
| `PAPERCLIP_RUN_ID` | Current heartbeat run ID |
| `PAPERCLIP_TASK_ID` | Issue that triggered this wake |


@@ -40,7 +40,7 @@ Paperclip supports three deployment configurations, from zero-friction local to
- **Just trying Paperclip?** Use `local_trusted` (the default)
- **Sharing with a team on private network?** Use `authenticated` + `private`
- **Deploying to the cloud?** Use `authenticated` + `public` — see [AWS ECS Fargate guide](aws-ecs.md)
Set the mode during onboarding:


@@ -5,52 +5,6 @@ summary: Master key, encryption, and strict mode
Paperclip encrypts secrets at rest using a local master key. Agent environment variables that contain sensitive values (API keys, tokens) are stored as encrypted secret references.
## Custody Boundaries
Paperclip protects secret values up to the moment they are handed to an agent
or workload:
- Storage: values are encrypted at rest by the active provider. The local
provider keeps them encrypted with a key that never leaves the host.
- Transport: values are decrypted server-side and injected into the agent
process environment, SSH command env, sandbox driver, or HTTP request
immediately before the call. Paperclip does not return decrypted values to
the board UI.
- Audit: each resolution records a non-sensitive event (secret id, version,
provider id, consumer, outcome) without the value or provider credentials.
Once a value reaches the consuming process, Paperclip can no longer guarantee
secrecy. The agent (or sandbox, or remote host) can read the value, write it to
its own logs or transcript, or pass it to downstream tools. Treat any secret
you bind to an agent as exposed to that agent. Limit blast radius with bindings
(only bind what each agent needs), short-lived provider credentials where the
provider supports them, and rotation when an agent transcript or downstream
system might have captured a value.
## Using Secrets In Runs
Creating a company secret does not automatically create an environment variable.
You use a secret by binding it into an agent, project, environment, or plugin
configuration field that supports secret references.
For agent and project environment variables:
1. Create or link the secret in `Company Settings > Secrets`.
2. Open the agent's `Environment variables` field, or the project's `Env`
field.
3. Add the environment variable key the process expects, such as `GH_TOKEN` or
`OPENAI_API_KEY`.
4. Set the row source to `Secret`, select the stored secret, and choose either
`latest` or a pinned version.
At runtime, Paperclip resolves the selected secret server-side and injects the
resolved value under the env key from the binding row. The stored secret name
can be human-readable; the binding key is what the agent process receives.
Project env applies to every issue run in that project. When a project env key
matches an agent env key, the project value wins before Paperclip injects its
own `PAPERCLIP_*` runtime variables.
## Default Provider: `local_encrypted`
Secrets are encrypted with a local master key stored at:
@@ -60,13 +14,6 @@ Secrets are encrypted with a local master key stored at:
```
This key is auto-created during onboarding. The key never leaves your machine.
Paperclip enforces `0600` permissions on the key file on a best-effort basis
when it creates or loads it. `paperclipai doctor` and the provider health API
warn when the file is readable by group or other users.
Back up the key file together with database backups. A database backup without
the key cannot decrypt local secrets, and a key backup without the database
metadata is not enough to restore named secret versions.
## Configuration
@@ -88,7 +35,6 @@ Validate secrets config:
```sh
pnpm paperclipai doctor
pnpm paperclipai secrets doctor --company-id <company-id>
```
### Environment Overrides
@@ -109,279 +55,15 @@ PAPERCLIP_SECRETS_STRICT_MODE=true
Recommended for any deployment beyond local trusted.
Authenticated deployments default strict mode on unless explicitly overridden by
configuration or `PAPERCLIP_SECRETS_STRICT_MODE=false`.
## External References
Provider-owned secrets can be linked without copying values into Paperclip by
using `managedMode: "external_reference"` plus a provider `externalRef`.
Paperclip stores metadata and a non-sensitive fingerprint, never the value.
Runtime resolution remains server-side and binding-enforced.
The built-in AWS, GCP, and Vault provider IDs currently accept external
reference metadata, but runtime resolution requires provider configuration in the
deployment. Their provider health check reports this as a warning until
configured.
For hosted Paperclip Cloud on AWS, see the AWS Secrets Manager operational
contract — required env vars, IAM/KMS scoping, naming and tag conventions, and
backup/rotation/incident runbooks — in `doc/SECRETS-AWS-PROVIDER.md`.
## Provider Vaults
A *provider vault* is a named, company-scoped configuration that points secret
material at one of the supported provider backends. Each company can configure
multiple vaults, including more than one vault per provider family, and pick a
default vault per family for new secret operations. Existing secrets created
before any vault was configured continue to resolve through the deployment-level
default provider — no migration is required.
### Where to configure
Open `Company Settings → Secrets` in the board UI and switch to the
`Provider vaults` tab. From there you can:
- Create a vault for any supported provider family.
- Edit the non-secret config of an existing vault.
- Set one ready vault per provider family as the company default.
- Disable a vault (a soft delete that keeps audit history).
- Run a health check against a vault and read the latest result inline.
The same operations are exposed under
`/api/companies/{companyId}/secret-provider-configs` for automation. See the
[secrets API reference](/api/secrets#provider-vaults) for the full route table.
### Custody Of Provider Credentials
Provider vaults intentionally store only **non-sensitive** configuration:
region, project id, namespace, prefix, KMS key id, mount path, address, and
similar routing metadata. The API, UI, and activity log never accept, return,
or display provider credential values. Submitting fields with names like
`accessKeyId`, `secretAccessKey`, `token`, `password`, `serviceAccountJson`,
`privateKey`, `keyFile`, `unsealKey`, or any common credential alias is rejected
at validation time.
That keeps the bootstrap rule from the AWS provider applicable to every
provider family: **provider credentials live in deployment infrastructure
identity, not in Paperclip company secrets**. Allowed credential sources are
workload identity attached to the Paperclip server (instance profile, IRSA, ECS
task role), `AWS_PROFILE` / SSO / shared config for local runs, an orchestrator
secret store that boots the server, or short-lived shell credentials for local
development. Do not paste long-lived API keys into the vault config.
### Vault Status
Each vault carries a status that drives what the runtime can do with it:
| Status | Meaning |
|---------------|-----------------------------------------------------------------------------------------------|
| `ready` | Selectable for create/rotate/resolve. Eligible to be the default. |
| `warning` | Saved config exists but health needs attention (for example missing AWS env). Still selectable. |
| `coming_soon` | Visible and editable as draft metadata, but locked out of all runtime operations. |
| `disabled` | Soft-deleted. Hidden from the secret create/rotate flow. |
`gcp_secret_manager` and `vault` are pinned to `coming_soon` until their
runtime modules ship. The settings UI lets you save draft configuration for
those providers (and surfaces them on the vault list), but secret create,
rotate, and resolve calls that target a coming-soon vault fail with a clear
runtime-locked error.
### Default Vault Behavior
A company can mark **one** ready (or warning) vault per provider family as the
default. The secret create and rotate dialogs preselect the default vault for
the chosen provider so operators don't have to remember which vault to pick.
Coming-soon and disabled vaults cannot be marked default; attempting to do so
returns a validation error. Setting a new default automatically clears the
previous default for that provider.
If a secret is created without any `providerConfigId` (no vaults exist yet, or
the operator clears the selector), runtime resolution falls back to the
deployment-level provider configuration — the same path existing installs use.
This keeps secrets created before any provider vault was configured working
without migration. Picking the default in the UI is an explicit selection, not
a runtime fallback: the create call still sends an explicit `providerConfigId`.
### Multiple Vaults Per Provider
Multiple vaults from the same provider family are first-class. Common patterns:
- Two AWS vaults pointing at different regions or KMS keys for environment
separation.
- A staging Vault address alongside a production address.
- A dedicated GCP project for a single product line while the rest of the
company uses another.
Each vault has its own display name, status, default flag, and health record.
Operators choose the vault explicitly when creating or rotating a secret; the
default vault is preselected to avoid accidental routing to the wrong account.
### Per-Vault Health Checks
`POST /api/secret-provider-configs/{id}/health` runs a provider-specific health
probe and stores the result on the vault row. The settings UI exposes the same
action and renders the result inline. Health responses include a status,
operator-facing message, and structured guidance (such as missing env var
names, expected credential sources, and backup reminders). They never include
provider credentials or secret values. Coming-soon vaults always return a
`runtime_locked` health code and never call into provider modules.
### Provider-Specific Notes
**Local encrypted vaults** wrap the existing `local_encrypted` provider. The
master key path and rotation guidance described above still apply. A local
vault config is mostly bookkeeping plus an explicit acknowledgement that the
key file is backed up alongside the database.
**AWS Secrets Manager vaults** read the per-vault `region`, `namespace`,
`secretNamePrefix`, `kmsKeyId`, `ownerTag`, and `environmentTag` to route
managed writes and external-reference reads. The vault config supplements (and
can override) the deployment-level `PAPERCLIP_SECRETS_AWS_*` env. Bootstrap
credentials still come from the AWS SDK default credential chain — see
`doc/SECRETS-AWS-PROVIDER.md` for the full IAM and KMS contract.
**GCP Secret Manager** and **HashiCorp Vault** vaults are coming soon. You can
save draft `projectId`, `location`, `namespace`, `address`, and `mountPath`
metadata so the company is ready to flip them on when the provider modules
ship. Vault `address` values must be origin-only `http(s)://host[:port]` URLs;
addresses with embedded credentials, paths, query strings, or fragments are
rejected.
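The origin-only rule can be sketched with the WHATWG `URL` parser. This is an illustrative check, not Paperclip's actual validator:

```ts
// Sketch: accept only origin-only http(s) addresses for a Vault config.
// Behavior is inferred from the rule stated above, not taken from Paperclip code.
function isOriginOnlyAddress(address: string): boolean {
  let url: URL;
  try {
    url = new URL(address);
  } catch {
    return false; // not a parseable URL at all
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") return false;
  if (url.username || url.password) return false; // embedded credentials
  if (url.pathname !== "/" && url.pathname !== "") return false; // path present
  if (url.search || url.hash) return false; // query string or fragment
  return true;
}
```

Under this check, `https://vault.example.com:8200` passes, while addresses with `user:pass@`, a `/v1` path, a query string, or a fragment are rejected.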
### Remote Import From AWS Vaults
AWS provider vaults can import existing AWS Secrets Manager entries as
Paperclip `external_reference` secrets. This is a metadata-only link: Paperclip
stores the AWS ARN/path, a fingerprint/version reference, and binding metadata.
It does not read, copy, store, log, or display the remote plaintext secret
value during preview or import.
Operator flow in the board UI:
1. Open `Company Settings -> Secrets`.
2. Confirm at least one AWS provider vault is `ready` or `warning`.
3. In the `Secrets` tab, choose `Import from vault`.
4. Select an AWS vault, search the remote inventory, and load more pages as
needed.
5. Check the rows to import, review/edit the Paperclip name and key, then
submit.
6. Review the result summary for created, skipped, and failed rows.
The preview list is intentionally paged and search-first. AWS accounts can have
large per-Region inventories, and `ListSecrets` returns opaque `NextToken`
cursors. Do not expect Paperclip to crawl a whole account in the background;
load pages deliberately and retry throttled requests with backoff.
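A deliberate paging loop with bounded backoff might look like this sketch. It is generic over the page fetcher; the `Page` shape and `loadPages` helper are illustrative assumptions, not the actual import client:

```ts
// Sketch: page through an opaque-cursor list API (like ListSecrets/NextToken),
// retrying each page a few times with exponential backoff on throttling.
interface Page<T> {
  items: T[];
  nextToken?: string;
}

async function loadPages<T>(
  fetchPage: (nextToken?: string) => Promise<Page<T>>,
  maxPages: number,
  baseDelayMs = 200,
): Promise<T[]> {
  const items: T[] = [];
  let token: string | undefined;
  for (let page = 0; page < maxPages; page++) {
    let attempt = 0;
    for (;;) {
      try {
        const result = await fetchPage(token);
        items.push(...result.items);
        token = result.nextToken;
        break;
      } catch (err) {
        if (attempt >= 3) throw err; // give up after a few retries
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
        attempt++;
      }
    }
    if (!token) break; // no more pages
  }
  return items;
}
```

The `maxPages` bound mirrors the guidance above: load pages deliberately rather than crawling an entire account.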
Remote import exposes AWS secret metadata visible to the Paperclip runtime
role, including names/ARNs and safe derived fields such as dates, whether a
description or KMS key exists, and tag count. Treat names, ARNs, tags, and
search text as operational metadata that may be sensitive. The API and activity
log must not store raw descriptions, tags, plaintext values, provider
credentials, or raw AWS error blobs.
Required AWS posture:
- Preview needs the optional `secretsmanager:ListSecrets` permission on
`Resource: "*"`. AWS does not support constraining `ListSecrets` to
individual secret ARNs or tags as an IAM boundary.
- Preview/import must not call `secretsmanager:GetSecretValue`,
`secretsmanager:BatchGetSecretValue`, or KMS decrypt.
- Runtime resolution of an imported reference still needs
`secretsmanager:GetSecretValue` on the selected external ARN/path and KMS
decrypt when that secret uses a customer-managed key.
- Keep managed create/rotate/delete permissions scoped to the Paperclip
deployment prefix. Do not broaden managed write/delete permissions just
because import inventory is enabled.
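An illustrative IAM statement pair matching this posture might look like the following. The account ID, Region, and `paperclip/prod/` prefix are placeholders, and this is a sketch of the split, not an authoritative policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "OptionalImportInventory",
      "Effect": "Allow",
      "Action": "secretsmanager:ListSecrets",
      "Resource": "*"
    },
    {
      "Sid": "ManagedWritesScopedToPrefix",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:CreateSecret",
        "secretsmanager:PutSecretValue",
        "secretsmanager:DeleteSecret"
      ],
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:paperclip/prod/*"
    }
  ]
}
```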
Safe scoping comes from deployment posture rather than AWS list filtering:
dedicated Paperclip runtime roles per environment/account, AWS vaults pointed at
the intended account and Region, import-enabled roles only where inventory
exposure is acceptable, and board-only access to the import routes. Tags and
name filters are search aids, not a permission model.
If import preview fails:
- `AccessDenied` or `not authorized`: the runtime role is missing
`secretsmanager:ListSecrets`; add the optional inventory statement only if
remote import should be enabled for that vault.
- Throttling: retry after a short delay and narrow the search before loading
more pages.
- Invalid cursor: refresh the preview; AWS `NextToken` values are opaque and
can expire or become stale.
- Runtime resolution failure after import: verify `GetSecretValue` and KMS
decrypt scope for the selected external secret. Being visible in inventory is
not proof that the runtime role can read the value.
### Backup And Restore
Each provider family has a different backup story:
- `local_encrypted`: back up the local master key file and the Paperclip
database together. Either alone is not enough to restore the encrypted
values, and the vault row only records the path and acknowledgement, not the
key bytes.
- `aws_secrets_manager`: back up Paperclip's database for vault metadata
(vault id, region, prefix, KMS key id, default flag, bindings, version
pointers). The actual secret values live in AWS Secrets Manager under the
configured prefix; restore by pointing the same Paperclip company at the
same AWS namespace and confirming the runtime role still has
`GetSecretValue` plus KMS decrypt. The full restore checklist lives in
`doc/SECRETS-AWS-PROVIDER.md`.
- `gcp_secret_manager` and `vault`: while these are coming soon, only the
draft vault config exists in Paperclip. Database backups capture it. There
is nothing to restore on the provider side until runtime support lands.
### AWS Provider Bootstrap Boundary
The AWS Secrets Manager provider cannot bootstrap itself from Paperclip
`company_secrets`. Its initial AWS access must be present before the server can
create or resolve AWS-backed company secrets, regardless of whether you use the
deployment-level default or a per-company vault.
For Paperclip Cloud, provision the server runtime IAM role/workload identity,
KMS key, deployment prefix, and non-secret `PAPERCLIP_SECRETS_AWS_*` environment
configuration before enabling AWS-backed secrets in the board UI. For
self-hosted and local runs, use the AWS SDK default credential chain: instance
profile, ECS task role, EKS IRSA/OIDC web identity, AWS SSO/shared config via
`AWS_PROFILE`, or short-lived shell credentials for local development.
Do not store AWS root credentials or long-lived IAM user access keys in
Paperclip secrets. Bootstrap material belongs in infrastructure IAM/workload
identity, the process environment, an AWS profile, or the orchestrator secret
store.
## Migrating Inline Secrets
If you have existing agents with inline API keys in their config, migrate them to encrypted secret refs:
```sh
pnpm paperclipai secrets migrate-inline-env --company-id <company-id>          # dry run (preview)
pnpm paperclipai secrets migrate-inline-env --company-id <company-id> --apply  # apply migration
# low-level script for direct database maintenance
pnpm secrets:migrate-inline-env # dry run
pnpm secrets:migrate-inline-env --apply # apply migration
```
Use the CLI command for normal operations because it goes through the Paperclip
API, creates or rotates secret records, and updates agent env bindings with
audit logging.
## Portable Declarations
Company exports include only environment declarations. They do not include
secret IDs, provider references, encrypted material, or plaintext values.
```sh
pnpm paperclipai secrets declarations --company-id <company-id> --kind secret
```
Before importing a package into another instance, use those declarations to
create local values or link hosted provider references in the target deployment.
For hosted providers such as AWS Secrets Manager, the hosted provider remains
the value custodian; Paperclip stores metadata and provider version references,
not provider credentials or plaintext secret values.
## Secret References in Agent Config
Agent environment variables use secret references:
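The concrete example is cut off in this view. A purely hypothetical shape, where the `env` and `secretRef` field names are assumptions and not Paperclip's actual schema, could look like:

```
{
  "env": {
    "OPENAI_API_KEY": { "secretRef": "secret://<secret-id>" }
  }
}
```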

@@ -48,8 +48,6 @@
     "guides/board-operator/managing-tasks",
     "guides/board-operator/execution-workspaces-and-runtime-services",
     "guides/board-operator/delegation",
-    "guides/board-operator/execution-workspaces-and-runtime-services",
-    "guides/board-operator/delegation",
     "guides/board-operator/approvals",
     "guides/board-operator/costs-and-budgets",
     "guides/board-operator/activity-log",

@@ -55,15 +55,3 @@ The name must match the agent's `name` field exactly (case-insensitive). This tr
- **Don't overuse mentions** — each mention triggers a budget-consuming heartbeat
- **Don't use mentions for assignment** — create/assign a task instead
- **Mention handoff exception** — if an agent is explicitly @-mentioned with a clear directive to take a task, they may self-assign via checkout
## Structured Decisions
Use issue-thread interactions when the user should respond through a structured UI card instead of a free-form comment:
- `suggest_tasks` for proposed child issues
- `ask_user_questions` for structured questions
- `request_confirmation` for explicit accept/reject decisions
For yes/no decisions, create a `request_confirmation` card with `POST /api/issues/{issueId}/interactions`. Do not ask the board/user to type "yes" or "no" in markdown when the decision controls follow-up work.
Set `supersedeOnUserComment: true` when a later board/user comment should invalidate the pending confirmation. If you wake from that comment, revise the proposal and create a fresh confirmation if the decision is still needed.

@@ -5,16 +5,6 @@ summary: Agent-side approval request and response
Agents interact with the approval system in two ways: requesting approvals and responding to approval resolutions.
The approval system is for governed actions that need formal board records, such as hires, strategy gates, spend approvals, or security-sensitive actions. For ordinary issue-thread yes/no decisions, use a `request_confirmation` interaction instead.
Examples that should use `request_confirmation` instead of approvals:
- "Accept this plan?"
- "Proceed with this issue breakdown?"
- "Use option A or reject and request changes?"
Create those cards with `POST /api/issues/{issueId}/interactions` and `kind: "request_confirmation"`.
## Requesting a Hire
Managers and CEOs can request to hire new agents:
@@ -47,16 +37,6 @@ POST /api/companies/{companyId}/approvals
}
```
## Plan Approval Cards
For normal issue implementation plans, use the issue-thread confirmation surface:
1. Update the `plan` issue document.
2. Create `request_confirmation` bound to the latest `plan` revision.
3. Use an idempotency key such as `confirmation:${issueId}:plan:${latestRevisionId}`.
4. Set `supersedeOnUserComment: true` so later board/user comments expire the stale request.
5. Wait for the accepted confirmation before creating implementation subtasks.
## Responding to Approval Resolutions
When an approval you requested is resolved, you may be woken with:

@@ -66,11 +66,7 @@ Read ancestors to understand why this task exists. If woken by a specific commen
### Step 7: Do the Work
Use your tools and capabilities to complete the task. If the issue is actionable, take a concrete action in the same heartbeat. Do not stop at a plan unless the issue asked for planning.
Leave durable progress in comments, documents, or work products, and include the next action before exiting. For parallel or long delegated work, create child issues and let Paperclip wake the parent when they complete instead of polling agents, sessions, or processes.
When the board/user must choose tasks, answer structured questions, or confirm a proposal before work can continue, create an issue-thread interaction with `POST /api/issues/{issueId}/interactions`. Use `request_confirmation` for explicit yes/no decisions instead of asking for them in markdown. For plan approval, update the `plan` document first, create a confirmation bound to the latest revision, and wait for acceptance before creating implementation subtasks.
Use your tools and capabilities to complete the task.
### Step 8: Update Status
@@ -106,23 +102,6 @@ Always set `parentId` and `goalId` on subtasks.
- **Always checkout** before working — never PATCH to `in_progress` manually
- **Never retry a 409** — the task belongs to someone else
- **Always comment** on in-progress work before exiting a heartbeat
- **Start actionable work** in the same heartbeat; planning-only exits are for planning tasks
- **Leave a clear next action** in durable issue context
- **Use child issues instead of polling** for long or parallel delegated work
- **Use `request_confirmation`** for issue-scoped yes/no decisions and plan approval cards
- **Always set parentId** on subtasks
- **Never cancel cross-team tasks** — reassign to your manager
- **Escalate when stuck** — use your chain of command
## Run Liveness
Paperclip records run liveness as metadata on heartbeat runs. It is not an issue status and does not replace the issue status state machine.
- Issue status remains authoritative for workflow: `todo`, `in_progress`, `blocked`, `in_review`, `done`, and related states.
- Run liveness describes the latest run outcome: for example `completed`, `advanced`, `plan_only`, `empty_response`, `blocked`, `failed`, or `needs_followup`.
- Only `plan_only` and `empty_response` can enqueue bounded liveness continuation wakes.
- Continuations re-wake the same assigned agent on the same issue when the issue is still active and budget/execution policy allow it.
- `continuationAttempt` counts semantic liveness continuations for a source run chain. It is separate from process recovery, queued wake delivery, adapter session resume, and other operational retries.
- Liveness continuation wake prompts include the attempt, source run, liveness state, liveness reason, and the instruction for the next heartbeat.
- Continuations do not mark the issue `blocked` or `done`. If automatic continuations are exhausted, Paperclip leaves an audit comment so a human or manager can clarify, block, or assign follow-up work.
- Workspace provisioning alone is not treated as concrete task progress. Durable progress should appear as tool/action events, issue comments, document or work-product revisions, activity log entries, commits, or tests.
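Under those rules, the continuation gate reduces to a small predicate. The following is an illustrative sketch; the type and function names are assumptions, not Paperclip internals:

```ts
// Run liveness values described above.
type RunLiveness =
  | "completed"
  | "advanced"
  | "plan_only"
  | "empty_response"
  | "blocked"
  | "failed"
  | "needs_followup";

// Only plan_only and empty_response may enqueue a bounded continuation wake,
// and only while the issue is active, policy allows it, and attempts remain.
function shouldEnqueueContinuation(opts: {
  liveness: RunLiveness;
  issueActive: boolean;
  budgetAllows: boolean;
  continuationAttempt: number;
  maxAttempts: number;
}): boolean {
  const eligible =
    opts.liveness === "plan_only" || opts.liveness === "empty_response";
  return (
    eligible &&
    opts.issueActive &&
    opts.budgetAllows &&
    opts.continuationAttempt < opts.maxAttempts
  );
}
```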

@@ -68,53 +68,6 @@ POST /api/companies/{companyId}/issues
Always set `parentId` to maintain the task hierarchy. Set `goalId` when applicable.
## Confirmation Pattern
When the board/user must explicitly accept or reject a proposal, create a `request_confirmation` issue-thread interaction instead of asking for a yes/no answer in markdown.
```
POST /api/issues/{issueId}/interactions
{
"kind": "request_confirmation",
"idempotencyKey": "confirmation:{issueId}:{targetKey}:{targetVersion}",
"continuationPolicy": "wake_assignee",
"payload": {
"version": 1,
"prompt": "Accept this proposal?",
"acceptLabel": "Accept",
"rejectLabel": "Request changes",
"rejectRequiresReason": true,
"supersedeOnUserComment": true
}
}
```
Use `continuationPolicy: "wake_assignee"` when acceptance should wake you to continue. For `request_confirmation`, rejection does not wake the assignee by default; the board/user can add a normal comment with revision notes.
## Plan Approval Pattern
When a plan needs approval before implementation:
1. Create or update the issue document with key `plan`.
2. Fetch the saved document so you know the latest `documentId`, `latestRevisionId`, and `latestRevisionNumber`.
3. Create a `request_confirmation` targeting that exact `plan` revision.
4. Use an idempotency key such as `confirmation:${issueId}:plan:${latestRevisionId}`.
5. Wait for acceptance before creating implementation subtasks.
6. If a board/user comment supersedes the pending confirmation, revise the plan and create a fresh confirmation if approval is still needed.
Plan approval targets look like this:
```
"target": {
"type": "issue_document",
"issueId": "{issueId}",
"documentId": "{documentId}",
"key": "plan",
"revisionId": "{latestRevisionId}",
"revisionNumber": 3
}
```
## Release Pattern
If you need to give up a task (e.g. you realize it should go to someone else):

@@ -47,7 +47,7 @@ You do **not** need to tell the CEO to engage specific agents. After you approve
- **Breaks goals into concrete tasks** with clear descriptions, priorities, and acceptance criteria
- **Assigns tasks to the right agent** based on role and capabilities (e.g., engineering tasks go to the CTO or engineers, marketing tasks go to the CMO)
- **Creates subtasks** when work needs to be decomposed further
- **Hires new agents** when the team lacks capacity for a goal, with hire approvals available when enabled in company settings
- **Hires new agents** when the team lacks capacity for a goal (subject to your approval)
- **Monitors progress** on each heartbeat, checking task status and unblocking reports
- **Escalates to you** when it encounters something it can't resolve — budget issues, blocked approvals, or strategic ambiguity
