Compare commits: master...pap-1497-d (3 commits)

| SHA1 |
|---|
| `66cbf5260f` |
| `42189e1bf9` |
| `9c231d925a` |
@@ -154,14 +154,6 @@ Each AGENTS.md body should include not just what the agent does, but how they fi

This turns a collection of agents into an organization that actually works together. Without workflow context, agents operate in isolation — they do their job but don't know what happens before or after them.

-Add a concise execution contract to every generated working agent:
-
-- Start actionable work in the same heartbeat and do not stop at a plan unless planning was requested.
-- Leave durable progress in comments, documents, or work products with the next action.
-- Use child issues for long or parallel delegated work instead of polling agents, sessions, or processes.
-- Mark blocked work with the unblock owner and action.
-- Respect budget, pause/cancel, approval gates, and company boundaries.

### Step 5: Confirm Output Location

Ask the user where to write the package. Common options:
@@ -105,13 +105,6 @@ Your responsibilities:

- Implement features and fix bugs
- Write tests and documentation
- Participate in code reviews

-Execution contract:
-
-- Start actionable implementation work in the same heartbeat; do not stop at a plan unless planning was requested.
-- Leave durable progress with a clear next action.
-- Use child issues for long or parallel delegated work instead of polling agents, sessions, or processes.
-- Mark blocked work with the unblock owner and action.
```

## teams/engineering/TEAM.md
@@ -548,7 +548,7 @@ Import from `@paperclipai/adapter-utils/server-utils`:

### Prompt Templates

- Support `promptTemplate` for every run
- Use `renderTemplate()` with the standard variable set
-- Default prompt should use `DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE` from `@paperclipai/adapter-utils/server-utils` so local adapters share Paperclip's execution contract: act in the same heartbeat, avoid planning-only exits unless requested, leave durable progress and a next action, use child issues instead of polling, mark blockers with owner/action, and respect governance boundaries.
+- Default prompt: `"You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work."`
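Assuming `renderTemplate()` performs simple mustache-style `{{...}}` substitution (the real helper in `@paperclipai/adapter-utils` may differ), the default prompt render can be sketched in shell; the agent id and name here are invented:

```shell
# Hypothetical sketch of the {{...}} substitution renderTemplate() performs;
# the actual implementation lives in @paperclipai/adapter-utils and may differ.
template='You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work.'
agent_id=a-42 agent_name=Scout   # illustrative values

printf '%s\n' "$template" \
  | sed -e "s/{{agent\.id}}/$agent_id/g" -e "s/{{agent\.name}}/$agent_name/g"
# -> You are agent a-42 (Scout). Continue your Paperclip work.
```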

### Error Handling

- Differentiate timeout vs process error vs parse failure
@@ -177,12 +177,8 @@ real name or email). To find GitHub usernames:

**Never expose contributor email addresses.** Use `@username` only.

-Exclude bot accounts (e.g. `lockfile-bot`, `dependabot`) from the list.
-Exclude Paperclip founders from the list (e.g. `cryppadotta`, `forgottendev`, `devinfoley`, `sockmonster`, `scotttong`).
-
-List contributors in alphabetical order by GitHub username (case-insensitive).
-
-If there are no contributors left after exclusions, then just skip this section and don't mention it.
+Exclude bot accounts (e.g. `lockfile-bot`, `dependabot`) from the list. List contributors
+in alphabetical order by GitHub username (case-insensitive).
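The exclusion and ordering rules above can be sketched as a pipeline; usernames other than the listed exclusions are invented for illustration:

```shell
# Filter bots, filter the listed founder accounts, then sort
# case-insensitively (sort -f folds case). Sample usernames are made up.
printf '%s\n' zeta-dev Alpha dependabot lockfile-bot cryppadotta \
  | grep -vxE 'lockfile-bot|dependabot' \
  | grep -vxE 'cryppadotta|forgottendev|devinfoley|sockmonster|scotttong' \
  | sort -f
# -> Alpha
#    zeta-dev
```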

## Step 6 — Review Before Release
@@ -2,6 +2,3 @@ DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip
PORT=3100
SERVE_UI=false
BETTER_AUTH_SECRET=paperclip-dev-secret
-
-# Discord webhook for daily merge digest (scripts/discord-daily-digest.sh)
-# DISCORD_WEBHOOK_URL=https://discord.com/api/webhooks/...
.github/PULL_REQUEST_TEMPLATE.md (3 changed lines, vendored)
@@ -38,8 +38,6 @@

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and discuss it in `#dev` before opening the PR. Feature PRs that overlap with planned core work may need to be redirected — check the roadmap first. See `CONTRIBUTING.md`.

## Model Used

<!--
@@ -59,7 +57,6 @@

- [ ] I have included a thinking path that traces from project context to this change
- [ ] I have specified the model used (with version and capability details)
- [ ] I have checked ROADMAP.md and confirmed this PR does not duplicate planned core work
- [ ] I have run tests locally and they pass
- [ ] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after screenshots
.github/workflows/docker.yml (2 changed lines, vendored)
@@ -14,7 +14,7 @@ permissions:
jobs:
  build-and-push:
    runs-on: ubuntu-latest
-    timeout-minutes: 60
+    timeout-minutes: 30
    concurrency:
      group: docker-${{ github.ref }}
      cancel-in-progress: true
.github/workflows/pr.yml (137 changed lines, vendored)
@@ -23,9 +23,7 @@ jobs:
      - name: Block manual lockfile edits
        if: github.head_ref != 'chore/refresh-lockfile'
        run: |
-          # Diff the PR branch against its merge base so recent base-branch commits
-          # do not masquerade as changes made by the PR itself.
-          changed="$(git diff --name-only "${{ github.event.pull_request.base.sha }}...${{ github.event.pull_request.head.sha }}")"
+          changed="$(git diff --name-only "${{ github.event.pull_request.base.sha }}" "${{ github.event.pull_request.head.sha }}")"
          if printf '%s\n' "$changed" | grep -qx 'pnpm-lock.yaml'; then
            echo "Do not commit pnpm-lock.yaml in pull requests. CI owns lockfile updates."
            exit 1
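The dot syntax being changed in this step matters: `git diff A...B` diffs `B` against the merge base of `A` and `B`, while `git diff A B` compares the two tips directly, so commits that landed only on the base branch also show up as "changes". A throwaway-repo sketch (branch and file names are illustrative):

```shell
#!/usr/bin/env sh
# Illustrative throwaway repo showing merge-base diff (A...B) vs direct
# tip-to-tip diff (A B). All names here are invented for the demo.
set -eu
repo="$(mktemp -d)"
cd "$repo"
git init -q -b main            # -b requires git >= 2.28
git config user.email ci@example.com
git config user.name ci
echo base > base.txt
git add base.txt && git commit -qm base

git checkout -qb feature       # branch adds one file
echo f > feature.txt
git add feature.txt && git commit -qm feature

git checkout -q main           # base branch moves on independently
echo m > main-only.txt
git add main-only.txt && git commit -qm main-only

git diff --name-only main...feature   # merge-base diff: feature.txt only
git diff --name-only main feature     # direct diff: feature.txt and main-only.txt
```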
@@ -43,20 +41,48 @@ jobs:
          node-version: 24

      - name: Validate Dockerfile deps stage
        run: node ./scripts/check-docker-deps-stage.mjs

      - name: Validate release package manifest
        run: node ./scripts/release-package-map.mjs check

      - name: Verify release package bootstrap for changed manifests
        run: |
          mapfile -t changed_paths < <(git diff --name-only "${{ github.event.pull_request.base.sha }}...${{ github.event.pull_request.head.sha }}")
          PAPERCLIP_RELEASE_BOOTSTRAP_BASE_SHA="${{ github.event.pull_request.base.sha }}" \
            node ./scripts/check-release-package-bootstrap.mjs "${changed_paths[@]}"
          missing=0

          # Extract only the deps stage from the Dockerfile
          deps_stage="$(awk '/^FROM .* AS deps$/{found=1; next} found && /^FROM /{exit} found{print}' Dockerfile)"

          if [ -z "$deps_stage" ]; then
            echo "::error::Could not extract deps stage from Dockerfile (expected 'FROM ... AS deps')"
            exit 1
          fi

          # Derive workspace search roots from pnpm-workspace.yaml (exclude dev-only packages)
          search_roots="$(grep '^ *- ' pnpm-workspace.yaml | sed 's/^ *- //' | sed 's/\*$//' | grep -v 'examples' | grep -v 'create-paperclip-plugin' | tr '\n' ' ')"

          if [ -z "$search_roots" ]; then
            echo "::error::Could not derive workspace roots from pnpm-workspace.yaml"
            exit 1
          fi

          # Check all workspace package.json files are copied in the deps stage
          for pkg in $(find $search_roots -maxdepth 2 -name package.json -not -path '*/examples/*' -not -path '*/create-paperclip-plugin/*' -not -path '*/node_modules/*' 2>/dev/null | sort -u); do
            dir="$(dirname "$pkg")"
            if ! echo "$deps_stage" | grep -q "^COPY ${dir}/package.json"; then
              echo "::error::Dockerfile deps stage missing: COPY ${pkg} ${dir}/"
              missing=1
            fi
          done

          # Check patches directory is copied if it exists
          if [ -d patches ] && ! echo "$deps_stage" | grep -q '^COPY patches/'; then
            echo "::error::Dockerfile deps stage missing: COPY patches/ patches/"
            missing=1
          fi

          if [ "$missing" -eq 1 ]; then
            echo "Dockerfile deps stage is out of sync. Update it to include the missing files."
            exit 1
          fi
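The `awk` program in the step above can be exercised standalone: it starts printing after the `FROM ... AS deps` line and stops at the next `FROM`, which slices exactly one build stage out of a multi-stage Dockerfile. A sketch with a made-up sample Dockerfile:

```shell
# Standalone check of the awk used above to slice the `deps` stage out of
# a multi-stage Dockerfile. The sample Dockerfile content is invented.
cat > /tmp/Dockerfile.sample <<'EOF'
FROM node:lts AS base
RUN corepack enable
FROM base AS deps
COPY package.json ./
COPY patches/ patches/
FROM base AS build
RUN pnpm build
EOF

# found=1 after the deps FROM line; exit at the next FROM; print in between.
deps_stage="$(awk '/^FROM .* AS deps$/{found=1; next} found && /^FROM /{exit} found{print}' /tmp/Dockerfile.sample)"
printf '%s\n' "$deps_stage"
# -> COPY package.json ./
#    COPY patches/ patches/
```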
      - name: Validate dependency resolution when manifests change
        run: |
-          changed="$(git diff --name-only "${{ github.event.pull_request.base.sha }}...${{ github.event.pull_request.head.sha }}")"
+          changed="$(git diff --name-only "${{ github.event.pull_request.base.sha }}" "${{ github.event.pull_request.head.sha }}")"
          manifest_pattern='(^|/)package\.json$|^pnpm-workspace\.yaml$|^\.npmrc$|^pnpmfile\.(cjs|js|mjs)$'
          if printf '%s\n' "$changed" | grep -Eq "$manifest_pattern"; then
            pnpm install --lockfile-only --ignore-scripts --no-frozen-lockfile
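The `manifest_pattern` ERE matches any `package.json` (at the repo root or in a subdirectory) plus the workspace-level config files. A quick self-check with illustrative paths:

```shell
# Self-check of the manifest_pattern ERE used above; sample paths invented.
manifest_pattern='(^|/)package\.json$|^pnpm-workspace\.yaml$|^\.npmrc$|^pnpmfile\.(cjs|js|mjs)$'

# Two of the three sample paths should match (grep -Ec counts matching lines).
printf '%s\n' packages/db/package.json pnpm-workspace.yaml src/index.ts \
  | grep -Ec "$manifest_pattern"
# -> 2
```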
@@ -85,88 +111,16 @@ jobs:
      - name: Install dependencies
        run: pnpm install --frozen-lockfile

-      - name: Typecheck workspaces whose build scripts skip TypeScript
-        run: pnpm run typecheck:build-gaps
+      - name: Typecheck
+        run: pnpm -r typecheck

-      - name: Run general test suites
-        run: pnpm test:run:general
-
-      - name: Verify release registry test coverage
-        run: pnpm run test:release-registry
+      - name: Run tests
+        run: pnpm test:run

      - name: Build
        run: pnpm build

  verify_serialized_server:
    name: Verify serialized server suites (${{ matrix.shard_label }})
    needs: [policy]
    runs-on: ubuntu-latest
    timeout-minutes: 20
    strategy:
      fail-fast: false
      matrix:
        include:
          - shard_index: 0
            shard_count: 4
            shard_label: 1/4
          - shard_index: 1
            shard_count: 4
            shard_label: 2/4
          - shard_index: 2
            shard_count: 4
            shard_label: 3/4
          - shard_index: 3
            shard_count: 4
            shard_label: 4/4

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup pnpm
        uses: pnpm/action-setup@v4
        with:
          version: 9.15.4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 24
          cache: pnpm

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Run serialized server test shard
        run: pnpm test:run:serialized -- --shard-index ${{ matrix.shard_index }} --shard-count ${{ matrix.shard_count }}
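The matrix above fans the serialized suites out into four disjoint shards identified by an index/count pair. One common way such a pair carves a file list into shards is round-robin by position; this is an illustrative sketch only, since the real pnpm test runner may partition differently:

```shell
# Illustrative round-robin sharding by (index, count); file names invented.
shard_index=1 shard_count=4
files="a.test.ts b.test.ts c.test.ts d.test.ts e.test.ts"

i=0
for f in $files; do
  # A file belongs to shard k when its position mod shard_count equals k.
  if [ $((i % shard_count)) -eq "$shard_index" ]; then
    echo "$f"
  fi
  i=$((i + 1))
done
# -> b.test.ts
```

Because the shards are disjoint and cover every file, running all four in parallel (as the matrix does) executes each suite exactly once.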
|
||||
canary_dry_run:
|
||||
name: Canary Dry Run
|
||||
needs: [policy]
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 20
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Setup pnpm
|
||||
uses: pnpm/action-setup@v4
|
||||
with:
|
||||
version: 9.15.4
|
||||
|
||||
- name: Setup Node.js
|
||||
uses: actions/setup-node@v4
|
||||
with:
|
||||
node-version: 24
|
||||
cache: pnpm
|
||||
|
||||
- name: Install dependencies
|
||||
run: pnpm install --frozen-lockfile
|
||||
|
||||
# `release.sh` always executes its Step 2/7 workspace build, even when
|
||||
# `--skip-verify` bypasses the initial verification gate.
|
||||
- name: Release canary dry run via release.sh internal build
|
||||
- name: Release canary dry run
|
||||
run: |
|
||||
git checkout -B master HEAD
|
||||
git checkout -- pnpm-lock.yaml
|
||||
@@ -195,6 +149,9 @@ jobs:
      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: Build
        run: pnpm build

+      - name: Install Playwright
+        run: npx playwright install --with-deps chromium
.github/workflows/release.yml (12 changed lines, vendored)
@@ -50,9 +50,6 @@ jobs:
          node-version: 24
          cache: pnpm

-      - name: Validate release package manifest
-        run: node ./scripts/release-package-map.mjs check
-
      - name: Install dependencies
        run: pnpm install --no-frozen-lockfile
@@ -92,9 +89,6 @@ jobs:
          node-version: 24
          cache: pnpm

-      - name: Validate release package manifest
-        run: node ./scripts/release-package-map.mjs check
-
      - name: Install dependencies
        run: pnpm install --no-frozen-lockfile
@@ -145,9 +139,6 @@ jobs:
          node-version: 24
          cache: pnpm

-      - name: Validate release package manifest
-        run: node ./scripts/release-package-map.mjs check
-
      - name: Install dependencies
        run: pnpm install --no-frozen-lockfile
@@ -186,9 +177,6 @@ jobs:
          node-version: 24
          cache: pnpm

-      - name: Validate release package manifest
-        run: node ./scripts/release-package-map.mjs check
-
      - name: Install dependencies
        run: pnpm install --no-frozen-lockfile
.gitignore (5 changed lines, vendored)
@@ -1,9 +1,5 @@
node_modules
-node_modules/
-**/node_modules
-**/node_modules/
dist/
ui/storybook-static/
.env
*.tsbuildinfo
drizzle/meta/
@@ -36,7 +32,6 @@ server/src/**/*.d.ts
server/src/**/*.d.ts.map
tmp/
feedback-export-*
diagnostics/

# Editor / tool temp files
*.tmp
@@ -123,9 +123,7 @@ pnpm test:release-smoke

Run the browser suites only when your change touches them or when you are explicitly verifying CI/release flows.

-For normal issue work, run the smallest relevant verification first. Do not default to repo-wide typecheck/build/test on every heartbeat when a narrower check is enough to prove the change.
-
-Run this full check before claiming repo work done in a PR-ready hand-off, or when the change scope is broad enough that targeted checks are not sufficient:
+Run this full check before claiming done:

```sh
pnpm -r typecheck
@@ -51,21 +51,6 @@ All tests must pass before a PR can be merged. Run them locally first and verify

We use [Greptile](https://greptile.com) for automated code review. Your PR must achieve a **5/5 Greptile score** with **all Greptile comments addressed** before it can be merged. If Greptile leaves comments, fix or respond to each one and request a re-review.

-## Feature Contributions
-
-We actively manage the core Paperclip feature roadmap.
-
-Uncoordinated feature PRs against the core product may be closed, even when the implementation is thoughtful and high quality. That is about roadmap ownership, product coherence, and long-term maintenance commitment, not a judgment about the effort.
-
-If you want to contribute a feature:
-
-- Check [ROADMAP.md](ROADMAP.md) first
-- Start the discussion in Discord -> `#dev` before writing code
-- If the idea fits as an extension, prefer building it with the [plugin system](doc/plugins/PLUGIN_SPEC.md)
-- If you want to show a possible direction, reference implementations are welcome as feedback, but they generally will not be merged directly into core
-
-Bugs, docs improvements, and small targeted improvements are still the easiest path to getting merged, and we really do appreciate them.

## General Rules (both paths)

- Write clear commit messages
Dockerfile (17 changed lines)
@@ -1,9 +1,16 @@
# syntax=docker/dockerfile:1.20
FROM node:lts-trixie-slim AS base
ARG USER_UID=1000
ARG USER_GID=1000
RUN apt-get update \
-    && apt-get install -y --no-install-recommends ca-certificates gosu curl gh git wget ripgrep python3 \
+    && apt-get install -y --no-install-recommends ca-certificates gosu curl git wget ripgrep python3 \
+    && mkdir -p -m 755 /etc/apt/keyrings \
+    && wget -nv -O/etc/apt/keyrings/githubcli-archive-keyring.gpg https://cli.github.com/packages/githubcli-archive-keyring.gpg \
+    && echo "20e0125d6f6e077a9ad46f03371bc26d90b04939fb95170f5a1905099cc6bcc0 /etc/apt/keyrings/githubcli-archive-keyring.gpg" | sha256sum -c - \
+    && chmod go+r /etc/apt/keyrings/githubcli-archive-keyring.gpg \
+    && mkdir -p -m 755 /etc/apt/sources.list.d \
+    && echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" > /etc/apt/sources.list.d/github-cli.list \
+    && apt-get update \
+    && apt-get install -y --no-install-recommends gh \
    && rm -rf /var/lib/apt/lists/* \
    && corepack enable
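The `sha256sum -c -` line above pins the downloaded keyring to a known digest so a tampered or truncated download fails the build. A minimal reproduction of the pattern with a throwaway file:

```shell
# Reproduce the `echo "<digest>  <file>" | sha256sum -c -` pinning pattern
# with a throwaway file (the file and its digest here are just for the demo).
printf 'hello\n' > /tmp/keyring.demo

# Compute the digest once, then verify the file against it, as the
# Dockerfile does with the hard-coded keyring digest.
sum="$(sha256sum /tmp/keyring.demo | awk '{print $1}')"
echo "$sum  /tmp/keyring.demo" | sha256sum -c -
# -> /tmp/keyring.demo: OK
```

Note the two spaces between the digest and the path: that is the checksum-list format `sha256sum -c` expects.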
@@ -22,7 +29,6 @@ COPY packages/shared/package.json packages/shared/
COPY packages/db/package.json packages/db/
COPY packages/adapter-utils/package.json packages/adapter-utils/
COPY packages/mcp-server/package.json packages/mcp-server/
COPY packages/adapters/acpx-local/package.json packages/adapters/acpx-local/
COPY packages/adapters/claude-local/package.json packages/adapters/claude-local/
COPY packages/adapters/codex-local/package.json packages/adapters/codex-local/
COPY packages/adapters/cursor-local/package.json packages/adapters/cursor-local/
@@ -31,8 +37,6 @@ COPY packages/adapters/openclaw-gateway/package.json packages/adapters/openclaw-
COPY packages/adapters/opencode-local/package.json packages/adapters/opencode-local/
COPY packages/adapters/pi-local/package.json packages/adapters/pi-local/
COPY packages/plugins/sdk/package.json packages/plugins/sdk/
COPY --parents packages/plugins/sandbox-providers/./*/package.json packages/plugins/sandbox-providers/
COPY packages/plugins/paperclip-plugin-fake-sandbox/package.json packages/plugins/paperclip-plugin-fake-sandbox/
COPY patches/ patches/

RUN pnpm install --frozen-lockfile
@@ -52,9 +56,6 @@ ARG USER_GID=1000
WORKDIR /app
COPY --chown=node:node --from=build /app /app
RUN npm install --global --omit=dev @anthropic-ai/claude-code@latest @openai/codex@latest opencode-ai \
    && apt-get update \
    && apt-get install -y --no-install-recommends openssh-client jq \
    && rm -rf /var/lib/apt/lists/* \
    && mkdir -p /paperclip \
    && chown node:node /paperclip
README.md (119 changed lines)
@@ -6,8 +6,7 @@
  <a href="#quickstart"><strong>Quickstart</strong></a> ·
  <a href="https://paperclip.ing/docs"><strong>Docs</strong></a> ·
  <a href="https://github.com/paperclipai/paperclip"><strong>GitHub</strong></a> ·
-  <a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a> ·
-  <a href="https://x.com/papercliping"><strong>Twitter</strong></a>
+  <a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a>
</p>

<p align="center">
@@ -157,115 +156,6 @@ Paperclip handles the hard orchestration details correctly.

<br/>

## What's Under the Hood

Paperclip is a full control plane, not a wrapper. Before you build any of this yourself, know that it already exists:

```
┌──────────────────────────────────────────────────────────────┐
│                      PAPERCLIP SERVER                        │
│                                                              │
│  ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐     │
│  │Identity & │ │  Work &   │ │ Heartbeat │ │Governance │     │
│  │  Access   │ │  Tasks    │ │ Execution │ │& Approvals│     │
│  └───────────┘ └───────────┘ └───────────┘ └───────────┘     │
│                                                              │
│  ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐     │
│  │ Org Chart │ │Workspaces │ │  Plugins  │ │  Budget   │     │
│  │ & Agents  │ │ & Runtime │ │           │ │  & Costs  │     │
│  └───────────┘ └───────────┘ └───────────┘ └───────────┘     │
│                                                              │
│  ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐     │
│  │ Routines  │ │ Secrets & │ │ Activity  │ │  Company  │     │
│  │& Schedules│ │  Storage  │ │ & Events  │ │Portability│     │
│  └───────────┘ └───────────┘ └───────────┘ └───────────┘     │
└──────────────────────────────────────────────────────────────┘
        ▲             ▲             ▲             ▲
  ┌─────┴─────┐ ┌─────┴─────┐ ┌─────┴─────┐ ┌─────┴─────┐
  │  Claude   │ │   Codex   │ │    CLI    │ │ HTTP/web  │
  │   Code    │ │           │ │  agents   │ │   bots    │
  └───────────┘ └───────────┘ └───────────┘ └───────────┘
```

### The Systems

<table>
<tr>
<td width="50%">

**Identity & Access** — Two deployment modes (trusted local or authenticated), board users, agent API keys, short-lived run JWTs, company memberships, invite flows, and OpenClaw onboarding. Every mutating request is traced to an actor.

</td>
<td width="50%">

**Org Chart & Agents** — Agents have roles, titles, reporting lines, permissions, and budgets. Adapter examples match the diagram: Claude Code, Codex, CLI agents such as Cursor/Gemini/bash, HTTP/webhook bots such as OpenClaw, and external adapter plugins. If it can receive a heartbeat, it's hired.

</td>
</tr>
<tr>
<td>

**Work & Task System** — Issues carry company/project/goal/parent links, atomic checkout with execution locks, first-class blocker dependencies, comments, documents, attachments, work products, labels, and inbox state. No double-work, no lost context.

</td>
<td>

**Heartbeat Execution** — DB-backed wakeup queue with coalescing, budget checks, workspace resolution, secret injection, skill loading, and adapter invocation. Runs produce structured logs, cost events, session state, and audit trails. Recovery handles orphaned runs automatically.

</td>
</tr>
<tr>
<td>

**Workspaces & Runtime** — Project workspaces, isolated execution workspaces (git worktrees, operator branches), and runtime services (dev servers, preview URLs). Agents work in the right directory with the right context every time.

</td>
<td>

**Governance & Approvals** — Board approval workflows, execution policies with review/approval stages, decision tracking, budget hard-stops, agent pause/resume/terminate, and full audit logging. You're the board — nothing ships without your sign-off.

</td>
</tr>
<tr>
<td>

**Budget & Cost Control** — Token and cost tracking by company, agent, project, goal, issue, provider, and model. Scoped budget policies with warning thresholds and hard stops. Overspend pauses agents and cancels queued work automatically.

</td>
<td>

**Routines & Schedules** — Recurring tasks with cron, webhook, and API triggers. Concurrency and catch-up policies. Each routine execution creates a tracked issue and wakes the assigned agent — no manual kick-offs needed.

</td>
</tr>
<tr>
<td>

**Plugins** — Instance-wide plugin system with out-of-process workers, capability-gated host services, job scheduling, tool exposure, and UI contributions. Extend Paperclip without forking it.

</td>
<td>

**Secrets & Storage** — Instance and company secrets, encrypted local storage, provider-backed object storage, attachments, and work products. Sensitive values stay out of prompts unless a scoped run explicitly needs them.

</td>
</tr>
<tr>
<td>

**Activity & Events** — Mutating actions, heartbeat state changes, cost events, approvals, comments, and work products are recorded as durable activity so operators can audit what happened and why.

</td>
<td>

**Company Portability** — Export and import entire organizations — agents, skills, projects, routines, and issues — with secret scrubbing and collision handling. One deployment, many companies, complete data isolation.

</td>
</tr>
</table>

<br/>

## What Paperclip is not

| | |
@@ -366,10 +256,10 @@ See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.
- ✅ Scheduled Routines
- ✅ Better Budgeting
- ✅ Agent Reviews and Approvals
-- ✅ Multiple Human Users
+- ⚪ Multiple Human Users
- ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
- ⚪ Artifacts & Work Products
-- ⚪ Memory / Knowledge
+- ⚪ Memory & Knowledge
- ⚪ Enforced Outcomes
- ⚪ MAXIMIZER MODE
- ⚪ Deep Planning
@@ -380,8 +270,6 @@ See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.
- ⚪ Cloud deployments
- ⚪ Desktop App

-This is the short roadmap preview. See the full roadmap in [ROADMAP.md](ROADMAP.md).

<br/>

## Community & Plugins
@@ -410,7 +298,6 @@ We welcome contributions. See the [contributing guide](CONTRIBUTING.md) for deta
## Community

- [Discord](https://discord.gg/m4HZY7xNG3) — Join the community
-- [Twitter / X](https://x.com/papercliping) — Follow updates and announcements
- [GitHub Issues](https://github.com/paperclipai/paperclip/issues) — bugs and feature requests
- [GitHub Discussions](https://github.com/paperclipai/paperclip/discussions) — ideas and RFCs
ROADMAP.md (97 changed lines, file deleted)
@@ -1,97 +0,0 @@
# Roadmap

This document expands the roadmap preview in `README.md`.

Paperclip is still moving quickly. The list below is directional, not promised, and priorities may shift as we learn from users and from operating real AI companies with the product.

We value community involvement and want to make sure contributor energy goes toward areas where it can land.

We may accept contributions in the areas below, but if you want to work on roadmap-level core features, please coordinate with us first in Discord (`#dev`) before writing code. Bugs, docs, polish, and tightly scoped improvements are still the easiest contributions to merge.

If you want to extend Paperclip today, the best path is often the [plugin system](doc/plugins/PLUGIN_SPEC.md). Community reference implementations are also useful feedback even when they are not merged directly into core.

## Milestones

### ✅ Plugin system

Paperclip should keep a thin core and rich edges. Plugins are the path for optional capabilities like knowledge bases, custom tracing, queues, doc editors, and other product-specific surfaces that do not need to live in the control plane itself.

### ✅ Get OpenClaw / claw-style agent employees

Paperclip should be able to hire and manage real claw-style agent workers, not just a narrow built-in runtime. This is part of the larger "bring your own agent" story and keeps the control plane useful across different agent ecosystems.

### ✅ companies.sh - import and export entire organizations

Reusable companies matter. Import/export is the foundation for moving org structures, agent definitions, and reusable company setups between environments and eventually for broader company-template distribution.

### ✅ Easy AGENTS.md configurations

Agent setup should feel repo-native and legible. Simple `AGENTS.md`-style configuration lowers the barrier to getting an agent team running and makes it easier for contributors to understand how a company is wired together.

### ✅ Skills Manager

Agents need a practical way to discover, install, and use skills without every setup becoming bespoke. The skills layer is part of making Paperclip companies more reusable and easier to operate.

### ✅ Scheduled Routines

Recurring work should be native. Routine tasks like reports, reviews, and other periodic work need first-class scheduling so the company keeps operating even when no human is manually kicking work off.

### ✅ Better Budgeting

Budgets are a core control-plane feature, not an afterthought. Better budgeting means clearer spend visibility, safer hard stops, and better operator control over how autonomy turns into real cost.

### ✅ Agent Reviews and Approvals

Paperclip should support explicit review and approval stages as first-class workflow steps, not just ad hoc comments. That means reviewer routing, approval gates, change requests, and durable audit trails that fit the same task model as the rest of the control plane.

### ✅ Multiple Human Users

Paperclip needs a clearer path from solo operator to real human teams. That means shared board access, safer collaboration, and a better model for several humans supervising the same autonomous company.

### ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)

We want agents to run in more remote and sandboxed environments while preserving the same Paperclip control-plane model. This makes the system safer, more flexible, and more useful outside a single trusted local machine.

### ⚪ Artifacts & Work Products

Paperclip should make outputs first-class. That means generated artifacts, previews, deployable outputs, and the handoff from "agent did work" to "here is the result" should become more visible and easier to operate.

### ⚪ Memory / Knowledge

We want a stronger memory and knowledge surface for companies, agents, and projects. That includes durable memory, better recall of prior decisions and context, and a clearer path for knowledge-style capabilities without turning Paperclip into a generic chat app.

### ⚪ Enforced Outcomes

Paperclip should get stricter about what counts as finished work. Tasks, approvals, and execution flows should resolve to clear outcomes like merged code, published artifacts, shipped docs, or explicit decisions instead of stopping at vague status updates.

### ⚪ MAXIMIZER MODE

This is the direction for higher-autonomy execution: more aggressive delegation, deeper follow-through, and stronger operating loops with clear budgets, visibility, and governance. The point is not hidden autonomy; the point is more output per human supervisor.

### ⚪ Deep Planning

Some work needs more than a task description before execution starts. Deeper planning means stronger issue documents, revisionable plans, and clearer review loops for strategy-heavy work before agents begin execution.

### ⚪ Work Queues

Paperclip should support queue-style work streams for repeatable inputs like support, triage, review, and backlog intake. That would make it easier to route work continuously without turning every system into a one-off workflow.

### ⚪ Self-Organization

As companies grow, agents should be able to propose useful structural changes such as role adjustments, delegation changes, and new recurring routines. The goal is adaptive organizations that still stay within governance and approval boundaries.

### ⚪ Automatic Organizational Learning

Paperclip should get better at turning completed work into reusable organizational knowledge. That includes capturing playbooks, recurring fixes, and decision patterns so future work starts from what the company has already learned.

### ⚪ CEO Chat

We want a lighter-weight way to talk to leadership agents, but those conversations should still resolve to real work objects like plans, issues, approvals, or decisions. This should improve interaction without changing the core task-and-comments model.

### ⚪ Cloud deployments

Local-first remains important, but Paperclip also needs a cleaner shared deployment story. Teams should be able to run the same product in hosted or semi-hosted environments without changing the mental model.

### ⚪ Desktop App

A desktop app can make Paperclip feel more accessible and persistent for day-to-day operators. The goal is easier access, better local ergonomics, and a smoother default experience for users who want the control plane always close at hand.
@@ -6,14 +6,13 @@
  <a href="#quickstart"><strong>Quickstart</strong></a> ·
  <a href="https://paperclip.ing/docs"><strong>Docs</strong></a> ·
  <a href="https://github.com/paperclipai/paperclip"><strong>GitHub</strong></a> ·
  <a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a> ·
  <a href="https://x.com/papercliping"><strong>Twitter</strong></a>
  <a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a>
</p>

<p align="center">
  <a href="https://github.com/paperclipai/paperclip/blob/master/LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue" alt="MIT License" /></a>
  <a href="https://github.com/paperclipai/paperclip/stargazers"><img src="https://img.shields.io/github/stars/paperclipai/paperclip?style=flat" alt="Stars" /></a>
  <a href="https://discord.gg/m4HZY7xNG3"><img src="https://img.shields.io/discord/000000000?label=discord" alt="Discord" /></a>
  <a href="https://discord.gg/m4HZY7xNG3"><img src="https://img.shields.io/badge/discord-join%20chat-5865F2?logo=discord&logoColor=white" alt="Discord" /></a>
</p>

<br/>
@@ -259,7 +258,7 @@ See [doc/DEVELOPING.md](https://github.com/paperclipai/paperclip/blob/master/doc
- ⚪ Artifacts & Deployments
- ⚪ CEO Chat
- ⚪ MAXIMIZER MODE
- ✅ Multiple Human Users
- ⚪ Multiple Human Users
- ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
- ⚪ Cloud deployments
- ⚪ Desktop App

@@ -279,7 +278,6 @@ We welcome contributions. See the [contributing guide](https://github.com/paperc
## Community

- [Discord](https://discord.gg/m4HZY7xNG3) — Join the community
- [Twitter / X](https://x.com/papercliping) — Follow updates and announcements
- [GitHub Issues](https://github.com/paperclipai/paperclip/issues) — bugs and feature requests
- [GitHub Discussions](https://github.com/paperclipai/paperclip/discussions) — ideas and RFC
@@ -37,7 +37,6 @@
  },
  "dependencies": {
    "@clack/prompts": "^0.10.0",
    "@paperclipai/adapter-acpx-local": "workspace:*",
    "@paperclipai/adapter-claude-local": "workspace:*",
    "@paperclipai/adapter-codex-local": "workspace:*",
    "@paperclipai/adapter-cursor-local": "workspace:*",
@@ -14,7 +14,6 @@ function makeCompany(overrides: Partial<Company>): Company {
    issueCounter: 1,
    budgetMonthlyCents: 0,
    spentMonthlyCents: 0,
    attachmentMaxBytes: 10 * 1024 * 1024,
    requireBoardApprovalForNewAgents: false,
    feedbackDataSharingEnabled: false,
    feedbackDataSharingConsentAt: null,
@@ -1,5 +1,5 @@
import { execFile, spawn } from "node:child_process";
import { existsSync, mkdirSync, mkdtempSync, readFileSync, readdirSync, rmSync, writeFileSync } from "node:fs";
import { mkdirSync, mkdtempSync, readFileSync, readdirSync, rmSync, writeFileSync } from "node:fs";
import net from "node:net";
import os from "node:os";
import path from "node:path";
@@ -104,50 +104,20 @@ function writeTestConfig(configPath: string, tempRoot: string, port: number, con
  writeFileSync(configPath, `${JSON.stringify(config, null, 2)}\n`, "utf8");
}

interface TestPaperclipEnv {
  configPath: string;
  paperclipHome: string;
  instanceId: string;
  shellHome?: string;
}

function createBasePaperclipEnv(options: TestPaperclipEnv) {
function createServerEnv(configPath: string, port: number, connectionString: string) {
  const env = { ...process.env };
  for (const key of Object.keys(env)) {
    if (key.startsWith("PAPERCLIP_")) {
      delete env[key];
    }
  }

  env.PAPERCLIP_CONFIG = options.configPath;
  env.PAPERCLIP_HOME = options.paperclipHome;
  env.PAPERCLIP_INSTANCE_ID = options.instanceId;
  env.PAPERCLIP_CONTEXT = path.join(options.paperclipHome, "context.json");
  env.PAPERCLIP_AUTH_STORE = path.join(options.paperclipHome, "auth.json");
  if (options.shellHome) {
    env.HOME = options.shellHome;
  }

  return env;
}

function createServerEnv(
  configPath: string,
  port: number,
  connectionString: string,
  options: Omit<TestPaperclipEnv, "configPath">,
) {
  const env = createBasePaperclipEnv({
    configPath,
    ...options,
  });

  delete env.DATABASE_URL;
  delete env.PORT;
  delete env.HOST;
  delete env.SERVE_UI;
  delete env.HEARTBEAT_SCHEDULER_ENABLED;

  env.PAPERCLIP_CONFIG = configPath;
  env.DATABASE_URL = connectionString;
  env.HOST = "127.0.0.1";
  env.PORT = String(port);
@@ -160,8 +130,13 @@ function createServerEnv(
  return env;
}

function createCliEnv(options: TestPaperclipEnv) {
  const env = createBasePaperclipEnv(options);
function createCliEnv() {
  const env = { ...process.env };
  for (const key of Object.keys(env)) {
    if (key.startsWith("PAPERCLIP_")) {
      delete env[key];
    }
  }
  delete env.DATABASE_URL;
  delete env.PORT;
  delete env.HOST;
@@ -208,25 +183,14 @@ async function api<T>(baseUrl: string, pathname: string, init?: RequestInit): Pr
  return text ? JSON.parse(text) as T : (null as T);
}

async function runCliJson<T>(
  args: string[],
  opts: TestPaperclipEnv & { apiBase?: string; includeConfigArg?: boolean },
) {
async function runCliJson<T>(args: string[], opts: { apiBase: string; configPath: string }) {
  const repoRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "../../..");
  const cliArgs = ["--silent", "paperclipai", ...args];
  if (opts.apiBase) {
    cliArgs.push("--api-base", opts.apiBase);
  }
  if (opts.includeConfigArg !== false) {
    cliArgs.push("--config", opts.configPath);
  }
  cliArgs.push("--json");
  const result = await execFileAsync(
    "pnpm",
    cliArgs,
    ["--silent", "paperclipai", ...args, "--api-base", opts.apiBase, "--config", opts.configPath, "--json"],
    {
      cwd: repoRoot,
      env: createCliEnv(opts),
      env: createCliEnv(),
      maxBuffer: 10 * 1024 * 1024,
    },
  );
@@ -271,9 +235,6 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
  let configPath = "";
  let exportDir = "";
  let apiBase = "";
  let paperclipHome = "";
  let cliShellHome = "";
  let paperclipInstanceId = "";
  let serverProcess: ServerProcess | null = null;
  let tempDb: Awaited<ReturnType<typeof startEmbeddedPostgresTestDatabase>> | null = null;

@@ -281,11 +242,6 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
    tempRoot = mkdtempSync(path.join(os.tmpdir(), "paperclip-company-cli-e2e-"));
    configPath = path.join(tempRoot, "config", "config.json");
    exportDir = path.join(tempRoot, "exported-company");
    paperclipHome = path.join(tempRoot, "paperclip-home");
    cliShellHome = path.join(tempRoot, "shell-home");
    paperclipInstanceId = "company-cli-e2e";
    mkdirSync(paperclipHome, { recursive: true });
    mkdirSync(cliShellHome, { recursive: true });

    tempDb = await startEmbeddedPostgresTestDatabase("paperclip-company-cli-db-");

@@ -300,11 +256,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
      ["paperclipai", "run", "--config", configPath],
      {
        cwd: repoRoot,
        env: createServerEnv(configPath, port, tempDb.connectionString, {
          paperclipHome,
          instanceId: paperclipInstanceId,
          shellHome: cliShellHome,
        }),
        env: createServerEnv(configPath, port, tempDb.connectionString),
        stdio: ["ignore", "pipe", "pipe"],
      },
    );
@@ -330,41 +282,11 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
  it("exports a company package and imports it into new and existing companies", async () => {
    expect(serverProcess).not.toBeNull();

    const cliContext = await runCliJson<{
      contextPath: string;
      profileName: string;
      profile: { apiBase?: string };
    }>(
      ["context", "set", "--profile", "isolation-check", "--api-base", "https://example.test"],
      {
        configPath,
        paperclipHome,
        instanceId: paperclipInstanceId,
        shellHome: cliShellHome,
        includeConfigArg: false,
      },
    );

    const expectedContextPath = path.join(paperclipHome, "context.json");
    const leakedContextPath = path.join(cliShellHome, ".paperclip", "context.json");
    expect(cliContext.contextPath).toBe(expectedContextPath);
    expect(cliContext.profileName).toBe("isolation-check");
    expect(cliContext.profile.apiBase).toBe("https://example.test");
    expect(existsSync(expectedContextPath)).toBe(true);
    expect(existsSync(leakedContextPath)).toBe(false);
    rmSync(expectedContextPath, { force: true });
    expect(existsSync(expectedContextPath)).toBe(false);

    const sourceCompany = await api<{ id: string; name: string; issuePrefix: string }>(apiBase, "/api/companies", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ name: `CLI Export Source ${Date.now()}` }),
    });
    await api(apiBase, `/api/companies/${sourceCompany.id}`, {
      method: "PATCH",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ requireBoardApprovalForNewAgents: false }),
    });

    const sourceAgent = await api<{ id: string; name: string }>(
      apiBase,
@@ -376,11 +298,8 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
        name: "Export Engineer",
        role: "engineer",
        adapterType: "claude_local",
        adapterConfig: {},
        instructionsBundle: {
          files: {
            "AGENTS.md": "You verify company portability.",
          },
        adapterConfig: {
          promptTemplate: "You verify company portability.",
        },
      }),
    },
@@ -431,13 +350,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
        "--include",
        "company,agents,projects,issues",
      ],
      {
        apiBase,
        configPath,
        paperclipHome,
        instanceId: paperclipInstanceId,
        shellHome: cliShellHome,
      },
      { apiBase, configPath },
    );

    expect(exportResult.ok).toBe(true);
@@ -461,13 +374,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
        "company,agents,projects,issues",
        "--yes",
      ],
      {
        apiBase,
        configPath,
        paperclipHome,
        instanceId: paperclipInstanceId,
        shellHome: cliShellHome,
      },
      { apiBase, configPath },
    );

    expect(importedNew.company.action).toBe("created");
@@ -486,11 +393,10 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
      apiBase,
      `/api/companies/${importedNew.company.id}/issues`,
    );
    const importedMatchingIssues = importedIssues.filter((issue) => issue.title === sourceIssue.title);

    expect(importedAgents.map((agent) => agent.name)).toContain(sourceAgent.name);
    expect(importedProjects.map((project) => project.name)).toContain(sourceProject.name);
    expect(importedMatchingIssues).toHaveLength(1);
    expect(importedIssues.map((issue) => issue.title)).toContain(sourceIssue.title);

    const previewExisting = await runCliJson<{
      errors: string[];
@@ -515,13 +421,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
        "rename",
        "--dry-run",
      ],
      {
        apiBase,
        configPath,
        paperclipHome,
        instanceId: paperclipInstanceId,
        shellHome: cliShellHome,
      },
      { apiBase, configPath },
    );

    expect(previewExisting.errors).toEqual([]);
@@ -548,13 +448,7 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
        "rename",
        "--yes",
      ],
      {
        apiBase,
        configPath,
        paperclipHome,
        instanceId: paperclipInstanceId,
        shellHome: cliShellHome,
      },
      { apiBase, configPath },
    );

    expect(importedExisting.company.action).toBe("unchanged");
@@ -572,13 +466,11 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
      apiBase,
      `/api/companies/${importedNew.company.id}/issues`,
    );
    const twiceImportedMatchingIssues = twiceImportedIssues.filter((issue) => issue.title === sourceIssue.title);

    expect(twiceImportedAgents).toHaveLength(2);
    expect(new Set(twiceImportedAgents.map((agent) => agent.name)).size).toBe(2);
    expect(twiceImportedProjects).toHaveLength(2);
    expect(twiceImportedMatchingIssues).toHaveLength(2);
    expect(new Set(twiceImportedMatchingIssues.map((issue) => issue.identifier)).size).toBe(2);
    expect(twiceImportedIssues).toHaveLength(2);

    const zipPath = path.join(tempRoot, "exported-company.zip");
    const portableFiles: Record<string, string> = {};
@@ -601,16 +493,10 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
        "company,agents,projects,issues",
        "--yes",
      ],
      {
        apiBase,
        configPath,
        paperclipHome,
        instanceId: paperclipInstanceId,
        shellHome: cliShellHome,
      },
      { apiBase, configPath },
    );

    expect(importedFromZip.company.action).toBe("created");
    expect(importedFromZip.agents.some((agent) => agent.action === "created")).toBe(true);
  }, 90_000);
  }, 60_000);
});
@@ -160,7 +160,6 @@ describe("renderCompanyImportPreview", () => {
    path: "COMPANY.md",
    name: "Source Co",
    description: null,
    attachmentMaxBytes: null,
    brandColor: null,
    logoPath: null,
    requireBoardApprovalForNewAgents: false,
@@ -244,7 +243,6 @@ describe("renderCompanyImportPreview", () => {
      billingCode: null,
      executionWorkspaceSettings: null,
      assigneeAdapterOverrides: null,
      comments: [],
      metadata: null,
    },
  ],
@@ -377,7 +375,6 @@ describe("import selection catalog", () => {
    path: "COMPANY.md",
    name: "Source Co",
    description: null,
    attachmentMaxBytes: null,
    brandColor: null,
    logoPath: "images/company-logo.png",
    requireBoardApprovalForNewAgents: false,
@@ -461,7 +458,6 @@ describe("import selection catalog", () => {
      billingCode: null,
      executionWorkspaceSettings: null,
      assigneeAdapterOverrides: null,
      comments: [],
      metadata: null,
    },
  ],
@@ -1,24 +0,0 @@
import path from "node:path";
import { describe, expect, it } from "vitest";
import { collectEnvLabDoctorStatus, resolveEnvLabSshStatePath } from "../commands/env-lab.js";

describe("env-lab command", () => {
  it("resolves the default SSH fixture state path under the instance root", () => {
    const statePath = resolveEnvLabSshStatePath("fixture-test");

    expect(statePath).toContain(
      path.join("instances", "fixture-test", "env-lab", "ssh-fixture", "state.json"),
    );
  });

  it("reports doctor status for an instance without a running fixture", async () => {
    const status = await collectEnvLabDoctorStatus({ instance: "fixture-test-missing" });

    expect(status.statePath).toContain(
      path.join("instances", "fixture-test-missing", "env-lab", "ssh-fixture", "state.json"),
    );
    expect(typeof status.ssh.supported).toBe("boolean");
    expect(status.ssh.running).toBe(false);
    expect(status.ssh.environment).toBeNull();
  });
});
@@ -3,15 +3,11 @@ import os from "node:os";
import path from "node:path";
import { execFileSync } from "node:child_process";
import { randomUUID } from "node:crypto";
import { eq } from "drizzle-orm";
import { afterEach, describe, expect, it, vi } from "vitest";
import {
  agents,
  authUsers,
  companies,
  createDb,
  issueComments,
  issues,
  projects,
  routines,
  routineTriggers,
@@ -20,7 +16,6 @@ import {
  copyGitHooksToWorktreeGitDir,
  copySeededSecretsKey,
  pauseSeededScheduledRoutines,
  quarantineSeededWorktreeExecutionState,
  readSourceAttachmentBody,
  rebindWorkspaceCwd,
  resolveSourceConfigPath,
@@ -52,7 +47,6 @@ import {
const ORIGINAL_CWD = process.cwd();
const ORIGINAL_ENV = { ...process.env };
const embeddedPostgresSupport = await getEmbeddedPostgresTestSupport();
const itEmbeddedPostgres = embeddedPostgresSupport.supported ? it : it.skip;
const describeEmbeddedPostgres = embeddedPostgresSupport.supported ? describe : describe.skip;

if (!embeddedPostgresSupport.supported) {
@@ -190,9 +184,8 @@ describe("worktree helpers", () => {
    ).toEqual(["worktree", "add", "-b", "my-worktree", "/tmp/my-worktree", "origin/main"]);
  });

  it("rewrites auth URLs only when they already include a port", () => {
  it("rewrites loopback auth URLs to the new port only", () => {
    expect(rewriteLocalUrlPort("http://127.0.0.1:3100", 3110)).toBe("http://127.0.0.1:3110/");
    expect(rewriteLocalUrlPort("http://my-host.ts.net:3100", 3110)).toBe("http://my-host.ts.net:3110/");
    expect(rewriteLocalUrlPort("https://paperclip.example", 3110)).toBe("https://paperclip.example");
  });

@@ -287,138 +280,6 @@ describe("worktree helpers", () => {
    expect(full.nullifyColumns).toEqual({});
  });

  itEmbeddedPostgres("quarantines copied live execution state in seeded worktree databases", async () => {
    const tempDb = await startEmbeddedPostgresTestDatabase("paperclip-worktree-quarantine-");
    const db = createDb(tempDb.connectionString);
    const companyId = randomUUID();
    const agentId = randomUUID();
    const idleAgentId = randomUUID();
    const inProgressIssueId = randomUUID();
    const todoIssueId = randomUUID();
    const reviewIssueId = randomUUID();
    const userIssueId = randomUUID();

    try {
      await db.insert(companies).values({
        id: companyId,
        name: "Paperclip",
        issuePrefix: "WTQ",
        requireBoardApprovalForNewAgents: false,
      });
      await db.insert(agents).values([
        {
          id: agentId,
          companyId,
          name: "CodexCoder",
          role: "engineer",
          status: "running",
          adapterType: "codex_local",
          adapterConfig: {},
          runtimeConfig: {
            heartbeat: { enabled: true, intervalSec: 60 },
            wakeOnDemand: true,
          },
          permissions: {},
        },
        {
          id: idleAgentId,
          companyId,
          name: "Reviewer",
          role: "reviewer",
          status: "idle",
          adapterType: "codex_local",
          adapterConfig: {},
          runtimeConfig: { heartbeat: { enabled: false, intervalSec: 300 } },
          permissions: {},
        },
      ]);
      await db.insert(issues).values([
        {
          id: inProgressIssueId,
          companyId,
          title: "Copied in-flight issue",
          status: "in_progress",
          priority: "medium",
          assigneeAgentId: agentId,
          issueNumber: 1,
          identifier: "WTQ-1",
          executionAgentNameKey: "codexcoder",
          executionLockedAt: new Date("2026-04-18T00:00:00.000Z"),
        },
        {
          id: todoIssueId,
          companyId,
          title: "Copied assigned todo issue",
          status: "todo",
          priority: "medium",
          assigneeAgentId: agentId,
          issueNumber: 2,
          identifier: "WTQ-2",
        },
        {
          id: reviewIssueId,
          companyId,
          title: "Copied assigned review issue",
          status: "in_review",
          priority: "medium",
          assigneeAgentId: idleAgentId,
          issueNumber: 3,
          identifier: "WTQ-3",
        },
        {
          id: userIssueId,
          companyId,
          title: "Copied user issue",
          status: "todo",
          priority: "medium",
          assigneeUserId: "user-1",
          issueNumber: 4,
          identifier: "WTQ-4",
        },
      ]);

      await expect(quarantineSeededWorktreeExecutionState(tempDb.connectionString)).resolves.toEqual({
        disabledTimerHeartbeats: 1,
        resetRunningAgents: 1,
        quarantinedInProgressIssues: 1,
        unassignedTodoIssues: 1,
        unassignedReviewIssues: 1,
      });

      const [quarantinedAgent] = await db.select().from(agents).where(eq(agents.id, agentId));
      expect(quarantinedAgent?.status).toBe("idle");
      expect(quarantinedAgent?.runtimeConfig).toMatchObject({
        heartbeat: { enabled: false, intervalSec: 60 },
        wakeOnDemand: true,
      });

      const [inProgressIssue] = await db.select().from(issues).where(eq(issues.id, inProgressIssueId));
      expect(inProgressIssue?.status).toBe("blocked");
      expect(inProgressIssue?.assigneeAgentId).toBeNull();
      expect(inProgressIssue?.executionAgentNameKey).toBeNull();
      expect(inProgressIssue?.executionLockedAt).toBeNull();

      const [todoIssue] = await db.select().from(issues).where(eq(issues.id, todoIssueId));
      expect(todoIssue?.status).toBe("todo");
      expect(todoIssue?.assigneeAgentId).toBeNull();

      const [reviewIssue] = await db.select().from(issues).where(eq(issues.id, reviewIssueId));
      expect(reviewIssue?.status).toBe("in_review");
      expect(reviewIssue?.assigneeAgentId).toBeNull();

      const [userIssue] = await db.select().from(issues).where(eq(issues.id, userIssueId));
      expect(userIssue?.status).toBe("todo");
      expect(userIssue?.assigneeUserId).toBe("user-1");

      const comments = await db.select().from(issueComments).where(eq(issueComments.issueId, inProgressIssueId));
      expect(comments).toHaveLength(1);
      expect(comments[0]?.body).toContain("Quarantined during worktree seed");
    } finally {
      await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
      await tempDb.cleanup();
    }
  }, 20_000);

  it("copies the source local_encrypted secrets key into the seeded worktree instance", () => {
    const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-secrets-"));
    const originalInlineMasterKey = process.env.PAPERCLIP_SECRETS_MASTER_KEY;
@@ -512,97 +373,6 @@ describe("worktree helpers", () => {
    }
  });

  itEmbeddedPostgres(
    "seeds authenticated users into minimally cloned worktree instances",
    async () => {
      const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-auth-seed-"));
      const worktreeRoot = path.join(tempRoot, "PAP-999-auth-seed");
      const sourceHome = path.join(tempRoot, "source-home");
      const sourceConfigDir = path.join(sourceHome, "instances", "source");
      const sourceConfigPath = path.join(sourceConfigDir, "config.json");
      const sourceEnvPath = path.join(sourceConfigDir, ".env");
      const sourceKeyPath = path.join(sourceConfigDir, "secrets", "master.key");
      const worktreeHome = path.join(tempRoot, ".paperclip-worktrees");
      const originalCwd = process.cwd();
      const sourceDb = await startEmbeddedPostgresTestDatabase("paperclip-worktree-auth-source-");

      try {
        const sourceDbClient = createDb(sourceDb.connectionString);
        await sourceDbClient.insert(authUsers).values({
          id: "user-existing",
          email: "existing@paperclip.ing",
          name: "Existing User",
          emailVerified: true,
          createdAt: new Date(),
          updatedAt: new Date(),
        });

        fs.mkdirSync(path.dirname(sourceKeyPath), { recursive: true });
        fs.mkdirSync(worktreeRoot, { recursive: true });

        const sourceConfig = buildSourceConfig();
        sourceConfig.database = {
          mode: "postgres",
          embeddedPostgresDataDir: path.join(sourceConfigDir, "db"),
          embeddedPostgresPort: 54329,
          backup: {
            enabled: true,
            intervalMinutes: 60,
            retentionDays: 30,
            dir: path.join(sourceConfigDir, "backups"),
          },
          connectionString: sourceDb.connectionString,
        };
        sourceConfig.logging.logDir = path.join(sourceConfigDir, "logs");
        sourceConfig.storage.localDisk.baseDir = path.join(sourceConfigDir, "storage");
        sourceConfig.secrets.localEncrypted.keyFilePath = sourceKeyPath;

        fs.writeFileSync(sourceConfigPath, JSON.stringify(sourceConfig, null, 2) + "\n", "utf8");
        fs.writeFileSync(sourceEnvPath, "", "utf8");
        fs.writeFileSync(sourceKeyPath, "source-master-key", "utf8");

        process.chdir(worktreeRoot);
        await worktreeInitCommand({
          name: "PAP-999-auth-seed",
          home: worktreeHome,
          fromConfig: sourceConfigPath,
          force: true,
        });

        const targetConfig = JSON.parse(
          fs.readFileSync(path.join(worktreeRoot, ".paperclip", "config.json"), "utf8"),
        ) as PaperclipConfig;
        const { default: EmbeddedPostgres } = await import("embedded-postgres");
        const targetPg = new EmbeddedPostgres({
          databaseDir: targetConfig.database.embeddedPostgresDataDir,
          user: "paperclip",
          password: "paperclip",
          port: targetConfig.database.embeddedPostgresPort,
          persistent: true,
          initdbFlags: ["--encoding=UTF8", "--locale=C", "--lc-messages=C"],
          onLog: () => {},
          onError: () => {},
        });

        await targetPg.start();
        try {
          const targetDb = createDb(
            `postgres://paperclip:paperclip@127.0.0.1:${targetConfig.database.embeddedPostgresPort}/paperclip`,
          );
          const seededUsers = await targetDb.select().from(authUsers);
          expect(seededUsers.some((row) => row.email === "existing@paperclip.ing")).toBe(true);
        } finally {
          await targetPg.stop();
        }
      } finally {
        process.chdir(originalCwd);
        await sourceDb.cleanup();
        fs.rmSync(tempRoot, { recursive: true, force: true });
      }
    },
    30000,
  );

  it("avoids ports already claimed by sibling worktree instance configs", async () => {
    const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-claimed-ports-"));
    const repoRoot = path.join(tempRoot, "repo");
@@ -882,7 +652,7 @@ describe("worktree helpers", () => {
      }
      fs.rmSync(tempRoot, { recursive: true, force: true });
    }
  }, 30_000);
  }, 20_000);

  it("restores the current worktree config and instance data if reseed fails", async () => {
    const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-reseed-rollback-"));
@@ -1039,7 +809,7 @@ describe("worktree helpers", () => {
      execFileSync("git", ["worktree", "remove", "--force", worktreePath], { cwd: repoRoot, stdio: "ignore" });
      fs.rmSync(tempRoot, { recursive: true, force: true });
    }
  }, 15_000);
});

it("creates and initializes a worktree from the top-level worktree:make command", async () => {
  const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-make-"));
@@ -1,5 +1,4 @@
import type { CLIAdapterModule } from "@paperclipai/adapter-utils";
import { printAcpxStreamEvent } from "@paperclipai/adapter-acpx-local/cli";
import { printClaudeStreamEvent } from "@paperclipai/adapter-claude-local/cli";
import { printCodexStreamEvent } from "@paperclipai/adapter-codex-local/cli";
import { printCursorStreamEvent } from "@paperclipai/adapter-cursor-local/cli";
@@ -15,11 +14,6 @@ const claudeLocalCLIAdapter: CLIAdapterModule = {
  formatStdoutEvent: printClaudeStreamEvent,
};

const acpxLocalCLIAdapter: CLIAdapterModule = {
  type: "acpx_local",
  formatStdoutEvent: printAcpxStreamEvent,
};

const codexLocalCLIAdapter: CLIAdapterModule = {
  type: "codex_local",
  formatStdoutEvent: printCodexStreamEvent,
@@ -52,7 +46,6 @@ const openclawGatewayCLIAdapter: CLIAdapterModule = {

const adaptersByType = new Map<string, CLIAdapterModule>(
  [
    acpxLocalCLIAdapter,
    claudeLocalCLIAdapter,
    codexLocalCLIAdapter,
    openCodeLocalCLIAdapter,
@@ -61,7 +61,6 @@ interface IssueUpdateOptions extends BaseClientOptions {
interface IssueCommentOptions extends BaseClientOptions {
  body: string;
  reopen?: boolean;
  resume?: boolean;
}

interface IssueCheckoutOptions extends BaseClientOptions {
@@ -242,14 +241,12 @@ export function registerIssueCommands(program: Command): void {
    .argument("<issueId>", "Issue ID")
    .requiredOption("--body <text>", "Comment body")
    .option("--reopen", "Reopen if issue is done/cancelled")
    .option("--resume", "Request explicit follow-up and wake the assignee when resumable")
    .action(async (issueId: string, opts: IssueCommentOptions) => {
      try {
        const ctx = resolveCommandContext(opts);
        const payload = addIssueCommentSchema.parse({
          body: opts.body,
          reopen: opts.reopen,
          resume: opts.resume,
        });
        const comment = await ctx.api.post<IssueComment>(`/api/issues/${issueId}/comments`, payload);
        printOutput(comment, { json: ctx.json });
@@ -1,174 +0,0 @@
import path from "node:path";
import type { Command } from "commander";
import * as p from "@clack/prompts";
import pc from "picocolors";
import {
  buildSshEnvLabFixtureConfig,
  getSshEnvLabSupport,
  readSshEnvLabFixtureStatus,
  startSshEnvLabFixture,
  stopSshEnvLabFixture,
} from "@paperclipai/adapter-utils/ssh";
import { resolvePaperclipInstanceId, resolvePaperclipInstanceRoot } from "../config/home.js";

export function resolveEnvLabSshStatePath(instanceId?: string): string {
  const resolvedInstanceId = resolvePaperclipInstanceId(instanceId);
  return path.resolve(
    resolvePaperclipInstanceRoot(resolvedInstanceId),
    "env-lab",
    "ssh-fixture",
    "state.json",
  );
}

function printJson(value: unknown) {
  process.stdout.write(`${JSON.stringify(value, null, 2)}\n`);
}

function summarizeFixture(state: {
  host: string;
  port: number;
  username: string;
  workspaceDir: string;
  sshdLogPath: string;
}) {
  p.log.message(`Host: ${pc.cyan(state.host)}:${pc.cyan(String(state.port))}`);
  p.log.message(`User: ${pc.cyan(state.username)}`);
  p.log.message(`Workspace: ${pc.cyan(state.workspaceDir)}`);
  p.log.message(`Log: ${pc.dim(state.sshdLogPath)}`);
}

export async function collectEnvLabDoctorStatus(opts: { instance?: string }) {
  const statePath = resolveEnvLabSshStatePath(opts.instance);
  const [sshSupport, sshStatus] = await Promise.all([
    getSshEnvLabSupport(),
    readSshEnvLabFixtureStatus(statePath),
  ]);
  const environment = sshStatus.state ? await buildSshEnvLabFixtureConfig(sshStatus.state) : null;

  return {
    statePath,
    ssh: {
      supported: sshSupport.supported,
      reason: sshSupport.reason,
      running: sshStatus.running,
      state: sshStatus.state,
      environment,
    },
  };
}

export async function envLabUpCommand(opts: { instance?: string; json?: boolean }) {
  const statePath = resolveEnvLabSshStatePath(opts.instance);
  const state = await startSshEnvLabFixture({ statePath });
  const environment = await buildSshEnvLabFixtureConfig(state);

  if (opts.json) {
    printJson({ state, environment });
    return;
  }

  p.log.success("SSH env-lab fixture is running.");
  summarizeFixture(state);
  p.log.message(`State: ${pc.dim(statePath)}`);
}

export async function envLabStatusCommand(opts: { instance?: string; json?: boolean }) {
  const statePath = resolveEnvLabSshStatePath(opts.instance);
  const status = await readSshEnvLabFixtureStatus(statePath);
  const environment = status.state ? await buildSshEnvLabFixtureConfig(status.state) : null;

  if (opts.json) {
    printJson({ ...status, environment, statePath });
    return;
  }

  if (!status.state || !status.running) {
    p.log.info(`SSH env-lab fixture is not running (${pc.dim(statePath)}).`);
    return;
  }

  p.log.success("SSH env-lab fixture is running.");
  summarizeFixture(status.state);
  p.log.message(`State: ${pc.dim(statePath)}`);
}

export async function envLabDownCommand(opts: { instance?: string; json?: boolean }) {
  const statePath = resolveEnvLabSshStatePath(opts.instance);
  const stopped = await stopSshEnvLabFixture(statePath);

  if (opts.json) {
    printJson({ stopped, statePath });
    return;
  }

  if (!stopped) {
    p.log.info(`No SSH env-lab fixture was running (${pc.dim(statePath)}).`);
    return;
  }

  p.log.success("SSH env-lab fixture stopped.");
  p.log.message(`State: ${pc.dim(statePath)}`);
}

export async function envLabDoctorCommand(opts: { instance?: string; json?: boolean }) {
  const status = await collectEnvLabDoctorStatus(opts);

  if (opts.json) {
    printJson(status);
    return;
  }

  if (status.ssh.supported) {
    p.log.success("SSH fixture prerequisites are installed.");
  } else {
    p.log.warn(`SSH fixture prerequisites are incomplete: ${status.ssh.reason ?? "unknown reason"}`);
  }

  if (status.ssh.state && status.ssh.running) {
    p.log.success("SSH env-lab fixture is running.");
    summarizeFixture(status.ssh.state);
    p.log.message(`Private key: ${pc.dim(status.ssh.state.clientPrivateKeyPath)}`);
    p.log.message(`Known hosts: ${pc.dim(status.ssh.state.knownHostsPath)}`);
  } else if (status.ssh.state) {
    p.log.warn("SSH env-lab fixture state exists, but the process is not running.");
    p.log.message(`State: ${pc.dim(status.statePath)}`);
  } else {
    p.log.info("SSH env-lab fixture is not running.");
    p.log.message(`State: ${pc.dim(status.statePath)}`);
  }

  p.log.message(`Cleanup: ${pc.dim("pnpm paperclipai env-lab down")}`);
}

export function registerEnvLabCommands(program: Command) {
  const envLab = program.command("env-lab").description("Deterministic local environment fixtures");

  envLab
    .command("up")
    .description("Start the default SSH env-lab fixture")
    .option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
    .option("--json", "Print machine-readable fixture details")
    .action(envLabUpCommand);

  envLab
    .command("status")
    .description("Show the current SSH env-lab fixture state")
    .option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
    .option("--json", "Print machine-readable fixture details")
    .action(envLabStatusCommand);

  envLab
    .command("down")
    .description("Stop the default SSH env-lab fixture")
    .option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
    .option("--json", "Print machine-readable stop details")
    .action(envLabDownCommand);

  envLab
    .command("doctor")
    .description("Check SSH fixture prerequisites and current status")
    .option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
    .option("--json", "Print machine-readable diagnostic details")
    .action(envLabDoctorCommand);
}
@@ -75,6 +75,11 @@ function nonEmpty(value: string | null | undefined): string | null {
  return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}

function isLoopbackHost(hostname: string): boolean {
  const value = hostname.trim().toLowerCase();
  return value === "127.0.0.1" || value === "localhost" || value === "::1";
}

export function sanitizeWorktreeInstanceId(rawValue: string): string {
  const trimmed = rawValue.trim().toLowerCase();
  const normalized = trimmed
@@ -163,8 +168,7 @@ export function rewriteLocalUrlPort(rawUrl: string | undefined, port: number): s
  if (!rawUrl) return undefined;
  try {
    const parsed = new URL(rawUrl);
    // The URL API normalizes default ports like :80/:443 to "", so treat them as stable URLs.
    if (!parsed.port) return rawUrl;
    if (!isLoopbackHost(parsed.hostname)) return rawUrl;
    parsed.port = String(port);
    return parsed.toString();
  } catch {
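As a hedged, standalone re-statement of the loopback-only rewrite shown in the hunk above (not the project source — the original's `catch` body is truncated here, so returning the input on parse failure is an assumption):

```typescript
// Sketch of the behavior above: only rewrite ports for loopback URLs,
// and leave default-port URLs (which normalize to port "") untouched.
function rewriteLoopbackUrlPort(rawUrl: string | undefined, port: number): string | undefined {
  if (!rawUrl) return undefined;
  try {
    const parsed = new URL(rawUrl);
    if (!parsed.port) return rawUrl; // :80/:443 normalize to "", treat as stable
    const host = parsed.hostname.trim().toLowerCase();
    if (host !== "127.0.0.1" && host !== "localhost") return rawUrl;
    parsed.port = String(port);
    return parsed.toString();
  } catch {
    return rawUrl; // assumption: unparsable input passes through
  }
}

console.log(rewriteLoopbackUrlPort("http://127.0.0.1:3000/app", 4100)); // http://127.0.0.1:4100/app
console.log(rewriteLoopbackUrlPort("https://example.com:8443/x", 4100)); // unchanged
```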
@@ -93,7 +93,6 @@ type WorktreeInitOptions = {
  dbPort?: number;
  seed?: boolean;
  seedMode?: string;
  preserveLiveWork?: boolean;
  force?: boolean;
};
@@ -127,7 +126,6 @@ type WorktreeReseedOptions = {
  fromDataDir?: string;
  fromInstance?: string;
  seedMode?: string;
  preserveLiveWork?: boolean;
  yes?: boolean;
  allowLiveTarget?: boolean;
};
@@ -139,7 +137,6 @@ type WorktreeRepairOptions = {
  fromDataDir?: string;
  fromInstance?: string;
  seedMode?: string;
  preserveLiveWork?: boolean;
  noSeed?: boolean;
  allowLiveTarget?: boolean;
};
@@ -182,8 +179,6 @@ type CopiedGitHooksResult = {

type SeedWorktreeDatabaseResult = {
  backupSummary: string;
  pausedScheduledRoutines: number;
  executionQuarantine: SeededWorktreeExecutionQuarantineSummary;
  reboundWorkspaces: Array<{
    name: string;
    fromCwd: string;
@@ -191,14 +186,6 @@
  }>;
};

export type SeededWorktreeExecutionQuarantineSummary = {
  disabledTimerHeartbeats: number;
  resetRunningAgents: number;
  quarantinedInProgressIssues: number;
  unassignedTodoIssues: number;
  unassignedReviewIssues: number;
};

function nonEmpty(value: string | null | undefined): string | null {
  return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}
@@ -211,18 +198,6 @@ function isCurrentSourceConfigPath(sourceConfigPath: string): boolean {
  return path.resolve(currentConfigPath) === path.resolve(sourceConfigPath);
}

function formatSeededWorktreeExecutionQuarantineSummary(
  summary: SeededWorktreeExecutionQuarantineSummary,
): string {
  return [
    `disabled timer heartbeats: ${summary.disabledTimerHeartbeats}`,
    `reset running agents: ${summary.resetRunningAgents}`,
    `quarantined in-progress issues: ${summary.quarantinedInProgressIssues}`,
    `unassigned todo issues: ${summary.unassignedTodoIssues}`,
    `unassigned review issues: ${summary.unassignedReviewIssues}`,
  ].join(", ");
}

const WORKTREE_NAME_PREFIX = "paperclip-";

function resolveWorktreeMakeName(name: string): string {
@@ -1144,133 +1119,6 @@ export async function pauseSeededScheduledRoutines(connectionString: string): Pr
  }
}

const EMPTY_SEEDED_WORKTREE_EXECUTION_QUARANTINE_SUMMARY: SeededWorktreeExecutionQuarantineSummary = {
  disabledTimerHeartbeats: 0,
  resetRunningAgents: 0,
  quarantinedInProgressIssues: 0,
  unassignedTodoIssues: 0,
  unassignedReviewIssues: 0,
};

function isRecord(value: unknown): value is Record<string, unknown> {
  return Boolean(value) && typeof value === "object" && !Array.isArray(value);
}

function isEnabledValue(value: unknown): boolean {
  return value === true || value === "true" || value === 1 || value === "1";
}

function normalizeWorktreeRuntimeConfig(runtimeConfig: unknown): {
  runtimeConfig: Record<string, unknown>;
  disabledTimerHeartbeat: boolean;
  changed: boolean;
} {
  const nextRuntimeConfig = isRecord(runtimeConfig) ? { ...runtimeConfig } : {};
  const heartbeat = isRecord(nextRuntimeConfig.heartbeat) ? { ...nextRuntimeConfig.heartbeat } : null;
  if (!heartbeat) {
    return { runtimeConfig: nextRuntimeConfig, disabledTimerHeartbeat: false, changed: false };
  }

  const disabledTimerHeartbeat = isEnabledValue(heartbeat.enabled);
  if (heartbeat.enabled !== false) {
    heartbeat.enabled = false;
    nextRuntimeConfig.heartbeat = heartbeat;
    return { runtimeConfig: nextRuntimeConfig, disabledTimerHeartbeat, changed: true };
  }

  return { runtimeConfig: nextRuntimeConfig, disabledTimerHeartbeat: false, changed: false };
}

export async function quarantineSeededWorktreeExecutionState(
  connectionString: string,
): Promise<SeededWorktreeExecutionQuarantineSummary> {
  const db = createDb(connectionString);
  const summary = { ...EMPTY_SEEDED_WORKTREE_EXECUTION_QUARANTINE_SUMMARY };
  try {
    await db.transaction(async (tx) => {
      const seededAgents = await tx
        .select({
          id: agents.id,
          status: agents.status,
          runtimeConfig: agents.runtimeConfig,
        })
        .from(agents);

      for (const agent of seededAgents) {
        const normalized = normalizeWorktreeRuntimeConfig(agent.runtimeConfig);
        const nextStatus = agent.status === "running" ? "idle" : agent.status;
        if (normalized.disabledTimerHeartbeat) {
          summary.disabledTimerHeartbeats += 1;
        }
        if (agent.status === "running") {
          summary.resetRunningAgents += 1;
        }
        if (normalized.changed || nextStatus !== agent.status) {
          await tx
            .update(agents)
            .set({
              runtimeConfig: normalized.runtimeConfig,
              status: nextStatus,
              updatedAt: new Date(),
            })
            .where(eq(agents.id, agent.id));
        }
      }

      const affectedIssues = await tx
        .select({
          id: issues.id,
          companyId: issues.companyId,
          status: issues.status,
        })
        .from(issues)
        .where(
          and(
            sql`${issues.assigneeAgentId} is not null`,
            sql`${issues.assigneeUserId} is null`,
            inArray(issues.status, ["todo", "in_progress", "in_review"]),
          ),
        );

      for (const issue of affectedIssues) {
        const nextStatus = issue.status === "in_progress" ? "blocked" : issue.status;
        await tx
          .update(issues)
          .set({
            status: nextStatus,
            assigneeAgentId: null,
            checkoutRunId: null,
            executionRunId: null,
            executionAgentNameKey: null,
            executionLockedAt: null,
            executionWorkspaceId: null,
            updatedAt: new Date(),
          })
          .where(eq(issues.id, issue.id));

        if (issue.status === "in_progress") {
          summary.quarantinedInProgressIssues += 1;
          await tx.insert(issueComments).values({
            companyId: issue.companyId,
            issueId: issue.id,
            body:
              "Quarantined during worktree seed so copied in-flight work does not auto-run in this isolated instance. " +
              "Reassign or unblock here only if you intentionally want the worktree instance to own this task.",
          });
        } else if (issue.status === "todo") {
          summary.unassignedTodoIssues += 1;
        } else if (issue.status === "in_review") {
          summary.unassignedReviewIssues += 1;
        }
      }
    });

    return summary;
  } finally {
    await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
  }
}
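For reference, the status transitions in the quarantine hunk above reduce to two pure rules (a hedged re-statement for readability, not the project source):

```typescript
// Copied "running" agents are reset to "idle"; other agent statuses are untouched.
function nextAgentStatus(status: string): string {
  return status === "running" ? "idle" : status;
}

// Copied agent-owned "in_progress" issues become "blocked"; "todo"/"in_review"
// keep their status (they only lose their agent assignee).
function nextIssueStatus(status: string): string {
  return status === "in_progress" ? "blocked" : status;
}

console.log(nextAgentStatus("running")); // idle
console.log(nextIssueStatus("in_progress")); // blocked
```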
async function seedWorktreeDatabase(input: {
  sourceConfigPath: string;
  sourceConfig: PaperclipConfig;
@@ -1278,7 +1126,6 @@ async function seedWorktreeDatabase(input: {
  targetPaths: WorktreeLocalPaths;
  instanceId: string;
  seedMode: WorktreeSeedMode;
  preserveLiveWork?: boolean;
}): Promise<SeedWorktreeDatabaseResult> {
  const seedPlan = resolveWorktreeSeedPlan(input.seedMode);
  const sourceEnvFile = resolvePaperclipEnvFile(input.sourceConfigPath);
@@ -1311,7 +1158,6 @@
    backupDir: path.resolve(input.targetPaths.backupDir, "seed"),
    retention: { dailyDays: 7, weeklyWeeks: 4, monthlyMonths: 1 },
    filenamePrefix: `${input.instanceId}-seed`,
    backupEngine: "javascript",
    includeMigrationJournal: true,
    excludeTables: seedPlan.excludedTables,
    nullifyColumns: seedPlan.nullifyColumns,
@@ -1330,10 +1176,7 @@
      backupFile: backup.backupFile,
    });
    await applyPendingMigrations(targetConnectionString);
    const executionQuarantine = input.preserveLiveWork
      ? { ...EMPTY_SEEDED_WORKTREE_EXECUTION_QUARANTINE_SUMMARY }
      : await quarantineSeededWorktreeExecutionState(targetConnectionString);
    const pausedScheduledRoutines = await pauseSeededScheduledRoutines(targetConnectionString);
    await pauseSeededScheduledRoutines(targetConnectionString);
    const reboundWorkspaces = await rebindSeededProjectWorkspaces({
      targetConnectionString,
      currentCwd: input.targetPaths.cwd,
@@ -1341,8 +1184,6 @@

    return {
      backupSummary: formatDatabaseBackupResult(backup),
      pausedScheduledRoutines,
      executionQuarantine,
      reboundWorkspaces,
    };
  } finally {
@@ -1421,8 +1262,6 @@ async function runWorktreeInit(opts: WorktreeInitOptions): Promise<void> {
  const copiedGitHooks = copyGitHooksToWorktreeGitDir(cwd);

  let seedSummary: string | null = null;
  let seedExecutionQuarantineSummary: SeededWorktreeExecutionQuarantineSummary | null = null;
  let pausedScheduledRoutineCount: number | null = null;
  let reboundWorkspaceSummary: SeedWorktreeDatabaseResult["reboundWorkspaces"] = [];
  if (opts.seed !== false) {
    if (!sourceConfig) {
@@ -1440,11 +1279,8 @@
        targetPaths: paths,
        instanceId,
        seedMode,
        preserveLiveWork: opts.preserveLiveWork,
      });
      seedSummary = seeded.backupSummary;
      seedExecutionQuarantineSummary = seeded.executionQuarantine;
      pausedScheduledRoutineCount = seeded.pausedScheduledRoutines;
      reboundWorkspaceSummary = seeded.reboundWorkspaces;
      spinner.stop(`Seeded isolated worktree database (${seedMode}).`);
    } catch (error) {
@@ -1467,16 +1303,6 @@
  if (seedSummary) {
    p.log.message(pc.dim(`Seed mode: ${seedMode}`));
    p.log.message(pc.dim(`Seed snapshot: ${seedSummary}`));
    if (opts.preserveLiveWork) {
      p.log.warning("Preserved copied live work; this worktree instance may auto-run source-instance assignments.");
    } else if (seedExecutionQuarantineSummary) {
      p.log.message(
        pc.dim(`Seed execution quarantine: ${formatSeededWorktreeExecutionQuarantineSummary(seedExecutionQuarantineSummary)}`),
      );
    }
    if (pausedScheduledRoutineCount != null) {
      p.log.message(pc.dim(`Paused scheduled routines: ${pausedScheduledRoutineCount}`));
    }
    for (const rebound of reboundWorkspaceSummary) {
      p.log.message(
        pc.dim(`Rebound workspace ${rebound.name}: ${rebound.fromCwd} -> ${rebound.toCwd}`),
@@ -3121,20 +2947,11 @@ async function runWorktreeReseed(opts: WorktreeReseedOptions): Promise<void> {
      targetPaths,
      instanceId: targetPaths.instanceId,
      seedMode,
      preserveLiveWork: opts.preserveLiveWork,
    });
    spinner.stop(`Reseeded ${targetEndpoint.label} (${seedMode}).`);
    p.log.message(pc.dim(`Source: ${source.configPath}`));
    p.log.message(pc.dim(`Target: ${targetEndpoint.configPath}`));
    p.log.message(pc.dim(`Seed snapshot: ${seeded.backupSummary}`));
    if (opts.preserveLiveWork) {
      p.log.warning("Preserved copied live work; this worktree instance may auto-run source-instance assignments.");
    } else {
      p.log.message(
        pc.dim(`Seed execution quarantine: ${formatSeededWorktreeExecutionQuarantineSummary(seeded.executionQuarantine)}`),
      );
    }
    p.log.message(pc.dim(`Paused scheduled routines: ${seeded.pausedScheduledRoutines}`));
    for (const rebound of seeded.reboundWorkspaces) {
      p.log.message(
        pc.dim(`Rebound workspace ${rebound.name}: ${rebound.fromCwd} -> ${rebound.toCwd}`),
@@ -3198,7 +3015,6 @@ export async function worktreeRepairCommand(opts: WorktreeRepairOptions): Promis
    fromConfig: source.configPath,
    to: target.rootPath,
    seedMode,
    preserveLiveWork: opts.preserveLiveWork,
    yes: true,
    allowLiveTarget: opts.allowLiveTarget,
  });
@@ -3231,7 +3047,6 @@
    fromInstance: opts.fromInstance,
    seed: opts.noSeed ? false : true,
    seedMode,
    preserveLiveWork: opts.preserveLiveWork,
    force: true,
  });
} finally {
@@ -3255,7 +3070,6 @@ export function registerWorktreeCommands(program: Command): void {
    .option("--server-port <port>", "Preferred server port", (value) => Number(value))
    .option("--db-port <port>", "Preferred embedded Postgres port", (value) => Number(value))
    .option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
    .option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
    .option("--no-seed", "Skip database seeding from the source instance")
    .option("--force", "Replace existing repo-local config and isolated instance data", false)
    .action(worktreeMakeCommand);
@@ -3272,7 +3086,6 @@
    .option("--server-port <port>", "Preferred server port", (value) => Number(value))
    .option("--db-port <port>", "Preferred embedded Postgres port", (value) => Number(value))
    .option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
    .option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
    .option("--no-seed", "Skip database seeding from the source instance")
    .option("--force", "Replace existing repo-local config and isolated instance data", false)
    .action(worktreeInitCommand);
@@ -3312,7 +3125,6 @@
    .option("--from-data-dir <path>", "Source PAPERCLIP_HOME used when deriving the source config")
    .option("--from-instance <id>", "Source instance id when deriving the source config")
    .option("--seed-mode <mode>", "Seed profile: minimal or full (default: full)", "full")
    .option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
    .option("--yes", "Skip the destructive confirmation prompt", false)
    .option("--allow-live-target", "Override the guard that requires the target worktree DB to be stopped first", false)
    .action(worktreeReseedCommand);
@@ -3326,7 +3138,6 @@
    .option("--from-data-dir <path>", "Source PAPERCLIP_HOME used when deriving the source config")
    .option("--from-instance <id>", "Source instance id when deriving the source config (default: default)")
    .option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
    .option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
    .option("--no-seed", "Repair metadata only and skip reseeding when bootstrapping a missing worktree config", false)
    .option("--allow-live-target", "Override the guard that requires the target worktree DB to be stopped first", false)
    .action(worktreeRepairCommand);

@@ -8,7 +8,6 @@ import { heartbeatRun } from "./commands/heartbeat-run.js";
import { runCommand } from "./commands/run.js";
import { bootstrapCeoInvite } from "./commands/auth-bootstrap-ceo.js";
import { dbBackupCommand } from "./commands/db-backup.js";
import { registerEnvLabCommands } from "./commands/env-lab.js";
import { registerContextCommands } from "./commands/client/context.js";
import { registerCompanyCommands } from "./commands/client/company.js";
import { registerIssueCommands } from "./commands/client/issue.js";
@@ -148,7 +147,6 @@ registerDashboardCommands(program);
registerRoutineCommands(program);
registerFeedbackCommands(program);
registerWorktreeCommands(program);
registerEnvLabCommands(program);
registerPluginCommands(program);

const auth = program.command("auth").description("Authentication and bootstrap utilities");
doc/CLI.md
@@ -2,7 +2,7 @@

Paperclip CLI now supports both:

- instance setup/diagnostics (`onboard`, `doctor`, `configure`, `env`, `allowed-hostname`, `env-lab`)
- instance setup/diagnostics (`onboard`, `doctor`, `configure`, `env`, `allowed-hostname`)
- control-plane client operations (issues, approvals, agents, activity, dashboard)

## Base Usage
@@ -45,15 +45,6 @@ Allow an authenticated/private hostname (for example custom Tailscale DNS):
pnpm paperclipai allowed-hostname dotta-macbook-pro
```

Bring up the default local SSH fixture for environment testing:

```sh
pnpm paperclipai env-lab up
pnpm paperclipai env-lab doctor
pnpm paperclipai env-lab status --json
pnpm paperclipai env-lab down
```

All client commands support:

- `--data-dir <path>`
@@ -27,18 +27,6 @@ pnpm db:migrate

When `DATABASE_URL` is unset, this command targets the current embedded PostgreSQL instance for your active Paperclip config/instance.

Issue reference mentions follow the normal migration path: the schema migration creates the tracking table, but it does not backfill historical issue titles, descriptions, comments, or documents automatically.

To backfill existing content manually after migrating, run:

```sh
pnpm issue-references:backfill
# optional: limit to one company
pnpm issue-references:backfill -- --company <company-id>
```

Future issue, comment, and document writes sync references automatically without running the backfill command.

This mode is ideal for local development and one-command installs.

Docker note: the Docker quickstart image also uses embedded PostgreSQL by default. Persist `/paperclip` to keep DB state across container restarts (see `doc/DOCKER.md`).
@@ -59,11 +47,11 @@ cp .env.example .env
# DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip
```

Run migrations:
Run migrations (once the migration generation issue is fixed) or use `drizzle-kit push`:

```sh
DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip \
pnpm db:migrate
npx drizzle-kit push
```

Start the server:
@@ -100,27 +88,27 @@ postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:

### Configure

For the application runtime, use a direct PostgreSQL connection unless the database client has explicit prepared-statement configuration for your pooling mode:

```sh
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres
```

If you later run the app with a pooled runtime URL, set `DATABASE_MIGRATION_URL` to the direct connection URL. Paperclip uses it for startup schema checks/migrations and plugin namespace migrations, while the app continues to use `DATABASE_URL` for runtime queries:
Set `DATABASE_URL` in your `.env`:

```sh
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:6543/postgres
DATABASE_MIGRATION_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres
```

If your hosted database requires transaction-pooling-only connections, use a direct or session-pooled connection for Paperclip until runtime pooling support is documented in this guide. Do not edit database client source files as part of deployment setup.
If using connection pooling (port 6543), the `postgres` client must disable prepared statements. Update `packages/db/src/client.ts`:

```ts
export function createDb(url: string) {
  const sql = postgres(url, { prepare: false });
  return drizzlePg(sql, { schema });
}
```
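The two connection conventions above can be sketched as one selection rule. This is illustrative only: the env names follow this doc, `6543` is the conventional Supabase transaction-pooling port, and the real server wiring may differ.

```typescript
// Schema work prefers the direct DATABASE_MIGRATION_URL when present,
// falling back to the runtime DATABASE_URL.
function pickMigrationUrl(env: { DATABASE_URL?: string; DATABASE_MIGRATION_URL?: string }): string | undefined {
  return env.DATABASE_MIGRATION_URL ?? env.DATABASE_URL;
}

// Detect transaction pooling by the conventional pooled port.
function isTransactionPooled(url: string): boolean {
  try {
    return new URL(url).port === "6543";
  } catch {
    return false;
  }
}
```

When `isTransactionPooled(...)` is true for the runtime URL, the `postgres` client needs `prepare: false`, as in the `createDb` snippet shown earlier.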
### Push the schema

```sh
# Use the direct connection (port 5432) for schema changes
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@...5432/postgres \
pnpm db:migrate
npx drizzle-kit push
```
### Free tier limits
@@ -143,22 +131,6 @@ The database mode is controlled by `DATABASE_URL`:

Your Drizzle schema (`packages/db/src/schema/`) stays the same regardless of mode.

## Plugin database namespaces

The plugin runtime tracks plugin-owned database namespaces and migrations in `plugin_database_namespaces` and `plugin_migrations`. Hosted deployments that separate runtime and migration connections should set `DATABASE_MIGRATION_URL`; plugin namespace migration work uses the migration connection when present.

## Backups

Paperclip supports automatic and manual logical database backups. These dumps include non-system database schemas such as `public`, the Drizzle migration journal, and plugin-owned database schemas. See `doc/DEVELOPING.md` for the current `paperclipai db:backup` / `pnpm db:backup` commands and backup retention configuration.

Database backups do not include non-database instance files such as local-disk uploads, workspace files, or the local encrypted secrets master key. Back those paths up separately when you need full instance disaster recovery.

## Secret storage

Paperclip stores secret metadata and versions in:

@@ -142,4 +142,3 @@ This prevents lockout when a user migrates from long-running local trusted usage

- implementation plan: `doc/plans/deployment-auth-mode-consolidation.md`
- V1 contract: `doc/SPEC-implementation.md`
- operator workflows: `doc/DEVELOPING.md` and `doc/CLI.md`
- invite/join state map: `doc/spec/invite-flow.md`
@@ -43,19 +43,6 @@ This starts:
|
||||
|
||||
`pnpm dev` and `pnpm dev:once` are now idempotent for the current repo and instance: if the matching Paperclip dev runner is already alive, Paperclip reports the existing process instead of starting a duplicate.
|
||||
|
||||
Issue execution may also use project execution workspace policies and workspace runtime services for per-project worktrees, preview servers, and managed dev commands. Configure those through the project workspace/runtime surfaces rather than starting long-running unmanaged processes when a task needs a reusable service.
|
||||
|
||||
## Storybook
|
||||
|
||||
The board UI Storybook keeps stories and Storybook config under `ui/storybook/` so component review files stay out of the app source routes.
|
||||
|
||||
```sh
|
||||
pnpm storybook
|
||||
pnpm build-storybook
|
||||
```
|
||||
|
||||
These run the `@paperclipai/ui` Storybook on port `6006` and build the static output to `ui/storybook-static/`.
|
||||
|
||||
Inspect or stop the current repo's managed dev runner:
|
||||
|
||||
```sh
|
||||
@@ -115,8 +102,6 @@ pnpm test:release-smoke
|
||||
|
||||
These browser suites are intended for targeted local verification and CI, not the default agent/human test command.
|
||||
|
||||
For normal issue work, start with the smallest targeted check that proves the change. Reserve repo-wide typecheck/build/test runs for PR-ready handoff or changes broad enough that narrow checks do not cover the risk.
|
||||
|
||||
## One-Command Local Run
|
||||
|
||||
For a first-time local install, you can bootstrap and run in one command:
|
||||
@@ -198,8 +183,6 @@ For `codex_local`, Paperclip also manages a per-company Codex home under the ins
|
||||
|
||||
If the `codex` CLI is not installed or not on `PATH`, `codex_local` agent runs fail at execution time with a clear adapter error. Quota polling uses a short-lived `codex app-server` subprocess: when `codex` cannot be spawned, that provider reports `ok: false` in aggregated quota results and the API server keeps running (it must not exit on a missing binary).
|
||||
|
||||
Local adapters require their corresponding CLI/session setup on the machine running Paperclip. External adapters are installed through the adapter/plugin flow and should not require hardcoded imports in `server/` or `ui/`.
|
||||
|
||||
## Worktree-local Instances
|
||||
|
||||
When developing from multiple git worktrees, do not point two Paperclip servers at the same embedded PostgreSQL data directory.
|
||||
@@ -226,8 +209,6 @@ Seed modes:
|
||||
- `full` makes a full logical clone of the source instance
|
||||
- `--no-seed` creates an empty isolated instance
|
||||
|
||||
Seeded worktree instances quarantine copied live execution by default for both `minimal` and `full` seeds. During restore, Paperclip disables copied agent timer heartbeats, resets copied `running` agents to `idle`, blocks and unassigns copied agent-owned `in_progress` issues, and unassigns copied agent-owned `todo`/`in_review` issues. This keeps a freshly booted worktree from starting agents for work already owned by the source instance. Pass `--preserve-live-work` only when you intentionally want the isolated worktree to resume copied assignments.

After `worktree init`, both the server and the CLI auto-load the repo-local `.paperclip/.env` when run inside that worktree, so normal commands like `pnpm dev`, `paperclipai doctor`, and `paperclipai db:backup` stay scoped to the worktree instance.

`pnpm dev` now fails fast in a linked git worktree when `.paperclip/.env` is missing, instead of silently booting against the default instance/port. If that happens, run `paperclipai worktree init` in the worktree first.

@@ -241,8 +222,6 @@ That repo-local env also sets:
- `PAPERCLIP_WORKTREE_COLOR=<hex-color>`

The server/UI use those values for worktree-specific branding such as the top banner and dynamically colored favicon.
Authenticated worktree servers also use the `PAPERCLIP_INSTANCE_ID` value to scope Better Auth cookie names.
Browser cookies are shared by host rather than port, so this prevents logging into one `127.0.0.1:<port>` worktree from replacing another worktree server's session cookie.
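The instance-scoped cookie naming can be sketched as a small helper. This is a minimal sketch under assumptions: the real naming comes from Better Auth's cookie configuration, and the function and format here are hypothetical, illustrating only why the instance ID must appear in the name.

```python
def scoped_cookie_name(base: str, instance_id: str) -> str:
    """Build a cookie name two worktree servers on the same host cannot collide on.

    Hypothetical scheme: the real implementation delegates to Better Auth's
    cookie options, but the idea is the same -- fold the instance ID into the name.
    """
    safe = "".join(ch if ch.isalnum() else "-" for ch in instance_id.lower())
    return f"{base}.{safe}"

# Two worktree instances on 127.0.0.1 get distinct session cookies.
a = scoped_cookie_name("paperclip.session_token", "default")
b = scoped_cookie_name("paperclip.session_token", "wt_pap-1497-d")
```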

Print shell exports explicitly when needed:

@@ -421,9 +400,7 @@ If you set `DATABASE_URL`, the server will use that instead of embedded PostgreS

## Automatic DB Backups

Paperclip can run automatic logical database backups on a timer. These backups cover
non-system database schemas, including migration history and plugin-owned database
schemas. Defaults:
Paperclip can run automatic DB backups on a timer. Defaults:

- enabled
- every 60 minutes

@@ -451,10 +428,6 @@ Environment overrides:

- `PAPERCLIP_DB_BACKUP_RETENTION_DAYS=<days>`
- `PAPERCLIP_DB_BACKUP_DIR=/absolute/or/~/path`

DB backups are not full instance filesystem backups. For full local disaster
recovery, also back up local storage files and the local encrypted secrets key if
those providers are enabled.
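The `PAPERCLIP_DB_BACKUP_RETENTION_DAYS` override implies age-based pruning. A minimal sketch, assuming backups arrive as `(name, created_at)` pairs; the helper name and data shape are illustrative, not Paperclip's actual backup code:

```python
from datetime import datetime, timedelta, timezone

def backups_to_prune(backups, retention_days, now=None):
    """Return the names of backups older than the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [name for name, created_at in backups if created_at < cutoff]

now = datetime(2026, 4, 28, tzinfo=timezone.utc)
backups = [
    ("backup-2026-04-27.sql.gz", datetime(2026, 4, 27, tzinfo=timezone.utc)),
    ("backup-2026-03-01.sql.gz", datetime(2026, 3, 1, tzinfo=timezone.utc)),
]
stale = backups_to_prune(backups, retention_days=14, now=now)
```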

## Secrets in Dev

Agent env vars now support secret references. By default, secret values are stored with local encryption and only secret refs are persisted in agent config.

doc/GOAL.md (15 changed lines)
@@ -23,7 +23,7 @@ Paperclip is the command, communication, and control plane for a company of AI a
- **Track work in real time** — see at any moment what every agent is working on
- **Control costs** — token salary budgets per agent, spend tracking, burn rate
- **Align to goals** — agents see how their work serves the bigger mission
- **Preserve work context** — comments, documents, work products, attachments, and company state stay attached to the work
- **Store company knowledge** — a shared brain for the organization

## Architecture

@@ -36,20 +36,17 @@ The central nervous system. Manages:
- Agent registry and org chart
- Task assignment and status
- Budget and token spend tracking
- Issue comments, documents, work products, attachments, and company state
- Company knowledge base
- Goal hierarchy (company → team → agent → task)
- Heartbeat monitoring — know when agents are alive, idle, or stuck

It also enforces execution-control semantics such as single-assignee issues, atomic checkout and execution locks, blockers, recovery issues, and workspace/runtime controls.

### 2. Execution Services (adapters)

Agents run externally and report into the control plane. Adapters connect different execution environments and define how a heartbeat is invoked, observed, and cancelled:
Agents run externally and report into the control plane. An agent is just Python code that gets kicked off and does work. Adapters connect different execution environments:

- **Local CLI/session adapters** — built-in adapters for tools such as Claude Code, Codex, Gemini, OpenCode, Pi, and Cursor
- **HTTP/process-style adapters** — command or webhook/API integrations for custom runtimes
- **OpenClaw gateway** — integration for OpenClaw-style remote agents
- **External adapter plugins** — dynamically loaded adapters installed outside the core app
- **OpenClaw** — initial adapter target
- **Heartbeat loop** — simple custom Python that loops, checks in, does work
- **Others** — any runtime that can call an API

The control plane doesn't run agents. It orchestrates them. Agents run wherever they run and phone home.
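The heartbeat-loop style of agent described above can be sketched in a few lines of Python. The endpoint paths and payloads here are hypothetical placeholders, not Paperclip's real API; the shape that matters is poll, work, report:

```python
import time

def heartbeat_loop(client, agent_id, do_work, interval_sec=60, max_beats=None):
    """Minimal agent loop: phone home, pick up work, report progress, sleep.

    `client` is any object with `get`/`post` methods; the paths below are
    illustrative placeholders for whatever API the control plane exposes.
    """
    beats = 0
    while max_beats is None or beats < max_beats:
        client.post(f"/agents/{agent_id}/heartbeat", {})    # liveness check-in
        task = client.get(f"/agents/{agent_id}/next-task")  # anything assigned?
        if task is not None:
            result = do_work(task)                          # the actual job
            client.post(f"/issues/{task['id']}/comments", {"body": result})
        beats += 1
        time.sleep(interval_sec)

class FakeClient:
    """In-memory stand-in so the loop can run without a server."""
    def __init__(self, tasks):
        self.tasks, self.posts = list(tasks), []
    def get(self, path):
        return self.tasks.pop(0) if self.tasks else None
    def post(self, path, body):
        self.posts.append((path, body))

client = FakeClient([{"id": "PAP-42"}])
heartbeat_loop(client, "agent-1", lambda t: f"done {t['id']}", interval_sec=0, max_beats=2)
```

The fake client exists only to make the sketch runnable; a real agent would use an HTTP client authenticated with its agent API key.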
@@ -32,14 +32,12 @@ Then you define who reports to the CEO: a CTO managing programmers, a CMO managi

### Agent Execution

Paperclip supports several ways to run an agent's heartbeat:
There are two fundamental modes for running an agent's heartbeat:

1. **Local CLI/session adapters** — Paperclip starts or resumes local coding-tool sessions such as Claude Code, Codex, Gemini, OpenCode, Pi, and Cursor, then tracks the run.
2. **Run a command** — Paperclip kicks off a process (shell command, Python script, etc.) and tracks it. The heartbeat is "execute this and monitor it."
3. **Fire and forget a request** — Paperclip sends a webhook/API call to an externally running agent. The heartbeat is "notify this agent to wake up." OpenClaw-style hooks work this way.
4. **External adapter plugins** — Paperclip loads adapter packages through the plugin/adapter flow so self-hosted installs can add runtimes without hardcoding them in core.
1. **Run a command** — Paperclip kicks off a process (shell command, Python script, etc.) and tracks it. The heartbeat is "execute this and monitor it."
2. **Fire and forget a request** — Paperclip sends a webhook/API call to an externally running agent. The heartbeat is "notify this agent to wake up." (OpenClaw hooks work this way.)

Agent runs can use project and execution workspaces, managed runtime services such as preview/dev servers, adapter-specific session state, and HTTP/webhook-style execution. We provide sensible defaults, but the adapter is still the boundary: if a runtime can be invoked, observed, and authorized, Paperclip can coordinate it.
We provide sensible defaults — a default agent that shells out to Claude Code or Codex with your configuration, remembers session IDs, runs basic scripts. But you can plug in anything.

### Task Management

@@ -56,7 +54,7 @@ I am researching the Facebook ads Granola uses (current task)

Tasks have parentage. Every task exists in service of a parent task, all the way up to the company goal. This is what keeps autonomous agents aligned — they can always answer "why am I doing this?"

The current issue model includes stable issue identifiers, parent/sub-issues, blockers, a single assignee, comments, issue documents, attachments and work products, and review/approval handoffs. That structure keeps work inspectable by both the board and agents while still allowing agents to decompose work into smaller tasks.
More detailed task structure TBD.

## Principles

@@ -117,7 +115,7 @@ Paperclip’s core identity is a **control plane for autonomous AI companies**,

- Do not make the core product a general chat app. The current product definition is explicitly task/comment-centric and “not a chatbot,” and that boundary is valuable.
- Do not build a complete Jira/GitHub replacement. The repo/docs already position Paperclip as organization orchestration, not focused on pull-request review.
- Do not build enterprise-grade RBAC first. Paperclip now has authenticated mode, company memberships, instance roles, and permission grants, but fine-grained enterprise governance should remain secondary to the core company control plane.
- Do not build enterprise-grade RBAC first. The current V1 spec still treats multi-board governance and fine-grained human permissions as out of scope, so the first multi-user version should be coarse and company-scoped.
- Do not lead with raw bash logs and transcripts. Default view should be human-readable intent/progress, with raw detail beneath.
- Do not force users to understand provider/API-key plumbing unless absolutely necessary. There are active onboarding/auth issues already; friction here is clearly real.

@@ -138,14 +136,11 @@ Paperclip’s core identity is a **control plane for autonomous AI companies**,
5. **Output-first**
   Work is not done until the user can see the result: file, document, preview link, screenshot, plan, or PR.

6. **Execution visibility without log worship**
   Active runs, recovery issues, productivity review states, blockers, and work products should be first-class surfaces. Raw transcripts are available when needed, but they are not the primary product surface.

7. **Local-first, cloud-ready**
6. **Local-first, cloud-ready**
   The mental model should not change between local solo use and shared/private or public/cloud deployment.

8. **Safe autonomy**
7. **Safe autonomy**
   Auto mode is allowed; hidden token burn is not.

9. **Thin core, rich edges**
8. **Thin core, rich edges**
   Put optional chat, knowledge, and special surfaces into plugins/extensions rather than bloating the control plane.

@@ -115,6 +115,38 @@ If the first real publish returns npm `E404`, check npm-side prerequisites befor
- The initial publish must include `--access public` for a public scoped package.
- npm also requires either account 2FA for publishing or a granular token that is allowed to bypass 2FA.

### Manual first publish for `@paperclipai/mcp-server`

If you need to publish only the MCP server package once by hand, use:

- `@paperclipai/mcp-server`

Recommended flow from the repo root:

```bash
# optional sanity check: this 404s until the first publish exists
npm view @paperclipai/mcp-server version

# make sure the build output is fresh
pnpm --filter @paperclipai/mcp-server build

# confirm your local npm auth before the real publish
npm whoami

# safe preview of the exact publish payload
cd packages/mcp-server
pnpm publish --dry-run --no-git-checks --access public

# real publish
pnpm publish --no-git-checks --access public
```

Notes:

- Publish from `packages/mcp-server/`, not the repo root.
- If `npm view @paperclipai/mcp-server version` already returns the same version that is in [`packages/mcp-server/package.json`](../packages/mcp-server/package.json), do not republish. Bump the version or use the normal repo-wide release flow in [`scripts/release.sh`](../scripts/release.sh).
- The same npm-side prerequisites apply as above: valid npm auth, permission to publish to the `@paperclipai` scope, `--access public`, and the required publish auth/2FA policy.

## Version formats

Paperclip uses calendar versions:
@@ -143,13 +175,6 @@ This keeps the default install path unchanged while allowing explicit installs w
npx paperclipai@canary onboard
```

The release script now verifies two things after a canary publish:

- the `canary` dist-tag resolves to the version that was just published
- every published internal `@paperclipai/*` dependency referenced by that manifest exists on npm

It also treats `latest -> canary` as a failure by default, because npm metadata can otherwise leave the default install path pointing at an unreleased canary dependency graph. Only pass `./scripts/release.sh canary --allow-canary-latest` when that `latest` behavior is explicitly intended.

### Stable

Stable publishes use the npm dist-tag `latest`.
@@ -176,58 +201,6 @@ That means:

See [doc/RELEASE-AUTOMATION-SETUP.md](RELEASE-AUTOMATION-SETUP.md) for the GitHub/npm setup steps.

## Release enrollment for new public packages

Paperclip does not auto-publish every non-private workspace package anymore.
CI publishing is controlled by [`scripts/release-package-manifest.json`](../scripts/release-package-manifest.json).

When you add a new public package:

1. add it to the manifest and decide whether CI should publish it immediately
2. if CI should publish it, bootstrap the package on npm before merge
3. if CI should not publish it yet, keep `"publishFromCi": false`
4. only enable `"publishFromCi": true` after npm trusted publishing is configured for that package

PR CI now checks changed release-enabled package manifests against npm. That catches a missing first-publish bootstrap before the change reaches `master`.

### One-time bootstrap sequence for a new package

The first publish of a brand-new package still needs one human maintainer with npm write access.
After that, trusted publishing can take over.

Example for `@paperclipai/adapter-acpx-local` from the repo root:

```bash
# safe preview
pnpm run release:bootstrap-package -- @paperclipai/adapter-acpx-local

# one-time first publish from an authenticated maintainer machine
pnpm run release:bootstrap-package -- @paperclipai/adapter-acpx-local --publish --otp 123456
```

The helper script:

- checks that the package does not already exist on npm
- builds the target package unless `--skip-build` is passed
- runs `npm pack --dry-run` in the package directory
- only runs the real `npm publish --access public` when `--publish --otp <code>` is provided

For the real `--publish` step, the maintainer machine must already be authenticated to npm.
If `npm whoami` returns `401`, first run `npm logout --registry=https://registry.npmjs.org/` to clear any stale local auth, then run `npm login` or `npm adduser` locally as an npm org member, and finally rerun the helper.
That local human auth is fine for the one-time bootstrap publish; we just do not want the same auth model inside CI.
The helper now requires `--otp <code>` up front for `--publish`, so it fails before the real publish attempt if the one-time password is missing.

After that first publish succeeds:

1. open `https://www.npmjs.com/package/@paperclipai/adapter-acpx-local`
2. go to `Settings` → `Trusted publishing`
3. add repository `paperclipai/paperclip`
4. set workflow filename to `release.yml`
5. optionally go to `Settings` → `Publishing access` and enable `Require two-factor authentication and disallow tokens`
6. keep `publishFromCi: true` in [`scripts/release-package-manifest.json`](../scripts/release-package-manifest.json)

Once those steps are done, future canary and stable publishes for that package are automated through GitHub OIDC. The manual step is only the first package creation on npm.

## Rollback model

Rollback does not unpublish anything.

@@ -67,27 +67,6 @@ Why:
- the single `release.yml` workflow handles both canary and stable publishing
- GitHub environments `npm-canary` and `npm-stable` still enforce different approval rules on the GitHub side

### 2.2.1. Newly added public packages need a bootstrap phase

Trusted publishing is configured on the npm package itself, not at the repo scope.
That means a brand-new public package must not be auto-enrolled into CI publishing until its npm package exists and its trusted publisher has been configured.

Repo policy:

1. add every non-private package to [`scripts/release-package-manifest.json`](../scripts/release-package-manifest.json)
2. set `"publishFromCi": true` only when CI is expected to publish that package
3. if the package is not ready for CI publishing yet, keep `"publishFromCi": false`
4. complete the package bootstrap before merging any PR that changes a release-enabled new package

Bootstrap sequence for a new package:

1. publish the package once from a trusted maintainer machine using normal npm auth
2. open that package on npm and add the `paperclipai/paperclip` trusted publisher for `.github/workflows/release.yml`
3. rerun or dry-run the release flow as needed to confirm CI publishing now works
4. only then enable `"publishFromCi": true`

PR CI enforces this by checking changed release-enabled package manifests against npm. That keeps `master` canary publishing healthy while preserving the no-long-lived-token model for normal CI releases.

### 2.3. Verify trusted publishing before removing old auth

After the workflows are live:

@@ -63,8 +63,6 @@ It:
- verifies the pushed commit
- computes the canary version for the current UTC date
- publishes under npm dist-tag `canary`
- verifies that `canary` resolves to the just-published version and that published internal dependencies exist on npm
- fails by default if npm leaves `latest` pointing at a canary; use `--allow-canary-latest` only when that state is intentional
- creates a git tag `canary/vYYYY.MDD.P-canary.N`
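The tag pattern above implies a calendar-version builder along these lines. This is a sketch inferred from the `canary/vYYYY.MDD.P-canary.N` pattern alone; the `MDD` derivation (month, then zero-padded day) is an assumption, not checked against `scripts/release.sh`:

```python
from datetime import date

def canary_version(utc_day: date, patch: int, seq: int) -> str:
    """Build a calendar version matching the pattern vYYYY.MDD.P-canary.N."""
    mdd = f"{utc_day.month}{utc_day.day:02d}"  # assumed: no leading zero on month
    return f"{utc_day.year}.{mdd}.{patch}-canary.{seq}"

v = canary_version(date(2026, 4, 28), patch=0, seq=1)
```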

Users install canaries with:

@@ -1,7 +1,7 @@
# Paperclip V1 Implementation Spec

Status: Implementation contract for first release (V1)
Date: 2026-04-28
Date: 2026-02-17
Audience: Product, engineering, and agent-integration authors
Source inputs: `GOAL.md`, `PRODUCT.md`, `SPEC.md`, `DATABASE.md`, current monorepo code

@@ -37,9 +37,8 @@ These decisions close open questions from `SPEC.md` for V1.
| Visibility | Full visibility to board and all agents in same company |
| Communication | Tasks + comments only (no separate chat system) |
| Task ownership | Single assignee; atomic checkout required for `in_progress` transition |
| Recovery | Liveness/watchdog recovery preserves explicit ownership: retry lost execution continuity where safe, otherwise create visible recovery issues or require human escalation (see `doc/execution-semantics.md`) |
| Agent adapters | Built-in `process`, `http`, local CLI/session adapters, and OpenClaw gateway support; external adapters can also be loaded through the adapter plugin flow |
| Plugin framework | Local/self-hosted early plugin runtime is in scope; cloud marketplace and packaged public distribution remain out of scope |
| Recovery | No automatic reassignment; work recovery stays manual/explicit |
| Agent adapters | Built-in `process` and `http` adapters |
| Auth | Mode-dependent human auth (`local_trusted` implicit board in current code; authenticated mode uses sessions), API keys for agents |
| Budget period | Monthly UTC calendar window |
| Budget enforcement | Soft alerts + hard limit auto-pause |
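
The monthly UTC calendar window in the budget-period row can be computed like this (a sketch; the function name is illustrative):

```python
from datetime import datetime, timezone

def budget_window(now: datetime):
    """Return the [start, end) UTC calendar-month window that contains `now`.

    Monthly counters such as `spent_monthly_cents` reset at `end`.
    """
    start = datetime(now.year, now.month, 1, tzinfo=timezone.utc)
    if now.month == 12:
        end = datetime(now.year + 1, 1, 1, tzinfo=timezone.utc)
    else:
        end = datetime(now.year, now.month + 1, 1, tzinfo=timezone.utc)
    return start, end

start, end = budget_window(datetime(2026, 12, 15, tzinfo=timezone.utc))
```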
@@ -74,7 +73,7 @@ V1 implementation extends this baseline into a company-centric, governance-aware

## 5.2 Out of Scope (V1)

- Cloud-grade plugin marketplace/distribution beyond the local/self-hosted plugin runtime
- Plugin framework and third-party extension SDK
- Revenue/expense accounting beyond model/token costs
- Knowledge base subsystem
- Public marketplace (ClipHub)
@@ -124,16 +123,6 @@ Human auth tables (`users`, `sessions`, and provider-specific auth artifacts) ar
- `name` text not null
- `description` text null
- `status` enum: `active | paused | archived`
- `pause_reason` text null
- `paused_at` timestamptz null
- `issue_prefix` text not null
- `issue_counter` int not null
- `budget_monthly_cents` int not null default 0
- `spent_monthly_cents` int not null default 0
- `attachment_max_bytes` int not null
- `require_board_approval_for_new_agents` boolean not null default false
- feedback sharing consent fields
- branding fields such as `brand_color`

Invariant: every business record belongs to exactly one company.
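The `issue_prefix`/`issue_counter` pair above implies per-company identifier allocation roughly like this. A sketch only: the real increment must happen atomically inside a database transaction, and the dict is a stand-in for the `companies` row:

```python
def next_issue_identifier(company: dict):
    """Allocate the next issue number and identifier from the company counter."""
    company["issue_counter"] += 1          # must be atomic in the real DB
    number = company["issue_counter"]
    return number, f"{company['issue_prefix']}-{number}"

acme = {"issue_prefix": "ACME", "issue_counter": 41}
issue_number, identifier = next_issue_identifier(acme)
```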
@@ -144,21 +133,15 @@ Invariant: every business record belongs to exactly one company.
- `name` text not null
- `role` text not null
- `title` text null
- `icon` text null
- `status` enum: `active | paused | idle | running | error | pending_approval | terminated`
- `status` enum: `active | paused | idle | running | error | terminated`
- `reports_to` uuid fk `agents.id` null
- `capabilities` text null
- `adapter_type` text; built-ins include `process`, `http`, `claude_local`, `codex_local`, `gemini_local`, `opencode_local`, `pi_local`, `cursor`, and `openclaw_gateway`
- `adapter_type` enum: `process | http`
- `adapter_config` jsonb not null
- `runtime_config` jsonb not null default `{}`; may include Paperclip runtime policy such as `modelProfiles.cheap.adapterConfig` for an optional low-cost model lane that does not change the primary adapter config
- `default_environment_id` uuid fk `environments.id` null
- `context_mode` enum: `thin | fat` default `thin`
- `budget_monthly_cents` int not null default 0
- `spent_monthly_cents` int not null default 0
- pause fields: `pause_reason`, `paused_at`
- `permissions` jsonb not null default `{}`
- `last_heartbeat_at` timestamptz null
- `metadata` jsonb null

Invariants:
@@ -212,7 +195,6 @@ Invariant:
- `id` uuid pk
- `company_id` uuid fk not null
- `project_id` uuid fk `projects.id` null
- `project_workspace_id` uuid fk `project_workspaces.id` null
- `goal_id` uuid fk `goals.id` null
- `parent_id` uuid fk `issues.id` null
- `title` text not null
@@ -220,22 +202,13 @@ Invariant:
- `status` enum: `backlog | todo | in_progress | in_review | done | blocked | cancelled`
- `priority` enum: `critical | high | medium | low`
- `assignee_agent_id` uuid fk `agents.id` null
- `assignee_user_id` text null
- checkout/execution locks: `checkout_run_id`, `execution_run_id`, `execution_agent_name_key`, `execution_locked_at`
- `created_by_agent_id` uuid fk `agents.id` null
- `created_by_user_id` uuid fk `users.id` null
- identifier fields: `issue_number`, `identifier`
- origin fields: `origin_kind`, `origin_id`, `origin_run_id`, `origin_fingerprint`
- `request_depth` int not null default 0
- `billing_code` text null
- `assignee_adapter_overrides` jsonb null
- `execution_policy` jsonb null
- `execution_state` jsonb null
- execution workspace fields: `execution_workspace_id`, `execution_workspace_preference`, `execution_workspace_settings`
- `started_at` timestamptz null
- `completed_at` timestamptz null
- `cancelled_at` timestamptz null
- `hidden_at` timestamptz null

Invariants:
@@ -288,10 +261,10 @@ Invariant: each event must attach to agent and company; rollups are aggregation,

- `id` uuid pk
- `company_id` uuid fk not null
- `type` enum: `hire_agent | approve_ceo_strategy | budget_override_required | request_board_approval`
- `type` enum: `hire_agent | approve_ceo_strategy`
- `requested_by_agent_id` uuid fk `agents.id` null
- `requested_by_user_id` uuid fk `users.id` null
- `status` enum: `pending | revision_requested | approved | rejected | cancelled`
- `status` enum: `pending | approved | rejected | cancelled`
- `payload` jsonb not null
- `decision_note` text null
- `decided_by_user_id` uuid fk `users.id` null
@@ -390,15 +363,6 @@ Operational policy:
- `document_id` uuid fk not null
- `key` text not null (`plan`, `design`, `notes`, etc.)

## 7.16 Current Implementation Addenda

The current implementation includes additional V1-control-plane tables beyond the original February snapshot:

- Issue structure and review: `issue_relations` for blockers, `labels`/`issue_labels`, `issue_thread_interactions`, `issue_approvals`, `issue_execution_decisions`, `issue_work_products`, `issue_inbox_archives`, `issue_read_states`, and issue reference mention indexes.
- Execution and workspace control: `execution_workspaces`, `project_workspaces`, `workspace_runtime_services`, `workspace_operations`, `environments`, `environment_leases`, `agent_task_sessions`, `agent_runtime_state`, `agent_wakeup_requests`, heartbeat events, and watchdog decision tables.
- Plugins and routines: `plugins`, plugin config/state/entities/jobs/logs/webhooks, plugin database namespaces/migrations, plugin company settings, and `routines`.
- Access and operations: company memberships, instance roles, principal permission grants, invites, join requests, board API keys, CLI auth challenges, budget policies/incidents, feedback exports/votes, company skills, sidebar preferences, and company logos.

## 8. State Machines

## 8.1 Agent Status
@@ -431,14 +395,7 @@ Side effects:
- entering `done` sets `completed_at`
- entering `cancelled` sets `cancelled_at`
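Together with `started_at` being set on successful checkout, these side effects reduce to a small status-to-timestamp map. The field names come from this spec; the hook function itself is an illustrative sketch:

```python
from datetime import datetime, timezone

# Timestamp field written when an issue enters each status (per the spec's side effects).
STATUS_TIMESTAMPS = {
    "in_progress": "started_at",
    "done": "completed_at",
    "cancelled": "cancelled_at",
}

def apply_status_side_effects(issue: dict, new_status: str, now=None) -> dict:
    """Set the status and stamp the matching timestamp field, if any."""
    issue["status"] = new_status
    field = STATUS_TIMESTAMPS.get(new_status)
    if field:
        issue[field] = now or datetime.now(timezone.utc)
    return issue

t = datetime(2026, 4, 28, tzinfo=timezone.utc)
issue = apply_status_side_effects({"status": "in_review"}, "done", now=t)
```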

V1 non-terminal liveness rule:

- agent-owned `todo`, `in_progress`, `in_review`, and `blocked` issues must have a live execution path, an explicit waiting path, or an explicit recovery path
- `in_review` is healthy only when a typed execution participant, pending issue-thread interaction or approval, user owner, active run, queued wake, or explicit recovery issue owns the next action
- a blocked chain is covered only when each unresolved leaf issue is live or explicitly waiting
- when Paperclip cannot safely infer the next action, it surfaces the problem through visible blocked/recovery work instead of silently completing or reassigning work

Detailed ownership, execution, blocker, active-run watchdog, crash-recovery, and non-terminal liveness semantics are documented in `doc/execution-semantics.md`.
Detailed ownership, execution, blocker, and crash-recovery semantics are documented in `doc/execution-semantics.md`.

## 8.3 Approval Status
@@ -527,7 +484,6 @@ All endpoints are under `/api` and return JSON.
- `DELETE /issues/:issueId/documents/:key`
- `POST /issues/:issueId/checkout`
- `POST /issues/:issueId/release`
- `POST /issues/:issueId/admin/force-release` (board-only lock recovery)
- `POST /issues/:issueId/comments`
- `GET /issues/:issueId/comments`
- `POST /companies/:companyId/issues/:issueId/attachments` (multipart upload)

@@ -552,8 +508,6 @@ Server behavior:
2. if updated row count is 0, return `409` with current owner/status
3. successful checkout sets `assignee_agent_id`, `status = in_progress`, and `started_at`
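The compare-and-set behavior behind steps 2 and 3 can be sketched with a conditional UPDATE. SQLite stands in for PostgreSQL here, the columns are trimmed to the ones these steps mention, and the WHERE clause is a simplified guess at the claimability condition:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE issues (
        id TEXT PRIMARY KEY,
        status TEXT NOT NULL,
        assignee_agent_id TEXT,
        started_at TEXT
    )
""")
conn.execute("INSERT INTO issues VALUES ('PAP-1', 'todo', NULL, NULL)")

def checkout(conn, issue_id, agent_id, now):
    """Atomic checkout: the UPDATE only succeeds if the issue is still claimable."""
    cur = conn.execute(
        """
        UPDATE issues
           SET assignee_agent_id = ?, status = 'in_progress', started_at = ?
         WHERE id = ? AND status = 'todo' AND assignee_agent_id IS NULL
        """,
        (agent_id, now, issue_id),
    )
    # rowcount 0 means another agent won the race -> the API returns 409
    return cur.rowcount == 1

first = checkout(conn, "PAP-1", "agent-a", "2026-04-28T00:00:00Z")
second = checkout(conn, "PAP-1", "agent-b", "2026-04-28T00:00:01Z")
```

Because the guard lives in the UPDATE's WHERE clause, two concurrent checkouts cannot both succeed; the loser observes row count 0 rather than overwriting the winner's assignment.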

`POST /issues/:issueId/admin/force-release` is an operator recovery endpoint for stale harness locks. It requires board access to the issue company, clears checkout and execution run lock fields, and may clear the agent assignee when `clearAssignee=true` is passed. The route must write an `issue.admin_force_release` activity log entry containing the previous checkout and execution run IDs.

## 10.5 Projects

- `GET /companies/:companyId/projects`

@@ -599,17 +553,6 @@ Dashboard payload must include:
- `422` semantic rule violation
- `500` server error

## 10.10 Current Implementation API Addenda

The current app also exposes V1-supporting surfaces for:

- issue thread interactions (`suggest_tasks`, `ask_user_questions`, `request_confirmation`)
- issue approvals, issue references/search, labels, read state, inbox/archive state, and work products
- execution workspaces, project workspaces, workspace runtime services, and workspace operations
- routines and scheduled/API/webhook triggers
- plugin installation, configuration, state, jobs, logs, webhooks, and plugin database namespace migration
- company import/export preview/apply, feedback export/vote routes, instance backup/config routes, invites, join requests, memberships, and permission grants

## 11. Heartbeat and Adapter Contract

## 11.1 Adapter Interface
@@ -676,7 +619,7 @@ Per-agent schedule fields in `adapter_config`:

- `enabled` boolean
- `intervalSec` integer (minimum 30)
- `maxConcurrentRuns` integer; new agents default to `20`; scheduler clamps configured values to `1..50`
- `maxConcurrentRuns` fixed at `1` for V1
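
The default-and-clamp rule for `maxConcurrentRuns` reduces to a few lines (a sketch; constant and function names are illustrative):

```python
DEFAULT_MAX_CONCURRENT_RUNS = 20  # default for newly created agents
MIN_RUNS, MAX_RUNS = 1, 50        # scheduler clamp bounds

def effective_max_concurrent_runs(configured=None):
    """Apply the scheduler's clamp to a configured maxConcurrentRuns value."""
    if configured is None:
        return DEFAULT_MAX_CONCURRENT_RUNS
    return max(MIN_RUNS, min(MAX_RUNS, configured))

values = [effective_max_concurrent_runs(v) for v in (None, 0, 7, 999)]
```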

Scheduler must skip invocation when:

@@ -785,14 +728,13 @@ Required UX behaviors:

- Node 20+
- `DATABASE_URL` optional
  - if unset, auto-use embedded PostgreSQL under `~/.paperclip/instances/default/db`
  - if unset, auto-use PGlite and push schema

## 15.2 Migrations

- Drizzle migrations are source of truth
- local/dev startup applies pending migrations automatically where supported
- `pnpm db:migrate` applies pending migrations manually
- no destructive migration in-place for V1 upgrade path
- provide migration script from existing minimal tables to company-scoped schema

## 15.3 Logging and Audit
@@ -847,8 +789,6 @@ A release candidate is blocked unless these pass:

## 18. Delivery Plan

Current implementation note: the milestones below describe the original V1 sequencing. Several systems originally framed as future work have since shipped or advanced materially, including issue documents/interactions, blockers, routines, execution workspaces, import/export portability, authenticated deployment modes, multi-user basics, and the local/self-hosted plugin runtime.

## Milestone 1: Company Core and Auth

- add `companies` and company scoping to existing entities
@@ -901,7 +841,7 @@ V1 is complete only when all criteria are true:

## 20. Post-V1 Backlog (Explicitly Deferred)

- cloud-grade plugin marketplace/distribution
- plugin architecture
- richer workflow-state customization per team
- milestones/labels/dependency graph depth beyond V1 minimum
- realtime transport optimization (SSE/WebSockets)

(binary image files removed in this diff)

@@ -1,7 +1,7 @@
# Execution Semantics

Status: Current implementation guide
Date: 2026-04-26
Date: 2026-04-13
Audience: Product and engineering

This document explains how Paperclip interprets issue assignment, issue status, execution runs, wakeups, parent/sub-issue structure, and blocker relationships.
@@ -67,15 +67,13 @@ This is the right state for:

- waiting on another issue
- waiting on a human decision
- waiting on an external dependency or system when Paperclip does not own a scheduled re-check
- waiting on an external dependency or system
- work that automatic recovery could not safely continue

### `in_review`

Execution work is paused because the next move belongs to a reviewer or approver, not the current executor.

An external review service can also be a valid review path when the issue keeps an agent assignee and has an active one-shot monitor that will wake that assignee to check the service later.

### `done`

The work is complete and terminal.

@@ -148,28 +146,13 @@ Use it for:

- explicit waiting relationships
- automatic wakeups when all blockers resolve

Blocked issues should stay idle while blockers remain unresolved. Paperclip should not create a queued heartbeat run for that issue until the final blocker is done and the `issue_blockers_resolved` wake can start real work.

If a parent is truly waiting on a child, model that with blockers. Do not rely on the parent/child relationship alone.
## 7. Non-Terminal Issue Liveness Contract
## 7. Consistent Execution Path Rules

For agent-owned, non-terminal issues, Paperclip should never leave work in a state where nobody is responsible for the next move and nothing will wake or surface it.
For agent-assigned, non-terminal, actionable issues, Paperclip should not leave work in a state where nobody is working it and nothing will wake it.

This is a visibility contract, not an auto-completion contract. If Paperclip cannot safely infer the next action, it should surface the ambiguity with a blocked state, a visible comment, or an explicit recovery issue. It must not silently mark work done from prose comments or guess that a dependency is complete.

An issue is healthy when the product can answer "what moves this forward next?" without requiring a human to reconstruct intent from the whole thread. An issue is stalled when it is non-terminal but has no live execution path, no explicit waiting path, and no recovery path.

The valid action-path primitives are:

- an active run linked to the issue
- a queued wake or continuation that can be delivered to the responsible agent
- a typed execution-policy participant, such as `executionState.currentParticipant`
- a pending issue-thread interaction or linked approval that is waiting for a specific responder
- a one-shot issue monitor (`executionPolicy.monitor.nextCheckAt`) that will wake the assignee for a future check
- a human owner via `assigneeUserId`
- a first-class blocker chain whose unresolved leaf issues are themselves healthy
- an open explicit recovery issue that names the owner and action needed to restore liveness

The relevant execution path depends on status.

### Agent-assigned `todo`
@@ -177,21 +160,9 @@ This is dispatch state: ready to start, not yet actively claimed.

A healthy dispatch state means at least one of these is true:

- the issue already has a queued wake path
- the issue is intentionally resting in `todo` after a completed agent heartbeat, with no interrupted dispatch evidence
- the issue has been explicitly surfaced as stranded through a visible blocked/recovery path

An assigned `todo` issue is stalled when dispatch was interrupted, no wake remains queued or running, and no recovery path has been opened.

### Agent-assigned `backlog`

This is parked state, not dispatch state.

Assigning an issue normally implies executable intent. When create APIs receive an assignee and no explicit status, Paperclip defaults the issue to `todo` so the assignee has a wake path instead of silently inheriting the unassigned `backlog` default.
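The default-status rule above can be sketched as a small resolver. This is an illustrative assumption about where the rule lives, not the actual create-API code; only the status names and the "assignment implies executable intent" behavior come from this document.

```typescript
// Hypothetical helper sketching the create-API default described above.
// An explicit status always wins; otherwise an assignee implies executable
// intent and the issue lands in `todo` instead of `backlog`.
type IssueStatus =
  | "backlog"
  | "todo"
  | "in_progress"
  | "blocked"
  | "in_review"
  | "done"
  | "cancelled";

function defaultIssueStatus(
  explicitStatus: IssueStatus | undefined,
  assigneeId: string | undefined,
): IssueStatus {
  if (explicitStatus) return explicitStatus; // explicit parking stays parked
  return assigneeId ? "todo" : "backlog";    // assignment implies a wake path
}
```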
An explicit assigned `backlog` issue remains valid when the creator is deliberately parking the work. It must not wake the assignee just because it has an assignee. Paperclip should make that choice visible in activity and UI so operators can distinguish intentional parking from a missed handoff.

An assigned `backlog` issue becomes a liveness problem when another issue is blocked on it and there is no explicit waiting path such as a human owner, active run, queued wake, pending interaction or approval, monitor, or open recovery issue. In that case the blocked parent should surface "blocked by parked work" rather than treating the dependency chain as healthy.
- the issue already has a queued/running wake path
- the issue is intentionally resting in `todo` after a successful agent heartbeat, not after an interrupted dispatch
- the issue has been explicitly surfaced as stranded

### Agent-assigned `in_progress`
@@ -201,63 +172,7 @@ A healthy active-work state means at least one of these is true:

- there is an active run for the issue
- there is already a queued continuation wake
- there is an active one-shot monitor that will wake the assignee for a future check
- there is an open explicit recovery issue for the lost execution path

An agent-owned `in_progress` issue is stalled when it has no active run, no queued continuation, and no explicit recovery surface. A still-running but silent process is not automatically stalled; it is handled by the active-run watchdog contract.

### `in_review`

This is review/approval state: execution is paused because the next move belongs to a reviewer, approver, board user, or recovery owner.

A healthy `in_review` issue has at least one valid action path:

- a typed execution-policy participant who can approve or request changes
- a pending issue-thread interaction or linked approval waiting for a named responder
- a human owner via `assigneeUserId`
- an active run or queued wake that is expected to process the review state
- an active one-shot monitor for an external service or async review loop that the assignee owns
- an open explicit recovery issue for an ambiguous review handoff

Agent-assigned `in_review` with no typed participant is only healthy when one of the other paths exists. Assignment to the same agent that produced the handoff is not, by itself, a review path.

An `in_review` issue is stalled when it has no typed participant, no pending interaction or approval, no user owner, no active monitor, no active run, no queued wake, and no explicit recovery issue. Paperclip should surface that state as recovery work rather than silently completing the issue or leaving blocker chains parked indefinitely.
### Issue monitors

An issue monitor is a one-shot deferred action path for agent-owned issues in `in_progress` or `in_review`.

Use a monitor when the current assignee owns a future check against an async system or external service. Examples include Greptile review loops, GitHub checks, Vercel deployments, or provider jobs where the agent should come back later and decide what happens next.

Monitor policy lives under `executionPolicy.monitor` and includes:

- `nextCheckAt`: when Paperclip should wake the assignee
- `notes`: non-secret instructions for what the assignee should check
- `serviceName`: optional non-secret external-service context
- `externalRef`: optional external-service reference input; Paperclip treats it as secret-adjacent, redacts it before persistence/visibility, and omits it from activity and wake payloads
- `timeoutAt`, `maxAttempts`, and `recoveryPolicy`: optional recovery hints for bounded waits
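The fields above can be sketched as a TypeScript shape. The field names and recovery-policy values come from this document; the concrete types are assumptions for illustration, not the real `executionPolicy` type.

```typescript
// Hypothetical shape for the monitor fields listed above (an assumption,
// not Paperclip source).
type MonitorRecoveryPolicy =
  | "wake_owner"
  | "create_recovery_issue"
  | "escalate_to_board";

interface IssueMonitorPolicy {
  nextCheckAt: string;                    // ISO timestamp for the one-shot wake
  notes?: string;                         // non-secret check instructions
  serviceName?: string;                   // non-secret external-service context
  externalRef?: string;                   // redacted before persistence/visibility
  timeoutAt?: string;                     // optional hard bound on re-arms
  maxAttempts?: number;                   // optional re-arm budget
  recoveryPolicy?: MonitorRecoveryPolicy; // what happens when bounds exhaust
}

// Example: arm a one-shot check for a still-pending external review.
const monitor: IssueMonitorPolicy = {
  nextCheckAt: new Date(Date.now() + 15 * 60 * 1000).toISOString(),
  notes: "Check external review status and summarize the verdict",
  serviceName: "greptile",
  maxAttempts: 6,
  recoveryPolicy: "create_recovery_issue",
};
```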
Monitors are not recurring intervals. When a monitor fires, Paperclip clears the scheduled monitor and queues an `issue_monitor_due` wake for the assignee. If the external service is still pending, the assignee must explicitly re-arm the monitor with a new `nextCheckAt`. If the issue moves to `done`, `cancelled`, an invalid status, or a human/unassigned owner, the monitor is cleared.

Because `serviceName` and `notes` remain visible in issue activity and wake context, operators should keep them short and non-secret. Put enough context for the assignee to know what to inspect, but do not include signed URLs, bearer tokens, customer secrets, tenant-private identifiers, or provider links with embedded credentials.

Monitor bounds are enforced. Paperclip rejects attempts to re-arm a monitor whose `timeoutAt` or `maxAttempts` is already exhausted. When a scheduled monitor reaches an exhausted bound at trigger time, Paperclip clears it and follows `recoveryPolicy`: `wake_owner` queues a bounded recovery wake for the assignee, `create_recovery_issue` opens visible recovery work, and `escalate_to_board` records a board-visible escalation comment/activity.

Use `blocked` instead of a monitor when no Paperclip assignee owns a responsible polling path. In that case, name the external owner/action or create first-class recovery/blocker work.

### `blocked`

This is explicit waiting state.

A healthy `blocked` issue has an explicit waiting path:

- first-class blockers exist, and each unresolved leaf has a valid action path under this contract
- the issue is blocked on an explicit recovery issue that itself has a live or waiting path
- the issue is waiting on a pending interaction, linked approval, human owner, or clearly named external owner/action

A blocker chain is covered only when its unresolved leaf is live or explicitly waiting. An intermediate `blocked` issue does not make the chain healthy by itself.

A `blocked` issue is stalled when the unresolved blocker leaf has no active run, queued wake, typed participant, pending interaction or approval, user owner, external owner/action, or recovery issue. In that case the parent should show the first stalled leaf instead of presenting the dependency as calmly covered.
- the issue has been explicitly surfaced as stranded
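The leaf-coverage rule above can be sketched as a recursive walk over unresolved blockers. The issue shape here is an assumption; `hasActionPath` stands in for the full list of action-path primitives from section 7.

```typescript
// Hedged sketch of "show the first stalled leaf". An intermediate blocked
// issue never makes the chain healthy by itself; only its unresolved leaves
// count. Shapes are hypothetical, not Paperclip source.
interface BlockedIssue {
  id: string;
  hasActionPath: boolean;   // run, wake, participant, owner, monitor, or recovery issue
  blockers: BlockedIssue[]; // unresolved blockers only
}

function firstStalledLeaf(issue: BlockedIssue): BlockedIssue | null {
  if (issue.blockers.length === 0) {
    // A leaf is covered only when it is live or explicitly waiting.
    return issue.hasActionPath ? null : issue;
  }
  for (const blocker of issue.blockers) {
    const stalled = firstStalledLeaf(blocker);
    if (stalled) return stalled;
  }
  return null; // every unresolved leaf is live or explicitly waiting
}
```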
## 8. Crash and Restart Recovery

@@ -301,83 +216,15 @@ This is an active-work continuity recovery.

Startup recovery and periodic recovery are different from normal wakeup delivery.

On startup and on the periodic recovery loop, Paperclip now does four things in sequence:
On startup and on the periodic recovery loop, Paperclip now does three things in sequence:

1. reap orphaned `running` runs
2. resume persisted `queued` runs
3. reconcile stranded assigned work
4. scan silent active runs and create or update explicit watchdog review issues

The stranded-work pass closes the gap where issue state survives a crash but the wake/run path does not. The silent-run scan covers the separate case where a live process exists but has stopped producing observable output.
That last step is what closes the gap where issue state survives a crash but the wake/run path does not.
## 10. Silent Active-Run Watchdog

An active run can still be unhealthy even when its process is `running`. Paperclip treats prolonged output silence as a watchdog signal, not as proof that the run has failed.

The recovery service owns this contract:

- classify active-run output silence as `ok`, `suspicious`, `critical`, `snoozed`, or `not_applicable`
- collect bounded evidence from run logs, recent run events, child issues, and blockers
- preserve redaction and truncation before evidence is written to issue descriptions
- create at most one open `stale_active_run_evaluation` issue per run
- honor active snooze decisions before creating more review work
- build the `outputSilence` summary shown by live-run and active-run API responses

Suspicious silence creates a medium-priority review issue for the selected recovery owner. Critical silence raises that review issue to high priority and blocks the source issue on the explicit evaluation task without cancelling the active process.

Watchdog decisions are explicit operator/recovery-owner decisions:

- `snooze` records an operator-chosen future quiet-until time and suppresses scan-created review work during that window
- `continue` records that the current evidence is acceptable, does not cancel or mutate the active run, and sets a 30-minute default re-arm window before the watchdog evaluates the still-silent run again
- `dismissed_false_positive` records why the review was not actionable

Operators should prefer `snooze` for known time-bounded quiet periods. `continue` is only a short acknowledgement of the current evidence; if the run remains silent after the re-arm window, the periodic watchdog scan can create or update review work again.

The board can record watchdog decisions. The assigned owner of the watchdog evaluation issue can also record them. Other agents cannot.
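The classification contract above can be sketched as a pure function. The class names come from this document; the thresholds and input shape are invented for illustration (the real service derives them from run policy and evidence).

```typescript
// Hedged sketch of silence classification. The 30/120-minute thresholds are
// assumptions, not Paperclip's actual policy values.
type SilenceClass = "ok" | "suspicious" | "critical" | "snoozed" | "not_applicable";

function classifySilence(opts: {
  isActive: boolean;          // is the run process still running?
  minutesSinceOutput: number; // minutes since the last observable output
  snoozedUntil?: Date;        // active operator snooze, if any
  now?: Date;
}): SilenceClass {
  const now = opts.now ?? new Date();
  if (!opts.isActive) return "not_applicable";
  if (opts.snoozedUntil && opts.snoozedUntil > now) return "snoozed";
  if (opts.minutesSinceOutput >= 120) return "critical";   // assumed threshold
  if (opts.minutesSinceOutput >= 30) return "suspicious";  // assumed threshold
  return "ok";
}
```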
## 11. Auto-Recover vs Explicit Recovery vs Human Escalation

Paperclip uses three different recovery outcomes, depending on how much it can safely infer.

### Auto-Recover

Auto-recovery is allowed when ownership is clear and the control plane only lost execution continuity.

Examples:

- requeue one dispatch wake for an assigned `todo` issue whose latest run failed, timed out, or was cancelled
- requeue one continuation wake for an assigned `in_progress` issue whose live execution path disappeared
- assign an orphan blocker back to its creator when that blocker is already preventing other work

Auto-recovery preserves the existing owner. It does not choose a replacement agent.

### Explicit Recovery Issue

Paperclip creates an explicit recovery issue when the system can identify a problem but cannot safely complete the work itself.

Examples:

- automatic stranded-work retry was already exhausted
- a dependency graph has an invalid/uninvokable owner, unassigned blocker, or invalid review participant
- an active run is silent past the watchdog threshold

The source issue remains visible and blocked on the recovery issue when blocking is necessary for correctness. The recovery owner must restore a live path, resolve the source issue manually, or record the reason it is a false positive.

Instance-level issue-graph liveness auto-recovery is disabled by default. When enabled, its lookback window means "dependency paths updated within the last N hours"; older findings remain advisory and are counted as outside the configured lookback instead of creating recovery issues automatically. This is an operator noise control, not the older staleness delay for determining whether a chain is old enough to surface.

### Human Escalation

Human escalation is required when the next safe action depends on board judgment, budget/approval policy, or information unavailable to the control plane.

Examples:

- all candidate recovery owners are paused, terminated, pending approval, or budget-blocked
- the issue is human-owned rather than agent-owned
- the run is intentionally quiet but needs an operator decision before cancellation or continuation

In these cases Paperclip should leave a visible issue/comment trail instead of silently retrying.

## 12. What This Does Not Mean
## 10. What This Does Not Mean

These semantics do not change V1 into an auto-reassignment system.

@@ -391,10 +238,9 @@ The recovery model is intentionally conservative:

- preserve ownership
- retry once when the control plane lost execution continuity
- create explicit recovery work when the system can identify a bounded recovery owner/action
- escalate visibly when the system cannot safely keep going

## 13. Practical Interpretation
## 11. Practical Interpretation

For a board operator, the intended meaning is:
@@ -10,12 +10,7 @@ It is intentionally narrower than [PLUGIN_SPEC.md](./PLUGIN_SPEC.md). The spec i

- Plugin UI runs as same-origin JavaScript inside the main Paperclip app.
- Worker-side host APIs are capability-gated.
- Plugin UI is not sandboxed by manifest capabilities.
- Plugin database migrations are restricted to a host-derived plugin namespace.
- Plugin-owned JSON API routes must be declared in the manifest and are mounted only under `/api/plugins/:pluginId/api/*`.
- The host provides a small shared React component kit through `@paperclipai/plugin-sdk/ui`; use it for common Paperclip controls before building custom versions.
- There is no host-provided shared React component kit for plugins yet.
- `ctx.assets` is not supported in the current runtime.

## Scaffold a plugin

@@ -82,14 +77,11 @@ Worker:

- secrets
- activity
- state
- database namespace via `ctx.db`
- scoped JSON API routes declared with `apiRoutes`
- entities
- projects, project workspaces, and plugin-managed projects
- projects and project workspaces
- companies
- issues, comments, namespaced `plugin:<pluginKey>` origins, blocker relations, checkout assertions, assignment wakeups, and orchestration summaries
- agents, plugin-managed agents, and agent sessions
- plugin-managed routines
- issues and comments
- agents and agent sessions
- goals
- data/actions
- streams

@@ -97,210 +89,6 @@ Worker:

- metrics
- logger
### Plugin database declarations

First-party or otherwise trusted orchestration plugins can declare:

```ts
database: {
  migrationsDir: "migrations",
  coreReadTables: ["issues"],
}
```

Required capabilities are `database.namespace.migrate` and `database.namespace.read`; add `database.namespace.write` for runtime mutations. The host derives `ctx.db.namespace`, runs SQL files in filename order before the worker starts, records checksums in `plugin_migrations`, and rejects changed already-applied migrations.

Migration SQL may create or alter objects only inside `ctx.db.namespace`. It may reference whitelisted `public` core tables for foreign keys or read-only views, but it may not mutate, alter, drop, or truncate public tables, and it may not create extensions, triggers, or untrusted languages, or run multi-statement SQL at runtime. Runtime `ctx.db.query()` is restricted to `SELECT`; runtime `ctx.db.execute()` is restricted to namespace-local `INSERT`, `UPDATE`, and `DELETE`.
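A worker that respects these restrictions might look like the sketch below. The `ctx.db.query`/`ctx.db.execute` call shapes are assumptions based on the description above, not a verbatim SDK signature; only the SELECT-only and namespace-local-write rules come from the text.

```typescript
// Hedged sketch of runtime database use under the restrictions above.
interface PluginDb {
  namespace: string; // host-derived plugin namespace
  query(sql: string, params?: unknown[]): Promise<{ rows: Array<Record<string, unknown>> }>;
  execute(sql: string, params?: unknown[]): Promise<void>;
}

async function recordScan(db: PluginDb, issueId: string): Promise<void> {
  // Runtime reads must be SELECT statements.
  const existing = await db.query(
    `SELECT id FROM ${db.namespace}.scans WHERE issue_id = $1`,
    [issueId],
  );
  if (existing.rows.length === 0) {
    // Runtime writes are namespace-local INSERT/UPDATE/DELETE only.
    await db.execute(
      `INSERT INTO ${db.namespace}.scans (issue_id) VALUES ($1)`,
      [issueId],
    );
  }
}
```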
### Scoped plugin API routes

Plugins can expose JSON-only routes under their own namespace:

```ts
apiRoutes: [
  {
    routeKey: "initialize",
    method: "POST",
    path: "/issues/:issueId/smoke",
    auth: "board-or-agent",
    capability: "api.routes.register",
    checkoutPolicy: "required-for-agent-in-progress",
    companyResolution: { from: "issue", param: "issueId" },
  },
]
```

The host resolves the plugin, checks that it is ready, enforces `api.routes.register`, matches the declared method/path, resolves company access, and applies checkout policy before dispatching to the worker's `onApiRequest` handler. The worker receives sanitized headers, route params, query, parsed JSON body, actor context, and company id. Do not use plugin routes to claim core paths; they always remain under `/api/plugins/:pluginId/api/*`.
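A worker-side handler for the route declared above might look like this sketch. The exact `onApiRequest` request/response shapes are assumptions; only the dispatch flow and the `/api/plugins/:pluginId/api/*` mount come from the text.

```typescript
// Hypothetical request shape delivered to the worker after the host has
// enforced capability, auth, company resolution, and checkout policy.
interface PluginApiRequest {
  routeKey: string;
  params: Record<string, string>; // matched from the declared path
  query: Record<string, string>;
  body: unknown;                  // parsed JSON body
  companyId: string;
}

function onApiRequest(req: PluginApiRequest): { status: number; body: unknown } {
  if (req.routeKey !== "initialize") {
    return { status: 404, body: { error: "unknown route" } };
  }
  // Route params come pre-matched from /issues/:issueId/smoke.
  return {
    status: 200,
    body: { issueId: req.params.issueId, companyId: req.companyId },
  };
}
```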
## Managed Paperclip resources

Plugins that provide durable Paperclip business objects should declare them in the manifest and let the host create or relink the actual records per company. Do this for plugin-owned agents, plugin-owned projects, and recurring automation. Do not hide long-lived work behind private plugin state when it should be visible to the board, scoped to a company, audited, budgeted, and assigned like normal Paperclip work.

Use these surfaces:

- Managed agents: declare top-level `agents[]` and require `agents.managed`. Use this when the plugin provides a named worker the board should see in the org, budget, pause, invoke, and inspect. Managed agents are normal Paperclip agents with plugin ownership metadata, not background plugin workers.
- Managed projects: declare top-level `projects[]` and require `projects.managed`. Use this when the plugin needs a stable company-scoped project for its issues, routines, or workspace-oriented UI. Keep plugin work in a project instead of scattering generated issues across unrelated projects.
- Managed routines: declare top-level `routines[]` and require `routines.managed`. Use this for scheduled, webhook, or manually triggered jobs that should create visible Paperclip issues. Prefer managed routines over plugin `jobs[]` for recurring business work; plugin jobs are for plugin runtime maintenance that does not need a board-visible task trail.

Managed resources are resolved by stable plugin keys, not hardcoded database ids. In a worker action or data handler, call `ctx.agents.managed.reconcile()`, `ctx.projects.managed.reconcile()`, and `ctx.routines.managed.reconcile()` for the current `companyId`. `reconcile()` creates the missing resource, relinks a recoverable binding, or returns the existing resource. `reset()` reapplies the manifest defaults when the operator wants to restore the plugin's suggested configuration.

Declare dependencies between managed resources with refs. A routine can point at a managed agent through `assigneeRef` and at a managed project through `projectRef`. Reconcile the referenced agent and project before reconciling the routine; if a ref is still missing, the routine resolution reports `missing_refs` instead of guessing.
```ts
import type { PaperclipPluginManifestV1 } from "@paperclipai/plugin-sdk";

const manifest: PaperclipPluginManifestV1 = {
  id: "example.research-plugin",
  apiVersion: 1,
  version: "0.1.0",
  displayName: "Research Plugin",
  description: "Creates a managed research agent and scheduled research routine.",
  author: "Example",
  categories: ["automation"],
  capabilities: [
    "agents.managed",
    "projects.managed",
    "routines.managed",
    "instance.settings.register",
  ],
  entrypoints: {
    worker: "./dist/worker.js",
    ui: "./dist/ui",
  },
  agents: [
    {
      agentKey: "researcher",
      displayName: "Researcher",
      role: "research",
      title: "Research Agent",
      capabilities: "Runs recurring research briefs for this company.",
      adapterPreference: ["codex_local", "claude_local", "process"],
      instructions: {
        content: "Follow the Paperclip heartbeat and produce concise research briefs.",
      },
    },
  ],
  projects: [
    {
      projectKey: "research",
      displayName: "Research",
      description: "Recurring research work created by the Research Plugin.",
      status: "in_progress",
    },
  ],
  routines: [
    {
      routineKey: "weekly-brief",
      title: "Weekly research brief",
      description: "Create a short research brief for the board.",
      assigneeRef: { resourceKind: "agent", resourceKey: "researcher" },
      projectRef: { resourceKind: "project", resourceKey: "research" },
      priority: "medium",
      triggers: [
        {
          kind: "schedule",
          label: "Monday morning",
          cronExpression: "0 9 * * 1",
          timezone: "America/Chicago",
          enabled: false,
        },
      ],
    },
  ],
  ui: {
    slots: [
      {
        type: "settingsPage",
        id: "settings",
        displayName: "Research",
        exportName: "SettingsPage",
      },
    ],
  },
};

export default manifest;
```
In the worker, expose a small setup action or settings-page action that reconciles the resources for the selected company:

```ts
import { definePlugin } from "@paperclipai/plugin-sdk";

export default definePlugin({
  setup(ctx) {
    ctx.actions.register("setup-company", async (params) => {
      const companyId = String(params.companyId ?? "");
      if (!companyId) throw new Error("companyId is required");

      const project = await ctx.projects.managed.reconcile("research", companyId);
      const agent = await ctx.agents.managed.reconcile("researcher", companyId);
      const routine = await ctx.routines.managed.reconcile("weekly-brief", companyId);

      return { project, agent, routine };
    });
  },
});
```

Authoring rules:

- Keep keys stable once published. Renaming `agentKey`, `projectKey`, or `routineKey` creates a new managed resource from the host's point of view.
- Use managed agents for plugin-provided labor. Use `ctx.agents.invoke()` or `ctx.agents.sessions` only after you have a real agent id, either selected by the operator or resolved from `ctx.agents.managed`.
- Use managed routines for recurring or externally triggered work that should produce tasks. Schedule, webhook, and API triggers are visible routine triggers, and each run has the normal Paperclip issue/audit trail.
- Use managed projects to keep plugin-generated work organized and to give project-scoped plugin UI a stable home. For filesystem access inside a project, still resolve project workspaces through `ctx.projects`.
- Keep defaults conservative. Managed declarations are suggestions owned by the plugin, but the resulting resources are normal Paperclip records that the operator can inspect, pause, and adjust.
UI:

- `usePluginData`

@@ -326,187 +114,6 @@ Mount surfaces currently wired in the host include:

- `commentAnnotation`
- `commentContextMenuItem`
## Shared host components

Use shared components from `@paperclipai/plugin-sdk/ui` when the plugin needs a Paperclip-native control. The host owns the implementation, so plugins inherit the board's current styling, ordering, recent selections, and dark-mode behavior without importing `ui/src` internals.

Currently exposed components include:

- `MarkdownBlock` and `MarkdownEditor` for rendered and editable markdown.
- `FileTree` for serializable file and directory trees.
- `IssuesList` for a native company-scoped issue table.
- `AssigneePicker` for the same agent/user selector used in the new issue pane. Use the controlled `value` format `agent:<id>`, `user:<id>`, or `""`.
- `ProjectPicker` for the same project selector used in the new issue pane. Use the controlled project id value, or `""` for no project.
- `ManagedRoutinesList` for plugin-owned routine settings pages.

```tsx
import { useState } from "react";
import { AssigneePicker, ProjectPicker } from "@paperclipai/plugin-sdk/ui";

export function PluginAssignmentControls({ companyId }: { companyId: string }) {
  const [assignee, setAssignee] = useState("");
  const [projectId, setProjectId] = useState("");

  return (
    <>
      <AssigneePicker
        companyId={companyId}
        value={assignee}
        onChange={(value) => setAssignee(value)}
      />
      <ProjectPicker
        companyId={companyId}
        value={projectId}
        onChange={setProjectId}
      />
    </>
  );
}
```
## File and path UI

Plugin UI often needs to render a file tree, accept a folder path, or browse a project workspace. There are three different surfaces for that, and they map to different trust and data-flow boundaries. Pick the surface that matches the data the plugin actually has.

### When to use the shared `FileTree`

Use `FileTree` from `@paperclipai/plugin-sdk/ui` whenever the plugin only needs to render a serializable file/directory list and react to selection or expand/collapse. The host owns the implementation, so plugin UI inherits the board's icons, indent, focus ring, and dark-mode styling without importing host internals.

```tsx
import { useState } from "react";
import {
  FileTree,
  type FileTreeNode,
} from "@paperclipai/plugin-sdk/ui";

const nodes: FileTreeNode[] = [
  { name: "AGENTS.md", path: "AGENTS.md", kind: "file", children: [] },
  {
    name: "wiki",
    path: "wiki",
    kind: "dir",
    children: [
      { name: "index.md", path: "wiki/index.md", kind: "file", children: [] },
    ],
  },
];

export function WikiTree() {
  const [expanded, setExpanded] = useState<Set<string>>(() => new Set(["wiki"]));
  const [selected, setSelected] = useState<string | null>(null);

  return (
    <FileTree
      nodes={nodes}
      selectedFile={selected}
      expandedPaths={expanded}
      onSelectFile={(path) => setSelected(path)}
      onToggleDir={(path) =>
        setExpanded((current) => {
          const next = new Set(current);
          next.has(path) ? next.delete(path) : next.add(path);
          return next;
        })
      }
    />
  );
}
```

Good fits:

- LLM Wiki page navigation in `packages/plugins/plugin-llm-wiki` builds a `FileTreeNode[]` from worker query results and renders it through `FileTree`.
- The example `plugin-file-browser-example` lazily fetches a directory's children through a `loadFileList` action when `onToggleDir` fires, then merges the children into the local tree state, letting the shared component handle rendering and selection.
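The lazy-children merge described above can be sketched as a pure helper. `FileTreeNode` is re-declared locally here to keep the sketch self-contained; in a real plugin it would come from `@paperclipai/plugin-sdk/ui`, and the merge itself is an illustrative assumption, not SDK code.

```typescript
// Hedged sketch: graft lazily-fetched children into local tree state when
// onToggleDir fires for an unloaded directory. Immutable, so React state
// updates see a new array reference.
interface FileTreeNode {
  name: string;
  path: string;
  kind: "file" | "dir";
  children: FileTreeNode[];
}

function mergeChildren(
  nodes: FileTreeNode[],
  dirPath: string,
  children: FileTreeNode[],
): FileTreeNode[] {
  return nodes.map((node) => {
    if (node.path === dirPath && node.kind === "dir") {
      return { ...node, children };
    }
    if (node.kind === "dir" && dirPath.startsWith(node.path + "/")) {
      // Recurse only into ancestors of the target directory.
      return { ...node, children: mergeChildren(node.children, dirPath, children) };
    }
    return node;
  });
}
```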
Boundary rules:
|
||||
|
||||
- Keep the prop surface serializable (`nodes`, `expandedPaths`, `checkedPaths`,
|
||||
`fileBadges`, `fileTones`). Do not pass arbitrary render functions across the
|
||||
plugin/host boundary in v1; the supported escape hatches are
|
||||
`fileBadges` (status pill keyed by path) and `fileTones` (row tone keyed by
|
||||
path).
|
||||
- Do not import the host's `FileTree.tsx` or any `ui/src/*` module. The SDK
|
||||
declaration is the only supported import path for plugin UI.
|
||||
- The shared `FileTree` is for rendering and selection. Plugin-specific editors,
|
||||
ingest flows, query forms, and lint runs stay inside the plugin and do not
|
||||
belong as `FileTree` props.
|
||||
|
||||
### When to declare `localFolders`

When the plugin needs operator-configured filesystem roots — typically for
trusted local plugins like wiki tooling — declare `localFolders[]` on the
manifest and add the `local.folders` capability. The host renders a settings
surface for the operator to set the absolute path, validates the path
server-side (containment, symlinks, required files/directories), and exposes
`ctx.localFolders.readText()` and `ctx.localFolders.writeTextAtomic()` in the
worker.

```ts
export const manifest = {
  capabilities: ["local.folders"],
  localFolders: [
    {
      folderKey: "content-root",
      displayName: "Content root",
      access: "readWrite",
      requiredDirectories: ["sources", "pages"],
      requiredFiles: ["schema.md"],
    },
  ],
};
```

Use this when:

- The data lives outside any project workspace.
- Reads and writes need company-scoped configuration.
- The operator picks the path once in plugin settings and the worker resolves
  files relative to that root.

Do not use `localFolders` to grant the UI direct browser-side access to the
filesystem — there is no such capability. The browser still goes through the
worker via `getData` / `performAction`, and the worker only exposes paths it
chose to expose.
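
The containment behavior described above can be sketched in a few lines. This is an illustrative stand-in for `ctx.localFolders.readText()`, not the host implementation; the real host also validates symlinks and required files/directories:

```ts
import * as fs from "node:fs";
import * as path from "node:path";

// Illustrative model of a folder-scoped read: resolve the relative path
// under the configured root and refuse anything that escapes it.
export function readTextWithin(root: string, relPath: string): string {
  const resolved = path.resolve(root, relPath);
  // Containment check: the resolved path must stay inside the root.
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    throw new Error(`path escapes folder root: ${relPath}`);
  }
  return fs.readFileSync(resolved, "utf8");
}
```

The same shape applies to writes, with the host adding atomic-write semantics on top.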

### When to keep worker-mediated project workspace browsing

When the data lives inside an existing project workspace, keep the browsing
flow worker-mediated:

- The worker uses `ctx.projects.listWorkspaces()` to resolve the workspace
  path, then reads its filesystem with normal Node APIs.
- The plugin UI calls a `getData` handler for the root listing and an action
  for lazy children, then renders them through `FileTree`.
- The worker is the only side that touches the disk. The browser receives a
  serializable tree and never sees raw absolute paths it can replay.

The example `plugin-file-browser-example` is the reference for this pattern:
the worker registers `fileList` (data) and `loadFileList` (action) over the
same handler, and the UI uses the action for on-toggle directory loading so the
shared `FileTree` stays the rendering surface.
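
The worker side of that pattern can be sketched as a single directory-level listing that returns serializable nodes. The `FileTreeNode` shape and `listDir` name here are illustrative; the real example plugin wires this through its `fileList` / `loadFileList` handlers:

```ts
import * as fs from "node:fs";
import * as path from "node:path";

// Serializable node shape mirroring the spec's FileTreeNode.
type FileTreeNode = {
  name: string;
  path: string;
  kind: "file" | "dir";
  children: FileTreeNode[];
};

// Worker-side listing: reads one directory level and returns a
// serializable slice of the tree. Paths are workspace-relative, so the
// browser never sees raw absolute paths it could replay.
export function listDir(root: string, rel = "."): FileTreeNode[] {
  const abs = path.join(root, rel);
  return fs.readdirSync(abs, { withFileTypes: true }).map((entry) => ({
    name: entry.name,
    path: path.posix.join(rel, entry.name),
    kind: entry.isDirectory() ? "dir" : "file",
    children: [], // filled lazily by the on-toggle action
  }));
}
```

The UI then merges each lazily fetched `children` array into its local tree state before re-rendering through `FileTree`.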

### Mixing surfaces

A single plugin can use more than one of these. The LLM Wiki uses
`localFolders` for its content root, then renders the resulting page list
through `FileTree`. The file browser example uses `ctx.projects.listWorkspaces`
to pick a workspace and renders its on-disk tree through `FileTree` with lazy
loading. Pick the boundary per data source, not per plugin.

## Company routes

Plugins may declare a `page` slot with `routePath` to own a company route like:

@@ -27,10 +27,7 @@ Current limitations to keep in mind:
- Published npm packages are the intended install artifact for deployed plugins.
- The repo example plugins under `packages/plugins/examples/` are development conveniences. They work from a source checkout and should not be assumed to exist in a generic published build unless they are explicitly shipped with that build.
- Dynamic plugin install is not yet cloud-ready for horizontally scaled or ephemeral deployments. There is no shared artifact store, install coordination, or cross-node distribution layer yet.
- The current runtime ships a small host-provided plugin UI component kit through `@paperclipai/plugin-sdk/ui`, but does not support plugin asset uploads/reads yet. Treat plugin asset APIs as future-scope ideas, not current implementation promises.
- Scoped plugin API routes are JSON-only and must be declared in `apiRoutes`.
  They mount under `/api/plugins/:pluginId/api/*`; plugins cannot shadow core
  API routes.
- The current runtime does not yet ship a real host-provided plugin UI component kit, and it does not support plugin asset uploads/reads. Treat those as future-scope ideas in this spec, not current implementation promises.

In practice, that means the current implementation is a good fit for local development and self-hosted persistent deployments, but not yet for multi-instance cloud plugin distribution.

@@ -627,46 +624,7 @@ Required SDK clients:

Plugins that need filesystem, git, terminal, or process operations handle those directly using standard Node APIs or libraries. The host provides project workspace metadata through `ctx.projects` so plugins can resolve workspace paths, but the host does not proxy low-level OS operations.

## 14.1 Issue Orchestration APIs

Trusted orchestration plugins can create and update Paperclip issues through `ctx.issues` instead of importing server internals. The public issue contract includes parent/project/goal links, board or agent assignees, blocker IDs, labels, billing code, request depth, execution workspace inheritance, and plugin origin metadata.

Origin rules:

- Built-in core issues keep built-in origins such as `manual` and `routine_execution`.
- Plugin-managed issues use `plugin:<pluginKey>` or a sub-kind such as `plugin:<pluginKey>:feature`.
- The host derives the default plugin origin from the installed plugin key and rejects attempts to set `plugin:<otherPluginKey>` origins.
- `originId` is plugin-defined and should be stable for idempotent generated work.
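
To show why a stable `originId` matters, here is a sketch of create-time deduplication against an in-memory stand-in. The dedupe-on-retry behavior and the `CreateIssueInput` shape are assumptions for illustration, not the host's actual `ctx.issues.create` contract:

```ts
// Hypothetical input shape; the real contract is defined by the host SDK.
type CreateIssueInput = {
  title: string;
  companyId: string;
  origin: string;   // e.g. "plugin:release-train:feature"
  originId: string; // plugin-defined, stable across retries
};

// In-memory stand-in: a retry with the same origin + originId returns the
// existing issue instead of creating a duplicate.
function makeIssuesStub() {
  const byOrigin = new Map<string, { id: string } & CreateIssueInput>();
  let nextId = 1;
  return {
    async create(input: CreateIssueInput) {
      const dedupeKey = `${input.companyId}:${input.origin}:${input.originId}`;
      const existing = byOrigin.get(dedupeKey);
      if (existing) return existing; // idempotent retry
      const issue = { id: `issue-${nextId++}`, ...input };
      byOrigin.set(dedupeKey, issue);
      return issue;
    },
  };
}
```

A coordinator that derives `originId` from its own stable identifiers (a release tag, a routine run key) can therefore retry freely.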

Relation and read helpers:

- `ctx.issues.relations.get(issueId, companyId)`
- `ctx.issues.relations.setBlockedBy(issueId, blockerIssueIds, companyId)`
- `ctx.issues.relations.addBlockers(issueId, blockerIssueIds, companyId)`
- `ctx.issues.relations.removeBlockers(issueId, blockerIssueIds, companyId)`
- `ctx.issues.getSubtree(issueId, companyId, options)`
- `ctx.issues.summaries.getOrchestration({ issueId, companyId, includeSubtree, billingCode })`
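
A coordinator flow over the relation helpers might look like the sketch below. The in-memory stub only exists to make the call shapes concrete; it is not part of the SDK, and the issue/company IDs are made up:

```ts
type IssueRelations = { blockedBy: string[] };

// Minimal in-memory stand-in for ctx.issues.relations.
function makeRelationsStub() {
  const store = new Map<string, Set<string>>();
  const key = (issueId: string, companyId: string) => `${companyId}:${issueId}`;
  return {
    async get(issueId: string, companyId: string): Promise<IssueRelations> {
      return { blockedBy: [...(store.get(key(issueId, companyId)) ?? [])] };
    },
    async setBlockedBy(issueId: string, blockerIssueIds: string[], companyId: string) {
      store.set(key(issueId, companyId), new Set(blockerIssueIds));
    },
    async addBlockers(issueId: string, blockerIssueIds: string[], companyId: string) {
      const set = store.get(key(issueId, companyId)) ?? new Set<string>();
      for (const id of blockerIssueIds) set.add(id);
      store.set(key(issueId, companyId), set);
    },
    async removeBlockers(issueId: string, blockerIssueIds: string[], companyId: string) {
      const set = store.get(key(issueId, companyId));
      if (set) for (const id of blockerIssueIds) set.delete(id);
    },
  };
}

// Typical coordinator flow: gate a rollout issue on its prerequisites,
// then release one blocker as it completes.
export async function gateRollout(relations: ReturnType<typeof makeRelationsStub>) {
  await relations.setBlockedBy("rollout-1", ["migrate-db"], "acme");
  await relations.addBlockers("rollout-1", ["update-docs"], "acme");
  await relations.removeBlockers("rollout-1", ["migrate-db"], "acme");
  return relations.get("rollout-1", "acme");
}
```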

Governance helpers:

- `ctx.issues.assertCheckoutOwner({ issueId, companyId, actorAgentId, actorRunId })` lets plugin actions preserve agent-run checkout ownership.
- `ctx.issues.requestWakeup(issueId, companyId, options)` requests assignment wakeups through host heartbeat semantics, including terminal-status, blocker, assignee, and budget hard-stop checks.
- `ctx.issues.requestWakeups(issueIds, companyId, options)` applies the same host-owned wakeup semantics to a batch and may use an idempotency key prefix for stable coordinator retries.

Plugin-originated issue, relation, document, comment, and wakeup mutations must write activity entries with `actorType: "plugin"` and details fields for `sourcePluginId`, `sourcePluginKey`, `initiatingActorType`, `initiatingActorId`, and `initiatingRunId` when a user or agent run initiated the plugin work.

Scoped API routes:

- `apiRoutes[]` declares `routeKey`, `method`, plugin-local `path`, `auth`,
  `capability`, optional checkout policy, and company resolution.
- The host enforces auth, company access, `api.routes.register`, route matching,
  and checkout policy before worker dispatch.
- The worker implements `onApiRequest(input)` and returns a JSON response shape
  `{ status?, headers?, body? }`.
- Only safe request headers are forwarded; auth/cookie headers are never passed
  to the worker.
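
A worker-side handler for those routes can be sketched as a `routeKey` dispatch. The `PluginApiRequest` fields here are an assumed shape for illustration; only the `{ status?, headers?, body? }` response contract comes from this spec:

```ts
// Assumed request shape; the exact SDK type may differ.
type PluginApiRequest = {
  routeKey: string;
  params: Record<string, string>;
  query: Record<string, string>;
  body?: unknown;
};

type PluginApiResponse = {
  status?: number;
  headers?: Record<string, string>;
  body?: unknown;
};

// JSON-only handler: dispatch on the declared routeKey and return the
// response shape the host expects. Auth, company access, and checkout
// policy were already enforced host-side before this runs.
export async function onApiRequest(input: PluginApiRequest): Promise<PluginApiResponse> {
  switch (input.routeKey) {
    case "health":
      return { status: 200, body: { ok: true } };
    case "echo":
      return { status: 200, body: { received: input.body ?? null } };
    default:
      return { status: 404, body: { error: `unknown route ${input.routeKey}` } };
  }
}
```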

## 14.2 Example SDK Shape
## 14.1 Example SDK Shape

```ts
/** Top-level helper for defining a plugin with type checking */
@@ -738,24 +696,16 @@ The host enforces capabilities in the SDK layer and refuses calls outside the gr
- `project.workspaces.read`
- `issues.read`
- `issue.comments.read`
- `issue.documents.read`
- `issue.relations.read`
- `issue.subtree.read`
- `agents.read`
- `goals.read`
- `activity.read`
- `costs.read`
- `issues.orchestration.read`

### Data Write

- `issues.create`
- `issues.update`
- `issue.comments.create`
- `issue.documents.write`
- `issue.relations.write`
- `issues.checkout`
- `issues.wakeup`
- `assets.write`
- `assets.read`
- `activity.log.write`
@@ -822,13 +772,6 @@ Minimum event set:
- `issue.created`
- `issue.updated`
- `issue.comment.created`
- `issue.document.created`
- `issue.document.updated`
- `issue.document.deleted`
- `issue.relations.updated`
- `issue.checked_out`
- `issue.released`
- `issue.assignment_wakeup_requested`
- `agent.created`
- `agent.updated`
- `agent.status_changed`
@@ -838,8 +781,6 @@ Minimum event set:
- `agent.run.cancelled`
- `approval.created`
- `approval.decided`
- `budget.incident.opened`
- `budget.incident.resolved`
- `cost_event.created`
- `activity.logged`

@@ -976,23 +917,13 @@ export function DashboardWidget({ context }: PluginWidgetProps) {

The SDK includes a `ui` subpath export that plugin frontends import. This subpath provides:

- **Bridge hooks**: `usePluginData(key, params)`, `usePluginAction(key)`, `useHostContext()`, `useHostNavigation()`
- **Bridge hooks**: `usePluginData(key, params)`, `usePluginAction(key)`, `useHostContext()`
- **Design tokens**: colors, spacing, typography, shadows matching the host theme
- **Shared components**: `MetricCard`, `StatusBadge`, `DataTable`, `LogView`, `ActionBar`, `Spinner`, etc.
- **Type definitions**: `PluginPageProps`, `PluginWidgetProps`, `PluginDetailTabProps`

Plugins are encouraged but not required to use the shared components. A plugin may render entirely custom UI as long as it communicates through the bridge.

`useHostNavigation()` is the supported way for plugin UI to navigate to
Paperclip-internal pages. It exposes `resolveHref(to)`, `navigate(to,
options?)`, and `linkProps(to, options?)`. Plugin links should prefer
`linkProps()` so anchors keep real `href` values for copy-link, modifier-click,
middle-click, and open-in-new-tab behavior while plain left-clicks route through
the host SPA router. The host resolves company-scoped paths against the active
company prefix without double-prefixing already-prefixed paths. Plugin UI should
not use raw same-origin `href`s or `window.location.assign()` for internal
Paperclip navigation because those can force a full document reload.
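
The prefix-resolution rule can be modeled in a few lines. This is an illustrative model of the behavior described above, not the SDK implementation; the real hook also wires `navigate()` into the host SPA router:

```ts
// Model of resolveHref(to): prefix company-scoped paths with the active
// company prefix, but never double-prefix an already-prefixed path.
export function resolveHref(to: string, companyPrefix: string): string {
  if (!to.startsWith("/")) return to; // external URLs pass through untouched
  if (to === companyPrefix || to.startsWith(`${companyPrefix}/`)) return to;
  return `${companyPrefix}${to}`;
}
```

`linkProps()` then pairs this resolved `href` with a left-click handler that routes through the SPA router instead of reloading the document.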

### 19.0.2 Bundle Isolation

Plugin UI bundles are loaded as standard ES modules, not iframed. This gives plugins full rendering performance and access to the host's design tokens.
@@ -1072,11 +1003,6 @@ The host SDK ships shared components that plugins can import to quickly build UI
| `LogView` | Scrollable log output with timestamps | Webhook deliveries, job output, process logs |
| `JsonTree` | Collapsible JSON tree for debugging | Raw API responses, plugin state inspection |
| `Spinner` | Loading indicator | Data fetch states |
| `FileTree` | Host-styled file/directory tree | Wiki pages, workspace files, import previews |
| `IssuesList` | Host issue list | Plugin pages that need a native issue view |
| `AssigneePicker` | Host assignee picker for agents and board users | Creating issues, assigning routines, filtering work |
| `ProjectPicker` | Host project picker | Creating issues, scoping dashboards, filtering work |
| `ManagedRoutinesList` | Host routine list | Plugin settings pages that manage routines |

Plugins may also use entirely custom components. The shared components exist to reduce boilerplate and keep visual consistency, not to limit what plugins can render.

@@ -1312,8 +1238,6 @@ Plugin-originated mutations should write:

- `actor_type = plugin`
- `actor_id = <plugin-id>`
- details include `sourcePluginId` and `sourcePluginKey`
- details include `initiatingActorType`, `initiatingActorId`, and `initiatingRunId` when a user or agent run triggered the plugin work

## 21.5 Plugin Migrations


@@ -114,14 +114,14 @@ If the connection drops, the UI reconnects automatically.

1. Enable timer wakeups (for example every 300s)
2. Keep assignment wakeups on
3. Use a focused prompt template that tells agents to act in the same heartbeat, leave durable progress, and mark blocked work with an owner/action
3. Use a focused prompt template
4. Watch run logs and adjust prompt/config over time

## 7.2 Event-driven loop (less constant polling)

1. Disable timer or set a long interval
2. Keep wake-on-assignment enabled
3. Use child issues, comments, and on-demand wakeups for handoffs instead of loops that poll agents, sessions, or processes
3. Use on-demand wakeups for manual nudges

## 7.3 Safety-first loop


@@ -1,299 +0,0 @@
# Invite Flow State Map

Status: Current implementation map
Date: 2026-04-13

This document maps the current invite creation and acceptance states implemented in:

- `ui/src/pages/CompanyInvites.tsx`
- `ui/src/pages/CompanySettings.tsx`
- `ui/src/pages/InviteLanding.tsx`
- `server/src/routes/access.ts`
- `server/src/lib/join-request-dedupe.ts`

## State Legend

- Invite state: `active`, `revoked`, `accepted`, `expired`
- Join request status: `pending_approval`, `approved`, `rejected`
- Claim secret state for agent joins: `available`, `consumed`, `expired`
- Invite type: `company_join` or `bootstrap_ceo`
- Join type: `human`, `agent`, or `both`

## Entity Lifecycle

```mermaid
flowchart TD
    Board[Board user on invite screen]
    HumanInvite[Create human company invite]
    OpenClawInvite[Generate OpenClaw invite prompt]
    Active[Invite state: active]
    Revoked[Invite state: revoked]
    Expired[Invite state: expired]
    Accepted[Invite state: accepted]
    BootstrapDone[Bootstrap accepted<br/>no join request]
    HumanReuse{Matching human join request<br/>already exists for same user/email?}
    HumanPending[Join request<br/>pending_approval]
    HumanApproved[Join request<br/>approved]
    HumanRejected[Join request<br/>rejected]
    AgentPending[Agent join request<br/>pending_approval<br/>+ optional claim secret]
    AgentApproved[Agent join request<br/>approved]
    AgentRejected[Agent join request<br/>rejected]
    ClaimAvailable[Claim secret available]
    ClaimConsumed[Claim secret consumed]
    ClaimExpired[Claim secret expired]
    OpenClawReplay[Special replay path:<br/>accepted invite can be POSTed again<br/>for openclaw_gateway only]

    Board --> HumanInvite --> Active
    Board --> OpenClawInvite --> Active
    Active -->|revoke| Revoked
    Active -->|expiresAt passes| Expired

    Active -->|bootstrap_ceo accept| BootstrapDone
    BootstrapDone --> Accepted

    Active -->|human accept| HumanReuse
    HumanReuse -->|reuse existing pending request| HumanPending
    HumanReuse -->|reuse existing approved request| HumanApproved
    HumanReuse -->|no reusable request<br/>create new request| HumanPending
    HumanPending -->|board approves| HumanApproved
    HumanPending -->|board rejects| HumanRejected
    HumanPending --> Accepted
    HumanApproved --> Accepted

    Active -->|agent accept| AgentPending
    AgentPending --> Accepted
    AgentPending -->|board approves| AgentApproved
    AgentPending -->|board rejects| AgentRejected
    AgentApproved -->|createdAgentId + claimSecretHash| ClaimAvailable
    ClaimAvailable -->|POST claim-api-key succeeds| ClaimConsumed
    ClaimAvailable -->|secret expires| ClaimExpired

    Accepted --> OpenClawReplay
    OpenClawReplay --> AgentPending
    OpenClawReplay --> AgentApproved
```

## Board-Side Screen States

```mermaid
stateDiagram-v2
    [*] --> CompanySelection

    CompanySelection --> NoCompany: no company selected
    CompanySelection --> LoadingHistory: selectedCompanyId present
    LoadingHistory --> HistoryError: listInvites failed
    LoadingHistory --> Ready: listInvites succeeded

    state Ready {
        [*] --> EmptyHistory
        EmptyHistory --> PopulatedHistory: invites exist
        PopulatedHistory --> LoadingMore: View more
        LoadingMore --> PopulatedHistory: next page loaded

        PopulatedHistory --> RevokePending: Revoke active invite
        RevokePending --> PopulatedHistory: revoke succeeded
        RevokePending --> PopulatedHistory: revoke failed

        EmptyHistory --> CreatePending: Create invite
        PopulatedHistory --> CreatePending: Create invite
        CreatePending --> LatestInviteVisible: create succeeded
        CreatePending --> Ready: create failed
        LatestInviteVisible --> CopiedToast: clipboard copy succeeded
        LatestInviteVisible --> Ready: navigate away or refresh
    }

    CompanySelection --> OpenClawPromptReady: Company settings prompt generator
    OpenClawPromptReady --> OpenClawPromptPending: Generate OpenClaw Invite Prompt
    OpenClawPromptPending --> OpenClawSnippetVisible: prompt generated
    OpenClawPromptPending --> OpenClawPromptReady: generation failed
```

## Invite Landing Screen States

```mermaid
stateDiagram-v2
    [*] --> TokenGate

    TokenGate --> InvalidToken: token missing
    TokenGate --> Loading: token present
    Loading --> InviteUnavailable: invite fetch failed or invite not returned
    Loading --> CheckingAccess: signed-in session and invite.companyId
    Loading --> InviteResolved: invite loaded without membership check
    Loading --> AcceptedInviteSummary: invite already consumed<br/>but linked join request still exists

    CheckingAccess --> RedirectToBoard: current user already belongs to company
    CheckingAccess --> InviteResolved: membership check finished and no join-request summary state is active
    CheckingAccess --> AcceptedInviteSummary: membership check finished and invite has joinRequestStatus

    state InviteResolved {
        [*] --> Branch
        Branch --> AgentForm: company_join + allowedJoinTypes=agent
        Branch --> InlineAuth: authenticated mode + no session + join is not agent-only
        Branch --> AcceptReady: bootstrap invite or human-ready session/local_trusted

        InlineAuth --> InlineAuth: toggle sign-up/sign-in
        InlineAuth --> InlineAuth: auth validation or auth error message
        InlineAuth --> RedirectToBoard: auth succeeded and company membership already exists
        InlineAuth --> AcceptPending: auth succeeded and invite still needs acceptance

        AgentForm --> AcceptPending: submit request
        AgentForm --> AgentForm: validation or accept error

        AcceptReady --> AcceptPending: Accept invite
        AcceptReady --> AcceptReady: accept error
    }

    AcceptPending --> BootstrapComplete: bootstrapAccepted=true
    AcceptPending --> RedirectToBoard: join status=approved
    AcceptPending --> PendingApprovalResult: join status=pending_approval
    AcceptPending --> RejectedResult: join status=rejected

    state AcceptedInviteSummary {
        [*] --> SummaryBranch
        SummaryBranch --> PendingApprovalReload: joinRequestStatus=pending_approval
        SummaryBranch --> OpeningCompany: joinRequestStatus=approved<br/>and human invite user is now a member
        SummaryBranch --> RejectedReload: joinRequestStatus=rejected
        SummaryBranch --> ConsumedReload: approved agent invite or other consumed state
    }

    PendingApprovalResult --> PendingApprovalReload: reload after submit
    RejectedResult --> RejectedReload: reload after board rejects
    RedirectToBoard --> OpeningCompany: brief pre-navigation render when approved membership is detected
    OpeningCompany --> RedirectToBoard: navigate to board
```

## Sequence Diagrams

### Human Invite Creation And First Acceptance

```mermaid
sequenceDiagram
    autonumber
    actor Board as Board user
    participant Settings as Company Invites UI
    participant API as Access routes
    participant Invites as invites table
    actor Invitee as Invite recipient
    participant Landing as Invite landing UI
    participant Auth as Auth session
    participant Join as join_requests table

    Board->>Settings: Choose role and click Create invite
    Settings->>API: POST /api/companies/:companyId/invites
    API->>Invites: Insert active invite
    API-->>Settings: inviteUrl + metadata

    Invitee->>Landing: Open invite URL
    Landing->>API: GET /api/invites/:token
    API->>Invites: Load active invite
    API-->>Landing: Invite summary

    alt Authenticated mode and no session
        Landing->>Auth: Sign up or sign in
        Auth-->>Landing: Session established
    end

    Landing->>API: POST /api/invites/:token/accept (requestType=human)
    API->>Join: Look for reusable human join request
    alt Reusable pending or approved request exists
        API->>Invites: Mark invite accepted
        API-->>Landing: Existing join request status
    else No reusable request exists
        API->>Invites: Mark invite accepted
        API->>Join: Insert pending_approval join request
        API-->>Landing: New pending_approval join request
    end
```

### Human Approval And Reload Path

```mermaid
sequenceDiagram
    autonumber
    actor Invitee as Invite recipient
    participant Landing as Invite landing UI
    participant API as Access routes
    participant Join as join_requests table
    actor Approver as Company admin
    participant Queue as Access queue UI
    participant Membership as company_memberships + grants

    Invitee->>Landing: Reload consumed invite URL
    Landing->>API: GET /api/invites/:token
    API->>Join: Load join request by inviteId
    API-->>Landing: joinRequestStatus + joinRequestType

    alt joinRequestStatus = pending_approval
        Landing-->>Invitee: Show waiting-for-approval panel
        Approver->>Queue: Review request in Company Settings -> Access
        Queue->>API: POST /companies/:companyId/join-requests/:requestId/approve
        API->>Membership: Ensure membership and grants
        API->>Join: Mark join request approved
        Invitee->>Landing: Refresh after approval
        Landing->>API: GET /api/invites/:token
        API->>Join: Reload approved join request
        API-->>Landing: approved status
        Landing-->>Invitee: Opening company and redirect
    else joinRequestStatus = rejected
        Landing-->>Invitee: Show rejected error panel
    else joinRequestStatus = approved but membership missing
        Landing-->>Invitee: Fall through to consumed/unavailable state
    end
```

### Agent Invite Approval, Claim, And Replay

```mermaid
sequenceDiagram
    autonumber
    actor Board as Board user
    participant Settings as Company Settings UI
    participant API as Access routes
    participant Invites as invites table
    actor Gateway as OpenClaw gateway agent
    participant Join as join_requests table
    actor Approver as Company admin
    participant Agents as agents table
    participant Keys as agent_api_keys table

    Board->>Settings: Generate OpenClaw invite prompt
    Settings->>API: POST /api/companies/:companyId/openclaw-invite-prompt
    API->>Invites: Insert active agent invite
    API-->>Settings: Prompt text + invite token

    Gateway->>API: POST /api/invites/:token/accept (agent, openclaw_gateway)
    API->>Invites: Mark invite accepted
    API->>Join: Insert pending_approval join request + claimSecretHash
    API-->>Gateway: requestId + claimSecret + claimApiKeyPath

    Approver->>API: POST /companies/:companyId/join-requests/:requestId/approve
    API->>Agents: Create agent + membership + grants
    API->>Join: Mark request approved and store createdAgentId

    Gateway->>API: POST /api/join-requests/:requestId/claim-api-key (claimSecret)
    API->>Keys: Create initial API key
    API->>Join: Mark claim secret consumed
    API-->>Gateway: Plaintext Paperclip API key

    opt Replay accepted invite for updated gateway defaults
        Gateway->>API: POST /api/invites/:token/accept again
        API->>Join: Reuse existing approved or pending request
        API->>Agents: Update approved agent adapter config when applicable
        API-->>Gateway: Updated join request payload
    end
```

## Notes

- `GET /api/invites/:token` treats `revoked` and `expired` invites as unavailable. Accepted invites remain resolvable when they already have a linked join request, and the summary now includes `joinRequestStatus` plus `joinRequestType`.
- Human acceptance consumes the invite immediately and then either creates a new join request or reuses an existing `pending_approval` or `approved` human join request for the same user/email.
- The landing page has two layers of post-accept UI:
  - immediate mutation-result UI from `POST /api/invites/:token/accept`
  - reload-time summary UI from `GET /api/invites/:token` once the invite has already been consumed
- Reload behavior for accepted company invites is now status-sensitive:
  - `pending_approval` re-renders the waiting-for-approval panel
  - `rejected` renders the "This join request was not approved." error panel
  - `approved` only becomes a success path for human invites after membership is visible to the current session; otherwise the page falls through to the generic consumed/unavailable state
- `GET /api/invites/:token/logo` still rejects accepted invites, so accepted-invite reload states may fall back to the generated company icon even though the summary payload still carries `companyLogoUrl`.
- The only accepted-invite replay path in the current implementation is `POST /api/invites/:token/accept` for `agent` requests with `adapterType=openclaw_gateway`, and only when the existing join request is still `pending_approval` or already `approved`.
- `bootstrap_ceo` invites are one-time and do not create join requests.
@@ -1,30 +0,0 @@
# AWS ECS Fargate deployment environment
# Copy to .env.aws and fill in values before deploying
#
# Secrets (DATABASE_URL, BETTER_AUTH_SECRET, ANTHROPIC_API_KEY, OPENAI_API_KEY,
# GITHUB_TOKEN) are injected via AWS Secrets Manager — do NOT set them here.

# Deployment mode
PAPERCLIP_DEPLOYMENT_MODE=authenticated
PAPERCLIP_DEPLOYMENT_EXPOSURE=public
PAPERCLIP_PUBLIC_URL=https://paperclip.example.com

# Server
HOST=0.0.0.0
PORT=3100
NODE_ENV=production
SERVE_UI=true

# Paperclip paths
PAPERCLIP_HOME=/paperclip
PAPERCLIP_INSTANCE_ID=default
PAPERCLIP_CONFIG=/paperclip/instances/default/config.json

# Auto-apply migrations on startup
PAPERCLIP_MIGRATION_AUTO_APPLY=true

# Enable heartbeat scheduler for remote agents
HEARTBEAT_SCHEDULER_ENABLED=true

# Post-deploy hardening (uncomment after first user signs up)
# PAPERCLIP_AUTH_DISABLE_SIGN_UP=true
@@ -1,90 +0,0 @@
{
  "family": "paperclip-server",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "2048",
  "memory": "4096",
  "executionRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/paperclip-ecs-execution",
  "taskRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/paperclip-ecs-task",
  "containerDefinitions": [
    {
      "name": "paperclip-server",
      "image": "<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/paperclip-server:latest",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 3100,
          "protocol": "tcp"
        }
      ],
      "environment": [
        { "name": "NODE_ENV", "value": "production" },
        { "name": "HOST", "value": "0.0.0.0" },
        { "name": "PORT", "value": "3100" },
        { "name": "SERVE_UI", "value": "true" },
        { "name": "PAPERCLIP_HOME", "value": "/paperclip" },
        { "name": "PAPERCLIP_INSTANCE_ID", "value": "default" },
        { "name": "PAPERCLIP_CONFIG", "value": "/paperclip/instances/default/config.json" },
        { "name": "PAPERCLIP_DEPLOYMENT_MODE", "value": "authenticated" },
        { "name": "PAPERCLIP_DEPLOYMENT_EXPOSURE", "value": "public" },
        { "name": "PAPERCLIP_PUBLIC_URL", "value": "https://<DOMAIN>" },
        { "name": "PAPERCLIP_MIGRATION_AUTO_APPLY", "value": "true" },
        { "name": "HEARTBEAT_SCHEDULER_ENABLED", "value": "true" }
      ],
      "secrets": [
        {
          "name": "DATABASE_URL",
          "valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/database-url"
        },
        {
          "name": "BETTER_AUTH_SECRET",
          "valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/better-auth-secret"
        },
        {
          "name": "ANTHROPIC_API_KEY",
          "valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/anthropic-api-key"
        },
        {
          "name": "OPENAI_API_KEY",
          "valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/openai-api-key"
        },
        {
          "name": "GITHUB_TOKEN",
          "valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/github-token"
        }
      ],
      "mountPoints": [
        {
          "sourceVolume": "paperclip-data",
          "containerPath": "/paperclip",
          "readOnly": false
        }
      ],
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:3100/api/health || exit 1"],
        "interval": 30,
        "timeout": 5,
        "retries": 3,
        "startPeriod": 60
      },
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/paperclip",
          "awslogs-region": "<REGION>",
          "awslogs-stream-prefix": "server"
        }
      }
    }
  ],
  "volumes": [
    {
      "name": "paperclip-data",
      "efsVolumeConfiguration": {
        "fileSystemId": "<EFS_ID>",
        "rootDirectory": "/",
        "transitEncryption": "ENABLED"
      }
    }
  ]
}
@@ -124,14 +124,14 @@ If the connection drops, the UI reconnects automatically.

1. Enable timer wakeups (for example every 300s)
2. Keep assignment wakeups on
3. Use a focused prompt template that tells agents to act in the same heartbeat, leave durable progress, and mark blocked work with an owner/action
3. Use a focused prompt template
4. Watch run logs and adjust prompt/config over time

## 7.2 Event-driven loop (less constant polling)

1. Disable timer or set a long interval
2. Keep wake-on-assignment enabled
3. Use child issues, comments, and on-demand wakeups for handoffs instead of loops that poll agents, sessions, or processes
3. Use on-demand wakeups for manual nudges

## 7.3 Safety-first loop


@@ -1,9 +1,9 @@
---
title: Issues
summary: Issue CRUD, checkout/release, comments, documents, interactions, and attachments
summary: Issue CRUD, checkout/release, comments, documents, and attachments
---

Issues are the unit of work in Paperclip. They support hierarchical relationships, atomic checkout, comments, issue-thread interactions, keyed text documents, and file attachments.
Issues are the unit of work in Paperclip. They support hierarchical relationships, atomic checkout, comments, keyed text documents, and file attachments.

## List Issues

@@ -121,65 +121,6 @@ POST /api/issues/{issueId}/comments

@-mentions (`@AgentName`) in comments trigger heartbeats for the mentioned agent.

## Issue-Thread Interactions

Interactions are structured cards in the issue thread. Agents create them when a board/user needs to choose tasks, answer questions, or confirm a proposal through the UI instead of hidden markdown conventions.

### List Interactions

```
GET /api/issues/{issueId}/interactions
```

### Create Interaction

```
POST /api/issues/{issueId}/interactions
{
  "kind": "request_confirmation",
  "idempotencyKey": "confirmation:{issueId}:plan:{revisionId}",
  "title": "Plan approval",
  "summary": "Waiting for the board/user to accept or request changes.",
  "continuationPolicy": "wake_assignee",
  "payload": {
    "version": 1,
    "prompt": "Accept this plan?",
    "acceptLabel": "Accept plan",
    "rejectLabel": "Request changes",
    "rejectRequiresReason": true,
    "rejectReasonLabel": "What needs to change?",
    "detailsMarkdown": "Review the latest plan document before accepting.",
    "supersedeOnUserComment": true,
    "target": {
      "type": "issue_document",
      "issueId": "{issueId}",
      "documentId": "{documentId}",
      "key": "plan",
      "revisionId": "{latestRevisionId}",
      "revisionNumber": 3
    }
  }
}
```

Supported `kind` values:

- `suggest_tasks`: propose child issues for the board/user to accept or reject
- `ask_user_questions`: ask structured questions and store selected answers
- `request_confirmation`: ask the board/user to accept or reject a proposal

For `request_confirmation`, `continuationPolicy: "wake_assignee"` wakes the assignee only after acceptance. Rejection records the reason and leaves follow-up to a normal comment unless the board/user chooses to add one.

### Resolve Interaction

```
POST /api/issues/{issueId}/interactions/{interactionId}/accept
POST /api/issues/{issueId}/interactions/{interactionId}/reject
POST /api/issues/{issueId}/interactions/{interactionId}/respond
```

Board users resolve interactions from the UI. Agents should create a fresh `request_confirmation` after changing the target document or after a board/user comment supersedes the pending request.
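As a sketch of the reject flow, the snippet below builds a rejection payload for the documented `/reject` endpoint. The base URL, the ids, and the `reason` field name are illustrative assumptions (the field name is inferred from `rejectRequiresReason`, not a documented contract):

```bash
# Hypothetical ids and host for illustration only.
ISSUE_ID="iss_123"
INTERACTION_ID="int_456"

# "reason" is an assumed field name based on rejectRequiresReason.
REJECT_PAYLOAD='{"reason": "Plan is missing a rollback step."}'
echo "$REJECT_PAYLOAD"

# Documented endpoint shape (uncomment to actually send):
# curl -sf -X POST \
#   "https://paperclip.example.com/api/issues/$ISSUE_ID/interactions/$INTERACTION_ID/reject" \
#   -H 'Content-Type: application/json' \
#   -d "$REJECT_PAYLOAD"
```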

## Documents

Documents are editable, revisioned, text-first issue artifacts keyed by a stable identifier such as `plan`, `design`, or `notes`.

@@ -75,28 +75,11 @@ Fields:

```
PATCH /api/routines/{routineId}
{
  "status": "paused",
  "baseRevisionId": "{latestRevisionId}"
  "status": "paused"
}
```

All fields from create are updatable. `baseRevisionId` is optional for backward compatibility; when provided, stale values return `409 Conflict` with the current revision id. **Agents can only update routines assigned to themselves and cannot reassign a routine to another agent.**
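The optimistic-concurrency behavior described above can be sketched as follows. The routine id, revision id, and base URL are placeholders; the conflict-handling comment follows the 409 description in this section:

```bash
# Sketch: PATCH a routine with a baseRevisionId guard.
# ROUTINE_ID and LATEST_REV are placeholder values.
ROUTINE_ID="rt_789"
LATEST_REV="rev_42"

PATCH_PAYLOAD=$(printf '{"status":"paused","baseRevisionId":"%s"}' "$LATEST_REV")
echo "$PATCH_PAYLOAD"

# STATUS=$(curl -s -o /tmp/patch-resp.json -w '%{http_code}' -X PATCH \
#   "https://paperclip.example.com/api/routines/$ROUTINE_ID" \
#   -H 'Content-Type: application/json' -d "$PATCH_PAYLOAD")
# On a 409, re-read the current revision id from the conflict response
# and retry the PATCH with that value as baseRevisionId.
```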

## List Revisions

```
GET /api/routines/{routineId}/revisions
```

Returns append-only routine definition revisions newest first. Snapshots include routine fields and safe trigger metadata only; webhook secret values and `secretId` are never returned.

## Restore Revision

```
POST /api/routines/{routineId}/revisions/{revisionId}/restore
```

Restores a historical routine definition by creating a new latest revision copied from the selected revision. Historical revision rows, routine run history, and activity history are preserved. If restoring a deleted webhook trigger requires recreating it, the response can include one-time replacement secret material for that trigger.
All fields from create are updatable. **Agents can only update routines assigned to themselves and cannot reassign a routine to another agent.**

## Add Trigger

Before Width: | Height: | Size: 258 KiB |
|
Before Width: | Height: | Size: 321 KiB |
@@ -1,580 +0,0 @@
---
title: AWS ECS Fargate
summary: Deploy Paperclip to AWS using ECS Fargate, RDS Postgres, and EFS
---

Deploy Paperclip to AWS with ECS Fargate (compute), RDS Postgres 17 (database), and EFS (persistent storage). This guide uses the AWS CLI and produces a single-task ECS service behind an ALB with HTTPS.

## Prerequisites

- AWS CLI v2 configured with a profile that has admin-level permissions
- Docker installed locally (for building and pushing the image)
- A registered domain with DNS you control (for the TLS certificate)
- The Paperclip repo cloned locally

Set these shell variables for the rest of the guide:

```bash
export AWS_REGION=us-east-1
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export PAPERCLIP_DOMAIN=paperclip.example.com  # your domain
export DB_PASSWORD=$(openssl rand -base64 24 | tr -d '/+=' | head -c 32)
export AUTH_SECRET=$(openssl rand -base64 32)
```

## 1. Create ECR Repository

```bash
aws ecr create-repository \
  --repository-name paperclip-server \
  --image-scanning-configuration scanOnPush=true \
  --region $AWS_REGION
```

## 2. Build and Push Docker Image

```bash
cd /path/to/paperclip

# Authenticate Docker to ECR
aws ecr get-login-password --region $AWS_REGION \
  | docker login --username AWS --password-stdin \
  $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com

# Build
docker build -t paperclip-server .

# Tag and push
docker tag paperclip-server:latest \
  $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest

docker push \
  $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
```

## 3. Networking (VPC, Subnets, Security Groups)

Use the default VPC or create a dedicated one. The guide assumes the default VPC with public and private subnets in two AZs.

```bash
# Get default VPC
VPC_ID=$(aws ec2 describe-vpcs \
  --filters Name=isDefault,Values=true \
  --query 'Vpcs[0].VpcId' --output text)

# Get two public subnets (for ALB)
SUBNET_IDS=$(aws ec2 describe-subnets \
  --filters Name=vpc-id,Values=$VPC_ID \
  --query 'Subnets[?MapPublicIpOnLaunch==`true`] | [0:2].SubnetId' \
  --output text)
SUBNET_1=$(echo $SUBNET_IDS | awk '{print $1}')
SUBNET_2=$(echo $SUBNET_IDS | awk '{print $2}')
```

Create security groups:

```bash
# ALB security group — inbound HTTPS
ALB_SG=$(aws ec2 create-security-group \
  --group-name paperclip-alb \
  --description "Paperclip ALB" \
  --vpc-id $VPC_ID \
  --query 'GroupId' --output text)

aws ec2 authorize-security-group-ingress \
  --group-id $ALB_SG \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# Also open port 80 so the ALB can accept HTTP and redirect to HTTPS
aws ec2 authorize-security-group-ingress \
  --group-id $ALB_SG \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

# ECS task security group — inbound from ALB only
ECS_SG=$(aws ec2 create-security-group \
  --group-name paperclip-ecs \
  --description "Paperclip ECS tasks" \
  --vpc-id $VPC_ID \
  --query 'GroupId' --output text)

aws ec2 authorize-security-group-ingress \
  --group-id $ECS_SG \
  --protocol tcp --port 3100 \
  --source-group $ALB_SG

# RDS security group — inbound from ECS only
RDS_SG=$(aws ec2 create-security-group \
  --group-name paperclip-rds \
  --description "Paperclip RDS" \
  --vpc-id $VPC_ID \
  --query 'GroupId' --output text)

aws ec2 authorize-security-group-ingress \
  --group-id $RDS_SG \
  --protocol tcp --port 5432 \
  --source-group $ECS_SG

# EFS security group — inbound NFS from ECS only
EFS_SG=$(aws ec2 create-security-group \
  --group-name paperclip-efs \
  --description "Paperclip EFS" \
  --vpc-id $VPC_ID \
  --query 'GroupId' --output text)

aws ec2 authorize-security-group-ingress \
  --group-id $EFS_SG \
  --protocol tcp --port 2049 \
  --source-group $ECS_SG
```

## 4. Create RDS Postgres Instance

```bash
# Create a DB subnet group that spans our two subnets so RDS can
# place the instance in the right AZs.
aws rds create-db-subnet-group \
  --db-subnet-group-name paperclip-db-subnet \
  --db-subnet-group-description "Paperclip RDS subnets" \
  --subnet-ids $SUBNET_1 $SUBNET_2

aws rds create-db-instance \
  --db-instance-identifier paperclip-db \
  --db-instance-class db.t4g.micro \
  --engine postgres \
  --engine-version 17 \
  --master-username paperclip \
  --master-user-password "$DB_PASSWORD" \
  --allocated-storage 20 \
  --storage-type gp3 \
  --vpc-security-group-ids $RDS_SG \
  --db-subnet-group-name paperclip-db-subnet \
  --no-publicly-accessible \
  --backup-retention-period 7 \
  --no-multi-az \
  --db-name paperclip \
  --region $AWS_REGION

# Wait for it to become available (takes 5-10 min)
aws rds wait db-instance-available \
  --db-instance-identifier paperclip-db

# Get the endpoint
RDS_ENDPOINT=$(aws rds describe-db-instances \
  --db-instance-identifier paperclip-db \
  --query 'DBInstances[0].Endpoint.Address' --output text)

DATABASE_URL="postgresql://paperclip:${DB_PASSWORD}@${RDS_ENDPOINT}:5432/paperclip"
```

## 5. Create EFS Filesystem

```bash
EFS_ID=$(aws efs create-file-system \
  --performance-mode generalPurpose \
  --throughput-mode bursting \
  --encrypted \
  --tags Key=Name,Value=paperclip-data \
  --query 'FileSystemId' --output text)

# Create mount targets in each subnet
for SUBNET in $SUBNET_1 $SUBNET_2; do
  aws efs create-mount-target \
    --file-system-id $EFS_ID \
    --subnet-id $SUBNET \
    --security-groups $EFS_SG
done

# Check mount targets (repeat until LifeCycleState is "available")
aws efs describe-mount-targets --file-system-id $EFS_ID
```

## 6. Store Secrets

```bash
aws secretsmanager create-secret \
  --name paperclip/database-url \
  --secret-string "$DATABASE_URL"

aws secretsmanager create-secret \
  --name paperclip/anthropic-api-key \
  --secret-string "YOUR_ANTHROPIC_KEY"

aws secretsmanager create-secret \
  --name paperclip/better-auth-secret \
  --secret-string "$AUTH_SECRET"

aws secretsmanager create-secret \
  --name paperclip/openai-api-key \
  --secret-string "YOUR_OPENAI_KEY"

aws secretsmanager create-secret \
  --name paperclip/github-token \
  --secret-string "YOUR_GITHUB_PAT"
```

## 7. IAM Roles

Create the ECS task execution role (pulls images, reads secrets) and the task role (application permissions).

```bash
# Task execution role
aws iam create-role \
  --role-name paperclip-ecs-execution \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ecs-tasks.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

aws iam attach-role-policy \
  --role-name paperclip-ecs-execution \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy

# Allow reading secrets
aws iam put-role-policy \
  --role-name paperclip-ecs-execution \
  --policy-name SecretsAccess \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue"],
      "Resource": "arn:aws:secretsmanager:'$AWS_REGION':'$AWS_ACCOUNT_ID':secret:paperclip/*"
    }]
  }'

# Task role (application — add permissions as needed)
aws iam create-role \
  --role-name paperclip-ecs-task \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ecs-tasks.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'
```

## 8. ECS Cluster and Task Definition

```bash
aws ecs create-cluster --cluster-name paperclip

aws logs create-log-group --log-group-name /ecs/paperclip
```

Register the task definition using the template at `docker/ecs-task-definition.json`. Before registering, replace the placeholder values:

```bash
sed -e "s|<ACCOUNT_ID>|$AWS_ACCOUNT_ID|g" \
  -e "s|<REGION>|$AWS_REGION|g" \
  -e "s|<EFS_ID>|$EFS_ID|g" \
  -e "s|<DOMAIN>|$PAPERCLIP_DOMAIN|g" \
  docker/ecs-task-definition.json > /tmp/paperclip-task-def.json

aws ecs register-task-definition \
  --cli-input-json file:///tmp/paperclip-task-def.json
```

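Before registering, it can help to confirm the rendered file is valid JSON and that the `sed` pass replaced every placeholder. This optional check is shown on an inline fragment so it is self-contained; in practice, point it at `/tmp/paperclip-task-def.json`:

```bash
# Optional sanity check; RENDERED stands in for the rendered file contents.
RENDERED='{"family": "paperclip-server", "cpu": "2048"}'

# Valid JSON?
echo "$RENDERED" | python3 -m json.tool > /dev/null && echo "valid JSON"

# Any unreplaced <PLACEHOLDER> tokens left?
if echo "$RENDERED" | grep -q '<[A-Z_]*>'; then
  echo "unreplaced placeholders remain" >&2
fi
```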
## 9. ALB and TLS Certificate

Request a certificate (you must validate via DNS):

```bash
CERT_ARN=$(aws acm request-certificate \
  --domain-name $PAPERCLIP_DOMAIN \
  --validation-method DNS \
  --query 'CertificateArn' --output text)

# Get the CNAME record to add to your DNS
aws acm describe-certificate \
  --certificate-arn $CERT_ARN \
  --query 'Certificate.DomainValidationOptions[0].ResourceRecord'
```

Add the CNAME to your DNS provider, then wait for validation:

```bash
aws acm wait certificate-validated --certificate-arn $CERT_ARN
```

Create the ALB:

```bash
ALB_ARN=$(aws elbv2 create-load-balancer \
  --name paperclip-alb \
  --subnets $SUBNET_1 $SUBNET_2 \
  --security-groups $ALB_SG \
  --scheme internet-facing \
  --type application \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

ALB_DNS=$(aws elbv2 describe-load-balancers \
  --load-balancer-arns $ALB_ARN \
  --query 'LoadBalancers[0].DNSName' --output text)

# Target group
TG_ARN=$(aws elbv2 create-target-group \
  --name paperclip-tg \
  --protocol HTTP \
  --port 3100 \
  --vpc-id $VPC_ID \
  --target-type ip \
  --health-check-path /api/health \
  --health-check-interval-seconds 30 \
  --healthy-threshold-count 2 \
  --unhealthy-threshold-count 3 \
  --query 'TargetGroups[0].TargetGroupArn' --output text)

# HTTPS listener
LISTENER_ARN=$(aws elbv2 create-listener \
  --load-balancer-arn $ALB_ARN \
  --protocol HTTPS \
  --port 443 \
  --certificates CertificateArn=$CERT_ARN \
  --default-actions Type=forward,TargetGroupArn=$TG_ARN \
  --query 'Listeners[0].ListenerArn' --output text)

# HTTP listener — redirect all :80 traffic to :443
HTTP_LISTENER_ARN=$(aws elbv2 create-listener \
  --load-balancer-arn $ALB_ARN \
  --protocol HTTP \
  --port 80 \
  --default-actions Type=redirect,RedirectConfig='{Protocol=HTTPS,Port=443,StatusCode=HTTP_301}' \
  --query 'Listeners[0].ListenerArn' --output text)
```

Point your DNS to the ALB:

- Create a CNAME or ALIAS record for `$PAPERCLIP_DOMAIN` -> `$ALB_DNS`

## 10. Create ECS Service

```bash
aws ecs create-service \
  --cluster paperclip \
  --service-name paperclip-server \
  --task-definition paperclip-server \
  --desired-count 1 \
  --launch-type FARGATE \
  --deployment-configuration '{
    "deploymentCircuitBreaker": {"enable": true, "rollback": true},
    "maximumPercent": 200,
    "minimumHealthyPercent": 100
  }' \
  --network-configuration '{
    "awsvpcConfiguration": {
      "subnets": ["'$SUBNET_1'", "'$SUBNET_2'"],
      "securityGroups": ["'$ECS_SG'"],
      "assignPublicIp": "ENABLED"
    }
  }' \
  --load-balancers '[{
    "targetGroupArn": "'$TG_ARN'",
    "containerName": "paperclip-server",
    "containerPort": 3100
  }]'
```

> **Note:** `assignPublicIp: ENABLED` is needed if using public subnets without a NAT Gateway. For private subnets, set to `DISABLED` and ensure a NAT Gateway is configured for outbound internet access.

## 11. Verify Deployment

```bash
# Watch task come up
aws ecs describe-services \
  --cluster paperclip \
  --services paperclip-server \
  --query 'services[0].{desired:desiredCount,running:runningCount,status:status}'

# Check task health
aws ecs list-tasks --cluster paperclip --service-name paperclip-server
TASK_ARN=$(aws ecs list-tasks --cluster paperclip --service-name paperclip-server --query 'taskArns[0]' --output text)
aws ecs describe-tasks --cluster paperclip --tasks $TASK_ARN \
  --query 'tasks[0].{status:lastStatus,health:healthStatus}'

# Check logs
aws logs tail /ecs/paperclip --since 10m --follow

# Hit the health endpoint
curl -sf https://$PAPERCLIP_DOMAIN/api/health
```

**Healthy indicators:**

- ECS task status: `RUNNING`, health: `HEALTHY`
- Logs show `plugin job coordinator started` and `plugin-loader: loadAll complete`
- `/api/health` returns 200

## Post-Deploy Security Hardening

After the first user has signed up (which grants admin role), lock down the instance:

```bash
# Disable public sign-up (prevents unauthorized users from creating accounts)
# Add to the task definition environment section, then redeploy:
# { "name": "PAPERCLIP_AUTH_DISABLE_SIGN_UP", "value": "true" }

# Or update via Secrets Manager / task def override, then force new deployment
aws ecs update-service \
  --cluster paperclip \
  --service paperclip-server \
  --force-new-deployment
```

Use the invite flow (added in v2026.416.0) to grant access to additional users after sign-up is disabled.

## Deploying Updates

Build, push, and force a new deployment:

```bash
# Build and push new image
docker build -t paperclip-server .
docker tag paperclip-server:latest \
  $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
docker push \
  $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest

# Roll out
aws ecs update-service \
  --cluster paperclip \
  --service paperclip-server \
  --force-new-deployment

# Watch the deployment
aws ecs describe-services \
  --cluster paperclip \
  --services paperclip-server \
  --query 'services[0].deployments[*].{status:status,running:runningCount,desired:desiredCount,rollout:rolloutState}'
```

ECS performs a rolling update: starts a new task, waits for it to pass health checks, then drains the old task.

## Rollback

If the new deployment is unhealthy:

```bash
# ECS automatically rolls back if the new task fails health checks
# (circuit breaker is enabled in the service configuration above).
# To force rollback manually:

# 1. Find the previous task definition revision
aws ecs list-task-definitions \
  --family-prefix paperclip-server \
  --sort DESC \
  --query 'taskDefinitionArns[0:3]'

# 2. Update service to the previous revision
aws ecs update-service \
  --cluster paperclip \
  --service paperclip-server \
  --task-definition paperclip-server:<PREVIOUS_REVISION>
```

## Scaling to Zero (Cost Savings)

Scale down when not in use:

```bash
# Stop
aws ecs update-service \
  --cluster paperclip \
  --service paperclip-server \
  --desired-count 0

# Start
aws ecs update-service \
  --cluster paperclip \
  --service paperclip-server \
  --desired-count 1
```

RDS can also be stopped (auto-restarts after 7 days):

```bash
aws rds stop-db-instance --db-instance-identifier paperclip-db
aws rds start-db-instance --db-instance-identifier paperclip-db
```

## Teardown

Remove all resources in reverse order:

```bash
# 1. ECS service and cluster
aws ecs update-service --cluster paperclip --service paperclip-server --desired-count 0
aws ecs delete-service --cluster paperclip --service paperclip-server --force
aws ecs delete-cluster --cluster paperclip

# 2. ALB and ACM cert
aws elbv2 delete-listener --listener-arn $HTTP_LISTENER_ARN
aws elbv2 delete-listener --listener-arn $LISTENER_ARN
aws elbv2 delete-target-group --target-group-arn $TG_ARN
aws elbv2 delete-load-balancer --load-balancer-arn $ALB_ARN
aws acm delete-certificate --certificate-arn $CERT_ARN

# 3. RDS (creates final snapshot)
aws rds delete-db-instance \
  --db-instance-identifier paperclip-db \
  --final-db-snapshot-identifier paperclip-db-final
aws rds wait db-instance-deleted --db-instance-identifier paperclip-db
aws rds delete-db-subnet-group --db-subnet-group-name paperclip-db-subnet

# 4. EFS (mount targets must be deleted first)
for MT in $(aws efs describe-mount-targets --file-system-id $EFS_ID --query 'MountTargets[*].MountTargetId' --output text); do
  aws efs delete-mount-target --mount-target-id $MT
done
# Mount-target deletion is async; poll until none remain before deleting
# the filesystem, otherwise delete-file-system fails with FileSystemInUse.
echo "Waiting for mount targets to delete..."
while aws efs describe-mount-targets \
  --file-system-id $EFS_ID \
  --query 'MountTargets[0].MountTargetId' --output text 2>/dev/null | grep -q 'fsmt-'; do
  sleep 5
done
aws efs delete-file-system --file-system-id $EFS_ID

# 5. Secrets
for s in database-url anthropic-api-key better-auth-secret openai-api-key github-token; do
  aws secretsmanager delete-secret --secret-id paperclip/$s --force-delete-without-recovery
done

# 6. Security groups (after all dependents are gone)
for sg in $EFS_SG $RDS_SG $ECS_SG $ALB_SG; do
  aws ec2 delete-security-group --group-id $sg
done

# 7. ECR
aws ecr delete-repository --repository-name paperclip-server --force

# 8. IAM roles
aws iam delete-role-policy --role-name paperclip-ecs-execution --policy-name SecretsAccess
aws iam detach-role-policy --role-name paperclip-ecs-execution \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
aws iam delete-role --role-name paperclip-ecs-execution
aws iam delete-role --role-name paperclip-ecs-task

# 9. Log group
aws logs delete-log-group --log-group-name /ecs/paperclip
```

## Cost Reference

| Service | Config | Monthly |
|---------|--------|---------|
| ECS Fargate | 2 vCPU, 4 GB, 24/7 | ~$70 |
| RDS Postgres | db.t4g.micro, 20 GB | ~$15 |
| ALB | 1 LCU average | ~$22 |
| NAT Gateway | 1 AZ (if using private subnets) | ~$35 |
| EFS | 1 GB Standard | ~$0.30 |
| Secrets Manager | 5 secrets | ~$2 |
| CloudWatch Logs | ~1 GB/mo | ~$0.50 |
| ECR | ~1 GB | ~$0.10 |
| **Total (public subnets, no NAT)** | | **~$110/mo** |
| **Total (private subnets + NAT)** | | **~$145/mo** |

Use Fargate Spot and scheduled scaling to 0 during off-hours to reduce to ~$60-85/mo.
@@ -40,7 +40,7 @@ Paperclip supports three deployment configurations, from zero-friction local to

- **Just trying Paperclip?** Use `local_trusted` (the default)
- **Sharing with a team on private network?** Use `authenticated` + `private`
- **Deploying to the cloud?** Use `authenticated` + `public` — see [AWS ECS Fargate guide](aws-ecs.md)
- **Deploying to the cloud?** Use `authenticated` + `public`

Set the mode during onboarding:

@@ -48,8 +48,6 @@
    "guides/board-operator/managing-tasks",
    "guides/board-operator/execution-workspaces-and-runtime-services",
    "guides/board-operator/delegation",
    "guides/board-operator/execution-workspaces-and-runtime-services",
    "guides/board-operator/delegation",
    "guides/board-operator/approvals",
    "guides/board-operator/costs-and-budgets",
    "guides/board-operator/activity-log",

@@ -55,15 +55,3 @@ The name must match the agent's `name` field exactly (case-insensitive). This tr

- **Don't overuse mentions** — each mention triggers a budget-consuming heartbeat
- **Don't use mentions for assignment** — create/assign a task instead
- **Mention handoff exception** — if an agent is explicitly @-mentioned with a clear directive to take a task, they may self-assign via checkout

## Structured Decisions

Use issue-thread interactions when the user should respond through a structured UI card instead of a free-form comment:

- `suggest_tasks` for proposed child issues
- `ask_user_questions` for structured questions
- `request_confirmation` for explicit accept/reject decisions

For yes/no decisions, create a `request_confirmation` card with `POST /api/issues/{issueId}/interactions`. Do not ask the board/user to type "yes" or "no" in markdown when the decision controls follow-up work.

Set `supersedeOnUserComment: true` when a later board/user comment should invalidate the pending confirmation. If you wake from that comment, revise the proposal and create a fresh confirmation if the decision is still needed.

@@ -5,16 +5,6 @@ summary: Agent-side approval request and response

Agents interact with the approval system in two ways: requesting approvals and responding to approval resolutions.

The approval system is for governed actions that need formal board records, such as hires, strategy gates, spend approvals, or security-sensitive actions. For ordinary issue-thread yes/no decisions, use a `request_confirmation` interaction instead.

Examples that should use `request_confirmation` instead of approvals:

- "Accept this plan?"
- "Proceed with this issue breakdown?"
- "Use option A or reject and request changes?"

Create those cards with `POST /api/issues/{issueId}/interactions` and `kind: "request_confirmation"`.

## Requesting a Hire

Managers and CEOs can request to hire new agents:
@@ -47,16 +37,6 @@ POST /api/companies/{companyId}/approvals
}
```

## Plan Approval Cards

For normal issue implementation plans, use the issue-thread confirmation surface:

1. Update the `plan` issue document.
2. Create `request_confirmation` bound to the latest `plan` revision.
3. Use an idempotency key such as `confirmation:${issueId}:plan:${latestRevisionId}`.
4. Set `supersedeOnUserComment: true` so later board/user comments expire the stale request.
5. Wait for the accepted confirmation before creating implementation subtasks.
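The idempotency key in step 3 can be built directly from the issue and revision ids. A minimal sketch with placeholder ids:

```bash
# Placeholder ids; in a real heartbeat these come from the issue context.
ISSUE_ID="iss_123"
LATEST_REVISION_ID="rev_7"

IDEMPOTENCY_KEY="confirmation:${ISSUE_ID}:plan:${LATEST_REVISION_ID}"
echo "$IDEMPOTENCY_KEY"
# → confirmation:iss_123:plan:rev_7
```

Because the key embeds the plan revision id, recreating the confirmation after editing the plan yields a new key, while retries against the same revision stay idempotent.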

## Responding to Approval Resolutions

When an approval you requested is resolved, you may be woken with:

@@ -66,11 +66,7 @@ Read ancestors to understand why this task exists. If woken by a specific commen

### Step 7: Do the Work

Use your tools and capabilities to complete the task. If the issue is actionable, take a concrete action in the same heartbeat. Do not stop at a plan unless the issue asked for planning.

Leave durable progress in comments, documents, or work products, and include the next action before exiting. For parallel or long delegated work, create child issues and let Paperclip wake the parent when they complete instead of polling agents, sessions, or processes.

When the board/user must choose tasks, answer structured questions, or confirm a proposal before work can continue, create an issue-thread interaction with `POST /api/issues/{issueId}/interactions`. Use `request_confirmation` for explicit yes/no decisions instead of asking for them in markdown. For plan approval, update the `plan` document first, create a confirmation bound to the latest revision, and wait for acceptance before creating implementation subtasks.
Use your tools and capabilities to complete the task.

### Step 8: Update Status

@@ -106,23 +102,6 @@ Always set `parentId` and `goalId` on subtasks.
|
||||
- **Always checkout** before working — never PATCH to `in_progress` manually
|
||||
- **Never retry a 409** — the task belongs to someone else
|
||||
- **Always comment** on in-progress work before exiting a heartbeat
|
||||
- **Start actionable work** in the same heartbeat; planning-only exits are for planning tasks
|
||||
- **Leave a clear next action** in durable issue context
|
||||
- **Use child issues instead of polling** for long or parallel delegated work
|
||||
- **Use `request_confirmation`** for issue-scoped yes/no decisions and plan approval cards
|
||||
- **Always set parentId** on subtasks
|
||||
- **Never cancel cross-team tasks** — reassign to your manager
|
||||
- **Escalate when stuck** — use your chain of command

## Run Liveness

Paperclip records run liveness as metadata on heartbeat runs. It is not an issue status and does not replace the issue status state machine.

- Issue status remains authoritative for workflow: `todo`, `in_progress`, `blocked`, `in_review`, `done`, and related states.
- Run liveness describes the latest run outcome: for example `completed`, `advanced`, `plan_only`, `empty_response`, `blocked`, `failed`, or `needs_followup`.
- Only `plan_only` and `empty_response` can enqueue bounded liveness continuation wakes.
- Continuations re-wake the same assigned agent on the same issue when the issue is still active and budget/execution policy allow it.
- `continuationAttempt` counts semantic liveness continuations for a source run chain. It is separate from process recovery, queued wake delivery, adapter session resume, and other operational retries.
- Liveness continuation wake prompts include the attempt, source run, liveness state, liveness reason, and the instruction for the next heartbeat.
- Continuations do not mark the issue `blocked` or `done`. If automatic continuations are exhausted, Paperclip leaves an audit comment so a human or manager can clarify, block, or assign follow-up work.
- Workspace provisioning alone is not treated as concrete task progress. Durable progress should appear as tool/action events, issue comments, document or work-product revisions, activity log entries, commits, or tests.
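
The continuation gate above is narrow enough to state directly in code. This sketch restates the rule for clarity; it is not Paperclip's internal implementation.

```typescript
// Run liveness states from the list above. Only plan_only and
// empty_response may enqueue bounded liveness continuation wakes.
type RunLiveness =
  | "completed"
  | "advanced"
  | "plan_only"
  | "empty_response"
  | "blocked"
  | "failed"
  | "needs_followup";

function canEnqueueContinuation(liveness: RunLiveness): boolean {
  return liveness === "plan_only" || liveness === "empty_response";
}
```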

@@ -68,53 +68,6 @@ POST /api/companies/{companyId}/issues

Always set `parentId` to maintain the task hierarchy. Set `goalId` when applicable.

## Confirmation Pattern

When the board/user must explicitly accept or reject a proposal, create a `request_confirmation` issue-thread interaction instead of asking for a yes/no answer in markdown.

```
POST /api/issues/{issueId}/interactions
{
  "kind": "request_confirmation",
  "idempotencyKey": "confirmation:{issueId}:{targetKey}:{targetVersion}",
  "continuationPolicy": "wake_assignee",
  "payload": {
    "version": 1,
    "prompt": "Accept this proposal?",
    "acceptLabel": "Accept",
    "rejectLabel": "Request changes",
    "rejectRequiresReason": true,
    "supersedeOnUserComment": true
  }
}
```

Use `continuationPolicy: "wake_assignee"` when acceptance should wake you to continue. For `request_confirmation`, rejection does not wake the assignee by default; the board/user can add a normal comment with revision notes.

## Plan Approval Pattern

When a plan needs approval before implementation:

1. Create or update the issue document with key `plan`.
2. Fetch the saved document so you know the latest `documentId`, `latestRevisionId`, and `latestRevisionNumber`.
3. Create a `request_confirmation` targeting that exact `plan` revision.
4. Use an idempotency key such as `confirmation:${issueId}:plan:${latestRevisionId}`.
5. Wait for acceptance before creating implementation subtasks.
6. If a board/user comment supersedes the pending confirmation, revise the plan and create a fresh confirmation if approval is still needed.

Plan approval targets look like this:

```
"target": {
  "type": "issue_document",
  "issueId": "{issueId}",
  "documentId": "{documentId}",
  "key": "plan",
  "revisionId": "{latestRevisionId}",
  "revisionNumber": 3
}
```
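
Steps 2 through 4 can be sketched as a pure helper that derives the idempotency key and confirmation target from a fetched `plan` document. The fetch call itself is omitted, and the input field names simply mirror the guide's variables; treat them as assumptions.

```typescript
// Builds the request_confirmation target and idempotency key for plan
// approval, following the key format and target shape shown above.
interface PlanDocument {
  documentId: string;
  latestRevisionId: string;
  latestRevisionNumber: number;
}

function buildPlanApproval(issueId: string, doc: PlanDocument) {
  return {
    // Keying on the exact revision means a re-run cannot create a duplicate
    // confirmation for the same plan version.
    idempotencyKey: `confirmation:${issueId}:plan:${doc.latestRevisionId}`,
    target: {
      type: "issue_document",
      issueId,
      documentId: doc.documentId,
      key: "plan",
      revisionId: doc.latestRevisionId,
      revisionNumber: doc.latestRevisionNumber,
    },
  };
}
```

Binding the confirmation to a specific revision is what lets a later board/user comment supersede it: a revised plan yields a new revision id, and therefore a fresh key and confirmation.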

## Release Pattern

If you need to give up a task (e.g. you realize it should go to someone else):

@@ -47,7 +47,7 @@ You do **not** need to tell the CEO to engage specific agents. After you approve

- **Breaks goals into concrete tasks** with clear descriptions, priorities, and acceptance criteria
- **Assigns tasks to the right agent** based on role and capabilities (e.g., engineering tasks go to the CTO or engineers, marketing tasks go to the CMO)
- **Creates subtasks** when work needs to be decomposed further
- **Hires new agents** when the team lacks capacity for a goal, with hire approvals available when enabled in company settings
- **Hires new agents** when the team lacks capacity for a goal (subject to your approval)
- **Monitors progress** on each heartbeat, checking task status and unblocking reports
- **Escalates to you** when it encounters something it can't resolve — budget issues, blocked approvals, or strategic ambiguity