Compare commits

..

1 Commit

Author SHA1 Message Date
Forgotten
34ab6e22e4 Fix security vulnerability 2026-04-10 11:52:25 -05:00
359 changed files with 3570 additions and 52266 deletions

View File

@@ -1,230 +0,0 @@
---
name: deal-with-security-advisory
description: >
  Handle a GitHub Security Advisory response for Paperclip, including
  confidential fix development in a temporary private fork, human coordination
  on advisory-thread comments, CVE request, synchronized advisory publication,
  and immediate security release steps.
---
# Security Vulnerability Response Instructions
## ⚠️ CRITICAL
This is a security vulnerability. Everything about this process is confidential until the advisory is published. Do not mention vulnerability details in any public commit message, PR title, branch name, or comment. Do not push anything to a public branch. Do not discuss specifics in any public channel. Assume anything on the public repo is visible to attackers, who will exploit the window between disclosure and user upgrades.
***
## Context
A security vulnerability has been reported via GitHub Security Advisory:
* **Advisory:** {{ghsaId}} (e.g. GHSA-x8hx-rhr2-9rf7)
* **Reporter:** {{reporterHandle}}
* **Severity:** {{severity}}
* **Notes:** {{notes}}
***
## Step 0: Fetch the Advisory Details
Pull the full advisory so you understand the vulnerability before doing anything else:
```
gh api repos/paperclipai/paperclip/security-advisories/{{ghsaId}}
```
Read the `description`, `severity`, `cvss`, and `vulnerabilities` fields. Understand the attack vector before writing code.
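A quick way to summarize those fields, assuming `jq` is available (the JSON below is a hypothetical stand-in for a real response, shown only to illustrate the shape):

```
# Hypothetical stand-in for the real response; in practice fetch it with:
#   gh api repos/paperclipai/paperclip/security-advisories/{{ghsaId}} > advisory.json
cat > advisory.json <<'EOF'
{"severity":"critical","cvss":{"vector_string":"CVSS:3.1/AV:N/AC:H"},"vulnerabilities":[{"vulnerable_version_range":"< 1.2.3"}]}
EOF
# Summarize the fields that matter before writing any code.
jq -r '
  "severity: \(.severity // "unknown")",
  "cvss:     \(.cvss.vector_string // "n/a")",
  "range:    \(.vulnerabilities[0].vulnerable_version_range // "n/a")"
' advisory.json
```

The `// "n/a"` fallbacks keep the summary readable when a field is absent from the advisory.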
## Step 1: Acknowledge the Report
⚠️ **This step requires a human.** The advisory thread does not have a comment API. Ask the human operator to post a comment on the private advisory thread acknowledging the report. Provide them this template:
> Thanks for the report, @{{reporterHandle}}. We've confirmed the issue and are working on a fix. We're targeting a patch release within {{timeframe}}. We'll keep you updated here.
Give the human operator this template, then continue with the remaining steps while they post it.
The steps below use the `gh` CLI; you have access and credentials outside your sandbox, so use them.
## Step 2: Create the Temporary Private Fork
This is where all fix development happens. Never push to the public repo.
```
gh api --method POST \
repos/paperclipai/paperclip/security-advisories/{{ghsaId}}/forks
```
This returns a repository object for the private fork. Save the `full_name` and `clone_url`.
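For example (a sketch; the field names are taken from the API's repository object, and the JSON below is a hypothetical stand-in), capture those values from a saved response rather than copying them by hand:

```
# Hypothetical stand-in for the fork-creation response; in practice save it with:
#   gh api --method POST repos/paperclipai/paperclip/security-advisories/{{ghsaId}}/forks > fork.json
cat > fork.json <<'EOF'
{"full_name":"paperclipai/paperclip-ghsa-xxxx","clone_url":"https://github.com/paperclipai/paperclip-ghsa-xxxx.git"}
EOF
# Extract the values you need for the clone step.
FULL_NAME=$(jq -r '.full_name' fork.json)
CLONE_URL=$(jq -r '.clone_url' fork.json)
echo "$FULL_NAME -> $CLONE_URL"
```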
Clone it and set up your workspace:
```
# Clone the private fork somewhere outside ~/paperclip
git clone <clone_url_from_response> ~/security-patch-{{ghsaId}}
cd ~/security-patch-{{ghsaId}}
git checkout -b security-fix
```
**Do not edit `~/paperclip`** — the dev server is running off the `~/paperclip` master branch and we don't want to touch it. All work happens in the private fork clone.
**TIPS:**
* Do not commit `pnpm-lock.yaml` — the repo has actions to manage this
* Do not use descriptive branch names that leak the vulnerability (e.g., no `fix-dns-rebinding-rce`). Use something generic like `security-fix`
* All work stays in the private fork until publication
* CI/GitHub Actions will NOT run on the temporary private fork — this is a GitHub limitation by design. You must run tests locally
## Step 3: Develop and Validate the Fix
Write the patch. Same content standards as any PR:
* It must functionally work — **run tests locally** since CI won't run on the private fork
* Consider the whole codebase, not just the narrow vulnerability path. A patch that fixes one vector but opens another is worse than no patch
* Ensure backwards compatibility for the database, or be explicit about what breaks
* Make sure any UI components still look correct if the fix touches them
* The fix should be minimal and focused — don't bundle unrelated changes into a security patch. Reviewers (and the reporter) should be able to read the diff and understand exactly what changed and why
**Specific to security fixes:**
* Verify the fix actually closes the attack vector described in the advisory. Reproduce the vulnerability first (using the reporter's description), then confirm the patch prevents it
* Consider adjacent attack vectors — if DNS rebinding is the issue, are there other endpoints or modes with the same class of problem?
* Do not introduce new dependencies unless absolutely necessary — new deps in a security patch raise eyebrows
Push your fix to the private fork:
```
git add -A
git commit -m "Fix security vulnerability"
git push origin security-fix
```
## Step 4: Coordinate with the Reporter
⚠️ **This step requires a human.** Ask the human operator to post on the advisory thread letting the reporter know the fix is ready and giving them a chance to review. Provide them this template:
> @{{reporterHandle}} — fix is ready in the private fork if you'd like to review before we publish. Planning to release within {{timeframe}}.
Provide the template, then proceed without waiting for a reply.
## Step 5: Request a CVE
This makes vulnerability scanners (npm audit, Snyk, Dependabot) warn users to upgrade. Without it, no one gets an automated notification.
```
gh api --method POST \
repos/paperclipai/paperclip/security-advisories/{{ghsaId}}/cve
```
GitHub is a CVE Numbering Authority and will assign one automatically. The CVE may take a few hours to propagate after the advisory is published.
## Step 6: Publish Everything Simultaneously
This all happens at once — do not stagger these steps. The goal is **zero window** between the vulnerability becoming public knowledge and the fix being available.
### 6a. Verify reporter credit before publishing
```
gh api repos/paperclipai/paperclip/security-advisories/{{ghsaId}} --jq '.credits'
```
If the reporter is not credited, add them:
```
gh api --method PATCH \
repos/paperclipai/paperclip/security-advisories/{{ghsaId}} \
--input - << 'EOF'
{
"credits": [
{
"login": "{{reporterHandle}}",
"type": "reporter"
}
]
}
EOF
```
### 6b. Update the advisory with the patched version and publish
```
gh api --method PATCH \
repos/paperclipai/paperclip/security-advisories/{{ghsaId}} \
--input - << 'EOF'
{
"state": "published",
"vulnerabilities": [
{
"package": {
"ecosystem": "npm",
"name": "paperclip"
},
"vulnerable_version_range": "< {{patchedVersion}}",
"patched_versions": "{{patchedVersion}}"
}
]
}
EOF
```
Publishing the advisory does all of the following at once:
* Makes the GHSA public
* Merges the temporary private fork into your repo
* Triggers the CVE assignment (if requested in step 5)
### 6c. Cut a release immediately after merge
```
cd ~/paperclip
git pull origin master
gh release create v{{patchedVersion}} \
--repo paperclipai/paperclip \
--title "v{{patchedVersion}} — Security Release" \
--notes "## Security Release
This release fixes a critical security vulnerability.
### What was fixed
{{briefDescription}} (e.g., Remote code execution via DNS rebinding in \`local_trusted\` mode)
### Advisory
https://github.com/paperclipai/paperclip/security/advisories/{{ghsaId}}
### Credit
Thanks to @{{reporterHandle}} for responsibly disclosing this vulnerability.
### Action required
All users running versions prior to {{patchedVersion}} should upgrade immediately."
```
## Step 7: Post-Publication Verification
```
# Verify the advisory is published and CVE is assigned
gh api repos/paperclipai/paperclip/security-advisories/{{ghsaId}} \
--jq '{state: .state, cve_id: .cve_id, published_at: .published_at}'
# Verify the release exists
gh release view v{{patchedVersion}} --repo paperclipai/paperclip
```
If the CVE hasn't been assigned yet, that's normal — it can take a few hours.
⚠️ **Human step:** Ask the human operator to post a final comment on the advisory thread confirming publication and thanking the reporter.
Tell the human operator what you did by posting a comment to this task, including:
* The published advisory URL: `https://github.com/paperclipai/paperclip/security/advisories/{{ghsaId}}`
* The release URL
* Whether the CVE has been assigned yet
* All URLs to any pull requests or branches

View File

@@ -1,3 +1 @@
Dotta <bippadotta@protonmail.com> <34892728+cryppadotta@users.noreply.github.com>
Dotta <bippadotta@protonmail.com> <forgottenrunes@protonmail.com>
Dotta <bippadotta@protonmail.com> <dotta@example.com>
Dotta <bippadotta@protonmail.com> Forgotten <forgottenrunes@protonmail.com>

View File

@@ -108,21 +108,6 @@ Notes:
## 7. Verification Before Hand-off
Default local/agent test path:
```sh
pnpm test
```
This is the cheap default and only runs the Vitest suite. Browser suites stay opt-in:
```sh
pnpm test:e2e
pnpm test:release-smoke
```
Run the browser suites only when your change touches them or when you are explicitly verifying CI/release flows.
Run this full check before claiming done:
```sh

View File

@@ -177,14 +177,6 @@ Open source. Self-hosted. No Paperclip account required.
npx paperclipai onboard --yes
```
That quickstart path now defaults to trusted local loopback mode for the fastest first run. To start in authenticated/private mode instead, choose a bind preset explicitly:
```bash
npx paperclipai onboard --yes --bind lan
# or:
npx paperclipai onboard --yes --bind tailnet
```
If you already have Paperclip configured, rerunning `onboard` keeps the existing config in place. Use `paperclipai configure` to edit settings.
Or manually:
@@ -233,15 +225,11 @@ pnpm dev:once # Full dev without file watching
pnpm dev:server # Server only
pnpm build # Build all
pnpm typecheck # Type checking
pnpm test # Cheap default test run (Vitest only)
pnpm test:watch # Vitest watch mode
pnpm test:e2e # Playwright browser suite
pnpm test:run # Run tests
pnpm db:generate # Generate DB migration
pnpm db:migrate # Apply migrations
```
`pnpm test` does not run Playwright. Browser suites stay separate and are typically run only when working on those flows or in CI.
See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.
<br/>
@@ -255,18 +243,11 @@ See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.
- ✅ Skills Manager
- ✅ Scheduled Routines
- ✅ Better Budgeting
- Agent Reviews and Approvals
- Artifacts & Deployments
- ⚪ CEO Chat
- ⚪ MAXIMIZER MODE
- ⚪ Multiple Human Users
- ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
- ⚪ Artifacts & Work Products
- ⚪ Memory & Knowledge
- ⚪ Enforced Outcomes
- ⚪ MAXIMIZER MODE
- ⚪ Deep Planning
- ⚪ Work Queues
- ⚪ Self-Organization
- ⚪ Automatic Organizational Learning
- ⚪ CEO Chat
- ⚪ Cloud deployments
- ⚪ Desktop App
@@ -282,12 +263,12 @@ Paperclip collects anonymous usage telemetry to help us understand how the produ
Telemetry is **enabled by default** and can be disabled with any of the following:
| Method | How |
| -------------------- | ------------------------------------------------------- |
| Environment variable | `PAPERCLIP_TELEMETRY_DISABLED=1` |
| Standard convention | `DO_NOT_TRACK=1` |
| CI environments | Automatically disabled when `CI=true` |
| Config file | Set `telemetry.enabled: false` in your Paperclip config |
| Method | How |
|---|---|
| Environment variable | `PAPERCLIP_TELEMETRY_DISABLED=1` |
| Standard convention | `DO_NOT_TRACK=1` |
| CI environments | Automatically disabled when `CI=true` |
| Config file | Set `telemetry.enabled: false` in your Paperclip config |
## Contributing

View File

@@ -1,8 +0,0 @@
# Security Policy
## Reporting a Vulnerability
Please report security vulnerabilities through GitHub's Security Advisory feature:
[https://github.com/paperclipai/paperclip/security/advisories/new](https://github.com/paperclipai/paperclip/security/advisories/new)
Do not open public issues for security vulnerabilities.

View File

@@ -177,14 +177,6 @@ Open source. Self-hosted. No Paperclip account required.
npx paperclipai onboard --yes
```
That quickstart path now defaults to trusted local loopback mode for the fastest first run. To start in authenticated/private mode instead, choose a bind preset explicitly:
```bash
npx paperclipai onboard --yes --bind lan
# or:
npx paperclipai onboard --yes --bind tailnet
```
If you already have Paperclip configured, rerunning `onboard` keeps the existing config in place. Use `paperclipai configure` to edit settings.
Or manually:
@@ -233,15 +225,11 @@ pnpm dev:once # Full dev without file watching
pnpm dev:server # Server only
pnpm build # Build all
pnpm typecheck # Type checking
pnpm test # Cheap default test run (Vitest only)
pnpm test:watch # Vitest watch mode
pnpm test:e2e # Playwright browser suite
pnpm test:run # Run tests
pnpm db:generate # Generate DB migration
pnpm db:migrate # Apply migrations
```
`pnpm test` does not run Playwright. Browser suites stay separate and are typically run only when working on those flows or in CI.
See [doc/DEVELOPING.md](https://github.com/paperclipai/paperclip/blob/master/doc/DEVELOPING.md) for the full development guide.
<br/>

View File

@@ -1,62 +0,0 @@
import { describe, expect, it } from "vitest";
import { resolveRuntimeBind, validateConfiguredBindMode } from "@paperclipai/shared";
import { buildPresetServerConfig } from "../config/server-bind.js";
describe("network bind helpers", () => {
it("rejects non-loopback bind modes in local_trusted", () => {
expect(
validateConfiguredBindMode({
deploymentMode: "local_trusted",
deploymentExposure: "private",
bind: "lan",
host: "0.0.0.0",
}),
).toContain("local_trusted requires server.bind=loopback");
});
it("resolves tailnet bind using the detected tailscale address", () => {
const resolved = resolveRuntimeBind({
bind: "tailnet",
host: "127.0.0.1",
tailnetBindHost: "100.64.0.8",
});
expect(resolved.errors).toEqual([]);
expect(resolved.host).toBe("100.64.0.8");
});
it("requires a custom bind host when bind=custom", () => {
const resolved = resolveRuntimeBind({
bind: "custom",
host: "127.0.0.1",
});
expect(resolved.errors).toContain("server.customBindHost is required when server.bind=custom");
});
it("stores the detected tailscale address for tailnet presets", () => {
process.env.PAPERCLIP_TAILNET_BIND_HOST = "100.64.0.8";
const preset = buildPresetServerConfig("tailnet", {
port: 3100,
allowedHostnames: [],
serveUi: true,
});
expect(preset.server.host).toBe("100.64.0.8");
delete process.env.PAPERCLIP_TAILNET_BIND_HOST;
});
it("falls back to loopback when no tailscale address is available for tailnet presets", () => {
delete process.env.PAPERCLIP_TAILNET_BIND_HOST;
const preset = buildPresetServerConfig("tailnet", {
port: 3100,
allowedHostnames: [],
serveUi: true,
});
expect(preset.server.host).toBe("127.0.0.1");
});
});

View File

@@ -74,11 +74,6 @@ function createExistingConfigFixture() {
return { configPath, configText: fs.readFileSync(configPath, "utf8") };
}
function createFreshConfigPath() {
const root = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-onboard-fresh-"));
return path.join(root, ".paperclip", "config.json");
}
describe("onboard", () => {
beforeEach(() => {
process.env = { ...ORIGINAL_ENV };
@@ -110,57 +105,4 @@ describe("onboard", () => {
expect(fs.existsSync(`${fixture.configPath}.backup`)).toBe(false);
expect(fs.existsSync(path.join(path.dirname(fixture.configPath), ".env"))).toBe(true);
});
it("keeps --yes onboarding on local trusted loopback defaults", async () => {
const configPath = createFreshConfigPath();
process.env.HOST = "0.0.0.0";
process.env.PAPERCLIP_BIND = "lan";
await onboard({ config: configPath, yes: true, invokedByRun: true });
const raw = JSON.parse(fs.readFileSync(configPath, "utf8")) as PaperclipConfig;
expect(raw.server.deploymentMode).toBe("local_trusted");
expect(raw.server.exposure).toBe("private");
expect(raw.server.bind).toBe("loopback");
expect(raw.server.host).toBe("127.0.0.1");
});
it("supports authenticated/private quickstart bind presets", async () => {
const configPath = createFreshConfigPath();
process.env.PAPERCLIP_TAILNET_BIND_HOST = "100.64.0.8";
await onboard({ config: configPath, yes: true, invokedByRun: true, bind: "tailnet" });
const raw = JSON.parse(fs.readFileSync(configPath, "utf8")) as PaperclipConfig;
expect(raw.server.deploymentMode).toBe("authenticated");
expect(raw.server.exposure).toBe("private");
expect(raw.server.bind).toBe("tailnet");
expect(raw.server.host).toBe("100.64.0.8");
});
it("keeps tailnet quickstart on loopback until tailscale is available", async () => {
const configPath = createFreshConfigPath();
delete process.env.PAPERCLIP_TAILNET_BIND_HOST;
await onboard({ config: configPath, yes: true, invokedByRun: true, bind: "tailnet" });
const raw = JSON.parse(fs.readFileSync(configPath, "utf8")) as PaperclipConfig;
expect(raw.server.deploymentMode).toBe("authenticated");
expect(raw.server.exposure).toBe("private");
expect(raw.server.bind).toBe("tailnet");
expect(raw.server.host).toBe("127.0.0.1");
});
it("ignores deployment env overrides during --yes quickstart", async () => {
const configPath = createFreshConfigPath();
process.env.PAPERCLIP_DEPLOYMENT_MODE = "authenticated";
await onboard({ config: configPath, yes: true, invokedByRun: true });
const raw = JSON.parse(fs.readFileSync(configPath, "utf8")) as PaperclipConfig;
expect(raw.server.deploymentMode).toBe("local_trusted");
expect(raw.server.exposure).toBe("private");
expect(raw.server.bind).toBe("loopback");
expect(raw.server.host).toBe("127.0.0.1");
});
});

View File

@@ -2,20 +2,10 @@ import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { execFileSync } from "node:child_process";
import { randomUUID } from "node:crypto";
import { afterEach, describe, expect, it, vi } from "vitest";
import {
agents,
companies,
createDb,
projects,
routines,
routineTriggers,
} from "@paperclipai/db";
import {
copyGitHooksToWorktreeGitDir,
copySeededSecretsKey,
pauseSeededScheduledRoutines,
readSourceAttachmentBody,
rebindWorkspaceCwd,
resolveSourceConfigPath,
@@ -23,7 +13,6 @@ import {
resolveWorktreeReseedTargetPaths,
resolveGitWorktreeAddArgs,
resolveWorktreeMakeTargetPath,
worktreeRepairCommand,
worktreeInitCommand,
worktreeMakeCommand,
worktreeReseedCommand,
@@ -39,21 +28,9 @@ import {
sanitizeWorktreeInstanceId,
} from "../commands/worktree-lib.js";
import type { PaperclipConfig } from "../config/schema.js";
import {
getEmbeddedPostgresTestSupport,
startEmbeddedPostgresTestDatabase,
} from "./helpers/embedded-postgres.js";
const ORIGINAL_CWD = process.cwd();
const ORIGINAL_ENV = { ...process.env };
const embeddedPostgresSupport = await getEmbeddedPostgresTestSupport();
const describeEmbeddedPostgres = embeddedPostgresSupport.supported ? describe : describe.skip;
if (!embeddedPostgresSupport.supported) {
console.warn(
`Skipping embedded Postgres worktree CLI tests on this host: ${embeddedPostgresSupport.reason ?? "unsupported environment"}`,
);
}
afterEach(() => {
process.chdir(ORIGINAL_CWD);
@@ -845,246 +822,4 @@ describe("worktree helpers", () => {
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 20_000);
it("no-ops on the primary checkout unless --branch is provided", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-repair-primary-"));
const repoRoot = path.join(tempRoot, "repo");
const originalCwd = process.cwd();
try {
fs.mkdirSync(repoRoot, { recursive: true });
execFileSync("git", ["init"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.email", "test@example.com"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.name", "Test User"], { cwd: repoRoot, stdio: "ignore" });
fs.writeFileSync(path.join(repoRoot, "README.md"), "# temp\n", "utf8");
execFileSync("git", ["add", "README.md"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["commit", "-m", "Initial commit"], { cwd: repoRoot, stdio: "ignore" });
process.chdir(repoRoot);
await worktreeRepairCommand({});
expect(fs.existsSync(path.join(repoRoot, ".paperclip", "config.json"))).toBe(false);
expect(fs.existsSync(path.join(repoRoot, ".paperclip", "worktrees"))).toBe(false);
} finally {
process.chdir(originalCwd);
fs.rmSync(tempRoot, { recursive: true, force: true });
}
});
it("repairs the current linked worktree when Paperclip metadata is missing", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-repair-current-"));
const repoRoot = path.join(tempRoot, "repo");
const worktreePath = path.join(repoRoot, ".paperclip", "worktrees", "repair-me");
const sourceConfigPath = path.join(tempRoot, "source-config.json");
const worktreeHome = path.join(tempRoot, ".paperclip-worktrees");
const worktreePaths = resolveWorktreeLocalPaths({
cwd: worktreePath,
homeDir: worktreeHome,
instanceId: sanitizeWorktreeInstanceId(path.basename(worktreePath)),
});
const originalCwd = process.cwd();
try {
fs.mkdirSync(repoRoot, { recursive: true });
execFileSync("git", ["init"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.email", "test@example.com"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.name", "Test User"], { cwd: repoRoot, stdio: "ignore" });
fs.writeFileSync(path.join(repoRoot, "README.md"), "# temp\n", "utf8");
execFileSync("git", ["add", "README.md"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["commit", "-m", "Initial commit"], { cwd: repoRoot, stdio: "ignore" });
fs.mkdirSync(path.dirname(worktreePath), { recursive: true });
execFileSync("git", ["worktree", "add", "-b", "repair-me", worktreePath, "HEAD"], {
cwd: repoRoot,
stdio: "ignore",
});
fs.writeFileSync(sourceConfigPath, JSON.stringify(buildSourceConfig(), null, 2), "utf8");
fs.mkdirSync(worktreePaths.instanceRoot, { recursive: true });
fs.writeFileSync(path.join(worktreePaths.instanceRoot, "marker.txt"), "stale", "utf8");
process.chdir(worktreePath);
await worktreeRepairCommand({
fromConfig: sourceConfigPath,
home: worktreeHome,
noSeed: true,
});
expect(fs.existsSync(path.join(worktreePath, ".paperclip", "config.json"))).toBe(true);
expect(fs.existsSync(path.join(worktreePath, ".paperclip", ".env"))).toBe(true);
expect(fs.existsSync(path.join(worktreePaths.instanceRoot, "marker.txt"))).toBe(false);
} finally {
process.chdir(originalCwd);
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 20_000);
it("creates and repairs a missing branch worktree when --branch is provided", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-repair-branch-"));
const repoRoot = path.join(tempRoot, "repo");
const sourceConfigPath = path.join(tempRoot, "source-config.json");
const worktreeHome = path.join(tempRoot, ".paperclip-worktrees");
const originalCwd = process.cwd();
const expectedWorktreePath = path.join(repoRoot, ".paperclip", "worktrees", "feature-repair-me");
try {
fs.mkdirSync(repoRoot, { recursive: true });
execFileSync("git", ["init"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.email", "test@example.com"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["config", "user.name", "Test User"], { cwd: repoRoot, stdio: "ignore" });
fs.writeFileSync(path.join(repoRoot, "README.md"), "# temp\n", "utf8");
execFileSync("git", ["add", "README.md"], { cwd: repoRoot, stdio: "ignore" });
execFileSync("git", ["commit", "-m", "Initial commit"], { cwd: repoRoot, stdio: "ignore" });
fs.writeFileSync(sourceConfigPath, JSON.stringify(buildSourceConfig(), null, 2), "utf8");
process.chdir(repoRoot);
await worktreeRepairCommand({
branch: "feature/repair-me",
fromConfig: sourceConfigPath,
home: worktreeHome,
noSeed: true,
});
expect(fs.existsSync(path.join(expectedWorktreePath, ".git"))).toBe(true);
expect(fs.existsSync(path.join(expectedWorktreePath, ".paperclip", "config.json"))).toBe(true);
expect(fs.existsSync(path.join(expectedWorktreePath, ".paperclip", ".env"))).toBe(true);
} finally {
process.chdir(originalCwd);
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 20_000);
});
describeEmbeddedPostgres("pauseSeededScheduledRoutines", () => {
it("pauses only routines with enabled schedule triggers", async () => {
const tempDb = await startEmbeddedPostgresTestDatabase("paperclip-worktree-routines-");
const db = createDb(tempDb.connectionString);
const companyId = randomUUID();
const projectId = randomUUID();
const agentId = randomUUID();
const activeScheduledRoutineId = randomUUID();
const activeApiRoutineId = randomUUID();
const pausedScheduledRoutineId = randomUUID();
const archivedScheduledRoutineId = randomUUID();
const disabledScheduleRoutineId = randomUUID();
try {
await db.insert(companies).values({
id: companyId,
name: "Paperclip",
issuePrefix: `T${companyId.replace(/-/g, "").slice(0, 6).toUpperCase()}`,
requireBoardApprovalForNewAgents: false,
});
await db.insert(agents).values({
id: agentId,
companyId,
name: "Coder",
adapterType: "process",
adapterConfig: {},
runtimeConfig: {},
permissions: {},
});
await db.insert(projects).values({
id: projectId,
companyId,
name: "Project",
status: "in_progress",
});
await db.insert(routines).values([
{
id: activeScheduledRoutineId,
companyId,
projectId,
assigneeAgentId: agentId,
title: "Active scheduled",
status: "active",
},
{
id: activeApiRoutineId,
companyId,
projectId,
assigneeAgentId: agentId,
title: "Active API",
status: "active",
},
{
id: pausedScheduledRoutineId,
companyId,
projectId,
assigneeAgentId: agentId,
title: "Paused scheduled",
status: "paused",
},
{
id: archivedScheduledRoutineId,
companyId,
projectId,
assigneeAgentId: agentId,
title: "Archived scheduled",
status: "archived",
},
{
id: disabledScheduleRoutineId,
companyId,
projectId,
assigneeAgentId: agentId,
title: "Disabled schedule",
status: "active",
},
]);
await db.insert(routineTriggers).values([
{
companyId,
routineId: activeScheduledRoutineId,
kind: "schedule",
enabled: true,
cronExpression: "0 9 * * *",
timezone: "UTC",
},
{
companyId,
routineId: activeApiRoutineId,
kind: "api",
enabled: true,
},
{
companyId,
routineId: pausedScheduledRoutineId,
kind: "schedule",
enabled: true,
cronExpression: "0 10 * * *",
timezone: "UTC",
},
{
companyId,
routineId: archivedScheduledRoutineId,
kind: "schedule",
enabled: true,
cronExpression: "0 11 * * *",
timezone: "UTC",
},
{
companyId,
routineId: disabledScheduleRoutineId,
kind: "schedule",
enabled: false,
cronExpression: "0 12 * * *",
timezone: "UTC",
},
]);
const pausedCount = await pauseSeededScheduledRoutines(tempDb.connectionString);
expect(pausedCount).toBe(1);
const rows = await db.select({ id: routines.id, status: routines.status }).from(routines);
const statusById = new Map(rows.map((row) => [row.id, row.status]));
expect(statusById.get(activeScheduledRoutineId)).toBe("paused");
expect(statusById.get(activeApiRoutineId)).toBe("active");
expect(statusById.get(pausedScheduledRoutineId)).toBe("paused");
expect(statusById.get(archivedScheduledRoutineId)).toBe("archived");
expect(statusById.get(disabledScheduleRoutineId)).toBe("active");
} finally {
await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
await tempDb.cleanup();
}
}, 20_000);
});

View File

@@ -1,21 +1,24 @@
import { inferBindModeFromHost } from "@paperclipai/shared";
import type { PaperclipConfig } from "../config/schema.js";
import type { CheckResult } from "./index.js";
function isLoopbackHost(host: string) {
const normalized = host.trim().toLowerCase();
return normalized === "127.0.0.1" || normalized === "localhost" || normalized === "::1";
}
export function deploymentAuthCheck(config: PaperclipConfig): CheckResult {
const mode = config.server.deploymentMode;
const exposure = config.server.exposure;
const auth = config.auth;
const bind = config.server.bind ?? inferBindModeFromHost(config.server.host);
if (mode === "local_trusted") {
if (bind !== "loopback") {
if (!isLoopbackHost(config.server.host)) {
return {
name: "Deployment/auth mode",
status: "fail",
message: `local_trusted requires loopback binding (found ${bind})`,
message: `local_trusted requires loopback host binding (found ${config.server.host})`,
canRepair: false,
repairHint: "Run `paperclipai configure --section server` and choose Local trusted / loopback reachability",
repairHint: "Run `paperclipai configure --section server` and set host to 127.0.0.1",
};
}
return {
@@ -83,6 +86,6 @@ export function deploymentAuthCheck(config: PaperclipConfig): CheckResult {
return {
name: "Deployment/auth mode",
status: "pass",
message: `Mode ${mode}/${exposure} with bind ${bind} and auth URL mode ${auth.baseUrlMode}`,
message: `Mode ${mode}/${exposure} with auth URL mode ${auth.baseUrlMode}`,
};
}

View File

@@ -3,7 +3,6 @@ import * as p from "@clack/prompts";
import pc from "picocolors";
import { and, eq, gt, isNull } from "drizzle-orm";
import { createDb, instanceUserRoles, invites } from "@paperclipai/db";
import { inferBindModeFromHost } from "@paperclipai/shared";
import { loadPaperclipEnvFile } from "../config/env.js";
import { readConfig, resolveConfigPath } from "../config/store.js";
@@ -41,13 +40,9 @@ function resolveBaseUrl(configPath?: string, explicitBaseUrl?: string) {
if (config?.auth.baseUrlMode === "explicit" && config.auth.publicBaseUrl) {
return config.auth.publicBaseUrl.replace(/\/+$/, "");
}
const bind = config?.server.bind ?? inferBindModeFromHost(config?.server.host);
const host =
bind === "custom"
? config?.server.customBindHost ?? config?.server.host ?? "localhost"
: config?.server.host ?? "localhost";
const host = config?.server.host ?? "localhost";
const port = config?.server.port ?? 3100;
const publicHost = host === "0.0.0.0" || bind === "lan" ? "localhost" : host;
const publicHost = host === "0.0.0.0" ? "localhost" : host;
return `http://${publicHost}:${port}`;
}

View File

@@ -54,7 +54,6 @@ function defaultConfig(): PaperclipConfig {
server: {
deploymentMode: "local_trusted",
exposure: "private",
bind: "loopback",
host: "127.0.0.1",
port: 3100,
allowedHostnames: [],

View File

@@ -73,7 +73,7 @@ export async function dbBackupCommand(opts: DbBackupOptions): Promise<void> {
const result = await runDatabaseBackup({
connectionString: connection.value,
backupDir,
retention: { dailyDays: retentionDays, weeklyWeeks: 4, monthlyMonths: 1 },
retentionDays,
filenamePrefix,
});
spinner.stop(`Backup saved: ${formatDatabaseBackupResult(result)}`);

View File

@@ -3,14 +3,10 @@ import path from "node:path";
import pc from "picocolors";
import {
AUTH_BASE_URL_MODES,
BIND_MODES,
DEPLOYMENT_EXPOSURES,
DEPLOYMENT_MODES,
SECRET_PROVIDERS,
STORAGE_PROVIDERS,
inferBindModeFromHost,
resolveRuntimeBind,
type BindMode,
type AuthBaseUrlMode,
type DeploymentExposure,
type DeploymentMode,
@@ -27,7 +23,6 @@ import { promptLogging } from "../prompts/logging.js";
import { defaultSecretsConfig } from "../prompts/secrets.js";
import { defaultStorageConfig, promptStorage } from "../prompts/storage.js";
import { promptServer } from "../prompts/server.js";
import { buildPresetServerConfig } from "../config/server-bind.js";
import {
describeLocalInstancePaths,
expandHomePrefix,
@@ -51,14 +46,10 @@ type OnboardOptions = {
run?: boolean;
yes?: boolean;
invokedByRun?: boolean;
bind?: BindMode;
};
type OnboardDefaults = Pick<PaperclipConfig, "database" | "logging" | "server" | "auth" | "storage" | "secrets">;
const TAILNET_BIND_WARNING =
"No Tailscale address was detected during setup. The saved config will stay on loopback until Tailscale is available or PAPERCLIP_TAILNET_BIND_HOST is set.";
const ONBOARD_ENV_KEYS = [
"PAPERCLIP_PUBLIC_URL",
"DATABASE_URL",
@@ -68,9 +59,6 @@ const ONBOARD_ENV_KEYS = [
"PAPERCLIP_DB_BACKUP_DIR",
"PAPERCLIP_DEPLOYMENT_MODE",
"PAPERCLIP_DEPLOYMENT_EXPOSURE",
"PAPERCLIP_BIND",
"PAPERCLIP_BIND_HOST",
"PAPERCLIP_TAILNET_BIND_HOST",
"HOST",
"PORT",
"SERVE_UI",
@@ -116,62 +104,29 @@ function resolvePathFromEnv(rawValue: string | undefined): string | null {
return path.resolve(expandHomePrefix(rawValue.trim()));
}
function describeServerBinding(server: Pick<PaperclipConfig["server"], "bind" | "customBindHost" | "host" | "port">): string {
const bind = server.bind ?? inferBindModeFromHost(server.host);
const detail =
bind === "custom"
? server.customBindHost ?? server.host
: bind === "tailnet"
? "detected tailscale address"
: server.host;
return `${bind}${detail ? ` (${detail})` : ""}:${server.port}`;
}
function quickstartDefaultsFromEnv(opts?: { preferTrustedLocal?: boolean }): {
function quickstartDefaultsFromEnv(): {
defaults: OnboardDefaults;
usedEnvKeys: string[];
ignoredEnvKeys: Array<{ key: string; reason: string }>;
} {
const preferTrustedLocal = opts?.preferTrustedLocal ?? false;
const instanceId = resolvePaperclipInstanceId();
const defaultStorage = defaultStorageConfig();
const defaultSecrets = defaultSecretsConfig();
const databaseUrl = process.env.DATABASE_URL?.trim() || undefined;
const publicUrl = preferTrustedLocal
? undefined
: (
process.env.PAPERCLIP_PUBLIC_URL?.trim() ||
process.env.PAPERCLIP_AUTH_PUBLIC_BASE_URL?.trim() ||
process.env.BETTER_AUTH_URL?.trim() ||
process.env.BETTER_AUTH_BASE_URL?.trim() ||
undefined
);
const deploymentMode = preferTrustedLocal
? "local_trusted"
: (parseEnumFromEnv<DeploymentMode>(process.env.PAPERCLIP_DEPLOYMENT_MODE, DEPLOYMENT_MODES) ?? "local_trusted");
const publicUrl =
process.env.PAPERCLIP_PUBLIC_URL?.trim() ||
process.env.PAPERCLIP_AUTH_PUBLIC_BASE_URL?.trim() ||
process.env.BETTER_AUTH_URL?.trim() ||
process.env.BETTER_AUTH_BASE_URL?.trim() ||
undefined;
const deploymentMode =
parseEnumFromEnv<DeploymentMode>(process.env.PAPERCLIP_DEPLOYMENT_MODE, DEPLOYMENT_MODES) ?? "local_trusted";
const deploymentExposureFromEnv = parseEnumFromEnv<DeploymentExposure>(
process.env.PAPERCLIP_DEPLOYMENT_EXPOSURE,
DEPLOYMENT_EXPOSURES,
);
const deploymentExposure =
deploymentMode === "local_trusted" ? "private" : (deploymentExposureFromEnv ?? "private");
const bindFromEnv = parseEnumFromEnv<BindMode>(process.env.PAPERCLIP_BIND, BIND_MODES);
const customBindHostFromEnv = process.env.PAPERCLIP_BIND_HOST?.trim() || undefined;
const hostFromEnv = process.env.HOST?.trim() || undefined;
const configuredBindHost = customBindHostFromEnv ?? hostFromEnv;
const bind = preferTrustedLocal
? "loopback"
: (
deploymentMode === "local_trusted"
? "loopback"
: (bindFromEnv ?? (configuredBindHost ? inferBindModeFromHost(configuredBindHost) : "lan"))
);
const resolvedBind = resolveRuntimeBind({
bind,
host: hostFromEnv ?? (bind === "loopback" ? "127.0.0.1" : "0.0.0.0"),
customBindHost: customBindHostFromEnv,
tailnetBindHost: process.env.PAPERCLIP_TAILNET_BIND_HOST?.trim(),
});
const authPublicBaseUrl = publicUrl;
const authBaseUrlModeFromEnv = parseEnumFromEnv<AuthBaseUrlMode>(
process.env.PAPERCLIP_AUTH_BASE_URL_MODE,
@@ -228,9 +183,7 @@ function quickstartDefaultsFromEnv(opts?: { preferTrustedLocal?: boolean }): {
server: {
deploymentMode,
exposure: deploymentExposure,
bind: resolvedBind.bind,
...(resolvedBind.customBindHost ? { customBindHost: resolvedBind.customBindHost } : {}),
host: resolvedBind.host,
host: process.env.HOST ?? "127.0.0.1",
port: Number(process.env.PORT) || 3100,
allowedHostnames: Array.from(new Set([...allowedHostnamesFromEnv, ...(hostnameFromPublicUrl ? [hostnameFromPublicUrl] : [])])),
serveUi: parseBooleanFromEnv(process.env.SERVE_UI) ?? true,
@@ -267,49 +220,12 @@ function quickstartDefaultsFromEnv(opts?: { preferTrustedLocal?: boolean }): {
},
};
const ignoredEnvKeys: Array<{ key: string; reason: string }> = [];
if (preferTrustedLocal) {
const forcedLocalReason = "Ignored because --yes quickstart forces trusted local loopback defaults";
for (const key of [
"PAPERCLIP_DEPLOYMENT_MODE",
"PAPERCLIP_DEPLOYMENT_EXPOSURE",
"PAPERCLIP_BIND",
"PAPERCLIP_BIND_HOST",
"HOST",
"PAPERCLIP_AUTH_BASE_URL_MODE",
"PAPERCLIP_AUTH_PUBLIC_BASE_URL",
"PAPERCLIP_PUBLIC_URL",
"BETTER_AUTH_URL",
"BETTER_AUTH_BASE_URL",
] as const) {
if (process.env[key] !== undefined) {
ignoredEnvKeys.push({ key, reason: forcedLocalReason });
}
}
}
if (deploymentMode === "local_trusted" && process.env.PAPERCLIP_DEPLOYMENT_EXPOSURE !== undefined) {
ignoredEnvKeys.push({
key: "PAPERCLIP_DEPLOYMENT_EXPOSURE",
reason: "Ignored because deployment mode local_trusted always forces private exposure",
});
}
if (deploymentMode === "local_trusted" && process.env.PAPERCLIP_BIND !== undefined) {
ignoredEnvKeys.push({
key: "PAPERCLIP_BIND",
reason: "Ignored because deployment mode local_trusted always uses loopback reachability",
});
}
if (deploymentMode === "local_trusted" && process.env.PAPERCLIP_BIND_HOST !== undefined) {
ignoredEnvKeys.push({
key: "PAPERCLIP_BIND_HOST",
reason: "Ignored because deployment mode local_trusted always uses loopback reachability",
});
}
if (deploymentMode === "local_trusted" && process.env.HOST !== undefined) {
ignoredEnvKeys.push({
key: "HOST",
reason: "Ignored because deployment mode local_trusted always uses loopback reachability",
});
}
const ignoredKeySet = new Set(ignoredEnvKeys.map((entry) => entry.key));
const usedEnvKeys = ONBOARD_ENV_KEYS.filter(
@@ -323,10 +239,6 @@ function canCreateBootstrapInviteImmediately(config: Pick<PaperclipConfig, "data
}
export async function onboard(opts: OnboardOptions): Promise<void> {
if (opts.bind && !["loopback", "lan", "tailnet"].includes(opts.bind)) {
throw new Error(`Unsupported bind preset for onboard: ${opts.bind}. Use loopback, lan, or tailnet.`);
}
printPaperclipCliBanner();
p.intro(pc.bgCyan(pc.black(" paperclipai onboard ")));
const configPath = resolveConfigPath(opts.config);
@@ -381,7 +293,7 @@ export async function onboard(opts: OnboardOptions): Promise<void> {
`Database: ${existingConfig.database.mode}`,
existingConfig.llm ? `LLM: ${existingConfig.llm.provider}` : "LLM: not configured",
`Logging: ${existingConfig.logging.mode} -> ${existingConfig.logging.logDir}`,
`Server: ${existingConfig.server.deploymentMode}/${existingConfig.server.exposure} @ ${describeServerBinding(existingConfig.server)}`,
`Server: ${existingConfig.server.deploymentMode}/${existingConfig.server.exposure} @ ${existingConfig.server.host}:${existingConfig.server.port}`,
`Allowed hosts: ${existingConfig.server.allowedHostnames.length > 0 ? existingConfig.server.allowedHostnames.join(", ") : "(loopback only)"}`,
`Auth URL mode: ${existingConfig.auth.baseUrlMode}${existingConfig.auth.publicBaseUrl ? ` (${existingConfig.auth.publicBaseUrl})` : ""}`,
`Storage: ${existingConfig.storage.provider}`,
@@ -424,13 +336,7 @@ export async function onboard(opts: OnboardOptions): Promise<void> {
let setupMode: SetupMode = "quickstart";
if (opts.yes) {
p.log.message(
pc.dim(
opts.bind
? `\`--yes\` enabled: using Quickstart defaults with bind=${opts.bind}.`
: "`--yes` enabled: using Quickstart defaults.",
),
);
p.log.message(pc.dim("`--yes` enabled: using Quickstart defaults."));
} else {
const setupModeChoice = await p.select({
message: "Choose setup path",
@@ -459,9 +365,7 @@ export async function onboard(opts: OnboardOptions): Promise<void> {
if (tc) trackInstallStarted(tc);
let llm: PaperclipConfig["llm"] | undefined;
const { defaults: derivedDefaults, usedEnvKeys, ignoredEnvKeys } = quickstartDefaultsFromEnv({
preferTrustedLocal: opts.yes === true && !opts.bind,
});
const { defaults: derivedDefaults, usedEnvKeys, ignoredEnvKeys } = quickstartDefaultsFromEnv();
let {
database,
logging,
@@ -471,19 +375,6 @@ export async function onboard(opts: OnboardOptions): Promise<void> {
secrets,
} = derivedDefaults;
if (opts.bind === "loopback" || opts.bind === "lan" || opts.bind === "tailnet") {
const preset = buildPresetServerConfig(opts.bind, {
port: server.port,
allowedHostnames: server.allowedHostnames,
serveUi: server.serveUi,
});
server = preset.server;
auth = preset.auth;
if (opts.bind === "tailnet" && server.host === "127.0.0.1") {
p.log.warn(TAILNET_BIND_WARNING);
}
}
if (setupMode === "advanced") {
p.log.step(pc.bold("Database"));
database = await promptDatabase(database);
@@ -571,13 +462,7 @@ export async function onboard(opts: OnboardOptions): Promise<void> {
);
} else {
p.log.step(pc.bold("Quickstart"));
p.log.message(
pc.dim(
opts.bind
? `Using quickstart defaults with bind=${opts.bind}.`
: `Using quickstart defaults: ${server.deploymentMode}/${server.exposure} @ ${describeServerBinding(server)}.`,
),
);
p.log.message(pc.dim("Using quickstart defaults."));
if (usedEnvKeys.length > 0) {
p.log.message(pc.dim(`Environment-aware defaults active (${usedEnvKeys.length} env var(s) detected).`));
} else {
@@ -636,7 +521,7 @@ export async function onboard(opts: OnboardOptions): Promise<void> {
`Database: ${database.mode}`,
llm ? `LLM: ${llm.provider}` : "LLM: not configured",
`Logging: ${logging.mode} -> ${logging.logDir}`,
`Server: ${server.deploymentMode}/${server.exposure} @ ${describeServerBinding(server)}`,
`Server: ${server.deploymentMode}/${server.exposure} @ ${server.host}:${server.port}`,
`Allowed hosts: ${server.allowedHostnames.length > 0 ? server.allowedHostnames.join(", ") : "(loopback only)"}`,
`Auth URL mode: ${auth.baseUrlMode}${auth.publicBaseUrl ? ` (${auth.publicBaseUrl})` : ""}`,
`Storage: ${storage.provider}`,

View File

@@ -1,6 +1,5 @@
import fs from "node:fs";
import path from "node:path";
import { spawnSync } from "node:child_process";
import { fileURLToPath, pathToFileURL } from "node:url";
import * as p from "@clack/prompts";
import pc from "picocolors";
@@ -22,7 +21,6 @@ interface RunOptions {
instance?: string;
repair?: boolean;
yes?: boolean;
bind?: "loopback" | "lan" | "tailnet";
}
interface StartedServer {
@@ -59,7 +57,7 @@ export async function runCommand(opts: RunOptions): Promise<void> {
}
p.log.step("No config found. Starting onboarding...");
await onboard({ config: configPath, invokedByRun: true, bind: opts.bind });
await onboard({ config: configPath, invokedByRun: true });
}
p.log.step("Running doctor checks...");
@@ -148,35 +146,11 @@ function maybeEnableUiDevMiddleware(entrypoint: string): void {
}
}
function ensureDevWorkspaceBuildDeps(projectRoot: string): void {
const buildScript = path.resolve(projectRoot, "scripts/ensure-plugin-build-deps.mjs");
if (!fs.existsSync(buildScript)) return;
const result = spawnSync(process.execPath, [buildScript], {
cwd: projectRoot,
stdio: "inherit",
timeout: 120_000,
});
if (result.error) {
throw new Error(
`Failed to prepare workspace build artifacts before starting the Paperclip dev server.\n${formatError(result.error)}`,
);
}
if ((result.status ?? 1) !== 0) {
throw new Error(
"Failed to prepare workspace build artifacts before starting the Paperclip dev server.",
);
}
}
async function importServerEntry(): Promise<StartedServer> {
// Dev mode: try local workspace path (monorepo with tsx)
const projectRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "../../..");
const devEntry = path.resolve(projectRoot, "server/src/index.ts");
if (fs.existsSync(devEntry)) {
ensureDevWorkspaceBuildDeps(projectRoot);
maybeEnableUiDevMiddleware(devEntry);
const mod = await import(pathToFileURL(devEntry).href);
return await startServerFromModule(mod, devEntry);

View File

@@ -214,8 +214,6 @@ export function buildWorktreeConfig(input: {
server: {
deploymentMode: source?.server.deploymentMode ?? "local_trusted",
exposure: source?.server.exposure ?? "private",
...(source?.server.bind ? { bind: source.server.bind } : {}),
...(source?.server.customBindHost ? { customBindHost: source.server.customBindHost } : {}),
host: source?.server.host ?? "127.0.0.1",
port: serverPort,
allowedHostnames: source?.server.allowedHostnames ?? [],

View File

@@ -39,8 +39,6 @@ import {
issues,
projectWorkspaces,
projects,
routines,
routineTriggers,
runDatabaseBackup,
runDatabaseRestore,
createEmbeddedPostgresLogBuffer,
@@ -130,17 +128,6 @@ type WorktreeReseedOptions = {
allowLiveTarget?: boolean;
};
type WorktreeRepairOptions = {
branch?: string;
home?: string;
fromConfig?: string;
fromDataDir?: string;
fromInstance?: string;
seedMode?: string;
noSeed?: boolean;
allowLiveTarget?: boolean;
};
type EmbeddedPostgresInstance = {
initialise(): Promise<void>;
start(): Promise<void>;
@@ -561,46 +548,6 @@ function detectGitBranchName(cwd: string): string | null {
}
}
function validateGitBranchName(cwd: string, branchName: string): string {
const value = nonEmpty(branchName);
if (!value) {
throw new Error("Branch name is required.");
}
try {
execFileSync("git", ["check-ref-format", "--branch", value], {
cwd,
stdio: ["ignore", "pipe", "pipe"],
});
} catch (error) {
throw new Error(`Invalid branch name "${branchName}": ${extractExecSyncErrorMessage(error) ?? String(error)}`);
}
return value;
}
function isPrimaryGitWorktree(cwd: string): boolean {
const workspace = detectGitWorkspaceInfo(cwd);
return Boolean(workspace && workspace.gitDir === workspace.commonDir);
}
function resolvePrimaryGitRepoRoot(cwd: string): string {
const workspace = detectGitWorkspaceInfo(cwd);
if (!workspace) {
throw new Error("Current directory is not inside a git repository.");
}
if (workspace.gitDir === workspace.commonDir) {
return workspace.root;
}
return path.resolve(workspace.commonDir, "..");
}
function resolveRepairWorktreeDirName(branchName: string): string {
const normalized = branchName.trim()
.replace(/[^A-Za-z0-9._-]+/g, "-")
.replace(/-+/g, "-")
.replace(/^[-._]+|[-._]+$/g, "");
return normalized || "worktree";
}
function detectGitWorkspaceInfo(cwd: string): GitWorkspaceInfo | null {
try {
const root = execFileSync("git", ["rev-parse", "--show-toplevel"], {
@@ -824,21 +771,6 @@ export function resolveWorktreeReseedSource(input: WorktreeReseedOptions): Resol
);
}
function resolveWorktreeRepairSource(input: WorktreeRepairOptions): ResolvedWorktreeReseedSource {
const fromConfig = nonEmpty(input.fromConfig);
const fromDataDir = nonEmpty(input.fromDataDir);
const fromInstance = nonEmpty(input.fromInstance) ?? "default";
const configPath = resolveSourceConfigPath({
fromConfig: fromConfig ?? undefined,
fromDataDir: fromDataDir ?? undefined,
fromInstance,
});
return {
configPath,
label: configPath,
};
}
export function resolveWorktreeReseedTargetPaths(input: {
configPath: string;
rootPath: string;
@@ -860,105 +792,6 @@ export function resolveWorktreeReseedTargetPaths(input: {
});
}
function resolveExistingGitWorktree(selector: string, cwd: string): MergeSourceChoice | null {
const trimmed = selector.trim();
if (trimmed.length === 0) return null;
const directPath = path.resolve(trimmed);
if (existsSync(directPath)) {
return {
worktree: directPath,
branch: null,
branchLabel: path.basename(directPath),
hasPaperclipConfig: existsSync(path.resolve(directPath, ".paperclip", "config.json")),
isCurrent: directPath === path.resolve(cwd),
};
}
return toMergeSourceChoices(cwd).find((choice) =>
choice.worktree === directPath
|| path.basename(choice.worktree) === trimmed
|| choice.branchLabel === trimmed
|| choice.branch === trimmed,
) ?? null;
}
async function ensureRepairTargetWorktree(input: {
selector?: string;
seedMode: WorktreeSeedMode;
opts: WorktreeRepairOptions;
}): Promise<ResolvedWorktreeRepairTarget | null> {
const cwd = process.cwd();
const currentRoot = path.resolve(cwd);
const currentConfigPath = path.resolve(currentRoot, ".paperclip", "config.json");
if (!input.selector) {
if (isPrimaryGitWorktree(cwd)) {
return null;
}
return {
rootPath: currentRoot,
configPath: currentConfigPath,
label: path.basename(currentRoot),
branchName: detectGitBranchName(cwd),
created: false,
};
}
const existing = resolveExistingGitWorktree(input.selector, cwd);
if (existing) {
return {
rootPath: existing.worktree,
configPath: path.resolve(existing.worktree, ".paperclip", "config.json"),
label: existing.branchLabel,
branchName: existing.branchLabel === "(detached)" ? null : existing.branchLabel,
created: false,
};
}
const repoRoot = resolvePrimaryGitRepoRoot(cwd);
const branchName = validateGitBranchName(repoRoot, input.selector);
const targetPath = path.resolve(
repoRoot,
".paperclip",
"worktrees",
resolveRepairWorktreeDirName(branchName),
);
if (existsSync(targetPath)) {
throw new Error(`Target path already exists but is not a registered git worktree: ${targetPath}`);
}
mkdirSync(path.dirname(targetPath), { recursive: true });
const spinner = p.spinner();
spinner.start(`Creating git worktree for ${branchName}...`);
try {
execFileSync("git", resolveGitWorktreeAddArgs({
branchName,
targetPath,
branchExists: localBranchExists(repoRoot, branchName),
}), {
cwd: repoRoot,
stdio: ["ignore", "pipe", "pipe"],
});
spinner.stop(`Created git worktree at ${targetPath}.`);
} catch (error) {
spinner.stop(pc.red("Failed to create git worktree."));
throw new Error(extractExecSyncErrorMessage(error) ?? String(error));
}
installDependenciesBestEffort(targetPath);
return {
rootPath: targetPath,
configPath: path.resolve(targetPath, ".paperclip", "config.json"),
label: branchName,
branchName,
created: true,
};
}
function resolveSourceConnectionString(config: PaperclipConfig, envEntries: Record<string, string>, portOverride?: number): string {
if (config.database.mode === "postgres") {
const connectionString = nonEmpty(envEntries.DATABASE_URL) ?? nonEmpty(config.database.connectionString);
@@ -1089,36 +922,6 @@ async function ensureEmbeddedPostgres(dataDir: string, preferredPort: number): P
};
}
export async function pauseSeededScheduledRoutines(connectionString: string): Promise<number> {
const db = createDb(connectionString);
try {
const scheduledRoutineIds = await db
.selectDistinct({ routineId: routineTriggers.routineId })
.from(routineTriggers)
.where(and(eq(routineTriggers.kind, "schedule"), eq(routineTriggers.enabled, true)));
const idsToPause = scheduledRoutineIds
.map((row) => row.routineId)
.filter((value): value is string => Boolean(value));
if (idsToPause.length === 0) {
return 0;
}
const paused = await db
.update(routines)
.set({
status: "paused",
updatedAt: new Date(),
})
.where(and(inArray(routines.id, idsToPause), sql`${routines.status} <> 'paused'`, sql`${routines.status} <> 'archived'`))
.returning({ id: routines.id });
return paused.length;
} finally {
await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
}
}
async function seedWorktreeDatabase(input: {
sourceConfigPath: string;
sourceConfig: PaperclipConfig;
@@ -1156,7 +959,7 @@ async function seedWorktreeDatabase(input: {
const backup = await runDatabaseBackup({
connectionString: sourceConnectionString,
backupDir: path.resolve(input.targetPaths.backupDir, "seed"),
retention: { dailyDays: 7, weeklyWeeks: 4, monthlyMonths: 1 },
retentionDays: 7,
filenamePrefix: `${input.instanceId}-seed`,
includeMigrationJournal: true,
excludeTables: seedPlan.excludedTables,
@@ -1176,7 +979,6 @@ async function seedWorktreeDatabase(input: {
backupFile: backup.backupFile,
});
await applyPendingMigrations(targetConnectionString);
await pauseSeededScheduledRoutines(targetConnectionString);
const reboundWorkspaces = await rebindSeededProjectWorkspaces({
targetConnectionString,
currentCwd: input.targetPaths.cwd,
@@ -1370,7 +1172,18 @@ export async function worktreeMakeCommand(nameArg: string, opts: WorktreeMakeOpt
throw new Error(extractExecSyncErrorMessage(error) ?? String(error));
}
installDependenciesBestEffort(targetPath);
const installSpinner = p.spinner();
installSpinner.start("Installing dependencies...");
try {
execFileSync("pnpm", ["install"], {
cwd: targetPath,
stdio: ["ignore", "pipe", "pipe"],
});
installSpinner.stop("Installed dependencies.");
} catch (error) {
installSpinner.stop(pc.yellow("Failed to install dependencies (continuing anyway)."));
p.log.warning(extractExecSyncErrorMessage(error) ?? String(error));
}
const originalCwd = process.cwd();
try {
@@ -1387,21 +1200,6 @@ export async function worktreeMakeCommand(nameArg: string, opts: WorktreeMakeOpt
}
}
function installDependenciesBestEffort(targetPath: string): void {
const installSpinner = p.spinner();
installSpinner.start("Installing dependencies...");
try {
execFileSync("pnpm", ["install"], {
cwd: targetPath,
stdio: ["ignore", "pipe", "pipe"],
});
installSpinner.stop("Installed dependencies.");
} catch (error) {
installSpinner.stop(pc.yellow("Failed to install dependencies (continuing anyway)."));
p.log.warning(extractExecSyncErrorMessage(error) ?? String(error));
}
}
type WorktreeCleanupOptions = {
instance?: string;
home?: string;
@@ -1435,14 +1233,6 @@ type ResolvedWorktreeReseedSource = {
label: string;
};
type ResolvedWorktreeRepairTarget = {
rootPath: string;
configPath: string;
label: string;
branchName: string | null;
created: boolean;
};
function parseGitWorktreeList(cwd: string): GitWorktreeListEntry[] {
const raw = execFileSync("git", ["worktree", "list", "--porcelain"], {
cwd,
@@ -2884,7 +2674,10 @@ export async function worktreeMergeHistoryCommand(sourceArg: string | undefined,
}
}
async function runWorktreeReseed(opts: WorktreeReseedOptions): Promise<void> {
export async function worktreeReseedCommand(opts: WorktreeReseedOptions): Promise<void> {
printPaperclipCliBanner();
p.intro(pc.bgCyan(pc.black(" paperclipai worktree reseed ")));
const seedMode = opts.seedMode ?? "full";
if (!isWorktreeSeedMode(seedMode)) {
throw new Error(`Unsupported seed mode "${seedMode}". Expected one of: minimal, full.`);
@@ -2964,96 +2757,6 @@ async function runWorktreeReseed(opts: WorktreeReseedOptions): Promise<void> {
}
}
export async function worktreeReseedCommand(opts: WorktreeReseedOptions): Promise<void> {
printPaperclipCliBanner();
p.intro(pc.bgCyan(pc.black(" paperclipai worktree reseed ")));
await runWorktreeReseed(opts);
}
export async function worktreeRepairCommand(opts: WorktreeRepairOptions): Promise<void> {
printPaperclipCliBanner();
p.intro(pc.bgCyan(pc.black(" paperclipai worktree repair ")));
const seedMode = opts.seedMode ?? "minimal";
if (!isWorktreeSeedMode(seedMode)) {
throw new Error(`Unsupported seed mode "${seedMode}". Expected one of: minimal, full.`);
}
const target = await ensureRepairTargetWorktree({
selector: nonEmpty(opts.branch) ?? undefined,
seedMode,
opts,
});
if (!target) {
p.log.warn("Current checkout is the primary repo worktree. Pass --branch to create or repair a linked worktree.");
p.outro(pc.yellow("No worktree repaired."));
return;
}
const source = resolveWorktreeRepairSource(opts);
if (!existsSync(source.configPath)) {
throw new Error(`Source config not found at ${source.configPath}.`);
}
if (path.resolve(source.configPath) === path.resolve(target.configPath)) {
throw new Error("Source and target Paperclip configs are the same. Use --from-config/--from-instance to point repair at a different source.");
}
const targetConfig = existsSync(target.configPath) ? readConfig(target.configPath) : null;
const targetEnvEntries = readPaperclipEnvEntries(resolvePaperclipEnvFile(target.configPath));
const targetHasWorktreeEnv = Boolean(
nonEmpty(targetEnvEntries.PAPERCLIP_HOME) && nonEmpty(targetEnvEntries.PAPERCLIP_INSTANCE_ID),
);
if (targetConfig && targetHasWorktreeEnv && opts.noSeed) {
p.log.message(pc.dim(`Target ${target.label} already has worktree-local config/env. Skipping reseed because --no-seed was passed.`));
p.outro(pc.green(`Worktree metadata already looks healthy for ${target.label}.`));
return;
}
if (targetConfig && targetHasWorktreeEnv) {
await runWorktreeReseed({
fromConfig: source.configPath,
to: target.rootPath,
seedMode,
yes: true,
allowLiveTarget: opts.allowLiveTarget,
});
return;
}
const repairInstanceId = sanitizeWorktreeInstanceId(path.basename(target.rootPath));
const repairPaths = resolveWorktreeLocalPaths({
cwd: target.rootPath,
homeDir: resolveWorktreeHome(opts.home),
instanceId: repairInstanceId,
});
const runningTargetPid = readRunningPostmasterPid(path.resolve(repairPaths.embeddedPostgresDataDir, "postmaster.pid"));
if (runningTargetPid && !opts.allowLiveTarget) {
throw new Error(
`Target worktree database appears to be running (pid ${runningTargetPid}). Stop Paperclip in ${target.rootPath} before repairing, or re-run with --allow-live-target if you want to override this guard.`,
);
}
if (runningTargetPid && opts.allowLiveTarget) {
p.log.warning(`Proceeding even though the target embedded PostgreSQL appears to be running (pid ${runningTargetPid}).`);
}
const originalCwd = process.cwd();
try {
process.chdir(target.rootPath);
await runWorktreeInit({
home: opts.home,
fromConfig: source.configPath,
fromDataDir: opts.fromDataDir,
fromInstance: opts.fromInstance,
seed: opts.noSeed ? false : true,
seedMode,
force: true,
});
} finally {
process.chdir(originalCwd);
}
}
export function registerWorktreeCommands(program: Command): void {
const worktree = program.command("worktree").description("Worktree-local Paperclip instance helpers");
@@ -3129,19 +2832,6 @@ export function registerWorktreeCommands(program: Command): void {
.option("--allow-live-target", "Override the guard that requires the target worktree DB to be stopped first", false)
.action(worktreeReseedCommand);
worktree
.command("repair")
.description("Create or repair a linked worktree-local Paperclip instance without touching the primary checkout")
.option("--branch <name>", "Existing branch/worktree selector to repair, or a branch name to create under .paperclip/worktrees")
.option("--home <path>", `Home root for worktree instances (env: PAPERCLIP_WORKTREES_DIR, default: ${DEFAULT_WORKTREE_HOME})`)
.option("--from-config <path>", "Source config.json to seed from")
.option("--from-data-dir <path>", "Source PAPERCLIP_HOME used when deriving the source config")
.option("--from-instance <id>", "Source instance id when deriving the source config (default: default)")
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
.option("--no-seed", "Repair metadata only and skip reseeding when bootstrapping a missing worktree config", false)
.option("--allow-live-target", "Override the guard that requires the target worktree DB to be stopped first", false)
.action(worktreeRepairCommand);
program
.command("worktree:cleanup")
.description("Safely remove a worktree, its branch, and its isolated instance data")

View File

@@ -1,183 +0,0 @@
import { execFileSync } from "node:child_process";
import {
ALL_INTERFACES_BIND_HOST,
LOOPBACK_BIND_HOST,
inferBindModeFromHost,
isAllInterfacesHost,
isLoopbackHost,
type BindMode,
type DeploymentExposure,
type DeploymentMode,
} from "@paperclipai/shared";
import type { AuthConfig, ServerConfig } from "./schema.js";
const TAILSCALE_DETECT_TIMEOUT_MS = 3000;
type BaseServerInput = {
port: number;
allowedHostnames: string[];
serveUi: boolean;
};
export function inferConfiguredBind(server?: Partial<ServerConfig>): BindMode {
if (server?.bind) return server.bind;
return inferBindModeFromHost(server?.customBindHost ?? server?.host);
}
export function detectTailnetBindHost(): string | undefined {
const explicit = process.env.PAPERCLIP_TAILNET_BIND_HOST?.trim();
if (explicit) return explicit;
try {
const stdout = execFileSync("tailscale", ["ip", "-4"], {
encoding: "utf8",
stdio: ["ignore", "pipe", "ignore"],
timeout: TAILSCALE_DETECT_TIMEOUT_MS,
});
return stdout
.split(/\r?\n/)
.map((line) => line.trim())
.find(Boolean);
} catch {
return undefined;
}
}
export function buildPresetServerConfig(
bind: Exclude<BindMode, "custom">,
input: BaseServerInput,
): { server: ServerConfig; auth: AuthConfig } {
const host =
bind === "loopback"
? LOOPBACK_BIND_HOST
: bind === "tailnet"
? (detectTailnetBindHost() ?? LOOPBACK_BIND_HOST)
: ALL_INTERFACES_BIND_HOST;
return {
server: {
deploymentMode: bind === "loopback" ? "local_trusted" : "authenticated",
exposure: "private",
bind,
customBindHost: undefined,
host,
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
},
auth: {
baseUrlMode: "auto",
disableSignUp: false,
},
};
}
export function buildCustomServerConfig(input: BaseServerInput & {
deploymentMode: DeploymentMode;
exposure: DeploymentExposure;
host: string;
publicBaseUrl?: string;
}): { server: ServerConfig; auth: AuthConfig } {
const normalizedHost = input.host.trim();
const bind = isLoopbackHost(normalizedHost)
? "loopback"
: isAllInterfacesHost(normalizedHost)
? "lan"
: "custom";
return {
server: {
deploymentMode: input.deploymentMode,
exposure: input.deploymentMode === "local_trusted" ? "private" : input.exposure,
bind,
customBindHost: bind === "custom" ? normalizedHost : undefined,
host: normalizedHost,
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
},
auth:
input.deploymentMode === "authenticated" && input.exposure === "public"
? {
baseUrlMode: "explicit",
disableSignUp: false,
publicBaseUrl: input.publicBaseUrl,
}
: {
baseUrlMode: "auto",
disableSignUp: false,
},
};
}
export function resolveQuickstartServerConfig(input: {
bind?: BindMode | null;
deploymentMode?: DeploymentMode | null;
exposure?: DeploymentExposure | null;
host?: string | null;
port: number;
allowedHostnames: string[];
serveUi: boolean;
publicBaseUrl?: string;
}): { server: ServerConfig; auth: AuthConfig } {
const trimmedHost = input.host?.trim();
const explicitBind = input.bind ?? null;
if (explicitBind === "loopback" || explicitBind === "lan" || explicitBind === "tailnet") {
return buildPresetServerConfig(explicitBind, {
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
});
}
if (explicitBind === "custom") {
return buildCustomServerConfig({
deploymentMode: input.deploymentMode ?? "authenticated",
exposure: input.exposure ?? "private",
host: trimmedHost || LOOPBACK_BIND_HOST,
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
publicBaseUrl: input.publicBaseUrl,
});
}
if (trimmedHost) {
return buildCustomServerConfig({
deploymentMode: input.deploymentMode ?? (isLoopbackHost(trimmedHost) ? "local_trusted" : "authenticated"),
exposure: input.exposure ?? "private",
host: trimmedHost,
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
publicBaseUrl: input.publicBaseUrl,
});
}
if (input.deploymentMode === "authenticated") {
if (input.exposure === "public") {
return buildCustomServerConfig({
deploymentMode: "authenticated",
exposure: "public",
host: ALL_INTERFACES_BIND_HOST,
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
publicBaseUrl: input.publicBaseUrl,
});
}
return buildPresetServerConfig("lan", {
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
});
}
return buildPresetServerConfig("loopback", {
port: input.port,
allowedHostnames: input.allowedHostnames,
serveUi: input.serveUi,
});
}

View File

@@ -50,8 +50,7 @@ program
.description("Interactive first-run setup wizard")
.option("-c, --config <path>", "Path to config file")
.option("-d, --data-dir <path>", DATA_DIR_OPTION_HELP)
.option("--bind <mode>", "Quickstart reachability preset (loopback, lan, tailnet)")
.option("-y, --yes", "Accept quickstart defaults (trusted local loopback unless --bind is set) and start immediately", false)
.option("-y, --yes", "Accept defaults (quickstart + start immediately)", false)
.option("--run", "Start Paperclip immediately after saving config", false)
.action(onboard);
@@ -109,7 +108,6 @@ program
.option("-c, --config <path>", "Path to config file")
.option("-d, --data-dir <path>", DATA_DIR_OPTION_HELP)
.option("-i, --instance <id>", "Local instance id (default: default)")
.option("--bind <mode>", "On first run, use onboarding reachability preset (loopback, lan, tailnet)")
.option("--repair", "Attempt automatic repairs during doctor", true)
.option("--no-repair", "Disable automatic repairs during doctor")
.action(runCommand);

View File

@@ -1,16 +1,6 @@
import * as p from "@clack/prompts";
import { isLoopbackHost, type BindMode } from "@paperclipai/shared";
import type { AuthConfig, ServerConfig } from "../config/schema.js";
import { parseHostnameCsv } from "../config/hostnames.js";
import { buildCustomServerConfig, buildPresetServerConfig, inferConfiguredBind } from "../config/server-bind.js";
const TAILNET_BIND_WARNING =
"No Tailscale address was detected during setup. The saved config will stay on loopback until Tailscale is available or PAPERCLIP_TAILNET_BIND_HOST is set.";
function cancelled(): never {
p.cancel("Setup cancelled.");
process.exit(0);
}
export async function promptServer(opts?: {
currentServer?: Partial<ServerConfig>;
@@ -18,37 +8,69 @@ export async function promptServer(opts?: {
}): Promise<{ server: ServerConfig; auth: AuthConfig }> {
const currentServer = opts?.currentServer;
const currentAuth = opts?.currentAuth;
const currentBind = inferConfiguredBind(currentServer);
const bindSelection = await p.select({
message: "Reachability",
const deploymentModeSelection = await p.select({
message: "Deployment mode",
options: [
{
value: "loopback" as const,
label: "Trusted local",
hint: "Recommended for first run: localhost only, no login friction",
value: "local_trusted",
label: "Local trusted",
hint: "Easiest for local setup (no login, localhost-only)",
},
{
value: "lan" as const,
label: "Private network",
hint: "Broad private bind for LAN, VPN, or legacy --tailscale-auth style access",
},
{
value: "tailnet" as const,
label: "Tailnet",
hint: "Private authenticated access using the machine's detected Tailscale address",
},
{
value: "custom" as const,
label: "Custom",
hint: "Choose exact auth mode, exposure, and host manually",
value: "authenticated",
label: "Authenticated",
hint: "Login required; use for private network or public hosting",
},
],
initialValue: currentBind,
initialValue: currentServer?.deploymentMode ?? "local_trusted",
});
if (p.isCancel(bindSelection)) cancelled();
const bind = bindSelection as BindMode;
if (p.isCancel(deploymentModeSelection)) {
p.cancel("Setup cancelled.");
process.exit(0);
}
const deploymentMode = deploymentModeSelection as ServerConfig["deploymentMode"];
let exposure: ServerConfig["exposure"] = "private";
if (deploymentMode === "authenticated") {
const exposureSelection = await p.select({
message: "Exposure profile",
options: [
{
value: "private",
label: "Private network",
hint: "Private access (for example Tailscale), lower setup friction",
},
{
value: "public",
label: "Public internet",
hint: "Internet-facing deployment with stricter requirements",
},
],
initialValue: currentServer?.exposure ?? "private",
});
if (p.isCancel(exposureSelection)) {
p.cancel("Setup cancelled.");
process.exit(0);
}
exposure = exposureSelection as ServerConfig["exposure"];
}
const hostDefault = deploymentMode === "local_trusted" ? "127.0.0.1" : "0.0.0.0";
const hostStr = await p.text({
message: "Bind host",
defaultValue: currentServer?.host ?? hostDefault,
placeholder: hostDefault,
validate: (val) => {
if (!val.trim()) return "Host is required";
},
});
if (p.isCancel(hostStr)) {
p.cancel("Setup cancelled.");
process.exit(0);
}
const portStr = await p.text({
message: "Server port",
@@ -62,113 +84,15 @@ export async function promptServer(opts?: {
},
});
if (p.isCancel(portStr)) cancelled();
const port = Number(portStr) || 3100;
const serveUi = currentServer?.serveUi ?? true;
if (bind === "loopback") {
return buildPresetServerConfig("loopback", {
port,
allowedHostnames: [],
serveUi,
});
if (p.isCancel(portStr)) {
p.cancel("Setup cancelled.");
process.exit(0);
}
if (bind === "lan" || bind === "tailnet") {
const allowedHostnamesInput = await p.text({
message: "Allowed private hostnames (comma-separated, optional)",
defaultValue: (currentServer?.allowedHostnames ?? []).join(", "),
placeholder:
bind === "tailnet"
? "your-machine.tailnet.ts.net"
: "dotta-macbook-pro, host.docker.internal",
validate: (val) => {
try {
parseHostnameCsv(val);
return;
} catch (err) {
return err instanceof Error ? err.message : "Invalid hostname list";
}
},
});
if (p.isCancel(allowedHostnamesInput)) cancelled();
const preset = buildPresetServerConfig(bind, {
port,
allowedHostnames: parseHostnameCsv(allowedHostnamesInput),
serveUi,
});
if (bind === "tailnet" && isLoopbackHost(preset.server.host)) {
p.log.warn(TAILNET_BIND_WARNING);
}
return preset;
}
const deploymentModeSelection = await p.select({
message: "Auth mode",
options: [
{
value: "local_trusted",
label: "Local trusted",
hint: "No login required; only safe with loopback-only or similarly trusted access",
},
{
value: "authenticated",
label: "Authenticated",
hint: "Login required; supports both private-network and public deployments",
},
],
initialValue: currentServer?.deploymentMode ?? "authenticated",
});
if (p.isCancel(deploymentModeSelection)) cancelled();
const deploymentMode = deploymentModeSelection as ServerConfig["deploymentMode"];
let exposure: ServerConfig["exposure"] = "private";
if (deploymentMode === "authenticated") {
const exposureSelection = await p.select({
message: "Exposure profile",
options: [
{
value: "private",
label: "Private network",
hint: "Private access only, with automatic URL handling",
},
{
value: "public",
label: "Public internet",
hint: "Internet-facing deployment with explicit public URL requirements",
},
],
initialValue: currentServer?.exposure ?? "private",
});
if (p.isCancel(exposureSelection)) cancelled();
exposure = exposureSelection as ServerConfig["exposure"];
}
const defaultHost =
currentServer?.customBindHost ??
currentServer?.host ??
(deploymentMode === "local_trusted" ? "127.0.0.1" : "0.0.0.0");
const host = await p.text({
message: "Bind host",
defaultValue: defaultHost,
placeholder: defaultHost,
validate: (val) => {
if (!val.trim()) return "Host is required";
if (deploymentMode === "local_trusted" && !isLoopbackHost(val.trim())) {
return "Local trusted mode requires a loopback host such as 127.0.0.1";
}
},
});
if (p.isCancel(host)) cancelled();
let allowedHostnames: string[] = [];
if (deploymentMode === "authenticated" && exposure === "private") {
const allowedHostnamesInput = await p.text({
message: "Allowed private hostnames (comma-separated, optional)",
message: "Allowed hostnames (comma-separated, optional)",
defaultValue: (currentServer?.allowedHostnames ?? []).join(", "),
placeholder: "dotta-macbook-pro, your-host.tailnet.ts.net",
validate: (val) => {
@@ -181,11 +105,15 @@ export async function promptServer(opts?: {
},
});
if (p.isCancel(allowedHostnamesInput)) cancelled();
if (p.isCancel(allowedHostnamesInput)) {
p.cancel("Setup cancelled.");
process.exit(0);
}
allowedHostnames = parseHostnameCsv(allowedHostnamesInput);
}
let publicBaseUrl: string | undefined;
const port = Number(portStr) || 3100;
let auth: AuthConfig = { baseUrlMode: "auto", disableSignUp: false };
if (deploymentMode === "authenticated" && exposure === "public") {
const urlInput = await p.text({
message: "Public base URL",
@@ -205,17 +133,32 @@ export async function promptServer(opts?: {
}
},
});
if (p.isCancel(urlInput)) cancelled();
publicBaseUrl = urlInput.trim().replace(/\/+$/, "");
if (p.isCancel(urlInput)) {
p.cancel("Setup cancelled.");
process.exit(0);
}
auth = {
baseUrlMode: "explicit",
disableSignUp: false,
publicBaseUrl: urlInput.trim().replace(/\/+$/, ""),
};
} else if (currentAuth?.baseUrlMode === "explicit" && currentAuth.publicBaseUrl) {
auth = {
baseUrlMode: "explicit",
disableSignUp: false,
publicBaseUrl: currentAuth.publicBaseUrl,
};
}
return buildCustomServerConfig({
deploymentMode,
exposure,
host: host.trim(),
port,
allowedHostnames,
serveUi,
publicBaseUrl,
});
return {
server: {
deploymentMode,
exposure,
host: hostStr.trim(),
port,
allowedHostnames,
serveUi: currentServer?.serveUi ?? true,
},
auth,
};
}

View File

@@ -32,12 +32,10 @@ Mode taxonomy and design intent are documented in `doc/DEPLOYMENT-MODES.md`.
Current CLI behavior:
- `paperclipai onboard` and `paperclipai configure --section server` set deployment mode in config
- server onboarding/configure ask for reachability intent and write `server.bind`
- `paperclipai run --bind <loopback|lan|tailnet>` passes a quickstart bind preset into first-run onboarding when config is missing
- runtime can override mode with `PAPERCLIP_DEPLOYMENT_MODE`
- `paperclipai run` and `paperclipai doctor` still do not expose a direct low-level `--mode` flag
- `paperclipai run` and `paperclipai doctor` do not yet expose a direct `--mode` flag
Canonical behavior is documented in `doc/DEPLOYMENT-MODES.md`.
Target behavior (planned) is documented in `doc/DEPLOYMENT-MODES.md` section 5.
Allow an authenticated/private hostname (for example custom Tailscale DNS):

View File

@@ -17,11 +17,6 @@ Paperclip supports two runtime modes:
This keeps one authenticated auth stack while still separating low-friction private-network defaults from internet-facing hardening requirements.
Paperclip now treats **bind** as a separate concern from auth:
- auth model: `local_trusted` vs `authenticated`, plus `private/public`
- reachability model: `server.bind = loopback | lan | tailnet | custom`
## 2. Canonical Model
| Runtime Mode | Exposure | Human auth | Primary use |
@@ -30,15 +25,6 @@ Paperclip now treats **bind** as a separate concern from auth:
| `authenticated` | `private` | Login required | Private-network access (for example Tailscale/VPN/LAN) |
| `authenticated` | `public` | Login required | Internet-facing/cloud deployment |
## Reachability Model
| Bind | Meaning | Typical use |
|---|---|---|
| `loopback` | Listen on localhost only | default local usage, reverse-proxy deployments |
| `lan` | Listen on all interfaces (`0.0.0.0`) | LAN/VPN/private-network access |
| `tailnet` | Listen on a detected Tailscale IP | Tailscale-only access |
| `custom` | Listen on an explicit host/IP | advanced interface-specific setups |
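The bind-to-host mapping in the table above can be sketched as a small helper. This is illustrative only: `resolveBindHost` and its option names are assumptions, not the actual `buildPresetServerConfig` implementation, and the loopback fallback for `tailnet` mirrors the warning behavior described in onboarding.

```typescript
type BindMode = "loopback" | "lan" | "tailnet" | "custom";

// Hypothetical helper: resolve the listen host for a reachability preset.
// `detectedTailscaleIp` and `explicitHost` are assumed inputs for the sketch.
function resolveBindHost(
  bind: BindMode,
  opts: { detectedTailscaleIp?: string; explicitHost?: string } = {},
): string {
  switch (bind) {
    case "loopback":
      return "127.0.0.1"; // localhost only
    case "lan":
      return "0.0.0.0"; // all interfaces, LAN/VPN/private-network access
    case "tailnet":
      // stay on loopback when no Tailscale address was detected
      return opts.detectedTailscaleIp ?? "127.0.0.1";
    case "custom":
      return opts.explicitHost ?? "127.0.0.1";
  }
}
```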
## 3. Security Policy
## `local_trusted`
@@ -52,14 +38,12 @@ Paperclip now treats **bind** as a separate concern from auth:
- login required
- low-friction URL handling (`auto` base URL mode)
- private-host trust policy required
- bind can be `loopback`, `lan`, `tailnet`, or `custom`
## `authenticated + public`
- login required
- explicit public URL required
- stricter deployment checks and failures in doctor
- recommended bind is `loopback` behind a reverse proxy; direct `lan/custom` is advanced
## 4. Onboarding UX Contract
@@ -71,22 +55,14 @@ pnpm paperclipai onboard
Server prompt behavior:
1. quickstart `--yes` defaults to `server.bind=loopback` and therefore `local_trusted/private`
2. advanced server setup asks reachability first:
- `Trusted local` → `bind=loopback`, `local_trusted/private`
- `Private network` → `bind=lan`, `authenticated/private`
- `Tailnet` → `bind=tailnet`, `authenticated/private`
- `Custom` → manual mode/exposure/host entry
3. raw host entry is only required for the `Custom` path
4. explicit public URL is only required for `authenticated + public`
Examples:
```sh
pnpm paperclipai onboard --yes
pnpm paperclipai onboard --yes --bind lan
pnpm paperclipai run --bind tailnet
```
1. ask mode, default `local_trusted`
2. option copy:
- `local_trusted`: "Easiest for local setup (no login, localhost-only)"
- `authenticated`: "Login required; use for private network or public hosting"
3. if `authenticated`, ask exposure:
- `private`: "Private network access (for example Tailscale), lower setup friction"
- `public`: "Internet-facing deployment, stricter security requirements"
4. ask explicit public URL only for `authenticated + public`
`configure --section server` follows the same interactive behavior.

View File

@@ -54,54 +54,18 @@ pnpm dev:stop
Tailscale/private-auth dev mode:
```sh
pnpm dev --bind lan
```
This runs dev as `authenticated/private` with a private-network bind preset.
For Tailscale-only reachability on a detected tailnet address:
```sh
pnpm dev --bind tailnet
```
Legacy aliases still map to the old broad private-network behavior:
```sh
pnpm dev --tailscale-auth
pnpm dev --authenticated-private
```
This runs dev as `authenticated/private` and binds the server to `0.0.0.0` for private-network access.
Allow additional private hostnames (for example custom Tailscale hostnames):
```sh
pnpm paperclipai allowed-hostname dotta-macbook-pro
```
## Test Commands
Use the cheap local default unless you are specifically working on browser flows:
```sh
pnpm test
```
`pnpm test` runs the Vitest suite only. For interactive Vitest watch mode use:
```sh
pnpm test:watch
```
Browser suites stay separate:
```sh
pnpm test:e2e
pnpm test:release-smoke
```
These browser suites are intended for targeted local verification and CI, not the default agent/human test command.
## One-Command Local Run
For a first-time local install, you can bootstrap and run in one command:
@@ -211,9 +175,7 @@ Seed modes:
After `worktree init`, both the server and the CLI auto-load the repo-local `.paperclip/.env` when run inside that worktree, so normal commands like `pnpm dev`, `paperclipai doctor`, and `paperclipai db:backup` stay scoped to the worktree instance.
`pnpm dev` now fails fast in a linked git worktree when `.paperclip/.env` is missing, instead of silently booting against the default instance/port. If that happens, run `paperclipai worktree init` in the worktree first.
Provisioned git worktrees also pause seeded routines that still have enabled schedule triggers in the isolated worktree database by default. This prevents copied daily/cron routines from firing unexpectedly inside the new workspace instance during development without disabling webhook/API-only routines.
Provisioned git worktrees also pause all seeded routines in the isolated worktree database by default. This prevents copied daily/cron routines from firing unexpectedly inside the new workspace instance during development.
That repo-local env also sets:
@@ -262,7 +224,7 @@ paperclipai worktree init --force
Repair an already-created repo-managed worktree and reseed its isolated instance from the main default install:
```sh
cd /path/to/paperclip/.paperclip/worktrees/PAP-884-ai-commits-component
cd ~/.paperclip/worktrees/PAP-884-ai-commits-component
pnpm paperclipai worktree init --force --seed-mode minimal \
--name PAP-884-ai-commits-component \
--from-config ~/.paperclip/instances/default/config.json
@@ -270,33 +232,6 @@ pnpm paperclipai worktree init --force --seed-mode minimal \
That rewrites the worktree-local `.paperclip/config.json` + `.paperclip/.env`, recreates the isolated instance under `~/.paperclip-worktrees/instances/<worktree-id>/`, and preserves the git worktree contents themselves.
For an already-created worktree where you want the CLI to decide whether to rebuild missing worktree metadata or just reseed the isolated DB, use `worktree repair`.
**`pnpm paperclipai worktree repair [options]`** — Repair the current linked worktree by default, or create/repair a named linked worktree under `.paperclip/worktrees/` when `--branch` is provided. The command never targets the primary checkout unless you explicitly pass `--branch`.
| Option | Description |
|---|---|
| `--branch <name>` | Existing branch/worktree selector to repair, or a branch name to create under `.paperclip/worktrees` |
| `--home <path>` | Home root for worktree instances (default: `~/.paperclip-worktrees`) |
| `--from-config <path>` | Source config.json to seed from |
| `--from-data-dir <path>` | Source `PAPERCLIP_HOME` used when deriving the source config |
| `--from-instance <id>` | Source instance id when deriving the source config (default: `default`) |
| `--seed-mode <mode>` | Seed profile: `minimal` or `full` (default: `minimal`) |
| `--no-seed` | Repair metadata only when bootstrapping a missing worktree config |
| `--allow-live-target` | Override the guard that requires the target worktree DB to be stopped first |
Examples:
```sh
# From inside a linked worktree, rebuild missing .paperclip metadata and reseed it from the default instance.
cd /path/to/paperclip/.paperclip/worktrees/PAP-1132-assistant-ui-pap-1131-make-issues-comments-be-like-a-chat
pnpm paperclipai worktree repair
# From the primary checkout, create or repair a linked worktree for a branch under .paperclip/worktrees/.
cd /path/to/paperclip
pnpm paperclipai worktree repair --branch PAP-1132-assistant-ui-pap-1131-make-issues-comments-be-like-a-chat
```
For an already-created worktree where you want to keep the existing repo-local config/env and only overwrite the isolated database, use `worktree reseed` instead. Stop the target worktree's Paperclip server first so the command can replace the DB safely.
**`pnpm paperclipai worktree reseed [options]`** — Re-seed an existing worktree-local instance from another Paperclip instance or worktree while preserving the target worktree's current config, ports, and instance identity.

View File

@@ -3,7 +3,7 @@ Use this exact checklist.
1. Start Paperclip in auth mode.
```bash
cd <paperclip-repo-root>
pnpm dev --bind lan
pnpm dev --tailscale-auth
```
Then verify:
```bash

View File

@@ -395,8 +395,6 @@ Side effects:
- entering `done` sets `completed_at`
- entering `cancelled` sets `cancelled_at`
Detailed ownership, execution, blocker, and crash-recovery semantics are documented in `doc/execution-semantics.md`.
## 8.3 Approval Status
- `pending -> approved | rejected | cancelled`

View File

@@ -1,252 +0,0 @@
# Execution Semantics
Status: Current implementation guide
Date: 2026-04-13
Audience: Product and engineering
This document explains how Paperclip interprets issue assignment, issue status, execution runs, wakeups, parent/sub-issue structure, and blocker relationships.
`doc/SPEC-implementation.md` remains the V1 contract. This document is the detailed execution model behind that contract.
## 1. Core Model
Paperclip separates four concepts that are easy to blur together:
1. structure: parent/sub-issue relationships
2. dependency: blocker relationships
3. ownership: who is responsible for the issue now
4. execution: whether the control plane currently has a live path to move the issue forward
The system works best when those are kept separate.
## 2. Assignee Semantics
An issue has at most one assignee.
- `assigneeAgentId` means the issue is owned by an agent
- `assigneeUserId` means the issue is owned by a human board user
- both cannot be set at the same time
This is a hard invariant. Paperclip is single-assignee by design.
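The invariant can be stated as a one-line check. The field names match the doc, but this guard function itself is a sketch for illustration, not core Paperclip code.

```typescript
// Issue assignment fields as described above: at most one may be set.
interface IssueAssignment {
  assigneeAgentId?: string;
  assigneeUserId?: string;
}

// True when the single-assignee invariant is violated.
function violatesSingleAssignee(issue: IssueAssignment): boolean {
  return Boolean(issue.assigneeAgentId && issue.assigneeUserId);
}
```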
## 3. Status Semantics
Paperclip issue statuses are not just UI labels. They imply different expectations about ownership and execution.
### `backlog`
The issue is not ready for active work.
- no execution expectation
- no pickup expectation
- safe resting state for future work
### `todo`
The issue is actionable but not actively claimed.
- it may be assigned or unassigned
- no checkout/execution lock is required yet
- for agent-assigned work, Paperclip may still need a wake path to ensure the assignee actually sees it
### `in_progress`
The issue is actively owned work.
- requires an assignee
- for agent-owned issues, this is a strict execution-backed state
- for user-owned issues, this is a human ownership state and is not backed by heartbeat execution
For agent-owned issues, `in_progress` should not be allowed to become a silent dead state.
### `blocked`
The issue cannot proceed until something external changes.
This is the right state for:
- waiting on another issue
- waiting on a human decision
- waiting on an external dependency or system
- work that automatic recovery could not safely continue
### `in_review`
Execution work is paused because the next move belongs to a reviewer or approver, not the current executor.
### `done`
The work is complete and terminal.
### `cancelled`
The work will not continue and is terminal.
## 4. Agent-Owned vs User-Owned Execution
The execution model differs depending on assignee type.
### Agent-owned issues
Agent-owned issues are part of the control plane's execution loop.
- Paperclip can wake the assignee
- Paperclip can track runs linked to the issue
- Paperclip can recover some lost execution state after crashes/restarts
### User-owned issues
User-owned issues are not executed by the heartbeat scheduler.
- Paperclip can track the ownership and status
- Paperclip cannot rely on heartbeat/run semantics to keep them moving
- stranded-work reconciliation does not apply to them
This is why `in_progress` can be strict for agents without forcing the same runtime rules onto human-held work.
## 5. Checkout and Active Execution
Checkout is the bridge from issue ownership to active agent execution.
- checkout is required to move an issue into agent-owned `in_progress`
- `checkoutRunId` represents issue-ownership lock for the current agent run
- `executionRunId` represents the currently active execution path for the issue
These are related but not identical:
- `checkoutRunId` answers who currently owns execution rights for the issue
- `executionRunId` answers which run is actually live right now
Paperclip already clears stale execution locks and can adopt some stale checkout locks when the original run is gone.
## 6. Parent/Sub-Issue vs Blockers
Paperclip uses two different relationships for different jobs.
### Parent/Sub-Issue (`parentId`)
This is structural.
Use it for:
- work breakdown
- rollup context
- explaining why a child issue exists
- waking the parent assignee when all direct children become terminal
Do not treat `parentId` as execution dependency by itself.
### Blockers (`blockedByIssueIds`)
This is dependency semantics.
Use it for:
- "this issue cannot continue until that issue changes state"
- explicit waiting relationships
- automatic wakeups when all blockers resolve
If a parent is truly waiting on a child, model that with blockers. Do not rely on the parent/child relationship alone.
## 7. Consistent Execution Path Rules
For agent-assigned, non-terminal, actionable issues, Paperclip should not leave work in a state where nobody is working it and nothing will wake it.
The relevant execution path depends on status.
### Agent-assigned `todo`
This is dispatch state: ready to start, not yet actively claimed.
A healthy dispatch state means at least one of these is true:
- the issue already has a queued/running wake path
- the issue is intentionally resting in `todo` after a successful agent heartbeat, not after an interrupted dispatch
- the issue has been explicitly surfaced as stranded
### Agent-assigned `in_progress`
This is active-work state.
A healthy active-work state means at least one of these is true:
- there is an active run for the issue
- there is already a queued continuation wake
- the issue has been explicitly surfaced as stranded
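The healthy-state rules for both statuses can be folded into one predicate. A sketch, assuming illustrative field names; the real reconciliation logic lives in the control plane, not in a pure function like this.

```typescript
type AgentIssueStatus = "todo" | "in_progress";

// Assumed shape describing the issue's current execution path.
interface ExecutionPathState {
  hasActiveRun: boolean;          // a live run exists for the issue
  hasQueuedWake: boolean;         // a queued wake/continuation exists
  surfacedAsStranded: boolean;    // already explicitly surfaced
  restingAfterHeartbeat: boolean; // only meaningful for `todo`
}

// An agent-assigned issue is stranded when none of the healthy conditions hold.
function isStranded(status: AgentIssueStatus, s: ExecutionPathState): boolean {
  if (s.hasQueuedWake || s.surfacedAsStranded) return false;
  if (status === "todo") return !s.restingAfterHeartbeat;
  return !s.hasActiveRun; // `in_progress` additionally accepts a live run
}
```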
## 8. Crash and Restart Recovery
Paperclip now treats crash/restart recovery as a stranded-assigned-work problem, not just a stranded-run problem.
There are two distinct failure modes.
### 8.1 Stranded assigned `todo`
Example:
- issue is assigned to an agent
- status is `todo`
- the original wake/run died during or after dispatch
- after restart there is no queued wake and nothing picks the issue back up
Recovery rule:
- if the latest issue-linked run failed/timed out/cancelled and no live execution path remains, Paperclip queues one automatic assignment recovery wake
- if that recovery wake also finishes and the issue is still stranded, Paperclip moves the issue to `blocked` and posts a visible comment
This is a dispatch recovery, not a continuation recovery.
### 8.2 Stranded assigned `in_progress`
Example:
- issue is assigned to an agent
- status is `in_progress`
- the live run disappeared
- after restart there is no active run and no queued continuation
Recovery rule:
- Paperclip queues one automatic continuation wake
- if that continuation wake also finishes and the issue is still stranded, Paperclip moves the issue to `blocked` and posts a visible comment
This is an active-work continuity recovery.
## 9. Startup and Periodic Reconciliation
Startup recovery and periodic recovery are different from normal wakeup delivery.
On startup and on the periodic recovery loop, Paperclip now does three things in sequence:
1. reap orphaned `running` runs
2. resume persisted `queued` runs
3. reconcile stranded assigned work
That last step is what closes the gap where issue state survives a crash but the wake/run path does not.
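The three-step sequence above can be sketched as a single recovery pass. The method names on the store are assumptions for illustration; only the ordering is taken from the doc.

```typescript
// Assumed store interface exposing the three recovery operations.
interface RecoveryStore {
  reapOrphanedRunningRuns(): void;
  resumeQueuedRuns(): void;
  reconcileStrandedAssignedWork(): void;
}

// Run the startup/periodic recovery steps in the documented order,
// returning the labels of the steps that completed.
function runStartupRecovery(store: RecoveryStore): string[] {
  const completed: string[] = [];
  store.reapOrphanedRunningRuns();
  completed.push("reap-orphaned-running");
  store.resumeQueuedRuns();
  completed.push("resume-queued");
  store.reconcileStrandedAssignedWork();
  completed.push("reconcile-stranded");
  return completed;
}
```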
## 10. What This Does Not Mean
These semantics do not change V1 into an auto-reassignment system.
Paperclip still does not:
- automatically reassign work to a different agent
- infer dependency semantics from `parentId` alone
- treat human-held work as heartbeat-managed execution
The recovery model is intentionally conservative:
- preserve ownership
- retry once when the control plane lost execution continuity
- escalate visibly when the system cannot safely keep going
## 11. Practical Interpretation
For a board operator, the intended meaning is:
- agent-owned `in_progress` should mean "this is live work or clearly surfaced as a problem"
- agent-owned `todo` should not stay assigned forever after a crash with no remaining wake path
- parent/sub-issue explains structure
- blockers explain waiting
That is the execution contract Paperclip should present to operators.

View File

@@ -22,7 +22,6 @@ The question is not "which memory project wins?" The question is "what is the sm
### Hosted memory APIs
- `mem0`
- `AWS Bedrock AgentCore Memory`
- `supermemory`
- `Memori`
@@ -50,7 +49,6 @@ These emphasize local persistence, inspectability, and low operational overhead.
|---|---|---|---|---|
| [nuggets](https://github.com/NeoVertex1/nuggets) | local memory engine + messaging gateway | topic-scoped HRR memory with `remember`, `recall`, `forget`, fact promotion into `MEMORY.md` | good example of lightweight local memory and automatic promotion | very specific architecture; not a general multi-tenant service |
| [mem0](https://github.com/mem0ai/mem0) | hosted + OSS SDK | `add`, `search`, `getAll`, `get`, `update`, `delete`, `deleteAll`; entity partitioning via `user_id`, `agent_id`, `run_id`, `app_id` | closest to a clean provider API with identities and metadata filters | provider owns extraction heavily; Paperclip should not assume every backend behaves like mem0 |
| [AWS Bedrock AgentCore Memory](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/memory.html) | AWS-managed memory service | explicit short-term and long-term memories, actor/session/event APIs, memory strategies, namespace templates, optional self-managed extraction pipeline | strong example of provider-managed memory with clear scoped ids, retention controls, and standalone API access outside a single agent framework | AWS-hosted and IAM-centric; Paperclip would still need its own company/run/comment provenance, cost rollups, and likely a plugin wrapper instead of baking AWS semantics into core |
| [MemOS](https://github.com/MemTensor/MemOS) | memory OS / framework | unified add-retrieve-edit-delete, memory cubes, multimodal memory, tool memory, async scheduler, feedback/correction | strong source for optional capabilities beyond plain search | much broader than the minimal contract Paperclip should standardize first |
| [supermemory](https://github.com/supermemoryai/supermemory) | hosted memory + context API | `add`, `profile`, `search.memories`, `search.documents`, document upload, settings; automatic profile building and forgetting | strong example of "context bundle" rather than raw search results | heavily productized around its own ontology and hosted flow |
| [memU](https://github.com/NevaMind-AI/memU) | proactive agent memory framework | file-system metaphor, proactive loop, intent prediction, always-on companion model | good source for when memory should trigger agent behavior, not just retrieval | proactive assistant framing is broader than Paperclip's task-centric control plane |
@@ -79,7 +77,6 @@ These differences are exactly why Paperclip needs a layered contract instead of
### 1. Who owns extraction?
- `mem0`, `supermemory`, and `Memori` expect the provider to infer memories from conversations.
- `AWS Bedrock AgentCore Memory` supports both provider-managed extraction and self-managed pipelines where the host writes curated long-term memory records.
- `memsearch` expects the host to decide what markdown to write, then indexes it.
- `MemOS`, `memU`, `EverMemOS`, and `OpenViking` sit somewhere in between and often expose richer memory construction pipelines.
@@ -107,7 +104,6 @@ Paperclip should make plain search the minimum contract and richer outputs optio
### 4. Is memory synchronous or asynchronous?
- local tools often work synchronously in-process.
- `AWS Bedrock AgentCore Memory` is synchronous at the API edge, but its long-term memory path includes background extraction/indexing behavior and retention policies managed by the provider.
- larger systems add schedulers, background indexing, compaction, or sync jobs.
Paperclip needs both direct request/response operations and background maintenance hooks.

View File

@@ -7,10 +7,10 @@ Define a Paperclip memory service and surface API that can sit above multiple me
- company scoping
- auditability
- provenance back to Paperclip work objects
- budget and cost visibility
- budget / cost visibility
- plugin-first extensibility
This plan is based on the external landscape summarized in `doc/memory-landscape.md`, the AWS AgentCore comparison captured in [PAP-1274](/PAP/issues/PAP-1274), and the current Paperclip architecture in:
This plan is based on the external landscape summarized in `doc/memory-landscape.md` and on the current Paperclip architecture in:
- `doc/SPEC-implementation.md`
- `doc/plugins/PLUGIN_SPEC.md`
@@ -19,26 +19,23 @@ This plan is based on the external landscape summarized in `doc/memory-landscape
## Recommendation In One Sentence
Paperclip should add a company-scoped memory control plane with company default plus agent override resolution, shared hook delivery, and full operation attribution, while leaving extraction and storage semantics to built-ins and plugins.
Paperclip should not embed one opinionated memory engine into core. It should add a company-scoped memory control plane with a small normalized adapter contract, then let built-ins and plugins implement the provider-specific behavior.
## Product Decisions
### 1. Memory resolution is company default plus agent override
### 1. Memory is company-scoped by default
Every memory binding belongs to exactly one company.
Resolution order in V1:
That binding can then be:
- company default binding
- optional per-agent override
There is no per-project override in V1.
Project context can still appear in scope and provenance so providers can use it for retrieval and partitioning, but projects do not participate in binding selection.
- the company default
- an agent override
- a project override later if we need it
No cross-company memory sharing in the initial design.
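The V1 resolution order (agent override first, then company default, no per-project override) could look roughly like this. The record shapes are illustrative, not the actual binding schema.

```typescript
// Assumed company-scoped binding table: one default key plus per-agent overrides.
interface MemoryBindings {
  companyDefault?: string;                // binding key, e.g. "company-memory"
  agentOverrides: Record<string, string>; // agentId -> binding key
}

// Resolve the active binding key for an agent within a company.
function resolveMemoryBinding(
  bindings: MemoryBindings,
  agentId?: string,
): string | undefined {
  if (agentId && bindings.agentOverrides[agentId]) {
    return bindings.agentOverrides[agentId];
  }
  return bindings.companyDefault; // no per-project override in V1
}
```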
### 2. Providers are selected by stable binding key
### 2. Providers are selected by key
Each configured memory provider gets a stable key inside a company, for example:
@@ -47,53 +44,36 @@ Each configured memory provider gets a stable key inside a company, for example:
- `local-markdown`
- `research-kb`
Agents, tools, and background hooks resolve the active provider by key, not by hard-coded vendor logic.
Agents and services resolve the active provider by key, not by hard-coded vendor logic.
### 3. Plugins are the primary provider path
Built-ins are useful for a zero-config local path, but most providers should arrive through the existing Paperclip plugin runtime.
That keeps the core small and matches the broader Paperclip direction that specialized knowledge systems live at the edges.
That keeps the core small and matches the current direction that optional knowledge-like systems live at the edges.
### 4. Paperclip owns routing, provenance, and policy
### 4. Paperclip owns routing, provenance, and accounting
Providers should not decide how Paperclip entities map to governance.
Paperclip core should own:
- binding resolution
- who is allowed to call a memory operation
- which company, agent, issue, project, run, and subject scope is active
- what source object the operation belongs to
- how usage and costs are attributed
- how operators inspect what happened
- which company / agent / project scope is active
- what issue / run / comment / document the operation belongs to
- how usage gets recorded
### 5. Paperclip exposes shared hooks, providers own extraction
Paperclip should emit a common set of memory hooks that built-ins, third-party adapters, and plugins can all use.
Those hooks should pass structured Paperclip source objects plus normalized metadata. The provider then decides how to extract from those objects.
Paperclip should not force one extraction pipeline or one canonical "memory text" transform before the provider sees the input.
### 6. Automatic memory should start narrow, but the hook surface should be general
Automatic capture is useful, but broad silent capture is dangerous.
Initial built-in automatic hooks should be:
- pre-run hydrate for agent context recall
- post-run capture from agent runs
- optional issue comment capture
- optional issue document capture
The hook registry itself should be general enough that other providers can subscribe to the same events without core changes.
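A general hook registry along those lines could be as small as the sketch below. The class and method names are illustrative, not existing Paperclip APIs:

```typescript
// Hypothetical sketch of a general memory hook registry: providers
// subscribe to hook kinds, core emits without knowing who listens.
type HookKind =
  | "pre_run_hydrate"
  | "post_run_capture"
  | "issue_comment_capture"
  | "issue_document_capture"
  | "manual_capture";

type HookListener = (payload: unknown) => void;

class HookRegistry {
  private listeners = new Map<HookKind, HookListener[]>();

  subscribe(kind: HookKind, fn: HookListener): void {
    const list = this.listeners.get(kind) ?? [];
    list.push(fn);
    this.listeners.set(kind, list);
  }

  emit(kind: HookKind, payload: unknown): number {
    const list = this.listeners.get(kind) ?? [];
    for (const fn of list) fn(payload);
    return list.length; // how many subscribers received the event
  }
}
```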
### 7. No approval gate for binding changes in the open-source product
For the open-source version, changing memory bindings should not require approvals.
Paperclip should still log those changes in activity and preserve full auditability. Approval-gated memory governance can remain an enterprise or future policy layer.
Everything else should start explicit.
## Proposed Concepts
A built-in or plugin-supplied implementation that stores and retrieves memory.
Examples:
- local markdown plus semantic index
- mem0 adapter
- supermemory adapter
- MemOS adapter
A company-scoped configuration record that points to a provider and carries provider config.
This is the object selected by key.
### Memory binding target
A mapping from a Paperclip target to a binding.
V1 targets:
- `company`
- `agent`
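Resolution across those targets can be sketched as follows, assuming an agent-level override wins over the company default (field names are hypothetical):

```typescript
// Hypothetical sketch: resolve the effective binding key for an agent,
// preferring an agent-level override over the company default.
type TargetType = "company" | "agent";

interface MemoryBindingTarget {
  targetType: TargetType;
  targetId: string;
  bindingKey: string;
}

function resolveBindingKey(
  targets: MemoryBindingTarget[],
  companyId: string,
  agentId: string
): string | null {
  const agentOverride = targets.find(
    (t) => t.targetType === "agent" && t.targetId === agentId
  );
  if (agentOverride) return agentOverride.bindingKey;
  const companyDefault = targets.find(
    (t) => t.targetType === "company" && t.targetId === companyId
  );
  return companyDefault ? companyDefault.bindingKey : null;
}
```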
### Memory scope
The normalized Paperclip scope passed into a provider request.
At minimum:
- optional `projectId`
- optional `issueId`
- optional `runId`
- optional `subjectId` for external or user identity
- optional `sessionKey` for providers that organize memory around sessions
- optional `namespace` for providers that need an explicit partition hint
### Memory source reference
Supported source kinds should include:
- `manual_note`
- `external_document`
### Memory hook
A normalized trigger emitted by Paperclip when something memory-relevant happens.
Initial hook kinds:
- `pre_run_hydrate`
- `post_run_capture`
- `issue_comment_capture`
- `issue_document_capture`
- `manual_capture`
### Memory operation
A normalized capture, record-write, query, browse, get, correction, or delete action performed through Paperclip.
Paperclip should log every memory operation whether the provider is local, plugin-backed, or external.
## Required Adapter Contract
The required core should be small enough to fit `memsearch`, `mem0`, `Memori`, `MemOS`, or `OpenViking`, but strong enough to satisfy Paperclip's attribution and inspectability requirements.
```ts
export interface MemoryAdapterCapabilities {
profile?: boolean;
browse?: boolean;
correction?: boolean;
asyncIngestion?: boolean;
multimodal?: boolean;
providerManagedExtraction?: boolean;
asyncExtraction?: boolean;
providerNativeBrowse?: boolean;
}
export interface MemoryScope {
issueId?: string;
runId?: string;
subjectId?: string;
sessionKey?: string;
namespace?: string;
}
export interface MemorySourceRef {
externalRef?: string;
}
export interface MemoryHookContext {
hookKind:
| "pre_run_hydrate"
| "post_run_capture"
| "issue_comment_capture"
| "issue_document_capture"
| "manual_capture";
hookId: string;
triggeredAt: string;
actorAgentId?: string;
heartbeatRunId?: string;
}
export interface MemorySourcePayload {
text?: string;
mimeType?: string;
metadata?: Record<string, unknown>;
object?: Record<string, unknown>;
}
export interface MemoryUsage {
provider: string;
biller?: string;
model?: string;
billingType?: "metered_api" | "subscription_included" | "subscription_overage" | "unknown";
attributionMode?: "billed_directly" | "included_in_run" | "external_invoice" | "untracked";
inputTokens?: number;
cachedInputTokens?: number;
outputTokens?: number;
embeddingTokens?: number;
costCents?: number;
details?: Record<string, unknown>;
}
export interface MemoryRecordHandle {
providerKey: string;
providerRecordId: string;
}
export interface MemoryCaptureRequest {
bindingKey: string;
scope: MemoryScope;
source: MemorySourceRef;
payload: MemorySourcePayload;
hook?: MemoryHookContext;
mode?: "capture_residue" | "capture_record";
}
export interface MemoryRecordWriteRequest {
bindingKey: string;
scope: MemoryScope;
source?: MemorySourceRef;
records: Array<{
text: string;
summary?: string;
metadata?: Record<string, unknown>;
}>;
}
export interface MemoryQueryRequest {
metadataFilter?: Record<string, unknown>;
}
export interface MemoryListRequest {
bindingKey: string;
scope: MemoryScope;
cursor?: string;
limit?: number;
metadataFilter?: Record<string, unknown>;
}
export interface MemorySnippet {
handle: MemoryRecordHandle;
text: string;
}

export interface MemoryContextBundle {
usage?: MemoryUsage[];
}
export interface MemoryListPage {
items: MemorySnippet[];
nextCursor?: string;
usage?: MemoryUsage[];
}
export interface MemoryExtractionJob {
providerJobId: string;
status: "queued" | "running" | "succeeded" | "failed" | "cancelled";
hookKind?: MemoryHookContext["hookKind"];
source?: MemorySourceRef;
error?: string;
submittedAt?: string;
startedAt?: string;
finishedAt?: string;
}
export interface MemoryAdapter {
key: string;
capabilities: MemoryAdapterCapabilities;
capture(req: MemoryCaptureRequest): Promise<{
records?: MemoryRecordHandle[];
jobs?: MemoryExtractionJob[];
usage?: MemoryUsage[];
}>;
upsertRecords(req: MemoryRecordWriteRequest): Promise<{
records?: MemoryRecordHandle[];
usage?: MemoryUsage[];
}>;
query(req: MemoryQueryRequest): Promise<MemoryContextBundle>;
list(req: MemoryListRequest): Promise<MemoryListPage>;
get(handle: MemoryRecordHandle, scope: MemoryScope): Promise<MemorySnippet | null>;
forget(handles: MemoryRecordHandle[], scope: MemoryScope): Promise<{ usage?: MemoryUsage[] }>;
}
```
This contract intentionally does not force a provider to expose its internal graph, file tree, or ontology. It does require enough structure for Paperclip to browse, attribute, and audit what happened.
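To make the shape concrete, here is a deliberately tiny in-memory adapter covering only a slice of the surface (write, query, forget). The types are simplified from the contract above, and semantic ranking is faked with substring matching; treat it as an illustration, not a reference implementation:

```typescript
// Illustrative only: a minimal in-memory "provider" showing the general
// shape of an adapter. Real adapters implement the full contract.
interface Handle { providerKey: string; providerRecordId: string }
interface Snippet { handle: Handle; text: string }

class InMemoryAdapter {
  readonly key = "in-memory";
  private records = new Map<string, string>();
  private nextId = 1;

  write(texts: string[]): { records: Handle[] } {
    const handles = texts.map((text) => {
      const id = String(this.nextId++);
      this.records.set(id, text);
      return { providerKey: this.key, providerRecordId: id };
    });
    return { records: handles };
  }

  query(q: string, topK = 5): Snippet[] {
    // Naive substring match stands in for semantic retrieval.
    const hits: Snippet[] = [];
    for (const [id, text] of this.records) {
      if (text.toLowerCase().includes(q.toLowerCase())) {
        hits.push({ handle: { providerKey: this.key, providerRecordId: id }, text });
      }
      if (hits.length >= topK) break;
    }
    return hits;
  }

  forget(handles: Handle[]): void {
    for (const h of handles) this.records.delete(h.providerRecordId);
  }
}
```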
## Optional Adapter Surfaces
These should be capability-gated, not required:
- `browse(scope, filters)` for file-system / graph / timeline inspection
- `correct(handle, patch)` for natural-language correction flows
- `profile(scope)` when the provider can synthesize stable preferences or summaries
- `listExtractionJobs(scope, cursor)` when async extraction needs richer operator visibility
- `retryExtractionJob(jobId)` when a provider supports re-drive
- `sync(source)` for connectors or background ingestion
- `explain(queryResult)` for providers that can expose retrieval traces
- provider-native browse or graph surfaces exposed through plugin UI
## Lessons From AWS AgentCore Memory API
AWS AgentCore Memory is a useful check on whether this plan is too abstract or missing important operational surfaces.
The broad direction still looks right:
- AWS splits memory into a control plane (`CreateMemory`, `UpdateMemory`, `ListMemories`) and a data plane (`CreateEvent`, `RetrieveMemoryRecords`, `GetMemoryRecord`, `ListMemoryRecords`)
- AWS separates raw interaction capture from curated long-term memory records
- AWS supports both provider-managed extraction and self-managed pipelines
- AWS treats browse and list operations as first-class APIs, not ad hoc debugging helpers
- AWS exposes extraction jobs instead of hiding asynchronous maintenance completely
That lines up with the Paperclip plan at a high level: provider configuration, scoped writes, scoped retrieval, provider-managed extraction as a capability, and a browse and inspect surface.
The concrete changes Paperclip should take from AWS are:
### 1. Keep config APIs separate from runtime traffic
The rollout should preserve a clean separation between:
- control-plane APIs for binding CRUD, defaults, overrides, and capability metadata
- runtime APIs and tools for capture, record writes, query, list, get, forget, and extraction status
This keeps governance changes distinct from high-volume memory traffic.
### 2. Distinguish capture from curated record writes
AWS does not flatten everything into one write primitive. It distinguishes captured events from durable memory records.
Paperclip should do the same:
- `capture(...)` for raw run, comment, document, or activity residue
- `upsertRecords(...)` for curated durable facts and notes
That is a better fit for provider-managed extraction and for manual curation flows.
### 3. Make list and browse first-class
AWS exposes list and retrieve surfaces directly. Paperclip should not make browse optional at the portable layer.
The minimum portable surface should include:
- `query`
- `list`
- `get`
Provider-native graph or file browsing can remain optional beyond that.
### 4. Add pagination and cursors for operator inspection
AWS consistently uses pagination on browse-heavy APIs.
Paperclip should add cursor-based pagination to:
- record listing
- extraction job listing
- memory operation explorer APIs
Prompt hydration can continue to use `topK`, but operator surfaces need cursors.
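A minimal cursor-paged list helper is sketched below; the offset-encoded cursor is an assumption for illustration, since real cursors would more likely be opaque provider tokens:

```typescript
// Hedged sketch: cursor-based pagination for operator-facing list APIs.
// The cursor here is just a stringified offset, purely for illustration.
interface Page<T> { items: T[]; nextCursor?: string }

function listPage<T>(
  all: T[],
  cursor: string | undefined,
  limit: number
): Page<T> {
  const start = cursor ? parseInt(cursor, 10) : 0;
  const items = all.slice(start, start + limit);
  const next = start + limit;
  return next < all.length ? { items, nextCursor: String(next) } : { items };
}
```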
### 5. Add explicit session and namespace hints
AWS uses `actorId`, `sessionId`, `namespace`, and `memoryStrategyId` heavily.
Paperclip should keep its own control-plane-centric model, but the adapter contract needs obvious places to map those concepts:
- `sessionKey`
- `namespace`
The provider adapter can map them to AWS or other vendor-specific identifiers without leaking those identifiers into core.
### 6. Treat asynchronous extraction as a real operational surface
AWS exposes extraction jobs explicitly. Paperclip should too.
Operators should be able to see:
- pending extraction work
- failed extraction work
- which hook or source caused the work
- whether a retry is available
### 7. Keep Paperclip provenance primary
Paperclip should continue to center:
- `companyId`
- `agentId`
- `projectId`
- `issueId`
- `runId`
- issue comments, documents, and activity as sources
The lesson from AWS is to support clean mapping into provider-specific models, not to let provider identifiers take over the core product model.
## What Paperclip Should Persist
Paperclip should not mirror the full provider memory corpus into Postgres unless required.
Paperclip core should persist:
- memory bindings
- company default and agent override resolution targets
- provider keys and capability metadata
- normalized memory operation logs
- source references back to issue comments, documents, runs, and activity
- provider record handles returned by operations when available
- hook delivery records and extraction job state
- usage and cost attribution
For external providers, the actual memory payload can remain in the provider.
## Hook Model
### Shared hook surface
Paperclip should expose one shared hook system for memory.
That same system must be available to:
- built-in memory providers
- plugin-based memory providers
- third-party adapter integrations that want to use memory hooks
### What a hook delivers
Each hook delivery should include:
- resolved binding key
- normalized `MemoryScope`
- `MemorySourceRef`
- structured source payload
- hook metadata such as hook kind, trigger time, and related run id
The payload should include structured objects where possible so the provider can decide how to extract and chunk.
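One hook delivery might look like the following literal. All identifiers here are hypothetical placeholders, not real Paperclip IDs:

```typescript
// Example shape of a single hook delivery (all IDs are hypothetical).
const delivery = {
  bindingKey: "research-kb",
  scope: { companyId: "co_1", agentId: "ag_1", issueId: "is_42", runId: "run_7" },
  source: { kind: "heartbeat_run", id: "run_7" },
  payload: { text: "Run summary…", metadata: { outcome: "success" } },
  hook: {
    hookKind: "post_run_capture",
    hookId: "hk_1",
    triggeredAt: new Date().toISOString(),
  },
};
```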
### Initial automatic hooks
These should be low-risk and easy to reason about:
1. `pre_run_hydrate`
Before an agent run starts, Paperclip may call `query(... intent = "agent_preamble")` using the active binding.
2. `post_run_capture`
After a run finishes, Paperclip may call `capture(...)` with structured run output, excerpts, and provenance.
3. `issue_comment_capture`
When enabled on the binding, Paperclip may call `capture(...)` for selected issue comments.
4. `issue_document_capture`
When enabled on the binding, Paperclip may call `capture(...)` for selected issue documents.
### Explicit hooks
These should be tool-driven or UI-driven first:
- `memory.search`
- `memory.note`
- `memory.forget`
- `memory.correct`
- memory record list and get
- extraction-job inspection
- `memory.browse`
### Not automatic in the first version
The initial browse surface should support:
- active binding by company and agent
- recent memory operations
- recent write and capture sources
- record list and record detail with source backlinks
- query results with source backlinks
- extraction job status
- filters by agent, issue, project, run, source kind, and date
- provider usage, cost, and latency summaries
When a provider supports richer browsing, the plugin can add deeper views through the existing plugin UI surfaces.
## Cost And Evaluation
Paperclip should treat memory accounting as two related but distinct concerns:
### 1. `memory_operations` is the authoritative audit trail
Every memory action should create a normalized operation record that captures:
- binding
- scope
- source provenance
- operation type
- success or failure
- memory inference tokens
- embedding tokens
- external provider cost
- latency
- usage details reported by the provider
- attribution mode
- related run, issue, and agent when available
This is where operators answer "what memory work happened and why?"
### 2. `cost_events` remains the canonical spend ledger for billable metered usage
The current `cost_events` model is already the canonical cost ledger for token and model spend, and `agent_runtime_state` plus `heartbeat_runs.usageJson` already roll up and summarize run usage.
The recommendation is:
- if a memory operation runs inside a normal Paperclip agent heartbeat and the model usage is already counted on that run, do not create a duplicate `cost_event`
- instead, store the memory operation with `attributionMode = "included_in_run"` and link it to the related `heartbeatRunId`
- if a memory provider makes a direct metered model call outside the agent run accounting path, the provider must report usage and Paperclip should create a `cost_event`
- that direct `cost_event` should still link back to the memory operation, agent, company, and issue or run context when possible
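The attribution rule above reduces to a small decision function. This is a sketch of the recommendation only; the helper and field names are assumptions, not existing Paperclip code:

```typescript
// Sketch of the attribution rule: usage already counted on a heartbeat
// run is linked rather than re-billed; direct metered provider calls
// create a new cost event.
type AttributionMode = "included_in_run" | "billed_directly";

interface MemoryOpUsage {
  heartbeatRunId?: string; // set when the op ran inside an agent run
  costCents?: number;
}

function attribute(usage: MemoryOpUsage): {
  mode: AttributionMode;
  createCostEvent: boolean;
} {
  if (usage.heartbeatRunId) {
    return { mode: "included_in_run", createCostEvent: false };
  }
  return { mode: "billed_directly", createCostEvent: true };
}
```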
### 3. `finance_events` should carry flat subscription or invoice-style costs
If a memory service incurs:
- monthly subscription cost
- storage invoices
- provider platform charges not tied to one request
those should be represented as `finance_events`, not as synthetic per-query memory operations.
That keeps usage telemetry separate from accounting entries like invoices and flat fees.
### 4. Evaluation metrics still matter
Paperclip should record evaluation-oriented metrics where possible:
- recall hit rate
- empty query rate
- manual correction count
- extraction failure count
- per-binding success and failure counts
This is important because a memory system that "works" but silently burns budget or silently fails extraction is not acceptable in Paperclip.
## Suggested Data Model Additions
At the control-plane level, the likely new core tables are:
- `memory_bindings`
- company-scoped key
- provider id or plugin id
- config blob
- enabled status
- `memory_binding_targets`
- target type (`company`, `agent`)
- target id
- binding id
- `memory_operations`
- company id
- binding id
- operation type (`capture`, `record_upsert`, `query`, `list`, `get`, `forget`, `correct`)
- scope fields
- source refs
- usage, latency, and attribution mode
- related heartbeat run id
- related cost event id
- success or error
- `memory_extraction_jobs`
- company id
- binding id
- operation id
- provider job id
- hook kind
- status
- source refs
- error
- submitted, started, and finished timestamps
Provider-specific long-form state should stay in plugin state or the provider itself unless a built-in local provider needs its own schema.
The design should still treat that built-in as just another provider behind the adapter contract.
### Phase 1: Control-plane contract
- add memory binding models and API types
- add company default plus agent override resolution
- add plugin capability and registration surface for memory providers
### Phase 2: Hook delivery and operation audit
- add shared memory hook emission in core
- add operation logging, extraction job state, and usage attribution
- add direct-provider cost and finance-event linkage rules
### Phase 3: One built-in plus one plugin example
- ship a local markdown-first provider
- ship one hosted adapter example to validate the external-provider path
### Phase 4: UI inspection
- add company and agent memory settings
- add a memory operation explorer
- add record list and detail surfaces
- add source backlinks to issues and runs
### Phase 5: Rich capabilities
- correction flows
- provider-native browse or graph views
- project-level overrides if needed
- evaluation dashboards
- retention and quota controls
## Remaining Open Questions
- Which built-in local provider should ship first: pure markdown, markdown plus embeddings, or a lightweight local vector store?
- How much source payload should Paperclip snapshot inside `memory_operations` for debugging without duplicating large transcripts?
- Should correction flows mutate provider state directly, create superseding records, or both depending on provider capability?
- What default retention and size limits should the local built-in enforce?
- Should project overrides exist in V1 of the memory service, or should we force company default + agent override first?
- Do we want Paperclip-managed extraction pipelines at all, or should built-ins be the only place where Paperclip owns extraction?
- Should memory usage extend the current `cost_events` model directly, or should memory operations keep a parallel usage log and roll up into `cost_events` secondarily?
- Do we want provider install / binding changes to require approvals for some companies?
## Bottom Line
The right abstraction is:
- Paperclip owns bindings, resolution, hooks, provenance, policy, and attribution.
- Providers own extraction, ranking, storage, and provider-native memory semantics.
That gives Paperclip a stable memory service without locking the product to one memory philosophy or one vendor, and it integrates the AWS lessons without importing AWS's model into core.


# VS Code Task Interoperability Plan
Status: planning only, no code changes
Date: 2026-04-12
Related issue: `PAP-1377`
## Summary
Paperclip should not replace its workspace runtime service model with VS Code tasks.
It should add a narrow interoperability layer that can discover and adopt supported entries from `.vscode/tasks.json`.
The core product model should stay:
- Paperclip owns long-running workspace services and their desired state
- Paperclip shows operators exactly which named thing they are starting or stopping
- Paperclip distinguishes long-running services from one-shot jobs
VS Code tasks should be treated as:
- an import/discovery format for workspace commands
- a convenience for repos that already maintain `tasks.json`
- a partial compatibility layer, not a full execution model
## Current State
The current implementation is already service-oriented:
- project workspaces and execution workspaces can store `workspaceRuntime` config plus `desiredState` and per-service `serviceStates`
- the UI renders one control row per configured service and persists start/stop intent
- the backend supervises long-running local processes, reuses eligible services, and restores desired services on startup
Relevant files:
- `packages/shared/src/types/workspace-runtime.ts`
- `server/src/services/workspace-runtime.ts`
- `server/src/services/project-workspace-runtime-config.ts`
- `ui/src/components/WorkspaceRuntimeControls.tsx`
- `ui/src/pages/ProjectWorkspaceDetail.tsx`
- `ui/src/pages/ExecutionWorkspaceDetail.tsx`
This is directionally correct for Paperclip because it gives the control plane an explicit model for service lifecycle, health, reuse, and restart behavior.
## Problem To Solve
The current UX is still too raw:
- operators have to hand-author runtime JSON
- a workspace can have multiple attached services, but the higher-level intent is not obvious
- start/stop controls are visible in multiple places, which makes it easy to lose track of what is being controlled
- there is no interoperability with repos that already define useful local workflows in `.vscode/tasks.json`
The issue is not that services are the wrong abstraction.
The issue is that the configuration surface is too low-level and Paperclip does not yet leverage existing workspace metadata.
## Recommendation
Keep Paperclip runtime services as the source of truth for service supervision.
Add a new workspace command model above the raw JSON layer, with VS Code task discovery as one input.
The product model should become:
1. `Workspace command`
A named runnable thing attached to a workspace.
2. `Workspace service`
A workspace command that is expected to stay alive and be supervised.
3. `Workspace job`
A workspace command that runs once and exits.
4. `Runtime service instance`
The live process record that already exists today in Paperclip.
In that model, VS Code tasks are a way to populate workspace commands.
Only commands that map cleanly to Paperclip service or job semantics should become runnable in Paperclip.
## Why Not Fully Adopt VS Code Tasks
VS Code tasks are broader than Paperclip runtime services.
They include shell/process tasks, compound tasks, background/watch tasks, presentation settings, extension/task-provider types, variable substitution, and problem-matcher-driven lifecycle.
That creates a bad fit if Paperclip tries to use `tasks.json` as its only runtime model:
- many tasks are one-shot jobs, not long-running services
- some tasks depend on VS Code task providers or editor-only variable resolution
- compound task graphs are useful, but they are not the same thing as a supervised service
- problem matcher readiness is useful metadata, but it is not enough to replace Paperclip's persisted service lifecycle model
The right boundary is interoperability, not replacement.
## Interoperability Contract
Paperclip should support a conservative subset of VS Code tasks and clearly mark unsupported entries.
### Supported in phase 1
- `shell` and `process` tasks with a concrete command Paperclip can resolve
- optional task `options.cwd`
- optional task environment values that can be flattened safely
- task labels and detail text for naming and display
- `dependsOn` for import-time expansion or display-only dependency hints
- background/watch-oriented tasks that can reasonably be treated as long-running services
### Maybe supported in later phases
- grouping and default task metadata for better UX
- selected variable substitution when Paperclip can resolve it safely from workspace context
- mapping task metadata into Paperclip readiness/expose hints
- limited compound-task launch flows
### Not supported initially
- extension-provided task types Paperclip cannot execute directly
- arbitrary VS Code variable substitution semantics
- problem matcher parsing as the main source of service health
- full parity with VS Code task execution behavior
## Long-Running Service Detection
Paperclip needs an explicit classification layer instead of assuming every VS Code task is a service.
Recommended classification:
- `service`
Explicitly marked by Paperclip metadata, or confidently inferred from background/watch task semantics
- `job`
One-shot command expected to exit
- `unsupported`
Present in `tasks.json`, but not safely runnable by Paperclip
The important product decision is that service classification must be visible and editable by the operator.
Inference can help, but it should not be the only source of truth.
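The classification rule can be sketched as a small pure function. The `paperclip` metadata field is the namespaced extension proposed later in this document; everything else is standard `tasks.json` shape:

```typescript
// Hedged sketch of the classification rule: explicit Paperclip metadata
// wins, background/watch tasks become service candidates, other runnable
// shell/process tasks are jobs, and everything else is unsupported.
type CommandKind = "service" | "job" | "unsupported";

interface VsCodeTask {
  label: string;
  type?: string;        // "shell" | "process" | extension-provided types
  command?: string;
  isBackground?: boolean;
  paperclip?: { kind?: "service" | "job" }; // proposed namespaced metadata
}

function classify(task: VsCodeTask): CommandKind {
  if (task.paperclip?.kind) return task.paperclip.kind;
  if (task.type !== "shell" && task.type !== "process") return "unsupported";
  if (!task.command) return "unsupported";
  return task.isBackground ? "service" : "job";
}
```

In the real product, a `service` result from inference alone should still be surfaced for operator confirmation rather than trusted silently.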
## Proposed Product Shape
### 1. Replace raw-first editing with command-first editing
Project and execution workspace pages should stop making raw runtime JSON the primary editing surface.
Default UI should show:
- workspace commands
- command type: service or job
- source: Paperclip or VS Code
- exact command and cwd
- current state for services
- explicit start, stop, restart, and run-now actions
Raw JSON should remain available behind an advanced section.
### 2. Add VS Code task discovery on workspaces
For a workspace with `cwd`, Paperclip should look for `.vscode/tasks.json`.
The workspace UI should show:
- whether a `tasks.json` file was found
- last parse time
- supported commands discovered
- unsupported tasks with reasons
- whether commands are inherited into execution workspaces
### 3. Make the controlled thing explicit
Start and stop UI should always name the exact entry being controlled.
Examples:
- `Start web`
- `Stop api`
- `Run db:migrate`
Avoid generic workspace-level labels when multiple commands exist.
### 4. Separate services from jobs in the UI
Do not mix one-shot jobs and long-running services into one undifferentiated list.
Recommended sections:
- `Services`
- `Jobs`
- `Unsupported imported tasks`
That resolves the ambiguity called out in the issue.
## Data Model Direction
Do not replace `workspaceRuntime` immediately.
Instead add a higher-level representation that can compile down to the existing runtime-service machinery.
Suggested workspace metadata shape:
```ts
type WorkspaceCommandSource =
| { type: "paperclip" }
| { type: "vscode_task"; taskLabel: string; taskPath: ".vscode/tasks.json" };
type WorkspaceCommandKind = "service" | "job";
type WorkspaceCommandDefinition = {
id: string;
name: string;
kind: WorkspaceCommandKind;
source: WorkspaceCommandSource;
command: string | null;
cwd: string | null;
env?: Record<string, string> | null;
autoStart?: boolean;
serviceConfig?: {
lifecycle?: "shared" | "ephemeral";
reuseScope?: "project_workspace" | "execution_workspace" | "run";
readiness?: Record<string, unknown> | null;
expose?: Record<string, unknown> | null;
} | null;
importWarnings?: string[];
disabledReason?: string | null;
};
```
`workspaceRuntime` can then become a derived or advanced representation for service-type commands until the rest of the system is migrated.
## VS Code Mapping Rules
Paperclip should map imported tasks with explicit, documented rules.
Recommended rules:
1. A task becomes a `job` by default.
2. A task becomes a `service` only when:
- Paperclip metadata marks it as a service, or
- the task clearly represents a background/watch process and the operator confirms the classification.
3. Unsupported tasks stay visible but disabled.
4. Task labels become default command names.
5. `dependsOn` is preserved as metadata, not silently flattened into hidden behavior.
Paperclip-specific metadata can live in a namespaced field on the imported task definition, for example:
```json
{
"label": "web",
"type": "shell",
"command": "pnpm dev",
"isBackground": true,
"paperclip": {
"kind": "service",
"readiness": {
"type": "http",
"urlTemplate": "http://127.0.0.1:${port}"
},
"expose": {
"type": "url",
"urlTemplate": "http://127.0.0.1:${port}"
}
}
}
```
That gives us interoperability without depending on VS Code-only semantics for service readiness and exposure.
## Execution Policy
Project workspaces should be the main place where imported commands are discovered and curated.
Execution workspaces should inherit that curated command set by default, with optional issue-level overrides.
Recommended precedence:
1. execution workspace override
2. project workspace command set
3. imported VS Code tasks from the linked workspace
4. advanced raw runtime fallback
This matches the existing direction in `doc/plans/2026-03-10-workspace-strategy-and-git-worktrees.md`.
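The four-level precedence reduces to a first-non-empty lookup. This sketch uses hypothetical names and treats each level as a flat list of command names:

```typescript
// Sketch of the command-set precedence: execution override, then project
// command set, then imported VS Code tasks, then raw runtime fallback.
interface CommandSets {
  executionOverride?: string[];
  projectCommands?: string[];
  importedVsCodeTasks?: string[];
  rawRuntimeFallback?: string[];
}

function effectiveCommands(sets: CommandSets): string[] {
  return (
    sets.executionOverride ??
    sets.projectCommands ??
    sets.importedVsCodeTasks ??
    sets.rawRuntimeFallback ??
    []
  );
}
```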
## Implementation Plan
### Phase 1: Discovery and read-only visibility
Goal:
show imported VS Code tasks in the workspace UI without changing runtime behavior.
Work:
- parse `.vscode/tasks.json` for project workspaces with local `cwd`
- derive a list of candidate commands plus unsupported items
- show source, label, command, cwd, and classification
- show parse warnings and unsupported reasons
Success condition:
an operator can see what Paperclip would import and why.
### Phase 2: Command model and explicit classification
Goal:
introduce a first-class workspace command layer above raw runtime JSON.
Work:
- add a persisted command definition model in workspace metadata or a dedicated table
- allow operator edits to imported command classification
- separate `service` and `job` in UI
- keep existing runtime-service storage for live supervised processes
Success condition:
the workspace UI is command-first, and raw runtime JSON is advanced-only.
### Phase 3: Service execution backed by existing runtime supervisor
Goal:
run supported imported service commands through the current Paperclip supervisor.
Work:
- compile service commands into the existing runtime service start/stop path
- persist desired state per named command
- keep startup restoration behavior for service commands
- make the active command name explicit everywhere control actions appear
Success condition:
imported service commands behave like native Paperclip services once adopted.
### Phase 4: Job execution and optional dependency handling
Goal:
support one-shot imported commands without pretending they are services.
Work:
- add `Run` actions for jobs
- record output in workspace operations
- optionally support simple `dependsOn` execution for jobs with clear logging
Success condition:
one-shot tasks are runnable, but they are not mixed into the service lifecycle model.
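Simple `dependsOn` execution for jobs could be a sequential walk with cycle detection and run-once bookkeeping. This is a sketch of the shape, not the planned implementation; the logging stands in for recording output in workspace operations:

```ts
interface JobDef {
  name: string;
  dependsOn?: string[];
  run: () => Promise<void>;
}

// Run a job after its dependencies, once each, failing loudly on cycles.
async function runJob(
  name: string,
  jobs: Map<string, JobDef>,
  done = new Set<string>(),
  stack = new Set<string>(),
): Promise<void> {
  if (done.has(name)) return;
  if (stack.has(name)) throw new Error(`dependency cycle at "${name}"`);
  const job = jobs.get(name);
  if (!job) throw new Error(`unknown job "${name}"`);
  stack.add(name);
  for (const dep of job.dependsOn ?? []) {
    await runJob(dep, jobs, done, stack);
  }
  console.log(`[job] running ${name}`);
  await job.run();
  stack.delete(name);
  done.add(name);
}
```

The `done` set keeps diamond-shaped graphs from running a shared dependency twice, which is one reason to keep job graphs explicit rather than magical.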
### Phase 5: Adapter and execution workspace integration
Goal:
let agents and issue-scoped workspaces consume the curated command model consistently.
Work:
- expose inherited workspace commands to execution workspaces
- allow issue-level selection of a default service command when relevant
- make service selection explicit in issue and workspace views
Success condition:
agents, operators, and workspaces all refer to the same named commands.
## Non-Goals
- full VS Code task-runner parity
- support for every VS Code task type
- removal of Paperclip's own runtime supervision model
- editor-dependent execution semantics inside the control plane
## Risks
- overfitting Paperclip to VS Code and making the model worse for non-VS-Code repos
- misclassifying watch tasks as durable services
- hiding too much detail and making debugging harder
- allowing imported task graphs to become implicit magic
These risks are manageable if the import layer stays explicit, conservative, and operator-editable.
## Decision
Paperclip should adopt VS Code tasks as an optional workspace command source, not as the canonical runtime model.
The main UX change should be:
- move from raw runtime JSON to named workspace commands
- separate services from jobs
- make the exact controlled command explicit
- let `.vscode/tasks.json` pre-populate those commands when available
## External References
- VS Code tasks documentation: https://code.visualstudio.com/docs/debugtest/tasks
- Existing Paperclip workspace plan: `doc/plans/2026-03-10-workspace-strategy-and-git-worktrees.md`

View File

@@ -20,7 +20,6 @@ The `codex_local` adapter runs OpenAI's Codex CLI locally. It supports session p
| `env` | object | No | Environment variables (supports secret refs) |
| `timeoutSec` | number | No | Process timeout (0 = no timeout) |
| `graceSec` | number | No | Grace period before force-kill |
| `fastMode` | boolean | No | Enables Codex Fast mode. Currently supported only on `gpt-5.4`; consumes credits faster |
| `dangerouslyBypassApprovalsAndSandbox` | boolean | No | Skip safety checks (dev only) |
## Session Persistence
@@ -31,22 +30,8 @@ Codex uses `previous_response_id` for session continuity. The adapter serializes
The adapter symlinks Paperclip skills into the global Codex skills directory (`~/.codex/skills`). Existing user skills are not overwritten.
## Fast Mode
When `fastMode` is enabled, Paperclip adds Codex config overrides equivalent to:
```sh
-c 'service_tier="fast"' -c 'features.fast_mode=true'
```
Paperclip currently applies that only when the selected model is `gpt-5.4`. On other models, the toggle is preserved in config but ignored at execution time to avoid unsupported runs.
## Managed `CODEX_HOME`
When Paperclip is running inside a managed worktree instance (`PAPERCLIP_IN_WORKTREE=true`), the adapter instead uses a worktree-isolated `CODEX_HOME` under the Paperclip instance so Codex skills, sessions, logs, and other runtime state do not leak across checkouts. It seeds that isolated home from the user's main Codex home for shared auth/config continuity.
## Manual Local CLI
For manual local CLI usage outside heartbeat runs (for example running as `codexcoder` directly), use:
```sh

View File

@@ -203,43 +203,6 @@ export const sessionCodec: AdapterSessionCodec = {
};
```
## Capability Flags
Adapters can declare what "local" capabilities they support by setting optional fields on the `ServerAdapterModule`. The server and UI use these flags to decide which features to enable for agents using the adapter (instructions bundle editor, skills sync, JWT auth, etc.).
| Flag | Type | Default | What it controls |
|------|------|---------|------------------|
| `supportsLocalAgentJwt` | `boolean` | `false` | Whether heartbeat generates a local JWT for the agent |
| `supportsInstructionsBundle` | `boolean` | `false` | Managed instructions bundle (AGENTS.md) — server-side resolution + UI editor |
| `instructionsPathKey` | `string` | `"instructionsFilePath"` | The `adapterConfig` key that holds the instructions file path |
| `requiresMaterializedRuntimeSkills` | `boolean` | `false` | Whether runtime skill entries must be written to disk before execution |
These flags are exposed via `GET /api/adapters` in a `capabilities` object, along with a derived `supportsSkills` flag (true when `listSkills` or `syncSkills` is defined).
### Example
```ts
export function createServerAdapter(): ServerAdapterModule {
return {
type: "my_k8s_adapter",
execute: myExecute,
testEnvironment: myTestEnvironment,
listSkills: myListSkills,
syncSkills: mySyncSkills,
// Capability flags
supportsLocalAgentJwt: true,
supportsInstructionsBundle: true,
instructionsPathKey: "instructionsFilePath",
requiresMaterializedRuntimeSkills: true,
};
}
```
With these flags set, the Paperclip UI will automatically show the instructions bundle editor, skills management tab, and working directory field for agents using this adapter — no Paperclip source changes required.
If capability flags are not set, the server falls back to legacy hardcoded lists for built-in adapter types. External adapters that omit the flags will default to `false` for all capabilities.
## Skills Injection
Make Paperclip skills discoverable to your agent runtime without writing to the agent's working directory:

View File

@@ -13,8 +13,6 @@ GET /api/companies/{companyId}/agents
Returns all agents in the company.
This route does not accept query filters. Unsupported query parameters return `400`.
## Get Agent
```

View File

@@ -66,8 +66,6 @@ The optional `comment` field adds a comment in the same call.
Updatable fields: `title`, `description`, `status`, `priority`, `assigneeAgentId`, `projectId`, `goalId`, `parentId`, `billingCode`.
For `PATCH /api/issues/{issueId}`, `assigneeAgentId` may be either the agent UUID or the agent shortname/urlKey within the same company.
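As an illustration, assigning by shortname could look like the call below. The endpoint shape comes from the docs above; the base URL, token source, and helper name are assumptions for the example:

```ts
// Assign an issue to an agent by shortname instead of UUID (illustrative helper).
async function assignIssue(issueId: string, assignee: string): Promise<void> {
  const res = await fetch(`http://localhost:3100/api/issues/${issueId}`, {
    method: "PATCH",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.PAPERCLIP_API_KEY}`,
    },
    body: JSON.stringify({ assigneeAgentId: assignee }),
  });
  if (!res.ok) throw new Error(`PATCH /api/issues/${issueId} failed: ${res.status}`);
}
```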
## Checkout (Claim Task)
```

View File

@@ -89,8 +89,6 @@ Show resolved environment configuration:
pnpm paperclipai env
```
This now includes bind-oriented deployment settings such as `PAPERCLIP_BIND` and `PAPERCLIP_BIND_HOST` when configured.
## `paperclipai allowed-hostname`
Allow a private hostname for authenticated/private mode:

View File

@@ -3,14 +3,13 @@ title: Deployment Modes
summary: local_trusted vs authenticated (private/public)
---
Paperclip supports two runtime modes with different security profiles. Reachability is configured separately with `bind`.
Paperclip supports two runtime modes with different security profiles.
## `local_trusted`
The default mode. Optimized for single-operator local use.
- **Host binding**: loopback only (localhost)
- **Bind**: `loopback`
- **Authentication**: no login required
- **Use case**: local development, solo experimentation
- **Board identity**: auto-created local board user
@@ -32,7 +31,6 @@ For private network access (Tailscale, VPN, LAN).
- **Authentication**: login required via Better Auth
- **URL handling**: auto base URL mode (lower friction)
- **Host trust**: private-host trust policy required
- **Bind**: choose `loopback`, `lan`, `tailnet`, or `custom`
```sh
pnpm paperclipai onboard
@@ -52,7 +50,6 @@ For internet-facing deployment.
- **Authentication**: login required
- **URL**: explicit public URL required
- **Security**: stricter deployment checks in doctor
- **Bind**: usually `loopback` behind a reverse proxy; `lan/custom` is advanced
```sh
pnpm paperclipai onboard
@@ -84,5 +81,5 @@ pnpm paperclipai configure --section server
Runtime override via environment variable:
```sh
PAPERCLIP_DEPLOYMENT_MODE=authenticated PAPERCLIP_BIND=lan pnpm paperclipai run
PAPERCLIP_DEPLOYMENT_MODE=authenticated pnpm paperclipai run
```

View File

@@ -10,15 +10,11 @@ All environment variables that Paperclip uses for server configuration.
| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `3100` | Server port |
| `PAPERCLIP_BIND` | `loopback` | Reachability preset: `loopback`, `lan`, `tailnet`, or `custom` |
| `PAPERCLIP_BIND_HOST` | (unset) | Required when `PAPERCLIP_BIND=custom` |
| `HOST` | `127.0.0.1` | Legacy host override; prefer `PAPERCLIP_BIND` for new setups |
| `HOST` | `127.0.0.1` | Server host binding |
| `DATABASE_URL` | (embedded) | PostgreSQL connection string |
| `PAPERCLIP_HOME` | `~/.paperclip` | Base directory for all Paperclip data |
| `PAPERCLIP_INSTANCE_ID` | `default` | Instance identifier (for multiple local instances) |
| `PAPERCLIP_DEPLOYMENT_MODE` | `local_trusted` | Runtime mode override |
| `PAPERCLIP_DEPLOYMENT_EXPOSURE` | `private` | Exposure policy when deployment mode is `authenticated` |
| `PAPERCLIP_API_URL` | (auto-derived) | Paperclip API base URL. When set externally (e.g., via Kubernetes ConfigMap, load balancer, or reverse proxy), the server preserves the value instead of deriving it from the listen host and port. Useful for deployments where the public-facing URL differs from the local bind address. |
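The preserve-or-derive behavior described for `PAPERCLIP_API_URL` can be sketched as follows. This is a simplification; the real resolution may account for additional settings such as bind presets:

```ts
// Prefer an externally provided API URL; otherwise derive one from the listen host/port.
function resolveApiUrl(env: Record<string, string | undefined>, host: string, port: number): string {
  const external = env.PAPERCLIP_API_URL?.trim();
  if (external) return external;
  // A wildcard bind is still reachable locally via localhost.
  const reachableHost = host === "0.0.0.0" ? "localhost" : host;
  return `http://${reachableHost}:${port}`;
}
```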
## Secrets
@@ -36,7 +32,7 @@ These are set automatically by the server when invoking agents:
|----------|-------------|
| `PAPERCLIP_AGENT_ID` | Agent's unique ID |
| `PAPERCLIP_COMPANY_ID` | Company ID |
| `PAPERCLIP_API_URL` | Paperclip API base URL (inherits the server-level value; see Server Configuration above) |
| `PAPERCLIP_API_URL` | Paperclip API base URL |
| `PAPERCLIP_API_KEY` | Short-lived JWT for API auth |
| `PAPERCLIP_RUN_ID` | Current heartbeat run ID |
| `PAPERCLIP_TASK_ID` | Issue that triggered this wake |

View File

@@ -38,26 +38,19 @@ This does:
2. Runs `paperclipai doctor` with repair enabled
3. Starts the server when checks pass
## Bind Presets In Dev
## Tailscale/Private Auth Dev Mode
Default `pnpm dev` stays in `local_trusted` with loopback-only binding.
To open Paperclip to a private network with login enabled:
```sh
pnpm dev --bind lan
```
For Tailscale-only binding on a detected tailnet address:
```sh
pnpm dev --bind tailnet
```
Legacy aliases still work and map to the older broad private-network behavior:
To run in `authenticated/private` mode for network access:
```sh
pnpm dev --tailscale-auth
```
This binds the server to `0.0.0.0` for private-network access.
Alias:
```sh
pnpm dev --authenticated-private
```

View File

@@ -1,6 +1,6 @@
---
title: Tailscale Private Access
summary: Run Paperclip with Tailscale-friendly bind presets and connect from other devices
summary: Run Paperclip with Tailscale-friendly host binding and connect from other devices
---
Use this when you want to access Paperclip over Tailscale (or a private LAN/VPN) instead of only `localhost`.
@@ -8,25 +8,20 @@ Use this when you want to access Paperclip over Tailscale (or a private LAN/VPN)
## 1. Start Paperclip in private authenticated mode
```sh
pnpm dev --bind tailnet
pnpm dev --tailscale-auth
```
Recommended behavior:
This configures:
- `PAPERCLIP_DEPLOYMENT_MODE=authenticated`
- `PAPERCLIP_DEPLOYMENT_EXPOSURE=private`
- `PAPERCLIP_BIND=tailnet`
- `PAPERCLIP_AUTH_BASE_URL_MODE=auto`
- `HOST=0.0.0.0` (bind on all interfaces)
If you want the old broad private-network behavior instead, use:
Equivalent flag:
```sh
pnpm dev --bind lan
```
Legacy aliases still map to `authenticated/private + bind=lan`:
pnpm dev --authenticated-private
pnpm dev --tailscale-auth
```
## 2. Find your reachable Tailscale address
@@ -78,5 +73,5 @@ Expected result:
## Troubleshooting
- Login or redirect errors on a private hostname: add it with `paperclipai allowed-hostname`.
- App only works on `localhost`: make sure you started with `--bind lan` or `--bind tailnet` instead of plain `pnpm dev`.
- App only works on `localhost`: make sure you started with `--tailscale-auth` (or set `HOST=0.0.0.0` in private mode).
- Can connect locally but not remotely: verify both devices are on the same Tailscale network and port `3100` is reachable.

View File

@@ -5,28 +5,22 @@ summary: How project runtime configuration, execution workspaces, and issue runs
This guide documents the intended runtime model for projects, execution workspaces, and issue runs in Paperclip.
Paperclip now presents this as a workspace-command model:
- `Services` are long-running commands that stay supervised.
- `Jobs` are one-shot commands that run once and exit.
- Raw runtime JSON is still available for advanced config, but it is no longer the primary mental model.
## Project runtime configuration
You can define how to run a project on the project workspace itself.
- Project workspace runtime config describes the services and jobs available for that project checkout.
- Project workspace runtime config describes how to run services for that project checkout.
- This is the default runtime configuration that child execution workspaces may inherit.
- Defining the config does not start anything by itself.
## Manual runtime control
Workspace commands are manually controlled from the UI.
Runtime services are manually controlled from the UI.
- Project workspace services are started and stopped from the project workspace UI, and project jobs can be run on demand there.
- Execution workspace services are started and stopped from the execution workspace UI, and execution-workspace jobs can be run on demand there.
- Paperclip does not automatically start or stop these workspace services as part of issue execution.
- Paperclip also does not automatically restart workspace services on server boot.
- Project workspace runtime services are started and stopped from the project workspace UI.
- Execution workspace runtime services are started and stopped from the execution workspace UI.
- Paperclip does not automatically start or stop these runtime services as part of issue execution.
- Paperclip also does not automatically restart workspace runtime services on server boot.
## Execution workspace inheritance
@@ -35,7 +29,7 @@ Execution workspaces isolate code and runtime state from the project primary wor
- An isolated execution workspace has its own checkout path, branch, and local runtime instance.
- The runtime configuration may inherit from the linked project workspace by default.
- The execution workspace may override that runtime configuration with its own workspace-specific settings.
- The inherited configuration answers "which commands exist and how to run them", but any running service process is still specific to that execution workspace.
- The inherited configuration answers "how to run the service", but the running process is still specific to that execution workspace.
## Issues and execution workspaces
@@ -44,7 +38,7 @@ Issues are attached to execution workspace behavior, not to automatic runtime ma
- An issue may create a new execution workspace when you choose an isolated workspace mode.
- An issue may reuse an existing execution workspace when you choose reuse.
- Multiple issues may intentionally share one execution workspace so they can work against the same branch and running runtime services.
- Assigning or running an issue does not automatically start or stop workspace services for that workspace.
- Assigning or running an issue does not automatically start or stop runtime services for that workspace.
## Execution workspace lifecycle
@@ -68,7 +62,7 @@ Heartbeat still resolves a workspace for the run, but that is about code locatio
With the current implementation:
- Project workspace command config is the fallback for execution workspace UI controls.
- Project workspace runtime config is the fallback for execution workspace UI controls.
- Execution workspace runtime overrides are stored on the execution workspace.
- Heartbeat runs do not auto-start workspace services.
- Server startup does not auto-restart workspace services.
- Heartbeat runs do not auto-start workspace runtime services.
- Server startup does not auto-restart workspace runtime services.

View File

@@ -3,7 +3,6 @@
"private": true,
"type": "module",
"scripts": {
"preflight:workspace-links": "node cli/node_modules/tsx/dist/cli.mjs scripts/ensure-workspace-package-links.ts",
"dev": "pnpm --filter @paperclipai/server exec tsx ../scripts/dev-runner.ts watch",
"dev:watch": "pnpm --filter @paperclipai/server exec tsx ../scripts/dev-runner.ts watch",
"dev:once": "pnpm --filter @paperclipai/server exec tsx ../scripts/dev-runner.ts dev",
@@ -11,11 +10,10 @@
"dev:stop": "pnpm --filter @paperclipai/server exec tsx ../scripts/dev-service.ts stop",
"dev:server": "pnpm --filter @paperclipai/server dev",
"dev:ui": "pnpm --filter @paperclipai/ui dev",
"build": "pnpm run preflight:workspace-links && pnpm -r build",
"typecheck": "pnpm run preflight:workspace-links && pnpm -r typecheck",
"test": "pnpm run test:run",
"test:watch": "pnpm run preflight:workspace-links && vitest",
"test:run": "pnpm run preflight:workspace-links && vitest run",
"build": "pnpm -r build",
"typecheck": "pnpm -r typecheck",
"test": "vitest",
"test:run": "vitest run",
"db:generate": "pnpm --filter @paperclipai/db generate",
"db:migrate": "pnpm --filter @paperclipai/db migrate",
"secrets:migrate-inline-env": "tsx scripts/migrate-inline-env-secrets.ts",

View File

@@ -2,24 +2,6 @@ import { randomUUID } from "node:crypto";
import { describe, expect, it } from "vitest";
import { runChildProcess } from "./server-utils.js";
function isPidAlive(pid: number) {
try {
process.kill(pid, 0);
return true;
} catch {
return false;
}
}
async function waitForPidExit(pid: number, timeoutMs = 2_000) {
const deadline = Date.now() + timeoutMs;
while (Date.now() < deadline) {
if (!isPidAlive(pid)) return true;
await new Promise((resolve) => setTimeout(resolve, 50));
}
return !isPidAlive(pid);
}
describe("runChildProcess", () => {
it("waits for onSpawn before sending stdin to the child", async () => {
const spawnDelayMs = 150;
@@ -53,36 +35,4 @@ describe("runChildProcess", () => {
expect(onSpawnCompletedAt).toBeGreaterThanOrEqual(startedAt + spawnDelayMs);
expect(finishedAt - startedAt).toBeGreaterThanOrEqual(spawnDelayMs);
});
it.skipIf(process.platform === "win32")("kills descendant processes on timeout via the process group", async () => {
let descendantPid: number | null = null;
const result = await runChildProcess(
randomUUID(),
process.execPath,
[
"-e",
[
"const { spawn } = require('node:child_process');",
"const child = spawn(process.execPath, ['-e', 'setInterval(() => {}, 1000)'], { stdio: 'ignore' });",
"process.stdout.write(String(child.pid));",
"setInterval(() => {}, 1000);",
].join(" "),
],
{
cwd: process.cwd(),
env: {},
timeoutSec: 1,
graceSec: 1,
onLog: async () => {},
onSpawn: async () => {},
},
);
descendantPid = Number.parseInt(result.stdout.trim(), 10);
expect(result.timedOut).toBe(true);
expect(Number.isInteger(descendantPid) && descendantPid > 0).toBe(true);
expect(await waitForPidExit(descendantPid!, 2_000)).toBe(true);
});
});

View File

@@ -19,7 +19,6 @@ export interface RunProcessResult {
interface RunningProcess {
child: ChildProcess;
graceSec: number;
processGroupId: number | null;
}
interface SpawnTarget {
@@ -35,28 +34,6 @@ type ChildProcessWithEvents = ChildProcess & {
): ChildProcess;
};
function resolveProcessGroupId(child: ChildProcess) {
if (process.platform === "win32") return null;
return typeof child.pid === "number" && child.pid > 0 ? child.pid : null;
}
function signalRunningProcess(
running: Pick<RunningProcess, "child" | "processGroupId">,
signal: NodeJS.Signals,
) {
if (process.platform !== "win32" && running.processGroupId && running.processGroupId > 0) {
try {
process.kill(-running.processGroupId, signal);
return;
} catch {
// Fall back to the direct child signal if group signaling fails.
}
}
if (!running.child.killed) {
running.child.kill(signal);
}
}
export const runningProcesses = new Map<string, RunningProcess>();
export const MAX_CAPTURE_BYTES = 4 * 1024 * 1024;
export const MAX_EXCERPT_BYTES = 32 * 1024;
@@ -253,7 +230,6 @@ type PaperclipWakeComment = {
type PaperclipWakePayload = {
reason: string | null;
issue: PaperclipWakeIssue | null;
checkedOutByHarness: boolean;
executionStage: PaperclipWakeExecutionStage | null;
commentIds: string[];
latestCommentId: string | null;
@@ -364,7 +340,6 @@ export function normalizePaperclipWakePayload(value: unknown): PaperclipWakePayl
return {
reason: asString(payload.reason, "").trim() || null,
issue: normalizePaperclipWakeIssue(payload.issue),
checkedOutByHarness: asBoolean(payload.checkedOutByHarness, false),
executionStage,
commentIds,
latestCommentId: asString(payload.latestCommentId, "").trim() || null,
@@ -434,9 +409,6 @@ export function renderPaperclipWakePrompt(
if (normalized.issue?.priority) {
lines.push(`- issue priority: ${normalized.issue.priority}`);
}
if (normalized.checkedOutByHarness) {
lines.push("- checkout: already claimed by the harness for this run");
}
if (normalized.missingCount > 0) {
lines.push(`- omitted comments: ${normalized.missingCount}`);
}
@@ -470,15 +442,6 @@ export function renderPaperclipWakePrompt(
}
}
if (normalized.checkedOutByHarness) {
lines.push(
"",
"The harness already checked out this issue for the current run.",
"Do not call `/api/issues/{id}/checkout` again unless you intentionally switch to a different task.",
"",
);
}
if (normalized.comments.length > 0) {
lines.push("New comments in order:");
}
@@ -1071,7 +1034,7 @@ export async function runChildProcess(
graceSec: number;
onLog: (stream: "stdout" | "stderr", chunk: string) => Promise<void>;
onLogError?: (err: unknown, runId: string, message: string) => void;
onSpawn?: (meta: { pid: number; processGroupId: number | null; startedAt: string }) => Promise<void>;
onSpawn?: (meta: { pid: number; startedAt: string }) => Promise<void>;
stdin?: string;
},
): Promise<RunProcessResult> {
@@ -1101,21 +1064,19 @@ export async function runChildProcess(
const child = spawn(target.command, target.args, {
cwd: opts.cwd,
env: mergedEnv,
detached: process.platform !== "win32",
shell: false,
stdio: [opts.stdin != null ? "pipe" : "ignore", "pipe", "pipe"],
}) as ChildProcessWithEvents;
const startedAt = new Date().toISOString();
const processGroupId = resolveProcessGroupId(child);
const spawnPersistPromise =
typeof child.pid === "number" && child.pid > 0 && opts.onSpawn
? opts.onSpawn({ pid: child.pid, processGroupId, startedAt }).catch((err) => {
? opts.onSpawn({ pid: child.pid, startedAt }).catch((err) => {
onLogError(err, runId, "failed to record child process metadata");
})
: Promise.resolve();
runningProcesses.set(runId, { child, graceSec: opts.graceSec, processGroupId });
runningProcesses.set(runId, { child, graceSec: opts.graceSec });
let timedOut = false;
let stdout = "";
@@ -1126,9 +1087,11 @@ export async function runChildProcess(
opts.timeoutSec > 0
? setTimeout(() => {
timedOut = true;
signalRunningProcess({ child, processGroupId }, "SIGTERM");
child.kill("SIGTERM");
setTimeout(() => {
signalRunningProcess({ child, processGroupId }, "SIGKILL");
if (!child.killed) {
child.kill("SIGKILL");
}
}, Math.max(1, opts.graceSec) * 1000);
}, opts.timeoutSec * 1000)
: null;

View File

@@ -120,7 +120,7 @@ export interface AdapterExecutionContext {
context: Record<string, unknown>;
onLog: (stream: "stdout" | "stderr", chunk: string) => Promise<void>;
onMeta?: (meta: AdapterInvocationMeta) => Promise<void>;
onSpawn?: (meta: { pid: number; processGroupId: number | null; startedAt: string }) => Promise<void>;
onSpawn?: (meta: { pid: number; startedAt: string }) => Promise<void>;
authToken?: string;
}
@@ -328,36 +328,6 @@ export interface ServerAdapterModule {
* resolved inside this method — the caller receives a fully hydrated schema.
*/
getConfigSchema?: () => Promise<AdapterConfigSchema> | AdapterConfigSchema;
// ---------------------------------------------------------------------------
// Adapter capability flags
//
// These allow adapter plugins to declare what "local" capabilities they
// support, replacing hardcoded type lists in the server and UI.
// All flags are optional — when undefined, the server falls back to
// legacy hardcoded lists for built-in adapters.
// ---------------------------------------------------------------------------
/**
* Adapter supports managed instructions bundle (AGENTS.md files).
* When true, the server uses instructionsPathKey (default "instructionsFilePath")
* to resolve the instructions config key, and the UI shows the bundle editor.
* Built-in local adapters default to true; external plugins must opt in.
*/
supportsInstructionsBundle?: boolean;
/**
* The adapterConfig key that holds the instructions file path.
* Defaults to "instructionsFilePath" when supportsInstructionsBundle is true.
*/
instructionsPathKey?: string;
/**
* Adapter needs runtime skill entries materialized (written to disk)
* before being passed via config. Used by adapters that scan a directory
* rather than reading config.paperclipRuntimeSkills.
*/
requiresMaterializedRuntimeSkills?: boolean;
}
// ---------------------------------------------------------------------------
@@ -402,7 +372,6 @@ export interface CreateConfigValues {
chrome: boolean;
dangerouslySkipPermissions: boolean;
search: boolean;
fastMode: boolean;
dangerouslyBypassSandbox: boolean;
command: string;
args: string;

View File

@@ -1,4 +1,5 @@
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { fileURLToPath } from "node:url";
import type { AdapterExecutionContext, AdapterExecutionResult } from "@paperclipai/adapter-utils";
@@ -32,10 +33,35 @@ import {
} from "./parse.js";
import { resolveClaudeDesiredSkillNames } from "./skills.js";
import { isBedrockModelId } from "./models.js";
import { prepareClaudePromptBundle } from "./prompt-cache.js";
const __moduleDir = path.dirname(fileURLToPath(import.meta.url));
/**
* Create a tmpdir with `.claude/skills/` containing symlinks to skills from
* the repo's `skills/` directory, so `--add-dir` makes Claude Code discover
* them as proper registered skills.
*/
async function buildSkillsDir(config: Record<string, unknown>): Promise<string> {
const tmp = await fs.mkdtemp(path.join(os.tmpdir(), "paperclip-skills-"));
const target = path.join(tmp, ".claude", "skills");
await fs.mkdir(target, { recursive: true });
const availableEntries = await readPaperclipRuntimeSkillEntries(config, __moduleDir);
const desiredNames = new Set(
resolveClaudeDesiredSkillNames(
config,
availableEntries,
),
);
for (const entry of availableEntries) {
if (!desiredNames.has(entry.key)) continue;
await fs.symlink(
entry.source,
path.join(target, entry.runtimeName),
);
}
return tmp;
}
interface ClaudeExecutionInput {
runId: string;
agent: AdapterExecutionContext["agent"];
@@ -335,64 +361,47 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
),
);
const billingType = resolveClaudeBillingType(effectiveEnv);
const claudeSkillEntries = await readPaperclipRuntimeSkillEntries(config, __moduleDir);
const desiredSkillNames = new Set(resolveClaudeDesiredSkillNames(config, claudeSkillEntries));
// When instructionsFilePath is configured, build a stable content-addressed
// file that includes both the file content and the path directive, so we only
// need --append-system-prompt-file (Claude CLI forbids using both flags together).
let combinedInstructionsContents: string | null = null;
if (instructionsFilePath) {
const skillsDir = await buildSkillsDir(config);
const runtimeSessionParams = parseObject(runtime.sessionParams);
const runtimeSessionId = asString(runtimeSessionParams.sessionId, runtime.sessionId ?? "");
const runtimeSessionCwd = asString(runtimeSessionParams.cwd, "");
const canResumeSession =
runtimeSessionId.length > 0 &&
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(cwd));
const sessionId = canResumeSession ? runtimeSessionId : null;
if (runtimeSessionId && !canResumeSession) {
await onLog(
"stdout",
`[paperclip] Claude session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${cwd}".\n`,
);
}
let effectiveInstructionsFilePath: string | undefined;
let preparedInstructionsFile = false;
const ensureEffectiveInstructionsFilePath = async (resumeSessionId: string | null) => {
if (resumeSessionId || !instructionsFilePath) return undefined;
if (preparedInstructionsFile) return effectiveInstructionsFilePath;
preparedInstructionsFile = true;
try {
const instructionsContent = await fs.readFile(instructionsFilePath, "utf-8");
const pathDirective =
`\nThe above agent instructions were loaded from ${instructionsFilePath}. ` +
`Resolve any relative file references from ${instructionsFileDir}. ` +
`This base directory is authoritative for sibling instruction files such as ` +
`./HEARTBEAT.md, ./SOUL.md, and ./TOOLS.md; do not resolve those from the parent agent directory.`;
combinedInstructionsContents = instructionsContent + pathDirective;
const pathDirective = `\nThe above agent instructions were loaded from ${instructionsFilePath}. Resolve any relative file references from ${instructionsFileDir}.`;
const combinedPath = path.join(skillsDir, "agent-instructions.md");
await fs.writeFile(combinedPath, instructionsContent + pathDirective, "utf-8");
effectiveInstructionsFilePath = combinedPath;
} catch (err) {
const reason = err instanceof Error ? err.message : String(err);
await onLog(
"stderr",
`[paperclip] Warning: could not read agent instructions file "${instructionsFilePath}": ${reason}\n`,
);
effectiveInstructionsFilePath = undefined;
}
}
const promptBundle = await prepareClaudePromptBundle({
companyId: agent.companyId,
skills: claudeSkillEntries.filter((entry) => desiredSkillNames.has(entry.key)),
instructionsContents: combinedInstructionsContents,
onLog,
});
const effectiveInstructionsFilePath = promptBundle.instructionsFilePath ?? undefined;
const runtimeSessionParams = parseObject(runtime.sessionParams);
const runtimeSessionId = asString(runtimeSessionParams.sessionId, runtime.sessionId ?? "");
const runtimeSessionCwd = asString(runtimeSessionParams.cwd, "");
const runtimePromptBundleKey = asString(runtimeSessionParams.promptBundleKey, "");
const hasMatchingPromptBundle =
runtimePromptBundleKey.length === 0 || runtimePromptBundleKey === promptBundle.bundleKey;
const canResumeSession =
runtimeSessionId.length > 0 &&
hasMatchingPromptBundle &&
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(cwd));
const sessionId = canResumeSession ? runtimeSessionId : null;
if (
runtimeSessionId &&
runtimeSessionCwd.length > 0 &&
path.resolve(runtimeSessionCwd) !== path.resolve(cwd)
) {
await onLog(
"stdout",
`[paperclip] Claude session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${cwd}".\n`,
);
}
if (runtimeSessionId && runtimePromptBundleKey.length > 0 && runtimePromptBundleKey !== promptBundle.bundleKey) {
await onLog(
"stdout",
`[paperclip] Claude session "${runtimeSessionId}" was saved for prompt bundle "${runtimePromptBundleKey}" and will not be resumed with "${promptBundle.bundleKey}".\n`,
);
}
return effectiveInstructionsFilePath;
};
const bootstrapPromptTemplate = asString(config.bootstrapPromptTemplate, "");
const templateData = {
agentId: agent.id,
@@ -447,7 +456,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (attemptInstructionsFilePath && !resumeSessionId) {
args.push("--append-system-prompt-file", attemptInstructionsFilePath);
}
args.push("--add-dir", promptBundle.addDir);
args.push("--add-dir", skillsDir);
if (extraArgs.length > 0) args.push(...extraArgs);
return args;
};
@@ -469,17 +478,14 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
};
const runAttempt = async (resumeSessionId: string | null) => {
const attemptInstructionsFilePath = resumeSessionId ? undefined : effectiveInstructionsFilePath;
const attemptInstructionsFilePath = await ensureEffectiveInstructionsFilePath(resumeSessionId);
const args = buildClaudeArgs(resumeSessionId, attemptInstructionsFilePath);
const commandNotes: string[] = [];
if (!resumeSessionId) {
commandNotes.push(`Using stable Claude prompt bundle ${promptBundle.bundleKey}.`);
}
if (attemptInstructionsFilePath && !resumeSessionId) {
commandNotes.push(
`Injected agent instructions via --append-system-prompt-file ${instructionsFilePath} (with path directive appended)`,
);
}
const commandNotes =
attemptInstructionsFilePath && !resumeSessionId
? [
`Injected agent instructions via --append-system-prompt-file ${instructionsFilePath} (with path directive appended)`,
]
: [];
if (onMeta) {
await onMeta({
adapterType: "claude_local",
@@ -576,7 +582,6 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
? ({
sessionId: resolvedSessionId,
cwd,
promptBundleKey: promptBundle.bundleKey,
...(workspaceId ? { workspaceId } : {}),
...(workspaceRepoUrl ? { repoUrl: workspaceRepoUrl } : {}),
...(workspaceRepoRef ? { repoRef: workspaceRepoRef } : {}),
@@ -609,21 +614,25 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
};
};
const initial = await runAttempt(sessionId ?? null);
if (
sessionId &&
!initial.proc.timedOut &&
(initial.proc.exitCode ?? 0) !== 0 &&
initial.parsed &&
isClaudeUnknownSessionError(initial.parsed)
) {
await onLog(
"stdout",
`[paperclip] Claude resume session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toAdapterResult(retry, { fallbackSessionId: null, clearSessionOnMissingSession: true });
}
try {
const initial = await runAttempt(sessionId ?? null);
if (
sessionId &&
!initial.proc.timedOut &&
(initial.proc.exitCode ?? 0) !== 0 &&
initial.parsed &&
isClaudeUnknownSessionError(initial.parsed)
) {
await onLog(
"stdout",
`[paperclip] Claude resume session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toAdapterResult(retry, { fallbackSessionId: null, clearSessionOnMissingSession: true });
}
return toAdapterResult(initial, { fallbackSessionId: runtimeSessionId || runtime.sessionId });
return toAdapterResult(initial, { fallbackSessionId: runtimeSessionId || runtime.sessionId });
} finally {
fs.rm(skillsDir, { recursive: true, force: true }).catch(() => {});
}
}
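The session-resume gating in the hunk above (non-empty session id, matching prompt bundle key, same resolved cwd) can be sketched as a standalone predicate. `canResume` is a hypothetical reduction for illustration, not the adapter's actual API:

```typescript
import path from "node:path";

// Hypothetical standalone version of the resume gate above: a saved session
// is only resumed when its id is present, its prompt bundle key matches the
// current bundle (or was never recorded), and its saved cwd resolves to the
// same directory the run is about to use.
function canResume(
  saved: { sessionId: string; promptBundleKey?: string; cwd?: string },
  current: { bundleKey: string; cwd: string },
): boolean {
  const bundleOk =
    !saved.promptBundleKey || saved.promptBundleKey === current.bundleKey;
  const cwdOk =
    !saved.cwd || path.resolve(saved.cwd) === path.resolve(current.cwd);
  return saved.sessionId.length > 0 && bundleOk && cwdOk;
}
```

A session saved against a different bundle key or working directory fails the gate and the run starts fresh, which matches the log messages emitted above.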


@@ -36,16 +36,12 @@ export const sessionCodec: AdapterSessionCodec = {
readNonEmptyString(record.cwd) ??
readNonEmptyString(record.workdir) ??
readNonEmptyString(record.folder);
const promptBundleKey =
readNonEmptyString(record.promptBundleKey) ??
readNonEmptyString(record.prompt_bundle_key);
const workspaceId = readNonEmptyString(record.workspaceId) ?? readNonEmptyString(record.workspace_id);
const repoUrl = readNonEmptyString(record.repoUrl) ?? readNonEmptyString(record.repo_url);
const repoRef = readNonEmptyString(record.repoRef) ?? readNonEmptyString(record.repo_ref);
return {
sessionId,
...(cwd ? { cwd } : {}),
...(promptBundleKey ? { promptBundleKey } : {}),
...(workspaceId ? { workspaceId } : {}),
...(repoUrl ? { repoUrl } : {}),
...(repoRef ? { repoRef } : {}),
@@ -59,16 +55,12 @@ export const sessionCodec: AdapterSessionCodec = {
readNonEmptyString(params.cwd) ??
readNonEmptyString(params.workdir) ??
readNonEmptyString(params.folder);
const promptBundleKey =
readNonEmptyString(params.promptBundleKey) ??
readNonEmptyString(params.prompt_bundle_key);
const workspaceId = readNonEmptyString(params.workspaceId) ?? readNonEmptyString(params.workspace_id);
const repoUrl = readNonEmptyString(params.repoUrl) ?? readNonEmptyString(params.repo_url);
const repoRef = readNonEmptyString(params.repoRef) ?? readNonEmptyString(params.repo_ref);
return {
sessionId,
...(cwd ? { cwd } : {}),
...(promptBundleKey ? { promptBundleKey } : {}),
...(workspaceId ? { workspaceId } : {}),
...(repoUrl ? { repoUrl } : {}),
...(repoRef ? { repoRef } : {}),
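The session codec above reads each field through a camelCase/snake_case fallback chain and drops empty strings. A minimal sketch of that pattern, with `readNonEmptyString` reimplemented here for the example (the real helper lives in the adapter utils):

```typescript
// Treat non-strings and whitespace-only strings as absent.
function readNonEmptyString(value: unknown): string | undefined {
  return typeof value === "string" && value.trim().length > 0
    ? value.trim()
    : undefined;
}

// Accept either key spelling, preferring camelCase, as the codec does above.
function readWorkspaceId(record: Record<string, unknown>): string | undefined {
  return (
    readNonEmptyString(record.workspaceId) ??
    readNonEmptyString(record.workspace_id)
  );
}
```

The `??` chain means a present-but-empty camelCase value falls through to the snake_case spelling rather than shadowing it.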


@@ -1,172 +0,0 @@
import { constants as fsConstants } from "node:fs";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { createHash, type Hash } from "node:crypto";
import type { AdapterExecutionContext } from "@paperclipai/adapter-utils";
import { ensurePaperclipSkillSymlink, type PaperclipSkillEntry } from "@paperclipai/adapter-utils/server-utils";
const DEFAULT_PAPERCLIP_INSTANCE_ID = "default";
type SkillEntry = PaperclipSkillEntry;
export interface ClaudePromptBundle {
bundleKey: string;
rootDir: string;
addDir: string;
instructionsFilePath: string | null;
}
function nonEmpty(value: string | undefined): string | null {
return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}
function resolveManagedClaudePromptCacheRoot(
env: NodeJS.ProcessEnv,
companyId: string,
): string {
const paperclipHome = nonEmpty(env.PAPERCLIP_HOME) ?? path.resolve(os.homedir(), ".paperclip");
const instanceId = nonEmpty(env.PAPERCLIP_INSTANCE_ID) ?? DEFAULT_PAPERCLIP_INSTANCE_ID;
return path.resolve(
paperclipHome,
"instances",
instanceId,
"companies",
companyId,
"claude-prompt-cache",
);
}
async function hashPathContents(
candidate: string,
hash: Hash,
relativePath: string,
seenDirectories: Set<string>,
): Promise<void> {
const stat = await fs.lstat(candidate);
if (stat.isSymbolicLink()) {
hash.update(`symlink:${relativePath}\n`);
const resolved = await fs.realpath(candidate).catch(() => null);
if (!resolved) {
hash.update("missing\n");
return;
}
await hashPathContents(resolved, hash, relativePath, seenDirectories);
return;
}
if (stat.isDirectory()) {
const realDir = await fs.realpath(candidate).catch(() => candidate);
hash.update(`dir:${relativePath}\n`);
if (seenDirectories.has(realDir)) {
hash.update("loop\n");
return;
}
seenDirectories.add(realDir);
const entries = await fs.readdir(candidate, { withFileTypes: true });
entries.sort((left, right) => left.name.localeCompare(right.name));
for (const entry of entries) {
const childRelativePath = relativePath.length > 0 ? `${relativePath}/${entry.name}` : entry.name;
await hashPathContents(path.join(candidate, entry.name), hash, childRelativePath, seenDirectories);
}
return;
}
if (stat.isFile()) {
hash.update(`file:${relativePath}\n`);
hash.update(await fs.readFile(candidate));
hash.update("\n");
return;
}
hash.update(`other:${relativePath}:${stat.mode}\n`);
}
async function buildClaudePromptBundleKey(input: {
skills: SkillEntry[];
instructionsContents: string | null;
}): Promise<string> {
const hash = createHash("sha256");
hash.update("paperclip-claude-prompt-bundle:v1\n");
if (input.instructionsContents) {
hash.update("instructions\n");
hash.update(input.instructionsContents);
hash.update("\n");
} else {
hash.update("instructions:none\n");
}
const sortedSkills = [...input.skills].sort((left, right) => left.runtimeName.localeCompare(right.runtimeName));
for (const entry of sortedSkills) {
hash.update(`skill:${entry.key}:${entry.runtimeName}\n`);
await hashPathContents(entry.source, hash, entry.runtimeName, new Set<string>());
}
return hash.digest("hex");
}
async function ensureReadableFile(targetPath: string, contents: string): Promise<void> {
try {
await fs.access(targetPath, fsConstants.R_OK);
return;
} catch {
// Fall through and materialize the file.
}
await fs.mkdir(path.dirname(targetPath), { recursive: true });
const tempPath = `${targetPath}.${process.pid}.${Date.now()}.tmp`;
try {
await fs.writeFile(tempPath, contents, "utf8");
await fs.rename(tempPath, targetPath);
} catch (err) {
const targetReadable = await fs.access(targetPath, fsConstants.R_OK).then(() => true).catch(() => false);
if (!targetReadable) {
throw err;
}
} finally {
await fs.rm(tempPath, { force: true }).catch(() => {});
}
}
export async function prepareClaudePromptBundle(input: {
companyId: string;
skills: SkillEntry[];
instructionsContents: string | null;
onLog: AdapterExecutionContext["onLog"];
}): Promise<ClaudePromptBundle> {
const { companyId, skills, instructionsContents, onLog } = input;
const bundleKey = await buildClaudePromptBundleKey({
skills,
instructionsContents,
});
const rootDir = path.join(resolveManagedClaudePromptCacheRoot(process.env, companyId), bundleKey);
const skillsHome = path.join(rootDir, ".claude", "skills");
await fs.mkdir(skillsHome, { recursive: true });
for (const entry of skills) {
const target = path.join(skillsHome, entry.runtimeName);
try {
await ensurePaperclipSkillSymlink(entry.source, target);
} catch (err) {
await onLog(
"stderr",
`[paperclip] Failed to materialize Claude skill "${entry.key}" into ${skillsHome}: ${err instanceof Error ? err.message : String(err)}\n`,
);
}
}
const instructionsFilePath = instructionsContents
? path.join(rootDir, "agent-instructions.md")
: null;
if (instructionsFilePath && instructionsContents) {
await ensureReadableFile(instructionsFilePath, instructionsContents);
}
return {
bundleKey,
rootDir,
addDir: rootDir,
instructionsFilePath,
};
}
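The file above derives the cache directory name from a SHA-256 over a version tag, the instructions, and the skills sorted by runtime name. A reduced sketch of that derivation, where each skill is a simple name/body pair instead of a hashed directory tree as in the real code:

```typescript
import { createHash } from "node:crypto";

// Reduced sketch of the bundle-key derivation above: hash a version tag,
// the instructions (or a marker for their absence), and the skills sorted
// by runtime name, so identical inputs always map to the same cache key
// regardless of the order skills were supplied in.
function bundleKey(
  instructions: string | null,
  skills: Array<{ runtimeName: string; body: string }>,
): string {
  const hash = createHash("sha256");
  hash.update("prompt-bundle-sketch:v1\n");
  hash.update(
    instructions ? `instructions\n${instructions}\n` : "instructions:none\n",
  );
  const sorted = [...skills].sort((a, b) =>
    a.runtimeName.localeCompare(b.runtimeName),
  );
  for (const s of sorted) {
    hash.update(`skill:${s.runtimeName}\n${s.body}\n`);
  }
  return hash.digest("hex");
}
```

Sorting before hashing is what makes the key order-independent; the field-name prefixes keep distinct inputs from colliding after concatenation.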


@@ -187,15 +187,13 @@ function formatExtraUsageLabel(extraUsage: AnthropicExtraUsage): string | null {
) {
return null;
}
// API returns values in cents — convert to dollars for display
return `${formatCurrencyAmount(usedCredits / 100, extraUsage.currency)} / ${formatCurrencyAmount(monthlyLimit / 100, extraUsage.currency)}`;
return `${formatCurrencyAmount(usedCredits, extraUsage.currency)} / ${formatCurrencyAmount(monthlyLimit, extraUsage.currency)}`;
}
/** Convert a utilization value to a 0-100 integer percent. Returns null for null/undefined input.
* Handles both 0-1 fractions (legacy) and 0-100 percentages (current API). */
/** Convert a 0-1 utilization fraction to a 0-100 integer percent. Returns null for null/undefined input. */
export function toPercent(utilization: number | null | undefined): number | null {
if (utilization == null) return null;
return Math.min(100, Math.round(utilization < 1 ? utilization * 100 : utilization));
return Math.min(100, Math.round(utilization * 100));
}
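The hunk above swaps a tolerant `toPercent` (accepting both 0-1 fractions and 0-100 percentages) for a strict fraction-only one. The tolerant variant, reproduced standalone, makes its trade-off visible:

```typescript
// Tolerant variant from the hunk above: values below 1 are read as 0-1
// fractions, values at or above 1 as already-scaled percentages, capped at
// 100. Note the inherent ambiguity at exactly 1, which is read as 1%, not
// 100% -- likely why the strict fraction-only version replaced it.
function toPercentTolerant(
  utilization: number | null | undefined,
): number | null {
  if (utilization == null) return null;
  return Math.min(
    100,
    Math.round(utilization < 1 ? utilization * 100 : utilization),
  );
}
```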
/** fetch with an abort-based timeout so a hanging provider api doesn't block the response indefinitely */


@@ -47,7 +47,7 @@ async function buildClaudeSkillSnapshot(config: Record<string, unknown>): Promis
sourcePath: entry.source,
targetPath: null,
detail: desiredSet.has(entry.key)
? "Will be materialized into the stable Paperclip-managed Claude prompt bundle on the next run."
? "Will be mounted into the ephemeral Claude skill directory on the next run."
: null,
required: Boolean(entry.required),
requiredReason: entry.requiredReason ?? null,


@@ -2,14 +2,6 @@ export const type = "codex_local";
export const label = "Codex (local)";
export const DEFAULT_CODEX_LOCAL_MODEL = "gpt-5.3-codex";
export const DEFAULT_CODEX_LOCAL_BYPASS_APPROVALS_AND_SANDBOX = true;
export const CODEX_LOCAL_FAST_MODE_SUPPORTED_MODELS = ["gpt-5.4"] as const;
export function isCodexLocalFastModeSupported(model: string | null | undefined): boolean {
const normalizedModel = typeof model === "string" ? model.trim() : "";
return CODEX_LOCAL_FAST_MODE_SUPPORTED_MODELS.includes(
normalizedModel as (typeof CODEX_LOCAL_FAST_MODE_SUPPORTED_MODELS)[number],
);
}
export const models = [
{ id: "gpt-5.4", label: "gpt-5.4" },
@@ -35,7 +27,6 @@ Core fields:
- modelReasoningEffort (string, optional): reasoning effort override (minimal|low|medium|high|xhigh) passed via -c model_reasoning_effort=...
- promptTemplate (string, optional): run prompt template
- search (boolean, optional): run codex with --search
- fastMode (boolean, optional): enable Codex Fast mode; currently supported on GPT-5.4 only and consumes credits faster
- dangerouslyBypassApprovalsAndSandbox (boolean, optional): run with bypass flag
- command (string, optional): defaults to "codex"
- extraArgs (string[], optional): additional CLI args
@@ -54,6 +45,5 @@ Notes:
- Paperclip injects desired local skills into the effective CODEX_HOME/skills/ directory at execution time so Codex can discover "$paperclip" and related skills without polluting the project working directory. In managed-home mode (the default) this is ~/.paperclip/instances/<id>/companies/<companyId>/codex-home/skills/; when CODEX_HOME is explicitly overridden in adapter config, that override is used instead.
- Unless explicitly overridden in adapter config, Paperclip runs Codex with a per-company managed CODEX_HOME under the active Paperclip instance and seeds auth/config from the shared Codex home (the CODEX_HOME env var, when set, or ~/.codex).
- Some model/tool combinations reject certain effort levels (for example minimal with web search enabled).
- Fast mode is currently supported on GPT-5.4 only. When enabled, Paperclip applies \`service_tier="fast"\` and \`features.fast_mode=true\`.
- When Paperclip realizes a workspace/runtime for a run, it injects PAPERCLIP_WORKSPACE_* and PAPERCLIP_RUNTIME_* env vars for agent-side tooling.
`;


@@ -1,46 +0,0 @@
import { describe, expect, it } from "vitest";
import { buildCodexExecArgs } from "./codex-args.js";
describe("buildCodexExecArgs", () => {
it("enables Codex fast mode overrides for GPT-5.4", () => {
const result = buildCodexExecArgs({
model: "gpt-5.4",
search: true,
fastMode: true,
});
expect(result.fastModeRequested).toBe(true);
expect(result.fastModeApplied).toBe(true);
expect(result.fastModeIgnoredReason).toBeNull();
expect(result.args).toEqual([
"--search",
"exec",
"--json",
"--model",
"gpt-5.4",
"-c",
'service_tier="fast"',
"-c",
"features.fast_mode=true",
"-",
]);
});
it("ignores fast mode for unsupported models", () => {
const result = buildCodexExecArgs({
model: "gpt-5.3-codex",
fastMode: true,
});
expect(result.fastModeRequested).toBe(true);
expect(result.fastModeApplied).toBe(false);
expect(result.fastModeIgnoredReason).toContain("currently only supported on gpt-5.4");
expect(result.args).toEqual([
"exec",
"--json",
"--model",
"gpt-5.3-codex",
"-",
]);
});
});


@@ -1,74 +0,0 @@
import { asBoolean, asString, asStringArray } from "@paperclipai/adapter-utils/server-utils";
import {
CODEX_LOCAL_FAST_MODE_SUPPORTED_MODELS,
isCodexLocalFastModeSupported,
} from "../index.js";
export type BuildCodexExecArgsResult = {
args: string[];
model: string;
fastModeRequested: boolean;
fastModeApplied: boolean;
fastModeIgnoredReason: string | null;
};
function readExtraArgs(config: unknown): string[] {
const fromExtraArgs = asStringArray(asRecord(config).extraArgs);
if (fromExtraArgs.length > 0) return fromExtraArgs;
return asStringArray(asRecord(config).args);
}
function asRecord(value: unknown): Record<string, unknown> {
return typeof value === "object" && value !== null && !Array.isArray(value)
? (value as Record<string, unknown>)
: {};
}
function formatFastModeSupportedModels(): string {
return CODEX_LOCAL_FAST_MODE_SUPPORTED_MODELS.join(", ");
}
export function buildCodexExecArgs(
config: unknown,
options: { resumeSessionId?: string | null } = {},
): BuildCodexExecArgsResult {
const record = asRecord(config);
const model = asString(record.model, "").trim();
const modelReasoningEffort = asString(
record.modelReasoningEffort,
asString(record.reasoningEffort, ""),
).trim();
const search = asBoolean(record.search, false);
const fastModeRequested = asBoolean(record.fastMode, false);
const fastModeApplied = fastModeRequested && isCodexLocalFastModeSupported(model);
const bypass = asBoolean(
record.dangerouslyBypassApprovalsAndSandbox,
asBoolean(record.dangerouslyBypassSandbox, false),
);
const extraArgs = readExtraArgs(record);
const args = ["exec", "--json"];
if (search) args.unshift("--search");
if (bypass) args.push("--dangerously-bypass-approvals-and-sandbox");
if (model) args.push("--model", model);
if (modelReasoningEffort) {
args.push("-c", `model_reasoning_effort=${JSON.stringify(modelReasoningEffort)}`);
}
if (fastModeApplied) {
args.push("-c", 'service_tier="fast"', "-c", "features.fast_mode=true");
}
if (extraArgs.length > 0) args.push(...extraArgs);
if (options.resumeSessionId) args.push("resume", options.resumeSessionId, "-");
else args.push("-");
return {
args,
model,
fastModeRequested,
fastModeApplied,
fastModeIgnoredReason:
fastModeRequested && !fastModeApplied
? `Configured fast mode is currently only supported on ${formatFastModeSupportedModels()}; Paperclip will ignore it for model ${model || "(default)"}.`
: null,
};
}


@@ -5,6 +5,8 @@ import { inferOpenAiCompatibleBiller, type AdapterExecutionContext, type Adapter
import {
asString,
asNumber,
asBoolean,
asStringArray,
parseObject,
buildPaperclipEnv,
buildInvocationEnvForLogs,
@@ -24,7 +26,6 @@ import {
import { parseCodexJsonl, isCodexUnknownSessionError } from "./parse.js";
import { pathExists, prepareManagedCodexHome, resolveManagedCodexHomeDir, resolveSharedCodexHomeDir } from "./codex-home.js";
import { resolveCodexDesiredSkillNames } from "./skills.js";
import { buildCodexExecArgs } from "./codex-args.js";
const __moduleDir = path.dirname(fileURLToPath(import.meta.url));
const CODEX_ROLLOUT_NOISE_RE =
@@ -222,6 +223,15 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
);
const command = asString(config.command, "codex");
const model = asString(config.model, "");
const modelReasoningEffort = asString(
config.modelReasoningEffort,
asString(config.reasoningEffort, ""),
);
const search = asBoolean(config.search, false);
const bypass = asBoolean(
config.dangerouslyBypassApprovalsAndSandbox,
asBoolean(config.dangerouslyBypassSandbox, false),
);
const workspaceContext = parseObject(context.paperclipWorkspace);
const workspaceCwd = asString(workspaceContext.cwd, "");
@@ -389,6 +399,11 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const timeoutSec = asNumber(config.timeoutSec, 0);
const graceSec = asNumber(config.graceSec, 20);
const extraArgs = (() => {
const fromExtraArgs = asStringArray(config.extraArgs);
if (fromExtraArgs.length > 0) return fromExtraArgs;
return asStringArray(config.args);
})();
const runtimeSessionParams = parseObject(runtime.sessionParams);
const runtimeSessionId = asString(runtimeSessionParams.sessionId, runtime.sessionId ?? "");
@@ -484,19 +499,26 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
heartbeatPromptChars: renderedPrompt.length,
};
const buildArgs = (resumeSessionId: string | null) => {
const args = ["exec", "--json"];
if (search) args.unshift("--search");
if (bypass) args.push("--dangerously-bypass-approvals-and-sandbox");
if (model) args.push("--model", model);
if (modelReasoningEffort) args.push("-c", `model_reasoning_effort=${JSON.stringify(modelReasoningEffort)}`);
if (extraArgs.length > 0) args.push(...extraArgs);
if (resumeSessionId) args.push("resume", resumeSessionId, "-");
else args.push("-");
return args;
};
const runAttempt = async (resumeSessionId: string | null) => {
const execArgs = buildCodexExecArgs(config, { resumeSessionId });
const args = execArgs.args;
const commandNotesWithFastMode =
execArgs.fastModeIgnoredReason == null
? commandNotes
: [...commandNotes, execArgs.fastModeIgnoredReason];
const args = buildArgs(resumeSessionId);
if (onMeta) {
await onMeta({
adapterType: "codex_local",
command: resolvedCommand,
cwd,
commandNotes: commandNotesWithFastMode,
commandNotes,
commandArgs: args.map((value, idx) => {
if (idx === args.length - 1 && value !== "-") return `<prompt ${prompt.length} chars>`;
return value;


@@ -27,39 +27,6 @@ describe("parseCodexJsonl", () => {
errorMessage: "resume failed",
});
});
it("uses the last agent message as the summary when commentary updates precede the final answer", () => {
const stdout = [
JSON.stringify({ type: "thread.started", thread_id: "thread_123" }),
JSON.stringify({
type: "item.completed",
item: { type: "reasoning", text: "Checking the heartbeat procedure" },
}),
JSON.stringify({
type: "item.completed",
item: { type: "agent_message", text: "I'm checking out the issue and reading the docs now." },
}),
JSON.stringify({
type: "item.completed",
item: { type: "agent_message", text: "Fixed the issue and verified the targeted tests pass." },
}),
JSON.stringify({
type: "turn.completed",
usage: { input_tokens: 10, cached_input_tokens: 2, output_tokens: 4 },
}),
].join("\n");
expect(parseCodexJsonl(stdout)).toEqual({
sessionId: "thread_123",
summary: "Fixed the issue and verified the targeted tests pass.",
usage: {
inputTokens: 10,
cachedInputTokens: 2,
outputTokens: 4,
},
errorMessage: null,
});
});
});
describe("isCodexUnknownSessionError", () => {


@@ -2,7 +2,7 @@ import { asString, asNumber, parseObject, parseJson } from "@paperclipai/adapter
export function parseCodexJsonl(stdout: string) {
let sessionId: string | null = null;
let finalMessage: string | null = null;
const messages: string[] = [];
let errorMessage: string | null = null;
const usage = {
inputTokens: 0,
@@ -33,7 +33,7 @@ export function parseCodexJsonl(stdout: string) {
const item = parseObject(event.item);
if (asString(item.type, "") === "agent_message") {
const text = asString(item.text, "");
if (text) finalMessage = text;
if (text) messages.push(text);
}
continue;
}
@@ -55,7 +55,7 @@ export function parseCodexJsonl(stdout: string) {
return {
sessionId,
summary: finalMessage?.trim() ?? "",
summary: messages.join("\n\n").trim(),
usage,
errorMessage,
};


@@ -5,6 +5,8 @@ import type {
} from "@paperclipai/adapter-utils";
import {
asString,
asBoolean,
asStringArray,
parseObject,
ensureAbsoluteDirectory,
ensureCommandResolvable,
@@ -14,7 +16,6 @@ import {
import path from "node:path";
import { parseCodexJsonl } from "./parse.js";
import { codexHomeDir, readCodexAuthInfo } from "./quota.js";
import { buildCodexExecArgs } from "./codex-args.js";
function summarizeStatus(checks: AdapterEnvironmentCheck[]): AdapterEnvironmentTestResult["status"] {
if (checks.some((check) => check.level === "error")) return "fail";
@@ -139,16 +140,31 @@ export async function testEnvironment(
hint: "Use the `codex` CLI command to run the automatic login and installation probe.",
});
} else {
const execArgs = buildCodexExecArgs({ ...config, fastMode: false });
const args = execArgs.args;
if (execArgs.fastModeIgnoredReason) {
checks.push({
code: "codex_fast_mode_unsupported_model",
level: "warn",
message: execArgs.fastModeIgnoredReason,
hint: "Switch the agent model to GPT-5.4 to enable Codex Fast mode.",
});
const model = asString(config.model, "").trim();
const modelReasoningEffort = asString(
config.modelReasoningEffort,
asString(config.reasoningEffort, ""),
).trim();
const search = asBoolean(config.search, false);
const bypass = asBoolean(
config.dangerouslyBypassApprovalsAndSandbox,
asBoolean(config.dangerouslyBypassSandbox, false),
);
const extraArgs = (() => {
const fromExtraArgs = asStringArray(config.extraArgs);
if (fromExtraArgs.length > 0) return fromExtraArgs;
return asStringArray(config.args);
})();
const args = ["exec", "--json"];
if (search) args.unshift("--search");
if (bypass) args.push("--dangerously-bypass-approvals-and-sandbox");
if (model) args.push("--model", model);
if (modelReasoningEffort) {
args.push("-c", `model_reasoning_effort=${JSON.stringify(modelReasoningEffort)}`);
}
if (extraArgs.length > 0) args.push(...extraArgs);
args.push("-");
const probe = await runChildProcess(
`codex-envtest-${Date.now()}-${Math.random().toString(16).slice(2)}`,


@@ -1,54 +0,0 @@
import { describe, expect, it } from "vitest";
import { buildCodexLocalConfig } from "./build-config.js";
import type { CreateConfigValues } from "@paperclipai/adapter-utils";
function makeValues(overrides: Partial<CreateConfigValues> = {}): CreateConfigValues {
return {
adapterType: "codex_local",
cwd: "",
instructionsFilePath: "",
promptTemplate: "",
model: "gpt-5.4",
thinkingEffort: "",
chrome: false,
dangerouslySkipPermissions: true,
search: false,
fastMode: false,
dangerouslyBypassSandbox: true,
command: "",
args: "",
extraArgs: "",
envVars: "",
envBindings: {},
url: "",
bootstrapPrompt: "",
payloadTemplateJson: "",
workspaceStrategyType: "project_primary",
workspaceBaseRef: "",
workspaceBranchTemplate: "",
worktreeParentDir: "",
runtimeServicesJson: "",
maxTurnsPerRun: 1000,
heartbeatEnabled: false,
intervalSec: 300,
...overrides,
};
}
describe("buildCodexLocalConfig", () => {
it("persists the fastMode toggle into adapter config", () => {
const config = buildCodexLocalConfig(
makeValues({
search: true,
fastMode: true,
}),
);
expect(config).toMatchObject({
model: "gpt-5.4",
search: true,
fastMode: true,
dangerouslyBypassApprovalsAndSandbox: true,
});
});
});


@@ -85,7 +85,6 @@ export function buildCodexLocalConfig(v: CreateConfigValues): Record<string, unk
}
if (Object.keys(env).length > 0) ac.env = env;
ac.search = v.search;
ac.fastMode = v.fastMode;
ac.dangerouslyBypassApprovalsAndSandbox =
typeof v.dangerouslyBypassSandbox === "boolean"
? v.dangerouslyBypassSandbox


@@ -66,7 +66,7 @@ OPENCLAW_RESET_STATE=1 OPENCLAW_BUILD=1 ./scripts/smoke/openclaw-docker-ui.sh
### 1) Start Paperclip
```bash
pnpm dev --bind lan
pnpm dev --tailscale-auth
curl -fsS http://127.0.0.1:3100/api/health
```


@@ -125,12 +125,12 @@ describeEmbeddedPostgres("runDatabaseBackup", () => {
const result = await runDatabaseBackup({
connectionString: sourceConnectionString,
backupDir,
retention: { dailyDays: 7, weeklyWeeks: 4, monthlyMonths: 1 },
retentionDays: 7,
filenamePrefix: "paperclip-test",
});
expect(result.backupFile).toMatch(/paperclip-test-.*\.sql\.gz$/);
expect(result.sizeBytes).toBeGreaterThan(0);
expect(result.backupFile).toMatch(/paperclip-test-.*\.sql$/);
expect(result.sizeBytes).toBeGreaterThan(1024 * 1024);
expect(fs.existsSync(result.backupFile)).toBe(true);
await runDatabaseRestore({


@@ -1,20 +1,12 @@
import { createReadStream, createWriteStream, existsSync, mkdirSync, readdirSync, statSync, unlinkSync } from "node:fs";
import { basename, resolve } from "node:path";
import { createInterface } from "node:readline";
import { pipeline } from "node:stream/promises";
import { createGunzip, createGzip } from "node:zlib";
import postgres from "postgres";
export type BackupRetentionPolicy = {
dailyDays: number;
weeklyWeeks: number;
monthlyMonths: number;
};
export type RunDatabaseBackupOptions = {
connectionString: string;
backupDir: string;
retention: BackupRetentionPolicy;
retentionDays: number;
filenamePrefix?: string;
connectTimeoutSeconds?: number;
includeMigrationJournal?: boolean;
@@ -83,91 +75,23 @@ function timestamp(date: Date = new Date()): string {
return `${date.getFullYear()}${pad(date.getMonth() + 1)}${pad(date.getDate())}-${pad(date.getHours())}${pad(date.getMinutes())}${pad(date.getSeconds())}`;
}
/**
* ISO week key for grouping backups by calendar week (ISO 8601).
*/
function isoWeekKey(date: Date): string {
const d = new Date(Date.UTC(date.getFullYear(), date.getMonth(), date.getDate()));
d.setUTCDate(d.getUTCDate() + 4 - (d.getUTCDay() || 7));
const yearStart = new Date(Date.UTC(d.getUTCFullYear(), 0, 1));
const weekNo = Math.ceil(((d.getTime() - yearStart.getTime()) / 86400000 + 1) / 7);
return `${d.getUTCFullYear()}-W${String(weekNo).padStart(2, "0")}`;
}
function monthKey(date: Date): string {
return `${date.getFullYear()}-${String(date.getMonth() + 1).padStart(2, "0")}`;
}
/**
* Tiered backup pruning:
* - Daily tier: keep ALL backups from the last `dailyDays` days
* - Weekly tier: keep the NEWEST backup per calendar week for `weeklyWeeks` weeks
* - Monthly tier: keep the NEWEST backup per calendar month for `monthlyMonths` months
* - Everything else is deleted
*/
function pruneOldBackups(backupDir: string, retention: BackupRetentionPolicy, filenamePrefix: string): number {
function pruneOldBackups(backupDir: string, retentionDays: number, filenamePrefix: string): number {
if (!existsSync(backupDir)) return 0;
const now = Date.now();
const dailyCutoff = now - Math.max(1, retention.dailyDays) * 24 * 60 * 60 * 1000;
const weeklyCutoff = now - Math.max(1, retention.weeklyWeeks) * 7 * 24 * 60 * 60 * 1000;
const monthlyCutoff = now - Math.max(1, retention.monthlyMonths) * 30 * 24 * 60 * 60 * 1000;
type BackupEntry = { name: string; fullPath: string; mtimeMs: number };
const entries: BackupEntry[] = [];
const safeRetention = Math.max(1, Math.trunc(retentionDays));
const cutoff = Date.now() - safeRetention * 24 * 60 * 60 * 1000;
let pruned = 0;
for (const name of readdirSync(backupDir)) {
if (!name.startsWith(`${filenamePrefix}-`)) continue;
if (!name.endsWith(".sql") && !name.endsWith(".sql.gz")) continue;
if (!name.startsWith(`${filenamePrefix}-`) || !name.endsWith(".sql")) continue;
const fullPath = resolve(backupDir, name);
const stat = statSync(fullPath);
entries.push({ name, fullPath, mtimeMs: stat.mtimeMs });
}
// Sort newest first so the first entry per week/month bucket is the one we keep
entries.sort((a, b) => b.mtimeMs - a.mtimeMs);
const keepWeekBuckets = new Set<string>();
const keepMonthBuckets = new Set<string>();
const toDelete: string[] = [];
for (const entry of entries) {
// Daily tier — keep everything within dailyDays
if (entry.mtimeMs >= dailyCutoff) continue;
const date = new Date(entry.mtimeMs);
const week = isoWeekKey(date);
const month = monthKey(date);
// Weekly tier — keep newest per calendar week
if (entry.mtimeMs >= weeklyCutoff) {
if (keepWeekBuckets.has(week)) {
toDelete.push(entry.fullPath);
} else {
keepWeekBuckets.add(week);
}
continue;
if (stat.mtimeMs < cutoff) {
unlinkSync(fullPath);
pruned++;
}
// Monthly tier — keep newest per calendar month
if (entry.mtimeMs >= monthlyCutoff) {
if (keepMonthBuckets.has(month)) {
toDelete.push(entry.fullPath);
} else {
keepMonthBuckets.add(month);
}
continue;
}
// Beyond all retention tiers — delete
toDelete.push(entry.fullPath);
}
for (const filePath of toDelete) {
unlinkSync(filePath);
}
return toDelete.length;
return pruned;
}
function formatBackupSize(sizeBytes: number): string {
@@ -224,9 +148,7 @@ function tableKey(schemaName: string, tableName: string): string {
}
async function* readRestoreStatements(backupFile: string): AsyncGenerator<string> {
const raw = createReadStream(backupFile);
const stream = backupFile.endsWith(".gz") ? raw.pipe(createGunzip()) : raw;
stream.setEncoding("utf8");
const stream = createReadStream(backupFile, { encoding: "utf8" });
const reader = createInterface({
input: stream,
crlfDelay: Infinity,
@@ -258,7 +180,6 @@ async function* readRestoreStatements(backupFile: string): AsyncGenerator<string
} finally {
reader.close();
stream.destroy();
raw.destroy();
}
}
@@ -360,16 +281,15 @@ export function createBufferedTextFileWriter(filePath: string, maxBufferedBytes
export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise<RunDatabaseBackupResult> {
const filenamePrefix = opts.filenamePrefix ?? "paperclip";
const retention = opts.retention;
const retentionDays = Math.max(1, Math.trunc(opts.retentionDays));
const connectTimeout = Math.max(1, Math.trunc(opts.connectTimeoutSeconds ?? 5));
const includeMigrationJournal = opts.includeMigrationJournal === true;
const excludedTableNames = normalizeTableNameSet(opts.excludeTables);
const nullifiedColumnsByTable = normalizeNullifyColumnMap(opts.nullifyColumns);
const sql = postgres(opts.connectionString, { max: 1, connect_timeout: connectTimeout });
mkdirSync(opts.backupDir, { recursive: true });
const sqlFile = resolve(opts.backupDir, `${filenamePrefix}-${timestamp()}.sql`);
const backupFile = `${sqlFile}.gz`;
const writer = createBufferedTextFileWriter(sqlFile);
const backupFile = resolve(opts.backupDir, `${filenamePrefix}-${timestamp()}.sql`);
const writer = createBufferedTextFileWriter(backupFile);
try {
await sql`SELECT 1`;
@@ -744,14 +664,8 @@ export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise
await writer.close();
// Compress the SQL file with gzip
const sqlReadStream = createReadStream(sqlFile);
const gzWriteStream = createWriteStream(backupFile);
await pipeline(sqlReadStream, createGzip(), gzWriteStream);
unlinkSync(sqlFile);
const sizeBytes = statSync(backupFile).size;
const prunedCount = pruneOldBackups(opts.backupDir, retention, filenamePrefix);
const prunedCount = pruneOldBackups(opts.backupDir, retentionDays, filenamePrefix);
return {
backupFile,
@@ -760,12 +674,6 @@ export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise
};
} catch (error) {
await writer.abort();
if (existsSync(backupFile)) {
try { unlinkSync(backupFile); } catch { /* ignore */ }
}
if (existsSync(sqlFile)) {
try { unlinkSync(sqlFile); } catch { /* ignore */ }
}
throw error;
} finally {
await sql.end();

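The backup runner above writes a timestamped SQL file, gzips it through a stream pipeline, deletes the intermediate file, and then prunes old backups against `retentionDays`. A minimal sketch of the pruning selection, assuming filenames embed an epoch-millisecond timestamp (the real `timestamp()` format may differ):

```typescript
// Sketch of retention pruning. Assumes backup filenames look like
// "paperclip-1775750400000.sql.gz" (epoch millis); this format is illustrative,
// not necessarily what timestamp() actually emits.
function selectBackupsToPrune(
  filenames: string[],
  prefix: string,
  retentionDays: number,
  nowMs: number,
): string[] {
  const maxAgeMs = retentionDays * 24 * 60 * 60 * 1000;
  const pattern = new RegExp(`^${prefix}-(\\d+)\\.sql\\.gz$`);
  return filenames.filter((name) => {
    const match = pattern.exec(name);
    if (!match) return false; // ignore files this tool did not create
    const createdMs = Number(match[1]);
    return nowMs - createdMs > maxAgeMs; // older than the retention window
  });
}
```

Keeping the selection pure (filenames in, filenames out) makes the cutoff logic testable without touching the filesystem; the caller then unlinks whatever is returned.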
View File

@@ -85,7 +85,7 @@ function resolveBackupDir(config: PartialConfig | null): string {
}
function resolveRetentionDays(config: PartialConfig | null): number {
return asPositiveInt(config?.database?.backup?.retentionDays) ?? 7;
return asPositiveInt(config?.database?.backup?.retentionDays) ?? 30;
}
async function main() {
@@ -103,7 +103,7 @@ async function main() {
const result = await runDatabaseBackup({
connectionString,
backupDir,
retention: { dailyDays: retentionDays, weeklyWeeks: 4, monthlyMonths: 1 },
retentionDays,
filenamePrefix: "paperclip",
});

View File

@@ -21,7 +21,6 @@ export {
runDatabaseBackup,
runDatabaseRestore,
formatDatabaseBackupResult,
type BackupRetentionPolicy,
type RunDatabaseBackupOptions,
type RunDatabaseBackupResult,
type RunDatabaseRestoreOptions,

View File

@@ -1 +0,0 @@
ALTER TABLE "heartbeat_runs" ADD COLUMN "process_group_id" integer;--> statement-breakpoint

View File

@@ -1,22 +0,0 @@
CREATE TABLE "company_user_sidebar_preferences" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"user_id" text NOT NULL,
"project_order" jsonb DEFAULT '[]'::jsonb NOT NULL,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
CREATE TABLE "user_sidebar_preferences" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"user_id" text NOT NULL,
"company_order" jsonb DEFAULT '[]'::jsonb NOT NULL,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
ALTER TABLE "company_user_sidebar_preferences" ADD CONSTRAINT "company_user_sidebar_preferences_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "company_user_sidebar_preferences_company_idx" ON "company_user_sidebar_preferences" USING btree ("company_id");--> statement-breakpoint
CREATE INDEX "company_user_sidebar_preferences_user_idx" ON "company_user_sidebar_preferences" USING btree ("user_id");--> statement-breakpoint
CREATE UNIQUE INDEX "company_user_sidebar_preferences_company_user_uq" ON "company_user_sidebar_preferences" USING btree ("company_id","user_id");--> statement-breakpoint
CREATE UNIQUE INDEX "user_sidebar_preferences_user_uq" ON "user_sidebar_preferences" USING btree ("user_id");

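The migration files above use drizzle-kit's `--> statement-breakpoint` comment marker to separate statements that must be executed individually. A minimal splitter sketch:

```typescript
// Split a drizzle-kit migration file into individual SQL statements.
// drizzle-kit joins statements with a "--> statement-breakpoint" marker,
// which a migration runner splits on before executing each piece.
function splitMigration(sqlText: string): string[] {
  return sqlText
    .split("--> statement-breakpoint")
    .map((statement) => statement.trim())
    .filter((statement) => statement.length > 0);
}
```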
File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -386,20 +386,6 @@
"when": 1775750400000,
"tag": "0054_draft_routines",
"breakpoints": true
},
{
"idx": 55,
"version": "7",
"when": 1775825256196,
"tag": "0055_kind_weapon_omega",
"breakpoints": true
},
{
"idx": 56,
"version": "7",
"when": 1776084034244,
"tag": "0056_spooky_ultragirl",
"breakpoints": true
}
]
}
}

View File

@@ -1,22 +0,0 @@
import { pgTable, uuid, text, timestamp, jsonb, uniqueIndex, index } from "drizzle-orm/pg-core";
import { companies } from "./companies.js";
export const companyUserSidebarPreferences = pgTable(
"company_user_sidebar_preferences",
{
id: uuid("id").primaryKey().defaultRandom(),
companyId: uuid("company_id").notNull().references(() => companies.id, { onDelete: "cascade" }),
userId: text("user_id").notNull(),
projectOrder: jsonb("project_order").$type<string[]>().notNull().default([]),
createdAt: timestamp("created_at", { withTimezone: true }).notNull().defaultNow(),
updatedAt: timestamp("updated_at", { withTimezone: true }).notNull().defaultNow(),
},
(table) => ({
companyIdx: index("company_user_sidebar_preferences_company_idx").on(table.companyId),
userIdx: index("company_user_sidebar_preferences_user_idx").on(table.userId),
companyUserUq: uniqueIndex("company_user_sidebar_preferences_company_user_uq").on(
table.companyId,
table.userId,
),
}),
);

View File

@@ -32,7 +32,6 @@ export const heartbeatRuns = pgTable(
errorCode: text("error_code"),
externalRunId: text("external_run_id"),
processPid: integer("process_pid"),
processGroupId: integer("process_group_id"),
processStartedAt: timestamp("process_started_at", { withTimezone: true }),
retryOfRunId: uuid("retry_of_run_id").references((): AnyPgColumn => heartbeatRuns.id, {
onDelete: "set null",

View File

@@ -3,12 +3,10 @@ export { companyLogos } from "./company_logos.js";
export { authUsers, authSessions, authAccounts, authVerifications } from "./auth.js";
export { instanceSettings } from "./instance_settings.js";
export { instanceUserRoles } from "./instance_user_roles.js";
export { userSidebarPreferences } from "./user_sidebar_preferences.js";
export { agents } from "./agents.js";
export { boardApiKeys } from "./board_api_keys.js";
export { cliAuthChallenges } from "./cli_auth_challenges.js";
export { companyMemberships } from "./company_memberships.js";
export { companyUserSidebarPreferences } from "./company_user_sidebar_preferences.js";
export { principalPermissionGrants } from "./principal_permission_grants.js";
export { invites } from "./invites.js";
export { joinRequests } from "./join_requests.js";

View File

@@ -1,15 +0,0 @@
import { pgTable, uuid, text, timestamp, jsonb, uniqueIndex } from "drizzle-orm/pg-core";
export const userSidebarPreferences = pgTable(
"user_sidebar_preferences",
{
id: uuid("id").primaryKey().defaultRandom(),
userId: text("user_id").notNull(),
companyOrder: jsonb("company_order").$type<string[]>().notNull().default([]),
createdAt: timestamp("created_at", { withTimezone: true }).notNull().defaultNow(),
updatedAt: timestamp("updated_at", { withTimezone: true }).notNull().defaultNow(),
},
(table) => ({
userUq: uniqueIndex("user_sidebar_preferences_user_uq").on(table.userId),
}),
);

View File

@@ -13,7 +13,6 @@ export const API = {
activity: `${API_PREFIX}/activity`,
dashboard: `${API_PREFIX}/dashboard`,
sidebarBadges: `${API_PREFIX}/sidebar-badges`,
sidebarPreferences: `${API_PREFIX}/sidebar-preferences`,
invites: `${API_PREFIX}/invites`,
joinRequests: `${API_PREFIX}/join-requests`,
members: `${API_PREFIX}/members`,

View File

@@ -1,13 +1,11 @@
import { z } from "zod";
import {
AUTH_BASE_URL_MODES,
BIND_MODES,
DEPLOYMENT_EXPOSURES,
DEPLOYMENT_MODES,
SECRET_PROVIDERS,
STORAGE_PROVIDERS,
} from "./constants.js";
import { validateConfiguredBindMode } from "./network-bind.js";
export const configMetaSchema = z.object({
version: z.literal(1),
@@ -23,7 +21,7 @@ export const llmConfigSchema = z.object({
export const databaseBackupConfigSchema = z.object({
enabled: z.boolean().default(true),
intervalMinutes: z.number().int().min(1).max(7 * 24 * 60).default(60),
retentionDays: z.number().int().min(1).max(3650).default(7),
retentionDays: z.number().int().min(1).max(3650).default(30),
dir: z.string().default("~/.paperclip/instances/default/data/backups"),
});
@@ -35,7 +33,7 @@ export const databaseConfigSchema = z.object({
backup: databaseBackupConfigSchema.default({
enabled: true,
intervalMinutes: 60,
retentionDays: 7,
retentionDays: 30,
dir: "~/.paperclip/instances/default/data/backups",
}),
});
@@ -48,8 +46,6 @@ export const loggingConfigSchema = z.object({
export const serverConfigSchema = z.object({
deploymentMode: z.enum(DEPLOYMENT_MODES).default("local_trusted"),
exposure: z.enum(DEPLOYMENT_EXPOSURES).default("private"),
bind: z.enum(BIND_MODES).optional(),
customBindHost: z.string().optional(),
host: z.string().default("127.0.0.1"),
port: z.number().int().min(1).max(65535).default(3100),
allowedHostnames: z.array(z.string().min(1)).default([]),
@@ -136,26 +132,15 @@ export const paperclipConfigSchema = z
}),
})
.superRefine((value, ctx) => {
if (value.server.deploymentMode === "local_trusted" && value.server.exposure !== "private") {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: "server.exposure must be private when deploymentMode is local_trusted",
path: ["server", "exposure"],
});
}
for (const message of validateConfiguredBindMode({
deploymentMode: value.server.deploymentMode,
deploymentExposure: value.server.exposure,
bind: value.server.bind,
host: value.server.host,
customBindHost: value.server.customBindHost,
})) {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message,
path: message.includes("customBindHost") ? ["server", "customBindHost"] : ["server", "bind"],
});
if (value.server.deploymentMode === "local_trusted") {
if (value.server.exposure !== "private") {
ctx.addIssue({
code: z.ZodIssueCode.custom,
message: "server.exposure must be private when deploymentMode is local_trusted",
path: ["server", "exposure"],
});
}
return;
}
if (value.auth.baseUrlMode === "explicit" && !value.auth.publicBaseUrl) {

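The `superRefine` change above enforces a cross-field invariant: a `local_trusted` deployment must keep `exposure` private. A plain-function sketch of that check, outside zod (names are illustrative):

```typescript
// Plain-function sketch of the cross-field invariant the zod superRefine above
// enforces: "local_trusted" deployments must stay private. Hypothetical shape,
// reduced to the two fields the check reads.
interface ServerConfigLike {
  deploymentMode: "local_trusted" | "authenticated";
  exposure: "private" | "public";
}

function validateServerConfig(server: ServerConfigLike): string[] {
  const errors: string[] = [];
  if (server.deploymentMode === "local_trusted" && server.exposure !== "private") {
    errors.push("server.exposure must be private when deploymentMode is local_trusted");
  }
  return errors;
}
```

Extracting the invariant into a pure function is one way to unit-test it without constructing a full config object for the schema.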
View File

@@ -7,9 +7,6 @@ export type DeploymentMode = (typeof DEPLOYMENT_MODES)[number];
export const DEPLOYMENT_EXPOSURES = ["private", "public"] as const;
export type DeploymentExposure = (typeof DEPLOYMENT_EXPOSURES)[number];
export const BIND_MODES = ["loopback", "lan", "tailnet", "custom"] as const;
export type BindMode = (typeof BIND_MODES)[number];
export const AUTH_BASE_URL_MODES = ["auto", "explicit"] as const;
export type AuthBaseUrlMode = (typeof AUTH_BASE_URL_MODES)[number];

View File

@@ -3,7 +3,6 @@ export {
COMPANY_STATUSES,
DEPLOYMENT_MODES,
DEPLOYMENT_EXPOSURES,
BIND_MODES,
AUTH_BASE_URL_MODES,
AGENT_STATUSES,
AGENT_ADAPTER_TYPES,
@@ -80,7 +79,6 @@ export {
type CompanyStatus,
type DeploymentMode,
type DeploymentExposure,
type BindMode,
type AuthBaseUrlMode,
type AgentStatus,
type AgentAdapterType,
@@ -151,16 +149,6 @@ export {
type PluginBridgeErrorCode,
} from "./constants.js";
export {
ALL_INTERFACES_BIND_HOST,
LOOPBACK_BIND_HOST,
inferBindModeFromHost,
isAllInterfacesHost,
isLoopbackHost,
resolveRuntimeBind,
validateConfiguredBindMode,
} from "./network-bind.js";
export type {
Company,
FeedbackVote,
@@ -201,7 +189,6 @@ export type {
InstanceExperimentalSettings,
InstanceGeneralSettings,
InstanceSettings,
BackupRetentionPolicy,
Agent,
AgentAccessState,
AgentChainOfCommandEntry,
@@ -232,11 +219,7 @@ export type {
ExecutionWorkspaceCloseReadiness,
ExecutionWorkspaceCloseReadinessState,
ProjectWorkspaceRuntimeConfig,
WorkspaceCommandDefinition,
WorkspaceCommandKind,
WorkspaceRuntimeControlTarget,
WorkspaceRuntimeService,
WorkspaceRuntimeServiceStateMap,
WorkspaceOperation,
WorkspaceOperationPhase,
WorkspaceOperationStatus,
@@ -305,7 +288,6 @@ export type {
DashboardSummary,
ActivityEvent,
SidebarBadges,
SidebarOrderPreference,
InboxDismissal,
CompanyMembership,
PrincipalPermissionGrant,
@@ -379,21 +361,6 @@ export type {
ProviderQuotaResult,
} from "./types/index.js";
export {
sidebarOrderPreferenceSchema,
upsertSidebarOrderPreferenceSchema,
type UpsertSidebarOrderPreference,
} from "./validators/sidebar-preferences.js";
export { workspaceRuntimeControlTargetSchema } from "./validators/execution-workspace.js";
export {
findWorkspaceCommandDefinition,
listWorkspaceCommandDefinitions,
listWorkspaceServiceCommandDefinitions,
matchWorkspaceRuntimeServiceToCommand,
scoreWorkspaceRuntimeServiceMatch,
} from "./workspace-commands.js";
export {
DEFAULT_FEEDBACK_DATA_SHARING_PREFERENCE,
FEEDBACK_TARGET_TYPES,
@@ -403,13 +370,6 @@ export {
DEFAULT_FEEDBACK_DATA_SHARING_TERMS_VERSION,
} from "./types/feedback.js";
export {
DAILY_RETENTION_PRESETS,
WEEKLY_RETENTION_PRESETS,
MONTHLY_RETENTION_PRESETS,
DEFAULT_BACKUP_RETENTION,
} from "./types/instance.js";
export {
getClosedIsolatedExecutionWorkspaceMessage,
isClosedIsolatedExecutionWorkspace,

View File

@@ -1,105 +0,0 @@
import type { BindMode, DeploymentExposure, DeploymentMode } from "./constants.js";
export const LOOPBACK_BIND_HOST = "127.0.0.1";
export const ALL_INTERFACES_BIND_HOST = "0.0.0.0";
function normalizeHost(host: string | null | undefined): string | undefined {
const trimmed = host?.trim();
return trimmed ? trimmed : undefined;
}
export function isLoopbackHost(host: string | null | undefined): boolean {
const normalized = normalizeHost(host)?.toLowerCase();
return normalized === "127.0.0.1" || normalized === "localhost" || normalized === "::1";
}
export function isAllInterfacesHost(host: string | null | undefined): boolean {
const normalized = normalizeHost(host)?.toLowerCase();
return normalized === "0.0.0.0" || normalized === "::";
}
export function inferBindModeFromHost(
host: string | null | undefined,
opts?: { tailnetBindHost?: string | null | undefined },
): BindMode {
const normalized = normalizeHost(host);
const tailnetBindHost = normalizeHost(opts?.tailnetBindHost);
if (!normalized || isLoopbackHost(normalized)) return "loopback";
if (isAllInterfacesHost(normalized)) return "lan";
if (tailnetBindHost && normalized === tailnetBindHost) return "tailnet";
return "custom";
}
export function validateConfiguredBindMode(input: {
deploymentMode: DeploymentMode;
deploymentExposure: DeploymentExposure;
bind?: BindMode | null | undefined;
host?: string | null | undefined;
customBindHost?: string | null | undefined;
}): string[] {
const bind = input.bind ?? inferBindModeFromHost(input.host);
const customBindHost = normalizeHost(input.customBindHost);
const errors: string[] = [];
if (input.deploymentMode === "local_trusted" && bind !== "loopback") {
errors.push("local_trusted requires server.bind=loopback");
}
if (bind === "custom" && !customBindHost) {
const legacyHost = normalizeHost(input.host);
if (!legacyHost || isLoopbackHost(legacyHost) || isAllInterfacesHost(legacyHost)) {
errors.push("server.customBindHost is required when server.bind=custom");
}
}
if (input.deploymentMode === "authenticated" && input.deploymentExposure === "public" && bind === "tailnet") {
errors.push("server.bind=tailnet is only supported for authenticated/private deployments");
}
return errors;
}
export function resolveRuntimeBind(input: {
bind?: BindMode | null | undefined;
host?: string | null | undefined;
customBindHost?: string | null | undefined;
tailnetBindHost?: string | null | undefined;
}): {
bind: BindMode;
host: string;
customBindHost?: string;
errors: string[];
} {
const bind = input.bind ?? inferBindModeFromHost(input.host, { tailnetBindHost: input.tailnetBindHost });
const legacyHost = normalizeHost(input.host);
const customBindHost =
normalizeHost(input.customBindHost) ??
(bind === "custom" && legacyHost && !isLoopbackHost(legacyHost) && !isAllInterfacesHost(legacyHost)
? legacyHost
: undefined);
switch (bind) {
case "loopback":
return { bind, host: LOOPBACK_BIND_HOST, customBindHost, errors: [] };
case "lan":
return { bind, host: ALL_INTERFACES_BIND_HOST, customBindHost, errors: [] };
case "custom":
return customBindHost
? { bind, host: customBindHost, customBindHost, errors: [] }
: { bind, host: legacyHost ?? LOOPBACK_BIND_HOST, errors: ["server.customBindHost is required when server.bind=custom"] };
case "tailnet": {
const tailnetBindHost = normalizeHost(input.tailnetBindHost);
return tailnetBindHost
? { bind, host: tailnetBindHost, customBindHost, errors: [] }
: {
bind,
host: legacyHost ?? LOOPBACK_BIND_HOST,
customBindHost,
errors: [
"server.bind=tailnet requires a detected Tailscale address or PAPERCLIP_TAILNET_BIND_HOST",
],
};
}
}
}

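The removed `network-bind` helpers classify a configured host into a bind mode. A condensed sketch of that classification, with the tailnet branch omitted for brevity:

```typescript
type BindMode = "loopback" | "lan" | "custom";

// Condensed sketch of the removed inferBindModeFromHost logic (tailnet handling
// omitted): loopback addresses map to "loopback", wildcard addresses to "lan",
// anything else to "custom". An empty or missing host defaults to loopback.
function inferBindMode(host: string | undefined): BindMode {
  const normalized = host?.trim().toLowerCase();
  if (!normalized || normalized === "127.0.0.1" || normalized === "localhost" || normalized === "::1") {
    return "loopback";
  }
  if (normalized === "0.0.0.0" || normalized === "::") {
    return "lan";
  }
  return "custom";
}
```

Defaulting an absent host to loopback is the safe choice here: the server never binds a wider interface than the operator explicitly asked for.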
View File

@@ -6,10 +6,7 @@ import type {
TelemetryState,
} from "./types.js";
const DEFAULT_ENDPOINTS = [
"https://telemetry.paperclip.ing/ingest",
"https://rusqrrg391.execute-api.us-east-1.amazonaws.com/ingest",
] as const;
const DEFAULT_ENDPOINT = "https://telemetry.paperclip.ing/ingest";
const BATCH_SIZE = 50;
const SEND_TIMEOUT_MS = 5_000;
@@ -47,35 +44,29 @@ export class TelemetryClient {
const events = this.queue.splice(0);
const state = this.getState();
const endpoints = this.resolveEndpoints();
const endpoint = this.config.endpoint ?? DEFAULT_ENDPOINT;
const app = this.config.app ?? "paperclip";
const schemaVersion = this.config.schemaVersion ?? "1";
const body = JSON.stringify({
app,
schemaVersion,
installId: state.installId,
version: this.version,
events,
});
for (const endpoint of endpoints) {
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), SEND_TIMEOUT_MS);
try {
const response = await fetch(endpoint, {
method: "POST",
headers: { "Content-Type": "application/json" },
body,
signal: controller.signal,
});
if (response.ok) {
return;
}
} catch {
// Try the next built-in endpoint before dropping the batch.
} finally {
clearTimeout(timer);
}
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), SEND_TIMEOUT_MS);
try {
await fetch(endpoint, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
app,
schemaVersion,
installId: state.installId,
version: this.version,
events,
}),
signal: controller.signal,
});
} catch {
// Fire-and-forget: silent failure, no retries
} finally {
clearTimeout(timer);
}
}
@@ -111,9 +102,4 @@ export class TelemetryClient {
}
return this.state;
}
private resolveEndpoints(): readonly string[] {
const configured = this.config.endpoint?.trim();
return configured ? [configured] : DEFAULT_ENDPOINTS;
}
}

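One side of the telemetry diff above sends a batch to a list of endpoints, stopping at the first success and dropping the batch if all fail. A simplified synchronous sketch with injected senders, so the control flow is testable without the network (the real client uses `fetch` with an `AbortController` timeout per attempt):

```typescript
// Synchronous sketch of first-success endpoint fallback. The sender is injected
// so the loop is testable offline; this is a simplification of the async
// fetch-based version in the diff above.
function sendWithFallback(
  endpoints: string[],
  send: (endpoint: string) => boolean,
): string | null {
  for (const endpoint of endpoints) {
    try {
      if (send(endpoint)) return endpoint; // first successful endpoint wins
    } catch {
      // Swallow and try the next endpoint, mirroring fire-and-forget telemetry.
    }
  }
  return null; // every endpoint failed; the batch is dropped
}
```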
View File

@@ -50,12 +50,9 @@ export function trackGoalCreated(
export function trackAgentCreated(
client: TelemetryClient,
dims: { agentRole: string; agentId?: string },
dims: { agentRole: string },
): void {
client.track("agent.created", {
agent_role: dims.agentRole,
...(dims.agentId ? { agent_id: dims.agentId } : {}),
});
client.track("agent.created", { agent_role: dims.agentRole });
}
export function trackSkillImported(
@@ -70,24 +67,16 @@ export function trackSkillImported(
export function trackAgentFirstHeartbeat(
client: TelemetryClient,
dims: { agentRole: string; agentId?: string },
dims: { agentRole: string },
): void {
client.track("agent.first_heartbeat", {
agent_role: dims.agentRole,
...(dims.agentId ? { agent_id: dims.agentId } : {}),
});
client.track("agent.first_heartbeat", { agent_role: dims.agentRole });
}
export function trackAgentTaskCompleted(
client: TelemetryClient,
dims: { agentRole: string; agentId?: string; adapterType?: string; model?: string },
dims: { agentRole: string },
): void {
client.track("agent.task_completed", {
agent_role: dims.agentRole,
...(dims.agentId ? { agent_id: dims.agentId } : {}),
...(dims.adapterType ? { adapter_type: dims.adapterType } : {}),
...(dims.model ? { model: dims.model } : {}),
});
client.track("agent.task_completed", { agent_role: dims.agentRole });
}
export function trackErrorHandlerCrash(

View File

@@ -34,7 +34,6 @@ export interface HeartbeatRun {
errorCode: string | null;
externalRunId: string | null;
processPid: number | null;
processGroupId?: number | null;
processStartedAt: Date | null;
retryOfRunId: string | null;
processLossRetryCount: number;

View File

@@ -11,8 +11,7 @@ export type {
FeedbackTraceBundleFile,
FeedbackTraceBundle,
} from "./feedback.js";
export type { InstanceExperimentalSettings, InstanceGeneralSettings, InstanceSettings, BackupRetentionPolicy } from "./instance.js";
export { DAILY_RETENTION_PRESETS, WEEKLY_RETENTION_PRESETS, MONTHLY_RETENTION_PRESETS, DEFAULT_BACKUP_RETENTION } from "./instance.js";
export type { InstanceExperimentalSettings, InstanceGeneralSettings, InstanceSettings } from "./instance.js";
export type {
CompanySkillSourceType,
CompanySkillTrustLevel,
@@ -71,11 +70,7 @@ export type {
ExecutionWorkspaceCloseReadiness,
ExecutionWorkspaceCloseReadinessState,
ProjectWorkspaceRuntimeConfig,
WorkspaceCommandDefinition,
WorkspaceCommandKind,
WorkspaceRuntimeControlTarget,
WorkspaceRuntimeService,
WorkspaceRuntimeServiceStateMap,
WorkspaceRuntimeDesiredState,
ExecutionWorkspaceStrategyType,
ExecutionWorkspaceMode,
@@ -169,7 +164,6 @@ export type { LiveEvent } from "./live.js";
export type { DashboardSummary } from "./dashboard.js";
export type { ActivityEvent } from "./activity.js";
export type { SidebarBadges } from "./sidebar-badges.js";
export type { SidebarOrderPreference } from "./sidebar-preferences.js";
export type { InboxDismissal } from "./inbox-dismissal.js";
export type {
CompanyMembership,

View File

@@ -1,26 +1,9 @@
import type { FeedbackDataSharingPreference } from "./feedback.js";
export const DAILY_RETENTION_PRESETS = [3, 7, 14] as const;
export const WEEKLY_RETENTION_PRESETS = [1, 2, 4] as const;
export const MONTHLY_RETENTION_PRESETS = [1, 3, 6] as const;
export interface BackupRetentionPolicy {
dailyDays: (typeof DAILY_RETENTION_PRESETS)[number];
weeklyWeeks: (typeof WEEKLY_RETENTION_PRESETS)[number];
monthlyMonths: (typeof MONTHLY_RETENTION_PRESETS)[number];
}
export const DEFAULT_BACKUP_RETENTION: BackupRetentionPolicy = {
dailyDays: 7,
weeklyWeeks: 4,
monthlyMonths: 1,
};
export interface InstanceGeneralSettings {
censorUsernameInLogs: boolean;
keyboardShortcuts: boolean;
feedbackDataSharingPreference: FeedbackDataSharingPreference;
backupRetention: BackupRetentionPolicy;
}
export interface InstanceExperimentalSettings {

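The retention presets removed above restrict each retention field to a fixed list of allowed values, and the validators file builds a zod refinement from each list. A plain sketch of that preset check and its error message:

```typescript
// Plain sketch of the preset check that backupRetentionPolicySchema builds with
// a zod refinement: a value is valid only if it appears in the preset list.
const DAILY_RETENTION_PRESETS = [3, 7, 14] as const;

function isPreset(presets: readonly number[], value: number): boolean {
  return presets.includes(value);
}

function presetError(presets: readonly number[], label: string): string {
  return `${label} must be one of: ${presets.join(", ")}`;
}
```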
View File

@@ -129,7 +129,7 @@ export interface RoutineExecutionIssueOrigin {
}
export interface RoutineListItem extends Routine {
triggers: Pick<RoutineTrigger, "id" | "kind" | "label" | "enabled" | "cronExpression" | "timezone" | "nextRunAt" | "lastFiredAt" | "lastResult">[];
triggers: Pick<RoutineTrigger, "id" | "kind" | "label" | "enabled" | "nextRunAt" | "lastFiredAt" | "lastResult">[];
lastRun: RoutineRunSummary | null;
activeIssue: RoutineIssueSummary | null;
}

View File

@@ -1,4 +0,0 @@
export interface SidebarOrderPreference {
orderedIds: string[];
updatedAt: Date | null;
}

View File

@@ -46,27 +46,6 @@ export type ExecutionWorkspaceCloseActionKind =
| "remove_local_directory";
export type WorkspaceRuntimeDesiredState = "running" | "stopped";
export type WorkspaceRuntimeServiceStateMap = Record<string, WorkspaceRuntimeDesiredState>;
export type WorkspaceCommandKind = "service" | "job";
export interface WorkspaceCommandSource {
type: "paperclip";
key: "commands" | "services" | "jobs";
index: number;
}
export interface WorkspaceCommandDefinition {
id: string;
name: string;
kind: WorkspaceCommandKind;
command: string | null;
cwd: string | null;
lifecycle: "shared" | "ephemeral" | null;
serviceIndex: number | null;
disabledReason: string | null;
rawConfig: Record<string, unknown>;
source: WorkspaceCommandSource;
}
export interface ExecutionWorkspaceStrategy {
type: ExecutionWorkspaceStrategyType;
@@ -83,19 +62,11 @@ export interface ExecutionWorkspaceConfig {
cleanupCommand: string | null;
workspaceRuntime: Record<string, unknown> | null;
desiredState: WorkspaceRuntimeDesiredState | null;
serviceStates?: WorkspaceRuntimeServiceStateMap | null;
}
export interface ProjectWorkspaceRuntimeConfig {
workspaceRuntime: Record<string, unknown> | null;
desiredState: WorkspaceRuntimeDesiredState | null;
serviceStates?: WorkspaceRuntimeServiceStateMap | null;
}
export interface WorkspaceRuntimeControlTarget {
workspaceCommandId?: string | null;
runtimeServiceId?: string | null;
serviceIndex?: number | null;
}
export interface ExecutionWorkspaceCloseAction {
@@ -216,7 +187,6 @@ export interface WorkspaceRuntimeService {
stoppedAt: Date | null;
stopPolicy: Record<string, unknown> | null;
healthStatus: "unknown" | "healthy" | "unhealthy";
configIndex?: number | null;
createdAt: Date;
updatedAt: Date;
}

View File

@@ -12,12 +12,14 @@ export type CreateApproval = z.infer<typeof createApprovalSchema>;
export const resolveApprovalSchema = z.object({
decisionNote: z.string().optional().nullable(),
decidedByUserId: z.string().optional().default("board"),
});
export type ResolveApproval = z.infer<typeof resolveApprovalSchema>;
export const requestApprovalRevisionSchema = z.object({
decisionNote: z.string().optional().nullable(),
decidedByUserId: z.string().optional().default("board"),
});
export type RequestApprovalRevision = z.infer<typeof requestApprovalRevisionSchema>;

View File

@@ -14,13 +14,6 @@ export const executionWorkspaceConfigSchema = z.object({
cleanupCommand: z.string().optional().nullable(),
workspaceRuntime: z.record(z.unknown()).optional().nullable(),
desiredState: z.enum(["running", "stopped"]).optional().nullable(),
serviceStates: z.record(z.enum(["running", "stopped"])).optional().nullable(),
}).strict();
export const workspaceRuntimeControlTargetSchema = z.object({
workspaceCommandId: z.string().min(1).optional().nullable(),
runtimeServiceId: z.string().uuid().optional().nullable(),
serviceIndex: z.number().int().nonnegative().optional().nullable(),
}).strict();
export const executionWorkspaceCloseReadinessStateSchema = z.enum([
@@ -95,7 +88,6 @@ export const workspaceRuntimeServiceSchema = z.object({
stoppedAt: z.coerce.date().nullable(),
stopPolicy: z.record(z.unknown()).nullable(),
healthStatus: z.enum(["unknown", "healthy", "unhealthy"]),
configIndex: z.number().int().nonnegative().nullable().optional(),
createdAt: z.coerce.date(),
updatedAt: z.coerce.date(),
}).strict();

View File

@@ -32,11 +32,6 @@ export {
upsertIssueFeedbackVoteSchema,
type UpsertIssueFeedbackVote,
} from "./feedback.js";
export {
sidebarOrderPreferenceSchema,
upsertSidebarOrderPreferenceSchema,
type UpsertSidebarOrderPreference,
} from "./sidebar-preferences.js";
export {
companySkillSourceTypeSchema,
companySkillTrustLevelSchema,

View File

@@ -1,33 +1,13 @@
import { z } from "zod";
import { DEFAULT_FEEDBACK_DATA_SHARING_PREFERENCE } from "../types/feedback.js";
import {
DAILY_RETENTION_PRESETS,
WEEKLY_RETENTION_PRESETS,
MONTHLY_RETENTION_PRESETS,
DEFAULT_BACKUP_RETENTION,
} from "../types/instance.js";
import { feedbackDataSharingPreferenceSchema } from "./feedback.js";
function presetSchema<T extends readonly number[]>(presets: T, label: string) {
return z.number().refine(
(v): v is T[number] => (presets as readonly number[]).includes(v),
{ message: `${label} must be one of: ${presets.join(", ")}` },
);
}
export const backupRetentionPolicySchema = z.object({
dailyDays: presetSchema(DAILY_RETENTION_PRESETS, "dailyDays").default(DEFAULT_BACKUP_RETENTION.dailyDays),
weeklyWeeks: presetSchema(WEEKLY_RETENTION_PRESETS, "weeklyWeeks").default(DEFAULT_BACKUP_RETENTION.weeklyWeeks),
monthlyMonths: presetSchema(MONTHLY_RETENTION_PRESETS, "monthlyMonths").default(DEFAULT_BACKUP_RETENTION.monthlyMonths),
});
export const instanceGeneralSettingsSchema = z.object({
censorUsernameInLogs: z.boolean().default(false),
keyboardShortcuts: z.boolean().default(false),
feedbackDataSharingPreference: feedbackDataSharingPreferenceSchema.default(
DEFAULT_FEEDBACK_DATA_SHARING_PREFERENCE,
),
backupRetention: backupRetentionPolicySchema.default(DEFAULT_BACKUP_RETENTION),
}).strict();
export const patchInstanceGeneralSettingsSchema = instanceGeneralSettingsSchema.partial();

View File

@@ -146,7 +146,6 @@ export const createIssueLabelSchema = z.object({
export type CreateIssueLabel = z.infer<typeof createIssueLabelSchema>;
export const updateIssueSchema = createIssueSchema.partial().extend({
assigneeAgentId: z.string().trim().min(1).optional().nullable(),
comment: z.string().min(1).optional(),
reopen: z.boolean().optional(),
interrupt: z.boolean().optional(),

View File

@@ -31,7 +31,6 @@ export const projectExecutionWorkspacePolicySchema = z
export const projectWorkspaceRuntimeConfigSchema = z.object({
workspaceRuntime: z.record(z.unknown()).optional().nullable(),
desiredState: z.enum(["running", "stopped"]).optional().nullable(),
serviceStates: z.record(z.enum(["running", "stopped"])).optional().nullable(),
}).strict();
const projectWorkspaceSourceTypeSchema = z.enum(["local_path", "git_repo", "remote_managed", "non_git_path"]);

View File

@@ -1,14 +0,0 @@
import { z } from "zod";
const sidebarOrderedIdSchema = z.string().uuid();
export const sidebarOrderPreferenceSchema = z.object({
orderedIds: z.array(sidebarOrderedIdSchema),
updatedAt: z.coerce.date().nullable(),
});
export const upsertSidebarOrderPreferenceSchema = z.object({
orderedIds: z.array(sidebarOrderedIdSchema),
});
export type UpsertSidebarOrderPreference = z.infer<typeof upsertSidebarOrderPreferenceSchema>;

View File

@@ -1,56 +0,0 @@
import { describe, expect, it } from "vitest";
import {
findWorkspaceCommandDefinition,
listWorkspaceCommandDefinitions,
matchWorkspaceRuntimeServiceToCommand,
} from "./workspace-commands.js";
describe("workspace command helpers", () => {
it("derives service and job commands from command-first runtime config", () => {
const commands = listWorkspaceCommandDefinitions({
commands: [
{ id: "web", name: "web", kind: "service", command: "pnpm dev" },
{ id: "db-migrate", name: "db:migrate", kind: "job", command: "pnpm db:migrate" },
],
});
expect(commands).toEqual([
expect.objectContaining({ id: "web", kind: "service", serviceIndex: 0 }),
expect.objectContaining({ id: "db-migrate", kind: "job", serviceIndex: null }),
]);
});
it("falls back to legacy services and jobs arrays", () => {
const commands = listWorkspaceCommandDefinitions({
services: [{ name: "web", command: "pnpm dev" }],
jobs: [{ name: "lint", command: "pnpm lint" }],
});
expect(commands).toEqual([
expect.objectContaining({ id: "service:web", kind: "service", serviceIndex: 0 }),
expect.objectContaining({ id: "job:lint", kind: "job", serviceIndex: null }),
]);
});
it("matches a configured service command to the current runtime service", () => {
const workspaceRuntime = {
commands: [
{ id: "web", name: "web", kind: "service", command: "pnpm dev", cwd: "." },
],
};
const command = findWorkspaceCommandDefinition(workspaceRuntime, "web");
expect(command).not.toBeNull();
const match = matchWorkspaceRuntimeServiceToCommand(command!, [
{
id: "runtime-web",
serviceName: "web",
command: "pnpm dev",
cwd: "/repo",
configIndex: null,
},
]);
expect(match).toEqual(expect.objectContaining({ id: "runtime-web" }));
});
});

View File

@@ -1,204 +0,0 @@
import type { WorkspaceCommandDefinition, WorkspaceRuntimeService } from "./types/workspace-runtime.js";
function isRecord(value: unknown): value is Record<string, unknown> {
return typeof value === "object" && value !== null && !Array.isArray(value);
}
function readNonEmptyString(value: unknown): string | null {
if (typeof value !== "string") return null;
const trimmed = value.trim();
return trimmed.length > 0 ? trimmed : null;
}
function slugify(value: string | null | undefined) {
const normalized = (value ?? "")
.trim()
.toLowerCase()
.replace(/[^a-z0-9]+/g, "-")
.replace(/-+/g, "-")
.replace(/^-+|-+$/g, "");
return normalized.length > 0 ? normalized : null;
}
function deriveWorkspaceCommandId(input: {
kind: WorkspaceCommandDefinition["kind"];
explicitId: string | null;
name: string;
index: number;
}) {
const explicitId = slugify(input.explicitId);
if (explicitId) return explicitId;
const nameSlug = slugify(input.name);
return nameSlug ? `${input.kind}:${nameSlug}` : `${input.kind}:${input.index + 1}`;
}
function buildWorkspaceCommandDefinition(input: {
entry: Record<string, unknown>;
kind: WorkspaceCommandDefinition["kind"];
sourceKey: WorkspaceCommandDefinition["source"]["key"];
sourceIndex: number;
serviceIndex: number | null;
fallbackName: string;
}): WorkspaceCommandDefinition {
return {
id: deriveWorkspaceCommandId({
kind: input.kind,
explicitId: readNonEmptyString(input.entry.id),
name:
readNonEmptyString(input.entry.name)
?? readNonEmptyString(input.entry.label)
?? readNonEmptyString(input.entry.title)
?? input.fallbackName,
index: input.sourceIndex,
}),
name:
readNonEmptyString(input.entry.name)
?? readNonEmptyString(input.entry.label)
?? readNonEmptyString(input.entry.title)
?? input.fallbackName,
kind: input.kind,
command: readNonEmptyString(input.entry.command),
cwd: readNonEmptyString(input.entry.cwd),
lifecycle:
input.kind === "service"
? input.entry.lifecycle === "ephemeral"
? "ephemeral"
: "shared"
: null,
serviceIndex: input.serviceIndex,
disabledReason: readNonEmptyString(input.entry.disabledReason),
rawConfig: { ...input.entry },
source: {
type: "paperclip",
key: input.sourceKey,
index: input.sourceIndex,
},
};
}
// Deduplicate derived command ids; on a collision, suffix the id with its
// source key and 1-based index so every definition stays addressable.
function uniqueWorkspaceCommandId(
seen: Set<string>,
commandId: string,
sourceKey: WorkspaceCommandDefinition["source"]["key"],
sourceIndex: number,
) {
if (!seen.has(commandId)) {
seen.add(commandId);
return commandId;
}
const fallbackId = `${commandId}-${sourceKey}-${sourceIndex + 1}`;
seen.add(fallbackId);
return fallbackId;
}
function readCommandEntries(
workspaceRuntime: Record<string, unknown> | null | undefined,
key: "commands" | "services" | "jobs",
) {
const raw = workspaceRuntime?.[key];
return Array.isArray(raw) ? raw.filter((entry): entry is Record<string, unknown> => isRecord(entry)) : [];
}
export function listWorkspaceCommandDefinitions(
workspaceRuntime: Record<string, unknown> | null | undefined,
): WorkspaceCommandDefinition[] {
if (!workspaceRuntime) return [];
const commandEntries = readCommandEntries(workspaceRuntime, "commands");
const seenIds = new Set<string>();
let nextServiceIndex = 0;
const finalize = (command: WorkspaceCommandDefinition) => ({
...command,
id: uniqueWorkspaceCommandId(seenIds, command.id, command.source.key, command.source.index),
});
// An explicit `commands` list takes precedence over the legacy
// `services`/`jobs` keys handled below.
if (commandEntries.length > 0) {
return commandEntries.map((entry, index) =>
finalize(buildWorkspaceCommandDefinition({
entry,
kind: entry.kind === "job" ? "job" : "service",
sourceKey: "commands",
sourceIndex: index,
serviceIndex: entry.kind === "job" ? null : nextServiceIndex++,
fallbackName: entry.kind === "job" ? `Job ${index + 1}` : `Service ${index + 1}`,
})));
}
const serviceDefinitions = readCommandEntries(workspaceRuntime, "services").map((entry, index) =>
finalize(buildWorkspaceCommandDefinition({
entry,
kind: "service",
sourceKey: "services",
sourceIndex: index,
serviceIndex: nextServiceIndex++,
fallbackName: `Service ${index + 1}`,
})));
const jobDefinitions = readCommandEntries(workspaceRuntime, "jobs").map((entry, index) =>
finalize(buildWorkspaceCommandDefinition({
entry,
kind: "job",
sourceKey: "jobs",
sourceIndex: index,
serviceIndex: null,
fallbackName: `Job ${index + 1}`,
})));
return [...serviceDefinitions, ...jobDefinitions];
}
export function listWorkspaceServiceCommandDefinitions(
workspaceRuntime: Record<string, unknown> | null | undefined,
) {
return listWorkspaceCommandDefinitions(workspaceRuntime).filter((command) => command.kind === "service");
}
export function findWorkspaceCommandDefinition(
workspaceRuntime: Record<string, unknown> | null | undefined,
workspaceCommandId: string | null | undefined,
) {
const normalizedId = readNonEmptyString(workspaceCommandId);
if (!normalizedId) return null;
return listWorkspaceCommandDefinitions(workspaceRuntime).find((command) => command.id === normalizedId) ?? null;
}
// Score how closely a configured command matches a live runtime service.
// A configIndex pairing is authoritative (100 on match, -1 on mismatch);
// otherwise name (+4), command (+4), and cwd-suffix (+2) matches accumulate.
export function scoreWorkspaceRuntimeServiceMatch(
command: Pick<WorkspaceCommandDefinition, "serviceIndex" | "name" | "command" | "cwd">,
runtimeService: Pick<WorkspaceRuntimeService, "configIndex" | "serviceName" | "command" | "cwd">,
) {
if (command.serviceIndex !== null && runtimeService.configIndex !== null && runtimeService.configIndex !== undefined) {
return runtimeService.configIndex === command.serviceIndex ? 100 : -1;
}
let score = 0;
if (runtimeService.serviceName === command.name) score += 4;
if ((runtimeService.command ?? null) === (command.command ?? null)) score += 4;
if (
command.cwd
&& runtimeService.cwd
&& (runtimeService.cwd === command.cwd || runtimeService.cwd.endsWith(`/${command.cwd}`))
) {
score += 2;
}
return score;
}
// Return the highest-scoring runtime service, requiring a strictly positive
// score so weak candidates and authoritative mismatches are rejected.
export function matchWorkspaceRuntimeServiceToCommand<
T extends Pick<WorkspaceRuntimeService, "configIndex" | "serviceName" | "command" | "cwd">,
>(
command: Pick<WorkspaceCommandDefinition, "serviceIndex" | "name" | "command" | "cwd">,
runtimeServices: T[] | null | undefined,
) {
let bestMatch: T | null = null;
let bestScore = -1;
for (const runtimeService of runtimeServices ?? []) {
const score = scoreWorkspaceRuntimeServiceMatch(command, runtimeService);
if (score > bestScore) {
bestMatch = runtimeService;
bestScore = score;
}
}
return bestScore > 0 ? bestMatch : null;
}
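For reference, the id-derivation rules in `slugify` and `deriveWorkspaceCommandId` above behave like this standalone restatement (logic mirrored here only so the examples can run on their own):

```typescript
// Standalone restatement of the id-derivation rules: slug a usable name,
// else fall back to a 1-based positional id prefixed with the command kind.
function slugifySketch(value: string): string | null {
  const normalized = value
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/-+/g, "-")
    .replace(/^-+|-+$/g, "");
  return normalized.length > 0 ? normalized : null;
}

function deriveIdSketch(kind: "service" | "job", name: string, index: number): string {
  const slug = slugifySketch(name);
  return slug ? `${kind}:${slug}` : `${kind}:${index + 1}`;
}
// deriveIdSketch("service", "API Server!", 0) yields "service:api-server";
// an unsluggable name like "***" at index 2 falls back to "job:3".
```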

View File

@@ -1,50 +0,0 @@
# v2026.410.0
> Released: 2026-04-13
## Security
- **Authorization hardening (GHSA-68qg-g8mg-6pr7)** — Import, approval, activity, and heartbeat API routes now enforce proper authorization checks. Previously, certain administrative endpoints were accessible without adequate permission verification. All users are strongly encouraged to upgrade. ([#3315](https://github.com/cryppadotta/paperclip/pull/3315))
- **Removed hardcoded JWT secret fallback** — The `createBetterAuthInstance` function no longer falls back to a hardcoded JWT secret, closing a credential-hygiene gap.
- **Redact Bearer tokens in logs** — Server log output now redacts Bearer tokens to prevent accidental credential exposure. ([#2659](https://github.com/cryppadotta/paperclip/pull/2659))
- **Dependency bumps** — Updated `multer` to 2.1.1 (HIGH CVEs) and `rollup` to 4.59.0 (path-traversal CVE). ([#2819](https://github.com/cryppadotta/paperclip/pull/2819))
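The token-redaction behavior can be illustrated with a minimal sketch (the helper name and exact pattern here are assumptions, not the shipped implementation):

```typescript
// Minimal sketch of Bearer-token redaction for log lines. Hypothetical
// helper; the shipped filter in this release may differ.
const BEARER_TOKEN_PATTERN = /\b(Bearer\s+)[A-Za-z0-9\-._~+/]+=*/g;

function redactBearerTokens(line: string): string {
  return line.replace(BEARER_TOKEN_PATTERN, "$1[REDACTED]");
}
```

A log line passing through such a filter keeps its shape while the credential itself is replaced.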
## Highlights
- **Issue-to-issue navigation** — Faster navigation between issues with scroll reset, prefetch, and detail-view optimizations. ([#3542](https://github.com/cryppadotta/paperclip/pull/3542))
- **Auto-checkout for scoped wakes** — Agent harness now automatically checks out the scoped issue on comment-driven wakes, reducing latency for agent heartbeats. ([#3538](https://github.com/cryppadotta/paperclip/pull/3538))
- **Inbox parent-child nesting** — Issues in the Mine inbox can now be grouped by parent, with a toggle and keyboard-traversable nested rows.
- **Keyboard shortcut cheatsheet** — Press `?` to see all available keyboard shortcuts in a dialog.
- **Issue search in inbox** — Broadened comment matching for inbox issue search with fallback.
- **Codex fast mode** — Added fast mode support for `codex_local` adapters with env probe safeguards.
- **Backups with retention** — Gzip-compressed database backups with tiered daily/weekly/monthly retention and UI controls in Instance Settings.
- **AWS Bedrock auth** — Added AWS Bedrock authentication support on `claude-local` adapters. ([#2793](https://github.com/cryppadotta/paperclip/pull/2793))
## Improvements
- **Issue detail stability** — Faster comment loading, reduced rerenders on interrupted runs, stable transcript rendering for non-succeeded runs.
- **Execution workspaces** — Fixed linked worktree reuse, dev runner isolation, workspace import regressions, and workspace preflight through server toolchain.
- **Agent runtime** — Hardened heartbeat and adapter runtime workflows, scoped-wake fast path skips full heartbeat on comment wakes, signoff stage access fixes.
- **Execution policy** — Fixed non-participant stage mutation rejection, decision persistence, and signoff PR follow-up flows.
- **Chat UX polish** — Shimmer animation improvements, image gallery in chat messages, inline comment composer, Working/Worked status tokens.
- **Inbox refinements** — Avoid refetching on filter-only changes, archive shortcut fix, badge fixture alignment, nesting column alignment.
- **Typing performance** — Fixed typing lag in long comment threads. ([#3163](https://github.com/cryppadotta/paperclip/pull/3163))
- **Issue list grouping** — Added workspace and parent issue grouping to the issues list view.
- **Worktree tooling** — Improved worktree helpers, bind presets for deployment setup, tailnet bind hardening.
- **Plugin SDK** — Plugin SDK now prepares before CLI dev boot. ([#3343](https://github.com/cryppadotta/paperclip/pull/3343))
## Fixes
- **Agent env bindings** — Cleared agent env bindings now persist correctly on save.
- **Comment editor sync** — Hardened issue comment editor synchronization.
- **Document revisions** — Latest issue document revision stays current in the UI. ([#3342](https://github.com/cryppadotta/paperclip/pull/3342))
- **Claude instructions** — Fixed instruction sibling path hints, gate file I/O to fresh sessions only, skip `--append-system-prompt-file` on resumed sessions.
- **Codex transcript** — Fixed Codex tool-use transcript completion parsing.
- **Backup cleanup** — Orphaned `.sql` files cleaned up on compression failure; stale startup log fixed.
- **Chat layout** — Fixed avatar positioning, activity line alignment, comment alignment, and feedback panel closing.
## Upgrade Guide
Multiple database migrations will run automatically on startup. All migrations are additive — no existing data is modified.
**Security:** This release addresses [GHSA-68qg-g8mg-6pr7](https://github.com/cryppadotta/paperclip/security/advisories/GHSA-68qg-g8mg-6pr7). All deployments should upgrade as soon as possible.

View File

@@ -1,98 +0,0 @@
# v2026.413.0
> Released: 2026-04-13
## Highlights
- **Issue chat thread** — Replaced the classic comment timeline with a full chat-style thread powered by assistant-ui. Agent run transcripts, chain-of-thought, and user messages now render inline as a continuous conversation with polished avatars, action bars, and relative timestamps. ([#3079](https://github.com/paperclipai/paperclip/pull/3079))
- **External adapter plugin system** — Third-party adapters can now be installed as npm packages or loaded from local directories. Plugins declare a config schema and an optional UI transcript parser; built-in adapters can be overridden by external ones. Includes Hermes local session management and provider/model display in run details. ([#2649](https://github.com/paperclipai/paperclip/pull/2649), [#2650](https://github.com/paperclipai/paperclip/pull/2650), [#2651](https://github.com/paperclipai/paperclip/pull/2651), [#2654](https://github.com/paperclipai/paperclip/pull/2654), [#2655](https://github.com/paperclipai/paperclip/pull/2655), [#2659](https://github.com/paperclipai/paperclip/pull/2659), @plind-dm)
- **Execution policies** — Issues can now carry a review/approval execution policy with multi-stage signoff workflows. Reviewers and approvers are selected per-stage, and Paperclip routes the issue through each stage automatically. ([#3222](https://github.com/paperclipai/paperclip/pull/3222))
- **Blocker dependencies** — First-class issue blocker relations with automatic wake-on-dependency-resolved. Set `blockedByIssueIds` on any issue and Paperclip wakes the assignee when all blockers reach `done`. ([#2797](https://github.com/paperclipai/paperclip/pull/2797))
- **Standalone MCP server** — New `@paperclipai/mcp-server` package exposing the Paperclip API as an MCP tool server, including approval creation. ([#2435](https://github.com/paperclipai/paperclip/pull/2435))
## Improvements
- **Board approvals** — Generic issue-linked board approvals with card styling and visibility improvements in the issue detail sidebar. ([#3220](https://github.com/paperclipai/paperclip/pull/3220))
- **Inbox parent-child nesting** — Parent issues group their children in the inbox Mine view with a toggle button, j/k keyboard traversal across nested items, and collapsible groups. ([#2218](https://github.com/paperclipai/paperclip/pull/2218), @HenkDz)
- **Inbox workspace grouping** — Issues can now be grouped by workspace in the inbox with collapsible mobile groups and shared column controls across inbox and issues lists. ([#3356](https://github.com/paperclipai/paperclip/pull/3356))
- **Issue search** — Trigram-indexed full-text search across titles, identifiers, descriptions, and comments with debounced input. Comment matches now surface in search results. ([#2999](https://github.com/paperclipai/paperclip/pull/2999))
- **Sub-issues inline** — Sub-issues moved from a separate tab to inline display on the issue detail, with parent-inherited workspace defaults and assignee propagation. ([#3355](https://github.com/paperclipai/paperclip/pull/3355))
- **Document revision diff viewer** — Side-by-side diff viewer for issue document revisions with improved modal layout. ([#2792](https://github.com/paperclipai/paperclip/pull/2792))
- **Keyboard shortcuts cheatsheet** — Press `?` to open a keyboard shortcut reference dialog; new `g i` (go to inbox), `g c` (comment composer), and inbox archive undo shortcuts. ([#2772](https://github.com/paperclipai/paperclip/pull/2772))
- **Bedrock model selection** — Claude local adapter now supports AWS Bedrock authentication and model selection. ([#3033](https://github.com/paperclipai/paperclip/pull/3033), @kimnamu)
- **Codex fast mode** — Added fast mode support for the Codex local adapter. ([#3383](https://github.com/paperclipai/paperclip/pull/3383))
- **Backup improvements** — Gzip-compressed backups with tiered daily/weekly/monthly retention and UI controls in Instance Settings. ([#3015](https://github.com/paperclipai/paperclip/pull/3015), @aronprins)
- **GitHub webhook signing modes** — Added `github_hmac` and `none` webhook signing modes with timing-safe HMAC comparison. ([#1961](https://github.com/paperclipai/paperclip/pull/1961), @antonio-mello-ai)
- **Project environment variables** — Projects can now define environment variables that are inherited by workspace runs.
- **Routine improvements** — Draft routine defaults, run-time overrides, routine title variables, and relaxed project/agent requirements for routines. ([#3220](https://github.com/paperclipai/paperclip/pull/3220))
- **Workspace runtime controls** — Start/stop controls, runtime state reconciliation, runtime service JSON textarea improvements, and workspace branch/folder display in the issue properties sidebar. ([#3354](https://github.com/paperclipai/paperclip/pull/3354))
- **Attachment improvements** — Arbitrary file attachments (not just images), drag-and-drop non-image files onto markdown editor, and square-cropped image gallery grid. ([#2749](https://github.com/paperclipai/paperclip/pull/2749))
- **Image gallery in chat** — Clicking images in chat messages now opens a full gallery viewer.
- **Mobile UX** — Gmail-inspired mobile top bar for inbox issue views, responsive execution workspace pages, mobile mention menu placement, and mobile comment copy button feedback.
- **Sidebar order persistence** — Sidebar project and company ordering preferences now persist per-user.
- **Skill auto-enable** — Mentioned skills are automatically enabled for heartbeat runs.
- **Comment wake batching** — Multiple comment wakes are batched into a single inline payload for more efficient agent heartbeats.
- **Server-side adapter pause/resume** — Builtin adapter types can now be paused/resumed from the server with `overridePaused`. ([#2542](https://github.com/paperclipai/paperclip/pull/2542), @plind-dm)
- **Skill slash-command autocomplete** — Skill names now autocomplete in the editor.
- **Worktree reseed command** — New CLI command to reseed worktrees from latest repo state. ([#3353](https://github.com/paperclipai/paperclip/pull/3353))
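The timing-safe comparison mentioned for the `github_hmac` signing mode can be sketched like this (illustrative only; the function name and header handling are assumptions, not the shipped code):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of timing-safe GitHub-style webhook verification: recompute the
// HMAC over the payload and compare with a constant-time check, never `===`.
function verifyGithubSignature(secret: string, payload: string, signatureHeader: string): boolean {
  const expected = "sha256=" + createHmac("sha256", secret).update(payload).digest("hex");
  const received = Buffer.from(signatureHeader);
  const computed = Buffer.from(expected);
  // timingSafeEqual throws on length mismatch, so guard the lengths first.
  return received.length === computed.length && timingSafeEqual(received, computed);
}
```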
## Fixes
- **Issue detail stability** — Fixed visible refreshes during agent updates, comment post resets, ref update loops, split regressions, and main-pane focus on navigation. ([#3355](https://github.com/paperclipai/paperclip/pull/3355))
- **Inbox badge count** — Badge now correctly counts only unread Mine issues. ([#2512](https://github.com/paperclipai/paperclip/pull/2512), @AllenHyang)
- **Inbox keyboard navigation** — Fixed j/k traversal across groups and nesting column alignment. ([#2218](https://github.com/paperclipai/paperclip/pull/2218), @HenkDz)
- **Vite HTML transforms** — Fixed repeated vite HTML transforms in dev mode.
- **Auth session lookup** — Skipped unnecessary auth session lookups on non-API requests.
- **Stale execution locks** — Fixed stale execution lock lifecycle with proper `executionAgentNameKey` clearing. ([#2643](https://github.com/paperclipai/paperclip/pull/2643), @chrisschwer)
- **Agent env bindings** — Fixed cleared agent env bindings not persisting on save. ([#3232](https://github.com/paperclipai/paperclip/pull/3232), @officialasishkumar)
- **Capabilities field** — Fixed blank screen when clearing the Capabilities field. ([#2442](https://github.com/paperclipai/paperclip/pull/2442), @sparkeros)
- **Skill deletion** — Company skills can now be deleted with an agent usage check. ([#2441](https://github.com/paperclipai/paperclip/pull/2441), @DanielSousa)
- **Claude session resume** — Fixed `--append-system-prompt-file` being sent on resumed Claude sessions and preserved instructions on resume fallback. ([#2949](https://github.com/paperclipai/paperclip/pull/2949), [#2936](https://github.com/paperclipai/paperclip/pull/2936), [#2937](https://github.com/paperclipai/paperclip/pull/2937), @Lempkey)
- **JWT secret fallback** — Removed hardcoded JWT secret fallback; auth now properly falls back to `BETTER_AUTH_SECRET`. ([#3124](https://github.com/paperclipai/paperclip/pull/3124), @cleanunicorn)
- **Agent auth JWT** — Fixed agent auth to fall back to `BETTER_AUTH_SECRET` when `PAPERCLIP_AGENT_JWT_SECRET` is absent. ([#2866](https://github.com/paperclipai/paperclip/pull/2866), @ergonaworks)
- **Typing lag** — Fixed typing lag in long comment threads. ([#3163](https://github.com/paperclipai/paperclip/pull/3163))
- **Infinite render loop** — Fixed infinite render loop in inbox mobile toolbar.
- **Shimmer animation** — Fixed shimmer text using invalid `hsl()` wrapper on `oklch` colors, loop jitter, and added pause between repeats.
- **Mention selection** — Restored touch mention selection and fixed spaced mention queries.
- **Inbox archive** — Fixed archive flashing back after fade-out.
- **Goal description** — Made goal description area scrollable in create dialog. ([#2148](https://github.com/paperclipai/paperclip/pull/2148), @shoaib050326)
- **Worktree provisioning** — Fixed symlink relinking, fallback seeding, dependency hydration, and validated linked worktrees before reuse. ([#3354](https://github.com/paperclipai/paperclip/pull/3354))
- **Node keepAliveTimeout** — Increased timeout behind reverse proxies to prevent 502 errors.
- **Noisy request logging** — Reduced noisy server request logging.
- **Codex tool-use transcripts** — Fixed Codex tool-use transcript completion parsing.
- **Codex resume error** — Recognize missing-rollout Codex resume error as stale session.
- **Pi quota exhaustion** — Treat Pi quota exhaustion as a failed run. ([#2305](https://github.com/paperclipai/paperclip/pull/2305))
- **Security** — Bumped rollup to 4.59.0 (path-traversal CVE), multer to 2.1.1 (HIGH CVEs), and redacted Bearer tokens from server log output. ([#2909](https://github.com/paperclipai/paperclip/pull/2909), @marysomething99-prog)
- **Issue identifier collisions** — Prevented identifier collisions during concurrent issue creation.
- **OpenClaw CEO paths** — Fixed `$AGENT_HOME` references in CEO onboarding instructions to use relative paths. ([#3299](https://github.com/paperclipai/paperclip/pull/3299), @aronprins)
- **Route authorization** — Import, approvals, activity, and heartbeat routes now enforce proper authorization scoping. ([#3009](https://github.com/paperclipai/paperclip/pull/3009), @KhairulA)
- **Windows adapter** — Uses `cmd.exe` for `.cmd`/`.bat` wrappers on Windows. ([#2662](https://github.com/paperclipai/paperclip/pull/2662), @wbelt)
- **Markdown autoformat** — Fixed autoformat of pasted markdown in inline editor. ([#2733](https://github.com/paperclipai/paperclip/pull/2733), @davison)
- **Paused agent dimming** — Correctly dim paused agents in list and org chart views; skip dimming on Paused filter tab. ([#2397](https://github.com/paperclipai/paperclip/pull/2397), @HearthCore)
- **Import role fallback** — Import now reads agent role from frontmatter before defaulting to "agent". ([#2594](https://github.com/paperclipai/paperclip/pull/2594), @plind-dm)
- **Backup cleanup** — Clean up orphaned `.sql` files on compression failure and fix stale startup log.
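The fixed fallback chain described in the two JWT entries above amounts to something like this (hypothetical helper; the real resolution code may differ):

```typescript
// Sketch of env-only JWT secret resolution: prefer the agent-specific
// secret, fall back to BETTER_AUTH_SECRET, and fail loudly instead of
// silently using a hardcoded default.
function resolveJwtSecret(env: Record<string, string | undefined>): string {
  const secret = env.PAPERCLIP_AGENT_JWT_SECRET ?? env.BETTER_AUTH_SECRET;
  if (!secret) {
    throw new Error("Set PAPERCLIP_AGENT_JWT_SECRET or BETTER_AUTH_SECRET");
  }
  return secret;
}
```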
## Upgrade Guide
Eight new database migrations (`0049`–`0056`) will run automatically on startup. These add:
- Issue blocker relations table (`0049`)
- Project environment variables (`0050`)
- Trigram search indexes on issues and comments (`0051` — requires `pg_trgm` extension)
- Execution policy decision tracking (`0052`)
- Non-issue inbox dismissals (`0053`)
- Relaxed routine constraints (`0054`)
- Heartbeat run process group tracking (`0055`)
- User sidebar preferences (`0056`)
All migrations are additive — no existing data is modified or removed.
**`pg_trgm` extension**: Migration `0051` creates the `pg_trgm` PostgreSQL extension for full-text search. If your database user does not have `CREATE EXTENSION` privileges, ask your DBA to run `CREATE EXTENSION IF NOT EXISTS pg_trgm;` before upgrading.
If you use external adapter plugins, note that built-in adapters can now be overridden by external ones. The `overriddenBuiltin` flag in the adapter API indicates when this is happening.
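For orientation, an external adapter plugin export could look roughly like the shape below — this is entirely illustrative and not the actual plugin SDK interface:

```typescript
// Entirely illustrative plugin shape: a name, a declared config schema,
// and an optional UI transcript parser, per the description above.
interface AdapterPluginSketch {
  name: string;
  configSchema: Record<string, { type: "string" | "boolean"; required?: boolean }>;
  parseTranscript?: (raw: string) => string[];
}

const examplePlugin: AdapterPluginSketch = {
  name: "example-adapter",
  configSchema: { apiKey: { type: "string", required: true } },
  parseTranscript: (raw) => raw.split("\n").filter((line) => line.trim().length > 0),
};
```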
## Contributors
Thank you to everyone who contributed to this release!
@AllenHyang, @antonio-mello-ai, @aronprins, @chrisschwer, @cleanunicorn, @cryppadotta, @DanielSousa, @davison, @ergonaworks, @HearthCore, @HenkDz, @KhairulA, @kimnamu, @Lempkey, @marysomething99-prog, @mvanhorn, @officialasishkumar, @plind-dm, @shoaib050326, @sparkeros, @wbelt

View File

@@ -7,7 +7,6 @@ import { stdin, stdout } from "node:process";
import { createCapturedOutputBuffer, parseJsonResponseWithLimit } from "./dev-runner-output.mjs";
import { shouldTrackDevServerPath } from "./dev-runner-paths.mjs";
import { createDevServiceIdentity, repoRoot } from "./dev-service-profile.ts";
import { bootstrapDevRunnerWorktreeEnv } from "../server/src/dev-runner-worktree.ts";
import {
findAdoptableLocalService,
removeLocalServiceRegistryRecord,
@@ -15,19 +14,6 @@ import {
writeLocalServiceRegistryRecord,
} from "../server/src/services/local-service-supervisor.ts";
// Keep these values local so the dev runner can boot from the server package's
// tsx context without requiring workspace package resolution first.
const BIND_MODES = ["loopback", "lan", "tailnet", "custom"] as const;
type BindMode = (typeof BIND_MODES)[number];
const worktreeEnvBootstrap = bootstrapDevRunnerWorktreeEnv(repoRoot, process.env);
if (worktreeEnvBootstrap.missingEnv) {
console.error(
`[paperclip] linked git worktree at ${repoRoot} is missing ${path.relative(repoRoot, worktreeEnvBootstrap.envPath)}. Run \`paperclipai worktree init\` in this worktree before \`pnpm dev\`.`,
);
process.exit(1);
}
const mode = process.argv[2] === "watch" ? "watch" : "dev";
const cliArgs = process.argv.slice(3);
const scanIntervalMs = 1500;
@@ -76,36 +62,13 @@ const tailscaleAuthFlagNames = new Set([
]);
let tailscaleAuth = false;
let bindMode: BindMode | null = null;
let bindHost: string | null = null;
const forwardedArgs: string[] = [];
for (let index = 0; index < cliArgs.length; index += 1) {
const arg = cliArgs[index];
for (const arg of cliArgs) {
if (tailscaleAuthFlagNames.has(arg)) {
tailscaleAuth = true;
continue;
}
if (arg === "--bind") {
const value = cliArgs[index + 1];
if (!value || value.startsWith("--") || !BIND_MODES.includes(value as BindMode)) {
console.error(`[paperclip] invalid --bind value. Use one of: ${BIND_MODES.join(", ")}`);
process.exit(1);
}
bindMode = value as BindMode;
index += 1;
continue;
}
if (arg === "--bind-host") {
const value = cliArgs[index + 1];
if (!value || value.startsWith("--")) {
console.error("[paperclip] --bind-host requires a value");
process.exit(1);
}
bindHost = value;
index += 1;
continue;
}
forwardedArgs.push(arg);
}
@@ -115,16 +78,6 @@ if (process.env.npm_config_tailscale_auth === "true") {
if (process.env.npm_config_authenticated_private === "true") {
tailscaleAuth = true;
}
if (!bindMode && process.env.npm_config_bind && BIND_MODES.includes(process.env.npm_config_bind as BindMode)) {
bindMode = process.env.npm_config_bind as BindMode;
}
if (!bindHost && process.env.npm_config_bind_host) {
bindHost = process.env.npm_config_bind_host;
}
if (bindMode === "custom" && !bindHost) {
console.error("[paperclip] --bind custom requires --bind-host <host>");
process.exit(1);
}
const env: NodeJS.ProcessEnv = {
...process.env,
@@ -141,36 +94,13 @@ if (mode === "watch") {
env.PAPERCLIP_MIGRATION_AUTO_APPLY ??= "true";
}
if (tailscaleAuth || bindMode) {
const effectiveBind = bindMode ?? "lan";
if (tailscaleAuth) {
console.log("[paperclip] note: --tailscale-auth/--authenticated-private are legacy aliases for --bind lan");
}
env.PAPERCLIP_BIND = effectiveBind;
if (bindHost) {
env.PAPERCLIP_BIND_HOST = bindHost;
} else {
delete env.PAPERCLIP_BIND_HOST;
}
if (effectiveBind === "loopback" && !tailscaleAuth) {
delete env.PAPERCLIP_DEPLOYMENT_MODE;
delete env.PAPERCLIP_DEPLOYMENT_EXPOSURE;
delete env.PAPERCLIP_AUTH_BASE_URL_MODE;
console.log("[paperclip] dev mode: local_trusted (bind=loopback)");
} else {
env.PAPERCLIP_DEPLOYMENT_MODE = "authenticated";
env.PAPERCLIP_DEPLOYMENT_EXPOSURE = "private";
env.PAPERCLIP_AUTH_BASE_URL_MODE = "auto";
console.log(
`[paperclip] dev mode: authenticated/private (bind=${effectiveBind}${bindHost ? `:${bindHost}` : ""})`,
);
}
if (tailscaleAuth) {
env.PAPERCLIP_DEPLOYMENT_MODE = "authenticated";
env.PAPERCLIP_DEPLOYMENT_EXPOSURE = "private";
env.PAPERCLIP_AUTH_BASE_URL_MODE = "auto";
env.HOST = "0.0.0.0";
console.log("[paperclip] dev mode: authenticated/private (tailscale-friendly) on 0.0.0.0");
} else {
delete env.PAPERCLIP_BIND;
delete env.PAPERCLIP_BIND_HOST;
delete env.PAPERCLIP_DEPLOYMENT_MODE;
delete env.PAPERCLIP_DEPLOYMENT_EXPOSURE;
delete env.PAPERCLIP_AUTH_BASE_URL_MODE;
console.log("[paperclip] dev mode: local_trusted (default)");
}
@@ -178,7 +108,7 @@ const serverPort = Number.parseInt(env.PORT ?? process.env.PORT ?? "3100", 10) |
const devService = createDevServiceIdentity({
mode,
forwardedArgs,
networkProfile: tailscaleAuth ? `legacy:${bindMode ?? "lan"}` : (bindMode ?? "default"),
tailscaleAuth,
port: serverPort,
});

View File

@@ -8,7 +8,7 @@ export const repoRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)
export function createDevServiceIdentity(input: {
mode: "watch" | "dev";
forwardedArgs: string[];
networkProfile: string;
tailscaleAuth: boolean;
port: number;
}) {
const envFingerprint = createHash("sha256")
@@ -16,7 +16,7 @@ export function createDevServiceIdentity(input: {
JSON.stringify({
mode: input.mode,
forwardedArgs: input.forwardedArgs,
networkProfile: input.networkProfile,
tailscaleAuth: input.tailscaleAuth,
port: input.port,
}),
)

View File

@@ -1,117 +0,0 @@
#!/usr/bin/env -S node --import tsx
import fs from "node:fs/promises";
import { existsSync, readdirSync, readFileSync, realpathSync } from "node:fs";
import path from "node:path";
import { repoRoot } from "./dev-service-profile.ts";
type WorkspaceLinkMismatch = {
workspaceDir: string;
packageName: string;
expectedPath: string;
actualPath: string | null;
};
function readJsonFile(filePath: string): Record<string, unknown> {
return JSON.parse(readFileSync(filePath, "utf8")) as Record<string, unknown>;
}
function discoverWorkspacePackagePaths(rootDir: string): Map<string, string> {
const packagePaths = new Map<string, string>();
const ignoredDirNames = new Set([".git", ".paperclip", "dist", "node_modules"]);
function visit(dirPath: string) {
const packageJsonPath = path.join(dirPath, "package.json");
if (existsSync(packageJsonPath)) {
const packageJson = readJsonFile(packageJsonPath);
if (typeof packageJson.name === "string" && packageJson.name.length > 0) {
packagePaths.set(packageJson.name, dirPath);
}
}
for (const entry of readdirSync(dirPath, { withFileTypes: true })) {
if (!entry.isDirectory()) continue;
if (ignoredDirNames.has(entry.name)) continue;
visit(path.join(dirPath, entry.name));
}
}
visit(path.join(rootDir, "packages"));
visit(path.join(rootDir, "server"));
visit(path.join(rootDir, "ui"));
visit(path.join(rootDir, "cli"));
return packagePaths;
}
const workspacePackagePaths = discoverWorkspacePackagePaths(repoRoot);
const workspaceDirs = Array.from(
new Set(
Array.from(workspacePackagePaths.values())
.map((packagePath) => path.relative(repoRoot, packagePath))
.filter((workspaceDir) => workspaceDir.length > 0),
),
).sort();
function findWorkspaceLinkMismatches(workspaceDir: string): WorkspaceLinkMismatch[] {
const nodeModulesDir = path.join(repoRoot, workspaceDir, "node_modules");
if (!existsSync(nodeModulesDir)) {
return [];
}
const packageJson = readJsonFile(path.join(repoRoot, workspaceDir, "package.json"));
const dependencies = {
...(packageJson.dependencies as Record<string, unknown> | undefined),
...(packageJson.devDependencies as Record<string, unknown> | undefined),
};
const mismatches: WorkspaceLinkMismatch[] = [];
for (const [packageName, version] of Object.entries(dependencies)) {
if (typeof version !== "string" || !version.startsWith("workspace:")) continue;
const expectedPath = workspacePackagePaths.get(packageName);
if (!expectedPath) continue;
const linkPath = path.join(repoRoot, workspaceDir, "node_modules", ...packageName.split("/"));
const actualPath = existsSync(linkPath) ? path.resolve(realpathSync(linkPath)) : null;
if (actualPath === path.resolve(expectedPath)) continue;
mismatches.push({
workspaceDir,
packageName,
expectedPath: path.resolve(expectedPath),
actualPath,
});
}
return mismatches;
}
async function ensureWorkspaceLinksCurrent(workspaceDir: string) {
const mismatches = findWorkspaceLinkMismatches(workspaceDir);
if (mismatches.length === 0) return;
console.log(`[paperclip] detected stale workspace package links for ${workspaceDir}; relinking dependencies...`);
for (const mismatch of mismatches) {
console.log(
`[paperclip] ${mismatch.packageName}: ${mismatch.actualPath ?? "missing"} -> ${mismatch.expectedPath}`,
);
}
for (const mismatch of mismatches) {
const linkPath = path.join(repoRoot, mismatch.workspaceDir, "node_modules", ...mismatch.packageName.split("/"));
await fs.mkdir(path.dirname(linkPath), { recursive: true });
await fs.rm(linkPath, { recursive: true, force: true });
await fs.symlink(mismatch.expectedPath, linkPath);
}
const remainingMismatches = findWorkspaceLinkMismatches(workspaceDir);
if (remainingMismatches.length === 0) return;
throw new Error(
`Workspace relink did not repair all ${workspaceDir} package links: ${remainingMismatches.map((item) => item.packageName).join(", ")}`,
);
}
for (const workspaceDir of workspaceDirs) {
await ensureWorkspaceLinksCurrent(workspaceDir);
}

View File

@@ -1,110 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat <<'EOF'
Usage:
scripts/paperclip-issue-update.sh [--issue-id ID] [--status STATUS] [--comment TEXT] [--dry-run]
Reads a multiline markdown comment from stdin when stdin is piped. This preserves
newlines when building the JSON payload for PATCH /api/issues/{issueId}.
Examples:
scripts/paperclip-issue-update.sh --issue-id "$PAPERCLIP_TASK_ID" --status in_progress <<'MD'
Investigating formatting
- Pulled the raw comment body
- Comparing it with the run transcript
MD
scripts/paperclip-issue-update.sh --issue-id "$PAPERCLIP_TASK_ID" --status done --dry-run <<'MD'
Done
- Fixed the issue update helper
MD
EOF
}
require_command() {
if ! command -v "$1" >/dev/null 2>&1; then
printf 'Missing required command: %s\n' "$1" >&2
exit 1
fi
}
issue_id="${PAPERCLIP_TASK_ID:-}"
status=""
comment_arg=""
dry_run=0
while [[ $# -gt 0 ]]; do
case "$1" in
--issue-id)
issue_id="${2:-}"
shift 2
;;
--status)
status="${2:-}"
shift 2
;;
--comment)
comment_arg="${2:-}"
shift 2
;;
--dry-run)
dry_run=1
shift
;;
--help|-h)
usage
exit 0
;;
*)
printf 'Unknown argument: %s\n' "$1" >&2
usage >&2
exit 1
;;
esac
done
if [[ -z "$issue_id" ]]; then
printf 'Missing issue id. Pass --issue-id or set PAPERCLIP_TASK_ID.\n' >&2
exit 1
fi
comment=""
if [[ -n "$comment_arg" ]]; then
comment="$comment_arg"
elif [[ ! -t 0 ]]; then
comment="$(cat)"
fi
require_command jq
payload="$(
jq -nc \
--arg status "$status" \
--arg comment "$comment" \
'
(if $status == "" then {} else {status: $status} end) +
(if $comment == "" then {} else {comment: $comment} end)
'
)"
if [[ "$dry_run" == "1" ]]; then
printf '%s\n' "$payload"
exit 0
fi
if [[ -z "${PAPERCLIP_API_URL:-}" || -z "${PAPERCLIP_API_KEY:-}" || -z "${PAPERCLIP_RUN_ID:-}" ]]; then
printf 'Missing PAPERCLIP_API_URL, PAPERCLIP_API_KEY, or PAPERCLIP_RUN_ID.\n' >&2
exit 1
fi
curl -sS -X PATCH \
"$PAPERCLIP_API_URL/api/issues/$issue_id" \
-H "Authorization: Bearer $PAPERCLIP_API_KEY" \
-H "X-Paperclip-Run-Id: $PAPERCLIP_RUN_ID" \
-H 'Content-Type: application/json' \
--data-binary "$payload"
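Taken on its own, the jq payload step in the script above can be exercised standalone to confirm that empty fields are omitted from the JSON body; this sketch mirrors the script's invocation (assuming `jq` is on PATH):

```shell
# Rebuild the PATCH payload exactly as the script does: any field whose
# value is the empty string is dropped from the resulting JSON object.
status="in_progress"
comment=""
payload="$(
  jq -nc \
    --arg status "$status" \
    --arg comment "$comment" \
    '
    (if $status == "" then {} else {status: $status} end) +
    (if $comment == "" then {} else {comment: $comment} end)
    '
)"
printf '%s\n' "$payload"   # → {"status":"in_progress"}
```

Note that with neither `--status` nor a piped comment the payload collapses to `{}` and is still sent, which is what makes `--dry-run` useful for checking what will hit the API.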


@@ -53,24 +53,6 @@ run_paperclipai_command() {
return 1
}
paperclipai_command_available() {
if command -v pnpm >/dev/null 2>&1 && pnpm paperclipai --help >/dev/null 2>&1; then
return 0
fi
local base_cli_tsx_path="$base_cwd/cli/node_modules/tsx/dist/cli.mjs"
local base_cli_entry_path="$base_cwd/cli/src/index.ts"
if command -v node >/dev/null 2>&1 && [[ -f "$base_cli_tsx_path" ]] && [[ -f "$base_cli_entry_path" ]]; then
return 0
fi
if command -v paperclipai >/dev/null 2>&1; then
return 0
fi
return 1
}
run_isolated_worktree_init() {
run_paperclipai_command \
worktree \
@@ -255,8 +237,6 @@ async function main() {
server: {
deploymentMode: sourceConfig?.server?.deploymentMode ?? "local_trusted",
exposure: sourceConfig?.server?.exposure ?? "private",
...(sourceConfig?.server?.bind ? { bind: sourceConfig.server.bind } : {}),
...(sourceConfig?.server?.customBindHost ? { customBindHost: sourceConfig.server.customBindHost } : {}),
host: sourceConfig?.server?.host ?? "127.0.0.1",
port: serverPort,
allowedHostnames: sourceConfig?.server?.allowedHostnames ?? [],
@@ -336,13 +316,25 @@ main().catch((error) => {
EOF
}
if paperclipai_command_available; then
run_isolated_worktree_init
else
if ! run_isolated_worktree_init; then
echo "paperclipai CLI not available in this workspace; writing isolated fallback config without DB seeding." >&2
write_fallback_worktree_config
fi
disable_seeded_routines() {
local company_id="${PAPERCLIP_COMPANY_ID:-}"
if [[ -z "$company_id" ]]; then
echo "PAPERCLIP_COMPANY_ID not set; skipping routine disable post-step." >&2
return 0
fi
if ! run_paperclipai_command routines disable-all --config "$worktree_config_path" --company-id "$company_id"; then
echo "paperclipai CLI not available in this workspace; skipping routine disable post-step." >&2
fi
}
disable_seeded_routines
list_base_node_modules_paths() {
cd "$base_cwd" &&
find . \
@@ -404,49 +396,14 @@ if [[ -f "$worktree_cwd/package.json" && -f "$worktree_cwd/pnpm-lock.yaml" ]]; t
done
}
run_pnpm_install() {
local stdout_path stderr_path
stdout_path="$(mktemp)"
stderr_path="$(mktemp)"
if (
cd "$worktree_cwd"
pnpm install "$@"
) >"$stdout_path" 2>"$stderr_path"; then
cat "$stdout_path"
cat "$stderr_path" >&2
rm -f "$stdout_path" "$stderr_path"
return 0
else
local exit_code=$?
fi
cat "$stdout_path"
cat "$stderr_path" >&2
if grep -q "ERR_PNPM_OUTDATED_LOCKFILE" "$stdout_path" "$stderr_path"; then
rm -f "$stdout_path" "$stderr_path"
return 90
fi
rm -f "$stdout_path" "$stderr_path"
return "$exit_code"
(
cd "$worktree_cwd"
pnpm install --frozen-lockfile
) || {
restore_moved_symlinks
exit 1
}
if run_pnpm_install --frozen-lockfile; then
:
else
install_exit_code=$?
if [[ "$install_exit_code" -eq 90 ]]; then
echo "pnpm-lock.yaml is out of date in this execution workspace; retrying install without --frozen-lockfile." >&2
run_pnpm_install --no-frozen-lockfile || {
restore_moved_symlinks
exit 1
}
else
restore_moved_symlinks
exit "$install_exit_code"
fi
fi
cleanup_moved_symlinks
fi
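The retry flow above reduces to a sentinel-exit-code pattern: the wrapper returns 90 (a convention chosen by this script, not by pnpm itself) when it detects `ERR_PNPM_OUTDATED_LOCKFILE`, and the caller retries once without `--frozen-lockfile`. A minimal sketch with a simulated installer standing in for `pnpm install`:

```shell
# Sketch of the retry pattern: exit code 90 signals an outdated
# lockfile; any other non-zero code is treated as fatal.
attempt_install() {
  # Hypothetical stand-in for `pnpm install "$@"`: fail the frozen
  # attempt with the sentinel code to exercise the retry path.
  if [ "$1" = "--frozen-lockfile" ]; then
    echo "ERR_PNPM_OUTDATED_LOCKFILE" >&2
    return 90
  fi
  echo "installed with $1"
}

if attempt_install --frozen-lockfile; then
  :
else
  code=$?
  if [ "$code" -eq 90 ]; then
    echo "lockfile out of date; retrying without --frozen-lockfile" >&2
    attempt_install --no-frozen-lockfile || exit 1
  else
    exit "$code"
  fi
fi
```

Capturing `$?` at the top of the `else` branch is the load-bearing detail: that is the only point where the failed command's status is still visible.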

Some files were not shown because too many files have changed in this diff.