mirror of
https://github.com/paperclipai/paperclip
synced 2026-04-25 17:25:15 +02:00
chore: improve worktree tooling and security docs
230  .agents/skills/deal-with-security-advisory/SKILL.md  Normal file
@@ -0,0 +1,230 @@
---
name: deal-with-security-advisory
description: >
  Handle a GitHub Security Advisory response for Paperclip, including
  confidential fix development in a temporary private fork, human coordination
  on advisory-thread comments, CVE request, synchronized advisory publication,
  and immediate security release steps.
---

# Security Vulnerability Response Instructions

## ⚠️ CRITICAL

This is a security vulnerability. Everything about this process is confidential until the advisory is published. Do not mention the vulnerability details in any public commit message, PR title, branch name, or comment. Do not push anything to a public branch. Do not discuss specifics in any public channel. Assume anything on the public repo is visible to attackers who will exploit the window between disclosure and user upgrades.
***

## Context

A security vulnerability has been reported via GitHub Security Advisory:

* **Advisory:** {{ghsaId}} (e.g. GHSA-x8hx-rhr2-9rf7)
* **Reporter:** {{reporterHandle}}
* **Severity:** {{severity}}
* **Notes:** {{notes}}

***
## Step 0: Fetch the Advisory Details

Pull the full advisory so you understand the vulnerability before doing anything else:

```
gh api repos/paperclipai/paperclip/security-advisories/{{ghsaId}}
```

Read the `description`, `severity`, `cvss`, and `vulnerabilities` fields. Understand the attack vector before writing code.
## Step 1: Acknowledge the Report

⚠️ **This step requires a human.** The advisory thread does not have a comment API. Ask the human operator to post a comment on the private advisory thread acknowledging the report. Provide them this template:

> Thanks for the report, @{{reporterHandle}}. We've confirmed the issue and are working on a fix. We're targeting a patch release within {{timeframe}}. We'll keep you updated here.

Hand off the template, then continue with the remaining steps while the human posts it.

The steps below use the `gh` CLI. You have access and credentials outside your sandbox, so use them.
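The `{{timeframe}}` in the acknowledgment template is a judgment call. One way to sketch it is a severity-based mapping; the thresholds below are illustrative assumptions, not an established Paperclip policy:

```shell
# Map advisory severity to a target release timeframe for the acknowledgment.
# Thresholds here are assumptions for this sketch, not a documented policy.
severity="critical"
case "$severity" in
  critical) timeframe="24 hours" ;;
  high)     timeframe="72 hours" ;;
  *)        timeframe="2 weeks"  ;;
esac
echo "$timeframe"
```

Whatever mapping is used, commit to a concrete window in the acknowledgment so the reporter knows when to expect the patch.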
## Step 2: Create the Temporary Private Fork

This is where all fix development happens. Never push to the public repo.

```
gh api --method POST \
  repos/paperclipai/paperclip/security-advisories/{{ghsaId}}/forks
```

This returns a repository object for the private fork. Save the `full_name` and `clone_url`.

Clone it and set up your workspace:

```
# Clone the private fork somewhere outside ~/paperclip
git clone <clone_url_from_response> ~/security-patch-{{ghsaId}}
cd ~/security-patch-{{ghsaId}}
git checkout -b security-fix
```

**Do not edit `~/paperclip`** — the dev server is running off the `~/paperclip` master branch and we don't want to touch it. All work happens in the private fork clone.
**TIPS:**

* Do not commit `pnpm-lock.yaml` — the repo has actions to manage this
* Do not use descriptive branch names that leak the vulnerability (e.g., no `fix-dns-rebinding-rce`). Use something generic like `security-fix`
* All work stays in the private fork until publication
* CI/GitHub Actions will NOT run on the temporary private fork — this is a GitHub limitation by design. You must run tests locally
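A quick sanity check in the spirit of the branch-naming tip can catch a leaky name before anything is pushed; the keyword list is illustrative and should be extended with terms specific to the advisory at hand:

```shell
# Check that a candidate branch name does not leak vulnerability details.
# The keyword list is an illustrative sketch; add the advisory's own terms.
branch="security-fix"
if printf '%s' "$branch" | grep -qiE 'rce|rebinding|injection|xss|overflow|bypass'; then
  echo "LEAKY: pick a generic name"
else
  echo "OK"
fi
```

The same check can be pointed at commit messages before pushing to the private fork.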
## Step 3: Develop and Validate the Fix

Write the patch. Same content standards as any PR:

* It must functionally work — **run tests locally** since CI won't run on the private fork
* Consider the whole codebase, not just the narrow vulnerability path. A patch that fixes one vector but opens another is worse than no patch
* Ensure backwards compatibility for the database, or be explicit about what breaks
* Make sure any UI components still look correct if the fix touches them
* The fix should be minimal and focused — don't bundle unrelated changes into a security patch. Reviewers (and the reporter) should be able to read the diff and understand exactly what changed and why

**Specific to security fixes:**

* Verify the fix actually closes the attack vector described in the advisory. Reproduce the vulnerability first (using the reporter's description), then confirm the patch prevents it
* Consider adjacent attack vectors — if DNS rebinding is the issue, are there other endpoints or modes with the same class of problem?
* Do not introduce new dependencies unless absolutely necessary — new deps in a security patch raise eyebrows

Push your fix to the private fork:

```
git add -A
git commit -m "Fix security vulnerability"
git push origin security-fix
```
## Step 4: Coordinate with the Reporter

⚠️ **This step requires a human.** Ask the human operator to post on the advisory thread letting the reporter know the fix is ready and giving them a chance to review. Provide them this template:

> @{{reporterHandle}} — fix is ready in the private fork if you'd like to review before we publish. Planning to release within {{timeframe}}.

Proceed to the next step while the human posts the comment.
## Step 5: Request a CVE

This makes vulnerability scanners (npm audit, Snyk, Dependabot) warn users to upgrade. Without it, nobody gets automated notification.

```
gh api --method POST \
  repos/paperclipai/paperclip/security-advisories/{{ghsaId}}/cve
```

GitHub is a CVE Numbering Authority and will assign one automatically. The CVE may take a few hours to propagate after the advisory is published.
## Step 6: Publish Everything Simultaneously

This all happens at once — do not stagger these steps. The goal is **zero window** between the vulnerability becoming public knowledge and the fix being available.

### 6a. Verify reporter credit before publishing

```
gh api repos/paperclipai/paperclip/security-advisories/{{ghsaId}} --jq '.credits'
```

If the reporter is not credited, add them:

```
gh api --method PATCH \
  repos/paperclipai/paperclip/security-advisories/{{ghsaId}} \
  --input - << 'EOF'
{
  "credits": [
    {
      "login": "{{reporterHandle}}",
      "type": "reporter"
    }
  ]
}
EOF
```
### 6b. Update the advisory with the patched version and publish

```
gh api --method PATCH \
  repos/paperclipai/paperclip/security-advisories/{{ghsaId}} \
  --input - << 'EOF'
{
  "state": "published",
  "vulnerabilities": [
    {
      "package": {
        "ecosystem": "npm",
        "name": "paperclip"
      },
      "vulnerable_version_range": "< {{patchedVersion}}",
      "patched_versions": "{{patchedVersion}}"
    }
  ]
}
EOF
```

Publishing the advisory does three things at once:

* Makes the GHSA public
* Merges the temporary private fork into your repo
* Triggers the CVE assignment (if requested in step 5)
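The `vulnerable_version_range` of `< {{patchedVersion}}` means any version that sorts strictly before the patched release is flagged. An offline sketch of that comparison, using `sort -V` for a semver-style ordering (the version values are illustrative):

```shell
# A candidate version is in the "< patched" range when it sorts strictly
# before the patched version. sort -V gives a version-aware ordering.
# The version numbers here are illustrative, not from a real advisory.
patched="1.4.2"
candidate="1.4.1"
lowest=$(printf '%s\n%s\n' "$candidate" "$patched" | sort -V | head -n1)
if [ "$candidate" != "$patched" ] && [ "$lowest" = "$candidate" ]; then
  echo "vulnerable"
else
  echo "patched"
fi
```

Double-check the range before publishing: an off-by-one here either flags patched users or misses vulnerable ones.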
### 6c. Cut a release immediately after merge

```
cd ~/paperclip
git pull origin master

gh release create v{{patchedVersion}} \
  --repo paperclipai/paperclip \
  --title "v{{patchedVersion}} — Security Release" \
  --notes "## Security Release

This release fixes a critical security vulnerability.

### What was fixed
{{briefDescription}} (e.g., Remote code execution via DNS rebinding in \`local_trusted\` mode)

### Advisory
https://github.com/paperclipai/paperclip/security/advisories/{{ghsaId}}

### Credit
Thanks to @{{reporterHandle}} for responsibly disclosing this vulnerability.

### Action required
All users running versions prior to {{patchedVersion}} should upgrade immediately."
```
## Step 7: Post-Publication Verification

```
# Verify the advisory is published and CVE is assigned
gh api repos/paperclipai/paperclip/security-advisories/{{ghsaId}} \
  --jq '{state: .state, cve_id: .cve_id, published_at: .published_at}'

# Verify the release exists
gh release view v{{patchedVersion}} --repo paperclipai/paperclip
```

If the CVE hasn't been assigned yet, that's normal — it can take a few hours.

⚠️ **Human step:** Ask the human operator to post a final comment on the advisory thread confirming publication and thanking the reporter.

Tell the human operator what you did by posting a comment to this task, including:

* The published advisory URL: `https://github.com/paperclipai/paperclip/security/advisories/{{ghsaId}}`
* The release URL
* Whether the CVE has been assigned yet
* All URLs to any pull requests or branches
4  .mailmap
@@ -1 +1,3 @@
-Dotta <bippadotta@protonmail.com> Forgotten <forgottenrunes@protonmail.com>
+Dotta <bippadotta@protonmail.com> <34892728+cryppadotta@users.noreply.github.com>
+Dotta <bippadotta@protonmail.com> <forgottenrunes@protonmail.com>
+Dotta <bippadotta@protonmail.com> <dotta@example.com>
25  README.md
@@ -243,11 +243,18 @@ See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.

- ✅ Skills Manager
- ✅ Scheduled Routines
- ✅ Better Budgeting
- ⚪ Artifacts & Deployments
- ⚪ CEO Chat
- ⚪ MAXIMIZER MODE
- ✅ Agent Reviews and Approvals
- ⚪ Multiple Human Users
- ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
- ⚪ Artifacts & Work Products
- ⚪ Memory & Knowledge
- ⚪ Enforced Outcomes
- ⚪ MAXIMIZER MODE
- ⚪ Deep Planning
- ⚪ Work Queues
- ⚪ Self-Organization
- ⚪ Automatic Organizational Learning
- ⚪ CEO Chat
- ⚪ Cloud deployments
- ⚪ Desktop App
@@ -263,12 +270,12 @@ Paperclip collects anonymous usage telemetry to help us understand how the produ

Telemetry is **enabled by default** and can be disabled with any of the following:

-| Method | How |
-|---|---|
-| Environment variable | `PAPERCLIP_TELEMETRY_DISABLED=1` |
-| Standard convention | `DO_NOT_TRACK=1` |
-| CI environments | Automatically disabled when `CI=true` |
-| Config file | Set `telemetry.enabled: false` in your Paperclip config |
+| Method               | How                                                     |
+| -------------------- | ------------------------------------------------------- |
+| Environment variable | `PAPERCLIP_TELEMETRY_DISABLED=1` |
+| Standard convention  | `DO_NOT_TRACK=1` |
+| CI environments      | Automatically disabled when `CI=true` |
+| Config file          | Set `telemetry.enabled: false` in your Paperclip config |

## Contributing
8  SECURITY.md  Normal file
@@ -0,0 +1,8 @@
# Security Policy

## Reporting a Vulnerability

Please report security vulnerabilities through GitHub's Security Advisory feature:
[https://github.com/paperclipai/paperclip/security/advisories/new](https://github.com/paperclipai/paperclip/security/advisories/new)

Do not open public issues for security vulnerabilities.
@@ -2,10 +2,20 @@ import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { execFileSync } from "node:child_process";
import { randomUUID } from "node:crypto";
import { afterEach, describe, expect, it, vi } from "vitest";
import {
  agents,
  companies,
  createDb,
  projects,
  routines,
  routineTriggers,
} from "@paperclipai/db";
import {
  copyGitHooksToWorktreeGitDir,
  copySeededSecretsKey,
  pauseSeededScheduledRoutines,
  readSourceAttachmentBody,
  rebindWorkspaceCwd,
  resolveSourceConfigPath,

@@ -28,9 +38,21 @@ import {
  sanitizeWorktreeInstanceId,
} from "../commands/worktree-lib.js";
import type { PaperclipConfig } from "../config/schema.js";
import {
  getEmbeddedPostgresTestSupport,
  startEmbeddedPostgresTestDatabase,
} from "./helpers/embedded-postgres.js";

const ORIGINAL_CWD = process.cwd();
const ORIGINAL_ENV = { ...process.env };
const embeddedPostgresSupport = await getEmbeddedPostgresTestSupport();
const describeEmbeddedPostgres = embeddedPostgresSupport.supported ? describe : describe.skip;

if (!embeddedPostgresSupport.supported) {
  console.warn(
    `Skipping embedded Postgres worktree CLI tests on this host: ${embeddedPostgresSupport.reason ?? "unsupported environment"}`,
  );
}

afterEach(() => {
  process.chdir(ORIGINAL_CWD);
@@ -823,3 +845,138 @@ describe("worktree helpers", () => {
    }
  }, 20_000);
});

describeEmbeddedPostgres("pauseSeededScheduledRoutines", () => {
  it("pauses only routines with enabled schedule triggers", async () => {
    const tempDb = await startEmbeddedPostgresTestDatabase("paperclip-worktree-routines-");
    const db = createDb(tempDb.connectionString);
    const companyId = randomUUID();
    const projectId = randomUUID();
    const agentId = randomUUID();
    const activeScheduledRoutineId = randomUUID();
    const activeApiRoutineId = randomUUID();
    const pausedScheduledRoutineId = randomUUID();
    const archivedScheduledRoutineId = randomUUID();
    const disabledScheduleRoutineId = randomUUID();

    try {
      await db.insert(companies).values({
        id: companyId,
        name: "Paperclip",
        issuePrefix: `T${companyId.replace(/-/g, "").slice(0, 6).toUpperCase()}`,
        requireBoardApprovalForNewAgents: false,
      });
      await db.insert(agents).values({
        id: agentId,
        companyId,
        name: "Coder",
        adapterType: "process",
        adapterConfig: {},
        runtimeConfig: {},
        permissions: {},
      });
      await db.insert(projects).values({
        id: projectId,
        companyId,
        name: "Project",
        status: "in_progress",
      });
      await db.insert(routines).values([
        {
          id: activeScheduledRoutineId,
          companyId,
          projectId,
          assigneeAgentId: agentId,
          title: "Active scheduled",
          status: "active",
        },
        {
          id: activeApiRoutineId,
          companyId,
          projectId,
          assigneeAgentId: agentId,
          title: "Active API",
          status: "active",
        },
        {
          id: pausedScheduledRoutineId,
          companyId,
          projectId,
          assigneeAgentId: agentId,
          title: "Paused scheduled",
          status: "paused",
        },
        {
          id: archivedScheduledRoutineId,
          companyId,
          projectId,
          assigneeAgentId: agentId,
          title: "Archived scheduled",
          status: "archived",
        },
        {
          id: disabledScheduleRoutineId,
          companyId,
          projectId,
          assigneeAgentId: agentId,
          title: "Disabled schedule",
          status: "active",
        },
      ]);
      await db.insert(routineTriggers).values([
        {
          companyId,
          routineId: activeScheduledRoutineId,
          kind: "schedule",
          enabled: true,
          cronExpression: "0 9 * * *",
          timezone: "UTC",
        },
        {
          companyId,
          routineId: activeApiRoutineId,
          kind: "api",
          enabled: true,
        },
        {
          companyId,
          routineId: pausedScheduledRoutineId,
          kind: "schedule",
          enabled: true,
          cronExpression: "0 10 * * *",
          timezone: "UTC",
        },
        {
          companyId,
          routineId: archivedScheduledRoutineId,
          kind: "schedule",
          enabled: true,
          cronExpression: "0 11 * * *",
          timezone: "UTC",
        },
        {
          companyId,
          routineId: disabledScheduleRoutineId,
          kind: "schedule",
          enabled: false,
          cronExpression: "0 12 * * *",
          timezone: "UTC",
        },
      ]);

      const pausedCount = await pauseSeededScheduledRoutines(tempDb.connectionString);
      expect(pausedCount).toBe(1);

      const rows = await db.select({ id: routines.id, status: routines.status }).from(routines);
      const statusById = new Map(rows.map((row) => [row.id, row.status]));
      expect(statusById.get(activeScheduledRoutineId)).toBe("paused");
      expect(statusById.get(activeApiRoutineId)).toBe("active");
      expect(statusById.get(pausedScheduledRoutineId)).toBe("paused");
      expect(statusById.get(archivedScheduledRoutineId)).toBe("archived");
      expect(statusById.get(disabledScheduleRoutineId)).toBe("active");
    } finally {
      await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
      await tempDb.cleanup();
    }
  }, 20_000);
});
@@ -39,6 +39,8 @@ import {
  issues,
  projectWorkspaces,
  projects,
  routines,
  routineTriggers,
  runDatabaseBackup,
  runDatabaseRestore,
  createEmbeddedPostgresLogBuffer,

@@ -922,6 +924,36 @@ async function ensureEmbeddedPostgres(dataDir: string, preferredPort: number): P
  };
}

export async function pauseSeededScheduledRoutines(connectionString: string): Promise<number> {
  const db = createDb(connectionString);
  try {
    const scheduledRoutineIds = await db
      .selectDistinct({ routineId: routineTriggers.routineId })
      .from(routineTriggers)
      .where(and(eq(routineTriggers.kind, "schedule"), eq(routineTriggers.enabled, true)));
    const idsToPause = scheduledRoutineIds
      .map((row) => row.routineId)
      .filter((value): value is string => Boolean(value));

    if (idsToPause.length === 0) {
      return 0;
    }

    const paused = await db
      .update(routines)
      .set({
        status: "paused",
        updatedAt: new Date(),
      })
      .where(and(inArray(routines.id, idsToPause), sql`${routines.status} <> 'paused'`, sql`${routines.status} <> 'archived'`))
      .returning({ id: routines.id });

    return paused.length;
  } finally {
    await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
  }
}

async function seedWorktreeDatabase(input: {
  sourceConfigPath: string;
  sourceConfig: PaperclipConfig;

@@ -979,6 +1011,7 @@ async function seedWorktreeDatabase(input: {
    backupFile: backup.backupFile,
  });
  await applyPendingMigrations(targetConnectionString);
  await pauseSeededScheduledRoutines(targetConnectionString);
  const reboundWorkspaces = await rebindSeededProjectWorkspaces({
    targetConnectionString,
    currentCwd: input.targetPaths.cwd,
@@ -175,7 +175,7 @@ Seed modes:

After `worktree init`, both the server and the CLI auto-load the repo-local `.paperclip/.env` when run inside that worktree, so normal commands like `pnpm dev`, `paperclipai doctor`, and `paperclipai db:backup` stay scoped to the worktree instance.

-Provisioned git worktrees also pause all seeded routines in the isolated worktree database by default. This prevents copied daily/cron routines from firing unexpectedly inside the new workspace instance during development.
+Provisioned git worktrees also pause seeded routines that still have enabled schedule triggers in the isolated worktree database by default. This prevents copied daily/cron routines from firing unexpectedly inside the new workspace instance during development without disabling webhook/API-only routines.

That repo-local env also sets:
@@ -22,6 +22,7 @@ The question is not "which memory project wins?" The question is "what is the sm

### Hosted memory APIs

- `mem0`
+- `AWS Bedrock AgentCore Memory`
- `supermemory`
- `Memori`

@@ -49,6 +50,7 @@ These emphasize local persistence, inspectability, and low operational overhead.

|---|---|---|---|---|
| [nuggets](https://github.com/NeoVertex1/nuggets) | local memory engine + messaging gateway | topic-scoped HRR memory with `remember`, `recall`, `forget`, fact promotion into `MEMORY.md` | good example of lightweight local memory and automatic promotion | very specific architecture; not a general multi-tenant service |
| [mem0](https://github.com/mem0ai/mem0) | hosted + OSS SDK | `add`, `search`, `getAll`, `get`, `update`, `delete`, `deleteAll`; entity partitioning via `user_id`, `agent_id`, `run_id`, `app_id` | closest to a clean provider API with identities and metadata filters | provider owns extraction heavily; Paperclip should not assume every backend behaves like mem0 |
+| [AWS Bedrock AgentCore Memory](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/memory.html) | AWS-managed memory service | explicit short-term and long-term memories, actor/session/event APIs, memory strategies, namespace templates, optional self-managed extraction pipeline | strong example of provider-managed memory with clear scoped ids, retention controls, and standalone API access outside a single agent framework | AWS-hosted and IAM-centric; Paperclip would still need its own company/run/comment provenance, cost rollups, and likely a plugin wrapper instead of baking AWS semantics into core |
| [MemOS](https://github.com/MemTensor/MemOS) | memory OS / framework | unified add-retrieve-edit-delete, memory cubes, multimodal memory, tool memory, async scheduler, feedback/correction | strong source for optional capabilities beyond plain search | much broader than the minimal contract Paperclip should standardize first |
| [supermemory](https://github.com/supermemoryai/supermemory) | hosted memory + context API | `add`, `profile`, `search.memories`, `search.documents`, document upload, settings; automatic profile building and forgetting | strong example of "context bundle" rather than raw search results | heavily productized around its own ontology and hosted flow |
| [memU](https://github.com/NevaMind-AI/memU) | proactive agent memory framework | file-system metaphor, proactive loop, intent prediction, always-on companion model | good source for when memory should trigger agent behavior, not just retrieval | proactive assistant framing is broader than Paperclip's task-centric control plane |

@@ -77,6 +79,7 @@ These differences are exactly why Paperclip needs a layered contract instead of

### 1. Who owns extraction?

- `mem0`, `supermemory`, and `Memori` expect the provider to infer memories from conversations.
+- `AWS Bedrock AgentCore Memory` supports both provider-managed extraction and self-managed pipelines where the host writes curated long-term memory records.
- `memsearch` expects the host to decide what markdown to write, then indexes it.
- `MemOS`, `memU`, `EverMemOS`, and `OpenViking` sit somewhere in between and often expose richer memory construction pipelines.

@@ -104,6 +107,7 @@ Paperclip should make plain search the minimum contract and richer outputs optio

### 4. Is memory synchronous or asynchronous?

- local tools often work synchronously in-process.
+- `AWS Bedrock AgentCore Memory` is synchronous at the API edge, but its long-term memory path includes background extraction/indexing behavior and retention policies managed by the provider.
- larger systems add schedulers, background indexing, compaction, or sync jobs.

Paperclip needs both direct request/response operations and background maintenance hooks.
@@ -7,10 +7,10 @@ Define a Paperclip memory service and surface API that can sit above multiple me

- company scoping
- auditability
- provenance back to Paperclip work objects
-- budget / cost visibility
+- budget and cost visibility
- plugin-first extensibility

-This plan is based on the external landscape summarized in `doc/memory-landscape.md` and on the current Paperclip architecture in:
+This plan is based on the external landscape summarized in `doc/memory-landscape.md`, the AWS AgentCore comparison captured in [PAP-1274](/PAP/issues/PAP-1274), and the current Paperclip architecture in:

- `doc/SPEC-implementation.md`
- `doc/plugins/PLUGIN_SPEC.md`
@@ -19,23 +19,26 @@ This plan is based on the external landscape summarized in `doc/memory-landscape

## Recommendation In One Sentence

-Paperclip should not embed one opinionated memory engine into core. It should add a company-scoped memory control plane with a small normalized adapter contract, then let built-ins and plugins implement the provider-specific behavior.
+Paperclip should add a company-scoped memory control plane with company default plus agent override resolution, shared hook delivery, and full operation attribution, while leaving extraction and storage semantics to built-ins and plugins.

## Product Decisions

-### 1. Memory is company-scoped by default
+### 1. Memory resolution is company default plus agent override

Every memory binding belongs to exactly one company.

-That binding can then be:
+Resolution order in V1:

-- the company default
-- an agent override
-- a project override later if we need it
+- company default binding
+- optional per-agent override
+
+There is no per-project override in V1.
+
+Project context can still appear in scope and provenance so providers can use it for retrieval and partitioning, but projects do not participate in binding selection.

No cross-company memory sharing in the initial design.

-### 2. Providers are selected by key
+### 2. Providers are selected by stable binding key
Each configured memory provider gets a stable key inside a company, for example:

@@ -44,36 +47,53 @@ Each configured memory provider gets a stable key inside a company, for example:

- `local-markdown`
- `research-kb`

-Agents and services resolve the active provider by key, not by hard-coded vendor logic.
+Agents, tools, and background hooks resolve the active provider by key, not by hard-coded vendor logic.

### 3. Plugins are the primary provider path

Built-ins are useful for a zero-config local path, but most providers should arrive through the existing Paperclip plugin runtime.

-That keeps the core small and matches the current direction that optional knowledge-like systems live at the edges.
+That keeps the core small and matches the broader Paperclip direction that specialized knowledge systems live at the edges.

-### 4. Paperclip owns routing, provenance, and accounting
+### 4. Paperclip owns routing, provenance, and policy

Providers should not decide how Paperclip entities map to governance.

Paperclip core should own:

- binding resolution
- who is allowed to call a memory operation
-- which company / agent / project scope is active
-- what issue / run / comment / document the operation belongs to
-- how usage gets recorded
+- which company, agent, issue, project, run, and subject scope is active
+- what source object the operation belongs to
+- how usage and costs are attributed
- how operators inspect what happened

-### 5. Automatic memory should be narrow at first
+### 5. Paperclip exposes shared hooks, providers own extraction
+
+Paperclip should emit a common set of memory hooks that built-ins, third-party adapters, and plugins can all use.
+
+Those hooks should pass structured Paperclip source objects plus normalized metadata. The provider then decides how to extract from those objects.
+
+Paperclip should not force one extraction pipeline or one canonical "memory text" transform before the provider sees the input.
+
+### 6. Automatic memory should start narrow, but the hook surface should be general

Automatic capture is useful, but broad silent capture is dangerous.

-Initial automatic hooks should be:
+Initial built-in automatic hooks should be:

-- pre-run hydrate for agent context recall
-- post-run capture from agent runs
-- issue comment / document capture when the binding enables it
+- pre-run recall for agent context hydration
+- optional issue comment capture
+- optional issue document capture

Everything else should start explicit.
+The hook registry itself should be general enough that other providers can subscribe to the same events without core changes.

### 7. No approval gate for binding changes in the open-source product

For the open-source version, changing memory bindings should not require approvals.

Paperclip should still log those changes in activity and preserve full auditability. Approval-gated memory governance can remain an enterprise or future policy layer.

## Proposed Concepts
@@ -83,7 +103,7 @@ A built-in or plugin-supplied implementation that stores and retrieves memory.

Examples:

-- local markdown + vector index
+- local markdown plus semantic index
- mem0 adapter
- supermemory adapter
- MemOS adapter
@@ -94,6 +114,15 @@ A company-scoped configuration record that points to a provider and carries prov

This is the object selected by key.

+### Memory binding target
+
+A mapping from a Paperclip target to a binding.
+
+V1 targets:
+
+- `company`
+- `agent`
+
### Memory scope

The normalized Paperclip scope passed into a provider request.

@@ -105,7 +134,9 @@ At minimum:

- optional `projectId`
- optional `issueId`
- optional `runId`
-- optional `subjectId` for external/user identity
+- optional `subjectId` for external or user identity
+- optional `sessionKey` for providers that organize memory around sessions
+- optional `namespace` for providers that need an explicit partition hint

### Memory source reference
@@ -121,24 +152,36 @@ Supported source kinds should include:

- `manual_note`
- `external_document`

+### Memory hook
+
+A normalized trigger emitted by Paperclip when something memory-relevant happens.
+
+Initial hook kinds:
+
+- `pre_run_hydrate`
+- `post_run_capture`
+- `issue_comment_capture`
+- `issue_document_capture`
+- `manual_capture`
+
### Memory operation

-A normalized write, query, browse, or delete action performed through Paperclip.
+A normalized capture, record-write, query, browse, get, correction, or delete action performed through Paperclip.

-Paperclip should log every operation, whether the provider is local or external.
+Paperclip should log every memory operation whether the provider is local, plugin-backed, or external.

## Required Adapter Contract

-The required core should be small enough to fit `memsearch`, `mem0`, `Memori`, `MemOS`, or `OpenViking`.
+The required core should be small enough to fit `memsearch`, `mem0`, `Memori`, `MemOS`, or `OpenViking`, but strong enough to satisfy Paperclip's attribution and inspectability requirements.
```ts
|
||||
export interface MemoryAdapterCapabilities {
|
||||
profile?: boolean;
|
||||
browse?: boolean;
|
||||
correction?: boolean;
|
||||
asyncIngestion?: boolean;
|
||||
multimodal?: boolean;
|
||||
providerManagedExtraction?: boolean;
|
||||
asyncExtraction?: boolean;
|
||||
providerNativeBrowse?: boolean;
|
||||
}
|
||||
|
||||
export interface MemoryScope {
|
||||
@@ -148,6 +191,8 @@ export interface MemoryScope {
|
||||
issueId?: string;
|
||||
runId?: string;
|
||||
subjectId?: string;
|
||||
sessionKey?: string;
|
||||
namespace?: string;
|
||||
}
|
||||
|
||||
export interface MemorySourceRef {
|
||||
@@ -168,10 +213,34 @@ export interface MemorySourceRef {
|
||||
externalRef?: string;
|
||||
}
|
||||
|
||||
export interface MemoryHookContext {
|
||||
hookKind:
|
||||
| "pre_run_hydrate"
|
||||
| "post_run_capture"
|
||||
| "issue_comment_capture"
|
||||
| "issue_document_capture"
|
||||
| "manual_capture";
|
||||
hookId: string;
|
||||
triggeredAt: string;
|
||||
actorAgentId?: string;
|
||||
heartbeatRunId?: string;
|
||||
}
|
||||
|
||||
export interface MemorySourcePayload {
|
||||
text?: string;
|
||||
mimeType?: string;
|
||||
metadata?: Record<string, unknown>;
|
||||
object?: Record<string, unknown>;
|
||||
}
|
||||
|
||||
export interface MemoryUsage {
|
||||
provider: string;
|
||||
biller?: string;
|
||||
model?: string;
|
||||
billingType?: "metered_api" | "subscription_included" | "subscription_overage" | "unknown";
|
||||
attributionMode?: "billed_directly" | "included_in_run" | "external_invoice" | "untracked";
|
||||
inputTokens?: number;
|
||||
cachedInputTokens?: number;
|
||||
outputTokens?: number;
|
||||
embeddingTokens?: number;
|
||||
costCents?: number;
|
||||
@@ -179,20 +248,32 @@ export interface MemoryUsage {
|
||||
details?: Record<string, unknown>;
|
||||
}
|
||||
|
||||
export interface MemoryWriteRequest {
|
||||
bindingKey: string;
|
||||
scope: MemoryScope;
|
||||
source: MemorySourceRef;
|
||||
content: string;
|
||||
metadata?: Record<string, unknown>;
|
||||
mode?: "append" | "upsert" | "summarize";
|
||||
}
|
||||
|
||||
export interface MemoryRecordHandle {
|
||||
providerKey: string;
|
||||
providerRecordId: string;
|
||||
}
|
||||
|
||||
export interface MemoryCaptureRequest {
|
||||
bindingKey: string;
|
||||
scope: MemoryScope;
|
||||
source: MemorySourceRef;
|
||||
payload: MemorySourcePayload;
|
||||
hook?: MemoryHookContext;
|
||||
mode?: "capture_residue" | "capture_record";
|
||||
metadata?: Record<string, unknown>;
|
||||
}
|
||||
|
||||
export interface MemoryRecordWriteRequest {
|
||||
bindingKey: string;
|
||||
scope: MemoryScope;
|
||||
source?: MemorySourceRef;
|
||||
records: Array<{
|
||||
text: string;
|
||||
summary?: string;
|
||||
metadata?: Record<string, unknown>;
|
||||
}>;
|
||||
}
|
||||
|
||||
export interface MemoryQueryRequest {
|
||||
bindingKey: string;
|
||||
scope: MemoryScope;
|
||||
@@ -202,6 +283,14 @@ export interface MemoryQueryRequest {
|
||||
metadataFilter?: Record<string, unknown>;
|
||||
}
|
||||
|
||||
export interface MemoryListRequest {
|
||||
bindingKey: string;
|
||||
scope: MemoryScope;
|
||||
cursor?: string;
|
||||
limit?: number;
|
||||
metadataFilter?: Record<string, unknown>;
|
||||
}
|
||||
|
||||
export interface MemorySnippet {
|
||||
handle: MemoryRecordHandle;
|
||||
text: string;
|
||||
@@ -217,30 +306,149 @@ export interface MemoryContextBundle {
|
||||
usage?: MemoryUsage[];
|
||||
}
|
||||
|
||||
export interface MemoryListPage {
|
||||
items: MemorySnippet[];
|
||||
nextCursor?: string;
|
||||
usage?: MemoryUsage[];
|
||||
}
|
||||
|
||||
export interface MemoryExtractionJob {
|
||||
providerJobId: string;
|
||||
status: "queued" | "running" | "succeeded" | "failed" | "cancelled";
|
||||
hookKind?: MemoryHookContext["hookKind"];
|
||||
source?: MemorySourceRef;
|
||||
error?: string;
|
||||
submittedAt?: string;
|
||||
startedAt?: string;
|
||||
finishedAt?: string;
|
||||
}
|
||||
|
||||
export interface MemoryAdapter {
|
||||
key: string;
|
||||
capabilities: MemoryAdapterCapabilities;
|
||||
write(req: MemoryWriteRequest): Promise<{
|
||||
capture(req: MemoryCaptureRequest): Promise<{
|
||||
records?: MemoryRecordHandle[];
|
||||
jobs?: MemoryExtractionJob[];
|
||||
usage?: MemoryUsage[];
|
||||
}>;
|
||||
upsertRecords(req: MemoryRecordWriteRequest): Promise<{
|
||||
records?: MemoryRecordHandle[];
|
||||
usage?: MemoryUsage[];
|
||||
}>;
|
||||
query(req: MemoryQueryRequest): Promise<MemoryContextBundle>;
|
||||
list(req: MemoryListRequest): Promise<MemoryListPage>;
|
||||
get(handle: MemoryRecordHandle, scope: MemoryScope): Promise<MemorySnippet | null>;
|
||||
forget(handles: MemoryRecordHandle[], scope: MemoryScope): Promise<{ usage?: MemoryUsage[] }>;
|
||||
}
|
||||
```
|
||||
|
||||
This contract intentionally does not force a provider to expose its internal graph, filesystem, or ontology.
|
||||
This contract intentionally does not force a provider to expose its internal graph, file tree, or ontology. It does require enough structure for Paperclip to browse, attribute, and audit what happened.
|
||||
|
||||
## Optional Adapter Surfaces
|
||||
|
||||
These should be capability-gated, not required:
|
||||
|
||||
- `browse(scope, filters)` for file-system / graph / timeline inspection
|
||||
- `correct(handle, patch)` for natural-language correction flows
|
||||
- `profile(scope)` when the provider can synthesize stable preferences or summaries
|
||||
- `sync(source)` for connectors or background ingestion
|
||||
- `listExtractionJobs(scope, cursor)` when async extraction needs richer operator visibility
|
||||
- `retryExtractionJob(jobId)` when a provider supports re-drive
|
||||
- `explain(queryResult)` for providers that can expose retrieval traces
|
||||
- provider-native browse or graph surfaces exposed through plugin UI
|
||||
|
||||
## Lessons From AWS AgentCore Memory API
|
||||
|
||||
AWS AgentCore Memory is a useful check on whether this plan is too abstract or missing important operational surfaces.
|
||||
|
||||
The broad direction still looks right:
|
||||
|
||||
- AWS splits memory into a control plane (`CreateMemory`, `UpdateMemory`, `ListMemories`) and a data plane (`CreateEvent`, `RetrieveMemoryRecords`, `GetMemoryRecord`, `ListMemoryRecords`)
|
||||
- AWS separates raw interaction capture from curated long-term memory records
|
||||
- AWS supports both provider-managed extraction and self-managed pipelines
|
||||
- AWS treats browse and list operations as first-class APIs, not ad hoc debugging helpers
|
||||
- AWS exposes extraction jobs instead of hiding asynchronous maintenance completely
|
||||
|
||||
That lines up with the Paperclip plan at a high level: provider configuration, scoped writes, scoped retrieval, provider-managed extraction as a capability, and a browse and inspect surface.
|
||||
|
||||
The concrete changes Paperclip should take from AWS are:
|
||||
|
||||
### 1. Keep config APIs separate from runtime traffic
|
||||
|
||||
The rollout should preserve a clean separation between:
|
||||
|
||||
- control-plane APIs for binding CRUD, defaults, overrides, and capability metadata
|
||||
- runtime APIs and tools for capture, record writes, query, list, get, forget, and extraction status
|
||||
|
||||
This keeps governance changes distinct from high-volume memory traffic.
|
||||
|
||||
### 2. Distinguish capture from curated record writes
|
||||
|
||||
AWS does not flatten everything into one write primitive. It distinguishes captured events from durable memory records.
|
||||
|
||||
Paperclip should do the same:
|
||||
|
||||
- `capture(...)` for raw run, comment, document, or activity residue
|
||||
- `upsertRecords(...)` for curated durable facts and notes
|
||||
|
||||
That is a better fit for provider-managed extraction and for manual curation flows.
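As a sketch of that split, assuming a toy in-memory provider (everything here is simplified from the adapter contract; a real adapter would also honor scope, source refs, and usage reporting):

```typescript
// Toy in-memory provider illustrating capture vs. curated record writes.
// Simplified from the adapter contract; scope/source/usage are omitted.
type RecordHandle = { providerKey: string; providerRecordId: string };

class ToyMemoryProvider {
  private nextId = 0;
  readonly capturedResidue: string[] = []; // raw events awaiting extraction
  readonly durableRecords = new Map<string, string>(); // curated facts

  // capture(): raw run/comment/document residue; extraction happens later
  async capture(text: string): Promise<{ records: RecordHandle[] }> {
    this.capturedResidue.push(text);
    return { records: [] }; // nothing durable yet
  }

  // upsertRecords(): curated durable facts written directly
  async upsertRecords(texts: string[]): Promise<{ records: RecordHandle[] }> {
    const records = texts.map((text) => {
      const providerRecordId = `rec_${++this.nextId}`;
      this.durableRecords.set(providerRecordId, text);
      return { providerKey: "toy", providerRecordId };
    });
    return { records };
  }
}
```

The asymmetry is the point: `capture` may return jobs instead of handles, while `upsertRecords` always yields durable handles Paperclip can log and backlink.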
|
||||
|
||||
### 3. Make list and browse first-class
|
||||
|
||||
AWS exposes list and retrieve surfaces directly. Paperclip should not make browse optional at the portable layer.
|
||||
|
||||
The minimum portable surface should include:
|
||||
|
||||
- `query`
|
||||
- `list`
|
||||
- `get`
|
||||
|
||||
Provider-native graph or file browsing can remain optional beyond that.
|
||||
|
||||
### 4. Add pagination and cursors for operator inspection
|
||||
|
||||
AWS consistently uses pagination on browse-heavy APIs.
|
||||
|
||||
Paperclip should add cursor-based pagination to:
|
||||
|
||||
- record listing
|
||||
- extraction job listing
|
||||
- memory operation explorer APIs
|
||||
|
||||
Prompt hydration can continue to use `topK`, but operator surfaces need cursors.
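A minimal cursor-draining helper for those operator surfaces might look like this; the page shape mirrors `MemoryListPage`, but the helper itself is an assumption, not part of the contract:

```typescript
// Generic cursor-based pagination drain for operator surfaces.
// The page shape mirrors MemoryListPage; drainAll itself is hypothetical.
type Page<T> = { items: T[]; nextCursor?: string };

async function drainAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>,
): Promise<T[]> {
  const collected: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    collected.push(...page.items);
    cursor = page.nextCursor;
  } while (cursor !== undefined);
  return collected;
}
```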
|
||||
|
||||
### 5. Add explicit session and namespace hints
|
||||
|
||||
AWS uses `actorId`, `sessionId`, `namespace`, and `memoryStrategyId` heavily.
|
||||
|
||||
Paperclip should keep its own control-plane-centric model, but the adapter contract needs obvious places to map those concepts:
|
||||
|
||||
- `sessionKey`
|
||||
- `namespace`
|
||||
|
||||
The provider adapter can map them to AWS or other vendor-specific identifiers without leaking those identifiers into core.
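One possible shape for that adapter-side mapping, with AgentCore-style names on the output side (the defaulting rules here are assumptions, not settled design):

```typescript
// Hypothetical adapter-side mapping from a Paperclip scope to
// AgentCore-style identifiers. Defaulting rules are assumptions.
type ScopeHints = {
  companyId: string;
  agentId?: string;
  sessionKey?: string;
  namespace?: string;
};

function toVendorIdentifiers(scope: ScopeHints) {
  return {
    actorId: scope.agentId ?? scope.companyId,
    sessionId: scope.sessionKey ?? "default",
    namespace: scope.namespace ?? `paperclip/${scope.companyId}`,
  };
}
```

Because the mapping lives in the adapter, nothing like `actorId` ever appears in core tables or core APIs.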
|
||||
|
||||
### 6. Treat asynchronous extraction as a real operational surface
|
||||
|
||||
AWS exposes extraction jobs explicitly. Paperclip should too.
|
||||
|
||||
Operators should be able to see:
|
||||
|
||||
- pending extraction work
|
||||
- failed extraction work
|
||||
- which hook or source caused the work
|
||||
- whether a retry is available
|
||||
|
||||
### 7. Keep Paperclip provenance primary
|
||||
|
||||
Paperclip should continue to center:
|
||||
|
||||
- `companyId`
|
||||
- `agentId`
|
||||
- `projectId`
|
||||
- `issueId`
|
||||
- `runId`
|
||||
- issue comments, documents, and activity as sources
|
||||
|
||||
The lesson from AWS is to support clean mapping into provider-specific models, not to let provider identifiers take over the core product model.
|
||||
|
||||
## What Paperclip Should Persist
|
||||
|
||||
@@ -248,39 +456,67 @@ Paperclip should not mirror the full provider memory corpus into Postgres unless
|
||||
|
||||
Paperclip core should persist:
|
||||
|
||||
- memory bindings and overrides
|
||||
- memory bindings
|
||||
- company default and agent override resolution targets
|
||||
- provider keys and capability metadata
|
||||
- normalized memory operation logs
|
||||
- provider record handles returned by operations when available
|
||||
- source references back to issue comments, documents, runs, and activity
|
||||
- usage and cost data
|
||||
- provider record handles returned by operations when available
|
||||
- hook delivery records and extraction job state
|
||||
- usage and cost attribution
|
||||
|
||||
For external providers, the memory payload itself can remain in the provider.
|
||||
For external providers, the actual memory payload can remain in the provider.
|
||||
|
||||
## Hook Model
|
||||
|
||||
### Automatic hooks
|
||||
### Shared hook surface
|
||||
|
||||
Paperclip should expose one shared hook system for memory.
|
||||
|
||||
That same system must be available to:
|
||||
|
||||
- built-in memory providers
|
||||
- plugin-based memory providers
|
||||
- third-party adapter integrations that want to use memory hooks
|
||||
|
||||
### What a hook delivers
|
||||
|
||||
Each hook delivery should include:
|
||||
|
||||
- resolved binding key
|
||||
- normalized `MemoryScope`
|
||||
- `MemorySourceRef`
|
||||
- structured source payload
|
||||
- hook metadata such as hook kind, trigger time, and related run id
|
||||
|
||||
The payload should include structured objects where possible so the provider can decide how to extract and chunk.
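A concrete delivery for an issue-comment capture might look like the following; all identifiers and the `issue_comment` source kind are hypothetical examples shaped after the contract types:

```typescript
// Hypothetical hook delivery for an issue-comment capture.
// Field names follow the contract types; values are illustrative.
const delivery = {
  bindingKey: "company-default",
  scope: { companyId: "cmp_1", agentId: "agt_9", issueId: "iss_42" },
  source: { kind: "issue_comment", externalRef: "iss_42#comment_7" },
  payload: {
    text: "Customer asked for weekly status updates.",
    mimeType: "text/markdown",
    metadata: { author: "user_3" },
    object: { commentId: "comment_7" }, // structured so the provider can chunk
  },
  hook: {
    hookKind: "issue_comment_capture" as const,
    hookId: "hook_01",
    triggeredAt: "2025-01-01T00:00:00.000Z",
  },
};
```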
|
||||
|
||||
### Initial automatic hooks
|
||||
|
||||
These should be low-risk and easy to reason about:
|
||||
|
||||
1. `pre-run hydrate`
|
||||
1. `pre_run_hydrate`
|
||||
Before an agent run starts, Paperclip may call `query(... intent = "agent_preamble")` using the active binding.
|
||||
|
||||
2. `post-run capture`
|
||||
After a run finishes, Paperclip may write a summary or transcript-derived note tied to the run.
|
||||
2. `post_run_capture`
|
||||
After a run finishes, Paperclip may call `capture(...)` with structured run output, excerpts, and provenance.
|
||||
|
||||
3. `issue comment / document capture`
|
||||
When enabled on the binding, Paperclip may capture selected issue comments or issue documents as memory sources.
|
||||
3. `issue_comment_capture`
|
||||
When enabled on the binding, Paperclip may call `capture(...)` for selected issue comments.
|
||||
|
||||
### Explicit hooks
|
||||
4. `issue_document_capture`
|
||||
When enabled on the binding, Paperclip may call `capture(...)` for selected issue documents.
|
||||
|
||||
These should be tool- or UI-driven first:
|
||||
### Explicit tools and APIs
|
||||
|
||||
These should be tool-driven or UI-driven first:
|
||||
|
||||
- `memory.search`
|
||||
- `memory.note`
|
||||
- `memory.forget`
|
||||
- `memory.correct`
|
||||
- `memory.browse`
|
||||
- memory record list and get
|
||||
- extraction-job inspection
|
||||
|
||||
### Not automatic in the first version
|
||||
|
||||
@@ -309,34 +545,69 @@ The initial browse surface should support:
|
||||
|
||||
- active binding by company and agent
|
||||
- recent memory operations
|
||||
- recent write sources
|
||||
- recent write and capture sources
|
||||
- record list and record detail with source backlinks
|
||||
- query results with source backlinks
|
||||
- filters by agent, issue, run, source kind, and date
|
||||
- provider usage / cost / latency summaries
|
||||
- extraction job status
|
||||
- filters by agent, issue, project, run, source kind, and date
|
||||
- provider usage, cost, and latency summaries
|
||||
|
||||
When a provider supports richer browsing, the plugin can add deeper views through the existing plugin UI surfaces.
|
||||
|
||||
## Cost And Evaluation
|
||||
|
||||
Every adapter response should be able to return usage records.
|
||||
Paperclip should treat memory accounting as a set of related but distinct concerns:
|
||||
|
||||
Paperclip should roll up:
|
||||
### 1. `memory_operations` is the authoritative audit trail
|
||||
|
||||
- memory inference tokens
|
||||
- embedding tokens
|
||||
- external provider cost
|
||||
Every memory action should create a normalized operation record that captures:
|
||||
|
||||
- binding
|
||||
- scope
|
||||
- source provenance
|
||||
- operation type
|
||||
- success or failure
|
||||
- latency
|
||||
- query count
|
||||
- write count
|
||||
- usage details reported by the provider
|
||||
- attribution mode
|
||||
- related run, issue, and agent when available
|
||||
|
||||
It should also record evaluation-oriented metrics where possible:
|
||||
This is where operators answer "what memory work happened and why?"
|
||||
|
||||
### 2. `cost_events` remains the canonical spend ledger for billable metered usage
|
||||
|
||||
The current `cost_events` model is already the canonical cost ledger for token and model spend, and `agent_runtime_state` plus `heartbeat_runs.usageJson` already roll up and summarize run usage.
|
||||
|
||||
The recommendation is:
|
||||
|
||||
- if a memory operation runs inside a normal Paperclip agent heartbeat and the model usage is already counted on that run, do not create a duplicate `cost_event`
|
||||
- instead, store the memory operation with `attributionMode = "included_in_run"` and link it to the related `heartbeatRunId`
|
||||
- if a memory provider makes a direct metered model call outside the agent run accounting path, the provider must report usage and Paperclip should create a `cost_event`
|
||||
- that direct `cost_event` should still link back to the memory operation, agent, company, and issue or run context when possible
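Those rules reduce to a small decision function; a sketch, with hypothetical predicate names and branch logic following the recommendation above:

```typescript
// Sketch of the cost-attribution decision for a memory operation.
// Input flags are hypothetical names; the branches mirror the rules above.
type AttributionMode = "included_in_run" | "billed_directly" | "untracked";

function attributeMemoryOperation(op: {
  heartbeatRunId?: string; // ran inside a normal agent heartbeat
  directMeteredUsage?: boolean; // provider made its own metered model call
}): { mode: AttributionMode; emitCostEvent: boolean } {
  if (op.directMeteredUsage) {
    // direct metered call outside run accounting: create a cost_event
    return { mode: "billed_directly", emitCostEvent: true };
  }
  if (op.heartbeatRunId) {
    // model usage already counted on the run; link it, do not double-bill
    return { mode: "included_in_run", emitCostEvent: false };
  }
  return { mode: "untracked", emitCostEvent: false };
}
```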
|
||||
|
||||
### 3. `finance_events` should carry flat subscription or invoice-style costs
|
||||
|
||||
If a memory service incurs:
|
||||
|
||||
- monthly subscription cost
|
||||
- storage invoices
|
||||
- provider platform charges not tied to one request
|
||||
|
||||
those should be represented as `finance_events`, not as synthetic per-query memory operations.
|
||||
|
||||
That keeps usage telemetry separate from accounting entries like invoices and flat fees.
|
||||
|
||||
### 4. Evaluation metrics still matter
|
||||
|
||||
Paperclip should record evaluation-oriented metrics where possible:
|
||||
|
||||
- recall hit rate
|
||||
- empty query rate
|
||||
- manual correction count
|
||||
- per-binding success / failure counts
|
||||
- extraction failure count
|
||||
- per-binding success and failure counts
|
||||
|
||||
This is important because a memory system that "works" but silently burns budget is not acceptable in Paperclip.
|
||||
This is important because a memory system that "works" but silently burns budget or silently fails extraction is not acceptable in Paperclip.
|
||||
|
||||
## Suggested Data Model Additions
|
||||
|
||||
@@ -344,23 +615,36 @@ At the control-plane level, the likely new core tables are:
|
||||
|
||||
- `memory_bindings`
|
||||
- company-scoped key
|
||||
- provider id / plugin id
|
||||
- provider id or plugin id
|
||||
- config blob
|
||||
- enabled status
|
||||
|
||||
- `memory_binding_targets`
|
||||
- target type (`company`, `agent`, later `project`)
|
||||
- target type (`company`, `agent`)
|
||||
- target id
|
||||
- binding id
|
||||
|
||||
- `memory_operations`
|
||||
- company id
|
||||
- binding id
|
||||
- operation type (`write`, `query`, `forget`, `browse`, `correct`)
|
||||
- operation type (`capture`, `record_upsert`, `query`, `list`, `get`, `forget`, `correct`)
|
||||
- scope fields
|
||||
- source refs
|
||||
- usage / latency / cost
|
||||
- success / error
|
||||
- usage, latency, and attribution mode
|
||||
- related heartbeat run id
|
||||
- related cost event id
|
||||
- success or error
|
||||
|
||||
- `memory_extraction_jobs`
|
||||
- company id
|
||||
- binding id
|
||||
- operation id
|
||||
- provider job id
|
||||
- hook kind
|
||||
- status
|
||||
- source refs
|
||||
- error
|
||||
- submitted, started, and finished timestamps
|
||||
|
||||
Provider-specific long-form state should stay in plugin state or the provider itself unless a built-in local provider needs its own schema.
|
||||
|
||||
@@ -382,45 +666,46 @@ The design should still treat that built-in as just another provider behind the
|
||||
### Phase 1: Control-plane contract
|
||||
|
||||
- add memory binding models and API types
|
||||
- add plugin capability / registration surface for memory providers
|
||||
- add operation logging and usage reporting
|
||||
- add company default plus agent override resolution
|
||||
- add plugin capability and registration surface for memory providers
|
||||
|
||||
### Phase 2: One built-in + one plugin example
|
||||
### Phase 2: Hook delivery and operation audit
|
||||
|
||||
- add shared memory hook emission in core
|
||||
- add operation logging, extraction job state, and usage attribution
|
||||
- add direct-provider cost and finance-event linkage rules
|
||||
|
||||
### Phase 3: One built-in plus one plugin example
|
||||
|
||||
- ship a local markdown-first provider
|
||||
- ship one hosted adapter example to validate the external-provider path
|
||||
|
||||
### Phase 3: UI inspection
|
||||
### Phase 4: UI inspection
|
||||
|
||||
- add company / agent memory settings
|
||||
- add company and agent memory settings
|
||||
- add a memory operation explorer
|
||||
- add record list and detail surfaces
|
||||
- add source backlinks to issues and runs
|
||||
|
||||
### Phase 4: Automatic hooks
|
||||
|
||||
- pre-run hydrate
|
||||
- post-run capture
|
||||
- selected issue comment / document capture
|
||||
|
||||
### Phase 5: Rich capabilities
|
||||
|
||||
- correction flows
|
||||
- provider-native browse / graph views
|
||||
- project-level overrides if needed
|
||||
- provider-native browse or graph views
|
||||
- evaluation dashboards
|
||||
- retention and quota controls
|
||||
|
||||
## Open Questions
|
||||
## Remaining Open Questions
|
||||
|
||||
- Should project overrides exist in V1 of the memory service, or should we force company default + agent override first?
|
||||
- Do we want Paperclip-managed extraction pipelines at all, or should built-ins be the only place where Paperclip owns extraction?
|
||||
- Should memory usage extend the current `cost_events` model directly, or should memory operations keep a parallel usage log and roll up into `cost_events` secondarily?
|
||||
- Do we want provider install / binding changes to require approvals for some companies?
|
||||
- Which built-in local provider should ship first: pure markdown, markdown plus embeddings, or a lightweight local vector store?
|
||||
- How much source payload should Paperclip snapshot inside `memory_operations` for debugging without duplicating large transcripts?
|
||||
- Should correction flows mutate provider state directly, create superseding records, or both depending on provider capability?
|
||||
- What default retention and size limits should the local built-in enforce?
|
||||
|
||||
## Bottom Line
|
||||
|
||||
The right abstraction is:
|
||||
|
||||
- Paperclip owns memory bindings, scopes, provenance, governance, and usage reporting.
|
||||
- Paperclip owns bindings, resolution, hooks, provenance, policy, and attribution.
|
||||
- Providers own extraction, ranking, storage, and provider-native memory semantics.
|
||||
|
||||
That gives Paperclip a stable "memory service" without locking the product to one memory philosophy or one vendor.
|
||||
That gives Paperclip a stable memory service without locking the product to one memory philosophy or one vendor, and it integrates the AWS lessons without importing AWS's model into core.
|
||||
|
||||
@@ -3,6 +3,7 @@
|
||||
"private": true,
|
||||
"type": "module",
|
||||
"scripts": {
|
||||
"preflight:workspace-links": "pnpm exec tsx scripts/ensure-workspace-package-links.ts",
|
||||
"dev": "pnpm --filter @paperclipai/server exec tsx ../scripts/dev-runner.ts watch",
|
||||
"dev:watch": "pnpm --filter @paperclipai/server exec tsx ../scripts/dev-runner.ts watch",
|
||||
"dev:once": "pnpm --filter @paperclipai/server exec tsx ../scripts/dev-runner.ts dev",
|
||||
@@ -10,10 +11,10 @@
|
||||
"dev:stop": "pnpm --filter @paperclipai/server exec tsx ../scripts/dev-service.ts stop",
|
||||
"dev:server": "pnpm --filter @paperclipai/server dev",
|
||||
"dev:ui": "pnpm --filter @paperclipai/ui dev",
|
||||
"build": "pnpm -r build",
|
||||
"typecheck": "pnpm -r typecheck",
|
||||
"test": "vitest",
|
||||
"test:run": "vitest run",
|
||||
"build": "pnpm run preflight:workspace-links && pnpm -r build",
|
||||
"typecheck": "pnpm run preflight:workspace-links && pnpm -r typecheck",
|
||||
"test": "pnpm run preflight:workspace-links && vitest",
|
||||
"test:run": "pnpm run preflight:workspace-links && vitest run",
|
||||
"db:generate": "pnpm --filter @paperclipai/db generate",
|
||||
"db:migrate": "pnpm --filter @paperclipai/db migrate",
|
||||
"secrets:migrate-inline-env": "tsx scripts/migrate-inline-env-secrets.ts",
|
||||
|
||||
126
scripts/ensure-workspace-package-links.ts
Normal file
@@ -0,0 +1,126 @@
|
||||
#!/usr/bin/env -S node --import tsx
|
||||
import fs from "node:fs/promises";
|
||||
import { existsSync, lstatSync, readdirSync, readFileSync, realpathSync } from "node:fs";
|
||||
import path from "node:path";
|
||||
import { repoRoot } from "./dev-service-profile.ts";
|
||||
|
||||
type WorkspaceLinkMismatch = {
|
||||
workspaceDir: string;
|
||||
packageName: string;
|
||||
expectedPath: string;
|
||||
actualPath: string | null;
|
||||
};
|
||||
|
||||
function readJsonFile(filePath: string): Record<string, unknown> {
|
||||
return JSON.parse(readFileSync(filePath, "utf8")) as Record<string, unknown>;
|
||||
}
|
||||
|
||||
function discoverWorkspacePackagePaths(rootDir: string): Map<string, string> {
|
||||
const packagePaths = new Map<string, string>();
|
||||
const ignoredDirNames = new Set([".git", ".paperclip", "dist", "node_modules"]);
|
||||
|
||||
function visit(dirPath: string) {
|
||||
const packageJsonPath = path.join(dirPath, "package.json");
|
||||
if (existsSync(packageJsonPath)) {
|
||||
const packageJson = readJsonFile(packageJsonPath);
|
||||
if (typeof packageJson.name === "string" && packageJson.name.length > 0) {
|
||||
packagePaths.set(packageJson.name, dirPath);
|
||||
}
|
||||
}
|
||||
|
||||
for (const entry of readdirSync(dirPath, { withFileTypes: true })) {
|
||||
if (!entry.isDirectory()) continue;
|
||||
if (ignoredDirNames.has(entry.name)) continue;
|
||||
visit(path.join(dirPath, entry.name));
|
||||
}
|
||||
}
|
||||
|
||||
visit(path.join(rootDir, "packages"));
|
||||
visit(path.join(rootDir, "server"));
|
||||
visit(path.join(rootDir, "ui"));
|
||||
visit(path.join(rootDir, "cli"));
|
||||
|
||||
return packagePaths;
|
||||
}
|
||||
|
||||
function isLinkedGitWorktreeCheckout(rootDir: string) {
|
||||
const gitMetadataPath = path.join(rootDir, ".git");
|
||||
if (!existsSync(gitMetadataPath)) return false;
|
||||
|
||||
const stat = lstatSync(gitMetadataPath);
|
||||
if (!stat.isFile()) return false;
|
||||
|
||||
return readFileSync(gitMetadataPath, "utf8").trimStart().startsWith("gitdir:");
|
||||
}
|
||||
|
||||
if (!isLinkedGitWorktreeCheckout(repoRoot)) {
|
||||
process.exit(0);
|
||||
}
|
||||
|
||||
const workspacePackagePaths = discoverWorkspacePackagePaths(repoRoot);
|
||||
const workspaceDirs = Array.from(
|
||||
new Set(
|
||||
Array.from(workspacePackagePaths.values())
|
||||
.map((packagePath) => path.relative(repoRoot, packagePath))
|
||||
.filter((workspaceDir) => workspaceDir.length > 0),
|
||||
),
|
||||
).sort();
|
||||
|
||||
function findWorkspaceLinkMismatches(workspaceDir: string): WorkspaceLinkMismatch[] {
|
||||
const packageJson = readJsonFile(path.join(repoRoot, workspaceDir, "package.json"));
|
||||
const dependencies = {
|
||||
...(packageJson.dependencies as Record<string, unknown> | undefined),
|
||||
...(packageJson.devDependencies as Record<string, unknown> | undefined),
|
||||
};
|
||||
const mismatches: WorkspaceLinkMismatch[] = [];
|
||||
|
||||
for (const [packageName, version] of Object.entries(dependencies)) {
|
||||
if (typeof version !== "string" || !version.startsWith("workspace:")) continue;
|
||||
|
||||
const expectedPath = workspacePackagePaths.get(packageName);
|
||||
if (!expectedPath) continue;
|
||||
|
||||
const linkPath = path.join(repoRoot, workspaceDir, "node_modules", ...packageName.split("/"));
|
||||
const actualPath = existsSync(linkPath) ? path.resolve(realpathSync(linkPath)) : null;
|
||||
if (actualPath === path.resolve(expectedPath)) continue;
|
||||
|
||||
mismatches.push({
|
||||
workspaceDir,
|
||||
packageName,
|
||||
expectedPath: path.resolve(expectedPath),
|
||||
actualPath,
|
||||
});
|
||||
}
|
||||
|
||||
return mismatches;
|
||||
}
|
||||
|
||||
async function ensureWorkspaceLinksCurrent(workspaceDir: string) {
|
||||
const mismatches = findWorkspaceLinkMismatches(workspaceDir);
|
||||
if (mismatches.length === 0) return;
|
||||
|
||||
console.log(`[paperclip] detected stale workspace package links for ${workspaceDir}; relinking dependencies...`);
|
||||
for (const mismatch of mismatches) {
|
||||
console.log(
|
||||
`[paperclip] ${mismatch.packageName}: ${mismatch.actualPath ?? "missing"} -> ${mismatch.expectedPath}`,
|
||||
);
|
||||
}
|
||||
|
||||
for (const mismatch of mismatches) {
|
||||
const linkPath = path.join(repoRoot, mismatch.workspaceDir, "node_modules", ...mismatch.packageName.split("/"));
|
||||
await fs.mkdir(path.dirname(linkPath), { recursive: true });
|
||||
await fs.rm(linkPath, { recursive: true, force: true });
|
||||
await fs.symlink(mismatch.expectedPath, linkPath);
|
||||
}
|
||||
|
||||
const remainingMismatches = findWorkspaceLinkMismatches(workspaceDir);
|
||||
if (remainingMismatches.length === 0) return;
|
||||
|
||||
throw new Error(
|
||||
`Workspace relink did not repair all ${workspaceDir} package links: ${remainingMismatches.map((item) => item.packageName).join(", ")}`,
|
||||
);
|
||||
}
|
||||
|
||||
for (const workspaceDir of workspaceDirs) {
|
||||
await ensureWorkspaceLinksCurrent(workspaceDir);
|
||||
}
|
||||
110
scripts/paperclip-issue-update.sh
Executable file
@@ -0,0 +1,110 @@
|
||||
#!/usr/bin/env bash
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
usage() {
|
||||
cat <<'EOF'
|
||||
Usage:
|
||||
scripts/paperclip-issue-update.sh [--issue-id ID] [--status STATUS] [--comment TEXT] [--dry-run]
|
||||
|
||||
Reads a multiline markdown comment from stdin when stdin is piped. This preserves
|
||||
newlines when building the JSON payload for PATCH /api/issues/{issueId}.
|
||||
|
||||
Examples:
|
||||
scripts/paperclip-issue-update.sh --issue-id "$PAPERCLIP_TASK_ID" --status in_progress <<'MD'
|
||||
Investigating formatting
|
||||
|
||||
- Pulled the raw comment body
|
||||
- Comparing it with the run transcript
|
||||
MD
```bash
scripts/paperclip-issue-update.sh --issue-id "$PAPERCLIP_TASK_ID" --status done --dry-run <<'MD'
Done

- Fixed the issue update helper
MD
EOF
}

require_command() {
  if ! command -v "$1" >/dev/null 2>&1; then
    printf 'Missing required command: %s\n' "$1" >&2
    exit 1
  fi
}

issue_id="${PAPERCLIP_TASK_ID:-}"
status=""
comment_arg=""
dry_run=0

while [[ $# -gt 0 ]]; do
  case "$1" in
    --issue-id)
      issue_id="${2:-}"
      shift 2
      ;;
    --status)
      status="${2:-}"
      shift 2
      ;;
    --comment)
      comment_arg="${2:-}"
      shift 2
      ;;
    --dry-run)
      dry_run=1
      shift
      ;;
    --help|-h)
      usage
      exit 0
      ;;
    *)
      printf 'Unknown argument: %s\n' "$1" >&2
      usage >&2
      exit 1
      ;;
  esac
done

if [[ -z "$issue_id" ]]; then
  printf 'Missing issue id. Pass --issue-id or set PAPERCLIP_TASK_ID.\n' >&2
  exit 1
fi

comment=""
if [[ -n "$comment_arg" ]]; then
  comment="$comment_arg"
elif [[ ! -t 0 ]]; then
  comment="$(cat)"
fi

require_command jq

payload="$(
  jq -nc \
    --arg status "$status" \
    --arg comment "$comment" \
    '
      (if $status == "" then {} else {status: $status} end) +
      (if $comment == "" then {} else {comment: $comment} end)
    '
)"

if [[ "$dry_run" == "1" ]]; then
  printf '%s\n' "$payload"
  exit 0
fi

if [[ -z "${PAPERCLIP_API_URL:-}" || -z "${PAPERCLIP_API_KEY:-}" || -z "${PAPERCLIP_RUN_ID:-}" ]]; then
  printf 'Missing PAPERCLIP_API_URL, PAPERCLIP_API_KEY, or PAPERCLIP_RUN_ID.\n' >&2
  exit 1
fi

curl -sS -X PATCH \
  "$PAPERCLIP_API_URL/api/issues/$issue_id" \
  -H "Authorization: Bearer $PAPERCLIP_API_KEY" \
  -H "X-Paperclip-Run-Id: $PAPERCLIP_RUN_ID" \
  -H 'Content-Type: application/json' \
  --data-binary "$payload"
```
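The payload construction above can be exercised standalone. A minimal sketch, assuming only `jq` is installed, that mirrors the script's dry-run path:

```shell
#!/bin/sh
# Mirror of the script's payload construction: jq -nc builds a compact
# JSON object, and empty fields are dropped so the PATCH body only
# carries what the caller actually set.
status="done"
comment=$(printf 'Done\n\n- Fixed the issue update helper')

payload="$(
  jq -nc \
    --arg status "$status" \
    --arg comment "$comment" \
    '(if $status == "" then {} else {status: $status} end) +
     (if $comment == "" then {} else {comment: $comment} end)'
)"
printf '%s\n' "$payload"
# → {"status":"done","comment":"Done\n\n- Fixed the issue update helper"}
```

Note how the literal newlines in `$comment` come back JSON-escaped; that is exactly the behavior the multiline-comment guidance elsewhere in this commit relies on.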
```diff
@@ -321,20 +321,6 @@ if ! run_isolated_worktree_init; then
   write_fallback_worktree_config
 fi
 
-disable_seeded_routines() {
-  local company_id="${PAPERCLIP_COMPANY_ID:-}"
-  if [[ -z "$company_id" ]]; then
-    echo "PAPERCLIP_COMPANY_ID not set; skipping routine disable post-step." >&2
-    return 0
-  fi
-
-  if ! run_paperclipai_command routines disable-all --config "$worktree_config_path" --company-id "$company_id"; then
-    echo "paperclipai CLI not available in this workspace; skipping routine disable post-step." >&2
-  fi
-}
-
-disable_seeded_routines
-
 list_base_node_modules_paths() {
   cd "$base_cwd" &&
     find . \
```
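The removed function also documents a guard pattern worth keeping for worktree post-steps: treat missing configuration or tooling as a soft skip, never a hard failure. A minimal sketch (the `maybe_post_step` name and the direct `paperclipai` invocation are illustrative, not the repo's exact helper):

```shell
# Optional post-step guard: return 0 (skip) when the env var or the
# CLI is missing, so worktree init never fails on optional tooling.
maybe_post_step() {
  if [ -z "${PAPERCLIP_COMPANY_ID:-}" ]; then
    echo "PAPERCLIP_COMPANY_ID not set; skipping post-step." >&2
    return 0
  fi
  if ! command -v paperclipai >/dev/null 2>&1; then
    echo "paperclipai CLI not available; skipping post-step." >&2
    return 0
  fi
  paperclipai routines disable-all --company-id "$PAPERCLIP_COMPANY_ID"
}

# With no company id set, this prints the skip message and exits 0.
( unset PAPERCLIP_COMPANY_ID; maybe_post_step )
```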
````diff
@@ -63,6 +63,7 @@ curl -sS "$PAPERCLIP_API_URL/llms/agent-icons.txt" \
 - adapter type
 - optional `desiredSkills` from the company skill library when this role needs installed skills on day one
 - adapter and runtime config aligned to this environment
+- leave timer heartbeats off by default; only set `runtimeConfig.heartbeat.enabled=true` with an `intervalSec` when the role genuinely needs scheduled recurring work or the user explicitly asked for it
 - capabilities
 - run prompt in adapter config (`promptTemplate` where applicable)
 - source issue linkage (`sourceIssueId` or `sourceIssueIds`) when this hire came from an issue
@@ -83,7 +84,7 @@ curl -sS -X POST "$PAPERCLIP_API_URL/api/companies/$PAPERCLIP_COMPANY_ID/agent-h
     "desiredSkills": ["vercel-labs/agent-browser/agent-browser"],
     "adapterType": "codex_local",
     "adapterConfig": {"cwd": "/abs/path/to/repo", "model": "o4-mini"},
-    "runtimeConfig": {"heartbeat": {"enabled": true, "intervalSec": 300, "wakeOnDemand": true}},
+    "runtimeConfig": {"heartbeat": {"enabled": false, "wakeOnDemand": true}},
     "sourceIssueId": "<issue-id>"
   }'
 ```
````
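For an existing payload that still carries a timer heartbeat, the flip to the on-demand default is a one-line `jq` edit. A sketch over the example payload shape above:

```shell
# Rewrite runtimeConfig.heartbeat to the on-demand default: drop
# intervalSec and force enabled=false, preserving wakeOnDemand.
jq -c '.runtimeConfig.heartbeat |= {enabled: false, wakeOnDemand: .wakeOnDemand}' <<'EOF'
{"runtimeConfig":{"heartbeat":{"enabled":true,"intervalSec":300,"wakeOnDemand":true}}}
EOF
# → {"runtimeConfig":{"heartbeat":{"enabled":false,"wakeOnDemand":true}}}
```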
```diff
@@ -136,6 +137,7 @@ Before sending a hire request:
 - Avoid secrets in plain text unless required by adapter behavior.
 - Ensure reporting line is correct and in-company.
 - Ensure prompt is role-specific and operationally scoped.
+- Keep timer heartbeats opt-in. Most hires should rely on assignment/on-demand wakeups unless the job explicitly needs a schedule.
 - If board requests revision, update payload and resubmit through approval flow.
 
 For endpoint payload shapes and full examples, read:
```
```diff
@@ -47,8 +47,7 @@ Request body matches agent create shape:
   },
   "runtimeConfig": {
     "heartbeat": {
-      "enabled": true,
-      "intervalSec": 300,
+      "enabled": false,
       "wakeOnDemand": true
     }
   },
@@ -80,6 +79,7 @@ Response:
 If company setting disables required approval, `approval` is `null` and the agent is created as `idle`.
 
 `desiredSkills` accepts company skill ids, canonical keys, or a unique slug. The server resolves and stores canonical company skill keys.
+Leave timer heartbeats disabled by default. Only set `runtimeConfig.heartbeat.enabled=true` and include an `intervalSec` when the role truly needs scheduled recurring work or the user explicitly requested it.
 
 ## Approval Lifecycle
 
```
````diff
@@ -120,6 +120,17 @@ Headers: X-Paperclip-Run-Id: $PAPERCLIP_RUN_ID
 { "status": "blocked", "comment": "What is blocked, why, and who needs to unblock it." }
 ```
 
+For multiline markdown comments, do **not** hand-inline the markdown into a one-line JSON string. That is how comments get "smooshed" together. Use the helper below or an equivalent `jq --arg` pattern so literal newlines survive JSON encoding:
+
+```bash
+scripts/paperclip-issue-update.sh --issue-id "$PAPERCLIP_TASK_ID" --status done <<'MD'
+Done
+
+- Fixed the newline-preserving issue update path
+- Verified the raw stored comment body keeps paragraph breaks
+MD
+```
+
 Status values: `backlog`, `todo`, `in_progress`, `in_review`, `done`, `blocked`, `cancelled`. Priority values: `critical`, `high`, `medium`, `low`. Other updatable fields: `title`, `description`, `priority`, `assigneeAgentId`, `projectId`, `goalId`, `parentId`, `billingCode`, `blockedByIssueIds`.
 
 **Step 9 — Delegate if needed.** Create subtasks with `POST /api/companies/{companyId}/issues`. Always set `parentId` and `goalId`. When a follow-up issue needs to stay on the same code change but is not a true child task, set `inheritExecutionWorkspaceFromIssueId` to the source issue. Set `billingCode` for cross-team work.
````
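To see why the helper matters, compare naive string interpolation with the `jq --arg` path it wraps. A sketch, with `jq` as the only dependency:

```shell
comment="$(printf 'Done\n\n- item one\n- item two')"

# Wrong: raw newlines inside a JSON string literal are invalid JSON,
# so this pipeline fails to parse.
printf '{"comment": "%s"}' "$comment" | jq . >/dev/null 2>&1 || echo "invalid JSON"
# → invalid JSON

# Right: --arg JSON-encodes the newlines, so the stored markdown keeps
# its paragraph and list breaks.
jq -nc --arg comment "$comment" '{comment: $comment}'
# → {"comment":"Done\n\n- item one\n- item two"}
```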
````diff
@@ -303,6 +314,20 @@ Never leave bare ticket ids in issue descriptions or comments when a clickable i
 
 Do NOT use unprefixed paths like `/issues/PAP-123` or `/agents/cto` — always include the company prefix.
 
+**Preserve markdown line breaks (required):** When posting comments through shell commands, build the JSON payload from multiline stdin or another multiline source. Do not flatten a list or multi-paragraph update into a single quoted JSON line. Preferred helper:
+
+```bash
+scripts/paperclip-issue-update.sh --issue-id "$PAPERCLIP_TASK_ID" --status in_progress <<'MD'
+Investigating comment formatting
+
+- Pulled the raw stored comment body
+- Compared it with the run's final assistant message
+- Traced whether the flattening happened before or after the API call
+MD
+```
+
+If you cannot use the helper, use `jq -n --arg comment "$comment"` with `comment` read from a heredoc or file. Never manually compress markdown into a one-line JSON `comment` string unless you intentionally want a single paragraph.
+
 Example:
 
 ```md
````
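The `jq -n --arg` fallback described above looks like this in full. A sketch of building the request body only; the PATCH call itself is unchanged:

```shell
# Read the multiline markdown from a quoted heredoc, then let jq do
# all JSON escaping; literal newlines survive into the payload.
comment="$(cat <<'MD'
Investigating comment formatting

- Pulled the raw stored comment body
MD
)"
jq -nc --arg comment "$comment" '{status: "in_progress", comment: $comment}'
# → {"status":"in_progress","comment":"Investigating comment formatting\n\n- Pulled the raw stored comment body"}
```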
```diff
@@ -616,6 +616,7 @@ POST /api/companies/{companyId}/agent-hires
 If company policy requires approval, the new agent is created as `pending_approval` and a linked `hire_agent` approval is created automatically.
 
 **Do NOT** request hires unless you are a manager or CEO. IC agents should ask their manager.
+Leave timer heartbeats off by default for new hires. Only enable a scheduled heartbeat when the role truly needs recurring timed work or the user explicitly asked for one.
 
 Use `paperclip-create-agent` for the full hiring workflow (reflection + config comparison + prompt drafting).
```