Compare commits

...

205 Commits

Author SHA1 Message Date
dotta
528f836e71 fix: use origin for github release creation in actions 2026-03-18 09:10:00 -05:00
Dotta
78c714c29a Merge pull request #1221 from paperclipai/fix/workspace-warnings-stdout
Log workspace warnings to stdout
2026-03-18 08:38:42 -05:00
dotta
88da68d8a2 Log workspace warnings to stdout
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-18 08:32:59 -05:00
Dotta
0d9fabb6ec Merge pull request #1220 from paperclipai/release-note-2026-318-0
Add 2026.318.0 release notes stub
2026-03-18 08:31:51 -05:00
dotta
ff16ff8d01 Add 2026.318.0 release notes stub 2026-03-18 08:30:59 -05:00
Dotta
493b0ca8d1 Merge pull request #1216 from paperclipai/split/release-smoke-calver
Release smoke workflow and CalVer patch-slot updates
2026-03-18 08:07:32 -05:00
Dotta
7730230aa9 Merge pull request #1217 from paperclipai/split/ui-onboarding-inbox-agent-details
Inbox, agent detail, and onboarding polish
2026-03-18 08:04:57 -05:00
dotta
2c05c2c0ac test: harden onboarding route coverage 2026-03-18 08:00:02 -05:00
dotta
cc1620e4fe chore: hide agents skills tab from UI
Comment out the Skills tab entry, view rendering, and breadcrumb
in AgentDetail so it's not visible. The code is preserved for
later re-enablement.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-18 07:59:50 -05:00
dotta
3e88afb64a fix: remove session compaction card from agent configuration
No adapters currently support configuring compaction, so the info box
adds complexity without utility. Will revisit at the adapter level.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-18 07:59:50 -05:00
dotta
3562cca743 feat: hide bootstrap prompt config unless agent already has one
Only show the bootstrap prompt field on the agent configuration page
when editing an existing agent that already has a bootstrapPromptTemplate
value set. Label it as "(legacy)" with an amber notice recommending
migration to prompt template or instructions file. Hidden entirely
for new agent creation.

Closes PAP-536

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-18 07:59:50 -05:00
dotta
9a4135c288 fix: use proper Tooltip component for sidebar version hover
The native title attribute tooltip was not working reliably. Switched
to the project's Radix-based Tooltip component which provides instant,
styled tooltips on hover.

Fixes PAP-533

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-18 07:59:50 -05:00
dotta
7140090d0b Align approval inbox icon with issue status column
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-18 07:59:50 -05:00
dotta
bfb1960703 fix: show only 'v' in sidebar with full version on hover tooltip
The full version string was pushing the sidebar too wide. Now displays
just "v" with the full version (e.g. "v1.2.3") shown on hover via
title attribute, for both mobile and desktop sidebar layouts.

Fixes PAP-533

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-18 07:59:50 -05:00
dotta
22ae70649b Mix approvals into inbox activity feed
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-18 07:59:50 -05:00
dotta
c121f4d4a7 Fix inbox recent visibility
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-18 07:59:50 -05:00
dotta
19f4a78f4a feat: add release smoke workflow 2026-03-18 07:59:32 -05:00
dotta
3e0e15394a chore: switch release calver to mdd patch 2026-03-18 07:57:36 -05:00
Dotta
f598a556dc Merge pull request #1166 from paperclipai/fix/canary-version-after-partial-publish
fix: advance canary after partial publish
2026-03-17 16:37:58 -05:00
dotta
21f7adbe45 fix: advance canary after partial publish 2026-03-17 16:31:38 -05:00
Dotta
9df7fd019f Merge pull request #1162 from paperclipai/fix/npm-provenance-package-metadata
fix: add npm provenance package metadata
2026-03-17 16:02:36 -05:00
dotta
0036f0e9a1 fix: add npm provenance package metadata 2026-03-17 16:01:48 -05:00
Dotta
f499c9e222 Merge pull request #1145 from wesseljt/fix/opencode-home-env
fix(opencode-local): resolve HOME from os.userInfo() for model discovery
2026-03-17 15:40:13 -05:00
Dotta
b59bc9a6de Merge pull request #1159 from paperclipai/release-automation-followups
fix: validate canary release path in CI
2026-03-17 15:39:50 -05:00
dotta
5cf841283a fix: correct codeowners maintainer handle 2026-03-17 15:38:03 -05:00
repro
9176218d16 fix: validate canary release path in CI 2026-03-17 15:35:59 -05:00
github-actions[bot]
42c0ca669b chore(lockfile): refresh pnpm-lock.yaml (#900)
Co-authored-by: lockfile-bot <lockfile-bot@users.noreply.github.com>
2026-03-17 15:08:56 -05:00
Dotta
9acf70baab Merge pull request #1154 from paperclipai/release-automation-followups
Release automation follow-ups
2026-03-17 15:07:52 -05:00
Dotta
62e8fd494f chore: expand github codeowners coverage 2026-03-17 15:03:18 -05:00
Dotta
3921466aae chore: auto-merge lockfile refresh PRs 2026-03-17 15:02:16 -05:00
Dotta
f1a0460105 fix: reset lockfile changes before release publish 2026-03-17 14:53:23 -05:00
Dotta
773f8dcf6d Merge pull request #1151 from paperclipai/release-automation-calver
Automate canary/stable releases with CalVer
2026-03-17 14:46:26 -05:00
Dotta
824a297c73 fix: drop accidental agent prompt tab changes 2026-03-17 14:45:22 -05:00
Dotta
4d8c988dab fix: use one workflow for npm trusted publishing 2026-03-17 14:18:42 -05:00
Dotta
48326da83f fix: restore release script executable bit 2026-03-17 14:09:16 -05:00
Dotta
21c1235277 chore: automate canary and stable releases 2026-03-17 14:08:55 -05:00
Dotta
7b9718cbaa docs: plan memory service surface API
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 12:07:14 -05:00
John Wessel
5965266cb8 fix: guard os.userInfo() for UID-only containers, exclude HOME from cache key
Address Greptile review feedback:

1. Wrap os.userInfo() in try/catch — it throws SystemError when the
   current UID has no /etc/passwd entry (e.g. `docker run --user 1234`
   with a minimal image). Falls back to process.env.HOME gracefully.

2. Add HOME to VOLATILE_ENV_KEY_EXACT so the discovery cache key is
   not affected by the caller-supplied HOME vs the resolved HOME.
   os.userInfo().homedir is constant for the process lifetime, so
   HOME adds no useful cache differentiation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 13:05:23 -04:00
John Wessel
2aa607c828 fix(opencode-local): resolve HOME from os.userInfo() for model discovery
When Paperclip's server is started via `runuser -u node` (common in
Docker/Fly.io deployments), the HOME environment variable retains the
parent process's value (e.g. /root) instead of the target user's home
directory (/home/node). This causes `opencode models` to miss provider
auth credentials stored under the actual user's home, resulting in
"Configured OpenCode model is unavailable" errors for providers that
require API keys (e.g. zai/zhipuai).

Fix: use `os.userInfo().homedir` (reads from /etc/passwd, not env) to
ensure the child process always sees the correct HOME, regardless of
how the server was launched.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-17 12:58:13 -04:00
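Taken together, these two commits describe one small pattern: resolve HOME from the passwd database, but guard against UID-only containers. A minimal sketch, with the spawn call site as an assumption:

```ts
import os from "node:os";
import { spawn } from "node:child_process";

// Resolve HOME from the passwd database rather than the inherited env.
// os.userInfo() throws when the current UID has no passwd entry (e.g.
// `docker run --user 1234`), so fall back to process.env.HOME.
function resolveHome(): string | undefined {
  try {
    return os.userInfo().homedir;
  } catch {
    return process.env.HOME;
  }
}

// The child process then sees the correct HOME regardless of how the
// server itself was launched (runuser, Docker, etc.).
const child = spawn("opencode", ["models"], {
  env: { ...process.env, HOME: resolveHome() },
});
```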
Dotta
71de1c5877 Fix HTML entities appearing in copied issue text
Decode HTML entities (e.g. &#x20;) from title and description
before copying to clipboard, and trim trailing whitespace.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 11:18:55 -05:00
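One common way to decode entities in the browser is to let the DOM parser do the work; a sketch, not the project's actual helper, assuming title and description strings are in scope:

```ts
// Decode entities such as &#x20; by round-tripping through a detached
// textarea, which parses markup without ever executing it.
function decodeHtmlEntities(text: string): string {
  const el = document.createElement("textarea");
  el.innerHTML = text;
  return el.value;
}

const copied = `${decodeHtmlEntities(title)}\n\n${decodeHtmlEntities(description)}`.trimEnd();
await navigator.clipboard.writeText(copied);
```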
Dotta
cd67bf1d3d Add copy-to-clipboard button on issue detail header
Adds a copy icon button to the left of the properties panel toggle
on the issue detail page. Clicking it copies a markdown representation
of the issue (identifier, title, description) to the clipboard and
shows a success toast. The icon briefly switches to a checkmark for
visual feedback.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 11:12:56 -05:00
Dotta
2a15650341 feat: reorganize agent detail tabs and add Prompts tab
Rearrange tabs to: Dashboard, Prompts, Skills, Configuration, Budget.
Move Prompt Template out of Configuration into a dedicated Prompts tab
with its own save/cancel flow and dirty tracking.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 11:12:04 -05:00
Dotta
eaa7d54cb4 Merge pull request #899 from paperclipai/paperclip-subissues
Advanced Workspace Support
2026-03-17 10:37:32 -05:00
Dotta
2a56f4134e Harden workspace cleanup and clone env handling 2026-03-17 10:29:44 -05:00
Dotta
b8a816ff8c Hide issue work product outputs 2026-03-17 10:21:11 -05:00
Dotta
108792ac51 Address remaining Greptile workspace review 2026-03-17 10:12:44 -05:00
Dotta
4a5f6ec00a Merge remote-tracking branch 'public-gh/master' into paperclip-subissues
* public-gh/master:
  hide version until loaded
  add app version label
2026-03-17 09:54:30 -05:00
Dotta
549e3b22e5 Merge pull request #1096 from saishankar404/feat/ui-version-label
add app version label
2026-03-17 09:50:04 -05:00
Dotta
b2f7252b27 Fix empty space in new issue pane when workspaces disabled
The execution workspace section wrapper div was rendered whenever a
project was selected, even when experimental workspaces were off.
The empty div's py-3 padding caused a visible layout bump. Now the
entire block only renders when currentProjectSupportsExecutionWorkspace
is true, and the redundant inner conditional is removed.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 09:49:12 -05:00
Dotta
6ebfd3ccf1 Merge public-gh/master into paperclip-subissues 2026-03-17 09:42:31 -05:00
Dotta
4da13984e2 Add workspace operation tracking and fix project properties JSX 2026-03-17 09:36:35 -05:00
Dotta
d974268e37 Merge pull request #1132 from paperclipai/dotta-march-17-updates
dottas-march-17-updates
2026-03-17 09:33:57 -05:00
Dotta
2c747402a8 docs: add PR thinking path guidance to contributing 2026-03-17 09:31:21 -05:00
Dotta
e39ae5a400 Add instance experimental setting for isolated workspaces
Introduce a singleton instance_settings store and experimental settings API, add the Experimental instance settings page, and gate execution workspace behavior behind the new enableIsolatedWorkspaces flag.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 09:24:28 -05:00
Dotta
4d9769c620 fix: address review feedback on skills and session compaction 2026-03-17 09:21:44 -05:00
Dotta
4323d4bbda feat: add agent skills tab and local dev helpers 2026-03-17 09:11:37 -05:00
Dotta
5a9a4170e8 Remove box border from execution workspace toggle in issue properties panel
Same styling fix as NewIssueDialog — removes the rounded-md border
from the workspace toggle in the issue detail view.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 09:10:46 -05:00
Dotta
cebd62cbb7 Remove box border and add vertical margin to execution workspace toggle in new issue dialog
Removes the rounded border around the execution workspace toggle section
and increases top/bottom padding for better visual spacing.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 09:10:46 -05:00
Dotta
bba36ab4c0 fix: convert archivedAt string to Date before Drizzle update
The zod schema validates archivedAt as a datetime string, but Drizzle's
timestamp column expects a Date object. The string was passed directly to
db.update(), causing a 500 error. Now we convert the string to a Date
in the route handler before calling the service.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 09:10:46 -05:00
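The shape of the fix, sketched with stand-in identifiers (db and the projects table represent the real Drizzle schema):

```ts
import { z } from "zod";
import { eq } from "drizzle-orm";

declare const db: any; // Drizzle database instance (stand-in)
declare const projects: any; // Drizzle table definition (stand-in)

const archiveBody = z.object({ archivedAt: z.string().datetime() });

async function archiveProject(projectId: string, raw: unknown) {
  const body = archiveBody.parse(raw);
  await db
    .update(projects)
    .set({ archivedAt: new Date(body.archivedAt) }) // Date, not the raw string
    .where(eq(projects.id, projectId));
}
```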
Dotta
fee3df2e62 Make session compaction adapter-aware
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 09:10:46 -05:00
Dotta
2539950ad7 fix: add two newlines after image drop/paste in markdown editor
When dragging or pasting an image into a markdown editor field, the cursor
would end up right next to the image making it hard to continue typing.
Now inserts two newlines after the image so a new paragraph is ready.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 09:10:46 -05:00
Dotta
d06cbb84f4 fix: increase top margin on markdown headers for better visual separation
Headers were butting up against previous paragraphs too closely. Changed
rendered markdown header selectors from :where() to direct element selectors
to increase CSS specificity and beat Tailwind prose defaults. Bumped
margin-top from 1.15rem to 1.75rem. Also added top margins to MDXEditor
headers (h1: 1.4em, h2: 1.3em, h3: 1.2em) which previously had none.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 09:10:46 -05:00
Dotta
7c2f015f31 fix: replace window.confirm with inline confirmation for archive project
Swap the browser alert dialog for an in-page confirm/cancel button pair.
Shows a loading spinner while the archive request is in flight, then the
redirect and toast (from prior commit) handle the rest.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 09:10:46 -05:00
Dotta
b2072518e0 fix: hide sticky save bar on non-config tabs to prevent layout push
The sticky float-right save/cancel bar was rendering (invisible via
opacity-0) on all tabs including runs, causing it to push the runs
layout content. Now only rendered when showConfigActionBar is true.
Also reverts the negative margin workaround from the previous attempt.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 09:10:46 -05:00
Dotta
10e37dd7e5 fix: remove "None" text from empty goals, add padding to + goal button
When no goals are linked, hide the "None" label and just show the
"+ Goal" button. When goals exist, add left margin to the button so
it doesn't crowd the goal badges.

Closes PAP-522

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 09:10:46 -05:00
Dotta
0fd7ed84fb fix: archive project navigates to dashboard and shows toast
Previously archiving a project silently navigated to /projects with no
feedback. Now it navigates to /dashboard and shows a success toast for
both archive and unarchive actions.

Closes PAP-521

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 09:10:45 -05:00
Dotta
54a28cc5b9 fix: remove unnecessary right padding on runs page
Use negative right margin to counteract the Layout container padding,
giving the runs detail panel more horizontal space especially on
smaller screens.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 09:10:45 -05:00
Dotta
132e2bd0d9 feat: cache project tab per-project and rename/reorder tabs
- Rename "List" tab to "Issues" and reorder: Issues, Overview, Configuration
- Cache the last active tab per project in localStorage
- On revisit, restore the cached tab instead of always defaulting to Issues

Closes PAP-520

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-17 09:10:45 -05:00
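The per-project tab cache is small enough to sketch; the storage key shape and default tab name are assumptions:

```ts
const tabKey = (projectId: string) => `paperclip.project-tab.${projectId}`;

export function readCachedTab(projectId: string): string {
  // Fall back to the new default tab when nothing is cached yet.
  return localStorage.getItem(tabKey(projectId)) ?? "issues";
}

export function cacheTab(projectId: string, tab: string) {
  localStorage.setItem(tabKey(projectId), tab);
}
```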
Dotta
6c779fbd48 Improve workspace path wrapping with natural break points
- Replace break-all with <wbr> hints after / and - characters so paths
  break at directory and word boundaries instead of mid-word
- Use overflow-wrap: anywhere as fallback for very long segments
- Apply natural breaking to the workspace name link as well
- Rename CopyablePath to CopyableValue with optional mono prop for
  better semantic clarity

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 08:53:19 -05:00
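A sketch of the <wbr> interleaving in React, assuming the component receives the raw path as a prop:

```tsx
import React from "react";

// Split after "/" and "-" so the browser may break lines at directory
// and word boundaries; overflow-wrap handles pathological segments.
export function BreakablePath({ path }: { path: string }) {
  const parts = path.split(/(?<=[/-])/);
  return (
    <span style={{ overflowWrap: "anywhere" }}>
      {parts.map((part, i) => (
        <React.Fragment key={i}>
          {part}
          {i < parts.length - 1 && <wbr />}
        </React.Fragment>
      ))}
    </span>
  );
}
```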
Dotta
e6269e5817 Add wrapping paths with copy-to-clipboard in workspace properties
- Replace truncated paths with wrapping text (break-all) so full paths
  are visible
- Add CopyablePath component with a copy icon that appears on hover and
  shows a green checkmark after copying
- Apply to all workspace paths: cwd, branch name, repo URL, and the
  project primary workspace fallback

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 08:08:14 -05:00
Dotta
be9dac6723 Show workspace path and details in issue properties pane
Display the absolute cwd path, branch name, and repo URL for the current
execution workspace in the issue properties panel. When no execution
workspace is active yet, show the project's primary workspace path as
a fallback.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 08:00:39 -05:00
Dotta
ce105d32c3 Simplify execution workspace chooser and deduplicate reusable workspaces
- Remove "Operator branch" and "Agent default" options from the workspace
  mode dropdown in both NewIssueDialog and IssueProperties, keeping only
  "Project default", "New isolated workspace", and "Reuse existing workspace"
- Deduplicate reusable workspaces by space identity (cwd path) so the
  "choose existing workspace" dropdown shows unique worktrees instead of
  duplicate entries from multiple issues sharing the same local folder

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-17 07:46:40 -05:00
Sai Shankar
8abfe894e3 hide version until loaded 2026-03-17 09:47:19 +05:30
Sai Shankar
02bf0dd862 add app version label 2026-03-17 09:40:07 +05:30
Dotta
88bf1b23a3 Harden embedded postgres adoption on startup 2026-03-16 21:03:05 -05:00
Dotta
8d5af56fc5 Fix Greptile workspace review issues 2026-03-16 20:12:22 -05:00
Dotta
6dd4cc2840 Verify embedded postgres adoption data dir 2026-03-16 20:01:31 -05:00
Dotta
794ba59bb6 Merge remote-tracking branch 'public-gh/master' into paperclip-subissues
* public-gh/master:
  fix(plugins): address Greptile feedback on testing.ts
  feat(plugins): add document CRUD methods to Plugin SDK
2026-03-16 19:51:09 -05:00
Dotta
1309cc449d Fix trash button on repo when workspace has no local folder
clearRepoWorkspace was calling updateWorkspace to null out the repo
even when there was no local folder, leaving an empty workspace.
Now falls through to persistCodebase which correctly removes the
entire workspace when both cwd and repoUrl would be null.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 19:38:46 -05:00
Dotta
81b4e4f826 Fix workspace form not refreshing when project accessed via URL key
The invalidateProject() in ProjectProperties only invalidated
queryKeys.projects.detail(project.id) (UUID), but the parent
ProjectDetail query uses routeProjectRef which is typically the
URL key (e.g. "openclaw-testing"). This meant mutations succeeded
but the parent query was never refetched — the UI only updated on
page refresh.

Now also invalidates via project.urlKey so both UUID and URL-key
based query caches are cleared.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 19:34:46 -05:00
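With TanStack Query, the double invalidation looks roughly like this; the query key shapes are assumptions:

```ts
import { QueryClient } from "@tanstack/react-query";

declare const queryClient: QueryClient; // provided by the app (stand-in)

async function invalidateProject(project: { id: string; urlKey: string }) {
  // Clear both cache entries so the detail view refetches whether it was
  // keyed by UUID or by URL key.
  await Promise.all([
    queryClient.invalidateQueries({ queryKey: ["projects", "detail", project.id] }),
    queryClient.invalidateQueries({ queryKey: ["projects", "detail", project.urlKey] }),
  ]);
}
```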
Dotta
3456808e1c Fix workspace codebase form not allowing empty saves and not auto-updating
- Allow saving empty values to clear repo URL or local folder from an existing workspace
- submitLocalWorkspace/submitRepoWorkspace now handle empty input as a "clear" operation
- Save button is only disabled for empty input when there's no existing workspace to clear
- removeWorkspace.onSuccess now resets form state (matching create/update handlers)

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 19:19:38 -05:00
Dotta
e079b8ebcf Remove "None" text from empty goals and add padding to + Goal button
- When no goals are linked, just show the "+ Goal" button without
  displaying "None" text
- Add left margin to the "+ Goal" button when goals exist above it
  for better visual spacing

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 18:47:29 -05:00
Dotta
9e6cc0851b Fix archive project appearing to do nothing
The archive mutation lacked user feedback:
- No toast notification on success/failure
- Navigated to /projects instead of /dashboard after archiving
- No error handling if the API call failed

Added success/error toasts using the existing ToastContext, navigate
to /dashboard after archiving (matching user expectations), and error
toast on failure.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 18:46:26 -05:00
Dotta
7e4aec9379 Remove the experimental workspace toggle
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-16 18:37:59 -05:00
Dotta
4220d6e057 Hide project execution workspace config for now
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-16 18:27:27 -05:00
Dotta
bb788d8360 Treat Codex bootstrap logs as stdout
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-16 18:26:36 -05:00
Dotta
04ceb1f619 Fix codebase Save button not closing form after update
The updateWorkspace mutation's onSuccess handler only invalidated the
project query but didn't reset the form state (close the edit mode,
clear inputs). This made it look like Save did nothing when editing an
existing workspace. Now matches createWorkspace's onSuccess behavior.

Also added updateWorkspace.isPending to the Save button disabled state
for both local folder and repo inputs.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-16 18:26:06 -05:00
Dotta
bb7d1b2c71 Merge remote-tracking branch 'public-gh/master' into paperclip-subissues
* public-gh/master:
  Fix budget incident resolution edge cases
  Fix agent budget tab routing
  Fix budget auth and monthly spend rollups
  Harden budget enforcement and migration startup
  Add budget tabs and sidebar budget indicators
  feat(costs): add billing, quota, and budget control plane
  refactor(quota): move provider quota logic into adapter layer, add unit tests
  fix(costs): replace non-null map assertions with nullish coalescing, clarify weekData guard
  fix(costs): guard byProject against duplicate null keys, memoize ProviderQuotaCard row aggregations
  fix(costs): align byAgent run filter to startedAt, tighten providerTabItems memo deps, stabilize byProject row keys
  feat(costs): add agent model breakdown, harden date validation, sync CostByProject type, fix quota threshold and tab-gated queries
  fix(costs): harden company auth check, fix frozen date memo, hide empty quota rows
  fix(costs): guard routes, fix DST ranges, sync provider state, wire live updates
  feat(costs): consolidate /usage into /costs with Spend + Providers tabs
  feat(usage): add subscription quota windows per provider on /usage page
  address greptile review: per-provider deficit notch, startedAt filter, weekRange refresh, deduplicate providerDisplayName
  feat(ui): add resource and usage dashboard (/usage route)

# Conflicts:
#	packages/db/src/migration-runtime.ts
#	packages/db/src/migrations/meta/0031_snapshot.json
#	packages/db/src/migrations/meta/_journal.json
2026-03-16 17:19:55 -05:00
Dotta
eb113bff3d Merge pull request #1074 from residentagent/osc/940-plugin-sdk-document-crud
feat(plugins): add document CRUD methods to Plugin SDK
2026-03-16 17:19:24 -05:00
Dotta
3d01217aef Fix legacy migration reconciliation 2026-03-16 17:03:23 -05:00
Justin Miller
56985a320f fix(plugins): address Greptile feedback on testing.ts
Remove unnecessary `as any` casts on capability strings (now valid
PluginCapability members) and add company-membership guards to match
production behavior in plugin-host-services.ts.
2026-03-16 16:01:00 -06:00
Justin Miller
0d4dd50b35 feat(plugins): add document CRUD methods to Plugin SDK
Wire issue document list/get/upsert/delete operations through the
JSON-RPC protocol so plugins can manage issue documents with the same
capabilities available via the REST API.

Fixes #940
2026-03-16 15:53:50 -06:00
Dotta
c578fb1575 Merge pull request #949 from paperclipai/feature/upgraded-costs-and-budgeting
feat(costs): add billing, quota, and budget control plane
2026-03-16 16:52:57 -05:00
Dotta
8fbbc4ada6 Fix budget incident resolution edge cases 2026-03-16 16:48:13 -05:00
Dotta
0c121b856f Merge remote-tracking branch 'public-gh/master' into paperclip-subissues
* public-gh/master: (51 commits)
  Use attachment-size limit for company logos
  Address Greptile company logo feedback
  Drop lockfile from PR branch
  Use asset-backed company logos
  fix: use appType "custom" for Vite dev server so worktree branding is applied
  docs: fix documentation drift — adapters, plugins, tech stack
  docs: update documentation for accuracy after plugin system launch
  chore: ignore superset artifacts
  Dark theme for CodeMirror code blocks in MDXEditor
  Remove duplicate @paperclipai/adapter-openclaw-gateway in server/package.json
  Fix code block styles with robust prose overrides
  Add Docker setup for untrusted PR review in isolated containers
  Fix org chart canvas height to fit viewport without scrolling
  Add doc-maintenance skill for periodic documentation accuracy audits
  Fix sidebar scrollbar: hide track background when not hovering
  Restyle markdown code blocks: dark background, smaller font, compact padding
  Add archive project button and filter archived projects from selectors
  fix: address review feedback — subscription cleanup, filter nullability, stale diagram
  fix: wire plugin event subscriptions from worker to host
  fix(ui): hide scrollbar track background when sidebar is not hovered
  ...

# Conflicts:
#	packages/db/src/migrations/meta/0030_snapshot.json
#	packages/db/src/migrations/meta/_journal.json
2026-03-16 16:02:37 -05:00
Dotta
1990b29018 Fix agent budget tab routing 2026-03-16 16:02:21 -05:00
Dotta
9b7b90521f Redesign project codebase configuration 2026-03-16 15:56:37 -05:00
Dotta
728d9729ed Fix budget auth and monthly spend rollups 2026-03-16 15:41:48 -05:00
Dotta
5f2c2ee0e2 Harden budget enforcement and migration startup 2026-03-16 15:11:34 -05:00
Dotta
411952573e Add budget tabs and sidebar budget indicators 2026-03-16 15:11:01 -05:00
Dotta
76e6cc08a6 feat(costs): add billing, quota, and budget control plane 2026-03-16 15:11:01 -05:00
Sai Shankar
656b4659fc refactor(quota): move provider quota logic into adapter layer, add unit tests
- Extract all Anthropic credential/API logic into claude-local/src/server/quota.ts
- Extract all OpenAI/WHAM credential/API logic into codex-local/src/server/quota.ts
- Add optional getQuotaWindows() to ServerAdapterModule in adapter-utils
- Rewrite quota-windows.ts as a 29-line thin aggregator with zero provider knowledge
- Wire getQuotaWindows into adapter registry for claude-local and codex-local
- Add 47 unit tests covering toPercent, secondsToWindowLabel, WHAM normalization,
  readClaudeToken, readCodexToken, fetchClaudeQuota, fetchCodexQuota, fetchWithTimeout
- Add 8 unit tests covering parseDateRange validation and byProvider pro-rata math

Adding a third provider now requires only touching that provider's adapter.
2026-03-16 15:08:54 -05:00
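The aggregator shape this describes can be sketched as follows; interface and field names are assumptions:

```ts
interface QuotaWindow {
  label: string;
  usedPercent: number;
}

interface ServerAdapterModule {
  name: string;
  // Optional: adapters that know how to read provider quota opt in.
  getQuotaWindows?: () => Promise<QuotaWindow[]>;
}

// The aggregator has zero provider knowledge; adding a third provider
// only means implementing getQuotaWindows in that provider's adapter.
async function collectQuotaWindows(adapters: ServerAdapterModule[]) {
  const perProvider = await Promise.all(
    adapters.map(async (a) => ({
      provider: a.name,
      windows: a.getQuotaWindows ? await a.getQuotaWindows() : [],
    })),
  );
  return perProvider.filter((p) => p.windows.length > 0);
}
```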
Sai Shankar
f383a37b01 fix(costs): replace non-null map assertions with nullish coalescing, clarify weekData guard 2026-03-16 15:08:54 -05:00
Sai Shankar
3529ccfa85 fix(costs): guard byProject against duplicate null keys, memoize ProviderQuotaCard row aggregations 2026-03-16 15:08:54 -05:00
Sai Shankar
7db3446a09 fix(costs): align byAgent run filter to startedAt, tighten providerTabItems memo deps, stabilize byProject row keys 2026-03-16 15:08:54 -05:00
Sai Shankar
9d21380699 feat(costs): add agent model breakdown, harden date validation, sync CostByProject type, fix quota threshold and tab-gated queries
- add byAgentModel endpoint and expandable per-agent model sub-rows in the spend tab
- validate date range inputs with isNaN + badRequest to return HTTP 400 on bad input
- move CostByProject from a local api/costs.ts definition into packages/shared types
- gate providerData query on mainTab === providers, consistent with weekData/windowData/quotaData
- fix byProject range filter from finishedAt to startedAt, consistent with byProvider runs query
- fix WHAM used_percent threshold from <= 1 to < 1 to avoid misclassifying 1% usage as 100%
- replace inline opacity style with tailwind bg-primary/85 class in ProviderQuotaCard
- reset expandedAgents set when company or date range changes
- sort agent model sub-rows by cost descending in ui memo
2026-03-16 15:08:54 -05:00
Sai Shankar
db20f4f46e fix(costs): harden company auth check, fix frozen date memo, hide empty quota rows
- add company existence check on quota-windows route to guard against
  sentinel and forged company IDs (was a no-op assertCompanyAccess)
- fix useDateRange minuteTick memo frozen at mount; realign interval to
  next calendar minute boundary via setTimeout + intervalRef pattern
- fix midnight timer in Costs.tsx to use stable [] dep and
  self-scheduling todayTimerRef to avoid StrictMode double-invoke
- return null for rolling window rows with no DB data instead of
  rendering $0.00 / 0 tok false zeros
- fix secondsToWindowLabel to handle windows >168h with actual day count
  instead of silently falling back to 7d
- fix byProvider.get(p) non-null assertion to use ?? [] fallback
2026-03-16 15:08:54 -05:00
Sai Shankar
bc991a96b4 fix(costs): guard routes, fix DST ranges, sync provider state, wire live updates
- add companyAccess guard to costs route
- fix effectiveProvider/activeProvider desync via sync-back useEffect
- move ROLLING_WINDOWS to module level; replace IIFE with useMemo in ProviderQuotaCard
- add NO_COMPANY sentinel to eliminate non-null assertions before enabled guard
- fix DST-unsafe 7d/30d ranges in useDateRange (use Date constructor)
- remove providerData from providerTabItems memo deps (use byProvider)
- normalize used_percent 0-1 vs 0-100 ambiguity in quota-windows service
- rename secondsToWindowLabel index param to fallback; pass explicit labels
- add 4.33 magic number comment; fix quota window key collision
- remove rounded-md from date inputs (violates --radius: 0 theme)
- wire cost_event invalidation in LiveUpdatesProvider
2026-03-16 15:08:54 -05:00
Sai Shankar
56c9d95daa feat(costs): consolidate /usage into /costs with Spend + Providers tabs
merge Usage page into Costs as two tabs ('Spend' and 'Providers'),
extract shared date-range logic to useDateRange() hook, delete /usage
route and sidebar entry, fix quota-windows bugs from prior review
2026-03-16 15:08:54 -05:00
Sai Shankar
f14b6e449f feat(usage): add subscription quota windows per provider on /usage page
reads local claude and codex auth files server-side, calls provider
quota apis (anthropic oauth usage, chatgpt wham/usage), and surfaces
live usedPercent per window in ProviderQuotaCard with threshold fill colors
2026-03-16 15:08:54 -05:00
Sai Shankar
82bc00a3ae address greptile review: per-provider deficit notch, startedAt filter, weekRange refresh, deduplicate providerDisplayName 2026-03-16 15:08:54 -05:00
Sai Shankar
94018e0239 feat(ui): add resource and usage dashboard (/usage route)
adds a new /usage page that lets board operators see how much each ai
provider is consuming across any date window, with per-model breakdowns,
rolling 5h/24h/7d burn windows, weekly budget bars, and a deficit notch
when projected spend is on track to exceed the monthly budget.

- new GET /companies/:id/costs/by-provider endpoint aggregates cost events
  by provider + model with pro-rated billing type splits from heartbeat runs
- new GET /companies/:id/costs/window-spend endpoint returns rolling window
  spend (5h, 24h, 7d) per provider with no schema changes
- QuotaBar: reusable boxed-border progress bar with green/yellow/red
  threshold fill colors and optional deficit notch
- ProviderQuotaCard: per-provider card showing budget allocation bars,
  rolling windows, subscription usage, and model breakdown with token/cost
  share overlays
- Usage page: date preset toggles (mtd, 7d, 30d, ytd, all, custom),
  provider tabs, 30s polling plus ws invalidation on cost_event
- custom date range blocks queries until both dates are selected and
  treats boundaries as local-time (not utc midnight) so full days are
  included regardless of timezone
- query key to timestamp is floored to the nearest minute to prevent
  cache churn on every 30s refetch tick
2026-03-16 15:08:54 -05:00
Dotta
3dc3347a58 Merge pull request #162 from JonCSykes/feature/upload-company-logo
Feature/upload company logo
2026-03-16 11:10:07 -05:00
Dotta
6eceb9b886 Use attachment-size limit for company logos 2026-03-16 10:13:19 -05:00
Dotta
4dfd862f11 Address Greptile company logo feedback 2026-03-16 10:05:14 -05:00
Dotta
2d548a9da0 Drop lockfile from PR branch 2026-03-16 09:37:23 -05:00
Dotta
e538329b0a Use asset-backed company logos 2026-03-16 09:25:39 -05:00
Dotta
1a5eaba622 Merge public-gh/master into review/pr-162 2026-03-16 08:47:05 -05:00
Dotta
7034ea5b01 Merge pull request #1038 from paperclipai/paperclip-worktree-dynamics
fix: worktree branding not applied in vite dev mode
2026-03-16 08:10:54 -05:00
Dotta
ccb6729ec8 fix: use appType "custom" for Vite dev server so worktree branding is applied
Vite's "spa" appType adds its own SPA fallback middleware that serves
index.html directly, bypassing the custom catch-all route that calls
applyUiBranding(). Changing to "custom" lets our route handle HTML
serving, which injects the worktree-colored favicon and banner meta tags.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 08:08:38 -05:00
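The middleware-mode pattern this relies on, sketched with Express; applyUiBranding is the project function named in the commit, declared here as an assumed HTML transform:

```ts
import express from "express";
import { readFileSync } from "node:fs";
import { createServer as createViteServer } from "vite";

declare function applyUiBranding(html: string): string; // assumed transform

const app = express();
const vite = await createViteServer({
  appType: "custom", // no built-in SPA fallback; our route serves the HTML
  server: { middlewareMode: true },
});
app.use(vite.middlewares);

app.use(async (req, res) => {
  let html = readFileSync("index.html", "utf-8");
  html = await vite.transformIndexHtml(req.originalUrl, html);
  res.status(200).set("Content-Type", "text/html").end(applyUiBranding(html));
});
```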
Dotta
4244047d4d Merge pull request #990 from paperclipai/dotta-sunday-ui-updates
dottas-sunday-ui-updates
2026-03-16 07:12:35 -05:00
Dotta
34ec40211e Merge pull request #1010 from paperclipai/docs/maintenance-20260316
docs: fix documentation drift — adapters, plugins, tech stack
2026-03-15 21:13:34 -05:00
Dotta
52b12784a0 docs: fix documentation drift — adapters, plugins, tech stack
- Fix gemini adapter name: `gemini-local` → `gemini_local` (matches registry.ts)
- Move .doc-review-cursor to .gitignore (tooling state, not source)

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-15 21:08:19 -05:00
Dotta
3bffe3e479 docs: update documentation for accuracy after plugin system launch
- README: mark plugin system as shipped in roadmap
- SPEC: update adapter table with openclaw_gateway, gemini-local, hermes_local
- SPEC: update plugin architecture section to reflect shipped status
- Add .doc-review-cursor for future maintenance runs

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-15 20:30:54 -05:00
Dotta
c5cc191a08 chore: ignore superset artifacts 2026-03-15 14:56:53 -05:00
Dotta
16ab8c8303 Dark theme for CodeMirror code blocks in MDXEditor
The code blocks users see in issue documents are rendered by CodeMirror
(via MDXEditor's codeMirrorPlugin), not by MarkdownBody. MDXEditor
bundles cm6-theme-basic-light which gives them a white background.

Added dark overrides for all CodeMirror elements:
- .cm-editor: dark background (#1e1e2e), light text (#cdd6f4)
- .cm-gutters: darker gutter with muted line numbers
- .cm-activeLine, .cm-selectionBackground: subtle dark highlights
- .cm-cursor: light cursor for visibility
- Language selector dropdown: dark-themed to match
- Reduced pre padding to 0 since CodeMirror handles its own spacing

Uses !important to beat CodeMirror's programmatically-injected theme
styles (EditorView.theme generates high-specificity scoped selectors).

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-15 14:39:09 -05:00
Dotta
2daa35cd3a Remove duplicate @paperclipai/adapter-openclaw-gateway in server/package.json
Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-15 14:33:22 -05:00
Dotta
597c4b1d45 Fix code block styles with robust prose overrides
Previous attempt was being overridden by Tailwind prose/prose-invert
CSS variables. This fix:

- Overrides --tw-prose-pre-bg and --tw-prose-invert-pre-bg CSS variables
  on .paperclip-markdown to force dark background in both modes
- Uses .paperclip-markdown pre with !important for bulletproof overrides
- Removes conflicting prose-pre: utility classes from MarkdownBody
- Adds explicit pre code reset (inherit color/size, no background)
- Verified visually with Playwright at desktop and mobile viewports

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-15 14:30:53 -05:00
Dotta
6f931b8405 Add Docker setup for untrusted PR review in isolated containers
Adds a dedicated Docker environment for reviewing untrusted pull requests
with codex/claude, keeping CLI auth state in volumes and using a separate
scratch workspace for PR checkouts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-15 14:30:53 -05:00
Dotta
41e03bae61 Fix org chart canvas height to fit viewport without scrolling
The height calc subtracted only 4rem but the actual overhead is ~6rem
(3rem breadcrumb bar + 3rem main padding). Also use dvh for better
mobile support.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-15 14:30:53 -05:00
Dotta
d7f45eac14 Add doc-maintenance skill for periodic documentation accuracy audits
Skill detects documentation drift by scanning git history since last review,
cross-referencing shipped features against README, SPEC, and PRODUCT docs,
and opening PRs with minimal fixes. Includes audit checklist and section map
references.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-15 14:30:53 -05:00
Dotta
94112b324c Fix sidebar scrollbar: hide track background when not hovering
The scrollbar track background was still visible as a colored "well" even
when the thumb was hidden. Now both track and thumb are fully transparent
by default, only appearing on container hover.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-15 14:30:53 -05:00
Dotta
fe0d7d029a Restyle markdown code blocks: dark background, smaller font, compact padding
- Switch code block background from transparent accent to dark (#1e1e2e) with
  light text (#cdd6f4) for better readability in both light and dark modes
- Reduce code font size from 0.84em to 0.78em
- Compact padding and margins on pre blocks
- Hide MDXEditor code block toolbar by default, show on hover/focus to prevent
  overlap with code content on mobile
- Use horizontal scroll instead of word-wrap for code blocks to preserve formatting

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-15 14:30:33 -05:00
Dotta
aea133ff9f Add archive project button and filter archived projects from selectors
- Add "Archive project" / "Unarchive project" button in the project
  configuration danger zone (ProjectProperties)
- Filter archived projects from the Projects listing page
- Filter archived projects from NewIssueDialog project selector
- Filter archived projects from IssueProperties project picker
  (keeps current project visible even if archived)
- Filter archived projects from CommandPalette
- SidebarProjects already filters archived projects

Co-Authored-By: Paperclip <noreply@paperclip.ing>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-15 14:30:33 -05:00
Dotta
c94132bc7e Merge pull request #988 from leeknowsai/fix/plugin-event-subscription-wiring
fix: wire plugin event subscriptions from worker to host
2026-03-15 14:26:11 -05:00
HD
8468d347be fix: address review feedback — subscription cleanup, filter nullability, stale diagram
- Add scopedBus.clear() in dispose() to prevent subscription accumulation
  on worker crash/restart cycles
- Use two-arg subscribe() overload when filter is null instead of passing
  empty object; fix filter type to include null
- Update ASCII flow diagram: onEvent is a notification, not request/response
2026-03-16 02:25:03 +07:00
HD
61fd5486e8 fix: wire plugin event subscriptions from worker to host
Plugin workers register event handlers via `ctx.events.on()` in the SDK,
but these subscriptions were never forwarded to the host process. The host
sends events via `notifyWorker("onEvent", ...)` which produces a JSON-RPC
notification (no `id`), but the worker only dispatched `onEvent` as a
request handler — notifications were silently dropped.

Changes:
- Add `events.subscribe` RPC method so workers can register subscriptions
  on the host-side event bus during setup
- Handle `onEvent` notifications in the worker notification dispatcher
  (previously only `agents.sessions.event` was handled)
- Add `events.subscribe` to HostServices interface, capability map, and
  host client handler
- Add `subscribe` handler in host services that registers on the scoped
  plugin event bus and forwards matched events to the worker
2026-03-16 02:10:10 +07:00
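The core of the dispatcher fix: JSON-RPC notifications carry no id, so they need their own routing path. A sketch with illustrative handler names:

```ts
type JsonRpcMessage = {
  jsonrpc: "2.0";
  id?: string | number;
  method?: string;
  params?: unknown;
};

const notificationHandlers: Record<string, (params: unknown) => void> = {
  "agents.sessions.event": () => {
    /* previously the only notification handled */
  },
  onEvent: () => {
    /* now forwards to ctx.events.on() subscribers */
  },
};

function dispatch(msg: JsonRpcMessage) {
  if (msg.method && msg.id === undefined) {
    // Notification: no id, no response expected; dropping it here was the bug.
    notificationHandlers[msg.method]?.(msg.params);
  } else if (msg.method) {
    // Request: has an id and must produce a response (not shown).
  }
}
```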
Dotta
675421f3a9 Merge pull request #587 from teknium1/feat/hermes-agent-adapter
feat: add Hermes Agent adapter (hermes_local)
2026-03-15 08:25:56 -05:00
Dotta
2162289bf3 Merge branch 'master' into feat/hermes-agent-adapter 2026-03-15 08:23:23 -05:00
Dotta
bfaa4b4bdc Merge pull request #834 from mvanhorn/fix/dotenv-cwd-fallback
fix(server): load .env from cwd as fallback
2026-03-14 21:02:54 -05:00
Dotta
872807a6f8 Merge pull request #918 from gsxdsm/fix/plugin-slots
Enhance plugin loading and toolbar integration
2026-03-14 17:51:26 -05:00
Dotta
f482433ddf Merge pull request #919 from paperclipai/fix/sidebar-scrollbar-hover-track
fix(ui): hide sidebar scrollbar track until hover
2026-03-14 17:49:05 -05:00
Dotta
825d2b4759 fix(ui): hide scrollbar track background when sidebar is not hovered
The scrollbar-auto-hide utility was only hiding the thumb but left the
track background visible, creating a visible "well" even when idle.
Now both track and thumb are transparent by default, appearing only on
container hover.

Fixes PAP-374

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-14 17:47:22 -05:00
gsxdsm
6c7ebaeb59 Refactor secret-ref format registration to use a UI hint for Paperclip secret UUIDs 2026-03-14 15:43:56 -07:00
gsxdsm
6d65800173 Register secret-ref format in AJV for validating Paperclip secret UUIDs 2026-03-14 15:41:22 -07:00
gsxdsm
0d2380b7b1 Fix plugin launchers initialization by adding enabled flag based on companyId 2026-03-14 15:35:01 -07:00
gsxdsm
ec261e9c7c Enhance plugin loading and toolbar integration
- Added packagePath to plugin loader for improved manifest handling.
- Refactored GlobalToolbarPlugins for better slot and launcher management in BreadcrumbBar.
- Updated launcher trigger styles for globalToolbarButton.
2026-03-14 15:27:45 -07:00
Dotta
5d52ce2e5e Merge pull request #916 from gsxdsm/fix/plugin-slots
Add globalToolbarButton slot type and update related documentation
2026-03-14 17:22:34 -05:00
gsxdsm
811e2b9909 Add globalToolbarButton slot type and update related documentation 2026-03-14 15:05:04 -07:00
Dotta
8985ddaeed Merge pull request #914 from gsxdsm/fix/dev-runner-syntax
Fix syntax error in dev-runner script
2026-03-14 16:55:43 -05:00
gsxdsm
2dbb31ef3c Fix syntax error 2026-03-14 14:34:56 -07:00
Dotta
648ee37a17 Merge pull request #912 from gsxdsm/feat/plugin-breadcumb-slot
Add global toolbar slots to BreadcrumbBar component
2026-03-14 16:28:56 -05:00
Dotta
98e73acc3b Merge pull request #904 from gsxdsm/fix/build-plugin-sdk
Add buildPluginSdk function to build the plugin SDK during dev run
2026-03-14 16:28:26 -05:00
Dotta
0fb85e5729 Merge pull request #910 from gsxdsm/feat/plugin-cli
Add plugin cli commands
2026-03-14 16:28:06 -05:00
gsxdsm
7e3a04c76c Refactor BreadcrumbBar to use useMemo for global toolbar slot context and improve rendering logic 2026-03-14 14:19:32 -07:00
gsxdsm
e219761d95 Fix plugin installation output and error handling in registerPluginCommands 2026-03-14 14:15:42 -07:00
gsxdsm
0afd5d5630 Update cli/src/commands/client/plugin.ts
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
2026-03-14 14:08:21 -07:00
gsxdsm
4f8df1804d Update cli/src/commands/client/plugin.ts
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
2026-03-14 14:08:15 -07:00
gsxdsm
d0677dcd91 Update cli/src/commands/client/plugin.ts
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
2026-03-14 14:08:03 -07:00
gsxdsm
bc5d650248 Add global toolbar slots to BreadcrumbBar component 2026-03-14 14:05:29 -07:00
Dotta
2e3a0d027e Merge pull request #909 from mvanhorn/feat/plugin-domain-event-bridge
feat(plugins): bridge core domain events to plugin event bus
2026-03-14 16:00:03 -05:00
Dotta
b92f234d88 Merge pull request #903 from gsxdsm/fix/process-list
Refine heartbeatService to only target runs stuck in "running" state
2026-03-14 15:58:46 -05:00
gsxdsm
0f831e09c1 Add plugin cli commands 2026-03-14 13:58:43 -07:00
Matt Van Horn
a6c7e09e2a fix(plugins): log plugin handler errors, warn on double-init
Address Greptile review feedback:
- Log plugin event handler errors via logger.warn instead of
  silently discarding the emit() result
- Warn if setPluginEventBus is called more than once

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-14 13:51:41 -07:00
Matt Van Horn
30e2914424 feat(plugins): bridge core domain events to plugin event bus
The plugin event bus accepts subscriptions for core events like
issue.created but nothing emits them. This adds a bridge in
logActivity() so every domain action that's already logged also
fires a PluginEvent to subscribing plugins.

Uses a module-level setter (same pattern as publishLiveEvent)
to avoid threading the bus through all route handlers. Only
actions matching PLUGIN_EVENT_TYPES are forwarded.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-14 13:44:26 -07:00
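The module-level setter pattern, sketched with stand-in types (the follow-up commit above adds the warn-on-error and double-init behavior):

```ts
type PluginEvent = { type: string; payload: unknown };
type PluginEventBus = { emit: (e: PluginEvent) => Promise<void> };

const PLUGIN_EVENT_TYPES = new Set(["issue.created", "issue.updated"]); // illustrative

let pluginBus: PluginEventBus | null = null;

export function setPluginEventBus(bus: PluginEventBus) {
  if (pluginBus) console.warn("setPluginEventBus called more than once");
  pluginBus = bus;
}

export function logActivity(action: string, payload: unknown) {
  // ...existing activity-log write happens here...
  if (pluginBus && PLUGIN_EVENT_TYPES.has(action)) {
    pluginBus
      .emit({ type: action, payload })
      .catch((err) => console.warn("plugin event handler failed", err));
  }
}
```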
gsxdsm
6b17f7caa8 Update scripts/dev-runner.mjs
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
2026-03-14 13:29:46 -07:00
gsxdsm
2dc3b4df24 Add buildPluginSdk function to build the plugin SDK during dev run 2026-03-14 13:10:01 -07:00
gsxdsm
b13c530024 Refine heartbeatService to only target runs stuck in "running" state 2026-03-14 13:02:21 -07:00
Dotta
dd828e96ad Fix workspace review issues and policy check 2026-03-14 14:13:03 -05:00
Dotta
6e6d67372c Merge remote-tracking branch 'public-gh/master' into paperclip-subissues
* public-gh/master:
  Drop lockfile from watcher change
  Tighten plugin dev file watching
  Fix plugin smoke example typecheck
  Fix plugin dev watcher and migration snapshot
  Clarify plugin authoring and external dev workflow
  Expand kitchen sink plugin demos
  fix: set AGENT_HOME env var for agent processes
  Add kitchen sink plugin example
  Simplify plugin runtime and cleanup lifecycle
  Add plugin framework and settings UI

# Conflicts:
#	packages/db/src/migrations/meta/0029_snapshot.json
#	packages/db/src/migrations/meta/_journal.json
2026-03-14 13:56:09 -05:00
Dotta
0851e81b47 Merge pull request #821 from paperclipai/feature/plugin-runtime-instance-cleanup
WIP: Simplify plugin runtime and cleanup lifecycle
2026-03-14 13:45:56 -05:00
Dotta
325fcf8505 Merge pull request #864 from paperclipai/fix/agent-home-env
fix: set AGENT_HOME env var for agent processes
2026-03-14 12:44:28 -05:00
Dotta
8cf85a5a50 Merge remote-tracking branch 'public-gh/master' into paperclip-subissues
* public-gh/master: (55 commits)
  fix(issue-documents): address greptile review
  Update packages/shared/src/validators/issue.ts
  feat(ui): add issue document copy and download actions
  fix(ui): unify new issue upload action
  feat(ui): stage issue files before create
  feat(ui): handle issue document edit conflicts
  fix(ui): refresh issue documents from live events
  feat(ui): deep link issue documents
  fix(ui): streamline issue document chrome
  fix(ui): collapse empty document and attachment states
  fix(ui): simplify document card body layout
  fix(issues): address document review comments
  feat(issues): add issue documents and inline editing
  docs: add agent evals framework plan
  fix(cli): quote env values with special characters
  Fix worktree seed source selection
  fix: address greptile follow-up
  docs: add paperclip skill tightening plan
  fix: isolate codex home in worktrees
  Add worktree UI branding
  ...

# Conflicts:
#	packages/db/src/migrations/meta/0028_snapshot.json
#	packages/db/src/migrations/meta/_journal.json
#	packages/shared/src/index.ts
#	server/src/routes/issues.ts
#	ui/src/api/issues.ts
#	ui/src/components/NewIssueDialog.tsx
#	ui/src/pages/IssueDetail.tsx
2026-03-14 12:24:40 -05:00
Matt Van Horn
ff4f326341 fix(server): use realpathSync for .env path dedup to handle symlinks
realpathSync resolves symlinks and normalizes case, preventing
double-loading the same .env file when paths differ only by
symlink indirection or filesystem case.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-14 09:05:51 -07:00
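Sketched as a small dedup helper; the Set and function name are illustrative:

```ts
import { realpathSync } from "node:fs";
import dotenv from "dotenv";

const loadedEnvFiles = new Set<string>();

function loadEnvOnce(envPath: string) {
  let resolved: string;
  try {
    // Resolves symlinks so the same physical file reached via two
    // different paths is only loaded once.
    resolved = realpathSync(envPath);
  } catch {
    return; // file does not exist
  }
  if (loadedEnvFiles.has(resolved)) return;
  loadedEnvFiles.add(resolved);
  dotenv.config({ path: resolved, override: false });
}
```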
Dotta
dcd8a47d4f Merge pull request #713 from paperclipai/release/0.3.1
Release/0.3.1
2026-03-14 11:00:24 -05:00
Dotta
9ed7092aab Stop runtime services during workspace cleanup 2026-03-14 09:41:13 -05:00
Dotta
3b25268c0b Fix execution workspace runtime lifecycle 2026-03-14 09:35:35 -05:00
Devin Foley
d671a59306 fix: set AGENT_HOME env var for agent processes
The $AGENT_HOME environment variable was referenced by skills (e.g.
para-memory-files) but never actually set, causing runtime errors like
"/HEARTBEAT.md: No such file or directory" when agents tried to resolve
paths relative to their home directory.

Add agentHome to the paperclipWorkspace context in the heartbeat service
and propagate it as the AGENT_HOME env var in all local adapters.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-14 00:36:53 -07:00
teknium1
93faf6d361 fix: address review feedback — pin version, enable JWT
- Pin hermes-paperclip-adapter to exact version 0.1.1 (was ^0.1.0).
  Avoids auto-pulling potentially breaking patches from a 0.x package.
- Enable supportsLocalAgentJwt (was false). The adapter uses
  buildPaperclipEnv which passes the JWT to the child process,
  matching the pattern of all other local adapters.
2026-03-13 20:26:27 -07:00
Dotta
7a06a577ce Fix dev startup with embedded postgres reuse 2026-03-13 20:56:19 -05:00
Matt Van Horn
303c00b61b fix(server): load .env from cwd as fallback when .paperclip/.env is missing
The server only loads environment variables from .paperclip/.env, which is
not the standard location users expect. When DATABASE_URL is set in a .env
file in the project root (cwd), it is silently ignored, requiring users to
manually export the variable.

Add a fallback that loads cwd/.env after .paperclip/.env with override:false,
so the Paperclip-specific env file always takes precedence but standard .env
files in the project root are also picked up.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-03-13 17:22:27 -07:00
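The load order is the whole trick: the Paperclip-specific file goes first, and the cwd fallback uses override: false so earlier values win. A minimal sketch:

```ts
import path from "node:path";
import dotenv from "dotenv";

// .paperclip/.env always takes precedence...
dotenv.config({ path: path.join(process.cwd(), ".paperclip", ".env") });
// ...but a standard project-root .env is still picked up as a fallback.
dotenv.config({ path: path.join(process.cwd(), ".env"), override: false });
```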
Dotta
920bc4c70f Implement execution workspaces and work products 2026-03-13 17:12:25 -05:00
Dotta
9da5358bb3 Add workspace technical implementation spec 2026-03-13 16:37:40 -05:00
Dotta
25d3bf2c64 Incorporate Worktrunk patterns into workspace plan 2026-03-13 09:41:12 -05:00
Dotta
752a53e38e Expand workspace plan for migration and cloud execution 2026-03-13 09:06:49 -05:00
Dotta
284bd733b9 Add workspace product model plan 2026-03-13 08:41:01 -05:00
teknium1
e84c0e8df2 fix: use npm package instead of GitHub URL dependency
- Published hermes-paperclip-adapter@0.1.0 to npm registry
- Replaced github:NousResearch/hermes-paperclip-adapter with
  hermes-paperclip-adapter ^0.1.0 (proper semver, reproducible builds)
- Updated imports from @nousresearch/paperclip-adapter-hermes to
  hermes-paperclip-adapter
- Wired in hermesSessionCodec for structured session validation

Addresses both review items from greptile-apps:
1. Unpinned GitHub dependency → now a proper npm package with semver
2. Missing sessionCodec → now imported and registered
2026-03-12 17:23:24 -07:00
teknium1
4e354ad00d fix: address review feedback — pin dependency and add sessionCodec
- Pin @nousresearch/paperclip-adapter-hermes to v0.1.0 tag for
  reproducible builds and supply-chain safety
- Import and wire hermesSessionCodec into the adapter registration
  for structured session parameter validation (matching claude_local,
  codex_local, and other adapters that support session persistence)
2026-03-12 17:03:49 -07:00
Dotta
63c62e3ada chore: release v0.3.1 2026-03-12 13:09:22 -05:00
Dotta
964e04369a fixes verification 2026-03-12 12:55:26 -05:00
Dotta
873535fbf0 verify the packages actually make it to npm 2026-03-12 12:42:00 -05:00
Dotta
87c0bf9cdf added v0.3.1.md changelog 2026-03-12 11:05:31 -05:00
teknium1
97d628d784 feat: add Hermes Agent adapter (hermes_local)
Adds support for Hermes Agent (https://github.com/NousResearch/hermes-agent)
as a managed employee in Paperclip companies.

Hermes Agent is a full-featured AI agent by Nous Research with 30+ native
tools, persistent memory, session persistence, 80+ skills, MCP support,
and multi-provider model access.

Changes:
- Add 'hermes_local' to AGENT_ADAPTER_TYPES (packages/shared)
- Add @nousresearch/paperclip-adapter-hermes dependency (server)
- Register hermesLocalAdapter in the adapter registry (server)

The adapter package is maintained at:
https://github.com/NousResearch/hermes-paperclip-adapter
2026-03-10 23:12:13 -07:00
Aaron
fb684f25e9 Address PR feedback: keep testEnvironment non-destructive, warn on swallowed errors
- Update cwd test to expect an error for missing directories (matches
  createIfMissing: false accepted from review)
- Add warn-level check for non-ProviderModelNotFoundError failures
  during best-effort model discovery when no model is configured

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-07 15:50:14 -05:00
Aaron
fa7acd2482 Apply suggestion from @greptile-apps[bot]
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
2026-03-07 15:50:14 -05:00
Aaron
5114c32810 Fix opencode-local adapter: parser, UI, CLI, and environment tests
- Move costUsd to top-level return field in parseOpenCodeJsonl (out of usage)
- Fix session-not-found regex to match "Session not found" pattern
- Use callID for toolUseId in UI stdout parser, add status/metadata header
- Fix CLI formatter: separate tool_call/tool_result lines, split step_finish
- Enable createIfMissing for cwd validation in environment tests
- Add empty OPENAI_API_KEY override detection
- Classify ProviderModelNotFoundError as warning during model discovery
- Make model discovery best-effort when no model is configured

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-07 15:50:14 -05:00
JonCSykes
a765d342e0 Merge remote-tracking branch 'origin/feature/upload-company-logo' into feature/upload-company-logo
# Conflicts:
#	pnpm-lock.yaml
#	server/package.json
2026-03-07 13:36:41 -05:00
Jon Sykes
a5fda1546b Merge branch 'master' into feature/upload-company-logo 2026-03-07 13:34:57 -05:00
JonCSykes
4dffdc4de2 Merge master into feature/upload-company-logo 2026-03-07 12:58:02 -05:00
JonCSykes
f44efce265 Add @types/node as a devDependency in cursor-local package 2026-03-06 23:15:45 -05:00
JonCSykes
6f16fc0a93 Merge remote-tracking branch 'origin/master' into feature/upload-company-logo 2026-03-06 22:55:25 -05:00
JonCSykes
4599fc5a8d Merge remote-tracking branch 'origin/master' into feature/upload-company-logo 2026-03-06 17:21:06 -05:00
JonCSykes
a4702e48f9 Add sanitization for SVG uploads and enhance security headers for asset responses
- Introduced SVG sanitization using `dompurify` to prevent malicious content.
- Updated tests to validate SVG sanitization with various scenarios.
- Enhanced response headers for assets, adding CSP and nosniff for SVGs.
- Adjusted UI to better clarify supported file types for logo uploads.
- Updated dependencies to include `jsdom` and `dompurify`.
2026-03-06 17:18:43 -05:00
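A sketch of server-side SVG sanitization with dompurify and jsdom; the profile options are an assumption about what the project enables:

```ts
import { JSDOM } from "jsdom";
import createDOMPurify from "dompurify";

const { window } = new JSDOM("");
const DOMPurify = createDOMPurify(window as unknown as Window);

// Strips scripts, event-handler attributes, and other active content
// from an uploaded SVG before it is stored or served.
export function sanitizeSvg(svg: string): string {
  return DOMPurify.sanitize(svg, { USE_PROFILES: { svg: true, svgFilters: true } });
}
```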
JonCSykes
1448b55ca4 Improve error handling in CompanySettings for mutation failure messages. 2026-03-06 16:47:04 -05:00
JonCSykes
b19d0b6f3b Add support for company logos, including schema adjustments, validation, assets handling, and UI display enhancements. 2026-03-06 16:39:35 -05:00
299 changed files with 91505 additions and 3718 deletions

View File

@@ -0,0 +1,201 @@
---
name: doc-maintenance
description: >
  Audit top-level documentation (README, SPEC, PRODUCT) against recent git
  history to find drift — shipped features missing from docs or features
  listed as upcoming that already landed. Proposes minimal edits, creates
  a branch, and opens a PR. Use when asked to review docs for accuracy,
  after major feature merges, or on a periodic schedule.
---
# Doc Maintenance Skill
Detect documentation drift and fix it via PR — no rewrites, no churn.
## When to Use
- Periodic doc review (e.g. weekly or after releases)
- After major feature merges
- When asked "are our docs up to date?"
- When asked to audit README / SPEC / PRODUCT accuracy
## Target Documents
| Document | Path | What matters |
|----------|------|-------------|
| README | `README.md` | Features table, roadmap, quickstart, "what is" accuracy, "works with" table |
| SPEC | `doc/SPEC.md` | No false "not supported" claims, major model/schema accuracy |
| PRODUCT | `doc/PRODUCT.md` | Core concepts, feature list, principles accuracy |
Out of scope: DEVELOPING.md, DATABASE.md, CLI.md, doc/plans/, skill files,
release notes. These are dev-facing or ephemeral — lower risk of user-facing
confusion.
## Workflow
### Step 1 — Detect what changed
Find the last review cursor:
```bash
# Read the last-reviewed commit SHA
CURSOR_FILE=".doc-review-cursor"
if [ -f "$CURSOR_FILE" ]; then
LAST_SHA=$(cat "$CURSOR_FILE" | head -1)
else
# First run: look back 60 days
LAST_SHA=$(git log --format="%H" --after="60 days ago" --reverse | head -1)
fi
```
Then gather commits since the cursor:
```bash
git log "$LAST_SHA"..HEAD --oneline --no-merges
```
### Step 2 — Classify changes
Scan commit messages and changed files. Categorize into:
- **Feature** — new capabilities (keywords: `feat`, `add`, `implement`, `support`)
- **Breaking** — removed/renamed things (keywords: `remove`, `breaking`, `drop`, `rename`)
- **Structural** — new directories, config changes, new adapters, new CLI commands
**Ignore:** refactors, test-only changes, CI config, dependency bumps, doc-only
changes, style/formatting commits. These don't affect doc accuracy.
For borderline cases, check the actual diff — a commit titled "refactor: X"
that adds a new public API is a feature.
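A first pass over commit subjects can be scripted before reading any diffs. The sketch below is illustrative only; the patterns loosely approximate the keyword rules above and will misfile edge cases, so treat its output as triage, not final classification:
```bash
# Rough triage of commit subjects; tune the patterns to the repo's conventions.
git log "$LAST_SHA"..HEAD --format="%s" --no-merges | while IFS= read -r subject; do
  case "$subject" in
    refactor:*|chore:*|ci:*|test:*|docs:*) ;;    # ignored per the rules above
    feat:*|add*|*implement*|*support*) echo "FEATURE:  $subject" ;;
    *remove*|*breaking*|*drop*|*rename*) echo "BREAKING: $subject" ;;
    *) echo "REVIEW:   $subject" ;;              # borderline: read the actual diff
  esac
done
```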
### Step 3 — Build a change summary
Produce a concise list like:
```
Since last review (<sha>, <date>):
- FEATURE: Plugin system merged (runtime, SDK, CLI, slots, event bridge)
- FEATURE: Project archiving added
- BREAKING: Removed legacy webhook adapter
- STRUCTURAL: New .agents/skills/ directory convention
```
If there are no notable changes, skip to Step 7 (update cursor and exit).
### Step 4 — Audit each target doc
For each target document, read it fully and cross-reference against the change
summary. Check for:
1. **False negatives** — major shipped features not mentioned at all
2. **False positives** — features listed as "coming soon" / "roadmap" / "planned"
/ "not supported" / "TBD" that already shipped
3. **Quickstart accuracy** — install commands, prereqs, and startup instructions
still correct (README only)
4. **Feature table accuracy** — does the features section reflect current
capabilities? (README only)
5. **Works-with accuracy** — are supported adapters/integrations listed correctly?
Use `references/audit-checklist.md` as the structured checklist.
Use `references/section-map.md` to know where to look for each feature area.
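For false positives specifically, a keyword sweep narrows the manual read. A minimal sketch, assuming `rg` is available (other skills in this repo already rely on it):
```bash
# Surface "not yet shipped" style claims for manual cross-checking.
rg -n -i 'coming soon|planned|roadmap|not supported|not in V1|TBD' \
  README.md doc/SPEC.md doc/PRODUCT.md
```
Every hit still needs judgment; the sweep only tells you where to look.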
### Step 5 — Create branch and apply minimal edits
```bash
# Create a branch for the doc updates
BRANCH="docs/maintenance-$(date +%Y%m%d)"
git checkout -b "$BRANCH"
```
Apply **only** the edits needed to fix drift. Rules:
- **Minimal patches only.** Fix inaccuracies, don't rewrite sections.
- **Preserve voice and style.** Match the existing tone of each document.
- **No cosmetic changes.** Don't fix typos, reformat tables, or reorganize
sections unless they're part of a factual fix.
- **No new sections.** If a feature needs a whole new section, note it in the
PR description as a follow-up — don't add it in a maintenance pass.
- **Roadmap items:** Move shipped features out of Roadmap. Add a brief mention
in the appropriate existing section if there isn't one already. Don't add
long descriptions.
### Step 6 — Open a PR
Commit the changes and open a PR:
```bash
git add README.md doc/SPEC.md doc/PRODUCT.md .doc-review-cursor
git commit -m "docs: update documentation for accuracy
- [list each fix briefly]
Co-Authored-By: Paperclip <noreply@paperclip.ing>"
git push -u origin "$BRANCH"
gh pr create \
--title "docs: periodic documentation accuracy update" \
--body "$(cat <<'EOF'
## Summary
Automated doc maintenance pass. Fixes documentation drift detected since
last review.
### Changes
- [list each fix]
### Change summary (since last review)
- [list notable code changes that triggered doc updates]
## Review notes
- Only factual accuracy fixes — no style/cosmetic changes
- Preserves existing voice and structure
- Larger doc additions (new sections, tutorials) noted as follow-ups
🤖 Generated by doc-maintenance skill
EOF
)"
```
### Step 7 — Update the cursor
After a successful audit (whether or not edits were needed), update the cursor:
```bash
git rev-parse HEAD > .doc-review-cursor
```
If edits were made, this is already committed in the PR branch. If no edits
were needed, commit the cursor update to the current branch.
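A sketch of the no-edits path, assuming the current branch is where the cursor should live:
```bash
# No drift found: advance the cursor and commit only that.
git rev-parse HEAD > .doc-review-cursor
git add .doc-review-cursor
git commit -m "chore: advance doc review cursor"
```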
## Change Classification Rules
| Signal | Category | Doc update needed? |
|--------|----------|-------------------|
| `feat:`, `add`, `implement`, `support` in message | Feature | Yes if user-facing |
| `remove`, `drop`, `breaking`, `!:` in message | Breaking | Yes |
| New top-level directory or config file | Structural | Maybe |
| `fix:`, `bugfix` | Fix | No (unless it changes behavior described in docs) |
| `refactor:`, `chore:`, `ci:`, `test:` | Maintenance | No |
| `docs:` | Doc change | No (already handled) |
| Dependency bumps only | Maintenance | No |
## Patch Style Guide
- Fix the fact, not the prose
- If removing a roadmap item, don't leave a gap — remove the bullet cleanly
- If adding a feature mention, match the format of surrounding entries
(e.g. if features are in a table, add a table row)
- Keep README changes especially minimal — it shouldn't churn often
- For SPEC/PRODUCT, prefer updating existing statements over adding new ones
(e.g. change "not supported in V1" to "supported via X" rather than adding
a new section)
## Output
When the skill completes, report:
- How many commits were scanned
- How many notable changes were found
- How many doc edits were made (and to which files)
- PR link (if edits were made)
- Any follow-up items that need larger doc work

View File

@@ -0,0 +1,85 @@
# Doc Maintenance Audit Checklist
Use this checklist when auditing each target document. For each item, compare
against the change summary from git history.
## README.md
### Features table
- [ ] Each feature card reflects a shipped capability
- [ ] No feature cards for things that don't exist yet
- [ ] No major shipped features missing from the table
### Roadmap
- [ ] Nothing listed as "planned" or "coming soon" that already shipped
- [ ] No removed/cancelled items still listed
- [ ] Items reflect current priorities (cross-check with recent PRs)
### Quickstart
- [ ] `npx paperclipai onboard` command is correct
- [ ] Manual install steps are accurate (clone URL, commands)
- [ ] Prerequisites (Node version, pnpm version) are current
- [ ] Server URL and port are correct
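One quick cross-check for the prerequisites item, assuming the root `package.json` declares an `engines` field (if it does not, compare against the Node version pinned in CI instead):
```bash
# Print the declared engine constraints to compare against the README.
node -p 'JSON.stringify(require("./package.json").engines ?? "no engines field")'
```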
### "What is Paperclip" section
- [ ] High-level description is accurate
- [ ] Step table (Define goal / Hire team / Approve and run) is correct
### "Works with" table
- [ ] All supported adapters/runtimes are listed
- [ ] No removed adapters still listed
- [ ] Logos and labels match current adapter names
### "Paperclip is right for you if"
- [ ] Use cases are still accurate
- [ ] No claims about capabilities that don't exist
### "Why Paperclip is special"
- [ ] Technical claims are accurate (atomic execution, governance, etc.)
- [ ] No features listed that were removed or significantly changed
### FAQ
- [ ] Answers are still correct
- [ ] No references to removed features or outdated behavior
### Development section
- [ ] Commands are accurate (`pnpm dev`, `pnpm build`, etc.)
- [ ] Link to DEVELOPING.md is correct
## doc/SPEC.md
### Company Model
- [ ] Fields match current schema
- [ ] Governance model description is accurate
### Agent Model
- [ ] Adapter types match what's actually supported
- [ ] Agent configuration description is accurate
- [ ] No features described as "not supported" or "not V1" that shipped
### Task Model
- [ ] Task hierarchy description is accurate
- [ ] Status values match current implementation
### Extensions / Plugins
- [ ] If plugins are shipped, no "not in V1" or "future" language
- [ ] Plugin model description matches implementation
### Open Questions
- [ ] Resolved questions removed or updated
- [ ] No "TBD" items that have been decided
## doc/PRODUCT.md
### Core Concepts
- [ ] Company, Employees, Task Management descriptions accurate
- [ ] Agent Execution modes described correctly
- [ ] No missing major concepts
### Principles
- [ ] Principles haven't been contradicted by shipped features
- [ ] No principles referencing removed capabilities
### User Flow
- [ ] Dream scenario still reflects actual onboarding
- [ ] Steps are achievable with current features

View File

@@ -0,0 +1,22 @@
# Section Map
Maps feature areas to specific document sections so the skill knows where to
look when a feature ships or changes.
| Feature Area | README Section | SPEC Section | PRODUCT Section |
|-------------|---------------|-------------|----------------|
| Plugins / Extensions | Features table, Roadmap | Extensions, Agent Model | Core Concepts |
| Adapters (new runtimes) | "Works with" table, FAQ | Agent Model, Agent Configuration | Employees & Agents, Agent Execution |
| Governance / Approvals | Features table, "Why special" | Board Governance, Board Approval Gates | Principles |
| Budget / Cost Control | Features table, "Why special" | Budget Delegation | Company (revenue & expenses) |
| Task Management | Features table | Task Model | Task Management |
| Org Chart / Hierarchy | Features table | Agent Model (reporting) | Employees & Agents |
| Multi-Company | Features table, FAQ | Company Model | Company |
| Heartbeats | Features table, FAQ | Agent Execution | Agent Execution |
| CLI Commands | Development section | — | — |
| Onboarding / Quickstart | Quickstart, FAQ | — | User Flow |
| Skills / Skill Injection | "Why special" | — | — |
| Company Templates | "Why special", Roadmap (ClipMart) | — | — |
| Mobile / UI | Features table | — | — |
| Project Archiving | — | — | — |
| OpenClaw Integration | "Works with" table, FAQ | Agent Model | Agent Execution |

View File

@@ -1,7 +1,7 @@
---
name: release-changelog
description: >
Generate the stable Paperclip release changelog at releases/v{version}.md by
Generate the stable Paperclip release changelog at releases/vYYYY.MDD.P.md by
reading commits, changesets, and merged PR context since the last stable tag.
---
@@ -9,20 +9,33 @@ description: >
Generate the user-facing changelog for the **stable** Paperclip release.
## Versioning Model
Paperclip uses **calendar versioning (calver)**:
- Stable releases: `YYYY.MDD.P` (e.g. `2026.318.0`)
- Canary releases: `YYYY.MDD.P-canary.N` (e.g. `2026.318.1-canary.0`)
- Git tags: `vYYYY.MDD.P` for stable, `canary/vYYYY.MDD.P-canary.N` for canary
There are no major/minor/patch bumps. The stable version is derived from the
intended release date (UTC) plus the next same-day stable patch slot.
Output:
- `releases/v{version}.md`
- `releases/vYYYY.MDD.P.md`
Important rule:
Important rules:
- even if there are canary releases such as `1.2.3-canary.0`, the changelog file stays `releases/v1.2.3.md`
- even if there are canary releases such as `2026.318.1-canary.0`, the changelog file stays `releases/v2026.318.1.md`
- do not derive versions from semver bump types
- do not create canary changelog files
## Step 0 — Idempotency Check
Before generating anything, check whether the file already exists:
```bash
ls releases/v{version}.md 2>/dev/null
ls releases/vYYYY.MDD.P.md 2>/dev/null
```
If it exists:
@@ -41,13 +54,14 @@ git tag --list 'v*' --sort=-version:refname | head -1
git log v{last}..HEAD --oneline --no-merges
```
The planned stable version comes from one of:
The stable version comes from one of:
- an explicit maintainer request
- the chosen bump type applied to the last stable tag
- `./scripts/release.sh stable --date YYYY-MM-DD --print-version`
- the release plan already agreed in `doc/RELEASING.md`
Do not derive the changelog version from a canary tag or prerelease suffix.
Do not derive major/minor/patch bumps from API intent — calver uses the date and same-day stable slot.
## Step 2 — Gather the Raw Inputs
@@ -73,7 +87,6 @@ Look for:
- destructive migrations
- removed or changed API fields/endpoints
- renamed or removed config keys
- `major` changesets
- `BREAKING:` or `BREAKING CHANGE:` commit signals
Key commands:
@@ -85,7 +98,8 @@ git diff v{last}..HEAD -- server/src/routes/ server/src/api/
git log v{last}..HEAD --format="%s" | rg -n 'BREAKING CHANGE|BREAKING:|^[a-z]+!:' || true
```
If the requested bump is lower than the minimum required bump, flag that before the release proceeds.
If breaking changes are detected, flag them prominently — they must appear in the
Breaking Changes section with an upgrade path.
## Step 4 — Categorize for Users
@@ -130,9 +144,9 @@ Rules:
Template:
```markdown
# v{version}
# vYYYY.MDD.P
> Released: {YYYY-MM-DD}
> Released: YYYY-MM-DD
## Breaking Changes

View File

@@ -2,23 +2,21 @@
name: release
description: >
Coordinate a full Paperclip release across engineering verification, npm,
GitHub, website publishing, and announcement follow-up. Use when leadership
asks to ship a release, not merely to discuss version bumps.
GitHub, smoke testing, and announcement follow-up. Use when leadership asks
to ship a release, not merely to discuss versioning.
---
# Release Coordination Skill
Run the full Paperclip release as a maintainer workflow, not just an npm publish.
Run the full Paperclip maintainer release workflow, not just an npm publish.
This skill coordinates:
- stable changelog drafting via `release-changelog`
- release-train setup via `scripts/release-start.sh`
- prerelease canary publishing via `scripts/release.sh --canary`
- canary verification and publish status from `master`
- Docker smoke testing via `scripts/docker-onboard-smoke.sh`
- stable publishing via `scripts/release.sh`
- pushing the stable branch commit and tag
- GitHub Release creation via `scripts/create-github-release.sh`
- manual stable promotion from a chosen source ref
- GitHub Release creation
- website / announcement follow-up tasks
## Trigger
@@ -26,8 +24,9 @@ This skill coordinates:
Use this skill when leadership asks for:
- "do a release"
- "ship the next patch/minor/major"
- "release vX.Y.Z"
- "ship the release"
- "promote this canary to stable"
- "cut the stable release"
## Preconditions
@@ -35,10 +34,10 @@ Before proceeding, verify all of the following:
1. `.agents/skills/release-changelog/SKILL.md` exists and is usable.
2. The repo working tree is clean, including untracked files.
3. There are commits since the last stable tag.
4. The release SHA has passed the verification gate or is about to.
5. If package manifests changed, the CI-owned `pnpm-lock.yaml` refresh is already merged on `master` before the release branch is cut.
6. npm publish rights are available locally, or the GitHub release workflow is being used with trusted publishing.
3. There is at least one canary or candidate commit since the last stable tag.
4. The candidate SHA has passed the verification gate or is about to.
5. If manifests changed, the CI-owned `pnpm-lock.yaml` refresh is already merged on `master`.
6. npm publish rights are available through GitHub trusted publishing, or through local npm auth for emergency/manual use.
7. If running through Paperclip, you have issue context for status updates and follow-up task creation.
If any precondition fails, stop and report the blocker.
@@ -47,78 +46,67 @@ If any precondition fails, stop and report the blocker.
Collect these inputs up front:
- requested bump: `patch`, `minor`, or `major`
- whether this run is a dry run or live release
- whether the release is being run locally or from GitHub Actions
- whether the target is a canary check or a stable promotion
- the candidate `source_ref` for stable
- whether the stable run is dry-run or live
- release issue / company context for website and announcement follow-up
## Step 0 — Release Model
Paperclip now uses this release model:
Paperclip now uses a commit-driven release model:
1. Start or resume `release/X.Y.Z`
2. Draft the **stable** changelog as `releases/vX.Y.Z.md`
3. Publish one or more **prerelease canaries** such as `X.Y.Z-canary.0`
4. Smoke test the canary via Docker
5. Publish the stable version `X.Y.Z`
6. Push the stable branch commit and tag
7. Create the GitHub Release
8. Merge `release/X.Y.Z` back to `master` without squash or rebase
9. Complete website and announcement surfaces
1. every push to `master` publishes a canary automatically
2. canaries use `YYYY.MDD.P-canary.N`
3. stable releases use `YYYY.MDD.P`
4. the middle slot is `MDD`, where `M` is the UTC month and `DD` is the zero-padded UTC day
5. the stable patch slot increments when more than one stable ships on the same UTC date
6. stable releases are manually promoted from a chosen tested commit or canary source commit
7. only stable releases get `releases/vYYYY.MDD.P.md`, git tag `vYYYY.MDD.P`, and a GitHub Release
Critical consequence:
Critical consequences:
- Canaries do **not** use promote-by-dist-tag anymore.
- The changelog remains stable-only. Do not create `releases/vX.Y.Z-canary.N.md`.
- do not use release branches as the default path
- do not derive major/minor/patch bumps
- do not create canary changelog files
- do not create canary GitHub Releases
## Step 1 — Decide the Stable Version
## Step 1 — Choose the Candidate
Start the release train first:
For canary validation:
- inspect the latest successful canary run on `master`
- record the canary version and source SHA
For stable promotion:
1. choose the tested source ref
2. confirm it is the exact SHA you want to promote
3. resolve the target stable version with `./scripts/release.sh stable --date YYYY-MM-DD --print-version`
Useful commands:
```bash
./scripts/release-start.sh {patch|minor|major}
git tag --list 'v*' --sort=-version:refname | head -1
git log --oneline --no-merges
npm view paperclipai@canary version
```
Then run release preflight:
```bash
./scripts/release-preflight.sh canary {patch|minor|major}
# or
./scripts/release-preflight.sh stable {patch|minor|major}
```
Then use the last stable tag as the base:
```bash
LAST_TAG=$(git tag --list 'v*' --sort=-version:refname | head -1)
git log "${LAST_TAG}..HEAD" --oneline --no-merges
git diff --name-only "${LAST_TAG}..HEAD" -- packages/db/src/migrations/
git diff "${LAST_TAG}..HEAD" -- packages/db/src/schema/
git log "${LAST_TAG}..HEAD" --format="%s" | rg -n 'BREAKING CHANGE|BREAKING:|^[a-z]+!:' || true
```
Bump policy:
- destructive migrations, removed APIs, breaking config changes -> `major`
- additive migrations or clearly user-visible features -> at least `minor`
- fixes only -> `patch`
If the requested bump is too low, escalate it and explain why.
## Step 2 — Draft the Stable Changelog
Invoke `release-changelog` and generate:
Stable changelog files live at:
- `releases/vX.Y.Z.md`
- `releases/vYYYY.MDD.P.md`
Invoke `release-changelog` and generate or update the stable notes only.
Rules:
- review the draft with a human before publish
- preserve manual edits if the file already exists
- keep the heading and filename stable-only, for example `v1.2.3`
- do not create a separate canary changelog file
- keep the filename stable-only
- do not create a canary changelog file
## Step 3 — Verify the Release SHA
## Step 3 — Verify the Candidate SHA
Run the standard gate:
@@ -128,41 +116,27 @@ pnpm test:run
pnpm build
```
If the release will be run through GitHub Actions, the workflow can rerun this gate. Still report whether the local tree currently passes.
If the GitHub release workflow will run the publish, it can rerun this gate. Still report local status if you checked it.
The GitHub Actions release workflow installs with `pnpm install --frozen-lockfile`. Treat that as a release invariant, not a nuisance: if manifests changed and the lockfile refresh PR has not landed yet, stop and wait for `master` to contain the committed lockfile before shipping.
For PRs that touch release logic, the repo also runs a canary release dry-run in CI. That is a release-specific guard, not a substitute for the standard gate.
## Step 4 — Publish a Canary
## Step 4 — Validate the Canary
Run from the `release/X.Y.Z` branch:
The normal canary path is automatic from `master` via:
```bash
./scripts/release.sh {patch|minor|major} --canary --dry-run
./scripts/release.sh {patch|minor|major} --canary
```
- `.github/workflows/release.yml`
What this means:
Confirm:
- npm receives `X.Y.Z-canary.N` under dist-tag `canary`
- `latest` remains unchanged
- no git tag is created
- the script cleans the working tree afterward
1. verification passed
2. npm canary publish succeeded
3. git tag `canary/vYYYY.MDD.P-canary.N` exists
Guard:
- if the current stable is `0.2.7`, the next patch canary is `0.2.8-canary.0`
- the tooling must never publish `0.2.7-canary.N` after `0.2.7` is already stable
After publish, verify:
Useful checks:
```bash
npm view paperclipai@canary version
```
The user install path is:
```bash
npx paperclipai@canary onboard
git tag --list 'canary/v*' --sort=-version:refname | head -5
```
## Step 5 — Smoke Test the Canary
@@ -173,60 +147,70 @@ Run:
PAPERCLIPAI_VERSION=canary ./scripts/docker-onboard-smoke.sh
```
Useful isolated variant:
```bash
HOST_PORT=3232 DATA_DIR=./data/release-smoke-canary PAPERCLIPAI_VERSION=canary ./scripts/docker-onboard-smoke.sh
```
Confirm:
1. install succeeds
2. onboarding completes
3. server boots
4. UI loads
5. basic company/dashboard flow works
2. onboarding completes without crashes
3. the server boots
4. the UI loads
5. basic company creation and dashboard load work
If smoke testing fails:
- stop the stable release
- fix the issue
- publish another canary
- repeat the smoke test
- fix the issue on `master`
- wait for the next automatic canary
- rerun smoke testing
Each retry should create a higher canary ordinal, while the stable target version can stay the same.
## Step 6 — Preview or Publish Stable
## Step 6 — Publish Stable
The normal stable path is manual `workflow_dispatch` on:
Once the SHA is vetted, run:
- `.github/workflows/release.yml`
Inputs:
- `source_ref`
- `stable_date`
- `dry_run`
Before live stable:
1. resolve the target stable version with `./scripts/release.sh stable --date YYYY-MM-DD --print-version`
2. ensure `releases/vYYYY.MDD.P.md` exists on the source ref
3. run the stable workflow in dry-run mode first when practical
4. then run the real stable publish
The stable workflow:
- re-verifies the exact source ref
- computes the next stable patch slot for the chosen UTC date
- publishes `YYYY.MDD.P` under dist-tag `latest`
- creates git tag `vYYYY.MDD.P`
- creates or updates the GitHub Release from `releases/vYYYY.MDD.P.md`
Local emergency/manual commands:
```bash
./scripts/release.sh {patch|minor|major} --dry-run
./scripts/release.sh {patch|minor|major}
./scripts/release.sh stable --dry-run
./scripts/release.sh stable
git push public-gh refs/tags/vYYYY.MDD.P
./scripts/create-github-release.sh YYYY.MDD.P
```
Stable publish does this:
- publishes `X.Y.Z` to npm under `latest`
- creates the local release commit
- creates the local git tag `vX.Y.Z`
Stable publish does **not** push the release for you.
## Step 7 — Push and Create GitHub Release
After stable publish succeeds:
```bash
git push public-gh HEAD --follow-tags
./scripts/create-github-release.sh X.Y.Z
```
Use the stable changelog file as the GitHub Release notes source.
Then open the PR from `release/X.Y.Z` back to `master` and merge without squash or rebase.
## Step 8 — Finish the Other Surfaces
## Step 7 — Finish the Other Surfaces
Create or verify follow-up work for:
- website changelog publishing
- launch post / social announcement
- any release summary in Paperclip issue context
- release summary in Paperclip issue context
These should reference the stable release, not the canary.
@@ -236,9 +220,9 @@ If the canary is bad:
- publish another canary, do not ship stable
If stable npm publish succeeds but push or GitHub release creation fails:
If stable npm publish succeeds but tag push or GitHub release creation fails:
- fix the git/GitHub issue immediately from the same checkout
- fix the git/GitHub issue immediately from the same release result
- do not republish the same version
If `latest` is bad after stable publish:
@@ -247,15 +231,17 @@ If `latest` is bad after stable publish:
./scripts/rollback-latest.sh <last-good-version>
```
Then fix forward with a new patch release.
Then fix forward with a new stable release.
## Output
When the skill completes, provide:
- stable version and, if relevant, the final canary version tested
- candidate SHA and tested canary version, if relevant
- stable version, if promoted
- verification status
- npm status
- smoke-test status
- git tag / GitHub Release status
- website / announcement follow-up status
- rollback recommendation if anything is still partially complete

View File

@@ -1,8 +0,0 @@
# Changesets
Hello and welcome! This folder has been automatically generated by `@changesets/cli`, a build tool that works
with multi-package repos, or single-package repos to help you version and publish your code. You can
find the full documentation for it [in our repository](https://github.com/changesets/changesets).
We have a quick list of common questions to get you started engaging with this project in
[our documentation](https://github.com/changesets/changesets/blob/main/docs/common-questions.md).

View File

@@ -1,11 +0,0 @@
{
"$schema": "https://unpkg.com/@changesets/config@3.1.3/schema.json",
"changelog": "@changesets/cli/changelog",
"commit": false,
"fixed": [["@paperclipai/*", "paperclipai"]],
"linked": [],
"access": "public",
"baseBranch": "master",
"updateInternalDependencies": "patch",
"ignore": ["@paperclipai/ui"]
}

10
.github/CODEOWNERS vendored Normal file
View File

@@ -0,0 +1,10 @@
# Replace @cryppadotta if a different maintainer or team should own release infrastructure.
.github/** @cryppadotta @devinfoley
scripts/release*.sh @cryppadotta @devinfoley
scripts/release-*.mjs @cryppadotta @devinfoley
scripts/create-github-release.sh @cryppadotta @devinfoley
scripts/rollback-latest.sh @cryppadotta @devinfoley
doc/RELEASING.md @cryppadotta @devinfoley
doc/PUBLISHING.md @cryppadotta @devinfoley
doc/RELEASE-AUTOMATION-SETUP.md @cryppadotta @devinfoley

View File

@@ -26,7 +26,7 @@ jobs:
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 20
node-version: 24
cache: pnpm
- name: Install dependencies
@@ -40,3 +40,9 @@ jobs:
- name: Build
run: pnpm build
- name: Release canary dry run
run: |
git checkout -B master HEAD
git checkout -- pnpm-lock.yaml
./scripts/release.sh canary --skip-verify --dry-run

View File

@@ -79,3 +79,15 @@ jobs:
else
echo "PR #$existing already exists, branch updated via force push."
fi
- name: Enable auto-merge for lockfile PR
env:
GH_TOKEN: ${{ github.token }}
run: |
pr_url="$(gh pr list --head chore/refresh-lockfile --json url --jq '.[0].url')"
if [ -z "$pr_url" ]; then
echo "Error: lockfile PR was not found." >&2
exit 1
fi
gh pr merge --auto --squash --delete-branch "$pr_url"

118
.github/workflows/release-smoke.yml vendored Normal file
View File

@@ -0,0 +1,118 @@
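# Smoke-tests a published Paperclip dist-tag end to end: boots the Docker
# onboarding harness detached, runs the release-smoke Playwright suite against
# the live container, then uploads diagnostics and tears the container down.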
name: Release Smoke
on:
workflow_dispatch:
inputs:
paperclip_version:
description: Published Paperclip dist-tag to test
required: true
default: canary
type: choice
options:
- canary
- latest
host_port:
description: Host port for the Docker smoke container
required: false
default: "3232"
type: string
artifact_name:
description: Artifact name for uploaded diagnostics
required: false
default: release-smoke
type: string
workflow_call:
inputs:
paperclip_version:
required: true
type: string
host_port:
required: false
default: "3232"
type: string
artifact_name:
required: false
default: release-smoke
type: string
jobs:
smoke:
runs-on: ubuntu-latest
timeout-minutes: 45
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 9.15.4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 24
cache: pnpm
- name: Install dependencies
run: pnpm install --no-frozen-lockfile
- name: Install Playwright browser
run: npx playwright install --with-deps chromium
- name: Launch Docker smoke harness
run: |
metadata_file="$RUNNER_TEMP/release-smoke.env"
HOST_PORT="${{ inputs.host_port }}" \
DATA_DIR="$RUNNER_TEMP/release-smoke-data" \
PAPERCLIPAI_VERSION="${{ inputs.paperclip_version }}" \
SMOKE_DETACH=true \
SMOKE_METADATA_FILE="$metadata_file" \
./scripts/docker-onboard-smoke.sh
set -a
source "$metadata_file"
set +a
{
echo "SMOKE_BASE_URL=$SMOKE_BASE_URL"
echo "SMOKE_ADMIN_EMAIL=$SMOKE_ADMIN_EMAIL"
echo "SMOKE_ADMIN_PASSWORD=$SMOKE_ADMIN_PASSWORD"
echo "SMOKE_CONTAINER_NAME=$SMOKE_CONTAINER_NAME"
echo "SMOKE_DATA_DIR=$SMOKE_DATA_DIR"
echo "SMOKE_IMAGE_NAME=$SMOKE_IMAGE_NAME"
echo "SMOKE_PAPERCLIPAI_VERSION=$SMOKE_PAPERCLIPAI_VERSION"
echo "SMOKE_METADATA_FILE=$metadata_file"
} >> "$GITHUB_ENV"
- name: Run release smoke Playwright suite
env:
PAPERCLIP_RELEASE_SMOKE_BASE_URL: ${{ env.SMOKE_BASE_URL }}
PAPERCLIP_RELEASE_SMOKE_EMAIL: ${{ env.SMOKE_ADMIN_EMAIL }}
PAPERCLIP_RELEASE_SMOKE_PASSWORD: ${{ env.SMOKE_ADMIN_PASSWORD }}
run: pnpm run test:release-smoke
- name: Capture Docker logs
if: always()
run: |
if [[ -n "${SMOKE_CONTAINER_NAME:-}" ]]; then
docker logs "$SMOKE_CONTAINER_NAME" >"$RUNNER_TEMP/docker-onboard-smoke.log" 2>&1 || true
fi
- name: Upload diagnostics
if: always()
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.artifact_name }}
path: |
${{ runner.temp }}/docker-onboard-smoke.log
${{ env.SMOKE_METADATA_FILE }}
tests/release-smoke/playwright-report/
tests/release-smoke/test-results/
retention-days: 14
- name: Stop Docker smoke container
if: always()
run: |
if [[ -n "${SMOKE_CONTAINER_NAME:-}" ]]; then
docker rm -f "$SMOKE_CONTAINER_NAME" >/dev/null 2>&1 || true
fi

View File

@@ -1,38 +1,33 @@
name: Release
on:
push:
branches:
- master
workflow_dispatch:
inputs:
channel:
description: Release channel
source_ref:
description: Commit SHA, branch, or tag to publish as stable
required: true
type: choice
default: canary
options:
- canary
- stable
bump:
description: Semantic version bump
required: true
type: choice
default: patch
options:
- patch
- minor
- major
type: string
default: master
stable_date:
description: Enter a UTC date in YYYY-MM-DD format, for example 2026-03-18. Do not enter a version string. The workflow will resolve that date to a stable version such as 2026.318.0, then 2026.318.1 for the next same-day stable.
required: false
type: string
dry_run:
description: Preview the release without publishing
description: Preview the stable release without publishing
required: true
type: boolean
default: true
default: false
concurrency:
group: release-${{ github.ref }}
group: release-${{ github.event_name }}-${{ github.ref }}
cancel-in-progress: false
jobs:
verify:
if: startsWith(github.ref, 'refs/heads/release/')
verify_canary:
if: github.event_name == 'push'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
@@ -56,7 +51,7 @@ jobs:
cache: pnpm
- name: Install dependencies
run: pnpm install --frozen-lockfile
run: pnpm install --no-frozen-lockfile
- name: Typecheck
run: pnpm -r typecheck
@@ -67,12 +62,12 @@ jobs:
- name: Build
run: pnpm build
publish:
if: startsWith(github.ref, 'refs/heads/release/')
needs: verify
publish_canary:
if: github.event_name == 'push'
needs: verify_canary
runs-on: ubuntu-latest
timeout-minutes: 45
environment: npm-release
environment: npm-canary
permissions:
contents: write
id-token: write
@@ -95,34 +90,168 @@ jobs:
cache: pnpm
- name: Install dependencies
run: pnpm install --frozen-lockfile
run: pnpm install --no-frozen-lockfile
- name: Restore tracked install-time changes
run: git checkout -- pnpm-lock.yaml
- name: Configure git author
run: |
git config user.name "github-actions[bot]"
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
- name: Run release script
- name: Publish canary
env:
GITHUB_ACTIONS: "true"
run: ./scripts/release.sh canary --skip-verify
- name: Push canary tag
run: |
tag="$(git tag --points-at HEAD | grep '^canary/v' | head -1)"
if [ -z "$tag" ]; then
echo "Error: no canary tag points at HEAD after release." >&2
exit 1
fi
git push origin "refs/tags/${tag}"
verify_stable:
if: github.event_name == 'workflow_dispatch'
runs-on: ubuntu-latest
timeout-minutes: 30
permissions:
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ inputs.source_ref }}
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 9.15.4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 24
cache: pnpm
- name: Install dependencies
run: pnpm install --no-frozen-lockfile
- name: Typecheck
run: pnpm -r typecheck
- name: Run tests
run: pnpm test:run
- name: Build
run: pnpm build
preview_stable:
if: github.event_name == 'workflow_dispatch' && inputs.dry_run
needs: verify_stable
runs-on: ubuntu-latest
timeout-minutes: 45
permissions:
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ inputs.source_ref }}
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 9.15.4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 24
cache: pnpm
- name: Install dependencies
run: pnpm install --no-frozen-lockfile
- name: Dry-run stable release
env:
GITHUB_ACTIONS: "true"
run: |
args=("${{ inputs.bump }}")
if [ "${{ inputs.channel }}" = "canary" ]; then
args+=("--canary")
fi
if [ "${{ inputs.dry_run }}" = "true" ]; then
args+=("--dry-run")
args=(stable --skip-verify --dry-run)
if [ -n "${{ inputs.stable_date }}" ]; then
args+=(--date "${{ inputs.stable_date }}")
fi
./scripts/release.sh "${args[@]}"
- name: Push stable release branch commit and tag
if: inputs.channel == 'stable' && !inputs.dry_run
run: git push origin "HEAD:${GITHUB_REF_NAME}" --follow-tags
publish_stable:
if: github.event_name == 'workflow_dispatch' && !inputs.dry_run
needs: verify_stable
runs-on: ubuntu-latest
timeout-minutes: 45
environment: npm-stable
permissions:
contents: write
id-token: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
ref: ${{ inputs.source_ref }}
- name: Setup pnpm
uses: pnpm/action-setup@v4
with:
version: 9.15.4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: 24
cache: pnpm
- name: Install dependencies
run: pnpm install --no-frozen-lockfile
- name: Restore tracked install-time changes
run: git checkout -- pnpm-lock.yaml
- name: Configure git author
run: |
git config user.name "github-actions[bot]"
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
- name: Publish stable
env:
GITHUB_ACTIONS: "true"
run: |
args=(stable --skip-verify)
if [ -n "${{ inputs.stable_date }}" ]; then
args+=(--date "${{ inputs.stable_date }}")
fi
./scripts/release.sh "${args[@]}"
- name: Push stable tag
run: |
tag="$(git tag --points-at HEAD | grep '^v' | head -1)"
if [ -z "$tag" ]; then
echo "Error: no stable tag points at HEAD after release." >&2
exit 1
fi
git push origin "refs/tags/${tag}"
- name: Create GitHub Release
if: inputs.channel == 'stable' && !inputs.dry_run
env:
GH_TOKEN: ${{ github.token }}
PUBLISH_REMOTE: origin
run: |
version="$(git tag --points-at HEAD | grep '^v' | head -1 | sed 's/^v//')"
if [ -z "$version" ]; then

11
.gitignore vendored
View File

@@ -37,7 +37,16 @@ tmp/
.vscode/
.claude/settings.local.json
.paperclip-local/
/.idea/
/.agents/
# Doc maintenance cursor
.doc-review-cursor
# Playwright
tests/e2e/test-results/
tests/e2e/playwright-report/
tests/release-smoke/test-results/
tests/release-smoke/playwright-report/
.superset/
.claude/worktrees/

View File

@@ -7,6 +7,7 @@ We really appreciate both small fixes and thoughtful larger changes.
## Two Paths to Get Your Pull Request Accepted
### Path 1: Small, Focused Changes (Fastest way to get merged)
- Pick **one** clear thing to fix/improve
- Touch the **smallest possible number of files**
- Make sure the change is very targeted and easy to review
@@ -16,6 +17,7 @@ We really appreciate both small fixes and thoughtful larger changes.
These almost always get merged quickly when they're clean.
### Path 2: Bigger or Impactful Changes
- **First** talk about it in Discord → #dev channel
→ Describe what you're trying to solve
→ Share rough ideas / approach
@@ -30,12 +32,43 @@ These almost always get merged quickly when they're clean.
PRs that follow this path are **much** more likely to be accepted, even when they're large.
## General Rules (both paths)
- Write clear commit messages
- Keep PR title + description meaningful
- One PR = one logical change (unless it's a small related group)
- Run tests locally first
- Be kind in discussions 😄
## Writing a Good PR message
Please include a "thinking path" at the top of your PR message that explains from the top of the project down to what you fixed. E.g.:
### Thinking Path Example 1:
> - Paperclip orchestrates AI agents for zero-human companies
> - There are many types of adapters for each LLM provider
> - But LLMs have a context limit, and not all agents can automatically compact their context
> - So we need adapter-specific configuration for which adapters can and cannot automatically compact their context
> - This pull request adds per-adapter configuration of compaction, either auto or Paperclip-managed
> - That way we can get optimal performance from any adapter/provider in Paperclip
### Thinking Path Example 2:
> - Paperclip orchestrates AI agents for zero-human companies
> - But humans want to watch the agents and oversee their work
> - Human users also operate in teams, so they need their own logins, profiles, views, etc.
> - So we have a multi-user system for humans
> - But humans want to be able to update their own profile picture and avatar
> - But the avatar upload form wasn't saving the avatar to the file storage system
> - So this PR fixes the avatar upload form to use the file storage service
> - The benefit is that we don't end up with one-off file storage for just one aspect of the system, which would cause confusion and extra configuration
Then have the rest of your normal PR message after the Thinking Path.
This should include details about what you did, why you did it, why it matters & the benefits, how we can verify it works, and any risks.
If your change is visible in the UI, please include screenshots (use something like the [agent-browser skill](https://github.com/vercel-labs/agent-browser/blob/main/skills/agent-browser/SKILL.md) or similar to take them). Ideally, include before and after screenshots.
Questions? Just ask in #dev — we're happy to help.
Happy hacking!

View File

@@ -239,7 +239,7 @@ See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.
- ⚪ ClipMart - buy and sell entire agent companies
- ⚪ Easy agent configurations / easier to understand
- ⚪ Better support for harness engineering
- Plugin system (e.g. if you want to add a knowledgebase, custom tracing, queues, etc)
- 🟢 Plugin system (e.g. if you want to add a knowledgebase, custom tracing, queues, etc)
- ⚪ Better docs
<br/>

View File

@@ -1,5 +1,23 @@
# paperclipai
## 0.3.1
### Patch Changes
- Stable release preparation for 0.3.1
- Updated dependencies
- @paperclipai/adapter-utils@0.3.1
- @paperclipai/adapter-claude-local@0.3.1
- @paperclipai/adapter-codex-local@0.3.1
- @paperclipai/adapter-cursor-local@0.3.1
- @paperclipai/adapter-gemini-local@0.3.1
- @paperclipai/adapter-openclaw-gateway@0.3.1
- @paperclipai/adapter-opencode-local@0.3.1
- @paperclipai/adapter-pi-local@0.3.1
- @paperclipai/db@0.3.1
- @paperclipai/shared@0.3.1
- @paperclipai/server@0.3.1
## 0.3.0
### Minor Changes

View File

@@ -1,6 +1,6 @@
{
"name": "paperclipai",
"version": "0.3.0",
"version": "0.3.1",
"description": "Paperclip CLI — orchestrate AI agent teams to run a business",
"type": "module",
"bin": {
@@ -16,10 +16,13 @@
"license": "MIT",
"repository": {
"type": "git",
"url": "https://github.com/paperclipai/paperclip.git",
"url": "https://github.com/paperclipai/paperclip",
"directory": "cli"
},
"homepage": "https://github.com/paperclipai/paperclip",
"bugs": {
"url": "https://github.com/paperclipai/paperclip/issues"
},
"files": [
"dist"
],

View File

@@ -8,12 +8,16 @@ function makeCompany(overrides: Partial<Company>): Company {
name: "Alpha",
description: null,
status: "active",
pauseReason: null,
pausedAt: null,
issuePrefix: "ALP",
issueCounter: 1,
budgetMonthlyCents: 0,
spentMonthlyCents: 0,
requireBoardApprovalForNewAgents: false,
brandColor: null,
logoAssetId: null,
logoUrl: null,
createdAt: new Date(),
updatedAt: new Date(),
...overrides,

View File

@@ -0,0 +1,374 @@
import path from "node:path";
import { Command } from "commander";
import pc from "picocolors";
import {
addCommonClientOptions,
handleCommandError,
printOutput,
resolveCommandContext,
type BaseClientOptions,
} from "./common.js";
// ---------------------------------------------------------------------------
// Types mirroring server-side shapes
// ---------------------------------------------------------------------------
interface PluginRecord {
id: string;
pluginKey: string;
packageName: string;
version: string;
status: string;
displayName?: string;
lastError?: string | null;
installedAt: string;
updatedAt: string;
}
// ---------------------------------------------------------------------------
// Option types
// ---------------------------------------------------------------------------
interface PluginListOptions extends BaseClientOptions {
status?: string;
}
interface PluginInstallOptions extends BaseClientOptions {
local?: boolean;
version?: string;
}
interface PluginUninstallOptions extends BaseClientOptions {
force?: boolean;
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
/**
* Resolve a local path argument to an absolute path so the server can find the
* plugin on disk regardless of where the user ran the CLI.
*/
function resolvePackageArg(packageArg: string, isLocal: boolean): string {
if (!isLocal) return packageArg;
// Already absolute
if (path.isAbsolute(packageArg)) return packageArg;
// Expand leading ~ to home directory
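// e.g. "~/plugins/foo" resolves against $HOME (or USERPROFILE on Windows).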
if (packageArg.startsWith("~")) {
const home = process.env.HOME ?? process.env.USERPROFILE ?? "";
return path.resolve(home, packageArg.slice(1).replace(/^[\\/]/, ""));
}
return path.resolve(process.cwd(), packageArg);
}
function formatPlugin(p: PluginRecord): string {
const statusColor =
p.status === "ready"
? pc.green(p.status)
: p.status === "error"
? pc.red(p.status)
: p.status === "disabled"
? pc.dim(p.status)
: pc.yellow(p.status);
const parts = [
`key=${pc.bold(p.pluginKey)}`,
`status=${statusColor}`,
`version=${p.version}`,
`id=${pc.dim(p.id)}`,
];
if (p.lastError) {
parts.push(`error=${pc.red(p.lastError.slice(0, 80))}`);
}
return parts.join(" ");
}
// ---------------------------------------------------------------------------
// Command registration
// ---------------------------------------------------------------------------
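/**
 * Register the `paperclipai plugin ...` command group (list, install,
 * uninstall, enable, disable, inspect, examples) on the root program.
 */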
export function registerPluginCommands(program: Command): void {
const plugin = program.command("plugin").description("Plugin lifecycle management");
// -------------------------------------------------------------------------
// plugin list
// -------------------------------------------------------------------------
addCommonClientOptions(
plugin
.command("list")
.description("List installed plugins")
.option("--status <status>", "Filter by status (ready, error, disabled, installed, upgrade_pending)")
.action(async (opts: PluginListOptions) => {
try {
const ctx = resolveCommandContext(opts);
const qs = opts.status ? `?status=${encodeURIComponent(opts.status)}` : "";
const plugins = await ctx.api.get<PluginRecord[]>(`/api/plugins${qs}`);
if (ctx.json) {
printOutput(plugins, { json: true });
return;
}
const rows = plugins ?? [];
if (rows.length === 0) {
console.log(pc.dim("No plugins installed."));
return;
}
for (const p of rows) {
console.log(formatPlugin(p));
}
} catch (err) {
handleCommandError(err);
}
}),
);
// -------------------------------------------------------------------------
// plugin install <package-or-path>
// -------------------------------------------------------------------------
addCommonClientOptions(
plugin
.command("install <package>")
.description(
"Install a plugin from a local path or npm package.\n" +
" Examples:\n" +
" paperclipai plugin install ./my-plugin # local path\n" +
" paperclipai plugin install @acme/plugin-linear # npm package\n" +
" paperclipai plugin install @acme/plugin-linear@1.2 # pinned version",
)
.option("-l, --local", "Treat <package> as a local filesystem path", false)
.option("--version <version>", "Specific npm version to install (npm packages only)")
.action(async (packageArg: string, opts: PluginInstallOptions) => {
try {
const ctx = resolveCommandContext(opts);
// Auto-detect local paths: starts with ./, ../, /, or ~
const isLocal =
opts.local ||
packageArg.startsWith("./") ||
packageArg.startsWith("../") ||
packageArg.startsWith("/") ||
packageArg.startsWith("~");
const resolvedPackage = resolvePackageArg(packageArg, isLocal);
if (!ctx.json) {
console.log(
pc.dim(
isLocal
? `Installing plugin from local path: ${resolvedPackage}`
: `Installing plugin: ${resolvedPackage}${opts.version ? `@${opts.version}` : ""}`,
),
);
}
const installedPlugin = await ctx.api.post<PluginRecord>("/api/plugins/install", {
packageName: resolvedPackage,
version: opts.version,
isLocalPath: isLocal,
});
if (ctx.json) {
printOutput(installedPlugin, { json: true });
return;
}
if (!installedPlugin) {
console.log(pc.dim("Install returned no plugin record."));
return;
}
console.log(
pc.green(
`✓ Installed ${pc.bold(installedPlugin.pluginKey)} v${installedPlugin.version} (${installedPlugin.status})`,
),
);
if (installedPlugin.lastError) {
console.log(pc.red(` Warning: ${installedPlugin.lastError}`));
}
} catch (err) {
handleCommandError(err);
}
}),
);
// -------------------------------------------------------------------------
// plugin uninstall <plugin-key-or-id>
// -------------------------------------------------------------------------
addCommonClientOptions(
plugin
.command("uninstall <pluginKey>")
.description(
"Uninstall a plugin by its plugin key or database ID.\n" +
" Use --force to hard-purge all state and config.",
)
.option("--force", "Purge all plugin state and config (hard delete)", false)
.action(async (pluginKey: string, opts: PluginUninstallOptions) => {
try {
const ctx = resolveCommandContext(opts);
const purge = opts.force === true;
const qs = purge ? "?purge=true" : "";
if (!ctx.json) {
console.log(
pc.dim(
purge
? `Uninstalling and purging plugin: ${pluginKey}`
: `Uninstalling plugin: ${pluginKey}`,
),
);
}
const result = await ctx.api.delete<PluginRecord | null>(
`/api/plugins/${encodeURIComponent(pluginKey)}${qs}`,
);
if (ctx.json) {
printOutput(result, { json: true });
return;
}
console.log(pc.green(`✓ Uninstalled ${pc.bold(pluginKey)}${purge ? " (purged)" : ""}`));
} catch (err) {
handleCommandError(err);
}
}),
);
// -------------------------------------------------------------------------
// plugin enable <plugin-key-or-id>
// -------------------------------------------------------------------------
addCommonClientOptions(
plugin
.command("enable <pluginKey>")
.description("Enable a disabled or errored plugin")
.action(async (pluginKey: string, opts: BaseClientOptions) => {
try {
const ctx = resolveCommandContext(opts);
const result = await ctx.api.post<PluginRecord>(
`/api/plugins/${encodeURIComponent(pluginKey)}/enable`,
);
if (ctx.json) {
printOutput(result, { json: true });
return;
}
console.log(pc.green(`✓ Enabled ${pc.bold(pluginKey)} — status: ${result?.status ?? "unknown"}`));
} catch (err) {
handleCommandError(err);
}
}),
);
// -------------------------------------------------------------------------
// plugin disable <plugin-key-or-id>
// -------------------------------------------------------------------------
addCommonClientOptions(
plugin
.command("disable <pluginKey>")
.description("Disable a running plugin without uninstalling it")
.action(async (pluginKey: string, opts: BaseClientOptions) => {
try {
const ctx = resolveCommandContext(opts);
const result = await ctx.api.post<PluginRecord>(
`/api/plugins/${encodeURIComponent(pluginKey)}/disable`,
);
if (ctx.json) {
printOutput(result, { json: true });
return;
}
console.log(pc.dim(`Disabled ${pc.bold(pluginKey)} — status: ${result?.status ?? "unknown"}`));
} catch (err) {
handleCommandError(err);
}
}),
);
// -------------------------------------------------------------------------
// plugin inspect <plugin-key-or-id>
// -------------------------------------------------------------------------
addCommonClientOptions(
plugin
.command("inspect <pluginKey>")
.description("Show full details for an installed plugin")
.action(async (pluginKey: string, opts: BaseClientOptions) => {
try {
const ctx = resolveCommandContext(opts);
const result = await ctx.api.get<PluginRecord>(
`/api/plugins/${encodeURIComponent(pluginKey)}`,
);
if (ctx.json) {
printOutput(result, { json: true });
return;
}
if (!result) {
console.log(pc.red(`Plugin not found: ${pluginKey}`));
process.exit(1);
}
console.log(formatPlugin(result));
if (result.lastError) {
console.log(`\n${pc.red("Last error:")}\n${result.lastError}`);
}
} catch (err) {
handleCommandError(err);
}
}),
);
// -------------------------------------------------------------------------
// plugin examples
// -------------------------------------------------------------------------
addCommonClientOptions(
plugin
.command("examples")
.description("List bundled example plugins available for local install")
.action(async (opts: BaseClientOptions) => {
try {
const ctx = resolveCommandContext(opts);
const examples = await ctx.api.get<
Array<{
packageName: string;
pluginKey: string;
displayName: string;
description: string;
localPath: string;
tag: string;
}>
>("/api/plugins/examples");
if (ctx.json) {
printOutput(examples, { json: true });
return;
}
const rows = examples ?? [];
if (rows.length === 0) {
console.log(pc.dim("No bundled examples available."));
return;
}
for (const ex of rows) {
console.log(
`${pc.bold(ex.displayName)} ${pc.dim(ex.pluginKey)}\n` +
` ${ex.description}\n` +
` ${pc.cyan(`paperclipai plugin install ${ex.localPath}`)}`,
);
}
} catch (err) {
handleCommandError(err);
}
}),
);
}

View File

@@ -18,6 +18,7 @@ import { registerDashboardCommands } from "./commands/client/dashboard.js";
import { applyDataDirOverride, type DataDirOptionLike } from "./config/data-dir.js";
import { loadPaperclipEnvFile } from "./config/env.js";
import { registerWorktreeCommands } from "./commands/worktree.js";
import { registerPluginCommands } from "./commands/client/plugin.js";
const program = new Command();
const DATA_DIR_OPTION_HELP =
@@ -136,6 +137,7 @@ registerApprovalCommands(program);
registerActivityCommands(program);
registerDashboardCommands(program);
registerWorktreeCommands(program);
registerPluginCommands(program);
const auth = program.command("auth").description("Authentication and bootstrap utilities");

View File

@@ -89,6 +89,10 @@ docker compose -f docker-compose.quickstart.yml up --build
See `doc/DOCKER.md` for API key wiring (`OPENAI_API_KEY` / `ANTHROPIC_API_KEY`) and persistence details.
## Docker For Untrusted PR Review
For a separate review-oriented container that keeps `codex`/`claude` login state in Docker volumes and checks out PRs into an isolated scratch workspace, see `doc/UNTRUSTED-PR-REVIEW.md`.
## Database in Dev (Auto-Handled)
For local development, leave `DATABASE_URL` unset.

View File

@@ -93,6 +93,12 @@ Notes:
- Without API keys, the app still runs normally.
- Adapter environment checks in Paperclip will surface missing auth/CLI prerequisites.
## Untrusted PR Review Container
If you want a separate Docker environment for reviewing untrusted pull requests with `codex` or `claude`, use the dedicated review workflow in `doc/UNTRUSTED-PR-REVIEW.md`.
That setup keeps CLI auth state in Docker volumes instead of your host home directory and uses a separate scratch workspace for PR checkouts and preview runs.
## Onboard Smoke Test (Ubuntu + npm only)
Use this when you want to mimic a fresh machine that only has Ubuntu + npm and verify:
@@ -114,6 +120,7 @@ Useful overrides:
```sh
HOST_PORT=3200 PAPERCLIPAI_VERSION=latest ./scripts/docker-onboard-smoke.sh
PAPERCLIP_DEPLOYMENT_MODE=authenticated PAPERCLIP_DEPLOYMENT_EXPOSURE=private ./scripts/docker-onboard-smoke.sh
SMOKE_DETACH=true SMOKE_METADATA_FILE=/tmp/paperclip-smoke.env PAPERCLIPAI_VERSION=latest ./scripts/docker-onboard-smoke.sh
```
Notes:
@@ -125,4 +132,5 @@ Notes:
- Smoke script also defaults `PAPERCLIP_PUBLIC_URL` to `http://localhost:<HOST_PORT>` so bootstrap invite URLs and auth callbacks use the reachable host port instead of the container's internal `3100`.
- In authenticated mode, the smoke script defaults `SMOKE_AUTO_BOOTSTRAP=true` and drives the real bootstrap path automatically: it signs up a real user, runs `paperclipai auth bootstrap-ceo` inside the container to mint a real bootstrap invite, accepts that invite over HTTP, and verifies board session access.
- Run the script in the foreground to watch the onboarding flow; stop with `Ctrl+C` after validation.
- Set `SMOKE_DETACH=true` to leave the container running for automation and optionally write shell-ready metadata to `SMOKE_METADATA_FILE`.
- The image definition is in `Dockerfile.onboard-smoke`.

View File

@@ -1,18 +1,19 @@
# Publishing to npm
Low-level reference for how Paperclip packages are built for npm.
Low-level reference for how Paperclip packages are prepared and published to npm.
For the maintainer release workflow, use [doc/RELEASING.md](RELEASING.md). This document is only about packaging internals and the scripts that produce publishable artifacts.
For the maintainer workflow, use [doc/RELEASING.md](RELEASING.md). This document focuses on packaging internals.
## Current Release Entry Points
Use these scripts instead of older one-off publish commands:
Use these scripts:
- [`scripts/release-start.sh`](../scripts/release-start.sh) to create or resume `release/X.Y.Z`
- [`scripts/release-preflight.sh`](../scripts/release-preflight.sh) before any canary or stable release
- [`scripts/release.sh`](../scripts/release.sh) for canary and stable npm publishes
- [`scripts/rollback-latest.sh`](../scripts/rollback-latest.sh) to repoint `latest` during rollback
- [`scripts/create-github-release.sh`](../scripts/create-github-release.sh) after pushing the stable branch tag
- [`scripts/release.sh`](../scripts/release.sh) for canary and stable publish flows
- [`scripts/create-github-release.sh`](../scripts/create-github-release.sh) after pushing a stable tag
- [`scripts/rollback-latest.sh`](../scripts/rollback-latest.sh) to repoint `latest`
- [`scripts/build-npm.sh`](../scripts/build-npm.sh) for the CLI packaging build
Paperclip no longer uses release branches or Changesets for publishing.
## Why the CLI needs special packaging
@@ -23,7 +24,7 @@ The CLI package, `paperclipai`, imports code from workspace packages such as:
- `@paperclipai/shared`
- adapter packages under `packages/adapters/`
Those workspace references use `workspace:*` during development. npm cannot install those references directly for end users, so the release build has to transform the CLI into a publishable standalone package.
Those workspace references are valid in development but not in a publishable npm package. The release flow rewrites versions temporarily, then builds a publishable CLI bundle.
## `build-npm.sh`
@@ -33,89 +34,107 @@ Run:
./scripts/build-npm.sh
```
This script does six things:
This script:
1. Runs the forbidden token check unless `--skip-checks` is supplied
2. Runs `pnpm -r typecheck`
3. Bundles the CLI entrypoint with esbuild into `cli/dist/index.js`
4. Verifies the bundled entrypoint with `node --check`
5. Rewrites `cli/package.json` into a publishable npm manifest and stores the dev copy as `cli/package.dev.json`
6. Copies the repo `README.md` into `cli/README.md` for npm package metadata
1. runs the forbidden token check unless `--skip-checks` is supplied
2. runs `pnpm -r typecheck`
3. bundles the CLI entrypoint with esbuild into `cli/dist/index.js`
4. verifies the bundled entrypoint with `node --check`
5. rewrites `cli/package.json` into a publishable npm manifest and stores the dev copy as `cli/package.dev.json`
6. copies the repo `README.md` into `cli/README.md` for npm metadata
`build-npm.sh` is used by the release script so that npm users install a real package rather than unresolved workspace dependencies.
After the release script exits, the dev manifest and temporary files are restored automatically.
## Publishable CLI layout
## Package discovery and versioning
During development, [`cli/package.json`](../cli/package.json) contains workspace references.
During release preparation:
- `cli/package.json` becomes a publishable manifest with external npm dependency ranges
- `cli/package.dev.json` stores the development manifest temporarily
- `cli/dist/index.js` contains the bundled CLI entrypoint
- `cli/README.md` is copied in for npm metadata
After release finalization, the release script restores the development manifest and removes the temporary README copy.
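A quick hedged spot check that the rewrite produced a publishable manifest (the grep pattern is mine, not a repo script; the `node --check` step mirrors what `build-npm.sh` already runs):

```bash
# Fail if any workspace references survived the manifest rewrite.
if grep -q '"workspace:' cli/package.json; then
  echo "cli/package.json still contains workspace references" >&2
  exit 1
fi
node --check cli/dist/index.js   # same bundle syntax check build-npm.sh performs
```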
## Package discovery
The release tooling scans the workspace for public packages under:
Public packages are discovered from:
- `packages/`
- `server/`
- `cli/`
`ui/` remains ignored for npm publishing because it is private.
`ui/` is ignored because it is private.
This matters because all public packages are versioned and published together as one release unit.
The version rewrite step now uses [`scripts/release-package-map.mjs`](../scripts/release-package-map.mjs), which:
## Canary packaging model
- finds all public packages
- sorts them topologically by internal dependencies
- rewrites each package version to the target release version
- rewrites internal `workspace:*` dependency references to the exact target version
- updates the CLI's displayed version string
Canaries are published as semver prereleases such as:
Those rewrites are temporary. The working tree is restored after publish or dry-run.
- `1.2.3-canary.0`
- `1.2.3-canary.1`
## Version formats
They are published under the npm dist-tag `canary`.
Paperclip uses calendar versions:
This means:
- stable: `YYYY.MDD.P`
- canary: `YYYY.MDD.P-canary.N`
- `npx paperclipai@canary onboard` can install them explicitly
- `npx paperclipai onboard` continues to resolve `latest`
- the stable changelog can stay at `releases/v1.2.3.md`
Examples:
## Stable packaging model
- stable: `2026.318.0`
- canary: `2026.318.1-canary.2`
Stable releases publish normal semver versions such as `1.2.3` under the npm dist-tag `latest`.
## Publish model
The stable publish flow also creates the local release commit and git tag on `release/X.Y.Z`. Pushing that branch commit/tag, creating the GitHub Release, and merging the release branch back to `master` happen afterward as separate maintainer steps.
### Canary
Canaries publish under the npm dist-tag `canary`.
Example:
- `paperclipai@2026.318.1-canary.2`
This keeps the default install path unchanged while allowing explicit installs with:
```bash
npx paperclipai@canary onboard
```
### Stable
Stable publishes use the npm dist-tag `latest`.
Example:
- `paperclipai@2026.318.0`
Stable publishes do not create a release commit. Instead:
- package versions are rewritten temporarily
- packages are published from the chosen source commit
- git tag `vYYYY.MDD.P` points at that original commit
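Because the tag should point at the original source commit rather than a generated release commit, a hedged post-publish spot check might look like this (version and remote name are illustrative):

```bash
# The stable tag should resolve to the source commit that was published.
git rev-parse "v2026.318.0^{commit}"
# And that commit should be reachable from master.
git merge-base --is-ancestor "v2026.318.0" origin/master && echo reachable
```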
## Trusted publishing
The intended CI model is npm trusted publishing through GitHub OIDC.
That means:
- no long-lived `NPM_TOKEN` in repository secrets
- GitHub Actions obtains short-lived publish credentials
- trusted publisher rules are configured per workflow file
See [doc/RELEASE-AUTOMATION-SETUP.md](RELEASE-AUTOMATION-SETUP.md) for the GitHub/npm setup steps.
## Rollback model
Rollback does not unpublish packages.
Rollback does not unpublish anything.
Instead, the maintainer should move the `latest` dist-tag back to the previous good stable version with:
It repoints the `latest` dist-tag to a prior stable version:
```bash
./scripts/rollback-latest.sh <stable-version>
./scripts/rollback-latest.sh 2026.318.0
```
That keeps history intact while restoring the default install path quickly.
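To confirm the repoint took effect, standard npm commands suffice, for example:

```bash
npm dist-tag ls paperclipai          # both canary and latest should be listed
npm view paperclipai@latest version  # should print the rolled-back version
```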
## Notes for CI
The repo includes a manual GitHub Actions release workflow at [`.github/workflows/release.yml`](../.github/workflows/release.yml).
Recommended CI release setup:
- use npm trusted publishing via GitHub OIDC
- require approval through the `npm-release` environment
- run releases from `release/X.Y.Z`
- use canary first, then stable
This is the fastest way to restore the default install path if a stable release is bad.
## Related Files
- [`scripts/build-npm.sh`](../scripts/build-npm.sh)
- [`scripts/generate-npm-package-json.mjs`](../scripts/generate-npm-package-json.mjs)
- [`scripts/release-package-map.mjs`](../scripts/release-package-map.mjs)
- [`cli/esbuild.config.mjs`](../cli/esbuild.config.mjs)
- [`doc/RELEASING.md`](RELEASING.md)

View File

@@ -0,0 +1,281 @@
# Release Automation Setup
This document covers the GitHub and npm setup required for the current Paperclip release model:
- automatic canaries from `master`
- manual stable promotion from a chosen source ref
- npm trusted publishing via GitHub OIDC
- protected release infrastructure in a public repository
Repo-side files that depend on this setup:
- `.github/workflows/release.yml`
- `.github/CODEOWNERS`
Note:
- the release workflows intentionally use `pnpm install --no-frozen-lockfile`
- this matches the repo's current policy where `pnpm-lock.yaml` is refreshed by GitHub automation after manifest changes land on `master`
- the publish jobs then restore `pnpm-lock.yaml` before running `scripts/release.sh`, so the release script still sees a clean worktree
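As a hedged sketch, the sequence described above amounts to something like this (the exact steps in `release.yml` may differ):

```bash
pnpm install --no-frozen-lockfile   # lockfile may lag manifest changes on master
git checkout -- pnpm-lock.yaml      # restore it so release.sh sees a clean worktree
./scripts/release.sh canary         # release script runs against the clean tree
```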
## 1. Merge the Repo Changes First
Before touching GitHub or npm settings, merge the release automation code so the referenced workflow filenames already exist on the default branch.
Required files:
- `.github/workflows/release.yml`
- `.github/CODEOWNERS`
## 2. Configure npm Trusted Publishing
Do this for every public package that Paperclip publishes.
At minimum that includes:
- `paperclipai`
- `@paperclipai/server`
- public packages under `packages/`
### 2.1. In npm, open each package settings page
For each package:
1. open npm as an owner of the package
2. go to the package settings / publishing access area
3. add a trusted publisher for the GitHub repository `paperclipai/paperclip`
### 2.2. Add one trusted publisher entry per package
npm currently allows one trusted publisher configuration per package.
Configure:
- workflow: `.github/workflows/release.yml`
Repository:
- `paperclipai/paperclip`
Environment name:
- leave the npm trusted-publisher environment field blank
Why:
- the single `release.yml` workflow handles both canary and stable publishing
- GitHub environments `npm-canary` and `npm-stable` still enforce different approval rules on the GitHub side
### 2.3. Verify trusted publishing before removing old auth
After the workflows are live:
1. run a canary publish
2. confirm npm publish succeeds without any `NPM_TOKEN`
3. run a stable dry-run
4. run one real stable publish
Only after that should you remove old token-based access.
## 3. Remove Legacy npm Tokens
After trusted publishing works:
1. revoke any repository or organization `NPM_TOKEN` secrets used for publish
2. revoke any personal automation token that used to publish Paperclip
3. if npm offers a package-level setting to restrict publishing to trusted publishers, enable it
Goal:
- no long-lived npm publishing token should remain in GitHub Actions
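One hedged way to audit for leftovers (the grep filter is an assumption; secret names vary by org):

```bash
# List repo-level Actions secrets and flag anything npm-related.
gh secret list --repo paperclipai/paperclip | grep -i npm || echo "no npm secrets found"
```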
## 4. Create GitHub Environments
Create two environments in the GitHub repository:
- `npm-canary`
- `npm-stable`
Path:
1. GitHub repository
2. `Settings`
3. `Environments`
4. `New environment`
## 5. Configure `npm-canary`
Recommended settings for `npm-canary`:
- environment name: `npm-canary`
- required reviewers: none
- wait timer: none
- deployment branches and tags:
- selected branches only
- allow `master`
Reasoning:
- every push to `master` should be able to publish a canary automatically
- no human approval should be required for canaries
## 6. Configure `npm-stable`
Recommended settings for `npm-stable`:
- environment name: `npm-stable`
- required reviewers: at least one maintainer, ideally someone other than the person triggering the workflow
- prevent self-review: enabled
- admin bypass: disabled, if your team can tolerate the friction
- wait timer: optional
- deployment branches and tags:
- selected branches only
- allow `master`
Reasoning:
- stable publishes should require an explicit human approval gate
- the workflow is manual, but the environment should still be the real control point
## 7. Protect `master`
Open the branch protection settings for `master`.
Recommended rules:
1. require pull requests before merging
2. require status checks to pass before merging
3. require review from code owners
4. dismiss stale approvals when new commits are pushed
5. restrict who can push directly to `master`
At minimum, make sure workflow and release script changes cannot land without review.
## 8. Enforce CODEOWNERS Review
This repo now includes `.github/CODEOWNERS`, but GitHub only enforces it if branch protection requires code owner reviews.
In branch protection for `master`, enable:
- `Require review from Code Owners`
Then verify the owner entries are correct for your actual maintainer set.
Current file:
- `.github/CODEOWNERS`
If `@cryppadotta` is not the right reviewer identity in the public repo, change it before enabling enforcement.
## 9. Protect Release Infrastructure Specifically
These files should always trigger code owner review:
- `.github/workflows/release.yml`
- `scripts/release.sh`
- `scripts/release-lib.sh`
- `scripts/release-package-map.mjs`
- `scripts/create-github-release.sh`
- `scripts/rollback-latest.sh`
- `doc/RELEASING.md`
- `doc/PUBLISHING.md`
If you want stronger controls, add a repository ruleset that explicitly blocks direct pushes to:
- `.github/workflows/**`
- `scripts/release*`
## 10. Do Not Store a Claude Token in GitHub Actions
Do not add a personal Claude or Anthropic token for automatic changelog generation.
Recommended policy:
- stable changelog generation happens locally from a trusted maintainer machine
- canaries never generate changelogs
This keeps LLM spending intentional and avoids a high-value token sitting in Actions.
## 11. Verify the Canary Workflow
After setup:
1. merge a harmless commit to `master`
2. open the `Release` workflow run triggered by that push
3. confirm it passes verification
4. confirm publish succeeds under the `npm-canary` environment
5. confirm npm now shows a new `canary` release
6. confirm a git tag named `canary/vYYYY.MDD.P-canary.N` was pushed
Install-path check:
```bash
npx paperclipai@canary onboard
```
## 12. Verify the Stable Workflow
After at least one good canary exists:
1. resolve the target stable version with `./scripts/release.sh stable --date YYYY-MM-DD --print-version`
2. prepare `releases/vYYYY.MDD.P.md` on the source commit you want to promote
3. open `Actions` -> `Release`
4. run it with:
- `source_ref`: the tested commit SHA or the source commit of a trusted canary tag
- `stable_date`: leave blank or set the intended UTC date like `2026-03-18`
- enter a date, not a version like `2026.318.0`; the workflow computes the version from the date
- `dry_run`: `true`
5. confirm the dry-run succeeds
6. rerun with `dry_run: false`
7. approve the `npm-stable` environment when prompted
8. confirm npm `latest` points to the new stable version
9. confirm git tag `vYYYY.MDD.P` exists
10. confirm the GitHub Release was created
Implementation note:
- the GitHub Actions stable workflow calls `create-github-release.sh` with `PUBLISH_REMOTE=origin`
- local maintainer usage can still pass `PUBLISH_REMOTE=public-gh` explicitly when needed
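After a real (non-dry-run) stable run, a few hedged spot checks, with illustrative tag and version values:

```bash
npm view paperclipai@latest version                       # new stable version
git ls-remote --tags origin "refs/tags/v2026.318.0"       # tag exists on the remote
gh release view v2026.318.0 --repo paperclipai/paperclip  # GitHub Release exists
```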
## 13. Suggested Maintainer Policy
Use this policy going forward:
- canaries are automatic and cheap
- stables are manual and approved
- only stables get public notes and announcements
- release notes are committed before stable publish
- rollback uses `npm dist-tag`, not unpublish
## 14. Troubleshooting
### Trusted publishing fails with an auth error
Check:
1. the workflow filename on GitHub exactly matches the filename configured in npm
2. the package has the trusted publisher entry for the correct repository
3. the job has `id-token: write`
4. the job is running from the expected repository, not a fork
### Stable workflow runs but never asks for approval
Check:
1. the `publish` job uses environment `npm-stable`
2. the environment actually has required reviewers configured
3. the workflow is running in the canonical repository, not a fork
### CODEOWNERS does not trigger
Check:
1. `.github/CODEOWNERS` is on the default branch
2. branch protection on `master` requires code owner review
3. the owner identities in the file are valid reviewers with repository access
## Related Docs
- [doc/RELEASING.md](RELEASING.md)
- [doc/PUBLISHING.md](PUBLISHING.md)
- [doc/plans/2026-03-17-release-automation-and-versioning.md](plans/2026-03-17-release-automation-and-versioning.md)

View File

@@ -1,220 +1,174 @@
# Releasing Paperclip
Maintainer runbook for shipping a full Paperclip release across npm, GitHub, and the website-facing changelog surface.
Maintainer runbook for shipping Paperclip across npm, GitHub, and the website-facing changelog surface.
The release model is branch-driven:
The release model is now commit-driven:
1. Start a release train on `release/X.Y.Z`
2. Draft the stable changelog on that branch
3. Publish one or more canaries from that branch
4. Publish stable from that same branch head
5. Push the branch commit and tag
6. Create the GitHub Release
7. Merge `release/X.Y.Z` back to `master` without squash or rebase
1. Every push to `master` publishes a canary automatically.
2. Stable releases are manually promoted from a chosen tested commit or canary tag.
3. Stable release notes live in `releases/vYYYY.MDD.P.md`.
4. Only stable releases get GitHub Releases.
## Versioning Model
Paperclip uses calendar versions that still fit semver syntax:
- stable: `YYYY.MDD.P`
- canary: `YYYY.MDD.P-canary.N`
Examples:
- first stable on March 18, 2026: `2026.318.0`
- second stable on March 18, 2026: `2026.318.1`
- fourth canary for the `2026.318.1` line: `2026.318.1-canary.3`
Important constraints:
- the middle numeric slot is `MDD`, where `M` is the UTC month and `DD` is the zero-padded UTC day
- use `2026.303.0` for March 3, not `2026.33.0`
- do not use leading zeroes such as `2026.0318.0`
- do not use four numeric segments such as `2026.3.18.1`
- the semver-safe canary form is `2026.318.0-canary.1`
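A hedged sketch of deriving the `MDD` slot from a UTC date with GNU `date` (not a repo script; BSD/macOS `date` needs different flags):

```bash
# %-m is the unpadded month, %d the zero-padded day: 2026-03-18 -> 2026.318
date -u -d '2026-03-18' +'%Y.%-m%d'
date -u -d '2026-03-03' +'%Y.%-m%d'   # -> 2026.303, not 2026.33
```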
## Release Surfaces
Every release has four separate surfaces:
Every stable release has four separate surfaces:
1. **Verification** — the exact git SHA passes typecheck, tests, and build
2. **npm** — `paperclipai` and public workspace packages are published
3. **GitHub** — the stable release gets a git tag and GitHub Release
4. **Website / announcements** — the stable changelog is published externally and announced
A release is done only when all four surfaces are handled.
A stable release is done only when all four surfaces are handled.
Canaries only cover the first two surfaces plus an internal traceability tag.
## Core Invariants
- Canary and stable for `X.Y.Z` must come from the same `release/X.Y.Z` branch.
- The release scripts must run from the matching `release/X.Y.Z` branch.
- Once `vX.Y.Z` exists locally, on GitHub, or on npm, that release train is frozen.
- Do not squash-merge or rebase-merge a release branch PR back to `master`.
- The stable changelog is always `releases/vX.Y.Z.md`. Never create canary changelog files.
The reason for the merge rule is simple: the tag must keep pointing at the exact published commit. Squash or rebase breaks that property.
- canaries publish from `master`
- stables publish from an explicitly chosen source ref
- tags point at the original source commit, not a generated release commit
- stable notes are always `releases/vYYYY.MDD.P.md`
- canaries never create GitHub Releases
- canaries never require changelog generation
## TL;DR
### 1. Start the release train
### Canary
Use this to compute the next version, create or resume the branch, create or resume a dedicated worktree, and push the branch to GitHub.
Every push to `master` runs the canary path inside [`.github/workflows/release.yml`](../.github/workflows/release.yml).
```bash
./scripts/release-start.sh patch
```
It:
That script:
- fetches the release remote and tags
- computes the next stable version from the latest `v*` tag
- creates or resumes `release/X.Y.Z`
- creates or resumes a dedicated worktree
- pushes the branch to the remote by default
- refuses to reuse a frozen release train
### 2. Draft the stable changelog
From the release worktree:
```bash
VERSION=X.Y.Z
claude --print --output-format stream-json --verbose --dangerously-skip-permissions --model claude-opus-4-6 "Use the release-changelog skill to draft or update releases/v${VERSION}.md for Paperclip. Read doc/RELEASING.md and .agents/skills/release-changelog/SKILL.md, then generate the stable changelog for v${VERSION} from commits since the last stable tag. Do not create a canary changelog."
```
### 3. Verify and publish a canary
```bash
./scripts/release-preflight.sh canary patch
./scripts/release.sh patch --canary --dry-run
./scripts/release.sh patch --canary
PAPERCLIPAI_VERSION=canary ./scripts/docker-onboard-smoke.sh
```
- verifies the pushed commit
- computes the canary version for the current UTC date
- publishes under npm dist-tag `canary`
- creates a git tag `canary/vYYYY.MDD.P-canary.N`
Users install canaries with:
```bash
npx paperclipai@canary onboard
```
### 4. Publish stable
```bash
./scripts/release-preflight.sh stable patch
./scripts/release.sh patch --dry-run
./scripts/release.sh patch
git push public-gh HEAD --follow-tags
./scripts/create-github-release.sh X.Y.Z
```
Then open a PR from `release/X.Y.Z` to `master` and merge without squash or rebase.
## Release Branches
Paperclip uses one release branch per target stable version:
- `release/0.3.0`
- `release/0.3.1`
- `release/1.0.0`
Do not create separate per-canary branches like `canary/0.3.0-1`. A canary is just a prerelease snapshot of the same stable train.
## Script Entry Points
- [`scripts/release-start.sh`](../scripts/release-start.sh) — create or resume the release train branch/worktree
- [`scripts/release-preflight.sh`](../scripts/release-preflight.sh) — validate branch, version plan, git/npm state, and verification gate
- [`scripts/release.sh`](../scripts/release.sh) — publish canary or stable from the release branch
- [`scripts/create-github-release.sh`](../scripts/create-github-release.sh) — create or update the GitHub Release after pushing the tag
- [`scripts/rollback-latest.sh`](../scripts/rollback-latest.sh) — repoint `latest` to the last good stable version
## Detailed Workflow
### 1. Start or resume the release train
Run:
```bash
./scripts/release-start.sh <patch|minor|major>
```
Useful options:
```bash
./scripts/release-start.sh patch --dry-run
./scripts/release-start.sh minor --worktree-dir ../paperclip-release-0.4.0
./scripts/release-start.sh patch --no-push
```
The script is intentionally idempotent:
- if `release/X.Y.Z` already exists locally, it reuses it
- if the branch already exists on the remote, it resumes it locally
- if the branch is already checked out in another worktree, it points you there
- if `vX.Y.Z` already exists locally, remotely, or on npm, it refuses to reuse that train
### 2. Write the stable changelog early
Create or update:
- `releases/vX.Y.Z.md`
That file is for the eventual stable release. It should not include `-canary` in the filename or heading.
Recommended structure:
- `Breaking Changes` when needed
- `Highlights`
- `Improvements`
- `Fixes`
- `Upgrade Guide` when needed
- `Contributors` — @-mention every contributor by GitHub username (no emails)
Package-level `CHANGELOG.md` files are generated as part of the release mechanics. They are not the main release narrative.
### 3. Run release preflight
From the `release/X.Y.Z` worktree:
```bash
./scripts/release-preflight.sh canary <patch|minor|major>
# or
./scripts/release-preflight.sh stable <patch|minor|major>
npx paperclipai@canary onboard --data-dir "$(mktemp -d /tmp/paperclip-canary.XXXXXX)"
```
The preflight script now checks all of the following before it runs the verification gate:
### Stable
- the worktree is clean, including untracked files
- the current branch matches the computed `release/X.Y.Z`
- the release train is not frozen
- the target version is still free on npm
- the target tag does not already exist locally or remotely
- whether the remote release branch already exists
- whether `releases/vX.Y.Z.md` is present
Use [`.github/workflows/release.yml`](../.github/workflows/release.yml) from the Actions tab with the manual `workflow_dispatch` inputs.
Then it runs:
[Run the action here](https://github.com/paperclipai/paperclip/actions/workflows/release.yml)
Inputs:
- `source_ref`
- commit SHA, branch, or tag
- `stable_date`
- optional UTC date override in `YYYY-MM-DD`
- enter a date like `2026-03-18`, not a version like `2026.318.0`
- `dry_run`
- preview only when true
Before running stable:
1. pick the canary commit or tag you trust
2. resolve the target stable version with `./scripts/release.sh stable --date "$(date +%F)" --print-version`
3. create or update `releases/vYYYY.MDD.P.md` on that source ref
4. run the stable workflow from that source ref
Example:
- `source_ref`: `master`
- `stable_date`: `2026-03-18`
- resulting stable version: `2026.318.0`
The workflow:
- re-verifies the exact source ref
- computes the next stable patch slot for the chosen UTC date
- publishes `YYYY.MDD.P` under npm dist-tag `latest`
- creates git tag `vYYYY.MDD.P`
- creates or updates the GitHub Release from `releases/vYYYY.MDD.P.md`
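The same dispatch works from the CLI; a hedged sketch using the input names above:

```bash
gh workflow run release.yml \
  -f source_ref=master \
  -f stable_date=2026-03-18 \
  -f dry_run=true
```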
## Local Commands
### Preview a canary locally
```bash
pnpm -r typecheck
pnpm test:run
pnpm build
./scripts/release.sh canary --dry-run
```
### 4. Publish one or more canaries
Run:
### Preview a stable locally
```bash
./scripts/release.sh <patch|minor|major> --canary --dry-run
./scripts/release.sh <patch|minor|major> --canary
./scripts/release.sh stable --dry-run
```
Result:
### Publish a stable locally
- npm gets a prerelease such as `1.2.3-canary.0` under dist-tag `canary`
- `latest` is unchanged
- no git tag is created
- no GitHub Release is created
- the worktree returns to clean after the script finishes
This is mainly for emergency/manual use. The normal path is the GitHub workflow.
Guardrails:
```bash
./scripts/release.sh stable
git push public-gh refs/tags/vYYYY.MDD.P
PUBLISH_REMOTE=public-gh ./scripts/create-github-release.sh YYYY.MDD.P
```
- the script refuses to run from the wrong branch
- the script refuses to publish from a frozen train
- the canary is always derived from the next stable version
- if the stable notes file is missing, the script warns so you do not forget it
## Stable Changelog Workflow
Concrete example:
Stable changelog files live at:
- if the latest stable is `0.2.7`, a patch canary targets `0.2.8-canary.0`
- `0.2.7-canary.N` is invalid because `0.2.7` is already stable
- `releases/vYYYY.MDD.P.md`
### 5. Smoke test the canary
Canaries do not get changelog files.
Run the actual install path in Docker:
Recommended local generation flow:
```bash
VERSION="$(./scripts/release.sh stable --date 2026-03-18 --print-version)"
claude --print --output-format stream-json --verbose --dangerously-skip-permissions --model claude-opus-4-6 "Use the release-changelog skill to draft or update releases/v${VERSION}.md for Paperclip. Read doc/RELEASING.md and .agents/skills/release-changelog/SKILL.md, then generate the stable changelog for v${VERSION} from commits since the last stable tag. Do not create a canary changelog."
```
The repo intentionally does not run this through GitHub Actions because:
- canaries are too frequent
- stable notes are the only public narrative surface that needs LLM help
- maintainer LLM tokens should not live in Actions
## Smoke Testing
For a canary:
```bash
PAPERCLIPAI_VERSION=canary ./scripts/docker-onboard-smoke.sh
```
For the current stable:
```bash
PAPERCLIPAI_VERSION=latest ./scripts/docker-onboard-smoke.sh
```
Useful isolated variants:
```bash
@@ -222,201 +176,76 @@ HOST_PORT=3232 DATA_DIR=./data/release-smoke-canary PAPERCLIPAI_VERSION=canary .
HOST_PORT=3233 DATA_DIR=./data/release-smoke-stable PAPERCLIPAI_VERSION=latest ./scripts/docker-onboard-smoke.sh
```
If you want to exercise onboarding from the current committed ref instead of npm, use:
Automated browser smoke is also available:
```bash
./scripts/clean-onboard-ref.sh
PAPERCLIP_PORT=3234 ./scripts/clean-onboard-ref.sh
./scripts/clean-onboard-ref.sh HEAD
gh workflow run release-smoke.yml -f paperclip_version=canary
gh workflow run release-smoke.yml -f paperclip_version=latest
```
Minimum checks:
- `npx paperclipai@canary onboard` installs
- onboarding completes without crashes
- the server boots
- the UI loads
- basic company creation and dashboard load work
- authenticated login works with the smoke credentials
- the browser lands in onboarding on a fresh instance
- company creation succeeds
- the first CEO agent is created
- the first CEO heartbeat run is triggered
If smoke testing fails:
## Rollback
1. stop the stable release
2. fix the issue on the same `release/X.Y.Z` branch
3. publish another canary
4. rerun smoke testing
Rollback does not unpublish versions.
### 6. Publish stable from the same release branch
Once the branch head is vetted, run:
It only moves the `latest` dist-tag back to a previous stable:
```bash
./scripts/release.sh <patch|minor|major> --dry-run
./scripts/release.sh <patch|minor|major>
./scripts/rollback-latest.sh 2026.318.0 --dry-run
./scripts/rollback-latest.sh 2026.318.0
```
Stable publish:
- publishes `X.Y.Z` to npm under `latest`
- creates the local release commit
- creates the local tag `vX.Y.Z`
Stable publish refuses to proceed if:
- the current branch is not `release/X.Y.Z`
- the remote release branch does not exist yet
- the stable notes file is missing
- the target tag already exists locally or remotely
- the stable version already exists on npm
Those checks intentionally freeze the train after stable publish.
### 7. Push the stable branch commit and tag
After stable publish succeeds:
```bash
git push public-gh HEAD --follow-tags
./scripts/create-github-release.sh X.Y.Z
```
The GitHub Release notes come from:
- `releases/vX.Y.Z.md`
### 8. Merge the release branch back to `master`
Open a PR:
- base: `master`
- head: `release/X.Y.Z`
Merge rule:
- allowed: merge commit or fast-forward
- forbidden: squash merge
- forbidden: rebase merge
Post-merge verification:
```bash
git fetch public-gh --tags
git merge-base --is-ancestor "vX.Y.Z" "public-gh/master"
```
That command must succeed. If it fails, the published tagged commit is not reachable from `master`, which means the merge strategy was wrong.
### 9. Finish the external surfaces
After GitHub is correct:
- publish the changelog on the website
- write and send the announcement copy
- ensure public docs and install guidance point to the stable version
## GitHub Actions Release
There is also a manual workflow at [`.github/workflows/release.yml`](../.github/workflows/release.yml).
Use it from the Actions tab on the relevant `release/X.Y.Z` branch:
1. Choose `Release`
2. Choose `channel`: `canary` or `stable`
3. Choose `bump`: `patch`, `minor`, or `major`
4. Choose whether this is a `dry_run`
5. Run it from the release branch, not from `master`
The workflow:
- reruns `typecheck`, `test:run`, and `build`
- gates publish behind the `npm-release` environment
- can publish canaries without touching `latest`
- can publish stable, push the stable branch commit and tag, and create the GitHub Release
It does not merge the release branch back to `master` for you.
## Release Checklist
### Before any publish
- [ ] The release train exists on `release/X.Y.Z`
- [ ] The working tree is clean, including untracked files
- [ ] If package manifests changed, the CI-owned `pnpm-lock.yaml` refresh is already merged on `master` before the train is cut
- [ ] The required verification gate passed on the exact branch head you want to publish
- [ ] The bump type is correct for the user-visible impact
- [ ] The stable changelog file exists or is ready at `releases/vX.Y.Z.md`
- [ ] You know which previous stable version you would roll back to if needed
### Before a stable
- [ ] The candidate has already passed smoke testing
- [ ] The remote `release/X.Y.Z` branch exists
- [ ] You are ready to push the stable branch commit and tag immediately after npm publish
- [ ] You are ready to create the GitHub Release immediately after the push
- [ ] You are ready to open the PR back to `master`
### After a stable
- [ ] `npm view paperclipai@latest version` matches the new stable version
- [ ] The git tag exists on GitHub
- [ ] The GitHub Release exists and uses `releases/vX.Y.Z.md`
- [ ] `vX.Y.Z` is reachable from `master`
- [ ] The website changelog is updated
- [ ] Announcement copy matches the stable release, not the canary
Then fix forward with a new stable patch slot or release date.
## Failure Playbooks
### If the canary publishes but the smoke test fails
### If the canary publishes but smoke testing fails
Do not publish stable.
Do not run stable.
Instead:
1. fix the issue on `release/X.Y.Z`
2. publish another canary
3. rerun smoke testing
1. fix the issue on `master`
2. merge the fix
3. wait for the next automatic canary
4. rerun smoke testing
### If stable npm publish succeeds but push or GitHub release creation fails
### If stable npm publish succeeds but tag push or GitHub release creation fails
This is a partial release. npm is already live.
Do this immediately:
1. fix the git or GitHub issue from the same checkout
2. push the stable branch commit and tag
3. create the GitHub Release
1. push the missing tag
2. rerun `PUBLISH_REMOTE=public-gh ./scripts/create-github-release.sh YYYY.MDD.P`
3. verify the GitHub Release notes point at `releases/vYYYY.MDD.P.md`
Do not republish the same version.
### If `latest` is broken after stable publish
Preview:
Roll back the dist-tag:
```bash
./scripts/rollback-latest.sh X.Y.Z --dry-run
./scripts/rollback-latest.sh YYYY.MDD.P
```
Roll back:
Then fix forward with a new stable release.
```bash
./scripts/rollback-latest.sh X.Y.Z
```
## Related Files
This does not unpublish anything. It only moves the `latest` dist-tag back to the last good stable release.
Then fix forward with a new patch release.
### If the GitHub Release notes are wrong
Re-run:
```bash
./scripts/create-github-release.sh X.Y.Z
```
If the release already exists, the script updates it.
## Related Docs
- [doc/PUBLISHING.md](PUBLISHING.md) — low-level npm build and packaging internals
- [.agents/skills/release/SKILL.md](../.agents/skills/release/SKILL.md) — maintainer release coordination workflow
- [.agents/skills/release-changelog/SKILL.md](../.agents/skills/release-changelog/SKILL.md) — stable changelog drafting workflow
- [`scripts/release.sh`](../scripts/release.sh)
- [`scripts/release-package-map.mjs`](../scripts/release-package-map.mjs)
- [`scripts/create-github-release.sh`](../scripts/create-github-release.sh)
- [`scripts/rollback-latest.sh`](../scripts/rollback-latest.sh)
- [`doc/PUBLISHING.md`](PUBLISHING.md)
- [`doc/RELEASE-AUTOMATION-SETUP.md`](RELEASE-AUTOMATION-SETUP.md)

View File

@@ -188,12 +188,15 @@ The heartbeat is a protocol, not a runtime. Paperclip defines how to initiate an
Agent configuration includes an **adapter** that defines how Paperclip invokes the agent. Initial adapters:
| Adapter | Mechanism | Example |
| --------- | ----------------------- | --------------------------------------------- |
| `process` | Execute a child process | `python run_agent.py --agent-id {id}` |
| `http` | Send an HTTP request | `POST https://openclaw.example.com/hook/{id}` |
| Adapter | Mechanism | Example |
| -------------------- | ----------------------- | --------------------------------------------- |
| `process` | Execute a child process | `python run_agent.py --agent-id {id}` |
| `http` | Send an HTTP request | `POST https://openclaw.example.com/hook/{id}` |
| `openclaw_gateway` | OpenClaw gateway API | Managed OpenClaw agent via gateway |
| `gemini_local` | Gemini CLI process | Local Gemini CLI with sandbox and approval |
| `hermes_local` | Hermes agent process | Local Hermes agent |
The `process` and `http` adapters ship as defaults. Additional adapters can be added via the plugin system (see Plugin / Extension Architecture).
The `process` and `http` adapters ship as defaults. Additional adapters have been added for specific agent runtimes (see list above), and new adapter types can be registered via the plugin system (see Plugin / Extension Architecture).
### Adapter Interface
@@ -429,7 +432,7 @@ The core Paperclip system must be extensible. Features like knowledge bases, ext
- **Agent Adapter plugins** — new Adapter types can be registered via the plugin system
- Plugin-registrable UI components (future)
This isn't a V1 deliverable (we're not building a plugin framework upfront), but the architecture should not paint us into a corner. Keep boundaries clean so extensions are possible.
The plugin framework has shipped. Plugins can register new adapter types, hook into lifecycle events, and contribute UI components (e.g. global toolbar buttons). A plugin SDK and CLI commands (`paperclipai plugin`) are available for authoring and installing plugins.
---

doc/UNTRUSTED-PR-REVIEW.md (new file, 135 lines)
View File

@@ -0,0 +1,135 @@
# Untrusted PR Review In Docker
Use this workflow when you want Codex or Claude to inspect a pull request that you do not want touching your host machine directly.
This is intentionally separate from the normal Paperclip dev image.
## What this container isolates
- `codex` auth/session state in a Docker volume, not your host `~/.codex`
- `claude` auth/session state in a Docker volume, not your host `~/.claude`
- `gh` auth state in the same container-local home volume
- review clones, worktrees, dependency installs, and local databases in a writable scratch volume under `/work`
By default this workflow does **not** mount your host repo checkout, your host home directory, or your SSH agent.
## Files
- `docker/untrusted-review/Dockerfile`
- `docker-compose.untrusted-review.yml`
- `review-checkout-pr` inside the container
## Build and start a shell
```sh
docker compose -f docker-compose.untrusted-review.yml build
docker compose -f docker-compose.untrusted-review.yml run --rm --service-ports review
```
That opens an interactive shell in the review container with:
- Node + Corepack/pnpm
- `codex`
- `claude`
- `gh`
- `git`, `rg`, `fd`, `jq`
## First-time login inside the container
Run these once. The resulting login state persists in the `review-home` Docker volume.
```sh
gh auth login
codex login
claude login
```
If you prefer API-key auth instead of CLI login, pass keys through Compose env:
```sh
OPENAI_API_KEY=... ANTHROPIC_API_KEY=... docker compose -f docker-compose.untrusted-review.yml run --rm review
```
## Check out a PR safely
Inside the container:
```sh
review-checkout-pr paperclipai/paperclip 432
cd /work/checkouts/paperclipai-paperclip/pr-432
```
What this does:
1. Creates or reuses a repo clone under `/work/repos/...`
2. Fetches `pull/<pr>/head` from GitHub
3. Creates a detached git worktree under `/work/checkouts/...`
The checkout lives entirely inside the container volume.
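For reference, a hedged sketch of roughly equivalent raw git commands (paths and the PR number are illustrative; the real script may differ):

```sh
git clone https://github.com/paperclipai/paperclip.git /work/repos/paperclipai-paperclip
cd /work/repos/paperclipai-paperclip
git fetch origin pull/432/head        # GitHub exposes PR heads as refs/pull/<n>/head
git worktree add --detach /work/checkouts/paperclipai-paperclip/pr-432 FETCH_HEAD
```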
## Ask Codex or Claude to review it
Inside the PR checkout:
```sh
codex
```
Then give it a prompt like:
```text
Review this PR as hostile input. Focus on security issues, data exfiltration paths, sandbox escapes, dangerous install/runtime scripts, auth changes, and subtle behavioral regressions. Do not modify files. Produce findings ordered by severity with file references.
```
Or with Claude:
```sh
claude
```
## Preview the Paperclip app from the PR
Only do this when you intentionally want to execute the PR's code inside the container.
Inside the PR checkout:
```sh
pnpm install
HOST=0.0.0.0 pnpm dev
```
Open from the host:
- `http://localhost:3100`
The Compose file also exposes Vite's default port:
- `http://localhost:5173`
Notes:
- `pnpm install` can run untrusted lifecycle scripts from the PR. That is why this happens inside the isolated container instead of on your host.
- If you only want static inspection, do not run install/dev commands.
- Paperclip's embedded PostgreSQL and local storage stay inside the container home volume via `PAPERCLIP_HOME=/home/reviewer/.paperclip-review`.
## Reset state
Remove the review container volumes when you want a clean environment:
```sh
docker compose -f docker-compose.untrusted-review.yml down -v
```
That deletes:
- Codex/Claude/GitHub login state stored in `review-home`
- cloned repos, worktrees, installs, and scratch data stored in `review-work`
## Security limits
This is a useful isolation boundary, but it is still Docker, not a full VM.
- A reviewed PR can still access the container's network unless you disable it.
- Any secrets you pass into the container are available to code you execute inside it.
- Do not mount your host repo, host home, `.ssh`, or Docker socket unless you are intentionally weakening the boundary.
- If you need a stronger boundary than this, use a disposable VM instead of Docker.
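For a static-only pass, one hedged option (the image name is an assumption) is to bypass Compose and disable networking entirely:

```sh
# --network none removes all network access; suitable only for static
# inspection, since installs and gh/codex/claude logins need the network.
docker run --rm -it --network none \
  -v review-home:/home/reviewer -v review-work:/work \
  paperclip-untrusted-review bash
```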

doc/memory-landscape.md (new file, 172 lines)
View File

@@ -0,0 +1,172 @@
# Memory Landscape
Date: 2026-03-17
This document summarizes the memory systems referenced in task `PAP-530` and extracts the design patterns that matter for Paperclip.
## What Paperclip Needs From This Survey
Paperclip is not trying to become a single opinionated memory engine. The more useful target is a control-plane memory surface that:
- stays company-scoped
- lets each company choose a default memory provider
- lets specific agents override that default
- keeps provenance back to Paperclip runs, issues, comments, and documents
- records memory-related cost and latency the same way the rest of the control plane records work
- works with plugin-provided providers, not only built-ins
The question is not "which memory project wins?" The question is "what is the smallest Paperclip contract that can sit above several very different memory systems without flattening away the useful differences?"
## Quick Grouping
### Hosted memory APIs
- `mem0`
- `supermemory`
- `Memori`
These optimize for a simple application integration story: send conversation/content plus an identity, then query for relevant memory or user context later.
### Agent-centric memory frameworks / memory OSes
- `MemOS`
- `memU`
- `EverMemOS`
- `OpenViking`
These treat memory as an agent runtime subsystem, not only as a search index. They usually add task memory, profiles, filesystem-style organization, async ingestion, or skill/resource management.
### Local-first memory stores / indexes
- `nuggets`
- `memsearch`
These emphasize local persistence, inspectability, and low operational overhead. They are useful because Paperclip is local-first today and needs at least one zero-config path.
## Per-Project Notes
| Project | Shape | Notable API / model | Strong fit for Paperclip | Main mismatch |
|---|---|---|---|---|
| [nuggets](https://github.com/NeoVertex1/nuggets) | local memory engine + messaging gateway | topic-scoped HRR memory with `remember`, `recall`, `forget`, fact promotion into `MEMORY.md` | good example of lightweight local memory and automatic promotion | very specific architecture; not a general multi-tenant service |
| [mem0](https://github.com/mem0ai/mem0) | hosted + OSS SDK | `add`, `search`, `getAll`, `get`, `update`, `delete`, `deleteAll`; entity partitioning via `user_id`, `agent_id`, `run_id`, `app_id` | closest to a clean provider API with identities and metadata filters | provider owns extraction heavily; Paperclip should not assume every backend behaves like mem0 |
| [MemOS](https://github.com/MemTensor/MemOS) | memory OS / framework | unified add-retrieve-edit-delete, memory cubes, multimodal memory, tool memory, async scheduler, feedback/correction | strong source for optional capabilities beyond plain search | much broader than the minimal contract Paperclip should standardize first |
| [supermemory](https://github.com/supermemoryai/supermemory) | hosted memory + context API | `add`, `profile`, `search.memories`, `search.documents`, document upload, settings; automatic profile building and forgetting | strong example of "context bundle" rather than raw search results | heavily productized around its own ontology and hosted flow |
| [memU](https://github.com/NevaMind-AI/memU) | proactive agent memory framework | file-system metaphor, proactive loop, intent prediction, always-on companion model | good source for when memory should trigger agent behavior, not just retrieval | proactive assistant framing is broader than Paperclip's task-centric control plane |
| [Memori](https://github.com/MemoriLabs/Memori) | hosted memory fabric + SDK wrappers | registers against LLM SDKs, attribution via `entity_id` + `process_id`, sessions, cloud + BYODB | strong example of automatic capture around model clients | wrapper-centric design does not map 1:1 to Paperclip's run / issue / comment lifecycle |
| [EverMemOS](https://github.com/EverMind-AI/EverMemOS) | conversational long-term memory system | MemCell extraction, structured narratives, user profiles, hybrid retrieval / reranking | useful model for provenance-rich structured memories and evolving profiles | focused on conversational memory rather than generalized control-plane events |
| [memsearch](https://github.com/zilliztech/memsearch) | markdown-first local memory index | markdown as source of truth, `index`, `search`, `watch`, transcript parsing, plugin hooks | excellent baseline for a local built-in provider and inspectable provenance | intentionally simple; no hosted service semantics or rich correction workflow |
| [OpenViking](https://github.com/volcengine/OpenViking) | context database | filesystem-style organization of memories/resources/skills, tiered loading, visualized retrieval trajectories | strong source for browse/inspect UX and context provenance | treats "context database" as a larger product surface than Paperclip should own |
## Common Primitives Across The Landscape
Even though the systems disagree on architecture, they converge on a few primitives:
- `ingest`: add memory from text, messages, documents, or transcripts
- `query`: search or retrieve memory given a task, question, or scope
- `scope`: partition memory by user, agent, project, process, or session
- `provenance`: carry enough metadata to explain where a memory came from
- `maintenance`: update, forget, dedupe, compact, or correct memories over time
- `context assembly`: turn raw memories into a prompt-ready bundle for the agent
If Paperclip does not expose these, it will not adapt well to the systems above.
## Where The Systems Differ
These differences are exactly why Paperclip needs a layered contract instead of a single hard-coded engine.
### 1. Who owns extraction?
- `mem0`, `supermemory`, and `Memori` expect the provider to infer memories from conversations.
- `memsearch` expects the host to decide what markdown to write, then indexes it.
- `MemOS`, `memU`, `EverMemOS`, and `OpenViking` sit somewhere in between and often expose richer memory construction pipelines.
Paperclip should support both:
- provider-managed extraction
- Paperclip-managed extraction with provider-managed storage / retrieval
### 2. What is the source of truth?
- `memsearch` and `nuggets` make the source inspectable on disk.
- hosted APIs often make the provider store canonical.
- filesystem-style systems like `OpenViking` and `memU` treat hierarchy itself as part of the memory model.
Paperclip should not require a single storage shape. It should require normalized references back to Paperclip entities.
### 3. Is memory just search, or also profile and planning state?
- `mem0` and `memsearch` center search and CRUD.
- `supermemory` adds user profiles as a first-class output.
- `MemOS`, `memU`, `EverMemOS`, and `OpenViking` expand into tool traces, task memory, resources, and skills.
Paperclip should make plain search the minimum contract and richer outputs optional capabilities.
### 4. Is memory synchronous or asynchronous?
- local tools often work synchronously in-process.
- larger systems add schedulers, background indexing, compaction, or sync jobs.
Paperclip needs both direct request/response operations and background maintenance hooks.
## Paperclip-Specific Takeaways
### Paperclip should own these concerns
- binding a provider to a company and optionally overriding it per agent
- mapping Paperclip entities into provider scopes
- provenance back to issue comments, documents, runs, and activity
- cost / token / latency reporting for memory work
- browse and inspect surfaces in the Paperclip UI
- governance on destructive operations
### Providers should own these concerns
- extraction heuristics
- embedding / indexing strategy
- ranking and reranking
- profile synthesis
- contradiction resolution and forgetting logic
- storage engine details
### The control-plane contract should stay small
Paperclip does not need to standardize every feature from every provider. It needs:
- a required portable core
- optional capability flags for richer providers
- a way to record provider-native ids and metadata without pretending all providers are equivalent internally
## Recommended Direction
Paperclip should adopt a two-layer memory model:
1. `Memory binding + control plane layer`
Paperclip decides which provider key is in effect for a company, agent, or project, and it logs every memory operation with provenance and usage.
2. `Provider adapter layer`
A built-in or plugin-supplied adapter turns Paperclip memory requests into provider-specific calls.
The portable core should cover:
- ingest / write
- search / recall
- browse / inspect
- get by provider record handle
- forget / correction
- usage reporting
Optional capabilities can cover:
- profile synthesis
- async ingestion
- multimodal content
- tool / resource / skill memory
- provider-native graph browsing
That is enough to support:
- a local markdown-first baseline similar to `memsearch`
- hosted services similar to `mem0`, `supermemory`, or `Memori`
- richer agent-memory systems like `MemOS` or `OpenViking`
without forcing Paperclip itself to become a monolithic memory engine.

View File

@@ -0,0 +1,468 @@
# Billing Ledger and Reporting
## Context
Paperclip currently stores model spend in `cost_events` and operational run state in `heartbeat_runs`.
That split is fine, but the current reporting code tries to infer billing semantics by mixing both tables:
- `cost_events` knows provider, model, tokens, and dollars
- `heartbeat_runs.usage_json` knows some per-run billing metadata
- `heartbeat_runs.usage_json` does **not** currently carry enough normalized billing dimensions to support honest provider-level reporting
This becomes incorrect as soon as a company uses more than one provider, more than one billing channel, or more than one billing mode.
Examples:
- direct OpenAI API usage
- Claude subscription usage with zero marginal dollars
- subscription overage with dollars and tokens
- OpenRouter billing where the biller is OpenRouter but the upstream provider is Anthropic or OpenAI
The system needs to support:
- dollar reporting
- token reporting
- subscription-included usage
- subscription overage
- direct metered API usage
- future aggregator billing such as OpenRouter
## Product Decision
`cost_events` becomes the canonical billing and usage ledger for reporting.
`heartbeat_runs` remains an operational execution log. It may keep mirrored billing metadata for debugging and transcripts, but reporting must not reconstruct billing semantics from `heartbeat_runs.usage_json`.
## Decision: One Ledger Or Two
We do **not** need two tables to solve the current PR's problem.
For request-level inference reporting, `cost_events` is enough if it carries the right dimensions:
- upstream provider
- biller
- billing type
- model
- token fields
- billed amount
That is why the first implementation pass extends `cost_events` instead of introducing a second table immediately.
However, if Paperclip needs to account for the full billing surface of aggregators and managed AI platforms, then `cost_events` alone is not enough.
Some charges are not cleanly representable as a single model inference event:
- account top-ups and credit purchases
- platform fees charged at purchase time
- BYOK platform fees that are account-level or threshold-based
- prepaid credit expirations, refunds, and adjustments
- provisioned throughput commitments
- fine-tuning, training, model import, and storage charges
- gateway logging or other platform overhead that is not attributable to one prompt/response pair
So the decision is:
- near term: keep `cost_events` as the inference and usage ledger
- next phase: add `finance_events` for non-inference financial events
This is a deliberate split between:
- usage and inference accounting
- account-level and platform-level financial accounting
That separation keeps request reporting honest without forcing us to fake invoice semantics onto rows that were never request-scoped.
## External Motivation And Sources
The need for this model is not theoretical.
It follows directly from the billing systems of providers and aggregators Paperclip needs to support.
### OpenRouter
Source URLs:
- https://openrouter.ai/docs/faq#credit-and-billing-systems
- https://openrouter.ai/pricing
Relevant billing behavior as of March 14, 2026:
- OpenRouter passes through underlying inference pricing and deducts request cost from purchased credits.
- OpenRouter charges a 5.5% fee with a $0.80 minimum when purchasing credits.
- Crypto payments are charged a 5% fee.
- BYOK has its own fee model after a free request threshold.
- OpenRouter billing is aggregated at the OpenRouter account level even when the upstream provider is Anthropic, OpenAI, Google, or another provider.
Implication for Paperclip:
- request usage belongs in `cost_events`
- credit purchases, purchase fees, BYOK fees, refunds, and expirations belong in `finance_events`
- `biller=openrouter` must remain distinct from `provider=anthropic|openai|google|...`
### Cloudflare AI Gateway Unified Billing
Source URL:
- https://developers.cloudflare.com/ai-gateway/features/unified-billing/
Relevant billing behavior as of March 14, 2026:
- Unified Billing lets users call multiple upstream providers while receiving a single Cloudflare bill.
- Usage is paid from Cloudflare-loaded credits.
- Cloudflare supports manual top-ups and auto top-up thresholds.
- Spend limits can stop request processing on daily, weekly, or monthly boundaries.
- Unified Billing traffic can use Cloudflare-managed credentials rather than the user's direct provider key.
Implication for Paperclip:
- request usage needs `biller=cloudflare`
- upstream provider still needs to be preserved separately
- Cloudflare credit loads and related account-level events are not inference rows and should not be forced into `cost_events`
- quota and limits reporting must support biller-level controls, not just upstream provider limits
### Amazon Bedrock
Source URL:
- https://aws.amazon.com/bedrock/pricing/
Relevant billing behavior as of March 14, 2026:
- Bedrock supports on-demand and batch pricing.
- Bedrock pricing varies by region.
- Some pricing tiers add premiums or discounts relative to standard pricing.
- Provisioned throughput is commitment-based rather than request-based.
- Custom model import uses Custom Model Units billed per minute, with monthly storage charges.
- Imported model copies are billed in 5-minute windows once active.
- Customization and fine-tuning introduce training and hosted-model charges beyond normal inference.
Implication for Paperclip:
- normal tokenized inference fits in `cost_events`
- provisioned throughput, custom model unit charges, training, and storage charges require `finance_events`
- region and pricing tier need to be first-class dimensions in the financial model
## Ledger Boundary
To keep the system coherent, the table boundary should be explicit.
### `cost_events`
Use `cost_events` for request-scoped usage and inference charges:
- one row per billable or usage-bearing run event
- provider/model/biller/billingType/tokens/cost
- optionally tied to `heartbeat_run_id`
- supports direct APIs, subscriptions, overage, OpenRouter-routed inference, Cloudflare-routed inference, and Bedrock on-demand inference
### `finance_events`
Use `finance_events` for account-scoped or platform-scoped financial events:
- credit purchase
- top-up
- refund
- fee
- expiry
- provisioned capacity
- training
- model import
- storage
- invoice adjustment
These rows may or may not have a related model, provider, or run id.
Trying to force them into `cost_events` would either create fake request rows or create null-heavy rows that mean something fundamentally different from inference usage.
## Canonical Billing Dimensions
Every persisted billing event should model four separate axes:
1. Usage provider
The upstream provider whose model performed the work.
Examples: `openai`, `anthropic`, `google`.
2. Biller
The system that charged for the usage.
Examples: `openai`, `anthropic`, `openrouter`, `cursor`, `chatgpt`.
3. Billing type
The pricing mode applied to the event.
Initial canonical values:
- `metered_api`
- `subscription_included`
- `subscription_overage`
- `credits`
- `fixed`
- `unknown`
4. Measures
Usage and billing must both be storable:
- `input_tokens`
- `output_tokens`
- `cached_input_tokens`
- `cost_cents`
These dimensions are independent.
For example, an event may be:
- provider: `anthropic`
- biller: `openrouter`
- billing type: `metered_api`
- tokens: non-zero
- cost cents: non-zero
Or:
- provider: `anthropic`
- biller: `anthropic`
- billing type: `subscription_included`
- tokens: non-zero
- cost cents: `0`
## Schema Changes
Extend `cost_events` with:
- `heartbeat_run_id uuid null references heartbeat_runs.id`
- `biller text not null default 'unknown'`
- `billing_type text not null default 'unknown'`
- `cached_input_tokens int not null default 0`
Keep `provider` as the upstream usage provider.
Do not overload `provider` to mean biller.
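A hedged migration sketch for the `cost_events` changes above (`$DATABASE_URL` and the exact DDL details are assumptions):

```bash
psql "$DATABASE_URL" <<'SQL'
ALTER TABLE cost_events
  ADD COLUMN heartbeat_run_id uuid NULL REFERENCES heartbeat_runs(id),
  ADD COLUMN biller text NOT NULL DEFAULT 'unknown',
  ADD COLUMN billing_type text NOT NULL DEFAULT 'unknown',
  ADD COLUMN cached_input_tokens int NOT NULL DEFAULT 0;
SQL
```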
Add a future `finance_events` table for account-level financial events with fields along these lines:
- `company_id`
- `occurred_at`
- `event_kind`
- `direction`
- `biller`
- `provider nullable`
- `execution_adapter_type nullable`
- `pricing_tier nullable`
- `region nullable`
- `model nullable`
- `quantity nullable`
- `unit nullable`
- `amount_cents`
- `currency`
- `estimated`
- `related_cost_event_id nullable`
- `related_heartbeat_run_id nullable`
- `external_invoice_id nullable`
- `metadata_json nullable`
Add indexes:
- `(company_id, biller, occurred_at)`
- `(company_id, provider, occurred_at)`
- `(company_id, heartbeat_run_id)` if distinct-run reporting remains common
## Shared Contract Changes
### Shared types
Add a shared billing type union and enrich cost types with:
- `heartbeatRunId`
- `biller`
- `billingType`
- `cachedInputTokens`
Update reporting response types so the provider breakdown reflects the ledger directly rather than inferred run metadata.
### Validators
Extend `createCostEventSchema` to accept:
- `heartbeatRunId`
- `biller`
- `billingType`
- `cachedInputTokens`
Defaults:
- `biller` defaults to `provider`
- `billingType` defaults to `unknown`
- `cachedInputTokens` defaults to `0`
## Adapter Contract Changes
Extend adapter execution results so they can report:
- `biller`
- richer billing type values
Backwards compatibility:
- existing adapter values `api` and `subscription` are treated as legacy aliases
- map `api -> metered_api`
- map `subscription -> subscription_included`
Future adapters may emit the canonical values directly.
OpenRouter support will use:
- `provider` = upstream provider when known
- `biller` = `openrouter`
- `billingType` = `metered_api` unless OpenRouter later exposes another billing mode
Cloudflare Unified Billing support will use:
- `provider` = upstream provider when known
- `biller` = `cloudflare`
- `billingType` = `credits` or `metered_api` depending on the normalized request billing contract
Bedrock support will use:
- `provider` = upstream provider or `aws_bedrock` depending on adapter shape
- `biller` = `aws_bedrock`
- `billingType` = request-scoped mode for inference rows
- `finance_events` for provisioned, training, import, and storage charges
## Write Path Changes
### Heartbeat-created events
When a heartbeat run produces usage or spend:
1. normalize adapter billing metadata
2. write a ledger row to `cost_events`
3. attach `heartbeat_run_id`
4. set `provider`, `biller`, `billing_type`, token fields, and `cost_cents`
The write path should no longer depend on later inference from `heartbeat_runs`.
### Manual API-created events
Manual cost event creation remains supported.
These events may have `heartbeatRunId = null`.
Rules:
- `provider` remains required
- `biller` defaults to `provider`
- `billingType` defaults to `unknown`
## Reporting Changes
### Server
Refactor reporting queries to use `cost_events` only.
#### `summary`
- sum `cost_cents`
#### `by-agent`
- sum costs and token fields from `cost_events`
- use `count(distinct heartbeat_run_id)` filtered by billing type for run counts
- use token sums filtered by billing type for subscription usage
#### `by-provider`
- group by `provider`, `model`
- sum costs and token fields directly from the ledger
- derive billing-type slices from `cost_events.billing_type`
- never pro-rate from unrelated `heartbeat_runs`
#### future `by-biller`
- group by `biller`
- this is the right view for invoice and subscription accountability
#### `window-spend`
- continue to use `cost_events`
#### project attribution
Keep current project attribution logic for now, but prefer `cost_events.heartbeat_run_id` as the join anchor whenever possible.
## UI Changes
### Principles
- Spend, usage, and quota are related but distinct
- a missing quota fetch is not the same as “no quota”
- provider and biller are different dimensions
### Immediate UI changes
1. Keep the current costs page structure.
2. Make the provider cards accurate by reading only ledger-backed values.
3. Show provider quota fetch errors explicitly instead of dropping them.
### Follow-up UI direction
The long-term board UI should expose:
- Spend
  Dollars by biller, provider, model, agent, project
- Usage
  Tokens by provider, model, agent, project
- Quotas
  Live provider or biller limits, credits, and reset windows
- Financial events
  Credit purchases, top-ups, fees, refunds, commitments, storage, and other non-inference charges
## Migration Plan
Migration behavior:
- add new non-destructive columns with defaults
- backfill existing rows:
  - `biller = provider`
  - `billing_type = 'unknown'`
  - `cached_input_tokens = 0`
  - `heartbeat_run_id = null`
Do **not** attempt to backfill historical provider-level subscription attribution from `heartbeat_runs`.
That data was never stored with the required dimensions.
## Testing Plan
Add or update tests for:
1. heartbeat-created ledger rows persist `heartbeatRunId`, `biller`, `billingType`, and cached tokens
2. legacy adapter billing values map correctly
3. provider reporting uses ledger data only
4. mixed-provider companies do not cross-attribute subscription usage
5. zero-dollar subscription usage still appears in token reporting
6. quota fetch failures render explicit UI state
7. manual cost events still validate and write correctly
8. biller reporting keeps upstream provider breakdowns separate
9. OpenRouter-style rows can show `biller=openrouter` with non-OpenRouter upstream providers
10. Cloudflare-style rows can show `biller=cloudflare` with preserved upstream provider identity
11. future `finance_events` aggregation handles non-request charges without requiring a model or run id
## Delivery Plan
### Step 1
- land the ledger contract and query rewrite
- make the current costs page correct
### Step 2
- add biller-oriented reporting endpoints and UI
### Step 3
- wire OpenRouter and any future aggregator adapters to the same contract
### Step 4
- add `executionAdapterType` to persisted cost reporting if adapter-level grouping becomes a product requirement
### Step 5
- introduce `finance_events`
- add non-inference accounting endpoints
- add UI for platform/account charges alongside inference spend and usage
## Non-Goals For This Change
- multi-currency support
- invoice reconciliation
- provider-specific cost estimation beyond persisted billed cost
- replacing `heartbeat_runs` as the operational run record


@@ -0,0 +1,611 @@
# Budget Policies and Enforcement
## Context
Paperclip already treats budgets as a core control-plane responsibility:
- `doc/SPEC.md` gives the Board authority to set budgets, pause agents, pause work, and override any budget.
- `doc/SPEC-implementation.md` says V1 must support monthly UTC budget windows, soft alerts, and hard auto-pause.
- the current code only partially implements that intent.
Today the system has narrow money-budget behavior:
- companies track `budgetMonthlyCents` and `spentMonthlyCents`
- agents track `budgetMonthlyCents` and `spentMonthlyCents`
- `cost_events` ingestion increments those counters
- when an agent exceeds its monthly budget, the agent is paused
That leaves major product gaps:
- no project budget model
- no approval generated when budget is hit
- no generic budget policy system
- no project pause semantics tied to budget
- no durable incident tracking to prevent duplicate alerts
- no separation between enforceable spend budgets and advisory usage quotas
This plan defines the concrete budgeting model Paperclip should implement next.
## Product Goals
Paperclip should let operators:
1. Set budgets on agents and projects.
2. Understand whether a budget is based on money or usage.
3. Be warned before a budget is exhausted.
4. Automatically pause work when a hard budget is hit.
5. Approve, raise, or resume from a budget stop using obvious UI.
6. See budget state on the dashboard, `/costs`, and scope detail pages.
The system should make one thing very clear:
- budgets are policy controls
- quotas are usage visibility
They are related, but they are not the same concept.
## Product Decisions
### V1 Budget Defaults
For the next implementation pass, Paperclip should enforce these defaults:
- agent budgets are recurring monthly budgets
- project budgets are lifetime total budgets
- hard-stop enforcement uses billed dollars, not tokens
- monthly windows use UTC calendar months
- project total budgets do not reset automatically
This gives a clean mental model:
- agents are ongoing workers, so monthly recurring budget is natural
- projects are bounded workstreams, so lifetime cap is natural
### Metric To Enforce First
The first enforceable metric should be `billed_cents`.
Reasoning:
- it works across providers, billers, and models
- it maps directly to real financial risk
- it handles overage and metered usage consistently
- it avoids cross-provider token normalization problems
- it applies cleanly even when future finance events are not token-based
Token budgets should not be the first hard-stop policy.
They should come later as advisory usage controls once the money-based system is solid.
### Subscription Usage Decision
Paperclip should separate subscription-included usage from billed spend:
- `subscription_included`
  - visible in reporting
  - visible in usage summaries
  - does not count against money budget
- `subscription_overage`
  - visible in reporting
  - counts against money budget
- `metered_api`
  - visible in reporting
  - counts against money budget
This keeps the budget system honest:
- users should not see "spend" rise for usage that did not incur marginal billed cost
- users should still see the token usage and provider quota state
### Soft Alert Versus Hard Stop
Paperclip should have two threshold classes:
- soft alert
  - creates visible notification state
  - does not create an approval
  - does not pause work
- hard stop
  - pauses the affected scope automatically
  - creates an approval requiring human resolution
  - prevents additional heartbeats or task pickup in that scope
Default thresholds:
- soft alert at `80%`
- hard stop at `100%`
These should be configurable per policy later, but they are good defaults now.
## Scope Model
### Supported Scope Types
Budget policies should support:
- `company`
- `agent`
- `project`
This plan focuses on finishing `agent` and `project` first while preserving the existing company budget behavior.
### Recommended V1.5 Policy Presets
- Company
  - metric: `billed_cents`
  - window: `calendar_month_utc`
- Agent
  - metric: `billed_cents`
  - window: `calendar_month_utc`
- Project
  - metric: `billed_cents`
  - window: `lifetime`
Future extensions can add:
- token advisory policies
- daily or weekly spend windows
- provider- or biller-scoped budgets
- inherited delegated budgets down the org tree
## Current Implementation Baseline
The current codebase is not starting from zero, but the existing shape is too ad hoc to extend safely.
### What Exists Today
- company and agent monthly cents counters
- cost ingestion that updates those counters
- agent hard-stop pause on monthly budget overrun
### What Is Missing
- project budgets
- generic budget policy persistence
- generic threshold crossing detection
- incident deduplication per scope/window
- approval creation on hard-stop
- project execution blocking
- budget timeline and incident UI
- distinction between advisory quota and enforceable budget
## Proposed Data Model
### 1. `budget_policies`
Create a new table for canonical budget definitions.
Suggested fields:
- `id`
- `company_id`
- `scope_type`
- `scope_id`
- `metric`
- `window_kind`
- `amount`
- `warn_percent`
- `hard_stop_enabled`
- `notify_enabled`
- `is_active`
- `created_by_user_id`
- `updated_by_user_id`
- `created_at`
- `updated_at`
Notes:
- `scope_type` is one of `company | agent | project`
- `scope_id` is nullable only for company-level policy if company is implied; otherwise keep it explicit
- `metric` should start with `billed_cents`
- `window_kind` starts with `calendar_month_utc | lifetime`
- `amount` is stored in the natural unit of the metric
### 2. `budget_incidents`
Create a durable record of threshold crossings.
Suggested fields:
- `id`
- `company_id`
- `policy_id`
- `scope_type`
- `scope_id`
- `metric`
- `window_kind`
- `window_start`
- `window_end`
- `threshold_type`
- `amount_limit`
- `amount_observed`
- `status`
- `approval_id` nullable
- `activity_id` nullable
- `resolved_at` nullable
- `created_at`
- `updated_at`
Notes:
- `threshold_type`: `soft | hard`
- `status`: `open | acknowledged | resolved | dismissed`
- one open incident per policy per threshold per window prevents duplicate approvals and alert spam
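A minimal sketch of that invariant, assuming a repository layer whose helpers (`findOpenIncident`, `createIncident`) are hypothetical:
```ts
// Hypothetical repository helpers; the invariant is one open incident per
// (policy, threshold, window).
declare function findOpenIncident(key: {
  policyId: string;
  thresholdType: "soft" | "hard";
  windowStart: Date;
}): Promise<{ id: string } | null>;
declare function createIncident(key: {
  policyId: string;
  thresholdType: "soft" | "hard";
  windowStart: Date;
  status: "open";
}): Promise<{ id: string }>;

export async function openIncidentOnce(
  policyId: string,
  thresholdType: "soft" | "hard",
  windowStart: Date,
): Promise<{ id: string; created: boolean }> {
  const existing = await findOpenIncident({ policyId, thresholdType, windowStart });
  if (existing) return { id: existing.id, created: false }; // already alerted this window
  const row = await createIncident({ policyId, thresholdType, windowStart, status: "open" });
  return { id: row.id, created: true };
}
```
A partial unique index such as `(policy_id, threshold_type, window_start) WHERE status = 'open'` would enforce the same invariant at the database level.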
### 3. Project Pause State
Projects need explicit pause semantics.
Recommended approach:
- extend project status or add a pause field so a project can be blocked by budget
- preserve whether the project is paused due to budget versus manually paused
Preferred shape:
- keep project workflow status as-is
- add execution-state fields:
  - `execution_status`: `active | paused | archived`
  - `pause_reason`: `manual | budget | system | null`
If that is too large for the immediate pass, a smaller version is:
- add `paused_at`
- add `pause_reason`
The key requirement is behavioral, not cosmetic:
Paperclip must know that a project is budget-paused and enforce it.
### 4. Compatibility With Existing Budget Columns
Existing company and agent monthly budget columns should remain temporarily for compatibility.
Migration plan:
1. keep reading existing columns during transition
2. create equivalent `budget_policies` rows
3. switch enforcement and UI to policies
4. later remove or deprecate legacy columns
## Budget Engine
Budget enforcement should move into a dedicated service.
Current logic is buried inside cost ingestion.
That is too narrow because budget checks must apply at more than one execution boundary.
### Responsibilities
New service: `budgetService`
Responsibilities:
- resolve applicable policies for a cost event
- compute current window totals
- detect threshold crossings
- create incidents, activities, and approvals
- pause affected scopes on hard-stop
- provide preflight enforcement checks for execution entry points
### Canonical Evaluation Flow
When a new `cost_event` is written:
1. persist the `cost_event`
2. identify affected scopes
   - company
   - agent
   - project
3. fetch active policies for those scopes
4. compute current observed amount for each policy window
5. compare to thresholds
6. create soft incident if soft threshold crossed for first time in window
7. create hard incident if hard threshold crossed for first time in window
8. if hard incident:
   - pause the scope
   - create approval
   - create activity event
   - emit notification state
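Condensed into code, the flow might look like the sketch below — `fetchActivePolicies`, `computeWindowTotalCents`, and `pauseScopeAndCreateApproval` are hypothetical names, and `openIncidentOnce` is the dedup helper sketched earlier:
```ts
interface ActivePolicy {
  id: string;
  scopeType: "company" | "agent" | "project";
  scopeId: string;
  amountCents: number;
  warnPercent: number; // default 80
  hardStopEnabled: boolean;
  windowStart: Date;
}

declare function fetchActivePolicies(scope: {
  companyId: string;
  agentId?: string;
  projectId?: string;
}): Promise<ActivePolicy[]>;
declare function computeWindowTotalCents(policy: ActivePolicy): Promise<number>;
declare function pauseScopeAndCreateApproval(policy: ActivePolicy, observedCents: number): Promise<void>;
declare function openIncidentOnce(
  policyId: string,
  thresholdType: "soft" | "hard",
  windowStart: Date,
): Promise<{ id: string; created: boolean }>;

export async function evaluateCostEvent(scope: {
  companyId: string;
  agentId?: string;
  projectId?: string;
}): Promise<void> {
  const policies = await fetchActivePolicies(scope);
  for (const policy of policies) {
    const observed = await computeWindowTotalCents(policy);
    const softLimit = Math.floor((policy.amountCents * policy.warnPercent) / 100);
    if (observed >= policy.amountCents && policy.hardStopEnabled) {
      const incident = await openIncidentOnce(policy.id, "hard", policy.windowStart);
      // pause + approval + activity fire only when the incident is new
      if (incident.created) await pauseScopeAndCreateApproval(policy, observed);
    } else if (observed >= softLimit) {
      await openIncidentOnce(policy.id, "soft", policy.windowStart); // alert only
    }
  }
}
```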
### Preflight Enforcement Checks
Budget enforcement cannot rely only on post-hoc cost ingestion.
Paperclip must also block execution before new work starts.
Add budget checks to:
- scheduler heartbeat dispatch
- manual invoke endpoints
- assignment-driven wakeups
- queued run promotion
- issue checkout or pickup paths where applicable
If a scope is budget-paused:
- do not start a new heartbeat
- do not let the agent pick up additional work
- present a clear reason in API and UI
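A sketch of a shared guard those entry points could call — the lookup helper and error shape are illustrative, not the current API surface:
```ts
// Hypothetical pause lookup; the thrown shape mirrors the "explicit errors"
// requirement in the API plan below.
declare function findBudgetPause(scope: {
  companyId: string;
  agentId?: string;
  projectId?: string;
}): Promise<{ scopeType: "company" | "agent" | "project"; approvalId: string | null } | null>;

export async function assertNotBudgetPaused(scope: {
  companyId: string;
  agentId?: string;
  projectId?: string;
}): Promise<void> {
  const paused = await findBudgetPause(scope);
  if (paused) {
    // Refuse loudly with a machine-readable reason instead of a silent no-op.
    throw Object.assign(new Error("budget_paused"), {
      status: 409,
      scopeType: paused.scopeType,
      approvalId: paused.approvalId,
    });
  }
}
```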
### Active Run Behavior
When a hard-stop is triggered while a run is already active:
- mark scope paused immediately for future work
- request graceful cancellation of the current run
- allow normal cancellation timeout behavior
- write activity explaining that pause came from budget enforcement
This mirrors the general pause semantics already expected by the product.
## Approval Model
Budget hard-stops should create a first-class approval.
### New Approval Type
Add approval type:
- `budget_override_required`
Payload should include:
- `scopeType`
- `scopeId`
- `scopeName`
- `metric`
- `windowKind`
- `thresholdType`
- `budgetAmount`
- `observedAmount`
- `windowStart`
- `windowEnd`
- `topDrivers`
- `paused`
### Resolution Actions
The approval UI should support:
- raise budget and resume
- resume once without changing policy
- keep paused
Optional later action:
- disable budget policy
### Soft Alerts Do Not Need Approval
Soft alerts should create:
- activity event
- dashboard alert
- inbox notification or similar board-visible signal
They should not create an approval by default.
## Notification And Activity Model
Budget events need obvious operator visibility.
Required outputs:
- activity log entry on threshold crossings
- dashboard surface for active budget incidents
- detail page banner on paused agent or project
- `/costs` summary of active incidents and policy health
Later channels:
- email
- webhook
- Slack or other integrations
## API Plan
### Policy Management
Add routes for:
- list budget policies for company
- create budget policy
- update budget policy
- archive or disable budget policy
### Incident Surfaces
Add routes for:
- list active budget incidents
- list incident history
- get incident detail for a scope
### Approval Resolution
Budget approvals should use the existing approval system once the new approval type is added.
Expected flows:
- create approval on hard-stop
- resolve approval by changing policy and resuming
- resolve approval by resuming once
### Execution Errors
When work is blocked by budget, the API should return explicit errors.
Examples:
- agent invocation blocked because agent budget is paused
- issue execution blocked because project budget is paused
Do not silently no-op.
## UI Plan
Budgeting should be visible in the places where operators make decisions.
### `/costs`
Add a budget section that includes:
- active budget incidents
- policy list with scope, window, metric, and threshold state
- progress bars for current period or total
- clear distinction between:
  - spend budget
  - subscription quota
- quick actions:
  - raise budget
  - open approval
  - resume scope if permitted
The page should make this visual distinction obvious:
- Budget
  - enforceable spend policy
- Quota
  - provider or subscription usage window
### Agent Detail
Add an agent budget card:
- monthly budget amount
- current month spend
- remaining spend
- status
- warning or paused banner
- link to approval if blocked
### Project Detail
Add a project budget card:
- total budget amount
- total spend to date
- remaining spend
- pause status
- approval link
Project detail should also show if issue execution is blocked because the project is budget-paused.
### Dashboard
Add a high-signal budget section:
- active budget breaches
- upcoming soft alerts
- counts of paused agents and paused projects due to budget
The operator should not have to visit `/costs` to learn that work has stopped.
## Budget Math
### What Counts Toward Budget
For V1.5 enforcement, include:
- `metered_api` cost events
- `subscription_overage` cost events
- any future request-scoped cost event with non-zero billed cents
Do not include:
- `subscription_included` cost events with zero billed cents
- advisory quota rows
- account-level finance events unless and until company-level financial budgets are added explicitly
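As a predicate, the rule is small — a sketch assuming the canonical billing type values from the cost ledger plan:
```ts
// Sketch of the V1.5 inclusion rule.
function countsTowardMoneyBudget(billingType: string, costCents: number): boolean {
  if (billingType === "subscription_included") return false; // quota, not spend
  // metered_api, subscription_overage, and any future request-scoped kind
  // consume budget only when they carry non-zero billed cents.
  return costCents > 0;
}
```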
### Why Not Tokens First
Token budgets should not be the first hard-stop because:
- providers count tokens differently
- cached tokens complicate simple totals
- some future charges are not token-based
- subscription tokens do not necessarily imply spend
- money remains the cleanest cross-provider enforcement metric
### Future Budget Metrics
Future policy metrics can include:
- `total_tokens`
- `input_tokens`
- `output_tokens`
- `requests`
- `finance_amount_cents`
But they should enter only after the money-budget path is stable.
## Migration Plan
### Phase 1: Foundation
- add `budget_policies`
- add `budget_incidents`
- add new approval type
- add project pause metadata
### Phase 2: Compatibility
- backfill policies from existing company and agent monthly budget columns
- keep legacy columns readable during migration
### Phase 3: Enforcement
- move budget logic into dedicated service
- add hard-stop incident creation
- add activity and approval creation
- add execution guards on heartbeat and invoke paths
### Phase 4: UI
- `/costs` budget section
- agent detail budget card
- project detail budget card
- dashboard incident summary
### Phase 5: Cleanup
- move all reads/writes to `budget_policies`
- reduce legacy column reliance
- decide whether to remove old budget columns
## Tests
Required coverage:
- agent monthly budget soft alert at 80%
- agent monthly budget hard-stop at 100%
- project lifetime budget soft alert
- project lifetime budget hard-stop
- `subscription_included` usage does not consume money budget
- `subscription_overage` does consume money budget
- hard-stop creates one incident per threshold per window
- hard-stop creates approval and pauses correct scope
- paused project blocks new issue execution
- paused agent blocks new heartbeat dispatch
- policy update and resume clears or resolves active incident correctly
- dashboard and `/costs` surface active incidents
## Open Questions
These should be explicitly deferred unless they block implementation:
- Should project budgets also support monthly mode, or is lifetime enough for the first release?
- Should company-level budgets eventually include `finance_events` such as OpenRouter top-up fees and Bedrock provisioned charges?
- Should delegated budget editing be limited by org hierarchy in V1, or remain board-only in the UI even if the data model can support delegation later?
- Do we need "resume once" immediately, or can first approval resolution be "raise budget and resume" plus "keep paused"?
## Recommendation
Implement the first coherent budgeting system with these rules:
- Agent budget = monthly billed dollars
- Project budget = lifetime billed dollars
- Hard-stop = auto-pause + approval
- Soft alert = visible warning, no approval
- Subscription usage = visible quota and token reporting, not money-budget enforcement
This solves the real operator problem without mixing together spend control, provider quota windows, and token accounting.


@@ -0,0 +1,424 @@
# Docker Release Browser E2E Plan
## Context
Today release smoke testing for published Paperclip packages is manual and shell-driven:
```sh
HOST_PORT=3232 DATA_DIR=./data/release-smoke-canary PAPERCLIPAI_VERSION=canary ./scripts/docker-onboard-smoke.sh
HOST_PORT=3233 DATA_DIR=./data/release-smoke-stable PAPERCLIPAI_VERSION=latest ./scripts/docker-onboard-smoke.sh
```
That is useful because it exercises the same public install surface users hit:
- Docker
- `npx paperclipai@canary`
- `npx paperclipai@latest`
- authenticated bootstrap flow
But it still leaves the most important release questions to a human with a browser:
- can I sign in with the smoke credentials?
- do I land in onboarding?
- can I complete onboarding?
- does the initial CEO agent actually get created and run?
The repo already has two adjacent pieces:
- `tests/e2e/onboarding.spec.ts` covers the onboarding wizard against the local source tree
- `scripts/docker-onboard-smoke.sh` boots a published Docker install and auto-bootstraps authenticated mode, but only verifies the API/session layer
What is missing is one deterministic browser test that joins those two paths.
## Goal
Add a release-grade Docker-backed browser E2E that validates the published `canary` and `latest` installs end to end:
1. boot the published package in Docker
2. sign in with known smoke credentials
3. verify the user is routed into onboarding
4. complete onboarding in the browser
5. verify the first CEO agent exists
6. verify the initial CEO run was triggered and reached a terminal or active state
Then wire that test into GitHub Actions so release validation is no longer manual-only.
## Recommendation In One Sentence
Turn the current Docker smoke script into a machine-friendly test harness, add a dedicated Playwright release-smoke spec that drives the authenticated browser flow against published Docker installs, and run it in GitHub Actions for both `canary` and `latest`.
## What We Have Today
### Existing local browser coverage
`tests/e2e/onboarding.spec.ts` already proves the onboarding wizard can:
- create a company
- create a CEO agent
- create an initial issue
- optionally observe task progress
That is a good base, but it does not validate the public npm package, Docker path, authenticated login flow, or release dist-tags.
### Existing Docker smoke coverage
`scripts/docker-onboard-smoke.sh` already does useful setup work:
- builds `Dockerfile.onboard-smoke`
- runs `paperclipai@${PAPERCLIPAI_VERSION}` inside Docker
- waits for health
- signs up or signs in a smoke admin user
- generates and accepts the bootstrap CEO invite in authenticated mode
- verifies a board session and `/api/companies`
That means the hard bootstrap problem is mostly solved already. The main gap is that the script is human-oriented and never hands control to a browser test.
### Existing CI shape
The repo already has:
- `.github/workflows/e2e.yml` for manual Playwright runs against local source
- `.github/workflows/release.yml` for canary publish on `master` and manual stable promotion
So the right move is to extend the current test/release system, not create a parallel one.
## Product Decision
### 1. The release smoke should stay deterministic and token-free
The first version should not require OpenAI, Anthropic, or external agent credentials.
Use the onboarding flow with a deterministic adapter that can run on a stock GitHub runner and inside the published Docker install. The existing `process` adapter with a trivial command is the right base path for this release gate.
That keeps this test focused on:
- release packaging
- auth/bootstrap
- UI routing
- onboarding contract
- agent creation
- heartbeat invocation plumbing
Later we can add a second credentialed smoke lane for real model-backed agents.
### 2. Smoke credentials become an explicit test contract
The current defaults in `scripts/docker-onboard-smoke.sh` should be treated as stable test fixtures:
- email: `smoke-admin@paperclip.local`
- password: `paperclip-smoke-password`
The browser test should log in with those exact values unless overridden by env vars.
### 3. Published-package smoke and source-tree E2E stay separate
Keep two lanes:
- source-tree E2E for feature development
- published Docker release smoke for release confidence
They overlap on onboarding assertions, but they guard different failure classes.
## Proposed Design
## 1. Add a CI-friendly Docker smoke harness
Refactor `scripts/docker-onboard-smoke.sh` so it can run in two modes:
- interactive mode
  - current behavior
  - streams logs and waits in foreground for manual inspection
- CI mode
  - starts the container
  - waits for health and authenticated bootstrap
  - prints machine-readable metadata
  - exits while leaving the container running for Playwright
Recommended shape:
- keep `scripts/docker-onboard-smoke.sh` as the public entry point
- add a `SMOKE_DETACH=true` or `--detach` mode
- emit a JSON blob or `.env` file containing:
  - `SMOKE_BASE_URL`
  - `SMOKE_ADMIN_EMAIL`
  - `SMOKE_ADMIN_PASSWORD`
  - `SMOKE_CONTAINER_NAME`
  - `SMOKE_DATA_DIR`
The workflow and Playwright tests can then consume the emitted metadata instead of scraping logs.
### Why this matters
The current script always tails logs and then blocks on `wait "$LOG_PID"`. That is convenient for manual smoke testing, but it is the wrong shape for CI orchestration.
## 2. Add a dedicated Playwright release-smoke spec
Create a second Playwright entry point specifically for published Docker installs, for example:
- `tests/release-smoke/playwright.config.ts`
- `tests/release-smoke/docker-auth-onboarding.spec.ts`
This suite should not use Playwright `webServer`, because the app server will already be running inside Docker.
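A minimal config sketch, assuming the harness wrote its metadata to a known JSON file; the file name and shape are illustrative:
```ts
// tests/release-smoke/playwright.config.ts (sketch) — no webServer block,
// because the app already runs inside the published Docker container.
import { defineConfig } from "@playwright/test";
import { readFileSync } from "node:fs";

const metadata = JSON.parse(readFileSync("smoke-metadata.json", "utf8")) as {
  SMOKE_BASE_URL: string;
};

export default defineConfig({
  testDir: ".",
  use: {
    baseURL: process.env.SMOKE_BASE_URL ?? metadata.SMOKE_BASE_URL,
    trace: "retain-on-failure",
    screenshot: "only-on-failure",
  },
  reporter: [["html", { open: "never" }]],
});
```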
### Browser scenario
The first release-smoke scenario should validate:
1. open `/`
2. unauthenticated user is redirected to `/auth`
3. sign in using the smoke credentials
4. authenticated user lands on onboarding when no companies exist
5. onboarding wizard appears with the expected step labels
6. create a company
7. create the first agent using `process`
8. create the initial issue
9. finish onboarding and open the created issue
10. verify via API:
    - company exists
    - CEO agent exists
    - issue exists and is assigned to the CEO
11. verify the first heartbeat run was triggered:
    - either by checking issue status changed from initial state, or
    - by checking agent/runs API shows a run for the CEO, or
    - both
The test should tolerate the run completing quickly. For this reason, the assertion should accept:
- `queued`
- `running`
- `succeeded`
and similarly for issue progression if the issue status changes before the assertion runs.
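A sketch of that tolerant assertion using Playwright's polling `toPass`, with an illustrative runs endpoint:
```ts
import { expect, type APIRequestContext } from "@playwright/test";

// Illustrative endpoint and states; polls until the run surfaces in the API.
const ACCEPTABLE_RUN_STATES = ["queued", "running", "succeeded"];

export async function expectCeoRunTriggered(api: APIRequestContext, agentId: string) {
  await expect(async () => {
    const res = await api.get(`/api/agents/${agentId}/runs`);
    expect(res.ok()).toBeTruthy();
    const runs = (await res.json()) as Array<{ status: string }>;
    expect(runs.length).toBeGreaterThan(0);
    expect(ACCEPTABLE_RUN_STATES).toContain(runs[0].status);
  }).toPass({ timeout: 60_000 });
}
```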
### Why a separate spec instead of reusing `tests/e2e/onboarding.spec.ts`
The local-source test and release-smoke test have different assumptions:
- different server lifecycle
- different auth path
- different deployment mode
- published npm package instead of local workspace code
Trying to force both through one spec will make both worse.
## 3. Add a release-smoke workflow in GitHub Actions
Add a workflow dedicated to this surface, ideally reusable:
- `.github/workflows/release-smoke.yml`
Recommended triggers:
- `workflow_dispatch`
- `workflow_call`
Recommended inputs:
- `paperclip_version`
  - `canary` or `latest`
- `host_port`
  - optional, default runner-safe port
- `artifact_name`
  - optional for clearer uploads
### Job outline
1. checkout repo
2. install Node/pnpm
3. install Playwright browser dependencies
4. launch Docker smoke harness in detached mode with the chosen dist-tag
5. run the release-smoke Playwright suite against the returned base URL
6. always collect diagnostics:
   - Playwright report
   - screenshots
   - trace
   - `docker logs`
   - harness metadata file
7. stop and remove container
### Why a reusable workflow
This lets us:
- run the smoke manually on demand
- call it from `release.yml`
- reuse the same job for both `canary` and `latest`
## 4. Integrate it into release automation incrementally
### Phase A: Manual workflow only
First ship the workflow as manual-only so the harness and test can be stabilized without blocking releases.
### Phase B: Run automatically after canary publish
After `publish_canary` succeeds in `.github/workflows/release.yml`, call the reusable release-smoke workflow with:
- `paperclip_version=canary`
This proves the just-published public canary really boots and onboards.
### Phase C: Run automatically after stable publish
After `publish_stable` succeeds, call the same workflow with:
- `paperclip_version=latest`
This gives us post-publish confirmation that the stable dist-tag is healthy.
### Important nuance
Testing `latest` from npm cannot happen before stable publish, because the package under test does not exist under `latest` yet. So the `latest` smoke is a post-publish verification, not a pre-publish gate.
If we later want a true pre-publish stable gate, that should be a separate source-ref or locally built package smoke job.
## 5. Make diagnostics first-class
This workflow is only valuable if failures are fast to debug.
Always capture:
- Playwright HTML report
- Playwright trace on failure
- final screenshot on failure
- full `docker logs` output
- emitted smoke metadata
- optional `curl /api/health` snapshot
Without that, the test will become a flaky black box and people will stop trusting it.
## Implementation Plan
## Phase 1: Harness refactor
Files:
- `scripts/docker-onboard-smoke.sh`
- optionally `scripts/lib/docker-onboard-smoke.sh` or similar helper
- `doc/DOCKER.md`
- `doc/RELEASING.md`
Tasks:
1. Add detached/CI mode to the Docker smoke script.
2. Make the script emit machine-readable connection metadata.
3. Keep the current interactive manual mode intact.
4. Add reliable cleanup commands for CI.
Acceptance:
- a script invocation can start the published Docker app, auto-bootstrap it, and return control to the caller with enough metadata for browser automation
## Phase 2: Browser release-smoke suite
Files:
- `tests/release-smoke/playwright.config.ts`
- `tests/release-smoke/docker-auth-onboarding.spec.ts`
- root `package.json`
Tasks:
1. Add a dedicated Playwright config for external server testing.
2. Implement login + onboarding + CEO creation flow.
3. Assert a CEO run was created or completed.
4. Add a root script such as:
   - `test:release-smoke`
Acceptance:
- the suite passes locally against both:
  - `PAPERCLIPAI_VERSION=canary`
  - `PAPERCLIPAI_VERSION=latest`
## Phase 3: GitHub Actions workflow
Files:
- `.github/workflows/release-smoke.yml`
Tasks:
1. Add manual and reusable workflow entry points.
2. Install Chromium and runner dependencies.
3. Start Docker smoke in detached mode.
4. Run the release-smoke Playwright suite.
5. Upload diagnostics artifacts.
Acceptance:
- a maintainer can run the workflow manually for either `canary` or `latest`
## Phase 4: Release workflow integration
Files:
- `.github/workflows/release.yml`
- `doc/RELEASING.md`
Tasks:
1. Trigger release smoke automatically after canary publish.
2. Trigger release smoke automatically after stable publish.
3. Document expected behavior and failure handling.
Acceptance:
- canary releases automatically produce a published-package browser smoke result
- stable releases automatically produce a `latest` browser smoke result
## Phase 5: Future extension for real model-backed agent validation
Not part of the first implementation, but this should be the next layer after the deterministic lane is stable.
Possible additions:
- a second Playwright project gated on repo secrets
- real `claude_local` or `codex_local` adapter validation in Docker-capable environments
- assertion that the CEO posts a real task/comment artifact
- stable release holdback until the credentialed lane passes
This should stay optional until the token-free lane is trustworthy.
## Acceptance Criteria
The plan is complete when the implemented system can demonstrate all of the following:
1. A published `paperclipai@canary` Docker install can be smoke-tested by Playwright in CI.
2. A published `paperclipai@latest` Docker install can be smoke-tested by Playwright in CI.
3. The test logs into authenticated mode with the smoke credentials.
4. The test sees onboarding for a fresh instance.
5. The test completes onboarding in the browser.
6. The test verifies the initial CEO agent was created.
7. The test verifies at least one CEO heartbeat run was triggered.
8. Failures produce actionable artifacts rather than just a red job.
## Risks And Decisions To Make
### 1. Fast process runs may finish before the UI visibly updates
That is expected. The assertions should prefer API polling for run existence/status rather than only visual indicators.
### 2. `latest` smoke is post-publish, not preventive
This is a real limitation of testing the published dist-tag itself. It is still valuable, but it should not be confused with a pre-publish gate.
### 3. We should not overcouple the test to cosmetic onboarding text
The important contract is flow success, created entities, and run creation. Use visible labels sparingly and prefer stable semantic selectors where possible.
### 4. Keep the smoke adapter path boring
For release safety, the first test should use the most boring runnable adapter possible. This is not the place to validate every adapter.
## Recommended First Slice
If we want the fastest path to value, ship this in order:
1. add detached mode to `scripts/docker-onboard-smoke.sh`
2. add one Playwright spec for authenticated login + onboarding + CEO run verification
3. add manual `release-smoke.yml`
4. once stable, wire canary into `release.yml`
5. after that, wire stable `latest` smoke into `release.yml`
That gives release confidence quickly without turning the first version into a large CI redesign.


@@ -0,0 +1,426 @@
# Paperclip Memory Service Plan
## Goal
Define a Paperclip memory service and surface API that can sit above multiple memory backends, while preserving Paperclip's control-plane requirements:
- company scoping
- auditability
- provenance back to Paperclip work objects
- budget / cost visibility
- plugin-first extensibility
This plan is based on the external landscape summarized in `doc/memory-landscape.md` and on the current Paperclip architecture in:
- `doc/SPEC-implementation.md`
- `doc/plugins/PLUGIN_SPEC.md`
- `doc/plugins/PLUGIN_AUTHORING_GUIDE.md`
- `packages/plugins/sdk/src/types.ts`
## Recommendation In One Sentence
Paperclip should not embed one opinionated memory engine into core. It should add a company-scoped memory control plane with a small normalized adapter contract, then let built-ins and plugins implement the provider-specific behavior.
## Product Decisions
### 1. Memory is company-scoped by default
Every memory binding belongs to exactly one company.
That binding can then be:
- the company default
- an agent override
- a project override later if we need it
No cross-company memory sharing in the initial design.
### 2. Providers are selected by key
Each configured memory provider gets a stable key inside a company, for example:
- `default`
- `mem0-prod`
- `local-markdown`
- `research-kb`
Agents and services resolve the active provider by key, not by hard-coded vendor logic.
### 3. Plugins are the primary provider path
Built-ins are useful for a zero-config local path, but most providers should arrive through the existing Paperclip plugin runtime.
That keeps the core small and matches the current direction that optional knowledge-like systems live at the edges.
### 4. Paperclip owns routing, provenance, and accounting
Providers should not decide how Paperclip entities map to governance.
Paperclip core should own:
- who is allowed to call a memory operation
- which company / agent / project scope is active
- what issue / run / comment / document the operation belongs to
- how usage gets recorded
### 5. Automatic memory should be narrow at first
Automatic capture is useful, but broad silent capture is dangerous.
Initial automatic hooks should be:
- post-run capture from agent runs
- issue comment / document capture when the binding enables it
- pre-run recall for agent context hydration
Everything else should start explicit.
## Proposed Concepts
### Memory provider
A built-in or plugin-supplied implementation that stores and retrieves memory.
Examples:
- local markdown + vector index
- mem0 adapter
- supermemory adapter
- MemOS adapter
### Memory binding
A company-scoped configuration record that points to a provider and carries provider-specific config.
This is the object selected by key.
### Memory scope
The normalized Paperclip scope passed into a provider request.
At minimum:
- `companyId`
- optional `agentId`
- optional `projectId`
- optional `issueId`
- optional `runId`
- optional `subjectId` for external/user identity
### Memory source reference
The provenance handle that explains where a memory came from.
Supported source kinds should include:
- `issue_comment`
- `issue_document`
- `issue`
- `run`
- `activity`
- `manual_note`
- `external_document`
### Memory operation
A normalized write, query, browse, or delete action performed through Paperclip.
Paperclip should log every operation, whether the provider is local or external.
## Required Adapter Contract
The required core should be small enough to fit `memsearch`, `mem0`, `Memori`, `MemOS`, or `OpenViking`.
```ts
export interface MemoryAdapterCapabilities {
  profile?: boolean;
  browse?: boolean;
  correction?: boolean;
  asyncIngestion?: boolean;
  multimodal?: boolean;
  providerManagedExtraction?: boolean;
}

export interface MemoryScope {
  companyId: string;
  agentId?: string;
  projectId?: string;
  issueId?: string;
  runId?: string;
  subjectId?: string;
}

export interface MemorySourceRef {
  kind:
    | "issue_comment"
    | "issue_document"
    | "issue"
    | "run"
    | "activity"
    | "manual_note"
    | "external_document";
  companyId: string;
  issueId?: string;
  commentId?: string;
  documentKey?: string;
  runId?: string;
  activityId?: string;
  externalRef?: string;
}

export interface MemoryUsage {
  provider: string;
  model?: string;
  inputTokens?: number;
  outputTokens?: number;
  embeddingTokens?: number;
  costCents?: number;
  latencyMs?: number;
  details?: Record<string, unknown>;
}

export interface MemoryWriteRequest {
  bindingKey: string;
  scope: MemoryScope;
  source: MemorySourceRef;
  content: string;
  metadata?: Record<string, unknown>;
  mode?: "append" | "upsert" | "summarize";
}

export interface MemoryRecordHandle {
  providerKey: string;
  providerRecordId: string;
}

export interface MemoryQueryRequest {
  bindingKey: string;
  scope: MemoryScope;
  query: string;
  topK?: number;
  intent?: "agent_preamble" | "answer" | "browse";
  metadataFilter?: Record<string, unknown>;
}

export interface MemorySnippet {
  handle: MemoryRecordHandle;
  text: string;
  score?: number;
  summary?: string;
  source?: MemorySourceRef;
  metadata?: Record<string, unknown>;
}

export interface MemoryContextBundle {
  snippets: MemorySnippet[];
  profileSummary?: string;
  usage?: MemoryUsage[];
}

export interface MemoryAdapter {
  key: string;
  capabilities: MemoryAdapterCapabilities;
  write(req: MemoryWriteRequest): Promise<{
    records?: MemoryRecordHandle[];
    usage?: MemoryUsage[];
  }>;
  query(req: MemoryQueryRequest): Promise<MemoryContextBundle>;
  get(handle: MemoryRecordHandle, scope: MemoryScope): Promise<MemorySnippet | null>;
  forget(handles: MemoryRecordHandle[], scope: MemoryScope): Promise<{ usage?: MemoryUsage[] }>;
}
```
This contract intentionally does not force a provider to expose its internal graph, filesystem, or ontology.
## Optional Adapter Surfaces
These should be capability-gated, not required:
- `browse(scope, filters)` for file-system / graph / timeline inspection
- `correct(handle, patch)` for natural-language correction flows
- `profile(scope)` when the provider can synthesize stable preferences or summaries
- `sync(source)` for connectors or background ingestion
- `explain(queryResult)` for providers that can expose retrieval traces
## What Paperclip Should Persist
Paperclip should not mirror the full provider memory corpus into Postgres unless the provider is a Paperclip-managed local provider.
Paperclip core should persist:
- memory bindings and overrides
- provider keys and capability metadata
- normalized memory operation logs
- provider record handles returned by operations when available
- source references back to issue comments, documents, runs, and activity
- usage and cost data
For external providers, the memory payload itself can remain in the provider.
## Hook Model
### Automatic hooks
These should be low-risk and easy to reason about:
1. `pre-run hydrate`
Before an agent run starts, Paperclip may call `query(... intent = "agent_preamble")` using the active binding.
2. `post-run capture`
After a run finishes, Paperclip may write a summary or transcript-derived note tied to the run.
3. `issue comment / document capture`
When enabled on the binding, Paperclip may capture selected issue comments or issue documents as memory sources.
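A sketch of the hydrate step against the `MemoryAdapter` contract above; the binding key and snippet rendering are illustrative:
```ts
// Pre-run hydration: query the active binding and render a preamble.
async function hydrateAgentPreamble(
  adapter: MemoryAdapter,
  scope: MemoryScope,
  task: string,
): Promise<string> {
  const bundle = await adapter.query({
    bindingKey: "default",
    scope,
    query: task,
    topK: 5,
    intent: "agent_preamble",
  });
  // Usage records in bundle.usage roll up into cost reporting.
  return bundle.snippets.map((snippet) => snippet.text).join("\n\n");
}
```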
### Explicit hooks
These should be tool- or UI-driven first:
- `memory.search`
- `memory.note`
- `memory.forget`
- `memory.correct`
- `memory.browse`
### Not automatic in the first version
- broad web crawling
- silent import of arbitrary repo files
- cross-company memory sharing
- automatic destructive deletion
- provider migration between bindings
## Agent UX Rules
Paperclip should give agents both automatic recall and explicit tools, with simple guidance:
- use `memory.search` when the task depends on prior decisions, people, projects, or long-running context that is not in the current issue thread
- use `memory.note` when a durable fact, preference, or decision should survive this run
- use `memory.correct` when the user explicitly says prior context is wrong
- rely on post-run auto-capture for ordinary session residue so agents do not have to write memory notes for every trivial exchange
This keeps memory available without forcing every agent prompt to become a memory-management protocol.
## Browse And Inspect Surface
Paperclip needs a first-class UI for memory, otherwise providers become black boxes.
The initial browse surface should support:
- active binding by company and agent
- recent memory operations
- recent write sources
- query results with source backlinks
- filters by agent, issue, run, source kind, and date
- provider usage / cost / latency summaries
When a provider supports richer browsing, the plugin can add deeper views through the existing plugin UI surfaces.
## Cost And Evaluation
Every adapter response should be able to return usage records.
Paperclip should roll up:
- memory inference tokens
- embedding tokens
- external provider cost
- latency
- query count
- write count
It should also record evaluation-oriented metrics where possible:
- recall hit rate
- empty query rate
- manual correction count
- per-binding success / failure counts
This is important because a memory system that "works" but silently burns budget is not acceptable in Paperclip.
## Suggested Data Model Additions
At the control-plane level, the likely new core tables are:
- `memory_bindings`
  - company-scoped key
  - provider id / plugin id
  - config blob
  - enabled status
- `memory_binding_targets`
  - target type (`company`, `agent`, later `project`)
  - target id
  - binding id
- `memory_operations`
  - company id
  - binding id
  - operation type (`write`, `query`, `forget`, `browse`, `correct`)
  - scope fields
  - source refs
  - usage / latency / cost
  - success / error
Provider-specific long-form state should stay in plugin state or the provider itself unless a built-in local provider needs its own schema.
## Recommended First Built-In
The best zero-config built-in is a local markdown-first provider with optional semantic indexing.
Why:
- it matches Paperclip's local-first posture
- it is inspectable
- it is easy to back up and debug
- it gives the system a baseline even without external API keys
The design should still treat that built-in as just another provider behind the same control-plane contract.
## Rollout Phases
### Phase 1: Control-plane contract
- add memory binding models and API types
- add plugin capability / registration surface for memory providers
- add operation logging and usage reporting
### Phase 2: One built-in + one plugin example
- ship a local markdown-first provider
- ship one hosted adapter example to validate the external-provider path
### Phase 3: UI inspection
- add company / agent memory settings
- add a memory operation explorer
- add source backlinks to issues and runs
### Phase 4: Automatic hooks
- pre-run hydrate
- post-run capture
- selected issue comment / document capture
### Phase 5: Rich capabilities
- correction flows
- provider-native browse / graph views
- project-level overrides if needed
- evaluation dashboards
## Open Questions
- Should project overrides exist in V1 of the memory service, or should we force company default + agent override first?
- Do we want Paperclip-managed extraction pipelines at all, or should built-ins be the only place where Paperclip owns extraction?
- Should memory usage extend the current `cost_events` model directly, or should memory operations keep a parallel usage log and roll up into `cost_events` secondarily?
- Do we want provider install / binding changes to require approvals for some companies?
## Bottom Line
The right abstraction is:
- Paperclip owns memory bindings, scopes, provenance, governance, and usage reporting.
- Providers own extraction, ranking, storage, and provider-native memory semantics.
That gives Paperclip a stable "memory service" without locking the product to one memory philosophy or one vendor.


@@ -0,0 +1,488 @@
# Release Automation and Versioning Simplification Plan
## Context
Paperclip's current release flow is documented in `doc/RELEASING.md` and implemented through:
- `.github/workflows/release.yml`
- `scripts/release-lib.sh`
- `scripts/release-start.sh`
- `scripts/release-preflight.sh`
- `scripts/release.sh`
- `scripts/create-github-release.sh`
Today the model is:
1. pick `patch`, `minor`, or `major`
2. create `release/X.Y.Z`
3. draft `releases/vX.Y.Z.md`
4. publish one or more canaries from that release branch
5. publish stable from that same branch
6. push tag + create GitHub Release
7. merge the release branch back to `master`
That is workable, but it creates friction in exactly the places that should be cheap:
- deciding `patch` vs `minor` vs `major`
- cutting and carrying release branches
- manually publishing canaries
- thinking about changelog generation for canaries
- handling npm credentials safely in a public repo
The target state from this discussion is simpler:
- every push to `master` publishes a canary automatically
- stable releases are promoted deliberately from a vetted commit
- versioning is date-driven instead of semantics-driven
- stable publishing is secure even in a public open-source repository
- changelog generation happens only for real stable releases
## Recommendation In One Sentence
Move Paperclip to semver-compatible calendar versioning, auto-publish canaries from `master`, promote stable from a chosen tested commit, and use npm trusted publishing plus GitHub environments so no long-lived npm or LLM token needs to live in Actions.
## Core Decisions
### 1. Use calendar versions, but keep semver syntax
The repo and npm tooling still assume semver-shaped version strings in many places. That does not mean Paperclip must keep semver as a product policy. It does mean the version format should remain semver-valid.
Recommended format:
- stable: `YYYY.MDD.P`
- canary: `YYYY.MDD.P-canary.N`
Examples:
- first stable on March 17, 2026: `2026.317.0`
- third canary on the `2026.317.0` line: `2026.317.0-canary.2`
Why this shape:
- it removes `patch/minor/major` decisions
- it is valid semver syntax
- it stays compatible with npm, dist-tags, and existing semver validators
- it is close to the format you actually want
Important constraints:
- the middle numeric slot should be `MDD`, where `M` is the month and `DD` is the zero-padded day
- `2026.03.17` is not the format to use
  - numeric semver identifiers do not allow leading zeroes
- `2026.3.17.1` is not the format to use
  - semver has three numeric components, not four
- the practical semver-safe equivalent is `2026.317.0-canary.8`
This is effectively CalVer on semver rails.
### 2. Accept that CalVer changes the compatibility contract
This is not semver in spirit anymore. It is semver in syntax only.
That tradeoff is probably acceptable for Paperclip, but it should be explicit:
- consumers no longer infer compatibility from `major/minor/patch`
- release notes become the compatibility signal
- downstream users should prefer exact pins or deliberate upgrades
This is especially relevant for public library packages like `@paperclipai/shared`, `@paperclipai/db`, and the adapter packages.
### 3. Drop release branches for normal publishing
If every merge to `master` publishes a canary, the current `release/X.Y.Z` train model becomes more ceremony than value.
Recommended replacement:
- `master` is the only canary train
- every push to `master` can publish a canary
- stable is published from a chosen commit or canary tag on `master`
This matches the workflow you actually want:
- merge continuously
- let npm always have a fresh canary
- choose a known-good canary later and promote that commit to stable
### 4. Promote by source ref, not by "renaming" a canary
This is the most important mechanical constraint.
npm can move dist-tags, but it does not let you rename an already-published version. That means:
- you can move `latest` to `paperclipai@1.2.3`
- you cannot turn `paperclipai@2026.317.0-canary.8` into `paperclipai@2026.317.0`
So "promote canary to stable" really means:
1. choose the commit or canary tag you trust
2. rebuild from that exact commit
3. publish it again with the stable version string
Because of that, the stable workflow should take a source ref, not just a bump type.
Recommended stable input:
- `source_ref`
  - commit SHA, or
  - a canary git tag such as `canary/v2026.317.1-canary.8`
### 5. Only stable releases get release notes, tags, and GitHub Releases
Canaries should stay lightweight:
- publish to npm under `canary`
- optionally create a lightweight or annotated git tag
- do not create GitHub Releases
- do not require `releases/v*.md`
- do not spend LLM tokens
Stable releases should remain the public narrative surface:
- git tag `v2026.317.0`
- GitHub Release `v2026.317.0`
- stable changelog file `releases/v2026.317.0.md`
## Security Model
### Recommendation
Use npm trusted publishing with GitHub Actions OIDC, then disable token-based publishing access for the packages.
Why:
- no long-lived `NPM_TOKEN` in repo or org secrets
- no personal npm token in Actions
- short-lived credentials minted only for the authorized workflow
- automatic npm provenance for public packages in public repos
This is the cleanest answer to the open-repo security concern.
### Concrete controls
#### 1. Use one release workflow file
Use one workflow filename for both canary and stable publishing:
- `.github/workflows/release.yml`
Why:
- npm trusted publishing is configured per workflow filename
- npm currently allows one trusted publisher configuration per package
- GitHub environments can still provide separate canary/stable approval rules inside the same workflow
#### 2. Use separate GitHub environments
Recommended environments:
- `npm-canary`
- `npm-stable`
Recommended policy:
- `npm-canary`
  - allowed branch: `master`
  - no human reviewer required
- `npm-stable`
  - allowed branch: `master`
  - required reviewer enabled
  - prevent self-review enabled
  - admin bypass disabled
Stable should require an explicit second human gate even if the workflow is manually dispatched.
#### 3. Lock down workflow edits
Add or tighten `CODEOWNERS` coverage for:
- `.github/workflows/*`
- `scripts/release*`
- `doc/RELEASING.md`
This matters because trusted publishing authorizes a workflow file. The biggest remaining risk is not secret exfiltration from forks. It is a maintainer-approved change to the release workflow itself.
#### 4. Remove traditional npm token access after OIDC works
After trusted publishing is verified:
- set package publishing access to require 2FA and disallow tokens
- revoke any legacy automation tokens
That eliminates the "someone stole the npm token" class of failure.
### What not to do
- do not put your personal Claude or npm token in GitHub Actions
- do not run release logic from `pull_request_target`
- do not make stable publishing depend on a repo secret if OIDC can handle it
- do not create canary GitHub Releases
## Changelog Strategy
### Recommendation
Generate stable changelogs only, and keep LLM-assisted changelog generation out of CI for now.
Reasoning:
- canaries happen too often
- canaries do not need polished public notes
- putting a personal Claude token into Actions is not worth the risk
- stable release cadence is low enough that a human-in-the-loop step is acceptable
Recommended stable path:
1. pick a canary commit or tag
2. run changelog generation locally from a trusted machine
3. commit `releases/vYYYY.MDD.P.md`
4. run stable promotion
If the notes are not ready yet, a fallback is acceptable:
- publish stable
- create a minimal GitHub Release
- update `releases/vYYYY.MDD.P.md` immediately afterward
But the better steady-state is to have the stable notes committed before stable publish.
### Future option
If you later want CI-assisted changelog drafting, do it with:
- a dedicated service account
- a token scoped only for changelog generation
- a manual workflow
- a dedicated environment with required reviewers
That is phase-two hardening work, not a phase-one requirement.
## Proposed Future Workflow
### Canary workflow
Trigger:
- `push` on `master`
Steps:
1. checkout the merged `master` commit
2. run verification on that exact commit
3. compute canary version for current UTC date
4. version public packages to `YYYY.MDD.P-canary.N`
5. publish to npm with dist-tag `canary`
6. create a canary git tag for traceability
Recommended canary tag format:
- `canary/v2026.317.1-canary.4`
Outputs:
- npm canary published
- git tag created
- no GitHub Release
- no changelog file required
### Stable workflow
Trigger:
- `workflow_dispatch`
Inputs:
- `source_ref`
- optional `stable_date`
- `dry_run`
Steps:
1. checkout `source_ref`
2. run verification on that exact commit
3. compute the next stable patch slot for the UTC date or provided override
4. fail if `vYYYY.MDD.P` already exists
5. require `releases/vYYYY.MDD.P.md`
6. version public packages to `YYYY.MDD.P`
7. publish to npm under `latest`
8. create git tag `vYYYY.MDD.P`
9. push tag
10. create GitHub Release from `releases/vYYYY.MDD.P.md`
Outputs:
- stable npm release
- stable git tag
- GitHub Release
- clean public changelog surface
## Implementation Guidance
### 1. Replace bump-type version math with explicit version computation
The current release scripts depend on:
- `patch`
- `minor`
- `major`
That logic should be replaced with:
- `compute_canary_version_for_date`
- `compute_stable_version_for_date`
For example:
- `next_stable_version(2026-03-17) -> 2026.317.0`
- `next_canary_for_utc_date(2026-03-17) -> 2026.317.0-canary.0`
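A minimal sketch of that computation, assuming the set of already-published versions comes from git tags or the registry:
```ts
// Illustrative helpers; "existing" would come from git tags or the registry.
function calverBase(date: Date): string {
  const mdd = `${date.getUTCMonth() + 1}${String(date.getUTCDate()).padStart(2, "0")}`;
  return `${date.getUTCFullYear()}.${mdd}`; // e.g. "2026.317" for March 17, 2026
}

function nextStableVersion(date: Date, existing: string[]): string {
  const base = calverBase(date);
  const patches = existing
    .filter((v) => v.startsWith(`${base}.`) && !v.includes("-"))
    .map((v) => Number(v.split(".")[2]));
  const next = patches.length > 0 ? Math.max(...patches) + 1 : 0;
  return `${base}.${next}`; // nextStableVersion(2026-03-17, []) -> "2026.317.0"
}
// A canary variant would append `-canary.N` to the pending stable slot.
```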
### 2. Stop requiring `release/X.Y.Z`
These current invariants should be removed from the happy path:
- "must run from branch `release/X.Y.Z`"
- "stable and canary for `X.Y.Z` come from the same release branch"
- `release-start.sh`
Replace them with:
- canary must run from `master`
- stable may run from a pinned `source_ref`
### 3. Keep Changesets only if it stays helpful
The current system uses Changesets to:
- rewrite package versions
- maintain package-level `CHANGELOG.md` files
- publish packages
With CalVer, Changesets may still be useful for publish orchestration, but it should no longer own version selection.
Recommended implementation order:
1. keep `changeset publish` if it works with explicitly-set versions
2. replace version computation with a small explicit versioning script
3. if Changesets keeps fighting the model, remove it from release publishing entirely
Paperclip's release problem is now "publish the whole fixed package set at one explicit version", not "derive the next semantic bump from human intent".
### 4. Add a dedicated versioning script
Recommended new script:
- `scripts/set-release-version.mjs`
Responsibilities:
- set the version in all public publishable packages
- update any internal exact-version references needed for publishing
- update CLI version strings
- avoid broad string replacement across unrelated files
This is safer than keeping a bump-oriented changeset flow and then forcing it into a date-based scheme.
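A sketch of the script's core logic, shown as TypeScript for consistency with the other sketches — the `packages/*` workspace glob and the `glob` package are assumptions, not the repo's actual structure:
```ts
// scripts/set-release-version.mjs (sketch): stamp one explicit version
// across all public publishable packages.
import { readFileSync, writeFileSync } from "node:fs";
import { globSync } from "glob";

const version = process.argv[2] ?? "";
if (!/^\d{4}\.\d{3,4}\.\d+(-canary\.\d+)?$/.test(version)) {
  throw new Error(`not a calendar version: ${version}`);
}

for (const file of globSync("packages/*/package.json")) {
  const pkg = JSON.parse(readFileSync(file, "utf8")) as {
    private?: boolean;
    version?: string;
  };
  if (pkg.private) continue; // touch only public publishable packages
  pkg.version = version;
  writeFileSync(file, JSON.stringify(pkg, null, 2) + "\n");
}
```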
### 5. Keep rollback based on dist-tags
`rollback-latest.sh` should stay, but it should stop assuming a semver meaning beyond syntax.
It should continue to:
- repoint `latest` to a prior stable version
- never unpublish
## Tradeoffs and Risks
### 1. The stable patch slot is now part of the version contract
With `YYYY.MDD.P`, same-day hotfixes are supported, but the stable patch slot is now part of the visible version format.
That is the right tradeoff because:
1. npm still gets semver-valid versions
2. same-day hotfixes stay possible
3. chronological ordering still works as long as the day is zero-padded inside `MDD`
### 2. Public package consumers lose semver intent signaling
This is the main downside of CalVer.
If that becomes a problem, one alternative is:
- use CalVer for the CLI package only
- keep semver for library packages
That is more complex operationally, so I would not start there unless package consumers actually need it.
### 3. Auto-canary means more publish traffic
Publishing on every `master` merge means:
- more npm versions
- more git tags
- more registry noise
That is acceptable if canaries stay clearly separate:
- npm dist-tag `canary`
- no GitHub Release
- no external announcement
## Rollout Plan
### Phase 1: Security foundation
1. Create `release.yml`
2. Configure npm trusted publishers for all public packages
3. Create `npm-canary` and `npm-stable` environments
4. Add `CODEOWNERS` protection for release files
5. Verify OIDC publishing works
6. Disable token-based publishing access and revoke old tokens
### Phase 2: Canary automation
1. Add canary workflow on `push` to `master`
2. Add explicit calendar-version computation
3. Add canary git tagging
4. Remove changelog requirement from canaries
5. Update `doc/RELEASING.md`
### Phase 3: Stable promotion
1. Add manual stable workflow with `source_ref`
2. Require stable notes file
3. Publish stable + tag + GitHub Release
4. Update rollback docs and scripts
5. Retire release-branch assumptions
### Phase 4: Cleanup
1. Remove `release-start.sh` from the primary path
2. Remove `patch/minor/major` from maintainer docs
3. Decide whether to keep or remove Changesets from publishing
4. Document the CalVer compatibility contract publicly
## Concrete Recommendation
Paperclip should adopt this model:
- stable versions: `YYYY.MDD.P`
- canary versions: `YYYY.MDD.P-canary.N`
- canaries auto-published on every push to `master`
- stables manually promoted from a chosen tested commit or canary tag
- no release branches in the default path
- no canary changelog files
- no canary GitHub Releases
- no Claude token in GitHub Actions
- no npm automation token in GitHub Actions
- npm trusted publishing plus GitHub environments for release security
That gets rid of the annoying part of semver without fighting npm, makes canaries cheap, keeps stables deliberate, and materially improves the security posture of the public repository.
## External References
- npm trusted publishing: https://docs.npmjs.com/trusted-publishers/
- npm dist-tags: https://docs.npmjs.com/adding-dist-tags-to-packages/
- npm semantic versioning guidance: https://docs.npmjs.com/about-semantic-versioning/
- GitHub environments and deployment protection rules: https://docs.github.com/en/actions/how-tos/deploy/configure-and-manage-deployments/manage-environments
- GitHub secrets behavior for forks: https://docs.github.com/en/actions/how-tos/write-workflows/choose-what-workflows-do/use-secrets

File diff suppressed because it is too large


@@ -0,0 +1,882 @@
# Workspace Technical Implementation Spec
## Role of This Document
This document translates [workspace-product-model-and-work-product.md](doc/plans/workspace-product-model-and-work-product.md) into an implementation-ready engineering plan.
It is intentionally concrete:
- schema and migration shape
- shared contract updates
- route and service changes
- UI changes
- rollout and compatibility rules
This is the implementation target for the first workspace-aware delivery slice.
## Locked Decisions
These decisions are treated as settled for this implementation:
1. Add a new durable `execution_workspaces` table now.
2. Each issue has at most one current execution workspace at a time.
3. `issues` get explicit `project_workspace_id` and `execution_workspace_id`.
4. Workspace reuse is in scope for V1.
5. The feature is gated in the UI by `/instance/settings > Experimental > Workspaces`.
6. The gate is UI-only. Backend model changes and migrations always ship.
7. Existing users upgrade into compatibility-preserving defaults.
8. `project_workspaces` evolves in place rather than being replaced.
9. Work product is issue-first, with optional links to execution workspaces and runtime services.
10. GitHub is the only PR provider in the first slice.
11. Both `adapter_managed` and `cloud_sandbox` execution modes are in scope.
12. Workspace controls ship first inside existing project properties, not in a new global navigation area.
13. Subissues are out of scope for this implementation slice.
## Non-Goals
- Building a full code review system
- Solving subissue UX in this slice
- Implementing reusable shared workspace definitions across projects in this slice
- Reworking all current runtime service behavior before introducing execution workspaces
## Existing Baseline
The repo already has:
- `project_workspaces`
- `projects.execution_workspace_policy`
- `issues.execution_workspace_settings`
- runtime service persistence in `workspace_runtime_services`
- local git-worktree realization in `workspace-runtime.ts`
This implementation should build on that baseline rather than fork it.
## Terminology
- `Project workspace`: durable configured codebase/root for a project
- `Execution workspace`: actual runtime workspace used for one or more issues
- `Work product`: user-facing output such as PR, preview, branch, commit, artifact, document
- `Runtime service`: process or service owned or tracked for a workspace
- `Compatibility mode`: existing behavior preserved for upgraded installs with no explicit workspace opt-in
## Architecture Summary
The first slice should introduce three explicit layers:
1. `Project workspace`
- existing durable project-scoped codebase record
- extended to support local, git, non-git, and remote-managed shapes
2. `Execution workspace`
- new durable runtime record
- represents shared, isolated, operator-branch, or remote-managed execution context
3. `Issue work product`
- new durable output record
- stores PRs, previews, branches, commits, artifacts, and documents
The issue remains the planning and ownership unit.
The execution workspace remains the runtime unit.
The work product remains the deliverable/output unit.
## Configuration and Deployment Topology
## Important correction
This repo already uses `PAPERCLIP_DEPLOYMENT_MODE` for auth/deployment behavior (`local_trusted | authenticated`).
Do not overload that variable for workspace execution topology.
## New env var
Add a separate execution-host hint:
- `PAPERCLIP_EXECUTION_TOPOLOGY=local|cloud|hybrid`
Default:
- if unset, treat as `local`
Purpose:
- influences defaults and validation for workspace configuration
- does not change current auth/deployment semantics
- does not break existing installs
### Semantics
- `local`
- Paperclip may create host-local worktrees, processes, and paths
- `cloud`
- Paperclip should assume no durable host-local execution workspace management
- adapter-managed and cloud-sandbox flows should be treated as first-class
- `hybrid`
- both local and remote execution strategies may exist
This is a guardrail and defaulting aid, not a hard policy engine in the first slice.
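As a sketch, the reader for this hint could look like the following; `readExecutionTopology` and `allowsHostLocalWorkspaces` are hypothetical helpers, not existing ones:
```ts
export type ExecutionTopology = "local" | "cloud" | "hybrid";

// Hypothetical helper: parse the topology hint, defaulting to `local`.
export function readExecutionTopology(env: NodeJS.ProcessEnv = process.env): ExecutionTopology {
  const raw = env.PAPERCLIP_EXECUTION_TOPOLOGY?.trim().toLowerCase();
  if (raw === "cloud" || raw === "hybrid") return raw;
  return "local"; // unset or unrecognized values fall back to local
}

// Defaulting/validation use: host-local worktrees are only a default when
// the topology allows local execution.
export function allowsHostLocalWorkspaces(topology: ExecutionTopology): boolean {
  return topology === "local" || topology === "hybrid";
}
```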
## Instance Settings
Add a new `Experimental` section under `/instance/settings`.
### New setting
- `experimental.workspaces: boolean`
Rules:
- default `false`
- UI-only gate
- stored in instance config or instance settings API response
- backend routes and migrations remain available even when false
### UI behavior when off
- hide workspace-specific issue controls
- hide workspace-specific project configuration
- hide issue `Work Product` tab if it would otherwise be empty
- do not remove or invalidate any stored workspace data
## Data Model
## 1. Extend `project_workspaces`
Current table exists and should evolve in place.
### New columns
- `source_type text not null default 'local_path'`
- `local_path | git_repo | non_git_path | remote_managed`
- `default_ref text null`
- `visibility text not null default 'default'`
- `default | advanced`
- `setup_command text null`
- `cleanup_command text null`
- `remote_provider text null`
- examples: `github`, `openai`, `anthropic`, `custom`
- `remote_workspace_ref text null`
- `shared_workspace_key text null`
- reserved for future cross-project shared workspace definitions
### Backfill rules
- if existing row has `repo_url`, backfill `source_type='git_repo'`
- else if existing row has `cwd`, backfill `source_type='local_path'`
- else backfill `source_type='remote_managed'`
- copy existing `repo_ref` into `default_ref`
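Expressed as a pure function, a sketch; the legacy row shape mirrors the existing `repo_url`/`cwd`/`repo_ref` columns:
```ts
interface LegacyProjectWorkspaceRow {
  repoUrl: string | null;
  cwd: string | null;
  repoRef: string | null;
}

// Derive the backfill values for one existing row; all other new columns stay null.
function deriveBackfill(row: LegacyProjectWorkspaceRow): { sourceType: string; defaultRef: string | null } {
  const sourceType = row.repoUrl ? "git_repo" : row.cwd ? "local_path" : "remote_managed";
  return { sourceType, defaultRef: row.repoRef };
}
```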
### Indexes
- retain current indexes
- add `(project_id, source_type)`
- add `(company_id, shared_workspace_key)` non-unique for future support
## 2. Add `execution_workspaces`
Create a new durable table.
### Columns
- `id uuid pk`
- `company_id uuid not null`
- `project_id uuid not null`
- `project_workspace_id uuid null`
- `source_issue_id uuid null`
- `mode text not null`
- `shared_workspace | isolated_workspace | operator_branch | adapter_managed | cloud_sandbox`
- `strategy_type text not null`
- `project_primary | git_worktree | adapter_managed | cloud_sandbox`
- `name text not null`
- `status text not null default 'active'`
- `active | idle | in_review | archived | cleanup_failed`
- `cwd text null`
- `repo_url text null`
- `base_ref text null`
- `branch_name text null`
- `provider_type text not null default 'local_fs'`
- `local_fs | git_worktree | adapter_managed | cloud_sandbox`
- `provider_ref text null`
- `derived_from_execution_workspace_id uuid null`
- `last_used_at timestamptz not null default now()`
- `opened_at timestamptz not null default now()`
- `closed_at timestamptz null`
- `cleanup_eligible_at timestamptz null`
- `cleanup_reason text null`
- `metadata jsonb null`
- `created_at timestamptz not null default now()`
- `updated_at timestamptz not null default now()`
### Foreign keys
- `company_id -> companies.id`
- `project_id -> projects.id`
- `project_workspace_id -> project_workspaces.id on delete set null`
- `source_issue_id -> issues.id on delete set null`
- `derived_from_execution_workspace_id -> execution_workspaces.id on delete set null`
### Indexes
- `(company_id, project_id, status)`
- `(company_id, project_workspace_id, status)`
- `(company_id, source_issue_id)`
- `(company_id, last_used_at desc)`
- `(company_id, branch_name)` non-unique
## 3. Extend `issues`
Add explicit workspace linkage.
### New columns
- `project_workspace_id uuid null`
- `execution_workspace_id uuid null`
- `execution_workspace_preference text null`
- `inherit | shared_workspace | isolated_workspace | operator_branch | reuse_existing`
### Foreign keys
- `project_workspace_id -> project_workspaces.id on delete set null`
- `execution_workspace_id -> execution_workspaces.id on delete set null`
### Backfill rules
- all existing issues get null values
- null should be interpreted as compatibility/inherit behavior
### Invariants
- if `project_workspace_id` is set, it must belong to the issue's project and company
- if `execution_workspace_id` is set, it must belong to the issue's company
- if `execution_workspace_id` is set, the referenced workspace's `project_id` must match the issue's `project_id`
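A sketch of enforcing these invariants at the service layer; the row shapes are assumptions:
```ts
interface WorkspaceRef {
  projectId: string;
  companyId: string;
}

interface IssueLinkage {
  projectId: string;
  companyId: string;
  projectWorkspace: WorkspaceRef | null;
  executionWorkspace: WorkspaceRef | null;
}

function assertWorkspaceLinkage(issue: IssueLinkage): void {
  const pw = issue.projectWorkspace;
  if (pw && (pw.projectId !== issue.projectId || pw.companyId !== issue.companyId)) {
    throw new Error("project_workspace_id must belong to the issue's project and company");
  }
  const ew = issue.executionWorkspace;
  if (ew && ew.companyId !== issue.companyId) {
    throw new Error("execution_workspace_id must belong to the issue's company");
  }
  if (ew && ew.projectId !== issue.projectId) {
    throw new Error("execution workspace project_id must match the issue's project_id");
  }
}
```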
## 4. Add `issue_work_products`
Create a new durable table for outputs.
### Columns
- `id uuid pk`
- `company_id uuid not null`
- `project_id uuid null`
- `issue_id uuid not null`
- `execution_workspace_id uuid null`
- `runtime_service_id uuid null`
- `type text not null`
- `preview_url | runtime_service | pull_request | branch | commit | artifact | document`
- `provider text not null`
- `paperclip | github | vercel | s3 | custom`
- `external_id text null`
- `title text not null`
- `url text null`
- `status text not null`
- `active | ready_for_review | approved | changes_requested | merged | closed | failed | archived`
- `review_state text not null default 'none'`
- `none | needs_board_review | approved | changes_requested`
- `is_primary boolean not null default false`
- `health_status text not null default 'unknown'`
- `unknown | healthy | unhealthy`
- `summary text null`
- `metadata jsonb null`
- `created_by_run_id uuid null`
- `created_at timestamptz not null default now()`
- `updated_at timestamptz not null default now()`
### Foreign keys
- `company_id -> companies.id`
- `project_id -> projects.id on delete set null`
- `issue_id -> issues.id on delete cascade`
- `execution_workspace_id -> execution_workspaces.id on delete set null`
- `runtime_service_id -> workspace_runtime_services.id on delete set null`
- `created_by_run_id -> heartbeat_runs.id on delete set null`
### Indexes
- `(company_id, issue_id, type)`
- `(company_id, execution_workspace_id, type)`
- `(company_id, provider, external_id)`
- `(company_id, updated_at desc)`
## 5. Extend `workspace_runtime_services`
This table already exists and should remain the system of record for owned/tracked services.
### New column
- `execution_workspace_id uuid null`
### Foreign key
- `execution_workspace_id -> execution_workspaces.id on delete set null`
### Behavior
- runtime services remain workspace-first
- issue UIs should surface them through linked execution workspaces and work products
## Shared Contracts
## 1. `packages/shared`
### Update project workspace types and validators
Add fields:
- `sourceType`
- `defaultRef`
- `visibility`
- `setupCommand`
- `cleanupCommand`
- `remoteProvider`
- `remoteWorkspaceRef`
- `sharedWorkspaceKey`
### Add execution workspace types and validators
New shared types:
- `ExecutionWorkspace`
- `ExecutionWorkspaceMode`
- `ExecutionWorkspaceStatus`
- `ExecutionWorkspaceProviderType`
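A shape sketch for `ExecutionWorkspace`, mirroring the columns defined above with the camelCase convention of the existing contracts:
```ts
export type ExecutionWorkspaceMode =
  | "shared_workspace"
  | "isolated_workspace"
  | "operator_branch"
  | "adapter_managed"
  | "cloud_sandbox";

export type ExecutionWorkspaceStatus =
  | "active"
  | "idle"
  | "in_review"
  | "archived"
  | "cleanup_failed";

export type ExecutionWorkspaceProviderType =
  | "local_fs"
  | "git_worktree"
  | "adapter_managed"
  | "cloud_sandbox";

export interface ExecutionWorkspace {
  id: string;
  companyId: string;
  projectId: string;
  projectWorkspaceId: string | null;
  sourceIssueId: string | null;
  mode: ExecutionWorkspaceMode;
  strategyType: "project_primary" | "git_worktree" | "adapter_managed" | "cloud_sandbox";
  status: ExecutionWorkspaceStatus;
  name: string;
  cwd: string | null;
  repoUrl: string | null;
  baseRef: string | null;
  branchName: string | null;
  providerType: ExecutionWorkspaceProviderType;
  providerRef: string | null;
  derivedFromExecutionWorkspaceId: string | null;
  lastUsedAt: string; // ISO timestamp
  openedAt: string;
  closedAt: string | null;
  metadata: Record<string, unknown> | null;
}
```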
### Add work product types and validators
New shared types:
- `IssueWorkProduct`
- `IssueWorkProductType`
- `IssueWorkProductStatus`
- `IssueWorkProductReviewState`
### Update issue types and validators
Add:
- `projectWorkspaceId`
- `executionWorkspaceId`
- `executionWorkspacePreference`
- `workProducts?: IssueWorkProduct[]`
### Extend project execution policy contract
Replace the current narrow policy with a more explicit shape:
- `enabled`
- `defaultMode`
- `shared_workspace | isolated_workspace | operator_branch | adapter_default`
- `allowIssueOverride`
- `defaultProjectWorkspaceId`
- `workspaceStrategy`
- `branchPolicy`
- `pullRequestPolicy`
- `runtimePolicy`
- `cleanupPolicy`
Do not try to encode every possible provider-specific field in V1. Keep provider-specific extensibility in nested JSON where needed.
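A sketch of that shape; field names follow the list above, and the nested policies stay loosely typed on purpose:
```ts
export interface ProjectExecutionWorkspacePolicy {
  enabled: boolean;
  defaultMode:
    | "shared_workspace"
    | "isolated_workspace"
    | "operator_branch"
    | "adapter_default";
  allowIssueOverride: boolean;
  defaultProjectWorkspaceId: string | null;
  // Provider-specific extensibility lives in nested JSON, not top-level fields.
  workspaceStrategy?: Record<string, unknown>;
  branchPolicy?: Record<string, unknown>;
  pullRequestPolicy?: Record<string, unknown>;
  runtimePolicy?: Record<string, unknown>;
  cleanupPolicy?: Record<string, unknown>;
}
```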
## Service Layer Changes
## 1. Project service
Update project workspace CRUD to handle the extended schema.
### Required rules
- when setting a primary workspace, clear `is_primary` on siblings
- `source_type=remote_managed` may have null `cwd`
- local/git-backed workspaces should still require one of `cwd` or `repo_url`
- preserve current behavior for existing callers that only send `cwd/repoUrl/repoRef`
## 2. Issue service
Update create/update flows to handle explicit workspace binding.
### Create behavior
Resolve defaults in this order:
1. explicit `projectWorkspaceId` from request
2. `project.executionWorkspacePolicy.defaultProjectWorkspaceId`
3. project's primary workspace
4. null
Resolve `executionWorkspacePreference`:
1. explicit request field
2. project policy default
3. compatibility fallback to `inherit`
Do not create an execution workspace at issue creation time unless:
- `reuse_existing` is explicitly chosen and `executionWorkspaceId` is provided
Otherwise, workspace realization happens when execution starts.
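The resolution order reduces to nullish-coalescing chains; a sketch with assumed input shapes:
```ts
interface IssueCreateRequest {
  projectWorkspaceId?: string | null;
  executionWorkspacePreference?: string | null;
}

interface ExecutionPolicy {
  defaultProjectWorkspaceId?: string | null;
  defaultMode?: string | null;
}

function resolveProjectWorkspaceId(
  request: IssueCreateRequest,
  policy: ExecutionPolicy | null,
  primaryWorkspaceId: string | null,
): string | null {
  return (
    request.projectWorkspaceId ?? // 1. explicit request field
    policy?.defaultProjectWorkspaceId ?? // 2. policy default
    primaryWorkspaceId // 3. project primary workspace, else 4. null
  );
}

function resolveExecutionPreference(
  request: IssueCreateRequest,
  policy: ExecutionPolicy | null,
): string {
  // 1. explicit request, 2. policy default, 3. compatibility fallback
  return request.executionWorkspacePreference ?? policy?.defaultMode ?? "inherit";
}
```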
### Update behavior
- allow changing `projectWorkspaceId` only if the workspace belongs to the same project
- allow setting `executionWorkspaceId` only if it belongs to the same company and project
- do not automatically destroy or relink historical work products when workspace linkage changes
## 3. Workspace realization service
Refactor `workspace-runtime.ts` so realization produces or reuses an `execution_workspaces` row.
### New flow
Input:
- issue
- project workspace
- project execution policy
- execution topology hint
- adapter/runtime configuration
Output:
- realized execution workspace record
- runtime cwd/provider metadata
### Required modes
- `shared_workspace`
- reuse a stable execution workspace representing the project primary/shared workspace
- `isolated_workspace`
- create or reuse a derived isolated execution workspace
- `operator_branch`
- create or reuse a long-lived branch workspace
- `adapter_managed`
- create an execution workspace with provider references and optional null `cwd`
- `cloud_sandbox`
- same as adapter-managed, but explicit remote sandbox semantics
### Reuse rules
When `reuse_existing` is requested:
- only list active or recently used execution workspaces
- only for the same project
- only for the same project workspace if one is specified
- exclude archived and cleanup-failed workspaces
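As a filter predicate, a sketch over already-loaded rows; in practice this belongs in the query:
```ts
interface ReuseCandidate {
  projectId: string;
  projectWorkspaceId: string | null;
  status: "active" | "idle" | "in_review" | "archived" | "cleanup_failed";
  lastUsedAt: Date;
}

function isReuseEligible(
  ws: ReuseCandidate,
  target: { projectId: string; projectWorkspaceId: string | null },
  recencyCutoff: Date,
): boolean {
  if (ws.projectId !== target.projectId) return false;
  if (target.projectWorkspaceId && ws.projectWorkspaceId !== target.projectWorkspaceId) return false;
  if (ws.status === "archived" || ws.status === "cleanup_failed") return false;
  // Active workspaces always qualify; others must have been used recently.
  return ws.status === "active" || ws.lastUsedAt >= recencyCutoff;
}
```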
### Shared workspace realization
For compatibility mode and shared-workspace projects:
- create a stable execution workspace per project workspace when first needed
- reuse it for subsequent runs
This avoids a special-case branch in later work product linkage.
## 4. Runtime service integration
When runtime services are started or reused:
- populate `execution_workspace_id`
- continue populating `project_workspace_id`, `project_id`, and `issue_id`
When a runtime service yields a URL:
- optionally create or update a linked `issue_work_products` row of type `runtime_service` or `preview_url`
## 5. PR and preview reporting
Add a service for creating/updating `issue_work_products`.
### Supported V1 product types
- `pull_request`
- `preview_url`
- `runtime_service`
- `branch`
- `commit`
- `artifact`
- `document`
### GitHub PR reporting
For V1, GitHub is the only provider with richer semantics.
Supported statuses:
- `draft`
- `ready_for_review`
- `approved`
- `changes_requested`
- `merged`
- `closed`
Represent these in `status` and `review_state` rather than inventing a separate PR table in V1.
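One way to fold GitHub PR state into those two fields. This is a sketch: the snapshot shape follows GitHub's `state`/`draft`/`merged`/`reviewDecision` fields, and the exact mapping choices are illustrative:
```ts
interface GitHubPrSnapshot {
  state: "open" | "closed";
  draft: boolean;
  merged: boolean;
  reviewDecision: "APPROVED" | "CHANGES_REQUESTED" | null;
}

function toWorkProductFields(pr: GitHubPrSnapshot): { status: string; reviewState: string } {
  if (pr.merged) return { status: "merged", reviewState: "none" };
  if (pr.state === "closed") return { status: "closed", reviewState: "none" };
  if (pr.draft) return { status: "active", reviewState: "none" };
  if (pr.reviewDecision === "APPROVED") {
    return { status: "approved", reviewState: "approved" };
  }
  if (pr.reviewDecision === "CHANGES_REQUESTED") {
    return { status: "changes_requested", reviewState: "changes_requested" };
  }
  return { status: "ready_for_review", reviewState: "needs_board_review" };
}
```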
## Routes and API
## 1. Project workspace routes
Extend existing routes:
- `GET /projects/:id/workspaces`
- `POST /projects/:id/workspaces`
- `PATCH /projects/:id/workspaces/:workspaceId`
- `DELETE /projects/:id/workspaces/:workspaceId`
### New accepted/returned fields
- `sourceType`
- `defaultRef`
- `visibility`
- `setupCommand`
- `cleanupCommand`
- `remoteProvider`
- `remoteWorkspaceRef`
## 2. Execution workspace routes
Add:
- `GET /companies/:companyId/execution-workspaces`
- filters:
- `projectId`
- `projectWorkspaceId`
- `status`
- `issueId`
- `reuseEligible=true`
- `GET /execution-workspaces/:id`
- `PATCH /execution-workspaces/:id`
- update status/metadata/cleanup fields only in V1
Do not add top-level navigation for these routes yet.
## 3. Work product routes
Add:
- `GET /issues/:id/work-products`
- `POST /issues/:id/work-products`
- `PATCH /work-products/:id`
- `DELETE /work-products/:id`
### V1 mutation permissions
- board can create/update/delete all
- agents can create/update for issues they are assigned or currently executing
- deletion should generally archive rather than hard-delete once linked to historical output
## 4. Issue routes
Extend existing create/update payloads to accept:
- `projectWorkspaceId`
- `executionWorkspacePreference`
- `executionWorkspaceId`
Extend `GET /issues/:id` to return:
- `projectWorkspaceId`
- `executionWorkspaceId`
- `executionWorkspacePreference`
- `currentExecutionWorkspace`
- `workProducts[]`
## 5. Instance settings routes
Add support for:
- reading/writing `experimental.workspaces`
This is a UI gate only.
If there is no generic instance settings storage yet, the first slice can store this in the existing config/instance settings mechanism used by `/instance/settings`.
## UI Changes
## 1. `/instance/settings`
Add section:
- `Experimental`
- `Enable Workspaces`
When off:
- hide new workspace-specific affordances
- do not alter existing project or issue behavior
## 2. Project properties
Do not create a separate `Code` tab yet.
Ship inside existing project properties first.
### Add or re-enable sections
- `Project Workspaces`
- `Execution Defaults`
- `Provisioning`
- `Pull Requests`
- `Previews and Runtime`
- `Cleanup`
### Display rules
- only show when `experimental.workspaces=true`
- keep wording generic enough for local and remote setups
- only show git-specific fields when `sourceType=git_repo`
- only show local-path-specific fields when not `remote_managed`
## 3. Issue create dialog
When the workspace experimental flag is on and the selected project has workspace automation or workspaces:
### Basic fields
- `Codebase`
- select from project workspaces
- default to policy default or primary workspace
- `Execution mode`
- `Project default`
- `Shared workspace`
- `Isolated workspace`
- `Operator branch`
### Advanced section
- `Reuse existing execution workspace`
This control should query only:
- same project
- same codebase if selected
- active/recent workspaces
- compact labels with branch or workspace name
Do not expose all execution workspaces in a noisy unfiltered list.
## 4. Issue detail
Add a `Work Product` tab when:
- the experimental flag is on, or
- the issue already has work products
### Show
- current execution workspace summary
- PR cards
- preview cards
- branch/commit rows
- artifacts/documents
Add compact header chips:
- codebase
- workspace
- PR count/status
- preview status
## 5. Execution workspace detail page
Add a detail route but no nav item.
Linked from:
- issue work product tab
- project workspace/execution panels
### Show
- identity and status
- project workspace origin
- source issue
- linked issues
- branch/ref/provider info
- runtime services
- work products
- cleanup state
## Runtime and Adapter Behavior
## 1. Local adapters
For local adapters:
- continue to use existing cwd/worktree realization paths
- persist the result as execution workspaces
- attach runtime services and work products to the execution workspace and issue
## 2. Remote or cloud adapters
For remote adapters:
- allow execution workspaces with null `cwd`
- require provider metadata sufficient to identify the remote workspace/session
- allow work product creation without any host-local process ownership
Examples:
- cloud coding agent opens a branch and PR on GitHub
- Vercel preview URL is reported back as a preview work product
- remote sandbox emits artifact URLs
## 3. Approval-aware PR workflow
V1 should support richer PR state tracking, but not a full review engine.
### Required actions
- `open_pr`
- `mark_ready`
### Required review states
- `draft`
- `ready_for_review`
- `approved`
- `changes_requested`
- `merged`
- `closed`
### Storage approach
- represent these as `issue_work_products` with `type='pull_request'`
- use `status` and `review_state`
- store provider-specific details in `metadata`
## Migration Plan
## 1. Existing installs
The migration posture is backward-compatible by default.
### Guarantees
- no existing project must be edited before it keeps working
- no existing issue flow should start requiring workspace input
- all new nullable columns must preserve current behavior when absent
## 2. Project workspace migration
Migrate `project_workspaces` in place.
### Backfill
- derive `source_type`
- copy `repo_ref` to `default_ref`
- leave new optional fields null
## 3. Issue migration
Do not backfill `project_workspace_id` or `execution_workspace_id` on all existing issues.
Reason:
- the safest migration is to preserve current runtime behavior and bind explicitly only when new workspace-aware flows are used
Interpret old issues as:
- `executionWorkspacePreference = inherit`
- compatibility/shared behavior
## 4. Runtime history migration
Do not attempt a perfect historical reconstruction of execution workspaces in the migration itself.
Instead:
- create execution workspace records forward from first new run
- optionally add a later backfill tool for recent runtime services if it proves valuable
## Rollout Order
## Phase 1: Schema and shared contracts
1. extend `project_workspaces`
2. add `execution_workspaces`
3. add `issue_work_products`
4. extend `issues`
5. extend `workspace_runtime_services`
6. update shared types and validators
## Phase 2: Service wiring
1. update project workspace CRUD
2. update issue create/update resolution
3. refactor workspace realization to persist execution workspaces
4. attach runtime services to execution workspaces
5. add work product service and persistence
## Phase 3: API and UI
1. add execution workspace routes
2. add work product routes
3. add instance experimental settings toggle
4. re-enable and revise project workspace UI behind the flag
5. add issue create/update controls behind the flag
6. add issue work product tab
7. add execution workspace detail page
## Phase 4: Provider integrations
1. GitHub PR reporting
2. preview URL reporting
3. runtime-service-to-work-product linking
4. remote/cloud provider references
## Acceptance Criteria
1. Existing installs continue to behave predictably with no required reconfiguration.
2. Projects can define local, git, non-git, and remote-managed project workspaces.
3. Issues can explicitly select a project workspace and execution preference.
4. Each issue can point to one current execution workspace.
5. Multiple issues can intentionally reuse the same execution workspace.
6. Execution workspaces are persisted for both local and remote execution flows.
7. Work products can be attached to issues with optional execution workspace linkage.
8. GitHub PRs can be represented with richer lifecycle states.
9. The main UI remains simple when the experimental flag is off.
10. No top-level workspace navigation is required for this first slice.
## Risks and Mitigations
## Risk: too many overlapping workspace concepts
Mitigation:
- keep issue UI to `Codebase` and `Execution mode`
- reserve execution workspace details for advanced pages
## Risk: breaking current projects on upgrade
Mitigation:
- nullable schema additions
- in-place `project_workspaces` migration
- compatibility defaults
## Risk: local-only assumptions leaking into cloud mode
Mitigation:
- make `cwd` optional for execution workspaces
- use `provider_type` and `provider_ref`
- use `PAPERCLIP_EXECUTION_TOPOLOGY` as a defaulting guardrail
## Risk: turning PRs into a bespoke subsystem too early
Mitigation:
- represent PRs as work products in V1
- keep provider-specific details in metadata
- defer a dedicated PR table unless usage proves it necessary
## Recommended First Engineering Slice
If we want the narrowest useful implementation:
1. extend `project_workspaces`
2. add `execution_workspaces`
3. extend `issues` with explicit workspace fields
4. persist execution workspaces from existing local workspace realization
5. add `issue_work_products`
6. show project workspace controls and issue workspace controls behind the experimental flag
7. add issue `Work Product` tab with PR/preview/runtime service display
This slice is enough to validate the model without yet building every provider integration or cleanup workflow.


@@ -108,6 +108,7 @@ Mount surfaces currently wired in the host include:
- `detailTab`
- `taskDetailView`
- `projectSidebarItem`
- `globalToolbarButton`
- `toolbarButton`
- `contextMenuItem`
- `commentAnnotation`


@@ -0,0 +1,33 @@
services:
review:
build:
context: .
dockerfile: docker/untrusted-review/Dockerfile
init: true
tty: true
stdin_open: true
working_dir: /work
environment:
HOME: "/home/reviewer"
CODEX_HOME: "/home/reviewer/.codex"
CLAUDE_HOME: "/home/reviewer/.claude"
PAPERCLIP_HOME: "/home/reviewer/.paperclip-review"
OPENAI_API_KEY: "${OPENAI_API_KEY:-}"
ANTHROPIC_API_KEY: "${ANTHROPIC_API_KEY:-}"
GITHUB_TOKEN: "${GITHUB_TOKEN:-}"
ports:
- "${REVIEW_PAPERCLIP_PORT:-3100}:3100"
- "${REVIEW_VITE_PORT:-5173}:5173"
volumes:
- review-home:/home/reviewer
- review-work:/work
cap_drop:
- ALL
security_opt:
- no-new-privileges:true
tmpfs:
- /tmp:mode=1777,size=1g
volumes:
review-home:
review-work:


@@ -0,0 +1,44 @@
FROM node:lts-trixie-slim
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
bash \
ca-certificates \
curl \
fd-find \
gh \
git \
jq \
less \
openssh-client \
procps \
ripgrep \
&& rm -rf /var/lib/apt/lists/*
RUN ln -sf /usr/bin/fdfind /usr/local/bin/fd
RUN corepack enable \
&& npm install --global --omit=dev @anthropic-ai/claude-code@latest @openai/codex@latest
RUN useradd --create-home --shell /bin/bash reviewer
ENV HOME=/home/reviewer \
CODEX_HOME=/home/reviewer/.codex \
CLAUDE_HOME=/home/reviewer/.claude \
PAPERCLIP_HOME=/home/reviewer/.paperclip-review \
PNPM_HOME=/home/reviewer/.local/share/pnpm \
PATH=/home/reviewer/.local/share/pnpm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
WORKDIR /work
COPY --chown=reviewer:reviewer docker/untrusted-review/bin/review-checkout-pr /usr/local/bin/review-checkout-pr
RUN chmod +x /usr/local/bin/review-checkout-pr \
&& mkdir -p /work \
&& chown -R reviewer:reviewer /work
USER reviewer
EXPOSE 3100 5173
CMD ["bash", "-l"]


@@ -0,0 +1,65 @@
#!/usr/bin/env bash
set -euo pipefail
usage() {
cat <<'EOF'
Usage: review-checkout-pr <owner/repo|github-url> <pr-number> [checkout-dir]
Examples:
review-checkout-pr paperclipai/paperclip 432
review-checkout-pr https://github.com/paperclipai/paperclip.git 432
EOF
}
if [[ $# -lt 2 || $# -gt 3 ]]; then
usage >&2
exit 1
fi
normalize_repo_slug() {
local raw="$1"
raw="${raw#git@github.com:}"
raw="${raw#ssh://git@github.com/}"
raw="${raw#https://github.com/}"
raw="${raw#http://github.com/}"
raw="${raw%.git}"
printf '%s\n' "${raw#/}"
}
repo_slug="$(normalize_repo_slug "$1")"
pr_number="$2"
if [[ ! "$repo_slug" =~ ^[^/]+/[^/]+$ ]]; then
echo "Expected GitHub repo slug like owner/repo or a GitHub repo URL, got: $1" >&2
exit 1
fi
if [[ ! "$pr_number" =~ ^[0-9]+$ ]]; then
echo "PR number must be numeric, got: $pr_number" >&2
exit 1
fi
repo_key="${repo_slug//\//-}"
mirror_dir="/work/repos/${repo_key}"
checkout_dir="${3:-/work/checkouts/${repo_key}/pr-${pr_number}}"
pr_ref="refs/remotes/origin/pr/${pr_number}"
mkdir -p "$(dirname "$mirror_dir")" "$(dirname "$checkout_dir")"
if [[ ! -d "$mirror_dir/.git" ]]; then
if command -v gh >/dev/null 2>&1; then
gh repo clone "$repo_slug" "$mirror_dir" -- --filter=blob:none
else
git clone --filter=blob:none "https://github.com/${repo_slug}.git" "$mirror_dir"
fi
fi
git -C "$mirror_dir" fetch --force origin "pull/${pr_number}/head:${pr_ref}"
if [[ -e "$checkout_dir" ]]; then
printf '%s\n' "$checkout_dir"
exit 0
fi
git -C "$mirror_dir" worktree add --detach "$checkout_dir" "$pr_ref" >/dev/null
printf '%s\n' "$checkout_dir"


@@ -38,10 +38,33 @@ PATCH /api/companies/{companyId}
{
"name": "Updated Name",
"description": "Updated description",
"budgetMonthlyCents": 100000
"budgetMonthlyCents": 100000,
"logoAssetId": "b9f5e911-6de5-4cd0-8dc6-a55a13bc02f6"
}
```
## Upload Company Logo
Upload an image for a company icon and store it as that company's logo.
```
POST /api/companies/{companyId}/logo
Content-Type: multipart/form-data
```
Valid image content types:
- `image/png`
- `image/jpeg`
- `image/jpg`
- `image/webp`
- `image/gif`
- `image/svg+xml`
Company logo uploads use the normal Paperclip attachment size limit.
The upload response returns an `assetId`; set the company logo by PATCHing that value into `logoAssetId`.
## Archive Company
```
@@ -58,6 +81,8 @@ Archives a company. Archived companies are hidden from default listings.
| `name` | string | Company name |
| `description` | string | Company description |
| `status` | string | `active`, `paused`, `archived` |
| `logoAssetId` | string | Optional asset id for the stored logo image |
| `logoUrl` | string | Optional Paperclip asset content path for the stored logo image |
| `budgetMonthlyCents` | number | Monthly budget limit |
| `createdAt` | string | ISO timestamp |
| `updatedAt` | string | ISO timestamp |


@@ -18,23 +18,22 @@
"db:backup": "./scripts/backup-db.sh",
"paperclipai": "node cli/node_modules/tsx/dist/cli.mjs cli/src/index.ts",
"build:npm": "./scripts/build-npm.sh",
"release:start": "./scripts/release-start.sh",
"release": "./scripts/release.sh",
"release:preflight": "./scripts/release-preflight.sh",
"release:canary": "./scripts/release.sh canary",
"release:stable": "./scripts/release.sh stable",
"release:github": "./scripts/create-github-release.sh",
"release:rollback": "./scripts/rollback-latest.sh",
"changeset": "changeset",
"version-packages": "changeset version",
"check:tokens": "node scripts/check-forbidden-tokens.mjs",
"docs:dev": "cd docs && npx mintlify dev",
"smoke:openclaw-join": "./scripts/smoke/openclaw-join.sh",
"smoke:openclaw-docker-ui": "./scripts/smoke/openclaw-docker-ui.sh",
"smoke:openclaw-sse-standalone": "./scripts/smoke/openclaw-sse-standalone.sh",
"test:e2e": "npx playwright test --config tests/e2e/playwright.config.ts",
"test:e2e:headed": "npx playwright test --config tests/e2e/playwright.config.ts --headed"
"test:e2e:headed": "npx playwright test --config tests/e2e/playwright.config.ts --headed",
"test:release-smoke": "npx playwright test --config tests/release-smoke/playwright.config.ts",
"test:release-smoke:headed": "npx playwright test --config tests/release-smoke/playwright.config.ts --headed"
},
"devDependencies": {
"@changesets/cli": "^2.30.0",
"cross-env": "^10.1.0",
"@playwright/test": "^1.58.2",
"esbuild": "^0.27.3",


@@ -1,5 +1,11 @@
# @paperclipai/adapter-utils
## 0.3.1
### Patch Changes
- Stable release preparation for 0.3.1
## 0.3.0
### Minor Changes


@@ -1,6 +1,16 @@
{
"name": "@paperclipai/adapter-utils",
"version": "0.3.0",
"version": "0.3.1",
"license": "MIT",
"homepage": "https://github.com/paperclipai/paperclip",
"bugs": {
"url": "https://github.com/paperclipai/paperclip/issues"
},
"repository": {
"type": "git",
"url": "https://github.com/paperclipai/paperclip",
"directory": "packages/adapter-utils"
},
"type": "module",
"exports": {
".": "./src/index.ts",


@@ -0,0 +1,28 @@
import { describe, expect, it } from "vitest";
import { inferOpenAiCompatibleBiller } from "./billing.js";
describe("inferOpenAiCompatibleBiller", () => {
it("returns openrouter when OPENROUTER_API_KEY is present", () => {
expect(
inferOpenAiCompatibleBiller({ OPENROUTER_API_KEY: "sk-or-123" } as NodeJS.ProcessEnv, "openai"),
).toBe("openrouter");
});
it("returns openrouter when OPENAI_BASE_URL points at OpenRouter", () => {
expect(
inferOpenAiCompatibleBiller(
{ OPENAI_BASE_URL: "https://openrouter.ai/api/v1" } as NodeJS.ProcessEnv,
"openai",
),
).toBe("openrouter");
});
it("returns fallback when no OpenRouter markers are present", () => {
expect(
inferOpenAiCompatibleBiller(
{ OPENAI_BASE_URL: "https://api.openai.com/v1" } as NodeJS.ProcessEnv,
"openai",
),
).toBe("openai");
});
});


@@ -0,0 +1,20 @@
function readEnv(env: NodeJS.ProcessEnv, key: string): string | null {
const value = env[key];
return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}
export function inferOpenAiCompatibleBiller(
env: NodeJS.ProcessEnv,
fallback: string | null = "openai",
): string | null {
const explicitOpenRouterKey = readEnv(env, "OPENROUTER_API_KEY");
if (explicitOpenRouterKey) return "openrouter";
const baseUrl =
readEnv(env, "OPENAI_BASE_URL") ??
readEnv(env, "OPENAI_API_BASE") ??
readEnv(env, "OPENAI_API_BASE_URL");
if (baseUrl && /openrouter\.ai/i.test(baseUrl)) return "openrouter";
return fallback;
}


@@ -17,14 +17,31 @@ export type {
HireApprovedPayload,
HireApprovedHookResult,
ServerAdapterModule,
QuotaWindow,
ProviderQuotaResult,
TranscriptEntry,
StdoutLineParser,
CLIAdapterModule,
CreateConfigValues,
} from "./types.js";
export type {
SessionCompactionPolicy,
NativeContextManagement,
AdapterSessionManagement,
ResolvedSessionCompactionPolicy,
} from "./session-compaction.js";
export {
ADAPTER_SESSION_MANAGEMENT,
LEGACY_SESSIONED_ADAPTER_TYPES,
getAdapterSessionManagement,
readSessionCompactionOverride,
resolveSessionCompactionPolicy,
hasSessionCompactionThresholds,
} from "./session-compaction.js";
export {
REDACTED_HOME_PATH_USER,
redactHomePathUserSegments,
redactHomePathUserSegmentsInValue,
redactTranscriptEntryPaths,
} from "./log-redaction.js";
export { inferOpenAiCompatibleBiller } from "./billing.js";


@@ -0,0 +1,175 @@
export interface SessionCompactionPolicy {
enabled: boolean;
maxSessionRuns: number;
maxRawInputTokens: number;
maxSessionAgeHours: number;
}
export type NativeContextManagement = "confirmed" | "likely" | "unknown" | "none";
export interface AdapterSessionManagement {
supportsSessionResume: boolean;
nativeContextManagement: NativeContextManagement;
defaultSessionCompaction: SessionCompactionPolicy;
}
export interface ResolvedSessionCompactionPolicy {
policy: SessionCompactionPolicy;
adapterSessionManagement: AdapterSessionManagement | null;
explicitOverride: Partial<SessionCompactionPolicy>;
source: "adapter_default" | "agent_override" | "legacy_fallback";
}
const DEFAULT_SESSION_COMPACTION_POLICY: SessionCompactionPolicy = {
enabled: true,
maxSessionRuns: 200,
maxRawInputTokens: 2_000_000,
maxSessionAgeHours: 72,
};
// Adapters with native context management still participate in session resume,
// but Paperclip should not rotate them using threshold-based compaction.
const ADAPTER_MANAGED_SESSION_POLICY: SessionCompactionPolicy = {
enabled: true,
maxSessionRuns: 0,
maxRawInputTokens: 0,
maxSessionAgeHours: 0,
};
export const LEGACY_SESSIONED_ADAPTER_TYPES = new Set([
"claude_local",
"codex_local",
"cursor",
"gemini_local",
"opencode_local",
"pi_local",
]);
export const ADAPTER_SESSION_MANAGEMENT: Record<string, AdapterSessionManagement> = {
claude_local: {
supportsSessionResume: true,
nativeContextManagement: "confirmed",
defaultSessionCompaction: ADAPTER_MANAGED_SESSION_POLICY,
},
codex_local: {
supportsSessionResume: true,
nativeContextManagement: "confirmed",
defaultSessionCompaction: ADAPTER_MANAGED_SESSION_POLICY,
},
cursor: {
supportsSessionResume: true,
nativeContextManagement: "unknown",
defaultSessionCompaction: DEFAULT_SESSION_COMPACTION_POLICY,
},
gemini_local: {
supportsSessionResume: true,
nativeContextManagement: "unknown",
defaultSessionCompaction: DEFAULT_SESSION_COMPACTION_POLICY,
},
opencode_local: {
supportsSessionResume: true,
nativeContextManagement: "unknown",
defaultSessionCompaction: DEFAULT_SESSION_COMPACTION_POLICY,
},
pi_local: {
supportsSessionResume: true,
nativeContextManagement: "unknown",
defaultSessionCompaction: DEFAULT_SESSION_COMPACTION_POLICY,
},
};
function isRecord(value: unknown): value is Record<string, unknown> {
return typeof value === "object" && value !== null && !Array.isArray(value);
}
function readBoolean(value: unknown): boolean | undefined {
if (typeof value === "boolean") return value;
if (typeof value === "number") {
if (value === 1) return true;
if (value === 0) return false;
return undefined;
}
if (typeof value !== "string") return undefined;
const normalized = value.trim().toLowerCase();
if (normalized === "true" || normalized === "1" || normalized === "yes" || normalized === "on") {
return true;
}
if (normalized === "false" || normalized === "0" || normalized === "no" || normalized === "off") {
return false;
}
return undefined;
}
function readNumber(value: unknown): number | undefined {
if (typeof value === "number" && Number.isFinite(value)) {
return Math.max(0, Math.floor(value));
}
if (typeof value !== "string") return undefined;
const parsed = Number(value.trim());
return Number.isFinite(parsed) ? Math.max(0, Math.floor(parsed)) : undefined;
}
export function getAdapterSessionManagement(adapterType: string | null | undefined): AdapterSessionManagement | null {
if (!adapterType) return null;
return ADAPTER_SESSION_MANAGEMENT[adapterType] ?? null;
}
export function readSessionCompactionOverride(runtimeConfig: unknown): Partial<SessionCompactionPolicy> {
const runtime = isRecord(runtimeConfig) ? runtimeConfig : {};
const heartbeat = isRecord(runtime.heartbeat) ? runtime.heartbeat : {};
const compaction = isRecord(
heartbeat.sessionCompaction ?? heartbeat.sessionRotation ?? runtime.sessionCompaction,
)
? (heartbeat.sessionCompaction ?? heartbeat.sessionRotation ?? runtime.sessionCompaction) as Record<string, unknown>
: {};
const explicit: Partial<SessionCompactionPolicy> = {};
const enabled = readBoolean(compaction.enabled);
const maxSessionRuns = readNumber(compaction.maxSessionRuns);
const maxRawInputTokens = readNumber(compaction.maxRawInputTokens);
const maxSessionAgeHours = readNumber(compaction.maxSessionAgeHours);
if (enabled !== undefined) explicit.enabled = enabled;
if (maxSessionRuns !== undefined) explicit.maxSessionRuns = maxSessionRuns;
if (maxRawInputTokens !== undefined) explicit.maxRawInputTokens = maxRawInputTokens;
if (maxSessionAgeHours !== undefined) explicit.maxSessionAgeHours = maxSessionAgeHours;
return explicit;
}
export function resolveSessionCompactionPolicy(
adapterType: string | null | undefined,
runtimeConfig: unknown,
): ResolvedSessionCompactionPolicy {
const adapterSessionManagement = getAdapterSessionManagement(adapterType);
const explicitOverride = readSessionCompactionOverride(runtimeConfig);
const hasExplicitOverride = Object.keys(explicitOverride).length > 0;
const fallbackEnabled = Boolean(adapterType && LEGACY_SESSIONED_ADAPTER_TYPES.has(adapterType));
const basePolicy = adapterSessionManagement?.defaultSessionCompaction ?? {
...DEFAULT_SESSION_COMPACTION_POLICY,
enabled: fallbackEnabled,
};
return {
policy: {
enabled: explicitOverride.enabled ?? basePolicy.enabled,
maxSessionRuns: explicitOverride.maxSessionRuns ?? basePolicy.maxSessionRuns,
maxRawInputTokens: explicitOverride.maxRawInputTokens ?? basePolicy.maxRawInputTokens,
maxSessionAgeHours: explicitOverride.maxSessionAgeHours ?? basePolicy.maxSessionAgeHours,
},
adapterSessionManagement,
explicitOverride,
source: hasExplicitOverride
? "agent_override"
: adapterSessionManagement
? "adapter_default"
: "legacy_fallback",
};
}
export function hasSessionCompactionThresholds(policy: Pick<
SessionCompactionPolicy,
"maxSessionRuns" | "maxRawInputTokens" | "maxSessionAgeHours"
>) {
return policy.maxSessionRuns > 0 || policy.maxRawInputTokens > 0 || policy.maxSessionAgeHours > 0;
}


@@ -30,7 +30,15 @@ export interface UsageSummary {
cachedInputTokens?: number;
}
export type AdapterBillingType = "api" | "subscription" | "unknown";
export type AdapterBillingType =
| "api"
| "subscription"
| "metered_api"
| "subscription_included"
| "subscription_overage"
| "credits"
| "fixed"
| "unknown";
export interface AdapterRuntimeServiceReport {
id?: string | null;
@@ -68,6 +76,7 @@ export interface AdapterExecutionResult {
sessionParams?: Record<string, unknown> | null;
sessionDisplayId?: string | null;
provider?: string | null;
biller?: string | null;
model?: string | null;
billingType?: AdapterBillingType | null;
costUsd?: number | null;
@@ -171,11 +180,43 @@ export interface HireApprovedHookResult {
detail?: Record<string, unknown>;
}
// ---------------------------------------------------------------------------
// Quota window types — used by adapters that can report provider quota/rate-limit state
// ---------------------------------------------------------------------------
/** a single rate-limit or usage window returned by a provider quota API */
export interface QuotaWindow {
/** human label, e.g. "5h", "7d", "Sonnet 7d", "Credits" */
label: string;
/** percent of the window already consumed (0-100), null when not reported */
usedPercent: number | null;
/** iso timestamp when this window resets, null when not reported */
resetsAt: string | null;
/** free-form value label for credit-style windows, e.g. "$4.20 remaining" */
valueLabel: string | null;
/** optional supporting text, e.g. reset details or provider-specific notes */
detail?: string | null;
}
/** result for one provider from getQuotaWindows() */
export interface ProviderQuotaResult {
/** provider slug, e.g. "anthropic", "openai" */
provider: string;
/** source label when the provider reports where the quota data came from */
source?: string | null;
/** true when the fetch succeeded and windows is populated */
ok: boolean;
/** error message when ok is false */
error?: string;
windows: QuotaWindow[];
}
export interface ServerAdapterModule {
type: string;
execute(ctx: AdapterExecutionContext): Promise<AdapterExecutionResult>;
testEnvironment(ctx: AdapterEnvironmentTestContext): Promise<AdapterEnvironmentTestResult>;
sessionCodec?: AdapterSessionCodec;
sessionManagement?: import("./session-compaction.js").AdapterSessionManagement;
supportsLocalAgentJwt?: boolean;
models?: AdapterModel[];
listModels?: () => Promise<AdapterModel[]>;
@@ -188,6 +229,12 @@ export interface ServerAdapterModule {
payload: HireApprovedPayload,
adapterConfig: Record<string, unknown>,
) => Promise<HireApprovedHookResult>;
/**
* Optional: fetch live provider quota/rate-limit windows for this adapter.
* Returns a ProviderQuotaResult so the server can aggregate across adapters
* without knowing provider-specific credential paths or API shapes.
*/
getQuotaWindows?: () => Promise<ProviderQuotaResult>;
}
// ---------------------------------------------------------------------------


@@ -1,5 +1,13 @@
# @paperclipai/adapter-claude-local
## 0.3.1
### Patch Changes
- Stable release preparation for 0.3.1
- Updated dependencies
- @paperclipai/adapter-utils@0.3.1
## 0.3.0
### Minor Changes


@@ -1,6 +1,16 @@
{
"name": "@paperclipai/adapter-claude-local",
"version": "0.3.0",
"version": "0.3.1",
"license": "MIT",
"homepage": "https://github.com/paperclipai/paperclip",
"bugs": {
"url": "https://github.com/paperclipai/paperclip/issues"
},
"repository": {
"type": "git",
"url": "https://github.com/paperclipai/paperclip",
"directory": "packages/adapters/claude-local"
},
"type": "module",
"exports": {
".": "./src/index.ts",
@@ -38,7 +48,9 @@
"scripts": {
"build": "tsc",
"clean": "rm -rf dist",
"typecheck": "tsc --noEmit"
"typecheck": "tsc --noEmit",
"probe:quota": "pnpm exec tsx src/cli/quota-probe.ts --json",
"probe:quota:raw": "pnpm exec tsx src/cli/quota-probe.ts --json --raw-cli"
},
"dependencies": {
"@paperclipai/adapter-utils": "workspace:*",


@@ -0,0 +1,124 @@
#!/usr/bin/env node
import {
captureClaudeCliUsageText,
fetchClaudeCliQuota,
fetchClaudeQuota,
getQuotaWindows,
parseClaudeCliUsageText,
readClaudeAuthStatus,
readClaudeToken,
} from "../server/quota.js";
interface ProbeArgs {
json: boolean;
includeRawCli: boolean;
oauthOnly: boolean;
cliOnly: boolean;
}
function parseArgs(argv: string[]): ProbeArgs {
return {
json: argv.includes("--json"),
includeRawCli: argv.includes("--raw-cli"),
oauthOnly: argv.includes("--oauth-only"),
cliOnly: argv.includes("--cli-only"),
};
}
function stringifyError(error: unknown): string {
return error instanceof Error ? error.message : String(error);
}
async function main() {
const args = parseArgs(process.argv.slice(2));
if (args.oauthOnly && args.cliOnly) {
throw new Error("Choose either --oauth-only or --cli-only, not both.");
}
const authStatus = await readClaudeAuthStatus();
const token = await readClaudeToken();
const result: Record<string, unknown> = {
timestamp: new Date().toISOString(),
authStatus,
tokenAvailable: token != null,
};
if (!args.cliOnly) {
if (!token) {
result.oauth = {
ok: false,
error: "No Claude OAuth access token found in local credentials files.",
windows: [],
};
} else {
try {
result.oauth = {
ok: true,
windows: await fetchClaudeQuota(token),
};
} catch (error) {
result.oauth = {
ok: false,
error: stringifyError(error),
windows: [],
};
}
}
}
if (!args.oauthOnly) {
try {
const rawCliText = args.includeRawCli ? await captureClaudeCliUsageText() : null;
const windows = rawCliText ? parseClaudeCliUsageText(rawCliText) : await fetchClaudeCliQuota();
result.cli = rawCliText
? {
ok: true,
windows,
rawText: rawCliText,
}
: {
ok: true,
windows,
};
} catch (error) {
result.cli = {
ok: false,
error: stringifyError(error),
windows: [],
};
}
}
if (!args.oauthOnly && !args.cliOnly) {
try {
result.aggregated = await getQuotaWindows();
} catch (error) {
result.aggregated = {
ok: false,
error: stringifyError(error),
};
}
}
const oauthOk = (result.oauth as { ok?: boolean } | undefined)?.ok === true;
const cliOk = (result.cli as { ok?: boolean } | undefined)?.ok === true;
const aggregatedOk = (result.aggregated as { ok?: boolean } | undefined)?.ok === true;
const ok = oauthOk || cliOk || aggregatedOk;
if (args.json || process.stdout.isTTY === false) {
console.log(JSON.stringify({ ok, ...result }, null, 2));
} else {
console.log(`timestamp: ${result.timestamp}`);
console.log(`auth: ${JSON.stringify(authStatus)}`);
console.log(`tokenAvailable: ${token != null}`);
if (result.oauth) console.log(`oauth: ${JSON.stringify(result.oauth, null, 2)}`);
if (result.cli) console.log(`cli: ${JSON.stringify(result.cli, null, 2)}`);
if (result.aggregated) console.log(`aggregated: ${JSON.stringify(result.aggregated, null, 2)}`);
}
if (!ok) process.exitCode = 1;
}
await main();


@@ -122,6 +122,7 @@ async function buildClaudeRuntimeConfig(input: ClaudeExecutionInput): Promise<Cl
const workspaceRepoRef = asString(workspaceContext.repoRef, "") || null;
const workspaceBranch = asString(workspaceContext.branchName, "") || null;
const workspaceWorktreePath = asString(workspaceContext.worktreePath, "") || null;
const agentHome = asString(workspaceContext.agentHome, "") || null;
const workspaceHints = Array.isArray(context.paperclipWorkspaces)
? context.paperclipWorkspaces.filter(
(value): value is Record<string, unknown> => typeof value === "object" && value !== null,
@@ -216,6 +217,9 @@ async function buildClaudeRuntimeConfig(input: ClaudeExecutionInput): Promise<Cl
if (workspaceWorktreePath) {
env.PAPERCLIP_WORKSPACE_WORKTREE_PATH = workspaceWorktreePath;
}
if (agentHome) {
env.AGENT_HOME = agentHome;
}
if (workspaceHints.length > 0) {
env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
}
@@ -336,7 +340,12 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
graceSec,
extraArgs,
} = runtimeConfig;
const billingType = resolveClaudeBillingType(env);
const effectiveEnv = Object.fromEntries(
Object.entries({ ...process.env, ...env }).filter(
(entry): entry is [string, string] => typeof entry[1] === "string",
),
);
const billingType = resolveClaudeBillingType(effectiveEnv);
const skillsDir = await buildSkillsDir();
// When instructionsFilePath is configured, create a combined temp file that
@@ -543,6 +552,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
sessionParams: resolvedSessionParams,
sessionDisplayId: resolvedSessionId,
provider: "anthropic",
biller: "anthropic",
model: parsedStream.model || asString(parsed.model, model),
billingType,
costUsd: parsedStream.costUsd ?? asNumber(parsed.total_cost_usd, 0),


@@ -6,6 +6,18 @@ export {
isClaudeMaxTurnsResult,
isClaudeUnknownSessionError,
} from "./parse.js";
export {
getQuotaWindows,
readClaudeAuthStatus,
readClaudeToken,
fetchClaudeQuota,
fetchClaudeCliQuota,
captureClaudeCliUsageText,
parseClaudeCliUsageText,
toPercent,
fetchWithTimeout,
claudeConfigDir,
} from "./quota.js";
import type { AdapterSessionCodec } from "@paperclipai/adapter-utils";
function readNonEmptyString(value: unknown): string | null {


@@ -0,0 +1,531 @@
import { execFile } from "node:child_process";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { promisify } from "node:util";
import type { ProviderQuotaResult, QuotaWindow } from "@paperclipai/adapter-utils";
const execFileAsync = promisify(execFile);
const CLAUDE_USAGE_SOURCE_OAUTH = "anthropic-oauth";
const CLAUDE_USAGE_SOURCE_CLI = "claude-cli";
export function claudeConfigDir(): string {
const fromEnv = process.env.CLAUDE_CONFIG_DIR;
if (typeof fromEnv === "string" && fromEnv.trim().length > 0) return fromEnv.trim();
return path.join(os.homedir(), ".claude");
}
function hasNonEmptyProcessEnv(key: string): boolean {
const value = process.env[key];
return typeof value === "string" && value.trim().length > 0;
}
function createClaudeQuotaEnv(): Record<string, string> {
const env: Record<string, string> = {};
for (const [key, value] of Object.entries(process.env)) {
if (typeof value !== "string") continue;
if (key.startsWith("ANTHROPIC_")) continue;
env[key] = value;
}
return env;
}
function stripBackspaces(text: string): string {
let out = "";
for (const char of text) {
if (char === "\b") {
out = out.slice(0, -1);
} else {
out += char;
}
}
return out;
}
function stripAnsi(text: string): string {
return text
.replace(/\u001B\][^\u0007]*(?:\u0007|\u001B\\)/g, "")
.replace(/\u001B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])/g, "");
}
function cleanTerminalText(text: string): string {
return stripAnsi(stripBackspaces(text))
.replace(/\u0000/g, "")
.replace(/\r/g, "\n");
}
function normalizeForLabelSearch(text: string): string {
return text.toLowerCase().replace(/[^a-z0-9]+/g, "");
}
function trimToLatestUsagePanel(text: string): string | null {
const lower = text.toLowerCase();
const settingsIndex = lower.lastIndexOf("settings:");
if (settingsIndex < 0) return null;
let tail = text.slice(settingsIndex);
const tailLower = tail.toLowerCase();
if (!tailLower.includes("usage")) return null;
if (!tailLower.includes("current session") && !tailLower.includes("loading usage")) return null;
const stopMarkers = [
"status dialog dismissed",
"checking for updates",
"press ctrl-c again to exit",
];
let stopIndex = -1;
for (const marker of stopMarkers) {
const markerIndex = tailLower.indexOf(marker);
if (markerIndex >= 0 && (stopIndex === -1 || markerIndex < stopIndex)) {
stopIndex = markerIndex;
}
}
if (stopIndex >= 0) {
tail = tail.slice(0, stopIndex);
}
return tail;
}
async function readClaudeTokenFromFile(credPath: string): Promise<string | null> {
let raw: string;
try {
raw = await fs.readFile(credPath, "utf8");
} catch {
return null;
}
let parsed: unknown;
try {
parsed = JSON.parse(raw);
} catch {
return null;
}
if (typeof parsed !== "object" || parsed === null) return null;
const obj = parsed as Record<string, unknown>;
const oauth = obj["claudeAiOauth"];
if (typeof oauth !== "object" || oauth === null) return null;
const token = (oauth as Record<string, unknown>)["accessToken"];
return typeof token === "string" && token.length > 0 ? token : null;
}
interface ClaudeAuthStatus {
loggedIn: boolean;
authMethod: string | null;
subscriptionType: string | null;
}
export async function readClaudeAuthStatus(): Promise<ClaudeAuthStatus | null> {
try {
const { stdout } = await execFileAsync("claude", ["auth", "status"], {
env: process.env,
timeout: 5_000,
maxBuffer: 1024 * 1024,
});
const parsed = JSON.parse(stdout) as Record<string, unknown>;
return {
loggedIn: parsed.loggedIn === true,
authMethod: typeof parsed.authMethod === "string" ? parsed.authMethod : null,
subscriptionType: typeof parsed.subscriptionType === "string" ? parsed.subscriptionType : null,
};
} catch {
return null;
}
}
function describeClaudeSubscriptionAuth(status: ClaudeAuthStatus | null): string | null {
if (!status?.loggedIn || status.authMethod !== "claude.ai") return null;
return status.subscriptionType
? `Claude is logged in via claude.ai (${status.subscriptionType})`
: "Claude is logged in via claude.ai";
}
export async function readClaudeToken(): Promise<string | null> {
const configDir = claudeConfigDir();
for (const filename of [".credentials.json", "credentials.json"]) {
const token = await readClaudeTokenFromFile(path.join(configDir, filename));
if (token) return token;
}
return null;
}
interface AnthropicUsageWindow {
utilization?: number | null;
resets_at?: string | null;
}
interface AnthropicExtraUsage {
is_enabled?: boolean | null;
monthly_limit?: number | null;
used_credits?: number | null;
utilization?: number | null;
currency?: string | null;
}
interface AnthropicUsageResponse {
five_hour?: AnthropicUsageWindow | null;
seven_day?: AnthropicUsageWindow | null;
seven_day_sonnet?: AnthropicUsageWindow | null;
seven_day_opus?: AnthropicUsageWindow | null;
extra_usage?: AnthropicExtraUsage | null;
}
function formatCurrencyAmount(value: number, currency: string | null | undefined): string {
const code = typeof currency === "string" && currency.trim().length > 0 ? currency.trim().toUpperCase() : "USD";
return new Intl.NumberFormat("en-US", {
style: "currency",
currency: code,
maximumFractionDigits: 2,
}).format(value);
}
function formatExtraUsageLabel(extraUsage: AnthropicExtraUsage): string | null {
const monthlyLimit = extraUsage.monthly_limit;
const usedCredits = extraUsage.used_credits;
if (
typeof monthlyLimit !== "number" ||
!Number.isFinite(monthlyLimit) ||
typeof usedCredits !== "number" ||
!Number.isFinite(usedCredits)
) {
return null;
}
return `${formatCurrencyAmount(usedCredits, extraUsage.currency)} / ${formatCurrencyAmount(monthlyLimit, extraUsage.currency)}`;
}
/** Convert a 0-1 utilization fraction to a 0-100 integer percent. Returns null for null/undefined input. */
export function toPercent(utilization: number | null | undefined): number | null {
if (utilization == null) return null;
return Math.min(100, Math.round(utilization * 100));
}
/** fetch with an abort-based timeout so a hanging provider api doesn't block the response indefinitely */
export async function fetchWithTimeout(url: string, init: RequestInit, ms = 8000): Promise<Response> {
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), ms);
try {
return await fetch(url, { ...init, signal: controller.signal });
} finally {
clearTimeout(timer);
}
}
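// On timeout the controller aborts the in-flight request and fetch rejects
// (an AbortError in Node's fetch), so callers need only try/catch; the timer
// is always cleared, even on the success path.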
export async function fetchClaudeQuota(token: string): Promise<QuotaWindow[]> {
const resp = await fetchWithTimeout("https://api.anthropic.com/api/oauth/usage", {
headers: {
Authorization: `Bearer ${token}`,
"anthropic-beta": "oauth-2025-04-20",
},
});
if (!resp.ok) throw new Error(`anthropic usage api returned ${resp.status}`);
const body = (await resp.json()) as AnthropicUsageResponse;
const windows: QuotaWindow[] = [];
if (body.five_hour != null) {
windows.push({
label: "Current session",
usedPercent: toPercent(body.five_hour.utilization),
resetsAt: body.five_hour.resets_at ?? null,
valueLabel: null,
detail: null,
});
}
if (body.seven_day != null) {
windows.push({
label: "Current week (all models)",
usedPercent: toPercent(body.seven_day.utilization),
resetsAt: body.seven_day.resets_at ?? null,
valueLabel: null,
detail: null,
});
}
if (body.seven_day_sonnet != null) {
windows.push({
label: "Current week (Sonnet only)",
usedPercent: toPercent(body.seven_day_sonnet.utilization),
resetsAt: body.seven_day_sonnet.resets_at ?? null,
valueLabel: null,
detail: null,
});
}
if (body.seven_day_opus != null) {
windows.push({
label: "Current week (Opus only)",
usedPercent: toPercent(body.seven_day_opus.utilization),
resetsAt: body.seven_day_opus.resets_at ?? null,
valueLabel: null,
detail: null,
});
}
if (body.extra_usage != null) {
windows.push({
label: "Extra usage",
usedPercent: body.extra_usage.is_enabled === false ? null : toPercent(body.extra_usage.utilization),
resetsAt: null,
valueLabel:
body.extra_usage.is_enabled === false
? "Not enabled"
: formatExtraUsageLabel(body.extra_usage),
detail:
body.extra_usage.is_enabled === false
? "Extra usage not enabled"
: "Monthly extra usage pool",
});
}
return windows;
}
function usageOutputLooksRelevant(text: string): boolean {
const normalized = normalizeForLabelSearch(text);
return normalized.includes("currentsession")
|| normalized.includes("currentweek")
|| normalized.includes("loadingusage")
|| normalized.includes("failedtoloadusagedata")
|| normalized.includes("tokenexpired")
|| normalized.includes("authenticationerror")
|| normalized.includes("ratelimited");
}
function usageOutputLooksComplete(text: string): boolean {
const normalized = normalizeForLabelSearch(text);
if (
normalized.includes("failedtoloadusagedata")
|| normalized.includes("tokenexpired")
|| normalized.includes("authenticationerror")
|| normalized.includes("ratelimited")
) {
return true;
}
return normalized.includes("currentsession")
&& (normalized.includes("currentweek") || normalized.includes("extrausage"))
&& /[0-9]{1,3}(?:\.[0-9]+)?%/i.test(text);
}
function extractUsageError(text: string): string | null {
const lower = text.toLowerCase();
const compact = lower.replace(/\s+/g, "");
if (lower.includes("token_expired") || lower.includes("token has expired")) {
return "Claude CLI token expired. Run `claude login` to refresh.";
}
if (lower.includes("authentication_error")) {
return "Claude CLI authentication error. Run `claude login`.";
}
if (lower.includes("rate_limit_error") || lower.includes("rate limited") || compact.includes("ratelimited")) {
return "Claude CLI usage endpoint is rate limited right now. Please try again later.";
}
if (lower.includes("failed to load usage data") || compact.includes("failedtoloadusagedata")) {
return "Claude CLI could not load usage data. Open the CLI and retry `/usage`.";
}
return null;
}
function percentFromLine(line: string): number | null {
const match = line.match(/([0-9]{1,3}(?:\.[0-9]+)?)\s*%/i);
if (!match) return null;
const rawValue = Number(match[1]);
if (!Number.isFinite(rawValue)) return null;
const clamped = Math.min(100, Math.max(0, rawValue));
const lower = line.toLowerCase();
if (lower.includes("remaining") || lower.includes("left") || lower.includes("available")) {
return Math.max(0, Math.min(100, Math.round(100 - clamped)));
}
return Math.round(clamped);
}
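// e.g. percentFromLine("42% used") === 42, while percentFromLine("58% remaining") === 42:
// "remaining"/"left"/"available" lines report headroom, so they are inverted into used percent.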
function isQuotaLabel(line: string): boolean {
const normalized = normalizeForLabelSearch(line);
return normalized === "currentsession"
|| normalized === "currentweekallmodels"
|| normalized === "currentweeksonnetonly"
|| normalized === "currentweeksonnet"
|| normalized === "currentweekopusonly"
|| normalized === "currentweekopus"
|| normalized === "extrausage";
}
function canonicalQuotaLabel(line: string): string {
switch (normalizeForLabelSearch(line)) {
case "currentsession":
return "Current session";
case "currentweekallmodels":
return "Current week (all models)";
case "currentweeksonnetonly":
case "currentweeksonnet":
return "Current week (Sonnet only)";
case "currentweekopusonly":
case "currentweekopus":
return "Current week (Opus only)";
case "extrausage":
return "Extra usage";
default:
return line;
}
}
function formatClaudeCliDetail(label: string, lines: string[]): string | null {
const normalizedLabel = normalizeForLabelSearch(label);
if (normalizedLabel === "extrausage") {
const compact = lines.join(" ").replace(/\s+/g, "").toLowerCase();
if (compact.includes("extrausagenotenabled")) {
return "Extra usage not enabled • /extra-usage to enable";
}
const firstLine = lines.find((line) => line.trim().length > 0) ?? null;
return firstLine;
}
const resetLine = lines.find((line) => /^resets/i.test(line) || normalizeForLabelSearch(line).startsWith("resets"));
if (!resetLine) return null;
return resetLine
.replace(/^Resets/i, "Resets ")
.replace(/([A-Z][a-z]{2})(\d)/g, "$1 $2")
.replace(/(\d)at(\d)/g, "$1 at $2")
.replace(/(am|pm)\(/gi, "$1 (")
.replace(/([A-Za-z])\(/g, "$1 (")
.replace(/\s+/g, " ")
.trim();
}
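// The replace chain above restores the spaces that terminal-grid extraction drops,
// e.g. "ResetsJan3at5pm(America/Chicago)" -> "Resets Jan 3 at 5pm (America/Chicago)".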
export function parseClaudeCliUsageText(text: string): QuotaWindow[] {
const cleaned = trimToLatestUsagePanel(cleanTerminalText(text)) ?? cleanTerminalText(text);
const usageError = extractUsageError(cleaned);
if (usageError) throw new Error(usageError);
const lines = cleaned
.split("\n")
.map((line) => line.trim())
.filter((line) => line.length > 0);
const sections: Array<{ label: string; lines: string[] }> = [];
let current: { label: string; lines: string[] } | null = null;
for (const line of lines) {
if (isQuotaLabel(line)) {
if (current) sections.push(current);
current = { label: canonicalQuotaLabel(line), lines: [] };
continue;
}
if (current) current.lines.push(line);
}
if (current) sections.push(current);
const windows = sections.map<QuotaWindow>((section) => {
const usedPercent = section.lines.map(percentFromLine).find((value) => value != null) ?? null;
return {
label: section.label,
usedPercent,
resetsAt: null,
valueLabel: null,
detail: formatClaudeCliDetail(section.label, section.lines),
};
});
if (!windows.some((window) => normalizeForLabelSearch(window.label) === "currentsession")) {
throw new Error("Could not parse Claude CLI usage output.");
}
return windows;
}
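// A worked example of parseClaudeCliUsageText on hypothetical (already cleaned) CLI text:
//
//   Current session
//   42% used
//   Resets Jan 3 at 5pm (America/Chicago)
//   Current week (all models)
//   17% used
//
// parses into two windows: { label: "Current session", usedPercent: 42,
// detail: "Resets Jan 3 at 5pm (America/Chicago)" } and
// { label: "Current week (all models)", usedPercent: 17, detail: null }.
// The "Current session" section is mandatory; without it the parser throws.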
function quoteForShell(value: string): string {
return `'${value.replace(/'/g, `'\\''`)}'`;
}
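// Drives an interactive `claude` session through a pseudo-terminal: the feed
// subshell waits for the TUI to boot, types "/usage" (the \r submits it), gives
// the panel ~6s to render, then sends ESC (\033) and Ctrl-C (\003) to exit.
// `script` supplies the PTY; its flags differ between the BSD (macOS) and
// util-linux variants, hence the platform branch below.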
function buildClaudeCliShellProbeCommand(): string {
const feed = "(sleep 2; printf '/usage\\r'; sleep 6; printf '\\033'; sleep 1; printf '\\003')";
const claudeCommand = "claude --tools \"\"";
if (process.platform === "darwin") {
return `${feed} | script -q /dev/null ${claudeCommand}`;
}
return `${feed} | script -q -e -f -c ${quoteForShell(claudeCommand)} /dev/null`;
}
export async function captureClaudeCliUsageText(timeoutMs = 12_000): Promise<string> {
const command = buildClaudeCliShellProbeCommand();
try {
const { stdout, stderr } = await execFileAsync("sh", ["-c", command], {
env: createClaudeQuotaEnv(),
timeout: timeoutMs,
maxBuffer: 8 * 1024 * 1024,
});
const output = `${stdout}${stderr}`;
const cleaned = cleanTerminalText(output);
if (usageOutputLooksComplete(cleaned)) return output;
throw new Error("Claude CLI usage probe ended before rendering usage.");
} catch (error) {
const stdout =
typeof error === "object" && error !== null && "stdout" in error && typeof error.stdout === "string"
? error.stdout
: "";
const stderr =
typeof error === "object" && error !== null && "stderr" in error && typeof error.stderr === "string"
? error.stderr
: "";
const output = `${stdout}${stderr}`;
const cleaned = cleanTerminalText(output);
if (usageOutputLooksComplete(cleaned)) return output;
if (usageOutputLooksRelevant(cleaned)) {
throw new Error("Claude CLI usage probe ended before rendering usage.");
}
throw error instanceof Error ? error : new Error(String(error));
}
}
export async function fetchClaudeCliQuota(): Promise<QuotaWindow[]> {
const rawText = await captureClaudeCliUsageText();
return parseClaudeCliUsageText(rawText);
}
function formatProviderError(source: string, error: unknown): string {
const message = error instanceof Error ? error.message : String(error);
return `${source}: ${message}`;
}
export async function getQuotaWindows(): Promise<ProviderQuotaResult> {
const authStatus = await readClaudeAuthStatus();
const authDescription = describeClaudeSubscriptionAuth(authStatus);
const token = await readClaudeToken();
const errors: string[] = [];
if (token) {
try {
const windows = await fetchClaudeQuota(token);
return { provider: "anthropic", source: CLAUDE_USAGE_SOURCE_OAUTH, ok: true, windows };
} catch (error) {
errors.push(formatProviderError("Anthropic OAuth usage", error));
}
}
try {
const windows = await fetchClaudeCliQuota();
return { provider: "anthropic", source: CLAUDE_USAGE_SOURCE_CLI, ok: true, windows };
} catch (error) {
errors.push(formatProviderError("Claude CLI /usage", error));
}
if (hasNonEmptyProcessEnv("ANTHROPIC_API_KEY") && !authDescription) {
return {
provider: "anthropic",
ok: false,
error:
errors[0]
?? "ANTHROPIC_API_KEY is set and no local Claude subscription session is available for quota polling",
windows: [],
};
}
if (authDescription) {
return {
provider: "anthropic",
ok: false,
error:
errors.length > 0
? `${authDescription}, but quota polling failed (${errors.join("; ")})`
: `${authDescription}, but Paperclip could not load subscription quota data`,
windows: [],
};
}
return {
provider: "anthropic",
ok: false,
error: errors[0] ?? "no local claude auth token",
windows: [],
};
}
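// getQuotaWindows resolution order: prefer the OAuth usage API when a cached
// CLI token exists, then fall back to scraping `claude` /usage through the PTY
// probe; the remaining branches only shape the error message (API-key-only
// setups vs. a logged-in claude.ai subscription whose quota could not be loaded).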

View File

@@ -1,5 +1,13 @@
# @paperclipai/adapter-codex-local
## 0.3.1
### Patch Changes
- Stable release preparation for 0.3.1
- Updated dependencies
- @paperclipai/adapter-utils@0.3.1
## 0.3.0
### Minor Changes

View File

@@ -1,6 +1,16 @@
{
"name": "@paperclipai/adapter-codex-local",
"version": "0.3.0",
"version": "0.3.1",
"license": "MIT",
"homepage": "https://github.com/paperclipai/paperclip",
"bugs": {
"url": "https://github.com/paperclipai/paperclip/issues"
},
"repository": {
"type": "git",
"url": "https://github.com/paperclipai/paperclip",
"directory": "packages/adapters/codex-local"
},
"type": "module",
"exports": {
".": "./src/index.ts",
@@ -38,7 +48,8 @@
"scripts": {
"build": "tsc",
"clean": "rm -rf dist",
"typecheck": "tsc --noEmit"
"typecheck": "tsc --noEmit",
"probe:quota": "pnpm exec tsx src/cli/quota-probe.ts --json"
},
"dependencies": {
"@paperclipai/adapter-utils": "workspace:*",

View File

@@ -0,0 +1,112 @@
#!/usr/bin/env node
import {
fetchCodexQuota,
fetchCodexRpcQuota,
getQuotaWindows,
readCodexAuthInfo,
readCodexToken,
} from "../server/quota.js";
interface ProbeArgs {
json: boolean;
rpcOnly: boolean;
whamOnly: boolean;
}
function parseArgs(argv: string[]): ProbeArgs {
return {
json: argv.includes("--json"),
rpcOnly: argv.includes("--rpc-only"),
whamOnly: argv.includes("--wham-only"),
};
}
function stringifyError(error: unknown): string {
return error instanceof Error ? error.message : String(error);
}
async function main() {
const args = parseArgs(process.argv.slice(2));
if (args.rpcOnly && args.whamOnly) {
throw new Error("Choose either --rpc-only or --wham-only, not both.");
}
const auth = await readCodexAuthInfo();
const token = await readCodexToken();
const result: Record<string, unknown> = {
timestamp: new Date().toISOString(),
auth,
tokenAvailable: token != null,
};
if (!args.whamOnly) {
try {
result.rpc = {
ok: true,
...(await fetchCodexRpcQuota()),
};
} catch (error) {
result.rpc = {
ok: false,
error: stringifyError(error),
windows: [],
};
}
}
if (!args.rpcOnly) {
if (!token) {
result.wham = {
ok: false,
error: "No local Codex auth token found in ~/.codex/auth.json.",
windows: [],
};
} else {
try {
result.wham = {
ok: true,
windows: await fetchCodexQuota(token.token, token.accountId),
};
} catch (error) {
result.wham = {
ok: false,
error: stringifyError(error),
windows: [],
};
}
}
}
if (!args.rpcOnly && !args.whamOnly) {
try {
result.aggregated = await getQuotaWindows();
} catch (error) {
result.aggregated = {
ok: false,
error: stringifyError(error),
};
}
}
const rpcOk = (result.rpc as { ok?: boolean } | undefined)?.ok === true;
const whamOk = (result.wham as { ok?: boolean } | undefined)?.ok === true;
const aggregatedOk = (result.aggregated as { ok?: boolean } | undefined)?.ok === true;
const ok = rpcOk || whamOk || aggregatedOk;
if (args.json || process.stdout.isTTY === false) {
console.log(JSON.stringify({ ok, ...result }, null, 2));
} else {
console.log(`timestamp: ${result.timestamp}`);
console.log(`auth: ${JSON.stringify(auth)}`);
console.log(`tokenAvailable: ${token != null}`);
if (result.rpc) console.log(`rpc: ${JSON.stringify(result.rpc, null, 2)}`);
if (result.wham) console.log(`wham: ${JSON.stringify(result.wham, null, 2)}`);
if (result.aggregated) console.log(`aggregated: ${JSON.stringify(result.aggregated, null, 2)}`);
}
if (!ok) process.exitCode = 1;
}
await main();
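// Example invocations (via the package script added above, or directly):
//   pnpm run probe:quota
//   pnpm exec tsx src/cli/quota-probe.ts --rpc-only
// The process exits non-zero when no probe path succeeds.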

View File

@@ -94,7 +94,7 @@ export async function prepareWorktreeCodexHome(
}
await onLog(
"stderr",
"stdout",
`[paperclip] Using worktree-isolated Codex home "${targetHome}" (seeded from "${sourceHome}").\n`,
);
return targetHome;

View File

@@ -1,7 +1,7 @@
import fs from "node:fs/promises";
import path from "node:path";
import { fileURLToPath } from "node:url";
-import type { AdapterExecutionContext, AdapterExecutionResult } from "@paperclipai/adapter-utils";
+import { inferOpenAiCompatibleBiller, type AdapterExecutionContext, type AdapterExecutionResult } from "@paperclipai/adapter-utils";
import {
asString,
asNumber,
@@ -61,6 +61,12 @@ function resolveCodexBillingType(env: Record<string, string>): "api" | "subscrip
return hasNonEmptyEnvValue(env, "OPENAI_API_KEY") ? "api" : "subscription";
}
function resolveCodexBiller(env: Record<string, string>, billingType: "api" | "subscription"): string {
const openAiCompatibleBiller = inferOpenAiCompatibleBiller(env, "openai");
if (openAiCompatibleBiller === "openrouter") return "openrouter";
return billingType === "subscription" ? "chatgpt" : openAiCompatibleBiller ?? "openai";
}
async function isLikelyPaperclipRepoRoot(candidate: string): Promise<boolean> {
const [hasWorkspace, hasPackageJson, hasServerDir, hasAdapterUtilsDir] = await Promise.all([
pathExists(path.join(candidate, "pnpm-workspace.yaml")),
@@ -110,7 +116,7 @@ export async function ensureCodexSkillsInjected(
);
for (const skillName of removedSkills) {
await onLog(
"stderr",
"stdout",
`[paperclip] Removed maintainer-only Codex skill "${skillName}" from ${skillsHome}\n`,
);
}
@@ -137,7 +143,7 @@ export async function ensureCodexSkillsInjected(
await fs.symlink(entry.source, target);
}
await onLog(
"stderr",
"stdout",
`[paperclip] Repaired Codex skill "${entry.name}" into ${skillsHome}\n`,
);
continue;
@@ -148,7 +154,7 @@ export async function ensureCodexSkillsInjected(
if (result === "skipped") continue;
await onLog(
"stderr",
"stdout",
`[paperclip] ${result === "repaired" ? "Repaired" : "Injected"} Codex skill "${entry.name}" into ${skillsHome}\n`,
);
} catch (err) {
@@ -188,6 +194,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const workspaceRepoRef = asString(workspaceContext.repoRef, "");
const workspaceBranch = asString(workspaceContext.branchName, "");
const workspaceWorktreePath = asString(workspaceContext.worktreePath, "");
const agentHome = asString(workspaceContext.agentHome, "");
const workspaceHints = Array.isArray(context.paperclipWorkspaces)
? context.paperclipWorkspaces.filter(
(value): value is Record<string, unknown> => typeof value === "object" && value !== null,
@@ -293,6 +300,9 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (workspaceWorktreePath) {
env.PAPERCLIP_WORKSPACE_WORKTREE_PATH = workspaceWorktreePath;
}
if (agentHome) {
env.AGENT_HOME = agentHome;
}
if (workspaceHints.length > 0) {
env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
}
@@ -311,8 +321,13 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (!hasExplicitApiKey && authToken) {
env.PAPERCLIP_API_KEY = authToken;
}
-const billingType = resolveCodexBillingType(env);
-const runtimeEnv = ensurePathInEnv({ ...process.env, ...env });
+const effectiveEnv = Object.fromEntries(
+Object.entries({ ...process.env, ...env }).filter(
+(entry): entry is [string, string] => typeof entry[1] === "string",
+),
+);
+const billingType = resolveCodexBillingType(effectiveEnv);
+const runtimeEnv = ensurePathInEnv(effectiveEnv);
await ensureCommandResolvable(command, cwd, runtimeEnv);
const timeoutSec = asNumber(config.timeoutSec, 0);
@@ -349,7 +364,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
`Resolve any relative file references from ${instructionsDir}.\n\n`;
instructionsChars = instructionsPrefix.length;
await onLog(
"stderr",
"stdout",
`[paperclip] Loaded agent instructions file: ${instructionsFilePath}\n`,
);
} catch (err) {
@@ -504,6 +519,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
sessionParams: resolvedSessionParams,
sessionDisplayId: resolvedSessionId,
provider: "openai",
biller: resolveCodexBiller(effectiveEnv, billingType),
model,
billingType,
costUsd: null,

View File

@@ -1,6 +1,17 @@
export { execute, ensureCodexSkillsInjected } from "./execute.js";
export { testEnvironment } from "./test.js";
export { parseCodexJsonl, isCodexUnknownSessionError } from "./parse.js";
export {
getQuotaWindows,
readCodexAuthInfo,
readCodexToken,
fetchCodexQuota,
fetchCodexRpcQuota,
mapCodexRpcQuota,
secondsToWindowLabel,
fetchWithTimeout,
codexHomeDir,
} from "./quota.js";
import type { AdapterSessionCodec } from "@paperclipai/adapter-utils";
function readNonEmptyString(value: unknown): string | null {

View File

@@ -0,0 +1,556 @@
import { spawn } from "node:child_process";
import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import type { ProviderQuotaResult, QuotaWindow } from "@paperclipai/adapter-utils";
const CODEX_USAGE_SOURCE_RPC = "codex-rpc";
const CODEX_USAGE_SOURCE_WHAM = "codex-wham";
export function codexHomeDir(): string {
const fromEnv = process.env.CODEX_HOME;
if (typeof fromEnv === "string" && fromEnv.trim().length > 0) return fromEnv.trim();
return path.join(os.homedir(), ".codex");
}
interface CodexLegacyAuthFile {
accessToken?: string | null;
accountId?: string | null;
}
interface CodexTokenBlock {
id_token?: string | null;
access_token?: string | null;
refresh_token?: string | null;
account_id?: string | null;
}
interface CodexModernAuthFile {
OPENAI_API_KEY?: string | null;
tokens?: CodexTokenBlock | null;
last_refresh?: string | null;
}
export interface CodexAuthInfo {
accessToken: string;
accountId: string | null;
refreshToken: string | null;
idToken: string | null;
email: string | null;
planType: string | null;
lastRefresh: string | null;
}
function base64UrlDecode(input: string): string | null {
try {
let normalized = input.replace(/-/g, "+").replace(/_/g, "/");
const remainder = normalized.length % 4;
if (remainder > 0) normalized += "=".repeat(4 - remainder);
return Buffer.from(normalized, "base64").toString("utf8");
} catch {
return null;
}
}
function decodeJwtPayload(token: string | null | undefined): Record<string, unknown> | null {
if (typeof token !== "string" || token.trim().length === 0) return null;
const parts = token.split(".");
if (parts.length < 2) return null;
const decoded = base64UrlDecode(parts[1] ?? "");
if (!decoded) return null;
try {
const parsed = JSON.parse(decoded) as unknown;
return typeof parsed === "object" && parsed !== null ? parsed as Record<string, unknown> : null;
} catch {
return null;
}
}
function readNestedString(record: Record<string, unknown>, pathSegments: string[]): string | null {
let current: unknown = record;
for (const segment of pathSegments) {
if (typeof current !== "object" || current === null || Array.isArray(current)) return null;
current = (current as Record<string, unknown>)[segment];
}
return typeof current === "string" && current.trim().length > 0 ? current.trim() : null;
}
function parsePlanAndEmailFromToken(idToken: string | null, accessToken: string | null): {
email: string | null;
planType: string | null;
} {
const payloads = [decodeJwtPayload(idToken), decodeJwtPayload(accessToken)].filter(
(value): value is Record<string, unknown> => value != null,
);
for (const payload of payloads) {
const directEmail = typeof payload.email === "string" ? payload.email : null;
const authBlock =
typeof payload["https://api.openai.com/auth"] === "object" &&
payload["https://api.openai.com/auth"] !== null &&
!Array.isArray(payload["https://api.openai.com/auth"])
? payload["https://api.openai.com/auth"] as Record<string, unknown>
: null;
const profileBlock =
typeof payload["https://api.openai.com/profile"] === "object" &&
payload["https://api.openai.com/profile"] !== null &&
!Array.isArray(payload["https://api.openai.com/profile"])
? payload["https://api.openai.com/profile"] as Record<string, unknown>
: null;
const email =
directEmail
?? (typeof profileBlock?.email === "string" ? profileBlock.email : null)
?? (typeof authBlock?.chatgpt_user_email === "string" ? authBlock.chatgpt_user_email : null);
const planType =
typeof authBlock?.chatgpt_plan_type === "string" ? authBlock.chatgpt_plan_type : null;
if (email || planType) return { email: email ?? null, planType };
}
return { email: null, planType: null };
}
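// Sketch of the claim layout probed above (assumed from the keys, not from any
// published schema): ChatGPT-issued JWTs carry URL-keyed custom claims, e.g.
//   { "https://api.openai.com/auth": { "chatgpt_plan_type": "plus",
//     "chatgpt_user_email": "user@example.com" } }
// so plan and email can be recovered offline, without an API round-trip.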
export async function readCodexAuthInfo(): Promise<CodexAuthInfo | null> {
const authPath = path.join(codexHomeDir(), "auth.json");
let raw: string;
try {
raw = await fs.readFile(authPath, "utf8");
} catch {
return null;
}
let parsed: unknown;
try {
parsed = JSON.parse(raw);
} catch {
return null;
}
if (typeof parsed !== "object" || parsed === null) return null;
const obj = parsed as Record<string, unknown>;
const modern = obj as CodexModernAuthFile;
const legacy = obj as CodexLegacyAuthFile;
const accessToken =
legacy.accessToken
?? modern.tokens?.access_token
?? readNestedString(obj, ["tokens", "access_token"]);
if (typeof accessToken !== "string" || accessToken.length === 0) return null;
const accountId =
legacy.accountId
?? modern.tokens?.account_id
?? readNestedString(obj, ["tokens", "account_id"]);
const refreshToken =
modern.tokens?.refresh_token
?? readNestedString(obj, ["tokens", "refresh_token"]);
const idToken =
modern.tokens?.id_token
?? readNestedString(obj, ["tokens", "id_token"]);
const { email, planType } = parsePlanAndEmailFromToken(idToken, accessToken);
return {
accessToken,
accountId:
typeof accountId === "string" && accountId.trim().length > 0 ? accountId.trim() : null,
refreshToken:
typeof refreshToken === "string" && refreshToken.trim().length > 0 ? refreshToken.trim() : null,
idToken:
typeof idToken === "string" && idToken.trim().length > 0 ? idToken.trim() : null,
email,
planType,
lastRefresh:
typeof modern.last_refresh === "string" && modern.last_refresh.trim().length > 0
? modern.last_refresh.trim()
: null,
};
}
export async function readCodexToken(): Promise<{ token: string; accountId: string | null } | null> {
const auth = await readCodexAuthInfo();
if (!auth) return null;
return { token: auth.accessToken, accountId: auth.accountId };
}
interface WhamWindow {
used_percent?: number | null;
limit_window_seconds?: number | null;
reset_at?: string | number | null;
}
interface WhamCredits {
balance?: number | null;
unlimited?: boolean | null;
}
interface WhamUsageResponse {
plan_type?: string | null;
rate_limit?: {
primary_window?: WhamWindow | null;
secondary_window?: WhamWindow | null;
} | null;
credits?: WhamCredits | null;
}
/**
* Map a window duration in seconds to a human-readable label.
* Falls back to the provided fallback string when seconds is null/undefined.
*/
export function secondsToWindowLabel(
seconds: number | null | undefined,
fallback: string,
): string {
if (seconds == null) return fallback;
const hours = seconds / 3600;
if (hours < 6) return "5h";
if (hours <= 24) return "24h";
if (hours <= 168) return "7d";
return `${Math.round(hours / 24)}d`;
}
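// e.g. secondsToWindowLabel(18_000, "?") === "5h", secondsToWindowLabel(604_800, "?") === "7d",
// secondsToWindowLabel(2_592_000, "?") === "30d", secondsToWindowLabel(null, "weekly") === "weekly".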
/** fetch with an abort-based timeout so a hanging provider api doesn't block the response indefinitely */
export async function fetchWithTimeout(
url: string,
init: RequestInit,
ms = 8000,
): Promise<Response> {
const controller = new AbortController();
const timer = setTimeout(() => controller.abort(), ms);
try {
return await fetch(url, { ...init, signal: controller.signal });
} finally {
clearTimeout(timer);
}
}
function normalizeCodexUsedPercent(rawPct: number | null | undefined): number | null {
if (rawPct == null) return null;
return Math.min(100, Math.round(rawPct < 1 ? rawPct * 100 : rawPct));
}
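// Accepts either a 0-1 fraction or a 0-100 percent (values below 1 are assumed
// to be fractions): 0.5 -> 50, 73.2 -> 73, 120 -> 100 (capped).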
export async function fetchCodexQuota(
token: string,
accountId: string | null,
): Promise<QuotaWindow[]> {
const headers: Record<string, string> = {
Authorization: `Bearer ${token}`,
};
if (accountId) headers["ChatGPT-Account-Id"] = accountId;
const resp = await fetchWithTimeout("https://chatgpt.com/backend-api/wham/usage", { headers });
if (!resp.ok) throw new Error(`chatgpt wham api returned ${resp.status}`);
const body = (await resp.json()) as WhamUsageResponse;
const windows: QuotaWindow[] = [];
const rateLimit = body.rate_limit;
if (rateLimit?.primary_window != null) {
const w = rateLimit.primary_window;
windows.push({
label: "5h limit",
usedPercent: normalizeCodexUsedPercent(w.used_percent),
resetsAt:
typeof w.reset_at === "number"
? unixSecondsToIso(w.reset_at)
: (w.reset_at ?? null),
valueLabel: null,
detail: null,
});
}
if (rateLimit?.secondary_window != null) {
const w = rateLimit.secondary_window;
windows.push({
label: "Weekly limit",
usedPercent: normalizeCodexUsedPercent(w.used_percent),
resetsAt:
typeof w.reset_at === "number"
? unixSecondsToIso(w.reset_at)
: (w.reset_at ?? null),
valueLabel: null,
detail: null,
});
}
if (body.credits != null && body.credits.unlimited !== true) {
const balance = body.credits.balance;
const valueLabel = balance != null ? `$${(balance / 100).toFixed(2)} remaining` : "N/A";
windows.push({
label: "Credits",
usedPercent: null,
resetsAt: null,
valueLabel,
detail: null,
});
}
return windows;
}
interface CodexRpcWindow {
usedPercent?: number | null;
windowDurationMins?: number | null;
resetsAt?: number | null;
}
interface CodexRpcCredits {
hasCredits?: boolean | null;
unlimited?: boolean | null;
balance?: string | number | null;
}
interface CodexRpcLimit {
limitId?: string | null;
limitName?: string | null;
primary?: CodexRpcWindow | null;
secondary?: CodexRpcWindow | null;
credits?: CodexRpcCredits | null;
planType?: string | null;
}
interface CodexRpcRateLimitsResult {
rateLimits?: CodexRpcLimit | null;
rateLimitsByLimitId?: Record<string, CodexRpcLimit> | null;
}
interface CodexRpcAccountResult {
account?: {
type?: string | null;
email?: string | null;
planType?: string | null;
} | null;
requiresOpenaiAuth?: boolean | null;
}
export interface CodexRpcQuotaSnapshot {
windows: QuotaWindow[];
email: string | null;
planType: string | null;
}
function unixSecondsToIso(value: number | null | undefined): string | null {
if (typeof value !== "number" || !Number.isFinite(value)) return null;
return new Date(value * 1000).toISOString();
}
function buildCodexRpcWindow(label: string, window: CodexRpcWindow | null | undefined): QuotaWindow | null {
if (!window) return null;
return {
label,
usedPercent: normalizeCodexUsedPercent(window.usedPercent),
resetsAt: unixSecondsToIso(window.resetsAt),
valueLabel: null,
detail: null,
};
}
function parseCreditBalance(value: string | number | null | undefined): string | null {
if (typeof value === "number" && Number.isFinite(value)) {
return `$${value.toFixed(2)} remaining`;
}
if (typeof value === "string" && value.trim().length > 0) {
const parsed = Number(value);
if (Number.isFinite(parsed)) {
return `$${parsed.toFixed(2)} remaining`;
}
return value.trim();
}
return null;
}
export function mapCodexRpcQuota(result: CodexRpcRateLimitsResult, account?: CodexRpcAccountResult | null): CodexRpcQuotaSnapshot {
const windows: QuotaWindow[] = [];
const limitOrder = ["codex"];
const limitsById = result.rateLimitsByLimitId ?? {};
for (const key of Object.keys(limitsById)) {
if (!limitOrder.includes(key)) limitOrder.push(key);
}
const rootLimit = result.rateLimits ?? null;
const allLimits = new Map<string, CodexRpcLimit>();
if (rootLimit?.limitId) allLimits.set(rootLimit.limitId, rootLimit);
for (const [key, value] of Object.entries(limitsById)) {
allLimits.set(key, value);
}
if (!allLimits.has("codex") && rootLimit) allLimits.set("codex", rootLimit);
for (const limitId of limitOrder) {
const limit = allLimits.get(limitId);
if (!limit) continue;
const prefix =
limitId === "codex"
? ""
: `${limit.limitName ?? limitId} · `;
const primary = buildCodexRpcWindow(`${prefix}5h limit`, limit.primary);
if (primary) windows.push(primary);
const secondary = buildCodexRpcWindow(`${prefix}Weekly limit`, limit.secondary);
if (secondary) windows.push(secondary);
if (limitId === "codex" && limit.credits && limit.credits.unlimited !== true) {
windows.push({
label: "Credits",
usedPercent: null,
resetsAt: null,
valueLabel: parseCreditBalance(limit.credits.balance) ?? "N/A",
detail: null,
});
}
}
return {
windows,
email:
typeof account?.account?.email === "string" && account.account.email.trim().length > 0
? account.account.email.trim()
: null,
planType:
typeof account?.account?.planType === "string" && account.account.planType.trim().length > 0
? account.account.planType.trim()
: (typeof rootLimit?.planType === "string" && rootLimit.planType.trim().length > 0 ? rootLimit.planType.trim() : null),
};
}
type PendingRequest = {
resolve: (value: Record<string, unknown>) => void;
reject: (error: Error) => void;
timer: NodeJS.Timeout;
};
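// Minimal newline-delimited JSON-RPC-style client over `codex app-server` stdio:
// requests get an incrementing id, responses are matched back to their pending
// promise by id, and a per-request timer rejects slow calls. The server is
// spawned read-only/untrusted so the quota probe cannot mutate anything.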
class CodexRpcClient {
private proc = spawn(
"codex",
["-s", "read-only", "-a", "untrusted", "app-server"],
{ stdio: ["pipe", "pipe", "pipe"], env: process.env },
);
private nextId = 1;
private buffer = "";
private pending = new Map<number, PendingRequest>();
private stderr = "";
constructor() {
this.proc.stdout.setEncoding("utf8");
this.proc.stderr.setEncoding("utf8");
this.proc.stdout.on("data", (chunk: string) => this.onStdout(chunk));
this.proc.stderr.on("data", (chunk: string) => {
this.stderr += chunk;
});
this.proc.on("exit", () => {
for (const request of this.pending.values()) {
clearTimeout(request.timer);
request.reject(new Error(this.stderr.trim() || "codex app-server closed unexpectedly"));
}
this.pending.clear();
});
}
private onStdout(chunk: string) {
this.buffer += chunk;
while (true) {
const newlineIndex = this.buffer.indexOf("\n");
if (newlineIndex < 0) break;
const line = this.buffer.slice(0, newlineIndex).trim();
this.buffer = this.buffer.slice(newlineIndex + 1);
if (!line) continue;
let parsed: Record<string, unknown>;
try {
parsed = JSON.parse(line) as Record<string, unknown>;
} catch {
continue;
}
const id = typeof parsed.id === "number" ? parsed.id : null;
if (id == null) continue;
const pending = this.pending.get(id);
if (!pending) continue;
this.pending.delete(id);
clearTimeout(pending.timer);
pending.resolve(parsed);
}
}
private request(method: string, params: Record<string, unknown> = {}, timeoutMs = 6_000): Promise<Record<string, unknown>> {
const id = this.nextId++;
const payload = JSON.stringify({ id, method, params }) + "\n";
return new Promise<Record<string, unknown>>((resolve, reject) => {
const timer = setTimeout(() => {
this.pending.delete(id);
reject(new Error(`codex app-server timed out on ${method}`));
}, timeoutMs);
this.pending.set(id, { resolve, reject, timer });
this.proc.stdin.write(payload);
});
}
private notify(method: string, params: Record<string, unknown> = {}) {
this.proc.stdin.write(JSON.stringify({ method, params }) + "\n");
}
async initialize() {
await this.request("initialize", {
clientInfo: {
name: "paperclip",
version: "0.0.0",
},
});
this.notify("initialized", {});
}
async fetchRateLimits(): Promise<CodexRpcRateLimitsResult> {
const message = await this.request("account/rateLimits/read");
return (message.result as CodexRpcRateLimitsResult | undefined) ?? {};
}
async fetchAccount(): Promise<CodexRpcAccountResult | null> {
try {
const message = await this.request("account/read");
return (message.result as CodexRpcAccountResult | undefined) ?? null;
} catch {
return null;
}
}
async shutdown() {
this.proc.kill("SIGTERM");
}
}
export async function fetchCodexRpcQuota(): Promise<CodexRpcQuotaSnapshot> {
const client = new CodexRpcClient();
try {
await client.initialize();
const [limits, account] = await Promise.all([
client.fetchRateLimits(),
client.fetchAccount(),
]);
return mapCodexRpcQuota(limits, account);
} finally {
await client.shutdown();
}
}
function formatProviderError(source: string, error: unknown): string {
const message = error instanceof Error ? error.message : String(error);
return `${source}: ${message}`;
}
export async function getQuotaWindows(): Promise<ProviderQuotaResult> {
const errors: string[] = [];
try {
const rpc = await fetchCodexRpcQuota();
if (rpc.windows.length > 0) {
return { provider: "openai", source: CODEX_USAGE_SOURCE_RPC, ok: true, windows: rpc.windows };
}
} catch (error) {
errors.push(formatProviderError("Codex app-server", error));
}
const auth = await readCodexToken();
if (auth) {
try {
const windows = await fetchCodexQuota(auth.token, auth.accountId);
return { provider: "openai", source: CODEX_USAGE_SOURCE_WHAM, ok: true, windows };
} catch (error) {
errors.push(formatProviderError("ChatGPT WHAM usage", error));
}
} else {
errors.push("no local codex auth token");
}
return {
provider: "openai",
ok: false,
error: errors.join("; "),
windows: [],
};
}
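// Resolution order: the app-server RPC snapshot wins when it yields any windows;
// otherwise fall back to the WHAM HTTP endpoint using the cached auth.json
// token. Failures accumulate so the final error names every path tried.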

View File

@@ -1,5 +1,13 @@
# @paperclipai/adapter-cursor-local
## 0.3.1
### Patch Changes
- Stable release preparation for 0.3.1
- Updated dependencies
- @paperclipai/adapter-utils@0.3.1
## 0.3.0
### Minor Changes

View File

@@ -1,6 +1,16 @@
{
"name": "@paperclipai/adapter-cursor-local",
"version": "0.3.0",
"version": "0.3.1",
"license": "MIT",
"homepage": "https://github.com/paperclipai/paperclip",
"bugs": {
"url": "https://github.com/paperclipai/paperclip/issues"
},
"repository": {
"type": "git",
"url": "https://github.com/paperclipai/paperclip",
"directory": "packages/adapters/cursor-local"
},
"type": "module",
"exports": {
".": "./src/index.ts",

View File

@@ -2,7 +2,7 @@ import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { fileURLToPath } from "node:url";
-import type { AdapterExecutionContext, AdapterExecutionResult } from "@paperclipai/adapter-utils";
+import { inferOpenAiCompatibleBiller, type AdapterExecutionContext, type AdapterExecutionResult } from "@paperclipai/adapter-utils";
import {
asString,
asNumber,
@@ -47,6 +47,17 @@ function resolveCursorBillingType(env: Record<string, string>): "api" | "subscri
: "subscription";
}
function resolveCursorBiller(
env: Record<string, string>,
billingType: "api" | "subscription",
provider: string | null,
): string {
const openAiCompatibleBiller = inferOpenAiCompatibleBiller(env, null);
if (openAiCompatibleBiller === "openrouter") return "openrouter";
if (billingType === "subscription") return "cursor";
return provider ?? "cursor";
}
function resolveProviderFromModel(model: string): string | null {
const trimmed = model.trim().toLowerCase();
if (!trimmed) return null;
@@ -157,6 +168,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const workspaceId = asString(workspaceContext.workspaceId, "");
const workspaceRepoUrl = asString(workspaceContext.repoUrl, "");
const workspaceRepoRef = asString(workspaceContext.repoRef, "");
const agentHome = asString(workspaceContext.agentHome, "");
const workspaceHints = Array.isArray(context.paperclipWorkspaces)
? context.paperclipWorkspaces.filter(
(value): value is Record<string, unknown> => typeof value === "object" && value !== null,
@@ -230,6 +242,9 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (workspaceRepoRef) {
env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
}
if (agentHome) {
env.AGENT_HOME = agentHome;
}
if (workspaceHints.length > 0) {
env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
}
@@ -239,8 +254,13 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (!hasExplicitApiKey && authToken) {
env.PAPERCLIP_API_KEY = authToken;
}
-const billingType = resolveCursorBillingType(env);
-const runtimeEnv = ensurePathInEnv({ ...process.env, ...env });
+const effectiveEnv = Object.fromEntries(
+Object.entries({ ...process.env, ...env }).filter(
+(entry): entry is [string, string] => typeof entry[1] === "string",
+),
+);
+const billingType = resolveCursorBillingType(effectiveEnv);
+const runtimeEnv = ensurePathInEnv(effectiveEnv);
await ensureCommandResolvable(command, cwd, runtimeEnv);
const timeoutSec = asNumber(config.timeoutSec, 0);
@@ -470,6 +490,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
sessionParams: resolvedSessionParams,
sessionDisplayId: resolvedSessionId,
provider: providerFromModel,
biller: resolveCursorBiller(effectiveEnv, billingType, providerFromModel),
model,
billingType,
costUsd: attempt.parsed.costUsd,

View File

@@ -1,6 +1,16 @@
{
"name": "@paperclipai/adapter-gemini-local",
"version": "0.2.7",
"version": "0.3.1",
"license": "MIT",
"homepage": "https://github.com/paperclipai/paperclip",
"bugs": {
"url": "https://github.com/paperclipai/paperclip/issues"
},
"repository": {
"type": "git",
"url": "https://github.com/paperclipai/paperclip",
"directory": "packages/adapters/gemini-local"
},
"type": "module",
"exports": {
".": "./src/index.ts",

View File

@@ -145,6 +145,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const workspaceId = asString(workspaceContext.workspaceId, "");
const workspaceRepoUrl = asString(workspaceContext.repoUrl, "");
const workspaceRepoRef = asString(workspaceContext.repoRef, "");
const agentHome = asString(workspaceContext.agentHome, "");
const workspaceHints = Array.isArray(context.paperclipWorkspaces)
? context.paperclipWorkspaces.filter(
(value): value is Record<string, unknown> => typeof value === "object" && value !== null,
@@ -196,6 +197,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (workspaceId) env.PAPERCLIP_WORKSPACE_ID = workspaceId;
if (workspaceRepoUrl) env.PAPERCLIP_WORKSPACE_REPO_URL = workspaceRepoUrl;
if (workspaceRepoRef) env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
if (agentHome) env.AGENT_HOME = agentHome;
if (workspaceHints.length > 0) env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
for (const [key, value] of Object.entries(envConfig)) {
@@ -204,8 +206,13 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (!hasExplicitApiKey && authToken) {
env.PAPERCLIP_API_KEY = authToken;
}
-const billingType = resolveGeminiBillingType(env);
-const runtimeEnv = ensurePathInEnv({ ...process.env, ...env });
+const effectiveEnv = Object.fromEntries(
+Object.entries({ ...process.env, ...env }).filter(
+(entry): entry is [string, string] => typeof entry[1] === "string",
+),
+);
+const billingType = resolveGeminiBillingType(effectiveEnv);
+const runtimeEnv = ensurePathInEnv(effectiveEnv);
await ensureCommandResolvable(command, cwd, runtimeEnv);
const timeoutSec = asNumber(config.timeoutSec, 0);
@@ -418,6 +425,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
sessionParams: resolvedSessionParams,
sessionDisplayId: resolvedSessionId,
provider: "google",
biller: "google",
model,
billingType,
costUsd: attempt.parsed.costUsd,

View File

@@ -1,5 +1,13 @@
# @paperclipai/adapter-openclaw-gateway
## 0.3.1
### Patch Changes
- Stable release preparation for 0.3.1
- Updated dependencies
- @paperclipai/adapter-utils@0.3.1
## 0.3.0
### Minor Changes

View File

@@ -1,6 +1,16 @@
{
"name": "@paperclipai/adapter-openclaw-gateway",
"version": "0.3.0",
"version": "0.3.1",
"license": "MIT",
"homepage": "https://github.com/paperclipai/paperclip",
"bugs": {
"url": "https://github.com/paperclipai/paperclip/issues"
},
"repository": {
"type": "git",
"url": "https://github.com/paperclipai/paperclip",
"directory": "packages/adapters/openclaw-gateway"
},
"type": "module",
"exports": {
".": "./src/index.ts",

View File

@@ -1,5 +1,13 @@
# @paperclipai/adapter-opencode-local
## 0.3.1
### Patch Changes
- Stable release preparation for 0.3.1
- Updated dependencies
- @paperclipai/adapter-utils@0.3.1
## 0.3.0
### Minor Changes

View File

@@ -1,6 +1,16 @@
{
"name": "@paperclipai/adapter-opencode-local",
"version": "0.3.0",
"version": "0.3.1",
"license": "MIT",
"homepage": "https://github.com/paperclipai/paperclip",
"bugs": {
"url": "https://github.com/paperclipai/paperclip/issues"
},
"repository": {
"type": "git",
"url": "https://github.com/paperclipai/paperclip",
"directory": "packages/adapters/opencode-local"
},
"type": "module",
"exports": {
".": "./src/index.ts",

View File

@@ -2,7 +2,7 @@ import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { fileURLToPath } from "node:url";
-import type { AdapterExecutionContext, AdapterExecutionResult } from "@paperclipai/adapter-utils";
+import { inferOpenAiCompatibleBiller, type AdapterExecutionContext, type AdapterExecutionResult } from "@paperclipai/adapter-utils";
import {
asString,
asNumber,
@@ -42,6 +42,10 @@ function parseModelProvider(model: string | null): string | null {
return trimmed.slice(0, trimmed.indexOf("/")).trim() || null;
}
function resolveOpenCodeBiller(env: Record<string, string>, provider: string | null): string {
return inferOpenAiCompatibleBiller(env, null) ?? provider ?? "unknown";
}
function claudeSkillsHome(): string {
return path.join(os.homedir(), ".claude", "skills");
}
@@ -100,6 +104,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const workspaceId = asString(workspaceContext.workspaceId, "");
const workspaceRepoUrl = asString(workspaceContext.repoUrl, "");
const workspaceRepoRef = asString(workspaceContext.repoRef, "");
const agentHome = asString(workspaceContext.agentHome, "");
const workspaceHints = Array.isArray(context.paperclipWorkspaces)
? context.paperclipWorkspaces.filter(
(value): value is Record<string, unknown> => typeof value === "object" && value !== null,
@@ -151,6 +156,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (workspaceId) env.PAPERCLIP_WORKSPACE_ID = workspaceId;
if (workspaceRepoUrl) env.PAPERCLIP_WORKSPACE_REPO_URL = workspaceRepoUrl;
if (workspaceRepoRef) env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
if (agentHome) env.AGENT_HOME = agentHome;
if (workspaceHints.length > 0) env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
for (const [key, value] of Object.entries(envConfig)) {
@@ -359,6 +365,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
sessionParams: resolvedSessionParams,
sessionDisplayId: resolvedSessionId,
provider: parseModelProvider(modelId),
biller: resolveOpenCodeBiller(runtimeEnv, parseModelProvider(modelId)),
model: modelId,
billingType: "unknown",
costUsd: attempt.parsed.costUsd,

View File

@@ -1,4 +1,5 @@
import { createHash } from "node:crypto";
import os from "node:os";
import type { AdapterModel } from "@paperclipai/adapter-utils";
import {
asString,
@@ -20,7 +21,7 @@ function resolveOpenCodeCommand(input: unknown): string {
const discoveryCache = new Map<string, { expiresAt: number; models: AdapterModel[] }>();
const VOLATILE_ENV_KEY_PREFIXES = ["PAPERCLIP_", "npm_", "NPM_"] as const;
-const VOLATILE_ENV_KEY_EXACT = new Set(["PWD", "OLDPWD", "SHLVL", "_", "TERM_SESSION_ID"]);
+const VOLATILE_ENV_KEY_EXACT = new Set(["PWD", "OLDPWD", "SHLVL", "_", "TERM_SESSION_ID", "HOME"]);
function dedupeModels(models: AdapterModel[]): AdapterModel[] {
const seen = new Set<string>();
@@ -107,7 +108,19 @@ export async function discoverOpenCodeModels(input: {
const command = resolveOpenCodeCommand(input.command);
const cwd = asString(input.cwd, process.cwd());
const env = normalizeEnv(input.env);
-const runtimeEnv = normalizeEnv(ensurePathInEnv({ ...process.env, ...env }));
+// Ensure HOME points to the actual running user's home directory.
+// When the server is started via `runuser -u <user>`, HOME may still
+// reflect the parent process (e.g. /root), causing OpenCode to miss
+// provider auth credentials stored under the target user's home.
+let resolvedHome: string | undefined;
+try {
+resolvedHome = os.userInfo().homedir || undefined;
+} catch {
+// os.userInfo() throws a SystemError when the current UID has no
+// /etc/passwd entry (e.g. `docker run --user 1234` with a minimal
+// image). Fall back to process.env.HOME.
+}
+const runtimeEnv = normalizeEnv(ensurePathInEnv({ ...process.env, ...env, ...(resolvedHome ? { HOME: resolvedHome } : {}) }));
const result = await runChildProcess(
`opencode-models-${Date.now()}-${Math.random().toString(16).slice(2)}`,

View File

@@ -1,5 +1,13 @@
# @paperclipai/adapter-pi-local
## 0.3.1
### Patch Changes
- Stable release preparation for 0.3.1
- Updated dependencies
- @paperclipai/adapter-utils@0.3.1
## 0.3.0
### Minor Changes

View File

@@ -1,6 +1,16 @@
{
"name": "@paperclipai/adapter-pi-local",
"version": "0.3.0",
"version": "0.3.1",
"license": "MIT",
"homepage": "https://github.com/paperclipai/paperclip",
"bugs": {
"url": "https://github.com/paperclipai/paperclip/issues"
},
"repository": {
"type": "git",
"url": "https://github.com/paperclipai/paperclip",
"directory": "packages/adapters/pi-local"
},
"type": "module",
"exports": {
".": "./src/index.ts",

View File

@@ -2,7 +2,7 @@ import fs from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { fileURLToPath } from "node:url";
-import type { AdapterExecutionContext, AdapterExecutionResult } from "@paperclipai/adapter-utils";
+import { inferOpenAiCompatibleBiller, type AdapterExecutionContext, type AdapterExecutionResult } from "@paperclipai/adapter-utils";
import {
asString,
asNumber,
@@ -50,6 +50,10 @@ function parseModelId(model: string | null): string | null {
return trimmed.slice(trimmed.indexOf("/") + 1).trim() || null;
}
function resolvePiBiller(env: Record<string, string>, provider: string | null): string {
return inferOpenAiCompatibleBiller(env, null) ?? provider ?? "unknown";
}
async function ensurePiSkillsInjected(onLog: AdapterExecutionContext["onLog"]) {
const skillsEntries = await listPaperclipSkillEntries(__moduleDir);
if (skillsEntries.length === 0) return;
@@ -117,6 +121,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const workspaceId = asString(workspaceContext.workspaceId, "");
const workspaceRepoUrl = asString(workspaceContext.repoUrl, "");
const workspaceRepoRef = asString(workspaceContext.repoRef, "");
const agentHome = asString(workspaceContext.agentHome, "");
const workspaceHints = Array.isArray(context.paperclipWorkspaces)
? context.paperclipWorkspaces.filter(
(value): value is Record<string, unknown> => typeof value === "object" && value !== null,
@@ -176,6 +181,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (workspaceId) env.PAPERCLIP_WORKSPACE_ID = workspaceId;
if (workspaceRepoUrl) env.PAPERCLIP_WORKSPACE_REPO_URL = workspaceRepoUrl;
if (workspaceRepoRef) env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
if (agentHome) env.AGENT_HOME = agentHome;
if (workspaceHints.length > 0) env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
for (const [key, value] of Object.entries(envConfig)) {
@@ -445,6 +451,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
sessionParams: resolvedSessionParams,
sessionDisplayId: resolvedSessionId,
provider: provider,
biller: resolvePiBiller(runtimeEnv, provider),
model: model,
billingType: "unknown",
costUsd: attempt.parsed.usage.costUsd,

View File

@@ -1,5 +1,13 @@
# @paperclipai/db
## 0.3.1
### Patch Changes
- Stable release preparation for 0.3.1
- Updated dependencies
- @paperclipai/shared@0.3.1
## 0.3.0
### Minor Changes

View File

@@ -1,6 +1,16 @@
{
"name": "@paperclipai/db",
"version": "0.3.0",
"version": "0.3.1",
"license": "MIT",
"homepage": "https://github.com/paperclipai/paperclip",
"bugs": {
"url": "https://github.com/paperclipai/paperclip/issues"
},
"repository": {
"type": "git",
"url": "https://github.com/paperclipai/paperclip",
"directory": "packages/db"
},
"type": "module",
"exports": {
".": "./src/index.ts",
@@ -35,6 +45,7 @@
"dependencies": {
"@paperclipai/shared": "workspace:*",
"drizzle-orm": "^0.38.4",
"embedded-postgres": "^18.1.0-beta.16",
"postgres": "^3.4.5"
},
"devDependencies": {

View File

@@ -0,0 +1,157 @@
import { createHash } from "node:crypto";
import fs from "node:fs";
import net from "node:net";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it } from "vitest";
import postgres from "postgres";
import {
applyPendingMigrations,
ensurePostgresDatabase,
inspectMigrations,
} from "./client.js";
type EmbeddedPostgresInstance = {
initialise(): Promise<void>;
start(): Promise<void>;
stop(): Promise<void>;
};
type EmbeddedPostgresCtor = new (opts: {
databaseDir: string;
user: string;
password: string;
port: number;
persistent: boolean;
initdbFlags?: string[];
onLog?: (message: unknown) => void;
onError?: (message: unknown) => void;
}) => EmbeddedPostgresInstance;
const tempPaths: string[] = [];
const runningInstances: EmbeddedPostgresInstance[] = [];
async function getEmbeddedPostgresCtor(): Promise<EmbeddedPostgresCtor> {
const mod = await import("embedded-postgres");
return mod.default as EmbeddedPostgresCtor;
}
async function getAvailablePort(): Promise<number> {
return await new Promise((resolve, reject) => {
const server = net.createServer();
server.unref();
server.on("error", reject);
server.listen(0, "127.0.0.1", () => {
const address = server.address();
if (!address || typeof address === "string") {
server.close(() => reject(new Error("Failed to allocate test port")));
return;
}
const { port } = address;
server.close((error) => {
if (error) reject(error);
else resolve(port);
});
});
});
}
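// Note: the probe port is released before embedded-postgres binds it, so another
// process could grab it in between; an acceptable race for a local test helper.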
async function createTempDatabase(): Promise<string> {
const dataDir = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-db-client-"));
tempPaths.push(dataDir);
const port = await getAvailablePort();
const EmbeddedPostgres = await getEmbeddedPostgresCtor();
const instance = new EmbeddedPostgres({
databaseDir: dataDir,
user: "paperclip",
password: "paperclip",
port,
persistent: true,
initdbFlags: ["--encoding=UTF8", "--locale=C"],
onLog: () => {},
onError: () => {},
});
await instance.initialise();
await instance.start();
runningInstances.push(instance);
const adminUrl = `postgres://paperclip:paperclip@127.0.0.1:${port}/postgres`;
await ensurePostgresDatabase(adminUrl, "paperclip");
return `postgres://paperclip:paperclip@127.0.0.1:${port}/paperclip`;
}
async function migrationHash(migrationFile: string): Promise<string> {
const content = await fs.promises.readFile(
new URL(`./migrations/${migrationFile}`, import.meta.url),
"utf8",
);
return createHash("sha256").update(content).digest("hex");
}
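// migrationHash mirrors the hash drizzle records per migration (sha256 of the
// raw SQL file), which lets the test below delete exactly one journal row to
// simulate a migration inserted into history after later ones already ran.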
afterEach(async () => {
while (runningInstances.length > 0) {
const instance = runningInstances.pop();
if (!instance) continue;
await instance.stop();
}
while (tempPaths.length > 0) {
const tempPath = tempPaths.pop();
if (!tempPath) continue;
fs.rmSync(tempPath, { recursive: true, force: true });
}
});
describe("applyPendingMigrations", () => {
it(
"applies an inserted earlier migration without replaying later legacy migrations",
async () => {
const connectionString = await createTempDatabase();
await applyPendingMigrations(connectionString);
const sql = postgres(connectionString, { max: 1, onnotice: () => {} });
try {
const richMagnetoHash = await migrationHash("0030_rich_magneto.sql");
await sql.unsafe(
`DELETE FROM "drizzle"."__drizzle_migrations" WHERE hash = '${richMagnetoHash}'`,
);
await sql.unsafe(`DROP TABLE "company_logos"`);
} finally {
await sql.end();
}
const pendingState = await inspectMigrations(connectionString);
expect(pendingState).toMatchObject({
status: "needsMigrations",
pendingMigrations: ["0030_rich_magneto.sql"],
reason: "pending-migrations",
});
await applyPendingMigrations(connectionString);
const finalState = await inspectMigrations(connectionString);
expect(finalState.status).toBe("upToDate");
const verifySql = postgres(connectionString, { max: 1, onnotice: () => {} });
try {
const rows = await verifySql.unsafe<{ table_name: string }[]>(
`
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name IN ('company_logos', 'execution_workspaces')
ORDER BY table_name
`,
);
expect(rows.map((row) => row.table_name)).toEqual([
"company_logos",
"execution_workspaces",
]);
} finally {
await verifySql.end();
}
},
20_000,
);
});

View File

@@ -50,6 +50,21 @@ export function createDb(url: string) {
return drizzlePg(sql, { schema });
}
export async function getPostgresDataDirectory(url: string): Promise<string | null> {
const sql = createUtilitySql(url);
try {
const rows = await sql<{ data_directory: string | null }[]>`
SELECT current_setting('data_directory', true) AS data_directory
`;
const actual = rows[0]?.data_directory;
return typeof actual === "string" && actual.length > 0 ? actual : null;
} catch {
return null;
} finally {
await sql.end();
}
}
async function listMigrationFiles(): Promise<string[]> {
const entries = await readdir(MIGRATIONS_FOLDER, { withFileTypes: true });
return entries
@@ -646,13 +661,26 @@ export async function applyPendingMigrations(url: string): Promise<void> {
const initialState = await inspectMigrations(url);
if (initialState.status === "upToDate") return;
+const sql = createUtilitySql(url);
if (initialState.reason === "no-migration-journal-empty-db") {
-const sql = createUtilitySql(url);
-try {
-const db = drizzlePg(sql);
-await migratePg(db, { migrationsFolder: MIGRATIONS_FOLDER });
-} finally {
-await sql.end();
-}
+try {
+const db = drizzlePg(sql);
+await migratePg(db, { migrationsFolder: MIGRATIONS_FOLDER });
+} finally {
+await sql.end();
+}
+const bootstrappedState = await inspectMigrations(url);
+if (bootstrappedState.status === "upToDate") return;
+throw new Error(
+`Failed to bootstrap migrations: ${bootstrappedState.pendingMigrations.join(", ")}`,
+);
}
+if (initialState.reason === "no-migration-journal-non-empty-db") {
+throw new Error(
+"Database has tables but no migration journal; automatic migration is unsafe. Initialize migration history manually.",
+);
+}
let state = await inspectMigrations(url);
@@ -665,7 +693,7 @@ export async function applyPendingMigrations(url: string): Promise<void> {
}
if (state.status !== "needsMigrations" || state.reason !== "pending-migrations") {
throw new Error("Migrations are still pending after attempted apply; run inspectMigrations for details.");
throw new Error("Migrations are still pending after migration-history reconciliation; run inspectMigrations for details.");
}
await applyPendingMigrationsManually(url, state.pendingMigrations);

View File

@@ -1,5 +1,6 @@
export {
createDb,
getPostgresDataDirectory,
ensurePostgresDatabase,
inspectMigrations,
applyPendingMigrations,

View File

@@ -1,8 +1,7 @@
import { existsSync, readFileSync, rmSync } from "node:fs";
import { createRequire } from "node:module";
import { createServer } from "node:net";
import path from "node:path";
import { fileURLToPath, pathToFileURL } from "node:url";
import { ensurePostgresDatabase } from "./client.js";
import { ensurePostgresDatabase, getPostgresDataDirectory } from "./client.js";
import { resolveDatabaseTarget } from "./runtime-config.js";
type EmbeddedPostgresInstance = {
@@ -28,6 +27,18 @@ export type MigrationConnection = {
   stop: () => Promise<void>;
 };
 
+function toError(error: unknown, fallbackMessage: string): Error {
+  if (error instanceof Error) return error;
+  if (error === undefined) return new Error(fallbackMessage);
+  if (typeof error === "string") return new Error(`${fallbackMessage}: ${error}`);
+  try {
+    return new Error(`${fallbackMessage}: ${JSON.stringify(error)}`);
+  } catch {
+    return new Error(`${fallbackMessage}: ${String(error)}`);
+  }
+}
+
 function readRunningPostmasterPid(postmasterPidFile: string): number | null {
   if (!existsSync(postmasterPidFile)) return null;
   try {
@@ -51,18 +62,34 @@ function readPidFilePort(postmasterPidFile: string): number | null {
   }
 }
-async function loadEmbeddedPostgresCtor(): Promise<EmbeddedPostgresCtor> {
-  const require = createRequire(import.meta.url);
-  const resolveCandidates = [
-    path.resolve(fileURLToPath(new URL("../..", import.meta.url))),
-    path.resolve(fileURLToPath(new URL("../../server", import.meta.url))),
-    path.resolve(fileURLToPath(new URL("../../cli", import.meta.url))),
-    process.cwd(),
-  ];
+async function isPortInUse(port: number): Promise<boolean> {
+  return await new Promise((resolve) => {
+    const server = createServer();
+    server.unref();
+    server.once("error", (error: NodeJS.ErrnoException) => {
+      resolve(error.code === "EADDRINUSE");
+    });
+    server.listen(port, "127.0.0.1", () => {
+      server.close();
+      resolve(false);
+    });
+  });
+}
+
+async function findAvailablePort(startPort: number): Promise<number> {
+  const maxLookahead = 20;
+  let port = startPort;
+  for (let i = 0; i < maxLookahead; i += 1, port += 1) {
+    if (!(await isPortInUse(port))) return port;
+  }
+  throw new Error(
+    `Embedded PostgreSQL could not find a free port from ${startPort} to ${startPort + maxLookahead - 1}`,
+  );
+}
+
+async function loadEmbeddedPostgresCtor(): Promise<EmbeddedPostgresCtor> {
   try {
-    const resolvedModulePath = require.resolve("embedded-postgres", { paths: resolveCandidates });
-    const mod = await import(pathToFileURL(resolvedModulePath).href);
+    const mod = await import("embedded-postgres");
     return mod.default as EmbeddedPostgresCtor;
   } catch {
     throw new Error(
@@ -76,9 +103,35 @@ async function ensureEmbeddedPostgresConnection(
   preferredPort: number,
 ): Promise<MigrationConnection> {
   const EmbeddedPostgres = await loadEmbeddedPostgresCtor();
+  const selectedPort = await findAvailablePort(preferredPort);
   const postmasterPidFile = path.resolve(dataDir, "postmaster.pid");
+  const pgVersionFile = path.resolve(dataDir, "PG_VERSION");
   const runningPid = readRunningPostmasterPid(postmasterPidFile);
   const runningPort = readPidFilePort(postmasterPidFile);
+  const preferredAdminConnectionString = `postgres://paperclip:paperclip@127.0.0.1:${preferredPort}/postgres`;
+  if (!runningPid && existsSync(pgVersionFile)) {
+    try {
+      const actualDataDir = await getPostgresDataDirectory(preferredAdminConnectionString);
+      const matchesDataDir =
+        typeof actualDataDir === "string" &&
+        path.resolve(actualDataDir) === path.resolve(dataDir);
+      if (!matchesDataDir) {
+        throw new Error("reachable postgres does not use the expected embedded data directory");
+      }
+      await ensurePostgresDatabase(preferredAdminConnectionString, "paperclip");
+      process.emitWarning(
+        `Adopting an existing PostgreSQL instance on port ${preferredPort} for embedded data dir ${dataDir} because postmaster.pid is missing.`,
+      );
+      return {
+        connectionString: `postgres://paperclip:paperclip@127.0.0.1:${preferredPort}/paperclip`,
+        source: `embedded-postgres@${preferredPort}`,
+        stop: async () => {},
+      };
+    } catch {
+      // Fall through and attempt to start the configured embedded cluster.
+    }
+  }
   if (runningPid) {
     const port = runningPort ?? preferredPort;
@@ -95,7 +148,7 @@ async function ensureEmbeddedPostgresConnection(
     databaseDir: dataDir,
     user: "paperclip",
     password: "paperclip",
-    port: preferredPort,
+    port: selectedPort,
     persistent: true,
     initdbFlags: ["--encoding=UTF8", "--locale=C"],
     onLog: () => {},
@@ -103,19 +156,30 @@ async function ensureEmbeddedPostgresConnection(
   });
   if (!existsSync(path.resolve(dataDir, "PG_VERSION"))) {
-    await instance.initialise();
+    try {
+      await instance.initialise();
+    } catch (error) {
+      throw toError(
+        error,
+        `Failed to initialize embedded PostgreSQL cluster in ${dataDir} on port ${selectedPort}`,
+      );
+    }
   }
   if (existsSync(postmasterPidFile)) {
     rmSync(postmasterPidFile, { force: true });
   }
-  await instance.start();
+  try {
+    await instance.start();
+  } catch (error) {
+    throw toError(error, `Failed to start embedded PostgreSQL on port ${selectedPort}`);
+  }
-  const adminConnectionString = `postgres://paperclip:paperclip@127.0.0.1:${preferredPort}/postgres`;
+  const adminConnectionString = `postgres://paperclip:paperclip@127.0.0.1:${selectedPort}/postgres`;
   await ensurePostgresDatabase(adminConnectionString, "paperclip");
   return {
-    connectionString: `postgres://paperclip:paperclip@127.0.0.1:${preferredPort}/paperclip`,
-    source: `embedded-postgres@${preferredPort}`,
+    connectionString: `postgres://paperclip:paperclip@127.0.0.1:${selectedPort}/paperclip`,
+    source: `embedded-postgres@${selectedPort}`,
     stop: async () => {
       await instance.stop();
     },
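
Callers only ever see the MigrationConnection shape, so whether a cluster was adopted, reattached via postmaster.pid, or freshly started stays an implementation detail. A hedged usage sketch, assuming resolveMigrationConnection takes no arguments as in the status script below:

import { resolveMigrationConnection } from "./migration-runtime.js";
import { applyPendingMigrations } from "./client.js";

const connection = await resolveMigrationConnection();
try {
  console.log(`migrating via ${connection.source}`);
  await applyPendingMigrations(connection.connectionString);
} finally {
  // An adopted instance gets a no-op stop(); a freshly started
  // embedded cluster is shut down here.
  await connection.stop();
}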


@@ -3,6 +3,18 @@ import { resolveMigrationConnection } from "./migration-runtime.js";
 const jsonMode = process.argv.includes("--json");
 
+function toError(error: unknown, context = "Migration status check failed"): Error {
+  if (error instanceof Error) return error;
+  if (error === undefined) return new Error(context);
+  if (typeof error === "string") return new Error(`${context}: ${error}`);
+  try {
+    return new Error(`${context}: ${JSON.stringify(error)}`);
+  } catch {
+    return new Error(`${context}: ${String(error)}`);
+  }
+}
+
 async function main(): Promise<void> {
   const connection = await resolveMigrationConnection();
@@ -42,4 +54,8 @@ async function main(): Promise<void> {
   }
 }
 
-await main();
+main().catch((error) => {
+  const err = toError(error, "Migration status check failed");
+  process.stderr.write(`${err.stack ?? err.message}\n`);
+  process.exit(1);
+});
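
toError exists because a catch clause can receive anything, not just Error instances. A quick illustration of each branch, assuming the function defined above is in scope:

toError(new Error("boom"));        // passed through unchanged
toError(undefined);                // Error: Migration status check failed
toError("disk full");              // Error: Migration status check failed: disk full
toError({ code: "ECONNREFUSED" }); // ...: {"code":"ECONNREFUSED"}
// The String() fallback covers values JSON.stringify rejects, such as
// BigInt or circular objects.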


@@ -0,0 +1,12 @@
CREATE TABLE "company_logos" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"asset_id" uuid NOT NULL,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
ALTER TABLE "company_logos" ADD CONSTRAINT "company_logos_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "company_logos" ADD CONSTRAINT "company_logos_asset_id_assets_id_fk" FOREIGN KEY ("asset_id") REFERENCES "public"."assets"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
CREATE UNIQUE INDEX "company_logos_company_uq" ON "company_logos" USING btree ("company_id");--> statement-breakpoint
CREATE UNIQUE INDEX "company_logos_asset_uq" ON "company_logos" USING btree ("asset_id");


@@ -0,0 +1,51 @@
CREATE TABLE "finance_events" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"agent_id" uuid,
"issue_id" uuid,
"project_id" uuid,
"goal_id" uuid,
"heartbeat_run_id" uuid,
"cost_event_id" uuid,
"billing_code" text,
"description" text,
"event_kind" text NOT NULL,
"direction" text DEFAULT 'debit' NOT NULL,
"biller" text NOT NULL,
"provider" text,
"execution_adapter_type" text,
"pricing_tier" text,
"region" text,
"model" text,
"quantity" integer,
"unit" text,
"amount_cents" integer NOT NULL,
"currency" text DEFAULT 'USD' NOT NULL,
"estimated" boolean DEFAULT false NOT NULL,
"external_invoice_id" text,
"metadata_json" jsonb,
"occurred_at" timestamp with time zone NOT NULL,
"created_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
ALTER TABLE "cost_events" ADD COLUMN "heartbeat_run_id" uuid;--> statement-breakpoint
ALTER TABLE "cost_events" ADD COLUMN "biller" text DEFAULT 'unknown' NOT NULL;--> statement-breakpoint
ALTER TABLE "cost_events" ADD COLUMN "billing_type" text DEFAULT 'unknown' NOT NULL;--> statement-breakpoint
ALTER TABLE "cost_events" ADD COLUMN "cached_input_tokens" integer DEFAULT 0 NOT NULL;--> statement-breakpoint
ALTER TABLE "finance_events" ADD CONSTRAINT "finance_events_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "finance_events" ADD CONSTRAINT "finance_events_agent_id_agents_id_fk" FOREIGN KEY ("agent_id") REFERENCES "public"."agents"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "finance_events" ADD CONSTRAINT "finance_events_issue_id_issues_id_fk" FOREIGN KEY ("issue_id") REFERENCES "public"."issues"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "finance_events" ADD CONSTRAINT "finance_events_project_id_projects_id_fk" FOREIGN KEY ("project_id") REFERENCES "public"."projects"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "finance_events" ADD CONSTRAINT "finance_events_goal_id_goals_id_fk" FOREIGN KEY ("goal_id") REFERENCES "public"."goals"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "finance_events" ADD CONSTRAINT "finance_events_heartbeat_run_id_heartbeat_runs_id_fk" FOREIGN KEY ("heartbeat_run_id") REFERENCES "public"."heartbeat_runs"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "finance_events" ADD CONSTRAINT "finance_events_cost_event_id_cost_events_id_fk" FOREIGN KEY ("cost_event_id") REFERENCES "public"."cost_events"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "finance_events_company_occurred_idx" ON "finance_events" USING btree ("company_id","occurred_at");--> statement-breakpoint
CREATE INDEX "finance_events_company_biller_occurred_idx" ON "finance_events" USING btree ("company_id","biller","occurred_at");--> statement-breakpoint
CREATE INDEX "finance_events_company_kind_occurred_idx" ON "finance_events" USING btree ("company_id","event_kind","occurred_at");--> statement-breakpoint
CREATE INDEX "finance_events_company_direction_occurred_idx" ON "finance_events" USING btree ("company_id","direction","occurred_at");--> statement-breakpoint
CREATE INDEX "finance_events_company_heartbeat_run_idx" ON "finance_events" USING btree ("company_id","heartbeat_run_id");--> statement-breakpoint
CREATE INDEX "finance_events_company_cost_event_idx" ON "finance_events" USING btree ("company_id","cost_event_id");--> statement-breakpoint
ALTER TABLE "cost_events" ADD CONSTRAINT "cost_events_heartbeat_run_id_heartbeat_runs_id_fk" FOREIGN KEY ("heartbeat_run_id") REFERENCES "public"."heartbeat_runs"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "cost_events_company_provider_occurred_idx" ON "cost_events" USING btree ("company_id","provider","occurred_at");--> statement-breakpoint
CREATE INDEX "cost_events_company_biller_occurred_idx" ON "cost_events" USING btree ("company_id","biller","occurred_at");--> statement-breakpoint
CREATE INDEX "cost_events_company_heartbeat_run_idx" ON "cost_events" USING btree ("company_id","heartbeat_run_id");


@@ -0,0 +1,102 @@
CREATE TABLE "budget_incidents" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"policy_id" uuid NOT NULL,
"scope_type" text NOT NULL,
"scope_id" uuid NOT NULL,
"metric" text NOT NULL,
"window_kind" text NOT NULL,
"window_start" timestamp with time zone NOT NULL,
"window_end" timestamp with time zone NOT NULL,
"threshold_type" text NOT NULL,
"amount_limit" integer NOT NULL,
"amount_observed" integer NOT NULL,
"status" text DEFAULT 'open' NOT NULL,
"approval_id" uuid,
"resolved_at" timestamp with time zone,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
CREATE TABLE "budget_policies" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"scope_type" text NOT NULL,
"scope_id" uuid NOT NULL,
"metric" text DEFAULT 'billed_cents' NOT NULL,
"window_kind" text NOT NULL,
"amount" integer DEFAULT 0 NOT NULL,
"warn_percent" integer DEFAULT 80 NOT NULL,
"hard_stop_enabled" boolean DEFAULT true NOT NULL,
"notify_enabled" boolean DEFAULT true NOT NULL,
"is_active" boolean DEFAULT true NOT NULL,
"created_by_user_id" text,
"updated_by_user_id" text,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
ALTER TABLE "agents" ADD COLUMN "pause_reason" text;--> statement-breakpoint
ALTER TABLE "agents" ADD COLUMN "paused_at" timestamp with time zone;--> statement-breakpoint
ALTER TABLE "projects" ADD COLUMN "pause_reason" text;--> statement-breakpoint
ALTER TABLE "projects" ADD COLUMN "paused_at" timestamp with time zone;--> statement-breakpoint
INSERT INTO "budget_policies" (
"company_id",
"scope_type",
"scope_id",
"metric",
"window_kind",
"amount",
"warn_percent",
"hard_stop_enabled",
"notify_enabled",
"is_active"
)
SELECT
"id",
'company',
"id",
'billed_cents',
'calendar_month_utc',
"budget_monthly_cents",
80,
true,
true,
true
FROM "companies"
WHERE "budget_monthly_cents" > 0;--> statement-breakpoint
INSERT INTO "budget_policies" (
"company_id",
"scope_type",
"scope_id",
"metric",
"window_kind",
"amount",
"warn_percent",
"hard_stop_enabled",
"notify_enabled",
"is_active"
)
SELECT
"company_id",
'agent',
"id",
'billed_cents',
'calendar_month_utc',
"budget_monthly_cents",
80,
true,
true,
true
FROM "agents"
WHERE "budget_monthly_cents" > 0;--> statement-breakpoint
ALTER TABLE "budget_incidents" ADD CONSTRAINT "budget_incidents_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "budget_incidents" ADD CONSTRAINT "budget_incidents_policy_id_budget_policies_id_fk" FOREIGN KEY ("policy_id") REFERENCES "public"."budget_policies"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "budget_incidents" ADD CONSTRAINT "budget_incidents_approval_id_approvals_id_fk" FOREIGN KEY ("approval_id") REFERENCES "public"."approvals"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "budget_policies" ADD CONSTRAINT "budget_policies_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "budget_incidents_company_status_idx" ON "budget_incidents" USING btree ("company_id","status");--> statement-breakpoint
CREATE INDEX "budget_incidents_company_scope_idx" ON "budget_incidents" USING btree ("company_id","scope_type","scope_id","status");--> statement-breakpoint
CREATE UNIQUE INDEX "budget_incidents_policy_window_threshold_idx" ON "budget_incidents" USING btree ("policy_id","window_start","threshold_type");--> statement-breakpoint
CREATE INDEX "budget_policies_company_scope_active_idx" ON "budget_policies" USING btree ("company_id","scope_type","scope_id","is_active");--> statement-breakpoint
CREATE INDEX "budget_policies_company_window_idx" ON "budget_policies" USING btree ("company_id","window_kind","metric");--> statement-breakpoint
CREATE UNIQUE INDEX "budget_policies_company_scope_metric_unique_idx" ON "budget_policies" USING btree ("company_id","scope_type","scope_id","metric","window_kind");


@@ -0,0 +1,2 @@
ALTER TABLE "companies" ADD COLUMN "pause_reason" text;--> statement-breakpoint
ALTER TABLE "companies" ADD COLUMN "paused_at" timestamp with time zone;


@@ -0,0 +1,2 @@
DROP INDEX "budget_incidents_policy_window_threshold_idx";--> statement-breakpoint
CREATE UNIQUE INDEX "budget_incidents_policy_window_threshold_idx" ON "budget_incidents" USING btree ("policy_id","window_start","threshold_type") WHERE "budget_incidents"."status" <> 'dismissed';


@@ -0,0 +1,91 @@
CREATE TABLE "execution_workspaces" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"project_id" uuid NOT NULL,
"project_workspace_id" uuid,
"source_issue_id" uuid,
"mode" text NOT NULL,
"strategy_type" text NOT NULL,
"name" text NOT NULL,
"status" text DEFAULT 'active' NOT NULL,
"cwd" text,
"repo_url" text,
"base_ref" text,
"branch_name" text,
"provider_type" text DEFAULT 'local_fs' NOT NULL,
"provider_ref" text,
"derived_from_execution_workspace_id" uuid,
"last_used_at" timestamp with time zone DEFAULT now() NOT NULL,
"opened_at" timestamp with time zone DEFAULT now() NOT NULL,
"closed_at" timestamp with time zone,
"cleanup_eligible_at" timestamp with time zone,
"cleanup_reason" text,
"metadata" jsonb,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
CREATE TABLE "issue_work_products" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"project_id" uuid,
"issue_id" uuid NOT NULL,
"execution_workspace_id" uuid,
"runtime_service_id" uuid,
"type" text NOT NULL,
"provider" text NOT NULL,
"external_id" text,
"title" text NOT NULL,
"url" text,
"status" text NOT NULL,
"review_state" text DEFAULT 'none' NOT NULL,
"is_primary" boolean DEFAULT false NOT NULL,
"health_status" text DEFAULT 'unknown' NOT NULL,
"summary" text,
"metadata" jsonb,
"created_by_run_id" uuid,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
ALTER TABLE "issues" ADD COLUMN "project_workspace_id" uuid;--> statement-breakpoint
ALTER TABLE "issues" ADD COLUMN "execution_workspace_id" uuid;--> statement-breakpoint
ALTER TABLE "issues" ADD COLUMN "execution_workspace_preference" text;--> statement-breakpoint
ALTER TABLE "project_workspaces" ADD COLUMN "source_type" text DEFAULT 'local_path' NOT NULL;--> statement-breakpoint
ALTER TABLE "project_workspaces" ADD COLUMN "default_ref" text;--> statement-breakpoint
ALTER TABLE "project_workspaces" ADD COLUMN "visibility" text DEFAULT 'default' NOT NULL;--> statement-breakpoint
ALTER TABLE "project_workspaces" ADD COLUMN "setup_command" text;--> statement-breakpoint
ALTER TABLE "project_workspaces" ADD COLUMN "cleanup_command" text;--> statement-breakpoint
ALTER TABLE "project_workspaces" ADD COLUMN "remote_provider" text;--> statement-breakpoint
ALTER TABLE "project_workspaces" ADD COLUMN "remote_workspace_ref" text;--> statement-breakpoint
ALTER TABLE "project_workspaces" ADD COLUMN "shared_workspace_key" text;--> statement-breakpoint
ALTER TABLE "workspace_runtime_services" ADD COLUMN "execution_workspace_id" uuid;--> statement-breakpoint
ALTER TABLE "execution_workspaces" ADD CONSTRAINT "execution_workspaces_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "execution_workspaces" ADD CONSTRAINT "execution_workspaces_project_id_projects_id_fk" FOREIGN KEY ("project_id") REFERENCES "public"."projects"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "execution_workspaces" ADD CONSTRAINT "execution_workspaces_project_workspace_id_project_workspaces_id_fk" FOREIGN KEY ("project_workspace_id") REFERENCES "public"."project_workspaces"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "execution_workspaces" ADD CONSTRAINT "execution_workspaces_source_issue_id_issues_id_fk" FOREIGN KEY ("source_issue_id") REFERENCES "public"."issues"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "execution_workspaces" ADD CONSTRAINT "execution_workspaces_derived_from_execution_workspace_id_execution_workspaces_id_fk" FOREIGN KEY ("derived_from_execution_workspace_id") REFERENCES "public"."execution_workspaces"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "issue_work_products" ADD CONSTRAINT "issue_work_products_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "issue_work_products" ADD CONSTRAINT "issue_work_products_project_id_projects_id_fk" FOREIGN KEY ("project_id") REFERENCES "public"."projects"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "issue_work_products" ADD CONSTRAINT "issue_work_products_issue_id_issues_id_fk" FOREIGN KEY ("issue_id") REFERENCES "public"."issues"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "issue_work_products" ADD CONSTRAINT "issue_work_products_execution_workspace_id_execution_workspaces_id_fk" FOREIGN KEY ("execution_workspace_id") REFERENCES "public"."execution_workspaces"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "issue_work_products" ADD CONSTRAINT "issue_work_products_runtime_service_id_workspace_runtime_services_id_fk" FOREIGN KEY ("runtime_service_id") REFERENCES "public"."workspace_runtime_services"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "issue_work_products" ADD CONSTRAINT "issue_work_products_created_by_run_id_heartbeat_runs_id_fk" FOREIGN KEY ("created_by_run_id") REFERENCES "public"."heartbeat_runs"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "execution_workspaces_company_project_status_idx" ON "execution_workspaces" USING btree ("company_id","project_id","status");--> statement-breakpoint
CREATE INDEX "execution_workspaces_company_project_workspace_status_idx" ON "execution_workspaces" USING btree ("company_id","project_workspace_id","status");--> statement-breakpoint
CREATE INDEX "execution_workspaces_company_source_issue_idx" ON "execution_workspaces" USING btree ("company_id","source_issue_id");--> statement-breakpoint
CREATE INDEX "execution_workspaces_company_last_used_idx" ON "execution_workspaces" USING btree ("company_id","last_used_at");--> statement-breakpoint
CREATE INDEX "execution_workspaces_company_branch_idx" ON "execution_workspaces" USING btree ("company_id","branch_name");--> statement-breakpoint
CREATE INDEX "issue_work_products_company_issue_type_idx" ON "issue_work_products" USING btree ("company_id","issue_id","type");--> statement-breakpoint
CREATE INDEX "issue_work_products_company_execution_workspace_type_idx" ON "issue_work_products" USING btree ("company_id","execution_workspace_id","type");--> statement-breakpoint
CREATE INDEX "issue_work_products_company_provider_external_id_idx" ON "issue_work_products" USING btree ("company_id","provider","external_id");--> statement-breakpoint
CREATE INDEX "issue_work_products_company_updated_idx" ON "issue_work_products" USING btree ("company_id","updated_at");--> statement-breakpoint
ALTER TABLE "issues" ADD CONSTRAINT "issues_project_workspace_id_project_workspaces_id_fk" FOREIGN KEY ("project_workspace_id") REFERENCES "public"."project_workspaces"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "issues" ADD CONSTRAINT "issues_execution_workspace_id_execution_workspaces_id_fk" FOREIGN KEY ("execution_workspace_id") REFERENCES "public"."execution_workspaces"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "workspace_runtime_services" ADD CONSTRAINT "workspace_runtime_services_execution_workspace_id_execution_workspaces_id_fk" FOREIGN KEY ("execution_workspace_id") REFERENCES "public"."execution_workspaces"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "issues_company_project_workspace_idx" ON "issues" USING btree ("company_id","project_workspace_id");--> statement-breakpoint
CREATE INDEX "issues_company_execution_workspace_idx" ON "issues" USING btree ("company_id","execution_workspace_id");--> statement-breakpoint
CREATE INDEX "project_workspaces_project_source_type_idx" ON "project_workspaces" USING btree ("project_id","source_type");--> statement-breakpoint
CREATE INDEX "project_workspaces_company_shared_key_idx" ON "project_workspaces" USING btree ("company_id","shared_workspace_key");--> statement-breakpoint
CREATE UNIQUE INDEX "project_workspaces_project_remote_ref_idx" ON "project_workspaces" USING btree ("project_id","remote_provider","remote_workspace_ref");--> statement-breakpoint
CREATE INDEX "workspace_runtime_services_company_execution_workspace_status_idx" ON "workspace_runtime_services" USING btree ("company_id","execution_workspace_id","status");


@@ -0,0 +1,9 @@
CREATE TABLE "instance_settings" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"singleton_key" text DEFAULT 'default' NOT NULL,
"experimental" jsonb DEFAULT '{}'::jsonb NOT NULL,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
CREATE UNIQUE INDEX "instance_settings_singleton_key_idx" ON "instance_settings" USING btree ("singleton_key");


@@ -0,0 +1,29 @@
CREATE TABLE "workspace_operations" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"execution_workspace_id" uuid,
"heartbeat_run_id" uuid,
"phase" text NOT NULL,
"command" text,
"cwd" text,
"status" text DEFAULT 'running' NOT NULL,
"exit_code" integer,
"log_store" text,
"log_ref" text,
"log_bytes" bigint,
"log_sha256" text,
"log_compressed" boolean DEFAULT false NOT NULL,
"stdout_excerpt" text,
"stderr_excerpt" text,
"metadata" jsonb,
"started_at" timestamp with time zone DEFAULT now() NOT NULL,
"finished_at" timestamp with time zone,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
ALTER TABLE "workspace_operations" ADD CONSTRAINT "workspace_operations_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "workspace_operations" ADD CONSTRAINT "workspace_operations_execution_workspace_id_execution_workspaces_id_fk" FOREIGN KEY ("execution_workspace_id") REFERENCES "public"."execution_workspaces"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "workspace_operations" ADD CONSTRAINT "workspace_operations_heartbeat_run_id_heartbeat_runs_id_fk" FOREIGN KEY ("heartbeat_run_id") REFERENCES "public"."heartbeat_runs"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "workspace_operations_company_run_started_idx" ON "workspace_operations" USING btree ("company_id","heartbeat_run_id","started_at");--> statement-breakpoint
CREATE INDEX "workspace_operations_company_workspace_started_idx" ON "workspace_operations" USING btree ("company_id","execution_workspace_id","started_at");

7 file diffs suppressed because they are too large.


@@ -211,6 +211,62 @@
"when": 1773417600000,
"tag": "0029_plugin_tables",
"breakpoints": true
},
{
"idx": 30,
"version": "7",
"when": 1773670925214,
"tag": "0030_rich_magneto",
"breakpoints": true
},
{
"idx": 31,
"version": "7",
"when": 1773511922713,
"tag": "0031_zippy_magma",
"breakpoints": true
},
{
"idx": 32,
"version": "7",
"when": 1773542934499,
"tag": "0032_pretty_doctor_octopus",
"breakpoints": true
},
{
"idx": 33,
"version": "7",
"when": 1773664961967,
"tag": "0033_shiny_black_tarantula",
"breakpoints": true
},
{
"idx": 34,
"version": "7",
"when": 1773697572188,
"tag": "0034_fat_dormammu",
"breakpoints": true
},
{
"idx": 35,
"version": "7",
"when": 1773698696169,
"tag": "0035_marvelous_satana",
"breakpoints": true
},
{
"idx": 36,
"version": "7",
"when": 1773756213455,
"tag": "0036_cheerful_nitro",
"breakpoints": true
},
{
"idx": 37,
"version": "7",
"when": 1773756922363,
"tag": "0037_friendly_eddie_brock",
"breakpoints": true
}
]
}
}


@@ -27,6 +27,8 @@ export const agents = pgTable(
     runtimeConfig: jsonb("runtime_config").$type<Record<string, unknown>>().notNull().default({}),
     budgetMonthlyCents: integer("budget_monthly_cents").notNull().default(0),
     spentMonthlyCents: integer("spent_monthly_cents").notNull().default(0),
+    pauseReason: text("pause_reason"),
+    pausedAt: timestamp("paused_at", { withTimezone: true }),
     permissions: jsonb("permissions").$type<Record<string, unknown>>().notNull().default({}),
     lastHeartbeatAt: timestamp("last_heartbeat_at", { withTimezone: true }),
     metadata: jsonb("metadata").$type<Record<string, unknown>>(),

Some files were not shown because too many files have changed in this diff.