Compare commits

...

86 Commits

Author SHA1 Message Date
Dotta
3da5094a52 Restore assigned issue ownership guard
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:59:46 -05:00
Dotta
7a77102193 Refresh latest comments before latest jump
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:59 -05:00
Dotta
64f46ab281 Restore guarded initial issue scroll check
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:59 -05:00
Dotta
26e1bf1db5 Address UI split review and verify failures
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
beefbe1b52 Avoid lockfile dependency in UI split
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
fce4e9eadc Fix issue list scroll pagination
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
cc8e31c05a Address UI split review findings
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
8c7ae91045 Fix latest comment jump target
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
383c888056 Keep UI split out of root tooling
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
0ebbdbfb38 Fix productivity review badge routing
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
a4fbf51db8 Cover issue chat latest comment jump
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
617c642c03 Enable routine description mentions
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
35a5057a57 Stabilize infinite issue paging
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
57f197dfc1 Page issues list with scroll offsets
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
5547328d9a PAP-2660: Bind issue chat virtualizer to <main> scroll container
The Phase 3 virtualizer (PAP-2645) used useWindowVirtualizer, but on the
real issue page the chat lives inside `<main id="main-content"
overflow-auto>`. Window scroll never moves on desktop, so the virtualizer
only ever mounted the first 7 rows and the chat area went blank past the
first viewport. Detect the actual scroll container at mount via DOM walk
and rekey the inner virtualizer to use element-mode observers/scroll-fn
when an overflow ancestor exists; fall back to window mode for mobile and
the auth-free perf fixture, where the document scrolls.

Adds a regression test that mounts IssueChatThread inside an overflow-auto
ancestor and asserts jump-to-latest drives that element's scrollTo, not
window.scrollTo.
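
A minimal sketch of the scroll-container detection described above, assuming a
hypothetical helper name; the actual rekeying of the virtualizer is not shown.

```ts
// Hypothetical helper: walk up from the thread root until an overflow-auto/scroll
// ancestor is found; null means the document itself scrolls (mobile, perf fixture),
// so the window-mode virtualizer stays in use.
function findScrollAncestor(start: HTMLElement | null): HTMLElement | null {
  for (let node = start?.parentElement ?? null; node !== null; node = node.parentElement) {
    const { overflowY } = window.getComputedStyle(node);
    if (overflowY === "auto" || overflowY === "scroll") return node;
  }
  return null;
}
```

When a container is found, the element-mode virtualizer would receive it via
`getScrollElement`; otherwise the window-mode path is kept.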

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
c19dc328a9 PAP-2659: Warn before sending issue reply with no assignee
If the issue reply composer would post without any assignee, the first
Send press now surfaces a warning toast instead of submitting. A second
press confirms the user wants to proceed unassigned, and changing the
assignee resets the confirmation.
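
A minimal sketch of the two-press flow, assuming hypothetical prop and state
names rather than the real composer code:

```tsx
import { useEffect, useState } from "react";

// Hypothetical send button: first press without an assignee warns, second press
// submits anyway, and changing the assignee clears the pending confirmation.
function ReplySendButton(props: {
  assigneeId: string | null;
  onWarn: (message: string) => void;
  onSubmit: () => void;
}) {
  const [confirmUnassigned, setConfirmUnassigned] = useState(false);

  useEffect(() => setConfirmUnassigned(false), [props.assigneeId]);

  const handleSend = () => {
    if (!props.assigneeId && !confirmUnassigned) {
      props.onWarn("No assignee selected. Press Send again to post unassigned.");
      setConfirmUnassigned(true);
      return;
    }
    props.onSubmit();
  };

  return <button onClick={handleSend}>Send</button>;
}
```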

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
68ae4cdb36 Make issues page load more on scroll
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
86d3b2c84d Preserve virtualized issue chat anchors
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
001c19d4bd PAP-2645: Virtualize long issue chat threads
Add @tanstack/react-virtual and gate IssueChatThread on a 150-row
threshold. Threads shorter than the threshold still use the direct
render path; longer threads window the merged feed via
useWindowVirtualizer with dynamic measurement and overscan, so the
document scroll continues to represent the full conversation height
without introducing a nested scrollbar.
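
A minimal sketch of the windowed path, assuming illustrative row sizing and
markup; only `useWindowVirtualizer` and the 150-row threshold come from this
commit:

```tsx
import { useWindowVirtualizer } from "@tanstack/react-virtual";

const VIRTUALIZATION_THRESHOLD = 150; // rows below this keep the direct render path

// Hypothetical windowed renderer: rows are absolutely positioned inside a spacer
// sized to the full conversation height, so the document scroll still reflects it.
function VirtualizedThread({ rows }: { rows: { id: string; body: string }[] }) {
  const virtualizer = useWindowVirtualizer({
    count: rows.length,
    estimateSize: () => 96, // assumed average row height; rows re-measure dynamically
    overscan: 8,
  });

  if (rows.length < VIRTUALIZATION_THRESHOLD) {
    return <>{rows.map((row) => <div key={row.id}>{row.body}</div>)}</>;
  }

  return (
    <div style={{ height: virtualizer.getTotalSize(), position: "relative" }}>
      {virtualizer.getVirtualItems().map((item) => (
        <div
          key={rows[item.index].id}
          data-index={item.index}
          ref={virtualizer.measureElement}
          style={{
            position: "absolute",
            top: 0,
            width: "100%",
            transform: `translateY(${item.start}px)`,
          }}
        >
          {rows[item.index].body}
        </div>
      ))}
    </div>
  );
}
```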

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
26f2993257 Isolate issue thread row rendering
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
374948da2a Add long-thread issue chat perf fixture
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
50f06d73f8 Split dialog action subscriptions
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
954183438c PAP-2615: rename source-header productivity-review pill to "Under review"
UX feedback split the badge by role: on the source issue the linked pill
describes a state ("Under review"), while on the productivity-review issue
itself the static pill identifies the kind of issue ("Productivity review").
Tooltip and aria-label still surface the formal name on hover.
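
A rough illustration of the split, with hypothetical props; the visible text
names the state while the hover text keeps the formal name:

```tsx
// Hypothetical pill markup; the real component and styling differ.
function UnderReviewPill({ reviewIssueUrl }: { reviewIssueUrl: string }) {
  return (
    <a href={reviewIssueUrl} title="Productivity review" aria-label="Productivity review">
      Under review
    </a>
  );
}
```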

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
dd3aa3569f PAP-2614: Surface productivity review state in issue UI
Adds a productivityReview lookup to single-issue and list responses, plus
a ProductivityReviewBadge component that surfaces the open productivity
review on the source issue's header (with trigger/no-comment-streak
detail in the tooltip), and a compact eye glyph next to the StatusIcon
on IssueRow so operators can spot yellow tasks in any list. The review
issue itself shows a static "Productivity review" pill in the header
when its originKind matches.

Also extends the shared IssueOriginKind union with the recovery-side
kinds (harness_liveness_escalation, issue_productivity_review,
stranded_issue_recovery) so UI checks against issue.originKind type-check.

Wires the new helper through GET /issues/:id, GET /companies/:id/issues
(via issueService.list), and GET /issues/:id/heartbeat-context. Adds a
status-language Storybook story matrix for UX review.
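
A minimal sketch of the union extension; the pre-existing members are stood in
for by a placeholder, and only the three recovery-side kinds come from this
commit:

```ts
// Placeholder for the members the shared union already had.
type ExistingIssueOriginKind = "manual";

type IssueOriginKind =
  | ExistingIssueOriginKind
  | "harness_liveness_escalation"
  | "issue_productivity_review"
  | "stranded_issue_recovery";

// UI checks like this now type-check against issue.originKind:
const isProductivityReview = (issue: { originKind: IssueOriginKind }): boolean =>
  issue.originKind === "issue_productivity_review";
```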

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
8a34401d80 PAP-2607: nudge inline mention chips up 1px
Reviewer follow-up — pills sat slightly low after the baseline-to-middle
switch. position: relative + top: -1px raises them to optically center
with the surrounding cap height.
2026-04-28 16:48:58 -05:00
Dotta
cad9111496 PAP-2607: align inline mention chips on text baseline
Inline-flex baseline behavior is browser-dependent, which made skill/agent/issue
chips in MarkdownBody and the MDXEditor hang below the surrounding text. Switch
to vertical-align: middle and drop the vertical padding so the pill sits
centered on the line with the prose around it.

Adds a skill-chip line to the MarkdownBody storybook fixture so this regresses
loudly next time.

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
62c2b81d52 Improve new issue dialog typing performance
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
5ee4c9e244 PAP-2605: unify routine variables container and add help modal
Wrap the variables editor in a single rounded container so the
"Variables" header and the per-variable cards share one border with
internal dividers, instead of stacking as two visually separate
containers when expanded.

Add a help button to the variable instructions hint that opens a modal
explaining custom {{variable_name}} placeholders and the built-in
{{date}} and {{timestamp}} variables.
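
A rough illustration of how the built-in variables could expand; the helper
name is hypothetical and the real substitution logic is not shown here:

```ts
function expandBuiltinVariables(template: string, now: Date = new Date()): string {
  return template
    .replaceAll("{{date}}", now.toISOString().slice(0, 10)) // e.g. "2026-04-28"
    .replaceAll("{{timestamp}}", now.toISOString());
}

// expandBuiltinVariables("Daily digest for {{date}}") -> "Daily digest for 2026-04-28"
```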

Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-28 16:48:58 -05:00
Dotta
1991ec9d6f [codex] Split backend control-plane QoL slice (#4700)
## Thinking Path

> - Paperclip is the control plane for autonomous AI companies, so
backend task ownership, recovery, review visibility, and company-scoped
limits need to stay enforceable without UI-only coupling.
> - Closed PR #4692 bundled those backend changes with UI workflow,
docs, skills, workflow, and lockfile churn.
> - PAP-2694 asks for a clean backend/control-plane slice from that
closed branch.
> - This branch starts from current `master` and mines only the `cli`,
`packages/db`, `packages/shared`, and `server` contracts/tests needed
for the backend behavior.
> - It explicitly excludes UI workflow/performance work,
`.github/workflows/pr.yml`, `pnpm-lock.yaml`, docs, skills,
package-script, adapter UI build-config, and perf fixture script
changes; the only UI files are fixture/test updates required by the
tightened shared `Company` contract.
> - The benefit is a smaller reviewable PR that preserves the
control-plane fixes while staying under Greptile's 100-file review
limit.

## What Changed

- Added company-scoped attachment-size limits through DB
schema/migrations, shared company portability contracts, CLI
import/export coverage, and server attachment upload enforcement.
- Added productivity review service/API behavior for no-comment streak,
long-active, and high-churn review issues, including request-depth
clamping and issue summary exposure.
- Hardened issue ownership and recovery/control-plane paths: peer-agent
mutation denial, issue tree pause/resume behavior, stranded recovery
origins, and related activity/test coverage.
- Preserved related backend contract updates for routine timestamp
variables and managed agent instruction bundles because they live in
shared/server contracts from the source branch.
- Addressed Greptile feedback by making `Company.attachmentMaxBytes`
non-optional, simplifying review request-depth clamping, fixing the
migration final newline, and enforcing the process-level attachment cap
as the final ceiling for uploads.
- Added minimal company fixtures needed for repo-wide typecheck/build
and kept the PR to 66 changed files with forbidden/non-slice paths
excluded.
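
A minimal sketch of the attachment-cap ceiling described in the list above,
with a hypothetical constant name and an assumed process-level value:

```ts
const PROCESS_MAX_ATTACHMENT_BYTES = 50 * 1024 * 1024; // assumed process-level cap

// The company-scoped limit can only tighten the ceiling, never exceed the process cap.
function effectiveAttachmentLimit(company: { attachmentMaxBytes: number }): number {
  return Math.min(company.attachmentMaxBytes, PROCESS_MAX_ATTACHMENT_BYTES);
}
```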

## Verification

- `pnpm install --frozen-lockfile`
- `git diff --check origin/master..HEAD`
- `git diff --name-only origin/master..HEAD | wc -l` -> 66 files
- `git diff --name-only origin/master..HEAD -- .github/workflows/pr.yml
pnpm-lock.yaml package.json doc skills .agents scripts
packages/adapters` -> no output
- `pnpm exec vitest run --config vitest.config.ts
packages/shared/src/validators/issue.test.ts
packages/shared/src/routine-variables.test.ts
packages/shared/src/adapter-types.test.ts
cli/src/__tests__/company-import-export-e2e.test.ts
cli/src/__tests__/company.test.ts
server/src/__tests__/productivity-review-service.test.ts
server/src/__tests__/issue-tree-control-service.test.ts
server/src/__tests__/issue-tree-control-routes.test.ts
server/src/__tests__/issue-agent-mutation-ownership-routes.test.ts
server/src/__tests__/issue-attachment-routes.test.ts
server/src/__tests__/heartbeat-process-recovery.test.ts
server/src/__tests__/issues-service.test.ts` -> 12 files, 147 tests
passed
- `pnpm exec vitest run --config vitest.config.ts
cli/src/__tests__/company-delete.test.ts
cli/src/__tests__/company-import-export-e2e.test.ts
server/src/__tests__/productivity-review-service.test.ts` -> 3 files, 18
tests passed
- `pnpm exec vitest run --config vitest.config.ts
server/src/__tests__/issue-attachment-routes.test.ts` -> 1 file, 6 tests
passed
- `pnpm --filter @paperclipai/db typecheck && pnpm --filter
@paperclipai/shared typecheck && pnpm --filter @paperclipai/server
typecheck && pnpm --filter paperclipai typecheck`
- `pnpm --filter @paperclipai/server typecheck`
- `pnpm --filter @paperclipai/ui typecheck && pnpm --filter
@paperclipai/ui build`

## Risks

- Includes migrations `0073_shiny_salo.sql` and
`0074_striped_genesis.sql`; merge ordering matters if another PR adds
migrations first.
- This is intentionally backend-only apart from fixture/test updates
forced by shared type correctness; UI affordances from PR #4692 are not
present here and should land in separate UI slices.
- The worktree install emitted plugin SDK bin-link warnings for unbuilt
plugin packages, but the targeted tests and package typechecks completed
successfully.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected; check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 coding agent, tool-enabled terminal/GitHub
workflow. Exact runtime context window was not exposed by the harness.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-28 16:46:45 -05:00
Dotta
d9f540c331 [codex] Refresh docs and agent skills (#4693)
## Thinking Path

> - Paperclip orchestrates AI agents through a company-scoped control
plane
> - Contributors and agents need docs and skills that match the current
V1 behavior
> - The source branch included documentation updates alongside
implementation work
> - Keeping docs and skill guidance separate makes the implementation PR
easier to review
> - This pull request refreshes the V1 docs and agent-operating guidance
without changing runtime behavior
> - The benefit is current contributor guidance that can merge
independently from code changes

## What Changed

- Refreshed V1 product, goal, implementation, database, and development
documentation.
- Updated the Paperclip heartbeat skill guidance and create-agent skill
references.
- Added the Paperclip plan-to-task conversion skill.
- Updated release changelog skill guidance.

## Verification

- `git diff --check public-gh/master..HEAD` passed in the PR worktree
after the Greptile fix.
- Greptile Review passed on head `673317ed` with zero unresolved review
threads.
- GitHub PR checks passed on head `673317ed`: `policy`, `verify`, `e2e`,
and `security/snyk (cryppadotta)`.

## Risks

- Low runtime risk because this branch only changes docs and skill
guidance.
- Documentation may need follow-up wording adjustments if reviewers want
a different framing for V1 behavior.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 coding agent, tool-enabled terminal/GitHub
workflow. Exact runtime context window was not exposed by the harness.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-28 16:12:03 -05:00
Dotta
d0bdbe11a9 Stabilize inline selector keyboard handling (#4617)
## Thinking Path

> - Paperclip's board UI relies on compact selectors for frequent issue
and agent edits.
> - Inline selectors often live inside larger keyboard-aware surfaces
such as composers and popovers.
> - Arrow, enter, tab, and escape keys handled by the selector should
not leak to parent document shortcuts.
> - Stale company selection should also stay hidden until the company
list confirms it is valid.
> - This pull request tightens inline selector keyboard handling and
adds regression coverage for stale company bootstrap behavior.
> - The benefit is fewer accidental parent interactions and safer
company-scoped UI initialization.

## What Changed

- Added a stable empty `recentOptionIds` default so selector filtering
does not get a new array every render.
- Mirrored highlighted option state into a ref so Enter/Tab commits the
current highlighted option reliably after keyboard navigation.
- Stopped propagation for selector-owned navigation/commit/escape keys.
- Added jsdom regressions for inline selector keyboard handling and
CompanyProvider stale selection behavior.

## Verification

- `pnpm exec vitest run ui/src/components/InlineEntitySelector.test.tsx
ui/src/context/CompanyContext.test.tsx`
- Targeted selector and CompanyProvider tests pass cleanly without React
`act(...)` warnings.
- Screenshots not attached: this is keyboard/state behavior covered by
component tests.

## Risks

- Low risk: changes are scoped to inline selector key handling and
tests. The main behavior shift is intentionally preventing handled
selector keys from reaching parent listeners.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex coding agent based on GPT-5, tool-enabled local
repository and shell access, Paperclip heartbeat context.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-27 20:04:35 -05:00
Dotta
43b0f2ae58 Add pause and resume actions to sidebar agents (#4616)
## Thinking Path

> - Paperclip operators need fast control over the agents running their
company.
> - The sidebar is the persistent place operators scan agent state while
navigating the board UI.
> - Agent pause and resume already exist as control-plane actions, but
sidebar users had to navigate away to use them.
> - This pull request adds a compact per-agent action menu in the
sidebar.
> - It keeps edit, pause, and resume close to the visible agent list
while preserving existing navigation behavior.
> - The benefit is faster operator intervention when an agent needs to
be paused or restarted.

## What Changed

- Refactored sidebar agent rows into a small item component with a
hover/focus action menu.
- Added edit, pause, and resume actions using existing agent API calls
and cache invalidation keys.
- Added success/error toasts for pause and resume mutations.
- Tracked pause/resume pending state per agent so one active mutation
does not disable every sidebar row.
- Disabled direct sidebar resume for budget-paused agents and labeled
that state clearly.
- Added jsdom coverage for active-agent pause, paused-agent resume,
per-agent pending state, and budget-paused resume protection.
- Added visual review artifacts for the default row and opened action
menu:
  - [Sidebar row](docs/pr-screenshots/pr-4616/sidebar-agent-row.png)
  - [Sidebar action menu](docs/pr-screenshots/pr-4616/sidebar-agent-actions.png)
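
A minimal sketch of the per-agent pending tracking described in the list
above, with hypothetical hook and state names:

```tsx
import { useState } from "react";

// Only the agent whose pause/resume mutation is in flight is marked pending,
// so one active mutation does not disable every sidebar row.
function usePendingAgentActions() {
  const [pendingIds, setPendingIds] = useState<ReadonlySet<string>>(new Set());

  const run = async (agentId: string, mutate: () => Promise<void>) => {
    setPendingIds((prev) => new Set(prev).add(agentId));
    try {
      await mutate();
    } finally {
      setPendingIds((prev) => {
        const next = new Set(prev);
        next.delete(agentId);
        return next;
      });
    }
  };

  return { isPending: (agentId: string) => pendingIds.has(agentId), run };
}
```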

## Verification

- `pnpm exec vitest run ui/src/components/SidebarAgents.test.tsx`
- `pnpm --filter @paperclipai/server prepare:ui-dist`
- Browser screenshot pass against a temporary local trusted instance at
`http://127.0.0.1:3102` using Playwright.

## Risks

- Low risk: UI-only addition using existing agent pause/resume
endpoints. The main risk is layout crowding for very narrow sidebars,
mitigated by the icon-only trigger and existing truncation.
- Budget-paused agents now require a non-sidebar path to resume, which
is intentional to avoid accidentally restarting agents stopped by budget
enforcement.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex coding agent based on GPT-5, tool-enabled local
repository and shell/browser access, Paperclip heartbeat context.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-27 20:03:54 -05:00
Dotta
f88f538e6d Keep manual routine runs visible in the runner inbox (#4615)
## Thinking Path

> - Paperclip coordinates recurring agent work through scheduled and
manual routines.
> - Manual routine runs are board-initiated work and should stay visible
to the human who kicked them off.
> - Routine execution issues are agent-assigned, so they can be filtered
away from a board user's inbox unless the user is recorded as touching
the work.
> - Coalesced or skipped active routine runs have the same visibility
problem because they reuse an existing live issue.
> - This pull request carries the manual runner actor into routine
dispatch and touches the linked issue for that user's inbox.
> - The benefit is that manually triggered routine work stays
discoverable by the operator who started it.

## What Changed

- Passed the board or agent actor from the routine run route into the
routine service.
- Recorded manual board runners as `createdByUserId` on fresh routine
execution issues.
- Touched coalesced or skipped active routine issues for the manual
runner by updating read state and clearing that user's inbox archive.
- Added route and service regressions for manual routine run actor
propagation and inbox visibility.

## Verification

- `pnpm exec vitest run server/src/__tests__/routines-routes.test.ts
server/src/__tests__/routines-service.test.ts`

## Risks

- Low risk: the change is scoped to manual routine runs and only updates
issue attribution/read-state metadata for the initiating actor.
- No migrations.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex coding agent based on GPT-5, tool-enabled local
repository and shell access, Paperclip heartbeat context.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-27 20:03:24 -05:00
Dotta
68c37660f0 Dispatch assigned todo work during recovery sweeps (#4614)
## Thinking Path

> - Paperclip orchestrates AI agents for autonomous companies.
> - Agent assignments must reliably turn into heartbeat work without
board operators manually nudging stuck tasks.
> - The stranded-assignment recovery sweep already handles failed or
lost runs.
> - But assigned `todo` issues with no prior run could sit idle because
there was nothing to retry or recover.
> - This pull request dispatches those never-started assigned todos as
normal assignment wakes.
> - The benefit is that recovery fixes missed initial dispatches without
creating unnecessary recovery issues.

## What Changed

- Added an initial assigned-todo dispatch path to the recovery service
when an assigned `todo` issue has no heartbeat run yet.
- Reused invocation budget hard-stop checks before dispatching or
requeueing recovery work.
- Counted `assignmentDispatched` in startup/scheduled recovery logs.
- Added heartbeat recovery regressions for first dispatch, duplicate
queued wake prevention, budget-blocked skips, and paused-agent skips.

## Verification

- `pnpm exec vitest run
server/src/__tests__/heartbeat-process-recovery.test.ts`

## Risks

- Low to medium risk: this changes liveness recovery behavior for
assigned `todo` issues, but it stays on the existing assignment wake
path and skips paused or budget-blocked agents.
- No migrations.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex coding agent based on GPT-5, tool-enabled local
repository and shell access, Paperclip heartbeat context.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-27 20:02:44 -05:00
Dotta
7a9b3a6037 [codex] Harden recovery issue handling (#4600)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - The control plane must recover stranded agent work without creating
new operational loops
> - Stranded recovery issues can themselves fail, and exposing raw retry
errors in comments can leak sensitive adapter details
> - New local companies also should not force a hire-approval gate
unless operators enable that policy
> - This pull request hardens recovery issue handling, redacts retry
failure details in issue copy, preserves `maxConcurrentRuns: 1`, and
flips new-hire approval to an opt-in default
> - The benefit is safer automatic recovery and smoother default company
setup without hidden migration conflicts

## What Changed

- Added migration `0071_default_hire_approval_off` and updated company
schema/import/export/docs so hire approvals default off and serialize
only when enabled.
- Added migration `0072_large_sandman` with a partial unique index
preventing duplicate active stranded recovery issues for the same source
issue.
- Blocked failed `stranded_issue_recovery` issues in place instead of
creating nested recovery issues.
- Redacted latest retry failure details from recovery issue comments
while still linking reviewers to run evidence.
- Allowed `maxConcurrentRuns: 1` to be honored by heartbeat concurrency
normalization.
- Added focused regression coverage for recovery recursion, redaction,
migration ordering, and concurrency behavior.

## Verification

- `pnpm --filter @paperclipai/db run check:migrations`
- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/recovery-classifiers.test.ts`
- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/company-portability.test.ts --pool=forks
--poolOptions.forks.isolate=true`
- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/agent-permissions-routes.test.ts --pool=forks
--poolOptions.forks.isolate=true`
- `pnpm --filter @paperclipai/server typecheck`
- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/heartbeat-process-recovery.test.ts --pool=forks
--poolOptions.forks.isolate=true` exits 0, but this host skipped the
embedded Postgres tests with the existing init guard.
- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/heartbeat-dependency-scheduling.test.ts
--pool=forks --poolOptions.forks.isolate=true` exits 0, but this host
skipped the embedded Postgres tests with the existing init guard.

## Risks

- Migration risk is low but this PR intentionally owns both new
migrations to avoid separate PR migration-journal conflicts.
- Recovery comments now require operators to inspect linked run evidence
for details instead of reading raw errors inline.
- The hire approval default changes behavior for newly created/imported
companies only; existing persisted company settings are not changed
except by the SQL default for future rows.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 coding agent, tool-enabled terminal/GitHub
workflow, reasoning mode active. Context window not exposed in this
environment.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-27 15:02:47 -05:00
Dotta
6ccf80bcf2 [codex] Reject stale company skill refreshes (#4601)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Company skills are part of the reusable agent capability layer
> - Skill inventory refresh work can outlive the company it was
requested for
> - Without an explicit company existence check, stale refreshes can
continue into bundled/local skill cleanup for deleted or missing
companies
> - This pull request makes company-skill listing fail fast when the
company no longer exists
> - The benefit is clearer API behavior and less stale background work
against missing company scope

## What Changed

- Added a company existence check before `companySkillService.list()`
refreshes bundled and local-path skill state.
- Added regression coverage asserting missing companies return `404
Company not found`.
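
A minimal sketch of the fail-fast guard, with injected dependencies standing
in for the real service wiring; only `companySkillService.list()` and the 404
message come from the PR:

```ts
async function listCompanySkills(
  companyId: string,
  deps: {
    findCompany: (id: string) => Promise<{ id: string } | null>;
    refreshAndList: (id: string) => Promise<unknown[]>;
  },
) {
  const company = await deps.findCompany(companyId);
  if (company === null) {
    // Fail fast: no bundled/local skill refresh work for a missing company.
    return { status: 404 as const, body: { error: "Company not found" } };
  }
  return { status: 200 as const, body: await deps.refreshAndList(companyId) };
}
```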

## Verification

- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/company-skills-service.test.ts --pool=forks
--poolOptions.forks.isolate=true` exits 0, but this host skipped the
embedded Postgres tests with the existing init guard.

## Risks

- Low risk. Existing callers for valid companies are unchanged.
- Missing-company callers now receive an explicit 404 instead of
continuing refresh work.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 coding agent, tool-enabled terminal/GitHub
workflow, reasoning mode active. Context window not exposed in this
environment.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-27 13:19:38 -05:00
Dotta
d95968a9f8 [codex] Ignore stale stored company selections (#4602)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - The board UI is the operator’s control surface for selecting the
active company
> - A company id stored in localStorage can become stale across resets,
imports, or deleted companies
> - Exposing that stale id before companies load can briefly put
downstream UI in an invalid company scope
> - This pull request defers selected-company exposure until the loaded
company list validates the stored id
> - The benefit is a cleaner company-selection bootstrap path and fewer
transient invalid API requests

## What Changed

- Initialized `CompanyProvider` selection as `null` until companies
finish loading.
- Reused a stored company id only when it exists in the loaded
selectable company list.
- Cleared storage and selected state when no companies are available.
- Added jsdom regression coverage for stale stored ids before and after
company loading.
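
A minimal sketch of the bootstrap order described in the list above, assuming
a hypothetical hook and storage key:

```tsx
import { useEffect, useState } from "react";

function useValidatedCompanySelection(companies?: { id: string }[]) {
  const [selectedId, setSelectedId] = useState<string | null>(null);

  useEffect(() => {
    if (!companies) return; // still loading: keep exposing null, not the raw stored id
    const stored = window.localStorage.getItem("selectedCompanyId");
    if (stored && companies.some((company) => company.id === stored)) {
      setSelectedId(stored); // reuse only a stored id the loaded list confirms
      return;
    }
    window.localStorage.removeItem("selectedCompanyId");
    setSelectedId(null);
  }, [companies]);

  return selectedId;
}
```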

## Verification

- `pnpm exec vitest run --project @paperclipai/ui
ui/src/context/CompanyContext.test.tsx`

## Risks

- Low risk. The change only affects selection bootstrap and keeps valid
stored selections intact.
- There may be a slightly longer initial `null` selected-company state
while the company list is loading.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 coding agent, tool-enabled terminal/GitHub
workflow, reasoning mode active. Context window not exposed in this
environment.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-27 13:18:21 -05:00
Dotta
15c0ce3722 Add Twitter/X link to READMEs (PAP-2475) (#4593)
## Summary

- Adds [@papercliping on X](https://x.com/papercliping) next to the
existing Discord link in the top-of-file nav and the bottom
**Community** list of both `README.md` and `cli/README.md`.
- The `cli/README.md` change keeps the npm-published readme consistent
with the GitHub one.

Resolves PAP-2475.

## Test plan

- [ ] Render `README.md` on GitHub and confirm the new Twitter link in
the header strip and the Community section.
- [ ] Render `cli/README.md` (preview on GitHub or via npm) and confirm
the same.
- [ ] Click both Twitter links and verify they land on
`https://x.com/papercliping`.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-27 09:35:33 -05:00
Dotta
ecd92af001 release: v2026.427.0 notes (#4590)
## Summary

- Adds `releases/v2026.427.0.md` covering the 77 PRs since `v2026.416.0`
- Highlights: multi-user access + invites, structured issue-thread
interactions, run liveness continuations, sub-issues as a workflow
checklist, issue subtree pause/cancel/restore, first-class issue
references
- Beta Features: Environments + pluggable sandbox providers (incl.
`@paperclipai/plugin-e2b`)
- Notes 14 additive migrations (`0057`–`0070`) in the Upgrade Guide; no
breaking changes flagged

Source issue: [PAP-2476](/PAP/issues/PAP-2476)

## Test plan
- [ ] Spot-check that each linked PR actually landed in the v2026.427.0
range
- [ ] Confirm migration list matches `server/src/database/migrations`
- [ ] Verify rendered markdown looks right on GitHub

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-27 09:16:20 -05:00
Dotta
215b6cd161 [codex] Add security role route coverage (#4589)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - Agent creation accepts roles that become part of the agent contract
and telemetry.
> - The shared role list already includes the security role.
> - Direct agent creation should preserve that role through route
handling and analytics metadata.
> - This pull request adds route coverage for creating a security-role
agent and asserting telemetry receives the same role.
> - The benefit is regression coverage for security agents without
changing the production route behavior.

## What Changed

- Added a server route test that creates an agent with `role:
"security"`.
- Asserted the create payload and telemetry metadata preserve `security`
as the agent role.

## Verification

- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/agent-skills-routes.test.ts --pool=forks
--poolOptions.forks.isolate=true`

## Risks

- Low risk; test-only coverage.
- No runtime behavior, schema, or API contract changes.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, `gpt-5`, coding model with tool use and local command
execution; context window not exposed by the runtime.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-27 08:49:59 -05:00
Dotta
53396f272a [codex] Fix sub-issue progress summary styling (#4588)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - The issue list and issue detail surfaces summarize child/sub-issue
progress for operators.
> - Those summaries need to be compact and visually consistent because
they appear in dense lists.
> - The progress strip is most useful when there are multiple sub-issues
to compare, so the summary intentionally stays hidden for a single
sub-issue.
> - This pull request tightens the sub-issue progress summary styling
and updates the related tests.
> - The benefit is a cleaner, more scannable task list without changing
task ownership, status, or workflow behavior.

## What Changed

- Adjusted sub-issue progress summary copy/styling in the issue list and
detail summary helpers.
- Intentionally render the progress summary only for two or more child
issues; a single child issue still appears in the normal sub-issue list
without a redundant progress strip.
- Updated the UI tests that assert the rendered summary behavior.
- Clarified the two-plus-child threshold in code with a named constant.
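
A rough illustration of the named threshold, with hypothetical names:

```ts
const MIN_SUB_ISSUES_FOR_PROGRESS_SUMMARY = 2;

const shouldShowProgressSummary = (childCount: number): boolean =>
  childCount >= MIN_SUB_ISSUES_FOR_PROGRESS_SUMMARY;
```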

## Verification

- `pnpm exec vitest run --project @paperclipai/ui
ui/src/components/IssuesList.test.tsx
ui/src/lib/issue-detail-subissues.test.ts`

## Screenshots

![Before/after comparison of sub-issue progress summary styling](cd26b5bd63/pap-2449-subissue-progress-before-after.svg)

## Risks

- Low risk; this is a small UI presentation change with focused test
coverage.
- The intentional threshold change means parents with exactly one child
no longer show the aggregate progress strip, avoiding redundant summary
chrome while keeping the child visible in the list.
- No schema or API behavior changes.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, `gpt-5`, coding model with tool use and local command
execution; context window not exposed by the runtime.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-27 08:48:26 -05:00
Dotta
fda296ee4f [codex] Add configurable liveness auto-recovery controls (#4587)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - Heartbeat liveness recovery decides when stalled issue trees need
manager-visible follow-up.
> - Automatic recovery issue creation is useful, but operators need
instance-level controls for how aggressive it is.
> - Without controls, recovery behavior is harder to tune for local
development, production operations, and noisy edge cases.
> - This pull request adds configurable liveness auto-recovery settings
across shared contracts, API routes, services, and the instance
experimental settings UI.
> - The benefit is that operators can keep liveness findings advisory or
enable bounded recovery automation with explicit intervals and lookback
windows.

## What Changed

- Added shared types and validators for liveness auto-recovery settings.
- Extended instance settings routes and services to persist and validate
the new controls.
- Wired heartbeat/recovery services to honor enablement, minimum
interval, and lookback settings.
- Added UI controls for liveness recovery under instance experimental
settings.
- Covered the new server behavior with instance settings and liveness
escalation tests.
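
A rough sketch of what the settings contract could look like; field names and
defaults are illustrative, and only the enablement/interval/lookback concepts
come from the PR:

```ts
interface LivenessAutoRecoverySettings {
  enabled: boolean; // off by default so existing advisory behavior is unchanged
  minIntervalMinutes: number; // minimum spacing between automatic recovery issues
  lookbackHours: number; // how far back the sweep inspects stalled issue trees
}

const defaultLivenessAutoRecoverySettings: LivenessAutoRecoverySettings = {
  enabled: false,
  minIntervalMinutes: 60,
  lookbackHours: 24,
};
```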

## Verification

- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/heartbeat-issue-liveness-escalation.test.ts
server/src/__tests__/instance-settings-routes.test.ts --pool=forks
--poolOptions.forks.isolate=true`
- `pnpm --filter @paperclipai/shared typecheck`
- `pnpm --filter @paperclipai/server typecheck`
- `pnpm --filter @paperclipai/ui typecheck`

## Risks

- Moderate behavioral risk because recovery automation timing changes
when enabled; defaults keep existing advisory behavior unless the
setting is turned on.
- No database migration in this PR; settings are stored through the
existing instance settings path.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, `gpt-5`, coding model with tool use and local command
execution; context window not exposed by the runtime.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-27 08:46:44 -05:00
Neeraj Kumar Singh B
f0f9460d1d docs: AWS ECS Fargate deployment runbook (#3897)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies and ships a
>   "local-first, cloud-ready" deployment model
> - The deploy docs currently cover local/Docker but not a production
>   cloud target, so teams asking "how do I put this behind a real domain"
>   have no canonical path
> - We already support Docker images, RDS-compatible Postgres, and an EFS
>   storage profile, so AWS ECS Fargate is a natural fit
> - Without a runbook, each team reinvents VPC, security groups, TLS, and
>   secrets wiring and usually gets at least one step wrong
> - This pull request adds `docs/deploy/aws-ecs.md`, an ECS task-definition
>   template, and an `.env.aws.example`, cross-linked from the deploy overview
> - The benefit is a single, reproducible ~$110/mo path to a production
>   deployment, plus a full teardown for throwaway environments

## What Changed

- New `docs/deploy/aws-ecs.md` — an 11-step ECS Fargate runbook covering ECR,
  VPC, RDS, EFS, Secrets Manager, IAM, ALB, and ECS service with the
  deployment circuit breaker enabled
- New `docker/ecs-task-definition.json` — Fargate-ready task definition with
  `<ACCOUNT_ID>`, `<REGION>`, `<EFS_ID>`, `<DOMAIN>` placeholder tokens
- New `docker/.env.aws.example` — documents every non-secret env var the
  ECS deployment needs
- `docs/deploy/overview.md` — one-line cross-reference to the new guide
- Greptile feedback addressed in follow-up commits:
  - `containerName` in the service-create call now matches
    `paperclip-server` in the task definition
  - HTTP :80 listener added that 301-redirects to :443
  - Dedicated RDS DB subnet group created before `create-db-instance`
  - EFS teardown polls on mount-target deletion instead of `sleep 30`

## Verification

- Walked every step of the runbook against the task definition to confirm
  variable names (`$ALB_SG`, `$ECS_SG`, `$RDS_SG`, `$EFS_SG`, `$TG_ARN`,
  `$LISTENER_ARN`, `$HTTP_LISTENER_ARN`, `$EFS_ID`, `$RDS_ENDPOINT`, etc.)
  are defined before they are referenced
- Confirmed the `containerName` in Step 10 (`paperclip-server`) matches
  `docker/ecs-task-definition.json` line 11
- Confirmed the `sed` placeholder substitution in Step 8 matches the tokens
  in the task definition template
- Teardown order was checked in reverse-dependency order: ECS service →
  listeners → target group → ALB → RDS (waits for deletion) → DB subnet
  group → EFS mount targets (polled) → EFS → secrets → SGs → ECR → IAM →
  log group

## Risks

- **Low risk for the repo.** Docs-only change plus two template files under
  `docker/`; no runtime code paths are touched and nothing is imported by
  the build.
- **Risk for users who follow the runbook:** AWS bills accrue immediately
  once RDS/ALB/EFS exist. The runbook calls this out and includes a full
  teardown procedure. Placeholder tokens (`<ACCOUNT_ID>`, `<REGION>`,
  `<EFS_ID>`, `<DOMAIN>`) are documented so nothing is silently hard-coded.

## Model Used

- Claude (Anthropic), model `claude-opus-4-6`, ~200K context window,
extended thinking mode on, used with tool access (file edit, shell) via
Claude Code. The Greptile follow-up commits were authored the same way.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have run tests locally and they pass — N/A for docs/config
templates; validated by reading
- [x] I have added or updated tests where applicable — N/A for docs
- [x] If this change affects the UI, I have included before/after
screenshots — N/A, no UI
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-27 08:41:47 -05:00
Dotta
1d8c7a09b8 [codex] Add security role route regression (#4586)
## Thinking Path

> - Paperclip orchestrates AI agents through company-scoped
control-plane workflows.
> - Agent creation is one of the core board/operator surfaces for
defining who works in a company.
> - The shared taxonomy now includes a first-class `security` agent
role.
> - Direct agent creation must preserve that role through default
instruction materialization and telemetry.
> - A prior replacement PR covered this path, but Greptile identified
that the route-test mock could let a future patch object shadow the
regression.
> - This pull request reopens the narrow regression coverage from
current `master` with the mock ordering fixed.
> - The benefit is a focused guardrail that keeps `security` role
creation observable without expanding the production diff.

## What Changed

- Added a direct agent creation route regression test for `role:
"security"`.
- Verified telemetry receives `agentRole: "security"` after the default
instruction materialization update path.
- Ordered the regression mock as `...patch` before `role: "security"` so
future patch fields cannot shadow the asserted role.
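
A small illustration of the ordering point above; the payload shape is
hypothetical:

```ts
const patch = { displayName: "Security reviewer", role: "engineer" };

// Because the explicit role comes after ...patch, a future patch.role cannot
// shadow the value the regression asserts.
const createPayload = { ...patch, role: "security" as const };
// createPayload.role === "security" regardless of what patch contains
```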

## Verification

- `pnpm install --frozen-lockfile` to link dependencies in the fresh
worktree; it completed with existing plugin SDK bin warnings.
- `pnpm exec vitest run server/src/__tests__/agent-skills-routes.test.ts
packages/shared/src/adapter-types.test.ts`

## Risks

- Low risk. This is test-only coverage and does not change runtime
behavior.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 based coding agent, tool-enabled with local shell
and repository editing capabilities.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots (N/A: no UI changes)
- [x] I have updated relevant documentation to reflect my changes (N/A:
test-only regression)
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-27 08:11:52 -05:00
Devin Foley
d2cbe2cb23 Prefer pushing feature branches to a user fork in paperclip-dev skill (#4572)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - The `paperclip-dev` skill is the canonical reference agents read
before doing development work on the Paperclip repo itself
> - Today the skill assumes feature branches get pushed to `origin` (=
`paperclipai/paperclip`), which clutters the upstream branch list when
contributors actually have personal forks
> - This is the standard open-source contribution pattern (push to fork,
PR upstream) and the skill should reflect it
> - This pull request adds a "Forks — Prefer Pushing to a User Fork"
section that teaches agents to detect a fork remote, push there, and
only fall back to `origin` when no fork is configured
> - The benefit is cleaner upstream branch hygiene and behavior that
matches typical contributor workflows without any code/runtime change

## What Changed

- Added a new **Forks — Prefer Pushing to a User Fork** section to
`skills/paperclip-dev/SKILL.md` covering:
- How to detect a user fork via `git remote -v` (treat any
non-`paperclipai` GitHub remote as the fork)
  - How to push to the fork (`git push -u <fork-remote> HEAD`)
- How to create the PR from the fork (`gh pr create --repo
paperclipai/paperclip --head <fork-owner>:<branch>`)
- The no-fork fallback (push to `origin`, do not auto-create a fork —
ask first)
  - Keeping the fork's `master` in sync
- Added a reinforcing entry to the **Common Mistakes** table linking
back to the new section

## Verification

- Docs-only change to a single markdown skill file. Reviewer can confirm
by reading the diff in `skills/paperclip-dev/SKILL.md`:
- New `## Forks — Prefer Pushing to a User Fork` section sits between
`## Worktrees` and `## Pull Requests`
  - New row appended to the `## Common Mistakes` table
- No tests, no build, no runtime behavior affected.

## Risks

- Low risk. Documentation-only edit. The instructions are advisory —
they only change agent behavior on future runs that read the skill.

## Model Used

- Provider: Anthropic (Claude)
- Model ID: `claude-opus-4-7` (Claude Opus 4)
- Capabilities: tool use (file read/edit, shell, git, gh CLI), extended
reasoning
- Context: invoked via Claude Code / Paperclip heartbeat for issue
PAPA-139

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass (N/A — docs-only change; no
test surface)
- [x] I have added or updated tests where applicable (N/A)
- [x] If this change affects the UI, I have included before/after
screenshots (N/A — no UI change)
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-26 22:19:07 -07:00
Dotta
82e257c7ba Cancel stale queued heartbeats when issue graph changes (PAP-2314) (#4534)
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-26 21:17:38 -05:00
Devin Foley
868d08903e test: isolate CLI company import e2e state (#4560)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies, and its
CLI import/export path is part of how operators move company state
safely between environments.
> - The `paperclipai company import/export` e2e test is supposed to
validate that portability flow inside a hermetic harness, not against a
developer's live Paperclip home.
> - This regression showed nested CLI subprocesses could silently fall
back to ambient `PAPERCLIP_*` state and mutate a real local instance by
creating extra companies such as `CLI-1-Roundtrip-Test`.
> - The first job was to pin the test subprocesses to isolated config,
home, instance, auth, and context paths, and to add a regression
assertion that proves the nested CLI writes stay inside the test-owned
state.
> - Once the PR was up, CI and Greptile exposed two follow-on issues
that were blocking merge: plugin SDK typecheck bootstrap was racing
across packages in fresh CI, and the new lock helper needed one more fix
to release its lock on failure.
> - This pull request therefore ends up doing two tightly related
things: fixing the original CLI isolation leak, and hardening the
supporting typecheck/bootstrap path enough for the fix to verify cleanly
in CI.
> - The benefit is that the portability e2e test is now actually
isolated, and the PR verification path is stable enough to catch
regressions instead of introducing its own nondeterministic failures.

## What Changed

- Hardened `cli/src/__tests__/company-import-export-e2e.test.ts` so
nested CLI subprocesses re-seed isolated `PAPERCLIP_CONFIG`,
`PAPERCLIP_HOME`, `PAPERCLIP_INSTANCE_ID`, `PAPERCLIP_CONTEXT`,
`PAPERCLIP_AUTH_STORE`, and throwaway `HOME` values instead of falling
back to ambient machine state.
- Added a regression assertion around `paperclipai context set --json`,
then cleared the temporary `context.json` so the isolation check and the
later export/import flow stay independent.
- Passed the same isolated `HOME` into the server subprocess so both
sides of the e2e harness are symmetric.
- Introduced locking in `scripts/ensure-plugin-build-deps.mjs` and
switched the server/plugin example `typecheck` scripts to use that
helper instead of launching concurrent raw `@paperclipai/plugin-sdk`
builds.
- Fixed the helper failure path so it releases the lock before exiting
non-zero, which prevents stale-lock timeouts during parallel typecheck
runs.
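
A minimal sketch of that isolation pattern, with hypothetical directory and
helper names; the real wiring lives in
`cli/src/__tests__/company-import-export-e2e.test.ts`, and the exact CLI
arguments are illustrative:

```typescript
import { mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";
import { execFileSync } from "node:child_process";

// Build a throwaway env so nested CLI subprocesses never fall back to the
// developer's ambient PAPERCLIP_* state or real HOME.
function isolatedCliEnv(testRoot: string): NodeJS.ProcessEnv {
  return {
    ...process.env,
    HOME: join(testRoot, "home"),
    PAPERCLIP_HOME: join(testRoot, "paperclip-home"),
    PAPERCLIP_CONFIG: join(testRoot, "config.json"),
    PAPERCLIP_CONTEXT: join(testRoot, "context.json"),
    PAPERCLIP_AUTH_STORE: join(testRoot, "auth-store.json"),
    PAPERCLIP_INSTANCE_ID: "e2e-isolated-instance",
  };
}

const testRoot = mkdtempSync(join(tmpdir(), "paperclip-cli-e2e-"));
// Every nested invocation reuses the same isolated env, so writes made by
// commands like `paperclipai context set --json` stay in test-owned state.
execFileSync("npx", ["paperclipai", "context", "set", "--json"], {
  env: isolatedCliEnv(testRoot),
});
```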

## Verification

- `pnpm vitest run cli/src/__tests__/company-import-export-e2e.test.ts
--project paperclipai`
- `pnpm --filter paperclipai typecheck`
- `pnpm -r typecheck`
- PR checks now pass on the current head, including `policy`, `verify`,
`e2e`, `security/snyk`, and `Greptile Review`.

## Risks

- Low risk. The product-facing behavior change is scoped to test harness
code in the CLI e2e suite.
- The CI stabilization changes only affect bootstrap/typecheck helper
paths for the server and plugin/example packages, but they do touch
shared verification plumbing; the main risk is changing how fresh build
artifacts are prepared in local/CI typecheck runs.

## Model Used

- Anthropic Claude via Paperclip `claude_local`, model
`claude-opus-4-7`, high-effort local coding agent, used for the initial
implementation and first peer-reviewed verification.
- OpenAI Codex via Paperclip `codex_local`, model `gpt-5.4`, high
reasoning-effort local coding agent with tool use, used for CI triage,
Greptile follow-up fixes, verification, and PR maintenance.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-26 19:10:01 -07:00
Devin Foley
1d9f7a5149 Fix flaky heartbeat recovery teardown CI failure (#4559)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - The linked CI job is in the server test/recovery path, where
heartbeat runs and issue cleanup need to leave the control plane in a
consistent state even when retries fail.
> - In this case the failure was not runtime product behavior but test
teardown behavior inside `heartbeat-process-recovery.test.ts`.
> - The failing GitHub Actions job showed a foreign-key race on
`company_skills_company_id_companies_id_fk` while the test tried to
delete the parent company record.
> - The surrounding teardown code already uses bounded retry cleanup for
other dependent tables (`issues`, `heartbeatRuns`, and `agents`) because
this test file intentionally exercises asynchronous recovery flows.
> - This pull request applies that same retry pattern to the final
`db.delete(companies)` step, re-clearing `companySkills` before each
retry.
> - The benefit is a targeted fix for the CI flake without changing
runtime behavior or expanding the scope beyond the failing teardown
path.

## What Changed

- Wrapped the final `db.delete(companies)` call in
`server/src/__tests__/heartbeat-process-recovery.test.ts` with the same
5-attempt retry pattern already used elsewhere in that teardown.
- Re-cleared `companySkills` before each company-delete retry so
late-arriving FK-dependent rows do not mask the real test result.
- Verified the fix against the originally failing
`heartbeat-process-recovery` test file and the broader `pnpm test:run`
command under CI-like env conditions.
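
A minimal sketch of that teardown retry pattern, assuming the test file's
Drizzle `db` handle and table objects are in scope; the backoff delay is
illustrative:

```typescript
// Bounded retry for the final teardown step: re-clear FK-dependent
// companySkills rows before each attempt, then delete the parent companies.
async function deleteCompaniesWithRetry(maxAttempts = 5): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await db.delete(companySkills);
      await db.delete(companies);
      return;
    } catch (error) {
      // Late-arriving rows from async recovery flows can violate
      // company_skills_company_id_companies_id_fk; back off and retry.
      if (attempt === maxAttempts) throw error;
      await new Promise((resolve) => setTimeout(resolve, 100 * attempt));
    }
  }
}
```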

## Verification

- `pnpm exec vitest run
server/src/__tests__/heartbeat-process-recovery.test.ts`
- Re-ran `pnpm exec vitest run
server/src/__tests__/heartbeat-process-recovery.test.ts` multiple times
locally; the previously failing teardown stayed green.
- `env -u PAPERCLIP_API_URL -u PAPERCLIP_RUNTIME_API_URL -u
PAPERCLIP_RUN_ID -u PAPERCLIP_TASK_ID -u PAPERCLIP_AGENT_ID -u
PAPERCLIP_COMPANY_ID -u PAPERCLIP_API_KEY -u PAPERCLIP_WAKE_REASON -u
PAPERCLIP_WAKE_COMMENT_ID -u PAPERCLIP_WAKE_PAYLOAD_JSON -u
PAPERCLIP_APPROVAL_ID -u PAPERCLIP_APPROVAL_STATUS pnpm test:run`

## Risks

- Low risk. The change is test-only and scoped to teardown retry
behavior in a single server test file.
- If the underlying async cleanup behavior changes again, this test
could still become flaky in a different way, but this PR addresses the
specific FK race seen in the linked CI job.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI `gpt-5.4` via Paperclip `codex_local`, high reasoning mode,
with tool use for shell, git, HTTP API calls, and patch application.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-26 17:30:20 -07:00
Devin Foley
8145141c55 Fix external issue URL rewriting in markdown (#4558)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - Issue and comment rendering is part of the board UI where humans
supervise and inspect agent work.
> - External Paperclip issue URLs can appear in comments as references
to other runs, review threads, or remote test environments.
> - Those links must preserve their full destination, including origin,
port, and `#comment-...` fragments, or the operator is taken to the
wrong place.
> - The bug here was that absolute `http(s)` issue URLs were being
normalized into internal `/issues/...` routes in the markdown path.
> - This pull request stops rewriting absolute URLs while keeping
internal issue-reference behavior for relative paths and identifiers.
> - The benefit is that authored external links now navigate exactly
where the operator expects, especially for remote test and
comment-deep-link workflows.

## What Changed

- Stopped `ui/src/lib/issue-reference.ts` from treating absolute
`http(s)` URLs as internal issue paths.
- Added defense-in-depth in `ui/src/lib/mention-chips.ts` so absolute
`http(s)` URLs are never reclassified as issue mention chips.
- Updated `ui/src/lib/issue-reference.test.ts` to cover absolute
Paperclip URLs with preserved origin, port, and comment hash.
- Updated `ui/src/components/MarkdownBody.test.tsx` to assert the
reported URL renders as an external link, not an internal `/issues/...`
href.
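
A minimal sketch of the narrowed rewrite rule, using a hypothetical helper
name; the real logic lives in `ui/src/lib/issue-reference.ts`:

```typescript
// Hypothetical helper: returns an internal route for issue references, or
// null when the href should be left alone.
function toInternalIssuePath(href: string): string | null {
  // Absolute http(s) URLs keep their full destination (origin, port, and any
  // #comment-... fragment) and are never rewritten to /issues/... routes.
  if (/^https?:\/\//i.test(href)) return null;

  // Relative references such as "PAPA-115" or "/PAPA/issues/PAPA-115" still
  // resolve to the internal issue route.
  const match = href.match(/(?:^|\/)([A-Z]+-\d+)$/);
  return match ? `/issues/${match[1]}` : null;
}
```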

## Verification

- `pnpm exec vitest run ui/src/lib/issue-reference.test.ts
ui/src/components/MarkdownBody.test.tsx`
- Expected result: `2` files passed, `37` tests passed.
- Manual spot-check from the issue report path: a URL like
`http://remote.example.test:3103/PAPA/issues/PAPA-115#comment-...`
should remain an external link with its full destination preserved.

## Risks

- Low risk. The change narrows when Paperclip rewrites URLs, so the main
risk is if some existing workflow depended on absolute `http(s)`
Paperclip URLs being converted into internal issue links. The added
regression coverage is aimed at preventing that from regressing
silently.

## Model Used

- OpenAI Codex local agent via Paperclip `codex_local`
- Backing model family: GPT-5-based Codex runtime
- Exact backend model ID/version: not exposed by this adapter/runtime
surface
- Context window: not exposed by this adapter/runtime surface
- Capabilities used: tool use, shell command execution, code editing,
git operations, and local test execution

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [ ] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [ ] I will address all Greptile and reviewer comments before
requesting merge
2026-04-26 17:19:23 -07:00
Devin Foley
54ab0d24cd Fix disappearing issue comments (#4557)
## Thinking Path

> - Paperclip is a control plane for AI-agent companies, so issue detail
pages are a primary surface for understanding agent work and human
feedback.
> - The relevant subsystem here is the issue comments/chat experience
across the React issue detail page and the server comment pagination
API.
> - Long issue threads were only surfacing the newest page of comments
at first render, which hid earlier human and agent messages behind extra
pagination.
> - The first UI fix exposed that the descending cursor path on the
server could also fail for older-page fetches, leaving the chat tab
stuck on an infinite "Loading earlier comments..." state.
> - This needed to be addressed in both layers so the chat tab can
surface earlier conversation history without manual recovery and without
server errors.
> - This pull request auto-loads earlier comment pages in the issue
detail chat view and fixes the descending cursor predicate used by issue
comment pagination.
> - The benefit is that long-running issues like `PAPA-103` now show the
missing conversation history near the top of the chat surface instead of
hiding it or failing to load it.

## What Changed

- Auto-load earlier issue comment pages in the issue detail chat tab
until the thread reaches a 150-comment cap or there are no older
comments left.
- Add UI-side guard logic and regression coverage for optimistic issue
comment pagination so the autoload behavior stops cleanly.
- Replace the raw SQL descending cursor predicate in
`issueService.listComments` with typed Drizzle comparisons for the
`(createdAt, id)` anchor tuple.
- Add a server regression test that paginates earlier comments in
descending order from an anchor comment.
- Smoke-test the exact previously failing seeded `PAPA-103` cursor path
on the isolated dev instance used for review.
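
A minimal sketch of the typed descending cursor predicate, assuming a
hypothetical `comments` table object; the real change is in
`issueService.listComments`:

```typescript
import { and, eq, lt, or } from "drizzle-orm";

// Keyset predicate for "older than the anchor" in (createdAt, id) descending
// order: strictly older createdAt, or same createdAt with a smaller id.
function olderThanAnchor(anchor: { createdAt: Date; id: string }) {
  return or(
    lt(comments.createdAt, anchor.createdAt),
    and(
      eq(comments.createdAt, anchor.createdAt),
      lt(comments.id, anchor.id),
    ),
  );
}
```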

## Verification

- `pnpm --filter @paperclipai/server exec vitest run
src/__tests__/issues-service.test.ts`
- `pnpm --filter @paperclipai/server typecheck`
- Manual smoke against seeded `PAPA-103` data on the isolated dev
server:
  - `GET /api/issues/PAPA-103/comments?order=desc&limit=50` returns `200`
  - `GET
/api/issues/PAPA-103/comments?after=765d3609-edc6-4d11-a8fe-d466affbe85d&order=desc&limit=50`
now returns `200` with 50 comments instead of `500`

## Risks

- Moderate UI/perf risk on very large threads because the chat tab now
prefetches multiple earlier pages on mount; the cap is intentionally
limited to 150 comments to bound that work.
- Low API risk because the server fix only changes the cursor predicate
construction for anchor-based comment pagination, but any mistake there
would affect older-comment paging order.

> I checked `ROADMAP.md` before opening this PR and this bug fix does
not duplicate planned core work.

## Model Used

- OpenAI Codex coding agent in the Paperclip local adapter environment.
The exact backend model ID and context window were not exposed
in-session. Tool-assisted workflow included shell execution, git/GitHub
CLI, local test execution, and targeted code edits.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [ ] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-26 16:23:53 -07:00
Devin Foley
b2496c8067 fix(auth): trust allowed hostname port variants on detected listen port (#4554)
## Thinking Path

> - Paperclip is the control plane for autonomous AI companies, so
authenticated board access has to be predictable across local and
worktree deployments.
> - This change sits in the authenticated-mode server startup and Better
Auth origin-trust wiring.
> - The original auth branch fixed one real gap by adding port-qualified
trusted origins for allowed hostnames on non-default ports.
> - Review of that branch found a second-order bug: trusted origins were
still derived from the configured port before startup detected the
actual listen port.
> - In isolated worktrees, that meant a common `3100 -> 3101` port shift
could still leave Better Auth trusting the stale origin.
> - This pull request keeps the original allowed-hostname port-variant
fix, then moves trust derivation onto the resolved listen port and adds
regression coverage around startup wiring.
> - The benefit is that authenticated sessions keep working on allowed
private hostnames even when Paperclip has to auto-shift to a different
local port.

## What Changed

- Added `:port` trusted-origin variants for authenticated-mode
`allowedHostnames` when Paperclip runs on non-default ports.
- Changed authenticated startup so `listenPort` is detected before
Better Auth initialization, and explicit auth base URLs are rewritten
before auth startup.
- Updated `deriveAuthTrustedOrigins()` to accept the resolved listen
port so Better Auth trusts the actual browser origin instead of the
stale configured port.
- Added focused regression coverage in
`server/src/__tests__/better-auth.test.ts` and
`server/src/__tests__/server-startup-feedback-export.test.ts`.
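
A minimal sketch of the port-qualified origin derivation, with hypothetical
names and an http-only scheme; the real helper is
`deriveAuthTrustedOrigins()` fed with the resolved listen port:

```typescript
// Derive trusted origins from allowed hostnames, adding a :port variant when
// the resolved listen port is non-default (e.g. after a 3100 -> 3101 shift).
function trustedOriginsFor(allowedHostnames: string[], listenPort: number): string[] {
  const origins: string[] = [];
  for (const host of allowedHostnames) {
    origins.push(`http://${host}`);
    if (listenPort !== 80) {
      origins.push(`http://${host}:${listenPort}`);
    }
  }
  return origins;
}
```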

## Verification

- `pnpm exec vitest run server/src/__tests__/better-auth.test.ts
server/src/__tests__/server-startup-feedback-export.test.ts`
- Reviewer re-check: reviewed commits `380f5b9f` and `092bb34c` after
the follow-up fix landed and found no remaining issues.

## Risks

- Low risk: this only affects authenticated-mode origin derivation and
startup ordering around detected listen ports.
- Main behavioral shift: startup no longer mutates `config.port` to the
selected port; it now carries `requestedListenPort` separately and uses
`listenPort` where runtime behavior needs the resolved value.
- If another path was implicitly relying on `config.port` being
overwritten during startup, that path would need follow-up, though the
current startup/test coverage did not reveal one.

> I checked `ROADMAP.md` and did not find an overlapping planned core
work item for this auth trusted-origin port handling fix.

## Model Used

- OpenAI Codex via Paperclip `codex_local` agents for implementation and
review. Exact backend model ID/context window were not surfaced in this
run context; work was performed through the Codex local adapter with
tool use, code execution, and review passes.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-26 15:40:39 -07:00
Devin Foley
08af830430 Tighten publicBaseUrl port rewriting (#4553)
## Thinking Path

> - Paperclip is a control plane for autonomous agent companies, so its
local and authenticated deployment behavior has to stay predictable
under port rebinding and worktree isolation.
> - This change sits in the server/worktree configuration path that
derives runtime URLs and auth origins from `auth.publicBaseUrl`.
> - The original hostname-port rewrite change fixed one real gap for
private/tailnet host:port worktree setups, but it widened the rewrite
rule too far.
> - Rewriting every explicit `auth.publicBaseUrl` can corrupt public or
reverse-proxy URLs by turning a stable origin like
`https://paperclip.example` into a local listen-port URL.
> - Paperclip's auth and trusted-origin handling depend on that URL
staying semantically correct, so this had to be narrowed before merge.
> - This pull request tightens the rewrite rule to explicit-port URLs
only and adds regression coverage across the CLI helper, worktree config
persistence, and server startup path.
> - The benefit is that private host:port worktree flows still work,
while public/default-port URLs remain stable and safe.

## What Changed

- Tightened `rewriteLocalUrlPort` in `cli/src/commands/worktree-lib.ts`,
`server/src/worktree-config.ts`, and `server/src/index.ts` so it only
rewrites URLs that already include an explicit port.
- Removed the old loopback-only hostname gate from the CLI/worktree
helpers and replaced it with the more precise `parsed.port` guard.
- Updated CLI helper coverage to assert that explicit-port non-loopback
URLs still rewrite while no-port public URLs stay unchanged.
- Expanded `server/src/__tests__/worktree-config.test.ts` to cover
explicit-port rewrite and no-port stability for both persisted worktree
config and in-memory runtime port selection.
- Added startup-path coverage in
`server/src/__tests__/server-startup-feedback-export.test.ts` for
`detect-port` rebinding with both explicit-port and no-port
`auth.publicBaseUrl` values.
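
A minimal sketch of the tightened rule, mirroring the `parsed.port` guard
described above; the real helper is `rewriteLocalUrlPort` in the
CLI/worktree/server paths:

```typescript
// Only rewrite URLs that already declare an explicit port; public or
// reverse-proxy origins like https://paperclip.example stay untouched.
function rewritePortIfExplicit(baseUrl: string, listenPort: number): string {
  const parsed = new URL(baseUrl);
  if (parsed.port === "") return baseUrl;
  parsed.port = String(listenPort);
  return parsed.toString();
}
```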

## Verification

- `pnpm --filter @paperclipai/plugin-sdk build`
- `npx vitest run
server/src/__tests__/server-startup-feedback-export.test.ts`
- `npx vitest run cli/src/__tests__/worktree.test.ts
server/src/__tests__/worktree-config.test.ts`
- All of the above were run locally in this issue worktree and passed.

## Risks

- Low risk. The behavior change is deliberately narrower than the
reviewed broad-host rewrite and is guarded by regression coverage for
both the explicit-port and no-port cases.
- The main remaining risk is behavioral only if another code path starts
depending on port rewriting for URLs that never declared a port, which
would be a separate bug.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex local agent using `gpt-5.4` with high reasoning effort,
tool use, shell execution, and file editing.
- Anthropic Claude local agent using `claude-opus-4-6` for follow-up
code review approval on the implementation issue.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-26 14:29:22 -07:00
Devin Foley
d47ffa87f0 Fix CEO AGENT_HOME paths and centralize workspace env propagation (#4551)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - The local adapter layer is responsible for turning Paperclip runtime
context into the environment seen by the child agent process.
> - The CEO onboarding bundle tells the agent where to read and write
its persistent memory and fact files.
> - That bundle was using `./memory/...` and `./life/...`, which only
works when the process cwd happens to equal the agent home directory.
> - At the same time, six local adapters each duplicated the same
workspace-env propagation logic, including `AGENT_HOME`, which makes
this contract easy to drift.
> - This pull request fixes the CEO instructions to use
`$AGENT_HOME/...` and centralizes workspace-env propagation in one
shared helper with shared tests.
> - The benefit is a real bug fix for agent memory paths plus a single
tested contract that makes future built-in adapter work less likely to
forget `AGENT_HOME`.

## What Changed

- Updated `server/src/onboarding-assets/ceo/HEARTBEAT.md` to use
`$AGENT_HOME/memory/...` and `$AGENT_HOME/life/...` instead of
cwd-relative `./memory/...` and `./life/...`.
- Added `applyPaperclipWorkspaceEnv(...)` in
`packages/adapter-utils/src/server-utils.ts` to centralize
`PAPERCLIP_WORKSPACE_*` and `AGENT_HOME` propagation.
- Added shared helper coverage in
`packages/adapter-utils/src/server-utils.test.ts` for both populated and
skip-empty cases.
- Switched the built-in local adapters (`claude_local`, `codex_local`,
`cursor_local`, `gemini_local`, `opencode_local`, `pi_local`) over to
the shared helper instead of inline env assignment blocks.
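
A minimal sketch of the shared propagation contract, with a hypothetical
signature and an assumed `PAPERCLIP_WORKSPACE_PATH` variable name; the real
helper is `applyPaperclipWorkspaceEnv(...)` in
`packages/adapter-utils/src/server-utils.ts`:

```typescript
interface WorkspaceEnvInput {
  workspacePath?: string; // hypothetical field names for illustration
  agentHome?: string;
}

// Centralized propagation: set only non-empty string values, matching the
// prior per-adapter behavior, and always include AGENT_HOME when present.
function applyWorkspaceEnv(env: NodeJS.ProcessEnv, input: WorkspaceEnvInput): void {
  if (input.workspacePath) env.PAPERCLIP_WORKSPACE_PATH = input.workspacePath;
  if (input.agentHome) env.AGENT_HOME = input.agentHome;
}
```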

## Verification

- `pnpm install`
- `pnpm exec vitest run packages/adapter-utils/src/server-utils.test.ts
packages/adapters/claude-local/src/server/execute.remote.test.ts
packages/adapters/codex-local/src/server/execute.remote.test.ts
packages/adapters/cursor-local/src/server/execute.remote.test.ts
packages/adapters/gemini-local/src/server/execute.remote.test.ts
packages/adapters/opencode-local/src/server/execute.remote.test.ts
packages/adapters/pi-local/src/server/execute.remote.test.ts`
- Result: 7 test files passed, 31 tests passed, 0 failures.

## Risks

- Low risk.
- The only behavioral surface is the shared env propagation refactor
across six adapters; if the helper diverged from prior semantics, an
adapter could miss a workspace env var.
- The shared helper test plus the affected adapter execute tests reduce
that risk, and the helper preserves the prior "set only non-empty
strings" behavior.

## Model Used

- OpenAI Codex via Paperclip `codex_local` agent runtime; tool-assisted
coding workflow with shell execution, file patching, git operations, and
API interaction. The exact backend model identifier and context window
are not surfaced by this local runtime.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-26 13:57:35 -07:00
Devin Foley
d1484551ee Add open-source hygiene note to paperclip-dev skill (#4541)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - The `paperclip-dev` skill is part of the contributor and agent
workflow layer that tells developers how to work in this repository
safely.
> - That skill already references the public upstream `origin`, but it
did not explicitly say that pushes there must be treated as publishable
open-source output.
> - Without that reminder, contributors are more likely to leak secrets,
PII, private logs, machine-local config, or noisy throwaway git history
into the public repo.
> - This pull request adds a prominent `OPEN SOURCE HYGIENE` callout
near the top of the skill, before the git workflow guidance.
> - The benefit is clearer safety guidance for contributors and less
accidental disclosure or branch/commit noise on the upstream project.

## What Changed

- Added an `OPEN SOURCE HYGIENE` callout near the top of
`skills/paperclip-dev/SKILL.md`.
- Explicitly warned that anything pushed to `origin` must be
publishable.
- Called out avoiding secrets, API keys, PII, private logs,
machine-local config, and noisy throwaway branches or checkpoint
commits.

## Verification

- N/A (docs-only change to a skill file; no test surface)

## Risks

- Low risk. This is a docs-only change in a skill file; the main risk is
wording tone or placement, not runtime behavior.

## Model Used

- OpenAI Codex via the `codex_local` Paperclip adapter, GPT-5-based
coding agent runtime. Exact backend serving model ID is not exposed in
this heartbeat environment. Tool use, shell execution, and patch
application were enabled.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [ ] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-26 12:14:49 -07:00
Devin Foley
91333ec86f feat: add paperclip-dev skill with optional bundled skill support (#3854)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Agents working on the Paperclip codebase itself need guidance on dev
workflows: server lifecycle, worktrees, builds, database ops,
diagnostics
> - There was no bundled skill covering these workflows — agents had to
figure it out from scratch each time
> - Additionally, not every skill should be force-installed on every
agent — a dev-focused skill should be opt-in
> - This PR adds a `paperclip-dev` skill with `required: false`
frontmatter so it ships with Paperclip but isn't auto-installed
> - The skill's PR section references canonical files
(`.github/PULL_REQUEST_TEMPLATE.md`, `CONTRIBUTING.md`) instead of
duplicating their content, with gated instructions that force agents to
read those files before creating any PR
> - The benefit is that developers (human or agent) can opt in to
structured dev guidance without polluting the default agent skill set or
creating drift between duplicated docs

## What Changed

- Added `skills/paperclip-dev/SKILL.md` covering server management,
worktree lifecycle, builds, database ops, diagnostics, agent operations,
and common mistakes
- The Pull Requests section uses gated, reference-based instructions —
agents MUST read `.github/PULL_REQUEST_TEMPLATE.md` and
`CONTRIBUTING.md` before running `gh pr create`, with a brief checklist
of required section names (no content duplication)
- Updated `packages/adapter-utils/src/server-utils.ts` to respect
`required: false` frontmatter — optional skills are bundled but not
auto-installed on agents
- Added test in `server/src/__tests__/paperclip-skill-utils.test.ts`
verifying that optional skills are excluded from the default install set
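
A minimal sketch of the optional-skill filtering, assuming a simplified entry
shape; the real behavior sits behind `listPaperclipSkillEntries()` and
`resolvePaperclipDesiredSkillNames()`:

```typescript
interface SkillEntry {
  name: string;
  required?: boolean; // absent defaults to true, so existing skills are unaffected
}

// Optional skills (required: false) stay in the company skill library but are
// excluded from the default auto-install set.
function defaultInstallSet(entries: SkillEntry[]): string[] {
  return entries
    .filter((entry) => entry.required !== false)
    .map((entry) => entry.name);
}
```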

## Verification

```bash
# Run tests
pnpm test

# Manual verification: create a fresh worktree without seeding
npx paperclipai worktree:make test-optional-skill --no-seed
cd ~/paperclip-test-optional-skill
eval "$(npx paperclipai worktree env)"
npx paperclipai run

# Verify paperclip-dev appears in company skill library but is NOT auto-assigned
# Call listPaperclipSkillEntries() — paperclip-dev should show required: false
# Call resolvePaperclipDesiredSkillNames() — paperclip-dev should NOT be in the default set

# Cleanup
npx paperclipai worktree:cleanup test-optional-skill
```

## Risks

- Low risk. The `required` field defaults to `true` when absent, so all
existing skills behave identically. Only the new `paperclip-dev` skill
sets `required: false`.

## Model Used

Claude Opus 4.6 (`claude-opus-4-6`) via Claude Code, with tool use and
extended context.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-26 11:06:13 -07:00
Dotta
c036bbfa98 Add first-class security agent role to taxonomy (#4532)
## Thinking Path

> - Paperclip is the control plane for AI-agent companies, so agent
metadata is part of the platform's governance and audit surface.
> - The shared agent taxonomy in `@paperclipai/shared` is the source of
truth for allowed agent roles and their UI labels.
> - The current taxonomy lacks a `security` role, which causes Security
Engineer hires to collapse into `engineer`.
> - That breaks separation-of-duties evidence in telemetry and weakens
role-level audit fidelity even though it does not directly change
permissions.
> - This pull request adds `security` as a first-class shared role and
covers the prior rejection path with a regression test.
> - The benefit is that Security Engineer agents can now be persisted
and rendered under the correct role without schema or permission churn.

## What Changed

- Added `security` to `AGENT_ROLES` in
`packages/shared/src/constants.ts`.
- Added the `Security` display label to `AGENT_ROLE_LABELS` so existing
UI consumers render the new role automatically.
- Added a shared validator regression test proving `createAgentSchema`
accepts `role: "security"` and that the label stays stable.
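
A minimal sketch of the shape of the change, with the surrounding role
entries reduced to illustrative placeholders; the real constants live in
`packages/shared/src/constants.ts`:

```typescript
// Only "engineer" and the new "security" entry are taken from this PR; the
// other roles shown here stand in for the existing taxonomy.
export const AGENT_ROLES = ["ceo", "engineer", "security"] as const;

export type AgentRole = (typeof AGENT_ROLES)[number];

export const AGENT_ROLE_LABELS: Record<AgentRole, string> = {
  ceo: "CEO",
  engineer: "Engineer",
  security: "Security", // new display label consumed by existing UI
};
```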

## Verification

- `pnpm --filter @paperclipai/shared typecheck`
- `pnpm --filter @paperclipai/shared exec vitest run
src/adapter-types.test.ts`

## Risks

- Low risk. This is a shared enum expansion with no database migration
and no permission-model change.
- Residual risk: this PR does not backfill existing agents already
persisted as `engineer`; it only fixes new validations and labels going
forward.

> I checked `ROADMAP.md`/`doc` for overlap and did not find an existing
planned item covering this taxonomy fix.

## Model Used

- OpenAI GPT-5.4 via the Codex local adapter, with tool use and local
code execution enabled. The runtime did not surface a separate
context-window identifier in agent metadata.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [ ] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-26 07:52:05 -05:00
Dotta
df425fde96 Present ordered sub-issues as a workflow checklist (#4523)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - Operators use issue detail pages and child issue lists to understand
multi-step execution plans.
> - Ordered sub-issues currently read like a flat table, so dependency
chains and current next steps are harder to scan.
> - The branch work adds a workflow-oriented presentation for child
issues without changing the single-assignee task model.
> - This pull request makes ordered sub-issues read more like a progress
checklist while preserving normal issue list controls.
> - The benefit is that operators can see completed steps, active work,
blocked follow-ups, and dependency order at a glance.

## What Changed

- Added workflow sorting utilities and tests for dependency-aware child
issue ordering.
- Added sub-issue progress summary, checklist numbering, current-step
affordances, blocker context, and done-state de-emphasis in the issue
list UI.
- Wired issue detail sub-issue panels to use the workflow sort/progress
checklist presentation.
- Updated issue service behavior/tests for child issue ordering inputs
used by the UI.
- Added a Storybook visual review fixture and screenshot helper for the
sub-issue workflow checklist surface.
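
A minimal sketch of dependency-aware ordering, assuming a hypothetical
`blockedBy` field on child issues; the real utilities sit behind
`ui/src/lib/workflow-sort.ts`:

```typescript
interface ChildIssue {
  id: string;
  blockedBy: string[]; // sibling issue ids that must complete first (assumed shape)
}

// Greedy topological-style ordering: place issues whose blockers are already
// placed, and fall back to the first remaining issue so cycles or blockers
// outside the sibling set cannot stall the checklist.
function workflowOrder(issues: ChildIssue[]): ChildIssue[] {
  const ordered: ChildIssue[] = [];
  const placed = new Set<string>();
  const remaining = [...issues];
  while (remaining.length > 0) {
    const readyIndex = remaining.findIndex((issue) =>
      issue.blockedBy.every((id) => placed.has(id)),
    );
    const next = remaining.splice(readyIndex === -1 ? 0 : readyIndex, 1)[0];
    placed.add(next.id);
    ordered.push(next);
  }
  return ordered;
}
```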

## Verification

- `pnpm run preflight:workspace-links && pnpm exec vitest run
server/src/__tests__/issues-service.test.ts
ui/src/components/IssueRow.test.tsx
ui/src/components/IssuesList.test.tsx ui/src/pages/IssueDetail.test.tsx
ui/src/lib/issue-detail-subissues.test.ts
ui/src/lib/workflow-sort.test.ts`
- Result: 6 test files passed, 55 tests passed, 34 embedded Postgres
issue-service tests skipped because `@embedded-postgres/darwin-x64` is
unavailable on this host.
- Visual review: generated Storybook screenshots from the existing local
Storybook server on port 6006 with `node
scripts/screenshot-subissues.mjs /tmp/pap-2189-subissues-screens
http://localhost:6006`.
- Screenshot artifacts:
  - Desktop dark: ![Desktop dark](doc/assets/pap-2189/desktop-1440x900-dark.png)
  - Desktop light: ![Desktop light](doc/assets/pap-2189/desktop-1440x900-light.png)
  - Mobile dark: ![Mobile dark](doc/assets/pap-2189/mobile-390x844-dark.png)
  - Mobile light: ![Mobile light](doc/assets/pap-2189/mobile-390x844-light.png)
- Local Storybook note: starting a second Storybook process selected
port 6008 because 6006 was occupied, then Vite failed with an esbuild
host/binary version mismatch (`0.25.12` host vs `0.27.3` binary). The
already-running Storybook server on 6006 served the fixture successfully
for screenshots.

## Risks

- Medium UI risk: the issue list now has additional sub-issue-specific
visual states, so dense lists should be checked for spacing and
scannability.
- Low ordering risk: workflow sorting is covered by focused unit tests,
but unusual dependency topologies may still need reviewer attention.
- No migration risk: this PR does not add database migrations or touch
`pnpm-lock.yaml`.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 coding agent, tool-enabled shell/git/GitHub
workflow. Context window is runtime-provided and not exposed in this
environment.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-26 07:36:49 -05:00
Devin Foley
40782f703d Fix release packaging for standalone public packages (#4494)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies, and the
sandbox-provider work just moved E2B into a standalone publishable
plugin package.
> - That plugin is intentionally excluded from the root pnpm workspace
so it can model third-party install behavior without forcing lockfile
churn in the main repo.
> - The merged architecture change exposed a follow-up release problem:
the canary publish workflow tried to publish `@paperclipai/plugin-e2b`,
but the tarball had no `dist/` payload because standalone public
packages were not being built in the release path.
> - That means the release pipeline needed a packaging fix in core
release tooling, not another architectural change in the sandbox
provider itself.
> - This pull request adds a generic release step for public packages
that live outside the pnpm workspace, instead of hardcoding E2B-specific
behavior into the release script.
> - The benefit is that standalone publishable packages can be built and
packed correctly during release, including future sandbox-provider
plugins that follow the same pattern.

## What Changed

- Added `scripts/build-standalone-public-packages.mjs` to discover
public packages outside the pnpm workspace, run a clean package-local
install, and build them before publish.
- Updated `scripts/release.sh` to invoke that helper immediately after
the normal workspace build step.
- Kept the behavior generic by driving off the existing public package
map and pnpm workspace patterns rather than special-casing
`@paperclipai/plugin-e2b`.
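
A minimal sketch of the release step, with a hypothetical discovery helper;
the real script is `scripts/build-standalone-public-packages.mjs`, invoked
from `scripts/release.sh` after the workspace build:

```typescript
import { execSync } from "node:child_process";

// Hypothetical discovery helper: public packages that the pnpm workspace
// patterns exclude but the public package map still lists for publishing.
declare function discoverStandalonePublicPackages(): string[];

for (const pkgDir of discoverStandalonePublicPackages()) {
  // Clean, package-local install so the build never leans on the root
  // workspace or pnpm-lock.yaml, then build the publishable dist/ payload.
  execSync("pnpm install --ignore-workspace --no-lockfile", { cwd: pkgDir, stdio: "inherit" });
  execSync("pnpm build", { cwd: pkgDir, stdio: "inherit" });
}
```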

## Verification

- `rm -rf packages/plugins/sandbox-providers/e2b/dist`
- `node ./scripts/build-standalone-public-packages.mjs`
- `cd packages/plugins/sandbox-providers/e2b && npm pack --dry-run`
- Confirm the tarball now includes the rebuilt `dist/` files instead of
only `README.md` / `package.json`

## Risks

- Low risk: this only changes the release build path for public packages
outside the pnpm workspace.
- The helper performs a clean package-local install for each standalone
public package, so release time may increase slightly as more such
packages are added.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex via `codex_local`
- Model ID: `gpt-5.4`
- Reasoning effort: `high`
- Context window observed in runtime session metadata: `258400` tokens
- Capabilities used: terminal tool execution, git, GitHub CLI, and local
build/test inspection

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-25 12:16:23 -07:00
Devin Foley
4ef969f084 Add E2B sandbox provider plugin (#4452)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Sandbox environments are part of that execution layer, and the
recent core refactor moved provider-specific behavior to a generic
plugin seam
> - This pull request adds a dedicated `@paperclipai/plugin-e2b` package
so E2B can live entirely outside core host code
> - Because the feature is still unreleased, the plugin should model
third-party packaging directly instead of carrying extra
backward-compatibility complexity in core or the workspace lockfile
> - This branch therefore makes the E2B provider a standalone
publishable package, documents the package-local dev flow, and keeps the
publish manifest/runtime dependency story correct
> - The benefit is that E2B becomes a true plugin reference
implementation that can be installed by package name without reopening
core Paperclip code

## What Changed

- Added `packages/plugins/paperclip-plugin-e2b` as the E2B sandbox
provider plugin package
- Implemented config validation, lease acquire/resume/release/destroy
handlers, workspace realization, and command execution for E2B sandboxes
- Excluded the E2B plugin package from the root workspace so the repo no
longer needs `pnpm-lock.yaml` churn for its third-party dependency graph
- Added package-local development/install support plus a prepack
manifest generator so the published tarball still declares
`@paperclipai/plugin-sdk` and `e2b` runtime dependencies
- Addressed review feedback by fixing sandbox cleanup on acquire
failures, rejecting blank templates, normalizing fractional `timeoutMs`,
and always passing the configured template name to the E2B SDK
- Updated focused Vitest coverage for config normalization, validation,
acquire cleanup, command execution, and lease release behavior
- Updated the Dockerfile deps stage to copy the E2B package manifest so
the policy check stays in sync

## Verification

- `cd packages/plugins/paperclip-plugin-e2b && pnpm install
--ignore-workspace --no-lockfile`
- `cd packages/plugins/paperclip-plugin-e2b && pnpm build`
- `cd packages/plugins/paperclip-plugin-e2b && pnpm --ignore-workspace
test`
- `cd packages/plugins/paperclip-plugin-e2b && pnpm --ignore-workspace
typecheck`
- `cd packages/plugins/paperclip-plugin-e2b && npm pack --dry-run`

## Risks

- The package now relies on a prepack manifest rewrite so the
publish-time dependency list stays correct while the repo-local dev
manifest stays workspace-light
- The current repo snapshot is still unreleased, so the generated
publish manifest points at the repo SDK version until the normal release
flow rewrites versions before publish
- Real-world E2B environments may still expose edge cases around
lifecycle timing or sandbox metadata beyond the mocked unit coverage

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex via `codex_local`
- Model ID: `gpt-5.4`
- Reasoning effort: `high`
- Context window observed in runtime session metadata: `258400` tokens
- Capabilities used: terminal tool execution, git, GitHub CLI, and local
build/test inspection

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-25 11:01:11 -07:00
Devin Foley
5bd0f578fd Generalize sandbox provider core for plugin-only providers (#4449)
## Thinking Path

> - Paperclip is a control plane, so optional execution providers should
sit at the plugin edge instead of hardcoding provider-specific behavior
into core shared/server/ui layers.
> - Sandbox environments are already first-class, and the fake provider
proves the built-in path; the remaining gap was that real providers
still leaked provider-specific config and runtime assumptions into core.
> - That coupling showed up in config normalization, secret persistence,
capabilities reporting, lease reconstruction, and the board UI form
fields.
> - As long as core knew about those provider-shaped details, shipping a
provider as a pure third-party plugin meant every new provider would
still require host changes.
> - This pull request generalizes the sandbox provider seam around
schema-driven plugin metadata and generic secret-ref handling.
> - The runtime and UI now consume provider metadata generically, so
core only special-cases the built-in fake provider while third-party
providers can live entirely in plugins.

## What Changed

- Added generic sandbox-provider capability metadata so plugin-backed
providers can expose `configSchema` through shared environment support
and the environments capabilities API.
- Reworked sandbox config normalization/persistence/runtime resolution
to handle schema-declared secret-ref fields generically, storing them as
Paperclip secrets and resolving them for probe/execute/release flows.
- Generalized plugin sandbox runtime handling so provider validation,
reusable-lease matching, lease reconstruction, and plugin worker calls
all operate on provider-agnostic config instead of provider-shaped
branches.
- Replaced hardcoded sandbox provider form fields in Company Settings
with schema-driven rendering and blocked agent environment selection
from the built-in fake provider.
- Added regression coverage for the generic seam across shared support
helpers plus environment config, probe, routes, runtime, and
sandbox-provider runtime tests.

## Verification

- `pnpm vitest --run packages/shared/src/environment-support.test.ts
server/src/__tests__/environment-config.test.ts
server/src/__tests__/environment-probe.test.ts
server/src/__tests__/environment-routes.test.ts
server/src/__tests__/environment-runtime.test.ts
server/src/__tests__/sandbox-provider-runtime.test.ts`
- `pnpm -r typecheck`

## Risks

- Plugin sandbox providers now depend more heavily on accurate
`configSchema` declarations; incorrect schemas can misclassify
secret-bearing fields or omit required config.
- Reusable lease matching is now metadata-driven for plugin-backed
providers, so providers that fail to persist stable metadata may
reprovision instead of resuming an existing lease.
- The UI form is now fully schema-driven for plugin-backed sandbox
providers; provider manifests without good defaults or descriptions may
produce a rougher operator experience.

## Model Used

- OpenAI Codex via `codex_local`
- Model ID: `gpt-5.4`
- Reasoning effort: `high`
- Context window observed in runtime session metadata: `258400` tokens
- Capabilities used: terminal tool execution, git, and local code/test
inspection

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-24 18:03:41 -07:00
Dotta
deba60ebb2 Stabilize serialized server route tests (#4448)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - The server route suite is a core confidence layer for auth, issue
context, and workspace runtime behavior
> - Some route tests were doing extra module/server isolation work that
made local runs slower and more fragile
> - The stable Vitest runner also needs to pass server-relative exclude
paths to avoid accidentally re-including serialized suites
> - This pull request tightens route test isolation and runner
serialization behavior
> - The benefit is more reliable targeted and stable-route test
execution without product behavior changes

## What Changed

- Updated `run-vitest-stable.mjs` to exclude serialized server tests
using server-relative paths.
- Forced the server Vitest config to use a single worker in addition to
isolated forks.
- Simplified agent permission route tests to create per-request test
servers without shared server lifecycle state.
- Stabilized issue goal context route mocks by using static mocked
services and a sequential suite.
- Re-registered workspace runtime route mocks before cache-busted route
imports.
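
A minimal sketch of the single-worker, isolated-forks setting, using standard
Vitest config keys; the real change is split across the server Vitest config
and `scripts/run-vitest-stable.mjs`:

```typescript
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    pool: "forks",
    poolOptions: {
      forks: {
        isolate: true, // fresh module state per test file
        minForks: 1,
        maxForks: 1, // single worker so serialized server suites never overlap
      },
    },
  },
});
```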

## Verification

- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/agent-permissions-routes.test.ts
server/src/__tests__/issues-goal-context-routes.test.ts
server/src/__tests__/workspace-runtime-routes-authz.test.ts --pool=forks
--poolOptions.forks.isolate=true`
- `node --check scripts/run-vitest-stable.mjs`

## Risks

- Low risk. This is test infrastructure only.
- The stable runner path fix changes which tests are excluded from the
non-serialized server batch, matching the server project root that
Vitest applies internally.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 coding agent, tool-enabled with
shell/GitHub/Paperclip API access. Context window was not reported by
the runtime.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-24 19:27:00 -05:00
Dotta
f68e9caa9a Polish markdown external link wrapping (#4447)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - The board UI renders agent comments, PR links, issue links, and
operational markdown throughout issue threads
> - Long GitHub and external links can wrap awkwardly, leaving icons
orphaned from the text they describe
> - Small inbox visual polish also helps repeated board scanning without
changing behavior
> - This pull request glues markdown link icons to adjacent link
characters and removes a redundant inbox list border
> - The benefit is cleaner, more stable markdown and inbox rendering for
day-to-day operator review

## What Changed

- Added an external-link indicator for external markdown links.
- Kept the GitHub icon attached to the first link character so it does
not wrap onto a separate line.
- Kept the external-link icon attached to the final link character so it
does not wrap away from the URL/text.
- Added markdown rendering regressions for GitHub and external link icon
wrapping.
- Removed the extra border around the inbox list card.

## Verification

- `pnpm exec vitest run --project @paperclipai/ui
ui/src/components/MarkdownBody.test.tsx`
- `pnpm --filter @paperclipai/ui typecheck`

## Risks

- Low risk. The markdown change is limited to link child rendering and
preserves existing href/target/rel behavior.
- Visual-only inbox polish.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 coding agent, tool-enabled with
shell/GitHub/Paperclip API access. Context window was not reported by
the runtime.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-24 19:26:13 -05:00
Dotta
73fbdf36db Gate stale-run watchdog decisions by board access (#4446)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - The run ledger surfaces stale-run watchdog evaluation issues and
recovery actions
> - Viewer-level board users should be able to inspect status without
getting controls that the server will reject
> - The UI also needs enough board-access context to know when to hide
those decision actions
> - This pull request exposes board memberships in the current board
access snapshot and gates watchdog action controls for known viewer
contexts
> - The benefit is clearer least-privilege UI behavior around recovery
controls

## What Changed

- Included memberships in `/api/cli-auth/me` so the board UI can
distinguish active viewer memberships from operator/admin access.
- Added the stale-run evaluation issue assignee to output silence
summaries.
- Hid stale-run watchdog decision buttons for known non-owner viewer
contexts.
- Surfaced watchdog decision failures through toast and inline error
text.
- Threaded `companyId` through the issue activity run ledger so access
checks are company-scoped.
- Added IssueRunLedger coverage for non-owner viewers.
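
A minimal sketch of the gating check, assuming a simplified membership shape
from `/api/cli-auth/me`; role names beyond "viewer" are illustrative:

```typescript
interface BoardMembership {
  companyId: string;
  role: "viewer" | "operator" | "admin"; // simplified role set for illustration
}

// Show watchdog decision controls only when the current company context is
// not a known viewer-only membership. Contexts with no membership data
// (local implicit, instance admin) keep the controls and rely on server-side
// authorization to reject anything they should not do.
function canShowWatchdogActions(
  memberships: BoardMembership[] | undefined,
  companyId: string,
): boolean {
  if (!memberships) return true;
  const membership = memberships.find((m) => m.companyId === companyId);
  return membership ? membership.role !== "viewer" : true;
}
```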

## Verification

- `pnpm exec vitest run --project @paperclipai/ui
ui/src/components/IssueRunLedger.test.tsx`
- `pnpm --filter @paperclipai/server typecheck`
- `pnpm --filter @paperclipai/ui typecheck`

## Risks

- Medium-low risk. This is a UI gating change backed by existing server
authorization.
- Local implicit and instance-admin board contexts continue to show
watchdog decision controls.
- No migrations.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 coding agent, tool-enabled with
shell/GitHub/Paperclip API access. Context window was not reported by
the runtime.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-24 19:25:23 -05:00
Dotta
6916e30f8e Cancel stale retries when issue ownership changes (#4445)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Issue execution is guarded by run locks and bounded retry scheduling
> - A failed run can schedule a retry, but the issue may be reassigned
before that retry becomes due
> - The old assignee's scheduled retry should not continue to hold or
reclaim execution for the issue
> - This pull request cancels stale scheduled retries when ownership
changes and cancels live work when an issue is explicitly cancelled
> - The benefit is cleaner issue handoff semantics and fewer stranded or
incorrect execution locks

## What Changed

- Cancel scheduled retry runs when their issue has been reassigned
before the retry is promoted.
- Clear stale issue execution locks and cancel the associated wakeup
request when a stale retry is cancelled.
- Avoid deferring a new assignee behind a previous assignee's scheduled
retry.
- Cancel an active run when an issue status is explicitly changed to
`cancelled`, while leaving `done` transitions alone.
- Added route and heartbeat regressions for reassignment and
cancellation behavior.
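As a rough sketch of that promotion-time ownership check (the type names and injected service functions below are invented for illustration):

```
// Illustrative only: cancel a scheduled retry whose agent no longer owns the issue.
interface ScheduledRetry {
  runId: string;
  issueId: string;
  agentId: string; // the agent this retry was scheduled for
}

interface RetryServices {
  getAssignee(issueId: string): Promise<string | null>;
  cancelStaleRetry(retry: ScheduledRetry, reason: string): Promise<void>; // also clears lock and wakeup
  promote(retry: ScheduledRetry): Promise<void>;
}

async function promoteOrCancel(retry: ScheduledRetry, services: RetryServices): Promise<void> {
  const assigneeId = await services.getAssignee(retry.issueId);
  if (assigneeId !== retry.agentId) {
    // The issue was reassigned while the retry was pending: drop the stale retry
    // instead of letting the previous assignee reclaim execution.
    await services.cancelStaleRetry(retry, "issue_reassigned_before_retry");
    return;
  }
  await services.promote(retry);
}
```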

## Verification

- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/heartbeat-retry-scheduling.test.ts
server/src/__tests__/issue-comment-reopen-routes.test.ts --pool=forks
--poolOptions.forks.isolate=true`
  - `issue-comment-reopen-routes.test.ts`: 28 passed.
  - `heartbeat-retry-scheduling.test.ts`: skipped by the existing embedded Postgres host guard (`Postgres init script exited with code null`).
- `pnpm --filter @paperclipai/server typecheck`

## Risks

- Medium risk because this changes heartbeat retry lifecycle behavior.
- The cancellation path is scoped to scheduled retries whose issue
assignee no longer matches the retrying agent, and logs a lifecycle
event for auditability.
- No migrations.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 coding agent, tool-enabled with
shell/GitHub/Paperclip API access. Context window was not reported by
the runtime.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-24 19:24:13 -05:00
Dotta
0c6961a03e Normalize escaped multiline issue and approval text (#4444)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - The board and agent APIs accept multiline issue, approval,
interaction, and document text
> - Some callers accidentally send literal escaped newline sequences
like `\n` instead of JSON-decoded line breaks
> - That makes comments, descriptions, documents, and approval notes
render as flattened text instead of readable markdown
> - This pull request centralizes multiline text normalization in shared
validators
> - The benefit is newline-preserving API behavior across issue and
approval workflows without route-specific fixes

## What Changed

- Added a shared `multilineTextSchema` helper that normalizes escaped `\n`, `\r\n`, and `\r` sequences to real line breaks (see the sketch after this list).
- Applied the helper to issue descriptions, issue update comments, issue
comment bodies, suggested task descriptions, interaction summaries,
issue documents, approval comments, and approval decision notes.
- Added shared validator regressions for issue and approval multiline
inputs.
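A minimal sketch of the normalization idea, assuming a Zod-based validator layer; the real helper's name matches the PR, but the sketch below uses its own name and a simplified signature:

```
import { z } from "zod";

// Minimal sketch (assuming Zod): turn literal escaped sequences that some
// callers send (backslash-r-backslash-n, backslash-r, or backslash-n) into
// real line breaks.
export const multilineTextSketch = z.string().transform((value) =>
  value
    .replace(/\\r\\n/g, "\n") // escaped CRLF -> real newline
    .replace(/\\r/g, "\n") // escaped CR -> real newline
    .replace(/\\n/g, "\n"), // escaped LF -> real newline
);

// multilineTextSketch.parse("step 1\\nstep 2") === "step 1\nstep 2"
```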

## Verification

- `pnpm exec vitest run --project @paperclipai/shared
packages/shared/src/validators/approval.test.ts
packages/shared/src/validators/issue.test.ts`
- `pnpm --filter @paperclipai/shared typecheck`

## Risks

- Low risk. This only changes text fields that are explicitly multiline
user/operator content.
- If a caller intentionally wanted literal backslash-n text in these
fields, it will now render as a real line break.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 coding agent, tool-enabled with
shell/GitHub/Paperclip API access. Context window was not reported by
the runtime.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-24 18:02:45 -05:00
Dotta
5a0c1979cf [codex] Add runtime lifecycle recovery and live issue visibility (#4419) 2026-04-24 15:50:32 -05:00
Dotta
9a8d219949 [codex] Stabilize tests and local maintenance assets (#4423)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - A fast-moving control plane needs stable local tests and repeatable
local maintenance tools so contributors can safely split and review work
> - Several route suites needed stronger isolation, Codex manual model
selection needed a faster-mode option, and local browser cleanup missed
Playwright's headless shell binary
> - Storybook static output also needed to be preserved as a generated
review artifact from the working branch
> - This pull request groups the test/local-dev maintenance pieces so
they can be reviewed separately from product runtime changes
> - The benefit is more predictable contributor verification and cleaner
local maintenance without mixing these changes into feature PRs

## What Changed

- Added stable Vitest runner support and serialized route/authz test
isolation.
- Fixed workspace runtime authz route mocks and stabilized
Claude/company-import related assertions.
- Allowed Codex fast mode for manually selected models.
- Broadened the agent browser cleanup script to detect
`chrome-headless-shell` as well as Chrome for Testing.
- Preserved generated Storybook static output from the source branch.

## Verification

- `pnpm exec vitest run
src/__tests__/workspace-runtime-routes-authz.test.ts
src/__tests__/claude-local-execute.test.ts --config vitest.config.ts`
from `server/` passed: 2 files, 19 tests.
- `pnpm exec vitest run src/server/codex-args.test.ts --config
vitest.config.ts` from `packages/adapters/codex-local/` passed: 1 file,
3 tests.
- `bash -n scripts/kill-agent-browsers.sh &&
scripts/kill-agent-browsers.sh --dry` passed; dry-run detected
`chrome-headless-shell` processes without killing them.
- `test -f ui/storybook-static/index.html && test -f
ui/storybook-static/assets/forms-editors.stories-Dry7qwx2.js` passed.
- `git diff --check public-gh/master..pap-2228-test-local-maintenance --
. ':(exclude)ui/storybook-static'` passed.
- `pnpm exec vitest run
cli/src/__tests__/company-import-export-e2e.test.ts --config
cli/vitest.config.ts` did not complete in the isolated split worktree
because `paperclipai run` exited during build prep with `TS2688: Cannot
find type definition file for 'react'`; this appears to be caused by the
worktree dependency symlink setup, not the code under test.
- Confirmed this PR does not include `pnpm-lock.yaml`.

## Risks

- Medium risk: the stable Vitest runner changes how route/authz tests
are scheduled.
- Generated `ui/storybook-static` files are large and contain minified third-party output; `git diff --check` reports whitespace errors inside those generated assets, so reviewers may choose to drop or regenerate that artifact before merge.
- No database migrations.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex coding agent based on GPT-5, with shell, git, Paperclip
API, and GitHub CLI tool use in the local Paperclip workspace.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

Note: the screenshot checklist item is not applicable because no source UI behavior changed; the included Storybook static output is a generated artifact preserved from the source branch.

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-24 15:11:42 -05:00
Devin Foley
70679a3321 Add sandbox environment support (#4415)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - The environment/runtime layer decides where agent work executes and
how the control plane reaches those runtimes.
> - Today Paperclip can run locally and over SSH, but sandboxed
execution needs a first-class environment model instead of one-off
adapter behavior.
> - We also want sandbox providers to be pluggable so the core does not
hardcode every provider implementation.
> - This branch adds the Sandbox environment path, the provider
contract, and a deterministic fake provider plugin.
> - That required synchronized changes across shared contracts, plugin
SDK surfaces, server runtime orchestration, and the UI
environment/workspace flows.
> - The result is that sandbox execution becomes a core control-plane
capability while keeping provider implementations extensible and
testable.

## What Changed

- Added sandbox runtime support to the environment execution path,
including runtime URL discovery, sandbox execution targeting,
orchestration, and heartbeat integration.
- Added plugin-provider support for sandbox environments so providers
can be supplied via plugins instead of hardcoded server logic.
- Added the fake sandbox provider plugin with deterministic behavior
suitable for local and automated testing.
- Updated shared types, validators, plugin protocol definitions, and SDK
helpers to carry sandbox provider and workspace-runtime contracts across
package boundaries.
- Updated server routes and services so companies can create sandbox
environments, select them for work, and execute work through the sandbox
runtime path.
- Updated the UI environment and workspace surfaces to expose sandbox
environment configuration and selection.
- Added test coverage for sandbox runtime behavior, provider seams,
environment route guards, orchestration, and the fake provider plugin.
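Purely as an illustration of the provider seam (these type names are not the actual shared contracts, which live in the shared and plugin-SDK packages):

```
// Illustrative shape only: what a pluggable sandbox provider could declare.
interface SandboxHandle {
  sandboxId: string;
  runtimeUrl: string; // where the control plane reaches the sandbox runtime
}

interface SandboxProvider {
  readonly name: string;
  create(options: { companyId: string; environmentId: string }): Promise<SandboxHandle>;
  execute(handle: SandboxHandle, command: string[]): Promise<{ exitCode: number }>;
  destroy(handle: SandboxHandle): Promise<void>;
}

// A deterministic fake provider for tests can return fixed handles and canned
// exit codes without touching real infrastructure.
```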

## Verification

- Ran locally before the final fixture-only scrub:
  - `pnpm -r typecheck`
  - `pnpm test:run`
  - `pnpm build`
- Ran locally after the final scrub amend:
  - `pnpm vitest run server/src/__tests__/runtime-api.test.ts`
- Reviewer spot checks:
  - create a sandbox environment backed by the fake provider plugin
  - run work through that environment
  - confirm sandbox provider execution does not inherit host secrets implicitly

## Risks

- This touches shared contracts, plugin SDK plumbing, server runtime
orchestration, and UI environment/workspace flows, so regressions would
likely show up as cross-layer mismatches rather than isolated type
errors.
- Runtime URL discovery and sandbox callback selection are sensitive to
host/bind configuration; if that logic is wrong, sandbox-backed
callbacks may fail even when execution succeeds.
- The fake provider plugin is intentionally deterministic and
test-oriented; future providers may expose capability gaps that this
branch does not yet cover.

## Model Used

- OpenAI Codex coding agent on a GPT-5-class backend in the
Paperclip/Codex harness. Exact backend model ID is not exposed
in-session. Tool-assisted workflow with shell execution, file editing,
git history inspection, and local test execution.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-24 12:15:53 -07:00
Dotta
641eb44949 [codex] Harden create-agent skill governance (#4422)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Hiring agents is a governance-sensitive workflow because it grants
roles, adapter config, skills, and execution capability
> - The create-agent skill needs explicit templates and review guidance
so hires are auditable and not over-permissioned
> - Skill sync also needs to recognize bundled Paperclip skills
consistently for Codex local agents
> - This pull request expands create-agent role templates, adds a
security-engineer template, and documents capability/secret-handling
review requirements
> - The benefit is safer, more repeatable agent creation with clearer
approval payloads and less permission sprawl

## What Changed

- Expanded `paperclip-create-agent` guidance for template selection,
adjacent-template drafting, and role-specific review bars.
- Added a Security Engineer agent template and collaboration/safety
sections for Coder, QA, and UX Designer templates.
- Hardened draft-review guidance around desired skills, external-system
access, secrets, and confidential advisory handling.
- Updated LLM agent-configuration guidance to point hiring workflows at
the create-agent skill.
- Added tests for bundled skill sync, create-agent skill injection, hire
approval payloads, and LLM route guidance.

## Verification

- `pnpm exec vitest run server/src/__tests__/agent-skills-routes.test.ts
server/src/__tests__/codex-local-skill-injection.test.ts
server/src/__tests__/codex-local-skill-sync.test.ts
server/src/__tests__/llms-routes.test.ts
server/src/__tests__/paperclip-skill-utils.test.ts --config
server/vitest.config.ts` passed: 5 files, 23 tests.
- `git diff --check public-gh/master..pap-2228-create-agent-governance
-- . ':(exclude)ui/storybook-static'` passed.
- Confirmed this PR does not include `pnpm-lock.yaml`.

## Risks

- Low-to-medium risk: this primarily changes skills/docs and tests, but
it affects future hiring guidance and approval expectations.
- Reviewers should check whether the new Security Engineer template is
too broad for default company installs.
- No database migrations.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex coding agent based on GPT-5, with shell, git, Paperclip
API, and GitHub CLI tool use in the local Paperclip workspace.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

Note: screenshot checklist item is not applicable; this PR changes
skills, docs, and server tests.

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-24 14:15:28 -05:00
Dotta
77a72e28c2 [codex] Polish issue composer and long document display (#4420)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Issue comments and documents are the main working surface where
operators and agents collaborate
> - File drops, markdown editing, and long issue descriptions need to
feel predictable because they sit directly in the task execution loop
> - The composer had edge cases around drag targets, attachment
feedback, image drops, and long markdown content crowding the page
> - This pull request polishes the issue composer, hardens markdown
editor regressions, and adds a fold curtain for long issue
descriptions/documents
> - The benefit is a calmer issue detail surface that handles uploads
and long work products without hiding state or breaking layout

## What Changed

- Scoped issue-composer drag/drop behavior so the composer owns file
drops without turning the whole thread into a competing drop target.
- Added clearer attachment upload feedback for non-image files and
image-drop stability coverage.
- Hardened markdown editor and markdown body handling around HTML-like
tag regressions.
- Added `FoldCurtain` and wired it into issue descriptions and issue documents so long markdown previews can expand/collapse (sketched below).
- Added Storybook coverage for the fold curtain state.
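A simplified sketch of the fold-curtain measurement idea, assuming a React component (the shipped `FoldCurtain` will differ in props, styling, and thresholds):

```
// Illustrative sketch, not the shipped component: measure rendered height and
// collapse anything taller than a cap, with an expand/collapse toggle.
import { useEffect, useRef, useState, type ReactNode } from "react";

export function FoldCurtainSketch({
  children,
  maxHeight = 480,
}: {
  children: ReactNode;
  maxHeight?: number;
}) {
  const contentRef = useRef<HTMLDivElement>(null);
  const [expanded, setExpanded] = useState(false);
  const [overflowing, setOverflowing] = useState(false);

  useEffect(() => {
    const el = contentRef.current;
    if (!el) return;
    // Re-measure when the content itself resizes (images loading, live edits).
    const observer = new ResizeObserver((entries) => {
      for (const entry of entries) {
        setOverflowing(entry.contentRect.height > maxHeight);
      }
    });
    observer.observe(el);
    return () => observer.disconnect();
  }, [maxHeight]);

  return (
    <div>
      {/* The outer wrapper clips; the inner wrapper keeps its natural height for measurement. */}
      <div style={expanded ? undefined : { maxHeight, overflow: "hidden" }}>
        <div ref={contentRef}>{children}</div>
      </div>
      {overflowing && (
        <button type="button" onClick={() => setExpanded((value) => !value)}>
          {expanded ? "Show less" : "Show more"}
        </button>
      )}
    </div>
  );
}
```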

## Verification

- `pnpm exec vitest run ui/src/components/IssueChatThread.test.tsx
ui/src/components/MarkdownEditor.test.tsx
ui/src/components/MarkdownBody.test.tsx --config ui/vitest.config.ts`
passed: 3 files, 75 tests.
- `git diff --check public-gh/master..pap-2228-editor-composer-polish --
. ':(exclude)ui/storybook-static'` passed.
- Confirmed this PR does not include `pnpm-lock.yaml`.

## Risks

- Low-to-medium risk: this changes user-facing composer/drop behavior
and long markdown display.
- The fold curtain uses DOM measurement and `ResizeObserver`; reviewers
should check browser behavior for very long descriptions and documents.
- No database migrations.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex coding agent based on GPT-5, with shell, git, Paperclip
API, and GitHub CLI tool use in the local Paperclip workspace.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

Note: screenshots were not newly captured during branch splitting; the
UI states are covered by component tests and a Storybook story.

---------

Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-24 14:12:41 -05:00
Dotta
8f1cd0474f [codex] Improve transient recovery and Codex model refresh (#4383)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Adapter execution and retry classification decide whether agent work
pauses, retries, or recovers automatically
> - Transient provider failures need to be classified precisely so
Paperclip does not convert retryable upstream conditions into false hard
failures
> - At the same time, operators need an up-to-date model list for
Codex-backed agents and prompts should nudge agents toward targeted
verification instead of repo-wide sweeps
> - This pull request tightens transient recovery classification for
Claude and Codex, updates the agent prompt guidance, and adds Codex
model refresh support end-to-end
> - The benefit is better automatic retry behavior plus fresher
operator-facing model configuration

## What Changed

- added Codex usage-limit retry-window parsing and Claude extra-usage transient classification (sketched after this list)
- normalized the heartbeat transient-recovery contract across adapter
executions and heartbeat scheduling
- documented that deferred comment wakes only reopen completed issues
for human/comment-reopen interactions, while system follow-ups leave
closed work closed
- updated adapter-utils prompt guidance to prefer targeted verification
- added Codex model refresh support in the server route, registry,
shared types, and agent config form
- added adapter/server tests covering the new parsing, retry scheduling,
and model-refresh behavior
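As one example of what retry-window parsing can look like, here is a hypothetical sketch; the message format and function name are assumptions, not the actual provider output or adapter code:

```
// Hypothetical sketch of usage-limit retry-window parsing.
interface TransientClassification {
  transient: boolean;
  retryAfterMs?: number;
}

function classifyUsageLimitMessage(message: string): TransientClassification {
  // e.g. "usage limit reached, try again in 12 minutes" (assumed format)
  const match = /try again in (\d+)\s*(second|minute|hour)s?/i.exec(message);
  if (!match) return { transient: false };
  const amount = Number(match[1]);
  const unitMs = { second: 1_000, minute: 60_000, hour: 3_600_000 }[
    match[2].toLowerCase() as "second" | "minute" | "hour"
  ];
  return { transient: true, retryAfterMs: amount * unitMs };
}
```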

## Verification

- `pnpm exec vitest run --project @paperclipai/adapter-utils
packages/adapter-utils/src/server-utils.test.ts`
- `pnpm exec vitest run --project @paperclipai/adapter-claude-local
packages/adapters/claude-local/src/server/parse.test.ts`
- `pnpm exec vitest run --project @paperclipai/adapter-codex-local
packages/adapters/codex-local/src/server/parse.test.ts`
- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/adapter-model-refresh-routes.test.ts
server/src/__tests__/adapter-models.test.ts
server/src/__tests__/claude-local-execute.test.ts
server/src/__tests__/codex-local-execute.test.ts
server/src/__tests__/heartbeat-process-recovery.test.ts
server/src/__tests__/heartbeat-retry-scheduling.test.ts`

## Risks

- Moderate behavior risk: retry classification affects whether runs
auto-recover or block, so mistakes here could either suppress needed
retries or over-retry real failures
- Low workflow risk: deferred comment wake reopening is intentionally
scoped to human/comment-reopen interactions so system follow-ups do not
revive completed issues unexpectedly

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex GPT-5-based coding agent with tool use and code execution
in the Codex CLI environment

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [ ] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-24 09:40:40 -05:00
Dotta
4fdbbeced3 [codex] Refine markdown issue reference rendering (#4382)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Task references are a core part of how operators understand issue
relationships across the UI
> - Those references appear both in markdown bodies and in sidebar
relationship panels
> - The rendering had drifted between surfaces, and inline markdown
pills were reading awkwardly inside prose and lists
> - This pull request unifies the underlying issue-reference treatment,
routes issue descriptions through `MarkdownBody`, and switches inline
markdown references to a cleaner text-link presentation
> - The benefit is more consistent issue-reference UX with better
readability in markdown-heavy views

## What Changed

- unified sidebar and markdown issue-reference rendering around the
shared issue-reference components
- routed resting issue descriptions through `MarkdownBody` so
description previews inherit the richer issue-reference treatment
- replaced inline markdown pill chrome with a cleaner inline reference
presentation for prose contexts
- added and updated UI tests for `MarkdownBody` and `InlineEditor`

## Verification

- `pnpm exec vitest run --project @paperclipai/ui
ui/src/components/MarkdownBody.test.tsx
ui/src/components/InlineEditor.test.tsx`

## Risks

- Moderate UI risk: issue-reference rendering now differs intentionally
between inline markdown and relationship sidebars, so regressions would
show up as styling or hover-preview mismatches

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex GPT-5-based coding agent with tool use and code execution
in the Codex CLI environment

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [ ] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-24 09:39:21 -05:00
Dotta
7ad225a198 [codex] Improve issue thread review flow (#4381)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Issue detail is where operators coordinate review, approvals, and
follow-up work with active runs
> - That thread UI needs to surface blockers, descendants, review
handoffs, and reply ergonomics clearly enough for humans to guide agent
work
> - Several small gaps in the issue-thread flow were making review and
navigation clunkier than necessary
> - This pull request improves the reply composer, descendant/blocker
presentation, interaction folding, and review-request handoff plumbing
together as one cohesive issue-thread workflow slice
> - The benefit is a cleaner operator review loop without changing the
broader task model

## What Changed

- restored and refined the floating reply composer behavior in the issue
thread
- folded expired confirmation interactions and improved post-submit
thread scrolling behavior
- surfaced descendant issue context and inline blocker/paused-assignee
notices on the issue detail view
- tightened large-board first paint behavior in `IssuesList`
- added loose review-request handoffs through the issue
execution-policy/update path and covered them with tests

## Verification

- `pnpm vitest run ui/src/pages/IssueDetail.test.tsx`
- `pnpm vitest run server/src/__tests__/issues-service.test.ts
server/src/__tests__/issue-execution-policy.test.ts`
- `pnpm exec vitest run --project @paperclipai/ui
ui/src/components/IssueChatThread.test.tsx
ui/src/components/IssueProperties.test.tsx
ui/src/components/IssuesList.test.tsx ui/src/lib/issue-tree.test.ts
ui/src/api/issues.test.ts`
- `pnpm exec vitest run --project @paperclipai/adapter-utils
packages/adapter-utils/src/server-utils.test.ts`
- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/issue-comment-reopen-routes.test.ts -t "coerces
executor handoff patches into workflow-controlled review wakes|wakes the
return assignee with execution_changes_requested"`
- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/issue-execution-policy.test.ts
server/src/__tests__/issues-service.test.ts`

## Visual Evidence

- UI layout changes are covered by the focused issue-thread component
and issue-detail tests listed above. Browser screenshots were not
attachable from this automated greploop environment, so reviewers should
use the running preview for final visual confirmation.

## Risks

- Moderate UI-flow risk: these changes touch the issue detail experience
in multiple spots, so regressions would most likely show up as
thread-layout quirks or incorrect review-handoff behavior

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex GPT-5-based coding agent with tool use and code execution
in the Codex CLI environment

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots or documented the visual verification path
- [ ] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-24 08:02:45 -05:00
Dotta
35a9dc37b0 [codex] Speed up company skill detail loading (#4380)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Company skills are part of the control plane for distributing
reusable capabilities
> - Board flows that inspect company skill detail should stay responsive
because they are operator-facing control-plane reads
> - The existing detail path was doing broader work than needed for the
specific detail screen
> - This pull request narrows that company-skill detail loading path and
adds a regression test around it
> - The benefit is faster company skill detail reads without changing
the external API contract

## What Changed

- tightened the company-skill detail loading path in
`server/src/services/company-skills.ts`
- added `server/src/__tests__/company-skills-detail.test.ts` to verify
the detail route only pulls the required data

## Verification

- `pnpm exec vitest run --project @paperclipai/server
server/src/__tests__/company-skills-detail.test.ts`

## Risks

- Low risk: this only changes the company-skill detail query path, but
any missed assumption in the detail consumer would surface when loading
that screen

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex GPT-5-based coding agent with tool use and code execution
in the Codex CLI environment

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [ ] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-24 07:37:13 -05:00
Devin Foley
e4995bbb1c Add SSH environment support (#4358)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - The environments subsystem already models execution environments,
but before this branch there was no end-to-end SSH-backed runtime path
for agents to actually run work against a remote box
> - That meant agents could be configured around environment concepts
without a reliable way to execute adapter sessions remotely, sync
workspace state, and preserve run context across supported adapters
> - We also need environment selection to participate in normal
Paperclip control-plane behavior: agent defaults, project/issue
selection, route validation, and environment probing
> - Because this capability is still experimental, the UI surface should
be easy to hide and easy to remove later without undoing the underlying
implementation
> - This pull request adds SSH environment execution support across the
runtime, adapters, routes, schema, and tests, then puts the visible
environment-management UI behind an experimental flag
> - The benefit is that we can validate real SSH-backed agent execution
now while keeping the user-facing controls safely gated until the
feature is ready to come out of experimentation

## What Changed

- Added SSH-backed execution target support in the shared adapter
runtime, including remote workspace preparation, skill/runtime asset
sync, remote session handling, and workspace restore behavior after
runs.
- Added SSH execution coverage for supported local adapters, plus remote
execution tests across Claude, Codex, Cursor, Gemini, OpenCode, and Pi.
- Added environment selection and environment-management backend support
needed for SSH execution, including route/service work, validation,
probing, and agent default environment persistence.
- Added CLI support for SSH environment lab verification and updated
related docs/tests.
- Added the `enableEnvironments` experimental flag and gated the
environment UI behind it on company settings, agent configuration, and
project configuration surfaces.

## Verification

- `pnpm exec vitest run
packages/adapters/claude-local/src/server/execute.remote.test.ts
packages/adapters/cursor-local/src/server/execute.remote.test.ts
packages/adapters/gemini-local/src/server/execute.remote.test.ts
packages/adapters/opencode-local/src/server/execute.remote.test.ts
packages/adapters/pi-local/src/server/execute.remote.test.ts`
- `pnpm exec vitest run server/src/__tests__/environment-routes.test.ts`
- `pnpm exec vitest run
server/src/__tests__/instance-settings-routes.test.ts`
- `pnpm exec vitest run ui/src/lib/new-agent-hire-payload.test.ts
ui/src/lib/new-agent-runtime-config.test.ts`
- `pnpm -r typecheck`
- `pnpm build`
- Manual verification on a branch-local dev server:
  - enabled the experimental flag
  - created an SSH environment
  - created a Linux Claude agent using that environment
  - confirmed a run executed on the Linux box and synced workspace changes back

## Risks

- Medium: this touches runtime execution flow across multiple adapters,
so regressions would likely show up in remote session setup, workspace
sync, or environment selection precedence.
- The UI flag reduces exposure, but the underlying runtime and route
changes are still substantial and rely on migration correctness.
- The change set is broad across adapters, control-plane services,
migrations, and UI gating, so review should pay close attention to
environment-selection precedence and remote workspace lifecycle
behavior.

## Model Used

- OpenAI Codex via Paperclip's local Codex adapter, GPT-5-class coding
model with tool use and code execution in the local repo workspace. The
local adapter does not surface a more specific public model version
string in this branch workflow.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-23 19:15:22 -07:00
Dotta
f98c348e2b [codex] Add issue subtree pause, cancel, and restore controls (#4332)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - This branch extends the issue control-plane so board operators can
pause, cancel, and later restore whole issue subtrees while keeping
descendant execution and wake behavior coherent.
> - That required new hold state in the database, shared contracts,
server routes/services, and issue detail UI controls so subtree actions
are durable and auditable instead of ad hoc.
> - While this branch was in flight, `master` advanced with new
environment lifecycle work, including a new `0065_environments`
migration.
> - Before opening the PR, this branch had to be rebased onto
`paperclipai/paperclip:master` without losing the existing
subtree-control work or leaving conflicting migration numbering behind.
> - This pull request rebases the subtree pause/cancel/restore feature
cleanly onto current `master`, renumbers the hold migration to
`0066_issue_tree_holds`, and preserves the full branch diff in a single
PR.
> - The benefit is that reviewers get one clean, mergeable PR for the
subtree-control feature instead of stale branch history with migration
conflicts.

## What Changed

- Added durable issue subtree hold data structures, shared
API/types/validators, server routes/services, and UI flows for subtree
pause, cancel, and restore operations.
- Added server and UI coverage for subtree previewing, hold
creation/release, dependency-aware scheduling under holds, and issue
detail subtree controls.
- Rebased the branch onto current `paperclipai/paperclip:master` and
renumbered the branch migration from `0065_issue_tree_holds` to
`0066_issue_tree_holds` so it no longer conflicts with upstream
`0065_environments`.
- Added a small follow-up commit that makes restore requests return `200
OK` explicitly while keeping pause/cancel hold creation at `201
Created`, and updated the route test to match that contract.

## Verification

- `pnpm --filter @paperclipai/db typecheck`
- `pnpm --filter @paperclipai/shared typecheck`
- `pnpm --filter @paperclipai/server typecheck`
- `pnpm --filter @paperclipai/ui typecheck`
- `cd server && pnpm exec vitest run
src/__tests__/issue-tree-control-routes.test.ts
src/__tests__/issue-tree-control-service.test.ts
src/__tests__/issue-tree-control-service-unit.test.ts
src/__tests__/heartbeat-dependency-scheduling.test.ts`
- `cd ui && pnpm exec vitest run src/components/IssueChatThread.test.tsx
src/pages/IssueDetail.test.tsx`

## Risks

- This is a broad cross-layer change touching DB/schema, shared
contracts, server orchestration, and UI; regressions are most likely
around subtree status restoration or wake suppression/resume edge cases.
- The migration was renumbered during PR prep to avoid the new upstream
`0065_environments` conflict. Reviewers should confirm the final
`0066_issue_tree_holds` ordering is the only hold-related migration that
lands.
- The issue-tree restore endpoint now responds with `200` instead of
relying on implicit behavior, which is semantically better for a restore
operation but still changes an API detail that clients or tests could
have assumed.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex coding agent in the Paperclip Codex runtime (GPT-5-class
tool-using coding model; exact deployment ID/context window is not
exposed inside this session).

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [ ] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-23 14:51:46 -05:00
Russell Dempsey
854fa81757 fix(pi-local): prepend installed skill bin/ dirs to child PATH (#4331)
## Thinking Path

> - Paperclip orchestrates AI agents; each agent runs under an adapter
that spawns a model CLI as a child process.
> - The pi-local adapter (`packages/adapters/pi-local`) spawns `pi` and
inherits the child's shell environment — including `PATH`, which
determines what the child's bash tool can execute by name.
> - Paperclip skills ship executable helpers under `<skill>/bin/` (e.g.
`paperclip-get-issue`) and Reviewer/QA-style `AGENTS.md` files invoke
them by name via the agent's bash tool.
> - Pi-local builds its runtime env with `ensurePathInEnv({
...process.env, ...env })` only — it never adds the installed skills'
`bin/` dirs to PATH. The pi CLI's `--skill` arg loads each skill's
SKILL.md but does not augment PATH.
> - Consequence: every bash invocation of a skill helper fails with
`exit 127: command not found`. The agent then spends its heartbeat
guessing (re-reading SKILL.md, trying `find`, inventing command paths)
and either times out or gives up.
> - This PR prepends each injected skill's `bin/` directory to the child
PATH immediately before runtimeEnv is constructed.
> - The benefit: pi_local agents whose AGENTS.md uses any `paperclip-*`
skill helper can actually run those helpers.

## What Changed

- `packages/adapters/pi-local/src/server/execute.ts`: compute
`skillBinDirs` from the already-resolved `piSkillEntries`, dedupe
against the existing PATH, prepend them to whichever of `PATH` / `Path`
the merged env uses, then build `runtimeEnv`. No new helpers, no
adapter-utils changes.
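A rough sketch of that PATH handling follows; the helper name and env typing are illustrative rather than the actual `execute.ts` code:

```
import path from "node:path";

// Sketch: prepend each installed skill's bin/ directory to the child PATH,
// deduping against what is already there and respecting Windows "Path" casing.
function withSkillBinDirs(
  env: Record<string, string | undefined>,
  skillDirs: string[],
): Record<string, string | undefined> {
  const pathKey = "PATH" in env ? "PATH" : "Path" in env ? "Path" : "PATH";
  const existing = (env[pathKey] ?? "").split(path.delimiter).filter(Boolean);
  const binDirs = skillDirs
    .map((dir) => path.join(dir, "bin"))
    .filter((binDir) => !existing.includes(binDir));
  if (binDirs.length === 0) return env; // zero skills installed -> no PATH change
  return { ...env, [pathKey]: [...binDirs, ...existing].join(path.delimiter) };
}
```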

## Verification

Manual repro before the fix:

1. Create a pi_local agent wired to a paperclip skill (e.g.
paperclip-control).
2. Wake the agent on an in_review issue with an AGENTS.md that starts
with `paperclip-get-issue "$PAPERCLIP_TASK_ID"`.
3. Session file: `{ "role": "toolResult", "isError": true, "content": [{
"text": "/bin/bash: paperclip-get-issue: command not found\n\nCommand
exited with code 127" }] }`.

After the fix: same wake; `paperclip-get-issue` resolves and returns the
issue JSON; agent proceeds.

Local commands:

```
pnpm --filter @paperclipai/adapter-pi-local typecheck   # clean
pnpm --filter @paperclipai/adapter-pi-local build       # clean
pnpm --filter @paperclipai/server exec vitest run \
  src/__tests__/pi-local-execute.test.ts \
  src/__tests__/pi-local-adapter-environment.test.ts \
  src/__tests__/pi-local-skill-sync.test.ts
# 5/5 passing
```

No new tests: the existing `pi-local-skill-sync.test.ts` covers skill
symlink injection (upstream of the PATH step), and
`pi-local-execute.test.ts` covers the spawn path; this change only
augments env on the same spawn path.

## Risks

Low. Pure PATH augmentation on the child env. Edge cases:

- Zero skills installed → no PATH change (guarded by
`skillBinDirs.length > 0`).
- Duplicate bin dirs already on PATH → deduped; no pollution on re-runs.
- Windows `Path` casing → falls back correctly when merged env uses
`Path` instead of `PATH`.
- Skill dir without `bin/` subdir → joined path simply won't resolve;
harmless.

No behavioral change for pi_local agents that don't use skill-provided
commands.

## Model Used

- Claude, `claude-opus-4-7` (1M context), extended thinking enabled,
tool use enabled. Walked pi-local/cursor-local/claude-local and
adapter-utils to isolate the gap, wrote the inlined fix, and ran
typecheck/build/test locally.

## Checklist

- [x] Thinking path from project context to this change
- [x] Model used specified
- [x] Checked ROADMAP.md — no overlap
- [x] Tests run locally, passing
- [x] Tests added — new case in
`server/src/__tests__/pi-local-execute.test.ts`; verified it fails when
the fix is reverted
- [ ] UI screenshots — N/A (backend adapter change)
- [x] Docs updated — N/A (internal adapter, no user-facing docs)
- [x] Risks documented
- [x] Will address reviewer comments before merge
2026-04-23 10:15:10 -05:00
Dotta
fe14de504c [codex] Document README architecture systems (#4250)
## Thinking Path

> - Paperclip is the control plane for autonomous AI companies.
> - The public README is the first place many operators and contributors
learn what the product already includes.
> - The existing README explained the product promise but did not give a
compact, concrete tour of the major systems behind it.
> - This made Paperclip easier to underestimate as a wrapper around
agents instead of a full control plane with identity, work, execution,
governance, budgets, plugins, and portability.
> - This pull request adds an under-the-hood README section that names
those systems and shows how adapters connect into the server.
> - Greptile caught consistency gaps between the diagram and prose, so
the final version aligns the system labels and adapter examples across
both surfaces.
> - The benefit is a clearer first-read model of Paperclip's
architecture and shipped capabilities without changing runtime behavior.

## What Changed

- Added a `What's Under the Hood` section to `README.md`.
- Added an ASCII architecture diagram for the Paperclip server and
external agent adapters.
- Added a systems table covering identity, org charts, tasks, heartbeat
execution, workspaces, governance, budgets, routines, plugins,
secrets/storage, activity/events, and company portability.
- Addressed Greptile feedback by aligning diagram labels with table rows
and grouping adapter examples consistently.

## Verification

- `git diff --check public-gh/master...HEAD`
- Attempted `pnpm exec prettier --check README.md`, but this checkout
does not expose a `prettier` binary through `pnpm exec`.
- Greptile review rerun passed after addressing its two comments; review
threads are resolved.
- Remote PR checks passed on the latest head: `policy`, `verify`, `e2e`,
`security/snyk (cryppadotta)`, and `Greptile Review`.
- Not run locally: Vitest/build suites, because this is a README-only
documentation change and the PR's remote `verify` job ran typecheck,
tests, build, and release canary dry run.

## Risks

- Low runtime risk: documentation-only change.
- The main risk is wording drift if the README overstates or
underspecifies evolving product capabilities; the section was aligned
against the current product/spec docs and roadmap.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex / GPT-5 coding agent in a Paperclip heartbeat, with shell
and GitHub CLI tool use. Exact runtime model identifier and context
window were not exposed by the adapter.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-23 09:48:19 -05:00
Michel Tomas
3d15798c22 fix(adapters/routes): apply resolveExternalAdapterRegistration on hot-install (#4324)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - The external adapter plugin system (#2218) lets adapters ship as npm
modules loaded via `server/src/adapters/plugin-loader.ts`; since #4296
merged, each `ServerAdapterModule` can declare `sessionManagement`
(`supportsSessionResume`, `nativeContextManagement`,
`defaultSessionCompaction`) and have it preserved through the init-time
load via the new `resolveExternalAdapterRegistration` helper
> - #4296 fixed the init-time IIFE path at
`server/src/adapters/registry.ts:363-369` but noted that the hot-install
path at `server/src/routes/adapters.ts:174
registerWithSessionManagement` still unconditionally overwrites
module-provided `sessionManagement` during `POST /api/adapters/install`
> - Practical impact today: an external adapter installed via the API
needs a Paperclip restart before its declared `sessionManagement` takes
effect — the IIFE runs on next boot and preserves it, but until then the
hot-install overwrite wins
> - This PR closes that parity gap: `registerWithSessionManagement`
delegates to the same `resolveExternalAdapterRegistration` helper
introduced by #4296, unifying both load paths behind one resolver
> - The benefit is consistent behaviour between cold-start and
hot-install: no "install then restart" ritual; declared
`sessionManagement` on an external module is honoured the moment `POST
/api/adapters/install` returns 201

## What Changed

- `server/src/routes/adapters.ts`: `registerWithSessionManagement`
delegates to the exported `resolveExternalAdapterRegistration` helper
(added in #4296). Honours module-provided `sessionManagement` first,
falls back to host registry lookup, defaults `undefined`. Updated the
section comment to document the parity-with-IIFE intent.
- `server/src/routes/adapters.ts`: dropped the now-unused
`getAdapterSessionManagement` import.
- `server/src/adapters/registry.ts`: updated the JSDoc on
`resolveExternalAdapterRegistration` — previously said "Exported for
unit tests; runtime callers use the IIFE below", now says the helper is
used by both the init-time IIFE and the hot-install path in
`routes/adapters.ts`. Addresses Greptile C1.
- `server/src/__tests__/adapter-routes.test.ts`: new integration test —
installs a mocked external adapter module carrying a non-trivial
`sessionManagement` declaration and asserts
`findServerAdapter(type).sessionManagement` preserves it after `POST
/api/adapters/install` returns 201.
- `server/src/__tests__/adapter-routes.test.ts`: added
`findServerAdapter` to the shared test-scope variable set so the new
test can inspect post-install registry state.

## Verification

Targeted test runs from a clean tree on
`fix/external-session-management-hot-install` (rebased onto current
`upstream/master` now that #4296 has merged):

- `pnpm test server/src/__tests__/adapter-routes.test.ts` — 6 passed
(new test + 5 pre-existing)
- `pnpm test server/src/__tests__/adapter-registry.test.ts` — 15 passed
(ensures the IIFE path from #4296 continues to behave correctly)
- `pnpm -w run test` full workspace suite — 1923 passed / 1 skipped
(unrelated skip)

End-to-end smoke on file:
[`@superbiche/cline-paperclip-adapter@0.1.1`](https://www.npmjs.com/package/@superbiche/cline-paperclip-adapter)
and
[`@superbiche/qwen-paperclip-adapter@0.1.1`](https://www.npmjs.com/package/@superbiche/qwen-paperclip-adapter),
both public on npm, both declare `sessionManagement`. With this PR in
place, the "restart after install" step disappears — the declared
compaction policy is active immediately after the install response.

## Risks

- Low risk. The change replaces an inline mutation with a call to a
helper that already has dedicated unit coverage (#4296 added three tests
for `resolveExternalAdapterRegistration` covering module-provided,
registry-fallback, and undefined paths). Behaviour is a strict superset
of the prior path — externals that did not declare `sessionManagement`
continue to get the hardcoded-registry lookup; externals that did
declare it now have those values preserved instead of overwritten.
- No migration impact. The stored plugin records
(`~/.paperclip/adapter-plugins.json`) are unchanged. Existing
hot-installed adapters behave correctly before and after.
- No behavioural change for builtin adapters; they hit
`registerServerAdapter` directly and never flow through
`registerWithSessionManagement`.

## Model Used

- Provider and model: Claude (Anthropic) via Claude Code
- Model ID: `claude-opus-4-7` (1M context)
- Reasoning mode: standard (no extended thinking on this PR)
- Tool use: yes — file edits, subprocess invocations for
builds/tests/git via the Claude Code harness

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots (N/A — server-only change)
- [x] I have updated relevant documentation to reflect my changes (the
JSDoc on `resolveExternalAdapterRegistration` and the section comment
above `registerWithSessionManagement` now document the parity-with-IIFE
intent)
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-23 09:45:24 -05:00
Michel Tomas
24232078fd fix(adapters/registry): honor module-provided sessionManagement for external adapters (#4296)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Adapters are how paperclip hands work off to specific agent
runtimes; since #2218, external adapter packages can ship as npm modules
loaded via `server/src/adapters/plugin-loader.ts`
> - Each `ServerAdapterModule` can declare `sessionManagement`
(`supportsSessionResume`, `nativeContextManagement`,
`defaultSessionCompaction`) — but the init-time load at
`registry.ts:363-369` hard-overwrote it with a hardcoded-registry lookup
that has no entries for external types, so modules could not actually
set these fields
> - The hot-install path at `routes/adapters.ts:179` →
`registerServerAdapter` preserves module-provided `sessionManagement`,
so externals worked after `POST /api/adapters/install` — *until the next
server restart*, when the init-time IIFE wiped it back to `undefined`
> - #2218 explicitly deferred this: *"Adapter execution model, heartbeat
protocol, and session management are untouched."* This PR is the natural
follow-up for session management on the plugin-loader path
> - This PR aligns init-time registration with the hot-install path:
honor module-provided `sessionManagement` first, fall back to the
hardcoded registry when absent (so externals overriding a built-in type
still inherit its policy). Extracted as a testable helper with three
unit tests
> - The benefit is external adapters can declare session-resume
capabilities consistently across cold-start and hot-install, without
requiring upstream additions to the hardcoded registry for each new
plugin

## What Changed

- `server/src/adapters/registry.ts`: extracted the merge logic into a new exported helper `resolveExternalAdapterRegistration()` — honors module-provided `sessionManagement` first, falls back to `getAdapterSessionManagement(type)`, else `undefined`. The init-time IIFE calls the helper instead of inlining an overwrite (see the sketch after this list).
- `server/src/adapters/registry.ts`: updated the section comment (lines
331–340) to reflect the new semantics and cross-reference the
hot-install path's behavior.
- `server/src/__tests__/adapter-registry.test.ts`: new
`describe("resolveExternalAdapterRegistration")` block with three tests
— module-provided value preserved, registry fallback when module omits,
`undefined` when neither provides.
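The resolution order described in the first bullet can be sketched roughly as follows (types simplified and the function renamed to make clear this is not the actual registry code):

```
// Simplified sketch of the resolution order for external adapter registration.
interface SessionManagementSketch {
  supportsSessionResume?: boolean;
  nativeContextManagement?: boolean;
  defaultSessionCompaction?: unknown; // exact shape simplified away here
}

function resolveSessionManagementSketch(
  moduleProvided: SessionManagementSketch | undefined,
  registryLookup: (type: string) => SessionManagementSketch | undefined,
  type: string,
): SessionManagementSketch | undefined {
  // 1. Honor whatever the external module declared.
  if (moduleProvided) return moduleProvided;
  // 2. Fall back to the hardcoded registry, which covers externals that
  //    override a built-in type; 3. otherwise stay undefined.
  return registryLookup(type);
}
```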

## Verification

Targeted test run from a clean tree on
`fix/external-session-management`:

```
cd server && pnpm exec vitest run src/__tests__/adapter-registry.test.ts
# 1 test file, 15 tests passed, 0 failed (12 pre-existing + 3 new)
```

Full server suite via the independent review pass noted under Model
Used: **1,156 tests passed, 0 failed**.

Typecheck note: `pnpm --filter @paperclipai/server exec tsc --noEmit`
surfaces two errors in `src/services/plugin-host-services.ts:1510`
(`createInteraction` + implicit-any). Verified by `git stash` + re-run
on clean `upstream/master` — they reproduce without this PR's changes.
Pre-existing, out of scope.

## Risks

- **Low behavioral risk.** Strictly additive: externals that do NOT
provide `sessionManagement` continue to receive exactly the same value
as before (registry lookup → `undefined` for pure externals, or the
builtin's entry for externals overriding a built-in type). Only a new
capability is unlocked; no existing behavior changes for existing
adapters.
- **No breaking change.** `ServerAdapterModule.sessionManagement` was
already optional at the type level. Externals that never set it see no
difference on either path.
- **Consistency verified.** Init-time IIFE now matches the post-`POST
/api/adapters/install` behavior — a server restart no longer regresses
the field.

## Note

This is part of a broader effort to close the parity gap between
external and built-in adapters. Once externals reach 1:1 capability
coverage with internals, new-adapter contributions can increasingly be
steered toward the external-plugin path instead of the core product — a
trajectory CONTRIBUTING.md already encourages ("*If the idea fits as an
extension, prefer building it with the plugin system*").

## Model Used

- **Provider**: Anthropic
- **Model**: Claude Opus 4.7
- **Exact model ID**: `claude-opus-4-7` (1M-context variant:
`claude-opus-4-7[1m]`)
- **Context window**: 1,000,000 tokens
- **Harness**: Claude Code (Anthropic's official CLI), orchestrated by
@superbiche as human-in-the-loop. Full file-editing, shell, and `gh`
tool use, plus parallel research subagents for fact-finding against
paperclip internals (plugin-loader contract, sessionCodec reachability,
UI parser surface, Cline CLI JSON schema).
- **Independent local review**: Gemini 3.1 Pro (Google) performed a
separate verification pass on the committed branch — confirmed the
approach & necessity, ran the full workspace build, and executed the
complete server test suite (1,156 tests, all passing). Not used for
authoring; second-opinion pass only.
- **Authoring split**: @superbiche identified the gap (while mapping the
external-adapter surface for a downstream adapter build) and shaped the
plan — categorising the surface into `works / acceptable /
needs-upstream` buckets, directing the surgical-diff approach on a fresh
branch from `upstream/master`, and calling the framing ("alignment bug
between init-time IIFE and hot-install path" rather than "missing
capability"). Opus 4.7 executed the fact-finding, the diff, the tests,
and drafted this PR body — all under direct review.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work (convention-aligned bug fix on the external-adapter
plugin path introduced by #2218)
- [x] I have run tests locally and they pass (15/15 in the touched file;
1,156/1,156 full server suite via the independent Gemini 3.1 Pro review)
- [x] I have added tests where applicable (3 new for the extracted
helper)
- [x] If this change affects the UI, I have included before/after
screenshots (no UI touched)
- [x] I have updated relevant documentation to reflect my changes
(in-file comment reflects new semantics)
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-23 07:39:43 -05:00
Devin Foley
13551b2bac Add local environment lifecycle (#4297)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - Every heartbeat run needs a concrete place where the agent's adapter
process executes.
> - Today that execution location is implicitly the local machine, which
makes it hard to track, audit, and manage as a first-class runtime
concern.
> - The first step is to represent the current local execution path
explicitly without changing how users experience agent runs.
> - This pull request adds core Environment and Environment Lease
records, then routes existing local heartbeat execution through a
default `Local` environment.
> - The benefit is that local runs remain behavior-preserving while the
system now has durable environment identity, lease lifecycle tracking,
and activity records for execution placement.

## What Changed

- Added `environments` and `environment_leases` database tables, schema
exports, and migration `0065_environments.sql`.
- Added shared environment constants, TypeScript types, and validators
for environment drivers, statuses, lease policies, lease statuses, and
cleanup states.
- Added `environmentService` for listing, reading, creating, updating,
and ensuring company-scoped environments.
- Added environment lease lifecycle operations for acquire, metadata
update, single-lease release, and run-wide release.
- Updated heartbeat execution to lazily ensure a company-scoped default
`Local` environment before adapter execution.
- Updated heartbeat execution to acquire an ephemeral local environment
lease, write `paperclipEnvironment` into the run context snapshot, and
release active leases during run finalization.
- Added activity log events for environment lease acquisition and
release.
- Added tests for environment service behavior and the local heartbeat
environment lifecycle.
- Added a CI-follow-up heartbeat guard so deferred issue comment wakes
are promoted before automatic missing-comment retries, with focused
batching test coverage.
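
Sketched in TypeScript, the run-scoped lifecycle could look roughly like this. The service method names and record shapes below are assumptions for illustration; only `environmentService`, the default `Local` environment, lease acquire/release, and the `paperclipEnvironment` snapshot field come from this PR.

```ts
// Minimal sketch of the heartbeat-side environment lifecycle described above.
interface EnvironmentLease {
  id: string;
  environmentId: string;
  runId: string;
}

interface EnvironmentServiceLike {
  ensureDefaultLocal(companyId: string): Promise<{ id: string; name: string }>;
  acquireLease(environmentId: string, runId: string): Promise<EnvironmentLease>;
  releaseLeasesForRun(runId: string): Promise<void>;
}

async function runHeartbeatWithLocalEnvironment(
  environmentService: EnvironmentServiceLike,
  companyId: string,
  runId: string,
  executeAdapter: (contextSnapshot: Record<string, unknown>) => Promise<void>,
): Promise<void> {
  // Lazily ensure the company-scoped default "Local" environment exists.
  const environment = await environmentService.ensureDefaultLocal(companyId);
  // Acquire an ephemeral lease for this run and expose it in the snapshot.
  const lease = await environmentService.acquireLease(environment.id, runId);
  try {
    await executeAdapter({
      paperclipEnvironment: {
        id: environment.id,
        name: environment.name,
        leaseId: lease.id,
      },
    });
  } finally {
    // Release any still-active leases during run finalization, even on failure.
    await environmentService.releaseLeasesForRun(runId);
  }
}
```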

## Verification

Local verification run for this branch:

- `pnpm -r typecheck`
- `pnpm build`
- `pnpm exec vitest run server/src/__tests__/environment-service.test.ts
server/src/__tests__/heartbeat-local-environment.test.ts --pool=forks`

Additional reviewer/CI verification:

- Confirm `pnpm-lock.yaml` is not modified.
- Confirm `pnpm test:run` passes in CI.
- Confirm `PAPERCLIP_E2E_SKIP_LLM=true pnpm run test:e2e` passes in CI.
- Confirm a local heartbeat run creates one active `Local` environment
when needed, records one lease for the run, releases the lease when the
run finishes, and includes `paperclipEnvironment` in the run context
snapshot.

Screenshots: not applicable; this PR has no UI changes.

## Risks

- Migration risk: introduces two new tables and a new migration journal
entry. Review should verify company scoping, indexes, foreign keys, and
enum defaults are correct.
- Lifecycle risk: heartbeat finalization now releases environment leases
in addition to existing runtime cleanup. A finalization bug could leave
stale active leases or mark a failed run's lease incorrectly.
- Behavior-preservation risk: local adapter execution should remain
unchanged apart from environment bookkeeping. Review should pay
attention to the heartbeat path around context snapshot updates and
final cleanup ordering.
- Activity volume risk: each heartbeat run now logs lease acquisition
and release events, increasing activity log volume by two records per
run.

## Model Used

OpenAI GPT-5.4 via Codex CLI. Capabilities used: repository inspection,
TypeScript implementation review, local test/build execution, and
PR-description drafting.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots (N/A: no UI changes)
- [x] I have updated relevant documentation to reflect my changes (N/A:
no user-facing docs or commands changed)
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-22 20:07:41 -07:00
Dotta
b69b563aa8 [codex] Fix stale issue execution run locks (#4258)
## Thinking Path

> - Paperclip is a control plane for AI-agent companies, so issue
checkout and execution ownership are core safety contracts.
> - The affected subsystem is the issue service and route layer that
gates agent writes by `checkoutRunId` and `executionRunId`.
> - PAP-1982 exposed a stale-lock failure mode where a terminal
heartbeat run could leave `executionRunId` pinned after checkout
ownership had moved or been cleared.
> - That stale execution lock could reject legitimate
PATCH/comment/release requests from the rightful assignee after a
harness restart.
> - This pull request centralizes terminal-run cleanup, applies it
before ownership-gated writes, and adds a board-only recovery endpoint
for operator intervention.
> - The benefit is that crashed or terminal runs no longer strand issues
behind stale execution locks, while live execution locks still block
conflicting writes.

## What Changed

- Added `issueService.clearExecutionRunIfTerminal()` to atomically lock
the issue/run rows and clear terminal or missing execution-run locks.
- Reused stale execution-lock cleanup from checkout,
`assertCheckoutOwner()`, and `release()`.
- Allowed the same assigned agent/current run to adopt an unowned
`in_progress` checkout after stale execution-lock cleanup.
- Updated release to clear `executionRunId`, `executionAgentNameKey`,
and `executionLockedAt`.
- Added board-only `POST /api/issues/:id/admin/force-release` with
company access checks, optional `clearAssignee=true`, and
`issue.admin_force_release` audit logging.
- Added embedded Postgres service tests and route integration tests for
stale-lock recovery, release behavior, and admin force-release
authorization/audit behavior.
- Documented the new force-release API in `doc/SPEC-implementation.md`.
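
The cleanup rule can be sketched as below. Row locking, the transaction boundary, and the exact status/type names are omitted or assumed here; only `clearExecutionRunIfTerminal()` and the `executionRunId` / `executionAgentNameKey` / `executionLockedAt` fields come from this PR.

```ts
// Minimal sketch of terminal execution-lock cleanup; the real helper also
// locks the issue/run rows atomically, which this sketch does not show.
type RunStatus = "queued" | "running" | "completed" | "failed" | "cancelled";

interface IssueRow {
  id: string;
  executionRunId: string | null;
  executionAgentNameKey: string | null;
  executionLockedAt: Date | null;
}

interface RunRow {
  id: string;
  status: RunStatus;
}

const TERMINAL_STATUSES: ReadonlySet<RunStatus> = new Set(["completed", "failed", "cancelled"]);

async function clearExecutionRunIfTerminal(
  issue: IssueRow,
  loadRun: (runId: string) => Promise<RunRow | null>,
  saveIssue: (issue: IssueRow) => Promise<void>,
): Promise<boolean> {
  if (!issue.executionRunId) return false;
  const run = await loadRun(issue.executionRunId);
  // Keep live locks: only a missing run or a terminal run is cleared.
  if (run && !TERMINAL_STATUSES.has(run.status)) return false;
  issue.executionRunId = null;
  issue.executionAgentNameKey = null;
  issue.executionLockedAt = null;
  await saveIssue(issue);
  return true;
}
```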

## Verification

- `pnpm vitest run server/src/__tests__/issues-service.test.ts
server/src/__tests__/issue-stale-execution-lock-routes.test.ts` passed.
- `pnpm vitest run
server/src/__tests__/issue-stale-execution-lock-routes.test.ts
server/src/__tests__/approval-routes-idempotency.test.ts
server/src/__tests__/issue-comment-reopen-routes.test.ts
server/src/__tests__/issue-telemetry-routes.test.ts` passed.
- `pnpm -r typecheck` passed.
- `pnpm build` passed.
- `git diff --check` passed.
- `pnpm lint` could not run because this repo has no `lint` command.
- Full `pnpm test:run` completed with 4 failures in existing route
suites: `approval-routes-idempotency.test.ts` (2),
`issue-comment-reopen-routes.test.ts` (1), and
`issue-telemetry-routes.test.ts` (1). Those same files pass when run in
isolation and when run together with the new stale-lock route test, so
this appears to be a whole-suite ordering/mock-isolation issue outside
this patch's code path.

## Risks

- Medium: this changes ownership-gated write behavior. The new adoption
path is limited to the current run, the current assignee, `in_progress`
issues, and rows with no checkout owner after terminal-lock cleanup.
- Low: the admin force-release endpoint is board-only and
company-scoped, but misuse can intentionally clear a live lock. It
writes an audit event with prior lock IDs.
- No schema or migration changes.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5 coding agent (`gpt-5`), agentic coding with
terminal/tool use and local test execution.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-22 10:43:38 -05:00
Dotta
a957394420 [codex] Add structured issue-thread interactions (#4244)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - Operators supervise that work through issues, comments, approvals,
and the board UI.
> - Some agent proposals need structured board/user decisions, not
hidden markdown conventions or heavyweight governed approvals.
> - Issue-thread interactions already provide a natural thread-native
surface for proposed tasks and questions.
> - This pull request extends that surface with request confirmations,
richer interaction cards, and agent/plugin/MCP helpers.
> - The benefit is that plan approvals and yes/no decisions become
explicit, auditable, and resumable without losing the single-issue
workflow.

## What Changed

- Added persisted issue-thread interactions for suggested tasks,
structured questions, and request confirmations.
- Added board UI cards for interaction review, selection, question
answers, and accept/reject confirmation flows.
- Added MCP and plugin SDK helpers for creating interaction cards from
agents/plugins.
- Updated agent wake instructions, onboarding assets, Paperclip skill
docs, and public docs to prefer structured confirmations for
issue-scoped decisions.
- Rebased the branch onto `public-gh/master` and renumbered branch
migrations to `0063` and `0064`; the idempotency migration uses `ADD
COLUMN IF NOT EXISTS` for old branch users.
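
For orientation, the interaction record and an agent-side confirmation helper might look roughly like the sketch below. The union shape, field names, and helper signature are assumptions; the real definitions live in `packages/shared` and the MCP/plugin SDK helpers added by this PR.

```ts
// Hypothetical shape of an issue-thread interaction, sketched from this PR's
// description of suggested tasks, structured questions, and confirmations.
type IssueThreadInteraction =
  | { kind: "suggested_task"; issueId: string; title: string; description?: string }
  | { kind: "structured_question"; issueId: string; prompt: string; options: string[] }
  | { kind: "request_confirmation"; issueId: string; summary: string };

// Illustrative agent/plugin helper: raise a confirmation card on the thread
// and hand back its id so the run can resume on the board's decision later.
async function requestConfirmation(
  createInteraction: (input: IssueThreadInteraction) => Promise<{ id: string }>,
  issueId: string,
  summary: string,
): Promise<string> {
  const created = await createInteraction({
    kind: "request_confirmation",
    issueId,
    summary,
  });
  return created.id;
}
```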

## Verification

- `git diff --check public-gh/master..HEAD`
- `pnpm exec vitest run packages/adapter-utils/src/server-utils.test.ts
packages/mcp-server/src/tools.test.ts
packages/shared/src/issue-thread-interactions.test.ts
ui/src/lib/issue-thread-interactions.test.ts
ui/src/lib/issue-chat-messages.test.ts
ui/src/components/IssueThreadInteractionCard.test.tsx
ui/src/components/IssueChatThread.test.tsx
server/src/__tests__/issue-thread-interaction-routes.test.ts
server/src/__tests__/issue-thread-interactions-service.test.ts
server/src/services/issue-thread-interactions.test.ts` -> 9 files / 79
tests passed
- `pnpm -r typecheck` -> passed, including `packages/db` migration
numbering check

## Risks

- Medium: this adds a new issue-thread interaction model across
db/shared/server/ui/plugin surfaces.
- Migration risk is reduced by placing this branch after current master
migrations (`0063`, `0064`) and making the idempotency column add
idempotent for users who applied the old branch numbering.
- UI interaction behavior is covered by component tests, but this PR
does not include browser screenshots.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5-class coding agent runtime. Exact model ID and
context window are not exposed in this Paperclip run; tool use and local
shell/code execution were enabled.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-21 20:15:11 -05:00
Dotta
014aa0eb2d [codex] Clear stale queued comment targets (#4234)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - Operators interact with agent work through issue threads and queued
comments.
> - When the selected comment target becomes stale, the composer can
keep pointing at an invalid target after thread state changes.
> - That makes follow-up comments easier to misroute and harder to
reason about.
> - This pull request clears stale queued comment targets and covers the
behavior with tests.
> - The benefit is more predictable issue-thread commenting during live
agent work.

## What Changed

- Clears queued comment targets when they no longer match the current
issue thread state.
- Adjusts issue detail comment-target handling to avoid stale target
reuse.
- Adds regression tests for optimistic issue comment target behavior.
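
A minimal sketch of the staleness check, assuming a queued target keyed by comment id (the actual state handling lives in the issue detail and optimistic-comment code in `ui/src`):

```ts
// Drop a queued comment target once the thread no longer contains it, so
// follow-up comments cannot be routed at an invalid anchor.
interface QueuedCommentTarget {
  commentId: string;
}

function clearStaleCommentTarget(
  target: QueuedCommentTarget | null,
  currentThreadCommentIds: ReadonlySet<string>,
): QueuedCommentTarget | null {
  if (!target) return null;
  return currentThreadCommentIds.has(target.commentId) ? target : null;
}
```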

## Verification

- `pnpm exec vitest run ui/src/lib/optimistic-issue-comments.test.ts`

## Risks

- Low risk; scoped to comment-target state handling in the issue UI.
- No migrations.

> Checked `ROADMAP.md`; this is a focused UI reliability fix, not a new
roadmap-level feature.

## Model Used

- OpenAI Codex, GPT-5-based coding agent, tool-enabled repository
editing and local test execution.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-21 16:50:26 -05:00
Dotta
bcbbb41a4b [codex] Harden heartbeat runtime cleanup (#4233)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - The heartbeat runtime is the control-plane path that turns issue
assignments into agent runs and recovers after process exits.
> - Several edge cases could leave high-volume reads unbounded, stale
runtime services visible, blocked dependency wakes too eager, or
terminal adapter processes still around after output finished.
> - These problems make operator views noisy and make long-running agent
work less predictable.
> - This pull request tightens the runtime/read paths and adds focused
regression coverage.
> - The benefit is safer heartbeat execution and cleaner runtime state
without changing the public task model.

## What Changed

- Bounded high-volume issue/log reads in runtime code paths.
- Hardened heartbeat handling for blocked dependency wakes and terminal
run cleanup.
- Added adapter process cleanup coverage for terminal output cases.
- Added workspace runtime control tests for stale command matching and
stopped services.

## Verification

- `pnpm exec vitest run packages/adapter-utils/src/server-utils.test.ts
server/src/__tests__/heartbeat-dependency-scheduling.test.ts
ui/src/components/WorkspaceRuntimeControls.test.tsx`

## Risks

- Medium risk because heartbeat cleanup and runtime filtering affect
active agent execution paths.
- No migrations.

> Checked `ROADMAP.md`; this is runtime hardening and bug-fix work, not
a new roadmap-level feature.

## Model Used

- OpenAI Codex, GPT-5-based coding agent, tool-enabled repository
editing and local test execution.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-21 16:48:47 -05:00
Dotta
73ef40e7be [codex] Sandbox dynamic adapter UI parsers (#4225)
## Thinking Path

> - Paperclip is a control plane for AI-agent companies.
> - External adapters can provide UI parser code that the board loads
dynamically for run transcript rendering.
> - Running adapter-provided parser code directly in the board page
gives that parser access to same-origin browser state.
> - This PR narrows that surface by evaluating dynamically loaded
external adapter UI parser code in a dedicated browser Web Worker with a
constrained postMessage protocol.
> - The worker here is a frontend isolation boundary for adapter UI
parser JavaScript; it is not Paperclip's server plugin-worker system and
it is not a server-side job runner.

## What Changed

- Runs dynamically loaded external adapter UI parsers inside a dedicated
Web Worker instead of importing/evaluating them directly in the board
page.
- Adds a narrow postMessage protocol for parser initialization and line
parsing.
- Caches completed async parse results and notifies the adapter registry
so transcript recomputation can synchronously drain the final parsed
line.
- Disables common worker network, persistence, child worker, Blob/object
URL, and WebRTC escape APIs inside the parser worker bootstrap.
- Handles worker error messages after initialization and drains pending
callbacks on worker termination or mid-session worker error.
- Adds focused regression coverage for the parser worker lockdown and
unused protocol removal.

## Verification

- `pnpm exec vitest run --config ui/vitest.config.ts
ui/src/adapters/sandboxed-parser-worker.test.ts`
- `pnpm exec tsc --noEmit --target es2021 --moduleResolution bundler
--module esnext --jsx react-jsx --lib dom,es2021 --skipLibCheck
ui/src/adapters/dynamic-loader.ts
ui/src/adapters/sandboxed-parser-worker.ts
ui/src/adapters/sandboxed-parser-worker.test.ts`
- `pnpm --filter @paperclipai/ui typecheck` was attempted; it reached
existing unrelated failures in HeartbeatRun test/storybook fixtures and
missing Storybook type resolution, with no adapter-module errors
surfaced.
- PR #4225 checks on current head `34c9da00`: `policy`, `e2e`, `verify`,
`security/snyk`, and `Greptile Review` are all `SUCCESS`.
- Greptile Review on current head `34c9da00` reached 5/5.

## Risks

- Medium risk: parser execution is now asynchronous through a worker
while the existing parser interface is synchronous, so transcript
updates should be watched closely when external adapters are in use.
- Some adapter parser bundles may rely on direct ESM `export` syntax or
browser APIs that are no longer available inside the worker lockdown.
- The worker lockdown is a hardening layer around external parser code,
not a complete browser security sandbox for arbitrary untrusted
applications.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5-based coding agent runtime, shell/git tool use
enabled. Exact hosted model build and context window are not exposed in
this Paperclip heartbeat environment.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-21 13:42:44 -05:00
542 changed files with 116858 additions and 5146 deletions

View File

@@ -177,8 +177,12 @@ real name or email). To find GitHub usernames:
**Never expose contributor email addresses.** Use `@username` only.
Exclude bot accounts (e.g. `lockfile-bot`, `dependabot`) from the list. List contributors
in alphabetical order by GitHub username (case-insensitive).
Exclude bot accounts (e.g. `lockfile-bot`, `dependabot`) from the list.
Exclude Paperclip founders from the list (e.g. `cryppadotta`, `forgottendev`, `devinfoley`, `sockmonster`, `scotttong`)
List contributors in alphabetical order by GitHub username (case-insensitive).
If there are no contributors left after exclusions, then just skip this section and don't mention it.
## Step 6 — Review Before Release

View File

@@ -41,44 +41,7 @@ jobs:
node-version: 24
- name: Validate Dockerfile deps stage
run: |
missing=0
# Extract only the deps stage from the Dockerfile
deps_stage="$(awk '/^FROM .* AS deps$/{found=1; next} found && /^FROM /{exit} found{print}' Dockerfile)"
if [ -z "$deps_stage" ]; then
echo "::error::Could not extract deps stage from Dockerfile (expected 'FROM ... AS deps')"
exit 1
fi
# Derive workspace search roots from pnpm-workspace.yaml (exclude dev-only packages)
search_roots="$(grep '^ *- ' pnpm-workspace.yaml | sed 's/^ *- //' | sed 's/\*$//' | grep -v 'examples' | grep -v 'create-paperclip-plugin' | tr '\n' ' ')"
if [ -z "$search_roots" ]; then
echo "::error::Could not derive workspace roots from pnpm-workspace.yaml"
exit 1
fi
# Check all workspace package.json files are copied in the deps stage
for pkg in $(find $search_roots -maxdepth 2 -name package.json -not -path '*/examples/*' -not -path '*/create-paperclip-plugin/*' -not -path '*/node_modules/*' 2>/dev/null | sort -u); do
dir="$(dirname "$pkg")"
if ! echo "$deps_stage" | grep -q "^COPY ${dir}/package.json"; then
echo "::error::Dockerfile deps stage missing: COPY ${pkg} ${dir}/"
missing=1
fi
done
# Check patches directory is copied if it exists
if [ -d patches ] && ! echo "$deps_stage" | grep -q '^COPY patches/'; then
echo "::error::Dockerfile deps stage missing: COPY patches/ patches/"
missing=1
fi
if [ "$missing" -eq 1 ]; then
echo "Dockerfile deps stage is out of sync. Update it to include the missing files."
exit 1
fi
run: node ./scripts/check-docker-deps-stage.mjs
- name: Validate dependency resolution when manifests change
run: |

.gitignore vendored
View File

@@ -3,6 +3,7 @@ node_modules/
**/node_modules
**/node_modules/
dist/
ui/storybook-static/
.env
*.tsbuildinfo
drizzle/meta/

View File

@@ -123,7 +123,9 @@ pnpm test:release-smoke
Run the browser suites only when your change touches them or when you are explicitly verifying CI/release flows.
Run this full check before claiming done:
For normal issue work, run the smallest relevant verification first. Do not default to repo-wide typecheck/build/test on every heartbeat when a narrower check is enough to prove the change.
Run this full check before claiming repo work done in a PR-ready hand-off, or when the change scope is broad enough that targeted checks are not sufficient:
```sh
pnpm -r typecheck

View File

@@ -1,3 +1,4 @@
# syntax=docker/dockerfile:1.20
FROM node:lts-trixie-slim AS base
ARG USER_UID=1000
ARG USER_GID=1000
@@ -29,6 +30,8 @@ COPY packages/adapters/openclaw-gateway/package.json packages/adapters/openclaw-
COPY packages/adapters/opencode-local/package.json packages/adapters/opencode-local/
COPY packages/adapters/pi-local/package.json packages/adapters/pi-local/
COPY packages/plugins/sdk/package.json packages/plugins/sdk/
COPY --parents packages/plugins/sandbox-providers/./*/package.json packages/plugins/sandbox-providers/
COPY packages/plugins/paperclip-plugin-fake-sandbox/package.json packages/plugins/paperclip-plugin-fake-sandbox/
COPY patches/ patches/
RUN pnpm install --frozen-lockfile

README.md
View File

@@ -6,7 +6,8 @@
<a href="#quickstart"><strong>Quickstart</strong></a> &middot;
<a href="https://paperclip.ing/docs"><strong>Docs</strong></a> &middot;
<a href="https://github.com/paperclipai/paperclip"><strong>GitHub</strong></a> &middot;
<a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a>
<a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a> &middot;
<a href="https://x.com/papercliping"><strong>Twitter</strong></a>
</p>
<p align="center">
@@ -156,6 +157,115 @@ Paperclip handles the hard orchestration details correctly.
<br/>
## What's Under the Hood
Paperclip is a full control plane, not a wrapper. Before you build any of this yourself, know that it already exists:
```
┌──────────────────────────────────────────────────────────────┐
│ PAPERCLIP SERVER │
│ │
│ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │Identity & │ │ Work & │ │ Heartbeat │ │Governance │ │
│ │ Access │ │ Tasks │ │ Execution │ │& Approvals│ │
│ └───────────┘ └───────────┘ └───────────┘ └───────────┘ │
│ │
│ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │ Org Chart │ │Workspaces │ │ Plugins │ │ Budget │ │
│ │ & Agents │ │ & Runtime │ │ │ │ & Costs │ │
│ └───────────┘ └───────────┘ └───────────┘ └───────────┘ │
│ │
│ ┌───────────┐ ┌───────────┐ ┌───────────┐ ┌───────────┐ │
│ │ Routines │ │ Secrets & │ │ Activity │ │ Company │ │
│ │& Schedules│ │ Storage │ │ & Events │ │Portability│ │
│ └───────────┘ └───────────┘ └───────────┘ └───────────┘ │
└──────────────────────────────────────────────────────────────┘
▲ ▲ ▲ ▲
┌─────┴─────┐ ┌─────┴─────┐ ┌─────┴─────┐ ┌─────┴─────┐
│ Claude │ │ Codex │ │ CLI │ │ HTTP/web │
│ Code │ │ │ │ agents │ │ bots │
└───────────┘ └───────────┘ └───────────┘ └───────────┘
```
### The Systems
<table>
<tr>
<td width="50%">
**Identity & Access** — Two deployment modes (trusted local or authenticated), board users, agent API keys, short-lived run JWTs, company memberships, invite flows, and OpenClaw onboarding. Every mutating request is traced to an actor.
</td>
<td width="50%">
**Org Chart & Agents** — Agents have roles, titles, reporting lines, permissions, and budgets. Adapter examples match the diagram: Claude Code, Codex, CLI agents such as Cursor/Gemini/bash, HTTP/webhook bots such as OpenClaw, and external adapter plugins. If it can receive a heartbeat, it's hired.
</td>
</tr>
<tr>
<td>
**Work & Task System** — Issues carry company/project/goal/parent links, atomic checkout with execution locks, first-class blocker dependencies, comments, documents, attachments, work products, labels, and inbox state. No double-work, no lost context.
</td>
<td>
**Heartbeat Execution** — DB-backed wakeup queue with coalescing, budget checks, workspace resolution, secret injection, skill loading, and adapter invocation. Runs produce structured logs, cost events, session state, and audit trails. Recovery handles orphaned runs automatically.
</td>
</tr>
<tr>
<td>
**Workspaces & Runtime** — Project workspaces, isolated execution workspaces (git worktrees, operator branches), and runtime services (dev servers, preview URLs). Agents work in the right directory with the right context every time.
</td>
<td>
**Governance & Approvals** — Board approval workflows, execution policies with review/approval stages, decision tracking, budget hard-stops, agent pause/resume/terminate, and full audit logging. You're the board — nothing ships without your sign-off.
</td>
</tr>
<tr>
<td>
**Budget & Cost Control** — Token and cost tracking by company, agent, project, goal, issue, provider, and model. Scoped budget policies with warning thresholds and hard stops. Overspend pauses agents and cancels queued work automatically.
</td>
<td>
**Routines & Schedules** — Recurring tasks with cron, webhook, and API triggers. Concurrency and catch-up policies. Each routine execution creates a tracked issue and wakes the assigned agent — no manual kick-offs needed.
</td>
</tr>
<tr>
<td>
**Plugins** — Instance-wide plugin system with out-of-process workers, capability-gated host services, job scheduling, tool exposure, and UI contributions. Extend Paperclip without forking it.
</td>
<td>
**Secrets & Storage** — Instance and company secrets, encrypted local storage, provider-backed object storage, attachments, and work products. Sensitive values stay out of prompts unless a scoped run explicitly needs them.
</td>
</tr>
<tr>
<td>
**Activity & Events** — Mutating actions, heartbeat state changes, cost events, approvals, comments, and work products are recorded as durable activity so operators can audit what happened and why.
</td>
<td>
**Company Portability** — Export and import entire organizations — agents, skills, projects, routines, and issues — with secret scrubbing and collision handling. One deployment, many companies, complete data isolation.
</td>
</tr>
</table>
<br/>
## What Paperclip is not
| | |
@@ -300,6 +410,7 @@ We welcome contributions. See the [contributing guide](CONTRIBUTING.md) for deta
## Community
- [Discord](https://discord.gg/m4HZY7xNG3) — Join the community
- [Twitter / X](https://x.com/papercliping) — Follow updates and announcements
- [GitHub Issues](https://github.com/paperclipai/paperclip/issues) — bugs and feature requests
- [GitHub Discussions](https://github.com/paperclipai/paperclip/discussions) — ideas and RFC

View File

@@ -6,7 +6,8 @@
<a href="#quickstart"><strong>Quickstart</strong></a> &middot;
<a href="https://paperclip.ing/docs"><strong>Docs</strong></a> &middot;
<a href="https://github.com/paperclipai/paperclip"><strong>GitHub</strong></a> &middot;
<a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a>
<a href="https://discord.gg/m4HZY7xNG3"><strong>Discord</strong></a> &middot;
<a href="https://x.com/papercliping"><strong>Twitter</strong></a>
</p>
<p align="center">
@@ -278,6 +279,7 @@ We welcome contributions. See the [contributing guide](https://github.com/paperc
## Community
- [Discord](https://discord.gg/m4HZY7xNG3) — Join the community
- [Twitter / X](https://x.com/papercliping) — Follow updates and announcements
- [GitHub Issues](https://github.com/paperclipai/paperclip/issues) — bugs and feature requests
- [GitHub Discussions](https://github.com/paperclipai/paperclip/discussions) — ideas and RFC

View File

@@ -14,6 +14,7 @@ function makeCompany(overrides: Partial<Company>): Company {
issueCounter: 1,
budgetMonthlyCents: 0,
spentMonthlyCents: 0,
attachmentMaxBytes: 10 * 1024 * 1024,
requireBoardApprovalForNewAgents: false,
feedbackDataSharingEnabled: false,
feedbackDataSharingConsentAt: null,

View File

@@ -1,5 +1,5 @@
import { execFile, spawn } from "node:child_process";
import { mkdirSync, mkdtempSync, readFileSync, readdirSync, rmSync, writeFileSync } from "node:fs";
import { existsSync, mkdirSync, mkdtempSync, readFileSync, readdirSync, rmSync, writeFileSync } from "node:fs";
import net from "node:net";
import os from "node:os";
import path from "node:path";
@@ -104,20 +104,50 @@ function writeTestConfig(configPath: string, tempRoot: string, port: number, con
writeFileSync(configPath, `${JSON.stringify(config, null, 2)}\n`, "utf8");
}
function createServerEnv(configPath: string, port: number, connectionString: string) {
interface TestPaperclipEnv {
configPath: string;
paperclipHome: string;
instanceId: string;
shellHome?: string;
}
function createBasePaperclipEnv(options: TestPaperclipEnv) {
const env = { ...process.env };
for (const key of Object.keys(env)) {
if (key.startsWith("PAPERCLIP_")) {
delete env[key];
}
}
env.PAPERCLIP_CONFIG = options.configPath;
env.PAPERCLIP_HOME = options.paperclipHome;
env.PAPERCLIP_INSTANCE_ID = options.instanceId;
env.PAPERCLIP_CONTEXT = path.join(options.paperclipHome, "context.json");
env.PAPERCLIP_AUTH_STORE = path.join(options.paperclipHome, "auth.json");
if (options.shellHome) {
env.HOME = options.shellHome;
}
return env;
}
function createServerEnv(
configPath: string,
port: number,
connectionString: string,
options: Omit<TestPaperclipEnv, "configPath">,
) {
const env = createBasePaperclipEnv({
configPath,
...options,
});
delete env.DATABASE_URL;
delete env.PORT;
delete env.HOST;
delete env.SERVE_UI;
delete env.HEARTBEAT_SCHEDULER_ENABLED;
env.PAPERCLIP_CONFIG = configPath;
env.DATABASE_URL = connectionString;
env.HOST = "127.0.0.1";
env.PORT = String(port);
@@ -130,13 +160,8 @@ function createServerEnv(configPath: string, port: number, connectionString: str
return env;
}
function createCliEnv() {
const env = { ...process.env };
for (const key of Object.keys(env)) {
if (key.startsWith("PAPERCLIP_")) {
delete env[key];
}
}
function createCliEnv(options: TestPaperclipEnv) {
const env = createBasePaperclipEnv(options);
delete env.DATABASE_URL;
delete env.PORT;
delete env.HOST;
@@ -183,14 +208,25 @@ async function api<T>(baseUrl: string, pathname: string, init?: RequestInit): Pr
return text ? JSON.parse(text) as T : (null as T);
}
async function runCliJson<T>(args: string[], opts: { apiBase: string; configPath: string }) {
async function runCliJson<T>(
args: string[],
opts: TestPaperclipEnv & { apiBase?: string; includeConfigArg?: boolean },
) {
const repoRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), "../../..");
const cliArgs = ["--silent", "paperclipai", ...args];
if (opts.apiBase) {
cliArgs.push("--api-base", opts.apiBase);
}
if (opts.includeConfigArg !== false) {
cliArgs.push("--config", opts.configPath);
}
cliArgs.push("--json");
const result = await execFileAsync(
"pnpm",
["--silent", "paperclipai", ...args, "--api-base", opts.apiBase, "--config", opts.configPath, "--json"],
cliArgs,
{
cwd: repoRoot,
env: createCliEnv(),
env: createCliEnv(opts),
maxBuffer: 10 * 1024 * 1024,
},
);
@@ -235,6 +271,9 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
let configPath = "";
let exportDir = "";
let apiBase = "";
let paperclipHome = "";
let cliShellHome = "";
let paperclipInstanceId = "";
let serverProcess: ServerProcess | null = null;
let tempDb: Awaited<ReturnType<typeof startEmbeddedPostgresTestDatabase>> | null = null;
@@ -242,6 +281,11 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
tempRoot = mkdtempSync(path.join(os.tmpdir(), "paperclip-company-cli-e2e-"));
configPath = path.join(tempRoot, "config", "config.json");
exportDir = path.join(tempRoot, "exported-company");
paperclipHome = path.join(tempRoot, "paperclip-home");
cliShellHome = path.join(tempRoot, "shell-home");
paperclipInstanceId = "company-cli-e2e";
mkdirSync(paperclipHome, { recursive: true });
mkdirSync(cliShellHome, { recursive: true });
tempDb = await startEmbeddedPostgresTestDatabase("paperclip-company-cli-db-");
@@ -256,7 +300,11 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
["paperclipai", "run", "--config", configPath],
{
cwd: repoRoot,
env: createServerEnv(configPath, port, tempDb.connectionString),
env: createServerEnv(configPath, port, tempDb.connectionString, {
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
}),
stdio: ["ignore", "pipe", "pipe"],
},
);
@@ -282,6 +330,31 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
it("exports a company package and imports it into new and existing companies", async () => {
expect(serverProcess).not.toBeNull();
const cliContext = await runCliJson<{
contextPath: string;
profileName: string;
profile: { apiBase?: string };
}>(
["context", "set", "--profile", "isolation-check", "--api-base", "https://example.test"],
{
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
includeConfigArg: false,
},
);
const expectedContextPath = path.join(paperclipHome, "context.json");
const leakedContextPath = path.join(cliShellHome, ".paperclip", "context.json");
expect(cliContext.contextPath).toBe(expectedContextPath);
expect(cliContext.profileName).toBe("isolation-check");
expect(cliContext.profile.apiBase).toBe("https://example.test");
expect(existsSync(expectedContextPath)).toBe(true);
expect(existsSync(leakedContextPath)).toBe(false);
rmSync(expectedContextPath, { force: true });
expect(existsSync(expectedContextPath)).toBe(false);
const sourceCompany = await api<{ id: string; name: string; issuePrefix: string }>(apiBase, "/api/companies", {
method: "POST",
headers: { "content-type": "application/json" },
@@ -303,8 +376,11 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
name: "Export Engineer",
role: "engineer",
adapterType: "claude_local",
adapterConfig: {
promptTemplate: "You verify company portability.",
adapterConfig: {},
instructionsBundle: {
files: {
"AGENTS.md": "You verify company portability.",
},
},
}),
},
@@ -355,7 +431,13 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"--include",
"company,agents,projects,issues",
],
{ apiBase, configPath },
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
);
expect(exportResult.ok).toBe(true);
@@ -379,7 +461,13 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"company,agents,projects,issues",
"--yes",
],
{ apiBase, configPath },
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
);
expect(importedNew.company.action).toBe("created");
@@ -398,10 +486,11 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
apiBase,
`/api/companies/${importedNew.company.id}/issues`,
);
const importedMatchingIssues = importedIssues.filter((issue) => issue.title === sourceIssue.title);
expect(importedAgents.map((agent) => agent.name)).toContain(sourceAgent.name);
expect(importedProjects.map((project) => project.name)).toContain(sourceProject.name);
expect(importedIssues.map((issue) => issue.title)).toContain(sourceIssue.title);
expect(importedMatchingIssues).toHaveLength(1);
const previewExisting = await runCliJson<{
errors: string[];
@@ -426,7 +515,13 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"rename",
"--dry-run",
],
{ apiBase, configPath },
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
);
expect(previewExisting.errors).toEqual([]);
@@ -453,7 +548,13 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"rename",
"--yes",
],
{ apiBase, configPath },
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
);
expect(importedExisting.company.action).toBe("unchanged");
@@ -471,11 +572,13 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
apiBase,
`/api/companies/${importedNew.company.id}/issues`,
);
const twiceImportedMatchingIssues = twiceImportedIssues.filter((issue) => issue.title === sourceIssue.title);
expect(twiceImportedAgents).toHaveLength(2);
expect(new Set(twiceImportedAgents.map((agent) => agent.name)).size).toBe(2);
expect(twiceImportedProjects).toHaveLength(2);
expect(twiceImportedIssues).toHaveLength(2);
expect(twiceImportedMatchingIssues).toHaveLength(2);
expect(new Set(twiceImportedMatchingIssues.map((issue) => issue.identifier)).size).toBe(2);
const zipPath = path.join(tempRoot, "exported-company.zip");
const portableFiles: Record<string, string> = {};
@@ -498,10 +601,16 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
"company,agents,projects,issues",
"--yes",
],
{ apiBase, configPath },
{
apiBase,
configPath,
paperclipHome,
instanceId: paperclipInstanceId,
shellHome: cliShellHome,
},
);
expect(importedFromZip.company.action).toBe("created");
expect(importedFromZip.agents.some((agent) => agent.action === "created")).toBe(true);
}, 60_000);
}, 90_000);
});

View File

@@ -160,6 +160,7 @@ describe("renderCompanyImportPreview", () => {
path: "COMPANY.md",
name: "Source Co",
description: null,
attachmentMaxBytes: null,
brandColor: null,
logoPath: null,
requireBoardApprovalForNewAgents: false,
@@ -375,6 +376,7 @@ describe("import selection catalog", () => {
path: "COMPANY.md",
name: "Source Co",
description: null,
attachmentMaxBytes: null,
brandColor: null,
logoPath: "images/company-logo.png",
requireBoardApprovalForNewAgents: false,

View File

@@ -0,0 +1,24 @@
import path from "node:path";
import { describe, expect, it } from "vitest";
import { collectEnvLabDoctorStatus, resolveEnvLabSshStatePath } from "../commands/env-lab.js";
describe("env-lab command", () => {
it("resolves the default SSH fixture state path under the instance root", () => {
const statePath = resolveEnvLabSshStatePath("fixture-test");
expect(statePath).toContain(
path.join("instances", "fixture-test", "env-lab", "ssh-fixture", "state.json"),
);
});
it("reports doctor status for an instance without a running fixture", async () => {
const status = await collectEnvLabDoctorStatus({ instance: "fixture-test-missing" });
expect(status.statePath).toContain(
path.join("instances", "fixture-test-missing", "env-lab", "ssh-fixture", "state.json"),
);
expect(typeof status.ssh.supported).toBe("boolean");
expect(status.ssh.running).toBe(false);
expect(status.ssh.environment).toBeNull();
});
});

View File

@@ -190,8 +190,9 @@ describe("worktree helpers", () => {
).toEqual(["worktree", "add", "-b", "my-worktree", "/tmp/my-worktree", "origin/main"]);
});
it("rewrites loopback auth URLs to the new port only", () => {
it("rewrites auth URLs only when they already include a port", () => {
expect(rewriteLocalUrlPort("http://127.0.0.1:3100", 3110)).toBe("http://127.0.0.1:3110/");
expect(rewriteLocalUrlPort("http://my-host.ts.net:3100", 3110)).toBe("http://my-host.ts.net:3110/");
expect(rewriteLocalUrlPort("https://paperclip.example", 3110)).toBe("https://paperclip.example");
});
@@ -599,7 +600,7 @@ describe("worktree helpers", () => {
fs.rmSync(tempRoot, { recursive: true, force: true });
}
},
20000,
30000,
);
it("avoids ports already claimed by sibling worktree instance configs", async () => {
@@ -881,7 +882,7 @@ describe("worktree helpers", () => {
}
fs.rmSync(tempRoot, { recursive: true, force: true });
}
}, 20_000);
}, 30_000);
it("restores the current worktree config and instance data if reseed fails", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-reseed-rollback-"));
@@ -1038,7 +1039,7 @@ describe("worktree helpers", () => {
execFileSync("git", ["worktree", "remove", "--force", worktreePath], { cwd: repoRoot, stdio: "ignore" });
fs.rmSync(tempRoot, { recursive: true, force: true });
}
});
}, 15_000);
it("creates and initializes a worktree from the top-level worktree:make command", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-make-"));

View File

@@ -61,6 +61,7 @@ interface IssueUpdateOptions extends BaseClientOptions {
interface IssueCommentOptions extends BaseClientOptions {
body: string;
reopen?: boolean;
resume?: boolean;
}
interface IssueCheckoutOptions extends BaseClientOptions {
@@ -241,12 +242,14 @@ export function registerIssueCommands(program: Command): void {
.argument("<issueId>", "Issue ID")
.requiredOption("--body <text>", "Comment body")
.option("--reopen", "Reopen if issue is done/cancelled")
.option("--resume", "Request explicit follow-up and wake the assignee when resumable")
.action(async (issueId: string, opts: IssueCommentOptions) => {
try {
const ctx = resolveCommandContext(opts);
const payload = addIssueCommentSchema.parse({
body: opts.body,
reopen: opts.reopen,
resume: opts.resume,
});
const comment = await ctx.api.post<IssueComment>(`/api/issues/${issueId}/comments`, payload);
printOutput(comment, { json: ctx.json });

cli/src/commands/env-lab.ts Normal file
View File

@@ -0,0 +1,174 @@
import path from "node:path";
import type { Command } from "commander";
import * as p from "@clack/prompts";
import pc from "picocolors";
import {
buildSshEnvLabFixtureConfig,
getSshEnvLabSupport,
readSshEnvLabFixtureStatus,
startSshEnvLabFixture,
stopSshEnvLabFixture,
} from "@paperclipai/adapter-utils/ssh";
import { resolvePaperclipInstanceId, resolvePaperclipInstanceRoot } from "../config/home.js";
export function resolveEnvLabSshStatePath(instanceId?: string): string {
const resolvedInstanceId = resolvePaperclipInstanceId(instanceId);
return path.resolve(
resolvePaperclipInstanceRoot(resolvedInstanceId),
"env-lab",
"ssh-fixture",
"state.json",
);
}
function printJson(value: unknown) {
process.stdout.write(`${JSON.stringify(value, null, 2)}\n`);
}
function summarizeFixture(state: {
host: string;
port: number;
username: string;
workspaceDir: string;
sshdLogPath: string;
}) {
p.log.message(`Host: ${pc.cyan(state.host)}:${pc.cyan(String(state.port))}`);
p.log.message(`User: ${pc.cyan(state.username)}`);
p.log.message(`Workspace: ${pc.cyan(state.workspaceDir)}`);
p.log.message(`Log: ${pc.dim(state.sshdLogPath)}`);
}
export async function collectEnvLabDoctorStatus(opts: { instance?: string }) {
const statePath = resolveEnvLabSshStatePath(opts.instance);
const [sshSupport, sshStatus] = await Promise.all([
getSshEnvLabSupport(),
readSshEnvLabFixtureStatus(statePath),
]);
const environment = sshStatus.state ? await buildSshEnvLabFixtureConfig(sshStatus.state) : null;
return {
statePath,
ssh: {
supported: sshSupport.supported,
reason: sshSupport.reason,
running: sshStatus.running,
state: sshStatus.state,
environment,
},
};
}
export async function envLabUpCommand(opts: { instance?: string; json?: boolean }) {
const statePath = resolveEnvLabSshStatePath(opts.instance);
const state = await startSshEnvLabFixture({ statePath });
const environment = await buildSshEnvLabFixtureConfig(state);
if (opts.json) {
printJson({ state, environment });
return;
}
p.log.success("SSH env-lab fixture is running.");
summarizeFixture(state);
p.log.message(`State: ${pc.dim(statePath)}`);
}
export async function envLabStatusCommand(opts: { instance?: string; json?: boolean }) {
const statePath = resolveEnvLabSshStatePath(opts.instance);
const status = await readSshEnvLabFixtureStatus(statePath);
const environment = status.state ? await buildSshEnvLabFixtureConfig(status.state) : null;
if (opts.json) {
printJson({ ...status, environment, statePath });
return;
}
if (!status.state || !status.running) {
p.log.info(`SSH env-lab fixture is not running (${pc.dim(statePath)}).`);
return;
}
p.log.success("SSH env-lab fixture is running.");
summarizeFixture(status.state);
p.log.message(`State: ${pc.dim(statePath)}`);
}
export async function envLabDownCommand(opts: { instance?: string; json?: boolean }) {
const statePath = resolveEnvLabSshStatePath(opts.instance);
const stopped = await stopSshEnvLabFixture(statePath);
if (opts.json) {
printJson({ stopped, statePath });
return;
}
if (!stopped) {
p.log.info(`No SSH env-lab fixture was running (${pc.dim(statePath)}).`);
return;
}
p.log.success("SSH env-lab fixture stopped.");
p.log.message(`State: ${pc.dim(statePath)}`);
}
export async function envLabDoctorCommand(opts: { instance?: string; json?: boolean }) {
const status = await collectEnvLabDoctorStatus(opts);
if (opts.json) {
printJson(status);
return;
}
if (status.ssh.supported) {
p.log.success("SSH fixture prerequisites are installed.");
} else {
p.log.warn(`SSH fixture prerequisites are incomplete: ${status.ssh.reason ?? "unknown reason"}`);
}
if (status.ssh.state && status.ssh.running) {
p.log.success("SSH env-lab fixture is running.");
summarizeFixture(status.ssh.state);
p.log.message(`Private key: ${pc.dim(status.ssh.state.clientPrivateKeyPath)}`);
p.log.message(`Known hosts: ${pc.dim(status.ssh.state.knownHostsPath)}`);
} else if (status.ssh.state) {
p.log.warn("SSH env-lab fixture state exists, but the process is not running.");
p.log.message(`State: ${pc.dim(status.statePath)}`);
} else {
p.log.info("SSH env-lab fixture is not running.");
p.log.message(`State: ${pc.dim(status.statePath)}`);
}
p.log.message(`Cleanup: ${pc.dim("pnpm paperclipai env-lab down")}`);
}
export function registerEnvLabCommands(program: Command) {
const envLab = program.command("env-lab").description("Deterministic local environment fixtures");
envLab
.command("up")
.description("Start the default SSH env-lab fixture")
.option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
.option("--json", "Print machine-readable fixture details")
.action(envLabUpCommand);
envLab
.command("status")
.description("Show the current SSH env-lab fixture state")
.option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
.option("--json", "Print machine-readable fixture details")
.action(envLabStatusCommand);
envLab
.command("down")
.description("Stop the default SSH env-lab fixture")
.option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
.option("--json", "Print machine-readable stop details")
.action(envLabDownCommand);
envLab
.command("doctor")
.description("Check SSH fixture prerequisites and current status")
.option("-i, --instance <id>", "Paperclip instance id (default: current/default)")
.option("--json", "Print machine-readable diagnostic details")
.action(envLabDoctorCommand);
}

View File

@@ -75,11 +75,6 @@ function nonEmpty(value: string | null | undefined): string | null {
return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}
function isLoopbackHost(hostname: string): boolean {
const value = hostname.trim().toLowerCase();
return value === "127.0.0.1" || value === "localhost" || value === "::1";
}
export function sanitizeWorktreeInstanceId(rawValue: string): string {
const trimmed = rawValue.trim().toLowerCase();
const normalized = trimmed
@@ -168,7 +163,8 @@ export function rewriteLocalUrlPort(rawUrl: string | undefined, port: number): s
if (!rawUrl) return undefined;
try {
const parsed = new URL(rawUrl);
if (!isLoopbackHost(parsed.hostname)) return rawUrl;
// The URL API normalizes default ports like :80/:443 to "", so treat them as stable URLs.
if (!parsed.port) return rawUrl;
parsed.port = String(port);
return parsed.toString();
} catch {

View File

@@ -1311,6 +1311,7 @@ async function seedWorktreeDatabase(input: {
backupDir: path.resolve(input.targetPaths.backupDir, "seed"),
retention: { dailyDays: 7, weeklyWeeks: 4, monthlyMonths: 1 },
filenamePrefix: `${input.instanceId}-seed`,
backupEngine: "javascript",
includeMigrationJournal: true,
excludeTables: seedPlan.excludedTables,
nullifyColumns: seedPlan.nullifyColumns,

View File

@@ -8,6 +8,7 @@ import { heartbeatRun } from "./commands/heartbeat-run.js";
import { runCommand } from "./commands/run.js";
import { bootstrapCeoInvite } from "./commands/auth-bootstrap-ceo.js";
import { dbBackupCommand } from "./commands/db-backup.js";
import { registerEnvLabCommands } from "./commands/env-lab.js";
import { registerContextCommands } from "./commands/client/context.js";
import { registerCompanyCommands } from "./commands/client/company.js";
import { registerIssueCommands } from "./commands/client/issue.js";
@@ -147,6 +148,7 @@ registerDashboardCommands(program);
registerRoutineCommands(program);
registerFeedbackCommands(program);
registerWorktreeCommands(program);
registerEnvLabCommands(program);
registerPluginCommands(program);
const auth = program.command("auth").description("Authentication and bootstrap utilities");

View File

@@ -2,7 +2,7 @@
Paperclip CLI now supports both:
- instance setup/diagnostics (`onboard`, `doctor`, `configure`, `env`, `allowed-hostname`)
- instance setup/diagnostics (`onboard`, `doctor`, `configure`, `env`, `allowed-hostname`, `env-lab`)
- control-plane client operations (issues, approvals, agents, activity, dashboard)
## Base Usage
@@ -45,6 +45,15 @@ Allow an authenticated/private hostname (for example custom Tailscale DNS):
pnpm paperclipai allowed-hostname dotta-macbook-pro
```
Bring up the default local SSH fixture for environment testing:
```sh
pnpm paperclipai env-lab up
pnpm paperclipai env-lab doctor
pnpm paperclipai env-lab status --json
pnpm paperclipai env-lab down
```
All client commands support:
- `--data-dir <path>`

View File

@@ -59,11 +59,11 @@ cp .env.example .env
# DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip
```
Run migrations (once the migration generation issue is fixed) or use `drizzle-kit push`:
Run migrations:
```sh
DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip \
npx drizzle-kit push
pnpm db:migrate
```
Start the server:
@@ -100,37 +100,27 @@ postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:
### Configure
Set `DATABASE_URL` in your `.env`:
For the application runtime, use a direct PostgreSQL connection unless the database client has explicit prepared-statement configuration for your pooling mode:
```sh
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:6543/postgres
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres
```
For hosted deployments that use a pooled runtime URL, set
`DATABASE_MIGRATION_URL` to the direct connection URL. Paperclip uses it for
startup schema checks/migrations and plugin namespace migrations, while the app
continues to use `DATABASE_URL` for runtime queries:
If you later run the app with a pooled runtime URL, set `DATABASE_MIGRATION_URL` to the direct connection URL. Paperclip uses it for startup schema checks/migrations and plugin namespace migrations, while the app continues to use `DATABASE_URL` for runtime queries:
```sh
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:6543/postgres
DATABASE_MIGRATION_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres
```
If using connection pooling (port 6543), the `postgres` client must disable prepared statements. Update `packages/db/src/client.ts`:
```ts
export function createDb(url: string) {
const sql = postgres(url, { prepare: false });
return drizzlePg(sql, { schema });
}
```
If your hosted database requires transaction-pooling-only connections, use a direct or session-pooled connection for Paperclip until runtime pooling support is documented in this guide. Do not edit database client source files as part of deployment setup.
### Push the schema
```sh
# Use the direct connection (port 5432) for schema changes
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@...5432/postgres \
npx drizzle-kit push
pnpm db:migrate
```
### Free tier limits
@@ -153,6 +143,14 @@ The database mode is controlled by `DATABASE_URL`:
Your Drizzle schema (`packages/db/src/schema/`) stays the same regardless of mode.
## Plugin database namespaces
The plugin runtime tracks plugin-owned database namespaces and migrations in `plugin_database_namespaces` and `plugin_migrations`. Hosted deployments that separate runtime and migration connections should set `DATABASE_MIGRATION_URL`; plugin namespace migration work uses the migration connection when present.
## Backups
Paperclip supports automatic and manual database backups. See `doc/DEVELOPING.md` for the current `paperclipai db:backup` / `pnpm db:backup` commands and backup retention configuration.
## Secret storage
Paperclip stores secret metadata and versions in:

View File

@@ -43,6 +43,8 @@ This starts:
`pnpm dev` and `pnpm dev:once` are now idempotent for the current repo and instance: if the matching Paperclip dev runner is already alive, Paperclip reports the existing process instead of starting a duplicate.
Issue execution may also use project execution workspace policies and workspace runtime services for per-project worktrees, preview servers, and managed dev commands. Configure those through the project workspace/runtime surfaces rather than starting long-running unmanaged processes when a task needs a reusable service.
## Storybook
The board UI Storybook keeps stories and Storybook config under `ui/storybook/` so component review files stay out of the app source routes.
@@ -113,6 +115,8 @@ pnpm test:release-smoke
These browser suites are intended for targeted local verification and CI, not the default agent/human test command.
For normal issue work, start with the smallest targeted check that proves the change. Reserve repo-wide typecheck/build/test runs for PR-ready handoff or changes broad enough that narrow checks do not cover the risk.
## One-Command Local Run
For a first-time local install, you can bootstrap and run in one command:
@@ -194,6 +198,8 @@ For `codex_local`, Paperclip also manages a per-company Codex home under the ins
If the `codex` CLI is not installed or not on `PATH`, `codex_local` agent runs fail at execution time with a clear adapter error. Quota polling uses a short-lived `codex app-server` subprocess: when `codex` cannot be spawned, that provider reports `ok: false` in aggregated quota results and the API server keeps running (it must not exit on a missing binary).
Local adapters require their corresponding CLI/session setup on the machine running Paperclip. External adapters are installed through the adapter/plugin flow and should not require hardcoded imports in `server/` or `ui/`.
## Worktree-local Instances
When developing from multiple git worktrees, do not point two Paperclip servers at the same embedded PostgreSQL data directory.

View File

@@ -23,7 +23,7 @@ Paperclip is the command, communication, and control plane for a company of AI a
- **Track work in real time** — see at any moment what every agent is working on
- **Control costs** — token salary budgets per agent, spend tracking, burn rate
- **Align to goals** — agents see how their work serves the bigger mission
- **Store company knowledge** — a shared brain for the organization
- **Preserve work context** — comments, documents, work products, attachments, and company state stay attached to the work
## Architecture
@@ -36,17 +36,20 @@ The central nervous system. Manages:
- Agent registry and org chart
- Task assignment and status
- Budget and token spend tracking
- Company knowledge base
- Issue comments, documents, work products, attachments, and company state
- Goal hierarchy (company → team → agent → task)
- Heartbeat monitoring — know when agents are alive, idle, or stuck
It also enforces execution-control semantics such as single-assignee issues, atomic checkout and execution locks, blockers, recovery issues, and workspace/runtime controls.
### 2. Execution Services (adapters)
Agents run externally and report into the control plane. An agent is just Python code that gets kicked off and does work. Adapters connect different execution environments:
Agents run externally and report into the control plane. Adapters connect different execution environments and define how a heartbeat is invoked, observed, and cancelled:
- **OpenClaw** — initial adapter target
- **Heartbeat loop** — simple custom Python that loops, checks in, does work
- **Others** — any runtime that can call an API
- **Local CLI/session adapters** — built-in adapters for tools such as Claude Code, Codex, Gemini, OpenCode, Pi, and Cursor
- **HTTP/process-style adapters** — command or webhook/API integrations for custom runtimes
- **OpenClaw gateway** — integration for OpenClaw-style remote agents
- **External adapter plugins** — dynamically loaded adapters installed outside the core app
The control plane doesn't run agents. It orchestrates them. Agents run wherever they run and phone home.

View File

@@ -32,12 +32,14 @@ Then you define who reports to the CEO: a CTO managing programmers, a CMO managi
### Agent Execution
There are two fundamental modes for running an agent's heartbeat:
Paperclip supports several ways to run an agent's heartbeat:
1. **Run a command** — Paperclip kicks off a process (shell command, Python script, etc.) and tracks it. The heartbeat is "execute this and monitor it."
2. **Fire and forget a request** — Paperclip sends a webhook/API call to an externally running agent. The heartbeat is "notify this agent to wake up." (OpenClaw hooks work this way.)
1. **Local CLI/session adapters** — Paperclip starts or resumes local coding-tool sessions such as Claude Code, Codex, Gemini, OpenCode, Pi, and Cursor, then tracks the run.
2. **Run a command** — Paperclip kicks off a process (shell command, Python script, etc.) and tracks it. The heartbeat is "execute this and monitor it."
3. **Fire and forget a request** — Paperclip sends a webhook/API call to an externally running agent. The heartbeat is "notify this agent to wake up." OpenClaw-style hooks work this way.
4. **External adapter plugins** — Paperclip loads adapter packages through the plugin/adapter flow so self-hosted installs can add runtimes without hardcoding them in core.
We provide sensible defaults — a default agent that shells out to Claude Code or Codex with your configuration, remembers session IDs, runs basic scripts. But you can plug in anything.
Agent runs can use project and execution workspaces, managed runtime services such as preview/dev servers, adapter-specific session state, and HTTP/webhook-style execution. We provide sensible defaults, but the adapter is still the boundary: if a runtime can be invoked, observed, and authorized, Paperclip can coordinate it.
### Task Management
@@ -54,7 +56,7 @@ I am researching the Facebook ads Granola uses (current task)
Tasks have parentage. Every task exists in service of a parent task, all the way up to the company goal. This is what keeps autonomous agents aligned — they can always answer "why am I doing this?"
More detailed task structure TBD.
The current issue model includes stable issue identifiers, parent/sub-issues, blockers, a single assignee, comments, issue documents, attachments and work products, and review/approval handoffs. That structure keeps work inspectable by both the board and agents while still allowing agents to decompose work into smaller tasks.
## Principles
@@ -115,7 +115,7 @@ Paperclip's core identity is a **control plane for autonomous AI companies**,
- Do not make the core product a general chat app. The current product definition is explicitly task/comment-centric and “not a chatbot,” and that boundary is valuable.
- Do not build a complete Jira/GitHub replacement. The repo/docs already position Paperclip as organization orchestration, not focused on pull-request review.
- Do not build enterprise-grade RBAC first. The current V1 spec still treats multi-board governance and fine-grained human permissions as out of scope, so the first multi-user version should be coarse and company-scoped.
- Do not build enterprise-grade RBAC first. Paperclip now has authenticated mode, company memberships, instance roles, and permission grants, but fine-grained enterprise governance should remain secondary to the core company control plane.
- Do not lead with raw bash logs and transcripts. Default view should be human-readable intent/progress, with raw detail beneath.
- Do not force users to understand provider/API-key plumbing unless absolutely necessary. There are active onboarding/auth issues already; friction here is clearly real.
@@ -136,11 +138,14 @@ Paperclip's core identity is a **control plane for autonomous AI companies**,
5. **Output-first**
Work is not done until the user can see the result: file, document, preview link, screenshot, plan, or PR.
6. **Local-first, cloud-ready**
6. **Execution visibility without log worship**
Active runs, recovery issues, productivity review states, blockers, and work products should be first-class surfaces. Raw transcripts are available when needed, but they are not the primary product surface.
7. **Local-first, cloud-ready**
The mental model should not change between local solo use and shared/private or public/cloud deployment.
7. **Safe autonomy**
8. **Safe autonomy**
Auto mode is allowed; hidden token burn is not.
8. **Thin core, rich edges**
9. **Thin core, rich edges**
Put optional chat, knowledge, and special surfaces into plugins/extensions rather than bloating the control plane.

View File

@@ -1,7 +1,7 @@
# Paperclip V1 Implementation Spec
Status: Implementation contract for first release (V1)
Date: 2026-02-17
Date: 2026-04-28
Audience: Product, engineering, and agent-integration authors
Source inputs: `GOAL.md`, `PRODUCT.md`, `SPEC.md`, `DATABASE.md`, current monorepo code
@@ -37,8 +37,9 @@ These decisions close open questions from `SPEC.md` for V1.
| Visibility | Full visibility to board and all agents in same company |
| Communication | Tasks + comments only (no separate chat system) |
| Task ownership | Single assignee; atomic checkout required for `in_progress` transition |
| Recovery | No automatic reassignment; work recovery stays manual/explicit |
| Agent adapters | Built-in `process` and `http` adapters |
| Recovery | Liveness/watchdog recovery preserves explicit ownership: retry lost execution continuity where safe, otherwise create visible recovery issues or require human escalation (see `doc/execution-semantics.md`) |
| Agent adapters | Built-in `process`, `http`, local CLI/session adapters, and OpenClaw gateway support; external adapters can also be loaded through the adapter plugin flow |
| Plugin framework | Local/self-hosted early plugin runtime is in scope; cloud marketplace and packaged public distribution remain out of scope |
| Auth | Mode-dependent human auth (`local_trusted` implicit board in current code; authenticated mode uses sessions), API keys for agents |
| Budget period | Monthly UTC calendar window |
| Budget enforcement | Soft alerts + hard limit auto-pause |
@@ -73,7 +74,7 @@ V1 implementation extends this baseline into a company-centric, governance-aware
## 5.2 Out of Scope (V1)
- Plugin framework and third-party extension SDK
- Cloud-grade plugin marketplace/distribution beyond the local/self-hosted plugin runtime
- Revenue/expense accounting beyond model/token costs
- Knowledge base subsystem
- Public marketplace (ClipHub)
@@ -123,6 +124,16 @@ Human auth tables (`users`, `sessions`, and provider-specific auth artifacts) ar
- `name` text not null
- `description` text null
- `status` enum: `active | paused | archived`
- `pause_reason` text null
- `paused_at` timestamptz null
- `issue_prefix` text not null
- `issue_counter` int not null
- `budget_monthly_cents` int not null default 0
- `spent_monthly_cents` int not null default 0
- `attachment_max_bytes` int not null
- `require_board_approval_for_new_agents` boolean not null default false
- feedback sharing consent fields
- branding fields such as `brand_color`
Invariant: every business record belongs to exactly one company.
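A minimal Drizzle-style sketch of a few of the company fields above, for orientation only; the real definitions live in `packages/db/src/schema/`, and the column names and types here are illustrative assumptions rather than the actual schema:
```ts
// Illustrative sketch only: approximates the field list above, not the real schema.
import { pgTable, uuid, text, integer, boolean, timestamp } from "drizzle-orm/pg-core";

export const companies = pgTable("companies", {
  id: uuid("id").primaryKey(),
  name: text("name").notNull(),
  description: text("description"),
  status: text("status").notNull(),                    // active | paused | archived
  pauseReason: text("pause_reason"),
  pausedAt: timestamp("paused_at", { withTimezone: true }),
  issuePrefix: text("issue_prefix").notNull(),
  issueCounter: integer("issue_counter").notNull(),
  budgetMonthlyCents: integer("budget_monthly_cents").notNull().default(0),
  spentMonthlyCents: integer("spent_monthly_cents").notNull().default(0),
  attachmentMaxBytes: integer("attachment_max_bytes").notNull(),
  requireBoardApprovalForNewAgents: boolean("require_board_approval_for_new_agents").notNull().default(false),
});
```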
@@ -133,15 +144,21 @@ Invariant: every business record belongs to exactly one company.
- `name` text not null
- `role` text not null
- `title` text null
- `status` enum: `active | paused | idle | running | error | terminated`
- `icon` text null
- `status` enum: `active | paused | idle | running | error | pending_approval | terminated`
- `reports_to` uuid fk `agents.id` null
- `capabilities` text null
- `adapter_type` enum: `process | http`
- `adapter_type` text; built-ins include `process`, `http`, `claude_local`, `codex_local`, `gemini_local`, `opencode_local`, `pi_local`, `cursor`, and `openclaw_gateway`
- `adapter_config` jsonb not null
- `runtime_config` jsonb not null default `{}`
- `default_environment_id` uuid fk `environments.id` null
- `context_mode` enum: `thin | fat` default `thin`
- `budget_monthly_cents` int not null default 0
- `spent_monthly_cents` int not null default 0
- pause fields: `pause_reason`, `paused_at`
- `permissions` jsonb not null default `{}`
- `last_heartbeat_at` timestamptz null
- `metadata` jsonb null
Invariants:
@@ -195,6 +212,7 @@ Invariant:
- `id` uuid pk
- `company_id` uuid fk not null
- `project_id` uuid fk `projects.id` null
- `project_workspace_id` uuid fk `project_workspaces.id` null
- `goal_id` uuid fk `goals.id` null
- `parent_id` uuid fk `issues.id` null
- `title` text not null
@@ -202,13 +220,22 @@ Invariant:
- `status` enum: `backlog | todo | in_progress | in_review | done | blocked | cancelled`
- `priority` enum: `critical | high | medium | low`
- `assignee_agent_id` uuid fk `agents.id` null
- `assignee_user_id` text null
- checkout/execution locks: `checkout_run_id`, `execution_run_id`, `execution_agent_name_key`, `execution_locked_at`
- `created_by_agent_id` uuid fk `agents.id` null
- `created_by_user_id` uuid fk `users.id` null
- identifier fields: `issue_number`, `identifier`
- origin fields: `origin_kind`, `origin_id`, `origin_run_id`, `origin_fingerprint`
- `request_depth` int not null default 0
- `billing_code` text null
- `assignee_adapter_overrides` jsonb null
- `execution_policy` jsonb null
- `execution_state` jsonb null
- execution workspace fields: `execution_workspace_id`, `execution_workspace_preference`, `execution_workspace_settings`
- `started_at` timestamptz null
- `completed_at` timestamptz null
- `cancelled_at` timestamptz null
- `hidden_at` timestamptz null
Invariants:
@@ -261,10 +288,10 @@ Invariant: each event must attach to agent and company; rollups are aggregation,
- `id` uuid pk
- `company_id` uuid fk not null
- `type` enum: `hire_agent | approve_ceo_strategy`
- `type` enum: `hire_agent | approve_ceo_strategy | budget_override_required | request_board_approval`
- `requested_by_agent_id` uuid fk `agents.id` null
- `requested_by_user_id` uuid fk `users.id` null
- `status` enum: `pending | approved | rejected | cancelled`
- `status` enum: `pending | revision_requested | approved | rejected | cancelled`
- `payload` jsonb not null
- `decision_note` text null
- `decided_by_user_id` uuid fk `users.id` null
@@ -363,6 +390,15 @@ Operational policy:
- `document_id` uuid fk not null
- `key` text not null (`plan`, `design`, `notes`, etc.)
## 7.16 Current Implementation Addenda
The current implementation includes additional V1-control-plane tables beyond the original February snapshot:
- Issue structure and review: `issue_relations` for blockers, `labels`/`issue_labels`, `issue_thread_interactions`, `issue_approvals`, `issue_execution_decisions`, `issue_work_products`, `issue_inbox_archives`, `issue_read_states`, and issue reference mention indexes.
- Execution and workspace control: `execution_workspaces`, `project_workspaces`, `workspace_runtime_services`, `workspace_operations`, `environments`, `environment_leases`, `agent_task_sessions`, `agent_runtime_state`, `agent_wakeup_requests`, heartbeat events, and watchdog decision tables.
- Plugins and routines: `plugins`, plugin config/state/entities/jobs/logs/webhooks, plugin database namespaces/migrations, plugin company settings, and `routines`.
- Access and operations: company memberships, instance roles, principal permission grants, invites, join requests, board API keys, CLI auth challenges, budget policies/incidents, feedback exports/votes, company skills, sidebar preferences, and company logos.
## 8. State Machines
## 8.1 Agent Status
@@ -395,7 +431,14 @@ Side effects:
- entering `done` sets `completed_at`
- entering `cancelled` sets `cancelled_at`
Detailed ownership, execution, blocker, and crash-recovery semantics are documented in `doc/execution-semantics.md`.
V1 non-terminal liveness rule:
- agent-owned `todo`, `in_progress`, `in_review`, and `blocked` issues must have a live execution path, an explicit waiting path, or an explicit recovery path
- `in_review` is healthy only when a typed execution participant, pending issue-thread interaction or approval, user owner, active run, queued wake, or explicit recovery issue owns the next action
- a blocked chain is covered only when each unresolved leaf issue is live or explicitly waiting
- when Paperclip cannot safely infer the next action, it surfaces the problem through visible blocked/recovery work instead of silently completing or reassigning work
Detailed ownership, execution, blocker, active-run watchdog, crash-recovery, and non-terminal liveness semantics are documented in `doc/execution-semantics.md`.
## 8.3 Approval Status
@@ -484,6 +527,7 @@ All endpoints are under `/api` and return JSON.
- `DELETE /issues/:issueId/documents/:key`
- `POST /issues/:issueId/checkout`
- `POST /issues/:issueId/release`
- `POST /issues/:issueId/admin/force-release` (board-only lock recovery)
- `POST /issues/:issueId/comments`
- `GET /issues/:issueId/comments`
- `POST /companies/:companyId/issues/:issueId/attachments` (multipart upload)
@@ -508,6 +552,8 @@ Server behavior:
2. if updated row count is 0, return `409` with current owner/status
3. successful checkout sets `assignee_agent_id`, `status = in_progress`, and `started_at`
`POST /issues/:issueId/admin/force-release` is an operator recovery endpoint for stale harness locks. It requires board access to the issue company, clears checkout and execution run lock fields, and may clear the agent assignee when `clearAssignee=true` is passed. The route must write an `issue.admin_force_release` activity log entry containing the previous checkout and execution run IDs.
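A minimal sketch of the conditional checkout update described above, where zero updated rows means the route answers 409, assuming a Drizzle-style client; the import paths, table, columns, and guard condition are illustrative assumptions, not the actual server code:
```ts
import { and, eq, isNull } from "drizzle-orm";
import { db } from "@paperclip/db";            // assumed db handle
import { issues } from "@paperclip/db/schema"; // assumed issues table

// Returns true when this run won the checkout; zero updated rows means another
// owner already holds the issue, so the caller should return 409 with the
// current owner/status.
export async function checkoutIssue(issueId: string, agentId: string, runId: string) {
  const updated = await db
    .update(issues)
    .set({
      assigneeAgentId: agentId,
      status: "in_progress",
      checkoutRunId: runId,
      startedAt: new Date(),
    })
    // Guard is illustrative: only succeed when no checkout lock is held.
    .where(and(eq(issues.id, issueId), isNull(issues.checkoutRunId)))
    .returning({ id: issues.id });
  return updated.length === 1;
}
```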
## 10.5 Projects
- `GET /companies/:companyId/projects`
@@ -553,6 +599,17 @@ Dashboard payload must include:
- `422` semantic rule violation
- `500` server error
## 10.10 Current Implementation API Addenda
The current app also exposes V1-supporting surfaces for:
- issue thread interactions (`suggest_tasks`, `ask_user_questions`, `request_confirmation`)
- issue approvals, issue references/search, labels, read state, inbox/archive state, and work products
- execution workspaces, project workspaces, workspace runtime services, and workspace operations
- routines and scheduled/API/webhook triggers
- plugin installation, configuration, state, jobs, logs, webhooks, and plugin database namespace migration
- company import/export preview/apply, feedback export/vote routes, instance backup/config routes, invites, join requests, memberships, and permission grants
## 11. Heartbeat and Adapter Contract
## 11.1 Adapter Interface
@@ -728,13 +785,14 @@ Required UX behaviors:
- Node 20+
- `DATABASE_URL` optional
- if unset, auto-use PGlite and push schema
- if unset, auto-use embedded PostgreSQL under `~/.paperclip/instances/default/db`
## 15.2 Migrations
- Drizzle migrations are source of truth
- local/dev startup applies pending migrations automatically where supported
- `pnpm db:migrate` applies pending migrations manually
- no destructive migration in-place for V1 upgrade path
- provide migration script from existing minimal tables to company-scoped schema
## 15.3 Logging and Audit
@@ -789,6 +847,8 @@ A release candidate is blocked unless these pass:
## 18. Delivery Plan
Current implementation note: the milestones below describe the original V1 sequencing. Several systems originally framed as future work have since shipped or advanced materially, including issue documents/interactions, blockers, routines, execution workspaces, import/export portability, authenticated deployment modes, multi-user basics, and the local/self-hosted plugin runtime.
## Milestone 1: Company Core and Auth
- add `companies` and company scoping to existing entities
@@ -841,7 +901,7 @@ V1 is complete only when all criteria are true:
## 20. Post-V1 Backlog (Explicitly Deferred)
- plugin architecture
- cloud-grade plugin marketplace/distribution
- richer workflow-state customization per team
- milestones/labels/dependency graph depth beyond V1 minimum
- realtime transport optimization (SSE/WebSockets)

Binary image files changed (previews not shown): two at 174 KiB and two at 177 KiB.

View File

@@ -1,7 +1,7 @@
# Execution Semantics
Status: Current implementation guide
Date: 2026-04-13
Date: 2026-04-26
Audience: Product and engineering
This document explains how Paperclip interprets issue assignment, issue status, execution runs, wakeups, parent/sub-issue structure, and blocker relationships.
@@ -150,11 +150,23 @@ Blocked issues should stay idle while blockers remain unresolved. Paperclip shou
If a parent is truly waiting on a child, model that with blockers. Do not rely on the parent/child relationship alone.
## 7. Consistent Execution Path Rules
## 7. Non-Terminal Issue Liveness Contract
For agent-assigned, non-terminal, actionable issues, Paperclip should not leave work in a state where nobody is working it and nothing will wake it.
For agent-owned, non-terminal issues, Paperclip should never leave work in a state where nobody is responsible for the next move and nothing will wake or surface it.
The relevant execution path depends on status.
This is a visibility contract, not an auto-completion contract. If Paperclip cannot safely infer the next action, it should surface the ambiguity with a blocked state, a visible comment, or an explicit recovery issue. It must not silently mark work done from prose comments or guess that a dependency is complete.
An issue is healthy when the product can answer "what moves this forward next?" without requiring a human to reconstruct intent from the whole thread. An issue is stalled when it is non-terminal but has no live execution path, no explicit waiting path, and no recovery path.
The valid action-path primitives (a minimal check is sketched after this list) are:
- an active run linked to the issue
- a queued wake or continuation that can be delivered to the responsible agent
- a typed execution-policy participant, such as `executionState.currentParticipant`
- a pending issue-thread interaction or linked approval that is waiting for a specific responder
- a human owner via `assigneeUserId`
- a first-class blocker chain whose unresolved leaf issues are themselves healthy
- an open explicit recovery issue that names the owner and action needed to restore liveness
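A minimal TypeScript sketch of how these primitives combine into a single health check; every field and function name below is an illustrative assumption, not the recovery service's actual shape:
```ts
// Illustrative only: mirrors the action-path primitives listed above.
type ActionPaths = {
  hasActiveRun: boolean;                    // active run linked to the issue
  hasQueuedWake: boolean;                   // deliverable wake or continuation
  hasTypedParticipant: boolean;             // e.g. executionState.currentParticipant
  hasPendingInteractionOrApproval: boolean; // waiting for a named responder
  assigneeUserId: string | null;            // human owner
  blockerLeavesHealthy: boolean | null;     // null = no blocker chain
  hasOpenRecoveryIssue: boolean;            // recovery issue naming owner and action
};

function isNonTerminalIssueHealthy(p: ActionPaths): boolean {
  return (
    p.hasActiveRun ||
    p.hasQueuedWake ||
    p.hasTypedParticipant ||
    p.hasPendingInteractionOrApproval ||
    p.assigneeUserId !== null ||
    p.blockerLeavesHealthy === true ||
    p.hasOpenRecoveryIssue
  );
}
```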
### Agent-assigned `todo`
@@ -162,9 +174,11 @@ This is dispatch state: ready to start, not yet actively claimed.
A healthy dispatch state means at least one of these is true:
- the issue already has a queued/running wake path
- the issue is intentionally resting in `todo` after a successful agent heartbeat, not after an interrupted dispatch
- the issue has been explicitly surfaced as stranded
- the issue already has a queued wake path
- the issue is intentionally resting in `todo` after a completed agent heartbeat, with no interrupted dispatch evidence
- the issue has been explicitly surfaced as stranded through a visible blocked/recovery path
An assigned `todo` issue is stalled when dispatch was interrupted, no wake remains queued or running, and no recovery path has been opened.
### Agent-assigned `in_progress`
@@ -174,7 +188,39 @@ A healthy active-work state means at least one of these is true:
- there is an active run for the issue
- there is already a queued continuation wake
- the issue has been explicitly surfaced as stranded
- there is an open explicit recovery issue for the lost execution path
An agent-owned `in_progress` issue is stalled when it has no active run, no queued continuation, and no explicit recovery surface. A still-running but silent process is not automatically stalled; it is handled by the active-run watchdog contract.
### `in_review`
This is review/approval state: execution is paused because the next move belongs to a reviewer, approver, board user, or recovery owner.
A healthy `in_review` issue has at least one valid action path:
- a typed execution-policy participant who can approve or request changes
- a pending issue-thread interaction or linked approval waiting for a named responder
- a human owner via `assigneeUserId`
- an active run or queued wake that is expected to process the review state
- an open explicit recovery issue for an ambiguous review handoff
Agent-assigned `in_review` with no typed participant is only healthy when one of the other paths exists. Assignment to the same agent that produced the handoff is not, by itself, a review path.
An `in_review` issue is stalled when it has no typed participant, no pending interaction or approval, no user owner, no active run, no queued wake, and no explicit recovery issue. Paperclip should surface that state as recovery work rather than silently completing the issue or leaving blocker chains parked indefinitely.
### `blocked`
This is explicit waiting state.
A healthy `blocked` issue has an explicit waiting path:
- first-class blockers exist, and each unresolved leaf has a valid action path under this contract
- the issue is blocked on an explicit recovery issue that itself has a live or waiting path
- the issue is waiting on a pending interaction, linked approval, human owner, or clearly named external owner/action
A blocker chain is covered only when its unresolved leaf is live or explicitly waiting. An intermediate `blocked` issue does not make the chain healthy by itself.
A `blocked` issue is stalled when the unresolved blocker leaf has no active run, queued wake, typed participant, pending interaction or approval, user owner, external owner/action, or recovery issue. In that case the parent should show the first stalled leaf instead of presenting the dependency as calmly covered.
## 8. Crash and Restart Recovery
@@ -218,15 +264,83 @@ This is an active-work continuity recovery.
Startup recovery and periodic recovery are different from normal wakeup delivery.
On startup and on the periodic recovery loop, Paperclip now does three things in sequence:
On startup and on the periodic recovery loop, Paperclip now does four things in sequence:
1. reap orphaned `running` runs
2. resume persisted `queued` runs
3. reconcile stranded assigned work
4. scan silent active runs and create or update explicit watchdog review issues
That last step is what closes the gap where issue state survives a crash but the wake/run path does not.
The stranded-work pass closes the gap where issue state survives a crash but the wake/run path does not. The silent-run scan covers the separate case where a live process exists but has stopped producing observable output.
## 10. What This Does Not Mean
## 10. Silent Active-Run Watchdog
An active run can still be unhealthy even when its process is `running`. Paperclip treats prolonged output silence as a watchdog signal, not as proof that the run is failed.
The recovery service owns this contract:
- classify active-run output silence as `ok`, `suspicious`, `critical`, `snoozed`, or `not_applicable`
- collect bounded evidence from run logs, recent run events, child issues, and blockers
- preserve redaction and truncation before evidence is written to issue descriptions
- create at most one open `stale_active_run_evaluation` issue per run
- honor active snooze decisions before creating more review work
- build the `outputSilence` summary shown by live-run and active-run API responses
Suspicious silence creates a medium-priority review issue for the selected recovery owner. Critical silence raises that review issue to high priority and blocks the source issue on the explicit evaluation task without cancelling the active process.
Watchdog decisions are explicit operator/recovery-owner decisions:
- `snooze` records an operator-chosen future quiet-until time and suppresses scan-created review work during that window
- `continue` records that the current evidence is acceptable, does not cancel or mutate the active run, and sets a 30-minute default re-arm window before the watchdog evaluates the still-silent run again
- `dismissed_false_positive` records why the review was not actionable
Operators should prefer `snooze` for known time-bounded quiet periods. `continue` is only a short acknowledgement of the current evidence; if the run remains silent after the re-arm window, the periodic watchdog scan can create or update review work again.
The board can record watchdog decisions. The assigned owner of the watchdog evaluation issue can also record them. Other agents cannot.
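A hedged sketch of how the decision kinds above might gate the next watchdog scan; the 30-minute re-arm window comes from the text, while the type shapes, names, and the behavior after a dismissal are assumptions for illustration:
```ts
// Illustrative only: not the recovery service's real types or persistence.
const CONTINUE_REARM_MS = 30 * 60 * 1000; // default re-arm window after `continue`

type WatchdogDecision =
  | { kind: "snooze"; quietUntil: Date }
  | { kind: "continue"; decidedAt: Date }
  | { kind: "dismissed_false_positive"; reason: string };

// True while the decision still suppresses scan-created review work.
function suppressesReviewWork(decision: WatchdogDecision, now: Date): boolean {
  switch (decision.kind) {
    case "snooze":
      return now < decision.quietUntil; // operator-chosen quiet window
    case "continue":
      return now.getTime() - decision.decidedAt.getTime() < CONTINUE_REARM_MS;
    case "dismissed_false_positive":
      return false; // assumption: the scan may evaluate again on fresh evidence
  }
}
```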
## 11. Auto-Recover vs Explicit Recovery vs Human Escalation
Paperclip uses three different recovery outcomes, depending on how much it can safely infer.
### Auto-Recover
Auto-recovery is allowed when ownership is clear and the control plane only lost execution continuity.
Examples:
- requeue one dispatch wake for an assigned `todo` issue whose latest run failed, timed out, or was cancelled
- requeue one continuation wake for an assigned `in_progress` issue whose live execution path disappeared
- assign an orphan blocker back to its creator when that blocker is already preventing other work
Auto-recovery preserves the existing owner. It does not choose a replacement agent.
### Explicit Recovery Issue
Paperclip creates an explicit recovery issue when the system can identify a problem but cannot safely complete the work itself.
Examples:
- automatic stranded-work retry was already exhausted
- a dependency graph has an invalid/uninvokable owner, unassigned blocker, or invalid review participant
- an active run is silent past the watchdog threshold
The source issue remains visible and blocked on the recovery issue when blocking is necessary for correctness. The recovery owner must restore a live path, resolve the source issue manually, or record the reason it is a false positive.
Instance-level issue-graph liveness auto-recovery is disabled by default. When enabled, its lookback window means "dependency paths updated within the last N hours"; older findings remain advisory and are counted as outside the configured lookback instead of creating recovery issues automatically. This is an operator noise control, not the older staleness delay for determining whether a chain is old enough to surface.
### Human Escalation
Human escalation is required when the next safe action depends on board judgment, budget/approval policy, or information unavailable to the control plane.
Examples:
- all candidate recovery owners are paused, terminated, pending approval, or budget-blocked
- the issue is human-owned rather than agent-owned
- the run is intentionally quiet but needs an operator decision before cancellation or continuation
In these cases Paperclip should leave a visible issue/comment trail instead of silently retrying.
## 12. What This Does Not Mean
These semantics do not change V1 into an auto-reassignment system.
@@ -240,9 +354,10 @@ The recovery model is intentionally conservative:
- preserve ownership
- retry once when the control plane lost execution continuity
- create explicit recovery work when the system can identify a bounded recovery owner/action
- escalate visibly when the system cannot safely keep going
## 11. Practical Interpretation
## 13. Practical Interpretation
For a board operator, the intended meaning is:

30
docker/.env.aws.example Normal file
View File

@@ -0,0 +1,30 @@
# AWS ECS Fargate deployment environment
# Copy to .env.aws and fill in values before deploying
#
# Secrets (DATABASE_URL, BETTER_AUTH_SECRET, ANTHROPIC_API_KEY, OPENAI_API_KEY,
# GITHUB_TOKEN) are injected via AWS Secrets Manager — do NOT set them here.
# Deployment mode
PAPERCLIP_DEPLOYMENT_MODE=authenticated
PAPERCLIP_DEPLOYMENT_EXPOSURE=public
PAPERCLIP_PUBLIC_URL=https://paperclip.example.com
# Server
HOST=0.0.0.0
PORT=3100
NODE_ENV=production
SERVE_UI=true
# Paperclip paths
PAPERCLIP_HOME=/paperclip
PAPERCLIP_INSTANCE_ID=default
PAPERCLIP_CONFIG=/paperclip/instances/default/config.json
# Auto-apply migrations on startup
PAPERCLIP_MIGRATION_AUTO_APPLY=true
# Enable heartbeat scheduler for remote agents
HEARTBEAT_SCHEDULER_ENABLED=true
# Post-deploy hardening (uncomment after first user signs up)
# PAPERCLIP_AUTH_DISABLE_SIGN_UP=true

View File

@@ -0,0 +1,90 @@
{
"family": "paperclip-server",
"networkMode": "awsvpc",
"requiresCompatibilities": ["FARGATE"],
"cpu": "2048",
"memory": "4096",
"executionRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/paperclip-ecs-execution",
"taskRoleArn": "arn:aws:iam::<ACCOUNT_ID>:role/paperclip-ecs-task",
"containerDefinitions": [
{
"name": "paperclip-server",
"image": "<ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/paperclip-server:latest",
"essential": true,
"portMappings": [
{
"containerPort": 3100,
"protocol": "tcp"
}
],
"environment": [
{ "name": "NODE_ENV", "value": "production" },
{ "name": "HOST", "value": "0.0.0.0" },
{ "name": "PORT", "value": "3100" },
{ "name": "SERVE_UI", "value": "true" },
{ "name": "PAPERCLIP_HOME", "value": "/paperclip" },
{ "name": "PAPERCLIP_INSTANCE_ID", "value": "default" },
{ "name": "PAPERCLIP_CONFIG", "value": "/paperclip/instances/default/config.json" },
{ "name": "PAPERCLIP_DEPLOYMENT_MODE", "value": "authenticated" },
{ "name": "PAPERCLIP_DEPLOYMENT_EXPOSURE", "value": "public" },
{ "name": "PAPERCLIP_PUBLIC_URL", "value": "https://<DOMAIN>" },
{ "name": "PAPERCLIP_MIGRATION_AUTO_APPLY", "value": "true" },
{ "name": "HEARTBEAT_SCHEDULER_ENABLED", "value": "true" }
],
"secrets": [
{
"name": "DATABASE_URL",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/database-url"
},
{
"name": "BETTER_AUTH_SECRET",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/better-auth-secret"
},
{
"name": "ANTHROPIC_API_KEY",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/anthropic-api-key"
},
{
"name": "OPENAI_API_KEY",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/openai-api-key"
},
{
"name": "GITHUB_TOKEN",
"valueFrom": "arn:aws:secretsmanager:<REGION>:<ACCOUNT_ID>:secret:paperclip/github-token"
}
],
"mountPoints": [
{
"sourceVolume": "paperclip-data",
"containerPath": "/paperclip",
"readOnly": false
}
],
"healthCheck": {
"command": ["CMD-SHELL", "curl -f http://localhost:3100/api/health || exit 1"],
"interval": 30,
"timeout": 5,
"retries": 3,
"startPeriod": 60
},
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/paperclip",
"awslogs-region": "<REGION>",
"awslogs-stream-prefix": "server"
}
}
}
],
"volumes": [
{
"name": "paperclip-data",
"efsVolumeConfiguration": {
"fileSystemId": "<EFS_ID>",
"rootDirectory": "/",
"transitEncryption": "ENABLED"
}
}
]
}

View File

@@ -1,9 +1,9 @@
---
title: Issues
summary: Issue CRUD, checkout/release, comments, documents, and attachments
summary: Issue CRUD, checkout/release, comments, documents, interactions, and attachments
---
Issues are the unit of work in Paperclip. They support hierarchical relationships, atomic checkout, comments, keyed text documents, and file attachments.
Issues are the unit of work in Paperclip. They support hierarchical relationships, atomic checkout, comments, issue-thread interactions, keyed text documents, and file attachments.
## List Issues
@@ -121,6 +121,65 @@ POST /api/issues/{issueId}/comments
@-mentions (`@AgentName`) in comments trigger heartbeats for the mentioned agent.
## Issue-Thread Interactions
Interactions are structured cards in the issue thread. Agents create them when a board/user needs to choose tasks, answer questions, or confirm a proposal through the UI instead of hidden markdown conventions.
### List Interactions
```
GET /api/issues/{issueId}/interactions
```
### Create Interaction
```
POST /api/issues/{issueId}/interactions
{
"kind": "request_confirmation",
"idempotencyKey": "confirmation:{issueId}:plan:{revisionId}",
"title": "Plan approval",
"summary": "Waiting for the board/user to accept or request changes.",
"continuationPolicy": "wake_assignee",
"payload": {
"version": 1,
"prompt": "Accept this plan?",
"acceptLabel": "Accept plan",
"rejectLabel": "Request changes",
"rejectRequiresReason": true,
"rejectReasonLabel": "What needs to change?",
"detailsMarkdown": "Review the latest plan document before accepting.",
"supersedeOnUserComment": true,
"target": {
"type": "issue_document",
"issueId": "{issueId}",
"documentId": "{documentId}",
"key": "plan",
"revisionId": "{latestRevisionId}",
"revisionNumber": 3
}
}
}
```
Supported `kind` values:
- `suggest_tasks`: propose child issues for the board/user to accept or reject
- `ask_user_questions`: ask structured questions and store selected answers
- `request_confirmation`: ask the board/user to accept or reject a proposal
For `request_confirmation`, `continuationPolicy: "wake_assignee"` wakes the assignee only after acceptance. Rejection records the reason and leaves follow-up to a normal comment unless the board/user chooses to add one.
### Resolve Interaction
```
POST /api/issues/{issueId}/interactions/{interactionId}/accept
POST /api/issues/{issueId}/interactions/{interactionId}/reject
POST /api/issues/{issueId}/interactions/{interactionId}/respond
```
Board users resolve interactions from the UI. Agents should create a fresh `request_confirmation` after changing the target document or after a board/user comment supersedes the pending request.
## Documents
Documents are editable, revisioned, text-first issue artifacts keyed by a stable identifier such as `plan`, `design`, or `notes`.

580
docs/deploy/aws-ecs.md Normal file
View File

@@ -0,0 +1,580 @@
---
title: AWS ECS Fargate
summary: Deploy Paperclip to AWS using ECS Fargate, RDS Postgres, and EFS
---
Deploy Paperclip to AWS with ECS Fargate (compute), RDS Postgres 17 (database), and EFS (persistent storage). This guide uses the AWS CLI and produces a single-task ECS service behind an ALB with HTTPS.
## Prerequisites
- AWS CLI v2 configured with a profile that has admin-level permissions
- Docker installed locally (for building and pushing the image)
- A registered domain with DNS you control (for the TLS certificate)
- The Paperclip repo cloned locally
Set these shell variables for the rest of the guide:
```bash
export AWS_REGION=us-east-1
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export PAPERCLIP_DOMAIN=paperclip.example.com # your domain
export DB_PASSWORD=$(openssl rand -base64 24 | tr -d '/+=' | head -c 32)
export AUTH_SECRET=$(openssl rand -base64 32)
```
## 1. Create ECR Repository
```bash
aws ecr create-repository \
--repository-name paperclip-server \
--image-scanning-configuration scanOnPush=true \
--region $AWS_REGION
```
## 2. Build and Push Docker Image
```bash
cd /path/to/paperclip
# Authenticate Docker to ECR
aws ecr get-login-password --region $AWS_REGION \
| docker login --username AWS --password-stdin \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
# Build
docker build -t paperclip-server .
# Tag and push
docker tag paperclip-server:latest \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
docker push \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
```
## 3. Networking (VPC, Subnets, Security Groups)
Use the default VPC or create a dedicated one. The guide assumes the default VPC with public and private subnets in two AZs.
```bash
# Get default VPC
VPC_ID=$(aws ec2 describe-vpcs \
--filters Name=isDefault,Values=true \
--query 'Vpcs[0].VpcId' --output text)
# Get two public subnets (for ALB)
SUBNET_IDS=$(aws ec2 describe-subnets \
--filters Name=vpc-id,Values=$VPC_ID \
--query 'Subnets[?MapPublicIpOnLaunch==`true`] | [0:2].SubnetId' \
--output text)
SUBNET_1=$(echo $SUBNET_IDS | awk '{print $1}')
SUBNET_2=$(echo $SUBNET_IDS | awk '{print $2}')
```
Create security groups:
```bash
# ALB security group — inbound HTTPS
ALB_SG=$(aws ec2 create-security-group \
--group-name paperclip-alb \
--description "Paperclip ALB" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
--group-id $ALB_SG \
--protocol tcp --port 443 --cidr 0.0.0.0/0
# Also open port 80 so the ALB can accept HTTP and redirect to HTTPS
aws ec2 authorize-security-group-ingress \
--group-id $ALB_SG \
--protocol tcp --port 80 --cidr 0.0.0.0/0
# ECS task security group — inbound from ALB only
ECS_SG=$(aws ec2 create-security-group \
--group-name paperclip-ecs \
--description "Paperclip ECS tasks" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
--group-id $ECS_SG \
--protocol tcp --port 3100 \
--source-group $ALB_SG
# RDS security group — inbound from ECS only
RDS_SG=$(aws ec2 create-security-group \
--group-name paperclip-rds \
--description "Paperclip RDS" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
--group-id $RDS_SG \
--protocol tcp --port 5432 \
--source-group $ECS_SG
# EFS security group — inbound NFS from ECS only
EFS_SG=$(aws ec2 create-security-group \
--group-name paperclip-efs \
--description "Paperclip EFS" \
--vpc-id $VPC_ID \
--query 'GroupId' --output text)
aws ec2 authorize-security-group-ingress \
--group-id $EFS_SG \
--protocol tcp --port 2049 \
--source-group $ECS_SG
```
## 4. Create RDS Postgres Instance
```bash
# Custom VPCs don't come with a default DB subnet group — create one
# that spans our two subnets so RDS can place the instance.
aws rds create-db-subnet-group \
--db-subnet-group-name paperclip-db-subnet \
--db-subnet-group-description "Paperclip RDS subnets" \
--subnet-ids $SUBNET_1 $SUBNET_2
aws rds create-db-instance \
--db-instance-identifier paperclip-db \
--db-instance-class db.t4g.micro \
--engine postgres \
--engine-version 17 \
--master-username paperclip \
--master-user-password "$DB_PASSWORD" \
--allocated-storage 20 \
--storage-type gp3 \
--vpc-security-group-ids $RDS_SG \
--db-subnet-group-name paperclip-db-subnet \
--no-publicly-accessible \
--backup-retention-period 7 \
--no-multi-az \
--db-name paperclip \
--region $AWS_REGION
# Wait for it to become available (takes 5-10 min)
aws rds wait db-instance-available \
--db-instance-identifier paperclip-db
# Get the endpoint
RDS_ENDPOINT=$(aws rds describe-db-instances \
--db-instance-identifier paperclip-db \
--query 'DBInstances[0].Endpoint.Address' --output text)
DATABASE_URL="postgresql://paperclip:${DB_PASSWORD}@${RDS_ENDPOINT}:5432/paperclip"
```
## 5. Create EFS Filesystem
```bash
EFS_ID=$(aws efs create-file-system \
--performance-mode generalPurpose \
--throughput-mode bursting \
--encrypted \
--tags Key=Name,Value=paperclip-data \
--query 'FileSystemId' --output text)
# Create mount targets in each subnet
for SUBNET in $SUBNET_1 $SUBNET_2; do
aws efs create-mount-target \
--file-system-id $EFS_ID \
--subnet-id $SUBNET \
--security-groups $EFS_SG
done
# Wait for mount targets
aws efs describe-mount-targets --file-system-id $EFS_ID
```
## 6. Store Secrets
```bash
aws secretsmanager create-secret \
--name paperclip/database-url \
--secret-string "$DATABASE_URL"
aws secretsmanager create-secret \
--name paperclip/anthropic-api-key \
--secret-string "YOUR_ANTHROPIC_KEY"
aws secretsmanager create-secret \
--name paperclip/better-auth-secret \
--secret-string "$AUTH_SECRET"
aws secretsmanager create-secret \
--name paperclip/openai-api-key \
--secret-string "YOUR_OPENAI_KEY"
aws secretsmanager create-secret \
--name paperclip/github-token \
--secret-string "YOUR_GITHUB_PAT"
```
## 7. IAM Roles
Create the ECS task execution role (pulls images, reads secrets) and the task role (application permissions).
```bash
# Task execution role
aws iam create-role \
--role-name paperclip-ecs-execution \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": "ecs-tasks.amazonaws.com"},
"Action": "sts:AssumeRole"
}]
}'
aws iam attach-role-policy \
--role-name paperclip-ecs-execution \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
# Allow reading secrets
aws iam put-role-policy \
--role-name paperclip-ecs-execution \
--policy-name SecretsAccess \
--policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": ["secretsmanager:GetSecretValue"],
"Resource": "arn:aws:secretsmanager:'$AWS_REGION':'$AWS_ACCOUNT_ID':secret:paperclip/*"
}]
}'
# Task role (application — add permissions as needed)
aws iam create-role \
--role-name paperclip-ecs-task \
--assume-role-policy-document '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {"Service": "ecs-tasks.amazonaws.com"},
"Action": "sts:AssumeRole"
}]
}'
```
## 8. ECS Cluster and Task Definition
```bash
aws ecs create-cluster --cluster-name paperclip
aws logs create-log-group --log-group-name /ecs/paperclip
```
Register the task definition using the template at `docker/ecs-task-definition.json`. Before registering, replace the placeholder values:
```bash
sed -e "s|<ACCOUNT_ID>|$AWS_ACCOUNT_ID|g" \
-e "s|<REGION>|$AWS_REGION|g" \
-e "s|<EFS_ID>|$EFS_ID|g" \
-e "s|<DOMAIN>|$PAPERCLIP_DOMAIN|g" \
docker/ecs-task-definition.json > /tmp/paperclip-task-def.json
aws ecs register-task-definition \
--cli-input-json file:///tmp/paperclip-task-def.json
```
## 9. ALB and TLS Certificate
Request a certificate (you must validate via DNS):
```bash
CERT_ARN=$(aws acm request-certificate \
--domain-name $PAPERCLIP_DOMAIN \
--validation-method DNS \
--query 'CertificateArn' --output text)
# Get the CNAME record to add to your DNS
aws acm describe-certificate \
--certificate-arn $CERT_ARN \
--query 'Certificate.DomainValidationOptions[0].ResourceRecord'
```
Add the CNAME to your DNS provider, then wait for validation:
```bash
aws acm wait certificate-validated --certificate-arn $CERT_ARN
```
Create the ALB:
```bash
ALB_ARN=$(aws elbv2 create-load-balancer \
--name paperclip-alb \
--subnets $SUBNET_1 $SUBNET_2 \
--security-groups $ALB_SG \
--scheme internet-facing \
--type application \
--query 'LoadBalancers[0].LoadBalancerArn' --output text)
ALB_DNS=$(aws elbv2 describe-load-balancers \
--load-balancer-arns $ALB_ARN \
--query 'LoadBalancers[0].DNSName' --output text)
# Target group
TG_ARN=$(aws elbv2 create-target-group \
--name paperclip-tg \
--protocol HTTP \
--port 3100 \
--vpc-id $VPC_ID \
--target-type ip \
--health-check-path /api/health \
--health-check-interval-seconds 30 \
--healthy-threshold-count 2 \
--unhealthy-threshold-count 3 \
--query 'TargetGroups[0].TargetGroupArn' --output text)
# HTTPS listener
LISTENER_ARN=$(aws elbv2 create-listener \
--load-balancer-arn $ALB_ARN \
--protocol HTTPS \
--port 443 \
--certificates CertificateArn=$CERT_ARN \
--default-actions Type=forward,TargetGroupArn=$TG_ARN \
--query 'Listeners[0].ListenerArn' --output text)
# HTTP listener — redirect all :80 traffic to :443
HTTP_LISTENER_ARN=$(aws elbv2 create-listener \
--load-balancer-arn $ALB_ARN \
--protocol HTTP \
--port 80 \
--default-actions Type=redirect,RedirectConfig='{Protocol=HTTPS,Port=443,StatusCode=HTTP_301}' \
--query 'Listeners[0].ListenerArn' --output text)
```
Point your DNS to the ALB:
- Create a CNAME or ALIAS record for `$PAPERCLIP_DOMAIN` -> `$ALB_DNS`
## 10. Create ECS Service
```bash
aws ecs create-service \
--cluster paperclip \
--service-name paperclip-server \
--task-definition paperclip-server \
--desired-count 1 \
--launch-type FARGATE \
--deployment-configuration '{
"deploymentCircuitBreaker": {"enable": true, "rollback": true},
"maximumPercent": 200,
"minimumHealthyPercent": 100
}' \
--network-configuration '{
"awsvpcConfiguration": {
"subnets": ["'$SUBNET_1'", "'$SUBNET_2'"],
"securityGroups": ["'$ECS_SG'"],
"assignPublicIp": "ENABLED"
}
}' \
--load-balancers '[{
"targetGroupArn": "'$TG_ARN'",
"containerName": "paperclip-server",
"containerPort": 3100
}]'
```
> **Note:** `assignPublicIp: ENABLED` is needed if using public subnets without a NAT Gateway. For private subnets, set to `DISABLED` and ensure a NAT Gateway is configured for outbound internet access.
## 11. Verify Deployment
```bash
# Watch task come up
aws ecs describe-services \
--cluster paperclip \
--services paperclip-server \
--query 'services[0].{desired:desiredCount,running:runningCount,status:status}'
# Check task health
aws ecs list-tasks --cluster paperclip --service-name paperclip-server
TASK_ARN=$(aws ecs list-tasks --cluster paperclip --service-name paperclip-server --query 'taskArns[0]' --output text)
aws ecs describe-tasks --cluster paperclip --tasks $TASK_ARN \
--query 'tasks[0].{status:lastStatus,health:healthStatus}'
# Check logs
aws logs tail /ecs/paperclip --since 10m --follow
# Hit the health endpoint
curl -sf https://$PAPERCLIP_DOMAIN/api/health
```
**Healthy indicators:**
- ECS task status: `RUNNING`, health: `HEALTHY`
- Logs show `plugin job coordinator started` and `plugin-loader: loadAll complete`
- `/api/health` returns 200
## Post-Deploy Security Hardening
After the first user has signed up (which grants admin role), lock down the instance:
```bash
# Disable public sign-up (prevents unauthorized users from creating accounts)
# Add to the task definition environment section, then redeploy:
# { "name": "PAPERCLIP_AUTH_DISABLE_SIGN_UP", "value": "true" }
# Or update via Secrets Manager / task def override, then force new deployment
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--force-new-deployment
```
Use the invite flow (added in v2026.416.0) to grant access to additional users after sign-up is disabled.
## Deploying Updates
Build, push, and force a new deployment:
```bash
# Build and push new image
docker build -t paperclip-server .
docker tag paperclip-server:latest \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
docker push \
$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/paperclip-server:latest
# Roll out
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--force-new-deployment
# Watch the deployment
aws ecs describe-services \
--cluster paperclip \
--services paperclip-server \
--query 'services[0].deployments[*].{status:status,running:runningCount,desired:desiredCount,rollout:rolloutState}'
```
ECS performs a rolling update: starts a new task, waits for it to pass health checks, then drains the old task.
## Rollback
If the new deployment is unhealthy:
```bash
# ECS automatically rolls back if the new task fails health checks
# (circuit breaker is enabled in the service configuration above).
# To force rollback manually:
# 1. Find the previous task definition revision
aws ecs list-task-definitions \
--family-prefix paperclip-server \
--sort DESC \
--query 'taskDefinitionArns[0:3]'
# 2. Update service to the previous revision
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--task-definition paperclip-server:<PREVIOUS_REVISION>
```
## Scaling to Zero (Cost Savings)
Scale down when not in use:
```bash
# Stop
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--desired-count 0
# Start
aws ecs update-service \
--cluster paperclip \
--service paperclip-server \
--desired-count 1
```
RDS can also be stopped (auto-restarts after 7 days):
```bash
aws rds stop-db-instance --db-instance-identifier paperclip-db
aws rds start-db-instance --db-instance-identifier paperclip-db
```
## Teardown
Remove all resources in reverse order:
```bash
# 1. ECS service and cluster
aws ecs update-service --cluster paperclip --service paperclip-server --desired-count 0
aws ecs delete-service --cluster paperclip --service paperclip-server --force
aws ecs delete-cluster --cluster paperclip
# 2. ALB and ACM cert
aws elbv2 delete-listener --listener-arn $HTTP_LISTENER_ARN
aws elbv2 delete-listener --listener-arn $LISTENER_ARN
aws elbv2 delete-target-group --target-group-arn $TG_ARN
aws elbv2 delete-load-balancer --load-balancer-arn $ALB_ARN
aws acm delete-certificate --certificate-arn $CERT_ARN
# 3. RDS (creates final snapshot)
aws rds delete-db-instance \
--db-instance-identifier paperclip-db \
--final-db-snapshot-identifier paperclip-db-final
aws rds wait db-instance-deleted --db-instance-identifier paperclip-db
aws rds delete-db-subnet-group --db-subnet-group-name paperclip-db-subnet
# 4. EFS (mount targets must be deleted first)
for MT in $(aws efs describe-mount-targets --file-system-id $EFS_ID --query 'MountTargets[*].MountTargetId' --output text); do
aws efs delete-mount-target --mount-target-id $MT
done
# Mount-target deletion is async; poll until none remain before deleting
# the filesystem, otherwise delete-file-system fails with FileSystemInUse.
echo "Waiting for mount targets to delete..."
while aws efs describe-mount-targets \
--file-system-id $EFS_ID \
--query 'MountTargets[0].MountTargetId' --output text 2>/dev/null | grep -q 'fsmt-'; do
sleep 5
done
aws efs delete-file-system --file-system-id $EFS_ID
# 5. Secrets
for s in database-url anthropic-api-key better-auth-secret openai-api-key github-token; do
aws secretsmanager delete-secret --secret-id paperclip/$s --force-delete-without-recovery
done
# 6. Security groups (after all dependents are gone)
for sg in $EFS_SG $RDS_SG $ECS_SG $ALB_SG; do
aws ec2 delete-security-group --group-id $sg
done
# 7. ECR
aws ecr delete-repository --repository-name paperclip-server --force
# 8. IAM roles
aws iam delete-role-policy --role-name paperclip-ecs-execution --policy-name SecretsAccess
aws iam detach-role-policy --role-name paperclip-ecs-execution \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
aws iam delete-role --role-name paperclip-ecs-execution
aws iam delete-role --role-name paperclip-ecs-task
# 9. Log group
aws logs delete-log-group --log-group-name /ecs/paperclip
```
## Cost Reference
| Service | Config | Monthly |
|---------|--------|---------|
| ECS Fargate | 2 vCPU, 4 GB, 24/7 | ~$70 |
| RDS Postgres | db.t4g.micro, 20 GB | ~$15 |
| ALB | 1 LCU average | ~$22 |
| NAT Gateway | 1 AZ (if using private subnets) | ~$35 |
| EFS | 1 GB Standard | ~$0.30 |
| Secrets Manager | 5 secrets | ~$2 |
| CloudWatch Logs | ~1 GB/mo | ~$0.50 |
| ECR | ~1 GB | ~$0.10 |
| **Total (public subnets, no NAT)** | | **~$110/mo** |
| **Total (private subnets + NAT)** | | **~$145/mo** |
Use Fargate Spot and scheduled scaling to 0 during off-hours to reduce to ~$60-85/mo.

View File

@@ -40,7 +40,7 @@ Paperclip supports three deployment configurations, from zero-friction local to
- **Just trying Paperclip?** Use `local_trusted` (the default)
- **Sharing with a team on private network?** Use `authenticated` + `private`
- **Deploying to the cloud?** Use `authenticated` + `public`
- **Deploying to the cloud?** Use `authenticated` + `public` — see [AWS ECS Fargate guide](aws-ecs.md)
Set the mode during onboarding:

View File

@@ -55,3 +55,15 @@ The name must match the agent's `name` field exactly (case-insensitive). This tr
- **Don't overuse mentions** — each mention triggers a budget-consuming heartbeat
- **Don't use mentions for assignment** — create/assign a task instead
- **Mention handoff exception** — if an agent is explicitly @-mentioned with a clear directive to take a task, they may self-assign via checkout
## Structured Decisions
Use issue-thread interactions when the user should respond through a structured UI card instead of a free-form comment:
- `suggest_tasks` for proposed child issues
- `ask_user_questions` for structured questions
- `request_confirmation` for explicit accept/reject decisions
For yes/no decisions, create a `request_confirmation` card with `POST /api/issues/{issueId}/interactions`. Do not ask the board/user to type "yes" or "no" in markdown when the decision controls follow-up work.
Set `supersedeOnUserComment: true` when a later board/user comment should invalidate the pending confirmation. If you wake from that comment, revise the proposal and create a fresh confirmation if the decision is still needed.
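A minimal sketch of creating such a card follows. The base URL, issue ID, auth headers, and the `breakdown:1` key/version inside the idempotency key are placeholders, not values from this repo.
```
# Sketch: post a request_confirmation card on the current issue.
# $PAPERCLIP_API_URL and $ISSUE_ID are placeholders; add whatever auth headers your deployment requires.
curl -X POST "$PAPERCLIP_API_URL/api/issues/$ISSUE_ID/interactions" \
  -H "Content-Type: application/json" \
  -d '{
    "kind": "request_confirmation",
    "idempotencyKey": "confirmation:'"$ISSUE_ID"':breakdown:1",
    "continuationPolicy": "wake_assignee",
    "payload": {
      "version": 1,
      "prompt": "Proceed with this issue breakdown?",
      "acceptLabel": "Accept",
      "rejectLabel": "Request changes",
      "supersedeOnUserComment": true
    }
  }'
```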

View File

@@ -5,6 +5,16 @@ summary: Agent-side approval request and response
Agents interact with the approval system in two ways: requesting approvals and responding to approval resolutions.
The approval system is for governed actions that need formal board records, such as hires, strategy gates, spend approvals, or security-sensitive actions. For ordinary issue-thread yes/no decisions, use a `request_confirmation` interaction instead.
Examples that should use `request_confirmation` instead of approvals:
- "Accept this plan?"
- "Proceed with this issue breakdown?"
- "Use option A or reject and request changes?"
Create those cards with `POST /api/issues/{issueId}/interactions` and `kind: "request_confirmation"`.
## Requesting a Hire
Managers and CEOs can request to hire new agents:
@@ -37,6 +47,16 @@ POST /api/companies/{companyId}/approvals
}
```
## Plan Approval Cards
For normal issue implementation plans, use the issue-thread confirmation surface:
1. Update the `plan` issue document.
2. Create `request_confirmation` bound to the latest `plan` revision.
3. Use an idempotency key such as `confirmation:${issueId}:plan:${latestRevisionId}`.
4. Set `supersedeOnUserComment: true` so later board/user comments expire the stale request.
5. Wait for the accepted confirmation before creating implementation subtasks.
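As a concrete sketch of steps 2-4, the request below binds the confirmation to the saved `plan` revision. The identifiers are placeholders for values you already fetched from the plan document, and how you authenticate depends on your deployment.
```
# Sketch: bind a plan-approval confirmation to the latest `plan` revision.
# $PAPERCLIP_API_URL, $ISSUE_ID, $DOC_ID, and $REV_ID are placeholders.
curl -X POST "$PAPERCLIP_API_URL/api/issues/$ISSUE_ID/interactions" \
  -H "Content-Type: application/json" \
  -d '{
    "kind": "request_confirmation",
    "idempotencyKey": "confirmation:'"$ISSUE_ID"':plan:'"$REV_ID"'",
    "continuationPolicy": "wake_assignee",
    "target": {
      "type": "issue_document",
      "issueId": "'"$ISSUE_ID"'",
      "documentId": "'"$DOC_ID"'",
      "key": "plan",
      "revisionId": "'"$REV_ID"'"
    },
    "payload": {
      "version": 1,
      "prompt": "Accept this plan?",
      "acceptLabel": "Accept",
      "rejectLabel": "Request changes",
      "supersedeOnUserComment": true
    }
  }'
```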
## Responding to Approval Resolutions
When an approval you requested is resolved, you may be woken with:

View File

@@ -70,6 +70,8 @@ Use your tools and capabilities to complete the task. If the issue is actionable
Leave durable progress in comments, documents, or work products, and include the next action before exiting. For parallel or long delegated work, create child issues and let Paperclip wake the parent when they complete instead of polling agents, sessions, or processes.
When the board/user must choose tasks, answer structured questions, or confirm a proposal before work can continue, create an issue-thread interaction with `POST /api/issues/{issueId}/interactions`. Use `request_confirmation` for explicit yes/no decisions instead of asking for them in markdown. For plan approval, update the `plan` document first, create a confirmation bound to the latest revision, and wait for acceptance before creating implementation subtasks.
### Step 8: Update Status
Always include the run ID header on state changes:
@@ -107,6 +109,7 @@ Always set `parentId` and `goalId` on subtasks.
- **Start actionable work** in the same heartbeat; planning-only exits are for planning tasks
- **Leave a clear next action** in durable issue context
- **Use child issues instead of polling** for long or parallel delegated work
- **Use `request_confirmation`** for issue-scoped yes/no decisions and plan approval cards
- **Always set parentId** on subtasks
- **Never cancel cross-team tasks** — reassign to your manager
- **Escalate when stuck** — use your chain of command

View File

@@ -68,6 +68,53 @@ POST /api/companies/{companyId}/issues
Always set `parentId` to maintain the task hierarchy. Set `goalId` when applicable.
## Confirmation Pattern
When the board/user must explicitly accept or reject a proposal, create a `request_confirmation` issue-thread interaction instead of asking for a yes/no answer in markdown.
```
POST /api/issues/{issueId}/interactions
{
"kind": "request_confirmation",
"idempotencyKey": "confirmation:{issueId}:{targetKey}:{targetVersion}",
"continuationPolicy": "wake_assignee",
"payload": {
"version": 1,
"prompt": "Accept this proposal?",
"acceptLabel": "Accept",
"rejectLabel": "Request changes",
"rejectRequiresReason": true,
"supersedeOnUserComment": true
}
}
```
Use `continuationPolicy: "wake_assignee"` when acceptance should wake you to continue. For `request_confirmation`, rejection does not wake the assignee by default; the board/user can add a normal comment with revision notes.
## Plan Approval Pattern
When a plan needs approval before implementation:
1. Create or update the issue document with key `plan`.
2. Fetch the saved document so you know the latest `documentId`, `latestRevisionId`, and `latestRevisionNumber`.
3. Create a `request_confirmation` targeting that exact `plan` revision.
4. Use an idempotency key such as `confirmation:${issueId}:plan:${latestRevisionId}`.
5. Wait for acceptance before creating implementation subtasks.
6. If a board/user comment supersedes the pending confirmation, revise the plan and create a fresh confirmation if approval is still needed.
Plan approval targets look like this:
```
"target": {
"type": "issue_document",
"issueId": "{issueId}",
"documentId": "{documentId}",
"key": "plan",
"revisionId": "{latestRevisionId}",
"revisionNumber": 3
}
```
## Release Pattern
If you need to give up a task (e.g. you realize it should go to someone else):

View File

@@ -47,7 +47,7 @@ You do **not** need to tell the CEO to engage specific agents. After you approve
- **Breaks goals into concrete tasks** with clear descriptions, priorities, and acceptance criteria
- **Assigns tasks to the right agent** based on role and capabilities (e.g., engineering tasks go to the CTO or engineers, marketing tasks go to the CMO)
- **Creates subtasks** when work needs to be decomposed further
- **Hires new agents** when the team lacks capacity for a goal (subject to your approval)
- **Hires new agents** when the team lacks capacity for a goal, with hire approvals available when enabled in company settings
- **Monitors progress** on each heartbeat, checking task status and unblocking reports
- **Escalates to you** when it encounters something it can't resolve — budget issues, blocked approvals, or strategic ambiguity

Binary file not shown.


Binary file not shown.


View File

@@ -57,9 +57,9 @@ The CEO is the primary delegator. When you set company goals, the CEO:
1. Creates a strategy and submits it for your approval
2. Breaks approved goals into tasks
3. Assigns tasks to agents based on their role and capabilities
4. Hires new agents when needed (subject to your approval)
4. Hires new agents when needed, with hire approvals available when you enable them
You don't need to manually assign every task — set the goals and let the CEO organize the work. You approve key decisions (strategy, hiring) and monitor progress. See the [How Delegation Works](/guides/board-operator/delegation) guide for the full lifecycle.
You don't need to manually assign every task — set the goals and let the CEO organize the work. You approve key decisions such as strategy, optionally enable hire approvals when you want a gate, and monitor progress. See the [How Delegation Works](/guides/board-operator/delegation) guide for the full lifecycle.
## Heartbeats

View File

@@ -17,7 +17,7 @@
"typecheck": "pnpm run preflight:workspace-links && pnpm -r typecheck",
"test": "pnpm run test:run",
"test:watch": "pnpm run preflight:workspace-links && vitest",
"test:run": "pnpm run preflight:workspace-links && vitest run",
"test:run": "pnpm run preflight:workspace-links && node scripts/run-vitest-stable.mjs",
"db:generate": "pnpm --filter @paperclipai/db generate",
"db:migrate": "pnpm --filter @paperclipai/db migrate",
"issue-references:backfill": "pnpm run preflight:workspace-links && tsx scripts/backfill-issue-reference-mentions.ts",

View File

@@ -0,0 +1,152 @@
import path from "node:path";
import {
prepareSandboxManagedRuntime,
type PreparedSandboxManagedRuntime,
type SandboxManagedRuntimeAsset,
type SandboxManagedRuntimeClient,
type SandboxRemoteExecutionSpec,
} from "./sandbox-managed-runtime.js";
import type { RunProcessResult } from "./server-utils.js";
export interface CommandManagedRuntimeRunner {
execute(input: {
command: string;
args?: string[];
cwd?: string;
env?: Record<string, string>;
stdin?: string;
timeoutMs?: number;
onLog?: (stream: "stdout" | "stderr", chunk: string) => Promise<void>;
onSpawn?: (meta: { pid: number; startedAt: string }) => Promise<void>;
}): Promise<RunProcessResult>;
}
export interface CommandManagedRuntimeSpec {
providerKey?: string | null;
leaseId?: string | null;
remoteCwd: string;
timeoutMs?: number | null;
paperclipApiUrl?: string | null;
}
export type CommandManagedRuntimeAsset = SandboxManagedRuntimeAsset;
function shellQuote(value: string) {
return `'${value.replace(/'/g, `'"'"'`)}'`;
}
function toBuffer(bytes: Buffer | Uint8Array | ArrayBuffer): Buffer {
if (Buffer.isBuffer(bytes)) return bytes;
if (bytes instanceof ArrayBuffer) return Buffer.from(bytes);
return Buffer.from(bytes.buffer, bytes.byteOffset, bytes.byteLength);
}
function requireSuccessfulResult(result: RunProcessResult, action: string): void {
if (result.exitCode === 0 && !result.timedOut) return;
const stderr = result.stderr.trim();
const detail = stderr.length > 0 ? `: ${stderr}` : "";
throw new Error(`${action} failed with exit code ${result.exitCode ?? "null"}${detail}`);
}
function createCommandManagedRuntimeClient(input: {
runner: CommandManagedRuntimeRunner;
remoteCwd: string;
timeoutMs: number;
}): SandboxManagedRuntimeClient {
const runShell = async (script: string, opts: { stdin?: string; timeoutMs?: number } = {}) => {
const result = await input.runner.execute({
command: "sh",
args: ["-lc", script],
cwd: input.remoteCwd,
stdin: opts.stdin,
timeoutMs: opts.timeoutMs ?? input.timeoutMs,
});
requireSuccessfulResult(result, script);
return result;
};
return {
makeDir: async (remotePath) => {
await runShell(`mkdir -p ${shellQuote(remotePath)}`);
},
writeFile: async (remotePath, bytes) => {
const body = toBuffer(bytes).toString("base64");
await runShell(
`mkdir -p ${shellQuote(path.posix.dirname(remotePath))} && base64 -d > ${shellQuote(remotePath)}`,
{ stdin: body },
);
},
readFile: async (remotePath) => {
const result = await runShell(`base64 < ${shellQuote(remotePath)}`);
return Buffer.from(result.stdout.replace(/\s+/g, ""), "base64");
},
remove: async (remotePath) => {
const result = await input.runner.execute({
command: "sh",
args: ["-lc", `rm -rf ${shellQuote(remotePath)}`],
cwd: input.remoteCwd,
timeoutMs: input.timeoutMs,
});
requireSuccessfulResult(result, `remove ${remotePath}`);
},
run: async (command, options) => {
const result = await input.runner.execute({
command: "sh",
args: ["-lc", command],
cwd: input.remoteCwd,
timeoutMs: options.timeoutMs,
});
requireSuccessfulResult(result, command);
},
};
}
export async function prepareCommandManagedRuntime(input: {
runner: CommandManagedRuntimeRunner;
spec: CommandManagedRuntimeSpec;
adapterKey: string;
workspaceLocalDir: string;
workspaceRemoteDir?: string;
workspaceExclude?: string[];
preserveAbsentOnRestore?: string[];
assets?: CommandManagedRuntimeAsset[];
installCommand?: string | null;
}): Promise<PreparedSandboxManagedRuntime> {
const timeoutMs = input.spec.timeoutMs && input.spec.timeoutMs > 0 ? input.spec.timeoutMs : 300_000;
const workspaceRemoteDir = input.workspaceRemoteDir ?? input.spec.remoteCwd;
const runtimeSpec: SandboxRemoteExecutionSpec = {
transport: "sandbox",
provider: input.spec.providerKey ?? "sandbox",
sandboxId: input.spec.leaseId ?? "managed",
remoteCwd: workspaceRemoteDir,
timeoutMs,
apiKey: null,
paperclipApiUrl: input.spec.paperclipApiUrl ?? null,
};
const client = createCommandManagedRuntimeClient({
runner: input.runner,
remoteCwd: workspaceRemoteDir,
timeoutMs,
});
if (input.installCommand?.trim()) {
const result = await input.runner.execute({
command: "sh",
args: ["-lc", input.installCommand.trim()],
cwd: workspaceRemoteDir,
timeoutMs,
});
requireSuccessfulResult(result, input.installCommand.trim());
}
return await prepareSandboxManagedRuntime({
spec: runtimeSpec,
client,
adapterKey: input.adapterKey,
workspaceLocalDir: input.workspaceLocalDir,
workspaceRemoteDir,
workspaceExclude: input.workspaceExclude,
preserveAbsentOnRestore: input.preserveAbsentOnRestore,
assets: input.assets,
});
}

View File

@@ -0,0 +1,96 @@
import { describe, expect, it, vi } from "vitest";
import {
adapterExecutionTargetSessionIdentity,
adapterExecutionTargetToRemoteSpec,
runAdapterExecutionTargetProcess,
runAdapterExecutionTargetShellCommand,
type AdapterSandboxExecutionTarget,
} from "./execution-target.js";
describe("sandbox adapter execution targets", () => {
it("executes through the provider-neutral runner without a remote spec", async () => {
const runner = {
execute: vi.fn(async () => ({
exitCode: 0,
signal: null,
timedOut: false,
stdout: "ok\n",
stderr: "",
pid: null,
startedAt: new Date().toISOString(),
})),
};
const target: AdapterSandboxExecutionTarget = {
kind: "remote",
transport: "sandbox",
providerKey: "acme-sandbox",
environmentId: "env-1",
leaseId: "lease-1",
remoteCwd: "/workspace",
timeoutMs: 30_000,
runner,
};
expect(adapterExecutionTargetToRemoteSpec(target)).toBeNull();
const result = await runAdapterExecutionTargetProcess("run-1", target, "agent-cli", ["--json"], {
cwd: "/local/workspace",
env: { TOKEN: "token" },
stdin: "prompt",
timeoutSec: 5,
graceSec: 1,
onLog: async () => {},
});
expect(result.stdout).toBe("ok\n");
expect(runner.execute).toHaveBeenCalledWith(expect.objectContaining({
command: "agent-cli",
args: ["--json"],
cwd: "/workspace",
env: { TOKEN: "token" },
stdin: "prompt",
timeoutMs: 5000,
}));
expect(adapterExecutionTargetSessionIdentity(target)).toEqual({
transport: "sandbox",
providerKey: "acme-sandbox",
environmentId: "env-1",
leaseId: "lease-1",
remoteCwd: "/workspace",
});
});
it("runs shell commands through the same runner", async () => {
const runner = {
execute: vi.fn(async () => ({
exitCode: 0,
signal: null,
timedOut: false,
stdout: "/home/sandbox",
stderr: "",
pid: null,
startedAt: new Date().toISOString(),
})),
};
const target: AdapterSandboxExecutionTarget = {
kind: "remote",
transport: "sandbox",
remoteCwd: "/workspace",
runner,
};
await runAdapterExecutionTargetShellCommand("run-2", target, 'printf %s "$HOME"', {
cwd: "/local/workspace",
env: {},
timeoutSec: 7,
});
expect(runner.execute).toHaveBeenCalledWith(expect.objectContaining({
command: "sh",
args: ["-lc", 'printf %s "$HOME"'],
cwd: "/workspace",
timeoutMs: 7000,
}));
});
});

View File

@@ -0,0 +1,161 @@
import { afterEach, describe, expect, it, vi } from "vitest";
import * as ssh from "./ssh.js";
import {
adapterExecutionTargetUsesManagedHome,
runAdapterExecutionTargetShellCommand,
} from "./execution-target.js";
describe("runAdapterExecutionTargetShellCommand", () => {
afterEach(() => {
vi.restoreAllMocks();
});
it("quotes remote shell commands with the shared SSH quoting helper", async () => {
const runSshCommandSpy = vi.spyOn(ssh, "runSshCommand").mockResolvedValue({
stdout: "",
stderr: "",
});
await runAdapterExecutionTargetShellCommand(
"run-1",
{
kind: "remote",
transport: "ssh",
remoteCwd: "/srv/paperclip/workspace",
spec: {
host: "ssh.example.test",
port: 22,
username: "ssh-user",
remoteCwd: "/srv/paperclip/workspace",
remoteWorkspacePath: "/srv/paperclip/workspace",
privateKey: null,
knownHosts: null,
strictHostKeyChecking: true,
},
},
`printf '%s\\n' "$HOME" && echo "it's ok"`,
{
cwd: "/tmp/local",
env: {},
},
);
expect(runSshCommandSpy).toHaveBeenCalledWith(
expect.objectContaining({
host: "ssh.example.test",
username: "ssh-user",
}),
`sh -lc ${ssh.shellQuote(`printf '%s\\n' "$HOME" && echo "it's ok"`)}`,
expect.any(Object),
);
});
it("returns a timedOut result when the SSH shell command times out", async () => {
vi.spyOn(ssh, "runSshCommand").mockRejectedValue(Object.assign(new Error("timed out"), {
code: "ETIMEDOUT",
stdout: "partial stdout",
stderr: "partial stderr",
signal: "SIGTERM",
}));
const onLog = vi.fn(async () => {});
const result = await runAdapterExecutionTargetShellCommand(
"run-2",
{
kind: "remote",
transport: "ssh",
remoteCwd: "/srv/paperclip/workspace",
spec: {
host: "ssh.example.test",
port: 22,
username: "ssh-user",
remoteCwd: "/srv/paperclip/workspace",
remoteWorkspacePath: "/srv/paperclip/workspace",
privateKey: null,
knownHosts: null,
strictHostKeyChecking: true,
},
},
"sleep 10",
{
cwd: "/tmp/local",
env: {},
onLog,
},
);
expect(result).toMatchObject({
exitCode: null,
signal: "SIGTERM",
timedOut: true,
stdout: "partial stdout",
stderr: "partial stderr",
});
expect(onLog).toHaveBeenCalledWith("stdout", "partial stdout");
expect(onLog).toHaveBeenCalledWith("stderr", "partial stderr");
});
it("returns the SSH process exit code for non-zero remote command failures", async () => {
vi.spyOn(ssh, "runSshCommand").mockRejectedValue(Object.assign(new Error("non-zero exit"), {
code: 17,
stdout: "partial stdout",
stderr: "partial stderr",
signal: null,
}));
const onLog = vi.fn(async () => {});
const result = await runAdapterExecutionTargetShellCommand(
"run-3",
{
kind: "remote",
transport: "ssh",
remoteCwd: "/srv/paperclip/workspace",
spec: {
host: "ssh.example.test",
port: 22,
username: "ssh-user",
remoteCwd: "/srv/paperclip/workspace",
remoteWorkspacePath: "/srv/paperclip/workspace",
privateKey: null,
knownHosts: null,
strictHostKeyChecking: true,
},
},
"false",
{
cwd: "/tmp/local",
env: {},
onLog,
},
);
expect(result).toMatchObject({
exitCode: 17,
signal: null,
timedOut: false,
stdout: "partial stdout",
stderr: "partial stderr",
});
expect(onLog).toHaveBeenCalledWith("stdout", "partial stdout");
expect(onLog).toHaveBeenCalledWith("stderr", "partial stderr");
});
it("keeps managed homes disabled for both local and SSH targets", () => {
expect(adapterExecutionTargetUsesManagedHome(null)).toBe(false);
expect(adapterExecutionTargetUsesManagedHome({
kind: "remote",
transport: "ssh",
remoteCwd: "/srv/paperclip/workspace",
spec: {
host: "ssh.example.test",
port: 22,
username: "ssh-user",
remoteCwd: "/srv/paperclip/workspace",
remoteWorkspacePath: "/srv/paperclip/workspace",
privateKey: null,
knownHosts: null,
strictHostKeyChecking: true,
},
})).toBe(false);
});
});

View File

@@ -0,0 +1,516 @@
import path from "node:path";
import type { SshRemoteExecutionSpec } from "./ssh.js";
import {
prepareCommandManagedRuntime,
type CommandManagedRuntimeRunner,
} from "./command-managed-runtime.js";
import {
buildRemoteExecutionSessionIdentity,
prepareRemoteManagedRuntime,
remoteExecutionSessionMatches,
type RemoteManagedRuntimeAsset,
} from "./remote-managed-runtime.js";
import { parseSshRemoteExecutionSpec, runSshCommand, shellQuote } from "./ssh.js";
import {
ensureCommandResolvable,
resolveCommandForLogs,
runChildProcess,
type RunProcessResult,
type TerminalResultCleanupOptions,
} from "./server-utils.js";
export interface AdapterLocalExecutionTarget {
kind: "local";
environmentId?: string | null;
leaseId?: string | null;
}
export interface AdapterSshExecutionTarget {
kind: "remote";
transport: "ssh";
environmentId?: string | null;
leaseId?: string | null;
remoteCwd: string;
paperclipApiUrl?: string | null;
spec: SshRemoteExecutionSpec;
}
export interface AdapterSandboxExecutionTarget {
kind: "remote";
transport: "sandbox";
providerKey?: string | null;
environmentId?: string | null;
leaseId?: string | null;
remoteCwd: string;
paperclipApiUrl?: string | null;
timeoutMs?: number | null;
runner?: CommandManagedRuntimeRunner;
}
export type AdapterExecutionTarget =
| AdapterLocalExecutionTarget
| AdapterSshExecutionTarget
| AdapterSandboxExecutionTarget;
export type AdapterRemoteExecutionSpec = SshRemoteExecutionSpec;
export type AdapterManagedRuntimeAsset = RemoteManagedRuntimeAsset;
export interface PreparedAdapterExecutionTargetRuntime {
target: AdapterExecutionTarget;
runtimeRootDir: string | null;
assetDirs: Record<string, string>;
restoreWorkspace(): Promise<void>;
}
export interface AdapterExecutionTargetProcessOptions {
cwd: string;
env: Record<string, string>;
stdin?: string;
timeoutSec: number;
graceSec: number;
onLog: (stream: "stdout" | "stderr", chunk: string) => Promise<void>;
onSpawn?: (meta: { pid: number; processGroupId: number | null; startedAt: string }) => Promise<void>;
terminalResultCleanup?: TerminalResultCleanupOptions;
}
export interface AdapterExecutionTargetShellOptions {
cwd: string;
env: Record<string, string>;
timeoutSec?: number;
graceSec?: number;
onLog?: (stream: "stdout" | "stderr", chunk: string) => Promise<void>;
}
function parseObject(value: unknown): Record<string, unknown> {
return value && typeof value === "object" && !Array.isArray(value)
? (value as Record<string, unknown>)
: {};
}
function readString(value: unknown): string | null {
return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}
function readStringMeta(parsed: Record<string, unknown>, key: string): string | null {
return readString(parsed[key]);
}
function isAdapterExecutionTargetInstance(value: unknown): value is AdapterExecutionTarget {
const parsed = parseObject(value);
if (parsed.kind === "local") return true;
if (parsed.kind !== "remote") return false;
if (parsed.transport === "ssh") return parseSshRemoteExecutionSpec(parseObject(parsed.spec)) !== null;
if (parsed.transport !== "sandbox") return false;
return readStringMeta(parsed, "remoteCwd") !== null;
}
export function adapterExecutionTargetToRemoteSpec(
target: AdapterExecutionTarget | null | undefined,
): AdapterRemoteExecutionSpec | null {
return target?.kind === "remote" && target.transport === "ssh" ? target.spec : null;
}
export function adapterExecutionTargetIsRemote(
target: AdapterExecutionTarget | null | undefined,
): boolean {
return target?.kind === "remote";
}
export function adapterExecutionTargetUsesManagedHome(
target: AdapterExecutionTarget | null | undefined,
): boolean {
return target?.kind === "remote" && target.transport === "sandbox";
}
export function adapterExecutionTargetRemoteCwd(
target: AdapterExecutionTarget | null | undefined,
localCwd: string,
): string {
return target?.kind === "remote" ? target.remoteCwd : localCwd;
}
export function adapterExecutionTargetPaperclipApiUrl(
target: AdapterExecutionTarget | null | undefined,
): string | null {
if (target?.kind !== "remote") return null;
if (target.transport === "ssh") return target.paperclipApiUrl ?? target.spec.paperclipApiUrl ?? null;
return target.paperclipApiUrl ?? null;
}
export function describeAdapterExecutionTarget(
target: AdapterExecutionTarget | null | undefined,
): string {
if (!target || target.kind === "local") return "local environment";
if (target.transport === "ssh") {
return `SSH environment ${target.spec.username}@${target.spec.host}:${target.spec.port}`;
}
return `sandbox environment${target.providerKey ? ` (${target.providerKey})` : ""}`;
}
function requireSandboxRunner(target: AdapterSandboxExecutionTarget): CommandManagedRuntimeRunner {
if (target.runner) return target.runner;
throw new Error(
"Sandbox execution target is missing its provider runtime runner. Sandbox commands must execute through the environment runtime.",
);
}
export async function ensureAdapterExecutionTargetCommandResolvable(
command: string,
target: AdapterExecutionTarget | null | undefined,
cwd: string,
env: NodeJS.ProcessEnv,
) {
if (target?.kind === "remote" && target.transport === "sandbox") {
return;
}
await ensureCommandResolvable(command, cwd, env, {
remoteExecution: adapterExecutionTargetToRemoteSpec(target),
});
}
export async function resolveAdapterExecutionTargetCommandForLogs(
command: string,
target: AdapterExecutionTarget | null | undefined,
cwd: string,
env: NodeJS.ProcessEnv,
): Promise<string> {
if (target?.kind === "remote" && target.transport === "sandbox") {
return `sandbox://${target.providerKey ?? "provider"}/${target.leaseId ?? "lease"}/${target.remoteCwd} :: ${command}`;
}
return await resolveCommandForLogs(command, cwd, env, {
remoteExecution: adapterExecutionTargetToRemoteSpec(target),
});
}
export async function runAdapterExecutionTargetProcess(
runId: string,
target: AdapterExecutionTarget | null | undefined,
command: string,
args: string[],
options: AdapterExecutionTargetProcessOptions,
): Promise<RunProcessResult> {
if (target?.kind === "remote" && target.transport === "sandbox") {
const runner = requireSandboxRunner(target);
return await runner.execute({
command,
args,
cwd: target.remoteCwd,
env: options.env,
stdin: options.stdin,
timeoutMs: options.timeoutSec > 0 ? options.timeoutSec * 1000 : target.timeoutMs ?? undefined,
onLog: options.onLog,
onSpawn: options.onSpawn
? async (meta) => options.onSpawn?.({ ...meta, processGroupId: null })
: undefined,
});
}
return await runChildProcess(runId, command, args, {
cwd: options.cwd,
env: options.env,
stdin: options.stdin,
timeoutSec: options.timeoutSec,
graceSec: options.graceSec,
onLog: options.onLog,
onSpawn: options.onSpawn,
terminalResultCleanup: options.terminalResultCleanup,
remoteExecution: adapterExecutionTargetToRemoteSpec(target),
});
}
export async function runAdapterExecutionTargetShellCommand(
runId: string,
target: AdapterExecutionTarget | null | undefined,
command: string,
options: AdapterExecutionTargetShellOptions,
): Promise<RunProcessResult> {
const onLog = options.onLog ?? (async () => {});
if (target?.kind === "remote") {
const startedAt = new Date().toISOString();
if (target.transport === "ssh") {
try {
const result = await runSshCommand(target.spec, `sh -lc ${shellQuote(command)}`, {
timeoutMs: (options.timeoutSec ?? 15) * 1000,
});
if (result.stdout) await onLog("stdout", result.stdout);
if (result.stderr) await onLog("stderr", result.stderr);
return {
exitCode: 0,
signal: null,
timedOut: false,
stdout: result.stdout,
stderr: result.stderr,
pid: null,
startedAt,
};
} catch (error) {
const timedOutError = error as NodeJS.ErrnoException & {
stdout?: string;
stderr?: string;
signal?: string | null;
};
const stdout = timedOutError.stdout ?? "";
const stderr = timedOutError.stderr ?? "";
if (typeof timedOutError.code === "number") {
if (stdout) await onLog("stdout", stdout);
if (stderr) await onLog("stderr", stderr);
return {
exitCode: timedOutError.code,
signal: timedOutError.signal ?? null,
timedOut: false,
stdout,
stderr,
pid: null,
startedAt,
};
}
if (timedOutError.code !== "ETIMEDOUT") {
throw error;
}
if (stdout) await onLog("stdout", stdout);
if (stderr) await onLog("stderr", stderr);
return {
exitCode: null,
signal: timedOutError.signal ?? null,
timedOut: true,
stdout,
stderr,
pid: null,
startedAt,
};
}
}
return await requireSandboxRunner(target).execute({
command: "sh",
args: ["-lc", command],
cwd: target.remoteCwd,
env: options.env,
timeoutMs: (options.timeoutSec ?? 15) * 1000,
onLog,
});
}
return await runAdapterExecutionTargetProcess(
runId,
target,
"sh",
["-lc", command],
{
cwd: options.cwd,
env: options.env,
timeoutSec: options.timeoutSec ?? 15,
graceSec: options.graceSec ?? 5,
onLog,
},
);
}
export async function readAdapterExecutionTargetHomeDir(
runId: string,
target: AdapterExecutionTarget | null | undefined,
options: AdapterExecutionTargetShellOptions,
): Promise<string | null> {
const result = await runAdapterExecutionTargetShellCommand(
runId,
target,
'printf %s "$HOME"',
options,
);
const homeDir = result.stdout.trim();
return homeDir.length > 0 ? homeDir : null;
}
export async function ensureAdapterExecutionTargetFile(
runId: string,
target: AdapterExecutionTarget | null | undefined,
filePath: string,
options: AdapterExecutionTargetShellOptions,
): Promise<void> {
await runAdapterExecutionTargetShellCommand(
runId,
target,
`mkdir -p ${shellQuote(path.posix.dirname(filePath))} && : > ${shellQuote(filePath)}`,
options,
);
}
export function adapterExecutionTargetSessionIdentity(
target: AdapterExecutionTarget | null | undefined,
): Record<string, unknown> | null {
if (!target || target.kind === "local") return null;
if (target.transport === "ssh") return buildRemoteExecutionSessionIdentity(target.spec);
return {
transport: "sandbox",
providerKey: target.providerKey ?? null,
environmentId: target.environmentId ?? null,
leaseId: target.leaseId ?? null,
remoteCwd: target.remoteCwd,
...(target.paperclipApiUrl ? { paperclipApiUrl: target.paperclipApiUrl } : {}),
};
}
export function adapterExecutionTargetSessionMatches(
saved: unknown,
target: AdapterExecutionTarget | null | undefined,
): boolean {
if (!target || target.kind === "local") {
return Object.keys(parseObject(saved)).length === 0;
}
if (target.transport === "ssh") return remoteExecutionSessionMatches(saved, target.spec);
const current = adapterExecutionTargetSessionIdentity(target);
const parsedSaved = parseObject(saved);
return (
readStringMeta(parsedSaved, "transport") === current?.transport &&
readStringMeta(parsedSaved, "providerKey") === current?.providerKey &&
readStringMeta(parsedSaved, "environmentId") === current?.environmentId &&
readStringMeta(parsedSaved, "leaseId") === current?.leaseId &&
readStringMeta(parsedSaved, "remoteCwd") === current?.remoteCwd &&
readStringMeta(parsedSaved, "paperclipApiUrl") === (current?.paperclipApiUrl ?? null)
);
}
export function parseAdapterExecutionTarget(value: unknown): AdapterExecutionTarget | null {
const parsed = parseObject(value);
const kind = readStringMeta(parsed, "kind");
if (kind === "local") {
return {
kind: "local",
environmentId: readStringMeta(parsed, "environmentId"),
leaseId: readStringMeta(parsed, "leaseId"),
};
}
if (kind === "remote" && readStringMeta(parsed, "transport") === "ssh") {
const spec = parseSshRemoteExecutionSpec(parseObject(parsed.spec));
if (!spec) return null;
return {
kind: "remote",
transport: "ssh",
environmentId: readStringMeta(parsed, "environmentId"),
leaseId: readStringMeta(parsed, "leaseId"),
remoteCwd: spec.remoteCwd,
paperclipApiUrl: readStringMeta(parsed, "paperclipApiUrl") ?? spec.paperclipApiUrl ?? null,
spec,
};
}
if (kind === "remote" && readStringMeta(parsed, "transport") === "sandbox") {
const remoteCwd = readStringMeta(parsed, "remoteCwd");
if (!remoteCwd) return null;
return {
kind: "remote",
transport: "sandbox",
providerKey: readStringMeta(parsed, "providerKey"),
environmentId: readStringMeta(parsed, "environmentId"),
leaseId: readStringMeta(parsed, "leaseId"),
remoteCwd,
paperclipApiUrl: readStringMeta(parsed, "paperclipApiUrl"),
timeoutMs: typeof parsed.timeoutMs === "number" ? parsed.timeoutMs : null,
};
}
return null;
}
export function adapterExecutionTargetFromRemoteExecution(
remoteExecution: unknown,
metadata: Pick<AdapterLocalExecutionTarget, "environmentId" | "leaseId"> = {},
): AdapterExecutionTarget | null {
const parsed = parseObject(remoteExecution);
const ssh = parseSshRemoteExecutionSpec(parsed);
if (ssh) {
return {
kind: "remote",
transport: "ssh",
environmentId: metadata.environmentId ?? null,
leaseId: metadata.leaseId ?? null,
remoteCwd: ssh.remoteCwd,
paperclipApiUrl: ssh.paperclipApiUrl ?? null,
spec: ssh,
};
}
return null;
}
export function readAdapterExecutionTarget(input: {
executionTarget?: unknown;
legacyRemoteExecution?: unknown;
}): AdapterExecutionTarget | null {
if (isAdapterExecutionTargetInstance(input.executionTarget)) {
return input.executionTarget;
}
return (
parseAdapterExecutionTarget(input.executionTarget) ??
adapterExecutionTargetFromRemoteExecution(input.legacyRemoteExecution)
);
}
export async function prepareAdapterExecutionTargetRuntime(input: {
target: AdapterExecutionTarget | null | undefined;
adapterKey: string;
workspaceLocalDir: string;
workspaceExclude?: string[];
preserveAbsentOnRestore?: string[];
assets?: AdapterManagedRuntimeAsset[];
installCommand?: string | null;
}): Promise<PreparedAdapterExecutionTargetRuntime> {
const target = input.target ?? { kind: "local" as const };
if (target.kind === "local") {
return {
target,
runtimeRootDir: null,
assetDirs: {},
restoreWorkspace: async () => {},
};
}
if (target.transport === "ssh") {
const prepared = await prepareRemoteManagedRuntime({
spec: target.spec,
adapterKey: input.adapterKey,
workspaceLocalDir: input.workspaceLocalDir,
assets: input.assets,
});
return {
target,
runtimeRootDir: prepared.runtimeRootDir,
assetDirs: prepared.assetDirs,
restoreWorkspace: prepared.restoreWorkspace,
};
}
const prepared = await prepareCommandManagedRuntime({
runner: requireSandboxRunner(target),
spec: {
providerKey: target.providerKey,
leaseId: target.leaseId,
remoteCwd: target.remoteCwd,
timeoutMs: target.timeoutMs,
paperclipApiUrl: target.paperclipApiUrl,
},
adapterKey: input.adapterKey,
workspaceLocalDir: input.workspaceLocalDir,
workspaceExclude: input.workspaceExclude,
preserveAbsentOnRestore: input.preserveAbsentOnRestore,
assets: input.assets,
installCommand: input.installCommand,
});
return {
target,
runtimeRootDir: prepared.runtimeRootDir,
assetDirs: prepared.assetDirs,
restoreWorkspace: prepared.restoreWorkspace,
};
}
export function runtimeAssetDir(
prepared: Pick<PreparedAdapterExecutionTargetRuntime, "assetDirs">,
key: string,
fallbackRemoteCwd: string,
): string {
return prepared.assetDirs[key] ?? path.posix.join(fallbackRemoteCwd, ".paperclip-runtime", key);
}

View File

@@ -0,0 +1,118 @@
import path from "node:path";
import {
type SshRemoteExecutionSpec,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
syncDirectoryToSsh,
} from "./ssh.js";
export interface RemoteManagedRuntimeAsset {
key: string;
localDir: string;
followSymlinks?: boolean;
exclude?: string[];
}
export interface PreparedRemoteManagedRuntime {
spec: SshRemoteExecutionSpec;
workspaceLocalDir: string;
workspaceRemoteDir: string;
runtimeRootDir: string;
assetDirs: Record<string, string>;
restoreWorkspace(): Promise<void>;
}
function asObject(value: unknown): Record<string, unknown> {
return value && typeof value === "object" && !Array.isArray(value)
? (value as Record<string, unknown>)
: {};
}
function asString(value: unknown): string {
return typeof value === "string" ? value : "";
}
function asNumber(value: unknown): number {
return typeof value === "number" ? value : Number(value);
}
export function buildRemoteExecutionSessionIdentity(spec: SshRemoteExecutionSpec | null) {
if (!spec) return null;
return {
transport: "ssh",
host: spec.host,
port: spec.port,
username: spec.username,
remoteCwd: spec.remoteCwd,
...(spec.paperclipApiUrl ? { paperclipApiUrl: spec.paperclipApiUrl } : {}),
} as const;
}
export function remoteExecutionSessionMatches(saved: unknown, current: SshRemoteExecutionSpec | null): boolean {
const currentIdentity = buildRemoteExecutionSessionIdentity(current);
if (!currentIdentity) return false;
const parsedSaved = asObject(saved);
return (
asString(parsedSaved.transport) === currentIdentity.transport &&
asString(parsedSaved.host) === currentIdentity.host &&
asNumber(parsedSaved.port) === currentIdentity.port &&
asString(parsedSaved.username) === currentIdentity.username &&
asString(parsedSaved.remoteCwd) === currentIdentity.remoteCwd &&
asString(parsedSaved.paperclipApiUrl) === asString(currentIdentity.paperclipApiUrl)
);
}
export async function prepareRemoteManagedRuntime(input: {
spec: SshRemoteExecutionSpec;
adapterKey: string;
workspaceLocalDir: string;
workspaceRemoteDir?: string;
assets?: RemoteManagedRuntimeAsset[];
}): Promise<PreparedRemoteManagedRuntime> {
const workspaceRemoteDir = input.workspaceRemoteDir ?? input.spec.remoteCwd;
const runtimeRootDir = path.posix.join(workspaceRemoteDir, ".paperclip-runtime", input.adapterKey);
await prepareWorkspaceForSshExecution({
spec: input.spec,
localDir: input.workspaceLocalDir,
remoteDir: workspaceRemoteDir,
});
const assetDirs: Record<string, string> = {};
try {
for (const asset of input.assets ?? []) {
const remoteDir = path.posix.join(runtimeRootDir, asset.key);
assetDirs[asset.key] = remoteDir;
await syncDirectoryToSsh({
spec: input.spec,
localDir: asset.localDir,
remoteDir,
followSymlinks: asset.followSymlinks,
exclude: asset.exclude,
});
}
} catch (error) {
await restoreWorkspaceFromSshExecution({
spec: input.spec,
localDir: input.workspaceLocalDir,
remoteDir: workspaceRemoteDir,
});
throw error;
}
return {
spec: input.spec,
workspaceLocalDir: input.workspaceLocalDir,
workspaceRemoteDir,
runtimeRootDir,
assetDirs,
restoreWorkspace: async () => {
await restoreWorkspaceFromSshExecution({
spec: input.spec,
localDir: input.workspaceLocalDir,
remoteDir: workspaceRemoteDir,
});
},
};
}

View File

@@ -0,0 +1,126 @@
import { lstat, mkdir, mkdtemp, readFile, rm, symlink, writeFile } from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { execFile as execFileCallback } from "node:child_process";
import { promisify } from "node:util";
import { afterEach, describe, expect, it } from "vitest";
import {
mirrorDirectory,
prepareSandboxManagedRuntime,
type SandboxManagedRuntimeClient,
} from "./sandbox-managed-runtime.js";
const execFile = promisify(execFileCallback);
describe("sandbox managed runtime", () => {
const cleanupDirs: string[] = [];
afterEach(async () => {
while (cleanupDirs.length > 0) {
const dir = cleanupDirs.pop();
if (!dir) continue;
await rm(dir, { recursive: true, force: true }).catch(() => undefined);
}
});
it("preserves excluded local workspace artifacts during restore mirroring", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-sandbox-restore-"));
cleanupDirs.push(rootDir);
const sourceDir = path.join(rootDir, "source");
const targetDir = path.join(rootDir, "target");
await mkdir(path.join(sourceDir, "src"), { recursive: true });
await mkdir(path.join(targetDir, ".claude"), { recursive: true });
await mkdir(path.join(targetDir, ".paperclip-runtime"), { recursive: true });
await writeFile(path.join(sourceDir, "src", "app.ts"), "export const value = 2;\n", "utf8");
await writeFile(path.join(targetDir, "stale.txt"), "remove me\n", "utf8");
await writeFile(path.join(targetDir, ".claude", "settings.json"), "{\"keep\":true}\n", "utf8");
await writeFile(path.join(targetDir, ".claude.json"), "{\"keep\":true}\n", "utf8");
await writeFile(path.join(targetDir, ".paperclip-runtime", "state.json"), "{}\n", "utf8");
await mirrorDirectory(sourceDir, targetDir, {
preserveAbsent: [".paperclip-runtime", ".claude", ".claude.json"],
});
await expect(readFile(path.join(targetDir, "src", "app.ts"), "utf8")).resolves.toBe("export const value = 2;\n");
await expect(readFile(path.join(targetDir, ".claude", "settings.json"), "utf8")).resolves.toBe("{\"keep\":true}\n");
await expect(readFile(path.join(targetDir, ".claude.json"), "utf8")).resolves.toBe("{\"keep\":true}\n");
await expect(readFile(path.join(targetDir, ".paperclip-runtime", "state.json"), "utf8")).resolves.toBe("{}\n");
await expect(readFile(path.join(targetDir, "stale.txt"), "utf8")).rejects.toMatchObject({ code: "ENOENT" });
});
it("syncs workspace and assets through a provider-neutral sandbox client", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-sandbox-managed-"));
cleanupDirs.push(rootDir);
const localWorkspaceDir = path.join(rootDir, "local-workspace");
const remoteWorkspaceDir = path.join(rootDir, "remote-workspace");
const localAssetsDir = path.join(rootDir, "local-assets");
const linkedAssetPath = path.join(rootDir, "linked-skill.md");
await mkdir(path.join(localWorkspaceDir, ".claude"), { recursive: true });
await mkdir(localAssetsDir, { recursive: true });
await writeFile(path.join(localWorkspaceDir, "README.md"), "local workspace\n", "utf8");
await writeFile(path.join(localWorkspaceDir, "._README.md"), "appledouble\n", "utf8");
await writeFile(path.join(localWorkspaceDir, ".claude", "settings.json"), "{\"local\":true}\n", "utf8");
await writeFile(linkedAssetPath, "skill body\n", "utf8");
await symlink(linkedAssetPath, path.join(localAssetsDir, "skill.md"));
const client: SandboxManagedRuntimeClient = {
makeDir: async (remotePath) => {
await mkdir(remotePath, { recursive: true });
},
writeFile: async (remotePath, bytes) => {
await mkdir(path.dirname(remotePath), { recursive: true });
await writeFile(remotePath, Buffer.from(bytes));
},
readFile: async (remotePath) => await readFile(remotePath),
remove: async (remotePath) => {
await rm(remotePath, { recursive: true, force: true });
},
run: async (command) => {
await execFile("sh", ["-lc", command], {
maxBuffer: 32 * 1024 * 1024,
});
},
};
const prepared = await prepareSandboxManagedRuntime({
spec: {
transport: "sandbox",
provider: "test",
sandboxId: "sandbox-1",
remoteCwd: remoteWorkspaceDir,
timeoutMs: 30_000,
apiKey: null,
},
adapterKey: "test-adapter",
client,
workspaceLocalDir: localWorkspaceDir,
workspaceExclude: [".claude"],
preserveAbsentOnRestore: [".claude"],
assets: [{
key: "skills",
localDir: localAssetsDir,
followSymlinks: true,
}],
});
await expect(readFile(path.join(remoteWorkspaceDir, "README.md"), "utf8")).resolves.toBe("local workspace\n");
await expect(readFile(path.join(remoteWorkspaceDir, "._README.md"), "utf8")).rejects.toMatchObject({ code: "ENOENT" });
await expect(readFile(path.join(remoteWorkspaceDir, ".claude", "settings.json"), "utf8")).rejects.toMatchObject({ code: "ENOENT" });
await expect(readFile(path.join(prepared.assetDirs.skills, "skill.md"), "utf8")).resolves.toBe("skill body\n");
expect((await lstat(path.join(prepared.assetDirs.skills, "skill.md"))).isFile()).toBe(true);
await writeFile(path.join(remoteWorkspaceDir, "README.md"), "remote workspace\n", "utf8");
await writeFile(path.join(remoteWorkspaceDir, "remote-only.txt"), "sync back\n", "utf8");
await mkdir(path.join(localWorkspaceDir, ".paperclip-runtime"), { recursive: true });
await writeFile(path.join(localWorkspaceDir, ".paperclip-runtime", "state.json"), "{}\n", "utf8");
await writeFile(path.join(localWorkspaceDir, "local-stale.txt"), "remove\n", "utf8");
await prepared.restoreWorkspace();
await expect(readFile(path.join(localWorkspaceDir, "README.md"), "utf8")).resolves.toBe("remote workspace\n");
await expect(readFile(path.join(localWorkspaceDir, "remote-only.txt"), "utf8")).resolves.toBe("sync back\n");
await expect(readFile(path.join(localWorkspaceDir, "local-stale.txt"), "utf8")).rejects.toMatchObject({ code: "ENOENT" });
await expect(readFile(path.join(localWorkspaceDir, ".claude", "settings.json"), "utf8")).resolves.toBe("{\"local\":true}\n");
await expect(readFile(path.join(localWorkspaceDir, ".paperclip-runtime", "state.json"), "utf8")).resolves.toBe("{}\n");
});
});

View File

@@ -0,0 +1,338 @@
import { execFile as execFileCallback } from "node:child_process";
import { constants as fsConstants, promises as fs } from "node:fs";
import os from "node:os";
import path from "node:path";
import { promisify } from "node:util";
const execFile = promisify(execFileCallback);
export interface SandboxRemoteExecutionSpec {
transport: "sandbox";
provider: string;
sandboxId: string;
remoteCwd: string;
timeoutMs: number;
apiKey: string | null;
paperclipApiUrl?: string | null;
}
export interface SandboxManagedRuntimeAsset {
key: string;
localDir: string;
followSymlinks?: boolean;
exclude?: string[];
}
export interface SandboxManagedRuntimeClient {
makeDir(remotePath: string): Promise<void>;
writeFile(remotePath: string, bytes: ArrayBuffer): Promise<void>;
readFile(remotePath: string): Promise<Buffer | Uint8Array | ArrayBuffer>;
remove(remotePath: string): Promise<void>;
run(command: string, options: { timeoutMs: number }): Promise<void>;
}
export interface PreparedSandboxManagedRuntime {
spec: SandboxRemoteExecutionSpec;
workspaceLocalDir: string;
workspaceRemoteDir: string;
runtimeRootDir: string;
assetDirs: Record<string, string>;
restoreWorkspace(): Promise<void>;
}
function asObject(value: unknown): Record<string, unknown> {
return value && typeof value === "object" && !Array.isArray(value)
? (value as Record<string, unknown>)
: {};
}
function asString(value: unknown): string {
return typeof value === "string" ? value : "";
}
function asNumber(value: unknown): number {
return typeof value === "number" ? value : Number(value);
}
function shellQuote(value: string) {
return `'${value.replace(/'/g, `'\"'\"'`)}'`;
}
export function parseSandboxRemoteExecutionSpec(value: unknown): SandboxRemoteExecutionSpec | null {
const parsed = asObject(value);
const transport = asString(parsed.transport).trim();
const provider = asString(parsed.provider).trim();
const sandboxId = asString(parsed.sandboxId).trim();
const remoteCwd = asString(parsed.remoteCwd).trim();
const timeoutMs = asNumber(parsed.timeoutMs);
if (
transport !== "sandbox" ||
provider.length === 0 ||
sandboxId.length === 0 ||
remoteCwd.length === 0 ||
!Number.isFinite(timeoutMs) ||
timeoutMs <= 0
) {
return null;
}
return {
transport: "sandbox",
provider,
sandboxId,
remoteCwd,
timeoutMs,
apiKey: asString(parsed.apiKey).trim() || null,
paperclipApiUrl: asString(parsed.paperclipApiUrl).trim() || null,
};
}
export function buildSandboxExecutionSessionIdentity(spec: SandboxRemoteExecutionSpec | null) {
if (!spec) return null;
return {
transport: "sandbox",
provider: spec.provider,
sandboxId: spec.sandboxId,
remoteCwd: spec.remoteCwd,
...(spec.paperclipApiUrl ? { paperclipApiUrl: spec.paperclipApiUrl } : {}),
} as const;
}
export function sandboxExecutionSessionMatches(saved: unknown, current: SandboxRemoteExecutionSpec | null): boolean {
const currentIdentity = buildSandboxExecutionSessionIdentity(current);
if (!currentIdentity) return false;
const parsedSaved = asObject(saved);
return (
asString(parsedSaved.transport) === currentIdentity.transport &&
asString(parsedSaved.provider) === currentIdentity.provider &&
asString(parsedSaved.sandboxId) === currentIdentity.sandboxId &&
asString(parsedSaved.remoteCwd) === currentIdentity.remoteCwd &&
asString(parsedSaved.paperclipApiUrl) === asString(currentIdentity.paperclipApiUrl)
);
}
async function withTempDir<T>(prefix: string, fn: (dir: string) => Promise<T>): Promise<T> {
const dir = await fs.mkdtemp(path.join(os.tmpdir(), prefix));
try {
return await fn(dir);
} finally {
await fs.rm(dir, { recursive: true, force: true }).catch(() => undefined);
}
}
async function execTar(args: string[]): Promise<void> {
await execFile("tar", args, {
env: {
...process.env,
COPYFILE_DISABLE: "1",
},
maxBuffer: 32 * 1024 * 1024,
});
}
async function createTarballFromDirectory(input: {
localDir: string;
archivePath: string;
exclude?: string[];
followSymlinks?: boolean;
}): Promise<void> {
const excludeArgs = ["._*", ...(input.exclude ?? [])].flatMap((entry) => ["--exclude", entry]);
await execTar([
"-c",
...(input.followSymlinks ? ["-h"] : []),
"-f",
input.archivePath,
"-C",
input.localDir,
...excludeArgs,
".",
]);
}
async function extractTarballToDirectory(input: {
archivePath: string;
localDir: string;
}): Promise<void> {
await fs.mkdir(input.localDir, { recursive: true });
await execTar(["-xf", input.archivePath, "-C", input.localDir]);
}
async function walkDirectory(root: string, relative = ""): Promise<string[]> {
const current = path.join(root, relative);
const entries = await fs.readdir(current, { withFileTypes: true }).catch(() => []);
const out: string[] = [];
for (const entry of entries) {
const nextRelative = relative ? path.posix.join(relative, entry.name) : entry.name;
out.push(nextRelative);
if (entry.isDirectory()) {
out.push(...(await walkDirectory(root, nextRelative)));
}
}
return out.sort((left, right) => right.length - left.length);
}
function isRelativePathOrDescendant(relative: string, candidate: string): boolean {
return relative === candidate || relative.startsWith(`${candidate}/`);
}
export async function mirrorDirectory(
sourceDir: string,
targetDir: string,
options: { preserveAbsent?: string[] } = {},
): Promise<void> {
await fs.mkdir(targetDir, { recursive: true });
const preserveAbsent = new Set(options.preserveAbsent ?? []);
const shouldPreserveAbsent = (relative: string) =>
[...preserveAbsent].some((candidate) => isRelativePathOrDescendant(relative, candidate));
const sourceEntries = new Set(await walkDirectory(sourceDir));
const targetEntries = await walkDirectory(targetDir);
for (const relative of targetEntries) {
if (shouldPreserveAbsent(relative)) continue;
if (!sourceEntries.has(relative)) {
await fs.rm(path.join(targetDir, relative), { recursive: true, force: true }).catch(() => undefined);
}
}
const copyEntry = async (relative: string) => {
const sourcePath = path.join(sourceDir, relative);
const targetPath = path.join(targetDir, relative);
const stats = await fs.lstat(sourcePath);
if (stats.isDirectory()) {
await fs.mkdir(targetPath, { recursive: true });
return;
}
await fs.mkdir(path.dirname(targetPath), { recursive: true });
await fs.rm(targetPath, { recursive: true, force: true }).catch(() => undefined);
if (stats.isSymbolicLink()) {
const linkTarget = await fs.readlink(sourcePath);
await fs.symlink(linkTarget, targetPath);
return;
}
await fs.copyFile(sourcePath, targetPath, fsConstants.COPYFILE_FICLONE).catch(async () => {
await fs.copyFile(sourcePath, targetPath);
});
await fs.chmod(targetPath, stats.mode);
};
const entries = (await walkDirectory(sourceDir)).sort((left, right) => left.localeCompare(right));
for (const relative of entries) {
await copyEntry(relative);
}
}
function toArrayBuffer(bytes: Buffer): ArrayBuffer {
return Uint8Array.from(bytes).buffer;
}
function toBuffer(bytes: Buffer | Uint8Array | ArrayBuffer): Buffer {
if (Buffer.isBuffer(bytes)) return bytes;
if (bytes instanceof ArrayBuffer) return Buffer.from(bytes);
return Buffer.from(bytes.buffer, bytes.byteOffset, bytes.byteLength);
}
function tarExcludeFlags(exclude: string[] | undefined): string {
return ["._*", ...(exclude ?? [])].map((entry) => `--exclude ${shellQuote(entry)}`).join(" ");
}
export async function prepareSandboxManagedRuntime(input: {
spec: SandboxRemoteExecutionSpec;
adapterKey: string;
client: SandboxManagedRuntimeClient;
workspaceLocalDir: string;
workspaceRemoteDir?: string;
workspaceExclude?: string[];
preserveAbsentOnRestore?: string[];
assets?: SandboxManagedRuntimeAsset[];
}): Promise<PreparedSandboxManagedRuntime> {
const workspaceRemoteDir = input.workspaceRemoteDir ?? input.spec.remoteCwd;
const runtimeRootDir = path.posix.join(workspaceRemoteDir, ".paperclip-runtime", input.adapterKey);
await withTempDir("paperclip-sandbox-sync-", async (tempDir) => {
const workspaceTarPath = path.join(tempDir, "workspace.tar");
await createTarballFromDirectory({
localDir: input.workspaceLocalDir,
archivePath: workspaceTarPath,
exclude: input.workspaceExclude,
});
const workspaceTarBytes = await fs.readFile(workspaceTarPath);
const remoteWorkspaceTar = path.posix.join(runtimeRootDir, "workspace-upload.tar");
await input.client.makeDir(runtimeRootDir);
await input.client.writeFile(remoteWorkspaceTar, toArrayBuffer(workspaceTarBytes));
const preservedNames = new Set([".paperclip-runtime", ...(input.preserveAbsentOnRestore ?? [])]);
const findPreserveArgs = [...preservedNames].map((entry) => `! -name ${shellQuote(entry)}`).join(" ");
await input.client.run(
`sh -lc ${shellQuote(
`mkdir -p ${shellQuote(workspaceRemoteDir)} && ` +
`find ${shellQuote(workspaceRemoteDir)} -mindepth 1 -maxdepth 1 ${findPreserveArgs} -exec rm -rf -- {} + && ` +
`tar -xf ${shellQuote(remoteWorkspaceTar)} -C ${shellQuote(workspaceRemoteDir)} && ` +
`rm -f ${shellQuote(remoteWorkspaceTar)}`,
)}`,
{ timeoutMs: input.spec.timeoutMs },
);
for (const asset of input.assets ?? []) {
const assetTarPath = path.join(tempDir, `${asset.key}.tar`);
await createTarballFromDirectory({
localDir: asset.localDir,
archivePath: assetTarPath,
followSymlinks: asset.followSymlinks,
exclude: asset.exclude,
});
const assetTarBytes = await fs.readFile(assetTarPath);
const remoteAssetDir = path.posix.join(runtimeRootDir, asset.key);
const remoteAssetTar = path.posix.join(runtimeRootDir, `${asset.key}-upload.tar`);
await input.client.writeFile(remoteAssetTar, toArrayBuffer(assetTarBytes));
await input.client.run(
`sh -lc ${shellQuote(
`rm -rf ${shellQuote(remoteAssetDir)} && ` +
`mkdir -p ${shellQuote(remoteAssetDir)} && ` +
`tar -xf ${shellQuote(remoteAssetTar)} -C ${shellQuote(remoteAssetDir)} && ` +
`rm -f ${shellQuote(remoteAssetTar)}`,
)}`,
{ timeoutMs: input.spec.timeoutMs },
);
}
});
const assetDirs = Object.fromEntries(
(input.assets ?? []).map((asset) => [asset.key, path.posix.join(runtimeRootDir, asset.key)]),
);
return {
spec: input.spec,
workspaceLocalDir: input.workspaceLocalDir,
workspaceRemoteDir,
runtimeRootDir,
assetDirs,
restoreWorkspace: async () => {
await withTempDir("paperclip-sandbox-restore-", async (tempDir) => {
const remoteWorkspaceTar = path.posix.join(runtimeRootDir, "workspace-download.tar");
await input.client.run(
`sh -lc ${shellQuote(
`mkdir -p ${shellQuote(runtimeRootDir)} && ` +
`tar -cf ${shellQuote(remoteWorkspaceTar)} -C ${shellQuote(workspaceRemoteDir)} ` +
`${tarExcludeFlags(input.workspaceExclude)} .`,
)}`,
{ timeoutMs: input.spec.timeoutMs },
);
const archiveBytes = await input.client.readFile(remoteWorkspaceTar);
await input.client.remove(remoteWorkspaceTar).catch(() => undefined);
const localArchivePath = path.join(tempDir, "workspace.tar");
const extractedDir = path.join(tempDir, "workspace");
await fs.writeFile(localArchivePath, toBuffer(archiveBytes));
await extractTarballToDirectory({
archivePath: localArchivePath,
localDir: extractedDir,
});
await mirrorDirectory(extractedDir, input.workspaceLocalDir, {
preserveAbsent: [".paperclip-runtime", ...(input.preserveAbsentOnRestore ?? [])],
});
});
},
};
}

View File

@@ -1,6 +1,7 @@
import { randomUUID } from "node:crypto";
import { describe, expect, it } from "vitest";
import {
applyPaperclipWorkspaceEnv,
appendWithByteCap,
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
renderPaperclipWakePrompt,
@@ -157,6 +158,35 @@ describe("runChildProcess", () => {
expect(await waitForPidExit(descendantPid, 2_000)).toBe(true);
});
it.skipIf(process.platform === "win32")("cleans up a still-running child after terminal output", async () => {
const result = await runChildProcess(
randomUUID(),
process.execPath,
[
"-e",
[
"process.stdout.write(`${JSON.stringify({ type: 'result', result: 'done' })}\\n`);",
"setInterval(() => {}, 1000);",
].join(" "),
],
{
cwd: process.cwd(),
env: {},
timeoutSec: 0,
graceSec: 1,
onLog: async () => {},
terminalResultCleanup: {
graceMs: 100,
hasTerminalResult: ({ stdout }) => stdout.includes('"type":"result"'),
},
},
);
expect(result.timedOut).toBe(false);
expect(result.signal).toBe("SIGTERM");
expect(result.stdout).toContain('"type":"result"');
});
it.skipIf(process.platform === "win32")("does not clean up noisy runs that have no terminal output", async () => {
const runId = randomUUID();
let observed = "";
@@ -225,8 +255,14 @@ describe("renderPaperclipWakePrompt", () => {
it("keeps the default local-agent prompt action-oriented", () => {
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("Start actionable work in this heartbeat");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("do not stop at a plan");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("Prefer the smallest verification that proves the change");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("Use child issues");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("instead of polling agents, sessions, or processes");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("Create child issues directly when you know what needs to be done");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("POST /api/issues/{issueId}/interactions");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("kind suggest_tasks, ask_user_questions, or request_confirmation");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("confirmation:{issueId}:plan:{revisionId}");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("Wait for acceptance before creating implementation subtasks");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain(
"Respect budget, pause/cancel, approval gates, and company boundaries",
);
@@ -292,6 +328,34 @@ describe("renderPaperclipWakePrompt", () => {
expect(prompt).toContain("PAP-1723 Finish blocker (todo)");
});
it("renders loose review request instructions for execution handoffs", () => {
const prompt = renderPaperclipWakePrompt({
reason: "execution_review_requested",
issue: {
id: "issue-1",
identifier: "PAP-2011",
title: "Review request handoff",
status: "in_review",
},
executionStage: {
wakeRole: "reviewer",
stageId: "stage-1",
stageType: "review",
currentParticipant: { type: "agent", agentId: "agent-1" },
returnAssignee: { type: "agent", agentId: "agent-2" },
reviewRequest: {
instructions: "Please focus on edge cases and leave a short risk summary.",
},
allowedActions: ["approve", "request_changes"],
},
fallbackFetchNeeded: false,
});
expect(prompt).toContain("Review request instructions:");
expect(prompt).toContain("Please focus on edge cases and leave a short risk summary.");
expect(prompt).toContain("You are waking as the active reviewer for this issue.");
});
it("includes continuation and child issue summaries in structured wake context", () => {
const payload = {
reason: "issue_children_completed",
@@ -362,6 +426,50 @@ describe("renderPaperclipWakePrompt", () => {
});
});
describe("applyPaperclipWorkspaceEnv", () => {
it("adds shared workspace env vars including AGENT_HOME", () => {
const env = applyPaperclipWorkspaceEnv(
{},
{
workspaceCwd: "/tmp/workspace",
workspaceSource: "project_primary",
workspaceStrategy: "git_worktree",
workspaceId: "workspace-1",
workspaceRepoUrl: "https://github.com/paperclipai/paperclip.git",
workspaceRepoRef: "main",
workspaceBranch: "feature/test",
workspaceWorktreePath: "/tmp/worktree",
agentHome: "/tmp/agent-home",
},
);
expect(env).toEqual({
PAPERCLIP_WORKSPACE_CWD: "/tmp/workspace",
PAPERCLIP_WORKSPACE_SOURCE: "project_primary",
PAPERCLIP_WORKSPACE_STRATEGY: "git_worktree",
PAPERCLIP_WORKSPACE_ID: "workspace-1",
PAPERCLIP_WORKSPACE_REPO_URL: "https://github.com/paperclipai/paperclip.git",
PAPERCLIP_WORKSPACE_REPO_REF: "main",
PAPERCLIP_WORKSPACE_BRANCH: "feature/test",
PAPERCLIP_WORKSPACE_WORKTREE_PATH: "/tmp/worktree",
AGENT_HOME: "/tmp/agent-home",
});
});
it("skips empty workspace env values", () => {
const env = applyPaperclipWorkspaceEnv(
{},
{
workspaceCwd: "",
workspaceSource: null,
agentHome: "",
},
);
expect(env).toEqual({});
});
});
describe("appendWithByteCap", () => {
it("keeps valid UTF-8 when trimming through multibyte text", () => {
const output = appendWithByteCap("prefix ", "hello — world", 7);

View File

@@ -1,6 +1,7 @@
import { spawn, type ChildProcess } from "node:child_process";
import { constants as fsConstants, promises as fs, type Dirent } from "node:fs";
import path from "node:path";
import { buildSshSpawnTarget, type SshRemoteExecutionSpec } from "./ssh.js";
import type {
AdapterSkillEntry,
AdapterSkillSnapshot,
@@ -30,8 +31,12 @@ interface RunningProcess {
interface SpawnTarget {
command: string;
args: string[];
cwd?: string;
cleanup?: () => Promise<void>;
}
type RemoteExecutionSpec = SshRemoteExecutionSpec;
type ChildProcessWithEvents = ChildProcess & {
on(event: "error", listener: (err: Error) => void): ChildProcess;
on(
@@ -82,8 +87,13 @@ export const DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE = [
"Execution contract:",
"- Start actionable work in this heartbeat; do not stop at a plan unless the issue asks for planning.",
"- Leave durable progress in comments, documents, or work products with a clear next action.",
"- Prefer the smallest verification that proves the change; do not default to full workspace typecheck/build/test on every heartbeat unless the task scope warrants it.",
"- Use child issues for parallel or long delegated work instead of polling agents, sessions, or processes.",
"- If woken by a human comment on a dependency-blocked issue, respond or triage the comment without treating the blocked deliverable work as unblocked.",
"- Create child issues directly when you know what needs to be done; use issue-thread interactions when the board/user must choose suggested tasks, answer structured questions, or confirm a proposal.",
"- To ask for that input, create an interaction on the current issue with POST /api/issues/{issueId}/interactions using kind suggest_tasks, ask_user_questions, or request_confirmation. Use continuationPolicy wake_assignee when you need to resume after a response; for request_confirmation this resumes only after acceptance.",
"- When you intentionally restart follow-up work on a completed assigned issue, include structured `resume: true` with the POST /api/issues/{issueId}/comments or PATCH /api/issues/{issueId} comment payload. Generic agent comments on closed issues are inert by default.",
"- For plan approval, update the plan document first, then create request_confirmation targeting the latest plan revision with idempotencyKey confirmation:{issueId}:plan:{revisionId}. Wait for acceptance before creating implementation subtasks, and create a fresh confirmation after superseding board/user comments if approval is still needed.",
"- If blocked, mark the issue blocked and name the unblock owner and action.",
"- Respect budget, pause/cancel, approval gates, and company boundaries.",
].join("\n");
@@ -274,6 +284,9 @@ type PaperclipWakeExecutionStage = {
stageType: string | null;
currentParticipant: PaperclipWakeExecutionPrincipal | null;
returnAssignee: PaperclipWakeExecutionPrincipal | null;
reviewRequest: {
instructions: string;
} | null;
lastDecisionOutcome: string | null;
allowedActions: string[];
};
@@ -322,11 +335,20 @@ type PaperclipWakeBlockerSummary = {
priority: string | null;
};
type PaperclipWakeTreeHoldSummary = {
holdId: string | null;
rootIssueId: string | null;
mode: string | null;
reason: string | null;
};
type PaperclipWakePayload = {
reason: string | null;
issue: PaperclipWakeIssue | null;
checkedOutByHarness: boolean;
dependencyBlockedInteraction: boolean;
treeHoldInteraction: boolean;
activeTreeHold: PaperclipWakeTreeHoldSummary | null;
unresolvedBlockerIssueIds: string[];
unresolvedBlockerSummaries: PaperclipWakeBlockerSummary[];
executionStage: PaperclipWakeExecutionStage | null;
@@ -432,6 +454,16 @@ function normalizePaperclipWakeBlockerSummary(value: unknown): PaperclipWakeBloc
return { id, identifier, title, status, priority };
}
function normalizePaperclipWakeTreeHoldSummary(value: unknown): PaperclipWakeTreeHoldSummary | null {
const hold = parseObject(value);
const holdId = asString(hold.holdId, "").trim() || null;
const rootIssueId = asString(hold.rootIssueId, "").trim() || null;
const mode = asString(hold.mode, "").trim() || null;
const reason = asString(hold.reason, "").trim() || null;
if (!holdId && !rootIssueId && !mode && !reason) return null;
return { holdId, rootIssueId, mode, reason };
}
function normalizePaperclipWakeExecutionPrincipal(value: unknown): PaperclipWakeExecutionPrincipal | null {
const principal = parseObject(value);
const typeRaw = asString(principal.type, "").trim().toLowerCase();
@@ -457,11 +489,14 @@ function normalizePaperclipWakeExecutionStage(value: unknown): PaperclipWakeExec
: [];
const currentParticipant = normalizePaperclipWakeExecutionPrincipal(stage.currentParticipant);
const returnAssignee = normalizePaperclipWakeExecutionPrincipal(stage.returnAssignee);
const reviewRequestRaw = parseObject(stage.reviewRequest);
const reviewInstructions = asString(reviewRequestRaw.instructions, "").trim();
const reviewRequest = reviewInstructions ? { instructions: reviewInstructions } : null;
const stageId = asString(stage.stageId, "").trim() || null;
const stageType = asString(stage.stageType, "").trim() || null;
const lastDecisionOutcome = asString(stage.lastDecisionOutcome, "").trim() || null;
if (!wakeRole && !stageId && !stageType && !currentParticipant && !returnAssignee && !lastDecisionOutcome && allowedActions.length === 0) {
if (!wakeRole && !stageId && !stageType && !currentParticipant && !returnAssignee && !reviewRequest && !lastDecisionOutcome && allowedActions.length === 0) {
return null;
}
@@ -471,6 +506,7 @@ function normalizePaperclipWakeExecutionStage(value: unknown): PaperclipWakeExec
stageType,
currentParticipant,
returnAssignee,
reviewRequest,
lastDecisionOutcome,
allowedActions,
};
@@ -508,7 +544,8 @@ export function normalizePaperclipWakePayload(value: unknown): PaperclipWakePayl
.filter((entry): entry is PaperclipWakeBlockerSummary => Boolean(entry))
: [];
if (comments.length === 0 && commentIds.length === 0 && childIssueSummaries.length === 0 && unresolvedBlockerIssueIds.length === 0 && unresolvedBlockerSummaries.length === 0 && !executionStage && !continuationSummary && !livenessContinuation && !normalizePaperclipWakeIssue(payload.issue)) {
const activeTreeHold = normalizePaperclipWakeTreeHoldSummary(payload.activeTreeHold);
if (comments.length === 0 && commentIds.length === 0 && childIssueSummaries.length === 0 && unresolvedBlockerIssueIds.length === 0 && unresolvedBlockerSummaries.length === 0 && !activeTreeHold && !executionStage && !continuationSummary && !livenessContinuation && !normalizePaperclipWakeIssue(payload.issue)) {
return null;
}
@@ -517,6 +554,8 @@ export function normalizePaperclipWakePayload(value: unknown): PaperclipWakePayl
issue: normalizePaperclipWakeIssue(payload.issue),
checkedOutByHarness: asBoolean(payload.checkedOutByHarness, false),
dependencyBlockedInteraction: asBoolean(payload.dependencyBlockedInteraction, false),
treeHoldInteraction: asBoolean(payload.treeHoldInteraction, false),
activeTreeHold,
unresolvedBlockerIssueIds,
unresolvedBlockerSummaries,
executionStage,
@@ -611,6 +650,14 @@ export function renderPaperclipWakePrompt(
lines.push(`- unresolved blocker issue ids: ${normalized.unresolvedBlockerIssueIds.join(", ")}`);
}
}
if (normalized.treeHoldInteraction) {
lines.push("- tree-hold interaction: yes");
lines.push("- execution scope: respond or triage the human comment; the subtree remains paused until an explicit resume action");
if (normalized.activeTreeHold) {
const hold = normalized.activeTreeHold;
lines.push(`- active tree hold: ${hold.holdId ?? "unknown"}${hold.rootIssueId ? ` rooted at ${hold.rootIssueId}` : ""}${hold.mode ? ` (${hold.mode})` : ""}`);
}
}
if (normalized.missingCount > 0) {
lines.push(`- omitted comments: ${normalized.missingCount}`);
}
@@ -626,6 +673,13 @@ export function renderPaperclipWakePrompt(
if (executionStage.allowedActions.length > 0) {
lines.push(`- allowed actions: ${executionStage.allowedActions.join(", ")}`);
}
if (executionStage.reviewRequest) {
lines.push(
"",
"Review request instructions:",
executionStage.reviewRequest.instructions,
);
}
lines.push("");
if (executionStage.wakeRole === "reviewer" || executionStage.wakeRole === "approver") {
lines.push(
@@ -773,11 +827,61 @@ export function buildPaperclipEnv(agent: { id: string; companyId: string }): Rec
process.env.PAPERCLIP_LISTEN_HOST ?? process.env.HOST ?? "localhost",
);
const runtimePort = process.env.PAPERCLIP_LISTEN_PORT ?? process.env.PORT ?? "3100";
const apiUrl = process.env.PAPERCLIP_API_URL ?? `http://${runtimeHost}:${runtimePort}`;
const apiUrl =
process.env.PAPERCLIP_RUNTIME_API_URL ??
process.env.PAPERCLIP_API_URL ??
`http://${runtimeHost}:${runtimePort}`;
vars.PAPERCLIP_API_URL = apiUrl;
return vars;
}
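// Copies the supplied workspace metadata into `env`, skipping null/empty values,
// and returns the same env object for chaining.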
export function applyPaperclipWorkspaceEnv(
env: Record<string, string>,
input: {
workspaceCwd?: string | null;
workspaceSource?: string | null;
workspaceStrategy?: string | null;
workspaceId?: string | null;
workspaceRepoUrl?: string | null;
workspaceRepoRef?: string | null;
workspaceBranch?: string | null;
workspaceWorktreePath?: string | null;
agentHome?: string | null;
},
): Record<string, string> {
const mappings = [
["PAPERCLIP_WORKSPACE_CWD", input.workspaceCwd],
["PAPERCLIP_WORKSPACE_SOURCE", input.workspaceSource],
["PAPERCLIP_WORKSPACE_STRATEGY", input.workspaceStrategy],
["PAPERCLIP_WORKSPACE_ID", input.workspaceId],
["PAPERCLIP_WORKSPACE_REPO_URL", input.workspaceRepoUrl],
["PAPERCLIP_WORKSPACE_REPO_REF", input.workspaceRepoRef],
["PAPERCLIP_WORKSPACE_BRANCH", input.workspaceBranch],
["PAPERCLIP_WORKSPACE_WORKTREE_PATH", input.workspaceWorktreePath],
["AGENT_HOME", input.agentHome],
] as const;
for (const [key, value] of mappings) {
if (typeof value === "string" && value.length > 0) {
env[key] = value;
}
}
return env;
}
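// Removes inherited PAPERCLIP_* keys from the parent environment (keeping only the
// runtime API URL and listen host/port overrides) so spawned runs only inherit those
// keys plus whatever is passed explicitly per run.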
export function sanitizeInheritedPaperclipEnv(baseEnv: NodeJS.ProcessEnv): NodeJS.ProcessEnv {
const env: NodeJS.ProcessEnv = { ...baseEnv };
for (const key of Object.keys(env)) {
if (!key.startsWith("PAPERCLIP_")) continue;
if (key === "PAPERCLIP_RUNTIME_API_URL") continue;
if (key === "PAPERCLIP_LISTEN_HOST") continue;
if (key === "PAPERCLIP_LISTEN_PORT") continue;
delete env[key];
}
return env;
}
export function defaultPathForPlatform() {
if (process.platform === "win32") {
return "C:\\Windows\\System32;C:\\Windows;C:\\Windows\\System32\\Wbem";
@@ -826,7 +930,18 @@ async function resolveCommandPath(command: string, cwd: string, env: NodeJS.Proc
return null;
}
export async function resolveCommandForLogs(command: string, cwd: string, env: NodeJS.ProcessEnv): Promise<string> {
export async function resolveCommandForLogs(
command: string,
cwd: string,
env: NodeJS.ProcessEnv,
options: {
remoteExecution?: RemoteExecutionSpec | null;
} = {},
): Promise<string> {
const remote = options.remoteExecution ?? null;
if (remote) {
return `ssh://${remote.username}@${remote.host}:${remote.port}/${remote.remoteCwd} :: ${command}`;
}
return (await resolveCommandPath(command, cwd, env)) ?? command;
}
@@ -846,7 +961,33 @@ async function resolveSpawnTarget(
args: string[],
cwd: string,
env: NodeJS.ProcessEnv,
options: {
remoteExecution?: RemoteExecutionSpec | null;
remoteEnv?: Record<string, string> | null;
} = {},
): Promise<SpawnTarget> {
const remote = options.remoteExecution ?? null;
if (remote) {
const sshResolved = await resolveCommandPath("ssh", process.cwd(), env);
if (!sshResolved) {
throw new Error('Command not found in PATH: "ssh"');
}
const spawnTarget = await buildSshSpawnTarget({
spec: remote,
command,
args,
env: Object.fromEntries(
Object.entries(options.remoteEnv ?? {}).filter((entry): entry is [string, string] => typeof entry[1] === "string"),
),
});
return {
command: sshResolved,
args: spawnTarget.args,
cwd: process.cwd(),
cleanup: spawnTarget.cleanup,
};
}
const resolved = await resolveCommandPath(command, cwd, env);
const executable = resolved ?? command;
@@ -931,6 +1072,20 @@ export async function resolvePaperclipSkillsDir(
return null;
}
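// Reads a skill's SKILL.md frontmatter and reports whether the skill is required.
// A missing file or unparseable frontmatter defaults to required; only an explicit
// `required: false` line opts the skill out, e.g.
//   ---
//   required: false
//   ---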
async function readSkillRequired(skillDir: string): Promise<boolean> {
try {
const content = await fs.readFile(path.join(skillDir, "SKILL.md"), "utf8");
const normalized = content.replace(/\r\n/g, "\n");
if (!normalized.startsWith("---\n")) return true;
const closing = normalized.indexOf("\n---\n", 4);
if (closing < 0) return true;
const frontmatter = normalized.slice(4, closing);
return !/^\s*required\s*:\s*false\s*$/m.test(frontmatter);
} catch {
return true;
}
}
export async function listPaperclipSkillEntries(
moduleDir: string,
additionalCandidates: string[] = [],
@@ -940,15 +1095,20 @@ export async function listPaperclipSkillEntries(
try {
const entries = await fs.readdir(root, { withFileTypes: true });
return entries
.filter((entry) => entry.isDirectory())
.map((entry) => ({
const dirs = entries.filter((entry) => entry.isDirectory());
return Promise.all(dirs.map(async (entry) => {
const skillDir = path.join(root, entry.name);
const required = await readSkillRequired(skillDir);
return {
key: `paperclipai/paperclip/${entry.name}`,
runtimeName: entry.name,
source: path.join(root, entry.name),
required: true,
requiredReason: "Bundled Paperclip skills are always available for local adapters.",
}));
source: skillDir,
required,
requiredReason: required
? "Bundled Paperclip skills are always available for local adapters."
: null,
};
}));
} catch {
return [];
}
@@ -1273,7 +1433,19 @@ export async function removeMaintainerOnlySkillSymlinks(
}
}
export async function ensureCommandResolvable(command: string, cwd: string, env: NodeJS.ProcessEnv) {
export async function ensureCommandResolvable(
command: string,
cwd: string,
env: NodeJS.ProcessEnv,
options: {
remoteExecution?: RemoteExecutionSpec | null;
} = {},
) {
if (options.remoteExecution) {
const resolvedSsh = await resolveCommandPath("ssh", process.cwd(), env);
if (resolvedSsh) return;
throw new Error('Command not found in PATH: "ssh"');
}
const resolved = await resolveCommandPath(command, cwd, env);
if (resolved) return;
if (command.includes("/") || command.includes("\\")) {
@@ -1297,12 +1469,15 @@ export async function runChildProcess(
onSpawn?: (meta: { pid: number; processGroupId: number | null; startedAt: string }) => Promise<void>;
terminalResultCleanup?: TerminalResultCleanupOptions;
stdin?: string;
remoteExecution?: RemoteExecutionSpec | null;
},
): Promise<RunProcessResult> {
const onLogError = opts.onLogError ?? ((err, id, msg) => console.warn({ err, runId: id }, msg));
return new Promise<RunProcessResult>((resolve, reject) => {
const rawMerged: NodeJS.ProcessEnv = { ...process.env, ...opts.env };
const rawMerged: NodeJS.ProcessEnv = {
...sanitizeInheritedPaperclipEnv(process.env),
...opts.env,
};
// Strip Claude Code nesting-guard env vars so spawned `claude` processes
// don't refuse to start with "cannot be launched inside another session".
@@ -1320,10 +1495,13 @@ export async function runChildProcess(
}
const mergedEnv = ensurePathInEnv(rawMerged);
void resolveSpawnTarget(command, args, opts.cwd, mergedEnv)
void resolveSpawnTarget(command, args, opts.cwd, mergedEnv, {
remoteExecution: opts.remoteExecution ?? null,
remoteEnv: opts.remoteExecution ? opts.env : null,
})
.then((target) => {
const child = spawn(target.command, target.args, {
cwd: opts.cwd,
cwd: target.cwd ?? opts.cwd,
env: mergedEnv,
detached: process.platform !== "win32",
shell: false,
@@ -1345,7 +1523,6 @@ export async function runChildProcess(
let stdout = "";
let stderr = "";
let logChain: Promise<void> = Promise.resolve();
let childExited = false;
let terminalResultSeen = false;
let terminalCleanupStarted = false;
let terminalCleanupTimer: NodeJS.Timeout | null = null;
@@ -1379,7 +1556,7 @@ export async function runChildProcess(
onLogError(err, runId, "failed to inspect terminal adapter output");
}
}
if (!terminalResultSeen || !childExited) return;
if (!terminalResultSeen) return;
if (terminalCleanupTimer) return;
const graceMs = Math.max(0, terminalCleanup.graceMs ?? 5_000);
@@ -1452,6 +1629,7 @@ export async function runChildProcess(
if (timeout) clearTimeout(timeout);
clearTerminalCleanupTimers();
runningProcesses.delete(runId);
void target.cleanup?.();
const errno = (err as NodeJS.ErrnoException).code;
const pathValue = mergedEnv.PATH ?? mergedEnv.Path ?? "";
const msg =
@@ -1462,7 +1640,6 @@ export async function runChildProcess(
});
child.on("exit", () => {
childExited = true;
maybeArmTerminalResultCleanup();
});
@@ -1471,15 +1648,19 @@ export async function runChildProcess(
clearTerminalCleanupTimers();
runningProcesses.delete(runId);
void logChain.finally(() => {
resolve({
exitCode: code,
signal,
timedOut,
stdout,
stderr,
pid: child.pid ?? null,
startedAt,
});
void Promise.resolve()
.then(() => target.cleanup?.())
.finally(() => {
resolve({
exitCode: code,
signal,
timedOut,
stdout,
stderr,
pid: child.pid ?? null,
startedAt,
});
});
});
});
})

View File

@@ -0,0 +1,275 @@
import { execFile } from "node:child_process";
import { mkdir, mkdtemp, rm, symlink, writeFile } from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it } from "vitest";
import {
buildSshSpawnTarget,
buildSshEnvLabFixtureConfig,
getSshEnvLabSupport,
prepareWorkspaceForSshExecution,
readSshEnvLabFixtureStatus,
restoreWorkspaceFromSshExecution,
runSshCommand,
syncDirectoryToSsh,
startSshEnvLabFixture,
stopSshEnvLabFixture,
} from "./ssh.js";
async function git(cwd: string, args: string[]): Promise<string> {
return await new Promise((resolve, reject) => {
execFile("git", ["-C", cwd, ...args], (error, stdout, stderr) => {
if (error) {
reject(new Error((stderr || stdout || error.message).trim()));
return;
}
resolve(stdout.trim());
});
});
}
describe("ssh env-lab fixture", () => {
const cleanupDirs: string[] = [];
afterEach(async () => {
while (cleanupDirs.length > 0) {
const dir = cleanupDirs.pop();
if (!dir) continue;
await rm(dir, { recursive: true, force: true }).catch(() => undefined);
}
});
it("starts an isolated sshd fixture and executes commands through it", async () => {
const support = await getSshEnvLabSupport();
if (!support.supported) {
console.warn(
`Skipping SSH env-lab fixture test: ${support.reason ?? "unsupported environment"}`,
);
return;
}
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-ssh-fixture-"));
cleanupDirs.push(rootDir);
const statePath = path.join(rootDir, "state.json");
const started = await startSshEnvLabFixture({ statePath });
const config = await buildSshEnvLabFixtureConfig(started);
const quotedWorkspace = JSON.stringify(started.workspaceDir);
const result = await runSshCommand(
config,
`sh -lc 'cd ${quotedWorkspace} && pwd'`,
);
expect(result.stdout.trim()).toBe(started.workspaceDir);
const status = await readSshEnvLabFixtureStatus(statePath);
expect(status.running).toBe(true);
await stopSshEnvLabFixture(statePath);
const stopped = await readSshEnvLabFixtureStatus(statePath);
expect(stopped.running).toBe(false);
});
it("does not treat an unrelated reused pid as the running fixture", async () => {
const support = await getSshEnvLabSupport();
if (!support.supported) {
console.warn(
`Skipping SSH env-lab fixture test: ${support.reason ?? "unsupported environment"}`,
);
return;
}
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-ssh-fixture-"));
cleanupDirs.push(rootDir);
const statePath = path.join(rootDir, "state.json");
const started = await startSshEnvLabFixture({ statePath });
await stopSshEnvLabFixture(statePath);
await mkdir(path.dirname(statePath), { recursive: true });
await writeFile(
statePath,
JSON.stringify({ ...started, pid: process.pid }, null, 2),
{ mode: 0o600 },
);
const staleStatus = await readSshEnvLabFixtureStatus(statePath);
expect(staleStatus.running).toBe(false);
const restarted = await startSshEnvLabFixture({ statePath });
expect(restarted.pid).not.toBe(process.pid);
await stopSshEnvLabFixture(statePath);
});
it("rejects invalid environment variable keys when constructing SSH spawn targets", async () => {
await expect(
buildSshSpawnTarget({
spec: {
host: "ssh.example.test",
port: 22,
username: "ssh-user",
remoteCwd: "/srv/paperclip/workspace",
remoteWorkspacePath: "/srv/paperclip/workspace",
privateKey: null,
knownHosts: null,
strictHostKeyChecking: true,
},
command: "env",
args: [],
env: {
"BAD KEY": "value",
},
}),
).rejects.toThrow("Invalid SSH environment variable key: BAD KEY");
});
it("syncs a local directory into the remote fixture workspace", async () => {
const support = await getSshEnvLabSupport();
if (!support.supported) {
console.warn(
`Skipping SSH env-lab fixture test: ${support.reason ?? "unsupported environment"}`,
);
return;
}
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-ssh-fixture-"));
cleanupDirs.push(rootDir);
const statePath = path.join(rootDir, "state.json");
const localDir = path.join(rootDir, "local-overlay");
await mkdir(localDir, { recursive: true });
await writeFile(path.join(localDir, "message.txt"), "hello from paperclip\n", "utf8");
await writeFile(path.join(localDir, "._message.txt"), "should never sync\n", "utf8");
const started = await startSshEnvLabFixture({ statePath });
const config = await buildSshEnvLabFixtureConfig(started);
const remoteDir = path.posix.join(started.workspaceDir, "overlay");
await syncDirectoryToSsh({
spec: {
...config,
remoteCwd: started.workspaceDir,
},
localDir,
remoteDir,
});
const result = await runSshCommand(
config,
`sh -lc 'cat ${JSON.stringify(path.posix.join(remoteDir, "message.txt"))} && if [ -e ${JSON.stringify(path.posix.join(remoteDir, "._message.txt"))} ]; then echo appledouble-present; fi'`,
);
expect(result.stdout).toContain("hello from paperclip");
expect(result.stdout).not.toContain("appledouble-present");
});
it("can dereference local symlinks while syncing to the remote fixture", async () => {
const support = await getSshEnvLabSupport();
if (!support.supported) {
console.warn(
`Skipping SSH symlink sync test: ${support.reason ?? "unsupported environment"}`,
);
return;
}
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-ssh-fixture-"));
cleanupDirs.push(rootDir);
const statePath = path.join(rootDir, "state.json");
const sourceDir = path.join(rootDir, "source");
const localDir = path.join(rootDir, "local-overlay");
await mkdir(sourceDir, { recursive: true });
await mkdir(localDir, { recursive: true });
await writeFile(path.join(sourceDir, "auth.json"), "{\"token\":\"secret\"}\n", "utf8");
await symlink(path.join(sourceDir, "auth.json"), path.join(localDir, "auth.json"));
const started = await startSshEnvLabFixture({ statePath });
const config = await buildSshEnvLabFixtureConfig(started);
const remoteDir = path.posix.join(started.workspaceDir, "overlay-follow-links");
await syncDirectoryToSsh({
spec: {
...config,
remoteCwd: started.workspaceDir,
},
localDir,
remoteDir,
followSymlinks: true,
});
const result = await runSshCommand(
config,
`sh -lc 'if [ -L ${JSON.stringify(path.posix.join(remoteDir, "auth.json"))} ]; then echo symlink; else echo regular; fi && cat ${JSON.stringify(path.posix.join(remoteDir, "auth.json"))}'`,
);
expect(result.stdout).toContain("regular");
expect(result.stdout).toContain("{\"token\":\"secret\"}");
});
it("round-trips a git workspace through the SSH fixture", async () => {
const support = await getSshEnvLabSupport();
if (!support.supported) {
console.warn(
`Skipping SSH workspace round-trip test: ${support.reason ?? "unsupported environment"}`,
);
return;
}
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-ssh-fixture-"));
cleanupDirs.push(rootDir);
const statePath = path.join(rootDir, "state.json");
const localRepo = path.join(rootDir, "local-workspace");
await mkdir(localRepo, { recursive: true });
await git(localRepo, ["init", "-b", "main"]);
await git(localRepo, ["config", "user.name", "Paperclip Test"]);
await git(localRepo, ["config", "user.email", "test@paperclip.dev"]);
await writeFile(path.join(localRepo, "tracked.txt"), "base\n", "utf8");
await writeFile(path.join(localRepo, "._tracked.txt"), "should stay local only\n", "utf8");
await git(localRepo, ["add", "tracked.txt"]);
await git(localRepo, ["commit", "-m", "initial"]);
const originalHead = await git(localRepo, ["rev-parse", "HEAD"]);
await writeFile(path.join(localRepo, "tracked.txt"), "dirty local\n", "utf8");
await writeFile(path.join(localRepo, "untracked.txt"), "from local\n", "utf8");
const started = await startSshEnvLabFixture({ statePath });
const config = await buildSshEnvLabFixtureConfig(started);
const spec = {
...config,
remoteCwd: started.workspaceDir,
} as const;
await prepareWorkspaceForSshExecution({
spec,
localDir: localRepo,
remoteDir: started.workspaceDir,
});
const remoteStatus = await runSshCommand(
config,
`sh -lc 'cd ${JSON.stringify(started.workspaceDir)} && git status --short'`,
);
expect(remoteStatus.stdout).toContain("M tracked.txt");
expect(remoteStatus.stdout).toContain("?? untracked.txt");
expect(remoteStatus.stdout).not.toContain("._tracked.txt");
await runSshCommand(
config,
`sh -lc 'cd ${JSON.stringify(started.workspaceDir)} && git config user.name "Paperclip SSH" && git config user.email "ssh@paperclip.dev" && git add tracked.txt untracked.txt && git commit -m "remote update" >/dev/null && printf "remote dirty\\n" > tracked.txt && printf "remote extra\\n" > remote-only.txt'`,
{ timeoutMs: 30_000, maxBuffer: 256 * 1024 },
);
await restoreWorkspaceFromSshExecution({
spec,
localDir: localRepo,
remoteDir: started.workspaceDir,
});
const restoredHead = await git(localRepo, ["rev-parse", "HEAD"]);
expect(restoredHead).not.toBe(originalHead);
expect(await git(localRepo, ["log", "-1", "--pretty=%s"])).toBe("remote update");
expect(await git(localRepo, ["status", "--short"])).toContain("M tracked.txt");
expect(await git(localRepo, ["status", "--short"])).not.toContain("._tracked.txt");
});
});

File diff suppressed because it is too large

View File

@@ -2,6 +2,9 @@
// Minimal adapter-facing interfaces (no drizzle dependency)
// ---------------------------------------------------------------------------
import type { SshRemoteExecutionSpec } from "./ssh.js";
import type { AdapterExecutionTarget } from "./execution-target.js";
export interface AdapterAgent {
id: string;
companyId: string;
@@ -61,12 +64,16 @@ export interface AdapterRuntimeServiceReport {
healthStatus?: "unknown" | "healthy" | "unhealthy";
}
export type AdapterExecutionErrorFamily = "transient_upstream";
export interface AdapterExecutionResult {
exitCode: number | null;
signal: string | null;
timedOut: boolean;
errorMessage?: string | null;
errorCode?: string | null;
errorFamily?: AdapterExecutionErrorFamily | null;
retryNotBefore?: string | null;
errorMeta?: Record<string, unknown>;
usage?: UsageSummary;
/**
@@ -118,6 +125,14 @@ export interface AdapterExecutionContext {
runtime: AdapterRuntime;
config: Record<string, unknown>;
context: Record<string, unknown>;
executionTarget?: AdapterExecutionTarget | null;
/**
* Legacy remote transport view. Prefer `executionTarget`, which is the
* provider-neutral contract produced by core runtime code.
*/
executionTransport?: {
remoteExecution?: Record<string, unknown> | null;
};
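// Illustrative usage: adapters read the provider-neutral target first and fall back
// to the legacy transport (see the Claude adapter changes below), e.g.
//   const target = readAdapterExecutionTarget({
//     executionTarget: ctx.executionTarget,
//     legacyRemoteExecution: ctx.executionTransport?.remoteExecution,
//   });
//   if (adapterExecutionTargetIsRemote(target)) {
//     // run through runAdapterExecutionTargetProcess rather than a local spawn
//   }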
onLog: (stream: "stdout" | "stderr", chunk: string) => Promise<void>;
onMeta?: (meta: AdapterInvocationMeta) => Promise<void>;
onSpawn?: (meta: { pid: number; processGroupId: number | null; startedAt: string }) => Promise<void>;
@@ -300,6 +315,13 @@ export interface ServerAdapterModule {
supportsLocalAgentJwt?: boolean;
models?: AdapterModel[];
listModels?: () => Promise<AdapterModel[]>;
/**
* Optional explicit refresh hook for model discovery.
* Use this when the adapter caches discovered models and needs a bypass path
* so the UI can fetch newly released models without waiting for cache expiry
* or a Paperclip code update.
*/
refreshModels?: () => Promise<AdapterModel[]>;
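// Illustrative sketch of the listModels/refreshModels pairing described above;
// `fetchModelsFromProvider` is a hypothetical provider call and `cachedModels` a
// hypothetical module-level cache:
//   let cachedModels: AdapterModel[] | null = null;
//   async function listModels(): Promise<AdapterModel[]> {
//     if (!cachedModels) cachedModels = await fetchModelsFromProvider();
//     return cachedModels;
//   }
//   async function refreshModels(): Promise<AdapterModel[]> {
//     cachedModels = await fetchModelsFromProvider(); // always re-query, bypassing the cache
//     return cachedModels;
//   }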
agentConfigurationDoc?: string;
/**
* Optional lifecycle hook when an agent is approved/hired (join-request or hire_agent approval).
@@ -417,6 +439,7 @@ export interface CreateConfigValues {
workspaceBranchTemplate?: string;
worktreeParentDir?: string;
runtimeServicesJson?: string;
defaultEnvironmentId?: string;
maxTurnsPerRun: number;
heartbeatEnabled: boolean;
intervalSec: number;

View File

@@ -0,0 +1,262 @@
import { mkdir, mkdtemp, rm, writeFile } from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it, vi } from "vitest";
const {
runChildProcess,
ensureCommandResolvable,
resolveCommandForLogs,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
syncDirectoryToSsh,
} = vi.hoisted(() => ({
runChildProcess: vi.fn(async () => ({
exitCode: 0,
signal: null,
timedOut: false,
stdout: [
JSON.stringify({ type: "system", subtype: "init", session_id: "claude-session-1", model: "claude-sonnet" }),
JSON.stringify({ type: "assistant", session_id: "claude-session-1", message: { content: [{ type: "text", text: "hello" }] } }),
JSON.stringify({ type: "result", session_id: "claude-session-1", result: "hello", usage: { input_tokens: 1, cache_read_input_tokens: 0, output_tokens: 1 } }),
].join("\n"),
stderr: "",
pid: 123,
startedAt: new Date().toISOString(),
})),
ensureCommandResolvable: vi.fn(async () => undefined),
resolveCommandForLogs: vi.fn(async () => "ssh://fixture@127.0.0.1:2222/remote/workspace :: claude"),
prepareWorkspaceForSshExecution: vi.fn(async () => undefined),
restoreWorkspaceFromSshExecution: vi.fn(async () => undefined),
syncDirectoryToSsh: vi.fn(async () => undefined),
}));
vi.mock("@paperclipai/adapter-utils/server-utils", async () => {
const actual = await vi.importActual<typeof import("@paperclipai/adapter-utils/server-utils")>(
"@paperclipai/adapter-utils/server-utils",
);
return {
...actual,
ensureCommandResolvable,
resolveCommandForLogs,
runChildProcess,
};
});
vi.mock("@paperclipai/adapter-utils/ssh", async () => {
const actual = await vi.importActual<typeof import("@paperclipai/adapter-utils/ssh")>(
"@paperclipai/adapter-utils/ssh",
);
return {
...actual,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
syncDirectoryToSsh,
};
});
import { execute } from "./execute.js";
describe("claude remote execution", () => {
const cleanupDirs: string[] = [];
afterEach(async () => {
vi.clearAllMocks();
while (cleanupDirs.length > 0) {
const dir = cleanupDirs.pop();
if (!dir) continue;
await rm(dir, { recursive: true, force: true }).catch(() => undefined);
}
});
it("prepares the workspace, syncs Claude runtime assets, and restores workspace changes for remote SSH execution", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-claude-remote-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
const instructionsPath = path.join(rootDir, "instructions.md");
await mkdir(workspaceDir, { recursive: true });
await writeFile(instructionsPath, "Use the remote workspace.\n", "utf8");
await execute({
runId: "run-1",
agent: {
id: "agent-1",
companyId: "company-1",
name: "Claude Coder",
adapterType: "claude_local",
adapterConfig: {},
},
runtime: {
sessionId: null,
sessionParams: null,
sessionDisplayId: null,
taskKey: null,
},
config: {
command: "claude",
instructionsFilePath: instructionsPath,
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
paperclipApiUrl: "http://198.51.100.10:3102",
},
},
onLog: async () => {},
});
expect(prepareWorkspaceForSshExecution).toHaveBeenCalledTimes(1);
expect(prepareWorkspaceForSshExecution).toHaveBeenCalledWith(expect.objectContaining({
localDir: workspaceDir,
remoteDir: "/remote/workspace",
}));
expect(syncDirectoryToSsh).toHaveBeenCalledTimes(1);
expect(syncDirectoryToSsh).toHaveBeenCalledWith(expect.objectContaining({
remoteDir: "/remote/workspace/.paperclip-runtime/claude/skills",
followSymlinks: true,
}));
expect(runChildProcess).toHaveBeenCalledTimes(1);
const call = runChildProcess.mock.calls[0] as unknown as
| [string, string, string[], { env: Record<string, string>; remoteExecution?: { remoteCwd: string } | null }]
| undefined;
expect(call?.[2]).toContain("--append-system-prompt-file");
expect(call?.[2]).toContain("/remote/workspace/.paperclip-runtime/claude/skills/agent-instructions.md");
expect(call?.[2]).toContain("--add-dir");
expect(call?.[2]).toContain("/remote/workspace/.paperclip-runtime/claude/skills");
expect(call?.[3].env.PAPERCLIP_API_URL).toBe("http://198.51.100.10:3102");
expect(call?.[3].remoteExecution?.remoteCwd).toBe("/remote/workspace");
expect(restoreWorkspaceFromSshExecution).toHaveBeenCalledTimes(1);
expect(restoreWorkspaceFromSshExecution).toHaveBeenCalledWith(expect.objectContaining({
localDir: workspaceDir,
remoteDir: "/remote/workspace",
}));
});
it("does not resume saved Claude sessions for remote SSH execution without a matching remote identity", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-claude-remote-resume-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
await mkdir(workspaceDir, { recursive: true });
await execute({
runId: "run-ssh-no-resume",
agent: {
id: "agent-1",
companyId: "company-1",
name: "Claude Coder",
adapterType: "claude_local",
adapterConfig: {},
},
runtime: {
sessionId: "session-123",
sessionParams: {
sessionId: "session-123",
cwd: "/remote/workspace",
},
sessionDisplayId: "session-123",
taskKey: null,
},
config: {
command: "claude",
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
},
},
onLog: async () => {},
});
expect(runChildProcess).toHaveBeenCalledTimes(1);
const call = runChildProcess.mock.calls[0] as unknown as [string, string, string[]] | undefined;
expect(call?.[2]).not.toContain("--resume");
});
it("resumes saved Claude sessions for remote SSH execution when the remote identity matches", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-claude-remote-resume-match-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
await mkdir(workspaceDir, { recursive: true });
await execute({
runId: "run-ssh-resume",
agent: {
id: "agent-1",
companyId: "company-1",
name: "Claude Coder",
adapterType: "claude_local",
adapterConfig: {},
},
runtime: {
sessionId: "session-123",
sessionParams: {
sessionId: "session-123",
cwd: "/remote/workspace",
remoteExecution: {
transport: "ssh",
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteCwd: "/remote/workspace",
},
},
sessionDisplayId: "session-123",
taskKey: null,
},
config: {
command: "claude",
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
},
},
onLog: async () => {},
});
expect(runChildProcess).toHaveBeenCalledTimes(1);
const call = runChildProcess.mock.calls[0] as unknown as [string, string, string[]] | undefined;
expect(call?.[2]).toContain("--resume");
expect(call?.[2]).toContain("session-123");
});
});

View File

@@ -3,6 +3,20 @@ import path from "node:path";
import { fileURLToPath } from "node:url";
import type { AdapterExecutionContext, AdapterExecutionResult } from "@paperclipai/adapter-utils";
import type { RunProcessResult } from "@paperclipai/adapter-utils/server-utils";
import {
adapterExecutionTargetIsRemote,
adapterExecutionTargetPaperclipApiUrl,
adapterExecutionTargetRemoteCwd,
adapterExecutionTargetSessionIdentity,
adapterExecutionTargetSessionMatches,
adapterExecutionTargetUsesManagedHome,
describeAdapterExecutionTarget,
ensureAdapterExecutionTargetCommandResolvable,
prepareAdapterExecutionTargetRuntime,
readAdapterExecutionTarget,
resolveAdapterExecutionTargetCommandForLogs,
runAdapterExecutionTargetProcess,
} from "@paperclipai/adapter-utils/execution-target";
import {
asString,
asNumber,
@@ -10,25 +24,25 @@ import {
asStringArray,
parseObject,
parseJson,
applyPaperclipWorkspaceEnv,
buildPaperclipEnv,
readPaperclipRuntimeSkillEntries,
joinPromptSections,
buildInvocationEnvForLogs,
ensureAbsoluteDirectory,
ensureCommandResolvable,
ensurePathInEnv,
resolveCommandForLogs,
renderTemplate,
renderPaperclipWakePrompt,
stringifyPaperclipWakePayload,
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
runChildProcess,
} from "@paperclipai/adapter-utils/server-utils";
import {
parseClaudeStreamJson,
describeClaudeFailure,
detectClaudeLoginRequired,
extractClaudeRetryNotBefore,
isClaudeMaxTurnsResult,
isClaudeTransientUpstreamError,
isClaudeUnknownSessionError,
} from "./parse.js";
import { resolveClaudeDesiredSkillNames } from "./skills.js";
@@ -42,6 +56,7 @@ interface ClaudeExecutionInput {
agent: AdapterExecutionContext["agent"];
config: Record<string, unknown>;
context: Record<string, unknown>;
executionTarget?: ReturnType<typeof readAdapterExecutionTarget>;
authToken?: string;
}
@@ -92,7 +107,7 @@ function resolveClaudeBillingType(env: Record<string, string>): "api" | "subscri
}
async function buildClaudeRuntimeConfig(input: ClaudeExecutionInput): Promise<ClaudeRuntimeConfig> {
const { runId, agent, config, context, authToken } = input;
const { runId, agent, config, context, executionTarget, authToken } = input;
const command = asString(config.command, "claude");
const workspaceContext = parseObject(context.paperclipWorkspace);
@@ -179,33 +194,17 @@ async function buildClaudeRuntimeConfig(input: ClaudeExecutionInput): Promise<Cl
if (wakePayloadJson) {
env.PAPERCLIP_WAKE_PAYLOAD_JSON = wakePayloadJson;
}
if (effectiveWorkspaceCwd) {
env.PAPERCLIP_WORKSPACE_CWD = effectiveWorkspaceCwd;
}
if (workspaceSource) {
env.PAPERCLIP_WORKSPACE_SOURCE = workspaceSource;
}
if (workspaceStrategy) {
env.PAPERCLIP_WORKSPACE_STRATEGY = workspaceStrategy;
}
if (workspaceId) {
env.PAPERCLIP_WORKSPACE_ID = workspaceId;
}
if (workspaceRepoUrl) {
env.PAPERCLIP_WORKSPACE_REPO_URL = workspaceRepoUrl;
}
if (workspaceRepoRef) {
env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
}
if (workspaceBranch) {
env.PAPERCLIP_WORKSPACE_BRANCH = workspaceBranch;
}
if (workspaceWorktreePath) {
env.PAPERCLIP_WORKSPACE_WORKTREE_PATH = workspaceWorktreePath;
}
if (agentHome) {
env.AGENT_HOME = agentHome;
}
applyPaperclipWorkspaceEnv(env, {
workspaceCwd: effectiveWorkspaceCwd,
workspaceSource,
workspaceStrategy,
workspaceId,
workspaceRepoUrl,
workspaceRepoRef,
workspaceBranch,
workspaceWorktreePath,
agentHome,
});
if (workspaceHints.length > 0) {
env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
}
@@ -218,6 +217,10 @@ async function buildClaudeRuntimeConfig(input: ClaudeExecutionInput): Promise<Cl
if (runtimePrimaryUrl) {
env.PAPERCLIP_RUNTIME_PRIMARY_URL = runtimePrimaryUrl;
}
const targetPaperclipApiUrl = adapterExecutionTargetPaperclipApiUrl(executionTarget);
if (targetPaperclipApiUrl) {
env.PAPERCLIP_API_URL = targetPaperclipApiUrl;
}
for (const [key, value] of Object.entries(envConfig)) {
if (typeof value === "string") env[key] = value;
@@ -228,8 +231,8 @@ async function buildClaudeRuntimeConfig(input: ClaudeExecutionInput): Promise<Cl
}
const runtimeEnv = ensurePathInEnv({ ...process.env, ...env });
await ensureCommandResolvable(command, cwd, runtimeEnv);
const resolvedCommand = await resolveCommandForLogs(command, cwd, runtimeEnv);
await ensureAdapterExecutionTargetCommandResolvable(command, executionTarget, cwd, runtimeEnv);
const resolvedCommand = await resolveAdapterExecutionTargetCommandForLogs(command, executionTarget, cwd, runtimeEnv);
const loggedEnv = buildInvocationEnvForLogs(env, {
runtimeEnv,
includeRuntimeKeys: ["HOME", "CLAUDE_CONFIG_DIR"],
@@ -276,7 +279,7 @@ export async function runClaudeLogin(input: {
authToken: input.authToken,
});
const proc = await runChildProcess(input.runId, runtime.command, ["login"], {
const proc = await runAdapterExecutionTargetProcess(input.runId, null, runtime.command, ["login"], {
cwd: runtime.cwd,
env: runtime.env,
timeoutSec: runtime.timeoutSec,
@@ -298,6 +301,11 @@ export async function runClaudeLogin(input: {
export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExecutionResult> {
const { runId, agent, runtime, config, context, onLog, onMeta, onSpawn, authToken } = ctx;
const executionTarget = readAdapterExecutionTarget({
executionTarget: ctx.executionTarget,
legacyRemoteExecution: ctx.executionTransport?.remoteExecution,
});
const executionTargetIsRemote = adapterExecutionTargetIsRemote(executionTarget);
const promptTemplate = asString(
config.promptTemplate,
@@ -315,6 +323,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
agent,
config,
context,
executionTarget,
authToken,
});
const {
@@ -330,6 +339,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
graceSec,
extraArgs,
} = runtimeConfig;
const effectiveExecutionCwd = adapterExecutionTargetRemoteCwd(executionTarget, cwd);
const terminalResultCleanupGraceMs = Math.max(
0,
asNumber(config.terminalResultCleanupGraceMs, 5_000),
@@ -369,27 +379,74 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
instructionsContents: combinedInstructionsContents,
onLog,
});
const effectiveInstructionsFilePath = promptBundle.instructionsFilePath ?? undefined;
const preparedExecutionTargetRuntime = executionTargetIsRemote
? await (async () => {
await onLog(
"stdout",
`[paperclip] Syncing workspace and Claude runtime assets to ${describeAdapterExecutionTarget(executionTarget)}.\n`,
);
return await prepareAdapterExecutionTargetRuntime({
target: executionTarget,
adapterKey: "claude",
workspaceLocalDir: cwd,
assets: [
{
key: "skills",
localDir: promptBundle.addDir,
followSymlinks: true,
},
],
});
})()
: null;
const restoreRemoteWorkspace = preparedExecutionTargetRuntime
? () => preparedExecutionTargetRuntime.restoreWorkspace()
: null;
const effectivePromptBundleAddDir = executionTargetIsRemote
? preparedExecutionTargetRuntime?.assetDirs.skills ??
path.posix.join(effectiveExecutionCwd, ".paperclip-runtime", "claude", "skills")
: promptBundle.addDir;
const effectiveInstructionsFilePath = promptBundle.instructionsFilePath
? executionTargetIsRemote
? path.posix.join(effectivePromptBundleAddDir, path.basename(promptBundle.instructionsFilePath))
: promptBundle.instructionsFilePath
: undefined;
const runtimeSessionParams = parseObject(runtime.sessionParams);
const runtimeSessionId = asString(runtimeSessionParams.sessionId, runtime.sessionId ?? "");
const runtimeSessionCwd = asString(runtimeSessionParams.cwd, "");
const runtimeRemoteExecution = parseObject(runtimeSessionParams.remoteExecution);
const runtimePromptBundleKey = asString(runtimeSessionParams.promptBundleKey, "");
const hasMatchingPromptBundle =
runtimePromptBundleKey.length === 0 || runtimePromptBundleKey === promptBundle.bundleKey;
const canResumeSession =
runtimeSessionId.length > 0 &&
hasMatchingPromptBundle &&
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(cwd));
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(effectiveExecutionCwd)) &&
adapterExecutionTargetSessionMatches(runtimeRemoteExecution, executionTarget);
const sessionId = canResumeSession ? runtimeSessionId : null;
if (
executionTargetIsRemote &&
runtimeSessionId &&
runtimeSessionCwd.length > 0 &&
path.resolve(runtimeSessionCwd) !== path.resolve(cwd)
!canResumeSession
) {
await onLog(
"stdout",
`[paperclip] Claude session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${cwd}".\n`,
`[paperclip] Claude session "${runtimeSessionId}" does not match the current remote execution identity and will not be resumed in "${effectiveExecutionCwd}". Starting a fresh remote session.\n`,
);
} else if (
runtimeSessionId &&
runtimeSessionCwd.length > 0 &&
path.resolve(runtimeSessionCwd) !== path.resolve(effectiveExecutionCwd)
) {
await onLog(
"stdout",
`[paperclip] Claude session "${runtimeSessionId}" does not match the current remote execution identity and will not be resumed in "${effectiveExecutionCwd}". Starting a fresh remote session.\n`,
);
} else if (runtimeSessionId && !canResumeSession) {
await onLog(
"stdout",
`[paperclip] Claude session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${effectiveExecutionCwd}".\n`,
);
}
if (runtimeSessionId && runtimePromptBundleKey.length > 0 && runtimePromptBundleKey !== promptBundle.bundleKey) {
@@ -416,10 +473,12 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const shouldUseResumeDeltaPrompt = Boolean(sessionId) && wakePrompt.length > 0;
const renderedPrompt = shouldUseResumeDeltaPrompt ? "" : renderTemplate(promptTemplate, templateData);
const sessionHandoffNote = asString(context.paperclipSessionHandoffMarkdown, "").trim();
const taskContextNote = asString(context.paperclipTaskMarkdown, "").trim();
const prompt = joinPromptSections([
renderedBootstrapPrompt,
wakePrompt,
sessionHandoffNote,
taskContextNote,
renderedPrompt,
]);
const promptMetrics = {
@@ -427,6 +486,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
bootstrapPromptChars: renderedBootstrapPrompt.length,
wakePromptChars: wakePrompt.length,
sessionHandoffChars: sessionHandoffNote.length,
taskContextChars: taskContextNote.length,
heartbeatPromptChars: renderedPrompt.length,
};
@@ -452,7 +512,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (attemptInstructionsFilePath && !resumeSessionId) {
args.push("--append-system-prompt-file", attemptInstructionsFilePath);
}
args.push("--add-dir", promptBundle.addDir);
args.push("--add-dir", effectivePromptBundleAddDir);
if (extraArgs.length > 0) args.push(...extraArgs);
return args;
};
@@ -489,7 +549,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
await onMeta({
adapterType: "claude_local",
command: resolvedCommand,
cwd,
cwd: effectiveExecutionCwd,
commandArgs: args,
commandNotes,
env: loggedEnv,
@@ -499,7 +559,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
});
}
const proc = await runChildProcess(runId, command, args, {
const proc = await runAdapterExecutionTargetProcess(runId, executionTarget, command, args, {
cwd,
env,
stdin: prompt,
@@ -552,16 +612,48 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
}
if (!parsed) {
const fallbackErrorMessage = parseFallbackErrorMessage(proc);
const transientUpstream =
!loginMeta.requiresLogin &&
(proc.exitCode ?? 0) !== 0 &&
isClaudeTransientUpstreamError({
parsed: null,
stdout: proc.stdout,
stderr: proc.stderr,
errorMessage: fallbackErrorMessage,
});
const transientRetryNotBefore = transientUpstream
? extractClaudeRetryNotBefore({
parsed: null,
stdout: proc.stdout,
stderr: proc.stderr,
errorMessage: fallbackErrorMessage,
})
: null;
const errorCode = loginMeta.requiresLogin
? "claude_auth_required"
: transientUpstream
? "claude_transient_upstream"
: null;
return {
exitCode: proc.exitCode,
signal: proc.signal,
timedOut: false,
errorMessage: parseFallbackErrorMessage(proc),
errorCode: loginMeta.requiresLogin ? "claude_auth_required" : null,
errorMessage: fallbackErrorMessage,
errorCode,
errorFamily: transientUpstream ? "transient_upstream" : null,
retryNotBefore: transientRetryNotBefore ? transientRetryNotBefore.toISOString() : null,
errorMeta,
resultJson: {
stdout: proc.stdout,
stderr: proc.stderr,
...(transientUpstream ? { errorFamily: "transient_upstream" } : {}),
...(transientRetryNotBefore
? { retryNotBefore: transientRetryNotBefore.toISOString() }
: {}),
...(transientRetryNotBefore
? { transientRetryNotBefore: transientRetryNotBefore.toISOString() }
: {}),
},
clearSession: Boolean(opts.clearSessionOnMissingSession),
};
@@ -584,24 +676,61 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const resolvedSessionParams = resolvedSessionId
? ({
sessionId: resolvedSessionId,
cwd,
cwd: effectiveExecutionCwd,
promptBundleKey: promptBundle.bundleKey,
...(executionTargetIsRemote
? {
remoteExecution: adapterExecutionTargetSessionIdentity(executionTarget),
}
: {}),
...(workspaceId ? { workspaceId } : {}),
...(workspaceRepoUrl ? { repoUrl: workspaceRepoUrl } : {}),
...(workspaceRepoRef ? { repoRef: workspaceRepoRef } : {}),
} as Record<string, unknown>)
: null;
const clearSessionForMaxTurns = isClaudeMaxTurnsResult(parsed);
const parsedIsError = asBoolean(parsed.is_error, false);
const failed = (proc.exitCode ?? 0) !== 0 || parsedIsError;
const errorMessage = failed
? describeClaudeFailure(parsed) ?? `Claude exited with code ${proc.exitCode ?? -1}`
: null;
const transientUpstream =
failed &&
!loginMeta.requiresLogin &&
isClaudeTransientUpstreamError({
parsed,
stdout: proc.stdout,
stderr: proc.stderr,
errorMessage,
});
const transientRetryNotBefore = transientUpstream
? extractClaudeRetryNotBefore({
parsed,
stdout: proc.stdout,
stderr: proc.stderr,
errorMessage,
})
: null;
const resolvedErrorCode = loginMeta.requiresLogin
? "claude_auth_required"
: transientUpstream
? "claude_transient_upstream"
: null;
const mergedResultJson: Record<string, unknown> = {
...parsed,
...(transientUpstream ? { errorFamily: "transient_upstream" } : {}),
...(transientRetryNotBefore ? { retryNotBefore: transientRetryNotBefore.toISOString() } : {}),
...(transientRetryNotBefore ? { transientRetryNotBefore: transientRetryNotBefore.toISOString() } : {}),
};
return {
exitCode: proc.exitCode,
signal: proc.signal,
timedOut: false,
errorMessage:
(proc.exitCode ?? 0) === 0
? null
: describeClaudeFailure(parsed) ?? `Claude exited with code ${proc.exitCode ?? -1}`,
errorCode: loginMeta.requiresLogin ? "claude_auth_required" : null,
errorMessage,
errorCode: resolvedErrorCode,
errorFamily: transientUpstream ? "transient_upstream" : null,
retryNotBefore: transientRetryNotBefore ? transientRetryNotBefore.toISOString() : null,
errorMeta,
usage,
sessionId: resolvedSessionId,
@@ -612,27 +741,37 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
model: parsedStream.model || asString(parsed.model, model),
billingType,
costUsd: parsedStream.costUsd ?? asNumber(parsed.total_cost_usd, 0),
resultJson: parsed,
resultJson: mergedResultJson,
summary: parsedStream.summary || asString(parsed.result, ""),
clearSession: clearSessionForMaxTurns || Boolean(opts.clearSessionOnMissingSession && !resolvedSessionId),
};
};
const initial = await runAttempt(sessionId ?? null);
if (
sessionId &&
!initial.proc.timedOut &&
(initial.proc.exitCode ?? 0) !== 0 &&
initial.parsed &&
isClaudeUnknownSessionError(initial.parsed)
) {
await onLog(
"stdout",
`[paperclip] Claude resume session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toAdapterResult(retry, { fallbackSessionId: null, clearSessionOnMissingSession: true });
}
try {
const initial = await runAttempt(sessionId ?? null);
if (
sessionId &&
!initial.proc.timedOut &&
(initial.proc.exitCode ?? 0) !== 0 &&
initial.parsed &&
isClaudeUnknownSessionError(initial.parsed)
) {
await onLog(
"stdout",
`[paperclip] Claude resume session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toAdapterResult(retry, { fallbackSessionId: null, clearSessionOnMissingSession: true });
}
return toAdapterResult(initial, { fallbackSessionId: runtimeSessionId || runtime.sessionId });
return toAdapterResult(initial, { fallbackSessionId: runtimeSessionId || runtime.sessionId });
} finally {
if (restoreRemoteWorkspace) {
await onLog(
"stdout",
`[paperclip] Restoring workspace changes from ${describeAdapterExecutionTarget(executionTarget)}.\n`,
);
await restoreRemoteWorkspace();
}
}
}

View File

@@ -0,0 +1,123 @@
import { describe, expect, it } from "vitest";
import {
extractClaudeRetryNotBefore,
isClaudeTransientUpstreamError,
} from "./parse.js";
describe("isClaudeTransientUpstreamError", () => {
it("classifies the 'out of extra usage' subscription window failure as transient", () => {
expect(
isClaudeTransientUpstreamError({
errorMessage: "You're out of extra usage · resets 4pm (America/Chicago)",
}),
).toBe(true);
expect(
isClaudeTransientUpstreamError({
parsed: {
is_error: true,
result: "You're out of extra usage. Resets at 4pm (America/Chicago).",
},
}),
).toBe(true);
});
it("classifies Anthropic API rate_limit_error and overloaded_error as transient", () => {
expect(
isClaudeTransientUpstreamError({
parsed: {
is_error: true,
errors: [{ type: "rate_limit_error", message: "Rate limit reached for requests." }],
},
}),
).toBe(true);
expect(
isClaudeTransientUpstreamError({
parsed: {
is_error: true,
errors: [{ type: "overloaded_error", message: "Overloaded" }],
},
}),
).toBe(true);
expect(
isClaudeTransientUpstreamError({
stderr: "HTTP 429: Too Many Requests",
}),
).toBe(true);
expect(
isClaudeTransientUpstreamError({
stderr: "Bedrock ThrottlingException: slow down",
}),
).toBe(true);
});
it("classifies the subscription 5-hour / weekly limit wording", () => {
expect(
isClaudeTransientUpstreamError({
errorMessage: "Claude usage limit reached — weekly limit reached. Try again in 2 days.",
}),
).toBe(true);
expect(
isClaudeTransientUpstreamError({
errorMessage: "5-hour limit reached.",
}),
).toBe(true);
});
it("does not classify login/auth failures as transient", () => {
expect(
isClaudeTransientUpstreamError({
stderr: "Please log in. Run `claude login` first.",
}),
).toBe(false);
});
it("does not classify max-turns or unknown-session as transient", () => {
expect(
isClaudeTransientUpstreamError({
parsed: { subtype: "error_max_turns", result: "Maximum turns reached." },
}),
).toBe(false);
expect(
isClaudeTransientUpstreamError({
parsed: {
result: "No conversation found with session id abc-123",
errors: [{ message: "No conversation found with session id abc-123" }],
},
}),
).toBe(false);
});
it("does not classify deterministic validation errors as transient", () => {
expect(
isClaudeTransientUpstreamError({
errorMessage: "Invalid request_error: Unknown parameter 'foo'.",
}),
).toBe(false);
});
});
describe("extractClaudeRetryNotBefore", () => {
it("parses the 'resets 4pm' hint in its explicit timezone", () => {
const now = new Date("2026-04-22T15:15:00.000Z");
const extracted = extractClaudeRetryNotBefore(
{ errorMessage: "You're out of extra usage · resets 4pm (America/Chicago)" },
now,
);
expect(extracted?.toISOString()).toBe("2026-04-22T21:00:00.000Z");
});
it("rolls forward past midnight when the reset time has already passed today", () => {
const now = new Date("2026-04-22T23:30:00.000Z");
const extracted = extractClaudeRetryNotBefore(
{ errorMessage: "Usage limit reached. Resets at 3:15 AM (UTC)." },
now,
);
expect(extracted?.toISOString()).toBe("2026-04-23T03:15:00.000Z");
});
it("returns null when no reset hint is present", () => {
expect(
extractClaudeRetryNotBefore({ errorMessage: "Overloaded. Try again later." }, new Date()),
).toBeNull();
});
});

View File

@@ -1,9 +1,19 @@
import type { UsageSummary } from "@paperclipai/adapter-utils";
import { asString, asNumber, parseObject, parseJson } from "@paperclipai/adapter-utils/server-utils";
import {
asString,
asNumber,
parseObject,
parseJson,
} from "@paperclipai/adapter-utils/server-utils";
const CLAUDE_AUTH_REQUIRED_RE = /(?:not\s+logged\s+in|please\s+log\s+in|please\s+run\s+`?claude\s+login`?|login\s+required|requires\s+login|unauthorized|authentication\s+required)/i;
const URL_RE = /(https?:\/\/[^\s'"`<>()[\]{};,!?]+[^\s'"`<>()[\]{};,!.?:]+)/gi;
const CLAUDE_TRANSIENT_UPSTREAM_RE =
/(?:rate[-\s]?limit(?:ed)?|rate_limit_error|too\s+many\s+requests|\b429\b|overloaded(?:_error)?|server\s+overloaded|service\s+unavailable|\b503\b|\b529\b|high\s+demand|try\s+again\s+later|temporarily\s+unavailable|throttl(?:ed|ing)|throttlingexception|servicequotaexceededexception|out\s+of\s+extra\s+usage|extra\s+usage\b|claude\s+usage\s+limit\s+reached|5[-\s]?hour\s+limit\s+reached|weekly\s+limit\s+reached|usage\s+limit\s+reached|usage\s+cap\s+reached)/i;
const CLAUDE_EXTRA_USAGE_RESET_RE =
/(?:out\s+of\s+extra\s+usage|extra\s+usage|usage\s+limit\s+reached|usage\s+cap\s+reached|5[-\s]?hour\s+limit\s+reached|weekly\s+limit\s+reached|claude\s+usage\s+limit\s+reached)[\s\S]{0,80}?\bresets?\s+(?:at\s+)?([^\n()]+?)(?:\s*\(([^)]+)\))?(?:[.!]|\n|$)/i;
export function parseClaudeStreamJson(stdout: string) {
let sessionId: string | null = null;
let model = "";
@@ -177,3 +187,197 @@ export function isClaudeUnknownSessionError(parsed: Record<string, unknown>): bo
/no conversation found with session id|unknown session|session .* not found/i.test(msg),
);
}
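// Gathers every error-bearing surface (explicit message, parsed result text, structured
// parsed errors, stdout, stderr) into one trimmed, newline-joined haystack so the
// transient/reset regexes match no matter where the CLI reported the failure.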
function buildClaudeTransientHaystack(input: {
parsed?: Record<string, unknown> | null;
stdout?: string | null;
stderr?: string | null;
errorMessage?: string | null;
}): string {
const parsed = input.parsed ?? null;
const resultText = parsed ? asString(parsed.result, "") : "";
const parsedErrors = parsed ? extractClaudeErrorMessages(parsed) : [];
return [
input.errorMessage ?? "",
resultText,
...parsedErrors,
input.stdout ?? "",
input.stderr ?? "",
]
.join("\n")
.split(/\r?\n/)
.map((line) => line.trim())
.filter(Boolean)
.join("\n");
}
function readTimeZoneParts(date: Date, timeZone: string) {
const values = new Map(
new Intl.DateTimeFormat("en-US", {
timeZone,
hourCycle: "h23",
year: "numeric",
month: "2-digit",
day: "2-digit",
hour: "2-digit",
minute: "2-digit",
}).formatToParts(date).map((part) => [part.type, part.value]),
);
return {
year: Number.parseInt(values.get("year") ?? "", 10),
month: Number.parseInt(values.get("month") ?? "", 10),
day: Number.parseInt(values.get("day") ?? "", 10),
hour: Number.parseInt(values.get("hour") ?? "", 10),
minute: Number.parseInt(values.get("minute") ?? "", 10),
};
}
function normalizeResetTimeZone(timeZoneHint: string | null | undefined): string | null {
const normalized = timeZoneHint?.trim();
if (!normalized) return null;
if (/^(?:utc|gmt)$/i.test(normalized)) return "UTC";
try {
new Intl.DateTimeFormat("en-US", { timeZone: normalized }).format(new Date(0));
return normalized;
} catch {
return null;
}
}
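// Maps a wall-clock time in an IANA time zone to the UTC instant that displays that wall
// clock there. Starts from a naive UTC guess and iteratively corrects by the observed
// offset; returns null when no such instant exists (e.g. a time skipped by a DST jump).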
function dateFromTimeZoneWallClock(input: {
year: number;
month: number;
day: number;
hour: number;
minute: number;
timeZone: string;
}): Date | null {
let candidate = new Date(Date.UTC(input.year, input.month - 1, input.day, input.hour, input.minute, 0, 0));
const targetUtc = Date.UTC(input.year, input.month - 1, input.day, input.hour, input.minute, 0, 0);
for (let attempt = 0; attempt < 4; attempt += 1) {
const actual = readTimeZoneParts(candidate, input.timeZone);
const actualUtc = Date.UTC(actual.year, actual.month - 1, actual.day, actual.hour, actual.minute, 0, 0);
const offsetMs = targetUtc - actualUtc;
if (offsetMs === 0) break;
candidate = new Date(candidate.getTime() + offsetMs);
}
const verified = readTimeZoneParts(candidate, input.timeZone);
if (
verified.year !== input.year ||
verified.month !== input.month ||
verified.day !== input.day ||
verified.hour !== input.hour ||
verified.minute !== input.minute
) {
return null;
}
return candidate;
}
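// Finds the next occurrence of hour:minute in the hinted time zone strictly after `now`,
// rolling forward one calendar day when today's occurrence has already passed.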
function nextClockTimeInTimeZone(input: {
now: Date;
hour: number;
minute: number;
timeZoneHint: string;
}): Date | null {
const timeZone = normalizeResetTimeZone(input.timeZoneHint);
if (!timeZone) return null;
const nowParts = readTimeZoneParts(input.now, timeZone);
let retryAt = dateFromTimeZoneWallClock({
year: nowParts.year,
month: nowParts.month,
day: nowParts.day,
hour: input.hour,
minute: input.minute,
timeZone,
});
if (!retryAt) return null;
if (retryAt.getTime() <= input.now.getTime()) {
const nextDay = new Date(Date.UTC(nowParts.year, nowParts.month - 1, nowParts.day + 1, 0, 0, 0, 0));
retryAt = dateFromTimeZoneWallClock({
year: nextDay.getUTCFullYear(),
month: nextDay.getUTCMonth() + 1,
day: nextDay.getUTCDate(),
hour: input.hour,
minute: input.minute,
timeZone,
});
}
return retryAt;
}
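// Parses reset hints like "4pm" or "3:15 AM". When an explicit time-zone hint resolves
// (e.g. "(America/Chicago)"), the reset is computed in that zone; unresolvable or missing
// hints fall back to the server's local clock.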
function parseClaudeResetClockTime(clockText: string, now: Date, timeZoneHint?: string | null): Date | null {
const normalized = clockText.trim().replace(/\s+/g, " ");
const match = normalized.match(/^(\d{1,2})(?::(\d{2}))?\s*([ap])\.?\s*m\.?/i);
if (!match) return null;
const hour12 = Number.parseInt(match[1] ?? "", 10);
const minute = Number.parseInt(match[2] ?? "0", 10);
if (!Number.isInteger(hour12) || hour12 < 1 || hour12 > 12) return null;
if (!Number.isInteger(minute) || minute < 0 || minute > 59) return null;
let hour24 = hour12 % 12;
if ((match[3] ?? "").toLowerCase() === "p") hour24 += 12;
if (timeZoneHint) {
const explicitRetryAt = nextClockTimeInTimeZone({
now,
hour: hour24,
minute,
timeZoneHint,
});
if (explicitRetryAt) return explicitRetryAt;
}
const retryAt = new Date(now);
retryAt.setHours(hour24, minute, 0, 0);
if (retryAt.getTime() <= now.getTime()) {
retryAt.setDate(retryAt.getDate() + 1);
}
return retryAt;
}
export function extractClaudeRetryNotBefore(
input: {
parsed?: Record<string, unknown> | null;
stdout?: string | null;
stderr?: string | null;
errorMessage?: string | null;
},
now = new Date(),
): Date | null {
const haystack = buildClaudeTransientHaystack(input);
const match = haystack.match(CLAUDE_EXTRA_USAGE_RESET_RE);
if (!match) return null;
return parseClaudeResetClockTime(match[1] ?? "", now, match[2]);
}
export function isClaudeTransientUpstreamError(input: {
parsed?: Record<string, unknown> | null;
stdout?: string | null;
stderr?: string | null;
errorMessage?: string | null;
}): boolean {
const parsed = input.parsed ?? null;
// Deterministic failures are handled by their own classifiers.
if (parsed && (isClaudeMaxTurnsResult(parsed) || isClaudeUnknownSessionError(parsed))) {
return false;
}
const loginMeta = detectClaudeLoginRequired({
parsed,
stdout: input.stdout ?? "",
stderr: input.stderr ?? "",
});
if (loginMeta.requiresLogin) return false;
const haystack = buildClaudeTransientHaystack(input);
if (!haystack) return false;
return CLAUDE_TRANSIENT_UPSTREAM_RE.test(haystack);
}
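A minimal sketch (not part of this diff) of how a caller might fold the two exports above into the errorFamily/retryNotBefore fields the adapter results carry elsewhere in this change; the helper name and return shape are illustrative assumptions:
// Sketch: classify a failed Claude attempt and derive an optional retry timestamp.
function classifyClaudeFailure(
  input: { stderr?: string | null; errorMessage?: string | null },
  now = new Date(),
) {
  const transient = isClaudeTransientUpstreamError(input);
  const retryNotBefore = transient ? extractClaudeRetryNotBefore(input, now) : null;
  return {
    errorFamily: transient ? "transient_upstream" : null,
    retryNotBefore: retryNotBefore ? retryNotBefore.toISOString() : null,
  };
}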

View File

@@ -0,0 +1,7 @@
import { defineConfig } from "vitest/config";
export default defineConfig({
test: {
environment: "node",
},
});

View File

@@ -4,7 +4,23 @@ export const DEFAULT_CODEX_LOCAL_MODEL = "gpt-5.3-codex";
export const DEFAULT_CODEX_LOCAL_BYPASS_APPROVALS_AND_SANDBOX = true;
export const CODEX_LOCAL_FAST_MODE_SUPPORTED_MODELS = ["gpt-5.4"] as const;
function normalizeModelId(model: string | null | undefined): string {
return typeof model === "string" ? model.trim() : "";
}
export function isCodexLocalKnownModel(model: string | null | undefined): boolean {
const normalizedModel = normalizeModelId(model);
if (!normalizedModel) return false;
return models.some((entry) => entry.id === normalizedModel);
}
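// A "manual" model is any non-empty ID that is not in the built-in model list; fast-mode
// overrides are passed through unchanged for such models.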
export function isCodexLocalManualModel(model: string | null | undefined): boolean {
const normalizedModel = normalizeModelId(model);
return Boolean(normalizedModel) && !isCodexLocalKnownModel(normalizedModel);
}
export function isCodexLocalFastModeSupported(model: string | null | undefined): boolean {
if (isCodexLocalManualModel(model)) return true;
const normalizedModel = typeof model === "string" ? model.trim() : "";
return CODEX_LOCAL_FAST_MODE_SUPPORTED_MODELS.includes(
normalizedModel as (typeof CODEX_LOCAL_FAST_MODE_SUPPORTED_MODELS)[number],
@@ -35,7 +51,7 @@ Core fields:
- modelReasoningEffort (string, optional): reasoning effort override (minimal|low|medium|high|xhigh) passed via -c model_reasoning_effort=...
- promptTemplate (string, optional): run prompt template
- search (boolean, optional): run codex with --search
- fastMode (boolean, optional): enable Codex Fast mode; currently supported on GPT-5.4 only and consumes credits faster
- fastMode (boolean, optional): enable Codex Fast mode; supported on GPT-5.4 and passed through for manual model IDs
- dangerouslyBypassApprovalsAndSandbox (boolean, optional): run with bypass flag
- command (string, optional): defaults to "codex"
- extraArgs (string[], optional): additional CLI args
@@ -54,6 +70,6 @@ Notes:
- Paperclip injects desired local skills into the effective CODEX_HOME/skills/ directory at execution time so Codex can discover "$paperclip" and related skills without polluting the project working directory. In managed-home mode (the default) this is ~/.paperclip/instances/<id>/companies/<companyId>/codex-home/skills/; when CODEX_HOME is explicitly overridden in adapter config, that override is used instead.
- Unless explicitly overridden in adapter config, Paperclip runs Codex with a per-company managed CODEX_HOME under the active Paperclip instance and seeds auth/config from the shared Codex home (the CODEX_HOME env var, when set, or ~/.codex).
- Some model/tool combinations reject certain effort levels (for example minimal with web search enabled).
- Fast mode is currently supported on GPT-5.4 only. When enabled, Paperclip applies \`service_tier="fast"\` and \`features.fast_mode=true\`.
- Fast mode is supported on GPT-5.4 and manual model IDs. When enabled for those models, Paperclip applies \`service_tier="fast"\` and \`features.fast_mode=true\`.
- When Paperclip realizes a workspace/runtime for a run, it injects PAPERCLIP_WORKSPACE_* and PAPERCLIP_RUNTIME_* env vars for agent-side tooling.
`;
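For orientation, a hypothetical adapter config exercising the documented fields might look like the following; the values are illustrative only, with "gpt-5.4" chosen because it is the documented fast-mode model:
{
  "model": "gpt-5.4",
  "fastMode": true,
  "modelReasoningEffort": "medium",
  "search": true,
  "command": "codex"
}
With fastMode enabled on a supported or manual model, Paperclip applies service_tier="fast" and features.fast_mode=true as described in the notes above.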

View File

@@ -26,6 +26,28 @@ describe("buildCodexExecArgs", () => {
]);
});
it("enables Codex fast mode overrides for manual models", () => {
const result = buildCodexExecArgs({
model: "gpt-5.5",
fastMode: true,
});
expect(result.fastModeRequested).toBe(true);
expect(result.fastModeApplied).toBe(true);
expect(result.fastModeIgnoredReason).toBeNull();
expect(result.args).toEqual([
"exec",
"--json",
"--model",
"gpt-5.5",
"-c",
'service_tier="fast"',
"-c",
"features.fast_mode=true",
"-",
]);
});
it("ignores fast mode for unsupported models", () => {
const result = buildCodexExecArgs({
model: "gpt-5.3-codex",
@@ -34,7 +56,9 @@ describe("buildCodexExecArgs", () => {
expect(result.fastModeRequested).toBe(true);
expect(result.fastModeApplied).toBe(false);
expect(result.fastModeIgnoredReason).toContain("currently only supported on gpt-5.4");
expect(result.fastModeIgnoredReason).toContain(
"currently only supported on gpt-5.4 or manually configured model IDs",
);
expect(result.args).toEqual([
"exec",
"--json",

View File

@@ -25,7 +25,7 @@ function asRecord(value: unknown): Record<string, unknown> {
}
function formatFastModeSupportedModels(): string {
return CODEX_LOCAL_FAST_MODE_SUPPORTED_MODELS.join(", ");
return `${CODEX_LOCAL_FAST_MODE_SUPPORTED_MODELS.join(", ")} or manually configured model IDs`;
}
export function buildCodexExecArgs(

View File

@@ -0,0 +1,359 @@
import { mkdir, mkdtemp, rm, writeFile } from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it, vi } from "vitest";
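// vi.hoisted creates these spies before the vi.mock factories below are hoisted and run,
// so the factories can close over them safely.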
const {
runChildProcess,
ensureCommandResolvable,
resolveCommandForLogs,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
syncDirectoryToSsh,
} = vi.hoisted(() => ({
runChildProcess: vi.fn(async () => ({
exitCode: 1,
signal: null,
timedOut: false,
stdout: "",
stderr: "remote failure",
pid: 123,
startedAt: new Date().toISOString(),
})),
ensureCommandResolvable: vi.fn(async () => undefined),
resolveCommandForLogs: vi.fn(async () => "/usr/bin/codex"),
prepareWorkspaceForSshExecution: vi.fn(async () => undefined),
restoreWorkspaceFromSshExecution: vi.fn(async () => undefined),
syncDirectoryToSsh: vi.fn(async () => undefined),
}));
vi.mock("@paperclipai/adapter-utils/server-utils", async () => {
const actual = await vi.importActual<typeof import("@paperclipai/adapter-utils/server-utils")>(
"@paperclipai/adapter-utils/server-utils",
);
return {
...actual,
ensureCommandResolvable,
resolveCommandForLogs,
runChildProcess,
};
});
vi.mock("@paperclipai/adapter-utils/ssh", async () => {
const actual = await vi.importActual<typeof import("@paperclipai/adapter-utils/ssh")>(
"@paperclipai/adapter-utils/ssh",
);
return {
...actual,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
syncDirectoryToSsh,
};
});
import { execute } from "./execute.js";
describe("codex remote execution", () => {
const cleanupDirs: string[] = [];
afterEach(async () => {
vi.clearAllMocks();
while (cleanupDirs.length > 0) {
const dir = cleanupDirs.pop();
if (!dir) continue;
await rm(dir, { recursive: true, force: true }).catch(() => undefined);
}
});
it("prepares the workspace, syncs CODEX_HOME, and restores workspace changes for remote SSH execution", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-codex-remote-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
const codexHomeDir = path.join(rootDir, "codex-home");
await mkdir(workspaceDir, { recursive: true });
await mkdir(codexHomeDir, { recursive: true });
await writeFile(path.join(rootDir, "instructions.md"), "Use the remote workspace.\n", "utf8");
await writeFile(path.join(codexHomeDir, "auth.json"), "{}", "utf8");
await execute({
runId: "run-1",
agent: {
id: "agent-1",
companyId: "company-1",
name: "CodexCoder",
adapterType: "codex_local",
adapterConfig: {},
},
runtime: {
sessionId: null,
sessionParams: null,
sessionDisplayId: null,
taskKey: null,
},
config: {
command: "codex",
env: {
CODEX_HOME: codexHomeDir,
},
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
},
},
onLog: async () => {},
});
expect(prepareWorkspaceForSshExecution).toHaveBeenCalledTimes(1);
expect(prepareWorkspaceForSshExecution).toHaveBeenCalledWith(expect.objectContaining({
localDir: workspaceDir,
remoteDir: "/remote/workspace",
}));
expect(syncDirectoryToSsh).toHaveBeenCalledTimes(1);
expect(syncDirectoryToSsh).toHaveBeenCalledWith(expect.objectContaining({
localDir: codexHomeDir,
remoteDir: "/remote/workspace/.paperclip-runtime/codex/home",
followSymlinks: true,
}));
expect(runChildProcess).toHaveBeenCalledTimes(1);
const call = runChildProcess.mock.calls[0] as unknown as
| [string, string, string[], { env: Record<string, string>; remoteExecution?: { remoteCwd: string } | null }]
| undefined;
expect(call?.[3].env.CODEX_HOME).toBe("/remote/workspace/.paperclip-runtime/codex/home");
expect(call?.[3].remoteExecution?.remoteCwd).toBe("/remote/workspace");
expect(restoreWorkspaceFromSshExecution).toHaveBeenCalledTimes(1);
expect(restoreWorkspaceFromSshExecution).toHaveBeenCalledWith(expect.objectContaining({
localDir: workspaceDir,
remoteDir: "/remote/workspace",
}));
});
it("does not resume saved Codex sessions for remote SSH execution without a matching remote identity", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-codex-remote-resume-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
const codexHomeDir = path.join(rootDir, "codex-home");
await mkdir(workspaceDir, { recursive: true });
await mkdir(codexHomeDir, { recursive: true });
await writeFile(path.join(codexHomeDir, "auth.json"), "{}", "utf8");
await execute({
runId: "run-ssh-no-resume",
agent: {
id: "agent-1",
companyId: "company-1",
name: "CodexCoder",
adapterType: "codex_local",
adapterConfig: {},
},
runtime: {
sessionId: "session-123",
sessionParams: {
sessionId: "session-123",
cwd: "/remote/workspace",
},
sessionDisplayId: "session-123",
taskKey: null,
},
config: {
command: "codex",
env: {
CODEX_HOME: codexHomeDir,
},
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
},
},
onLog: async () => {},
});
expect(runChildProcess).toHaveBeenCalledTimes(1);
const call = runChildProcess.mock.calls[0] as unknown as [string, string, string[]] | undefined;
expect(call?.[2]).toEqual([
"exec",
"--json",
"-",
]);
});
it("resumes saved Codex sessions for remote SSH execution when the remote identity matches", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-codex-remote-resume-match-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
const codexHomeDir = path.join(rootDir, "codex-home");
await mkdir(workspaceDir, { recursive: true });
await mkdir(codexHomeDir, { recursive: true });
await writeFile(path.join(codexHomeDir, "auth.json"), "{}", "utf8");
await execute({
runId: "run-ssh-resume",
agent: {
id: "agent-1",
companyId: "company-1",
name: "CodexCoder",
adapterType: "codex_local",
adapterConfig: {},
},
runtime: {
sessionId: "session-123",
sessionParams: {
sessionId: "session-123",
cwd: "/remote/workspace",
remoteExecution: {
transport: "ssh",
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteCwd: "/remote/workspace",
},
},
sessionDisplayId: "session-123",
taskKey: null,
},
config: {
command: "codex",
env: {
CODEX_HOME: codexHomeDir,
},
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
},
},
onLog: async () => {},
});
expect(runChildProcess).toHaveBeenCalledTimes(1);
const call = runChildProcess.mock.calls[0] as unknown as [string, string, string[]] | undefined;
expect(call?.[2]).toEqual([
"exec",
"--json",
"resume",
"session-123",
"-",
]);
});
it("uses the provider-neutral execution target contract for remote SSH execution", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-codex-target-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
const codexHomeDir = path.join(rootDir, "codex-home");
await mkdir(workspaceDir, { recursive: true });
await mkdir(codexHomeDir, { recursive: true });
await writeFile(path.join(codexHomeDir, "auth.json"), "{}", "utf8");
await execute({
runId: "run-target",
agent: {
id: "agent-1",
companyId: "company-1",
name: "CodexCoder",
adapterType: "codex_local",
adapterConfig: {},
},
runtime: {
sessionId: "session-123",
sessionParams: {
sessionId: "session-123",
cwd: "/remote/workspace",
remoteExecution: {
transport: "ssh",
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteCwd: "/remote/workspace",
},
},
sessionDisplayId: "session-123",
taskKey: null,
},
config: {
command: "codex",
env: {
CODEX_HOME: codexHomeDir,
},
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTarget: {
kind: "remote",
transport: "ssh",
remoteCwd: "/remote/workspace",
spec: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
},
},
onLog: async () => {},
});
expect(syncDirectoryToSsh).toHaveBeenCalledTimes(1);
expect(runChildProcess).toHaveBeenCalledTimes(1);
const call = runChildProcess.mock.calls[0] as unknown as
| [string, string, string[], { env: Record<string, string>; remoteExecution?: { remoteCwd: string } | null }]
| undefined;
expect(call?.[2]).toEqual([
"exec",
"--json",
"resume",
"session-123",
"-",
]);
expect(call?.[3].env.CODEX_HOME).toBe("/remote/workspace/.paperclip-runtime/codex/home");
expect(call?.[3].remoteExecution?.remoteCwd).toBe("/remote/workspace");
});
});

View File

@@ -2,28 +2,40 @@ import fs from "node:fs/promises";
import path from "node:path";
import { fileURLToPath } from "node:url";
import { inferOpenAiCompatibleBiller, type AdapterExecutionContext, type AdapterExecutionResult } from "@paperclipai/adapter-utils";
import {
adapterExecutionTargetIsRemote,
adapterExecutionTargetPaperclipApiUrl,
adapterExecutionTargetRemoteCwd,
adapterExecutionTargetSessionIdentity,
adapterExecutionTargetSessionMatches,
describeAdapterExecutionTarget,
ensureAdapterExecutionTargetCommandResolvable,
prepareAdapterExecutionTargetRuntime,
readAdapterExecutionTarget,
resolveAdapterExecutionTargetCommandForLogs,
runAdapterExecutionTargetProcess,
} from "@paperclipai/adapter-utils/execution-target";
import {
asString,
asNumber,
parseObject,
applyPaperclipWorkspaceEnv,
buildPaperclipEnv,
buildInvocationEnvForLogs,
ensureAbsoluteDirectory,
ensureCommandResolvable,
ensurePaperclipSkillSymlink,
ensurePathInEnv,
readPaperclipRuntimeSkillEntries,
resolveCommandForLogs,
resolvePaperclipDesiredSkillNames,
renderTemplate,
renderPaperclipWakePrompt,
stringifyPaperclipWakePayload,
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
joinPromptSections,
runChildProcess,
} from "@paperclipai/adapter-utils/server-utils";
import {
parseCodexJsonl,
extractCodexRetryNotBefore,
isCodexTransientUpstreamError,
isCodexUnknownSessionError,
} from "./parse.js";
@@ -305,6 +317,11 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const effectiveWorkspaceCwd = useConfiguredInsteadOfAgentHome ? "" : workspaceCwd;
const cwd = effectiveWorkspaceCwd || configuredCwd || process.cwd();
const envConfig = parseObject(config.env);
const executionTarget = readAdapterExecutionTarget({
executionTarget: ctx.executionTarget,
legacyRemoteExecution: ctx.executionTransport?.remoteExecution,
});
const executionTargetIsRemote = adapterExecutionTargetIsRemote(executionTarget);
const configuredCodexHome =
typeof envConfig.CODEX_HOME === "string" && envConfig.CODEX_HOME.trim().length > 0
? path.resolve(envConfig.CODEX_HOME.trim())
@@ -328,10 +345,37 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
desiredSkillNames,
},
);
const effectiveExecutionCwd = adapterExecutionTargetRemoteCwd(executionTarget, cwd);
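// For remote targets, sync the workspace plus CODEX_HOME to the host up front and keep a
// restore hook so workspace changes flow back to the local workspace in the finally block below.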
const preparedExecutionTargetRuntime = executionTargetIsRemote
? await (async () => {
await onLog(
"stdout",
`[paperclip] Syncing workspace and CODEX_HOME to ${describeAdapterExecutionTarget(executionTarget)}.\n`,
);
return await prepareAdapterExecutionTargetRuntime({
target: executionTarget,
adapterKey: "codex",
workspaceLocalDir: cwd,
assets: [
{
key: "home",
localDir: effectiveCodexHome,
followSymlinks: true,
},
],
});
})()
: null;
const restoreRemoteWorkspace = preparedExecutionTargetRuntime
? () => preparedExecutionTargetRuntime.restoreWorkspace()
: null;
const remoteCodexHome = executionTargetIsRemote
? preparedExecutionTargetRuntime?.assetDirs.home ??
path.posix.join(effectiveExecutionCwd, ".paperclip-runtime", "codex", "home")
: null;
const hasExplicitApiKey =
typeof envConfig.PAPERCLIP_API_KEY === "string" && envConfig.PAPERCLIP_API_KEY.trim().length > 0;
const env: Record<string, string> = { ...buildPaperclipEnv(agent) };
env.CODEX_HOME = effectiveCodexHome;
env.PAPERCLIP_RUN_ID = runId;
const wakeTaskId =
(typeof context.taskId === "string" && context.taskId.trim().length > 0 && context.taskId.trim()) ||
@@ -378,33 +422,17 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (wakePayloadJson) {
env.PAPERCLIP_WAKE_PAYLOAD_JSON = wakePayloadJson;
}
if (effectiveWorkspaceCwd) {
env.PAPERCLIP_WORKSPACE_CWD = effectiveWorkspaceCwd;
}
if (workspaceSource) {
env.PAPERCLIP_WORKSPACE_SOURCE = workspaceSource;
}
if (workspaceStrategy) {
env.PAPERCLIP_WORKSPACE_STRATEGY = workspaceStrategy;
}
if (workspaceId) {
env.PAPERCLIP_WORKSPACE_ID = workspaceId;
}
if (workspaceRepoUrl) {
env.PAPERCLIP_WORKSPACE_REPO_URL = workspaceRepoUrl;
}
if (workspaceRepoRef) {
env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
}
if (workspaceBranch) {
env.PAPERCLIP_WORKSPACE_BRANCH = workspaceBranch;
}
if (workspaceWorktreePath) {
env.PAPERCLIP_WORKSPACE_WORKTREE_PATH = workspaceWorktreePath;
}
if (agentHome) {
env.AGENT_HOME = agentHome;
}
applyPaperclipWorkspaceEnv(env, {
workspaceCwd: effectiveWorkspaceCwd,
workspaceSource,
workspaceStrategy,
workspaceId,
workspaceRepoUrl,
workspaceRepoRef,
workspaceBranch,
workspaceWorktreePath,
agentHome,
});
if (workspaceHints.length > 0) {
env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
}
@@ -417,9 +445,14 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (runtimePrimaryUrl) {
env.PAPERCLIP_RUNTIME_PRIMARY_URL = runtimePrimaryUrl;
}
const targetPaperclipApiUrl = adapterExecutionTargetPaperclipApiUrl(executionTarget);
if (targetPaperclipApiUrl) {
env.PAPERCLIP_API_URL = targetPaperclipApiUrl;
}
for (const [k, v] of Object.entries(envConfig)) {
if (typeof v === "string") env[k] = v;
}
env.CODEX_HOME = remoteCodexHome ?? effectiveCodexHome;
if (!hasExplicitApiKey && authToken) {
env.PAPERCLIP_API_KEY = authToken;
}
@@ -430,8 +463,8 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
);
const billingType = resolveCodexBillingType(effectiveEnv);
const runtimeEnv = ensurePathInEnv(effectiveEnv);
await ensureCommandResolvable(command, cwd, runtimeEnv);
const resolvedCommand = await resolveCommandForLogs(command, cwd, runtimeEnv);
await ensureAdapterExecutionTargetCommandResolvable(command, executionTarget, cwd, runtimeEnv);
const resolvedCommand = await resolveAdapterExecutionTargetCommandForLogs(command, executionTarget, cwd, runtimeEnv);
const loggedEnv = buildInvocationEnvForLogs(env, {
runtimeEnv,
includeRuntimeKeys: ["HOME"],
@@ -444,17 +477,24 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const runtimeSessionParams = parseObject(runtime.sessionParams);
const runtimeSessionId = asString(runtimeSessionParams.sessionId, runtime.sessionId ?? "");
const runtimeSessionCwd = asString(runtimeSessionParams.cwd, "");
const runtimeRemoteExecution = parseObject(runtimeSessionParams.remoteExecution);
const canResumeSession =
runtimeSessionId.length > 0 &&
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(cwd));
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(effectiveExecutionCwd)) &&
adapterExecutionTargetSessionMatches(runtimeRemoteExecution, executionTarget);
const codexTransientFallbackMode = readCodexTransientFallbackMode(context);
const forceSaferInvocation = fallbackModeUsesSaferInvocation(codexTransientFallbackMode);
const forceFreshSession = fallbackModeUsesFreshSession(codexTransientFallbackMode);
const sessionId = canResumeSession && !forceFreshSession ? runtimeSessionId : null;
if (runtimeSessionId && !canResumeSession) {
if (executionTargetIsRemote && runtimeSessionId && !canResumeSession) {
await onLog(
"stdout",
`[paperclip] Codex session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${cwd}".\n`,
`[paperclip] Codex session "${runtimeSessionId}" does not match the current remote execution identity and will not be resumed in "${effectiveExecutionCwd}". Starting a fresh remote session.\n`,
);
} else if (runtimeSessionId && !canResumeSession) {
await onLog(
"stdout",
`[paperclip] Codex session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${effectiveExecutionCwd}".\n`,
);
}
const instructionsFilePath = asString(config.instructionsFilePath, "").trim();
@@ -591,7 +631,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
await onMeta({
adapterType: "codex_local",
command: resolvedCommand,
cwd,
cwd: effectiveExecutionCwd,
commandNotes: commandNotesWithFastMode,
commandArgs: args.map((value, idx) => {
if (idx === args.length - 1 && value !== "-") return `<prompt ${prompt.length} chars>`;
@@ -604,7 +644,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
});
}
const proc = await runChildProcess(runId, command, args, {
const proc = await runAdapterExecutionTargetProcess(runId, executionTarget, command, args, {
cwd,
env,
stdin: prompt,
@@ -654,7 +694,12 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const resolvedSessionParams = resolvedSessionId
? ({
sessionId: resolvedSessionId,
cwd,
cwd: effectiveExecutionCwd,
...(executionTargetIsRemote
? {
remoteExecution: adapterExecutionTargetSessionIdentity(executionTarget),
}
: {}),
...(workspaceId ? { workspaceId } : {}),
...(workspaceRepoUrl ? { repoUrl: workspaceRepoUrl } : {}),
...(workspaceRepoRef ? { repoRef: workspaceRepoRef } : {}),
@@ -666,6 +711,21 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
parsedError ||
stderrLine ||
`Codex exited with code ${attempt.proc.exitCode ?? -1}`;
const transientRetryNotBefore =
(attempt.proc.exitCode ?? 0) !== 0
? extractCodexRetryNotBefore({
stdout: attempt.proc.stdout,
stderr: attempt.proc.stderr,
errorMessage: fallbackErrorMessage,
})
: null;
const transientUpstream =
(attempt.proc.exitCode ?? 0) !== 0 &&
isCodexTransientUpstreamError({
stdout: attempt.proc.stdout,
stderr: attempt.proc.stderr,
errorMessage: fallbackErrorMessage,
});
return {
exitCode: attempt.proc.exitCode,
@@ -676,14 +736,11 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
? null
: fallbackErrorMessage,
errorCode:
(attempt.proc.exitCode ?? 0) !== 0 &&
isCodexTransientUpstreamError({
stdout: attempt.proc.stdout,
stderr: attempt.proc.stderr,
errorMessage: fallbackErrorMessage,
})
transientUpstream
? "codex_transient_upstream"
: null,
errorFamily: transientUpstream ? "transient_upstream" : null,
retryNotBefore: transientRetryNotBefore ? transientRetryNotBefore.toISOString() : null,
usage: attempt.parsed.usage,
sessionId: resolvedSessionId,
sessionParams: resolvedSessionParams,
@@ -696,26 +753,39 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
resultJson: {
stdout: attempt.proc.stdout,
stderr: attempt.proc.stderr,
...(transientUpstream ? { errorFamily: "transient_upstream" } : {}),
...(transientRetryNotBefore ? { retryNotBefore: transientRetryNotBefore.toISOString() } : {}),
...(transientRetryNotBefore ? { transientRetryNotBefore: transientRetryNotBefore.toISOString() } : {}),
},
summary: attempt.parsed.summary,
clearSession: Boolean((clearSessionOnMissingSession || forceFreshSession) && !resolvedSessionId),
};
};
const initial = await runAttempt(sessionId);
if (
sessionId &&
!initial.proc.timedOut &&
(initial.proc.exitCode ?? 0) !== 0 &&
isCodexUnknownSessionError(initial.proc.stdout, initial.rawStderr)
) {
await onLog(
"stdout",
`[paperclip] Codex resume session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toResult(retry, true, true);
}
try {
const initial = await runAttempt(sessionId);
if (
sessionId &&
!initial.proc.timedOut &&
(initial.proc.exitCode ?? 0) !== 0 &&
isCodexUnknownSessionError(initial.proc.stdout, initial.rawStderr)
) {
await onLog(
"stdout",
`[paperclip] Codex resume session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toResult(retry, true, true);
}
return toResult(initial, false, false);
} finally {
if (restoreRemoteWorkspace) {
await onLog(
"stdout",
`[paperclip] Restoring workspace changes from ${describeAdapterExecutionTarget(executionTarget)}.\n`,
);
await restoreRemoteWorkspace();
}
}
}

View File

@@ -1,5 +1,6 @@
import { describe, expect, it } from "vitest";
import {
extractCodexRetryNotBefore,
isCodexTransientUpstreamError,
isCodexUnknownSessionError,
parseCodexJsonl,
@@ -101,6 +102,25 @@ describe("isCodexTransientUpstreamError", () => {
).toBe(true);
});
it("classifies usage-limit windows as transient and extracts the retry time", () => {
const errorMessage = "You've hit your usage limit for GPT-5.3-Codex-Spark. Switch to another model now, or try again at 11:31 PM.";
const now = new Date(2026, 3, 22, 22, 29, 2);
expect(isCodexTransientUpstreamError({ errorMessage })).toBe(true);
expect(extractCodexRetryNotBefore({ errorMessage }, now)?.getTime()).toBe(
new Date(2026, 3, 22, 23, 31, 0, 0).getTime(),
);
});
it("parses explicit timezone hints on usage-limit retry windows", () => {
const errorMessage = "You've hit your usage limit for GPT-5.3-Codex-Spark. Switch to another model now, or try again at 11:31 PM (America/Chicago).";
const now = new Date("2026-04-23T03:29:02.000Z");
expect(extractCodexRetryNotBefore({ errorMessage }, now)?.toISOString()).toBe(
"2026-04-23T04:31:00.000Z",
);
});
it("does not classify deterministic compaction errors as transient", () => {
expect(
isCodexTransientUpstreamError({

View File

@@ -1,8 +1,15 @@
import { asString, asNumber, parseObject, parseJson } from "@paperclipai/adapter-utils/server-utils";
import {
asString,
asNumber,
parseObject,
parseJson,
} from "@paperclipai/adapter-utils/server-utils";
const CODEX_TRANSIENT_UPSTREAM_RE =
/(?:we(?:'|)re\s+currently\s+experiencing\s+high\s+demand|temporary\s+errors|rate[-\s]?limit(?:ed)?|too\s+many\s+requests|\b429\b|server\s+overloaded|service\s+unavailable|try\s+again\s+later)/i;
const CODEX_REMOTE_COMPACTION_RE = /remote\s+compact\s+task/i;
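// Captures the "try again at <time>" clock text so extractCodexRetryNotBefore can turn the
// usage-limit window into a concrete retry timestamp.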
const CODEX_USAGE_LIMIT_RE =
/you(?:'|)ve hit your usage limit for .+\.\s+switch to another model now,\s+or try again at\s+([^.!\n]+)(?:[.!]|\n|$)/i;
export function parseCodexJsonl(stdout: string) {
let sessionId: string | null = null;
@@ -76,12 +83,12 @@ export function isCodexUnknownSessionError(stdout: string, stderr: string): bool
);
}
export function isCodexTransientUpstreamError(input: {
function buildCodexErrorHaystack(input: {
stdout?: string | null;
stderr?: string | null;
errorMessage?: string | null;
}): boolean {
const haystack = [
}): string {
return [
input.errorMessage ?? "",
input.stdout ?? "",
input.stderr ?? "",
@@ -91,9 +98,164 @@ export function isCodexTransientUpstreamError(input: {
.map((line) => line.trim())
.filter(Boolean)
.join("\n");
}
function readTimeZoneParts(date: Date, timeZone: string) {
const values = new Map(
new Intl.DateTimeFormat("en-US", {
timeZone,
hourCycle: "h23",
year: "numeric",
month: "2-digit",
day: "2-digit",
hour: "2-digit",
minute: "2-digit",
}).formatToParts(date).map((part) => [part.type, part.value]),
);
return {
year: Number.parseInt(values.get("year") ?? "", 10),
month: Number.parseInt(values.get("month") ?? "", 10),
day: Number.parseInt(values.get("day") ?? "", 10),
hour: Number.parseInt(values.get("hour") ?? "", 10),
minute: Number.parseInt(values.get("minute") ?? "", 10),
};
}
function normalizeResetTimeZone(timeZoneHint: string | null | undefined): string | null {
const normalized = timeZoneHint?.trim();
if (!normalized) return null;
if (/^(?:utc|gmt)$/i.test(normalized)) return "UTC";
try {
new Intl.DateTimeFormat("en-US", { timeZone: normalized }).format(new Date(0));
return normalized;
} catch {
return null;
}
}
function dateFromTimeZoneWallClock(input: {
year: number;
month: number;
day: number;
hour: number;
minute: number;
timeZone: string;
}): Date | null {
let candidate = new Date(Date.UTC(input.year, input.month - 1, input.day, input.hour, input.minute, 0, 0));
const targetUtc = Date.UTC(input.year, input.month - 1, input.day, input.hour, input.minute, 0, 0);
for (let attempt = 0; attempt < 4; attempt += 1) {
const actual = readTimeZoneParts(candidate, input.timeZone);
const actualUtc = Date.UTC(actual.year, actual.month - 1, actual.day, actual.hour, actual.minute, 0, 0);
const offsetMs = targetUtc - actualUtc;
if (offsetMs === 0) break;
candidate = new Date(candidate.getTime() + offsetMs);
}
const verified = readTimeZoneParts(candidate, input.timeZone);
if (
verified.year !== input.year ||
verified.month !== input.month ||
verified.day !== input.day ||
verified.hour !== input.hour ||
verified.minute !== input.minute
) {
return null;
}
return candidate;
}
function nextClockTimeInTimeZone(input: {
now: Date;
hour: number;
minute: number;
timeZoneHint: string;
}): Date | null {
const timeZone = normalizeResetTimeZone(input.timeZoneHint);
if (!timeZone) return null;
const nowParts = readTimeZoneParts(input.now, timeZone);
let retryAt = dateFromTimeZoneWallClock({
year: nowParts.year,
month: nowParts.month,
day: nowParts.day,
hour: input.hour,
minute: input.minute,
timeZone,
});
if (!retryAt) return null;
if (retryAt.getTime() <= input.now.getTime()) {
const nextDay = new Date(Date.UTC(nowParts.year, nowParts.month - 1, nowParts.day + 1, 0, 0, 0, 0));
retryAt = dateFromTimeZoneWallClock({
year: nextDay.getUTCFullYear(),
month: nextDay.getUTCMonth() + 1,
day: nextDay.getUTCDate(),
hour: input.hour,
minute: input.minute,
timeZone,
});
}
return retryAt;
}
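// Parses "11:31 PM", "11:31 PM (America/Chicago)", or "11:31 PM CDT"-style hints; hints that
// Intl cannot resolve fall back to the server's local clock.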
function parseLocalClockTime(clockText: string, now: Date): Date | null {
const normalized = clockText.trim();
const match = normalized.match(/^(\d{1,2})(?::(\d{2}))?\s*([ap])\.?\s*m\.?(?:\s*\(([^)]+)\)|\s+([A-Z]{2,5}))?$/i);
if (!match) return null;
const hour12 = Number.parseInt(match[1] ?? "", 10);
const minute = Number.parseInt(match[2] ?? "0", 10);
if (!Number.isInteger(hour12) || hour12 < 1 || hour12 > 12) return null;
if (!Number.isInteger(minute) || minute < 0 || minute > 59) return null;
let hour24 = hour12 % 12;
if ((match[3] ?? "").toLowerCase() === "p") hour24 += 12;
const timeZoneHint = match[4] ?? match[5];
if (timeZoneHint) {
const explicitRetryAt = nextClockTimeInTimeZone({
now,
hour: hour24,
minute,
timeZoneHint,
});
if (explicitRetryAt) return explicitRetryAt;
}
const retryAt = new Date(now);
retryAt.setHours(hour24, minute, 0, 0);
if (retryAt.getTime() <= now.getTime()) {
retryAt.setDate(retryAt.getDate() + 1);
}
return retryAt;
}
export function extractCodexRetryNotBefore(input: {
stdout?: string | null;
stderr?: string | null;
errorMessage?: string | null;
}, now = new Date()): Date | null {
const haystack = buildCodexErrorHaystack(input);
const usageLimitMatch = haystack.match(CODEX_USAGE_LIMIT_RE);
if (!usageLimitMatch) return null;
return parseLocalClockTime(usageLimitMatch[1] ?? "", now);
}
export function isCodexTransientUpstreamError(input: {
stdout?: string | null;
stderr?: string | null;
errorMessage?: string | null;
}): boolean {
const haystack = buildCodexErrorHaystack(input);
if (extractCodexRetryNotBefore(input) != null) return true;
if (!CODEX_TRANSIENT_UPSTREAM_RE.test(haystack)) return false;
// Keep automatic retries scoped to the observed remote-compaction/high-demand
// failure shape; broader 429s may be caused by user or account limits.
// failure shape, plus explicit usage-limit windows that tell us when retrying
// becomes safe again.
return CODEX_REMOTE_COMPACTION_RE.test(haystack) || /high\s+demand|temporary\s+errors/i.test(haystack);
}

View File

@@ -146,7 +146,7 @@ export async function testEnvironment(
code: "codex_fast_mode_unsupported_model",
level: "warn",
message: execArgs.fastModeIgnoredReason,
hint: "Switch the agent model to GPT-5.4 to enable Codex Fast mode.",
hint: "Switch the agent model to GPT-5.4 or enter a manual model ID to enable Codex Fast mode.",
});
}

View File

@@ -0,0 +1,268 @@
import { mkdir, mkdtemp, rm } from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it, vi } from "vitest";
const {
runChildProcess,
ensureCommandResolvable,
resolveCommandForLogs,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
runSshCommand,
syncDirectoryToSsh,
} = vi.hoisted(() => ({
runChildProcess: vi.fn(async () => ({
exitCode: 0,
signal: null,
timedOut: false,
stdout: [
JSON.stringify({ type: "system", session_id: "cursor-session-1" }),
JSON.stringify({ type: "assistant", text: "hello" }),
JSON.stringify({ type: "result", is_error: false, result: "hello", session_id: "cursor-session-1" }),
].join("\n"),
stderr: "",
pid: 123,
startedAt: new Date().toISOString(),
})),
ensureCommandResolvable: vi.fn(async () => undefined),
resolveCommandForLogs: vi.fn(async () => "ssh://fixture@127.0.0.1:2222/remote/workspace :: agent"),
prepareWorkspaceForSshExecution: vi.fn(async () => undefined),
restoreWorkspaceFromSshExecution: vi.fn(async () => undefined),
runSshCommand: vi.fn(async () => ({
stdout: "/home/agent",
stderr: "",
exitCode: 0,
})),
syncDirectoryToSsh: vi.fn(async () => undefined),
}));
vi.mock("@paperclipai/adapter-utils/server-utils", async () => {
const actual = await vi.importActual<typeof import("@paperclipai/adapter-utils/server-utils")>(
"@paperclipai/adapter-utils/server-utils",
);
return {
...actual,
ensureCommandResolvable,
resolveCommandForLogs,
runChildProcess,
};
});
vi.mock("@paperclipai/adapter-utils/ssh", async () => {
const actual = await vi.importActual<typeof import("@paperclipai/adapter-utils/ssh")>(
"@paperclipai/adapter-utils/ssh",
);
return {
...actual,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
runSshCommand,
syncDirectoryToSsh,
};
});
import { execute } from "./execute.js";
describe("cursor remote execution", () => {
const cleanupDirs: string[] = [];
afterEach(async () => {
vi.clearAllMocks();
while (cleanupDirs.length > 0) {
const dir = cleanupDirs.pop();
if (!dir) continue;
await rm(dir, { recursive: true, force: true }).catch(() => undefined);
}
});
it("prepares the workspace, syncs Cursor skills, and restores workspace changes for remote SSH execution", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-cursor-remote-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
await mkdir(workspaceDir, { recursive: true });
const result = await execute({
runId: "run-1",
agent: {
id: "agent-1",
companyId: "company-1",
name: "Cursor Builder",
adapterType: "cursor",
adapterConfig: {},
},
runtime: {
sessionId: null,
sessionParams: null,
sessionDisplayId: null,
taskKey: null,
},
config: {
command: "agent",
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
paperclipApiUrl: "http://198.51.100.10:3102",
},
},
onLog: async () => {},
});
expect(result.sessionParams).toMatchObject({
sessionId: "cursor-session-1",
cwd: "/remote/workspace",
remoteExecution: {
transport: "ssh",
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteCwd: "/remote/workspace",
paperclipApiUrl: "http://198.51.100.10:3102",
},
});
expect(prepareWorkspaceForSshExecution).toHaveBeenCalledTimes(1);
expect(syncDirectoryToSsh).toHaveBeenCalledTimes(1);
expect(syncDirectoryToSsh).toHaveBeenCalledWith(expect.objectContaining({
remoteDir: "/remote/workspace/.paperclip-runtime/cursor/skills",
followSymlinks: true,
}));
expect(runSshCommand).toHaveBeenCalledWith(
expect.anything(),
expect.stringContaining(".cursor/skills"),
expect.anything(),
);
const call = runChildProcess.mock.calls[0] as unknown as
| [string, string, string[], { env: Record<string, string>; remoteExecution?: { remoteCwd: string } | null }]
| undefined;
expect(call?.[2]).toContain("--workspace");
expect(call?.[2]).toContain("/remote/workspace");
expect(call?.[3].env.PAPERCLIP_API_URL).toBe("http://198.51.100.10:3102");
expect(call?.[3].remoteExecution?.remoteCwd).toBe("/remote/workspace");
expect(restoreWorkspaceFromSshExecution).toHaveBeenCalledTimes(1);
});
it("resumes saved Cursor sessions for remote SSH execution only when the identity matches", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-cursor-remote-resume-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
await mkdir(workspaceDir, { recursive: true });
await execute({
runId: "run-ssh-resume",
agent: {
id: "agent-1",
companyId: "company-1",
name: "Cursor Builder",
adapterType: "cursor",
adapterConfig: {},
},
runtime: {
sessionId: "session-123",
sessionParams: {
sessionId: "session-123",
cwd: "/remote/workspace",
remoteExecution: {
transport: "ssh",
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteCwd: "/remote/workspace",
},
},
sessionDisplayId: "session-123",
taskKey: null,
},
config: {
command: "agent",
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
},
},
onLog: async () => {},
});
const call = runChildProcess.mock.calls[0] as unknown as [string, string, string[]] | undefined;
expect(call?.[2]).toContain("--resume");
expect(call?.[2]).toContain("session-123");
});
it("restores the remote workspace if skills sync fails after workspace prep", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-cursor-remote-sync-fail-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
await mkdir(workspaceDir, { recursive: true });
syncDirectoryToSsh.mockRejectedValueOnce(new Error("sync failed"));
await expect(execute({
runId: "run-sync-fail",
agent: {
id: "agent-1",
companyId: "company-1",
name: "Cursor Builder",
adapterType: "cursor",
adapterConfig: {},
},
runtime: {
sessionId: null,
sessionParams: null,
sessionDisplayId: null,
taskKey: null,
},
config: {
command: "agent",
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
},
},
onLog: async () => {},
})).rejects.toThrow("sync failed");
expect(prepareWorkspaceForSshExecution).toHaveBeenCalledTimes(1);
expect(restoreWorkspaceFromSshExecution).toHaveBeenCalledTimes(1);
expect(runChildProcess).not.toHaveBeenCalled();
});
});

View File

@@ -3,19 +3,34 @@ import os from "node:os";
import path from "node:path";
import { fileURLToPath } from "node:url";
import { inferOpenAiCompatibleBiller, type AdapterExecutionContext, type AdapterExecutionResult } from "@paperclipai/adapter-utils";
import {
adapterExecutionTargetIsRemote,
adapterExecutionTargetPaperclipApiUrl,
adapterExecutionTargetRemoteCwd,
adapterExecutionTargetSessionIdentity,
adapterExecutionTargetSessionMatches,
adapterExecutionTargetUsesManagedHome,
describeAdapterExecutionTarget,
ensureAdapterExecutionTargetCommandResolvable,
prepareAdapterExecutionTargetRuntime,
readAdapterExecutionTarget,
readAdapterExecutionTargetHomeDir,
resolveAdapterExecutionTargetCommandForLogs,
runAdapterExecutionTargetProcess,
runAdapterExecutionTargetShellCommand,
} from "@paperclipai/adapter-utils/execution-target";
import {
asString,
asNumber,
asStringArray,
parseObject,
applyPaperclipWorkspaceEnv,
buildPaperclipEnv,
buildInvocationEnvForLogs,
ensureAbsoluteDirectory,
ensureCommandResolvable,
ensurePaperclipSkillSymlink,
ensurePathInEnv,
readPaperclipRuntimeSkillEntries,
resolveCommandForLogs,
resolvePaperclipDesiredSkillNames,
removeMaintainerOnlySkillSymlinks,
renderTemplate,
@@ -23,7 +38,6 @@ import {
stringifyPaperclipWakePayload,
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
joinPromptSections,
runChildProcess,
} from "@paperclipai/adapter-utils/server-utils";
import { DEFAULT_CURSOR_LOCAL_MODEL } from "../index.js";
import { parseCursorJsonl, isCursorUnknownSessionError } from "./parse.js";
@@ -97,6 +111,19 @@ function cursorSkillsHome(): string {
return path.join(os.homedir(), ".cursor", "skills");
}
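// Builds a throwaway local directory of symlinks to the desired skill sources; it is synced
// to the remote host as the "skills" asset and removed again in execute()'s finally block.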
async function buildCursorSkillsDir(config: Record<string, unknown>): Promise<string> {
const tmp = await fs.mkdtemp(path.join(os.tmpdir(), "paperclip-cursor-skills-"));
const target = path.join(tmp, "skills");
await fs.mkdir(target, { recursive: true });
const availableEntries = await readPaperclipRuntimeSkillEntries(config, __moduleDir);
const desiredNames = new Set(resolvePaperclipDesiredSkillNames(config, availableEntries));
for (const entry of availableEntries) {
if (!desiredNames.has(entry.key)) continue;
await fs.symlink(entry.source, path.join(target, entry.runtimeName));
}
return target;
}
type EnsureCursorSkillsInjectedOptions = {
skillsDir?: string | null;
skillsEntries?: Array<{ key: string; runtimeName: string; source: string }>;
@@ -162,6 +189,11 @@ export async function ensureCursorSkillsInjected(
export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExecutionResult> {
const { runId, agent, runtime, config, context, onLog, onMeta, onSpawn, authToken } = ctx;
const executionTarget = readAdapterExecutionTarget({
executionTarget: ctx.executionTarget,
legacyRemoteExecution: ctx.executionTransport?.remoteExecution,
});
const executionTargetIsRemote = adapterExecutionTargetIsRemote(executionTarget);
const promptTemplate = asString(
config.promptTemplate,
@@ -190,9 +222,11 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
await ensureAbsoluteDirectory(cwd, { createIfMissing: true });
const cursorSkillEntries = await readPaperclipRuntimeSkillEntries(config, __moduleDir);
const desiredCursorSkillNames = resolvePaperclipDesiredSkillNames(config, cursorSkillEntries);
await ensureCursorSkillsInjected(onLog, {
skillsEntries: cursorSkillEntries.filter((entry) => desiredCursorSkillNames.includes(entry.key)),
});
if (!executionTargetIsRemote) {
await ensureCursorSkillsInjected(onLog, {
skillsEntries: cursorSkillEntries.filter((entry) => desiredCursorSkillNames.includes(entry.key)),
});
}
const envConfig = parseObject(config.env);
const hasExplicitApiKey =
@@ -244,27 +278,21 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (wakePayloadJson) {
env.PAPERCLIP_WAKE_PAYLOAD_JSON = wakePayloadJson;
}
if (effectiveWorkspaceCwd) {
env.PAPERCLIP_WORKSPACE_CWD = effectiveWorkspaceCwd;
}
if (workspaceSource) {
env.PAPERCLIP_WORKSPACE_SOURCE = workspaceSource;
}
if (workspaceId) {
env.PAPERCLIP_WORKSPACE_ID = workspaceId;
}
if (workspaceRepoUrl) {
env.PAPERCLIP_WORKSPACE_REPO_URL = workspaceRepoUrl;
}
if (workspaceRepoRef) {
env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
}
if (agentHome) {
env.AGENT_HOME = agentHome;
}
applyPaperclipWorkspaceEnv(env, {
workspaceCwd: effectiveWorkspaceCwd,
workspaceSource,
workspaceId,
workspaceRepoUrl,
workspaceRepoRef,
agentHome,
});
if (workspaceHints.length > 0) {
env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
}
const targetPaperclipApiUrl = adapterExecutionTargetPaperclipApiUrl(executionTarget);
if (targetPaperclipApiUrl) {
env.PAPERCLIP_API_URL = targetPaperclipApiUrl;
}
for (const [k, v] of Object.entries(envConfig)) {
if (typeof v === "string") env[k] = v;
}
@@ -278,8 +306,8 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
);
const billingType = resolveCursorBillingType(effectiveEnv);
const runtimeEnv = ensurePathInEnv(effectiveEnv);
await ensureCommandResolvable(command, cwd, runtimeEnv);
const resolvedCommand = await resolveCommandForLogs(command, cwd, runtimeEnv);
await ensureAdapterExecutionTargetCommandResolvable(command, executionTarget, cwd, runtimeEnv);
const resolvedCommand = await resolveAdapterExecutionTargetCommandForLogs(command, executionTarget, cwd, runtimeEnv);
const loggedEnv = buildInvocationEnvForLogs(env, {
runtimeEnv,
includeRuntimeKeys: ["HOME"],
@@ -294,18 +322,77 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
return asStringArray(config.args);
})();
const autoTrustEnabled = !hasCursorTrustBypassArg(extraArgs);
const effectiveExecutionCwd = adapterExecutionTargetRemoteCwd(executionTarget, cwd);
let restoreRemoteWorkspace: (() => Promise<void>) | null = null;
let localSkillsDir: string | null = null;
if (executionTargetIsRemote) {
try {
localSkillsDir = await buildCursorSkillsDir(config);
await onLog(
"stdout",
`[paperclip] Syncing workspace and Cursor runtime assets to ${describeAdapterExecutionTarget(executionTarget)}.\n`,
);
const preparedExecutionTargetRuntime = await prepareAdapterExecutionTargetRuntime({
target: executionTarget,
adapterKey: "cursor",
workspaceLocalDir: cwd,
assets: [{
key: "skills",
localDir: localSkillsDir,
followSymlinks: true,
}],
});
restoreRemoteWorkspace = () => preparedExecutionTargetRuntime.restoreWorkspace();
const managedHome = adapterExecutionTargetUsesManagedHome(executionTarget);
if (managedHome && preparedExecutionTargetRuntime.runtimeRootDir) {
env.HOME = preparedExecutionTargetRuntime.runtimeRootDir;
}
const remoteHomeDir = managedHome && preparedExecutionTargetRuntime.runtimeRootDir
? preparedExecutionTargetRuntime.runtimeRootDir
: await readAdapterExecutionTargetHomeDir(runId, executionTarget, {
cwd,
env,
timeoutSec,
graceSec,
onLog,
});
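// Copy the synced skills payload into the remote $HOME/.cursor/skills so the Cursor CLI
// discovers skills the same way a local run would.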
if (remoteHomeDir && preparedExecutionTargetRuntime.assetDirs.skills) {
const remoteSkillsDir = path.posix.join(remoteHomeDir, ".cursor", "skills");
await runAdapterExecutionTargetShellCommand(
runId,
executionTarget,
`mkdir -p ${JSON.stringify(path.posix.dirname(remoteSkillsDir))} && rm -rf ${JSON.stringify(remoteSkillsDir)} && cp -a ${JSON.stringify(preparedExecutionTargetRuntime.assetDirs.skills)} ${JSON.stringify(remoteSkillsDir)}`,
{ cwd, env, timeoutSec, graceSec, onLog },
);
}
} catch (error) {
await Promise.allSettled([
restoreRemoteWorkspace?.(),
localSkillsDir ? fs.rm(localSkillsDir, { recursive: true, force: true }).catch(() => undefined) : Promise.resolve(),
]);
throw error;
}
}
const runtimeSessionParams = parseObject(runtime.sessionParams);
const runtimeSessionId = asString(runtimeSessionParams.sessionId, runtime.sessionId ?? "");
const runtimeSessionCwd = asString(runtimeSessionParams.cwd, "");
const runtimeRemoteExecution = parseObject(runtimeSessionParams.remoteExecution);
const canResumeSession =
runtimeSessionId.length > 0 &&
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(cwd));
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(effectiveExecutionCwd)) &&
adapterExecutionTargetSessionMatches(runtimeRemoteExecution, executionTarget);
const sessionId = canResumeSession ? runtimeSessionId : null;
if (runtimeSessionId && !canResumeSession) {
if (executionTargetIsRemote && runtimeSessionId && !canResumeSession) {
await onLog(
"stdout",
`[paperclip] Cursor session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${cwd}".\n`,
`[paperclip] Cursor session "${runtimeSessionId}" does not match the current remote execution identity and will not be resumed in "${effectiveExecutionCwd}". Starting a fresh remote session.\n`,
);
} else if (runtimeSessionId && !canResumeSession) {
await onLog(
"stdout",
`[paperclip] Cursor session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${effectiveExecutionCwd}".\n`,
);
}
@@ -387,7 +474,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
};
const buildArgs = (resumeSessionId: string | null) => {
const args = ["-p", "--output-format", "stream-json", "--workspace", cwd];
const args = ["-p", "--output-format", "stream-json", "--workspace", effectiveExecutionCwd];
if (resumeSessionId) args.push("--resume", resumeSessionId);
if (model) args.push("--model", model);
if (mode) args.push("--mode", mode);
@@ -402,7 +489,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
await onMeta({
adapterType: "cursor",
command: resolvedCommand,
cwd,
cwd: effectiveExecutionCwd,
commandNotes,
commandArgs: args,
env: loggedEnv,
@@ -436,7 +523,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
}
};
const proc = await runChildProcess(runId, command, args, {
const proc = await runAdapterExecutionTargetProcess(runId, executionTarget, command, args, {
cwd,
env,
timeoutSec,
@@ -488,10 +575,15 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const resolvedSessionParams = resolvedSessionId
? ({
sessionId: resolvedSessionId,
cwd,
cwd: effectiveExecutionCwd,
...(workspaceId ? { workspaceId } : {}),
...(workspaceRepoUrl ? { repoUrl: workspaceRepoUrl } : {}),
...(workspaceRepoRef ? { repoRef: workspaceRepoRef } : {}),
...(executionTargetIsRemote
? {
remoteExecution: adapterExecutionTargetSessionIdentity(executionTarget),
}
: {}),
} as Record<string, unknown>)
: null;
const parsedError = typeof attempt.parsed.errorMessage === "string" ? attempt.parsed.errorMessage.trim() : "";
@@ -527,20 +619,32 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
};
};
const initial = await runAttempt(sessionId);
if (
sessionId &&
!initial.proc.timedOut &&
(initial.proc.exitCode ?? 0) !== 0 &&
isCursorUnknownSessionError(initial.proc.stdout, initial.proc.stderr)
) {
await onLog(
"stdout",
`[paperclip] Cursor resume session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toResult(retry, true);
try {
const initial = await runAttempt(sessionId);
if (
sessionId &&
!initial.proc.timedOut &&
(initial.proc.exitCode ?? 0) !== 0 &&
isCursorUnknownSessionError(initial.proc.stdout, initial.proc.stderr)
) {
await onLog(
"stdout",
`[paperclip] Cursor resume session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toResult(retry, true);
}
return toResult(initial);
} finally {
if (restoreRemoteWorkspace) {
await onLog(
"stdout",
`[paperclip] Restoring workspace changes from ${describeAdapterExecutionTarget(executionTarget)}.\n`,
);
await restoreRemoteWorkspace();
}
if (localSkillsDir) {
await fs.rm(localSkillsDir, { recursive: true, force: true }).catch(() => undefined);
}
}
return toResult(initial);
}
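
The hunks above (and the matching Gemini, OpenCode, and Pi hunks below) all follow the same remote-execution lifecycle: detect a remote target, sync the workspace plus a temp skills dir, run the attempt, then restore the remote workspace in a `finally`. A minimal sketch of that pattern, using the helpers imported from `@paperclipai/adapter-utils/execution-target` in this diff; the option shapes and the adapter key are assumptions inferred from the visible call sites, not the adapter's actual code:

```ts
import {
  adapterExecutionTargetIsRemote,
  describeAdapterExecutionTarget,
  prepareAdapterExecutionTargetRuntime,
} from "@paperclipai/adapter-utils/execution-target";

// "target" is whatever readAdapterExecutionTarget(...) returned in the adapter;
// its concrete type is not shown in this diff, so it is derived from the helper here.
async function withRemoteRuntime<T>(
  target: Parameters<typeof prepareAdapterExecutionTargetRuntime>[0]["target"],
  cwd: string,
  skillsDir: string,
  run: () => Promise<T>,
): Promise<T> {
  if (!adapterExecutionTargetIsRemote(target)) {
    return run(); // local targets need no workspace sync or restore
  }
  console.log(`[paperclip] Syncing workspace to ${describeAdapterExecutionTarget(target)}.`);
  const prepared = await prepareAdapterExecutionTargetRuntime({
    target,
    adapterKey: "example", // hypothetical key; the real adapters pass "cursor", "gemini", "opencode", "pi"
    workspaceLocalDir: cwd,
    assets: [{ key: "skills", localDir: skillsDir, followSymlinks: true }],
  });
  try {
    return await run();
  } finally {
    // Mirrors the finally blocks in the hunks above: pull remote edits back
    // and clean up even when the attempt throws.
    await prepared.restoreWorkspace();
  }
}
```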

View File

@@ -0,0 +1,272 @@
import { mkdir, mkdtemp, rm } from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it, vi } from "vitest";
const {
runChildProcess,
ensureCommandResolvable,
resolveCommandForLogs,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
runSshCommand,
syncDirectoryToSsh,
} = vi.hoisted(() => ({
runChildProcess: vi.fn(async () => ({
exitCode: 0,
signal: null,
timedOut: false,
stdout: [
JSON.stringify({ type: "system", subtype: "init", session_id: "gemini-session-1", model: "gemini-2.5-pro" }),
JSON.stringify({ type: "assistant", message: { content: [{ type: "output_text", text: "hello" }] } }),
JSON.stringify({
type: "result",
subtype: "success",
session_id: "gemini-session-1",
usage: { promptTokenCount: 1, cachedContentTokenCount: 0, candidatesTokenCount: 1 },
result: "hello",
}),
].join("\n"),
stderr: "",
pid: 123,
startedAt: new Date().toISOString(),
})),
ensureCommandResolvable: vi.fn(async () => undefined),
resolveCommandForLogs: vi.fn(async () => "ssh://fixture@127.0.0.1:2222/remote/workspace :: gemini"),
prepareWorkspaceForSshExecution: vi.fn(async () => undefined),
restoreWorkspaceFromSshExecution: vi.fn(async () => undefined),
runSshCommand: vi.fn(async () => ({
stdout: "/home/agent",
stderr: "",
exitCode: 0,
})),
syncDirectoryToSsh: vi.fn(async () => undefined),
}));
vi.mock("@paperclipai/adapter-utils/server-utils", async () => {
const actual = await vi.importActual<typeof import("@paperclipai/adapter-utils/server-utils")>(
"@paperclipai/adapter-utils/server-utils",
);
return {
...actual,
ensureCommandResolvable,
resolveCommandForLogs,
runChildProcess,
};
});
vi.mock("@paperclipai/adapter-utils/ssh", async () => {
const actual = await vi.importActual<typeof import("@paperclipai/adapter-utils/ssh")>(
"@paperclipai/adapter-utils/ssh",
);
return {
...actual,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
runSshCommand,
syncDirectoryToSsh,
};
});
import { execute } from "./execute.js";
describe("gemini remote execution", () => {
const cleanupDirs: string[] = [];
afterEach(async () => {
vi.clearAllMocks();
while (cleanupDirs.length > 0) {
const dir = cleanupDirs.pop();
if (!dir) continue;
await rm(dir, { recursive: true, force: true }).catch(() => undefined);
}
});
it("prepares the workspace, syncs Gemini skills, and restores workspace changes for remote SSH execution", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-gemini-remote-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
await mkdir(workspaceDir, { recursive: true });
const result = await execute({
runId: "run-1",
agent: {
id: "agent-1",
companyId: "company-1",
name: "Gemini Builder",
adapterType: "gemini_local",
adapterConfig: {},
},
runtime: {
sessionId: null,
sessionParams: null,
sessionDisplayId: null,
taskKey: null,
},
config: {
command: "gemini",
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
paperclipApiUrl: "http://198.51.100.10:3102",
},
},
onLog: async () => {},
});
expect(result.sessionParams).toMatchObject({
sessionId: "gemini-session-1",
cwd: "/remote/workspace",
remoteExecution: {
transport: "ssh",
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteCwd: "/remote/workspace",
paperclipApiUrl: "http://198.51.100.10:3102",
},
});
expect(prepareWorkspaceForSshExecution).toHaveBeenCalledTimes(1);
expect(syncDirectoryToSsh).toHaveBeenCalledTimes(1);
expect(syncDirectoryToSsh).toHaveBeenCalledWith(expect.objectContaining({
remoteDir: "/remote/workspace/.paperclip-runtime/gemini/skills",
followSymlinks: true,
}));
expect(runSshCommand).toHaveBeenCalledWith(
expect.anything(),
expect.stringContaining(".gemini/skills"),
expect.anything(),
);
const call = runChildProcess.mock.calls[0] as unknown as
| [string, string, string[], { env: Record<string, string>; remoteExecution?: { remoteCwd: string } | null }]
| undefined;
expect(call?.[3].env.PAPERCLIP_API_URL).toBe("http://198.51.100.10:3102");
expect(call?.[3].remoteExecution?.remoteCwd).toBe("/remote/workspace");
expect(restoreWorkspaceFromSshExecution).toHaveBeenCalledTimes(1);
});
it("resumes saved Gemini sessions for remote SSH execution only when the identity matches", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-gemini-remote-resume-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
await mkdir(workspaceDir, { recursive: true });
await execute({
runId: "run-ssh-resume",
agent: {
id: "agent-1",
companyId: "company-1",
name: "Gemini Builder",
adapterType: "gemini_local",
adapterConfig: {},
},
runtime: {
sessionId: "session-123",
sessionParams: {
sessionId: "session-123",
cwd: "/remote/workspace",
remoteExecution: {
transport: "ssh",
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteCwd: "/remote/workspace",
},
},
sessionDisplayId: "session-123",
taskKey: null,
},
config: {
command: "gemini",
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
},
},
onLog: async () => {},
});
const call = runChildProcess.mock.calls[0] as unknown as [string, string, string[]] | undefined;
expect(call?.[2]).toContain("--resume");
expect(call?.[2]).toContain("session-123");
});
it("restores the remote workspace if skills sync fails after workspace prep", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-gemini-remote-sync-fail-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
await mkdir(workspaceDir, { recursive: true });
syncDirectoryToSsh.mockRejectedValueOnce(new Error("sync failed"));
await expect(execute({
runId: "run-sync-fail",
agent: {
id: "agent-1",
companyId: "company-1",
name: "Gemini Builder",
adapterType: "gemini_local",
adapterConfig: {},
},
runtime: {
sessionId: null,
sessionParams: null,
sessionDisplayId: null,
taskKey: null,
},
config: {
command: "gemini",
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
},
},
onLog: async () => {},
})).rejects.toThrow("sync failed");
expect(prepareWorkspaceForSshExecution).toHaveBeenCalledTimes(1);
expect(restoreWorkspaceFromSshExecution).toHaveBeenCalledTimes(1);
expect(runChildProcess).not.toHaveBeenCalled();
});
});

View File

@@ -4,20 +4,35 @@ import os from "node:os";
import path from "node:path";
import { fileURLToPath } from "node:url";
import type { AdapterExecutionContext, AdapterExecutionResult } from "@paperclipai/adapter-utils";
import {
adapterExecutionTargetIsRemote,
adapterExecutionTargetPaperclipApiUrl,
adapterExecutionTargetRemoteCwd,
adapterExecutionTargetSessionIdentity,
adapterExecutionTargetSessionMatches,
adapterExecutionTargetUsesManagedHome,
describeAdapterExecutionTarget,
ensureAdapterExecutionTargetCommandResolvable,
prepareAdapterExecutionTargetRuntime,
readAdapterExecutionTarget,
readAdapterExecutionTargetHomeDir,
resolveAdapterExecutionTargetCommandForLogs,
runAdapterExecutionTargetProcess,
runAdapterExecutionTargetShellCommand,
} from "@paperclipai/adapter-utils/execution-target";
import {
asBoolean,
asNumber,
asString,
asStringArray,
applyPaperclipWorkspaceEnv,
buildPaperclipEnv,
buildInvocationEnvForLogs,
ensureAbsoluteDirectory,
ensureCommandResolvable,
ensurePaperclipSkillSymlink,
joinPromptSections,
ensurePathInEnv,
readPaperclipRuntimeSkillEntries,
resolveCommandForLogs,
resolvePaperclipDesiredSkillNames,
removeMaintainerOnlySkillSymlinks,
parseObject,
@@ -136,8 +151,28 @@ async function ensureGeminiSkillsInjected(
}
}
async function buildGeminiSkillsDir(
config: Record<string, unknown>,
): Promise<string> {
const tmp = await fs.mkdtemp(path.join(os.tmpdir(), "paperclip-gemini-skills-"));
const target = path.join(tmp, "skills");
await fs.mkdir(target, { recursive: true });
const availableEntries = await readPaperclipRuntimeSkillEntries(config, __moduleDir);
const desiredNames = new Set(resolvePaperclipDesiredSkillNames(config, availableEntries));
for (const entry of availableEntries) {
if (!desiredNames.has(entry.key)) continue;
await fs.symlink(entry.source, path.join(target, entry.runtimeName));
}
return target;
}
export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExecutionResult> {
const { runId, agent, runtime, config, context, onLog, onMeta, onSpawn, authToken } = ctx;
const executionTarget = readAdapterExecutionTarget({
executionTarget: ctx.executionTarget,
legacyRemoteExecution: ctx.executionTransport?.remoteExecution,
});
const executionTargetIsRemote = adapterExecutionTargetIsRemote(executionTarget);
const promptTemplate = asString(
config.promptTemplate,
@@ -166,7 +201,9 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
await ensureAbsoluteDirectory(cwd, { createIfMissing: true });
const geminiSkillEntries = await readPaperclipRuntimeSkillEntries(config, __moduleDir);
const desiredGeminiSkillNames = resolvePaperclipDesiredSkillNames(config, geminiSkillEntries);
await ensureGeminiSkillsInjected(onLog, geminiSkillEntries, desiredGeminiSkillNames);
if (!executionTargetIsRemote) {
await ensureGeminiSkillsInjected(onLog, geminiSkillEntries, desiredGeminiSkillNames);
}
const envConfig = parseObject(config.env);
const hasExplicitApiKey =
@@ -204,13 +241,17 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (approvalStatus) env.PAPERCLIP_APPROVAL_STATUS = approvalStatus;
if (linkedIssueIds.length > 0) env.PAPERCLIP_LINKED_ISSUE_IDS = linkedIssueIds.join(",");
if (wakePayloadJson) env.PAPERCLIP_WAKE_PAYLOAD_JSON = wakePayloadJson;
if (effectiveWorkspaceCwd) env.PAPERCLIP_WORKSPACE_CWD = effectiveWorkspaceCwd;
if (workspaceSource) env.PAPERCLIP_WORKSPACE_SOURCE = workspaceSource;
if (workspaceId) env.PAPERCLIP_WORKSPACE_ID = workspaceId;
if (workspaceRepoUrl) env.PAPERCLIP_WORKSPACE_REPO_URL = workspaceRepoUrl;
if (workspaceRepoRef) env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
if (agentHome) env.AGENT_HOME = agentHome;
applyPaperclipWorkspaceEnv(env, {
workspaceCwd: effectiveWorkspaceCwd,
workspaceSource,
workspaceId,
workspaceRepoUrl,
workspaceRepoRef,
agentHome,
});
if (workspaceHints.length > 0) env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
const targetPaperclipApiUrl = adapterExecutionTargetPaperclipApiUrl(executionTarget);
if (targetPaperclipApiUrl) env.PAPERCLIP_API_URL = targetPaperclipApiUrl;
for (const [key, value] of Object.entries(envConfig)) {
if (typeof value === "string") env[key] = value;
@@ -225,8 +266,8 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
);
const billingType = resolveGeminiBillingType(effectiveEnv);
const runtimeEnv = ensurePathInEnv(effectiveEnv);
await ensureCommandResolvable(command, cwd, runtimeEnv);
const resolvedCommand = await resolveCommandForLogs(command, cwd, runtimeEnv);
await ensureAdapterExecutionTargetCommandResolvable(command, executionTarget, cwd, runtimeEnv);
const resolvedCommand = await resolveAdapterExecutionTargetCommandForLogs(command, executionTarget, cwd, runtimeEnv);
const loggedEnv = buildInvocationEnvForLogs(env, {
runtimeEnv,
includeRuntimeKeys: ["HOME"],
@@ -240,18 +281,78 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (fromExtraArgs.length > 0) return fromExtraArgs;
return asStringArray(config.args);
})();
const effectiveExecutionCwd = adapterExecutionTargetRemoteCwd(executionTarget, cwd);
let restoreRemoteWorkspace: (() => Promise<void>) | null = null;
let remoteSkillsDir: string | null = null;
let localSkillsDir: string | null = null;
if (executionTargetIsRemote) {
try {
localSkillsDir = await buildGeminiSkillsDir(config);
await onLog(
"stdout",
`[paperclip] Syncing workspace and Gemini runtime assets to ${describeAdapterExecutionTarget(executionTarget)}.\n`,
);
const preparedExecutionTargetRuntime = await prepareAdapterExecutionTargetRuntime({
target: executionTarget,
adapterKey: "gemini",
workspaceLocalDir: cwd,
assets: [{
key: "skills",
localDir: localSkillsDir,
followSymlinks: true,
}],
});
restoreRemoteWorkspace = () => preparedExecutionTargetRuntime.restoreWorkspace();
const managedHome = adapterExecutionTargetUsesManagedHome(executionTarget);
if (managedHome && preparedExecutionTargetRuntime.runtimeRootDir) {
env.HOME = preparedExecutionTargetRuntime.runtimeRootDir;
}
const remoteHomeDir = managedHome && preparedExecutionTargetRuntime.runtimeRootDir
? preparedExecutionTargetRuntime.runtimeRootDir
: await readAdapterExecutionTargetHomeDir(runId, executionTarget, {
cwd,
env,
timeoutSec,
graceSec,
onLog,
});
if (remoteHomeDir && preparedExecutionTargetRuntime.assetDirs.skills) {
remoteSkillsDir = path.posix.join(remoteHomeDir, ".gemini", "skills");
await runAdapterExecutionTargetShellCommand(
runId,
executionTarget,
`mkdir -p ${JSON.stringify(path.posix.dirname(remoteSkillsDir))} && rm -rf ${JSON.stringify(remoteSkillsDir)} && cp -a ${JSON.stringify(preparedExecutionTargetRuntime.assetDirs.skills)} ${JSON.stringify(remoteSkillsDir)}`,
{ cwd, env, timeoutSec, graceSec, onLog },
);
}
} catch (error) {
await Promise.allSettled([
restoreRemoteWorkspace?.(),
localSkillsDir ? fs.rm(path.dirname(localSkillsDir), { recursive: true, force: true }).catch(() => undefined) : Promise.resolve(),
]);
throw error;
}
}
const runtimeSessionParams = parseObject(runtime.sessionParams);
const runtimeSessionId = asString(runtimeSessionParams.sessionId, runtime.sessionId ?? "");
const runtimeSessionCwd = asString(runtimeSessionParams.cwd, "");
const runtimeRemoteExecution = parseObject(runtimeSessionParams.remoteExecution);
const canResumeSession =
runtimeSessionId.length > 0 &&
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(cwd));
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(effectiveExecutionCwd)) &&
adapterExecutionTargetSessionMatches(runtimeRemoteExecution, executionTarget);
const sessionId = canResumeSession ? runtimeSessionId : null;
if (runtimeSessionId && !canResumeSession) {
if (executionTargetIsRemote && runtimeSessionId && !canResumeSession) {
await onLog(
"stdout",
`[paperclip] Gemini session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${cwd}".\n`,
`[paperclip] Gemini session "${runtimeSessionId}" does not match the current remote execution identity and will not be resumed in "${effectiveExecutionCwd}". Starting a fresh remote session.\n`,
);
} else if (runtimeSessionId && !canResumeSession) {
await onLog(
"stdout",
`[paperclip] Gemini session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${effectiveExecutionCwd}".\n`,
);
}
@@ -350,7 +451,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
await onMeta({
adapterType: "gemini_local",
command: resolvedCommand,
cwd,
cwd: effectiveExecutionCwd,
commandNotes,
commandArgs: args.map((value, index) => (
index === args.length - 1 ? `<prompt ${prompt.length} chars>` : value
@@ -362,7 +463,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
});
}
const proc = await runChildProcess(runId, command, args, {
const proc = await runAdapterExecutionTargetProcess(runId, executionTarget, command, args, {
cwd,
env,
timeoutSec,
@@ -416,10 +517,15 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const resolvedSessionParams = resolvedSessionId
? ({
sessionId: resolvedSessionId,
cwd,
cwd: effectiveExecutionCwd,
...(workspaceId ? { workspaceId } : {}),
...(workspaceRepoUrl ? { repoUrl: workspaceRepoUrl } : {}),
...(workspaceRepoRef ? { repoRef: workspaceRepoRef } : {}),
...(executionTargetIsRemote
? {
remoteExecution: adapterExecutionTargetSessionIdentity(executionTarget),
}
: {}),
} as Record<string, unknown>)
: null;
const parsedError = typeof attempt.parsed.errorMessage === "string" ? attempt.parsed.errorMessage.trim() : "";
@@ -458,20 +564,27 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
};
};
const initial = await runAttempt(sessionId);
if (
sessionId &&
!initial.proc.timedOut &&
(initial.proc.exitCode ?? 0) !== 0 &&
isGeminiUnknownSessionError(initial.proc.stdout, initial.proc.stderr)
) {
await onLog(
"stdout",
`[paperclip] Gemini resume session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toResult(retry, true, true);
}
try {
const initial = await runAttempt(sessionId);
if (
sessionId &&
!initial.proc.timedOut &&
(initial.proc.exitCode ?? 0) !== 0 &&
isGeminiUnknownSessionError(initial.proc.stdout, initial.proc.stderr)
) {
await onLog(
"stdout",
`[paperclip] Gemini resume session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toResult(retry, true, true);
}
return toResult(initial);
return toResult(initial);
} finally {
await Promise.all([
restoreRemoteWorkspace?.(),
localSkillsDir ? fs.rm(path.dirname(localSkillsDir), { recursive: true, force: true }).catch(() => undefined) : Promise.resolve(),
]);
}
}

View File

@@ -422,6 +422,8 @@ function buildWakeText(
" - GET /api/issues/{issueId}/comments",
" - Execute the issue instructions exactly. If the issue is actionable, take concrete action in this run; do not stop at a plan unless planning was requested.",
" - Leave durable progress with a clear next action. Use child issues for long or parallel delegated work instead of polling agents, sessions, or processes.",
" - Create child issues directly when you know what needs to be done; use POST /api/issues/{issueId}/interactions with kind suggest_tasks, ask_user_questions, or request_confirmation when the board/user must choose, answer, or confirm before you can continue.",
" - For plan approval, update the plan document first, then create request_confirmation targeting the latest plan revision with idempotencyKey confirmation:{issueId}:plan:{revisionId}; wait for acceptance before creating implementation subtasks.",
" - If blocked, PATCH /api/issues/{issueId} with {\"status\":\"blocked\",\"comment\":\"what is blocked, who owns the unblock, and the next action\"}.",
" - If instructions require a comment, POST /api/issues/{issueId}/comments with {\"body\":\"...\"}.",
" - PATCH /api/issues/{issueId} with {\"status\":\"done\",\"comment\":\"what changed and why\"}.",

View File

@@ -0,0 +1,225 @@
import { mkdir, mkdtemp, rm } from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it, vi } from "vitest";
const {
runChildProcess,
ensureCommandResolvable,
resolveCommandForLogs,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
runSshCommand,
syncDirectoryToSsh,
} = vi.hoisted(() => ({
runChildProcess: vi.fn(async () => ({
exitCode: 0,
signal: null,
timedOut: false,
stdout: [
JSON.stringify({ type: "step_start", sessionID: "session_123" }),
JSON.stringify({ type: "text", sessionID: "session_123", part: { text: "hello" } }),
JSON.stringify({
type: "step_finish",
sessionID: "session_123",
part: { cost: 0.001, tokens: { input: 1, output: 1, reasoning: 0, cache: { read: 0, write: 0 } } },
}),
].join("\n"),
stderr: "",
pid: 123,
startedAt: new Date().toISOString(),
})),
ensureCommandResolvable: vi.fn(async () => undefined),
resolveCommandForLogs: vi.fn(async () => "ssh://fixture@127.0.0.1:2222/remote/workspace :: opencode"),
prepareWorkspaceForSshExecution: vi.fn(async () => undefined),
restoreWorkspaceFromSshExecution: vi.fn(async () => undefined),
runSshCommand: vi.fn(async () => ({
stdout: "/home/agent",
stderr: "",
exitCode: 0,
})),
syncDirectoryToSsh: vi.fn(async () => undefined),
}));
vi.mock("@paperclipai/adapter-utils/server-utils", async () => {
const actual = await vi.importActual<typeof import("@paperclipai/adapter-utils/server-utils")>(
"@paperclipai/adapter-utils/server-utils",
);
return {
...actual,
ensureCommandResolvable,
resolveCommandForLogs,
runChildProcess,
};
});
vi.mock("@paperclipai/adapter-utils/ssh", async () => {
const actual = await vi.importActual<typeof import("@paperclipai/adapter-utils/ssh")>(
"@paperclipai/adapter-utils/ssh",
);
return {
...actual,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
runSshCommand,
syncDirectoryToSsh,
};
});
import { execute } from "./execute.js";
describe("opencode remote execution", () => {
const cleanupDirs: string[] = [];
afterEach(async () => {
vi.clearAllMocks();
while (cleanupDirs.length > 0) {
const dir = cleanupDirs.pop();
if (!dir) continue;
await rm(dir, { recursive: true, force: true }).catch(() => undefined);
}
});
it("prepares the workspace, syncs OpenCode skills, and restores workspace changes for remote SSH execution", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-opencode-remote-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
await mkdir(workspaceDir, { recursive: true });
const result = await execute({
runId: "run-1",
agent: {
id: "agent-1",
companyId: "company-1",
name: "OpenCode Builder",
adapterType: "opencode_local",
adapterConfig: {},
},
runtime: {
sessionId: null,
sessionParams: null,
sessionDisplayId: null,
taskKey: null,
},
config: {
command: "opencode",
model: "opencode/gpt-5-nano",
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
paperclipApiUrl: "http://198.51.100.10:3102",
},
},
onLog: async () => {},
});
expect(result.sessionParams).toMatchObject({
sessionId: "session_123",
cwd: "/remote/workspace",
remoteExecution: {
transport: "ssh",
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteCwd: "/remote/workspace",
paperclipApiUrl: "http://198.51.100.10:3102",
},
});
expect(prepareWorkspaceForSshExecution).toHaveBeenCalledTimes(1);
expect(syncDirectoryToSsh).toHaveBeenCalledTimes(2);
expect(syncDirectoryToSsh).toHaveBeenCalledWith(expect.objectContaining({
remoteDir: "/remote/workspace/.paperclip-runtime/opencode/xdgConfig",
}));
expect(syncDirectoryToSsh).toHaveBeenCalledWith(expect.objectContaining({
remoteDir: "/remote/workspace/.paperclip-runtime/opencode/skills",
followSymlinks: true,
}));
expect(runSshCommand).toHaveBeenCalledWith(
expect.anything(),
expect.stringContaining(".claude/skills"),
expect.anything(),
);
const call = runChildProcess.mock.calls[0] as unknown as
| [string, string, string[], { env: Record<string, string>; remoteExecution?: { remoteCwd: string } | null }]
| undefined;
expect(call?.[3].env.PAPERCLIP_API_URL).toBe("http://198.51.100.10:3102");
expect(call?.[3].env.XDG_CONFIG_HOME).toBe("/remote/workspace/.paperclip-runtime/opencode/xdgConfig");
expect(call?.[3].remoteExecution?.remoteCwd).toBe("/remote/workspace");
expect(restoreWorkspaceFromSshExecution).toHaveBeenCalledTimes(1);
});
it("resumes saved OpenCode sessions for remote SSH execution only when the identity matches", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-opencode-remote-resume-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
await mkdir(workspaceDir, { recursive: true });
await execute({
runId: "run-ssh-resume",
agent: {
id: "agent-1",
companyId: "company-1",
name: "OpenCode Builder",
adapterType: "opencode_local",
adapterConfig: {},
},
runtime: {
sessionId: "session-123",
sessionParams: {
sessionId: "session-123",
cwd: "/remote/workspace",
remoteExecution: {
transport: "ssh",
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteCwd: "/remote/workspace",
},
},
sessionDisplayId: "session-123",
taskKey: null,
},
config: {
command: "opencode",
model: "opencode/gpt-5-nano",
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
},
},
onLog: async () => {},
});
const call = runChildProcess.mock.calls[0] as unknown as [string, string, string[]] | undefined;
expect(call?.[2]).toContain("--session");
expect(call?.[2]).toContain("session-123");
});
});

View File

@@ -3,19 +3,34 @@ import os from "node:os";
import path from "node:path";
import { fileURLToPath } from "node:url";
import { inferOpenAiCompatibleBiller, type AdapterExecutionContext, type AdapterExecutionResult } from "@paperclipai/adapter-utils";
import {
adapterExecutionTargetIsRemote,
adapterExecutionTargetPaperclipApiUrl,
adapterExecutionTargetRemoteCwd,
adapterExecutionTargetSessionIdentity,
adapterExecutionTargetSessionMatches,
adapterExecutionTargetUsesManagedHome,
describeAdapterExecutionTarget,
ensureAdapterExecutionTargetCommandResolvable,
prepareAdapterExecutionTargetRuntime,
readAdapterExecutionTarget,
readAdapterExecutionTargetHomeDir,
resolveAdapterExecutionTargetCommandForLogs,
runAdapterExecutionTargetProcess,
runAdapterExecutionTargetShellCommand,
} from "@paperclipai/adapter-utils/execution-target";
import {
asString,
asNumber,
asStringArray,
parseObject,
applyPaperclipWorkspaceEnv,
buildPaperclipEnv,
joinPromptSections,
buildInvocationEnvForLogs,
ensureAbsoluteDirectory,
ensureCommandResolvable,
ensurePaperclipSkillSymlink,
ensurePathInEnv,
resolveCommandForLogs,
renderTemplate,
renderPaperclipWakePrompt,
stringifyPaperclipWakePayload,
@@ -93,8 +108,26 @@ async function ensureOpenCodeSkillsInjected(
}
}
async function buildOpenCodeSkillsDir(config: Record<string, unknown>): Promise<string> {
const tmp = await fs.mkdtemp(path.join(os.tmpdir(), "paperclip-opencode-skills-"));
const target = path.join(tmp, "skills");
await fs.mkdir(target, { recursive: true });
const availableEntries = await readPaperclipRuntimeSkillEntries(config, __moduleDir);
const desiredNames = new Set(resolvePaperclipDesiredSkillNames(config, availableEntries));
for (const entry of availableEntries) {
if (!desiredNames.has(entry.key)) continue;
await fs.symlink(entry.source, path.join(target, entry.runtimeName));
}
return target;
}
export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExecutionResult> {
const { runId, agent, runtime, config, context, onLog, onMeta, onSpawn, authToken } = ctx;
const executionTarget = readAdapterExecutionTarget({
executionTarget: ctx.executionTarget,
legacyRemoteExecution: ctx.executionTransport?.remoteExecution,
});
const executionTargetIsRemote = adapterExecutionTargetIsRemote(executionTarget);
const promptTemplate = asString(
config.promptTemplate,
@@ -123,11 +156,13 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
await ensureAbsoluteDirectory(cwd, { createIfMissing: true });
const openCodeSkillEntries = await readPaperclipRuntimeSkillEntries(config, __moduleDir);
const desiredOpenCodeSkillNames = resolvePaperclipDesiredSkillNames(config, openCodeSkillEntries);
await ensureOpenCodeSkillsInjected(
onLog,
openCodeSkillEntries,
desiredOpenCodeSkillNames,
);
if (!executionTargetIsRemote) {
await ensureOpenCodeSkillsInjected(
onLog,
openCodeSkillEntries,
desiredOpenCodeSkillNames,
);
}
const envConfig = parseObject(config.env);
const hasExplicitApiKey =
@@ -165,13 +200,17 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (approvalStatus) env.PAPERCLIP_APPROVAL_STATUS = approvalStatus;
if (linkedIssueIds.length > 0) env.PAPERCLIP_LINKED_ISSUE_IDS = linkedIssueIds.join(",");
if (wakePayloadJson) env.PAPERCLIP_WAKE_PAYLOAD_JSON = wakePayloadJson;
if (effectiveWorkspaceCwd) env.PAPERCLIP_WORKSPACE_CWD = effectiveWorkspaceCwd;
if (workspaceSource) env.PAPERCLIP_WORKSPACE_SOURCE = workspaceSource;
if (workspaceId) env.PAPERCLIP_WORKSPACE_ID = workspaceId;
if (workspaceRepoUrl) env.PAPERCLIP_WORKSPACE_REPO_URL = workspaceRepoUrl;
if (workspaceRepoRef) env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
if (agentHome) env.AGENT_HOME = agentHome;
applyPaperclipWorkspaceEnv(env, {
workspaceCwd: effectiveWorkspaceCwd,
workspaceSource,
workspaceId,
workspaceRepoUrl,
workspaceRepoRef,
agentHome,
});
if (workspaceHints.length > 0) env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
const targetPaperclipApiUrl = adapterExecutionTargetPaperclipApiUrl(executionTarget);
if (targetPaperclipApiUrl) env.PAPERCLIP_API_URL = targetPaperclipApiUrl;
for (const [key, value] of Object.entries(envConfig)) {
if (typeof value === "string") env[key] = value;
@@ -185,26 +224,30 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
env.PAPERCLIP_API_KEY = authToken;
}
const preparedRuntimeConfig = await prepareOpenCodeRuntimeConfig({ env, config });
const localRuntimeConfigHome =
preparedRuntimeConfig.notes.length > 0 ? preparedRuntimeConfig.env.XDG_CONFIG_HOME : "";
try {
const runtimeEnv = Object.fromEntries(
Object.entries(ensurePathInEnv({ ...process.env, ...preparedRuntimeConfig.env })).filter(
(entry): entry is [string, string] => typeof entry[1] === "string",
),
);
await ensureCommandResolvable(command, cwd, runtimeEnv);
const resolvedCommand = await resolveCommandForLogs(command, cwd, runtimeEnv);
await ensureAdapterExecutionTargetCommandResolvable(command, executionTarget, cwd, runtimeEnv);
const resolvedCommand = await resolveAdapterExecutionTargetCommandForLogs(command, executionTarget, cwd, runtimeEnv);
const loggedEnv = buildInvocationEnvForLogs(preparedRuntimeConfig.env, {
runtimeEnv,
includeRuntimeKeys: ["HOME"],
resolvedCommand,
});
await ensureOpenCodeModelConfiguredAndAvailable({
model,
command,
cwd,
env: runtimeEnv,
});
if (!executionTargetIsRemote) {
await ensureOpenCodeModelConfiguredAndAvailable({
model,
command,
cwd,
env: runtimeEnv,
});
}
const timeoutSec = asNumber(config.timeoutSec, 0);
const graceSec = asNumber(config.graceSec, 20);
@@ -213,18 +256,80 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (fromExtraArgs.length > 0) return fromExtraArgs;
return asStringArray(config.args);
})();
const effectiveExecutionCwd = adapterExecutionTargetRemoteCwd(executionTarget, cwd);
let restoreRemoteWorkspace: (() => Promise<void>) | null = null;
let localSkillsDir: string | null = null;
if (executionTargetIsRemote) {
localSkillsDir = await buildOpenCodeSkillsDir(config);
await onLog(
"stdout",
`[paperclip] Syncing workspace and OpenCode runtime assets to ${describeAdapterExecutionTarget(executionTarget)}.\n`,
);
const preparedExecutionTargetRuntime = await prepareAdapterExecutionTargetRuntime({
target: executionTarget,
adapterKey: "opencode",
workspaceLocalDir: cwd,
assets: [
{
key: "skills",
localDir: localSkillsDir,
followSymlinks: true,
},
...(localRuntimeConfigHome
? [{
key: "xdgConfig",
localDir: localRuntimeConfigHome,
}]
: []),
],
});
restoreRemoteWorkspace = () => preparedExecutionTargetRuntime.restoreWorkspace();
const managedHome = adapterExecutionTargetUsesManagedHome(executionTarget);
if (managedHome && preparedExecutionTargetRuntime.runtimeRootDir) {
preparedRuntimeConfig.env.HOME = preparedExecutionTargetRuntime.runtimeRootDir;
}
if (localRuntimeConfigHome && preparedExecutionTargetRuntime.assetDirs.xdgConfig) {
preparedRuntimeConfig.env.XDG_CONFIG_HOME = preparedExecutionTargetRuntime.assetDirs.xdgConfig;
}
const remoteHomeDir = managedHome && preparedExecutionTargetRuntime.runtimeRootDir
? preparedExecutionTargetRuntime.runtimeRootDir
: await readAdapterExecutionTargetHomeDir(runId, executionTarget, {
cwd,
env: preparedRuntimeConfig.env,
timeoutSec,
graceSec,
onLog,
});
if (remoteHomeDir && preparedExecutionTargetRuntime.assetDirs.skills) {
const remoteSkillsDir = path.posix.join(remoteHomeDir, ".claude", "skills");
await runAdapterExecutionTargetShellCommand(
runId,
executionTarget,
`mkdir -p ${JSON.stringify(path.posix.dirname(remoteSkillsDir))} && rm -rf ${JSON.stringify(remoteSkillsDir)} && cp -a ${JSON.stringify(preparedExecutionTargetRuntime.assetDirs.skills)} ${JSON.stringify(remoteSkillsDir)}`,
{ cwd, env: preparedRuntimeConfig.env, timeoutSec, graceSec, onLog },
);
}
}
const runtimeSessionParams = parseObject(runtime.sessionParams);
const runtimeSessionId = asString(runtimeSessionParams.sessionId, runtime.sessionId ?? "");
const runtimeSessionCwd = asString(runtimeSessionParams.cwd, "");
const runtimeRemoteExecution = parseObject(runtimeSessionParams.remoteExecution);
const canResumeSession =
runtimeSessionId.length > 0 &&
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(cwd));
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(effectiveExecutionCwd)) &&
adapterExecutionTargetSessionMatches(runtimeRemoteExecution, executionTarget);
const sessionId = canResumeSession ? runtimeSessionId : null;
if (runtimeSessionId && !canResumeSession) {
if (executionTargetIsRemote && runtimeSessionId && !canResumeSession) {
await onLog(
"stdout",
`[paperclip] OpenCode session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${cwd}".\n`,
`[paperclip] OpenCode session "${runtimeSessionId}" does not match the current remote execution identity and will not be resumed in "${effectiveExecutionCwd}". Starting a fresh remote session.\n`,
);
} else if (runtimeSessionId && !canResumeSession) {
await onLog(
"stdout",
`[paperclip] OpenCode session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${effectiveExecutionCwd}".\n`,
);
}
const instructionsFilePath = asString(config.instructionsFilePath, "").trim();
@@ -314,7 +419,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
await onMeta({
adapterType: "opencode_local",
command: resolvedCommand,
cwd,
cwd: effectiveExecutionCwd,
commandNotes,
commandArgs: [...args, `<stdin prompt ${prompt.length} chars>`],
env: loggedEnv,
@@ -324,9 +429,9 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
});
}
const proc = await runChildProcess(runId, command, args, {
const proc = await runAdapterExecutionTargetProcess(runId, executionTarget, command, args, {
cwd,
env: runtimeEnv,
env: preparedRuntimeConfig.env,
stdin: prompt,
timeoutSec,
graceSec,
@@ -364,10 +469,15 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const resolvedSessionParams = resolvedSessionId
? ({
sessionId: resolvedSessionId,
cwd,
cwd: effectiveExecutionCwd,
...(workspaceId ? { workspaceId } : {}),
...(workspaceRepoUrl ? { repoUrl: workspaceRepoUrl } : {}),
...(workspaceRepoRef ? { repoRef: workspaceRepoRef } : {}),
...(executionTargetIsRemote
? {
remoteExecution: adapterExecutionTargetSessionIdentity(executionTarget),
}
: {}),
} as Record<string, unknown>)
: null;
@@ -408,23 +518,30 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
};
};
const initial = await runAttempt(sessionId);
const initialFailed =
!initial.proc.timedOut && ((initial.proc.exitCode ?? 0) !== 0 || Boolean(initial.parsed.errorMessage));
if (
sessionId &&
initialFailed &&
isOpenCodeUnknownSessionError(initial.proc.stdout, initial.rawStderr)
) {
await onLog(
"stdout",
`[paperclip] OpenCode session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toResult(retry, true);
}
try {
const initial = await runAttempt(sessionId);
const initialFailed =
!initial.proc.timedOut && ((initial.proc.exitCode ?? 0) !== 0 || Boolean(initial.parsed.errorMessage));
if (
sessionId &&
initialFailed &&
isOpenCodeUnknownSessionError(initial.proc.stdout, initial.rawStderr)
) {
await onLog(
"stdout",
`[paperclip] OpenCode session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toResult(retry, true);
}
return toResult(initial);
return toResult(initial);
} finally {
await Promise.all([
restoreRemoteWorkspace?.(),
localSkillsDir ? fs.rm(path.dirname(localSkillsDir), { recursive: true, force: true }).catch(() => undefined) : Promise.resolve(),
]);
}
} finally {
await preparedRuntimeConfig.cleanup();
}

View File

@@ -40,6 +40,33 @@ describe("parseOpenCodeJsonl", () => {
});
expect(parsed.costUsd).toBeCloseTo(0.0025, 6);
expect(parsed.errorMessage).toContain("model unavailable");
expect(parsed.toolErrors).toEqual([]);
});
it("keeps failed tool calls separate from fatal run errors", () => {
const stdout = [
JSON.stringify({
type: "tool_use",
sessionID: "session_123",
part: {
state: {
status: "error",
error: "File not found: e2b-adapter-result.txt",
},
},
}),
JSON.stringify({
type: "text",
sessionID: "session_123",
part: { text: "Recovered and completed the task" },
}),
].join("\n");
const parsed = parseOpenCodeJsonl(stdout);
expect(parsed.sessionId).toBe("session_123");
expect(parsed.summary).toBe("Recovered and completed the task");
expect(parsed.errorMessage).toBeNull();
expect(parsed.toolErrors).toEqual(["File not found: e2b-adapter-result.txt"]);
});
it("detects unknown session errors", () => {

View File

@@ -23,6 +23,7 @@ export function parseOpenCodeJsonl(stdout: string) {
let sessionId: string | null = null;
const messages: string[] = [];
const errors: string[] = [];
const toolErrors: string[] = [];
const usage = {
inputTokens: 0,
cachedInputTokens: 0,
@@ -65,7 +66,7 @@ export function parseOpenCodeJsonl(stdout: string) {
const state = parseObject(part.state);
if (asString(state.status, "") === "error") {
const text = asString(state.error, "").trim();
if (text) errors.push(text);
if (text) toolErrors.push(text);
}
continue;
}
@@ -83,6 +84,7 @@ export function parseOpenCodeJsonl(stdout: string) {
usage,
costUsd,
errorMessage: errors.length > 0 ? errors.join("\n") : null,
toolErrors,
};
}
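
With `toolErrors` split out, a caller can surface failed tool calls as warnings while still failing the run only on a fatal `errorMessage`, which is what the "keeps failed tool calls separate from fatal run errors" test above exercises. A small usage sketch; the import path and the logging choices are illustrative, not the adapter's actual behavior:

```ts
import { parseOpenCodeJsonl } from "./parse-opencode-jsonl.js"; // module path is a guess

const stdout = [
  JSON.stringify({ type: "tool_use", sessionID: "s1", part: { state: { status: "error", error: "File not found" } } }),
  JSON.stringify({ type: "text", sessionID: "s1", part: { text: "Recovered and completed the task" } }),
].join("\n");

const parsed = parseOpenCodeJsonl(stdout);

for (const toolError of parsed.toolErrors) {
  // Failed tool calls stay informational; they no longer flow into errorMessage.
  console.warn(`[paperclip] OpenCode tool call failed: ${toolError}`);
}

if (parsed.errorMessage) {
  // Only fatal run errors still fail the attempt.
  throw new Error(parsed.errorMessage);
}

console.log(parsed.summary); // "Recovered and completed the task"
```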

View File

@@ -0,0 +1,229 @@
import { mkdir, mkdtemp, rm } from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it, vi } from "vitest";
const {
runChildProcess,
ensureCommandResolvable,
resolveCommandForLogs,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
runSshCommand,
syncDirectoryToSsh,
} = vi.hoisted(() => ({
runChildProcess: vi.fn(async () => ({
exitCode: 0,
signal: null,
timedOut: false,
stdout: JSON.stringify({
type: "turn_end",
message: {
role: "assistant",
content: "done",
usage: {
input: 10,
output: 20,
cacheRead: 0,
cost: { total: 0.01 },
},
},
toolResults: [],
}),
stderr: "",
pid: 123,
startedAt: new Date().toISOString(),
})),
ensureCommandResolvable: vi.fn(async () => undefined),
resolveCommandForLogs: vi.fn(async () => "ssh://fixture@127.0.0.1:2222/remote/workspace :: pi"),
prepareWorkspaceForSshExecution: vi.fn(async () => undefined),
restoreWorkspaceFromSshExecution: vi.fn(async () => undefined),
runSshCommand: vi.fn(async () => ({
stdout: "",
stderr: "",
exitCode: 0,
})),
syncDirectoryToSsh: vi.fn(async () => undefined),
}));
vi.mock("@paperclipai/adapter-utils/server-utils", async () => {
const actual = await vi.importActual<typeof import("@paperclipai/adapter-utils/server-utils")>(
"@paperclipai/adapter-utils/server-utils",
);
return {
...actual,
ensureCommandResolvable,
resolveCommandForLogs,
runChildProcess,
};
});
vi.mock("@paperclipai/adapter-utils/ssh", async () => {
const actual = await vi.importActual<typeof import("@paperclipai/adapter-utils/ssh")>(
"@paperclipai/adapter-utils/ssh",
);
return {
...actual,
prepareWorkspaceForSshExecution,
restoreWorkspaceFromSshExecution,
runSshCommand,
syncDirectoryToSsh,
};
});
import { execute } from "./execute.js";
describe("pi remote execution", () => {
const cleanupDirs: string[] = [];
afterEach(async () => {
vi.clearAllMocks();
while (cleanupDirs.length > 0) {
const dir = cleanupDirs.pop();
if (!dir) continue;
await rm(dir, { recursive: true, force: true }).catch(() => undefined);
}
});
it("prepares the workspace, syncs Pi skills, and restores workspace changes for remote SSH execution", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-pi-remote-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
await mkdir(workspaceDir, { recursive: true });
const result = await execute({
runId: "run-1",
agent: {
id: "agent-1",
companyId: "company-1",
name: "Pi Builder",
adapterType: "pi_local",
adapterConfig: {},
},
runtime: {
sessionId: null,
sessionParams: null,
sessionDisplayId: null,
taskKey: null,
},
config: {
command: "pi",
model: "openai/gpt-5.4-mini",
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
paperclipApiUrl: "http://198.51.100.10:3102",
},
},
onLog: async () => {},
});
expect(result.sessionParams).toMatchObject({
cwd: "/remote/workspace",
remoteExecution: {
transport: "ssh",
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteCwd: "/remote/workspace",
paperclipApiUrl: "http://198.51.100.10:3102",
},
});
expect(String(result.sessionId)).toContain("/remote/workspace/.paperclip-runtime/pi/sessions/");
expect(prepareWorkspaceForSshExecution).toHaveBeenCalledTimes(1);
expect(syncDirectoryToSsh).toHaveBeenCalledTimes(1);
expect(syncDirectoryToSsh).toHaveBeenCalledWith(expect.objectContaining({
remoteDir: "/remote/workspace/.paperclip-runtime/pi/skills",
followSymlinks: true,
}));
expect(runSshCommand).toHaveBeenCalledWith(
expect.anything(),
expect.stringContaining(".paperclip-runtime/pi/sessions"),
expect.anything(),
);
const call = runChildProcess.mock.calls[0] as unknown as
| [string, string, string[], { env: Record<string, string>; remoteExecution?: { remoteCwd: string } | null }]
| undefined;
expect(call?.[2]).toContain("--session");
expect(call?.[2]).toContain("--skill");
expect(call?.[2]).toContain("/remote/workspace/.paperclip-runtime/pi/skills");
expect(call?.[3].env.PAPERCLIP_API_URL).toBe("http://198.51.100.10:3102");
expect(call?.[3].remoteExecution?.remoteCwd).toBe("/remote/workspace");
expect(restoreWorkspaceFromSshExecution).toHaveBeenCalledTimes(1);
});
it("resumes saved Pi sessions for remote SSH execution only when the identity matches", async () => {
const rootDir = await mkdtemp(path.join(os.tmpdir(), "paperclip-pi-remote-resume-"));
cleanupDirs.push(rootDir);
const workspaceDir = path.join(rootDir, "workspace");
await mkdir(workspaceDir, { recursive: true });
await execute({
runId: "run-ssh-resume",
agent: {
id: "agent-1",
companyId: "company-1",
name: "Pi Builder",
adapterType: "pi_local",
adapterConfig: {},
},
runtime: {
sessionId: "/remote/workspace/.paperclip-runtime/pi/sessions/session-123.jsonl",
sessionParams: {
sessionId: "/remote/workspace/.paperclip-runtime/pi/sessions/session-123.jsonl",
cwd: "/remote/workspace",
remoteExecution: {
transport: "ssh",
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteCwd: "/remote/workspace",
},
},
sessionDisplayId: "session-123",
taskKey: null,
},
config: {
command: "pi",
model: "openai/gpt-5.4-mini",
},
context: {
paperclipWorkspace: {
cwd: workspaceDir,
source: "project_primary",
},
},
executionTransport: {
remoteExecution: {
host: "127.0.0.1",
port: 2222,
username: "fixture",
remoteWorkspacePath: "/remote/workspace",
remoteCwd: "/remote/workspace",
privateKey: "PRIVATE KEY",
knownHosts: "[127.0.0.1]:2222 ssh-ed25519 AAAA",
strictHostKeyChecking: true,
},
},
onLog: async () => {},
});
const call = runChildProcess.mock.calls[0] as unknown as [string, string, string[]] | undefined;
expect(call?.[2]).toContain("--session");
expect(call?.[2]).toContain("/remote/workspace/.paperclip-runtime/pi/sessions/session-123.jsonl");
});
});

View File

@@ -3,20 +3,34 @@ import os from "node:os";
import path from "node:path";
import { fileURLToPath } from "node:url";
import { inferOpenAiCompatibleBiller, type AdapterExecutionContext, type AdapterExecutionResult } from "@paperclipai/adapter-utils";
import {
adapterExecutionTargetIsRemote,
adapterExecutionTargetPaperclipApiUrl,
adapterExecutionTargetRemoteCwd,
adapterExecutionTargetSessionIdentity,
adapterExecutionTargetSessionMatches,
adapterExecutionTargetUsesManagedHome,
describeAdapterExecutionTarget,
ensureAdapterExecutionTargetCommandResolvable,
ensureAdapterExecutionTargetFile,
prepareAdapterExecutionTargetRuntime,
readAdapterExecutionTarget,
resolveAdapterExecutionTargetCommandForLogs,
runAdapterExecutionTargetProcess,
} from "@paperclipai/adapter-utils/execution-target";
import {
asString,
asNumber,
asStringArray,
parseObject,
applyPaperclipWorkspaceEnv,
buildPaperclipEnv,
joinPromptSections,
buildInvocationEnvForLogs,
ensureAbsoluteDirectory,
ensureCommandResolvable,
ensurePaperclipSkillSymlink,
ensurePathInEnv,
readPaperclipRuntimeSkillEntries,
resolveCommandForLogs,
resolvePaperclipDesiredSkillNames,
removeMaintainerOnlySkillSymlinks,
renderTemplate,
@@ -95,6 +109,19 @@ async function ensurePiSkillsInjected(
}
}
async function buildPiSkillsDir(config: Record<string, unknown>): Promise<string> {
const tmp = await fs.mkdtemp(path.join(os.tmpdir(), "paperclip-pi-skills-"));
const target = path.join(tmp, "skills");
await fs.mkdir(target, { recursive: true });
const availableEntries = await readPaperclipRuntimeSkillEntries(config, __moduleDir);
const desiredNames = new Set(resolvePaperclipDesiredSkillNames(config, availableEntries));
for (const entry of availableEntries) {
if (!desiredNames.has(entry.key)) continue;
await fs.symlink(entry.source, path.join(target, entry.runtimeName));
}
return target;
}
function resolvePiBiller(env: Record<string, string>, provider: string | null): string {
return inferOpenAiCompatibleBiller(env, null) ?? provider ?? "unknown";
}
@@ -109,8 +136,18 @@ function buildSessionPath(agentId: string, timestamp: string): string {
return path.join(PAPERCLIP_SESSIONS_DIR, `${safeTimestamp}-${agentId}.jsonl`);
}
function buildRemoteSessionPath(runtimeRootDir: string, agentId: string, timestamp: string): string {
const safeTimestamp = timestamp.replace(/[:.]/g, "-");
return path.posix.join(runtimeRootDir, "sessions", `${safeTimestamp}-${agentId}.jsonl`);
}
export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExecutionResult> {
const { runId, agent, runtime, config, context, onLog, onMeta, onSpawn, authToken } = ctx;
const executionTarget = readAdapterExecutionTarget({
executionTarget: ctx.executionTarget,
legacyRemoteExecution: ctx.executionTransport?.remoteExecution,
});
const executionTargetIsRemote = adapterExecutionTargetIsRemote(executionTarget);
const promptTemplate = asString(
config.promptTemplate,
@@ -140,15 +177,18 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const useConfiguredInsteadOfAgentHome = workspaceSource === "agent_home" && configuredCwd.length > 0;
const effectiveWorkspaceCwd = useConfiguredInsteadOfAgentHome ? "" : workspaceCwd;
const cwd = effectiveWorkspaceCwd || configuredCwd || process.cwd();
const effectiveExecutionCwd = adapterExecutionTargetRemoteCwd(executionTarget, cwd);
await ensureAbsoluteDirectory(cwd, { createIfMissing: true });
// Ensure sessions directory exists
await ensureSessionsDir();
// Inject skills
if (!executionTargetIsRemote) {
await ensureSessionsDir();
}
const piSkillEntries = await readPaperclipRuntimeSkillEntries(config, __moduleDir);
const desiredPiSkillNames = resolvePaperclipDesiredSkillNames(config, piSkillEntries);
await ensurePiSkillsInjected(onLog, piSkillEntries, desiredPiSkillNames);
if (!executionTargetIsRemote) {
await ensurePiSkillsInjected(onLog, piSkillEntries, desiredPiSkillNames);
}
// Build environment
const envConfig = parseObject(config.env);
@@ -156,7 +196,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
typeof envConfig.PAPERCLIP_API_KEY === "string" && envConfig.PAPERCLIP_API_KEY.trim().length > 0;
const env: Record<string, string> = { ...buildPaperclipEnv(agent) };
env.PAPERCLIP_RUN_ID = runId;
const wakeTaskId =
(typeof context.taskId === "string" && context.taskId.trim().length > 0 && context.taskId.trim()) ||
(typeof context.issueId === "string" && context.issueId.trim().length > 0 && context.issueId.trim()) ||
@@ -189,13 +229,17 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (approvalStatus) env.PAPERCLIP_APPROVAL_STATUS = approvalStatus;
if (linkedIssueIds.length > 0) env.PAPERCLIP_LINKED_ISSUE_IDS = linkedIssueIds.join(",");
if (wakePayloadJson) env.PAPERCLIP_WAKE_PAYLOAD_JSON = wakePayloadJson;
if (workspaceCwd) env.PAPERCLIP_WORKSPACE_CWD = workspaceCwd;
if (workspaceSource) env.PAPERCLIP_WORKSPACE_SOURCE = workspaceSource;
if (workspaceId) env.PAPERCLIP_WORKSPACE_ID = workspaceId;
if (workspaceRepoUrl) env.PAPERCLIP_WORKSPACE_REPO_URL = workspaceRepoUrl;
if (workspaceRepoRef) env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
if (agentHome) env.AGENT_HOME = agentHome;
applyPaperclipWorkspaceEnv(env, {
workspaceCwd,
workspaceSource,
workspaceId,
workspaceRepoUrl,
workspaceRepoRef,
agentHome,
});
if (workspaceHints.length > 0) env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
const targetPaperclipApiUrl = adapterExecutionTargetPaperclipApiUrl(executionTarget);
if (targetPaperclipApiUrl) env.PAPERCLIP_API_URL = targetPaperclipApiUrl;
for (const [key, value] of Object.entries(envConfig)) {
if (typeof value === "string") env[key] = value;
@@ -203,27 +247,51 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (!hasExplicitApiKey && authToken) {
env.PAPERCLIP_API_KEY = authToken;
}
// Prepend installed skill `bin/` dirs to PATH so an agent's bash tool can
// invoke skill binaries (e.g. `paperclip-get-issue`) by name. Without this,
// any pi_local agent whose AGENTS.md calls a skill command via bash hits
// exit 127 "command not found". Only include skills that ensurePiSkillsInjected
// actually linked — otherwise non-injected skills' binaries would be reachable
// to the agent.
const injectedSkillKeys = new Set(desiredPiSkillNames);
const skillBinDirs = piSkillEntries
.filter((entry) => injectedSkillKeys.has(entry.key) && entry.source.length > 0)
.map((entry) => path.join(entry.source, "bin"));
const mergedEnv = ensurePathInEnv({ ...process.env, ...env });
const pathKey =
typeof mergedEnv.Path === "string" && mergedEnv.Path.length > 0 && !mergedEnv.PATH
? "Path"
: "PATH";
const basePath = mergedEnv[pathKey] ?? "";
if (skillBinDirs.length > 0) {
const existing = basePath.split(path.delimiter).filter(Boolean);
const additions = skillBinDirs.filter((dir) => !existing.includes(dir));
if (additions.length > 0) {
mergedEnv[pathKey] = [...additions, basePath].filter(Boolean).join(path.delimiter);
}
}
const runtimeEnv = Object.fromEntries(
Object.entries(ensurePathInEnv({ ...process.env, ...env })).filter(
Object.entries(mergedEnv).filter(
(entry): entry is [string, string] => typeof entry[1] === "string",
),
);
await ensureCommandResolvable(command, cwd, runtimeEnv);
const resolvedCommand = await resolveCommandForLogs(command, cwd, runtimeEnv);
await ensureAdapterExecutionTargetCommandResolvable(command, executionTarget, cwd, runtimeEnv);
const resolvedCommand = await resolveAdapterExecutionTargetCommandForLogs(command, executionTarget, cwd, runtimeEnv);
const loggedEnv = buildInvocationEnvForLogs(env, {
runtimeEnv,
includeRuntimeKeys: ["HOME"],
resolvedCommand,
});
// Validate model is available before execution
await ensurePiModelConfiguredAndAvailable({
model,
command,
cwd,
env: runtimeEnv,
});
if (!executionTargetIsRemote) {
await ensurePiModelConfiguredAndAvailable({
model,
command,
cwd,
env: runtimeEnv,
});
}
const timeoutSec = asNumber(config.timeoutSec, 0);
const graceSec = asNumber(config.graceSec, 20);
@@ -232,31 +300,84 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
if (fromExtraArgs.length > 0) return fromExtraArgs;
return asStringArray(config.args);
})();
let restoreRemoteWorkspace: (() => Promise<void>) | null = null;
let remoteRuntimeRootDir: string | null = null;
let localSkillsDir: string | null = null;
let remoteSkillsDir: string | null = null;
if (executionTargetIsRemote) {
try {
localSkillsDir = await buildPiSkillsDir(config);
await onLog(
"stdout",
`[paperclip] Syncing workspace and Pi runtime assets to ${describeAdapterExecutionTarget(executionTarget)}.\n`,
);
const preparedRemoteRuntime = await prepareAdapterExecutionTargetRuntime({
target: executionTarget,
adapterKey: "pi",
workspaceLocalDir: cwd,
assets: [
{
key: "skills",
localDir: localSkillsDir,
followSymlinks: true,
},
],
});
restoreRemoteWorkspace = () => preparedRemoteRuntime.restoreWorkspace();
if (adapterExecutionTargetUsesManagedHome(executionTarget) && preparedRemoteRuntime.runtimeRootDir) {
env.HOME = preparedRemoteRuntime.runtimeRootDir;
}
remoteRuntimeRootDir = preparedRemoteRuntime.runtimeRootDir;
remoteSkillsDir = preparedRemoteRuntime.assetDirs.skills ?? null;
} catch (error) {
await Promise.allSettled([
restoreRemoteWorkspace?.(),
localSkillsDir ? fs.rm(path.dirname(localSkillsDir), { recursive: true, force: true }).catch(() => undefined) : Promise.resolve(),
]);
throw error;
}
}
// Handle session
const runtimeSessionParams = parseObject(runtime.sessionParams);
const runtimeSessionId = asString(runtimeSessionParams.sessionId, runtime.sessionId ?? "");
const runtimeSessionCwd = asString(runtimeSessionParams.cwd, "");
const runtimeRemoteExecution = parseObject(runtimeSessionParams.remoteExecution);
const canResumeSession =
runtimeSessionId.length > 0 &&
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(effectiveExecutionCwd)) &&
adapterExecutionTargetSessionMatches(runtimeRemoteExecution, executionTarget);
const sessionPath = canResumeSession
? runtimeSessionId
: executionTargetIsRemote && remoteRuntimeRootDir
? buildRemoteSessionPath(remoteRuntimeRootDir, agent.id, new Date().toISOString())
: buildSessionPath(agent.id, new Date().toISOString());
if (runtimeSessionId && !canResumeSession) {
await onLog(
"stdout",
executionTargetIsRemote
? `[paperclip] Pi session "${runtimeSessionId}" does not match the current remote execution identity and will not be resumed in "${effectiveExecutionCwd}". Starting a fresh remote session.\n`
: `[paperclip] Pi session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${effectiveExecutionCwd}".\n`,
);
}
// Ensure session file exists (Pi requires this on first run)
if (!canResumeSession) {
if (executionTargetIsRemote) {
await ensureAdapterExecutionTargetFile(runId, executionTarget, sessionPath, {
cwd,
env,
timeoutSec: 15,
graceSec: 5,
onLog,
});
} else {
try {
await fs.writeFile(sessionPath, "", { flag: "wx" });
} catch (err) {
if ((err as NodeJS.ErrnoException).code !== "EEXIST") {
throw err;
}
}
}
}
@@ -267,7 +388,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
? path.resolve(cwd, instructionsFilePath)
: "";
const instructionsFileDir = instructionsFilePath ? `${path.dirname(instructionsFilePath)}/` : "";
let systemPromptExtension = "";
let instructionsReadFailed = false;
if (resolvedInstructionsFilePath) {
@@ -341,26 +462,24 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const buildArgs = (sessionFile: string): string[] => {
const args: string[] = [];
// Use JSON mode for structured output with print mode (non-interactive)
args.push("--mode", "json");
args.push("-p"); // Non-interactive mode: process prompt and exit
// Use --append-system-prompt to extend Pi's default system prompt
args.push("--append-system-prompt", renderedSystemPromptExtension);
if (provider) args.push("--provider", provider);
if (modelId) args.push("--model", modelId);
if (thinking) args.push("--thinking", thinking);
args.push("--tools", "read,bash,edit,write,grep,find,ls");
args.push("--session", sessionFile);
// Add Paperclip skills directory so Pi can load the paperclip skill
args.push("--skill", remoteSkillsDir ?? PI_AGENT_SKILLS_DIR);
if (extraArgs.length > 0) args.push(...extraArgs);
// Add the user prompt as the last argument
args.push(userPrompt);
@@ -373,7 +492,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
await onMeta({
adapterType: "pi_local",
command: resolvedCommand,
cwd: effectiveExecutionCwd,
commandNotes,
commandArgs: args,
env: loggedEnv,
@@ -391,13 +510,13 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
await onLog(stream, chunk);
return;
}
// Buffer stdout and emit only complete lines
stdoutBuffer += chunk;
const lines = stdoutBuffer.split("\n");
// Keep the last (potentially incomplete) line in the buffer
stdoutBuffer = lines.pop() || "";
// Emit complete lines
for (const line of lines) {
if (line) {
@@ -406,20 +525,20 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
}
};
const proc = await runAdapterExecutionTargetProcess(runId, executionTarget, command, args, {
cwd,
env: executionTargetIsRemote ? env : runtimeEnv,
timeoutSec,
graceSec,
onSpawn,
onLog: bufferedOnLog,
});
// Flush any remaining buffer content
if (stdoutBuffer) {
await onLog("stdout", stdoutBuffer);
}
return {
proc,
rawStderr: proc.stderr,
@@ -447,7 +566,18 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const resolvedSessionId = clearSessionOnMissingSession ? null : sessionPath;
const resolvedSessionParams = resolvedSessionId
? {
sessionId: resolvedSessionId,
cwd: effectiveExecutionCwd,
...(workspaceId ? { workspaceId } : {}),
...(workspaceRepoUrl ? { repoUrl: workspaceRepoUrl } : {}),
...(workspaceRepoRef ? { repoRef: workspaceRepoRef } : {}),
...(executionTargetIsRemote
? {
remoteExecution: adapterExecutionTargetSessionIdentity(executionTarget),
}
: {}),
}
: null;
const stderrLine = firstNonEmptyLine(attempt.proc.stderr);
@@ -483,30 +613,49 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
};
};
try {
const initial = await runAttempt(sessionPath);
const initialFailed =
!initial.proc.timedOut && ((initial.proc.exitCode ?? 0) !== 0 || initial.parsed.errors.length > 0);
if (
canResumeSession &&
initialFailed &&
isPiUnknownSessionError(initial.proc.stdout, initial.rawStderr)
) {
await onLog(
"stdout",
`[paperclip] Pi session "${runtimeSessionId}" is unavailable; retrying with a fresh session.\n`,
);
const newSessionPath = executionTargetIsRemote && remoteRuntimeRootDir
? buildRemoteSessionPath(remoteRuntimeRootDir, agent.id, new Date().toISOString())
: buildSessionPath(agent.id, new Date().toISOString());
if (executionTargetIsRemote) {
await ensureAdapterExecutionTargetFile(runId, executionTarget, newSessionPath, {
cwd,
env,
timeoutSec: 15,
graceSec: 5,
onLog,
});
} else {
try {
await fs.writeFile(newSessionPath, "", { flag: "wx" });
} catch (err) {
if ((err as NodeJS.ErrnoException).code !== "EEXIST") {
throw err;
}
}
}
const retry = await runAttempt(newSessionPath);
return toResult(retry, true);
}
return toResult(initial);
} finally {
await Promise.all([
restoreRemoteWorkspace?.(),
localSkillsDir ? fs.rm(path.dirname(localSkillsDir), { recursive: true, force: true }).catch(() => undefined) : Promise.resolve(),
]);
}
}

View File

@@ -0,0 +1,65 @@
CREATE TABLE IF NOT EXISTS "issue_thread_interactions" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"issue_id" uuid NOT NULL,
"kind" text NOT NULL,
"status" text DEFAULT 'pending' NOT NULL,
"continuation_policy" text DEFAULT 'wake_assignee' NOT NULL,
"source_comment_id" uuid,
"source_run_id" uuid,
"title" text,
"summary" text,
"created_by_agent_id" uuid,
"created_by_user_id" text,
"resolved_by_agent_id" uuid,
"resolved_by_user_id" text,
"payload" jsonb NOT NULL,
"result" jsonb,
"resolved_at" timestamp with time zone,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_thread_interactions_company_id_companies_id_fk') THEN
ALTER TABLE "issue_thread_interactions" ADD CONSTRAINT "issue_thread_interactions_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE no action ON UPDATE no action;
END IF;
END $$;
--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_thread_interactions_issue_id_issues_id_fk') THEN
ALTER TABLE "issue_thread_interactions" ADD CONSTRAINT "issue_thread_interactions_issue_id_issues_id_fk" FOREIGN KEY ("issue_id") REFERENCES "public"."issues"("id") ON DELETE no action ON UPDATE no action;
END IF;
END $$;
--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_thread_interactions_source_comment_id_issue_comments_id_fk') THEN
ALTER TABLE "issue_thread_interactions" ADD CONSTRAINT "issue_thread_interactions_source_comment_id_issue_comments_id_fk" FOREIGN KEY ("source_comment_id") REFERENCES "public"."issue_comments"("id") ON DELETE set null ON UPDATE no action;
END IF;
END $$;
--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_thread_interactions_source_run_id_heartbeat_runs_id_fk') THEN
ALTER TABLE "issue_thread_interactions" ADD CONSTRAINT "issue_thread_interactions_source_run_id_heartbeat_runs_id_fk" FOREIGN KEY ("source_run_id") REFERENCES "public"."heartbeat_runs"("id") ON DELETE set null ON UPDATE no action;
END IF;
END $$;
--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_thread_interactions_created_by_agent_id_agents_id_fk') THEN
ALTER TABLE "issue_thread_interactions" ADD CONSTRAINT "issue_thread_interactions_created_by_agent_id_agents_id_fk" FOREIGN KEY ("created_by_agent_id") REFERENCES "public"."agents"("id") ON DELETE no action ON UPDATE no action;
END IF;
END $$;
--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_thread_interactions_resolved_by_agent_id_agents_id_fk') THEN
ALTER TABLE "issue_thread_interactions" ADD CONSTRAINT "issue_thread_interactions_resolved_by_agent_id_agents_id_fk" FOREIGN KEY ("resolved_by_agent_id") REFERENCES "public"."agents"("id") ON DELETE no action ON UPDATE no action;
END IF;
END $$;
--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "issue_thread_interactions_issue_idx" ON "issue_thread_interactions" USING btree ("issue_id");
--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "issue_thread_interactions_company_issue_created_at_idx" ON "issue_thread_interactions" USING btree ("company_id","issue_id","created_at");
--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "issue_thread_interactions_company_issue_status_idx" ON "issue_thread_interactions" USING btree ("company_id","issue_id","status");
--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "issue_thread_interactions_source_comment_idx" ON "issue_thread_interactions" USING btree ("source_comment_id");

View File

@@ -0,0 +1,4 @@
ALTER TABLE "issue_thread_interactions" ADD COLUMN IF NOT EXISTS "idempotency_key" text;--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "issue_thread_interactions_company_issue_idempotency_uq"
ON "issue_thread_interactions" USING btree ("company_id","issue_id","idempotency_key")
WHERE "issue_thread_interactions"."idempotency_key" IS NOT NULL;
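-- Illustrative sketch only (hypothetical ids and values; assumes the referenced company and
-- issue rows exist): retries that reuse the same idempotency key collide on the partial
-- unique index above, so they can be made no-ops with ON CONFLICT DO NOTHING, while rows
-- that omit the key are never deduplicated.
INSERT INTO "issue_thread_interactions" ("company_id", "issue_id", "kind", "payload", "idempotency_key")
VALUES ('00000000-0000-0000-0000-000000000001', '00000000-0000-0000-0000-000000000002', 'question', '{}', 'ask-once')
ON CONFLICT DO NOTHING;
-- Re-running the statement above inserts nothing; a row with a NULL "idempotency_key" falls
-- outside the index predicate and always creates a new interaction.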

View File

@@ -0,0 +1,50 @@
CREATE TABLE "environments" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"name" text NOT NULL,
"description" text,
"driver" text DEFAULT 'local' NOT NULL,
"status" text DEFAULT 'active' NOT NULL,
"config" jsonb DEFAULT '{}'::jsonb NOT NULL,
"metadata" jsonb,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
CREATE TABLE "environment_leases" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"environment_id" uuid NOT NULL,
"execution_workspace_id" uuid,
"issue_id" uuid,
"heartbeat_run_id" uuid,
"status" text DEFAULT 'active' NOT NULL,
"lease_policy" text DEFAULT 'ephemeral' NOT NULL,
"provider" text,
"provider_lease_id" text,
"acquired_at" timestamp with time zone DEFAULT now() NOT NULL,
"last_used_at" timestamp with time zone DEFAULT now() NOT NULL,
"expires_at" timestamp with time zone,
"released_at" timestamp with time zone,
"failure_reason" text,
"cleanup_status" text,
"metadata" jsonb,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
ALTER TABLE "environments" ADD CONSTRAINT "environments_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "environment_leases" ADD CONSTRAINT "environment_leases_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "environment_leases" ADD CONSTRAINT "environment_leases_environment_id_environments_id_fk" FOREIGN KEY ("environment_id") REFERENCES "public"."environments"("id") ON DELETE cascade ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "environment_leases" ADD CONSTRAINT "environment_leases_execution_workspace_id_execution_workspaces_id_fk" FOREIGN KEY ("execution_workspace_id") REFERENCES "public"."execution_workspaces"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "environment_leases" ADD CONSTRAINT "environment_leases_issue_id_issues_id_fk" FOREIGN KEY ("issue_id") REFERENCES "public"."issues"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
ALTER TABLE "environment_leases" ADD CONSTRAINT "environment_leases_heartbeat_run_id_heartbeat_runs_id_fk" FOREIGN KEY ("heartbeat_run_id") REFERENCES "public"."heartbeat_runs"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
CREATE INDEX "environments_company_status_idx" ON "environments" USING btree ("company_id","status");--> statement-breakpoint
CREATE UNIQUE INDEX "environments_company_driver_idx" ON "environments" USING btree ("company_id","driver");--> statement-breakpoint
CREATE INDEX "environments_company_name_idx" ON "environments" USING btree ("company_id","name");--> statement-breakpoint
CREATE INDEX "environment_leases_company_environment_status_idx" ON "environment_leases" USING btree ("company_id","environment_id","status");--> statement-breakpoint
CREATE INDEX "environment_leases_company_execution_workspace_idx" ON "environment_leases" USING btree ("company_id","execution_workspace_id");--> statement-breakpoint
CREATE INDEX "environment_leases_company_issue_idx" ON "environment_leases" USING btree ("company_id","issue_id");--> statement-breakpoint
CREATE INDEX "environment_leases_heartbeat_run_idx" ON "environment_leases" USING btree ("heartbeat_run_id");--> statement-breakpoint
CREATE INDEX "environment_leases_company_last_used_idx" ON "environment_leases" USING btree ("company_id","last_used_at");--> statement-breakpoint
CREATE INDEX "environment_leases_provider_lease_idx" ON "environment_leases" USING btree ("provider_lease_id");
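-- Rough sketch of the lookup the (company_id, environment_id, status) index appears intended
-- to serve (the ids are hypothetical; 'active' is the declared default status): list the
-- active leases on an environment, most recently used first.
SELECT "id", "execution_workspace_id", "acquired_at", "expires_at"
FROM "environment_leases"
WHERE "company_id" = '00000000-0000-0000-0000-000000000001'
  AND "environment_id" = '00000000-0000-0000-0000-000000000002'
  AND "status" = 'active'
ORDER BY "last_used_at" DESC;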

View File

@@ -0,0 +1,107 @@
CREATE TABLE IF NOT EXISTS "issue_tree_holds" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"root_issue_id" uuid NOT NULL,
"mode" text NOT NULL,
"status" text DEFAULT 'active' NOT NULL,
"reason" text,
"release_policy" jsonb,
"created_by_actor_type" text DEFAULT 'system' NOT NULL,
"created_by_agent_id" uuid,
"created_by_user_id" text,
"created_by_run_id" uuid,
"released_at" timestamp with time zone,
"released_by_actor_type" text,
"released_by_agent_id" uuid,
"released_by_user_id" text,
"released_by_run_id" uuid,
"release_reason" text,
"release_metadata" jsonb,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
CREATE TABLE IF NOT EXISTS "issue_tree_hold_members" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"hold_id" uuid NOT NULL,
"issue_id" uuid NOT NULL,
"parent_issue_id" uuid,
"depth" integer DEFAULT 0 NOT NULL,
"issue_identifier" text,
"issue_title" text NOT NULL,
"issue_status" text NOT NULL,
"assignee_agent_id" uuid,
"assignee_user_id" text,
"active_run_id" uuid,
"active_run_status" text,
"skipped" boolean DEFAULT false NOT NULL,
"skip_reason" text,
"created_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_tree_holds_company_id_companies_id_fk') THEN
ALTER TABLE "issue_tree_holds" ADD CONSTRAINT "issue_tree_holds_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE no action ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_tree_holds_root_issue_id_issues_id_fk') THEN
ALTER TABLE "issue_tree_holds" ADD CONSTRAINT "issue_tree_holds_root_issue_id_issues_id_fk" FOREIGN KEY ("root_issue_id") REFERENCES "public"."issues"("id") ON DELETE cascade ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_tree_holds_created_by_agent_id_agents_id_fk') THEN
ALTER TABLE "issue_tree_holds" ADD CONSTRAINT "issue_tree_holds_created_by_agent_id_agents_id_fk" FOREIGN KEY ("created_by_agent_id") REFERENCES "public"."agents"("id") ON DELETE set null ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_tree_holds_created_by_run_id_heartbeat_runs_id_fk') THEN
ALTER TABLE "issue_tree_holds" ADD CONSTRAINT "issue_tree_holds_created_by_run_id_heartbeat_runs_id_fk" FOREIGN KEY ("created_by_run_id") REFERENCES "public"."heartbeat_runs"("id") ON DELETE set null ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_tree_holds_released_by_agent_id_agents_id_fk') THEN
ALTER TABLE "issue_tree_holds" ADD CONSTRAINT "issue_tree_holds_released_by_agent_id_agents_id_fk" FOREIGN KEY ("released_by_agent_id") REFERENCES "public"."agents"("id") ON DELETE set null ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_tree_holds_released_by_run_id_heartbeat_runs_id_fk') THEN
ALTER TABLE "issue_tree_holds" ADD CONSTRAINT "issue_tree_holds_released_by_run_id_heartbeat_runs_id_fk" FOREIGN KEY ("released_by_run_id") REFERENCES "public"."heartbeat_runs"("id") ON DELETE set null ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_tree_hold_members_company_id_companies_id_fk') THEN
ALTER TABLE "issue_tree_hold_members" ADD CONSTRAINT "issue_tree_hold_members_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE no action ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_tree_hold_members_hold_id_issue_tree_holds_id_fk') THEN
ALTER TABLE "issue_tree_hold_members" ADD CONSTRAINT "issue_tree_hold_members_hold_id_issue_tree_holds_id_fk" FOREIGN KEY ("hold_id") REFERENCES "public"."issue_tree_holds"("id") ON DELETE cascade ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_tree_hold_members_issue_id_issues_id_fk') THEN
ALTER TABLE "issue_tree_hold_members" ADD CONSTRAINT "issue_tree_hold_members_issue_id_issues_id_fk" FOREIGN KEY ("issue_id") REFERENCES "public"."issues"("id") ON DELETE cascade ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_tree_hold_members_parent_issue_id_issues_id_fk') THEN
ALTER TABLE "issue_tree_hold_members" ADD CONSTRAINT "issue_tree_hold_members_parent_issue_id_issues_id_fk" FOREIGN KEY ("parent_issue_id") REFERENCES "public"."issues"("id") ON DELETE set null ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_tree_hold_members_assignee_agent_id_agents_id_fk') THEN
ALTER TABLE "issue_tree_hold_members" ADD CONSTRAINT "issue_tree_hold_members_assignee_agent_id_agents_id_fk" FOREIGN KEY ("assignee_agent_id") REFERENCES "public"."agents"("id") ON DELETE set null ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_tree_hold_members_active_run_id_heartbeat_runs_id_fk') THEN
ALTER TABLE "issue_tree_hold_members" ADD CONSTRAINT "issue_tree_hold_members_active_run_id_heartbeat_runs_id_fk" FOREIGN KEY ("active_run_id") REFERENCES "public"."heartbeat_runs"("id") ON DELETE set null ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "issue_tree_holds_company_root_status_idx" ON "issue_tree_holds" USING btree ("company_id","root_issue_id","status");--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "issue_tree_holds_company_status_mode_idx" ON "issue_tree_holds" USING btree ("company_id","status","mode");--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "issue_tree_hold_members_hold_issue_uq" ON "issue_tree_hold_members" USING btree ("hold_id","issue_id");--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "issue_tree_hold_members_company_issue_idx" ON "issue_tree_hold_members" USING btree ("company_id","issue_id");--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "issue_tree_hold_members_hold_depth_idx" ON "issue_tree_hold_members" USING btree ("hold_id","depth");

View File

@@ -0,0 +1,3 @@
ALTER TABLE "agents" ADD COLUMN "default_environment_id" uuid;
ALTER TABLE "agents" ADD CONSTRAINT "agents_default_environment_id_environments_id_fk" FOREIGN KEY ("default_environment_id") REFERENCES "public"."environments"("id") ON DELETE set null ON UPDATE no action;
CREATE INDEX "agents_company_default_environment_idx" ON "agents" USING btree ("company_id","default_environment_id");

View File

@@ -0,0 +1,2 @@
DROP INDEX IF EXISTS "environments_company_driver_idx";--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "environments_company_driver_idx" ON "environments" USING btree ("company_id","driver") WHERE "driver" = 'local';
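-- Rough sketch of what the rewritten partial index enforces (hypothetical company id; assumes
-- the company row exists): at most one 'local' environment per company, while other drivers
-- are not limited.
INSERT INTO "environments" ("company_id", "name", "driver")
VALUES ('00000000-0000-0000-0000-000000000001', 'default-local', 'local');
-- A second 'local' row for the same company now fails with a unique violation; rows with any
-- other driver value fall outside the index predicate and remain unconstrained.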

View File

@@ -0,0 +1,13 @@
CREATE UNIQUE INDEX IF NOT EXISTS "issues_active_liveness_recovery_incident_uq"
ON "issues" USING btree ("company_id","origin_kind","origin_id")
WHERE "origin_kind" = 'harness_liveness_escalation'
AND "origin_id" IS NOT NULL
AND "hidden_at" IS NULL
AND "status" NOT IN ('done', 'cancelled');
--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "issues_active_liveness_recovery_leaf_uq"
ON "issues" USING btree ("company_id","origin_kind","origin_fingerprint")
WHERE "origin_kind" = 'harness_liveness_escalation'
AND "origin_fingerprint" <> 'default'
AND "hidden_at" IS NULL
AND "status" NOT IN ('done', 'cancelled');
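-- Sketch of the dedupe rule these partial indexes encode (hypothetical ids): at most one
-- non-hidden, not-done/cancelled recovery issue may exist per origin. A writer can mirror the
-- index predicate to find any existing active issue before opening a new one; the index makes
-- the concurrent case safe as well.
SELECT "id"
FROM "issues"
WHERE "company_id" = '00000000-0000-0000-0000-000000000001'
  AND "origin_kind" = 'harness_liveness_escalation'
  AND "origin_id" = '00000000-0000-0000-0000-000000000009'
  AND "hidden_at" IS NULL
  AND "status" NOT IN ('done', 'cancelled');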

View File

@@ -0,0 +1,70 @@
ALTER TABLE "heartbeat_runs" ADD COLUMN IF NOT EXISTS "last_output_at" timestamp with time zone;
--> statement-breakpoint
ALTER TABLE "heartbeat_runs" ADD COLUMN IF NOT EXISTS "last_output_seq" integer DEFAULT 0 NOT NULL;
--> statement-breakpoint
ALTER TABLE "heartbeat_runs" ADD COLUMN IF NOT EXISTS "last_output_stream" text;
--> statement-breakpoint
ALTER TABLE "heartbeat_runs" ADD COLUMN IF NOT EXISTS "last_output_bytes" bigint;
--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "heartbeat_runs_company_status_last_output_idx"
ON "heartbeat_runs" USING btree ("company_id","status","last_output_at");
--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "heartbeat_runs_company_status_process_started_idx"
ON "heartbeat_runs" USING btree ("company_id","status","process_started_at");
--> statement-breakpoint
CREATE TABLE IF NOT EXISTS "heartbeat_run_watchdog_decisions" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"run_id" uuid NOT NULL,
"evaluation_issue_id" uuid,
"decision" text NOT NULL,
"snoozed_until" timestamp with time zone,
"reason" text,
"created_by_agent_id" uuid,
"created_by_user_id" text,
"created_by_run_id" uuid,
"created_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
DO $$ BEGIN
ALTER TABLE "heartbeat_run_watchdog_decisions" ADD CONSTRAINT "heartbeat_run_watchdog_decisions_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE no action ON UPDATE no action;
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
--> statement-breakpoint
DO $$ BEGIN
ALTER TABLE "heartbeat_run_watchdog_decisions" ADD CONSTRAINT "heartbeat_run_watchdog_decisions_run_id_heartbeat_runs_id_fk" FOREIGN KEY ("run_id") REFERENCES "public"."heartbeat_runs"("id") ON DELETE cascade ON UPDATE no action;
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
--> statement-breakpoint
DO $$ BEGIN
ALTER TABLE "heartbeat_run_watchdog_decisions" ADD CONSTRAINT "heartbeat_run_watchdog_decisions_evaluation_issue_id_issues_id_fk" FOREIGN KEY ("evaluation_issue_id") REFERENCES "public"."issues"("id") ON DELETE set null ON UPDATE no action;
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
--> statement-breakpoint
DO $$ BEGIN
ALTER TABLE "heartbeat_run_watchdog_decisions" ADD CONSTRAINT "heartbeat_run_watchdog_decisions_created_by_agent_id_agents_id_fk" FOREIGN KEY ("created_by_agent_id") REFERENCES "public"."agents"("id") ON DELETE set null ON UPDATE no action;
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
--> statement-breakpoint
DO $$ BEGIN
ALTER TABLE "heartbeat_run_watchdog_decisions" ADD CONSTRAINT "heartbeat_run_watchdog_decisions_created_by_run_id_heartbeat_runs_id_fk" FOREIGN KEY ("created_by_run_id") REFERENCES "public"."heartbeat_runs"("id") ON DELETE set null ON UPDATE no action;
EXCEPTION
WHEN duplicate_object THEN null;
END $$;
--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "heartbeat_run_watchdog_decisions_company_run_created_idx"
ON "heartbeat_run_watchdog_decisions" USING btree ("company_id","run_id","created_at");
--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "heartbeat_run_watchdog_decisions_company_run_snooze_idx"
ON "heartbeat_run_watchdog_decisions" USING btree ("company_id","run_id","snoozed_until");
--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "issues_active_stale_run_evaluation_uq"
ON "issues" USING btree ("company_id","origin_kind","origin_id")
WHERE "origin_kind" = 'stale_active_run_evaluation'
AND "origin_id" IS NOT NULL
AND "hidden_at" IS NULL
AND "status" NOT IN ('done', 'cancelled');
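-- Sketch of the staleness scan the new columns and indexes appear designed to support (the
-- 'running' status value and the 30-minute threshold are hypothetical): find runs that have
-- produced no output recently.
SELECT "id", "last_output_at", "process_started_at"
FROM "heartbeat_runs"
WHERE "company_id" = '00000000-0000-0000-0000-000000000001'
  AND "status" = 'running'
  AND ("last_output_at" IS NULL OR "last_output_at" < now() - interval '30 minutes');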

View File

@@ -0,0 +1 @@
ALTER TABLE "companies" ALTER COLUMN "require_board_approval_for_new_agents" SET DEFAULT false;

View File

@@ -0,0 +1,6 @@
CREATE UNIQUE INDEX IF NOT EXISTS "issues_active_stranded_issue_recovery_uq"
ON "issues" USING btree ("company_id","origin_kind","origin_id")
WHERE "origin_kind" = 'stranded_issue_recovery'
AND "origin_id" IS NOT NULL
AND "hidden_at" IS NULL
AND "status" NOT IN ('done', 'cancelled');

View File

@@ -0,0 +1 @@
ALTER TABLE "companies" ADD COLUMN "attachment_max_bytes" integer DEFAULT 10485760 NOT NULL;

View File

@@ -0,0 +1,4 @@
CREATE UNIQUE INDEX "issues_active_productivity_review_uq" ON "issues" USING btree ("company_id","origin_kind","origin_id") WHERE "issues"."origin_kind" = 'issue_productivity_review'
and "issues"."origin_id" is not null
and "issues"."hidden_at" is null
and "issues"."status" not in ('done', 'cancelled');

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -442,6 +442,90 @@
"when": 1776780000000,
"tag": "0062_routine_run_dispatch_fingerprint",
"breakpoints": true
},
{
"idx": 63,
"version": "7",
"when": 1776780001000,
"tag": "0063_issue_thread_interactions",
"breakpoints": true
},
{
"idx": 64,
"version": "7",
"when": 1776780002000,
"tag": "0064_issue_thread_interaction_idempotency",
"breakpoints": true
},
{
"idx": 65,
"version": "7",
"when": 1776903900000,
"tag": "0065_environments",
"breakpoints": true
},
{
"idx": 66,
"version": "7",
"when": 1776903901000,
"tag": "0066_issue_tree_holds",
"breakpoints": true
},
{
"idx": 67,
"version": "7",
"when": 1776904200000,
"tag": "0067_agent_default_environment",
"breakpoints": true
},
{
"idx": 68,
"version": "7",
"when": 1776959400000,
"tag": "0068_environment_local_driver_unique",
"breakpoints": true
},
{
"idx": 69,
"version": "7",
"when": 1776780003000,
"tag": "0069_liveness_recovery_dedupe",
"breakpoints": true
},
{
"idx": 70,
"version": "7",
"when": 1776780004000,
"tag": "0070_active_run_output_watchdog",
"breakpoints": true
},
{
"idx": 71,
"version": "7",
"when": 1777131234000,
"tag": "0071_default_hire_approval_off",
"breakpoints": true
},
{
"idx": 72,
"version": "7",
"when": 1777305216238,
"tag": "0072_large_sandman",
"breakpoints": true
},
{
"idx": 73,
"version": "7",
"when": 1777382021347,
"tag": "0073_shiny_salo",
"breakpoints": true
},
{
"idx": 74,
"version": "7",
"when": 1777384535070,
"tag": "0074_striped_genesis",
"breakpoints": true
}
]
}
}

View File

@@ -9,6 +9,7 @@ import {
index,
} from "drizzle-orm/pg-core";
import { companies } from "./companies.js";
import { environments } from "./environments.js";
export const agents = pgTable(
"agents",
@@ -25,6 +26,7 @@ export const agents = pgTable(
adapterType: text("adapter_type").notNull().default("process"),
adapterConfig: jsonb("adapter_config").$type<Record<string, unknown>>().notNull().default({}),
runtimeConfig: jsonb("runtime_config").$type<Record<string, unknown>>().notNull().default({}),
defaultEnvironmentId: uuid("default_environment_id").references(() => environments.id, { onDelete: "set null" }),
budgetMonthlyCents: integer("budget_monthly_cents").notNull().default(0),
spentMonthlyCents: integer("spent_monthly_cents").notNull().default(0),
pauseReason: text("pause_reason"),
@@ -38,5 +40,6 @@ export const agents = pgTable(
(table) => ({
companyStatusIdx: index("agents_company_status_idx").on(table.companyId, table.status),
companyReportsToIdx: index("agents_company_reports_to_idx").on(table.companyId, table.reportsTo),
companyDefaultEnvironmentIdx: index("agents_company_default_environment_idx").on(table.companyId, table.defaultEnvironmentId),
}),
);

View File

@@ -13,9 +13,12 @@ export const companies = pgTable(
issueCounter: integer("issue_counter").notNull().default(0),
budgetMonthlyCents: integer("budget_monthly_cents").notNull().default(0),
spentMonthlyCents: integer("spent_monthly_cents").notNull().default(0),
attachmentMaxBytes: integer("attachment_max_bytes")
.notNull()
.default(10 * 1024 * 1024),
requireBoardApprovalForNewAgents: boolean("require_board_approval_for_new_agents")
.notNull()
.default(false),
feedbackDataSharingEnabled: boolean("feedback_data_sharing_enabled")
.notNull()
.default(false),

View File

@@ -0,0 +1,46 @@
import { index, jsonb, pgTable, text, timestamp, uuid } from "drizzle-orm/pg-core";
import { companies } from "./companies.js";
import { environments } from "./environments.js";
import { executionWorkspaces } from "./execution_workspaces.js";
import { heartbeatRuns } from "./heartbeat_runs.js";
import { issues } from "./issues.js";
export const environmentLeases = pgTable(
"environment_leases",
{
id: uuid("id").primaryKey().defaultRandom(),
companyId: uuid("company_id").notNull().references(() => companies.id, { onDelete: "cascade" }),
environmentId: uuid("environment_id").notNull().references(() => environments.id, { onDelete: "cascade" }),
executionWorkspaceId: uuid("execution_workspace_id").references(() => executionWorkspaces.id, { onDelete: "set null" }),
issueId: uuid("issue_id").references(() => issues.id, { onDelete: "set null" }),
heartbeatRunId: uuid("heartbeat_run_id").references(() => heartbeatRuns.id, { onDelete: "set null" }),
status: text("status").notNull().default("active"),
leasePolicy: text("lease_policy").notNull().default("ephemeral"),
provider: text("provider"),
providerLeaseId: text("provider_lease_id"),
acquiredAt: timestamp("acquired_at", { withTimezone: true }).notNull().defaultNow(),
lastUsedAt: timestamp("last_used_at", { withTimezone: true }).notNull().defaultNow(),
expiresAt: timestamp("expires_at", { withTimezone: true }),
releasedAt: timestamp("released_at", { withTimezone: true }),
failureReason: text("failure_reason"),
cleanupStatus: text("cleanup_status"),
metadata: jsonb("metadata").$type<Record<string, unknown>>(),
createdAt: timestamp("created_at", { withTimezone: true }).notNull().defaultNow(),
updatedAt: timestamp("updated_at", { withTimezone: true }).notNull().defaultNow(),
},
(table) => ({
companyEnvironmentStatusIdx: index("environment_leases_company_environment_status_idx").on(
table.companyId,
table.environmentId,
table.status,
),
companyExecutionWorkspaceIdx: index("environment_leases_company_execution_workspace_idx").on(
table.companyId,
table.executionWorkspaceId,
),
companyIssueIdx: index("environment_leases_company_issue_idx").on(table.companyId, table.issueId),
heartbeatRunIdx: index("environment_leases_heartbeat_run_idx").on(table.heartbeatRunId),
companyLastUsedIdx: index("environment_leases_company_last_used_idx").on(table.companyId, table.lastUsedAt),
providerLeaseIdx: index("environment_leases_provider_lease_idx").on(table.providerLeaseId),
}),
);

View File

@@ -0,0 +1,26 @@
import { sql } from "drizzle-orm";
import { index, jsonb, pgTable, text, timestamp, uniqueIndex, uuid } from "drizzle-orm/pg-core";
import { companies } from "./companies.js";
export const environments = pgTable(
"environments",
{
id: uuid("id").primaryKey().defaultRandom(),
companyId: uuid("company_id").notNull().references(() => companies.id, { onDelete: "cascade" }),
name: text("name").notNull(),
description: text("description"),
driver: text("driver").notNull().default("local"),
status: text("status").notNull().default("active"),
config: jsonb("config").$type<Record<string, unknown>>().notNull().default({}),
metadata: jsonb("metadata").$type<Record<string, unknown>>(),
createdAt: timestamp("created_at", { withTimezone: true }).notNull().defaultNow(),
updatedAt: timestamp("updated_at", { withTimezone: true }).notNull().defaultNow(),
},
(table) => ({
companyStatusIdx: index("environments_company_status_idx").on(table.companyId, table.status),
companyDriverIdx: uniqueIndex("environments_company_driver_idx")
.on(table.companyId, table.driver)
.where(sql`${table.driver} = 'local'`),
companyNameIdx: index("environments_company_name_idx").on(table.companyId, table.name),
}),
);

View File

@@ -0,0 +1,34 @@
import { index, pgTable, text, timestamp, uuid } from "drizzle-orm/pg-core";
import { agents } from "./agents.js";
import { companies } from "./companies.js";
import { heartbeatRuns } from "./heartbeat_runs.js";
import { issues } from "./issues.js";
export const heartbeatRunWatchdogDecisions = pgTable(
"heartbeat_run_watchdog_decisions",
{
id: uuid("id").primaryKey().defaultRandom(),
companyId: uuid("company_id").notNull().references(() => companies.id),
runId: uuid("run_id").notNull().references(() => heartbeatRuns.id, { onDelete: "cascade" }),
evaluationIssueId: uuid("evaluation_issue_id").references(() => issues.id, { onDelete: "set null" }),
decision: text("decision").notNull(),
snoozedUntil: timestamp("snoozed_until", { withTimezone: true }),
reason: text("reason"),
createdByAgentId: uuid("created_by_agent_id").references(() => agents.id, { onDelete: "set null" }),
createdByUserId: text("created_by_user_id"),
createdByRunId: uuid("created_by_run_id").references(() => heartbeatRuns.id, { onDelete: "set null" }),
createdAt: timestamp("created_at", { withTimezone: true }).notNull().defaultNow(),
},
(table) => ({
companyRunCreatedIdx: index("heartbeat_run_watchdog_decisions_company_run_created_idx").on(
table.companyId,
table.runId,
table.createdAt,
),
companyRunSnoozeIdx: index("heartbeat_run_watchdog_decisions_company_run_snooze_idx").on(
table.companyId,
table.runId,
table.snoozedUntil,
),
}),
);

View File

@@ -34,6 +34,10 @@ export const heartbeatRuns = pgTable(
processPid: integer("process_pid"),
processGroupId: integer("process_group_id"),
processStartedAt: timestamp("process_started_at", { withTimezone: true }),
lastOutputAt: timestamp("last_output_at", { withTimezone: true }),
lastOutputSeq: integer("last_output_seq").notNull().default(0),
lastOutputStream: text("last_output_stream"),
lastOutputBytes: bigint("last_output_bytes", { mode: "number" }),
retryOfRunId: uuid("retry_of_run_id").references((): AnyPgColumn => heartbeatRuns.id, {
onDelete: "set null",
}),
@@ -64,5 +68,15 @@ export const heartbeatRuns = pgTable(
table.livenessState,
table.createdAt,
),
companyStatusLastOutputIdx: index("heartbeat_runs_company_status_last_output_idx").on(
table.companyId,
table.status,
table.lastOutputAt,
),
companyStatusProcessStartedIdx: index("heartbeat_runs_company_status_process_started_idx").on(
table.companyId,
table.status,
table.processStartedAt,
),
}),
);

Some files were not shown because too many files have changed in this diff.