Compare commits

..

40 Commits

Author SHA1 Message Date
Dotta
61b693ba7e Clean up adapter runs after terminal output 2026-04-21 14:55:24 -05:00
Dotta
1fedad05b9 Remove terminal issue run cleanup 2026-04-21 14:55:24 -05:00
Dotta
34307374b9 Fix terminal issue run cleanup 2026-04-21 14:55:23 -05:00
Dotta
fbbac4f71f Finish runs when agent completes issue 2026-04-21 14:55:23 -05:00
Dotta
17a8ea9475 test(heartbeat): relax blocked wake scheduling assertion 2026-04-21 14:55:23 -05:00
Dotta
d98c7a3c3a Keep blocked issue wakes idle until dependencies resolve 2026-04-21 14:55:23 -05:00
Dotta
3c0b2f9858 fix: clean post-rebase merge artifacts 2026-04-21 14:55:23 -05:00
Dotta
4d730c8747 Fix heartbeat runs on legacy SQL_ASCII databases 2026-04-21 14:55:23 -05:00
Dotta
a91b2d2d62 Hide stopped stale workspace services
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-21 14:55:23 -05:00
Dotta
146ac7c9e5 Fix stale workspace command matching
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-21 14:55:23 -05:00
Dotta
ec8baf427d Quarantine seeded worktree execution state
Co-Authored-By: Paperclip <noreply@paperclip.ing>
2026-04-21 14:55:23 -05:00
Dotta
bfd7af170a fix(server): bound high-volume issue and log reads 2026-04-21 14:55:23 -05:00
Dotta
73ef40e7be [codex] Sandbox dynamic adapter UI parsers (#4225)
## Thinking Path

> - Paperclip is a control plane for AI-agent companies.
> - External adapters can provide UI parser code that the board loads
dynamically for run transcript rendering.
> - Running adapter-provided parser code directly in the board page
gives that parser access to same-origin browser state.
> - This PR narrows that surface by evaluating dynamically loaded
external adapter UI parser code in a dedicated browser Web Worker with a
constrained postMessage protocol.
> - The worker here is a frontend isolation boundary for adapter UI
parser JavaScript; it is not Paperclip's server plugin-worker system and
it is not a server-side job runner.

## What Changed

- Runs dynamically loaded external adapter UI parsers inside a dedicated
Web Worker instead of importing/evaluating them directly in the board
page.
- Adds a narrow postMessage protocol for parser initialization and line
parsing.
- Caches completed async parse results and notifies the adapter registry
so transcript recomputation can synchronously drain the final parsed
line.
- Disables common worker network, persistence, child worker, Blob/object
URL, and WebRTC escape APIs inside the parser worker bootstrap.
- Handles worker error messages after initialization and drains pending
callbacks on worker termination or mid-session worker error.
- Adds focused regression coverage for the parser worker lockdown and
unused protocol removal.
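
The postMessage protocol and cached-result drain described above can be sketched as follows. This is an illustrative sketch only; the message kinds, `ParseResultCache` name, and cache-by-line keying are assumptions, not the actual `sandboxed-parser-worker.ts` types.

```typescript
// Hypothetical shape of the board<->worker protocol (names are illustrative).
type ParserRequest =
  | { kind: "init"; parserSource: string }
  | { kind: "parseLine"; requestId: number; line: string };

type ParserResponse =
  | { kind: "ready" }
  | { kind: "parsed"; requestId: number; result: string }
  | { kind: "error"; requestId?: number; message: string };

// Cache of completed async parse results so a later synchronous transcript
// recomputation can drain the final parsed value for a line.
class ParseResultCache {
  private results = new Map<string, string>();

  store(line: string, result: string): void {
    this.results.set(line, result);
  }

  // Returns the cached result and removes it, or null if still pending.
  drain(line: string): string | null {
    const result = this.results.get(line);
    if (result === undefined) return null;
    this.results.delete(line);
    return result;
  }
}
```

The point of the cache is that the worker round-trip is asynchronous while transcript recomputation is synchronous: the worker's `parsed` message fills the cache, and the next recomputation pass drains it.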

## Verification

- `pnpm exec vitest run --config ui/vitest.config.ts
ui/src/adapters/sandboxed-parser-worker.test.ts`
- `pnpm exec tsc --noEmit --target es2021 --moduleResolution bundler
--module esnext --jsx react-jsx --lib dom,es2021 --skipLibCheck
ui/src/adapters/dynamic-loader.ts
ui/src/adapters/sandboxed-parser-worker.ts
ui/src/adapters/sandboxed-parser-worker.test.ts`
- `pnpm --filter @paperclipai/ui typecheck` was attempted; it hit
pre-existing, unrelated failures in HeartbeatRun test/storybook fixtures
and missing Storybook type resolution, and surfaced no adapter-module
errors.
- PR #4225 checks on current head `34c9da00`: `policy`, `e2e`, `verify`,
`security/snyk`, and `Greptile Review` are all `SUCCESS`.
- Greptile Review on current head `34c9da00` reached 5/5.

## Risks

- Medium risk: parser execution is now asynchronous through a worker
while the existing parser interface is synchronous, so transcript
updates from external adapters should be watched after this lands.
- Some adapter parser bundles may rely on direct ESM `export` syntax or
browser APIs that are no longer available inside the worker lockdown.
- The worker lockdown is a hardening layer around external parser code,
not a complete browser security sandbox for arbitrary untrusted
applications.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5-based coding agent runtime, shell/git tool use
enabled. Exact hosted model build and context window are not exposed in
this Paperclip heartbeat environment.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-21 13:42:44 -05:00
Dotta
a26e1288b6 [codex] Polish issue board workflows (#4224)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Human operators supervise that work through issue lists, issue
detail, comments, inbox groups, markdown references, and
profile/activity surfaces
> - The branch had many small UI fixes that improve the operator loop
but do not need to ship with backend runtime migrations
> - These changes belong together as board workflow polish because they
affect scanning, navigation, issue context, comment state, and markdown
clarity
> - This pull request groups the UI-only slice so it can merge
independently from runtime/backend changes
> - The benefit is a clearer board experience with better issue context,
steadier optimistic updates, and more predictable keyboard navigation

## What Changed

- Improves issue properties, sub-issue actions, blocker chips, and issue
list/detail refresh behavior.
- Adds blocker context above the issue composer and stabilizes
queued/interrupted comment UI state.
- Improves markdown issue/GitHub link rendering and opens external
markdown links in a new tab.
- Adds inbox group keyboard navigation and fold/unfold support.
- Polishes activity/avatar/profile/settings/workspace presentation
details.

## Verification

- `pnpm exec vitest run ui/src/components/IssueProperties.test.tsx
ui/src/components/IssueChatThread.test.tsx
ui/src/components/MarkdownBody.test.tsx ui/src/lib/inbox.test.ts
ui/src/lib/optimistic-issue-comments.test.ts`

## Risks

- Low to medium risk: changes are UI-focused but cover high-traffic
issue and inbox surfaces.
- This branch intentionally does not include the backend runtime changes
from the companion PR; where the UI calls newer API filters, unsupported
servers should continue to fail visibly through existing API error
handling.
- Visual screenshots were not captured in this heartbeat; targeted
component/helper tests cover the changed behavior.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5-based coding agent runtime, shell/git tool use
enabled. Exact hosted model build and context window are not exposed in
this Paperclip heartbeat environment.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-21 12:25:34 -05:00
Dotta
09d0678840 [codex] Harden heartbeat scheduling and runtime controls (#4223)
## Thinking Path

> - Paperclip orchestrates AI agents through issue checkout, heartbeat
runs, routines, and auditable control-plane state
> - The runtime path has to recover from lost local processes, transient
adapter failures, blocked dependencies, and routine coalescing without
stranding work
> - The existing branch carried several reliability fixes across
heartbeat scheduling, issue runtime controls, routine dispatch, and
operator-facing run state
> - These changes belong together because they share backend contracts,
migrations, and runtime status semantics
> - This pull request groups the control-plane/runtime slice so it can
merge independently from board UI polish and adapter sandbox work
> - The benefit is safer heartbeat recovery, clearer runtime controls,
and more predictable recurring execution behavior

## What Changed

- Adds bounded heartbeat retry scheduling, scheduled retry state, and
Codex transient failure recovery handling.
- Tightens heartbeat process recovery, blocker wake behavior, issue
comment wake handling, routine dispatch coalescing, and
activity/dashboard bounds.
- Adds runtime-control MCP tools and Paperclip skill docs for issue
workspace runtime management.
- Adds migrations `0061_lively_thor_girl.sql` and
`0062_routine_run_dispatch_fingerprint.sql`.
- Surfaces retry state in run ledger/agent UI and keeps related shared
types synchronized.

## Verification

- `pnpm exec vitest run
server/src/__tests__/heartbeat-retry-scheduling.test.ts
server/src/__tests__/heartbeat-process-recovery.test.ts
server/src/__tests__/routines-service.test.ts`
- `pnpm exec vitest run src/tools.test.ts` from `packages/mcp-server`

## Risks

- Medium risk: this touches heartbeat recovery and routine dispatch,
which are central execution paths.
- Migration order matters if split branches land out of order: merge
this PR before branches that assume the new runtime/routine fields.
- Runtime retry behavior should be watched in CI and in local operator
smoke tests because it changes how transient failures are resumed.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5-based coding agent runtime, shell/git tool use
enabled. Exact hosted model build and context window are not exposed in
this Paperclip heartbeat environment.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-21 12:24:11 -05:00
Dotta
ab9051b595 Add first-class issue references (#4214)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - Operators and agents coordinate through company-scoped issues,
comments, documents, and task relationships.
> - Issue text can mention other tickets, but those references were
previously plain markdown/text without durable relationship data.
> - That made it harder to understand related work, surface backlinks,
and keep cross-ticket context visible in the board.
> - This pull request adds first-class issue reference extraction,
storage, API responses, and UI surfaces.
> - The benefit is that issue references become queryable, navigable,
and visible without relying on ad hoc text scanning.

## What Changed

- Added shared issue-reference parsing utilities and exported
reference-related types/constants.
- Added an `issue_reference_mentions` table, idempotent migration DDL,
schema exports, and database documentation.
- Added server-side issue reference services, route integration,
activity summaries, and a backfill command for existing issue content.
- Added UI reference pills, related-work panels, markdown/editor mention
handling, and issue detail/property rendering updates.
- Added focused shared, server, and UI tests for parsing, persistence,
display, and related-work behavior.
- Rebased `PAP-735-first-class-task-references` cleanly onto
`public-gh/master`; no `pnpm-lock.yaml` changes are included.
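
For illustration, reference extraction of the kind described above can be sketched with a simple key pattern. The `PAP-123` key format is inferred from the branch name and is an assumption, not the confirmed parser spec in `packages/shared`.

```typescript
interface IssueReference {
  key: string;     // e.g. "PAP-735"
  offset: number;  // character index of the match in the source text
}

// Assumed key shape: uppercase project prefix, hyphen, numeric id.
const ISSUE_KEY_PATTERN = /\b([A-Z][A-Z0-9]+-\d+)\b/g;

function extractIssueReferences(text: string): IssueReference[] {
  const refs: IssueReference[] = [];
  for (const match of text.matchAll(ISSUE_KEY_PATTERN)) {
    refs.push({ key: match[1], offset: match.index ?? 0 });
  }
  return refs;
}
```

Persisting the extracted keys (rather than re-scanning text on read) is what makes backlinks and related-work panels cheap to query.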

## Verification

- `pnpm -r typecheck`
- `pnpm test:run packages/shared/src/issue-references.test.ts
server/src/__tests__/issue-references-service.test.ts
ui/src/components/IssueRelatedWorkPanel.test.tsx
ui/src/components/IssueProperties.test.tsx
ui/src/components/MarkdownBody.test.tsx`

## Risks

- Medium risk because this adds a new issue-reference persistence path
that touches shared parsing, database schema, server routes, and UI
rendering.
- Migration risk is mitigated by `CREATE TABLE IF NOT EXISTS`, guarded
foreign-key creation, and `CREATE INDEX IF NOT EXISTS` statements so
users who have applied an older local version of the numbered migration
can re-run safely.
- UI risk is limited by focused component coverage, but reviewers should
still manually inspect issue detail pages containing ticket references
before merge.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5-based coding agent, tool-using shell workflow with
repository inspection, git rebase/push, typecheck, and focused Vitest
verification.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: dotta <dotta@example.com>
Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-21 10:02:52 -05:00
Dotta
1954eb3048 [codex] Detect issue graph liveness deadlocks (#4209)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - The heartbeat harness is responsible for waking agents, reconciling
issue state, and keeping execution moving.
> - Some dependency graphs can live-lock when a blocked issue depends on
an unassigned, cancelled, or otherwise uninvokable issue.
> - Review and approval stages can also stall when the recorded
participant can no longer be resolved.
> - This pull request adds issue graph liveness classification plus
heartbeat reconciliation that creates durable escalation work for those
cases.
> - The benefit is that harness-level deadlocks become visible,
assigned, logged, and recoverable instead of silently leaving task
sequences blocked.

## What Changed

- Added an issue graph liveness classifier for blocked dependency and
invalid review participant states.
- Added heartbeat reconciliation that creates one stable escalation
issue per liveness incident, links it as a blocker, comments on the
affected issue, wakes the recommended owner, and logs activity.
- Wired startup and periodic server reconciliation for issue graph
liveness incidents.
- Added focused tests for classifier behavior, heartbeat escalation
creation/deduplication, and queued dependency wake promotion.
- Fixed queued issue wakes so a coalesced wake re-runs queue selection,
allowing dependency-unblocked work to start immediately.
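
A minimal sketch of the liveness classification idea, assuming a simplified status model and blocker-edge shape; the actual classifier also covers invalid review participants and richer states.

```typescript
type IssueStatus = "open" | "in_progress" | "blocked" | "cancelled" | "done";

interface IssueNode {
  id: string;
  status: IssueStatus;
  assigneeId: string | null;
  blockedBy: string[]; // ids of blocker issues
}

// A blocked issue is treated as live-locked if some blocker can never
// complete under this simplified model: it is cancelled, it dangles, or it
// is open with no assignee to ever pick it up.
function isLiveLocked(issue: IssueNode, graph: Map<string, IssueNode>): boolean {
  if (issue.status !== "blocked") return false;
  return issue.blockedBy.some((id) => {
    const blocker = graph.get(id);
    if (!blocker) return true; // dangling edge: unresolvable dependency
    if (blocker.status === "cancelled") return true;
    return blocker.status === "open" && blocker.assigneeId === null;
  });
}
```

Keying each detected incident to a stable identifier (issue id plus cause) is what lets the reconciler create exactly one escalation issue per incident rather than one per heartbeat pass.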

## Verification

- `pnpm exec vitest run
server/src/__tests__/heartbeat-dependency-scheduling.test.ts
server/src/__tests__/issue-liveness.test.ts
server/src/__tests__/heartbeat-issue-liveness-escalation.test.ts`
- Passed locally: `server/src/__tests__/issue-liveness.test.ts` (5
tests)
- Skipped locally: embedded Postgres suites because optional package
`@embedded-postgres/darwin-x64` is not installed on this host
- `pnpm --filter @paperclipai/server typecheck`
- `git diff --check`
- Greptile review loop: ran 3 times as requested; the final
Greptile-reviewed head `0a864eab` had 0 comments and all Greptile
threads were resolved. Later commits are CI/test-stability fixes after
the requested max Greptile pass count.
- GitHub PR checks on head `87493ed4`: `policy`, `verify`, `e2e`, and
`security/snyk (cryppadotta)` all passed.

## Risks

- Moderate operational risk: the reconciler creates escalation issues
automatically, so incorrect classification could create noise. Stable
incident keys and deduplication limit repeated escalation.
- Low schema risk: this uses existing issue, relation, comment, wake,
and activity log tables with no migration.
- No UI screenshots included because this change is server-side harness
behavior only.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5-based coding agent. Exact runtime model ID and
context window were not exposed in this session. Used tool execution for
git, tests, typecheck, Greptile review handling, and GitHub CLI
operations.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-21 09:11:12 -05:00
Robin van Duiven
8d0c3d2fe6 fix(hermes): inject agent JWT into Hermes adapter env to fix identity attribution (#3608)
## Thinking Path

> - Paperclip orchestrates AI agents and records their actions through
auditable issue comments and API writes.
> - The local adapter registry is responsible for adapting each agent
runtime to Paperclip's server-side execution context.
> - The Hermes local adapter delegated directly to
`hermes-paperclip-adapter`, whose current execution context type
predates the server `authToken` field.
> - Without explicitly passing the run-scoped agent token and run id
into Hermes, Hermes could inherit a server or board-user
`PAPERCLIP_API_KEY` and lack a usable `PAPERCLIP_RUN_ID` for mutating
API calls.
> - That made Paperclip writes from Hermes agents risk appearing under
the wrong identity or without the correct run-scoped attribution.
> - This pull request wraps the Hermes execution call so Hermes receives
the agent run JWT as `PAPERCLIP_API_KEY` and the current execution id as
`PAPERCLIP_RUN_ID` while preserving explicit adapter configuration where
appropriate.
> - Follow-up review fixes preserve Hermes' built-in prompt when no
custom prompt template exists and document the intentional type cast.
> - The benefit is reliable agent attribution for the covered local
Hermes path without clobbering Hermes' default heartbeat/task
instructions.

## What Changed

- Wrapped `hermesLocalAdapter.execute` so `ctx.authToken` is injected
into `adapterConfig.env.PAPERCLIP_API_KEY` when no explicit Paperclip
API key is already configured.
- Injected `ctx.runId` into `adapterConfig.env.PAPERCLIP_RUN_ID` so the
auth guard's `X-Paperclip-Run-Id: $PAPERCLIP_RUN_ID` instruction
resolves to the current run id.
- Added a Paperclip API auth guard to existing custom Hermes
`promptTemplate` values without creating a replacement prompt when no
custom template exists.
- Documented the intentional `as unknown as` cast needed until
`hermes-paperclip-adapter` ships an `AdapterExecutionContext` type that
includes `authToken`.
- Added registry tests for JWT injection, run-id injection, explicit key
preservation, default prompt preservation, and the no-`authToken`
early-return path.
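
The injection logic above can be sketched as a pure function over the adapter config. The env variable names (`PAPERCLIP_API_KEY`, `PAPERCLIP_RUN_ID`) come from this PR description, while the surrounding types are simplified assumptions.

```typescript
interface ExecutionContext {
  authToken?: string; // run-scoped agent JWT
  runId: string;
}

interface AdapterConfig {
  env?: Record<string, string>;
}

// Inject the run-scoped JWT and run id, preserving an explicitly
// configured API key so intentionally configured agents are not broken.
function withRunIdentity(config: AdapterConfig, ctx: ExecutionContext): AdapterConfig {
  if (!ctx.authToken) return config; // no token: leave config untouched
  const env = { ...(config.env ?? {}) };
  if (!env.PAPERCLIP_API_KEY) env.PAPERCLIP_API_KEY = ctx.authToken;
  env.PAPERCLIP_RUN_ID = ctx.runId; // always the current execution id
  return { ...config, env };
}
```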

## Verification

- [x] `pnpm --filter "./server" exec vitest run adapter-registry` - 8
tests passed.
- [x] `pnpm --filter "./server" typecheck` - passed.
- [x] Trigger a Hermes agent heartbeat and verify Paperclip writes
appear under the agent identity rather than a shared board-user
identity, with the correct run id on mutating requests.

## Risks

- Low migration risk: this changes only the Hermes local adapter wrapper
and tests.
- Existing explicit `adapterConfig.env.PAPERCLIP_API_KEY` values are
preserved to avoid breaking intentionally configured agents.
- `PAPERCLIP_RUN_ID` is set from `ctx.runId` for each execution so
mutating API calls use the current run id instead of a stale or literal
placeholder value.
- Prompt behavior is intentionally conservative: the auth guard is only
prepended when a custom prompt template already exists, so Hermes'
built-in default prompt remains intact for unconfigured agents.
- Remaining operational risk: the identity and run-id behavior should
still be verified with a live Hermes heartbeat before relying on it in
production.

## Model Used

- OpenAI Codex, GPT-5 family coding agent, tool use enabled for local
shell, GitHub CLI, and test execution.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots (not applicable: backend-only change)
- [x] I have updated relevant documentation to reflect my changes (not
applicable: no product docs changed; PR description updated)
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
Co-authored-by: Dotta <bippadotta@protonmail.com>
2026-04-21 07:18:11 -05:00
Dotta
1266954a4e [codex] Make heartbeat scheduling blocker-aware (#4157)
## Thinking Path

> - Paperclip orchestrates AI agents through issue-driven heartbeats,
checkouts, and wake scheduling.
> - This change sits in the server heartbeat and issue services that
decide which queued runs are allowed to start.
> - Before this branch, queued heartbeats could be selected even when
their issue still had unresolved blocker relationships.
> - That let blocked descendant work compete with actually-ready work
and risked auto-checking out issues that were not dependency-ready.
> - This pull request teaches the scheduler and checkout path to consult
issue dependency readiness before claiming queued runs.
> - It also exposes dependency readiness in the agent inbox so agents
can see which assigned issues are still blocked.
> - The result is that heartbeat execution follows the DAG of blocked
dependencies instead of waking work out of order.

## What Changed

- Added `IssueDependencyReadiness` helpers to `issueService`, including
unresolved blocker lookup for single issues and bulk issue lists.
- Prevented issue checkout and `in_progress` transitions when unresolved
blockers still exist.
- Made heartbeat queued-run claiming and prioritization dependency-aware
so ready work starts before blocked descendants.
- Included dependency readiness fields in `/api/agents/me/inbox-lite`
for agent heartbeat selection.
- Added regression coverage for dependency-aware heartbeat promotion and
issue-service participation filtering.
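
A minimal sketch of the unresolved-blocker readiness computation, assuming a simplified relation shape; the real `IssueDependencyReadiness` helpers also cover bulk lookups over issue lists.

```typescript
interface BlockerRelation {
  blockerId: string;
  resolved: boolean; // true once the blocker issue is done or cancelled
}

interface Readiness {
  ready: boolean;
  unresolvedBlockerIds: string[];
}

// An issue is dependency-ready only when every blocker relation is resolved;
// the scheduler skips claiming queued runs for issues that are not ready.
function dependencyReadiness(relations: BlockerRelation[]): Readiness {
  const unresolved = relations.filter((r) => !r.resolved).map((r) => r.blockerId);
  return { ready: unresolved.length === 0, unresolvedBlockerIds: unresolved };
}
```

Exposing the unresolved ids (not just a boolean) is what lets the inbox show agents *which* assigned issues are still blocked.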

## Verification

- `pnpm run preflight:workspace-links`
- `pnpm exec vitest run
server/src/__tests__/heartbeat-dependency-scheduling.test.ts
server/src/__tests__/issues-service.test.ts`
- On this host, the Vitest command passed, but the embedded-Postgres
portions of those files were skipped because
`@embedded-postgres/darwin-x64` is not installed.

## Risks

- Scheduler ordering now prefers dependency-ready runs, so any hidden
assumptions about strict FIFO ordering could surface in edge cases.
- The new guardrails reject checkout or `in_progress` transitions for
blocked issues; callers depending on the old permissive behavior would
now get `422` errors.
- Local verification did not execute the embedded-Postgres integration
paths on this macOS host because the platform binary package was
missing.

> I checked `ROADMAP.md`; this is a targeted execution/scheduling fix
and does not duplicate planned roadmap feature work.

## Model Used

- OpenAI Codex via the Paperclip `codex_local` adapter in this
workspace. Exact backend model ID is not surfaced in the runtime here;
tool-enabled coding agent with terminal execution and repository editing
capabilities.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-20 16:03:57 -05:00
Hiuri Noronha
1bf2424377 fix: honor Hermes local command override (#3503)
## Summary

This fixes the Hermes local adapter so that a configured command
override is respected during both environment tests and execution.

## Problem

The Hermes adapter expects `adapterConfig.hermesCommand`, but the
generic local command path in the UI was storing
`adapterConfig.command`.

As a result, changing the command in the UI did not reliably affect
runtime behavior. In real use, the adapter could still fall back to the
default `hermes` binary.

This showed up clearly in setups where Hermes is launched through a
wrapper command rather than installed directly on the host.

## What changed

- switched the Hermes local UI adapter to the Hermes-specific config
builder
- updated the configuration form to read and write `hermesCommand` for
`hermes_local`
- preserved the override correctly in the test-environment path
- added server-side normalization from legacy `command` to
`hermesCommand`

## Compatibility

The server-side normalization keeps older saved agent configs working,
including configs that still store the value under `command`.
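
The normalization can be sketched as follows, assuming a minimal config shape with only the two fields named above (the real config carries more).

```typescript
interface HermesConfig {
  command?: string;       // legacy field written by the generic UI path
  hermesCommand?: string; // field the Hermes adapter actually reads
}

// Migrate a legacy `command` value into `hermesCommand` so older saved agent
// configs keep working; an existing `hermesCommand` always wins.
function normalizeHermesConfig(config: HermesConfig): HermesConfig {
  if (config.hermesCommand || !config.command) return config;
  return { ...config, hermesCommand: config.command };
}
```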

## Validation

Validated against a Docker-based Hermes workflow using a local wrapper
exposed through a symlinked command:

- `Command = hermes-docker`
- environment test respects the override
- runs no longer fall back to `hermes`

Typecheck also passed for both UI and server.

Co-authored-by: NoronhaH <NoronhaH@users.noreply.github.com>
2026-04-20 15:55:08 -05:00
LeonSGP
51f127f47b fix(hermes): stop advertising unsupported instructions bundles (#3908)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - Local adapter capability flags decide which configuration surfaces
the UI and server expose for each adapter.
> - `hermes_local` currently advertises managed instructions bundle
support, so Paperclip exposes the AGENTS.md bundle flow for Hermes
agents.
> - The bundled `hermes-paperclip-adapter` only consumes
`promptTemplate` at runtime and does not read `instructionsFilePath`, so
that advertised bundle path silently does nothing.
> - Issue #3833 reports exactly that mismatch: users configure AGENTS.md
instructions, but Hermes only receives the built-in heartbeat prompt.
> - This pull request stops advertising managed instructions bundles for
`hermes_local` until the adapter actually consumes bundle files at
runtime.

## What Changed

- Changed the built-in `hermes_local` server adapter registration to
report `supportsInstructionsBundle: false`.
- Updated the UI's synchronous built-in capability fallback so Hermes no
longer shows the managed instructions bundle affordance on first render.
- Added regression coverage in
`server/src/__tests__/adapter-routes.test.ts` to assert that
`hermes_local` still reports skills + local JWT support, but not
instructions bundle support.

## Verification

- `git diff --check`
- `node --experimental-strip-types --input-type=module -e "import {
findActiveServerAdapter } from './server/src/adapters/index.ts'; const
adapter = findActiveServerAdapter('hermes_local');
console.log(JSON.stringify({ type: adapter?.type,
supportsInstructionsBundle: adapter?.supportsInstructionsBundle,
supportsLocalAgentJwt: adapter?.supportsLocalAgentJwt, supportsSkills:
Boolean(adapter?.listSkills || adapter?.syncSkills) }));"`
- Observed
`{"type":"hermes_local","supportsInstructionsBundle":false,"supportsLocalAgentJwt":true,"supportsSkills":true}`
- Added adapter-routes regression assertions for the Hermes capability
contract; CI should validate the full route path in a clean workspace.

## Risks

- Low risk: this only changes the advertised capability surface for
`hermes_local`.
- Behavior change: Hermes agents will no longer show the broken managed
instructions bundle UI until the underlying adapter actually supports
`instructionsFilePath`.
- Existing Hermes skill sync and local JWT behavior are unchanged.

## Model Used

- OpenAI Codex, GPT-5.4 class coding agent, medium reasoning,
terminal/git/gh tool use.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [ ] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [ ] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-20 15:54:14 -05:00
github-actions[bot]
b94f1a1565 chore(lockfile): refresh pnpm-lock.yaml (#4139)
Auto-generated lockfile refresh after dependencies changed on master.
This PR only updates pnpm-lock.yaml.

Co-authored-by: lockfile-bot <lockfile-bot@users.noreply.github.com>
2026-04-20 12:15:59 -05:00
Dotta
2de893f624 [codex] add comprehensive UI Storybook coverage (#4132)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - The board UI is the main operator surface, so its component and
workflow coverage needs to stay reviewable as the product grows.
> - This branch adds Storybook as a dedicated UI reference surface for
core Paperclip screens and interaction patterns.
> - That work spans Storybook infrastructure, app-level provider wiring,
and a large fixture set that can render real control-plane states
without a live backend.
> - The branch also expands coverage across agents, budgets, issues,
chat, dialogs, navigation, projects, and data visualization so future UI
changes have a concrete visual baseline.
> - This pull request packages that Storybook work on top of the latest
`master`, excludes the lockfile from the final diff per repo policy, and
fixes one fixture contract drift caught during verification.
> - The benefit is a single reviewable PR that adds broad UI
documentation and regression-surfacing coverage without losing the
existing branch work.

## What Changed

- Added Storybook 10 wiring for the UI package, including root scripts,
UI package scripts, Storybook config, preview wrappers, Tailwind
entrypoints, and setup docs.
- Added a large fixture-backed data source for Storybook so complex
board states can render without a live server.
- Added story suites covering foundations, status language,
control-plane surfaces, overview, UX labs, agent management, budget and
finance, forms and editors, issue management, navigation and layout,
chat and comments, data visualization, dialogs and modals, and
projects/goals/workspaces.
- Adjusted several UI components for Storybook parity so dialogs, menus,
keyboard shortcuts, budget markers, markdown editing, and related
surfaces render correctly in isolation.
- Rebasing work for PR assembly: replayed the branch onto current
`master`, removed `pnpm-lock.yaml` from the final PR diff, and aligned
the dashboard fixture with the current `DashboardSummary.runActivity`
API contract.

## Verification

- `pnpm --filter @paperclipai/ui typecheck`
- `pnpm --filter @paperclipai/ui build-storybook`
- Manual diff audit after rebase: verified the PR no longer includes
`pnpm-lock.yaml` and now cleanly targets current `master`.
- Before/after UI note: before this branch there was no dedicated
Storybook surface for these Paperclip views; after this branch the local
Storybook build includes the new overview and domain story suites in
`ui/storybook-static`.

## Risks

- Large static fixture files can drift from shared types as dashboard
and UI contracts evolve; this PR already needed one fixture correction
for `runActivity`.
- Storybook bundle output includes some large chunks, so future growth
may need chunking work if build performance becomes an issue.
- Several component tweaks were made for isolated rendering parity, so
reviewers should spot-check key board surfaces against the live app
behavior.

## Model Used

- OpenAI Codex, GPT-5-based coding agent in the Paperclip harness; exact
serving model ID is not exposed in-runtime to the agent.
- Tool-assisted workflow with terminal execution, git operations, local
typecheck/build verification, and GitHub CLI PR creation.
- Context window/reasoning mode not surfaced by the harness.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 12:13:23 -05:00
Dotta
7a329fb8bb Harden API route authorization boundaries (#4122)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - The REST API is the control-plane boundary for companies, agents,
plugins, adapters, costs, invites, and issue mutations.
> - Several routes still relied on broad board or company access checks
without consistently enforcing the narrower actor, company, and
active-checkout boundaries those operations require.
> - That can allow agents or non-admin users to mutate sensitive
resources outside the intended governance path.
> - This pull request hardens the route authorization layer and adds
regression coverage for the audited API surfaces.
> - The benefit is tighter multi-company isolation, safer plugin and
adapter administration, and stronger enforcement of active issue
ownership.

## What Changed

- Added route-level authorization checks for budgets, plugin
administration/scoped routes, adapter management, company import/export,
direct agent creation, invite test resolution, and issue mutation/write
surfaces.
- Enforced active checkout ownership for agent-authenticated issue
mutations, while preserving explicit management overrides for permitted
managers.
- Restricted sensitive adapter and plugin management operations to
instance-admin or properly scoped actors.
- Tightened company portability and invite probing routes so agents
cannot cross company boundaries.
- Updated access constants and the Company Access UI copy for the new
active-checkout management grant.
- Added focused regression tests covering cross-company denial, agent
self-mutation denial, admin-only operations, and active checkout
ownership.
- Rebased the branch onto `public-gh/master` and fixed validation
fallout from the rebase: heartbeat-context route ordering and a company
import/export e2e fixture that now opts out of direct-hire approval
before using direct agent creation.
- Updated onboarding and signoff e2e setup to create seed agents through
`/agent-hires` plus board approval, so they remain compatible with the
approval-gated new-agent default.
- Addressed Greptile feedback by removing a duplicate company export API
alias, avoiding N+1 reporting-chain lookups in active-checkout override
checks, allowing agent mutations on unassigned `in_progress` issues, and
blocking NAT64 invite-probe targets.
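
A minimal sketch of the active-checkout ownership rule described above, assuming a pure decision function; the names (`canAgentMutateIssue`, `Actor`, `Issue`) are illustrative, not the actual route code:

```typescript
type Actor = { id: string; companyId: string; isInstanceAdmin: boolean };
type Issue = {
  companyId: string;
  status: "open" | "in_progress" | "done";
  checkedOutBy?: string; // agent holding the active checkout, if any
};

function canAgentMutateIssue(
  actor: Actor,
  issue: Issue,
  hasManagementOverride: boolean
): boolean {
  // Cross-company access is always denied, before any override applies.
  if (actor.companyId !== issue.companyId) return false;
  // Instance admins and explicit management overrides bypass ownership.
  if (actor.isInstanceAdmin || hasManagementOverride) return true;
  // Unassigned in-progress issues remain mutable (per the Greptile feedback).
  if (issue.status === "in_progress" && issue.checkedOutBy === undefined) return true;
  // Otherwise the mutating agent must hold the active checkout.
  return issue.checkedOutBy === actor.id;
}
```

Keeping the rule as a pure function makes the cross-company denial and ownership cases directly unit-testable, mirroring the regression tests listed below.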

## Verification

- `pnpm exec vitest run
server/src/__tests__/issues-goal-context-routes.test.ts
cli/src/__tests__/company-import-export-e2e.test.ts`
- `pnpm exec vitest run server/src/__tests__/plugin-routes-authz.test.ts
server/src/__tests__/adapter-routes-authz.test.ts
server/src/__tests__/agent-permissions-routes.test.ts
server/src/__tests__/company-portability-routes.test.ts
server/src/__tests__/costs-service.test.ts
server/src/__tests__/invite-test-resolution-route.test.ts
server/src/__tests__/issue-agent-mutation-ownership-routes.test.ts
server/src/__tests__/agent-adapter-validation-routes.test.ts`
- `pnpm exec vitest run
server/src/__tests__/issue-agent-mutation-ownership-routes.test.ts`
- `pnpm exec vitest run
server/src/__tests__/invite-test-resolution-route.test.ts`
- `pnpm -r typecheck`
- `pnpm --filter server typecheck`
- `pnpm --filter ui typecheck`
- `pnpm build`
- `pnpm test:e2e -- tests/e2e/onboarding.spec.ts
tests/e2e/signoff-policy.spec.ts`
- `pnpm test:e2e -- tests/e2e/signoff-policy.spec.ts`
- `pnpm test:run` was also run. It failed under default full-suite
parallelism with two order-dependent failures in
`plugin-routes-authz.test.ts` and `routines-e2e.test.ts`; both files
passed when rerun directly together with `pnpm exec vitest run
server/src/__tests__/plugin-routes-authz.test.ts
server/src/__tests__/routines-e2e.test.ts`.

## Risks

- Medium risk: this changes authorization behavior across multiple
sensitive API surfaces, so callers that depended on broad board/company
access may now receive `403` or `409` until they use the correct
governance path.
- Direct agent creation now respects the company-level board-approval
requirement; integrations that need pending hires should use
`/api/companies/:companyId/agent-hires`.
- Active in-progress issue mutations now require checkout ownership or
an explicit management override, which may reveal workflow assumptions
in older automation.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected — check the roadmap
first. See `CONTRIBUTING.md`.

## Model Used

OpenAI Codex, GPT-5 coding agent, tool-using workflow with local shell,
Git, GitHub CLI, and repository tests.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [ ] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 10:56:48 -05:00
Dotta
549ef11c14 [codex] Respect manual workspace runtime controls (#4125)
## Thinking Path

> - Paperclip orchestrates AI agents inside execution and project
workspaces
> - Workspace runtime services can be controlled manually by operators
and reused by agent runs
> - Manual start/stop state was not preserved consistently across
workspace policies and routine launches
> - Routine launches also needed branch/workspace variables to default
from the selected workspace context
> - This pull request makes runtime policy state explicit, preserves
manual control, and auto-fills routine branch variables from workspace
data
> - The benefit is less surprising workspace service behavior and fewer
manual inputs when running workspace-scoped routines

## What Changed

- Added runtime-state handling for manual workspace control across
execution and project workspace validators, routes, and services.
- Updated heartbeat/runtime startup behavior so manually stopped
services are respected.
- Auto-filled routine workspace branch variables from available
workspace context.
- Added focused server and UI tests for workspace runtime and routine
variable behavior.
- Removed muted gray background styling from workspace pages and cards
for a cleaner workspace UI.
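
The manual-control rule above can be sketched as a small policy function; `RuntimeState` and `shouldStartService` are illustrative names, not the actual service code:

```typescript
type RuntimeState = "auto" | "manual_started" | "manual_stopped";

function shouldStartService(state: RuntimeState, policyWantsRunning: boolean): boolean {
  // A manual stop always wins over policy: respect the operator.
  if (state === "manual_stopped") return false;
  // A manual start keeps the service running regardless of policy.
  if (state === "manual_started") return true;
  // Otherwise defer to the workspace runtime policy.
  return policyWantsRunning;
}
```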

## Verification

- `pnpm install --frozen-lockfile --ignore-scripts`
- `pnpm exec vitest run server/src/__tests__/routines-service.test.ts
server/src/__tests__/workspace-runtime.test.ts
ui/src/components/RoutineRunVariablesDialog.test.tsx`
- Result: 55 tests passed, 21 skipped. The embedded Postgres routines
tests skipped on this host with the existing PGlite/Postgres init
warning; workspace-runtime and UI tests passed.

## Risks

- Medium risk: this touches runtime service start/stop policy and
heartbeat launch behavior.
- The focused tests cover manual runtime state, routine variables, and
workspace runtime reuse paths.

## Model Used

- OpenAI Codex coding agent based on GPT-5, tool-enabled local shell and
GitHub workflow, exact runtime context window not exposed in this
session.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots, or documented why targeted component/service verification
is sufficient here
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 10:39:37 -05:00
Dotta
c7c1ca0c78 [codex] Clean up terminal-result adapter process groups (#4129)
## Thinking Path

> - Paperclip runs local adapter processes for agents and streams their
output into heartbeat runs
> - Some adapters can emit a terminal result before all descendant
processes have exited
> - If those descendants keep running, a heartbeat can appear complete
while the process group remains alive
> - Claude local runs need a bounded cleanup path after terminal JSON
output is observed and the child exits
> - This pull request adds terminal-result cleanup support to adapter
process utilities and wires it into the Claude local adapter
> - The benefit is fewer stranded adapter process groups after
successful terminal results

## What Changed

- Added terminal-result cleanup options to `runChildProcess`.
- Tracked child exit plus terminal output before signaling lingering
process groups.
- Added Claude local adapter configuration for terminal result cleanup
grace time.
- Added process cleanup tests covering terminal-output cleanup and noisy
non-terminal runs.
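
The arming rule above, that cleanup only fires after both a terminal result and child exit, can be sketched roughly as follows; `CleanupTracker` and the line-matching heuristic are illustrative, not the adapter-utils API:

```typescript
class CleanupTracker {
  private sawTerminalResult = false;
  private childExited = false;
  armed = false;

  onStdoutLine(line: string): void {
    // A terminal JSON result marks the run as logically complete.
    if (line.includes('"type":"result"')) this.sawTerminalResult = true;
    this.maybeArm();
  }

  onChildExit(): void {
    this.childExited = true;
    this.maybeArm();
  }

  private maybeArm(): void {
    // Only when both conditions hold may lingering descendants be signaled
    // (in the real flow, after a configurable grace period).
    if (this.sawTerminalResult && this.childExited) this.armed = true;
  }
}
```

Requiring both events avoids killing a process group during a noisy but still-active run, which is the case the non-terminal test covers.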

## Verification

- `pnpm install --frozen-lockfile --ignore-scripts`
- `pnpm exec vitest run packages/adapter-utils/src/server-utils.test.ts`
- Result: 9 tests passed.

## Risks

- Medium risk: this changes adapter child-process cleanup behavior.
- The cleanup only arms after terminal result detection and child exit,
and it is covered by process-group tests.

## Model Used

- OpenAI Codex coding agent based on GPT-5, tool-enabled local shell and
GitHub workflow, exact runtime context window not exposed in this
session.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots, or documented why it is not applicable
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 10:38:57 -05:00
Dotta
56b3120971 [codex] Improve mobile org chart navigation (#4127)
## Thinking Path

> - Paperclip models companies as teams of human and AI operators
> - The org chart is the primary visual map of that company structure
> - Mobile users need to pan and inspect the chart without awkward
gestures or layout jumps
> - The roadmap also needed to reflect that the multiple-human-users
work is complete
> - This pull request improves mobile org chart gestures and updates the
roadmap references
> - The benefit is a smoother company navigation experience and docs
that match shipped multi-user support

## What Changed

- Added one-finger mobile pan handling for the org chart.
- Expanded org chart test coverage for touch gesture behavior.
- Updated README, ROADMAP, and CLI README references to mark
multiple-human-users work as complete.
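
The one-finger pan behavior reduces to delta math against the previous touch point; this sketch assumes a camera-offset model, and the names (`panCamera`, `Point`, `Camera`) are illustrative:

```typescript
interface Point { clientX: number; clientY: number }
interface Camera { x: number; y: number }

function panCamera(
  camera: Camera,
  prev: Point | null,
  touches: Point[]
): { camera: Camera; prev: Point | null } {
  // Only a single touch pans; multi-touch gestures are handled elsewhere.
  if (touches.length !== 1) return { camera, prev: null };
  const t = touches[0];
  // The first touchmove just records the anchor point.
  if (prev === null) return { camera, prev: t };
  // Subsequent moves shift the camera by the finger's delta.
  return {
    camera: {
      x: camera.x + (t.clientX - prev.clientX),
      y: camera.y + (t.clientY - prev.clientY),
    },
    prev: t,
  };
}
```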

## Verification

- `pnpm install --frozen-lockfile --ignore-scripts`
- `pnpm exec vitest run ui/src/pages/OrgChart.test.tsx`
- Result: 4 tests passed.

## Risks

- Low-medium risk: org chart pointer/touch handling changed, but the
behavior is scoped to the org chart page and covered by targeted tests.

## Model Used

- OpenAI Codex coding agent based on GPT-5, tool-enabled local shell and
GitHub workflow, exact runtime context window not exposed in this
session.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots, or documented why targeted interaction tests are sufficient
here
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 10:35:33 -05:00
Dotta
4357a3f352 [codex] Harden dashboard run activity charts (#4126)
## Thinking Path

> - Paperclip gives operators a live view of agent work across
dashboards, transcripts, and run activity charts
> - Those views consume live run updates and aggregate run activity from
backend dashboard data
> - Missing or partial run data could make charts brittle, and live
transcript updates were heavier than needed
> - Operators need dashboard data to stay stable even when recent run
payloads are incomplete
> - This pull request hardens dashboard run aggregation, guards chart
rendering, and lightens live run update handling
> - The benefit is a more reliable dashboard during active agent
execution

## What Changed

- Added dashboard run activity types and backend aggregation coverage.
- Guarded activity chart rendering when run data is missing or partial.
- Reduced live transcript update churn in active agent and run chat
surfaces.
- Fixed issue chat avatar alignment in the thread renderer.
- Added focused dashboard, activity chart, and live transcript tests.
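
Guarding chart rendering against missing or partial run payloads amounts to normalizing the input before it reaches the chart; `RunActivityPoint` and `normalizeRunActivity` are illustrative names, not the actual `DashboardSummary` types:

```typescript
interface RunActivityPoint { date?: string; runs?: number; failures?: number }

function normalizeRunActivity(
  points: Array<RunActivityPoint | null | undefined> | undefined
): Array<{ date: string; runs: number; failures: number }> {
  // A missing payload renders as an empty chart rather than crashing.
  if (!Array.isArray(points)) return [];
  return points
    // Drop entries with no usable date; default missing counts to zero.
    .filter((p): p is RunActivityPoint & { date: string } =>
      p != null && typeof p.date === "string")
    .map((p) => ({ date: p.date, runs: p.runs ?? 0, failures: p.failures ?? 0 }));
}
```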

## Verification

- `pnpm install --frozen-lockfile --ignore-scripts`
- `pnpm exec vitest run server/src/__tests__/dashboard-service.test.ts
ui/src/components/ActivityCharts.test.tsx
ui/src/components/transcript/useLiveRunTranscripts.test.tsx`
- Result: 8 tests passed, 1 skipped. The embedded Postgres dashboard
service test skipped on this host with the existing PGlite/Postgres init
warning; UI chart and transcript tests passed.

## Risks

- Medium-low risk: aggregation semantics changed, but the UI remains
guarded around incomplete data.
- The dashboard service test is host-skipped here, so CI should confirm
the embedded database path.

## Model Used

- OpenAI Codex coding agent based on GPT-5, tool-enabled local shell and
GitHub workflow, exact runtime context window not exposed in this
session.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots, or documented why targeted component tests are sufficient
here
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 10:34:21 -05:00
Dotta
0f4e4b4c10 [codex] Split reusable agent hiring templates (#4124)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Hiring new agents depends on clear, reusable operating instructions
> - The create-agent skill had one large template reference that mixed
multiple roles together
> - That made it harder to reuse, review, and adapt role-specific
instructions during governed hires
> - This pull request splits the reusable agent instruction templates
into focused role files and polishes the agent instructions pane layout
> - The benefit is faster, clearer agent hiring without bloating the
main skill document

## What Changed

- Split coder, QA, and UX designer reusable instructions into dedicated
reference files.
- Kept the index reference concise and pointed it at the role-specific
files.
- Updated the create-agent skill to describe the separated template
structure.
- Polished the agent detail instructions/package file tree layout so the
longer template references remain readable.

## Verification

- `pnpm install --frozen-lockfile --ignore-scripts`
- `pnpm --filter @paperclipai/ui typecheck`
- UI screenshot rationale: no screenshots attached because the visible
change is limited to the Agent detail instructions file-tree layout
(`wrapLabels` plus the side-by-side breakpoint). There is no new user
flow or state transition to demonstrate; reviewers can verify visually
by opening an agent's Instructions tab and resizing across the
single-column and side-by-side breakpoints to confirm long file names
wrap instead of truncating or overflowing.

## Risks

- Low risk: this is documentation and UI layout only.
- Main risk is stale links in the skill references; the new files are
committed in the referenced paths.

## Model Used

- OpenAI Codex coding agent based on GPT-5, tool-enabled local shell and
GitHub workflow, exact runtime context window not exposed in this
session.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots, or documented why targeted component/type verification is
sufficient here
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 10:33:19 -05:00
Aron Prins
73eb23734f docs: use structured agent mentions in paperclip skill (#4103)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies
> - Agents coordinate work through tasks and comments, and @-mentions
are part of the wakeup path for cross-agent handoffs and review requests
> - The current repo skill still instructs machine-authored comments to
use raw `@AgentName` text as the default mention format
> - But the current backend mention parsing is still unreliable for
multi-word display names, so agents following that guidance can silently
fail to wake the intended target
> - This pull request updates the Paperclip skill and API reference to
prefer structured `agent://` markdown mentions for machine-authored
comments
> - The benefit is a low-risk documentation workaround that steers
agents onto the mention format the server already resolves reliably
while broader runtime fixes are reviewed upstream

## What Changed

- Updated `skills/paperclip/SKILL.md` to stop recommending raw
`@AgentName` mentions for machine-authored comments
- Updated `skills/paperclip/references/api-reference.md` with a concrete
workflow: resolve the target via `GET
/api/companies/{companyId}/agents`, then emit `[@Display
Name](agent://<agent-id>)`
- Added explicit guidance that raw `@AgentName` text is fallback-only
and unreliable for names containing spaces
- Cross-referenced the current upstream mention-bug context so reviewers
can connect this docs workaround to the open parser/runtime fixes
  Related issue/PR refs: #448, #459, #558, #669, #722, #1412, #2249
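
The documented workflow, resolve the agent then emit a structured mention, can be sketched like this; the `Agent` shape and helper names are assumptions, while the `[@Display Name](agent://<agent-id>)` format comes from the updated skill docs:

```typescript
interface Agent { id: string; displayName: string }

function formatAgentMention(agent: Agent): string {
  // Structured mentions resolve reliably even when the display name
  // contains spaces, unlike raw `@Name` text.
  return `[@${agent.displayName}](agent://${agent.id})`;
}

function mentionByName(agents: Agent[], displayName: string): string {
  const match = agents.find((a) => a.displayName === displayName);
  // Fall back to raw text only when resolution fails.
  return match ? formatAgentMention(match) : `@${displayName}`;
}
```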

## Verification

- `pnpm -r typecheck`
- `pnpm build`
- `pnpm test:run` currently fails on upstream `master` in existing tests
unrelated to this docs-only change:
  - `src/__tests__/worktree.test.ts` — `seeds authenticated users into
minimally cloned worktree instances` timed out after 20000ms
  - `src/__tests__/onboard.test.ts` — `keeps tailnet quickstart on
loopback until tailscale is available` expected `127.0.0.1` but got
`100.125.202.3`
- Confirmed the git diff is limited to:
  - `skills/paperclip/SKILL.md`
  - `skills/paperclip/references/api-reference.md`

## Risks

- Low risk. This is a docs/skill-only change and does not alter runtime
behavior.
- It is a mitigation, not a full fix: it helps agent-authored comments
that follow the Paperclip skill, but it does not fix manually typed raw
mentions or other code paths that still emit plain `@Name` text.
- If upstream chooses a different long-term mention format, this
guidance may need to be revised once the runtime-side fix lands.

## Model Used

- OpenAI Codex desktop agent on a GPT-5-class model. Exact deployed
model ID and context window are not exposed by the local harness. Tool
use enabled, including shell execution, git, and GitHub CLI.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-20 07:38:04 -07:00
Dotta
9c6f551595 [codex] Add plugin orchestration host APIs (#4114)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - The plugin system is the extension path for optional capabilities
that should not require core product changes for every integration.
> - Plugins need scoped host APIs for issue orchestration, documents,
wakeups, summaries, activity attribution, and isolated database state.
> - Without those host APIs, richer plugins either cannot coordinate
Paperclip work safely or need privileged core-side special cases.
> - This pull request adds the plugin orchestration host surface, scoped
route dispatch, a database namespace layer, and a smoke plugin that
exercises the contract.
> - The benefit is a broader plugin API that remains company-scoped,
auditable, and covered by tests.

## What Changed

- Added plugin orchestration host APIs for issue creation, document
access, wakeups, summaries, plugin-origin activity, and scoped API route
dispatch.
- Added plugin database namespace tables, schema exports, migration
checks, and idempotent replay coverage under migration
`0059_plugin_database_namespaces`.
- Added shared plugin route/API types and validators used by server and
SDK boundaries.
- Expanded plugin SDK types, protocol helpers, worker RPC host behavior,
and testing utilities for orchestration flows.
- Added the `plugin-orchestration-smoke-example` package to exercise
scoped routes, restricted database namespaces, issue orchestration,
documents, wakeups, summaries, and UI status surfaces.
- Kept the new orchestration smoke fixture out of the root pnpm
workspace importer so this PR preserves the repository policy of not
committing `pnpm-lock.yaml`.
- Updated plugin docs and database docs for the new orchestration and
database namespace surfaces.
- Rebased the branch onto `public-gh/master`, resolved conflicts, and
removed `pnpm-lock.yaml` from the final PR diff.
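
The database namespace restriction can be illustrated as a key-prefix rule: each plugin reads and writes only under its own prefix. The helper names and validation pattern here are assumptions, not the actual `0059_plugin_database_namespaces` schema:

```typescript
function namespacedKey(pluginId: string, key: string): string {
  // Reject plugin ids that could escape the prefix scheme.
  if (!/^[a-z0-9_-]+$/.test(pluginId)) throw new Error("invalid plugin id");
  return `plugin:${pluginId}:${key}`;
}

function assertOwnNamespace(pluginId: string, fullKey: string): void {
  // Scoped dispatch rejects reads/writes outside the caller's namespace.
  if (!fullKey.startsWith(`plugin:${pluginId}:`)) {
    throw new Error("cross-namespace access denied");
  }
}
```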

## Verification

- `pnpm install --frozen-lockfile`
- `pnpm --filter @paperclipai/db typecheck`
- `pnpm exec vitest run packages/db/src/client.test.ts`
- `pnpm exec vitest run server/src/__tests__/plugin-database.test.ts
server/src/__tests__/plugin-orchestration-apis.test.ts
server/src/__tests__/plugin-routes-authz.test.ts
server/src/__tests__/plugin-scoped-api-routes.test.ts
server/src/__tests__/plugin-sdk-orchestration-contract.test.ts`
- From `packages/plugins/examples/plugin-orchestration-smoke-example`:
`pnpm exec vitest run --config ./vitest.config.ts`
- `pnpm --dir
packages/plugins/examples/plugin-orchestration-smoke-example run
typecheck`
- `pnpm --filter @paperclipai/server typecheck`
- PR CI on latest head `293fc67c`: `policy`, `verify`, `e2e`, and
`security/snyk` all passed.

## Risks

- Medium risk: this expands plugin host authority, so route auth,
company scoping, and plugin-origin activity attribution need careful
review.
- Medium risk: database namespace migration behavior must remain
idempotent for environments that may have seen earlier branch versions.
- Medium risk: the orchestration smoke fixture is intentionally excluded
from the root workspace importer to avoid a `pnpm-lock.yaml` PR diff;
direct fixture verification remains listed above.
- Low operational risk from the PR setup itself: the branch is rebased
onto current `master`, the migration is ordered after upstream
`0057`/`0058`, and `pnpm-lock.yaml` is not in the final diff.

Roadmap checked: this work aligns with the completed Plugin system
milestone and extends the plugin surface rather than duplicating an
unrelated planned core feature.

## Model Used

- OpenAI Codex, GPT-5-based coding agent in a tool-enabled CLI
environment. Exact hosted model build and context-window size are not
exposed by the runtime; reasoning/tool use were enabled for repository
inspection, editing, testing, git operations, and PR creation.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots (N/A: no core UI screen change; example plugin UI contract
is covered by tests)
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 08:52:51 -05:00
Dotta
16b2b84d84 [codex] Improve agent runtime recovery and governance (#4086)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - The heartbeat runtime, agent import path, and agent configuration
defaults determine whether work is dispatched safely and predictably.
> - Several accumulated fixes all touched agent execution recovery, wake
routing, import behavior, and runtime concurrency defaults.
> - Those changes need to land together so the heartbeat service and
agent creation defaults stay internally consistent.
> - This pull request groups the runtime/governance changes from the
split branch into one standalone branch.
> - The benefit is safer recovery for stranded runs, bounded high-volume
reads, imported-agent approval correctness, skill-template support, and
a clearer default concurrency policy.

## What Changed

- Fixed stranded continuation recovery so successful automatic retries
are requeued instead of incorrectly blocking the issue.
- Bounded high-volume issue/log reads across issue, heartbeat, agent,
project, and workspace paths.
- Fixed imported-agent approval and instruction-path permission
handling.
- Quarantined seeded worktree execution state during worktree
provisioning.
- Queued approval follow-up wakes and hardened SQL_ASCII heartbeat
output handling.
- Added reusable agent instruction templates for hiring flows.
- Set the default max concurrent agent runs to five and updated related
UI/tests/docs.
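The five-run default caps how many agent runs dispatch at once. As a rough illustration of the bounded-dispatch idea (the names `runWithLimit` and `DEFAULT_MAX_CONCURRENT_RUNS` are hypothetical; Paperclip's actual dispatcher is not shown in this PR), a worker-pool limiter might look like:

```typescript
// Hypothetical sketch of bounded-concurrency dispatch; not Paperclip's
// actual heartbeat dispatcher.
const DEFAULT_MAX_CONCURRENT_RUNS = 5;

async function runWithLimit<T>(
  tasks: Array<() => Promise<T>>,
  limit: number = DEFAULT_MAX_CONCURRENT_RUNS,
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  // Each worker repeatedly claims the next unclaimed task index, so at
  // most `limit` tasks are in flight at any moment.
  const worker = async (): Promise<void> => {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  };
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker),
  );
  return results;
}
```

With a shape like this, changing the default becomes a single-constant edit, which is consistent with the "updated related UI/tests/docs" bullet above.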

## Verification

- `pnpm install --frozen-lockfile`
- `pnpm exec vitest run server/src/__tests__/company-portability.test.ts
server/src/__tests__/heartbeat-process-recovery.test.ts
server/src/__tests__/heartbeat-comment-wake-batching.test.ts
server/src/__tests__/heartbeat-list.test.ts
server/src/__tests__/issues-service.test.ts
server/src/__tests__/agent-permissions-routes.test.ts
packages/adapter-utils/src/server-utils.test.ts
ui/src/lib/new-agent-runtime-config.test.ts`
- Split integration check: merged this branch first, followed by the
other [PAP-1614](/PAP/issues/PAP-1614) branches, with no merge
conflicts.
- Confirmed this branch does not include `pnpm-lock.yaml`.

## Risks

- Medium risk: touches heartbeat recovery, queueing, and issue list
bounds in central runtime paths.
- Imported-agent and concurrency default behavior changes may affect
existing automation that assumes one-at-a-time default runs.
- No database migrations are included.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5.4 tool-enabled coding model: an agentic
code-editing runtime with local shell and GitHub CLI access. The exact
context window and reasoning mode are not exposed by the Paperclip
harness.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 06:19:48 -05:00
Dotta
057fee4836 [codex] Polish issue and operator workflow UI (#4090)
## Thinking Path

> - Paperclip operators spend much of their time in issues, inboxes,
selectors, and rich comment threads.
> - Small interaction problems in those surfaces slow down supervision
of AI-agent work.
> - The branch included related operator quality-of-life fixes for issue
layout, inbox actions, recent selectors, mobile inputs, and chat
rendering stability.
> - These changes are UI-focused and can land independently from
workspace navigation and access-profile work.
> - This pull request groups the operator QoL fixes into one standalone
branch.
> - The benefit is a more stable and efficient board workflow for issue
triage and task editing.

## What Changed

- Widened issue detail content and added a desktop inbox archive action.
- Fixed mobile text-field zoom by keeping touch input font sizes at
16px.
- Prioritized recent picker selections for assignees/projects in issue
and routine flows.
- Showed actionable approvals in the Mine inbox model.
- Fixed issue chat renderer state crashes and hardened tests.
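The recent-selection prioritization above can be pictured as a stable re-ranking of picker options. A minimal sketch under assumed names (the real `recent-selections` helpers may differ):

```typescript
// Hypothetical sketch: move recently chosen options to the front of a
// picker, most recent first, without reordering the rest.
function prioritizeRecent<T extends { id: string }>(
  options: T[],
  recentIds: string[],
): T[] {
  const rank = new Map(recentIds.map((id, i) => [id, i]));
  // Array.prototype.sort is stable in modern JS engines, so options
  // that were never recently selected keep their original order.
  return [...options].sort((a, b) => {
    const ra = rank.has(a.id) ? rank.get(a.id)! : Number.MAX_SAFE_INTEGER;
    const rb = rank.has(b.id) ? rank.get(b.id)! : Number.MAX_SAFE_INTEGER;
    return ra - rb;
  });
}
```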

## Verification

- `pnpm install --frozen-lockfile`
- `pnpm exec vitest run ui/src/components/IssueChatThread.test.tsx
ui/src/lib/inbox.test.ts ui/src/lib/recent-selections.test.ts`
- Split integration check: merged last after the other
[PAP-1614](/PAP/issues/PAP-1614) branches with no merge conflicts.
- Confirmed this branch does not include `pnpm-lock.yaml`.

## Risks

- Low to medium risk: mostly UI state, layout, and selection-priority
behavior.
- Visual layout and mobile zoom behavior may need browser/device QA
beyond component tests.
- No database migrations are included.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5.4 tool-enabled coding model: an agentic
code-editing runtime with local shell and GitHub CLI access. The exact
context window and reasoning mode are not exposed by the Paperclip
harness.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 06:16:41 -05:00
Dotta
fee514efcb [codex] Improve workspace navigation and runtime UI (#4089)
## Thinking Path

> - Paperclip agents do real work in project and execution workspaces.
> - Operators need workspace state to be visible, navigable, and
copyable without digging through raw run logs.
> - The branch included related workspace cards, navigation, runtime
controls, stale-service handling, and issue-property visibility.
> - These changes share the workspace UI and runtime-control surfaces
and can stand alone from unrelated access/profile work.
> - This pull request groups the workspace experience changes into one
standalone branch.
> - The benefit is a clearer workspace overview, better metadata copy
flows, and more accurate runtime service controls.

## What Changed

- Polished project workspace summary cards and made workspace metadata
copyable.
- Added a workspace navigation overview and extracted reusable project
workspace content.
- Squared away and polished the execution workspace configuration page.
- Fixed stale workspace command matching and hid stopped stale services
in runtime controls.
- Showed live workspace service context in issue properties.

## Verification

- `pnpm install --frozen-lockfile`
- `pnpm exec vitest run
ui/src/components/ProjectWorkspaceSummaryCard.test.tsx
ui/src/lib/project-workspaces-tab.test.ts
ui/src/components/Sidebar.test.tsx
ui/src/components/WorkspaceRuntimeControls.test.tsx
ui/src/components/IssueProperties.test.tsx`
- `pnpm exec vitest run packages/shared/src/workspace-commands.test.ts
--config /dev/null` because the root Vitest project config does not
currently include `packages/shared` tests.
- Split integration check: merged after runtime/governance,
dev-infra/backups, and access/profiles with no merge conflicts.
- Confirmed this branch does not include `pnpm-lock.yaml`.

## Risks

- Medium risk: touches workspace navigation, runtime controls, and issue
property rendering.
- Visual layout changes may need browser QA, especially around smaller
screens and dense workspace metadata.
- No database migrations are included.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5.4 tool-enabled coding model: an agentic
code-editing runtime with local shell and GitHub CLI access. The exact
context window and reasoning mode are not exposed by the Paperclip
harness.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 06:14:32 -05:00
Dotta
d8b63a18e7 [codex] Add access cleanup and user profile page (#4088)
## Thinking Path

> - Paperclip is moving from a solo local operator model toward teams
supervising AI-agent companies.
> - Human access management and human-visible profile surfaces are part
of that multiple-user path.
> - The branch included related access cleanup, archived-member removal,
permission protection, and a user profile page.
> - These changes share company membership, user attribution, and
access-service behavior.
> - This pull request groups those human access/profile changes into one
standalone branch.
> - The benefit is safer member removal behavior and a first profile
surface for user work, activity, and cost attribution.

## What Changed

- Added archived company member removal support across shared contracts,
server routes/services, and UI.
- Protected company member removal with stricter permission checks and
tests.
- Added company user profile API, shared types, route wiring, client
API, route, and UI page.
- Simplified the user profile page visual design to a neutral
typography-led layout.
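The stricter removal permission checks are not spelled out in this description; purely as an illustrative sketch of role-gated member removal (the role names and rules below are invented, not Paperclip's actual policy):

```typescript
// Hypothetical permission guard; Paperclip's real access-service rules
// are not reproduced in this PR text.
type Role = "owner" | "admin" | "member";

function canRemoveMember(actor: Role, target: Role): boolean {
  // Owners may remove anyone except another owner via this path;
  // admins may remove only plain members; members may remove no one.
  if (actor === "owner") return target !== "owner";
  if (actor === "admin") return target === "member";
  return false;
}
```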

## Verification

- `pnpm install --frozen-lockfile`
- `pnpm exec vitest run server/src/__tests__/access-service.test.ts
server/src/__tests__/user-profile-routes.test.ts
ui/src/pages/CompanyAccess.test.tsx --hookTimeout=30000`
- `pnpm exec vitest run server/src/__tests__/user-profile-routes.test.ts
--testTimeout=30000 --hookTimeout=30000` after an initial local
embedded-Postgres hook timeout in the combined run.
- Split integration check: merged after runtime/governance and
dev-infra/backups with no merge conflicts.
- Confirmed this branch does not include `pnpm-lock.yaml`.

## Risks

- Medium risk: changes member removal permissions and adds a new user
profile route with cross-table stats.
- The profile page is a new UI surface and may need visual follow-up in
browser QA.
- No database migrations are included.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5.4 tool-enabled coding model: an agentic
code-editing runtime with local shell and GitHub CLI access. The exact
context window and reasoning mode are not exposed by the Paperclip
harness.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 06:10:20 -05:00
Dotta
e89d3f7e11 [codex] Add backup endpoint and dev runtime hardening (#4087)
## Thinking Path

> - Paperclip is a local-first control plane for AI-agent companies.
> - Operators need predictable local dev behavior, recoverable instance
data, and scripts that do not churn the running app.
> - Several accumulated changes improve backup streaming, dev-server
health, static UI caching/logging, diagnostic-file ignores, and instance
isolation.
> - These are operational improvements that can land independently from
product UI work.
> - This pull request groups the dev-infra and backup changes from the
split branch into one standalone branch.
> - The benefit is safer local operation, easier manual backups, less
noisy dev output, and less cross-instance auth leakage.

## What Changed

- Added a manual instance database backup endpoint and route tests.
- Streamed backup/restore handling to avoid materializing large payloads
at once.
- Reduced dev static UI log/cache churn and ignored Node diagnostic
report captures.
- Added guarded dev auto-restart health polling coverage.
- Preserved worktree config during provisioning and scoped auth cookies
by instance.
- Added a Discord daily digest helper script and environment
documentation.
- Hardened adapter-route and startup feedback export tests around the
changed infrastructure.
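Scoping auth cookies by instance keeps two local instances on the same origin from sharing sessions. A minimal sketch of the naming idea (the helper and cookie names here are assumptions, not the actual implementation):

```typescript
// Hypothetical helper: suffix the session cookie name with the
// instance id so co-located instances cannot read each other's
// sessions.
function instanceCookieName(base: string, instanceId: string): string {
  return `${base}.${instanceId}`;
}
```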

## Verification

- `pnpm install --frozen-lockfile`
- `pnpm exec vitest run packages/db/src/backup-lib.test.ts
server/src/__tests__/instance-database-backups-routes.test.ts
server/src/__tests__/server-startup-feedback-export.test.ts
server/src/__tests__/adapter-routes.test.ts
server/src/__tests__/dev-runner-paths.test.ts
server/src/__tests__/health-dev-server-token.test.ts
server/src/__tests__/http-log-policy.test.ts
server/src/__tests__/vite-html-renderer.test.ts
server/src/__tests__/workspace-runtime.test.ts
server/src/__tests__/better-auth.test.ts`
- Split integration check: merged after the runtime/governance branch
and before UI branches with no merge conflicts.
- Confirmed this branch does not include `pnpm-lock.yaml`.

## Risks

- Medium risk: touches server startup, backup streaming, auth cookie
naming, dev health checks, and worktree provisioning.
- Backup endpoint behavior depends on existing board/admin access
controls and database backup helpers.
- No database migrations are included.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5.4 tool-enabled coding model: an agentic
code-editing runtime with local shell and GitHub CLI access. The exact
context window and reasoning mode are not exposed by the Paperclip
harness.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 06:08:55 -05:00
Dotta
236d11d36f [codex] Add run liveness continuations (#4083)
## Thinking Path

> - Paperclip orchestrates AI agents for zero-human companies.
> - Heartbeat runs are the control-plane record of each agent execution
window.
> - Long-running local agents can exhaust context or stop while still
holding useful next-step state.
> - Operators need that stop reason, next action, and continuation path
to be durable and visible.
> - This pull request adds run liveness metadata, continuation
summaries, and UI surfaces for issue run ledgers.
> - The benefit is that interrupted or long-running work can resume with
clearer context instead of losing the agent's last useful handoff.

## What Changed

- Added heartbeat-run liveness fields, continuation attempt tracking,
and an idempotent `0058` migration.
- Added server services and tests for run liveness, continuation
summaries, stop metadata, and activity backfill.
- Wired local and HTTP adapters to surface continuation/liveness context
through shared adapter utilities.
- Added shared constants, validators, and heartbeat types for liveness
continuation state.
- Added issue-detail UI surfaces for continuation handoffs and the run
ledger, with component tests.
- Updated agent runtime docs, heartbeat protocol docs, prompt guidance,
onboarding assets, and skills instructions to explain continuation
behavior.
- Addressed Greptile feedback by scoping document evidence by run,
excluding system continuation-summary documents from liveness evidence,
importing shared liveness types, surfacing hidden ledger run counts,
documenting bounded retry behavior, and moving run-ledger liveness
backfill off the request path.
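The shared liveness/continuation state described above might be shaped roughly as follows. The field and type names are guesses for illustration only; the real definitions live in the shared package and are not reproduced in this PR text.

```typescript
// Hypothetical type sketch of a run's continuation handoff record.
type RunLiveness = "running" | "stalled" | "stopped" | "completed";

interface HeartbeatRunContinuation {
  runId: string;
  liveness: RunLiveness;
  stopReason?: string;          // why the run stopped, when known
  nextAction?: string;          // durable handoff for the next run
  continuationAttempts: number; // bounded automatic-retry counter
}
```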

## Verification

- `pnpm exec vitest run packages/adapter-utils/src/server-utils.test.ts
server/src/__tests__/run-continuations.test.ts
server/src/__tests__/run-liveness.test.ts
server/src/__tests__/activity-service.test.ts
server/src/__tests__/documents-service.test.ts
server/src/__tests__/issue-continuation-summary.test.ts
server/src/services/heartbeat-stop-metadata.test.ts
ui/src/components/IssueRunLedger.test.tsx
ui/src/components/IssueContinuationHandoff.test.tsx
ui/src/components/IssueDocumentsSection.test.tsx`
- `pnpm --filter @paperclipai/db build`
- `pnpm exec vitest run server/src/__tests__/activity-service.test.ts
ui/src/components/IssueRunLedger.test.tsx`
- `pnpm --filter @paperclipai/ui typecheck`
- `pnpm --filter @paperclipai/server typecheck`
- `pnpm exec vitest run server/src/__tests__/activity-service.test.ts
server/src/__tests__/run-continuations.test.ts
ui/src/components/IssueRunLedger.test.tsx`
- `pnpm exec vitest run
server/src/__tests__/heartbeat-process-recovery.test.ts -t "treats a
plan document update"`
- `pnpm exec vitest run server/src/__tests__/activity-service.test.ts
server/src/__tests__/heartbeat-process-recovery.test.ts -t "activity
service|treats a plan document update"`
- Remote PR checks on head `e53b1a1d`: `verify`, `e2e`, `policy`, and
Snyk all passed.
- Confirmed `public-gh/master` is an ancestor of this branch after
fetching `public-gh master`.
- Confirmed `pnpm-lock.yaml` is not included in the branch diff.
- Confirmed migration `0058_wealthy_starbolt.sql` is ordered after
`0057` and uses `IF NOT EXISTS` guards for repeat application.
- Greptile inline review threads are resolved.
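For context, the `IF NOT EXISTS` guard style being verified looks roughly like the fragment below. The table, column, and index names are invented for illustration; the real `0058_wealthy_starbolt.sql` is not shown here.

```typescript
// Hypothetical SQL, embedded as a string: additive columns and indexes
// guarded so the migration can be re-applied without error.
const migration0058Sketch = `
ALTER TABLE heartbeat_run
  ADD COLUMN IF NOT EXISTS stop_reason text,
  ADD COLUMN IF NOT EXISTS continuation_attempts integer NOT NULL DEFAULT 0;
CREATE INDEX IF NOT EXISTS heartbeat_run_liveness_idx
  ON heartbeat_run (continuation_attempts);
`;
```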

## Risks

- Medium risk: this touches heartbeat execution, liveness recovery,
activity rendering, issue routes, shared contracts, docs, and UI.
- Migration risk is mitigated by additive columns/indexes and idempotent
guards.
- Run-ledger liveness backfill is now asynchronous, so the first ledger
response can briefly omit liveness data for historical runs until the
background backfill completes.
- UI screenshot coverage is not included in this packaging pass;
validation is currently through focused component tests.

> For core feature work, check [`ROADMAP.md`](ROADMAP.md) first and
discuss it in `#dev` before opening the PR. Feature PRs that overlap
with planned core work may need to be redirected. See `CONTRIBUTING.md`.

## Model Used

- OpenAI Codex, GPT-5.4, local tool-use coding agent with terminal, git,
GitHub connector, GitHub CLI, and Paperclip API access.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have checked ROADMAP.md and confirmed this PR does not duplicate
planned core work
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

Screenshot note: no before/after screenshots were captured in this PR
packaging pass; the UI changes are covered by focused component tests
listed above.

---------

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-20 06:01:49 -05:00
Dotta
b9a80dcf22 feat: implement multi-user access and invite flows (#3784)
## Thinking Path

> - Paperclip is the control plane for autonomous AI companies.
> - V1 needs to stay local-first while also supporting shared,
authenticated deployments.
> - Human operators need real identities, company membership, invite
flows, profile surfaces, and company-scoped access controls.
> - Agents and operators also need the existing issue, inbox, workspace,
approval, and plugin flows to keep working under those authenticated
boundaries.
> - This branch accumulated the multi-user implementation, follow-up QA
fixes, workspace/runtime refinements, invite UX improvements,
release-branch conflict resolution, and review hardening.
> - This pull request consolidates that branch onto the current `master`
branch as a single reviewable PR.
> - The benefit is a complete multi-user implementation path with tests
and docs carried forward without dropping existing branch work.

## What Changed

- Added authenticated human-user access surfaces: auth/session routes,
company user directory, profile settings, company access/member
management, join requests, and invite management.
- Added invite creation, invite landing, onboarding, logo/branding,
invite grants, deduped join requests, and authenticated multi-user E2E
coverage.
- Tightened company-scoped and instance-admin authorization across
board, plugin, adapter, access, issue, and workspace routes.
- Added profile-image URL validation hardening, avatar preservation on
name-only profile updates, and join-request uniqueness migration cleanup
for pending human requests.
- Added an atomic member role/status/grants update path so Company
Access saves no longer leave partially updated permissions.
- Improved issue chat, inbox, assignee identity rendering,
sidebar/account/company navigation, workspace routing, and execution
workspace reuse behavior for multi-user operation.
- Added and updated server/UI tests covering auth, invites, membership,
issue workspace inheritance, plugin authz, inbox/chat behavior, and
multi-user flows.
- Merged current `public-gh/master` into this branch, resolved all
conflicts, and verified no `pnpm-lock.yaml` change is included in this
PR diff.
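The "atomic member role/status/grants update" idea can be illustrated with a build-then-swap pattern. This is an in-memory stand-in only; the real path runs inside a database transaction with project-specific validation, and the names below are hypothetical.

```typescript
// Hypothetical all-or-nothing member update: build the full
// replacement record first, validate it, then swap it in one step, so
// a failure never leaves a half-updated member.
interface Member {
  role: string;
  status: string;
  grants: string[];
}

function updateMember(
  members: Map<string, Member>,
  id: string,
  patch: Partial<Member>,
): void {
  const current = members.get(id);
  if (!current) throw new Error(`unknown member: ${id}`);
  const next: Member = { ...current, ...patch };
  // Invented invariant, standing in for real validation rules.
  if (next.role === "owner" && next.status !== "active") {
    throw new Error("owner must stay active");
  }
  members.set(id, next);
}
```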

## Verification

- `pnpm exec vitest run server/src/__tests__/issues-service.test.ts
ui/src/components/IssueChatThread.test.tsx ui/src/pages/Inbox.test.tsx`
- `pnpm run preflight:workspace-links && pnpm exec vitest run
server/src/__tests__/plugin-routes-authz.test.ts`
- `pnpm exec vitest run server/src/__tests__/plugin-routes-authz.test.ts
server/src/__tests__/workspace-runtime-service-authz.test.ts
server/src/__tests__/access-validators.test.ts`
- `pnpm exec vitest run
server/src/__tests__/authz-company-access.test.ts
server/src/__tests__/routines-routes.test.ts
server/src/__tests__/sidebar-preferences-routes.test.ts
server/src/__tests__/approval-routes-idempotency.test.ts
server/src/__tests__/openclaw-invite-prompt-route.test.ts
server/src/__tests__/agent-cross-tenant-authz-routes.test.ts
server/src/__tests__/routines-e2e.test.ts`
- `pnpm exec vitest run server/src/__tests__/auth-routes.test.ts
ui/src/pages/CompanyAccess.test.tsx`
- `pnpm --filter @paperclipai/shared typecheck && pnpm --filter
@paperclipai/db typecheck && pnpm --filter @paperclipai/server
typecheck`
- `pnpm --filter @paperclipai/shared typecheck && pnpm --filter
@paperclipai/server typecheck`
- `pnpm --filter @paperclipai/ui typecheck`
- `pnpm db:generate`
- `npx playwright test --config tests/e2e/playwright.config.ts --list`
- Confirmed branch has no uncommitted changes and is `0` commits behind
`public-gh/master` before PR creation.
- Confirmed no `pnpm-lock.yaml` change is staged or present in the PR
diff.

## Risks

- High review surface area: this PR contains the accumulated multi-user
branch plus follow-up fixes, so reviewers should focus especially on
company-boundary enforcement and authenticated-vs-local deployment
behavior.
- UI behavior changed across invites, inbox, issue chat, access
settings, and sidebar navigation; no browser screenshots are included in
this branch-consolidation PR.
- Plugin install, upgrade, and lifecycle/config mutations now require
instance-admin access, which is intentional but may change expectations
for non-admin board users.
- A join-request dedupe migration rejects duplicate pending human
requests before creating unique indexes; deployments with unusual
historical duplicates should review the migration behavior.
- Company member role/status/grant saves now use a new combined
endpoint; older separate endpoints remain for compatibility.
- Full production build was not run locally in this heartbeat; CI should
cover the full matrix.

## Model Used

- OpenAI Codex coding agent, GPT-5-based model, CLI/tool-use
environment. Exact deployed model identifier and context window were not
exposed by the runtime.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

Note on screenshots: this is a branch-consolidation PR for an
already-developed multi-user branch, and no browser screenshots were
captured during this heartbeat.

---------

Co-authored-by: dotta <dotta@example.com>
Co-authored-by: Paperclip <noreply@paperclip.ing>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-17 09:44:19 -05:00
Roman Barinov
e93e418cbf fix: add ssh client and jq to production image (#3826)
## Thinking Path

> - Paperclip is the control plane that runs long-lived AI-agent work in
production.
> - The production container image is the runtime boundary for agent
tools and shell access.
> - In our deployment, Paperclip agents now need a native SSH client and
`jq` available inside the final runtime container.
> - Installing those tools only via ai-rig entrypoint hacks is brittle
and drifts from the image source of truth.
> - This pull request updates the production Docker image itself so the
required binaries are present whenever the image is built.
> - The change is intentionally scoped to the final production stage so
build/deps stages do not gain extra packages unnecessarily.
> - The benefit is a cleaner, reproducible runtime image with fewer
deploy-specific workarounds.

## What Changed

- Added `openssh-client` to the production Docker image stage.
- Added `jq` to the production Docker image stage.
- Kept the package install in the final `production` stage instead of
the shared base stage to minimize scope.

## Verification

- Reviewed the final Dockerfile diff to confirm the packages are
installed in the `production` stage only.
- Attempted local image build with:
  - `docker build --target production -t paperclip:ssh-jq-test .`
- Local build could not be completed in this environment because the
local Docker daemon was unavailable:
- `Cannot connect to the Docker daemon at
unix:///Users/roman/.docker/run/docker.sock. Is the docker daemon
running?`

## Risks

- Low risk: image footprint increases slightly because two Debian
packages are added.
- `openssh-client` expands runtime capability, so this is appropriate
only because the deployed Paperclip runtime explicitly needs SSH access.

## Model Used

- OpenAI Codex / `gpt-5.4`
- Tool-using agent workflow via Hermes
- Context from local repository inspection, git, and shell tooling

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [ ] I have run tests locally and they pass
- [ ] I have added or updated tests where applicable
- [ ] If this change affects the UI, I have included before/after
screenshots
- [ ] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge
2026-04-16 17:11:55 -05:00
Dotta
407e76c1db [codex] Fix Docker gh installation (#3844)
## Thinking Path

> - Paperclip is the control plane for autonomous AI companies, and the
Docker image is the no-local-Node path for running that control plane.
> - The deploy workflow builds and pushes that image from the repository
`Dockerfile`.
> - The current image setup adds GitHub CLI through GitHub's external
apt repository and verifies a mutable keyring URL with a pinned SHA256.
> - GitHub rotated the CLI Linux package signing key, so that pinned
keyring checksum now fails before Buildx can publish the image.
> - Paperclip already has a repo-local precedent in
`docker/untrusted-review/Dockerfile`: install Debian trixie's packaged
`gh` directly from the base distribution.
> - This pull request removes the external GitHub CLI apt
keyring/repository path from the production image and installs `gh` with
the rest of the Debian packages.
> - The benefit is a simpler Docker build that no longer fails when
GitHub rotates the apt keyring file.

## What Changed

- Updated the main `Dockerfile` base stage to install `gh` from Debian
trixie's package repositories.
- Removed the mutable GitHub CLI apt keyring download, pinned checksum
verification, extra apt source, second `apt-get update`, and separate
`gh` install step.

## Verification

- `git diff --check`
- `./scripts/docker-build-test.sh` skipped because Docker is installed
but the daemon is not running on this machine.
- Confirmed `https://packages.debian.org/trixie/gh` returns HTTP 200,
matching the base image distribution package source.

## Risks

- Debian's `gh` package can lag the latest upstream GitHub CLI release.
This is acceptable for the current image contract, which requires `gh`
availability but does not document a latest-upstream version guarantee.
- A full image build still needs to run in CI because the local Docker
daemon is unavailable in this environment.

## Model Used

- OpenAI Codex, GPT-5-based coding agent. Exact backend model ID was not
exposed in this runtime; tool use and shell execution were enabled.

## Checklist

- [x] I have included a thinking path that traces from project context
to this change
- [x] I have specified the model used (with version and capability
details)
- [x] I have run tests locally and they pass
- [x] I have added or updated tests where applicable
- [x] If this change affects the UI, I have included before/after
screenshots
- [x] I have updated relevant documentation to reflect my changes
- [x] I have considered and documented any risks above
- [x] I will address all Greptile and reviewer comments before
requesting merge

Co-authored-by: Paperclip <noreply@paperclip.ing>
2026-04-16 17:10:42 -05:00
467 changed files with 108146 additions and 3673 deletions

View File

@@ -154,6 +154,14 @@ Each AGENTS.md body should include not just what the agent does, but how they fi
This turns a collection of agents into an organization that actually works together. Without workflow context, agents operate in isolation — they do their job but don't know what happens before or after them.
Add a concise execution contract to every generated working agent:
- Start actionable work in the same heartbeat and do not stop at a plan unless planning was requested.
- Leave durable progress in comments, documents, or work products with the next action.
- Use child issues for long or parallel delegated work instead of polling agents, sessions, or processes.
- Mark blocked work with the unblock owner and action.
- Respect budget, pause/cancel, approval gates, and company boundaries.
### Step 5: Confirm Output Location
Ask the user where to write the package. Common options:

View File

@@ -105,6 +105,13 @@ Your responsibilities:
- Implement features and fix bugs
- Write tests and documentation
- Participate in code reviews
Execution contract:
- Start actionable implementation work in the same heartbeat; do not stop at a plan unless planning was requested.
- Leave durable progress with a clear next action.
- Use child issues for long or parallel delegated work instead of polling agents, sessions, or processes.
- Mark blocked work with the unblock owner and action.
```
## teams/engineering/TEAM.md

View File

@@ -548,7 +548,7 @@ Import from `@paperclipai/adapter-utils/server-utils`:
### Prompt Templates
- Support `promptTemplate` for every run
- Use `renderTemplate()` with the standard variable set
- Default prompt: `"You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work."`
- Default prompt should use `DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE` from `@paperclipai/adapter-utils/server-utils` so local adapters share Paperclip's execution contract: act in the same heartbeat, avoid planning-only exits unless requested, leave durable progress and a next action, use child issues instead of polling, mark blockers with owner/action, and respect governance boundaries.
### Error Handling
- Differentiate timeout vs process error vs parse failure

View File

@@ -2,3 +2,6 @@ DATABASE_URL=postgres://paperclip:paperclip@localhost:5432/paperclip
PORT=3100
SERVE_UI=false
BETTER_AUTH_SECRET=paperclip-dev-secret
# Discord webhook for daily merge digest (scripts/discord-daily-digest.sh)
# DISCORD_WEBHOOK_URL=https://discord.com/api/webhooks/...

.gitignore
View File

@@ -1,4 +1,7 @@
node_modules
node_modules/
**/node_modules
**/node_modules/
dist/
.env
*.tsbuildinfo
@@ -32,6 +35,7 @@ server/src/**/*.d.ts
server/src/**/*.d.ts.map
tmp/
feedback-export-*
diagnostics/
# Editor / tool temp files
*.tmp

View File

@@ -48,6 +48,9 @@ ARG USER_GID=1000
WORKDIR /app
COPY --chown=node:node --from=build /app /app
RUN npm install --global --omit=dev @anthropic-ai/claude-code@latest @openai/codex@latest opencode-ai \
&& apt-get update \
&& apt-get install -y --no-install-recommends openssh-client jq \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /paperclip \
&& chown node:node /paperclip

View File

@@ -256,7 +256,7 @@ See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.
- ✅ Scheduled Routines
- ✅ Better Budgeting
- ✅ Agent Reviews and Approvals
- Multiple Human Users
- Multiple Human Users
- ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
- ⚪ Artifacts & Work Products
- ⚪ Memory / Knowledge

View File

@@ -44,7 +44,7 @@ Budgets are a core control-plane feature, not an afterthought. Better budgeting
Paperclip should support explicit review and approval stages as first-class workflow steps, not just ad hoc comments. That means reviewer routing, approval gates, change requests, and durable audit trails that fit the same task model as the rest of the control plane.
### Multiple Human Users
### Multiple Human Users
Paperclip needs a clearer path from solo operator to real human teams. That means shared board access, safer collaboration, and a better model for several humans supervising the same autonomous company.

View File

@@ -12,7 +12,7 @@
<p align="center">
<a href="https://github.com/paperclipai/paperclip/blob/master/LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue" alt="MIT License" /></a>
<a href="https://github.com/paperclipai/paperclip/stargazers"><img src="https://img.shields.io/github/stars/paperclipai/paperclip?style=flat" alt="Stars" /></a>
<a href="https://discord.gg/m4HZY7xNG3"><img src="https://img.shields.io/badge/discord-join%20chat-5865F2?logo=discord&logoColor=white" alt="Discord" /></a>
<a href="https://discord.gg/m4HZY7xNG3"><img src="https://img.shields.io/discord/000000000?label=discord" alt="Discord" /></a>
</p>
<br/>
@@ -258,7 +258,7 @@ See [doc/DEVELOPING.md](https://github.com/paperclipai/paperclip/blob/master/doc
- ⚪ Artifacts & Deployments
- ⚪ CEO Chat
- ⚪ MAXIMIZER MODE
- Multiple Human Users
- Multiple Human Users
- ⚪ Cloud / Sandbox agents (e.g. Cursor / e2b agents)
- ⚪ Cloud deployments
- ⚪ Desktop App

View File

@@ -287,6 +287,11 @@ describeEmbeddedPostgres("paperclipai company import/export e2e", () => {
headers: { "content-type": "application/json" },
body: JSON.stringify({ name: `CLI Export Source ${Date.now()}` }),
});
await api(apiBase, `/api/companies/${sourceCompany.id}`, {
method: "PATCH",
headers: { "content-type": "application/json" },
body: JSON.stringify({ requireBoardApprovalForNewAgents: false }),
});
const sourceAgent = await api<{ id: string; name: string }>(
apiBase,

View File

@@ -3,11 +3,15 @@ import os from "node:os";
import path from "node:path";
import { execFileSync } from "node:child_process";
import { randomUUID } from "node:crypto";
import { eq } from "drizzle-orm";
import { afterEach, describe, expect, it, vi } from "vitest";
import {
agents,
authUsers,
companies,
createDb,
issueComments,
issues,
projects,
routines,
routineTriggers,
@@ -16,6 +20,7 @@ import {
copyGitHooksToWorktreeGitDir,
copySeededSecretsKey,
pauseSeededScheduledRoutines,
quarantineSeededWorktreeExecutionState,
readSourceAttachmentBody,
rebindWorkspaceCwd,
resolveSourceConfigPath,
@@ -47,6 +52,7 @@ import {
const ORIGINAL_CWD = process.cwd();
const ORIGINAL_ENV = { ...process.env };
const embeddedPostgresSupport = await getEmbeddedPostgresTestSupport();
const itEmbeddedPostgres = embeddedPostgresSupport.supported ? it : it.skip;
const describeEmbeddedPostgres = embeddedPostgresSupport.supported ? describe : describe.skip;
if (!embeddedPostgresSupport.supported) {
@@ -280,6 +286,138 @@ describe("worktree helpers", () => {
expect(full.nullifyColumns).toEqual({});
});
itEmbeddedPostgres("quarantines copied live execution state in seeded worktree databases", async () => {
const tempDb = await startEmbeddedPostgresTestDatabase("paperclip-worktree-quarantine-");
const db = createDb(tempDb.connectionString);
const companyId = randomUUID();
const agentId = randomUUID();
const idleAgentId = randomUUID();
const inProgressIssueId = randomUUID();
const todoIssueId = randomUUID();
const reviewIssueId = randomUUID();
const userIssueId = randomUUID();
try {
await db.insert(companies).values({
id: companyId,
name: "Paperclip",
issuePrefix: "WTQ",
requireBoardApprovalForNewAgents: false,
});
await db.insert(agents).values([
{
id: agentId,
companyId,
name: "CodexCoder",
role: "engineer",
status: "running",
adapterType: "codex_local",
adapterConfig: {},
runtimeConfig: {
heartbeat: { enabled: true, intervalSec: 60 },
wakeOnDemand: true,
},
permissions: {},
},
{
id: idleAgentId,
companyId,
name: "Reviewer",
role: "reviewer",
status: "idle",
adapterType: "codex_local",
adapterConfig: {},
runtimeConfig: { heartbeat: { enabled: false, intervalSec: 300 } },
permissions: {},
},
]);
await db.insert(issues).values([
{
id: inProgressIssueId,
companyId,
title: "Copied in-flight issue",
status: "in_progress",
priority: "medium",
assigneeAgentId: agentId,
issueNumber: 1,
identifier: "WTQ-1",
executionAgentNameKey: "codexcoder",
executionLockedAt: new Date("2026-04-18T00:00:00.000Z"),
},
{
id: todoIssueId,
companyId,
title: "Copied assigned todo issue",
status: "todo",
priority: "medium",
assigneeAgentId: agentId,
issueNumber: 2,
identifier: "WTQ-2",
},
{
id: reviewIssueId,
companyId,
title: "Copied assigned review issue",
status: "in_review",
priority: "medium",
assigneeAgentId: idleAgentId,
issueNumber: 3,
identifier: "WTQ-3",
},
{
id: userIssueId,
companyId,
title: "Copied user issue",
status: "todo",
priority: "medium",
assigneeUserId: "user-1",
issueNumber: 4,
identifier: "WTQ-4",
},
]);
await expect(quarantineSeededWorktreeExecutionState(tempDb.connectionString)).resolves.toEqual({
disabledTimerHeartbeats: 1,
resetRunningAgents: 1,
quarantinedInProgressIssues: 1,
unassignedTodoIssues: 1,
unassignedReviewIssues: 1,
});
const [quarantinedAgent] = await db.select().from(agents).where(eq(agents.id, agentId));
expect(quarantinedAgent?.status).toBe("idle");
expect(quarantinedAgent?.runtimeConfig).toMatchObject({
heartbeat: { enabled: false, intervalSec: 60 },
wakeOnDemand: true,
});
const [inProgressIssue] = await db.select().from(issues).where(eq(issues.id, inProgressIssueId));
expect(inProgressIssue?.status).toBe("blocked");
expect(inProgressIssue?.assigneeAgentId).toBeNull();
expect(inProgressIssue?.executionAgentNameKey).toBeNull();
expect(inProgressIssue?.executionLockedAt).toBeNull();
const [todoIssue] = await db.select().from(issues).where(eq(issues.id, todoIssueId));
expect(todoIssue?.status).toBe("todo");
expect(todoIssue?.assigneeAgentId).toBeNull();
const [reviewIssue] = await db.select().from(issues).where(eq(issues.id, reviewIssueId));
expect(reviewIssue?.status).toBe("in_review");
expect(reviewIssue?.assigneeAgentId).toBeNull();
const [userIssue] = await db.select().from(issues).where(eq(issues.id, userIssueId));
expect(userIssue?.status).toBe("todo");
expect(userIssue?.assigneeUserId).toBe("user-1");
const comments = await db.select().from(issueComments).where(eq(issueComments.issueId, inProgressIssueId));
expect(comments).toHaveLength(1);
expect(comments[0]?.body).toContain("Quarantined during worktree seed");
} finally {
await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
await tempDb.cleanup();
}
}, 20_000);
it("copies the source local_encrypted secrets key into the seeded worktree instance", () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-secrets-"));
const originalInlineMasterKey = process.env.PAPERCLIP_SECRETS_MASTER_KEY;
@@ -373,6 +511,97 @@ describe("worktree helpers", () => {
}
});
itEmbeddedPostgres(
"seeds authenticated users into minimally cloned worktree instances",
async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-auth-seed-"));
const worktreeRoot = path.join(tempRoot, "PAP-999-auth-seed");
const sourceHome = path.join(tempRoot, "source-home");
const sourceConfigDir = path.join(sourceHome, "instances", "source");
const sourceConfigPath = path.join(sourceConfigDir, "config.json");
const sourceEnvPath = path.join(sourceConfigDir, ".env");
const sourceKeyPath = path.join(sourceConfigDir, "secrets", "master.key");
const worktreeHome = path.join(tempRoot, ".paperclip-worktrees");
const originalCwd = process.cwd();
const sourceDb = await startEmbeddedPostgresTestDatabase("paperclip-worktree-auth-source-");
try {
const sourceDbClient = createDb(sourceDb.connectionString);
await sourceDbClient.insert(authUsers).values({
id: "user-existing",
email: "existing@paperclip.ing",
name: "Existing User",
emailVerified: true,
createdAt: new Date(),
updatedAt: new Date(),
});
fs.mkdirSync(path.dirname(sourceKeyPath), { recursive: true });
fs.mkdirSync(worktreeRoot, { recursive: true });
const sourceConfig = buildSourceConfig();
sourceConfig.database = {
mode: "postgres",
embeddedPostgresDataDir: path.join(sourceConfigDir, "db"),
embeddedPostgresPort: 54329,
backup: {
enabled: true,
intervalMinutes: 60,
retentionDays: 30,
dir: path.join(sourceConfigDir, "backups"),
},
connectionString: sourceDb.connectionString,
};
sourceConfig.logging.logDir = path.join(sourceConfigDir, "logs");
sourceConfig.storage.localDisk.baseDir = path.join(sourceConfigDir, "storage");
sourceConfig.secrets.localEncrypted.keyFilePath = sourceKeyPath;
fs.writeFileSync(sourceConfigPath, JSON.stringify(sourceConfig, null, 2) + "\n", "utf8");
fs.writeFileSync(sourceEnvPath, "", "utf8");
fs.writeFileSync(sourceKeyPath, "source-master-key", "utf8");
process.chdir(worktreeRoot);
await worktreeInitCommand({
name: "PAP-999-auth-seed",
home: worktreeHome,
fromConfig: sourceConfigPath,
force: true,
});
const targetConfig = JSON.parse(
fs.readFileSync(path.join(worktreeRoot, ".paperclip", "config.json"), "utf8"),
) as PaperclipConfig;
const { default: EmbeddedPostgres } = await import("embedded-postgres");
const targetPg = new EmbeddedPostgres({
databaseDir: targetConfig.database.embeddedPostgresDataDir,
user: "paperclip",
password: "paperclip",
port: targetConfig.database.embeddedPostgresPort,
persistent: true,
initdbFlags: ["--encoding=UTF8", "--locale=C", "--lc-messages=C"],
onLog: () => {},
onError: () => {},
});
await targetPg.start();
try {
const targetDb = createDb(
`postgres://paperclip:paperclip@127.0.0.1:${targetConfig.database.embeddedPostgresPort}/paperclip`,
);
const seededUsers = await targetDb.select().from(authUsers);
expect(seededUsers.some((row) => row.email === "existing@paperclip.ing")).toBe(true);
} finally {
await targetPg.stop();
}
} finally {
process.chdir(originalCwd);
await sourceDb.cleanup();
fs.rmSync(tempRoot, { recursive: true, force: true });
}
},
20000,
);
it("avoids ports already claimed by sibling worktree instance configs", async () => {
const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-claimed-ports-"));
const repoRoot = path.join(tempRoot, "repo");

View File

@@ -93,6 +93,7 @@ type WorktreeInitOptions = {
dbPort?: number;
seed?: boolean;
seedMode?: string;
preserveLiveWork?: boolean;
force?: boolean;
};
@@ -126,6 +127,7 @@ type WorktreeReseedOptions = {
fromDataDir?: string;
fromInstance?: string;
seedMode?: string;
preserveLiveWork?: boolean;
yes?: boolean;
allowLiveTarget?: boolean;
};
@@ -137,6 +139,7 @@ type WorktreeRepairOptions = {
fromDataDir?: string;
fromInstance?: string;
seedMode?: string;
preserveLiveWork?: boolean;
noSeed?: boolean;
allowLiveTarget?: boolean;
};
@@ -179,6 +182,8 @@ type CopiedGitHooksResult = {
type SeedWorktreeDatabaseResult = {
backupSummary: string;
pausedScheduledRoutines: number;
executionQuarantine: SeededWorktreeExecutionQuarantineSummary;
reboundWorkspaces: Array<{
name: string;
fromCwd: string;
@@ -186,6 +191,14 @@ type SeedWorktreeDatabaseResult = {
}>;
};
export type SeededWorktreeExecutionQuarantineSummary = {
disabledTimerHeartbeats: number;
resetRunningAgents: number;
quarantinedInProgressIssues: number;
unassignedTodoIssues: number;
unassignedReviewIssues: number;
};
function nonEmpty(value: string | null | undefined): string | null {
return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}
@@ -198,6 +211,18 @@ function isCurrentSourceConfigPath(sourceConfigPath: string): boolean {
return path.resolve(currentConfigPath) === path.resolve(sourceConfigPath);
}
function formatSeededWorktreeExecutionQuarantineSummary(
summary: SeededWorktreeExecutionQuarantineSummary,
): string {
return [
`disabled timer heartbeats: ${summary.disabledTimerHeartbeats}`,
`reset running agents: ${summary.resetRunningAgents}`,
`quarantined in-progress issues: ${summary.quarantinedInProgressIssues}`,
`unassigned todo issues: ${summary.unassignedTodoIssues}`,
`unassigned review issues: ${summary.unassignedReviewIssues}`,
].join(", ");
}
const WORKTREE_NAME_PREFIX = "paperclip-";
function resolveWorktreeMakeName(name: string): string {
@@ -1119,6 +1144,133 @@ export async function pauseSeededScheduledRoutines(connectionString: string): Pr
}
}
const EMPTY_SEEDED_WORKTREE_EXECUTION_QUARANTINE_SUMMARY: SeededWorktreeExecutionQuarantineSummary = {
disabledTimerHeartbeats: 0,
resetRunningAgents: 0,
quarantinedInProgressIssues: 0,
unassignedTodoIssues: 0,
unassignedReviewIssues: 0,
};
function isRecord(value: unknown): value is Record<string, unknown> {
return Boolean(value) && typeof value === "object" && !Array.isArray(value);
}
function isEnabledValue(value: unknown): boolean {
return value === true || value === "true" || value === 1 || value === "1";
}
function normalizeWorktreeRuntimeConfig(runtimeConfig: unknown): {
runtimeConfig: Record<string, unknown>;
disabledTimerHeartbeat: boolean;
changed: boolean;
} {
const nextRuntimeConfig = isRecord(runtimeConfig) ? { ...runtimeConfig } : {};
const heartbeat = isRecord(nextRuntimeConfig.heartbeat) ? { ...nextRuntimeConfig.heartbeat } : null;
if (!heartbeat) {
return { runtimeConfig: nextRuntimeConfig, disabledTimerHeartbeat: false, changed: false };
}
const disabledTimerHeartbeat = isEnabledValue(heartbeat.enabled);
if (heartbeat.enabled !== false) {
heartbeat.enabled = false;
nextRuntimeConfig.heartbeat = heartbeat;
return { runtimeConfig: nextRuntimeConfig, disabledTimerHeartbeat, changed: true };
}
return { runtimeConfig: nextRuntimeConfig, disabledTimerHeartbeat: false, changed: false };
}
export async function quarantineSeededWorktreeExecutionState(
connectionString: string,
): Promise<SeededWorktreeExecutionQuarantineSummary> {
const db = createDb(connectionString);
const summary = { ...EMPTY_SEEDED_WORKTREE_EXECUTION_QUARANTINE_SUMMARY };
try {
await db.transaction(async (tx) => {
const seededAgents = await tx
.select({
id: agents.id,
status: agents.status,
runtimeConfig: agents.runtimeConfig,
})
.from(agents);
for (const agent of seededAgents) {
const normalized = normalizeWorktreeRuntimeConfig(agent.runtimeConfig);
const nextStatus = agent.status === "running" ? "idle" : agent.status;
if (normalized.disabledTimerHeartbeat) {
summary.disabledTimerHeartbeats += 1;
}
if (agent.status === "running") {
summary.resetRunningAgents += 1;
}
if (normalized.changed || nextStatus !== agent.status) {
await tx
.update(agents)
.set({
runtimeConfig: normalized.runtimeConfig,
status: nextStatus,
updatedAt: new Date(),
})
.where(eq(agents.id, agent.id));
}
}
const affectedIssues = await tx
.select({
id: issues.id,
companyId: issues.companyId,
status: issues.status,
})
.from(issues)
.where(
and(
sql`${issues.assigneeAgentId} is not null`,
sql`${issues.assigneeUserId} is null`,
inArray(issues.status, ["todo", "in_progress", "in_review"]),
),
);
for (const issue of affectedIssues) {
const nextStatus = issue.status === "in_progress" ? "blocked" : issue.status;
await tx
.update(issues)
.set({
status: nextStatus,
assigneeAgentId: null,
checkoutRunId: null,
executionRunId: null,
executionAgentNameKey: null,
executionLockedAt: null,
executionWorkspaceId: null,
updatedAt: new Date(),
})
.where(eq(issues.id, issue.id));
if (issue.status === "in_progress") {
summary.quarantinedInProgressIssues += 1;
await tx.insert(issueComments).values({
companyId: issue.companyId,
issueId: issue.id,
body:
"Quarantined during worktree seed so copied in-flight work does not auto-run in this isolated instance. " +
"Reassign or unblock here only if you intentionally want the worktree instance to own this task.",
});
} else if (issue.status === "todo") {
summary.unassignedTodoIssues += 1;
} else if (issue.status === "in_review") {
summary.unassignedReviewIssues += 1;
}
}
});
return summary;
} finally {
await db.$client?.end?.({ timeout: 5 }).catch(() => undefined);
}
}
async function seedWorktreeDatabase(input: {
sourceConfigPath: string;
sourceConfig: PaperclipConfig;
@@ -1126,6 +1278,7 @@ async function seedWorktreeDatabase(input: {
targetPaths: WorktreeLocalPaths;
instanceId: string;
seedMode: WorktreeSeedMode;
preserveLiveWork?: boolean;
}): Promise<SeedWorktreeDatabaseResult> {
const seedPlan = resolveWorktreeSeedPlan(input.seedMode);
const sourceEnvFile = resolvePaperclipEnvFile(input.sourceConfigPath);
@@ -1176,7 +1329,10 @@ async function seedWorktreeDatabase(input: {
backupFile: backup.backupFile,
});
await applyPendingMigrations(targetConnectionString);
await pauseSeededScheduledRoutines(targetConnectionString);
const executionQuarantine = input.preserveLiveWork
? { ...EMPTY_SEEDED_WORKTREE_EXECUTION_QUARANTINE_SUMMARY }
: await quarantineSeededWorktreeExecutionState(targetConnectionString);
const pausedScheduledRoutines = await pauseSeededScheduledRoutines(targetConnectionString);
const reboundWorkspaces = await rebindSeededProjectWorkspaces({
targetConnectionString,
currentCwd: input.targetPaths.cwd,
@@ -1184,6 +1340,8 @@ async function seedWorktreeDatabase(input: {
return {
backupSummary: formatDatabaseBackupResult(backup),
pausedScheduledRoutines,
executionQuarantine,
reboundWorkspaces,
};
} finally {
@@ -1262,6 +1420,8 @@ async function runWorktreeInit(opts: WorktreeInitOptions): Promise<void> {
const copiedGitHooks = copyGitHooksToWorktreeGitDir(cwd);
let seedSummary: string | null = null;
let seedExecutionQuarantineSummary: SeededWorktreeExecutionQuarantineSummary | null = null;
let pausedScheduledRoutineCount: number | null = null;
let reboundWorkspaceSummary: SeedWorktreeDatabaseResult["reboundWorkspaces"] = [];
if (opts.seed !== false) {
if (!sourceConfig) {
@@ -1279,8 +1439,11 @@ async function runWorktreeInit(opts: WorktreeInitOptions): Promise<void> {
targetPaths: paths,
instanceId,
seedMode,
preserveLiveWork: opts.preserveLiveWork,
});
seedSummary = seeded.backupSummary;
seedExecutionQuarantineSummary = seeded.executionQuarantine;
pausedScheduledRoutineCount = seeded.pausedScheduledRoutines;
reboundWorkspaceSummary = seeded.reboundWorkspaces;
spinner.stop(`Seeded isolated worktree database (${seedMode}).`);
} catch (error) {
@@ -1303,6 +1466,16 @@ async function runWorktreeInit(opts: WorktreeInitOptions): Promise<void> {
if (seedSummary) {
p.log.message(pc.dim(`Seed mode: ${seedMode}`));
p.log.message(pc.dim(`Seed snapshot: ${seedSummary}`));
if (opts.preserveLiveWork) {
p.log.warning("Preserved copied live work; this worktree instance may auto-run source-instance assignments.");
} else if (seedExecutionQuarantineSummary) {
p.log.message(
pc.dim(`Seed execution quarantine: ${formatSeededWorktreeExecutionQuarantineSummary(seedExecutionQuarantineSummary)}`),
);
}
if (pausedScheduledRoutineCount != null) {
p.log.message(pc.dim(`Paused scheduled routines: ${pausedScheduledRoutineCount}`));
}
for (const rebound of reboundWorkspaceSummary) {
p.log.message(
pc.dim(`Rebound workspace ${rebound.name}: ${rebound.fromCwd} -> ${rebound.toCwd}`),
@@ -2947,11 +3120,20 @@ async function runWorktreeReseed(opts: WorktreeReseedOptions): Promise<void> {
targetPaths,
instanceId: targetPaths.instanceId,
seedMode,
preserveLiveWork: opts.preserveLiveWork,
});
spinner.stop(`Reseeded ${targetEndpoint.label} (${seedMode}).`);
p.log.message(pc.dim(`Source: ${source.configPath}`));
p.log.message(pc.dim(`Target: ${targetEndpoint.configPath}`));
p.log.message(pc.dim(`Seed snapshot: ${seeded.backupSummary}`));
if (opts.preserveLiveWork) {
p.log.warning("Preserved copied live work; this worktree instance may auto-run source-instance assignments.");
} else {
p.log.message(
pc.dim(`Seed execution quarantine: ${formatSeededWorktreeExecutionQuarantineSummary(seeded.executionQuarantine)}`),
);
}
p.log.message(pc.dim(`Paused scheduled routines: ${seeded.pausedScheduledRoutines}`));
for (const rebound of seeded.reboundWorkspaces) {
p.log.message(
pc.dim(`Rebound workspace ${rebound.name}: ${rebound.fromCwd} -> ${rebound.toCwd}`),
@@ -3015,6 +3197,7 @@ export async function worktreeRepairCommand(opts: WorktreeRepairOptions): Promis
fromConfig: source.configPath,
to: target.rootPath,
seedMode,
preserveLiveWork: opts.preserveLiveWork,
yes: true,
allowLiveTarget: opts.allowLiveTarget,
});
@@ -3047,6 +3230,7 @@ export async function worktreeRepairCommand(opts: WorktreeRepairOptions): Promis
fromInstance: opts.fromInstance,
seed: opts.noSeed ? false : true,
seedMode,
preserveLiveWork: opts.preserveLiveWork,
force: true,
});
} finally {
@@ -3070,6 +3254,7 @@ export function registerWorktreeCommands(program: Command): void {
.option("--server-port <port>", "Preferred server port", (value) => Number(value))
.option("--db-port <port>", "Preferred embedded Postgres port", (value) => Number(value))
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
.option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
.option("--no-seed", "Skip database seeding from the source instance")
.option("--force", "Replace existing repo-local config and isolated instance data", false)
.action(worktreeMakeCommand);
@@ -3086,6 +3271,7 @@ export function registerWorktreeCommands(program: Command): void {
.option("--server-port <port>", "Preferred server port", (value) => Number(value))
.option("--db-port <port>", "Preferred embedded Postgres port", (value) => Number(value))
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
.option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
.option("--no-seed", "Skip database seeding from the source instance")
.option("--force", "Replace existing repo-local config and isolated instance data", false)
.action(worktreeInitCommand);
@@ -3125,6 +3311,7 @@ export function registerWorktreeCommands(program: Command): void {
.option("--from-data-dir <path>", "Source PAPERCLIP_HOME used when deriving the source config")
.option("--from-instance <id>", "Source instance id when deriving the source config")
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: full)", "full")
.option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
.option("--yes", "Skip the destructive confirmation prompt", false)
.option("--allow-live-target", "Override the guard that requires the target worktree DB to be stopped first", false)
.action(worktreeReseedCommand);
@@ -3138,6 +3325,7 @@ export function registerWorktreeCommands(program: Command): void {
.option("--from-data-dir <path>", "Source PAPERCLIP_HOME used when deriving the source config")
.option("--from-instance <id>", "Source instance id when deriving the source config (default: default)")
.option("--seed-mode <mode>", "Seed profile: minimal or full (default: minimal)", "minimal")
.option("--preserve-live-work", "Do not quarantine copied agent timers or assigned open issues in the seeded worktree", false)
.option("--no-seed", "Repair metadata only and skip reseeding when bootstrapping a missing worktree config", false)
.option("--allow-live-target", "Override the guard that requires the target worktree DB to be stopped first", false)
.action(worktreeRepairCommand);

View File

@@ -27,6 +27,18 @@ pnpm db:migrate
When `DATABASE_URL` is unset, this command targets the current embedded PostgreSQL instance for your active Paperclip config/instance.
Issue reference mentions follow the normal migration path: the schema migration creates the tracking table, but it does not backfill historical issue titles, descriptions, comments, or documents automatically.
To backfill existing content manually after migrating, run:
```sh
pnpm issue-references:backfill
# optional: limit to one company
pnpm issue-references:backfill -- --company <company-id>
```
Future issue, comment, and document writes sync references automatically without running the backfill command.
This mode is ideal for local development and one-command installs.
Docker note: the Docker quickstart image also uses embedded PostgreSQL by default. Persist `/paperclip` to keep DB state across container restarts (see `doc/DOCKER.md`).
@@ -94,6 +106,16 @@ Set `DATABASE_URL` in your `.env`:
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:6543/postgres
```
For hosted deployments that use a pooled runtime URL, set
`DATABASE_MIGRATION_URL` to the direct connection URL. Paperclip uses it for
startup schema checks/migrations and plugin namespace migrations, while the app
continues to use `DATABASE_URL` for runtime queries:
```sh
DATABASE_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:6543/postgres
DATABASE_MIGRATION_URL=postgres://postgres.[PROJECT-REF]:[PASSWORD]@aws-0-[REGION].pooler.supabase.com:5432/postgres
```
If using connection pooling (port 6543), the `postgres` client must disable prepared statements. Update `packages/db/src/client.ts`:
```ts

View File

@@ -142,3 +142,4 @@ This prevents lockout when a user migrates from long-running local trusted usage
- implementation plan: `doc/plans/deployment-auth-mode-consolidation.md`
- V1 contract: `doc/SPEC-implementation.md`
- operator workflows: `doc/DEVELOPING.md` and `doc/CLI.md`
- invite/join state map: `doc/spec/invite-flow.md`

View File

@@ -43,6 +43,17 @@ This starts:
`pnpm dev` and `pnpm dev:once` are now idempotent for the current repo and instance: if the matching Paperclip dev runner is already alive, Paperclip reports the existing process instead of starting a duplicate.
## Storybook
The board UI Storybook keeps stories and Storybook config under `ui/storybook/` so component review files stay out of the app source routes.
```sh
pnpm storybook
pnpm build-storybook
```
These run the `@paperclipai/ui` Storybook on port `6006` and build the static output to `ui/storybook-static/`.
Inspect or stop the current repo's managed dev runner:
```sh
@@ -209,6 +220,8 @@ Seed modes:
- `full` makes a full logical clone of the source instance
- `--no-seed` creates an empty isolated instance
Seeded worktree instances quarantine copied live execution by default for both `minimal` and `full` seeds. During restore, Paperclip disables copied agent timer heartbeats, resets copied `running` agents to `idle`, blocks and unassigns copied agent-owned `in_progress` issues, and unassigns copied agent-owned `todo`/`in_review` issues. This keeps a freshly booted worktree from starting agents for work already owned by the source instance. Pass `--preserve-live-work` only when you intentionally want the isolated worktree to resume copied assignments.
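The quarantine rules described above can be sketched as a small status-transition helper. This is an illustration only, not Paperclip's actual implementation — the type and function names here are invented for the sketch:

```typescript
// Illustrative sketch of the seed-time quarantine rules above.
// Only agent-assigned open issues are touched; user-owned issues are left alone.
type IssueStatus = "todo" | "in_progress" | "in_review" | "blocked" | "done";

function quarantinedStatus(status: IssueStatus, agentAssigned: boolean): IssueStatus {
  if (!agentAssigned) return status;              // user-owned work is preserved as-is
  if (status === "in_progress") return "blocked"; // in-flight work is blocked (and unassigned)
  return status;                                  // todo/in_review keep status but lose the agent
}
```

Per the paragraph above, passing `--preserve-live-work` skips this transition entirely and copied assignments may auto-run in the worktree.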
After `worktree init`, both the server and the CLI auto-load the repo-local `.paperclip/.env` when run inside that worktree, so normal commands like `pnpm dev`, `paperclipai doctor`, and `paperclipai db:backup` stay scoped to the worktree instance.
`pnpm dev` now fails fast in a linked git worktree when `.paperclip/.env` is missing, instead of silently booting against the default instance/port. If that happens, run `paperclipai worktree init` in the worktree first.
@@ -222,6 +235,8 @@ That repo-local env also sets:
- `PAPERCLIP_WORKTREE_COLOR=<hex-color>`
The server/UI use those values for worktree-specific branding such as the top banner and dynamically colored favicon.
Authenticated worktree servers also use the `PAPERCLIP_INSTANCE_ID` value to scope Better Auth cookie names.
Browser cookies are shared by host rather than port, so this prevents logging into one `127.0.0.1:<port>` worktree from replacing another worktree server's session cookie.
Print shell exports explicitly when needed:

View File

@@ -115,38 +115,6 @@ If the first real publish returns npm `E404`, check npm-side prerequisites befor
- The initial publish must include `--access public` for a public scoped package.
- npm also requires either account 2FA for publishing or a granular token that is allowed to bypass 2FA.
### Manual first publish for `@paperclipai/mcp-server`
If you need to publish only the MCP server package once by hand, use:
- `@paperclipai/mcp-server`
Recommended flow from the repo root:
```bash
# optional sanity check: this 404s until the first publish exists
npm view @paperclipai/mcp-server version
# make sure the build output is fresh
pnpm --filter @paperclipai/mcp-server build
# confirm your local npm auth before the real publish
npm whoami
# safe preview of the exact publish payload
cd packages/mcp-server
pnpm publish --dry-run --no-git-checks --access public
# real publish
pnpm publish --no-git-checks --access public
```
Notes:
- Publish from `packages/mcp-server/`, not the repo root.
- If `npm view @paperclipai/mcp-server version` already returns the same version that is in [`packages/mcp-server/package.json`](../packages/mcp-server/package.json), do not republish. Bump the version or use the normal repo-wide release flow in [`scripts/release.sh`](../scripts/release.sh).
- The same npm-side prerequisites apply as above: valid npm auth, permission to publish to the `@paperclipai` scope, `--access public`, and the required publish auth/2FA policy.
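The "do not republish the same version" note can be reduced to a small decision rule. This is a hedged sketch with hypothetical version strings, not code from the release scripts; a real check would read `packages/mcp-server/package.json` and the output of `npm view`.

```typescript
// Hedged sketch of the republish guard described above.
// `publishedVersion` is null when the package has never been published
// (npm view returns E404 in that case).
function shouldPublish(localVersion: string, publishedVersion: string | null): boolean {
  if (publishedVersion === null) return true; // first publish
  return publishedVersion !== localVersion;   // otherwise require a version bump
}

console.log(shouldPublish("2026.4.1", null));       // true: first publish
console.log(shouldPublish("2026.4.1", "2026.4.1")); // false: bump first
```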
## Version formats
Paperclip uses calendar versions:

View File

@@ -619,7 +619,7 @@ Per-agent schedule fields in `adapter_config`:
- `enabled` boolean
- `intervalSec` integer (minimum 30)
- `maxConcurrentRuns` fixed at `1` for V1
- `maxConcurrentRuns` integer; new agents default to `5`
Scheduler must skip invocation when:

View File

@@ -146,6 +146,8 @@ Use it for:
- explicit waiting relationships
- automatic wakeups when all blockers resolve
Blocked issues should stay idle while blockers remain unresolved. Paperclip should not create a queued heartbeat run for that issue until the final blocker is done and the `issue_blockers_resolved` wake can start real work.
If a parent is truly waiting on a child, model that with blockers. Do not rely on the parent/child relationship alone.
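The gating rule above can be sketched as a single predicate. The function name is hypothetical; the statuses follow the issue states used elsewhere in these docs.

```typescript
// Minimal sketch of the rule above: a blocked issue gets no queued heartbeat
// run until every blocker has resolved. Only when the final blocker is done
// may the issue_blockers_resolved wake start real work.
function shouldQueueBlockedWake(blockerStatuses: string[]): boolean {
  return blockerStatuses.length > 0 && blockerStatuses.every((s) => s === "done");
}

console.log(shouldQueueBlockedWake(["done", "in_progress"])); // false: still blocked
console.log(shouldQueueBlockedWake(["done", "done"]));        // true: wake may fire
```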
## 7. Consistent Execution Path Rules

View File

@@ -10,6 +10,9 @@ It is intentionally narrower than [PLUGIN_SPEC.md](./PLUGIN_SPEC.md). The spec i
- Plugin UI runs as same-origin JavaScript inside the main Paperclip app.
- Worker-side host APIs are capability-gated.
- Plugin UI is not sandboxed by manifest capabilities.
- Plugin database migrations are restricted to a host-derived plugin namespace.
- Plugin-owned JSON API routes must be declared in the manifest and are mounted
only under `/api/plugins/:pluginId/api/*`.
- There is no host-provided shared React component kit for plugins yet.
- `ctx.assets` is not supported in the current runtime.
@@ -77,10 +80,12 @@ Worker:
- secrets
- activity
- state
- database namespace via `ctx.db`
- scoped JSON API routes declared with `apiRoutes`
- entities
- projects and project workspaces
- companies
- issues and comments
- issues, comments, namespaced `plugin:<pluginKey>` origins, blocker relations, checkout assertions, assignment wakeups, and orchestration summaries
- agents and agent sessions
- goals
- data/actions
@@ -89,6 +94,55 @@ Worker:
- metrics
- logger
### Plugin database declarations
First-party or otherwise trusted orchestration plugins can declare:
```ts
database: {
migrationsDir: "migrations",
coreReadTables: ["issues"],
}
```
Required capabilities are `database.namespace.migrate` and
`database.namespace.read`; add `database.namespace.write` for runtime mutations.
The host derives `ctx.db.namespace`, runs SQL files in filename order before the
worker starts, records checksums in `plugin_migrations`, and rejects changed
already-applied migrations.
Migration SQL may create or alter objects only inside `ctx.db.namespace`. It may
reference whitelisted `public` core tables for foreign keys or read-only views,
but it may not mutate, alter, drop, or truncate public tables, and it may not
create extensions, triggers, or untrusted languages, or run multi-statement SQL
at runtime. Runtime `ctx.db.query()` is restricted to `SELECT`; runtime
`ctx.db.execute()` is restricted to namespace-local `INSERT`, `UPDATE`, and `DELETE`.
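The runtime restrictions can be sketched as a statement classifier. This is a deliberately minimal assumption about host behavior (the real host is certainly more thorough than a regex); table and namespace names are hypothetical.

```typescript
// Hedged sketch of host-side classification of runtime plugin SQL.
type Verdict = "allow" | "reject";

// ctx.db.query() is restricted to SELECT.
function classifyQuery(sql: string): Verdict {
  return /^\s*select\b/i.test(sql) ? "allow" : "reject";
}

// ctx.db.execute() allows only namespace-local INSERT, UPDATE, DELETE.
function classifyExecute(sql: string, namespace: string): Verdict {
  const m = sql.match(/^\s*(insert\s+into|update|delete\s+from)\s+([a-z_][a-z0-9_]*)\./i);
  if (!m) return "reject";
  return m[2].toLowerCase() === namespace ? "allow" : "reject";
}

console.log(classifyQuery("SELECT id FROM plugin_demo.sync_state"));              // allow
console.log(classifyExecute("UPDATE plugin_demo.sync_state SET a = 1", "plugin_demo")); // allow
console.log(classifyExecute("DELETE FROM public.issues WHERE id = $1", "plugin_demo")); // reject
```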
### Scoped plugin API routes
Plugins can expose JSON-only routes under their own namespace:
```ts
apiRoutes: [
{
routeKey: "initialize",
method: "POST",
path: "/issues/:issueId/smoke",
auth: "board-or-agent",
capability: "api.routes.register",
checkoutPolicy: "required-for-agent-in-progress",
companyResolution: { from: "issue", param: "issueId" },
},
]
```
The host resolves the plugin, checks that it is ready, enforces
`api.routes.register`, matches the declared method/path, resolves company access,
and applies checkout policy before dispatching to the worker's `onApiRequest`
handler. The worker receives sanitized headers, route params, query, parsed JSON
body, actor context, and company id. Do not use plugin routes to claim core
paths; they always remain under `/api/plugins/:pluginId/api/*`.
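A worker-side handler for the route declared above might look like the following. The exact input shape is an assumption inferred from the fields listed (route params, query, parsed JSON body, company id); only the `{ status?, headers?, body? }` response shape is stated by the spec.

```typescript
// Hedged sketch of a worker onApiRequest handler. The host has already
// enforced auth, capability, company access, and checkout policy by the
// time this runs.
type PluginApiResponse = { status?: number; headers?: Record<string, string>; body?: unknown };

async function onApiRequest(input: {
  routeKey: string;
  params: Record<string, string>;
  query: Record<string, string>;
  body: unknown;
  companyId: string;
}): Promise<PluginApiResponse> {
  if (input.routeKey !== "initialize") {
    return { status: 404, body: { error: "unknown routeKey" } };
  }
  // Declared path /issues/:issueId/smoke puts issueId into params.
  return {
    status: 200,
    body: { issueId: input.params.issueId, companyId: input.companyId, smokeStarted: true },
  };
}
```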
UI:
- `usePluginData`

View File

@@ -28,6 +28,9 @@ Current limitations to keep in mind:
- The repo example plugins under `packages/plugins/examples/` are development conveniences. They work from a source checkout and should not be assumed to exist in a generic published build unless they are explicitly shipped with that build.
- Dynamic plugin install is not yet cloud-ready for horizontally scaled or ephemeral deployments. There is no shared artifact store, install coordination, or cross-node distribution layer yet.
- The current runtime does not yet ship a real host-provided plugin UI component kit, and it does not support plugin asset uploads/reads. Treat those as future-scope ideas in this spec, not current implementation promises.
- Scoped plugin API routes are JSON-only and must be declared in `apiRoutes`.
They mount under `/api/plugins/:pluginId/api/*`; plugins cannot shadow core
API routes.
In practice, that means the current implementation is a good fit for local development and self-hosted persistent deployments, but not yet for multi-instance cloud plugin distribution.
@@ -624,7 +627,46 @@ Required SDK clients:
Plugins that need filesystem, git, terminal, or process operations handle those directly using standard Node APIs or libraries. The host provides project workspace metadata through `ctx.projects` so plugins can resolve workspace paths, but the host does not proxy low-level OS operations.
## 14.1 Example SDK Shape
## 14.1 Issue Orchestration APIs
Trusted orchestration plugins can create and update Paperclip issues through `ctx.issues` instead of importing server internals. The public issue contract includes parent/project/goal links, board or agent assignees, blocker IDs, labels, billing code, request depth, execution workspace inheritance, and plugin origin metadata.
Origin rules:
- Built-in core issues keep built-in origins such as `manual` and `routine_execution`.
- Plugin-managed issues use `plugin:<pluginKey>` or a sub-kind such as `plugin:<pluginKey>:feature`.
- The host derives the default plugin origin from the installed plugin key and rejects attempts to set `plugin:<otherPluginKey>` origins.
- `originId` is plugin-defined and should be stable for idempotent generated work.
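The host-side origin check described above can be sketched as follows; the function name and plugin key are hypothetical, the rules are the ones listed.

```typescript
// Minimal sketch of origin validation: built-in origins pass through, and
// plugin origins must start with the installed plugin's own key.
function isAllowedOrigin(origin: string, installedPluginKey: string): boolean {
  if (!origin.startsWith("plugin:")) return true; // built-ins like "manual"
  const key = origin.slice("plugin:".length).split(":")[0];
  return key === installedPluginKey;
}

console.log(isAllowedOrigin("manual", "demo"));              // true
console.log(isAllowedOrigin("plugin:demo:feature", "demo")); // true
console.log(isAllowedOrigin("plugin:other", "demo"));        // false
```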
Relation and read helpers:
- `ctx.issues.relations.get(issueId, companyId)`
- `ctx.issues.relations.setBlockedBy(issueId, blockerIssueIds, companyId)`
- `ctx.issues.relations.addBlockers(issueId, blockerIssueIds, companyId)`
- `ctx.issues.relations.removeBlockers(issueId, blockerIssueIds, companyId)`
- `ctx.issues.getSubtree(issueId, companyId, options)`
- `ctx.issues.summaries.getOrchestration({ issueId, companyId, includeSubtree, billingCode })`
Governance helpers:
- `ctx.issues.assertCheckoutOwner({ issueId, companyId, actorAgentId, actorRunId })` lets plugin actions preserve agent-run checkout ownership.
- `ctx.issues.requestWakeup(issueId, companyId, options)` requests assignment wakeups through host heartbeat semantics, including terminal-status, blocker, assignee, and budget hard-stop checks.
- `ctx.issues.requestWakeups(issueIds, companyId, options)` applies the same host-owned wakeup semantics to a batch and may use an idempotency key prefix for stable coordinator retries.
Plugin-originated issue, relation, document, comment, and wakeup mutations must write activity entries with `actorType: "plugin"` and details fields for `sourcePluginId`, `sourcePluginKey`, `initiatingActorType`, `initiatingActorId`, and `initiatingRunId` when a user or agent run initiated the plugin work.
Scoped API routes:
- `apiRoutes[]` declares `routeKey`, `method`, plugin-local `path`, `auth`,
`capability`, optional checkout policy, and company resolution.
- The host enforces auth, company access, `api.routes.register`, route matching,
and checkout policy before worker dispatch.
- The worker implements `onApiRequest(input)` and returns a JSON response shape
`{ status?, headers?, body? }`.
- Only safe request headers are forwarded; auth/cookie headers are never passed
to the worker.
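The header-forwarding rule can be sketched as an allowlist filter. The specific allowlist below is an assumption; the spec only guarantees that auth and cookie headers never reach the worker.

```typescript
// Hedged sketch of sanitizing request headers before worker dispatch.
const SAFE_HEADERS = new Set(["content-type", "accept", "user-agent"]); // assumed allowlist

function sanitizeHeaders(headers: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    const key = name.toLowerCase();
    if (key === "authorization" || key === "cookie") continue; // never forwarded
    if (SAFE_HEADERS.has(key)) out[key] = value;
  }
  return out;
}
```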
## 14.2 Example SDK Shape
```ts
/** Top-level helper for defining a plugin with type checking */
@@ -696,16 +738,24 @@ The host enforces capabilities in the SDK layer and refuses calls outside the gr
- `project.workspaces.read`
- `issues.read`
- `issue.comments.read`
- `issue.documents.read`
- `issue.relations.read`
- `issue.subtree.read`
- `agents.read`
- `goals.read`
- `activity.read`
- `costs.read`
- `issues.orchestration.read`
### Data Write
- `issues.create`
- `issues.update`
- `issue.comments.create`
- `issue.documents.write`
- `issue.relations.write`
- `issues.checkout`
- `issues.wakeup`
- `assets.write`
- `assets.read`
- `activity.log.write`
@@ -772,6 +822,13 @@ Minimum event set:
- `issue.created`
- `issue.updated`
- `issue.comment.created`
- `issue.document.created`
- `issue.document.updated`
- `issue.document.deleted`
- `issue.relations.updated`
- `issue.checked_out`
- `issue.released`
- `issue.assignment_wakeup_requested`
- `agent.created`
- `agent.updated`
- `agent.status_changed`
@@ -781,6 +838,8 @@ Minimum event set:
- `agent.run.cancelled`
- `approval.created`
- `approval.decided`
- `budget.incident.opened`
- `budget.incident.resolved`
- `cost_event.created`
- `activity.logged`
@@ -1238,6 +1297,8 @@ Plugin-originated mutations should write:
- `actor_type = plugin`
- `actor_id = <plugin-id>`
- details include `sourcePluginId` and `sourcePluginKey`
- details include `initiatingActorType`, `initiatingActorId`, and `initiatingRunId` when a user or agent run triggered the plugin work
## 21.5 Plugin Migrations

View File

@@ -114,14 +114,14 @@ If the connection drops, the UI reconnects automatically.
1. Enable timer wakeups (for example every 300s)
2. Keep assignment wakeups on
3. Use a focused prompt template
3. Use a focused prompt template that tells agents to act in the same heartbeat, leave durable progress, and mark blocked work with an owner/action
4. Watch run logs and adjust prompt/config over time
## 7.2 Event-driven loop (less constant polling)
1. Disable timer or set a long interval
2. Keep wake-on-assignment enabled
3. Use on-demand wakeups for manual nudges
3. Use child issues, comments, and on-demand wakeups for handoffs instead of loops that poll agents, sessions, or processes
## 7.3 Safety-first loop

299
doc/spec/invite-flow.md Normal file
View File

@@ -0,0 +1,299 @@
# Invite Flow State Map
Status: Current implementation map
Date: 2026-04-13
This document maps the current invite creation and acceptance states implemented in:
- `ui/src/pages/CompanyInvites.tsx`
- `ui/src/pages/CompanySettings.tsx`
- `ui/src/pages/InviteLanding.tsx`
- `server/src/routes/access.ts`
- `server/src/lib/join-request-dedupe.ts`
## State Legend
- Invite state: `active`, `revoked`, `accepted`, `expired`
- Join request status: `pending_approval`, `approved`, `rejected`
- Claim secret state for agent joins: `available`, `consumed`, `expired`
- Invite type: `company_join` or `bootstrap_ceo`
- Join type: `human`, `agent`, or `both`
## Entity Lifecycle
```mermaid
flowchart TD
Board[Board user on invite screen]
HumanInvite[Create human company invite]
OpenClawInvite[Generate OpenClaw invite prompt]
Active[Invite state: active]
Revoked[Invite state: revoked]
Expired[Invite state: expired]
Accepted[Invite state: accepted]
BootstrapDone[Bootstrap accepted<br/>no join request]
HumanReuse{Matching human join request<br/>already exists for same user/email?}
HumanPending[Join request<br/>pending_approval]
HumanApproved[Join request<br/>approved]
HumanRejected[Join request<br/>rejected]
AgentPending[Agent join request<br/>pending_approval<br/>+ optional claim secret]
AgentApproved[Agent join request<br/>approved]
AgentRejected[Agent join request<br/>rejected]
ClaimAvailable[Claim secret available]
ClaimConsumed[Claim secret consumed]
ClaimExpired[Claim secret expired]
OpenClawReplay[Special replay path:<br/>accepted invite can be POSTed again<br/>for openclaw_gateway only]
Board --> HumanInvite --> Active
Board --> OpenClawInvite --> Active
Active -->|revoke| Revoked
Active -->|expiresAt passes| Expired
Active -->|bootstrap_ceo accept| BootstrapDone
BootstrapDone --> Accepted
Active -->|human accept| HumanReuse
HumanReuse -->|reuse existing pending request| HumanPending
HumanReuse -->|reuse existing approved request| HumanApproved
HumanReuse -->|no reusable request<br/>create new request| HumanPending
HumanPending -->|board approves| HumanApproved
HumanPending -->|board rejects| HumanRejected
HumanPending --> Accepted
HumanApproved --> Accepted
Active -->|agent accept| AgentPending
AgentPending --> Accepted
AgentPending -->|board approves| AgentApproved
AgentPending -->|board rejects| AgentRejected
AgentApproved -->|createdAgentId + claimSecretHash| ClaimAvailable
ClaimAvailable -->|POST claim-api-key succeeds| ClaimConsumed
ClaimAvailable -->|secret expires| ClaimExpired
Accepted --> OpenClawReplay
OpenClawReplay --> AgentPending
OpenClawReplay --> AgentApproved
```
## Board-Side Screen States
```mermaid
stateDiagram-v2
[*] --> CompanySelection
CompanySelection --> NoCompany: no company selected
CompanySelection --> LoadingHistory: selectedCompanyId present
LoadingHistory --> HistoryError: listInvites failed
LoadingHistory --> Ready: listInvites succeeded
state Ready {
[*] --> EmptyHistory
EmptyHistory --> PopulatedHistory: invites exist
PopulatedHistory --> LoadingMore: View more
LoadingMore --> PopulatedHistory: next page loaded
PopulatedHistory --> RevokePending: Revoke active invite
RevokePending --> PopulatedHistory: revoke succeeded
RevokePending --> PopulatedHistory: revoke failed
EmptyHistory --> CreatePending: Create invite
PopulatedHistory --> CreatePending: Create invite
CreatePending --> LatestInviteVisible: create succeeded
CreatePending --> Ready: create failed
LatestInviteVisible --> CopiedToast: clipboard copy succeeded
LatestInviteVisible --> Ready: navigate away or refresh
}
CompanySelection --> OpenClawPromptReady: Company settings prompt generator
OpenClawPromptReady --> OpenClawPromptPending: Generate OpenClaw Invite Prompt
OpenClawPromptPending --> OpenClawSnippetVisible: prompt generated
OpenClawPromptPending --> OpenClawPromptReady: generation failed
```
## Invite Landing Screen States
```mermaid
stateDiagram-v2
[*] --> TokenGate
TokenGate --> InvalidToken: token missing
TokenGate --> Loading: token present
Loading --> InviteUnavailable: invite fetch failed or invite not returned
Loading --> CheckingAccess: signed-in session and invite.companyId
Loading --> InviteResolved: invite loaded without membership check
Loading --> AcceptedInviteSummary: invite already consumed<br/>but linked join request still exists
CheckingAccess --> RedirectToBoard: current user already belongs to company
CheckingAccess --> InviteResolved: membership check finished and no join-request summary state is active
CheckingAccess --> AcceptedInviteSummary: membership check finished and invite has joinRequestStatus
state InviteResolved {
[*] --> Branch
Branch --> AgentForm: company_join + allowedJoinTypes=agent
Branch --> InlineAuth: authenticated mode + no session + join is not agent-only
Branch --> AcceptReady: bootstrap invite or human-ready session/local_trusted
InlineAuth --> InlineAuth: toggle sign-up/sign-in
InlineAuth --> InlineAuth: auth validation or auth error message
InlineAuth --> RedirectToBoard: auth succeeded and company membership already exists
InlineAuth --> AcceptPending: auth succeeded and invite still needs acceptance
AgentForm --> AcceptPending: submit request
AgentForm --> AgentForm: validation or accept error
AcceptReady --> AcceptPending: Accept invite
AcceptReady --> AcceptReady: accept error
}
AcceptPending --> BootstrapComplete: bootstrapAccepted=true
AcceptPending --> RedirectToBoard: join status=approved
AcceptPending --> PendingApprovalResult: join status=pending_approval
AcceptPending --> RejectedResult: join status=rejected
state AcceptedInviteSummary {
[*] --> SummaryBranch
SummaryBranch --> PendingApprovalReload: joinRequestStatus=pending_approval
SummaryBranch --> OpeningCompany: joinRequestStatus=approved<br/>and human invite user is now a member
SummaryBranch --> RejectedReload: joinRequestStatus=rejected
SummaryBranch --> ConsumedReload: approved agent invite or other consumed state
}
PendingApprovalResult --> PendingApprovalReload: reload after submit
RejectedResult --> RejectedReload: reload after board rejects
RedirectToBoard --> OpeningCompany: brief pre-navigation render when approved membership is detected
OpeningCompany --> RedirectToBoard: navigate to board
```
## Sequence Diagrams
### Human Invite Creation And First Acceptance
```mermaid
sequenceDiagram
autonumber
actor Board as Board user
participant Settings as Company Invites UI
participant API as Access routes
participant Invites as invites table
actor Invitee as Invite recipient
participant Landing as Invite landing UI
participant Auth as Auth session
participant Join as join_requests table
Board->>Settings: Choose role and click Create invite
Settings->>API: POST /api/companies/:companyId/invites
API->>Invites: Insert active invite
API-->>Settings: inviteUrl + metadata
Invitee->>Landing: Open invite URL
Landing->>API: GET /api/invites/:token
API->>Invites: Load active invite
API-->>Landing: Invite summary
alt Authenticated mode and no session
Landing->>Auth: Sign up or sign in
Auth-->>Landing: Session established
end
Landing->>API: POST /api/invites/:token/accept (requestType=human)
API->>Join: Look for reusable human join request
alt Reusable pending or approved request exists
API->>Invites: Mark invite accepted
API-->>Landing: Existing join request status
else No reusable request exists
API->>Invites: Mark invite accepted
API->>Join: Insert pending_approval join request
API-->>Landing: New pending_approval join request
end
```
### Human Approval And Reload Path
```mermaid
sequenceDiagram
autonumber
actor Invitee as Invite recipient
participant Landing as Invite landing UI
participant API as Access routes
participant Join as join_requests table
actor Approver as Company admin
participant Queue as Access queue UI
participant Membership as company_memberships + grants
Invitee->>Landing: Reload consumed invite URL
Landing->>API: GET /api/invites/:token
API->>Join: Load join request by inviteId
API-->>Landing: joinRequestStatus + joinRequestType
alt joinRequestStatus = pending_approval
Landing-->>Invitee: Show waiting-for-approval panel
Approver->>Queue: Review request in Company Settings -> Access
Queue->>API: POST /companies/:companyId/join-requests/:requestId/approve
API->>Membership: Ensure membership and grants
API->>Join: Mark join request approved
Invitee->>Landing: Refresh after approval
Landing->>API: GET /api/invites/:token
API->>Join: Reload approved join request
API-->>Landing: approved status
Landing-->>Invitee: Opening company and redirect
else joinRequestStatus = rejected
Landing-->>Invitee: Show rejected error panel
else joinRequestStatus = approved but membership missing
Landing-->>Invitee: Fall through to consumed/unavailable state
end
```
### Agent Invite Approval, Claim, And Replay
```mermaid
sequenceDiagram
autonumber
actor Board as Board user
participant Settings as Company Settings UI
participant API as Access routes
participant Invites as invites table
actor Gateway as OpenClaw gateway agent
participant Join as join_requests table
actor Approver as Company admin
participant Agents as agents table
participant Keys as agent_api_keys table
Board->>Settings: Generate OpenClaw invite prompt
Settings->>API: POST /api/companies/:companyId/openclaw-invite-prompt
API->>Invites: Insert active agent invite
API-->>Settings: Prompt text + invite token
Gateway->>API: POST /api/invites/:token/accept (agent, openclaw_gateway)
API->>Invites: Mark invite accepted
API->>Join: Insert pending_approval join request + claimSecretHash
API-->>Gateway: requestId + claimSecret + claimApiKeyPath
Approver->>API: POST /companies/:companyId/join-requests/:requestId/approve
API->>Agents: Create agent + membership + grants
API->>Join: Mark request approved and store createdAgentId
Gateway->>API: POST /api/join-requests/:requestId/claim-api-key (claimSecret)
API->>Keys: Create initial API key
API->>Join: Mark claim secret consumed
API-->>Gateway: Plaintext Paperclip API key
opt Replay accepted invite for updated gateway defaults
Gateway->>API: POST /api/invites/:token/accept again
API->>Join: Reuse existing approved or pending request
API->>Agents: Update approved agent adapter config when applicable
API-->>Gateway: Updated join request payload
end
```
## Notes
- `GET /api/invites/:token` treats `revoked` and `expired` invites as unavailable. Accepted invites remain resolvable when they already have a linked join request, and the summary now includes `joinRequestStatus` plus `joinRequestType`.
- Human acceptance consumes the invite immediately and then either creates a new join request or reuses an existing `pending_approval` or `approved` human join request for the same user/email.
- The landing page has two layers of post-accept UI:
- immediate mutation-result UI from `POST /api/invites/:token/accept`
- reload-time summary UI from `GET /api/invites/:token` once the invite has already been consumed
- Reload behavior for accepted company invites is now status-sensitive:
- `pending_approval` re-renders the waiting-for-approval panel
- `rejected` renders the "This join request was not approved." error panel
- `approved` only becomes a success path for human invites after membership is visible to the current session; otherwise the page falls through to the generic consumed/unavailable state
- `GET /api/invites/:token/logo` still rejects accepted invites, so accepted-invite reload states may fall back to the generated company icon even though the summary payload still carries `companyLogoUrl`.
- The only accepted-invite replay path in the current implementation is `POST /api/invites/:token/accept` for `agent` requests with `adapterType=openclaw_gateway`, and only when the existing join request is still `pending_approval` or already `approved`.
- `bootstrap_ceo` invites are one-time and do not create join requests.
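The replay rule in the notes above can be expressed as a single predicate; the parameter names are hypothetical, the conditions come directly from the notes.

```typescript
// Hedged sketch: an accepted invite can only be re-POSTed by an agent accept
// for adapterType openclaw_gateway, and only while the linked join request is
// still pending_approval or already approved.
type JoinStatus = "pending_approval" | "approved" | "rejected";

function mayReplayAcceptedInvite(
  requestType: "human" | "agent",
  adapterType: string,
  existingStatus: JoinStatus,
): boolean {
  return (
    requestType === "agent" &&
    adapterType === "openclaw_gateway" &&
    (existingStatus === "pending_approval" || existingStatus === "approved")
  );
}
```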

View File

@@ -124,14 +124,14 @@ If the connection drops, the UI reconnects automatically.
1. Enable timer wakeups (for example every 300s)
2. Keep assignment wakeups on
3. Use a focused prompt template
3. Use a focused prompt template that tells agents to act in the same heartbeat, leave durable progress, and mark blocked work with an owner/action
4. Watch run logs and adjust prompt/config over time
## 7.2 Event-driven loop (less constant polling)
1. Disable timer or set a long interval
2. Keep wake-on-assignment enabled
3. Use on-demand wakeups for manual nudges
3. Use child issues, comments, and on-demand wakeups for handoffs instead of loops that poll agents, sessions, or processes
## 7.3 Safety-first loop

View File

@@ -48,6 +48,8 @@
"guides/board-operator/managing-tasks",
"guides/board-operator/execution-workspaces-and-runtime-services",
"guides/board-operator/delegation",
"guides/board-operator/approvals",
"guides/board-operator/costs-and-budgets",
"guides/board-operator/activity-log",

View File

@@ -66,7 +66,9 @@ Read ancestors to understand why this task exists. If woken by a specific commen
### Step 7: Do the Work
Use your tools and capabilities to complete the task.
Use your tools and capabilities to complete the task. If the issue is actionable, take a concrete action in the same heartbeat. Do not stop at a plan unless the issue asked for planning.
Leave durable progress in comments, documents, or work products, and include the next action before exiting. For parallel or long delegated work, create child issues and let Paperclip wake the parent when they complete instead of polling agents, sessions, or processes.
### Step 8: Update Status
@@ -102,6 +104,22 @@ Always set `parentId` and `goalId` on subtasks.
- **Always checkout** before working — never PATCH to `in_progress` manually
- **Never retry a 409** — the task belongs to someone else
- **Always comment** on in-progress work before exiting a heartbeat
- **Start actionable work** in the same heartbeat; planning-only exits are for planning tasks
- **Leave a clear next action** in durable issue context
- **Use child issues instead of polling** for long or parallel delegated work
- **Always set parentId** on subtasks
- **Never cancel cross-team tasks** — reassign to your manager
- **Escalate when stuck** — use your chain of command
## Run Liveness
Paperclip records run liveness as metadata on heartbeat runs. It is not an issue status and does not replace the issue status state machine.
- Issue status remains authoritative for workflow: `todo`, `in_progress`, `blocked`, `in_review`, `done`, and related states.
- Run liveness describes the latest run outcome: for example `completed`, `advanced`, `plan_only`, `empty_response`, `blocked`, `failed`, or `needs_followup`.
- Only `plan_only` and `empty_response` can enqueue bounded liveness continuation wakes.
- Continuations re-wake the same assigned agent on the same issue when the issue is still active and budget/execution policy allow it.
- `continuationAttempt` counts semantic liveness continuations for a source run chain. It is separate from process recovery, queued wake delivery, adapter session resume, and other operational retries.
- Liveness continuation wake prompts include the attempt, source run, liveness state, liveness reason, and the instruction for the next heartbeat.
- Continuations do not mark the issue `blocked` or `done`. If automatic continuations are exhausted, Paperclip leaves an audit comment so a human or manager can clarify, block, or assign follow-up work.
- Workspace provisioning alone is not treated as concrete task progress. Durable progress should appear as tool/action events, issue comments, document or work-product revisions, activity log entries, commits, or tests.
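The continuation gating above can be sketched as follows. The `maxAttempts` bound is a hypothetical parameter (the doc only says continuations are bounded); the liveness states are the ones listed.

```typescript
// Minimal sketch of liveness continuation gating: only plan_only and
// empty_response may enqueue a continuation wake, and the chain is bounded.
type RunLiveness =
  | "completed" | "advanced" | "plan_only" | "empty_response"
  | "blocked" | "failed" | "needs_followup";

function mayEnqueueContinuation(
  liveness: RunLiveness,
  continuationAttempt: number,
  maxAttempts: number,
): boolean {
  if (continuationAttempt >= maxAttempts) return false; // exhausted: leave audit comment instead
  return liveness === "plan_only" || liveness === "empty_response";
}

console.log(mayEnqueueContinuation("plan_only", 0, 2));      // true
console.log(mayEnqueueContinuation("failed", 0, 2));         // false
console.log(mayEnqueueContinuation("empty_response", 2, 2)); // false: exhausted
```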

View File

@@ -20,6 +20,13 @@ The Heartbeat Procedure:
8. Update status: PATCH /api/issues/{issueId} with status and comment
9. Delegate if needed: POST /api/companies/{companyId}/issues
Execution Contract:
- If the issue is actionable, start concrete work in this heartbeat. Do not stop at a plan unless the issue asks for planning.
- Leave durable progress in comments, documents, or work products, with a clear next action.
- Use child issues for parallel or long delegated work instead of polling agents, sessions, or processes.
- If blocked, PATCH the issue to blocked and name the unblock owner and action.
- Respect budget, pause/cancel, approval gates, and company boundaries.
Critical Rules:
- Always checkout before working. Never PATCH to in_progress manually.
- Never retry a 409. The task belongs to someone else.

View File

@@ -11,6 +11,8 @@
"dev:stop": "pnpm --filter @paperclipai/server exec tsx ../scripts/dev-service.ts stop",
"dev:server": "pnpm --filter @paperclipai/server dev",
"dev:ui": "pnpm --filter @paperclipai/ui dev",
"storybook": "pnpm --filter @paperclipai/ui storybook",
"build-storybook": "pnpm --filter @paperclipai/ui build-storybook",
"build": "pnpm run preflight:workspace-links && pnpm -r build",
"typecheck": "pnpm run preflight:workspace-links && pnpm -r typecheck",
"test": "pnpm run test:run",
@@ -18,6 +20,7 @@
"test:run": "pnpm run preflight:workspace-links && vitest run",
"db:generate": "pnpm --filter @paperclipai/db generate",
"db:migrate": "pnpm --filter @paperclipai/db migrate",
"issue-references:backfill": "pnpm run preflight:workspace-links && tsx scripts/backfill-issue-reference-mentions.ts",
"secrets:migrate-inline-env": "tsx scripts/migrate-inline-env-secrets.ts",
"db:backup": "./scripts/backup-db.sh",
"paperclipai": "node cli/node_modules/tsx/dist/cli.mjs cli/src/index.ts",
@@ -34,6 +37,7 @@
"smoke:openclaw-sse-standalone": "./scripts/smoke/openclaw-sse-standalone.sh",
"test:e2e": "npx playwright test --config tests/e2e/playwright.config.ts",
"test:e2e:headed": "npx playwright test --config tests/e2e/playwright.config.ts --headed",
"test:e2e:multiuser-authenticated": "npx playwright test --config tests/e2e/playwright-multiuser-authenticated.config.ts",
"evals:smoke": "cd evals/promptfoo && npx promptfoo@0.103.3 eval",
"test:release-smoke": "npx playwright test --config tests/release-smoke/playwright.config.ts",
"test:release-smoke:headed": "npx playwright test --config tests/release-smoke/playwright.config.ts --headed",

View File

@@ -1,6 +1,13 @@
import { randomUUID } from "node:crypto";
import { describe, expect, it } from "vitest";
import { runChildProcess } from "./server-utils.js";
import {
appendWithByteCap,
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
renderPaperclipWakePrompt,
runningProcesses,
runChildProcess,
stringifyPaperclipWakePayload,
} from "./server-utils.js";
function isPidAlive(pid: number) {
try {
@@ -20,7 +27,37 @@ async function waitForPidExit(pid: number, timeoutMs = 2_000) {
return !isPidAlive(pid);
}
async function waitForTextMatch(read: () => string, pattern: RegExp, timeoutMs = 1_000) {
const deadline = Date.now() + timeoutMs;
while (Date.now() < deadline) {
const value = read();
const match = value.match(pattern);
if (match) return match;
await new Promise((resolve) => setTimeout(resolve, 25));
}
return read().match(pattern);
}
describe("runChildProcess", () => {
it("does not arm a timeout when timeoutSec is 0", async () => {
const result = await runChildProcess(
randomUUID(),
process.execPath,
["-e", "setTimeout(() => process.stdout.write('done'), 150);"],
{
cwd: process.cwd(),
env: {},
timeoutSec: 0,
graceSec: 1,
onLog: async () => {},
},
);
expect(result.exitCode).toBe(0);
expect(result.timedOut).toBe(false);
expect(result.stdout).toBe("done");
});
it("waits for onSpawn before sending stdin to the child", async () => {
const spawnDelayMs = 150;
const startedAt = Date.now();
@@ -85,4 +122,281 @@ describe("runChildProcess", () => {
expect(await waitForPidExit(descendantPid!, 2_000)).toBe(true);
});
it.skipIf(process.platform === "win32")("cleans up a lingering process group after terminal output and child exit", async () => {
const result = await runChildProcess(
randomUUID(),
process.execPath,
[
"-e",
[
"const { spawn } = require('node:child_process');",
"const child = spawn(process.execPath, ['-e', 'setInterval(() => {}, 1000)'], { stdio: ['ignore', 'inherit', 'ignore'] });",
"process.stdout.write(`descendant:${child.pid}\\n`);",
"process.stdout.write(`${JSON.stringify({ type: 'result', result: 'done' })}\\n`);",
"setTimeout(() => process.exit(0), 25);",
].join(" "),
],
{
cwd: process.cwd(),
env: {},
timeoutSec: 0,
graceSec: 1,
onLog: async () => {},
terminalResultCleanup: {
graceMs: 100,
hasTerminalResult: ({ stdout }) => stdout.includes('"type":"result"'),
},
},
);
const descendantPid = Number.parseInt(result.stdout.match(/descendant:(\d+)/)?.[1] ?? "", 10);
expect(result.timedOut).toBe(false);
expect(result.exitCode).toBe(0);
expect(Number.isInteger(descendantPid) && descendantPid > 0).toBe(true);
expect(await waitForPidExit(descendantPid, 2_000)).toBe(true);
});
it.skipIf(process.platform === "win32")("cleans up a still-running child after terminal output", async () => {
const result = await runChildProcess(
randomUUID(),
process.execPath,
[
"-e",
[
"process.stdout.write(`${JSON.stringify({ type: 'result', result: 'done' })}\\n`);",
"setInterval(() => {}, 1000);",
].join(" "),
],
{
cwd: process.cwd(),
env: {},
timeoutSec: 0,
graceSec: 1,
onLog: async () => {},
terminalResultCleanup: {
graceMs: 100,
hasTerminalResult: ({ stdout }) => stdout.includes('"type":"result"'),
},
},
);
expect(result.timedOut).toBe(false);
expect(result.signal).toBe("SIGTERM");
expect(result.stdout).toContain('"type":"result"');
});
it.skipIf(process.platform === "win32")("does not clean up noisy runs that have no terminal output", async () => {
const runId = randomUUID();
let observed = "";
const resultPromise = runChildProcess(
runId,
process.execPath,
[
"-e",
[
"const { spawn } = require('node:child_process');",
"const child = spawn(process.execPath, ['-e', \"setInterval(() => process.stdout.write('noise\\\\n'), 50)\"], { stdio: ['ignore', 'inherit', 'ignore'] });",
"process.stdout.write(`descendant:${child.pid}\\n`);",
"setTimeout(() => process.exit(0), 25);",
].join(" "),
],
{
cwd: process.cwd(),
env: {},
timeoutSec: 0,
graceSec: 1,
onLog: async (_stream, chunk) => {
observed += chunk;
},
terminalResultCleanup: {
graceMs: 50,
hasTerminalResult: ({ stdout }) => stdout.includes('"type":"result"'),
},
},
);
const pidMatch = await waitForTextMatch(() => observed, /descendant:(\d+)/);
const descendantPid = Number.parseInt(pidMatch?.[1] ?? "", 10);
expect(Number.isInteger(descendantPid) && descendantPid > 0).toBe(true);
const race = await Promise.race([
resultPromise.then(() => "settled" as const),
new Promise<"pending">((resolve) => setTimeout(() => resolve("pending"), 300)),
]);
expect(race).toBe("pending");
expect(isPidAlive(descendantPid)).toBe(true);
const running = runningProcesses.get(runId) as
| { child: { kill(signal: NodeJS.Signals): boolean }; processGroupId: number | null }
| undefined;
try {
if (running?.processGroupId) {
process.kill(-running.processGroupId, "SIGKILL");
} else {
running?.child.kill("SIGKILL");
}
await resultPromise;
} finally {
runningProcesses.delete(runId);
if (isPidAlive(descendantPid)) {
try {
process.kill(descendantPid, "SIGKILL");
} catch {
// Ignore cleanup races.
}
}
}
});
});
describe("renderPaperclipWakePrompt", () => {
it("keeps the default local-agent prompt action-oriented", () => {
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("Start actionable work in this heartbeat");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("do not stop at a plan");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("Use child issues");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain("instead of polling agents, sessions, or processes");
expect(DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE).toContain(
"Respect budget, pause/cancel, approval gates, and company boundaries",
);
});
it("adds the execution contract to scoped wake prompts", () => {
const prompt = renderPaperclipWakePrompt({
reason: "issue_assigned",
issue: {
id: "issue-1",
identifier: "PAP-1580",
title: "Update prompts",
status: "in_progress",
},
commentWindow: {
requestedCount: 0,
includedCount: 0,
missingCount: 0,
},
comments: [],
fallbackFetchNeeded: false,
});
expect(prompt).toContain("## Paperclip Wake Payload");
expect(prompt).toContain("Execution contract: take concrete action in this heartbeat");
expect(prompt).toContain("use child issues instead of polling");
expect(prompt).toContain("mark blocked work with the unblock owner/action");
});
it("renders dependency-blocked interaction guidance", () => {
const prompt = renderPaperclipWakePrompt({
reason: "issue_commented",
issue: {
id: "issue-1",
identifier: "PAP-1703",
title: "Blocked parent",
status: "todo",
},
dependencyBlockedInteraction: true,
unresolvedBlockerIssueIds: ["blocker-1"],
unresolvedBlockerSummaries: [
{
id: "blocker-1",
identifier: "PAP-1723",
title: "Finish blocker",
status: "todo",
priority: "medium",
},
],
commentWindow: {
requestedCount: 1,
includedCount: 1,
missingCount: 0,
},
commentIds: ["comment-1"],
latestCommentId: "comment-1",
comments: [{ id: "comment-1", body: "hello" }],
fallbackFetchNeeded: false,
});
expect(prompt).toContain("dependency-blocked interaction: yes");
expect(prompt).toContain("respond or triage the human comment");
expect(prompt).toContain("PAP-1723 Finish blocker (todo)");
});
it("includes continuation and child issue summaries in structured wake context", () => {
const payload = {
reason: "issue_children_completed",
issue: {
id: "parent-1",
identifier: "PAP-100",
title: "Integrate child work",
status: "in_progress",
priority: "medium",
},
continuationSummary: {
key: "continuation-summary",
title: "Continuation Summary",
body: "# Continuation Summary\n\n## Next Action\n\n- Integrate child outputs.",
updatedAt: "2026-04-18T12:00:00.000Z",
},
livenessContinuation: {
attempt: 2,
maxAttempts: 2,
sourceRunId: "run-1",
state: "plan_only",
reason: "Run described future work without concrete action evidence",
instruction: "Take the first concrete action now.",
},
childIssueSummaries: [
{
id: "child-1",
identifier: "PAP-101",
title: "Implement helper",
status: "done",
priority: "medium",
summary: "Added the helper route and tests.",
},
],
};
expect(JSON.parse(stringifyPaperclipWakePayload(payload) ?? "{}")).toMatchObject({
continuationSummary: {
body: expect.stringContaining("Continuation Summary"),
},
livenessContinuation: {
attempt: 2,
maxAttempts: 2,
sourceRunId: "run-1",
state: "plan_only",
instruction: "Take the first concrete action now.",
},
childIssueSummaries: [
{
identifier: "PAP-101",
summary: "Added the helper route and tests.",
},
],
});
const prompt = renderPaperclipWakePrompt(payload);
expect(prompt).toContain("Issue continuation summary:");
expect(prompt).toContain("Integrate child outputs.");
expect(prompt).toContain("Run liveness continuation:");
expect(prompt).toContain("- attempt: 2/2");
expect(prompt).toContain("- source run: run-1");
expect(prompt).toContain("- liveness state: plan_only");
expect(prompt).toContain("- reason: Run described future work without concrete action evidence");
expect(prompt).toContain("- instruction: Take the first concrete action now.");
expect(prompt).toContain("Direct child issue summaries:");
expect(prompt).toContain("PAP-101 Implement helper (done)");
expect(prompt).toContain("Added the helper route and tests.");
});
});
describe("appendWithByteCap", () => {
it("keeps valid UTF-8 when trimming through multibyte text", () => {
const output = appendWithByteCap("prefix ", "hello — world", 7);
expect(output).not.toContain("\uFFFD");
expect(Buffer.from(output, "utf8").toString("utf8")).toBe(output);
expect(Buffer.byteLength(output, "utf8")).toBeLessThanOrEqual(7);
});
});
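The `appendWithByteCap` behavior tested above — trimming a capped buffer without splitting a multibyte UTF-8 code point — can be sketched on its own. This is a minimal standalone version of the same technique; the function name here is illustrative, not the library's export:

```typescript
// Byte-capped append that never splits a UTF-8 code point.
// Continuation bytes match the pattern 0b10xxxxxx, so after picking a raw
// byte offset we advance past any partial code point before decoding.
function appendWithByteCapSketch(prev: string, chunk: string, cap: number): string {
  const combined = prev + chunk;
  const buffer = Buffer.from(combined, "utf8");
  if (buffer.length <= cap) return combined;
  let start = buffer.length - cap;
  // Skip continuation bytes so the slice starts on a code-point boundary.
  while (start < buffer.length && (buffer[start]! & 0xc0) === 0x80) start += 1;
  return buffer.subarray(start).toString("utf8");
}
```

Trimming through the em dash in `"prefix hello — world"` with a 7-byte cap lands mid-code-point, so the loop skips forward and the result decodes cleanly with no U+FFFD replacement characters.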

View File

@@ -16,6 +16,11 @@ export interface RunProcessResult {
startedAt: string | null;
}
export interface TerminalResultCleanupOptions {
hasTerminalResult: (output: { stdout: string; stderr: string }) => boolean;
graceMs?: number;
}
interface RunningProcess {
child: ChildProcess;
graceSec: number;
@@ -29,6 +34,10 @@ interface SpawnTarget {
type ChildProcessWithEvents = ChildProcess & {
on(event: "error", listener: (err: Error) => void): ChildProcess;
on(
event: "exit",
listener: (code: number | null, signal: NodeJS.Signals | null) => void,
): ChildProcess;
on(
event: "close",
listener: (code: number | null, signal: NodeJS.Signals | null) => void,
@@ -60,12 +69,25 @@ function signalRunningProcess(
export const runningProcesses = new Map<string, RunningProcess>();
export const MAX_CAPTURE_BYTES = 4 * 1024 * 1024;
export const MAX_EXCERPT_BYTES = 32 * 1024;
const TERMINAL_RESULT_SCAN_OVERLAP_CHARS = 64 * 1024;
const SENSITIVE_ENV_KEY = /(key|token|secret|password|passwd|authorization|cookie)/i;
const PAPERCLIP_SKILL_ROOT_RELATIVE_CANDIDATES = [
"../../skills",
"../../../../../skills",
];
export const DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE = [
"You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work.",
"",
"Execution contract:",
"- Start actionable work in this heartbeat; do not stop at a plan unless the issue asks for planning.",
"- Leave durable progress in comments, documents, or work products with a clear next action.",
"- Use child issues for parallel or long delegated work instead of polling agents, sessions, or processes.",
"- If woken by a human comment on a dependency-blocked issue, respond or triage the comment without treating the blocked deliverable work as unblocked.",
"- If blocked, mark the issue blocked and name the unblock owner and action.",
"- Respect budget, pause/cancel, approval gates, and company boundaries.",
].join("\n");
export interface PaperclipSkillEntry {
key: string;
runtimeName: string;
@@ -180,6 +202,22 @@ export function appendWithCap(prev: string, chunk: string, cap = MAX_CAPTURE_BYT
return combined.length > cap ? combined.slice(combined.length - cap) : combined;
}
export function appendWithByteCap(prev: string, chunk: string, cap = MAX_CAPTURE_BYTES) {
const combined = prev + chunk;
const bytes = Buffer.byteLength(combined, "utf8");
if (bytes <= cap) return combined;
const buffer = Buffer.from(combined, "utf8");
let start = Math.max(0, bytes - cap);
while (start < buffer.length && (buffer[start]! & 0xc0) === 0x80) start += 1;
return buffer.subarray(start).toString("utf8");
}
function resumeReadable(readable: { resume: () => unknown; destroyed?: boolean } | null | undefined) {
if (!readable || readable.destroyed) return;
readable.resume();
}
export function resolvePathValue(obj: Record<string, unknown>, dottedPath: string) {
const parts = dottedPath.split(".");
let cursor: unknown = obj;
@@ -250,11 +288,52 @@ type PaperclipWakeComment = {
authorId: string | null;
};
type PaperclipWakeContinuationSummary = {
key: string | null;
title: string | null;
body: string;
bodyTruncated: boolean;
updatedAt: string | null;
};
type PaperclipWakeLivenessContinuation = {
attempt: number | null;
maxAttempts: number | null;
sourceRunId: string | null;
state: string | null;
reason: string | null;
instruction: string | null;
};
type PaperclipWakeChildIssueSummary = {
id: string | null;
identifier: string | null;
title: string | null;
status: string | null;
priority: string | null;
summary: string | null;
};
type PaperclipWakeBlockerSummary = {
id: string | null;
identifier: string | null;
title: string | null;
status: string | null;
priority: string | null;
};
type PaperclipWakePayload = {
reason: string | null;
issue: PaperclipWakeIssue | null;
checkedOutByHarness: boolean;
dependencyBlockedInteraction: boolean;
unresolvedBlockerIssueIds: string[];
unresolvedBlockerSummaries: PaperclipWakeBlockerSummary[];
executionStage: PaperclipWakeExecutionStage | null;
continuationSummary: PaperclipWakeContinuationSummary | null;
livenessContinuation: PaperclipWakeLivenessContinuation | null;
childIssueSummaries: PaperclipWakeChildIssueSummary[];
childIssueSummaryTruncated: boolean;
commentIds: string[];
latestCommentId: string | null;
comments: PaperclipWakeComment[];
@@ -298,6 +377,61 @@ function normalizePaperclipWakeComment(value: unknown): PaperclipWakeComment | n
};
}
function normalizePaperclipWakeContinuationSummary(value: unknown): PaperclipWakeContinuationSummary | null {
const summary = parseObject(value);
const body = asString(summary.body, "").trim();
if (!body) return null;
return {
key: asString(summary.key, "").trim() || null,
title: asString(summary.title, "").trim() || null,
body,
bodyTruncated: asBoolean(summary.bodyTruncated, false),
updatedAt: asString(summary.updatedAt, "").trim() || null,
};
}
function normalizePaperclipWakeLivenessContinuation(value: unknown): PaperclipWakeLivenessContinuation | null {
const continuation = parseObject(value);
const attempt = asNumber(continuation.attempt, 0);
const maxAttempts = asNumber(continuation.maxAttempts, 0);
const sourceRunId = asString(continuation.sourceRunId, "").trim() || null;
const state = asString(continuation.state, "").trim() || null;
const reason = asString(continuation.reason, "").trim() || null;
const instruction = asString(continuation.instruction, "").trim() || null;
if (!attempt && !maxAttempts && !sourceRunId && !state && !reason && !instruction) return null;
return {
attempt: attempt > 0 ? attempt : null,
maxAttempts: maxAttempts > 0 ? maxAttempts : null,
sourceRunId,
state,
reason,
instruction,
};
}
function normalizePaperclipWakeChildIssueSummary(value: unknown): PaperclipWakeChildIssueSummary | null {
const child = parseObject(value);
const id = asString(child.id, "").trim() || null;
const identifier = asString(child.identifier, "").trim() || null;
const title = asString(child.title, "").trim() || null;
const status = asString(child.status, "").trim() || null;
const priority = asString(child.priority, "").trim() || null;
const summary = asString(child.summary, "").trim() || null;
if (!id && !identifier && !title && !status && !summary) return null;
return { id, identifier, title, status, priority, summary };
}
function normalizePaperclipWakeBlockerSummary(value: unknown): PaperclipWakeBlockerSummary | null {
const blocker = parseObject(value);
const id = asString(blocker.id, "").trim() || null;
const identifier = asString(blocker.identifier, "").trim() || null;
const title = asString(blocker.title, "").trim() || null;
const status = asString(blocker.status, "").trim() || null;
const priority = asString(blocker.priority, "").trim() || null;
if (!id && !identifier && !title && !status) return null;
return { id, identifier, title, status, priority };
}
function normalizePaperclipWakeExecutionPrincipal(value: unknown): PaperclipWakeExecutionPrincipal | null {
const principal = parseObject(value);
const typeRaw = asString(principal.type, "").trim().toLowerCase();
@@ -356,8 +490,25 @@ export function normalizePaperclipWakePayload(value: unknown): PaperclipWakePayl
.map((entry) => entry.trim())
: [];
const executionStage = normalizePaperclipWakeExecutionStage(payload.executionStage);
const continuationSummary = normalizePaperclipWakeContinuationSummary(payload.continuationSummary);
const livenessContinuation = normalizePaperclipWakeLivenessContinuation(payload.livenessContinuation);
const childIssueSummaries = Array.isArray(payload.childIssueSummaries)
? payload.childIssueSummaries
.map((entry) => normalizePaperclipWakeChildIssueSummary(entry))
.filter((entry): entry is PaperclipWakeChildIssueSummary => Boolean(entry))
: [];
const unresolvedBlockerIssueIds = Array.isArray(payload.unresolvedBlockerIssueIds)
? payload.unresolvedBlockerIssueIds
.map((entry) => asString(entry, "").trim())
.filter(Boolean)
: [];
const unresolvedBlockerSummaries = Array.isArray(payload.unresolvedBlockerSummaries)
? payload.unresolvedBlockerSummaries
.map((entry) => normalizePaperclipWakeBlockerSummary(entry))
.filter((entry): entry is PaperclipWakeBlockerSummary => Boolean(entry))
: [];
if (comments.length === 0 && commentIds.length === 0 && !executionStage && !normalizePaperclipWakeIssue(payload.issue)) {
if (comments.length === 0 && commentIds.length === 0 && childIssueSummaries.length === 0 && unresolvedBlockerIssueIds.length === 0 && unresolvedBlockerSummaries.length === 0 && !executionStage && !continuationSummary && !livenessContinuation && !normalizePaperclipWakeIssue(payload.issue)) {
return null;
}
@@ -365,7 +516,14 @@ export function normalizePaperclipWakePayload(value: unknown): PaperclipWakePayl
reason: asString(payload.reason, "").trim() || null,
issue: normalizePaperclipWakeIssue(payload.issue),
checkedOutByHarness: asBoolean(payload.checkedOutByHarness, false),
dependencyBlockedInteraction: asBoolean(payload.dependencyBlockedInteraction, false),
unresolvedBlockerIssueIds,
unresolvedBlockerSummaries,
executionStage,
continuationSummary,
livenessContinuation,
childIssueSummaries,
childIssueSummaryTruncated: asBoolean(payload.childIssueSummaryTruncated, false),
commentIds,
latestCommentId: asString(payload.latestCommentId, "").trim() || null,
comments,
@@ -406,6 +564,8 @@ export function renderPaperclipWakePrompt(
"Focus on the new wake delta below and continue the current task without restating the full heartbeat boilerplate.",
"Fetch the API thread only when `fallbackFetchNeeded` is true or you need broader history than this batch.",
"",
"Execution contract: take concrete action in this heartbeat when the issue is actionable; do not stop at a plan unless planning was requested. Leave durable progress with a clear next action, use child issues instead of polling for long or parallel work, and mark blocked work with the unblock owner/action.",
"",
`- reason: ${normalized.reason ?? "unknown"}`,
`- issue: ${normalized.issue?.identifier ?? normalized.issue?.id ?? "unknown"}${normalized.issue?.title ? ` ${normalized.issue.title}` : ""}`,
`- pending comments: ${normalized.includedCount}/${normalized.requestedCount}`,
@@ -421,6 +581,8 @@ export function renderPaperclipWakePrompt(
"Use this inline wake data first before refetching the issue thread.",
"Only fetch the API thread when `fallbackFetchNeeded` is true or you need broader history than this batch.",
"",
"Execution contract: take concrete action in this heartbeat when the issue is actionable; do not stop at a plan unless planning was requested. Leave durable progress with a clear next action, use child issues instead of polling for long or parallel work, and mark blocked work with the unblock owner/action.",
"",
`- reason: ${normalized.reason ?? "unknown"}`,
`- issue: ${normalized.issue?.identifier ?? normalized.issue?.id ?? "unknown"}${normalized.issue?.title ? ` ${normalized.issue.title}` : ""}`,
`- pending comments: ${normalized.includedCount}/${normalized.requestedCount}`,
@@ -437,6 +599,18 @@ export function renderPaperclipWakePrompt(
if (normalized.checkedOutByHarness) {
lines.push("- checkout: already claimed by the harness for this run");
}
if (normalized.dependencyBlockedInteraction) {
lines.push("- dependency-blocked interaction: yes");
lines.push("- execution scope: respond or triage the human comment; do not treat blocker-dependent deliverable work as unblocked");
if (normalized.unresolvedBlockerSummaries.length > 0) {
const blockers = normalized.unresolvedBlockerSummaries
.map((blocker) => `${blocker.identifier ?? blocker.id ?? "unknown"}${blocker.title ? ` ${blocker.title}` : ""}${blocker.status ? ` (${blocker.status})` : ""}`)
.join("; ");
lines.push(`- unresolved blockers: ${blockers}`);
} else if (normalized.unresolvedBlockerIssueIds.length > 0) {
lines.push(`- unresolved blocker issue ids: ${normalized.unresolvedBlockerIssueIds.join(", ")}`);
}
}
if (normalized.missingCount > 0) {
lines.push(`- omitted comments: ${normalized.missingCount}`);
}
@@ -470,6 +644,55 @@ export function renderPaperclipWakePrompt(
}
}
if (normalized.continuationSummary) {
lines.push(
"",
"Issue continuation summary:",
normalized.continuationSummary.body,
);
if (normalized.continuationSummary.bodyTruncated) {
lines.push("[continuation summary truncated]");
}
}
if (normalized.livenessContinuation) {
const continuation = normalized.livenessContinuation;
lines.push("", "Run liveness continuation:");
if (continuation.attempt) {
lines.push(
`- attempt: ${continuation.attempt}${continuation.maxAttempts ? `/${continuation.maxAttempts}` : ""}`,
);
}
if (continuation.sourceRunId) {
lines.push(`- source run: ${continuation.sourceRunId}`);
}
if (continuation.state) {
lines.push(`- liveness state: ${continuation.state}`);
}
if (continuation.reason) {
lines.push(`- reason: ${continuation.reason}`);
}
if (continuation.instruction) {
lines.push(`- instruction: ${continuation.instruction}`);
}
}
if (normalized.childIssueSummaries.length > 0) {
lines.push("", "Direct child issue summaries:");
for (const child of normalized.childIssueSummaries) {
const label = child.identifier ?? child.id ?? "unknown";
lines.push(
`- ${label}${child.title ? ` ${child.title}` : ""}${child.status ? ` (${child.status})` : ""}`,
);
if (child.summary) {
lines.push(` ${child.summary}`);
}
}
if (normalized.childIssueSummaryTruncated) {
lines.push("[child issue summaries truncated]");
}
}
if (normalized.checkedOutByHarness) {
lines.push(
"",
@@ -1072,6 +1295,7 @@ export async function runChildProcess(
onLog: (stream: "stdout" | "stderr", chunk: string) => Promise<void>;
onLogError?: (err: unknown, runId: string, message: string) => void;
onSpawn?: (meta: { pid: number; processGroupId: number | null; startedAt: string }) => Promise<void>;
terminalResultCleanup?: TerminalResultCleanupOptions;
stdin?: string;
},
): Promise<RunProcessResult> {
@@ -1121,11 +1345,60 @@ export async function runChildProcess(
let stdout = "";
let stderr = "";
let logChain: Promise<void> = Promise.resolve();
let terminalResultSeen = false;
let terminalCleanupStarted = false;
let terminalCleanupTimer: NodeJS.Timeout | null = null;
let terminalCleanupKillTimer: NodeJS.Timeout | null = null;
let terminalResultStdoutScanOffset = 0;
let terminalResultStderrScanOffset = 0;
const clearTerminalCleanupTimers = () => {
if (terminalCleanupTimer) clearTimeout(terminalCleanupTimer);
if (terminalCleanupKillTimer) clearTimeout(terminalCleanupKillTimer);
terminalCleanupTimer = null;
terminalCleanupKillTimer = null;
};
const maybeArmTerminalResultCleanup = () => {
const terminalCleanup = opts.terminalResultCleanup;
if (!terminalCleanup || terminalCleanupStarted || timedOut) return;
if (!terminalResultSeen) {
const stdoutStart = Math.max(0, terminalResultStdoutScanOffset - TERMINAL_RESULT_SCAN_OVERLAP_CHARS);
const stderrStart = Math.max(0, terminalResultStderrScanOffset - TERMINAL_RESULT_SCAN_OVERLAP_CHARS);
const scanOutput = {
stdout: stdout.slice(stdoutStart),
stderr: stderr.slice(stderrStart),
};
terminalResultStdoutScanOffset = stdout.length;
terminalResultStderrScanOffset = stderr.length;
if (scanOutput.stdout.length === 0 && scanOutput.stderr.length === 0) return;
try {
terminalResultSeen = terminalCleanup.hasTerminalResult(scanOutput);
} catch (err) {
onLogError(err, runId, "failed to inspect terminal adapter output");
}
}
if (!terminalResultSeen) return;
if (terminalCleanupTimer) return;
const graceMs = Math.max(0, terminalCleanup.graceMs ?? 5_000);
terminalCleanupTimer = setTimeout(() => {
terminalCleanupTimer = null;
if (terminalCleanupStarted || timedOut) return;
terminalCleanupStarted = true;
signalRunningProcess({ child, processGroupId }, "SIGTERM");
terminalCleanupKillTimer = setTimeout(() => {
terminalCleanupKillTimer = null;
signalRunningProcess({ child, processGroupId }, "SIGKILL");
}, Math.max(1, opts.graceSec) * 1000);
}, graceMs);
};
const timeout =
opts.timeoutSec > 0
? setTimeout(() => {
timedOut = true;
clearTerminalCleanupTimers();
signalRunningProcess({ child, processGroupId }, "SIGTERM");
setTimeout(() => {
signalRunningProcess({ child, processGroupId }, "SIGKILL");
@@ -1134,19 +1407,35 @@ export async function runChildProcess(
: null;
child.stdout?.on("data", (chunk: unknown) => {
const readable = child.stdout;
if (!readable) return;
readable.pause();
const text = String(chunk);
stdout = appendWithCap(stdout, text);
maybeArmTerminalResultCleanup();
logChain = logChain
.then(() => opts.onLog("stdout", text))
.catch((err) => onLogError(err, runId, "failed to append stdout log chunk"));
.catch((err) => onLogError(err, runId, "failed to append stdout log chunk"))
.finally(() => {
maybeArmTerminalResultCleanup();
resumeReadable(readable);
});
});
child.stderr?.on("data", (chunk: unknown) => {
const readable = child.stderr;
if (!readable) return;
readable.pause();
const text = String(chunk);
stderr = appendWithCap(stderr, text);
maybeArmTerminalResultCleanup();
logChain = logChain
.then(() => opts.onLog("stderr", text))
.catch((err) => onLogError(err, runId, "failed to append stderr log chunk"));
.catch((err) => onLogError(err, runId, "failed to append stderr log chunk"))
.finally(() => {
maybeArmTerminalResultCleanup();
resumeReadable(readable);
});
});
const stdin = child.stdin;
@@ -1160,6 +1449,7 @@ export async function runChildProcess(
child.on("error", (err: Error) => {
if (timeout) clearTimeout(timeout);
clearTerminalCleanupTimers();
runningProcesses.delete(runId);
const errno = (err as NodeJS.ErrnoException).code;
const pathValue = mergedEnv.PATH ?? mergedEnv.Path ?? "";
@@ -1170,8 +1460,13 @@ export async function runChildProcess(
reject(new Error(msg));
});
child.on("exit", () => {
maybeArmTerminalResultCleanup();
});
child.on("close", (code: number | null, signal: NodeJS.Signals | null) => {
if (timeout) clearTimeout(timeout);
clearTerminalCleanupTimers();
runningProcesses.delete(runId);
void logChain.finally(() => {
resolve({

View File
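The `terminalResultCleanup` path above rescans only new output on each chunk, keeping a per-stream offset and backing up by `TERMINAL_RESULT_SCAN_OVERLAP_CHARS` so a terminal marker split across two chunks is still detected. A minimal sketch of that incremental scan, under the assumption that the caller re-feeds the returned offset on the next chunk (names here are illustrative):

```typescript
// Incremental marker scan with overlap: rescan a small tail of already-seen
// output so a marker straddling a chunk boundary is not missed.
function scanForTerminalResult(
  stdout: string,
  offset: number,
  overlapChars: number,
  hasTerminalResult: (slice: string) => boolean,
): { seen: boolean; nextOffset: number } {
  const start = Math.max(0, offset - overlapChars);
  const slice = stdout.slice(start);
  return {
    seen: slice.length > 0 && hasTerminalResult(slice),
    nextOffset: stdout.length,
  };
}
```

With an overlap at least as long as the marker, a result line that arrives as `…"type":"res` followed by `ult"}…` is found on the second scan even though neither chunk contains it whole.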

@@ -21,6 +21,7 @@ import {
renderTemplate,
renderPaperclipWakePrompt,
stringifyPaperclipWakePayload,
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
runChildProcess,
} from "@paperclipai/adapter-utils/server-utils";
import {
@@ -300,7 +301,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const promptTemplate = asString(
config.promptTemplate,
"You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work.",
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
);
const model = asString(config.model, "");
const effort = asString(config.effort, "");
@@ -329,6 +330,10 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
graceSec,
extraArgs,
} = runtimeConfig;
const terminalResultCleanupGraceMs = Math.max(
0,
asNumber(config.terminalResultCleanupGraceMs, 5_000),
);
const effectiveEnv = Object.fromEntries(
Object.entries({ ...process.env, ...env }).filter(
(entry): entry is [string, string] => typeof entry[1] === "string",
@@ -502,6 +507,10 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
graceSec,
onSpawn,
onLog,
terminalResultCleanup: {
graceMs: terminalResultCleanupGraceMs,
hasTerminalResult: ({ stdout }) => parseClaudeStreamJson(stdout).resultJson !== null,
},
});
const parsedStream = parseClaudeStreamJson(proc.stdout);

View File

@@ -18,10 +18,15 @@ import {
renderTemplate,
renderPaperclipWakePrompt,
stringifyPaperclipWakePayload,
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
joinPromptSections,
runChildProcess,
} from "@paperclipai/adapter-utils/server-utils";
import { parseCodexJsonl, isCodexUnknownSessionError } from "./parse.js";
import {
parseCodexJsonl,
isCodexTransientUpstreamError,
isCodexUnknownSessionError,
} from "./parse.js";
import { pathExists, prepareManagedCodexHome, resolveManagedCodexHomeDir, resolveSharedCodexHomeDir } from "./codex-home.js";
import { resolveCodexDesiredSkillNames } from "./skills.js";
import { buildCodexExecArgs } from "./codex-args.js";
@@ -148,6 +153,52 @@ type EnsureCodexSkillsInjectedOptions = {
linkSkill?: (source: string, target: string) => Promise<void>;
};
type CodexTransientFallbackMode =
| "same_session"
| "safer_invocation"
| "fresh_session"
| "fresh_session_safer_invocation";
function readCodexTransientFallbackMode(context: Record<string, unknown>): CodexTransientFallbackMode | null {
const value = asString(context.codexTransientFallbackMode, "").trim();
switch (value) {
case "same_session":
case "safer_invocation":
case "fresh_session":
case "fresh_session_safer_invocation":
return value;
default:
return null;
}
}
function fallbackModeUsesSaferInvocation(mode: CodexTransientFallbackMode | null): boolean {
return mode === "safer_invocation" || mode === "fresh_session_safer_invocation";
}
function fallbackModeUsesFreshSession(mode: CodexTransientFallbackMode | null): boolean {
return mode === "fresh_session" || mode === "fresh_session_safer_invocation";
}
function buildCodexTransientHandoffNote(input: {
previousSessionId: string | null;
fallbackMode: CodexTransientFallbackMode;
continuationSummaryBody: string | null;
}): string {
return [
"Paperclip session handoff:",
input.previousSessionId ? `- Previous session: ${input.previousSessionId}` : "",
"- Rotation reason: repeated Codex transient remote-compaction failures",
`- Fallback mode: ${input.fallbackMode}`,
input.continuationSummaryBody
? `- Issue continuation summary: ${input.continuationSummaryBody.slice(0, 1_500)}`
: "",
"Continue from the current task state. Rebuild only the minimum context you need.",
]
.filter(Boolean)
.join("\n");
}
export async function ensureCodexSkillsInjected(
onLog: AdapterExecutionContext["onLog"],
options: EnsureCodexSkillsInjectedOptions = {},
@@ -218,7 +269,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const promptTemplate = asString(
config.promptTemplate,
"You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work.",
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
);
const command = asString(config.command, "codex");
const model = asString(config.model, "");
@@ -396,7 +447,10 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const canResumeSession =
runtimeSessionId.length > 0 &&
(runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(cwd));
const sessionId = canResumeSession ? runtimeSessionId : null;
const codexTransientFallbackMode = readCodexTransientFallbackMode(context);
const forceSaferInvocation = fallbackModeUsesSaferInvocation(codexTransientFallbackMode);
const forceFreshSession = fallbackModeUsesFreshSession(codexTransientFallbackMode);
const sessionId = canResumeSession && !forceFreshSession ? runtimeSessionId : null;
if (runtimeSessionId && !canResumeSession) {
await onLog(
"stdout",
@@ -443,28 +497,66 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const shouldUseResumeDeltaPrompt = Boolean(sessionId) && wakePrompt.length > 0;
const promptInstructionsPrefix = shouldUseResumeDeltaPrompt ? "" : instructionsPrefix;
instructionsChars = promptInstructionsPrefix.length;
const continuationSummary = parseObject(context.paperclipContinuationSummary);
const continuationSummaryBody = asString(continuationSummary.body, "").trim() || null;
const codexFallbackHandoffNote =
forceFreshSession
? buildCodexTransientHandoffNote({
previousSessionId: runtimeSessionId || runtime.sessionId || null,
fallbackMode: codexTransientFallbackMode ?? "fresh_session",
continuationSummaryBody,
})
: "";
const commandNotes = (() => {
if (!instructionsFilePath) {
return [repoAgentsNote];
const notes = [repoAgentsNote];
if (forceSaferInvocation) {
notes.push("Codex transient fallback requested safer invocation settings for this retry.");
}
if (forceFreshSession) {
notes.push("Codex transient fallback forced a fresh session with a continuation handoff.");
}
return notes;
}
if (instructionsPrefix.length > 0) {
if (shouldUseResumeDeltaPrompt) {
return [
const notes = [
`Loaded agent instructions from ${instructionsFilePath}`,
"Skipped stdin instruction reinjection because an existing Codex session is being resumed with a wake delta.",
repoAgentsNote,
];
if (forceSaferInvocation) {
notes.push("Codex transient fallback requested safer invocation settings for this retry.");
}
if (forceFreshSession) {
notes.push("Codex transient fallback forced a fresh session with a continuation handoff.");
}
return notes;
}
return [
const notes = [
`Loaded agent instructions from ${instructionsFilePath}`,
`Prepended instructions + path directive to stdin prompt (relative references from ${instructionsDir}).`,
repoAgentsNote,
];
if (forceSaferInvocation) {
notes.push("Codex transient fallback requested safer invocation settings for this retry.");
}
if (forceFreshSession) {
notes.push("Codex transient fallback forced a fresh session with a continuation handoff.");
}
return notes;
}
return [
const notes = [
`Configured instructionsFilePath ${instructionsFilePath}, but file could not be read; continuing without injected instructions.`,
repoAgentsNote,
];
if (forceSaferInvocation) {
notes.push("Codex transient fallback requested safer invocation settings for this retry.");
}
if (forceFreshSession) {
notes.push("Codex transient fallback forced a fresh session with a continuation handoff.");
}
return notes;
})();
const renderedPrompt = shouldUseResumeDeltaPrompt ? "" : renderTemplate(promptTemplate, templateData);
const sessionHandoffNote = asString(context.paperclipSessionHandoffMarkdown, "").trim();
@@ -472,6 +564,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
promptInstructionsPrefix,
renderedBootstrapPrompt,
wakePrompt,
codexFallbackHandoffNote,
sessionHandoffNote,
renderedPrompt,
]);
@@ -485,7 +578,10 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
};
const runAttempt = async (resumeSessionId: string | null) => {
const execArgs = buildCodexExecArgs(config, { resumeSessionId });
const execArgs = buildCodexExecArgs(
forceSaferInvocation ? { ...config, fastMode: false } : config,
{ resumeSessionId },
);
const args = execArgs.args;
const commandNotesWithFastMode =
execArgs.fastModeIgnoredReason == null
@@ -539,6 +635,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const toResult = (
attempt: { proc: { exitCode: number | null; signal: string | null; timedOut: boolean; stdout: string; stderr: string }; rawStderr: string; parsed: ReturnType<typeof parseCodexJsonl> },
clearSessionOnMissingSession = false,
isRetry = false,
): AdapterExecutionResult => {
if (attempt.proc.timedOut) {
return {
@@ -550,7 +647,10 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
};
}
const resolvedSessionId = attempt.parsed.sessionId ?? runtimeSessionId ?? runtime.sessionId ?? null;
const canFallbackToRuntimeSession = !isRetry && !forceFreshSession;
const resolvedSessionId =
attempt.parsed.sessionId ??
(canFallbackToRuntimeSession ? (runtimeSessionId ?? runtime.sessionId ?? null) : null);
const resolvedSessionParams = resolvedSessionId
? ({
sessionId: resolvedSessionId,
@@ -575,6 +675,15 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
(attempt.proc.exitCode ?? 0) === 0
? null
: fallbackErrorMessage,
errorCode:
(attempt.proc.exitCode ?? 0) !== 0 &&
isCodexTransientUpstreamError({
stdout: attempt.proc.stdout,
stderr: attempt.proc.stderr,
errorMessage: fallbackErrorMessage,
})
? "codex_transient_upstream"
: null,
usage: attempt.parsed.usage,
sessionId: resolvedSessionId,
sessionParams: resolvedSessionParams,
@@ -589,7 +698,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
stderr: attempt.proc.stderr,
},
summary: attempt.parsed.summary,
clearSession: Boolean(clearSessionOnMissingSession && !resolvedSessionId),
clearSession: Boolean((clearSessionOnMissingSession || forceFreshSession) && !resolvedSessionId),
};
};
@@ -605,8 +714,8 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
`[paperclip] Codex resume session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
);
const retry = await runAttempt(null);
return toResult(retry, true);
return toResult(retry, true, true);
}
return toResult(initial);
return toResult(initial, false, false);
}


@@ -1,7 +1,7 @@
export { execute, ensureCodexSkillsInjected } from "./execute.js";
export { listCodexSkills, syncCodexSkills } from "./skills.js";
export { testEnvironment } from "./test.js";
export { parseCodexJsonl, isCodexUnknownSessionError } from "./parse.js";
export { parseCodexJsonl, isCodexTransientUpstreamError, isCodexUnknownSessionError } from "./parse.js";
export {
getQuotaWindows,
readCodexAuthInfo,


@@ -1,5 +1,9 @@
import { describe, expect, it } from "vitest";
import { isCodexUnknownSessionError, parseCodexJsonl } from "./parse.js";
import {
isCodexTransientUpstreamError,
isCodexUnknownSessionError,
parseCodexJsonl,
} from "./parse.js";
describe("parseCodexJsonl", () => {
it("captures session id, assistant summary, usage, and error message", () => {
@@ -81,3 +85,36 @@ describe("isCodexUnknownSessionError", () => {
expect(isCodexUnknownSessionError("", "model overloaded")).toBe(false);
});
});
describe("isCodexTransientUpstreamError", () => {
it("classifies the remote-compaction high-demand failure as transient upstream", () => {
expect(
isCodexTransientUpstreamError({
errorMessage:
"Error running remote compact task: We're currently experiencing high demand, which may cause temporary errors.",
}),
).toBe(true);
expect(
isCodexTransientUpstreamError({
stderr: "We're currently experiencing high demand, which may cause temporary errors.",
}),
).toBe(true);
});
it("does not classify deterministic compaction errors as transient", () => {
expect(
isCodexTransientUpstreamError({
errorMessage: [
"Error running remote compact task: {",
' "error": {',
' "message": "Unknown parameter: \'prompt_cache_retention\'.",',
' "type": "invalid_request_error",',
' "param": "prompt_cache_retention",',
' "code": "unknown_parameter"',
" }",
"}",
].join("\n"),
}),
).toBe(false);
});
});


@@ -1,5 +1,9 @@
import { asString, asNumber, parseObject, parseJson } from "@paperclipai/adapter-utils/server-utils";
const CODEX_TRANSIENT_UPSTREAM_RE =
/(?:we(?:'|)re\s+currently\s+experiencing\s+high\s+demand|temporary\s+errors|rate[-\s]?limit(?:ed)?|too\s+many\s+requests|\b429\b|server\s+overloaded|service\s+unavailable|try\s+again\s+later)/i;
const CODEX_REMOTE_COMPACTION_RE = /remote\s+compact\s+task/i;
export function parseCodexJsonl(stdout: string) {
let sessionId: string | null = null;
let finalMessage: string | null = null;
@@ -71,3 +75,25 @@ export function isCodexUnknownSessionError(stdout: string, stderr: string): bool
haystack,
);
}
export function isCodexTransientUpstreamError(input: {
stdout?: string | null;
stderr?: string | null;
errorMessage?: string | null;
}): boolean {
const haystack = [
input.errorMessage ?? "",
input.stdout ?? "",
input.stderr ?? "",
]
.join("\n")
.split(/\r?\n/)
.map((line) => line.trim())
.filter(Boolean)
.join("\n");
if (!CODEX_TRANSIENT_UPSTREAM_RE.test(haystack)) return false;
// Keep automatic retries scoped to the observed remote-compaction/high-demand
// failure shape; broader 429s may be caused by user or account limits.
return CODEX_REMOTE_COMPACTION_RE.test(haystack) || /high\s+demand|temporary\s+errors/i.test(haystack);
}
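A minimal standalone sketch of the two-stage classification above — first a broad transient-signal match, then the narrowing to the observed high-demand/remote-compaction shape. Names here (`isTransientUpstream`, `TRANSIENT_RE`) are illustrative stand-ins, not the adapter's exports; the regexes are copied from the diff.

```typescript
// Illustrative sketch of the classifier; regexes copied from the diff above.
const TRANSIENT_RE =
  /(?:we(?:'|)re\s+currently\s+experiencing\s+high\s+demand|temporary\s+errors|rate[-\s]?limit(?:ed)?|too\s+many\s+requests|\b429\b|server\s+overloaded|service\s+unavailable|try\s+again\s+later)/i;
const REMOTE_COMPACTION_RE = /remote\s+compact\s+task/i;

function isTransientUpstream(text: string): boolean {
  // Stage 1: any generic transient-upstream signal at all?
  if (!TRANSIENT_RE.test(text)) return false;
  // Stage 2: only auto-retry the observed high-demand/remote-compaction
  // shape; a bare 429 may reflect account limits rather than upstream load.
  return REMOTE_COMPACTION_RE.test(text) || /high\s+demand|temporary\s+errors/i.test(text);
}

console.log(isTransientUpstream(
  "We're currently experiencing high demand, which may cause temporary errors.",
)); // true
console.log(isTransientUpstream("Unknown parameter: 'prompt_cache_retention'.")); // false
```

Note that a generic `429 Too Many Requests` passes stage 1 but fails stage 2, so it is deliberately left non-retryable.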


@@ -21,6 +21,7 @@ import {
renderTemplate,
renderPaperclipWakePrompt,
stringifyPaperclipWakePayload,
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
joinPromptSections,
runChildProcess,
} from "@paperclipai/adapter-utils/server-utils";
@@ -164,7 +165,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const promptTemplate = asString(
config.promptTemplate,
"You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work.",
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
);
const command = asString(config.command, "agent");
const model = asString(config.model, DEFAULT_CURSOR_LOCAL_MODEL).trim();


@@ -24,6 +24,7 @@ import {
renderTemplate,
renderPaperclipWakePrompt,
stringifyPaperclipWakePayload,
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
runChildProcess,
} from "@paperclipai/adapter-utils/server-utils";
import { DEFAULT_GEMINI_LOCAL_MODEL } from "../index.js";
@@ -140,7 +141,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const promptTemplate = asString(
config.promptTemplate,
"You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work.",
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
);
const command = asString(config.command, "gemini");
const model = asString(config.model, DEFAULT_GEMINI_LOCAL_MODEL).trim();


@@ -420,7 +420,9 @@ function buildWakeText(
" - POST /api/issues/{issueId}/checkout with {\"agentId\":\"$PAPERCLIP_AGENT_ID\",\"expectedStatuses\":[\"todo\",\"backlog\",\"blocked\",\"in_review\"]}",
" - GET /api/issues/{issueId}",
" - GET /api/issues/{issueId}/comments",
" - Execute the issue instructions exactly.",
" - Execute the issue instructions exactly. If the issue is actionable, take concrete action in this run; do not stop at a plan unless planning was requested.",
" - Leave durable progress with a clear next action. Use child issues for long or parallel delegated work instead of polling agents, sessions, or processes.",
" - If blocked, PATCH /api/issues/{issueId} with {\"status\":\"blocked\",\"comment\":\"what is blocked, who owns the unblock, and the next action\"}.",
" - If instructions require a comment, POST /api/issues/{issueId}/comments with {\"body\":\"...\"}.",
" - PATCH /api/issues/{issueId} with {\"status\":\"done\",\"comment\":\"what changed and why\"}.",
"4) If issueId does not exist:",


@@ -19,6 +19,7 @@ import {
renderTemplate,
renderPaperclipWakePrompt,
stringifyPaperclipWakePayload,
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
runChildProcess,
readPaperclipRuntimeSkillEntries,
resolvePaperclipDesiredSkillNames,
@@ -97,7 +98,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const promptTemplate = asString(
config.promptTemplate,
"You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work.",
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
);
const command = asString(config.command, "opencode");
const model = asString(config.model, "").trim();


@@ -22,6 +22,7 @@ import {
renderTemplate,
renderPaperclipWakePrompt,
stringifyPaperclipWakePayload,
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
runChildProcess,
} from "@paperclipai/adapter-utils/server-utils";
import { isPiUnknownSessionError, parsePiJsonl } from "./parse.js";
@@ -113,7 +114,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
const promptTemplate = asString(
config.promptTemplate,
"You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work.",
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE,
);
const command = asString(config.command, "pi");
const model = asString(config.model, "").trim();
@@ -276,7 +277,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
`${instructionsContents}\n\n` +
`The above agent instructions were loaded from ${resolvedInstructionsFilePath}. ` +
`Resolve any relative file references from ${instructionsFileDir}.\n\n` +
`You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work.`;
DEFAULT_PAPERCLIP_AGENT_PROMPT_TEMPLATE;
} catch (err) {
instructionsReadFailed = true;
const reason = err instanceof Error ? err.message : String(err);


@@ -0,0 +1,84 @@
import { createHash, randomBytes } from "node:crypto";
import { readFileSync } from "node:fs";
import path from "node:path";
import { and, eq, gt, isNull } from "drizzle-orm";
import { createDb } from "../src/client.js";
import { invites } from "../src/schema/index.js";
function hashToken(token: string) {
return createHash("sha256").update(token).digest("hex");
}
function createInviteToken() {
return `pcp_bootstrap_${randomBytes(24).toString("hex")}`;
}
function readArg(flag: string) {
const index = process.argv.indexOf(flag);
if (index === -1) return null;
return process.argv[index + 1] ?? null;
}
async function main() {
const configPath = readArg("--config");
const baseUrl = readArg("--base-url");
if (!configPath || !baseUrl) {
throw new Error("Usage: tsx create-auth-bootstrap-invite.ts --config <path> --base-url <url>");
}
const config = JSON.parse(readFileSync(path.resolve(configPath), "utf8")) as {
database?: {
mode?: string;
embeddedPostgresPort?: number;
connectionString?: string;
};
};
const dbUrl =
config.database?.mode === "postgres"
? config.database.connectionString
: `postgres://paperclip:paperclip@127.0.0.1:${config.database?.embeddedPostgresPort ?? 54329}/paperclip`;
if (!dbUrl) {
throw new Error(`Could not resolve database connection from ${configPath}`);
}
const db = createDb(dbUrl);
const closableDb = db as typeof db & {
$client?: {
end?: (options?: { timeout?: number }) => Promise<void>;
};
};
try {
const now = new Date();
await db
.update(invites)
.set({ revokedAt: now, updatedAt: now })
.where(
and(
eq(invites.inviteType, "bootstrap_ceo"),
isNull(invites.revokedAt),
isNull(invites.acceptedAt),
gt(invites.expiresAt, now)
)
);
const token = createInviteToken();
await db.insert(invites).values({
inviteType: "bootstrap_ceo",
tokenHash: hashToken(token),
allowedJoinTypes: "human",
expiresAt: new Date(Date.now() + 72 * 60 * 60 * 1000),
invitedByUserId: "system",
});
process.stdout.write(`${baseUrl.replace(/\/+$/, "")}/invite/${token}\n`);
} finally {
await closableDb.$client?.end?.({ timeout: 5 }).catch(() => undefined);
}
}
main().catch((error) => {
process.stderr.write(`${error instanceof Error ? error.stack ?? error.message : String(error)}\n`);
process.exit(1);
});


@@ -127,6 +127,7 @@ describeEmbeddedPostgres("runDatabaseBackup", () => {
backupDir,
retention: { dailyDays: 7, weeklyWeeks: 4, monthlyMonths: 1 },
filenamePrefix: "paperclip-test",
backupEngine: "javascript",
});
expect(result.backupFile).toMatch(/paperclip-test-.*\.sql\.gz$/);
@@ -148,14 +149,17 @@ describeEmbeddedPostgres("runDatabaseBackup", () => {
title: string;
payload: string;
state: string;
metadata: { index: number; even: boolean };
metadata: { index: number; even: boolean } | string;
}[]>(`
SELECT "title", "payload", "state"::text AS "state", "metadata"
FROM "public"."backup_test_records"
WHERE "title" IN ('row-0', 'row-159')
ORDER BY "title"
`);
expect(sampleRows).toEqual([
expect(sampleRows.map((row) => ({
...row,
metadata: typeof row.metadata === "string" ? JSON.parse(row.metadata) : row.metadata,
}))).toEqual([
{
title: "row-0",
payload,


@@ -1,6 +1,8 @@
import { createReadStream, createWriteStream, existsSync, mkdirSync, readdirSync, statSync, unlinkSync } from "node:fs";
import { basename, resolve } from "node:path";
import { createInterface } from "node:readline";
import { spawn } from "node:child_process";
import { open as openFile } from "node:fs/promises";
import { pipeline } from "node:stream/promises";
import { createGunzip, createGzip } from "node:zlib";
import postgres from "postgres";
@@ -20,6 +22,7 @@ export type RunDatabaseBackupOptions = {
includeMigrationJournal?: boolean;
excludeTables?: string[];
nullifyColumns?: Record<string, string[]>;
backupEngine?: "auto" | "pg_dump" | "javascript";
};
export type RunDatabaseBackupResult = {
@@ -61,6 +64,9 @@ type ExtensionDefinition = {
const DRIZZLE_SCHEMA = "drizzle";
const DRIZZLE_MIGRATIONS_TABLE = "__drizzle_migrations";
const DEFAULT_BACKUP_WRITE_BUFFER_BYTES = 1024 * 1024;
const BACKUP_DATA_CURSOR_ROWS = 100;
const BACKUP_CLI_STDERR_BYTES = 64 * 1024;
const BACKUP_BREAKPOINT_DETECT_BYTES = 64 * 1024;
const STATEMENT_BREAKPOINT = "-- paperclip statement breakpoint 69f6f3f1-42fd-46a6-bf17-d1d85f8f3900";
@@ -223,6 +229,134 @@ function tableKey(schemaName: string, tableName: string): string {
return `${schemaName}.${tableName}`;
}
function hasBackupTransforms(opts: RunDatabaseBackupOptions): boolean {
return opts.includeMigrationJournal === true ||
(opts.excludeTables?.length ?? 0) > 0 ||
Object.keys(opts.nullifyColumns ?? {}).length > 0;
}
function formatSqlValue(rawValue: unknown, columnName: string | undefined, nullifiedColumns: Set<string>): string {
const val = columnName && nullifiedColumns.has(columnName) ? null : rawValue;
if (val === null || val === undefined) return "NULL";
if (typeof val === "boolean") return val ? "true" : "false";
if (typeof val === "number") return String(val);
if (val instanceof Date) return formatSqlLiteral(val.toISOString());
if (typeof val === "object") return formatSqlLiteral(JSON.stringify(val));
return formatSqlLiteral(String(val));
}
function appendCapturedStderr(previous: string, chunk: Buffer | string): string {
const next = previous + (Buffer.isBuffer(chunk) ? chunk.toString("utf8") : chunk);
if (Buffer.byteLength(next, "utf8") <= BACKUP_CLI_STDERR_BYTES) return next;
return Buffer.from(next, "utf8").subarray(-BACKUP_CLI_STDERR_BYTES).toString("utf8");
}
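The helper above bounds captured CLI stderr to the trailing `BACKUP_CLI_STDERR_BYTES`. A small self-contained sketch of the same tail-truncation behavior, with the limit shrunk to 9 bytes for the demo (`appendBounded` and `LIMIT` are illustrative names, not the backup module's exports):

```typescript
// Keep only the trailing LIMIT bytes of accumulated output (demo-sized limit).
const LIMIT = 9; // stand-in for BACKUP_CLI_STDERR_BYTES (64 KiB in the diff)

function appendBounded(previous: string, chunk: Buffer | string): string {
  const next = previous + (Buffer.isBuffer(chunk) ? chunk.toString("utf8") : chunk);
  if (Buffer.byteLength(next, "utf8") <= LIMIT) return next;
  // Truncate from the front: the most recent bytes carry the actual error.
  return Buffer.from(next, "utf8").subarray(-LIMIT).toString("utf8");
}

let captured = "";
for (const chunk of ["error: ", "disk full"]) captured = appendBounded(captured, chunk);
console.log(captured); // "disk full" — only the last 9 bytes survive
```

Truncating byte-wise via `Buffer` rather than `String.slice` keeps the bound exact for multi-byte UTF-8 output, at the cost of possibly splitting a character at the cut point.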
async function waitForChildExit(child: ReturnType<typeof spawn>, label: string): Promise<void> {
let stderr = "";
child.stderr?.on("data", (chunk) => {
stderr = appendCapturedStderr(stderr, chunk);
});
const result = await new Promise<{ code: number | null; signal: NodeJS.Signals | null }>((resolve, reject) => {
child.once("error", reject);
child.once("exit", (code, signal) => resolve({ code, signal }));
});
if (result.signal) {
throw new Error(`${label} exited via ${result.signal}${stderr.trim() ? `: ${stderr.trim()}` : ""}`);
}
if (result.code !== 0) {
throw new Error(`${label} failed with exit code ${result.code ?? "unknown"}${stderr.trim() ? `: ${stderr.trim()}` : ""}`);
}
}
async function runPgDumpBackup(opts: {
connectionString: string;
backupFile: string;
connectTimeout: number;
}): Promise<void> {
const pgDumpBin = process.env.PAPERCLIP_PG_DUMP_PATH || "pg_dump";
const child = spawn(
pgDumpBin,
[
`--dbname=${opts.connectionString}`,
"--format=plain",
"--clean",
"--if-exists",
"--no-owner",
"--no-privileges",
"--schema=public",
],
{
stdio: ["ignore", "pipe", "pipe"],
env: {
...process.env,
PGCONNECT_TIMEOUT: String(opts.connectTimeout),
},
},
);
if (!child.stdout) {
throw new Error("pg_dump did not expose stdout");
}
await Promise.all([
pipeline(child.stdout, createGzip(), createWriteStream(opts.backupFile)),
waitForChildExit(child, pgDumpBin),
]);
}
async function restoreWithPsql(opts: RunDatabaseRestoreOptions, connectTimeout: number): Promise<void> {
const psqlBin = process.env.PAPERCLIP_PSQL_PATH || "psql";
const child = spawn(
psqlBin,
[
`--dbname=${opts.connectionString}`,
"--set=ON_ERROR_STOP=1",
"--quiet",
"--no-psqlrc",
],
{
stdio: ["pipe", "ignore", "pipe"],
env: {
...process.env,
PGCONNECT_TIMEOUT: String(connectTimeout),
},
},
);
if (!child.stdin) {
throw new Error("psql did not expose stdin");
}
const input = opts.backupFile.endsWith(".gz")
? createReadStream(opts.backupFile).pipe(createGunzip())
: createReadStream(opts.backupFile);
await Promise.all([
pipeline(input, child.stdin),
waitForChildExit(child, psqlBin),
]);
}
async function hasStatementBreakpoints(backupFile: string): Promise<boolean> {
const raw = createReadStream(backupFile);
const stream = backupFile.endsWith(".gz") ? raw.pipe(createGunzip()) : raw;
let text = "";
try {
for await (const chunk of stream) {
text += Buffer.isBuffer(chunk) ? chunk.toString("utf8") : String(chunk);
if (text.includes(STATEMENT_BREAKPOINT)) return true;
if (Buffer.byteLength(text, "utf8") >= BACKUP_BREAKPOINT_DETECT_BYTES) return false;
}
return text.includes(STATEMENT_BREAKPOINT);
} finally {
stream.destroy();
raw.destroy();
}
}
async function* readRestoreStatements(backupFile: string): AsyncGenerator<string> {
const raw = createReadStream(backupFile);
const stream = backupFile.endsWith(".gz") ? raw.pipe(createGunzip()) : raw;
@@ -263,41 +397,21 @@ async function* readRestoreStatements(backupFile: string): AsyncGenerator<string
}
export function createBufferedTextFileWriter(filePath: string, maxBufferedBytes = DEFAULT_BACKUP_WRITE_BUFFER_BYTES) {
const stream = createWriteStream(filePath, { encoding: "utf8" });
const filePromise = openFile(filePath, "w");
const flushThreshold = Math.max(1, Math.trunc(maxBufferedBytes));
let bufferedLines: string[] = [];
let bufferedBytes = 0;
let firstChunk = true;
let closed = false;
let streamError: Error | null = null;
let pendingWrite = Promise.resolve();
stream.on("error", (error) => {
streamError = error;
});
const writeChunk = async (chunk: string): Promise<void> => {
if (streamError) throw streamError;
const canContinue = stream.write(chunk);
if (!canContinue) {
await new Promise<void>((resolve, reject) => {
const handleDrain = () => {
cleanup();
resolve();
};
const handleError = (error: Error) => {
cleanup();
reject(error);
};
const cleanup = () => {
stream.off("drain", handleDrain);
stream.off("error", handleError);
};
stream.once("drain", handleDrain);
stream.once("error", handleError);
});
const writeChunk = async (chunk: string | Buffer): Promise<void> => {
const file = await filePromise;
if (typeof chunk === "string") {
await file.write(chunk, null, "utf8");
} else {
await file.write(chunk);
}
if (streamError) throw streamError;
};
const flushBufferedLines = () => {
@@ -316,37 +430,43 @@ export function createBufferedTextFileWriter(filePath: string, maxBufferedBytes
if (closed) {
throw new Error(`Cannot write to closed backup file: ${filePath}`);
}
if (streamError) throw streamError;
bufferedLines.push(line);
bufferedBytes += Buffer.byteLength(line, "utf8") + 1;
if (bufferedBytes >= flushThreshold) {
flushBufferedLines();
}
},
async drain() {
if (closed) {
throw new Error(`Cannot drain closed backup file: ${filePath}`);
}
flushBufferedLines();
await pendingWrite;
},
async writeRaw(chunk: string | Buffer) {
if (closed) {
throw new Error(`Cannot write to closed backup file: ${filePath}`);
}
flushBufferedLines();
firstChunk = false;
pendingWrite = pendingWrite.then(() => writeChunk(chunk));
await pendingWrite;
},
async close() {
if (closed) return;
closed = true;
flushBufferedLines();
await pendingWrite;
await new Promise<void>((resolve, reject) => {
if (streamError) {
reject(streamError);
return;
}
stream.end((error?: Error | null) => {
if (error) reject(error);
else resolve();
});
});
if (streamError) throw streamError;
const file = await filePromise;
await file.close();
},
async abort() {
if (closed) return;
closed = true;
bufferedLines = [];
bufferedBytes = 0;
stream.destroy();
await pendingWrite.catch(() => {});
await filePromise.then((file) => file.close()).catch(() => {});
if (existsSync(filePath)) {
try {
unlinkSync(filePath);
@@ -362,16 +482,53 @@ export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise
const filenamePrefix = opts.filenamePrefix ?? "paperclip";
const retention = opts.retention;
const connectTimeout = Math.max(1, Math.trunc(opts.connectTimeoutSeconds ?? 5));
const backupEngine = opts.backupEngine ?? "auto";
const canUsePgDump = !hasBackupTransforms(opts);
const includeMigrationJournal = opts.includeMigrationJournal === true;
const excludedTableNames = normalizeTableNameSet(opts.excludeTables);
const nullifiedColumnsByTable = normalizeNullifyColumnMap(opts.nullifyColumns);
const sql = postgres(opts.connectionString, { max: 1, connect_timeout: connectTimeout });
let sql = postgres(opts.connectionString, { max: 1, connect_timeout: connectTimeout });
let sqlClosed = false;
const closeSql = async () => {
if (sqlClosed) return;
sqlClosed = true;
await sql.end();
};
mkdirSync(opts.backupDir, { recursive: true });
const sqlFile = resolve(opts.backupDir, `${filenamePrefix}-${timestamp()}.sql`);
const backupFile = `${sqlFile}.gz`;
const writer = createBufferedTextFileWriter(sqlFile);
try {
if (backupEngine === "pg_dump" || (backupEngine === "auto" && canUsePgDump)) {
await sql`SELECT 1`;
try {
await closeSql();
await runPgDumpBackup({
connectionString: opts.connectionString,
backupFile,
connectTimeout,
});
await writer.abort();
const sizeBytes = statSync(backupFile).size;
const prunedCount = pruneOldBackups(opts.backupDir, retention, filenamePrefix);
return {
backupFile,
sizeBytes,
prunedCount,
};
} catch (error) {
if (existsSync(backupFile)) {
try { unlinkSync(backupFile); } catch { /* ignore */ }
}
if (backupEngine === "pg_dump") {
throw error;
}
sql = postgres(opts.connectionString, { max: 1, connect_timeout: connectTimeout });
sqlClosed = false;
}
}
await sql`SELECT 1`;
const emit = (line: string) => writer.emit(line);
@@ -703,20 +860,39 @@ export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise
emit(`-- Data for: ${schema_name}.${tablename} (${count[0]!.n} rows)`);
const rows = await sql.unsafe(`SELECT * FROM ${qualifiedTableName}`).values();
const nullifiedColumns = nullifiedColumnsByTable.get(tablename) ?? new Set<string>();
for (const row of rows) {
const values = row.map((rawValue: unknown, index) => {
const columnName = cols[index]?.column_name;
const val = columnName && nullifiedColumns.has(columnName) ? null : rawValue;
if (val === null || val === undefined) return "NULL";
if (typeof val === "boolean") return val ? "true" : "false";
if (typeof val === "number") return String(val);
if (val instanceof Date) return formatSqlLiteral(val.toISOString());
if (typeof val === "object") return formatSqlLiteral(JSON.stringify(val));
return formatSqlLiteral(String(val));
});
emitStatement(`INSERT INTO ${qualifiedTableName} (${colNames}) VALUES (${values.join(", ")});`);
if (backupEngine !== "javascript" && nullifiedColumns.size === 0) {
emit(`COPY ${qualifiedTableName} (${colNames}) FROM stdin;`);
await writer.writeRaw("\n");
const copySql = postgres(opts.connectionString, { max: 1, connect_timeout: connectTimeout });
try {
const copyStream = await copySql
.unsafe(`COPY ${qualifiedTableName} (${colNames}) TO STDOUT`)
.readable();
for await (const chunk of copyStream) {
await writer.writeRaw(Buffer.isBuffer(chunk) ? chunk : Buffer.from(String(chunk)));
}
} finally {
await copySql.end();
}
await writer.writeRaw("\\.\n");
emitStatementBoundary();
emit("");
continue;
}
const rowCursor = sql
.unsafe(`SELECT * FROM ${qualifiedTableName}`)
.values()
.cursor(BACKUP_DATA_CURSOR_ROWS) as AsyncIterable<unknown[][]>;
for await (const rows of rowCursor) {
for (const row of rows) {
const values = row.map((rawValue, index) =>
formatSqlValue(rawValue, cols[index]?.column_name, nullifiedColumns),
);
emitStatement(`INSERT INTO ${qualifiedTableName} (${colNames}) VALUES (${values.join(", ")});`);
}
await writer.drain();
}
emit("");
}
@@ -768,12 +944,23 @@ export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise
}
throw error;
} finally {
await sql.end();
await closeSql();
}
}
export async function runDatabaseRestore(opts: RunDatabaseRestoreOptions): Promise<void> {
const connectTimeout = Math.max(1, Math.trunc(opts.connectTimeoutSeconds ?? 5));
try {
await restoreWithPsql(opts, connectTimeout);
return;
} catch (error) {
if (!(await hasStatementBreakpoints(opts.backupFile))) {
throw new Error(
`Failed to restore ${basename(opts.backupFile)} with psql: ${sanitizeRestoreErrorMessage(error)}`,
);
}
}
const sql = postgres(opts.connectionString, { max: 1, connect_timeout: connectTimeout });
try {

View File

@@ -467,4 +467,78 @@ describeEmbeddedPostgres("applyPendingMigrations", () => {
},
20_000,
);
it(
"replays migration 0059 safely when plugin_database_namespaces already exists",
async () => {
const connectionString = await createTempDatabase();
await applyPendingMigrations(connectionString);
const sql = postgres(connectionString, { max: 1, onnotice: () => {} });
try {
const pluginNamespacesHash = await migrationHash(
"0059_plugin_database_namespaces.sql",
);
await sql.unsafe(
`DELETE FROM "drizzle"."__drizzle_migrations" WHERE hash = '${pluginNamespacesHash}'`,
);
const tables = await sql.unsafe<{ table_name: string }[]>(
`
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'public'
AND table_name IN ('plugin_database_namespaces', 'plugin_migrations')
ORDER BY table_name
`,
);
expect(tables.map((row) => row.table_name)).toEqual([
"plugin_database_namespaces",
"plugin_migrations",
]);
} finally {
await sql.end();
}
const pendingState = await inspectMigrations(connectionString);
expect(pendingState).toMatchObject({
status: "needsMigrations",
pendingMigrations: ["0059_plugin_database_namespaces.sql"],
reason: "pending-migrations",
});
await applyPendingMigrations(connectionString);
const finalState = await inspectMigrations(connectionString);
expect(finalState.status).toBe("upToDate");
const verifySql = postgres(connectionString, { max: 1, onnotice: () => {} });
try {
const indexes = await verifySql.unsafe<{ indexname: string }[]>(
`
SELECT indexname
FROM pg_indexes
WHERE schemaname = 'public'
AND tablename IN ('plugin_database_namespaces', 'plugin_migrations')
ORDER BY indexname
`,
);
expect(indexes.map((row) => row.indexname)).toEqual(
expect.arrayContaining([
"plugin_database_namespaces_namespace_idx",
"plugin_database_namespaces_plugin_idx",
"plugin_database_namespaces_status_idx",
"plugin_migrations_plugin_idx",
"plugin_migrations_plugin_key_idx",
"plugin_migrations_status_idx",
]),
);
} finally {
await verifySql.end();
}
},
20_000,
);
});


@@ -31,4 +31,5 @@ export {
formatEmbeddedPostgresError,
} from "./embedded-postgres-error.js";
export { issueRelations } from "./schema/issue_relations.js";
export { issueReferenceMentions } from "./schema/issue_reference_mentions.js";
export * from "./schema/index.js";


@@ -0,0 +1,57 @@
WITH ranked_user_requests AS (
SELECT
id,
row_number() OVER (
PARTITION BY company_id, requesting_user_id
ORDER BY created_at ASC, id ASC
) AS rank
FROM join_requests
WHERE request_type = 'human'
AND status = 'pending_approval'
AND requesting_user_id IS NOT NULL
)
UPDATE join_requests
SET
status = 'rejected',
rejected_at = COALESCE(rejected_at, now()),
updated_at = now()
WHERE id IN (
SELECT id
FROM ranked_user_requests
WHERE rank > 1
);
--> statement-breakpoint
WITH ranked_email_requests AS (
SELECT
id,
row_number() OVER (
PARTITION BY company_id, lower(request_email_snapshot)
ORDER BY created_at ASC, id ASC
) AS rank
FROM join_requests
WHERE request_type = 'human'
AND status = 'pending_approval'
AND request_email_snapshot IS NOT NULL
)
UPDATE join_requests
SET
status = 'rejected',
rejected_at = COALESCE(rejected_at, now()),
updated_at = now()
WHERE id IN (
SELECT id
FROM ranked_email_requests
WHERE rank > 1
);
--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "join_requests_pending_human_user_uq"
ON "join_requests" USING btree ("company_id", "requesting_user_id")
WHERE "request_type" = 'human'
AND "status" = 'pending_approval'
AND "requesting_user_id" IS NOT NULL;
--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "join_requests_pending_human_email_uq"
ON "join_requests" USING btree ("company_id", lower("request_email_snapshot"))
WHERE "request_type" = 'human'
AND "status" = 'pending_approval'
AND "request_email_snapshot" IS NOT NULL;


@@ -0,0 +1,6 @@
ALTER TABLE "heartbeat_runs" ADD COLUMN IF NOT EXISTS "liveness_state" text;--> statement-breakpoint
ALTER TABLE "heartbeat_runs" ADD COLUMN IF NOT EXISTS "liveness_reason" text;--> statement-breakpoint
ALTER TABLE "heartbeat_runs" ADD COLUMN IF NOT EXISTS "continuation_attempt" integer DEFAULT 0 NOT NULL;--> statement-breakpoint
ALTER TABLE "heartbeat_runs" ADD COLUMN IF NOT EXISTS "last_useful_action_at" timestamp with time zone;--> statement-breakpoint
ALTER TABLE "heartbeat_runs" ADD COLUMN IF NOT EXISTS "next_action" text;--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "heartbeat_runs_company_liveness_idx" ON "heartbeat_runs" USING btree ("company_id","liveness_state","created_at");


@@ -0,0 +1,41 @@
CREATE TABLE IF NOT EXISTS "plugin_database_namespaces" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"plugin_id" uuid NOT NULL,
"plugin_key" text NOT NULL,
"namespace_name" text NOT NULL,
"namespace_mode" text DEFAULT 'schema' NOT NULL,
"status" text DEFAULT 'active' NOT NULL,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
CREATE TABLE IF NOT EXISTS "plugin_migrations" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"plugin_id" uuid NOT NULL,
"plugin_key" text NOT NULL,
"namespace_name" text NOT NULL,
"migration_key" text NOT NULL,
"checksum" text NOT NULL,
"plugin_version" text NOT NULL,
"status" text NOT NULL,
"started_at" timestamp with time zone DEFAULT now() NOT NULL,
"applied_at" timestamp with time zone,
"error_message" text
);
--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'plugin_database_namespaces_plugin_id_plugins_id_fk') THEN
ALTER TABLE "plugin_database_namespaces" ADD CONSTRAINT "plugin_database_namespaces_plugin_id_plugins_id_fk" FOREIGN KEY ("plugin_id") REFERENCES "public"."plugins"("id") ON DELETE cascade ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'plugin_migrations_plugin_id_plugins_id_fk') THEN
ALTER TABLE "plugin_migrations" ADD CONSTRAINT "plugin_migrations_plugin_id_plugins_id_fk" FOREIGN KEY ("plugin_id") REFERENCES "public"."plugins"("id") ON DELETE cascade ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "plugin_database_namespaces_plugin_idx" ON "plugin_database_namespaces" USING btree ("plugin_id");--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "plugin_database_namespaces_namespace_idx" ON "plugin_database_namespaces" USING btree ("namespace_name");--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "plugin_database_namespaces_status_idx" ON "plugin_database_namespaces" USING btree ("status");--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "plugin_migrations_plugin_key_idx" ON "plugin_migrations" USING btree ("plugin_id","migration_key");--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "plugin_migrations_plugin_idx" ON "plugin_migrations" USING btree ("plugin_id");--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "plugin_migrations_status_idx" ON "plugin_migrations" USING btree ("status");


@@ -0,0 +1,50 @@
CREATE TABLE IF NOT EXISTS "issue_reference_mentions" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"company_id" uuid NOT NULL,
"source_issue_id" uuid NOT NULL,
"target_issue_id" uuid NOT NULL,
"source_kind" text NOT NULL,
"source_record_id" uuid,
"document_key" text,
"matched_text" text,
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
);
--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_reference_mentions_company_id_companies_id_fk') THEN
ALTER TABLE "issue_reference_mentions" ADD CONSTRAINT "issue_reference_mentions_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE no action ON UPDATE no action;
END IF;
END $$;
--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_reference_mentions_source_issue_id_issues_id_fk') THEN
ALTER TABLE "issue_reference_mentions" ADD CONSTRAINT "issue_reference_mentions_source_issue_id_issues_id_fk" FOREIGN KEY ("source_issue_id") REFERENCES "public"."issues"("id") ON DELETE cascade ON UPDATE no action;
END IF;
END $$;
--> statement-breakpoint
DO $$ BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_constraint WHERE conname = 'issue_reference_mentions_target_issue_id_issues_id_fk') THEN
ALTER TABLE "issue_reference_mentions" ADD CONSTRAINT "issue_reference_mentions_target_issue_id_issues_id_fk" FOREIGN KEY ("target_issue_id") REFERENCES "public"."issues"("id") ON DELETE cascade ON UPDATE no action;
END IF;
END $$;--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "issue_reference_mentions_company_source_issue_idx" ON "issue_reference_mentions" USING btree ("company_id","source_issue_id");--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "issue_reference_mentions_company_target_issue_idx" ON "issue_reference_mentions" USING btree ("company_id","target_issue_id");--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "issue_reference_mentions_company_issue_pair_idx" ON "issue_reference_mentions" USING btree ("company_id","source_issue_id","target_issue_id");--> statement-breakpoint
DELETE FROM "issue_reference_mentions"
WHERE "id" IN (
SELECT "id"
FROM (
SELECT
"id",
row_number() OVER (
PARTITION BY "company_id", "source_issue_id", "target_issue_id", "source_kind", "source_record_id"
ORDER BY "created_at", "id"
) AS "row_number"
FROM "issue_reference_mentions"
) AS "duplicates"
WHERE "duplicates"."row_number" > 1
);--> statement-breakpoint
DROP INDEX IF EXISTS "issue_reference_mentions_company_source_mention_uq";--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "issue_reference_mentions_company_source_mention_record_uq" ON "issue_reference_mentions" USING btree ("company_id","source_issue_id","target_issue_id","source_kind","source_record_id") WHERE "source_record_id" IS NOT NULL;--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "issue_reference_mentions_company_source_mention_null_record_uq" ON "issue_reference_mentions" USING btree ("company_id","source_issue_id","target_issue_id","source_kind") WHERE "source_record_id" IS NULL;
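Postgres unique indexes treat NULL values as distinct, so a single unique index that included `source_record_id` would never collapse duplicate rows whose record id is NULL; that is why the migration splits the constraint into two partial indexes. A minimal TypeScript sketch of the same dedup keying (a hypothetical helper for illustration, not code from this diff):

```typescript
// Mirrors the migration's dedup key: rows with a NULL source_record_id
// collapse per (company, source, target, kind); rows with a record id
// collapse per (company, source, target, kind, record id).
type Mention = {
  companyId: string;
  sourceIssueId: string;
  targetIssueId: string;
  sourceKind: string;
  sourceRecordId: string | null;
};

function mentionDedupKey(m: Mention): string {
  const base = [m.companyId, m.sourceIssueId, m.targetIssueId, m.sourceKind];
  // In SQL, NULL <> NULL, so one unique index over all five columns would
  // leave NULL-record rows unconstrained; the explicit branch makes the
  // intended semantics visible.
  return m.sourceRecordId === null
    ? ["null-record", ...base].join("|")
    : ["with-record", ...base, m.sourceRecordId].join("|");
}

function dedupeMentions(mentions: Mention[]): Mention[] {
  const seen = new Set<string>();
  return mentions.filter((m) => {
    const key = mentionDedupKey(m);
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```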


@@ -0,0 +1,3 @@
ALTER TABLE "heartbeat_runs" ADD COLUMN IF NOT EXISTS "scheduled_retry_at" timestamp with time zone;--> statement-breakpoint
ALTER TABLE "heartbeat_runs" ADD COLUMN IF NOT EXISTS "scheduled_retry_attempt" integer DEFAULT 0 NOT NULL;--> statement-breakpoint
ALTER TABLE "heartbeat_runs" ADD COLUMN IF NOT EXISTS "scheduled_retry_reason" text;


@@ -0,0 +1,9 @@
ALTER TABLE "routine_runs" ADD COLUMN IF NOT EXISTS "dispatch_fingerprint" text;--> statement-breakpoint
ALTER TABLE "issues" ADD COLUMN IF NOT EXISTS "origin_fingerprint" text DEFAULT 'default' NOT NULL;--> statement-breakpoint
DROP INDEX IF EXISTS "issues_open_routine_execution_uq";--> statement-breakpoint
CREATE UNIQUE INDEX IF NOT EXISTS "issues_open_routine_execution_uq" ON "issues" USING btree ("company_id","origin_kind","origin_id","origin_fingerprint") WHERE "issues"."origin_kind" = 'routine_execution'
and "issues"."origin_id" is not null
and "issues"."hidden_at" is null
and "issues"."execution_run_id" is not null
and "issues"."status" in ('backlog', 'todo', 'in_progress', 'in_review', 'blocked');--> statement-breakpoint
CREATE INDEX IF NOT EXISTS "routine_runs_dispatch_fingerprint_idx" ON "routine_runs" USING btree ("routine_id","dispatch_fingerprint");
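Widening the unique index to include `origin_fingerprint` means a routine can hold several open routine-execution issues at once as long as their fingerprints differ, where the old three-column key allowed only one. A self-contained TypeScript sketch of the before/after keying (hypothetical helpers, not part of the migration):

```typescript
// Old key: at most one open routine-execution issue per (company, kind, origin).
function legacyOpenIssueKey(companyId: string, originKind: string, originId: string): string {
  return [companyId, originKind, originId].join("|");
}

// New key: the origin fingerprint participates, so dispatches carrying
// different fingerprints may each keep an open issue.
function openIssueKey(
  companyId: string,
  originKind: string,
  originId: string,
  originFingerprint: string,
): string {
  return [companyId, originKind, originId, originFingerprint].join("|");
}
```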

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -400,6 +400,48 @@
"when": 1776084034244,
"tag": "0056_spooky_ultragirl",
"breakpoints": true
},
{
"idx": 57,
"version": "7",
"when": 1776309613598,
"tag": "0057_tidy_join_requests",
"breakpoints": true
},
{
"idx": 58,
"version": "7",
"when": 1776542245004,
"tag": "0058_wealthy_starbolt",
"breakpoints": true
},
{
"idx": 59,
"version": "7",
"when": 1776542246000,
"tag": "0059_plugin_database_namespaces",
"breakpoints": true
},
{
"idx": 60,
"version": "7",
"when": 1776717606743,
"tag": "0060_orange_annihilus",
"breakpoints": true
},
{
"idx": 61,
"version": "7",
"when": 1776785165389,
"tag": "0061_lively_thor_girl",
"breakpoints": true
},
{
"idx": 62,
"version": "7",
"when": 1776780000000,
"tag": "0062_routine_run_dispatch_fingerprint",
"breakpoints": true
}
]
}
}


@@ -38,9 +38,17 @@ export const heartbeatRuns = pgTable(
onDelete: "set null",
}),
processLossRetryCount: integer("process_loss_retry_count").notNull().default(0),
scheduledRetryAt: timestamp("scheduled_retry_at", { withTimezone: true }),
scheduledRetryAttempt: integer("scheduled_retry_attempt").notNull().default(0),
scheduledRetryReason: text("scheduled_retry_reason"),
issueCommentStatus: text("issue_comment_status").notNull().default("not_applicable"),
issueCommentSatisfiedByCommentId: uuid("issue_comment_satisfied_by_comment_id"),
issueCommentRetryQueuedAt: timestamp("issue_comment_retry_queued_at", { withTimezone: true }),
livenessState: text("liveness_state"),
livenessReason: text("liveness_reason"),
continuationAttempt: integer("continuation_attempt").notNull().default(0),
lastUsefulActionAt: timestamp("last_useful_action_at", { withTimezone: true }),
nextAction: text("next_action"),
contextSnapshot: jsonb("context_snapshot").$type<Record<string, unknown>>(),
createdAt: timestamp("created_at", { withTimezone: true }).notNull().defaultNow(),
updatedAt: timestamp("updated_at", { withTimezone: true }).notNull().defaultNow(),
@@ -51,5 +59,10 @@ export const heartbeatRuns = pgTable(
table.agentId,
table.startedAt,
),
companyLivenessIdx: index("heartbeat_runs_company_liveness_idx").on(
table.companyId,
table.livenessState,
table.createdAt,
),
}),
);


@@ -27,6 +27,7 @@ export { workspaceRuntimeServices } from "./workspace_runtime_services.js";
export { projectGoals } from "./project_goals.js";
export { goals } from "./goals.js";
export { issues } from "./issues.js";
export { issueReferenceMentions } from "./issue_reference_mentions.js";
export { issueRelations } from "./issue_relations.js";
export { routines, routineTriggers, routineRuns } from "./routines.js";
export { issueWorkProducts } from "./issue_work_products.js";
@@ -60,6 +61,7 @@ export { pluginConfig } from "./plugin_config.js";
export { pluginCompanySettings } from "./plugin_company_settings.js";
export { pluginState } from "./plugin_state.js";
export { pluginEntities } from "./plugin_entities.js";
export { pluginDatabaseNamespaces, pluginMigrations } from "./plugin_database.js";
export { pluginJobs, pluginJobRuns } from "./plugin_jobs.js";
export { pluginWebhookDeliveries } from "./plugin_webhooks.js";
export { pluginLogs } from "./plugin_logs.js";


@@ -0,0 +1,48 @@
import { sql } from "drizzle-orm";
import { index, pgTable, text, timestamp, uniqueIndex, uuid } from "drizzle-orm/pg-core";
import { companies } from "./companies.js";
import { issues } from "./issues.js";
export const issueReferenceMentions = pgTable(
"issue_reference_mentions",
{
id: uuid("id").primaryKey().defaultRandom(),
companyId: uuid("company_id").notNull().references(() => companies.id),
sourceIssueId: uuid("source_issue_id").notNull().references(() => issues.id, { onDelete: "cascade" }),
targetIssueId: uuid("target_issue_id").notNull().references(() => issues.id, { onDelete: "cascade" }),
sourceKind: text("source_kind").$type<"title" | "description" | "comment" | "document">().notNull(),
sourceRecordId: uuid("source_record_id"),
documentKey: text("document_key"),
matchedText: text("matched_text"),
createdAt: timestamp("created_at", { withTimezone: true }).notNull().defaultNow(),
updatedAt: timestamp("updated_at", { withTimezone: true }).notNull().defaultNow(),
},
(table) => ({
companySourceIssueIdx: index("issue_reference_mentions_company_source_issue_idx").on(
table.companyId,
table.sourceIssueId,
),
companyTargetIssueIdx: index("issue_reference_mentions_company_target_issue_idx").on(
table.companyId,
table.targetIssueId,
),
companyIssuePairIdx: index("issue_reference_mentions_company_issue_pair_idx").on(
table.companyId,
table.sourceIssueId,
table.targetIssueId,
),
companySourceMentionWithRecordUq: uniqueIndex("issue_reference_mentions_company_source_mention_record_uq").on(
table.companyId,
table.sourceIssueId,
table.targetIssueId,
table.sourceKind,
table.sourceRecordId,
).where(sql`${table.sourceRecordId} is not null`),
companySourceMentionWithoutRecordUq: uniqueIndex("issue_reference_mentions_company_source_mention_null_record_uq").on(
table.companyId,
table.sourceIssueId,
table.targetIssueId,
table.sourceKind,
).where(sql`${table.sourceRecordId} is null`),
}),
);


@@ -44,6 +44,7 @@ export const issues = pgTable(
originKind: text("origin_kind").notNull().default("manual"),
originId: text("origin_id"),
originRunId: text("origin_run_id"),
originFingerprint: text("origin_fingerprint").notNull().default("default"),
requestDepth: integer("request_depth").notNull().default(0),
billingCode: text("billing_code"),
assigneeAdapterOverrides: jsonb("assignee_adapter_overrides").$type<Record<string, unknown>>(),
@@ -82,7 +83,7 @@ export const issues = pgTable(
identifierSearchIdx: index("issues_identifier_search_idx").using("gin", table.identifier.op("gin_trgm_ops")),
descriptionSearchIdx: index("issues_description_search_idx").using("gin", table.description.op("gin_trgm_ops")),
openRoutineExecutionIdx: uniqueIndex("issues_open_routine_execution_uq")
.on(table.companyId, table.originKind, table.originId)
.on(table.companyId, table.originKind, table.originId, table.originFingerprint)
.where(
sql`${table.originKind} = 'routine_execution'
and ${table.originId} is not null


@@ -1,3 +1,4 @@
import { sql } from "drizzle-orm";
import { pgTable, uuid, text, timestamp, jsonb, index, uniqueIndex } from "drizzle-orm/pg-core";
import { companies } from "./companies.js";
import { invites } from "./invites.js";
@@ -37,5 +38,11 @@ export const joinRequests = pgTable(
table.requestType,
table.createdAt,
),
pendingHumanUserUniqueIdx: uniqueIndex("join_requests_pending_human_user_uq")
.on(table.companyId, table.requestingUserId)
.where(sql`${table.requestType} = 'human' AND ${table.status} = 'pending_approval' AND ${table.requestingUserId} IS NOT NULL`),
pendingHumanEmailUniqueIdx: uniqueIndex("join_requests_pending_human_email_uq")
.on(table.companyId, sql`lower(${table.requestEmailSnapshot})`)
.where(sql`${table.requestType} = 'human' AND ${table.status} = 'pending_approval' AND ${table.requestEmailSnapshot} IS NOT NULL`),
}),
);


@@ -0,0 +1,75 @@
import {
pgTable,
uuid,
text,
timestamp,
index,
uniqueIndex,
} from "drizzle-orm/pg-core";
import type {
PluginDatabaseMigrationStatus,
PluginDatabaseNamespaceMode,
PluginDatabaseNamespaceStatus,
} from "@paperclipai/shared";
import { plugins } from "./plugins.js";
/**
* Database namespace allocated to an installed plugin.
*
* Namespaces are deterministic and owned by the host. Plugin SQL may create
* objects only inside its namespace, while selected public core tables remain
* read-only join targets through runtime checks.
*/
export const pluginDatabaseNamespaces = pgTable(
"plugin_database_namespaces",
{
id: uuid("id").primaryKey().defaultRandom(),
pluginId: uuid("plugin_id")
.notNull()
.references(() => plugins.id, { onDelete: "cascade" }),
pluginKey: text("plugin_key").notNull(),
namespaceName: text("namespace_name").notNull(),
namespaceMode: text("namespace_mode").$type<PluginDatabaseNamespaceMode>().notNull().default("schema"),
status: text("status").$type<PluginDatabaseNamespaceStatus>().notNull().default("active"),
createdAt: timestamp("created_at", { withTimezone: true }).notNull().defaultNow(),
updatedAt: timestamp("updated_at", { withTimezone: true }).notNull().defaultNow(),
},
(table) => ({
pluginIdx: uniqueIndex("plugin_database_namespaces_plugin_idx").on(table.pluginId),
namespaceIdx: uniqueIndex("plugin_database_namespaces_namespace_idx").on(table.namespaceName),
statusIdx: index("plugin_database_namespaces_status_idx").on(table.status),
}),
);
/**
* Per-plugin migration ledger.
*
* Every migration file is recorded with a checksum. A previously applied
* migration whose checksum changes is rejected during later activation.
*/
export const pluginMigrations = pgTable(
"plugin_migrations",
{
id: uuid("id").primaryKey().defaultRandom(),
pluginId: uuid("plugin_id")
.notNull()
.references(() => plugins.id, { onDelete: "cascade" }),
pluginKey: text("plugin_key").notNull(),
namespaceName: text("namespace_name").notNull(),
migrationKey: text("migration_key").notNull(),
checksum: text("checksum").notNull(),
pluginVersion: text("plugin_version").notNull(),
status: text("status").$type<PluginDatabaseMigrationStatus>().notNull(),
startedAt: timestamp("started_at", { withTimezone: true }).notNull().defaultNow(),
appliedAt: timestamp("applied_at", { withTimezone: true }),
errorMessage: text("error_message"),
},
(table) => ({
pluginMigrationIdx: uniqueIndex("plugin_migrations_plugin_key_idx").on(
table.pluginId,
table.migrationKey,
),
pluginIdx: index("plugin_migrations_plugin_idx").on(table.pluginId),
statusIdx: index("plugin_migrations_status_idx").on(table.status),
}),
);
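The ledger comment above describes checksum-based drift detection, but the hash algorithm itself is not shown in this diff, so the sha256 choice below is an assumption. A sketch of the verification step:

```typescript
import { createHash } from "node:crypto";

// Assumption: sha256 over the raw migration file text; the host may use a
// different digest, but the comparison logic is the same either way.
function migrationChecksum(sqlText: string): string {
  return createHash("sha256").update(sqlText, "utf8").digest("hex");
}

// A previously applied migration whose file content no longer hashes to the
// recorded checksum must be rejected rather than silently re-applied.
function checkLedgerEntry(
  recordedChecksum: string,
  currentSqlText: string,
): "unchanged" | "drifted" {
  return migrationChecksum(currentSqlText) === recordedChecksum ? "unchanged" : "drifted";
}
```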


@@ -96,6 +96,7 @@ export const routineRuns = pgTable(
triggeredAt: timestamp("triggered_at", { withTimezone: true }).notNull().defaultNow(),
idempotencyKey: text("idempotency_key"),
triggerPayload: jsonb("trigger_payload").$type<Record<string, unknown>>(),
dispatchFingerprint: text("dispatch_fingerprint"),
linkedIssueId: uuid("linked_issue_id").references(() => issues.id, { onDelete: "set null" }),
coalescedIntoRunId: uuid("coalesced_into_run_id"),
failureReason: text("failure_reason"),
@@ -106,6 +107,7 @@ export const routineRuns = pgTable(
(table) => ({
companyRoutineIdx: index("routine_runs_company_routine_idx").on(table.companyId, table.routineId, table.createdAt),
triggerIdx: index("routine_runs_trigger_idx").on(table.triggerId, table.createdAt),
dispatchFingerprintIdx: index("routine_runs_dispatch_fingerprint_idx").on(table.routineId, table.dispatchFingerprint),
linkedIssueIdx: index("routine_runs_linked_issue_idx").on(table.linkedIssueId),
idempotencyIdx: index("routine_runs_trigger_idempotency_idx").on(table.triggerId, table.idempotencyKey),
}),


@@ -47,6 +47,8 @@ Read tools:
- `paperclipListDocumentRevisions`
- `paperclipListProjects`
- `paperclipGetProject`
- `paperclipGetIssueWorkspaceRuntime`
- `paperclipWaitForIssueWorkspaceService`
- `paperclipListGoals`
- `paperclipGetGoal`
- `paperclipListApprovals`
@@ -63,6 +65,7 @@ Write tools:
- `paperclipAddComment`
- `paperclipUpsertIssueDocument`
- `paperclipRestoreIssueDocumentRevision`
- `paperclipControlIssueWorkspaceServices`
- `paperclipCreateApproval`
- `paperclipLinkIssueApproval`
- `paperclipUnlinkIssueApproval`


@@ -107,6 +107,81 @@ describe("paperclip MCP tools", () => {
});
});
it("controls issue workspace services through the current execution workspace", async () => {
const fetchMock = vi.fn()
.mockResolvedValueOnce(mockJsonResponse({
currentExecutionWorkspace: {
id: "44444444-4444-4444-8444-444444444444",
runtimeServices: [],
},
}))
.mockResolvedValueOnce(mockJsonResponse({
operation: { id: "operation-1" },
workspace: {
id: "44444444-4444-4444-8444-444444444444",
runtimeServices: [
{
id: "55555555-5555-4555-8555-555555555555",
serviceName: "web",
status: "running",
url: "http://127.0.0.1:5173",
},
],
},
}));
vi.stubGlobal("fetch", fetchMock);
const tool = getTool("paperclipControlIssueWorkspaceServices");
await tool.execute({
issueId: "PAP-1135",
action: "restart",
workspaceCommandId: "web",
});
expect(fetchMock).toHaveBeenCalledTimes(2);
const [lookupUrl, lookupInit] = fetchMock.mock.calls[0] as [string, RequestInit];
expect(String(lookupUrl)).toBe("http://localhost:3100/api/issues/PAP-1135/heartbeat-context");
expect(lookupInit.method).toBe("GET");
const [controlUrl, controlInit] = fetchMock.mock.calls[1] as [string, RequestInit];
expect(String(controlUrl)).toBe(
"http://localhost:3100/api/execution-workspaces/44444444-4444-4444-8444-444444444444/runtime-services/restart",
);
expect(controlInit.method).toBe("POST");
expect(JSON.parse(String(controlInit.body))).toEqual({
workspaceCommandId: "web",
});
});
it("waits for an issue workspace runtime service URL", async () => {
const fetchMock = vi.fn()
.mockResolvedValueOnce(mockJsonResponse({
currentExecutionWorkspace: {
id: "44444444-4444-4444-8444-444444444444",
runtimeServices: [
{
id: "55555555-5555-4555-8555-555555555555",
serviceName: "web",
status: "running",
healthStatus: "healthy",
url: "http://127.0.0.1:5173",
},
],
},
}));
vi.stubGlobal("fetch", fetchMock);
const tool = getTool("paperclipWaitForIssueWorkspaceService");
const response = await tool.execute({
issueId: "PAP-1135",
serviceName: "web",
timeoutSeconds: 1,
});
expect(fetchMock).toHaveBeenCalledTimes(1);
expect(response.content[0]?.text).toContain("http://127.0.0.1:5173");
});
it("creates approvals with the expected company-scoped payload", async () => {
const fetchMock = vi.fn().mockResolvedValue(
mockJsonResponse({ id: "approval-1" }),


@@ -124,6 +124,66 @@ const apiRequestSchema = z.object({
jsonBody: z.string().optional(),
});
const workspaceRuntimeControlTargetSchema = z.object({
workspaceCommandId: z.string().min(1).optional().nullable(),
runtimeServiceId: z.string().uuid().optional().nullable(),
serviceIndex: z.number().int().nonnegative().optional().nullable(),
});
const issueWorkspaceRuntimeControlSchema = z.object({
issueId: issueIdSchema,
action: z.enum(["start", "stop", "restart"]),
}).merge(workspaceRuntimeControlTargetSchema);
const waitForIssueWorkspaceServiceSchema = z.object({
issueId: issueIdSchema,
runtimeServiceId: z.string().uuid().optional().nullable(),
serviceName: z.string().min(1).optional().nullable(),
timeoutSeconds: z.number().int().positive().max(300).optional(),
});
function sleep(ms: number) {
return new Promise((resolve) => setTimeout(resolve, ms));
}
function readCurrentExecutionWorkspace(context: unknown): Record<string, unknown> | null {
if (!context || typeof context !== "object") return null;
const workspace = (context as { currentExecutionWorkspace?: unknown }).currentExecutionWorkspace;
return workspace && typeof workspace === "object" ? workspace as Record<string, unknown> : null;
}
function readWorkspaceRuntimeServices(workspace: Record<string, unknown> | null): Array<Record<string, unknown>> {
const raw = workspace?.runtimeServices;
return Array.isArray(raw)
? raw.filter((entry): entry is Record<string, unknown> => Boolean(entry) && typeof entry === "object")
: [];
}
function selectRuntimeService(
services: Array<Record<string, unknown>>,
input: { runtimeServiceId?: string | null; serviceName?: string | null },
) {
if (input.runtimeServiceId) {
return services.find((service) => service.id === input.runtimeServiceId) ?? null;
}
if (input.serviceName) {
return services.find((service) => service.serviceName === input.serviceName) ?? null;
}
return services.find((service) => service.status === "running" || service.status === "starting")
?? services[0]
?? null;
}
async function getIssueWorkspaceRuntime(client: PaperclipApiClient, issueId: string) {
const context = await client.requestJson("GET", `/issues/${encodeURIComponent(issueId)}/heartbeat-context`);
const workspace = readCurrentExecutionWorkspace(context);
return {
context,
workspace,
runtimeServices: readWorkspaceRuntimeServices(workspace),
};
}
export function createToolDefinitions(client: PaperclipApiClient): ToolDefinition[] {
return [
makeTool(
@@ -247,6 +307,55 @@ export function createToolDefinitions(client: PaperclipApiClient): ToolDefinitio
return client.requestJson("GET", `/projects/${encodeURIComponent(projectId)}${qs}`);
},
),
makeTool(
"paperclipGetIssueWorkspaceRuntime",
"Get the current execution workspace and runtime services for an issue, including service URLs",
z.object({ issueId: issueIdSchema }),
async ({ issueId }) => getIssueWorkspaceRuntime(client, issueId),
),
makeTool(
"paperclipControlIssueWorkspaceServices",
"Start, stop, or restart the current issue execution workspace runtime services",
issueWorkspaceRuntimeControlSchema,
async ({ issueId, action, ...target }) => {
const runtime = await getIssueWorkspaceRuntime(client, issueId);
const workspaceId = typeof runtime.workspace?.id === "string" ? runtime.workspace.id : null;
if (!workspaceId) {
throw new Error("Issue has no current execution workspace");
}
return client.requestJson(
"POST",
`/execution-workspaces/${encodeURIComponent(workspaceId)}/runtime-services/${action}`,
{ body: target },
);
},
),
makeTool(
"paperclipWaitForIssueWorkspaceService",
"Wait until an issue execution workspace runtime service is running and not unhealthy, returning its URL when one is exposed",
waitForIssueWorkspaceServiceSchema,
async ({ issueId, runtimeServiceId, serviceName, timeoutSeconds }) => {
const deadline = Date.now() + (timeoutSeconds ?? 60) * 1000;
let latest: Awaited<ReturnType<typeof getIssueWorkspaceRuntime>> | null = null;
while (Date.now() <= deadline) {
latest = await getIssueWorkspaceRuntime(client, issueId);
const service = selectRuntimeService(latest.runtimeServices, { runtimeServiceId, serviceName });
if (service?.status === "running" && service.healthStatus !== "unhealthy") {
return {
workspace: latest.workspace,
service,
};
}
await sleep(1000);
}
return {
timedOut: true,
latestWorkspace: latest?.workspace ?? null,
latestRuntimeServices: latest?.runtimeServices ?? [],
};
},
),
makeTool(
"paperclipListGoals",
"List goals in a company",

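The `selectRuntimeService` helper above resolves its target in priority order: explicit runtime service id, then service name, then the first running or starting service, then the first service at all. The stripped-down sketch below re-declares that logic so it runs standalone (an illustration, not the module's exported code):

```typescript
type Service = { id: string; serviceName: string; status: string };

// Mirrors the selection priority used by the MCP tools: id wins over name,
// name wins over the running/starting fallback, which wins over "first".
function pickService(
  services: Service[],
  input: { runtimeServiceId?: string | null; serviceName?: string | null },
): Service | null {
  if (input.runtimeServiceId) {
    return services.find((s) => s.id === input.runtimeServiceId) ?? null;
  }
  if (input.serviceName) {
    return services.find((s) => s.serviceName === input.serviceName) ?? null;
  }
  return services.find((s) => s.status === "running" || s.status === "starting")
    ?? services[0]
    ?? null;
}
```

Note that an explicit id or name that matches nothing yields `null` rather than falling through to the status-based default, which keeps a bad target from silently controlling an unrelated service.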

@@ -0,0 +1,3 @@
dist
node_modules
.paperclip-sdk


@@ -0,0 +1,48 @@
# Plugin Orchestration Smoke Example
This first-party example validates the orchestration-grade plugin host surface.
It is intentionally small and exists as an acceptance fixture rather than a
product plugin.
## What it exercises
- `apiRoutes` under `/api/plugins/:pluginId/api/*`
- restricted database migrations and runtime `ctx.db`
- plugin-owned rows joined to `public.issues`
- plugin-created child issues with namespaced origin metadata
- billing codes, workspace inheritance, blocker relations, documents, wakeups,
and orchestration summaries
- issue detail and settings UI slots that surface route, capability, namespace,
and smoke status
## Development
```bash
pnpm install
pnpm typecheck
pnpm test
pnpm build
```
## Install Into Paperclip
Use an absolute local path during development:
```bash
curl -X POST http://127.0.0.1:3100/api/plugins/install \
-H "Content-Type: application/json" \
-d '{"packageName":"/absolute/path/to/paperclip/packages/plugins/examples/plugin-orchestration-smoke-example","isLocalPath":true}'
```
## Scoped Route Smoke
After the plugin is ready, run the scoped route against an existing issue:
```bash
curl -X POST http://127.0.0.1:3100/api/plugins/paperclipai.plugin-orchestration-smoke-example/api/issues/<issue-id>/smoke \
-H "Content-Type: application/json" \
-d '{"assigneeAgentId":"<agent-id>"}'
```
The route returns the generated child issue, resolved blocker, billing code,
subtree ids, and wakeup result.


@@ -0,0 +1,17 @@
import esbuild from "esbuild";
import { createPluginBundlerPresets } from "@paperclipai/plugin-sdk/bundlers";
const presets = createPluginBundlerPresets({ uiEntry: "src/ui/index.tsx" });
const watch = process.argv.includes("--watch");
const workerCtx = await esbuild.context(presets.esbuild.worker);
const manifestCtx = await esbuild.context(presets.esbuild.manifest);
const uiCtx = await esbuild.context(presets.esbuild.ui);
if (watch) {
await Promise.all([workerCtx.watch(), manifestCtx.watch(), uiCtx.watch()]);
console.log("esbuild watch mode enabled for worker, manifest, and ui");
} else {
await Promise.all([workerCtx.rebuild(), manifestCtx.rebuild(), uiCtx.rebuild()]);
await Promise.all([workerCtx.dispose(), manifestCtx.dispose(), uiCtx.dispose()]);
}


@@ -0,0 +1,10 @@
CREATE TABLE plugin_orchestration_smoke_1e8c264c64.smoke_runs (
id uuid PRIMARY KEY,
root_issue_id uuid NOT NULL REFERENCES public.issues(id) ON DELETE CASCADE,
child_issue_id uuid REFERENCES public.issues(id) ON DELETE SET NULL,
blocker_issue_id uuid REFERENCES public.issues(id) ON DELETE SET NULL,
billing_code text NOT NULL,
last_summary jsonb NOT NULL DEFAULT '{}'::jsonb,
created_at timestamptz NOT NULL DEFAULT now(),
updated_at timestamptz NOT NULL DEFAULT now()
);


@@ -0,0 +1,46 @@
{
"name": "@paperclipai/plugin-orchestration-smoke-example",
"version": "0.1.0",
"type": "module",
"private": true,
"description": "First-party smoke plugin for orchestration-grade Paperclip plugin APIs",
"scripts": {
"prebuild": "node ../../../../scripts/ensure-plugin-build-deps.mjs",
"build": "node ./esbuild.config.mjs",
"build:rollup": "rollup -c",
"dev": "node ./esbuild.config.mjs --watch",
"dev:ui": "paperclip-plugin-dev-server --root . --ui-dir dist/ui --port 4177",
"test": "vitest run --config ./vitest.config.ts",
"typecheck": "pnpm --filter @paperclipai/plugin-sdk build && tsc --noEmit"
},
"paperclipPlugin": {
"manifest": "./dist/manifest.js",
"worker": "./dist/worker.js",
"ui": "./dist/ui/"
},
"keywords": [
"paperclip",
"plugin",
"connector"
],
"author": "Paperclip",
"license": "MIT",
"dependencies": {
"@paperclipai/plugin-sdk": "workspace:*"
},
"devDependencies": {
"@paperclipai/shared": "workspace:*",
"@rollup/plugin-node-resolve": "^16.0.1",
"@rollup/plugin-typescript": "^12.1.2",
"@types/node": "^24.6.0",
"@types/react": "^19.0.8",
"esbuild": "^0.27.3",
"rollup": "^4.38.0",
"tslib": "^2.8.1",
"typescript": "^5.7.3",
"vitest": "^3.0.5"
},
"peerDependencies": {
"react": ">=18"
}
}


@@ -0,0 +1,28 @@
import { nodeResolve } from "@rollup/plugin-node-resolve";
import typescript from "@rollup/plugin-typescript";
import { createPluginBundlerPresets } from "@paperclipai/plugin-sdk/bundlers";
const presets = createPluginBundlerPresets({ uiEntry: "src/ui/index.tsx" });
function withPlugins(config) {
if (!config) return null;
return {
...config,
plugins: [
nodeResolve({
extensions: [".ts", ".tsx", ".js", ".jsx", ".mjs"],
}),
typescript({
tsconfig: "./tsconfig.json",
declaration: false,
declarationMap: false,
}),
],
};
}
export default [
withPlugins(presets.rollup.manifest),
withPlugins(presets.rollup.worker),
withPlugins(presets.rollup.ui),
].filter(Boolean);


@@ -0,0 +1,82 @@
import type { PaperclipPluginManifestV1 } from "@paperclipai/plugin-sdk";
const manifest: PaperclipPluginManifestV1 = {
id: "paperclipai.plugin-orchestration-smoke-example",
apiVersion: 1,
version: "0.1.0",
displayName: "Plugin Orchestration Smoke Example",
description: "First-party smoke plugin that exercises Paperclip orchestration-grade plugin APIs.",
author: "Paperclip",
categories: ["automation", "ui"],
capabilities: [
"api.routes.register",
"database.namespace.migrate",
"database.namespace.read",
"database.namespace.write",
"issues.read",
"issues.create",
"issues.wakeup",
"issue.relations.read",
"issue.relations.write",
"issue.documents.read",
"issue.documents.write",
"issue.subtree.read",
"issues.orchestration.read",
"ui.dashboardWidget.register",
"ui.detailTab.register",
"instance.settings.register"
],
entrypoints: {
worker: "./dist/worker.js",
ui: "./dist/ui"
},
database: {
namespaceSlug: "orchestration_smoke",
migrationsDir: "migrations",
coreReadTables: ["issues"]
},
apiRoutes: [
{
routeKey: "initialize",
method: "POST",
path: "/issues/:issueId/smoke",
auth: "board-or-agent",
capability: "api.routes.register",
checkoutPolicy: "required-for-agent-in-progress",
companyResolution: { from: "issue", param: "issueId" }
},
{
routeKey: "summary",
method: "GET",
path: "/issues/:issueId/smoke",
auth: "board-or-agent",
capability: "api.routes.register",
companyResolution: { from: "issue", param: "issueId" }
}
],
ui: {
slots: [
{
type: "dashboardWidget",
id: "health-widget",
displayName: "Orchestration Smoke Health",
exportName: "DashboardWidget"
},
{
type: "taskDetailView",
id: "issue-panel",
displayName: "Orchestration Smoke",
exportName: "IssuePanel",
entityTypes: ["issue"]
},
{
type: "settingsPage",
id: "settings",
displayName: "Orchestration Smoke",
exportName: "SettingsPage"
}
]
}
};
export default manifest;


@@ -0,0 +1,134 @@
import {
usePluginAction,
usePluginData,
type PluginDetailTabProps,
type PluginSettingsPageProps,
type PluginWidgetProps,
} from "@paperclipai/plugin-sdk/ui";
import type React from "react";
type SurfaceStatus = {
status: "ok" | "degraded" | "error";
checkedAt: string;
databaseNamespace: string;
routeKeys: string[];
capabilities: string[];
summary: null | {
rootIssueId: string;
childIssueId: string | null;
blockerIssueId: string | null;
billingCode: string;
subtreeIssueIds: string[];
wakeupQueued: boolean;
};
};
const panelStyle = {
display: "grid",
gap: 10,
fontSize: 13,
lineHeight: 1.45,
} satisfies React.CSSProperties;
const rowStyle = {
display: "flex",
justifyContent: "space-between",
gap: 12,
} satisfies React.CSSProperties;
const buttonStyle = {
border: "1px solid #1f2937",
background: "#111827",
color: "#fff",
borderRadius: 6,
padding: "6px 10px",
font: "inherit",
cursor: "pointer",
} satisfies React.CSSProperties;
function SurfaceRows({ data }: { data: SurfaceStatus }) {
return (
<div style={{ display: "grid", gap: 6 }}>
<div style={rowStyle}><span>Status</span><strong>{data.status}</strong></div>
<div style={rowStyle}><span>Namespace</span><code>{data.databaseNamespace}</code></div>
<div style={rowStyle}><span>Routes</span><code>{data.routeKeys.join(", ")}</code></div>
<div style={rowStyle}><span>Capabilities</span><strong>{data.capabilities.length}</strong></div>
</div>
);
}
export function DashboardWidget({ context }: PluginWidgetProps) {
const { data, loading, error } = usePluginData<SurfaceStatus>("surface-status", {
companyId: context.companyId,
});
if (loading) return <div>Loading orchestration smoke status...</div>;
if (error) return <div>Orchestration smoke error: {error.message}</div>;
if (!data) return null;
return (
<div style={panelStyle}>
<strong>Orchestration Smoke</strong>
<SurfaceRows data={data} />
<div>Checked {data.checkedAt}</div>
</div>
);
}
export function IssuePanel({ context }: PluginDetailTabProps) {
const { data, loading, error, refresh } = usePluginData<SurfaceStatus>("surface-status", {
companyId: context.companyId,
issueId: context.entityId,
});
const initialize = usePluginAction("initialize-smoke");
if (loading) return <div>Loading orchestration smoke...</div>;
if (error) return <div>Orchestration smoke error: {error.message}</div>;
if (!data) return null;
return (
<div style={panelStyle}>
<div style={rowStyle}>
<strong>Orchestration Smoke</strong>
<button
style={buttonStyle}
onClick={async () => {
await initialize({ companyId: context.companyId, issueId: context.entityId });
refresh();
}}
>
Run Smoke
</button>
</div>
<SurfaceRows data={data} />
{data.summary ? (
<div style={{ display: "grid", gap: 4 }}>
<div style={rowStyle}><span>Child</span><code>{data.summary.childIssueId ?? "none"}</code></div>
<div style={rowStyle}><span>Blocker</span><code>{data.summary.blockerIssueId ?? "none"}</code></div>
<div style={rowStyle}><span>Billing</span><code>{data.summary.billingCode}</code></div>
<div style={rowStyle}><span>Subtree</span><strong>{data.summary.subtreeIssueIds.length}</strong></div>
<div style={rowStyle}><span>Wakeup</span><strong>{data.summary.wakeupQueued ? "queued" : "not queued"}</strong></div>
</div>
) : (
<div>No smoke run recorded for this issue.</div>
)}
</div>
);
}
export function SettingsPage({ context }: PluginSettingsPageProps) {
const { data, loading, error } = usePluginData<SurfaceStatus>("surface-status", {
companyId: context.companyId,
});
if (loading) return <div>Loading orchestration smoke settings...</div>;
if (error) return <div>Orchestration smoke settings error: {error.message}</div>;
if (!data) return null;
return (
<div style={panelStyle}>
<strong>Orchestration Smoke Surface</strong>
<SurfaceRows data={data} />
</div>
);
}

@@ -0,0 +1,253 @@
import { randomUUID } from "node:crypto";
import { definePlugin, runWorker, type PluginApiRequestInput } from "@paperclipai/plugin-sdk";
type SmokeInput = {
companyId: string;
issueId: string;
assigneeAgentId?: string | null;
actorAgentId?: string | null;
actorUserId?: string | null;
actorRunId?: string | null;
};
type SmokeSummary = {
rootIssueId: string;
childIssueId: string | null;
blockerIssueId: string | null;
billingCode: string;
joinedRows: unknown[];
subtreeIssueIds: string[];
wakeupQueued: boolean;
};
let readSmokeSummary: ((companyId: string, issueId: string) => Promise<SmokeSummary | null>) | null = null;
let initializeSmoke: ((input: SmokeInput) => Promise<SmokeSummary>) | null = null;
function tableName(namespace: string) {
return `${namespace}.smoke_runs`;
}
function stringField(value: unknown): string | null {
return typeof value === "string" && value.trim().length > 0 ? value : null;
}
const plugin = definePlugin({
async setup(ctx) {
readSmokeSummary = async function readSummary(companyId: string, issueId: string): Promise<SmokeSummary | null> {
const rows = await ctx.db.query<{
root_issue_id: string;
child_issue_id: string | null;
blocker_issue_id: string | null;
billing_code: string;
issue_title: string;
last_summary: unknown;
}>(
`SELECT s.root_issue_id, s.child_issue_id, s.blocker_issue_id, s.billing_code, i.title AS issue_title, s.last_summary
FROM ${tableName(ctx.db.namespace)} s
JOIN public.issues i ON i.id = s.root_issue_id
WHERE s.root_issue_id = $1`,
[issueId],
);
const row = rows[0];
if (!row) return null;
const orchestration = await ctx.issues.summaries.getOrchestration({
issueId,
companyId,
includeSubtree: true,
billingCode: row.billing_code,
});
return {
rootIssueId: row.root_issue_id,
childIssueId: row.child_issue_id,
blockerIssueId: row.blocker_issue_id,
billingCode: row.billing_code,
joinedRows: rows,
subtreeIssueIds: orchestration.subtreeIssueIds,
wakeupQueued: Boolean((row.last_summary as { wakeupQueued?: unknown } | null)?.wakeupQueued),
};
};
initializeSmoke = async function runSmoke(input: SmokeInput): Promise<SmokeSummary> {
const root = await ctx.issues.get(input.issueId, input.companyId);
if (!root) throw new Error(`Issue not found: ${input.issueId}`);
const billingCode = `plugin-smoke:${input.issueId}`;
const actor = {
actorAgentId: input.actorAgentId ?? null,
actorUserId: input.actorUserId ?? null,
actorRunId: input.actorRunId ?? null,
};
const blocker = await ctx.issues.create({
companyId: input.companyId,
parentId: input.issueId,
inheritExecutionWorkspaceFromIssueId: input.issueId,
title: "Orchestration smoke blocker",
description: "Resolved blocker used to verify plugin relation writes without preventing the smoke wakeup.",
status: "done",
priority: "low",
billingCode,
originKind: `plugin:${ctx.manifest.id}:blocker`,
originId: `${input.issueId}:blocker`,
actor,
});
const child = await ctx.issues.create({
companyId: input.companyId,
parentId: input.issueId,
inheritExecutionWorkspaceFromIssueId: input.issueId,
title: "Orchestration smoke child",
description: "Generated by the orchestration smoke plugin to verify issue, document, relation, wakeup, and summary APIs.",
status: "todo",
priority: "medium",
assigneeAgentId: input.assigneeAgentId ?? root.assigneeAgentId ?? undefined,
billingCode,
originKind: `plugin:${ctx.manifest.id}:child`,
originId: `${input.issueId}:child`,
blockedByIssueIds: [blocker.id],
actor,
});
await ctx.issues.relations.setBlockedBy(child.id, [blocker.id], input.companyId, actor);
await ctx.issues.documents.upsert({
issueId: child.id,
companyId: input.companyId,
key: "orchestration-smoke",
title: "Orchestration Smoke",
format: "markdown",
body: [
"# Orchestration Smoke",
"",
`- Root issue: ${input.issueId}`,
`- Child issue: ${child.id}`,
`- Billing code: ${billingCode}`,
].join("\n"),
changeSummary: "Recorded orchestration smoke output",
});
const wakeup = await ctx.issues.requestWakeup(child.id, input.companyId, {
reason: "plugin:orchestration_smoke",
contextSource: "plugin-orchestration-smoke",
idempotencyKey: `${input.issueId}:child`,
...actor,
});
const orchestration = await ctx.issues.summaries.getOrchestration({
issueId: input.issueId,
companyId: input.companyId,
includeSubtree: true,
billingCode,
});
const summarySnapshot = {
childIssueId: child.id,
blockerIssueId: blocker.id,
wakeupQueued: wakeup.queued,
subtreeIssueIds: orchestration.subtreeIssueIds,
};
await ctx.db.execute(
`INSERT INTO ${tableName(ctx.db.namespace)} (id, root_issue_id, child_issue_id, blocker_issue_id, billing_code, last_summary)
VALUES ($1, $2, $3, $4, $5, $6::jsonb)
ON CONFLICT (id) DO UPDATE SET
child_issue_id = EXCLUDED.child_issue_id,
blocker_issue_id = EXCLUDED.blocker_issue_id,
billing_code = EXCLUDED.billing_code,
last_summary = EXCLUDED.last_summary,
updated_at = now()`,
[
randomUUID(),
input.issueId,
child.id,
blocker.id,
billingCode,
JSON.stringify(summarySnapshot),
],
);
return {
rootIssueId: input.issueId,
childIssueId: child.id,
blockerIssueId: blocker.id,
billingCode,
joinedRows: await ctx.db.query(
`SELECT s.id, s.billing_code, i.title AS root_title
FROM ${tableName(ctx.db.namespace)} s
JOIN public.issues i ON i.id = s.root_issue_id
WHERE s.root_issue_id = $1`,
[input.issueId],
),
subtreeIssueIds: orchestration.subtreeIssueIds,
wakeupQueued: wakeup.queued,
};
};
ctx.data.register("surface-status", async (params) => {
const companyId = stringField(params.companyId);
const issueId = stringField(params.issueId);
return {
status: "ok",
checkedAt: new Date().toISOString(),
databaseNamespace: ctx.db.namespace,
routeKeys: (ctx.manifest.apiRoutes ?? []).map((route) => route.routeKey),
capabilities: ctx.manifest.capabilities,
summary: companyId && issueId ? await readSmokeSummary?.(companyId, issueId) ?? null : null,
};
});
ctx.actions.register("initialize-smoke", async (params) => {
const companyId = stringField(params.companyId);
const issueId = stringField(params.issueId);
if (!companyId || !issueId) throw new Error("companyId and issueId are required");
if (!initializeSmoke) throw new Error("Smoke initializer is not ready");
return initializeSmoke({
companyId,
issueId,
assigneeAgentId: stringField(params.assigneeAgentId),
actorAgentId: stringField(params.actorAgentId),
actorUserId: stringField(params.actorUserId),
actorRunId: stringField(params.actorRunId),
});
});
},
async onApiRequest(input: PluginApiRequestInput) {
if (input.routeKey === "summary") {
const issueId = input.params.issueId;
return {
body: await readSmokeSummary?.(input.companyId, issueId) ?? null,
};
}
if (input.routeKey === "initialize") {
if (!initializeSmoke) throw new Error("Smoke initializer is not ready");
const body = input.body as Record<string, unknown> | null;
return {
status: 201,
body: await initializeSmoke({
companyId: input.companyId,
issueId: input.params.issueId,
assigneeAgentId: stringField(body?.assigneeAgentId),
actorAgentId: input.actor.agentId ?? null,
actorUserId: input.actor.userId ?? null,
actorRunId: input.actor.runId ?? null,
}),
};
}
return {
status: 404,
body: { error: `Unknown orchestration smoke route: ${input.routeKey}` },
};
},
async onHealth() {
return {
status: "ok",
message: "Orchestration smoke plugin worker is running",
details: {
surfaces: ["database", "scoped-api-route", "issue-panel", "orchestration-apis"],
},
};
}
});
export default plugin;
runWorker(plugin, import.meta.url);

@@ -0,0 +1,162 @@
import { randomUUID } from "node:crypto";
import { describe, expect, it } from "vitest";
import { pluginManifestV1Schema, type Issue } from "@paperclipai/shared";
import { createTestHarness } from "@paperclipai/plugin-sdk/testing";
import manifest from "../src/manifest.js";
import plugin from "../src/worker.js";
function issue(input: Partial<Issue> & Pick<Issue, "id" | "companyId" | "title">): Issue {
const now = new Date();
const { id, companyId, title, ...rest } = input;
return {
id,
companyId,
projectId: null,
projectWorkspaceId: null,
goalId: null,
parentId: null,
title,
description: null,
status: "todo",
priority: "medium",
assigneeAgentId: null,
assigneeUserId: null,
checkoutRunId: null,
executionRunId: null,
executionAgentNameKey: null,
executionLockedAt: null,
createdByAgentId: null,
createdByUserId: null,
issueNumber: null,
identifier: null,
originKind: "manual",
originId: null,
originRunId: null,
requestDepth: 0,
billingCode: null,
assigneeAdapterOverrides: null,
executionWorkspaceId: null,
executionWorkspacePreference: null,
executionWorkspaceSettings: null,
startedAt: null,
completedAt: null,
cancelledAt: null,
hiddenAt: null,
createdAt: now,
updatedAt: now,
...rest,
};
}
describe("orchestration smoke plugin", () => {
it("declares the Phase 1 orchestration surfaces", () => {
expect(pluginManifestV1Schema.parse(manifest)).toMatchObject({
id: "paperclipai.plugin-orchestration-smoke-example",
database: {
migrationsDir: "migrations",
coreReadTables: ["issues"],
},
apiRoutes: [
expect.objectContaining({ routeKey: "initialize" }),
expect.objectContaining({ routeKey: "summary" }),
],
});
});
it("creates plugin-owned orchestration rows, issue tree, document, wakeup, and summary reads", async () => {
const companyId = randomUUID();
const rootIssueId = randomUUID();
const agentId = randomUUID();
const harness = createTestHarness({ manifest });
harness.seed({
issues: [
issue({
id: rootIssueId,
companyId,
title: "Root orchestration issue",
assigneeAgentId: agentId,
}),
],
});
await plugin.definition.setup(harness.ctx);
const result = await harness.performAction<{
rootIssueId: string;
childIssueId: string;
blockerIssueId: string;
billingCode: string;
subtreeIssueIds: string[];
wakeupQueued: boolean;
}>("initialize-smoke", {
companyId,
issueId: rootIssueId,
assigneeAgentId: agentId,
});
expect(result.rootIssueId).toBe(rootIssueId);
expect(result.childIssueId).toEqual(expect.any(String));
expect(result.blockerIssueId).toEqual(expect.any(String));
expect(result.billingCode).toBe(`plugin-smoke:${rootIssueId}`);
expect(result.wakeupQueued).toBe(true);
expect(result.subtreeIssueIds).toEqual(expect.arrayContaining([rootIssueId, result.childIssueId]));
expect(harness.dbExecutes[0]?.sql).toContain(".smoke_runs");
expect(harness.dbQueries.some((entry) => entry.sql.includes("JOIN public.issues"))).toBe(true);
const relations = await harness.ctx.issues.relations.get(result.childIssueId, companyId);
expect(relations.blockedBy).toEqual([
expect.objectContaining({
id: result.blockerIssueId,
status: "done",
}),
]);
const docs = await harness.ctx.issues.documents.list(result.childIssueId, companyId);
expect(docs).toEqual([
expect.objectContaining({
key: "orchestration-smoke",
title: "Orchestration Smoke",
}),
]);
});
it("dispatches the scoped API route through the same smoke path", async () => {
const companyId = randomUUID();
const rootIssueId = randomUUID();
const agentId = randomUUID();
const harness = createTestHarness({ manifest });
harness.seed({
issues: [
issue({
id: rootIssueId,
companyId,
title: "Scoped API root",
assigneeAgentId: agentId,
}),
],
});
await plugin.definition.setup(harness.ctx);
await expect(plugin.definition.onApiRequest?.({
routeKey: "initialize",
method: "POST",
path: `/issues/${rootIssueId}/smoke`,
params: { issueId: rootIssueId },
query: {},
body: { assigneeAgentId: agentId },
actor: {
actorType: "user",
actorId: "board",
userId: "board",
agentId: null,
runId: null,
},
companyId,
headers: {},
})).resolves.toMatchObject({
status: 201,
body: expect.objectContaining({
rootIssueId,
wakeupQueued: true,
}),
});
});
});

@@ -0,0 +1,27 @@
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"lib": [
"ES2022",
"DOM"
],
"jsx": "react-jsx",
"strict": true,
"skipLibCheck": true,
"declaration": true,
"declarationMap": true,
"sourceMap": true,
"outDir": "dist",
"rootDir": "."
},
"include": [
"src",
"tests"
],
"exclude": [
"dist",
"node_modules"
]
}

@@ -0,0 +1,8 @@
import { defineConfig } from "vitest/config";
export default defineConfig({
test: {
include: ["tests/**/*.spec.ts"],
environment: "node",
},
});

@@ -118,10 +118,13 @@ Subscribe in `setup` with `ctx.events.on(name, handler)` or `ctx.events.on(name,
| `project.created`, `project.updated` | project |
| `project.workspace_created`, `project.workspace_updated`, `project.workspace_deleted` | project_workspace |
| `issue.created`, `issue.updated`, `issue.comment.created` | issue |
| `issue.document.created`, `issue.document.updated`, `issue.document.deleted` | issue |
| `issue.relations.updated`, `issue.checked_out`, `issue.released`, `issue.assignment_wakeup_requested` | issue |
| `agent.created`, `agent.updated`, `agent.status_changed` | agent |
| `agent.run.started`, `agent.run.finished`, `agent.run.failed`, `agent.run.cancelled` | run |
| `goal.created`, `goal.updated` | goal |
| `approval.created`, `approval.decided` | approval |
| `budget.incident.opened`, `budget.incident.resolved` | budget_incident |
| `cost_event.created` | cost |
| `activity.logged` | activity |
@@ -301,18 +304,29 @@ Declare in `manifest.capabilities`. Grouped by scope:
| | `project.workspaces.read` |
| | `issues.read` |
| | `issue.comments.read` |
| | `issue.documents.read` |
| | `issue.relations.read` |
| | `issue.subtree.read` |
| | `agents.read` |
| | `goals.read` |
| | `goals.create` |
| | `goals.update` |
| | `activity.read` |
| | `costs.read` |
| | `issues.orchestration.read` |
| | `database.namespace.read` |
| | `issues.create` |
| | `issues.update` |
| | `issues.checkout` |
| | `issues.wakeup` |
| | `issue.comments.create` |
| | `issue.documents.write` |
| | `issue.relations.write` |
| | `activity.log.write` |
| | `metrics.write` |
| | `telemetry.track` |
| | `database.namespace.migrate` |
| | `database.namespace.write` |
| **Instance** | `instance.settings.register` |
| | `plugin.state.read` |
| | `plugin.state.write` |
@@ -320,6 +334,7 @@ Declare in `manifest.capabilities`. Grouped by scope:
| | `events.emit` |
| | `jobs.schedule` |
| | `webhooks.receive` |
| | `api.routes.register` |
| | `http.outbound` |
| | `secrets.read-ref` |
| **Agent** | `agent.tools.register` |
@@ -337,6 +352,144 @@ Declare in `manifest.capabilities`. Grouped by scope:
Full list in code: import `PLUGIN_CAPABILITIES` from `@paperclipai/plugin-sdk`.
### Restricted Database Namespace
Trusted orchestration plugins can declare a host-owned PostgreSQL namespace:
```ts
database: {
migrationsDir: "migrations",
coreReadTables: ["issues"],
}
```
Declare `database.namespace.migrate` and `database.namespace.read`; add
`database.namespace.write` when the worker needs runtime writes. Migrations run
before worker startup, are checksum-recorded, and may create or alter objects
only inside the plugin namespace. Runtime `ctx.db.query()` allows `SELECT` from
`ctx.db.namespace` plus manifest-whitelisted `public` core tables. Runtime
`ctx.db.execute()` allows `INSERT`, `UPDATE`, and `DELETE` only against the
plugin namespace.
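As a rough illustration of that policy (not the host's enforcement code, which validates SQL server-side), the verb-level rules can be sketched as two predicates; `isQueryAllowed` and `isExecuteAllowed` are hypothetical names used only for this example:

```ts
// Hypothetical sketch of the documented policy; the real host performs
// full server-side validation, not simple string checks.
function firstKeyword(sql: string): string {
  return sql.trim().split(/\s+/)[0]?.toUpperCase() ?? "";
}

// ctx.db.query(): read-only SELECT statements.
function isQueryAllowed(sql: string): boolean {
  return firstKeyword(sql) === "SELECT";
}

// ctx.db.execute(): INSERT/UPDATE/DELETE, and only against tables in the
// plugin's own namespace.
function isExecuteAllowed(sql: string, namespace: string): boolean {
  const verb = firstKeyword(sql);
  if (verb !== "INSERT" && verb !== "UPDATE" && verb !== "DELETE") return false;
  return sql.includes(`${namespace}.`);
}
```

Under these rules a `DROP TABLE` fails both predicates, which mirrors the point above: migrations remain the only path for DDL.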
### Scoped API Routes
Manifest-declared `apiRoutes` expose JSON routes under
`/api/plugins/:pluginId/api/*` without letting a plugin claim core paths:
```ts
apiRoutes: [
{
routeKey: "initialize",
method: "POST",
path: "/issues/:issueId/smoke",
auth: "board-or-agent",
capability: "api.routes.register",
checkoutPolicy: "required-for-agent-in-progress",
companyResolution: { from: "issue", param: "issueId" },
},
]
```
Implement `onApiRequest(input)` in the worker to handle the route. The host
enforces auth, company access, capabilities, route matching, and checkout
policy before dispatch. The worker receives route params, query, parsed JSON
body, sanitized headers, actor context, and `companyId`; responses are JSON
objects of the shape `{ status?, headers?, body? }`.
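A minimal worker-side handler for such a route could look like the following sketch. The request/response shapes here are trimmed to the fields the example uses (the SDK's `PluginApiRequestInput` and `PluginApiResponse` carry more), and the response bodies are illustrative:

```ts
// Trimmed shapes for illustration only; see PluginApiRequestInput /
// PluginApiResponse in the SDK for the full contracts.
type ApiRequest = { routeKey: string; params: Record<string, string>; companyId: string };
type ApiResponse = { status?: number; body?: unknown };

async function onApiRequest(input: ApiRequest): Promise<ApiResponse> {
  // By the time this runs, the host has already checked auth, company
  // access, capabilities, and the route's checkout policy.
  if (input.routeKey === "initialize") {
    return { status: 201, body: { issueId: input.params.issueId ?? null } };
  }
  return { status: 404, body: { error: `Unknown route: ${input.routeKey}` } };
}
```

Returning a 404 body for unrecognized `routeKey` values keeps the worker forward-compatible if the manifest later declares routes the deployed worker does not yet handle.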
## Issue Orchestration APIs
Workflow plugins can use `ctx.issues` for orchestration-grade issue operations without importing host server internals.
Expanded create/update fields include blockers, billing code, board or agent assignees, labels, namespaced plugin origins, request depth, and safe execution workspace fields:
```ts
const child = await ctx.issues.create({
companyId,
parentId: missionIssueId,
inheritExecutionWorkspaceFromIssueId: missionIssueId,
title: "Implement feature slice",
status: "todo",
assigneeAgentId: workerAgentId,
billingCode: "mission:alpha",
originKind: "plugin:paperclip.missions:feature",
originId: "mission-alpha:feature-1",
blockedByIssueIds: [planningIssueId],
});
```
If `originKind` is omitted, the host stores `plugin:<pluginKey>`. Plugins may use sub-kinds such as `plugin:<pluginKey>:feature`, but the host rejects attempts to set another plugin's namespace.
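That namespacing rule can be pictured as a small resolver; `resolveOriginKind` is a hypothetical name for illustration, not an SDK export:

```ts
// Omitted originKind defaults to plugin:<pluginKey>; sub-kinds under the
// caller's own key pass through; any other plugin's namespace is rejected.
function resolveOriginKind(pluginKey: string, requested?: string | null): string {
  const own = `plugin:${pluginKey}`;
  if (!requested) return own;
  if (requested === own || requested.startsWith(`${own}:`)) return requested;
  throw new Error(`originKind outside plugin namespace: ${requested}`);
}
```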
Blocker relationships are also exposed as first-class helpers:
```ts
const relations = await ctx.issues.relations.get(child.id, companyId);
await ctx.issues.relations.setBlockedBy(child.id, [planningIssueId], companyId);
await ctx.issues.relations.addBlockers(child.id, [validationIssueId], companyId);
await ctx.issues.relations.removeBlockers(child.id, [planningIssueId], companyId);
```
Subtree reads can include just the issue tree, or compact related data for orchestration dashboards:
```ts
const subtree = await ctx.issues.getSubtree(missionIssueId, companyId, {
includeRoot: true,
includeRelations: true,
includeDocuments: true,
includeActiveRuns: true,
includeAssignees: true,
});
```
Agent-run actions can assert checkout ownership before mutating in-progress work:
```ts
await ctx.issues.assertCheckoutOwner({
issueId,
companyId,
actorAgentId: runCtx.agentId,
actorRunId: runCtx.runId,
});
```
Plugins can request assignment wakeups through the host so budget stops, execution locks, blocker checks, and heartbeat policy still apply:
```ts
await ctx.issues.requestWakeup(child.id, companyId, {
reason: "mission_advance",
contextSource: "missions.advance",
});
await ctx.issues.requestWakeups([featureIssueId, validationIssueId], companyId, {
reason: "mission_advance",
contextSource: "missions.advance",
idempotencyKeyPrefix: `mission:${missionIssueId}:advance`,
});
```
Use `ctx.issues.summaries.getOrchestration()` when a workflow needs compact reads across a root issue or subtree:
```ts
const summary = await ctx.issues.summaries.getOrchestration({
issueId: missionIssueId,
companyId,
includeSubtree: true,
billingCode: "mission:alpha",
});
```
Required capabilities:
| API | Capability |
|-----|------------|
| `ctx.issues.relations.get` | `issue.relations.read` |
| `ctx.issues.relations.setBlockedBy` / `addBlockers` / `removeBlockers` | `issue.relations.write` |
| `ctx.issues.getSubtree` | `issue.subtree.read` |
| `ctx.issues.assertCheckoutOwner` | `issues.checkout` |
| `ctx.issues.requestWakeup` / `requestWakeups` | `issues.wakeup` |
| `ctx.issues.summaries.getOrchestration` | `issues.orchestration.read` |
Plugin-originated mutations are logged with `actorType: "plugin"` and with the details fields `sourcePluginId`, `sourcePluginKey`, `initiatingActorType`, `initiatingActorId`, and `initiatingRunId` when a user or agent run initiated the plugin work.
## UI quick start
```tsx

@@ -107,6 +107,30 @@ export interface PluginWebhookInput {
requestId: string;
}
export interface PluginApiRequestInput {
routeKey: string;
method: string;
path: string;
params: Record<string, string>;
query: Record<string, string | string[]>;
body: unknown;
actor: {
actorType: "user" | "agent";
actorId: string;
agentId?: string | null;
userId?: string | null;
runId?: string | null;
};
companyId: string;
headers: Record<string, string>;
}
export interface PluginApiResponse {
status?: number;
headers?: Record<string, string>;
body?: unknown;
}
// ---------------------------------------------------------------------------
// Plugin definition
// ---------------------------------------------------------------------------
@@ -197,6 +221,13 @@ export interface PluginDefinition {
* @see PLUGIN_SPEC.md §13.7 — `handleWebhook`
*/
onWebhook?(input: PluginWebhookInput): Promise<void>;
/**
* Called for manifest-declared scoped JSON API routes under
* `/api/plugins/:pluginId/api/*` after the host has enforced auth, company
* access, capabilities, and checkout policy.
*/
onApiRequest?(input: PluginApiRequestInput): Promise<PluginApiResponse>;
}
// ---------------------------------------------------------------------------

@@ -97,6 +97,13 @@ export interface HostServices {
delete(params: WorkerToHostMethods["state.delete"][0]): Promise<void>;
};
/** Provides restricted plugin database namespace methods. */
db: {
namespace(params: WorkerToHostMethods["db.namespace"][0]): Promise<WorkerToHostMethods["db.namespace"][1]>;
query(params: WorkerToHostMethods["db.query"][0]): Promise<WorkerToHostMethods["db.query"][1]>;
execute(params: WorkerToHostMethods["db.execute"][0]): Promise<WorkerToHostMethods["db.execute"][1]>;
};
/** Provides `entities.upsert`, `entities.list`. */
entities: {
upsert(params: WorkerToHostMethods["entities.upsert"][0]): Promise<WorkerToHostMethods["entities.upsert"][1]>;
@@ -160,12 +167,21 @@ export interface HostServices {
getWorkspaceForIssue(params: WorkerToHostMethods["projects.getWorkspaceForIssue"][0]): Promise<WorkerToHostMethods["projects.getWorkspaceForIssue"][1]>;
};
/** Provides issue read/write, relation, checkout, wakeup, summary, comment methods. */
issues: {
list(params: WorkerToHostMethods["issues.list"][0]): Promise<WorkerToHostMethods["issues.list"][1]>;
get(params: WorkerToHostMethods["issues.get"][0]): Promise<WorkerToHostMethods["issues.get"][1]>;
create(params: WorkerToHostMethods["issues.create"][0]): Promise<WorkerToHostMethods["issues.create"][1]>;
update(params: WorkerToHostMethods["issues.update"][0]): Promise<WorkerToHostMethods["issues.update"][1]>;
getRelations(params: WorkerToHostMethods["issues.relations.get"][0]): Promise<WorkerToHostMethods["issues.relations.get"][1]>;
setBlockedBy(params: WorkerToHostMethods["issues.relations.setBlockedBy"][0]): Promise<WorkerToHostMethods["issues.relations.setBlockedBy"][1]>;
addBlockers(params: WorkerToHostMethods["issues.relations.addBlockers"][0]): Promise<WorkerToHostMethods["issues.relations.addBlockers"][1]>;
removeBlockers(params: WorkerToHostMethods["issues.relations.removeBlockers"][0]): Promise<WorkerToHostMethods["issues.relations.removeBlockers"][1]>;
assertCheckoutOwner(params: WorkerToHostMethods["issues.assertCheckoutOwner"][0]): Promise<WorkerToHostMethods["issues.assertCheckoutOwner"][1]>;
getSubtree(params: WorkerToHostMethods["issues.getSubtree"][0]): Promise<WorkerToHostMethods["issues.getSubtree"][1]>;
requestWakeup(params: WorkerToHostMethods["issues.requestWakeup"][0]): Promise<WorkerToHostMethods["issues.requestWakeup"][1]>;
requestWakeups(params: WorkerToHostMethods["issues.requestWakeups"][0]): Promise<WorkerToHostMethods["issues.requestWakeups"][1]>;
getOrchestrationSummary(params: WorkerToHostMethods["issues.summaries.getOrchestration"][0]): Promise<WorkerToHostMethods["issues.summaries.getOrchestration"][1]>;
listComments(params: WorkerToHostMethods["issues.listComments"][0]): Promise<WorkerToHostMethods["issues.listComments"][1]>;
createComment(params: WorkerToHostMethods["issues.createComment"][0]): Promise<WorkerToHostMethods["issues.createComment"][1]>;
};
@@ -269,6 +285,10 @@ const METHOD_CAPABILITY_MAP: Record<WorkerToHostMethodName, PluginCapability | n
"state.set": "plugin.state.write",
"state.delete": "plugin.state.write",
"db.namespace": "database.namespace.read",
"db.query": "database.namespace.read",
"db.execute": "database.namespace.write",
// Entities — no specific capability required (plugin-scoped by design)
"entities.upsert": null,
"entities.list": null,
@@ -311,6 +331,15 @@ const METHOD_CAPABILITY_MAP: Record<WorkerToHostMethodName, PluginCapability | n
"issues.get": "issues.read",
"issues.create": "issues.create",
"issues.update": "issues.update",
"issues.relations.get": "issue.relations.read",
"issues.relations.setBlockedBy": "issue.relations.write",
"issues.relations.addBlockers": "issue.relations.write",
"issues.relations.removeBlockers": "issue.relations.write",
"issues.assertCheckoutOwner": "issues.checkout",
"issues.getSubtree": "issue.subtree.read",
"issues.requestWakeup": "issues.wakeup",
"issues.requestWakeups": "issues.wakeup",
"issues.summaries.getOrchestration": "issues.orchestration.read",
"issues.listComments": "issue.comments.read",
"issues.createComment": "issue.comments.create",
@@ -419,6 +448,16 @@ export function createHostClientHandlers(
return services.state.delete(params);
}),
"db.namespace": gated("db.namespace", async (params) => {
return services.db.namespace(params);
}),
"db.query": gated("db.query", async (params) => {
return services.db.query(params);
}),
"db.execute": gated("db.execute", async (params) => {
return services.db.execute(params);
}),
// Entities
"entities.upsert": gated("entities.upsert", async (params) => {
return services.entities.upsert(params);
@@ -503,6 +542,33 @@ export function createHostClientHandlers(
"issues.update": gated("issues.update", async (params) => {
return services.issues.update(params);
}),
"issues.relations.get": gated("issues.relations.get", async (params) => {
return services.issues.getRelations(params);
}),
"issues.relations.setBlockedBy": gated("issues.relations.setBlockedBy", async (params) => {
return services.issues.setBlockedBy(params);
}),
"issues.relations.addBlockers": gated("issues.relations.addBlockers", async (params) => {
return services.issues.addBlockers(params);
}),
"issues.relations.removeBlockers": gated("issues.relations.removeBlockers", async (params) => {
return services.issues.removeBlockers(params);
}),
"issues.assertCheckoutOwner": gated("issues.assertCheckoutOwner", async (params) => {
return services.issues.assertCheckoutOwner(params);
}),
"issues.getSubtree": gated("issues.getSubtree", async (params) => {
return services.issues.getSubtree(params);
}),
"issues.requestWakeup": gated("issues.requestWakeup", async (params) => {
return services.issues.requestWakeup(params);
}),
"issues.requestWakeups": gated("issues.requestWakeups", async (params) => {
return services.issues.requestWakeups(params);
}),
"issues.summaries.getOrchestration": gated("issues.summaries.getOrchestration", async (params) => {
return services.issues.getOrchestrationSummary(params);
}),
"issues.listComments": gated("issues.listComments", async (params) => {
return services.issues.listComments(params);
}),

@@ -95,6 +95,8 @@ export type {
PluginHealthDiagnostics,
PluginConfigValidationResult,
PluginWebhookInput,
PluginApiRequestInput,
PluginApiResponse,
} from "./define-plugin.js";
export type {
TestHarness,
@@ -171,6 +173,22 @@ export type {
PluginProjectsClient,
PluginCompaniesClient,
PluginIssuesClient,
PluginIssueMutationActor,
PluginIssueRelationsClient,
PluginIssueRelationSummary,
PluginIssueCheckoutOwnership,
PluginIssueWakeupResult,
PluginIssueWakeupBatchResult,
PluginIssueRunSummary,
PluginIssueApprovalSummary,
PluginIssueCostSummary,
PluginBudgetIncidentSummary,
PluginIssueInvocationBlockSummary,
PluginIssueOrchestrationSummary,
PluginIssueSubtreeOptions,
PluginIssueAssigneeSummary,
PluginIssueSubtree,
PluginIssueSummariesClient,
PluginAgentsClient,
PluginAgentSessionsClient,
AgentSession,
@@ -203,8 +221,10 @@ export type {
Project,
Issue,
IssueComment,
IssueDocumentSummary,
Agent,
Goal,
PluginDatabaseClient,
} from "./types.js";
// Manifest and constant types re-exported from @paperclipai/shared
@@ -221,7 +241,12 @@ export type {
PluginLauncherRenderDeclaration,
PluginLauncherDeclaration,
PluginMinimumHostVersion,
PluginDatabaseDeclaration,
PluginApiRouteCompanyResolution,
PluginApiRouteDeclaration,
PluginRecord,
PluginDatabaseNamespaceRecord,
PluginMigrationRecord,
PluginConfig,
JsonSchema,
PluginStatus,
@@ -238,6 +263,13 @@ export type {
PluginJobRunStatus,
PluginJobRunTrigger,
PluginWebhookDeliveryStatus,
PluginDatabaseCoreReadTable,
PluginDatabaseMigrationStatus,
PluginDatabaseNamespaceMode,
PluginDatabaseNamespaceStatus,
PluginApiRouteAuthMode,
PluginApiRouteCheckoutPolicy,
PluginApiRouteMethod,
PluginEventType,
PluginBridgeErrorCode,
} from "./types.js";

@@ -34,6 +34,12 @@ export type { PluginLauncherRenderContextSnapshot } from "@paperclipai/shared";
import type {
PluginEvent,
PluginIssueCheckoutOwnership,
PluginIssueOrchestrationSummary,
PluginIssueRelationSummary,
PluginIssueSubtree,
PluginIssueWakeupBatchResult,
PluginIssueWakeupResult,
PluginJobContext,
PluginWorkspace,
ToolRunContext,
@@ -41,6 +47,8 @@ import type {
} from "./types.js";
import type {
PluginHealthDiagnostics,
PluginApiRequestInput,
PluginApiResponse,
PluginConfigValidationResult,
PluginWebhookInput,
} from "./define-plugin.js";
@@ -219,6 +227,8 @@ export interface InitializeParams {
};
/** Host API version. */
apiVersion: number;
/** Host-derived plugin database namespace, when the manifest declares database access. */
databaseNamespace?: string | null;
}
/**
@@ -374,6 +384,8 @@ export interface HostToWorkerMethods {
runJob: [params: RunJobParams, result: void];
/** @see PLUGIN_SPEC.md §13.7 */
handleWebhook: [params: PluginWebhookInput, result: void];
/** Scoped plugin API route dispatch. */
handleApiRequest: [params: PluginApiRequestInput, result: PluginApiResponse];
/** @see PLUGIN_SPEC.md §13.8 */
getData: [params: GetDataParams, result: unknown];
/** @see PLUGIN_SPEC.md §13.9 */
@@ -399,6 +411,7 @@ export const HOST_TO_WORKER_OPTIONAL_METHODS: readonly HostToWorkerMethodName[]
"onEvent",
"runJob",
"handleWebhook",
"handleApiRequest",
"getData",
"performAction",
"executeTool",
@@ -432,6 +445,20 @@ export interface WorkerToHostMethods {
result: void,
];
// Restricted plugin database namespace
"db.namespace": [
params: Record<string, never>,
result: string,
];
"db.query": [
params: { sql: string; params?: unknown[] },
result: unknown[],
];
"db.execute": [
params: { sql: string; params?: unknown[] },
result: { rowCount: number },
];
// Entities
"entities.upsert": [
params: {
@@ -569,6 +596,8 @@ export interface WorkerToHostMethods {
companyId: string;
projectId?: string;
assigneeAgentId?: string;
originKind?: string;
originId?: string;
status?: string;
limit?: number;
offset?: number;
@@ -588,8 +617,23 @@ export interface WorkerToHostMethods {
inheritExecutionWorkspaceFromIssueId?: string;
title: string;
description?: string;
status?: string;
priority?: string;
assigneeAgentId?: string;
assigneeUserId?: string | null;
requestDepth?: number;
billingCode?: string | null;
originKind?: string | null;
originId?: string | null;
originRunId?: string | null;
blockedByIssueIds?: string[];
labelIds?: string[];
executionWorkspaceId?: string | null;
executionWorkspacePreference?: string | null;
executionWorkspaceSettings?: Record<string, unknown> | null;
actorAgentId?: string | null;
actorUserId?: string | null;
actorRunId?: string | null;
},
result: Issue,
];
@@ -601,6 +645,99 @@ export interface WorkerToHostMethods {
},
result: Issue,
];
"issues.relations.get": [
params: { issueId: string; companyId: string },
result: PluginIssueRelationSummary,
];
"issues.relations.setBlockedBy": [
params: {
issueId: string;
companyId: string;
blockedByIssueIds: string[];
actorAgentId?: string | null;
actorUserId?: string | null;
actorRunId?: string | null;
},
result: PluginIssueRelationSummary,
];
"issues.relations.addBlockers": [
params: {
issueId: string;
companyId: string;
blockerIssueIds: string[];
actorAgentId?: string | null;
actorUserId?: string | null;
actorRunId?: string | null;
},
result: PluginIssueRelationSummary,
];
"issues.relations.removeBlockers": [
params: {
issueId: string;
companyId: string;
blockerIssueIds: string[];
actorAgentId?: string | null;
actorUserId?: string | null;
actorRunId?: string | null;
},
result: PluginIssueRelationSummary,
];
"issues.assertCheckoutOwner": [
params: {
issueId: string;
companyId: string;
actorAgentId: string;
actorRunId: string;
},
result: PluginIssueCheckoutOwnership,
];
"issues.getSubtree": [
params: {
issueId: string;
companyId: string;
includeRoot?: boolean;
includeRelations?: boolean;
includeDocuments?: boolean;
includeActiveRuns?: boolean;
includeAssignees?: boolean;
},
result: PluginIssueSubtree,
];
"issues.requestWakeup": [
params: {
issueId: string;
companyId: string;
reason?: string;
contextSource?: string;
idempotencyKey?: string | null;
actorAgentId?: string | null;
actorUserId?: string | null;
actorRunId?: string | null;
},
result: PluginIssueWakeupResult,
];
"issues.requestWakeups": [
params: {
issueIds: string[];
companyId: string;
reason?: string;
contextSource?: string;
idempotencyKeyPrefix?: string | null;
actorAgentId?: string | null;
actorUserId?: string | null;
actorRunId?: string | null;
},
result: PluginIssueWakeupBatchResult[],
];
"issues.summaries.getOrchestration": [
params: {
issueId: string;
companyId: string;
includeSubtree?: boolean;
billingCode?: string | null;
},
result: PluginIssueOrchestrationSummary,
];
"issues.listComments": [
params: { issueId: string; companyId: string },
result: IssueComment[],

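The `HostToWorkerMethods` / `WorkerToHostMethods` maps above use the tuple-map pattern: each method name maps to a `[params, result]` tuple, so one generic `call` helper gets compile-time checking on both sides of the bridge. A minimal self-contained sketch (the `Methods` subset and the toy dispatcher here are illustrative, not the real bridge):

```typescript
// Tuple-map RPC pattern: method name -> [params, result].
// This hypothetical subset mirrors the shape of HostToWorkerMethods.
type Methods = {
  handleApiRequest: [params: { path: string }, result: { status: number }];
  getData: [params: { key: string }, result: unknown];
};

// A typed caller derived from the map: the compiler rejects a wrong
// params shape and infers the result type per method name.
function makeCaller(
  dispatch: (method: string, params: unknown) => Promise<unknown>,
) {
  return async function call<M extends keyof Methods>(
    method: M,
    params: Methods[M][0],
  ): Promise<Methods[M][1]> {
    return (await dispatch(method as string, params)) as Methods[M][1];
  };
}

// Toy dispatcher standing in for the worker/host postMessage bridge.
const call = makeCaller(async (method) => {
  if (method === "handleApiRequest") return { status: 200 };
  return null;
});

call("handleApiRequest", { path: "/ping" }).then((res) => {
  console.log(res.status);
});
```

The payoff is that adding a method like `handleApiRequest` to the map is the only change needed for both endpoints to agree on its types.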
View File

@@ -3,10 +3,12 @@ import type {
PaperclipPluginManifestV1,
PluginCapability,
PluginEventType,
PluginIssueOriginKind,
Company,
Project,
Issue,
IssueComment,
IssueDocument,
Agent,
Goal,
} from "@paperclipai/shared";
@@ -72,6 +74,8 @@ export interface TestHarness {
activity: Array<{ message: string; entityType?: string; entityId?: string; metadata?: Record<string, unknown> }>;
metrics: Array<{ name: string; value: number; tags?: Record<string, string> }>;
telemetry: Array<{ eventName: string; dimensions?: Record<string, string | number | boolean> }>;
dbQueries: Array<{ sql: string; params?: unknown[] }>;
dbExecutes: Array<{ sql: string; params?: unknown[] }>;
}
type EventRegistration = {
@@ -134,6 +138,8 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
const activity: TestHarness["activity"] = [];
const metrics: TestHarness["metrics"] = [];
const telemetry: TestHarness["telemetry"] = [];
const dbQueries: TestHarness["dbQueries"] = [];
const dbExecutes: TestHarness["dbExecutes"] = [];
const state = new Map<string, unknown>();
const entities = new Map<string, PluginEntityRecord>();
@@ -141,7 +147,9 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
const companies = new Map<string, Company>();
const projects = new Map<string, Project>();
const issues = new Map<string, Issue>();
const blockedByIssueIds = new Map<string, string[]>();
const issueComments = new Map<string, IssueComment[]>();
const issueDocuments = new Map<string, IssueDocument>();
const agents = new Map<string, Agent>();
const goals = new Map<string, Goal>();
const projectWorkspaces = new Map<string, PluginWorkspace[]>();
@@ -156,6 +164,42 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
const actionHandlers = new Map<string, (params: Record<string, unknown>) => Promise<unknown>>();
const toolHandlers = new Map<string, (params: unknown, runCtx: ToolRunContext) => Promise<ToolResult>>();
function issueRelationSummary(issueId: string) {
const issue = issues.get(issueId);
if (!issue) throw new Error(`Issue not found: ${issueId}`);
const summarize = (candidateId: string) => {
const related = issues.get(candidateId);
if (!related || related.companyId !== issue.companyId) return null;
return {
id: related.id,
identifier: related.identifier,
title: related.title,
status: related.status,
priority: related.priority,
assigneeAgentId: related.assigneeAgentId,
assigneeUserId: related.assigneeUserId,
};
};
const blockedBy = (blockedByIssueIds.get(issueId) ?? [])
.map(summarize)
.filter((value): value is NonNullable<typeof value> => value !== null);
const blocks = [...blockedByIssueIds.entries()]
.filter(([, blockers]) => blockers.includes(issueId))
.map(([blockedIssueId]) => summarize(blockedIssueId))
.filter((value): value is NonNullable<typeof value> => value !== null);
return { blockedBy, blocks };
}
const defaultPluginOriginKind: PluginIssueOriginKind = `plugin:${manifest.id}`;
function normalizePluginOriginKind(originKind: unknown = defaultPluginOriginKind): PluginIssueOriginKind {
if (originKind == null || originKind === "") return defaultPluginOriginKind;
if (typeof originKind !== "string") throw new Error("Plugin issue originKind must be a string");
if (originKind === defaultPluginOriginKind || originKind.startsWith(`${defaultPluginOriginKind}:`)) {
return originKind as PluginIssueOriginKind;
}
throw new Error(`Plugin may only use originKind values under ${defaultPluginOriginKind}`);
}
const ctx: PluginContext = {
manifest,
config: {
@@ -195,6 +239,19 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
launchers.set(launcher.id, launcher);
},
},
db: {
namespace: manifest.database ? `test_${manifest.id.replace(/[^a-z0-9_]+/g, "_")}` : "",
async query(sql, params) {
requireCapability(manifest, capabilitySet, "database.namespace.read");
dbQueries.push({ sql, params });
return [];
},
async execute(sql, params) {
requireCapability(manifest, capabilitySet, "database.namespace.write");
dbExecutes.push({ sql, params });
return { rowCount: 0 };
},
},
http: {
async fetch(url, init) {
requireCapability(manifest, capabilitySet, "http.outbound");
@@ -338,6 +395,11 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
out = out.filter((issue) => issue.companyId === companyId);
if (input?.projectId) out = out.filter((issue) => issue.projectId === input.projectId);
if (input?.assigneeAgentId) out = out.filter((issue) => issue.assigneeAgentId === input.assigneeAgentId);
if (input?.originKind) {
if (input.originKind.startsWith("plugin:")) normalizePluginOriginKind(input.originKind);
out = out.filter((issue) => issue.originKind === input.originKind);
}
if (input?.originId) out = out.filter((issue) => issue.originId === input.originId);
if (input?.status) out = out.filter((issue) => issue.status === input.status);
if (input?.offset) out = out.slice(input.offset);
if (input?.limit) out = out.slice(0, input.limit);
@@ -360,10 +422,10 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
parentId: input.parentId ?? null,
title: input.title,
description: input.description ?? null,
status: "todo",
status: input.status ?? "todo",
priority: input.priority ?? "medium",
assigneeAgentId: input.assigneeAgentId ?? null,
assigneeUserId: null,
assigneeUserId: input.assigneeUserId ?? null,
checkoutRunId: null,
executionRunId: null,
executionAgentNameKey: null,
@@ -372,12 +434,15 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
createdByUserId: null,
issueNumber: null,
identifier: null,
requestDepth: 0,
billingCode: null,
originKind: normalizePluginOriginKind(input.originKind),
originId: input.originId ?? null,
originRunId: input.originRunId ?? null,
requestDepth: input.requestDepth ?? 0,
billingCode: input.billingCode ?? null,
assigneeAdapterOverrides: null,
executionWorkspaceId: null,
executionWorkspacePreference: null,
executionWorkspaceSettings: null,
executionWorkspaceId: input.executionWorkspaceId ?? null,
executionWorkspacePreference: input.executionWorkspacePreference ?? null,
executionWorkspaceSettings: input.executionWorkspaceSettings ?? null,
startedAt: null,
completedAt: null,
cancelledAt: null,
@@ -386,20 +451,75 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
updatedAt: now,
};
issues.set(record.id, record);
if (input.blockedByIssueIds) blockedByIssueIds.set(record.id, [...new Set(input.blockedByIssueIds)]);
return record;
},
async update(issueId, patch, companyId) {
requireCapability(manifest, capabilitySet, "issues.update");
const record = issues.get(issueId);
if (!isInCompany(record, companyId)) throw new Error(`Issue not found: ${issueId}`);
const { blockedByIssueIds: nextBlockedByIssueIds, ...issuePatch } = patch;
if (issuePatch.originKind !== undefined) {
issuePatch.originKind = normalizePluginOriginKind(issuePatch.originKind);
}
const updated: Issue = {
...record,
...patch,
...issuePatch,
updatedAt: new Date(),
};
issues.set(issueId, updated);
if (nextBlockedByIssueIds !== undefined) {
blockedByIssueIds.set(issueId, [...new Set(nextBlockedByIssueIds)]);
}
return updated;
},
async assertCheckoutOwner(input) {
requireCapability(manifest, capabilitySet, "issues.checkout");
const record = issues.get(input.issueId);
if (!isInCompany(record, input.companyId)) throw new Error(`Issue not found: ${input.issueId}`);
if (
record.status !== "in_progress" ||
record.assigneeAgentId !== input.actorAgentId ||
(record.checkoutRunId !== null && record.checkoutRunId !== input.actorRunId)
) {
throw new Error("Issue run ownership conflict");
}
return {
issueId: record.id,
status: record.status,
assigneeAgentId: record.assigneeAgentId,
checkoutRunId: record.checkoutRunId,
adoptedFromRunId: null,
};
},
async requestWakeup(issueId, companyId) {
requireCapability(manifest, capabilitySet, "issues.wakeup");
const record = issues.get(issueId);
if (!isInCompany(record, companyId)) throw new Error(`Issue not found: ${issueId}`);
if (!record.assigneeAgentId) throw new Error("Issue has no assigned agent to wake");
if (["backlog", "done", "cancelled"].includes(record.status)) {
throw new Error(`Issue is not wakeable in status: ${record.status}`);
}
const unresolved = issueRelationSummary(issueId).blockedBy.filter((blocker) => blocker.status !== "done");
if (unresolved.length > 0) throw new Error("Issue is blocked by unresolved blockers");
return { queued: true, runId: randomUUID() };
},
async requestWakeups(issueIds, companyId) {
requireCapability(manifest, capabilitySet, "issues.wakeup");
const results = [];
for (const issueId of issueIds) {
const record = issues.get(issueId);
if (!isInCompany(record, companyId)) throw new Error(`Issue not found: ${issueId}`);
if (!record.assigneeAgentId) throw new Error("Issue has no assigned agent to wake");
if (["backlog", "done", "cancelled"].includes(record.status)) {
throw new Error(`Issue is not wakeable in status: ${record.status}`);
}
const unresolved = issueRelationSummary(issueId).blockedBy.filter((blocker) => blocker.status !== "done");
if (unresolved.length > 0) throw new Error("Issue is blocked by unresolved blockers");
results.push({ issueId, queued: true, runId: randomUUID() });
}
return results;
},
async listComments(issueId, companyId) {
requireCapability(manifest, capabilitySet, "issue.comments.read");
if (!isInCompany(issues.get(issueId), companyId)) return [];
@@ -431,12 +551,14 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
async list(issueId, companyId) {
requireCapability(manifest, capabilitySet, "issue.documents.read");
if (!isInCompany(issues.get(issueId), companyId)) return [];
return [];
return [...issueDocuments.values()]
.filter((document) => document.issueId === issueId && document.companyId === companyId)
.map(({ body: _body, ...summary }) => summary);
},
async get(issueId, _key, companyId) {
async get(issueId, key, companyId) {
requireCapability(manifest, capabilitySet, "issue.documents.read");
if (!isInCompany(issues.get(issueId), companyId)) return null;
return null;
return issueDocuments.get(`${issueId}|${key}`) ?? null;
},
async upsert(input) {
requireCapability(manifest, capabilitySet, "issue.documents.write");
@@ -444,7 +566,27 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
if (!isInCompany(parentIssue, input.companyId)) {
throw new Error(`Issue not found: ${input.issueId}`);
}
throw new Error("documents.upsert is not implemented in test context");
const now = new Date();
const existing = issueDocuments.get(`${input.issueId}|${input.key}`);
const document: IssueDocument = {
id: existing?.id ?? randomUUID(),
companyId: input.companyId,
issueId: input.issueId,
key: input.key,
title: input.title ?? existing?.title ?? null,
format: "markdown",
latestRevisionId: randomUUID(),
latestRevisionNumber: (existing?.latestRevisionNumber ?? 0) + 1,
createdByAgentId: existing?.createdByAgentId ?? null,
createdByUserId: existing?.createdByUserId ?? null,
updatedByAgentId: null,
updatedByUserId: null,
createdAt: existing?.createdAt ?? now,
updatedAt: now,
body: input.body,
};
issueDocuments.set(`${input.issueId}|${input.key}`, document);
return document;
},
async delete(issueId, key, companyId) {
requireCapability(manifest, capabilitySet, "issue.documents.write");
@@ -452,6 +594,104 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
if (!isInCompany(parentIssue, companyId)) {
throw new Error(`Issue not found: ${issueId}`);
}
issueDocuments.delete(`${issueId}|${key}`);
},
},
relations: {
async get(issueId, companyId) {
requireCapability(manifest, capabilitySet, "issue.relations.read");
if (!isInCompany(issues.get(issueId), companyId)) throw new Error(`Issue not found: ${issueId}`);
return issueRelationSummary(issueId);
},
async setBlockedBy(issueId, nextBlockedByIssueIds, companyId) {
requireCapability(manifest, capabilitySet, "issue.relations.write");
if (!isInCompany(issues.get(issueId), companyId)) throw new Error(`Issue not found: ${issueId}`);
blockedByIssueIds.set(issueId, [...new Set(nextBlockedByIssueIds)]);
return issueRelationSummary(issueId);
},
async addBlockers(issueId, blockerIssueIds, companyId) {
requireCapability(manifest, capabilitySet, "issue.relations.write");
if (!isInCompany(issues.get(issueId), companyId)) throw new Error(`Issue not found: ${issueId}`);
const next = new Set(blockedByIssueIds.get(issueId) ?? []);
for (const blockerIssueId of blockerIssueIds) next.add(blockerIssueId);
blockedByIssueIds.set(issueId, [...next]);
return issueRelationSummary(issueId);
},
async removeBlockers(issueId, blockerIssueIds, companyId) {
requireCapability(manifest, capabilitySet, "issue.relations.write");
if (!isInCompany(issues.get(issueId), companyId)) throw new Error(`Issue not found: ${issueId}`);
const removals = new Set(blockerIssueIds);
blockedByIssueIds.set(
issueId,
(blockedByIssueIds.get(issueId) ?? []).filter((blockerIssueId) => !removals.has(blockerIssueId)),
);
return issueRelationSummary(issueId);
},
},
async getSubtree(issueId, companyId, options) {
requireCapability(manifest, capabilitySet, "issue.subtree.read");
const root = issues.get(issueId);
if (!isInCompany(root, companyId)) throw new Error(`Issue not found: ${issueId}`);
const includeRoot = options?.includeRoot !== false;
const allIds = [root.id];
let frontier = [root.id];
while (frontier.length > 0) {
const children = [...issues.values()]
.filter((issue) => issue.companyId === companyId && frontier.includes(issue.parentId ?? ""))
.map((issue) => issue.id)
.filter((id) => !allIds.includes(id));
allIds.push(...children);
frontier = children;
}
const issueIds = includeRoot ? allIds : allIds.filter((id) => id !== root.id);
const subtreeIssues = issueIds.map((id) => issues.get(id)).filter((candidate): candidate is Issue => Boolean(candidate));
return {
rootIssueId: root.id,
companyId,
issueIds,
issues: subtreeIssues,
...(options?.includeRelations
? { relations: Object.fromEntries(issueIds.map((id) => [id, issueRelationSummary(id)])) }
: {}),
...(options?.includeDocuments ? { documents: Object.fromEntries(issueIds.map((id) => [id, []])) } : {}),
...(options?.includeActiveRuns ? { activeRuns: Object.fromEntries(issueIds.map((id) => [id, []])) } : {}),
...(options?.includeAssignees ? { assignees: {} } : {}),
};
},
summaries: {
async getOrchestration(input) {
requireCapability(manifest, capabilitySet, "issues.orchestration.read");
const root = issues.get(input.issueId);
if (!isInCompany(root, input.companyId)) throw new Error(`Issue not found: ${input.issueId}`);
const subtreeIssueIds = [root.id];
if (input.includeSubtree) {
let frontier = [root.id];
while (frontier.length > 0) {
const children = [...issues.values()]
.filter((issue) => issue.companyId === input.companyId && frontier.includes(issue.parentId ?? ""))
.map((issue) => issue.id)
.filter((id) => !subtreeIssueIds.includes(id));
subtreeIssueIds.push(...children);
frontier = children;
}
}
return {
issueId: root.id,
companyId: input.companyId,
subtreeIssueIds,
relations: Object.fromEntries(subtreeIssueIds.map((id) => [id, issueRelationSummary(id)])),
approvals: [],
runs: [],
costs: {
costCents: 0,
inputTokens: 0,
cachedInputTokens: 0,
outputTokens: 0,
billingCode: input.billingCode ?? null,
},
openBudgetIncidents: [],
invocationBlocks: [],
};
},
},
},
@@ -660,7 +900,12 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
seed(input) {
for (const row of input.companies ?? []) companies.set(row.id, row);
for (const row of input.projects ?? []) projects.set(row.id, row);
for (const row of input.issues ?? []) issues.set(row.id, row);
for (const row of input.issues ?? []) {
issues.set(row.id, row);
if (row.blockedBy) {
blockedByIssueIds.set(row.id, row.blockedBy.map((blocker) => blocker.id));
}
}
for (const row of input.issueComments ?? []) {
const list = issueComments.get(row.issueId) ?? [];
list.push(row);
@@ -738,6 +983,8 @@ export function createTestHarness(options: TestHarnessOptions): TestHarness {
activity,
metrics,
telemetry,
dbQueries,
dbExecutes,
};
return harness;

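The harness's `requestWakeup` / `requestWakeups` handlers above all apply the same gate: an issue wakes only if it has an assignee, is in a wakeable status, and has no unresolved blockers. A standalone sketch of that rule (names here are illustrative, not the harness API):

```typescript
// Minimal restatement of the wakeup gating rule enforced by the test
// harness: assignee present, status wakeable, all blockers resolved.
interface IssueLite {
  id: string;
  status: string;
  assigneeAgentId: string | null;
}

function canWake(
  issue: IssueLite,
  blockers: IssueLite[],
): { queued: boolean; reason?: string } {
  if (!issue.assigneeAgentId) {
    return { queued: false, reason: "no assignee" };
  }
  if (["backlog", "done", "cancelled"].includes(issue.status)) {
    return { queued: false, reason: `not wakeable in ${issue.status}` };
  }
  // A blocker counts as unresolved until it reaches "done".
  const unresolved = blockers.filter((b) => b.status !== "done");
  if (unresolved.length > 0) {
    return { queued: false, reason: "blocked" };
  }
  return { queued: true };
}

const blocker: IssueLite = { id: "b1", status: "in_progress", assigneeAgentId: "a2" };
const issue: IssueLite = { id: "i1", status: "todo", assigneeAgentId: "a1" };
console.log(canWake(issue, [blocker]).reason); // "blocked"
console.log(canWake(issue, [{ ...blocker, status: "done" }]).queued); // true
```

The harness raises errors where this sketch returns reasons, which is why the relaxed blocked-wake test mentioned in the commit list only asserts that no run is queued while a blocker is open.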
View File

@@ -21,6 +21,8 @@ import type {
IssueComment,
IssueDocument,
IssueDocumentSummary,
IssueRelationIssueSummary,
PluginIssueOriginKind,
Agent,
Goal,
} from "@paperclipai/shared";
@@ -40,7 +42,12 @@ export type {
PluginLauncherRenderDeclaration,
PluginLauncherDeclaration,
PluginMinimumHostVersion,
PluginDatabaseDeclaration,
PluginApiRouteDeclaration,
PluginApiRouteCompanyResolution,
PluginRecord,
PluginDatabaseNamespaceRecord,
PluginMigrationRecord,
PluginConfig,
JsonSchema,
PluginStatus,
@@ -57,6 +64,13 @@ export type {
PluginJobRunStatus,
PluginJobRunTrigger,
PluginWebhookDeliveryStatus,
PluginDatabaseCoreReadTable,
PluginDatabaseMigrationStatus,
PluginDatabaseNamespaceMode,
PluginDatabaseNamespaceStatus,
PluginApiRouteAuthMode,
PluginApiRouteCheckoutPolicy,
PluginApiRouteMethod,
PluginEventType,
PluginBridgeErrorCode,
Company,
@@ -65,6 +79,8 @@ export type {
IssueComment,
IssueDocument,
IssueDocumentSummary,
IssueRelationIssueSummary,
PluginIssueOriginKind,
Agent,
Goal,
} from "@paperclipai/shared";
@@ -407,6 +423,17 @@ export interface PluginLaunchersClient {
register(launcher: PluginLauncherRegistration): void;
}
export interface PluginDatabaseClient {
/** Host-derived PostgreSQL schema name for this plugin's namespace. */
namespace: string;
/** Run a restricted SELECT against the plugin namespace and whitelisted core tables. */
query<T = Record<string, unknown>>(sql: string, params?: unknown[]): Promise<T[]>;
/** Run a restricted INSERT, UPDATE, or DELETE against the plugin namespace. */
execute(sql: string, params?: unknown[]): Promise<{ rowCount: number }>;
}
/**
* `ctx.http` — make outbound HTTP requests.
*
@@ -867,6 +894,178 @@ export interface PluginIssueDocumentsClient {
delete(issueId: string, key: string, companyId: string): Promise<void>;
}
export interface PluginIssueMutationActor {
/** Agent that initiated the plugin operation, when the plugin is acting from an agent run. */
actorAgentId?: string | null;
/** Board/user that initiated the plugin operation, when known. */
actorUserId?: string | null;
/** Heartbeat run that initiated the operation. Required for checkout-aware agent actions. */
actorRunId?: string | null;
}
export interface PluginIssueRelationSummary {
blockedBy: IssueRelationIssueSummary[];
blocks: IssueRelationIssueSummary[];
}
export interface PluginIssueRelationsClient {
/** Read blocker relationships for an issue. Requires `issue.relations.read`. */
get(issueId: string, companyId: string): Promise<PluginIssueRelationSummary>;
/** Replace the issue's blocked-by relation set. Requires `issue.relations.write`. */
setBlockedBy(
issueId: string,
blockedByIssueIds: string[],
companyId: string,
actor?: PluginIssueMutationActor,
): Promise<PluginIssueRelationSummary>;
/** Add one or more blockers while preserving existing blockers. Requires `issue.relations.write`. */
addBlockers(
issueId: string,
blockerIssueIds: string[],
companyId: string,
actor?: PluginIssueMutationActor,
): Promise<PluginIssueRelationSummary>;
/** Remove one or more blockers while preserving all other blockers. Requires `issue.relations.write`. */
removeBlockers(
issueId: string,
blockerIssueIds: string[],
companyId: string,
actor?: PluginIssueMutationActor,
): Promise<PluginIssueRelationSummary>;
}
export interface PluginIssueCheckoutOwnership {
issueId: string;
status: Issue["status"];
assigneeAgentId: string | null;
checkoutRunId: string | null;
adoptedFromRunId: string | null;
}
export interface PluginIssueWakeupResult {
queued: boolean;
runId: string | null;
}
export interface PluginIssueWakeupBatchResult {
issueId: string;
queued: boolean;
runId: string | null;
}
export interface PluginIssueRunSummary {
id: string;
issueId: string | null;
agentId: string;
status: string;
invocationSource: string;
triggerDetail: string | null;
startedAt: string | null;
finishedAt: string | null;
error: string | null;
createdAt: string;
}
export interface PluginIssueApprovalSummary {
issueId: string;
id: string;
type: string;
status: string;
requestedByAgentId: string | null;
requestedByUserId: string | null;
decidedByUserId: string | null;
decidedAt: string | null;
createdAt: string;
}
export interface PluginIssueCostSummary {
costCents: number;
inputTokens: number;
cachedInputTokens: number;
outputTokens: number;
billingCode: string | null;
}
export interface PluginBudgetIncidentSummary {
id: string;
scopeType: string;
scopeId: string;
metric: string;
windowKind: string;
thresholdType: string;
amountLimit: number;
amountObserved: number;
status: string;
approvalId: string | null;
createdAt: string;
}
export interface PluginIssueInvocationBlockSummary {
issueId: string;
agentId: string;
scopeType: "company" | "agent" | "project";
scopeId: string;
scopeName: string;
reason: string;
}
export interface PluginIssueOrchestrationSummary {
issueId: string;
companyId: string;
subtreeIssueIds: string[];
relations: Record<string, PluginIssueRelationSummary>;
approvals: PluginIssueApprovalSummary[];
runs: PluginIssueRunSummary[];
costs: PluginIssueCostSummary;
openBudgetIncidents: PluginBudgetIncidentSummary[];
invocationBlocks: PluginIssueInvocationBlockSummary[];
}
export interface PluginIssueSubtreeOptions {
/** Include the root issue in the result. Defaults to true. */
includeRoot?: boolean;
/** Include blocker relationship summaries keyed by issue ID. */
includeRelations?: boolean;
/** Include issue document summaries keyed by issue ID. */
includeDocuments?: boolean;
/** Include queued/running heartbeat runs keyed by issue ID. */
includeActiveRuns?: boolean;
/** Include assignee summaries keyed by agent ID. */
includeAssignees?: boolean;
}
export interface PluginIssueAssigneeSummary {
id: string;
name: string;
role: string;
title: string | null;
status: Agent["status"];
}
export interface PluginIssueSubtree {
rootIssueId: string;
companyId: string;
issueIds: string[];
issues: Issue[];
relations?: Record<string, PluginIssueRelationSummary>;
documents?: Record<string, IssueDocumentSummary[]>;
activeRuns?: Record<string, PluginIssueRunSummary[]>;
assignees?: Record<string, PluginIssueAssigneeSummary>;
}
export interface PluginIssueSummariesClient {
/**
* Read the compact orchestration inputs a workflow plugin needs for an
* issue or issue subtree. Requires `issues.orchestration.read`.
*/
getOrchestration(input: {
issueId: string;
companyId: string;
includeSubtree?: boolean;
billingCode?: string | null;
}): Promise<PluginIssueOrchestrationSummary>;
}
/**
* `ctx.issues` — read and mutate issues plus comments.
*
@@ -874,6 +1073,9 @@ export interface PluginIssueDocumentsClient {
* - `issues.read` for read operations
* - `issues.create` for create
* - `issues.update` for update
* - `issues.checkout` for checkout ownership assertions
* - `issues.wakeup` for assignment wakeup requests
* - `issues.orchestration.read` for orchestration summaries
* - `issue.comments.read` for `listComments`
* - `issue.comments.create` for `createComment`
* - `issue.documents.read` for `documents.list` and `documents.get`
@@ -884,6 +1086,8 @@ export interface PluginIssuesClient {
companyId: string;
projectId?: string;
assigneeAgentId?: string;
originKind?: PluginIssueOriginKind;
originId?: string;
status?: Issue["status"];
limit?: number;
offset?: number;
@@ -897,17 +1101,80 @@ export interface PluginIssuesClient {
inheritExecutionWorkspaceFromIssueId?: string;
title: string;
description?: string;
status?: Issue["status"];
priority?: Issue["priority"];
assigneeAgentId?: string;
assigneeUserId?: string | null;
requestDepth?: number;
billingCode?: string | null;
originKind?: PluginIssueOriginKind;
originId?: string | null;
originRunId?: string | null;
blockedByIssueIds?: string[];
labelIds?: string[];
executionWorkspaceId?: string | null;
executionWorkspacePreference?: string | null;
executionWorkspaceSettings?: Record<string, unknown> | null;
actor?: PluginIssueMutationActor;
}): Promise<Issue>;
update(
issueId: string,
patch: Partial<Pick<
Issue,
"title" | "description" | "status" | "priority" | "assigneeAgentId"
>>,
| "title"
| "description"
| "status"
| "priority"
| "assigneeAgentId"
| "assigneeUserId"
| "billingCode"
| "originKind"
| "originId"
| "originRunId"
| "requestDepth"
| "executionWorkspaceId"
| "executionWorkspacePreference"
>> & {
blockedByIssueIds?: string[];
labelIds?: string[];
executionWorkspaceSettings?: Record<string, unknown> | null;
},
companyId: string,
actor?: PluginIssueMutationActor,
): Promise<Issue>;
assertCheckoutOwner(input: {
issueId: string;
companyId: string;
actorAgentId: string;
actorRunId: string;
}): Promise<PluginIssueCheckoutOwnership>;
/**
* Read a root issue's descendants with optional relation/document/run/assignee
* summaries. Requires `issue.subtree.read`.
*/
getSubtree(
issueId: string,
companyId: string,
options?: PluginIssueSubtreeOptions,
): Promise<PluginIssueSubtree>;
requestWakeup(
issueId: string,
companyId: string,
options?: {
reason?: string;
contextSource?: string;
idempotencyKey?: string | null;
} & PluginIssueMutationActor,
): Promise<PluginIssueWakeupResult>;
requestWakeups(
issueIds: string[],
companyId: string,
options?: {
reason?: string;
contextSource?: string;
idempotencyKeyPrefix?: string | null;
} & PluginIssueMutationActor,
): Promise<PluginIssueWakeupBatchResult[]>;
listComments(issueId: string, companyId: string): Promise<IssueComment[]>;
createComment(
issueId: string,
@@ -917,6 +1184,10 @@ export interface PluginIssuesClient {
): Promise<IssueComment>;
/** Read and write issue documents. Requires `issue.documents.read` / `issue.documents.write`. */
documents: PluginIssueDocumentsClient;
/** Read and write blocker relationships. */
relations: PluginIssueRelationsClient;
/** Read compact orchestration summaries. */
summaries: PluginIssueSummariesClient;
}
/**
@@ -1138,6 +1409,9 @@ export interface PluginContext {
/** Register launcher metadata that the host can surface in plugin UI entry points. */
launchers: PluginLaunchersClient;
/** Restricted plugin-owned database namespace. Requires database namespace capabilities. */
db: PluginDatabaseClient;
/** Make outbound HTTP requests. Requires `http.outbound`. */
http: PluginHttpClient;

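A hedged usage sketch of the `PluginDatabaseClient` shape declared in this file, exercised against an in-memory stand-in (the real client proxies `db.query` / `db.execute` to the host, and the host enforces the namespace restriction; the table name and stand-in behavior below are assumptions for illustration):

```typescript
// Local copy of the declared client shape.
interface PluginDatabaseClient {
  namespace: string;
  query<T = Record<string, unknown>>(sql: string, params?: unknown[]): Promise<T[]>;
  execute(sql: string, params?: unknown[]): Promise<{ rowCount: number }>;
}

// In-memory stand-in: execute() stores the first bound parameter,
// query() returns everything stored so far for any SELECT.
function memoryDb(namespace: string): PluginDatabaseClient {
  const rows: Record<string, unknown>[] = [];
  return {
    namespace,
    async query<T>(sql: string) {
      return (sql.trim().toUpperCase().startsWith("SELECT") ? rows : []) as T[];
    },
    async execute(_sql: string, params: unknown[] = []) {
      rows.push({ value: params[0] });
      return { rowCount: 1 };
    },
  };
}

const db = memoryDb("test_my_plugin");
// Plugins qualify table names with the host-derived namespace and pass
// values as bound parameters rather than interpolating them into SQL.
db.execute(`INSERT INTO ${db.namespace}.notes (value) VALUES ($1)`, ["hi"])
  .then(() => db.query(`SELECT * FROM ${db.namespace}.notes`))
  .then((out) => console.log(out.length)); // 1
```

Exposing `namespace` as a plain string (rather than resolving it implicitly) keeps the SQL a plugin ships auditable: every statement names the schema it touches.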
View File

@@ -42,6 +42,7 @@ import type { PaperclipPluginManifestV1 } from "@paperclipai/shared";
import type { PaperclipPlugin } from "./define-plugin.js";
import type {
PluginApiRequestInput,
PluginHealthDiagnostics,
PluginConfigValidationResult,
PluginWebhookInput,
@@ -250,6 +251,7 @@ export function startWorkerRpcHost(options: WorkerRpcHostOptions): WorkerRpcHost
let initialized = false;
let manifest: PaperclipPluginManifestV1 | null = null;
let currentConfig: Record<string, unknown> = {};
let databaseNamespace: string | null = null;
// Plugin handler registrations (populated during setup())
const eventHandlers: EventRegistration[] = [];
@@ -416,6 +418,18 @@ export function startWorkerRpcHost(options: WorkerRpcHostOptions): WorkerRpcHost
},
},
db: {
get namespace() {
return databaseNamespace ?? "";
},
async query<T = Record<string, unknown>>(sql: string, params?: unknown[]): Promise<T[]> {
return callHost("db.query", { sql, params }) as Promise<T[]>;
},
async execute(sql: string, params?: unknown[]) {
return callHost("db.execute", { sql, params });
},
},
http: {
async fetch(url: string, init?: RequestInit): Promise<Response> {
const serializedInit: Record<string, unknown> = {};
@@ -574,6 +588,8 @@ export function startWorkerRpcHost(options: WorkerRpcHostOptions): WorkerRpcHost
companyId: input.companyId,
projectId: input.projectId,
assigneeAgentId: input.assigneeAgentId,
originKind: input.originKind,
originId: input.originId,
status: input.status,
limit: input.limit,
offset: input.offset,
@@ -593,19 +609,81 @@ export function startWorkerRpcHost(options: WorkerRpcHostOptions): WorkerRpcHost
inheritExecutionWorkspaceFromIssueId: input.inheritExecutionWorkspaceFromIssueId,
title: input.title,
description: input.description,
status: input.status,
priority: input.priority,
assigneeAgentId: input.assigneeAgentId,
assigneeUserId: input.assigneeUserId,
requestDepth: input.requestDepth,
billingCode: input.billingCode,
originKind: input.originKind,
originId: input.originId,
originRunId: input.originRunId,
blockedByIssueIds: input.blockedByIssueIds,
labelIds: input.labelIds,
executionWorkspaceId: input.executionWorkspaceId,
executionWorkspacePreference: input.executionWorkspacePreference,
executionWorkspaceSettings: input.executionWorkspaceSettings,
actorAgentId: input.actor?.actorAgentId,
actorUserId: input.actor?.actorUserId,
actorRunId: input.actor?.actorRunId,
});
},
async update(issueId: string, patch, companyId: string) {
async update(issueId: string, patch, companyId: string, actor) {
return callHost("issues.update", {
issueId,
patch: patch as Record<string, unknown>,
patch: {
...(patch as Record<string, unknown>),
actorAgentId: actor?.actorAgentId,
actorUserId: actor?.actorUserId,
actorRunId: actor?.actorRunId,
},
companyId,
});
},
async assertCheckoutOwner(input) {
return callHost("issues.assertCheckoutOwner", input);
},
async getSubtree(issueId: string, companyId: string, options) {
return callHost("issues.getSubtree", {
issueId,
companyId,
includeRoot: options?.includeRoot,
includeRelations: options?.includeRelations,
includeDocuments: options?.includeDocuments,
includeActiveRuns: options?.includeActiveRuns,
includeAssignees: options?.includeAssignees,
});
},
async requestWakeup(issueId: string, companyId: string, options) {
return callHost("issues.requestWakeup", {
issueId,
companyId,
reason: options?.reason,
contextSource: options?.contextSource,
idempotencyKey: options?.idempotencyKey,
actorAgentId: options?.actorAgentId,
actorUserId: options?.actorUserId,
actorRunId: options?.actorRunId,
});
},
async requestWakeups(issueIds: string[], companyId: string, options) {
return callHost("issues.requestWakeups", {
issueIds,
companyId,
reason: options?.reason,
contextSource: options?.contextSource,
idempotencyKeyPrefix: options?.idempotencyKeyPrefix,
actorAgentId: options?.actorAgentId,
actorUserId: options?.actorUserId,
actorRunId: options?.actorRunId,
});
},
async listComments(issueId: string, companyId: string) {
return callHost("issues.listComments", { issueId, companyId });
},
@@ -639,6 +717,51 @@ export function startWorkerRpcHost(options: WorkerRpcHostOptions): WorkerRpcHost
return callHost("issues.documents.delete", { issueId, key, companyId });
},
},
relations: {
async get(issueId: string, companyId: string) {
return callHost("issues.relations.get", { issueId, companyId });
},
async setBlockedBy(issueId: string, blockedByIssueIds: string[], companyId: string, actor) {
return callHost("issues.relations.setBlockedBy", {
issueId,
companyId,
blockedByIssueIds,
actorAgentId: actor?.actorAgentId,
actorUserId: actor?.actorUserId,
actorRunId: actor?.actorRunId,
});
},
async addBlockers(issueId: string, blockerIssueIds: string[], companyId: string, actor) {
return callHost("issues.relations.addBlockers", {
issueId,
companyId,
blockerIssueIds,
actorAgentId: actor?.actorAgentId,
actorUserId: actor?.actorUserId,
actorRunId: actor?.actorRunId,
});
},
async removeBlockers(issueId: string, blockerIssueIds: string[], companyId: string, actor) {
return callHost("issues.relations.removeBlockers", {
issueId,
companyId,
blockerIssueIds,
actorAgentId: actor?.actorAgentId,
actorUserId: actor?.actorUserId,
actorRunId: actor?.actorRunId,
});
},
},
summaries: {
async getOrchestration(input) {
return callHost("issues.summaries.getOrchestration", input);
},
},
},
agents: {
@@ -879,6 +1002,9 @@ export function startWorkerRpcHost(options: WorkerRpcHostOptions): WorkerRpcHost
case "handleWebhook":
return handleWebhook(params as PluginWebhookInput);
case "handleApiRequest":
return handleApiRequest(params as PluginApiRequestInput);
case "getData":
return handleGetData(params as GetDataParams);
@@ -907,6 +1033,7 @@ export function startWorkerRpcHost(options: WorkerRpcHostOptions): WorkerRpcHost
manifest = params.manifest;
currentConfig = params.config;
databaseNamespace = params.databaseNamespace ?? null;
// Call the plugin's setup function
await plugin.definition.setup(ctx);
@@ -919,6 +1046,7 @@ export function startWorkerRpcHost(options: WorkerRpcHostOptions): WorkerRpcHost
if (plugin.definition.onConfigChanged) supportedMethods.push("configChanged");
if (plugin.definition.onHealth) supportedMethods.push("health");
if (plugin.definition.onShutdown) supportedMethods.push("shutdown");
if (plugin.definition.onApiRequest) supportedMethods.push("handleApiRequest");
return { ok: true, supportedMethods };
}
@@ -1020,6 +1148,16 @@ export function startWorkerRpcHost(options: WorkerRpcHostOptions): WorkerRpcHost
await plugin.definition.onWebhook(params);
}
async function handleApiRequest(params: PluginApiRequestInput): Promise<unknown> {
if (!plugin.definition.onApiRequest) {
throw Object.assign(
new Error("handleApiRequest is not implemented by this plugin"),
{ code: PLUGIN_RPC_ERROR_CODES.METHOD_NOT_IMPLEMENTED },
);
}
return plugin.definition.onApiRequest(params);
}
async function handleGetData(params: GetDataParams): Promise<unknown> {
const handler = dataHandlers.get(params.key);
if (!handler) {

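The `handleApiRequest` guard above shows the pattern used for optional plugin methods: instead of failing opaquely, the worker raises an error carrying a machine-readable code the RPC layer can route on. A minimal standalone sketch of that pattern (the `PLUGIN_RPC_ERROR_CODES` constant and `methodNotImplemented` helper here are illustrative stand-ins, not the host package's actual exports):

```typescript
// Standalone sketch of the "method not implemented" RPC error pattern.
// PLUGIN_RPC_ERROR_CODES is assumed here; the real constant lives in the host.
const PLUGIN_RPC_ERROR_CODES = {
  METHOD_NOT_IMPLEMENTED: "METHOD_NOT_IMPLEMENTED",
} as const;

function methodNotImplemented(method: string): Error & { code: string } {
  // Object.assign attaches a code property without subclassing Error,
  // so callers can branch on err.code rather than parsing the message.
  return Object.assign(
    new Error(`${method} is not implemented by this plugin`),
    { code: PLUGIN_RPC_ERROR_CODES.METHOD_NOT_IMPLEMENTED },
  );
}
```

The advantage over a bare `throw new Error(...)` is that the RPC host can map the code to a stable wire-level error instead of leaking message strings across the worker boundary.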

@@ -66,6 +66,8 @@ export const AGENT_ROLE_LABELS: Record<AgentRole, string> = {
general: "General",
};
export const AGENT_DEFAULT_MAX_CONCURRENT_RUNS = 5;
export const WORKSPACE_BRANCH_ROUTINE_VARIABLE = "workspaceBranch";
export const AGENT_ICON_NAMES = [
"bot",
"cpu",
@@ -136,11 +138,25 @@ export const ISSUE_PRIORITIES = ["critical", "high", "medium", "low"] as const;
export type IssuePriority = (typeof ISSUE_PRIORITIES)[number];
export const ISSUE_ORIGIN_KINDS = ["manual", "routine_execution"] as const;
export type IssueOriginKind = (typeof ISSUE_ORIGIN_KINDS)[number];
export type BuiltInIssueOriginKind = (typeof ISSUE_ORIGIN_KINDS)[number];
export type PluginIssueOriginKind = `plugin:${string}`;
export type IssueOriginKind = BuiltInIssueOriginKind | PluginIssueOriginKind;
export const ISSUE_RELATION_TYPES = ["blocks"] as const;
export type IssueRelationType = (typeof ISSUE_RELATION_TYPES)[number];
export const ISSUE_CONTINUATION_SUMMARY_DOCUMENT_KEY = "continuation-summary" as const;
export const SYSTEM_ISSUE_DOCUMENT_KEYS = [ISSUE_CONTINUATION_SUMMARY_DOCUMENT_KEY] as const;
export type SystemIssueDocumentKey = (typeof SYSTEM_ISSUE_DOCUMENT_KEYS)[number];
const SYSTEM_ISSUE_DOCUMENT_KEY_SET = new Set<string>(SYSTEM_ISSUE_DOCUMENT_KEYS);
export function isSystemIssueDocumentKey(key: string): key is SystemIssueDocumentKey {
return SYSTEM_ISSUE_DOCUMENT_KEY_SET.has(key);
}
export const ISSUE_REFERENCE_SOURCE_KINDS = ["title", "description", "comment", "document"] as const;
export type IssueReferenceSourceKind = (typeof ISSUE_REFERENCE_SOURCE_KINDS)[number];
export const ISSUE_EXECUTION_POLICY_MODES = ["normal", "auto"] as const;
export type IssueExecutionPolicyMode = (typeof ISSUE_EXECUTION_POLICY_MODES)[number];
@@ -335,6 +351,7 @@ export type WakeupRequestStatus = (typeof WAKEUP_REQUEST_STATUSES)[number];
export const HEARTBEAT_RUN_STATUSES = [
"queued",
"scheduled_retry",
"running",
"succeeded",
"failed",
@@ -343,6 +360,17 @@ export const HEARTBEAT_RUN_STATUSES = [
] as const;
export type HeartbeatRunStatus = (typeof HEARTBEAT_RUN_STATUSES)[number];
export const RUN_LIVENESS_STATES = [
"completed",
"advanced",
"plan_only",
"empty_response",
"blocked",
"failed",
"needs_followup",
] as const;
export type RunLivenessState = (typeof RUN_LIVENESS_STATES)[number];
export const LIVE_EVENT_TYPES = [
"heartbeat.run.queued",
"heartbeat.run.status",
@@ -359,9 +387,33 @@ export type LiveEventType = (typeof LIVE_EVENT_TYPES)[number];
export const PRINCIPAL_TYPES = ["user", "agent"] as const;
export type PrincipalType = (typeof PRINCIPAL_TYPES)[number];
export const MEMBERSHIP_STATUSES = ["pending", "active", "suspended"] as const;
export const MEMBERSHIP_STATUSES = ["pending", "active", "suspended", "archived"] as const;
export type MembershipStatus = (typeof MEMBERSHIP_STATUSES)[number];
export const COMPANY_MEMBERSHIP_ROLES = [
"owner",
"admin",
"operator",
"viewer",
"member",
] as const;
export type CompanyMembershipRole = (typeof COMPANY_MEMBERSHIP_ROLES)[number];
export const HUMAN_COMPANY_MEMBERSHIP_ROLES = [
"owner",
"admin",
"operator",
"viewer",
] as const;
export type HumanCompanyMembershipRole = (typeof HUMAN_COMPANY_MEMBERSHIP_ROLES)[number];
export const HUMAN_COMPANY_MEMBERSHIP_ROLE_LABELS: Record<HumanCompanyMembershipRole, string> = {
owner: "Owner",
admin: "Admin",
operator: "Operator",
viewer: "Viewer",
};
export const INSTANCE_USER_ROLES = ["instance_admin"] as const;
export type InstanceUserRole = (typeof INSTANCE_USER_ROLES)[number];
@@ -383,6 +435,7 @@ export const PERMISSION_KEYS = [
"users:manage_permissions",
"tasks:assign",
"tasks:assign_scope",
"tasks:manage_active_checkouts",
"joins:approve",
] as const;
export type PermissionKey = (typeof PERMISSION_KEYS)[number];
@@ -451,6 +504,8 @@ export const PLUGIN_CAPABILITIES = [
"projects.read",
"project.workspaces.read",
"issues.read",
"issue.relations.read",
"issue.subtree.read",
"issue.comments.read",
"issue.documents.read",
"agents.read",
@@ -459,9 +514,14 @@ export const PLUGIN_CAPABILITIES = [
"goals.update",
"activity.read",
"costs.read",
"issues.orchestration.read",
"database.namespace.read",
// Data Write
"issues.create",
"issues.update",
"issue.relations.write",
"issues.checkout",
"issues.wakeup",
"issue.comments.create",
"issue.documents.write",
"agents.pause",
@@ -474,6 +534,8 @@ export const PLUGIN_CAPABILITIES = [
"activity.log.write",
"metrics.write",
"telemetry.track",
"database.namespace.migrate",
"database.namespace.write",
// Plugin State
"plugin.state.read",
"plugin.state.write",
@@ -482,6 +544,7 @@ export const PLUGIN_CAPABILITIES = [
"events.emit",
"jobs.schedule",
"webhooks.receive",
"api.routes.register",
"http.outbound",
"secrets.read-ref",
// Agent Tools
@@ -497,6 +560,51 @@ export const PLUGIN_CAPABILITIES = [
] as const;
export type PluginCapability = (typeof PLUGIN_CAPABILITIES)[number];
export const PLUGIN_DATABASE_NAMESPACE_MODES = ["schema"] as const;
export type PluginDatabaseNamespaceMode = (typeof PLUGIN_DATABASE_NAMESPACE_MODES)[number];
export const PLUGIN_DATABASE_NAMESPACE_STATUSES = [
"active",
"migration_failed",
] as const;
export type PluginDatabaseNamespaceStatus = (typeof PLUGIN_DATABASE_NAMESPACE_STATUSES)[number];
export const PLUGIN_DATABASE_MIGRATION_STATUSES = [
"applied",
"failed",
] as const;
export type PluginDatabaseMigrationStatus = (typeof PLUGIN_DATABASE_MIGRATION_STATUSES)[number];
export const PLUGIN_DATABASE_CORE_READ_TABLES = [
"companies",
"projects",
"goals",
"agents",
"issues",
"issue_documents",
"issue_relations",
"issue_comments",
"heartbeat_runs",
"cost_events",
"approvals",
"issue_approvals",
"budget_incidents",
] as const;
export type PluginDatabaseCoreReadTable = (typeof PLUGIN_DATABASE_CORE_READ_TABLES)[number];
export const PLUGIN_API_ROUTE_METHODS = ["GET", "POST", "PATCH", "DELETE"] as const;
export type PluginApiRouteMethod = (typeof PLUGIN_API_ROUTE_METHODS)[number];
export const PLUGIN_API_ROUTE_AUTH_MODES = ["board", "agent", "board-or-agent", "webhook"] as const;
export type PluginApiRouteAuthMode = (typeof PLUGIN_API_ROUTE_AUTH_MODES)[number];
export const PLUGIN_API_ROUTE_CHECKOUT_POLICIES = [
"none",
"required-for-agent-in-progress",
"always-for-agent",
] as const;
export type PluginApiRouteCheckoutPolicy = (typeof PLUGIN_API_ROUTE_CHECKOUT_POLICIES)[number];
/**
* UI extension slot types. Each slot type corresponds to a mount point in the
* Paperclip UI where plugin components can be rendered.
@@ -695,6 +803,13 @@ export const PLUGIN_EVENT_TYPES = [
"issue.created",
"issue.updated",
"issue.comment.created",
"issue.document.created",
"issue.document.updated",
"issue.document.deleted",
"issue.relations.updated",
"issue.checked_out",
"issue.released",
"issue.assignment_wakeup_requested",
"agent.created",
"agent.updated",
"agent.status_changed",
@@ -706,6 +821,8 @@ export const PLUGIN_EVENT_TYPES = [
"goal.updated",
"approval.created",
"approval.decided",
"budget.incident.opened",
"budget.incident.resolved",
"cost_event.created",
"activity.logged",
] as const;

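The constants file above leans on one TypeScript idiom throughout: a readonly tuple declared with `as const`, with `(typeof X)[number]` deriving the matching string-union type. A short sketch of that pattern, using `PLUGIN_API_ROUTE_METHODS` from the diff (the runtime narrowing guard is an illustration, not part of the change):

```typescript
// The `as const` tuple is the single source of truth for both the
// runtime array and the derived union type.
const PLUGIN_API_ROUTE_METHODS = ["GET", "POST", "PATCH", "DELETE"] as const;
type PluginApiRouteMethod = (typeof PLUGIN_API_ROUTE_METHODS)[number];
// => "GET" | "POST" | "PATCH" | "DELETE"

// Hypothetical narrowing guard built on the same tuple: widening to
// readonly string[] lets includes() accept an arbitrary string.
function isPluginApiRouteMethod(value: string): value is PluginApiRouteMethod {
  return (PLUGIN_API_ROUTE_METHODS as readonly string[]).includes(value);
}
```

Keeping the tuple as the only declaration means adding a new method (say, `"PUT"`) updates the union type and every guard built on it in one place.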

@@ -9,6 +9,8 @@ export {
AGENT_ADAPTER_TYPES,
AGENT_ROLES,
AGENT_ROLE_LABELS,
AGENT_DEFAULT_MAX_CONCURRENT_RUNS,
WORKSPACE_BRANCH_ROUTINE_VARIABLE,
AGENT_ICON_NAMES,
ISSUE_STATUSES,
INBOX_MINE_ISSUE_STATUSES,
@@ -16,6 +18,10 @@ export {
ISSUE_PRIORITIES,
ISSUE_ORIGIN_KINDS,
ISSUE_RELATION_TYPES,
ISSUE_CONTINUATION_SUMMARY_DOCUMENT_KEY,
SYSTEM_ISSUE_DOCUMENT_KEYS,
isSystemIssueDocumentKey,
ISSUE_REFERENCE_SOURCE_KINDS,
ISSUE_EXECUTION_POLICY_MODES,
ISSUE_EXECUTION_STAGE_TYPES,
ISSUE_EXECUTION_STATE_STATUSES,
@@ -49,11 +55,15 @@ export {
BUDGET_INCIDENT_RESOLUTION_ACTIONS,
HEARTBEAT_INVOCATION_SOURCES,
HEARTBEAT_RUN_STATUSES,
RUN_LIVENESS_STATES,
WAKEUP_TRIGGER_DETAILS,
WAKEUP_REQUEST_STATUSES,
LIVE_EVENT_TYPES,
PRINCIPAL_TYPES,
MEMBERSHIP_STATUSES,
COMPANY_MEMBERSHIP_ROLES,
HUMAN_COMPANY_MEMBERSHIP_ROLES,
HUMAN_COMPANY_MEMBERSHIP_ROLE_LABELS,
INSTANCE_USER_ROLES,
INVITE_TYPES,
INVITE_JOIN_TYPES,
@@ -75,6 +85,13 @@ export {
PLUGIN_JOB_RUN_STATUSES,
PLUGIN_JOB_RUN_TRIGGERS,
PLUGIN_WEBHOOK_DELIVERY_STATUSES,
PLUGIN_DATABASE_NAMESPACE_MODES,
PLUGIN_DATABASE_NAMESPACE_STATUSES,
PLUGIN_DATABASE_MIGRATION_STATUSES,
PLUGIN_DATABASE_CORE_READ_TABLES,
PLUGIN_API_ROUTE_METHODS,
PLUGIN_API_ROUTE_AUTH_MODES,
PLUGIN_API_ROUTE_CHECKOUT_POLICIES,
PLUGIN_EVENT_TYPES,
PLUGIN_BRIDGE_ERROR_CODES,
type CompanyStatus,
@@ -88,8 +105,12 @@ export {
type AgentIconName,
type IssueStatus,
type IssuePriority,
type BuiltInIssueOriginKind,
type PluginIssueOriginKind,
type IssueOriginKind,
type IssueRelationType,
type SystemIssueDocumentKey,
type IssueReferenceSourceKind,
type IssueExecutionPolicyMode,
type IssueExecutionStageType,
type IssueExecutionStateStatus,
@@ -122,11 +143,14 @@ export {
type BudgetIncidentResolutionAction,
type HeartbeatInvocationSource,
type HeartbeatRunStatus,
type RunLivenessState,
type WakeupTriggerDetail,
type WakeupRequestStatus,
type LiveEventType,
type PrincipalType,
type MembershipStatus,
type CompanyMembershipRole,
type HumanCompanyMembershipRole,
type InstanceUserRole,
type InviteType,
type InviteJoinType,
@@ -147,6 +171,13 @@ export {
type PluginJobRunStatus,
type PluginJobRunTrigger,
type PluginWebhookDeliveryStatus,
type PluginDatabaseNamespaceMode,
type PluginDatabaseNamespaceStatus,
type PluginDatabaseMigrationStatus,
type PluginDatabaseCoreReadTable,
type PluginApiRouteMethod,
type PluginApiRouteAuthMode,
type PluginApiRouteCheckoutPolicy,
type PluginEventType,
type PluginBridgeErrorCode,
} from "./constants.js";
@@ -257,6 +288,9 @@ export type {
IssueWorkProductReviewState,
Issue,
IssueAssigneeAdapterOverrides,
IssueReferenceSource,
IssueRelatedWorkItem,
IssueRelatedWorkSummary,
IssueRelation,
IssueRelationIssueSummary,
IssueExecutionPolicy,
@@ -303,16 +337,35 @@ export type {
AgentWakeupRequest,
InstanceSchedulerHeartbeatAgent,
LiveEvent,
DashboardRunActivityDay,
DashboardSummary,
ActivityEvent,
UserProfileActivitySummary,
UserProfileAgentUsage,
UserProfileDailyPoint,
UserProfileIdentity,
UserProfileIssueSummary,
UserProfileProviderUsage,
UserProfileResponse,
UserProfileWindowStats,
SidebarBadges,
SidebarOrderPreference,
InboxDismissal,
AccessUserProfile,
CompanyMemberRecord,
CompanyMembersResponse,
CompanyMembership,
CompanyInviteListResponse,
CompanyInviteRecord,
PrincipalPermissionGrant,
Invite,
JoinRequest,
JoinRequestInviteSummary,
JoinRequestRecord,
InstanceUserRoleGrant,
AdminUserDirectoryEntry,
UserCompanyAccessEntry,
UserCompanyAccessResponse,
CompanyPortabilityInclude,
CompanyPortabilityEnvInput,
CompanyPortabilityFileEntry,
@@ -367,8 +420,13 @@ export type {
PluginLauncherDeclaration,
PluginMinimumHostVersion,
PluginUiDeclaration,
PluginDatabaseDeclaration,
PluginApiRouteCompanyResolution,
PluginApiRouteDeclaration,
PaperclipPluginManifestV1,
PluginRecord,
PluginDatabaseNamespaceRecord,
PluginMigrationRecord,
PluginStateRecord,
PluginConfig,
PluginEntityRecord,
@@ -379,6 +437,16 @@ export type {
QuotaWindow,
ProviderQuotaResult,
} from "./types/index.js";
export {
ISSUE_REFERENCE_IDENTIFIER_RE,
buildIssueReferenceHref,
extractIssueReferenceIdentifiers,
extractIssueReferenceMatches,
findIssueReferenceMatches,
normalizeIssueIdentifier,
parseIssueReferenceHref,
type IssueReferenceMatch,
} from "./issue-references.js";
export {
sidebarOrderPreferenceSchema,
@@ -479,6 +547,7 @@ export {
type UpdateProjectWorkspace,
projectExecutionWorkspacePolicySchema,
createIssueSchema,
createChildIssueSchema,
createIssueLabelSchema,
updateIssueSchema,
issueExecutionPolicySchema,
@@ -506,6 +575,7 @@ export {
upsertIssueDocumentSchema,
restoreIssueDocumentRevisionSchema,
type CreateIssue,
type CreateChildIssue,
type CreateIssueLabel,
type UpdateIssue,
type CheckoutIssue,
@@ -566,12 +636,20 @@ export {
createCompanyInviteSchema,
createOpenClawInvitePromptSchema,
acceptInviteSchema,
listCompanyInvitesQuerySchema,
listJoinRequestsQuerySchema,
claimJoinRequestApiKeySchema,
boardCliAuthAccessLevelSchema,
createCliAuthChallengeSchema,
resolveCliAuthChallengeSchema,
currentUserProfileSchema,
authSessionSchema,
updateCurrentUserProfileSchema,
updateCompanyMemberSchema,
updateCompanyMemberWithPermissionsSchema,
archiveCompanyMemberSchema,
updateMemberPermissionsSchema,
searchAdminUsersQuerySchema,
updateUserCompanyAccessSchema,
type CreateCostEvent,
type CreateFinanceEvent,
@@ -580,12 +658,20 @@ export {
type CreateCompanyInvite,
type CreateOpenClawInvitePrompt,
type AcceptInvite,
type ListCompanyInvitesQuery,
type ListJoinRequestsQuery,
type ClaimJoinRequestApiKey,
type BoardCliAuthAccessLevel,
type CreateCliAuthChallenge,
type ResolveCliAuthChallenge,
type CurrentUserProfile,
type AuthSession,
type UpdateCurrentUserProfile,
type UpdateCompanyMember,
type UpdateCompanyMemberWithPermissions,
type ArchiveCompanyMember,
type UpdateMemberPermissions,
type SearchAdminUsersQuery,
type UpdateUserCompanyAccess,
companySkillSourceTypeSchema,
companySkillTrustLevelSchema,
@@ -629,6 +715,8 @@ export {
pluginLauncherActionDeclarationSchema,
pluginLauncherRenderDeclarationSchema,
pluginLauncherDeclarationSchema,
pluginDatabaseDeclarationSchema,
pluginApiRouteDeclarationSchema,
pluginManifestV1Schema,
installPluginSchema,
upsertPluginConfigSchema,
@@ -645,6 +733,8 @@ export {
type PluginLauncherActionDeclarationInput,
type PluginLauncherRenderDeclarationInput,
type PluginLauncherDeclarationInput,
type PluginDatabaseDeclarationInput,
type PluginApiRouteDeclarationInput,
type PluginManifestV1Input,
type InstallPlugin,
type UpsertPluginConfig,
@@ -663,18 +753,23 @@ export {
AGENT_MENTION_SCHEME,
PROJECT_MENTION_SCHEME,
SKILL_MENTION_SCHEME,
USER_MENTION_SCHEME,
buildAgentMentionHref,
buildProjectMentionHref,
buildSkillMentionHref,
buildUserMentionHref,
extractAgentMentionIds,
extractProjectMentionIds,
extractSkillMentionIds,
extractUserMentionIds,
parseAgentMentionHref,
parseProjectMentionHref,
parseSkillMentionHref,
parseUserMentionHref,
type ParsedAgentMention,
type ParsedProjectMention,
type ParsedSkillMention,
type ParsedUserMention,
} from "./project-mentions.js";
export {


@@ -0,0 +1,68 @@
import { describe, expect, it } from "vitest";
import {
buildIssueReferenceHref,
extractIssueReferenceIdentifiers,
findIssueReferenceMatches,
normalizeIssueIdentifier,
parseIssueReferenceHref,
} from "./issue-references.js";
describe("issue references", () => {
it("normalizes identifiers to uppercase", () => {
expect(normalizeIssueIdentifier("pap-123")).toBe("PAP-123");
expect(normalizeIssueIdentifier("not-an-issue")).toBeNull();
});
it("parses relative and absolute issue hrefs", () => {
expect(parseIssueReferenceHref("/issues/PAP-123")).toEqual({ identifier: "PAP-123" });
expect(parseIssueReferenceHref("/PAP/issues/pap-456")).toEqual({ identifier: "PAP-456" });
expect(parseIssueReferenceHref("https://paperclip.ing/PAP/issues/pap-789#comment-1")).toEqual({
identifier: "PAP-789",
});
expect(parseIssueReferenceHref("https://paperclip.ing/projects/PAP-789")).toBeNull();
});
it("builds canonical issue hrefs", () => {
expect(buildIssueReferenceHref("pap-123")).toBe("/issues/PAP-123");
});
it("finds identifiers and issue paths in plain text", () => {
expect(findIssueReferenceMatches("See PAP-1, /issues/PAP-2, and https://x.test/PAP/issues/pap-3.")).toEqual([
{ index: 4, length: 5, identifier: "PAP-1", matchedText: "PAP-1" },
{ index: 11, length: 13, identifier: "PAP-2", matchedText: "/issues/PAP-2" },
{
index: 30,
length: 31,
identifier: "PAP-3",
matchedText: "https://x.test/PAP/issues/pap-3",
},
]);
});
it("trims unmatched square brackets from issue path tokens", () => {
expect(findIssueReferenceMatches("See /issues/PAP-123] for context.")).toEqual([
{ index: 4, length: 15, identifier: "PAP-123", matchedText: "/issues/PAP-123" },
]);
});
it("extracts and dedupes references from markdown", () => {
expect(extractIssueReferenceIdentifiers("PAP-1 [again](/issues/pap-1) PAP-2")).toEqual(["PAP-1", "PAP-2"]);
});
it("ignores inline code and fenced code blocks", () => {
const markdown = [
"Use PAP-1 here.",
"",
"`PAP-2` should not count.",
"",
"```md",
"PAP-3",
"/issues/PAP-4",
"```",
"",
"Final /issues/PAP-5 mention.",
].join("\n");
expect(extractIssueReferenceIdentifiers(markdown)).toEqual(["PAP-1", "PAP-5"]);
});
});


@@ -0,0 +1,188 @@
export const ISSUE_REFERENCE_IDENTIFIER_RE = /^[A-Z]+-\d+$/;
export interface IssueReferenceMatch {
index: number;
length: number;
identifier: string;
matchedText: string;
}
const ISSUE_REFERENCE_TOKEN_RE = /https?:\/\/[^\s<>()]+|\/[^\s<>()]+|[A-Z]+-\d+/gi;
function preserveNewlinesAsWhitespace(value: string) {
return value.replace(/[^\n]/g, " ");
}
function stripMarkdownCode(markdown: string): string {
if (!markdown) return "";
let output = "";
let index = 0;
while (index < markdown.length) {
const remaining = markdown.slice(index);
const fenceMatch = /^(?:```+|~~~+)/.exec(remaining);
const atLineStart = index === 0 || markdown[index - 1] === "\n";
if (atLineStart && fenceMatch) {
const fence = fenceMatch[0]!;
const blockStart = index;
index += fence.length;
while (index < markdown.length && markdown[index] !== "\n") index += 1;
if (index < markdown.length) index += 1;
while (index < markdown.length) {
const lineStart = index === 0 || markdown[index - 1] === "\n";
if (lineStart && markdown.startsWith(fence, index)) {
index += fence.length;
while (index < markdown.length && markdown[index] !== "\n") index += 1;
if (index < markdown.length) index += 1;
break;
}
index += 1;
}
output += preserveNewlinesAsWhitespace(markdown.slice(blockStart, index));
continue;
}
if (markdown[index] === "`") {
let tickCount = 1;
while (index + tickCount < markdown.length && markdown[index + tickCount] === "`") {
tickCount += 1;
}
const fence = "`".repeat(tickCount);
const inlineStart = index;
index += tickCount;
const closeIndex = markdown.indexOf(fence, index);
if (closeIndex === -1) {
output += markdown.slice(inlineStart, inlineStart + tickCount);
index = inlineStart + tickCount;
continue;
}
index = closeIndex + tickCount;
output += preserveNewlinesAsWhitespace(markdown.slice(inlineStart, index));
continue;
}
output += markdown[index]!;
index += 1;
}
return output;
}
function trimTrailingPunctuation(token: string): string {
let trimmed = token;
while (trimmed.length > 0) {
const last = trimmed[trimmed.length - 1]!;
if (!".,!?;:".includes(last) && last !== ")" && last !== "]") break;
if (
(last === ")" && (trimmed.match(/\(/g)?.length ?? 0) >= (trimmed.match(/\)/g)?.length ?? 0))
|| (last === "]" && (trimmed.match(/\[/g)?.length ?? 0) >= (trimmed.match(/\]/g)?.length ?? 0))
) {
break;
}
trimmed = trimmed.slice(0, -1);
}
return trimmed;
}
export function normalizeIssueIdentifier(value: string): string | null {
const trimmed = value.trim().toUpperCase();
return ISSUE_REFERENCE_IDENTIFIER_RE.test(trimmed) ? trimmed : null;
}
export function buildIssueReferenceHref(identifier: string): string {
const normalized = normalizeIssueIdentifier(identifier);
return `/issues/${normalized ?? identifier.trim()}`;
}
export function parseIssueReferenceHref(href: string): { identifier: string } | null {
const raw = href.trim();
if (!raw) return null;
let url: URL;
try {
url = raw.startsWith("/")
? new URL(raw, "https://paperclip.invalid")
: new URL(raw);
} catch {
return null;
}
const segments = url.pathname
.split("/")
.map((segment) => segment.trim())
.filter(Boolean);
for (let index = 0; index < segments.length - 1; index += 1) {
if (segments[index]?.toLowerCase() !== "issues") continue;
const identifier = normalizeIssueIdentifier(segments[index + 1] ?? "");
if (identifier) {
return { identifier };
}
}
return null;
}
export function findIssueReferenceMatches(text: string): IssueReferenceMatch[] {
if (!text) return [];
const matches: IssueReferenceMatch[] = [];
let match: RegExpExecArray | null;
const re = new RegExp(ISSUE_REFERENCE_TOKEN_RE);
while ((match = re.exec(text)) !== null) {
const rawToken = match[0];
const cleanedToken = trimTrailingPunctuation(rawToken);
if (!cleanedToken) continue;
const identifier =
normalizeIssueIdentifier(cleanedToken)
?? parseIssueReferenceHref(cleanedToken)?.identifier
?? null;
if (!identifier) continue;
const cleanedIndex = match.index;
matches.push({
index: cleanedIndex,
length: cleanedToken.length,
identifier,
matchedText: cleanedToken,
});
}
return matches;
}
export function extractIssueReferenceIdentifiers(markdown: string): string[] {
const scrubbed = stripMarkdownCode(markdown);
const seen = new Set<string>();
const ordered: string[] = [];
for (const match of findIssueReferenceMatches(scrubbed)) {
if (seen.has(match.identifier)) continue;
seen.add(match.identifier);
ordered.push(match.identifier);
}
return ordered;
}
export function extractIssueReferenceMatches(markdown: string): IssueReferenceMatch[] {
const scrubbed = stripMarkdownCode(markdown);
const seen = new Set<string>();
const ordered: IssueReferenceMatch[] = [];
for (const match of findIssueReferenceMatches(scrubbed)) {
if (seen.has(match.identifier)) continue;
seen.add(match.identifier);
ordered.push(match);
}
return ordered;
}

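The identifier helpers at the top of `issue-references` are small enough to restate whole. A self-contained copy of the two simplest ones, showing the normalize-then-build flow the rest of the module depends on:

```typescript
// Self-contained copy of the two smallest helpers above, for illustration.
const ISSUE_REFERENCE_IDENTIFIER_RE = /^[A-Z]+-\d+$/;

function normalizeIssueIdentifier(value: string): string | null {
  // Uppercase first so "pap-123" and "PAP-123" normalize identically.
  const trimmed = value.trim().toUpperCase();
  return ISSUE_REFERENCE_IDENTIFIER_RE.test(trimmed) ? trimmed : null;
}

function buildIssueReferenceHref(identifier: string): string {
  const normalized = normalizeIssueIdentifier(identifier);
  // Fall back to the trimmed raw input when normalization fails,
  // rather than emitting "/issues/null".
  return `/issues/${normalized ?? identifier.trim()}`;
}
```

Note the anchored regex: `^[A-Z]+-\d+$` rejects anything with extra segments, which is why `normalizeIssueIdentifier("not-an-issue")` returns `null` in the tests above.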

@@ -3,12 +3,15 @@ import {
buildAgentMentionHref,
buildProjectMentionHref,
buildSkillMentionHref,
buildUserMentionHref,
extractAgentMentionIds,
extractProjectMentionIds,
extractSkillMentionIds,
extractUserMentionIds,
parseAgentMentionHref,
parseProjectMentionHref,
parseSkillMentionHref,
parseUserMentionHref,
} from "./project-mentions.js";
describe("project-mentions", () => {
@@ -30,6 +33,14 @@ describe("project-mentions", () => {
expect(extractAgentMentionIds(`[@CodexCoder](${href})`)).toEqual(["agent-123"]);
});
it("round-trips user mentions", () => {
const href = buildUserMentionHref("user-123");
expect(parseUserMentionHref(href)).toEqual({
userId: "user-123",
});
expect(extractUserMentionIds(`[@Taylor](${href})`)).toEqual(["user-123"]);
});
it("round-trips skill mentions with slug metadata", () => {
const href = buildSkillMentionHref("skill-123", "release-changelog");
expect(parseSkillMentionHref(href)).toEqual({


@@ -1,5 +1,6 @@
export const PROJECT_MENTION_SCHEME = "project://";
export const AGENT_MENTION_SCHEME = "agent://";
export const USER_MENTION_SCHEME = "user://";
export const SKILL_MENTION_SCHEME = "skill://";
const HEX_COLOR_RE = /^[0-9a-f]{6}$/i;
@@ -8,6 +9,7 @@ const HEX_COLOR_WITH_HASH_RE = /^#[0-9a-f]{6}$/i;
const HEX_COLOR_SHORT_WITH_HASH_RE = /^#[0-9a-f]{3}$/i;
const PROJECT_MENTION_LINK_RE = /\[[^\]]*]\((project:\/\/[^)\s]+)\)/gi;
const AGENT_MENTION_LINK_RE = /\[[^\]]*]\((agent:\/\/[^)\s]+)\)/gi;
const USER_MENTION_LINK_RE = /\[[^\]]*]\((user:\/\/[^)\s]+)\)/gi;
const SKILL_MENTION_LINK_RE = /\[[^\]]*]\((skill:\/\/[^)\s]+)\)/gi;
const AGENT_ICON_NAME_RE = /^[a-z0-9-]+$/i;
const SKILL_SLUG_RE = /^[a-z0-9][a-z0-9-]*$/i;
@@ -22,6 +24,10 @@ export interface ParsedAgentMention {
icon: string | null;
}
export interface ParsedUserMention {
userId: string;
}
export interface ParsedSkillMention {
skillId: string;
slug: string | null;
@@ -111,6 +117,28 @@ export function parseAgentMentionHref(href: string): ParsedAgentMention | null {
};
}
export function buildUserMentionHref(userId: string): string {
return `${USER_MENTION_SCHEME}${userId.trim()}`;
}
export function parseUserMentionHref(href: string): ParsedUserMention | null {
if (!href.startsWith(USER_MENTION_SCHEME)) return null;
let url: URL;
try {
url = new URL(href);
} catch {
return null;
}
if (url.protocol !== "user:") return null;
const userId = `${url.hostname}${url.pathname}`.replace(/^\/+/, "").trim();
if (!userId) return null;
return { userId };
}
export function buildSkillMentionHref(skillId: string, slug?: string | null): string {
const trimmedSkillId = skillId.trim();
const normalizedSlug = normalizeSkillSlug(slug ?? null);
@@ -165,6 +193,18 @@ export function extractAgentMentionIds(markdown: string): string[] {
return [...ids];
}
export function extractUserMentionIds(markdown: string): string[] {
if (!markdown) return [];
const ids = new Set<string>();
const re = new RegExp(USER_MENTION_LINK_RE);
let match: RegExpExecArray | null;
while ((match = re.exec(markdown)) !== null) {
const parsed = parseUserMentionHref(match[1]);
if (parsed) ids.add(parsed.userId);
}
return [...ids];
}
export function extractSkillMentionIds(markdown: string): string[] {
if (!markdown) return [];
const ids = new Set<string>();

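The new user-mention helpers round-trip an id through a custom `user://` scheme. A self-contained copy of the pair, since the host/pathname handling for a non-special URL scheme is the subtle part:

```typescript
// Self-contained copy of the user-mention helpers above.
const USER_MENTION_SCHEME = "user://";

function buildUserMentionHref(userId: string): string {
  return `${USER_MENTION_SCHEME}${userId.trim()}`;
}

function parseUserMentionHref(href: string): { userId: string } | null {
  if (!href.startsWith(USER_MENTION_SCHEME)) return null;
  let url: URL;
  try {
    url = new URL(href);
  } catch {
    return null;
  }
  if (url.protocol !== "user:") return null;
  // For a non-special scheme the id lands in the host, with any extra
  // path segments appended; strip leading slashes before trimming.
  const userId = `${url.hostname}${url.pathname}`.replace(/^\/+/, "").trim();
  if (!userId) return null;
  return { userId };
}
```

Parsing via the WHATWG `URL` class (rather than string slicing) is what lets the function cheaply reject malformed hrefs: anything `new URL` cannot parse returns `null`.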

@@ -1,5 +1,7 @@
import type {
AgentAdapterType,
CompanyStatus,
HumanCompanyMembershipRole,
InstanceUserRole,
InviteJoinType,
InviteType,
@@ -33,6 +35,39 @@ export interface PrincipalPermissionGrant {
updatedAt: Date;
}
export interface AccessUserProfile {
id: string;
email: string | null;
name: string | null;
image: string | null;
}
export interface CompanyMemberRecord extends CompanyMembership {
principalType: "user";
membershipRole: HumanCompanyMembershipRole | null;
user: AccessUserProfile | null;
grants: PrincipalPermissionGrant[];
removal?: {
canArchive: boolean;
reason: string | null;
};
}
export interface CompanyMembersResponse {
members: CompanyMemberRecord[];
access: {
currentUserRole: HumanCompanyMembershipRole | null;
canManageMembers: boolean;
canInviteUsers: boolean;
canApproveJoinRequests: boolean;
};
}
export interface ArchiveCompanyMemberResponse {
member: CompanyMemberRecord;
reassignedIssueCount: number;
}
export interface Invite {
id: string;
companyId: string | null;
@@ -48,6 +83,22 @@ export interface Invite {
updatedAt: Date;
}
export type InviteState = "active" | "revoked" | "accepted" | "expired";
export interface CompanyInviteRecord extends Invite {
companyName: string | null;
humanRole: HumanCompanyMembershipRole | null;
inviteMessage: string | null;
state: InviteState;
invitedByUser: AccessUserProfile | null;
relatedJoinRequestId: string | null;
}
export interface CompanyInviteListResponse {
invites: CompanyInviteRecord[];
nextOffset: number | null;
}
export interface JoinRequest {
id: string;
inviteId: string;
@@ -72,6 +123,26 @@ export interface JoinRequest {
updatedAt: Date;
}
export interface JoinRequestInviteSummary {
id: string;
inviteType: InviteType;
allowedJoinTypes: InviteJoinType;
humanRole: HumanCompanyMembershipRole | null;
inviteMessage: string | null;
createdAt: Date;
expiresAt: Date;
revokedAt: Date | null;
acceptedAt: Date | null;
invitedByUser: AccessUserProfile | null;
}
export interface JoinRequestRecord extends JoinRequest {
requesterUser: AccessUserProfile | null;
approvedByUser: AccessUserProfile | null;
rejectedByUser: AccessUserProfile | null;
invite: JoinRequestInviteSummary | null;
}
export interface InstanceUserRoleGrant {
id: string;
userId: string;
@@ -79,3 +150,21 @@ export interface InstanceUserRoleGrant {
createdAt: Date;
updatedAt: Date;
}
export interface AdminUserDirectoryEntry extends AccessUserProfile {
isInstanceAdmin: boolean;
activeCompanyMembershipCount: number;
}
export interface UserCompanyAccessEntry extends CompanyMembership {
principalType: "user";
companyName: string | null;
companyStatus: CompanyStatus | null;
}
export interface UserCompanyAccessResponse {
user: (AccessUserProfile & {
isInstanceAdmin: boolean;
}) | null;
companyAccess: UserCompanyAccessEntry[];
}


@@ -1,7 +1,7 @@
export interface ActivityEvent {
id: string;
companyId: string;
actorType: "agent" | "user" | "system";
actorType: "agent" | "user" | "system" | "plugin";
actorId: string;
action: string;
entityType: string;


@@ -1,3 +1,11 @@
export interface DashboardRunActivityDay {
date: string;
succeeded: number;
failed: number;
other: number;
total: number;
}
export interface DashboardSummary {
companyId: string;
agents: {
@@ -24,4 +32,5 @@ export interface DashboardSummary {
pausedAgents: number;
pausedProjects: number;
};
runActivity: DashboardRunActivityDay[];
}

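`DashboardRunActivityDay` buckets each day's runs into succeeded, failed, and a catch-all `other`, with `total` covering all three. A hedged sketch of how such an entry might be aggregated from raw run statuses (the `toActivityDay` helper is hypothetical; only the interface comes from the diff):

```typescript
// Type as declared in the diff above.
interface DashboardRunActivityDay {
  date: string;
  succeeded: number;
  failed: number;
  other: number;
  total: number;
}

// Hypothetical aggregator: bucket raw heartbeat-run statuses into a day
// entry. "other" absorbs everything that is neither succeeded nor failed
// (queued, running, scheduled_retry, ...), so the three buckets sum to total.
function toActivityDay(date: string, statuses: string[]): DashboardRunActivityDay {
  const succeeded = statuses.filter((s) => s === "succeeded").length;
  const failed = statuses.filter((s) => s === "failed").length;
  return {
    date,
    succeeded,
    failed,
    other: statuses.length - succeeded - failed,
    total: statuses.length,
  };
}
```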

@@ -3,6 +3,7 @@ import type {
AgentStatus,
HeartbeatInvocationSource,
HeartbeatRunStatus,
RunLivenessState,
WakeupTriggerDetail,
WakeupRequestStatus,
} from "../constants.js";
@@ -38,6 +39,15 @@ export interface HeartbeatRun {
processStartedAt: Date | null;
retryOfRunId: string | null;
processLossRetryCount: number;
scheduledRetryAt?: Date | null;
scheduledRetryAttempt?: number;
scheduledRetryReason?: string | null;
retryExhaustedReason?: string | null;
livenessState: RunLivenessState | null;
livenessReason: string | null;
continuationAttempt: number;
lastUsefulActionAt: Date | null;
nextAction: string | null;
contextSnapshot: Record<string, unknown> | null;
createdAt: Date;
updatedAt: Date;

@@ -102,6 +102,9 @@ export type {
export type {
Issue,
IssueAssigneeAdapterOverrides,
IssueReferenceSource,
IssueRelatedWorkItem,
IssueRelatedWorkSummary,
IssueRelation,
IssueRelationIssueSummary,
IssueExecutionPolicy,
@@ -167,17 +170,38 @@ export type {
InstanceSchedulerHeartbeatAgent,
} from "./heartbeat.js";
export type { LiveEvent } from "./live.js";
export type { DashboardSummary } from "./dashboard.js";
export type { DashboardRunActivityDay, DashboardSummary } from "./dashboard.js";
export type { ActivityEvent } from "./activity.js";
export type {
UserProfileActivitySummary,
UserProfileAgentUsage,
UserProfileDailyPoint,
UserProfileIdentity,
UserProfileIssueSummary,
UserProfileProviderUsage,
UserProfileResponse,
UserProfileWindowStats,
} from "./user-profile.js";
export type { SidebarBadges } from "./sidebar-badges.js";
export type { SidebarOrderPreference } from "./sidebar-preferences.js";
export type { InboxDismissal } from "./inbox-dismissal.js";
export type {
AccessUserProfile,
CompanyMemberRecord,
CompanyMembersResponse,
ArchiveCompanyMemberResponse,
CompanyMembership,
CompanyInviteListResponse,
CompanyInviteRecord,
PrincipalPermissionGrant,
Invite,
JoinRequest,
JoinRequestInviteSummary,
JoinRequestRecord,
InstanceUserRoleGrant,
AdminUserDirectoryEntry,
UserCompanyAccessEntry,
UserCompanyAccessResponse,
} from "./access.js";
export type { QuotaWindow, ProviderQuotaResult } from "./quota.js";
export type {
@@ -223,8 +247,13 @@ export type {
PluginLauncherDeclaration,
PluginMinimumHostVersion,
PluginUiDeclaration,
PluginDatabaseDeclaration,
PluginApiRouteCompanyResolution,
PluginApiRouteDeclaration,
PaperclipPluginManifestV1,
PluginRecord,
PluginDatabaseNamespaceRecord,
PluginMigrationRecord,
PluginStateRecord,
PluginConfig,
PluginEntityRecord,
@@ -232,4 +261,8 @@ export type {
PluginJobRecord,
PluginJobRunRecord,
PluginWebhookDeliveryRecord,
PluginDatabaseCoreReadTable,
PluginDatabaseMigrationStatus,
PluginDatabaseNamespaceMode,
PluginDatabaseNamespaceStatus,
} from "./plugin.js";

@@ -1,6 +1,7 @@
import type {
IssueExecutionDecisionOutcome,
IssueExecutionPolicyMode,
IssueReferenceSourceKind,
IssueExecutionStageType,
IssueExecutionStateStatus,
IssueOriginKind,
@@ -123,6 +124,24 @@ export interface IssueRelation {
relatedIssue: IssueRelationIssueSummary;
}
export interface IssueReferenceSource {
kind: IssueReferenceSourceKind;
sourceRecordId: string | null;
label: string;
matchedText: string | null;
}
export interface IssueRelatedWorkItem {
issue: IssueRelationIssueSummary;
mentionCount: number;
sources: IssueReferenceSource[];
}
export interface IssueRelatedWorkSummary {
outbound: IssueRelatedWorkItem[];
inbound: IssueRelatedWorkItem[];
}
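One plausible way these types fit together: raw mention hits are grouped per referenced issue, so each `IssueRelatedWorkItem` carries its total `mentionCount` plus the individual `IssueReferenceSource` entries. A sketch under that assumption (shapes are trimmed, and the grouping function itself is invented for illustration):

```typescript
// Trimmed stand-ins for the real types; the actual interfaces carry more fields.
interface IssueReferenceSource {
  kind: string;
  sourceRecordId: string | null;
  label: string;
  matchedText: string | null;
}

interface IssueSummaryLike {
  id: string;
  identifier: string;
}

interface IssueRelatedWorkItem {
  issue: IssueSummaryLike;
  mentionCount: number;
  sources: IssueReferenceSource[];
}

// Group raw (issue, source) mention pairs into per-issue related-work items.
function groupRelatedWork(
  mentions: Array<{ issue: IssueSummaryLike; source: IssueReferenceSource }>,
): IssueRelatedWorkItem[] {
  const byIssue = new Map<string, IssueRelatedWorkItem>();
  for (const { issue, source } of mentions) {
    const item = byIssue.get(issue.id) ?? { issue, mentionCount: 0, sources: [] };
    item.mentionCount += 1;
    item.sources.push(source);
    byIssue.set(issue.id, item);
  }
  return [...byIssue.values()];
}
```

The `outbound`/`inbound` split in `IssueRelatedWorkSummary` would then be two such groupings, keyed by which side of the reference the current issue sits on.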
export interface IssueExecutionStagePrincipal {
type: "agent" | "user";
agentId?: string | null;
@@ -198,6 +217,7 @@ export interface Issue {
originKind?: IssueOriginKind;
originId?: string | null;
originRunId?: string | null;
originFingerprint?: string | null;
requestDepth: number;
billingCode: string | null;
assigneeAdapterOverrides: IssueAssigneeAdapterOverrides | null;
@@ -214,6 +234,8 @@ export interface Issue {
labels?: IssueLabel[];
blockedBy?: IssueRelationIssueSummary[];
blocks?: IssueRelationIssueSummary[];
relatedWork?: IssueRelatedWorkSummary;
referencedIssueIdentifiers?: string[];
planDocument?: IssueDocument | null;
documentSummaries?: IssueDocumentSummary[];
legacyPlanDocument?: LegacyPlanDocument | null;

@@ -9,6 +9,13 @@ import type {
PluginLauncherAction,
PluginLauncherBounds,
PluginLauncherRenderEnvironment,
PluginApiRouteAuthMode,
PluginApiRouteCheckoutPolicy,
PluginApiRouteMethod,
PluginDatabaseCoreReadTable,
PluginDatabaseMigrationStatus,
PluginDatabaseNamespaceMode,
PluginDatabaseNamespaceStatus,
} from "../constants.js";
// ---------------------------------------------------------------------------
@@ -21,6 +28,13 @@ import type {
*/
export type JsonSchema = Record<string, unknown>;
export type {
PluginDatabaseCoreReadTable,
PluginDatabaseMigrationStatus,
PluginDatabaseNamespaceMode,
PluginDatabaseNamespaceStatus,
} from "../constants.js";
// ---------------------------------------------------------------------------
// Manifest sub-types — nested declarations within PaperclipPluginManifestV1
// ---------------------------------------------------------------------------
@@ -190,6 +204,44 @@ export interface PluginUiDeclaration {
launchers?: PluginLauncherDeclaration[];
}
/**
* Declares restricted database access for trusted orchestration plugins.
*
* The host derives the final namespace from the plugin key and optional slug,
* applies SQL migrations before worker startup, and gates runtime SQL through
* the `database.namespace.*` capabilities.
*/
export interface PluginDatabaseDeclaration {
/** Optional stable human-readable slug included in the host-derived namespace. */
namespaceSlug?: string;
/** SQL migration directory relative to the plugin package root. */
migrationsDir: string;
/** Public core tables this plugin may read or join at runtime. */
coreReadTables?: PluginDatabaseCoreReadTable[];
}
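A manifest-side sketch of what such a declaration might look like. The slug, directory, and table name below are invented for illustration, and the local type aliases stand in for the real imports from the manifest typings:

```typescript
// Local stand-ins; the real PluginDatabaseCoreReadTable is a union from constants.js.
type PluginDatabaseCoreReadTable = string;

interface PluginDatabaseDeclaration {
  namespaceSlug?: string;
  migrationsDir: string;
  coreReadTables?: PluginDatabaseCoreReadTable[];
}

// Hypothetical declaration: slug and table names are examples, not real values.
const database: PluginDatabaseDeclaration = {
  namespaceSlug: "smoke-tests",   // folded by the host into the derived namespace name
  migrationsDir: "migrations",    // resolved relative to the plugin package root
  coreReadTables: ["issues"],     // public core tables readable at runtime
};
```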
export type PluginApiRouteCompanyResolution =
| { from: "body"; key: string }
| { from: "query"; key: string }
| { from: "issue"; param: string };
export interface PluginApiRouteDeclaration {
/** Stable plugin-defined route key passed to the worker. */
routeKey: string;
/** HTTP method accepted by this route. */
method: PluginApiRouteMethod;
/** Plugin-local path under `/api/plugins/:pluginId/api`, e.g. `/issues/:issueId/smoke`. */
path: string;
/** Actor class allowed to call the route. */
auth: PluginApiRouteAuthMode;
/** Capability required to expose the route. Currently `api.routes.register`. */
capability: "api.routes.register";
/** Optional checkout policy enforced by the host before worker dispatch. */
checkoutPolicy?: PluginApiRouteCheckoutPolicy;
/** How the host resolves company access for this route. */
companyResolution?: PluginApiRouteCompanyResolution;
}
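The `companyResolution` union is a discriminated union on `from`, so host-side handling narrows cleanly in a `switch`. A sketch of how a host might extract the company hint from an incoming request; the `RequestLike` shape and the lookup behavior are assumptions, not the actual host implementation:

```typescript
// Local copy of the union for a self-contained example.
type PluginApiRouteCompanyResolution =
  | { from: "body"; key: string }
  | { from: "query"; key: string }
  | { from: "issue"; param: string };

// Assumed minimal request shape for illustration.
interface RequestLike {
  body: Record<string, unknown>;
  query: Record<string, string>;
  params: Record<string, string>;
}

// Pull the raw company hint out of the request under each resolution mode.
function resolveCompanyHint(
  resolution: PluginApiRouteCompanyResolution,
  req: RequestLike,
): unknown {
  switch (resolution.from) {
    case "body":
      return req.body[resolution.key];
    case "query":
      return req.query[resolution.key];
    case "issue":
      // In the real host this param would be resolved to an issue,
      // and the issue's company would gate access; returned raw here.
      return req.params[resolution.param];
  }
}
```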
// ---------------------------------------------------------------------------
// Plugin Manifest V1
// ---------------------------------------------------------------------------
@@ -240,6 +292,10 @@ export interface PaperclipPluginManifestV1 {
webhooks?: PluginWebhookDeclaration[];
/** Agent tools this plugin contributes. Requires `agent.tools.register` capability. */
tools?: PluginToolDeclaration[];
/** Restricted plugin-owned database namespace declaration. */
database?: PluginDatabaseDeclaration;
/** Scoped JSON API routes mounted under `/api/plugins/:pluginId/api/*`. */
apiRoutes?: PluginApiRouteDeclaration[];
/**
* Legacy top-level launcher declarations.
* Prefer `ui.launchers` for new manifests.
@@ -286,6 +342,31 @@ export interface PluginRecord {
updatedAt: Date;
}
export interface PluginDatabaseNamespaceRecord {
id: string;
pluginId: string;
pluginKey: string;
namespaceName: string;
namespaceMode: PluginDatabaseNamespaceMode;
status: PluginDatabaseNamespaceStatus;
createdAt: Date;
updatedAt: Date;
}
export interface PluginMigrationRecord {
id: string;
pluginId: string;
pluginKey: string;
namespaceName: string;
migrationKey: string;
checksum: string;
pluginVersion: string;
status: PluginDatabaseMigrationStatus;
startedAt: Date;
appliedAt: Date | null;
errorMessage: string | null;
}
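Since each migration record stores both a `status` and a `checksum`, one natural host-side use is a startup guard: refuse to launch the worker if any migration is unapplied or if an applied migration's file was edited after the fact. This is a sketch under assumptions only; the status values and the guard itself are invented for illustration:

```typescript
// Trimmed record shape; the real PluginMigrationRecord has more fields.
// The status literals here are assumed values, not the documented union.
type MigrationStatusLike = "pending" | "applied" | "failed";

interface MigrationRecordLike {
  migrationKey: string;
  checksum: string;
  status: MigrationStatusLike;
}

// Hypothetical guard: all migrations applied, and stored checksums still
// match the current on-disk migration files.
function migrationsReady(
  records: MigrationRecordLike[],
  onDiskChecksums: Map<string, string>,
): boolean {
  for (const rec of records) {
    if (rec.status !== "applied") return false;
    const current = onDiskChecksums.get(rec.migrationKey);
    if (current !== undefined && current !== rec.checksum) return false;
  }
  return true;
}
```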
// ---------------------------------------------------------------------------
// Plugin State represents a row in the `plugin_state` table
// ---------------------------------------------------------------------------

@@ -1,10 +1,10 @@
import type { PauseReason, ProjectStatus } from "../constants.js";
import type { AgentEnvConfig } from "./secrets.js";
import type {
ProjectExecutionWorkspacePolicy,
ProjectWorkspaceRuntimeConfig,
WorkspaceRuntimeService,
} from "./workspace-runtime.js";
import type { AgentEnvConfig } from "./secrets.js";
export type ProjectWorkspaceSourceType = "local_path" | "git_repo" | "remote_managed" | "non_git_path";
export type ProjectWorkspaceVisibility = "default" | "advanced";

@@ -95,6 +95,7 @@ export interface RoutineRun {
triggeredAt: Date;
idempotencyKey: string | null;
triggerPayload: Record<string, unknown> | null;
dispatchFingerprint: string | null;
linkedIssueId: string | null;
coalescedIntoRunId: string | null;
failureReason: string | null;

Some files were not shown because too many files have changed in this diff.