Mirror of https://github.com/paperclipai/paperclip, synced 2026-04-25 17:25:15 +02:00
Compare commits: 177 commits on branch split/docs
.agents/skills/pr-report/SKILL.md (new file, 202 lines)
@@ -0,0 +1,202 @@
---
name: pr-report
description: >
  Review a pull request or contribution deeply, explain it tutorial-style for a
  maintainer, and produce a polished report artifact such as HTML or Markdown.
  Use when asked to analyze a PR, explain a contributor's design decisions,
  compare it with similar systems, or prepare a merge recommendation.
---

# PR Report Skill

Produce a maintainer-grade review of a PR, branch, or large contribution.

Default posture:

- understand the change before judging it
- explain the system as built, not just the diff
- separate architectural problems from product-scope objections
- make a concrete recommendation, not a vague impression

## When to Use

Use this skill when the user asks for things like:

- "review this PR deeply"
- "explain this contribution to me"
- "make me a report or webpage for this PR"
- "compare this design to similar systems"
- "should I merge this?"

## Outputs

Common outputs:

- standalone HTML report in `tmp/reports/...`
- Markdown report in `report/` or another requested folder
- short maintainer summary in chat

If the user asks for a webpage, build a polished standalone HTML artifact with
clear sections and readable visual hierarchy.

Resources bundled with this skill:

- `references/style-guide.md` for visual direction and report presentation rules
- `assets/html-report-starter.html` for a reusable standalone HTML/CSS starter

## Workflow

### 1. Acquire and frame the target

Work from local code when possible, not just the GitHub PR page.

Gather:

- target branch or worktree
- diff size and changed subsystems
- relevant repo docs, specs, and invariants
- contributor intent if it is documented in PR text or design docs

Start by answering: what is this change *trying* to become?

### 2. Build a mental model of the system

Do not stop at file-by-file notes. Reconstruct the design:

- what new runtime or contract exists
- which layers changed: db, shared types, server, UI, CLI, docs
- lifecycle: install, startup, execution, UI, failure, disablement
- trust boundary: what code runs where, under what authority

For large contributions, include a tutorial-style section that teaches the
system from first principles.

### 3. Review like a maintainer

Findings come first. Order by severity.

Prioritize:

- behavioral regressions
- trust or security gaps
- misleading abstractions
- lifecycle and operational risks
- coupling that will be hard to unwind
- missing tests or unverifiable claims

Always cite concrete file references when possible.

### 4. Distinguish the objection type

Be explicit about whether a concern is:

- product direction
- architecture
- implementation quality
- rollout strategy
- documentation honesty

Do not hide an architectural objection inside a scope objection.

### 5. Compare to external precedents when needed

If the contribution introduces a framework or platform concept, compare it to
similar open-source systems.

When comparing:

- prefer official docs or source
- focus on extension boundaries, context passing, trust model, and UI ownership
- extract lessons, not just similarities

Good comparison questions:

- Who owns lifecycle?
- Who owns UI composition?
- Is context explicit or ambient?
- Are plugins trusted code or sandboxed code?
- Are extension points named and typed?

### 6. Make the recommendation actionable

Do not stop at "merge" or "do not merge."

Choose one:

- merge as-is
- merge after specific redesign
- salvage specific pieces
- keep as design research

If rejecting or narrowing, say what should be kept.

Useful recommendation buckets:

- keep the protocol/type model
- redesign the UI boundary
- narrow the initial surface area
- defer third-party execution
- ship a host-owned extension-point model first

### 7. Build the artifact

Suggested report structure:

1. Executive summary
2. What the PR actually adds
3. Tutorial: how the system works
4. Strengths
5. Main findings
6. Comparisons
7. Recommendation

For HTML reports:

- use intentional typography and color
- make navigation easy for long reports
- favor strong section headings and small reference labels
- avoid generic dashboard styling

Before building from scratch, read `references/style-guide.md`.
If a fast polished starter is helpful, begin from `assets/html-report-starter.html`
and replace the placeholder content with the actual report.

### 8. Verify before handoff

Check:

- artifact path exists
- findings still match the actual code
- any requested forbidden strings are absent from generated output
- if tests were not run, say so explicitly

## Review Heuristics

### Plugin and platform work

Watch closely for:

- docs claiming sandboxing while runtime executes trusted host processes
- module-global state used to smuggle React context
- hidden dependence on render order
- plugins reaching into host internals instead of using explicit APIs
- "capabilities" that are really policy labels on top of fully trusted code

### Good signs

- typed contracts shared across layers
- explicit extension points
- host-owned lifecycle
- honest trust model
- narrow first rollout with room to grow

## Final Response

In chat, summarize:

- where the report is
- your overall call
- the top one or two reasons
- whether verification or tests were skipped

Keep the chat summary shorter than the report itself.
.agents/skills/pr-report/assets/html-report-starter.html (new file, 426 lines)
@@ -0,0 +1,426 @@
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <title>PR Report Starter</title>
    <link rel="preconnect" href="https://fonts.googleapis.com" />
    <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
    <link
      href="https://fonts.googleapis.com/css2?family=IBM+Plex+Sans:wght@400;500;600;700&family=Newsreader:opsz,wght@6..72,500;6..72,700&display=swap"
      rel="stylesheet"
    />
    <style>
      :root {
        --bg: #f4efe5;
        --paper: rgba(255, 251, 244, 0.88);
        --paper-strong: #fffaf1;
        --ink: #1f1b17;
        --muted: #6a6257;
        --line: rgba(31, 27, 23, 0.12);
        --accent: #9c4729;
        --accent-soft: rgba(156, 71, 41, 0.1);
        --good: #2f6a42;
        --warn: #946200;
        --bad: #8c2f25;
        --shadow: 0 22px 60px rgba(52, 37, 19, 0.1);
        --radius: 20px;
      }

      * {
        box-sizing: border-box;
      }

      html {
        scroll-behavior: smooth;
      }

      body {
        margin: 0;
        color: var(--ink);
        font-family: "IBM Plex Sans", sans-serif;
        background:
          radial-gradient(circle at top left, rgba(156, 71, 41, 0.12), transparent 34rem),
          radial-gradient(circle at top right, rgba(47, 106, 66, 0.08), transparent 28rem),
          linear-gradient(180deg, #efe6d6 0%, var(--bg) 48%, #ece5d8 100%);
      }

      .shell {
        width: min(1360px, calc(100vw - 32px));
        margin: 24px auto;
        display: grid;
        grid-template-columns: 280px minmax(0, 1fr);
        gap: 24px;
      }

      .panel {
        background: var(--paper);
        backdrop-filter: blur(12px);
        border: 1px solid var(--line);
        border-radius: var(--radius);
        box-shadow: var(--shadow);
      }

      .nav {
        position: sticky;
        top: 20px;
        align-self: start;
        padding: 22px;
      }

      .eyebrow {
        letter-spacing: 0.12em;
        text-transform: uppercase;
        font-size: 11px;
        font-weight: 700;
        color: var(--accent);
      }

      .nav h1,
      .hero h1,
      h2,
      h3 {
        font-family: "Newsreader", serif;
        line-height: 0.96;
        margin: 0;
      }

      .nav h1 {
        font-size: 2rem;
        margin-top: 10px;
      }

      .nav p {
        color: var(--muted);
        font-size: 0.95rem;
        line-height: 1.5;
      }

      .nav ul {
        list-style: none;
        padding: 0;
        margin: 18px 0 0;
        display: grid;
        gap: 10px;
      }

      .nav a {
        display: block;
        color: var(--ink);
        text-decoration: none;
        padding: 10px 12px;
        border-radius: 12px;
        border: 1px solid transparent;
        background: rgba(255, 255, 255, 0.35);
      }

      .nav a:hover {
        border-color: var(--line);
        background: rgba(255, 255, 255, 0.75);
      }

      .meta-block {
        margin-top: 20px;
        padding-top: 18px;
        border-top: 1px solid var(--line);
        color: var(--muted);
        font-size: 0.86rem;
        line-height: 1.5;
      }

      main {
        display: grid;
        gap: 24px;
      }

      section {
        padding: 26px 28px 28px;
      }

      .hero {
        padding: 28px;
        overflow: hidden;
        position: relative;
      }

      .hero::after {
        content: "";
        position: absolute;
        inset: auto -3rem -6rem auto;
        width: 18rem;
        height: 18rem;
        border-radius: 50%;
        background: radial-gradient(circle, rgba(156, 71, 41, 0.14), transparent 68%);
        pointer-events: none;
      }

      .hero h1 {
        font-size: clamp(2.6rem, 5vw, 4.6rem);
        max-width: 12ch;
        margin-top: 12px;
      }

      .lede {
        margin-top: 16px;
        max-width: 70ch;
        font-size: 1.05rem;
        line-height: 1.65;
        color: #2b2723;
      }

      .hero-grid,
      .card-grid,
      .two-col {
        display: grid;
        gap: 14px;
      }

      .hero-grid {
        margin-top: 24px;
        grid-template-columns: repeat(4, minmax(0, 1fr));
      }

      .card-grid {
        grid-template-columns: repeat(2, minmax(0, 1fr));
      }

      .two-col {
        grid-template-columns: repeat(2, minmax(0, 1fr));
      }

      .metric,
      .card,
      .finding {
        padding: 18px;
        background: rgba(255, 255, 255, 0.68);
        border: 1px solid var(--line);
        border-radius: 18px;
      }

      .metric .label {
        color: var(--muted);
        font-size: 0.82rem;
        text-transform: uppercase;
        letter-spacing: 0.08em;
      }

      .metric .value {
        margin-top: 8px;
        font-size: 1.45rem;
        font-weight: 700;
      }

      h2 {
        font-size: 2rem;
        margin-bottom: 16px;
      }

      h3 {
        font-size: 1.3rem;
        margin-bottom: 10px;
      }

      p {
        margin: 0 0 14px;
        line-height: 1.65;
      }

      ul,
      ol {
        margin: 0;
        padding-left: 20px;
        line-height: 1.65;
      }

      li + li {
        margin-top: 8px;
      }

      .badge-row {
        display: flex;
        flex-wrap: wrap;
        gap: 8px;
        margin: 18px 0 8px;
      }

      .badge {
        display: inline-flex;
        align-items: center;
        gap: 8px;
        padding: 8px 10px;
        border-radius: 999px;
        font-size: 0.82rem;
        font-weight: 700;
        border: 1px solid var(--line);
        background: rgba(255, 255, 255, 0.68);
      }

      .badge.good {
        color: var(--good);
      }

      .badge.warn {
        color: var(--warn);
      }

      .badge.bad {
        color: var(--bad);
      }

      .quote {
        margin-top: 18px;
        padding: 18px;
        border-left: 4px solid var(--accent);
        border-radius: 14px;
        background: var(--accent-soft);
      }

      .severity {
        display: inline-flex;
        margin-bottom: 12px;
        padding: 6px 10px;
        border-radius: 999px;
        font-size: 0.78rem;
        font-weight: 700;
        text-transform: uppercase;
        letter-spacing: 0.08em;
      }

      .severity.high {
        background: rgba(140, 47, 37, 0.12);
        color: var(--bad);
      }

      .severity.medium {
        background: rgba(148, 98, 0, 0.12);
        color: var(--warn);
      }

      .severity.low {
        background: rgba(47, 106, 66, 0.12);
        color: var(--good);
      }

      .ref {
        color: var(--muted);
        font-size: 0.82rem;
        line-height: 1.5;
      }

      @media (max-width: 980px) {
        .shell {
          grid-template-columns: 1fr;
        }

        .nav {
          position: static;
        }

        .hero-grid,
        .card-grid,
        .two-col {
          grid-template-columns: 1fr;
        }

        .hero h1 {
          max-width: 100%;
        }
      }
    </style>
  </head>
  <body>
    <div class="shell">
      <aside class="panel nav">
        <div class="eyebrow">Maintainer Report</div>
        <h1>Report Title</h1>
        <p>Replace this with a concise description of what the report covers.</p>
        <ul>
          <li><a href="#summary">Summary</a></li>
          <li><a href="#tutorial">Tutorial</a></li>
          <li><a href="#findings">Findings</a></li>
          <li><a href="#recommendation">Recommendation</a></li>
        </ul>
        <div class="meta-block">
          Replace with project metadata, review date, or scope notes.
        </div>
      </aside>

      <main>
        <section class="panel hero" id="summary">
          <div class="eyebrow">Executive Summary</div>
          <h1>Use the hero for the clearest one-line judgment.</h1>
          <p class="lede">
            Replace this with the short explanation of what the contribution does, why it matters,
            and what the core maintainer question is.
          </p>
          <div class="badge-row">
            <span class="badge good">Strength</span>
            <span class="badge warn">Tradeoff</span>
            <span class="badge bad">Risk</span>
          </div>
          <div class="hero-grid">
            <div class="metric">
              <div class="label">Overall Call</div>
              <div class="value">Placeholder</div>
            </div>
            <div class="metric">
              <div class="label">Main Concern</div>
              <div class="value">Placeholder</div>
            </div>
            <div class="metric">
              <div class="label">Best Part</div>
              <div class="value">Placeholder</div>
            </div>
            <div class="metric">
              <div class="label">Weakest Part</div>
              <div class="value">Placeholder</div>
            </div>
          </div>
          <div class="quote">
            Use this block for the thesis, a sharp takeaway, or a key cited point.
          </div>
        </section>

        <section class="panel" id="tutorial">
          <h2>Tutorial Section</h2>
          <div class="two-col">
            <div class="card">
              <h3>Concept Card</h3>
              <p>Use cards for mental models, subsystems, or comparison slices.</p>
              <div class="ref">path/to/file.ts:10</div>
            </div>
            <div class="card">
              <h3>Second Card</h3>
              <p>Keep cards fairly dense. This template is about style, not fixed structure.</p>
              <div class="ref">path/to/file.ts:20</div>
            </div>
          </div>
        </section>

        <section class="panel" id="findings">
          <h2>Findings</h2>
          <article class="finding">
            <div class="severity high">High</div>
            <h3>Finding Title</h3>
            <p>Use findings for the sharpest judgment calls and risks.</p>
            <div class="ref">path/to/file.ts:30</div>
          </article>
        </section>

        <section class="panel" id="recommendation">
          <h2>Recommendation</h2>
          <div class="card-grid">
            <div class="card">
              <h3>Path Forward</h3>
              <p>Use this area for merge guidance, salvage plan, or rollout advice.</p>
            </div>
            <div class="card">
              <h3>What To Keep</h3>
              <p>Call out the parts worth preserving even if the whole proposal should not land.</p>
            </div>
          </div>
        </section>
      </main>
    </div>
  </body>
</html>
.agents/skills/pr-report/references/style-guide.md (new file, 149 lines)
@@ -0,0 +1,149 @@
# PR Report Style Guide

Use this guide when the user wants a report artifact, especially a webpage.

## Goal

Make the report feel like an editorial review, not an internal admin dashboard.
The page should make a long technical argument easy to scan without looking
generic or overdesigned.

## Visual Direction

Preferred tone:

- editorial
- warm
- serious
- high-contrast
- handcrafted, not corporate SaaS

Avoid:

- default app-shell layouts
- purple gradients on white
- generic card dashboards
- cramped pages with weak hierarchy
- novelty fonts that hurt readability

## Typography

Recommended pattern:

- one expressive serif or display face for major headings
- one sturdy sans-serif for body copy and UI labels

Good combinations:

- Newsreader + IBM Plex Sans
- Source Serif 4 + Instrument Sans
- Fraunces + Public Sans
- Libre Baskerville + Work Sans

Rules:

- headings should feel deliberate and large
- body copy should stay comfortable for long reading
- reference labels and badges should use smaller dense sans text

## Layout

Recommended structure:

- a sticky side or top navigation for long reports
- one strong hero summary at the top
- panel or paper-like sections for each major topic
- multi-column card grids for comparisons and strengths
- single-column body text for findings and recommendations

Use generous spacing. Long-form technical reports need breathing room.

## Color

Prefer muted paper-like backgrounds with one warm accent and one cool counterweight.

Suggested token categories:

- `--bg`
- `--paper`
- `--ink`
- `--muted`
- `--line`
- `--accent`
- `--good`
- `--warn`
- `--bad`

The accent should highlight navigation, badges, and important labels. Do not
let accent colors dominate body text.

## Useful UI Elements

Include small reusable styles for:

- summary metrics
- badges
- quotes or callouts
- finding cards
- severity labels
- reference labels
- comparison cards
- responsive two-column sections

## Motion

Keep motion restrained.

Good:

- soft fade/slide-in on first load
- hover response on nav items or cards

Bad:

- constant animation
- floating blobs
- decorative motion with no reading benefit

## Content Presentation

Even when the user wants design polish, clarity stays primary.

Good structure for long reports:

1. executive summary
2. what changed
3. tutorial explanation
4. strengths
5. findings
6. comparisons
7. recommendation

The exact headings can change. The important thing is to separate explanation
from judgment.

## References

Reference labels should be visually quiet but easy to spot.

Good pattern:

- small muted text
- monospace or compact sans
- keep them close to the paragraph they support

## Starter Usage

If you need a fast polished base, start from:

- `assets/html-report-starter.html`

Customize:

- fonts
- color tokens
- hero copy
- section ordering
- card density

Do not preserve the placeholder sections if they do not fit the actual report.
@@ -33,7 +33,7 @@ Use this skill when leadership asks for:

 Before proceeding, verify all of the following:

-1. `skills/release-changelog/SKILL.md` exists and is usable.
+1. `.agents/skills/release-changelog/SKILL.md` exists and is usable.
 2. The repo working tree is clean, including untracked files.
 3. There are commits since the last stable tag.
 4. The release SHA has passed the verification gate or is about to.
@@ -1,5 +0,0 @@
----
-"@paperclipai/shared": minor
----
-
-Add support for Pi local adapter in constants and onboarding UI.
@@ -16,6 +16,7 @@ COPY packages/adapter-utils/package.json packages/adapter-utils/
 COPY packages/adapters/claude-local/package.json packages/adapters/claude-local/
 COPY packages/adapters/codex-local/package.json packages/adapters/codex-local/
 COPY packages/adapters/cursor-local/package.json packages/adapters/cursor-local/
+COPY packages/adapters/gemini-local/package.json packages/adapters/gemini-local/
 COPY packages/adapters/openclaw-gateway/package.json packages/adapters/openclaw-gateway/
 COPY packages/adapters/opencode-local/package.json packages/adapters/opencode-local/
 COPY packages/adapters/pi-local/package.json packages/adapters/pi-local/

@@ -248,8 +248,6 @@ See [doc/DEVELOPING.md](doc/DEVELOPING.md) for the full development guide.

 We welcome contributions. See the [contributing guide](CONTRIBUTING.md) for details.

-<!-- TODO: add CONTRIBUTING.md -->
-
 <br/>

 ## Community
@@ -1,5 +1,26 @@
 # paperclipai

+## 0.3.0
+
+### Minor Changes
+
+- Stable release preparation for 0.3.0
+
+### Patch Changes
+
+- Updated dependencies [6077ae6]
+- Updated dependencies
+  - @paperclipai/shared@0.3.0
+  - @paperclipai/adapter-utils@0.3.0
+  - @paperclipai/adapter-claude-local@0.3.0
+  - @paperclipai/adapter-codex-local@0.3.0
+  - @paperclipai/adapter-cursor-local@0.3.0
+  - @paperclipai/adapter-openclaw-gateway@0.3.0
+  - @paperclipai/adapter-opencode-local@0.3.0
+  - @paperclipai/adapter-pi-local@0.3.0
+  - @paperclipai/db@0.3.0
+  - @paperclipai/server@0.3.0
+
 ## 0.2.7

 ### Patch Changes
@@ -1,6 +1,6 @@
 {
   "name": "paperclipai",
-  "version": "0.2.7",
+  "version": "0.3.0",
   "description": "Paperclip CLI — orchestrate AI agent teams to run a business",
   "type": "module",
   "bin": {
@@ -37,6 +37,7 @@
     "@paperclipai/adapter-claude-local": "workspace:*",
     "@paperclipai/adapter-codex-local": "workspace:*",
     "@paperclipai/adapter-cursor-local": "workspace:*",
+    "@paperclipai/adapter-gemini-local": "workspace:*",
     "@paperclipai/adapter-opencode-local": "workspace:*",
     "@paperclipai/adapter-pi-local": "workspace:*",
     "@paperclipai/adapter-openclaw-gateway": "workspace:*",
@@ -47,6 +48,7 @@
    "drizzle-orm": "0.38.4",
    "dotenv": "^17.0.1",
    "commander": "^13.1.0",
    "embedded-postgres": "^18.1.0-beta.16",
    "picocolors": "^1.1.1"
  },
  "devDependencies": {

cli/src/__tests__/doctor.test.ts (new file, 99 lines)
@@ -0,0 +1,99 @@
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { afterEach, beforeEach, describe, expect, it } from "vitest";
import { doctor } from "../commands/doctor.js";
import { writeConfig } from "../config/store.js";
import type { PaperclipConfig } from "../config/schema.js";

const ORIGINAL_ENV = { ...process.env };

function createTempConfig(): string {
  const root = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-doctor-"));
  const configPath = path.join(root, ".paperclip", "config.json");
  const runtimeRoot = path.join(root, "runtime");

  const config: PaperclipConfig = {
    $meta: {
      version: 1,
      updatedAt: "2026-03-10T00:00:00.000Z",
      source: "configure",
    },
    database: {
      mode: "embedded-postgres",
      embeddedPostgresDataDir: path.join(runtimeRoot, "db"),
      embeddedPostgresPort: 55432,
      backup: {
        enabled: true,
        intervalMinutes: 60,
        retentionDays: 30,
        dir: path.join(runtimeRoot, "backups"),
      },
    },
    logging: {
      mode: "file",
      logDir: path.join(runtimeRoot, "logs"),
    },
    server: {
      deploymentMode: "local_trusted",
      exposure: "private",
      host: "127.0.0.1",
      port: 3199,
      allowedHostnames: [],
      serveUi: true,
    },
    auth: {
      baseUrlMode: "auto",
      disableSignUp: false,
    },
    storage: {
      provider: "local_disk",
      localDisk: {
        baseDir: path.join(runtimeRoot, "storage"),
      },
      s3: {
        bucket: "paperclip",
        region: "us-east-1",
        prefix: "",
        forcePathStyle: false,
      },
    },
    secrets: {
      provider: "local_encrypted",
      strictMode: false,
      localEncrypted: {
        keyFilePath: path.join(runtimeRoot, "secrets", "master.key"),
      },
    },
  };

  writeConfig(config, configPath);
  return configPath;
}

describe("doctor", () => {
  beforeEach(() => {
    process.env = { ...ORIGINAL_ENV };
    delete process.env.PAPERCLIP_AGENT_JWT_SECRET;
    delete process.env.PAPERCLIP_SECRETS_MASTER_KEY;
    delete process.env.PAPERCLIP_SECRETS_MASTER_KEY_FILE;
  });

  afterEach(() => {
    process.env = { ...ORIGINAL_ENV };
  });

  it("re-runs repairable checks so repaired failures do not remain blocking", async () => {
    const configPath = createTempConfig();

    const summary = await doctor({
      config: configPath,
      repair: true,
      yes: true,
    });

    expect(summary.failed).toBe(0);
    expect(summary.warned).toBe(0);
    expect(process.env.PAPERCLIP_AGENT_JWT_SECRET).toBeTruthy();
  });
});
cli/src/__tests__/worktree.test.ts (new file, 405 lines)
@@ -0,0 +1,405 @@
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { execFileSync } from "node:child_process";
import { afterEach, describe, expect, it, vi } from "vitest";
import {
  copyGitHooksToWorktreeGitDir,
  copySeededSecretsKey,
  rebindWorkspaceCwd,
  resolveGitWorktreeAddArgs,
  resolveWorktreeMakeTargetPath,
  worktreeInitCommand,
  worktreeMakeCommand,
} from "../commands/worktree.js";
import {
  buildWorktreeConfig,
  buildWorktreeEnvEntries,
  formatShellExports,
  resolveWorktreeSeedPlan,
  resolveWorktreeLocalPaths,
  rewriteLocalUrlPort,
  sanitizeWorktreeInstanceId,
} from "../commands/worktree-lib.js";
import type { PaperclipConfig } from "../config/schema.js";

const ORIGINAL_CWD = process.cwd();
const ORIGINAL_ENV = { ...process.env };

afterEach(() => {
  process.chdir(ORIGINAL_CWD);
  for (const key of Object.keys(process.env)) {
    if (!(key in ORIGINAL_ENV)) delete process.env[key];
  }
  for (const [key, value] of Object.entries(ORIGINAL_ENV)) {
    if (value === undefined) delete process.env[key];
    else process.env[key] = value;
  }
});

function buildSourceConfig(): PaperclipConfig {
  return {
    $meta: {
      version: 1,
      updatedAt: "2026-03-09T00:00:00.000Z",
      source: "configure",
    },
    database: {
      mode: "embedded-postgres",
      embeddedPostgresDataDir: "/tmp/main/db",
      embeddedPostgresPort: 54329,
      backup: {
        enabled: true,
        intervalMinutes: 60,
        retentionDays: 30,
        dir: "/tmp/main/backups",
      },
    },
    logging: {
      mode: "file",
      logDir: "/tmp/main/logs",
    },
    server: {
      deploymentMode: "authenticated",
      exposure: "private",
      host: "127.0.0.1",
      port: 3100,
      allowedHostnames: ["localhost"],
      serveUi: true,
    },
    auth: {
      baseUrlMode: "explicit",
      publicBaseUrl: "http://127.0.0.1:3100",
      disableSignUp: false,
    },
    storage: {
      provider: "local_disk",
      localDisk: {
        baseDir: "/tmp/main/storage",
      },
      s3: {
        bucket: "paperclip",
        region: "us-east-1",
        prefix: "",
        forcePathStyle: false,
      },
    },
    secrets: {
      provider: "local_encrypted",
      strictMode: false,
      localEncrypted: {
        keyFilePath: "/tmp/main/secrets/master.key",
      },
    },
  };
}

describe("worktree helpers", () => {
  it("sanitizes instance ids", () => {
    expect(sanitizeWorktreeInstanceId("feature/worktree-support")).toBe("feature-worktree-support");
    expect(sanitizeWorktreeInstanceId(" ")).toBe("worktree");
  });

  it("resolves worktree:make target paths under the user home directory", () => {
    expect(resolveWorktreeMakeTargetPath("paperclip-pr-432")).toBe(
      path.resolve(os.homedir(), "paperclip-pr-432"),
    );
  });

  it("rejects worktree:make names that are not safe directory/branch names", () => {
    expect(() => resolveWorktreeMakeTargetPath("paperclip/pr-432")).toThrow(
      "Worktree name must contain only letters, numbers, dots, underscores, or dashes.",
    );
  });

  it("builds git worktree add args for new and existing branches", () => {
    expect(
      resolveGitWorktreeAddArgs({
        branchName: "feature-branch",
        targetPath: "/tmp/feature-branch",
        branchExists: false,
      }),
    ).toEqual(["worktree", "add", "-b", "feature-branch", "/tmp/feature-branch", "HEAD"]);

    expect(
      resolveGitWorktreeAddArgs({
        branchName: "feature-branch",
        targetPath: "/tmp/feature-branch",
        branchExists: true,
      }),
    ).toEqual(["worktree", "add", "/tmp/feature-branch", "feature-branch"]);
  });

  it("builds git worktree add args with a start point", () => {
    expect(
      resolveGitWorktreeAddArgs({
        branchName: "my-worktree",
        targetPath: "/tmp/my-worktree",
        branchExists: false,
        startPoint: "public-gh/master",
      }),
    ).toEqual(["worktree", "add", "-b", "my-worktree", "/tmp/my-worktree", "public-gh/master"]);
  });

  it("uses start point even when a local branch with the same name exists", () => {
    expect(
      resolveGitWorktreeAddArgs({
        branchName: "my-worktree",
        targetPath: "/tmp/my-worktree",
        branchExists: true,
        startPoint: "origin/main",
      }),
    ).toEqual(["worktree", "add", "-b", "my-worktree", "/tmp/my-worktree", "origin/main"]);
  });

  it("rewrites loopback auth URLs to the new port only", () => {
    expect(rewriteLocalUrlPort("http://127.0.0.1:3100", 3110)).toBe("http://127.0.0.1:3110/");
    expect(rewriteLocalUrlPort("https://paperclip.example", 3110)).toBe("https://paperclip.example");
  });

  it("builds isolated config and env paths for a worktree", () => {
    const paths = resolveWorktreeLocalPaths({
      cwd: "/tmp/paperclip-feature",
      homeDir: "/tmp/paperclip-worktrees",
      instanceId: "feature-worktree-support",
    });
    const config = buildWorktreeConfig({
      sourceConfig: buildSourceConfig(),
      paths,
      serverPort: 3110,
      databasePort: 54339,
      now: new Date("2026-03-09T12:00:00.000Z"),
    });

    expect(config.database.embeddedPostgresDataDir).toBe(
      path.resolve("/tmp/paperclip-worktrees", "instances", "feature-worktree-support", "db"),
    );
    expect(config.database.embeddedPostgresPort).toBe(54339);
    expect(config.server.port).toBe(3110);
    expect(config.auth.publicBaseUrl).toBe("http://127.0.0.1:3110/");
    expect(config.storage.localDisk.baseDir).toBe(
      path.resolve("/tmp/paperclip-worktrees", "instances", "feature-worktree-support", "data", "storage"),
    );

    const env = buildWorktreeEnvEntries(paths);
    expect(env.PAPERCLIP_HOME).toBe(path.resolve("/tmp/paperclip-worktrees"));
    expect(env.PAPERCLIP_INSTANCE_ID).toBe("feature-worktree-support");
    expect(env.PAPERCLIP_IN_WORKTREE).toBe("true");
    expect(formatShellExports(env)).toContain("export PAPERCLIP_INSTANCE_ID='feature-worktree-support'");
  });

  it("uses minimal seed mode to keep app state but drop heavy runtime history", () => {
    const minimal = resolveWorktreeSeedPlan("minimal");
    const full = resolveWorktreeSeedPlan("full");

    expect(minimal.excludedTables).toContain("heartbeat_runs");
    expect(minimal.excludedTables).toContain("heartbeat_run_events");
    expect(minimal.excludedTables).toContain("workspace_runtime_services");
    expect(minimal.excludedTables).toContain("agent_task_sessions");
    expect(minimal.nullifyColumns.issues).toEqual(["checkout_run_id", "execution_run_id"]);

    expect(full.excludedTables).toEqual([]);
    expect(full.nullifyColumns).toEqual({});
  });

  it("copies the source local_encrypted secrets key into the seeded worktree instance", () => {
    const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-secrets-"));
    const originalInlineMasterKey = process.env.PAPERCLIP_SECRETS_MASTER_KEY;
    const originalKeyFile = process.env.PAPERCLIP_SECRETS_MASTER_KEY_FILE;
    try {
      delete process.env.PAPERCLIP_SECRETS_MASTER_KEY;
      delete process.env.PAPERCLIP_SECRETS_MASTER_KEY_FILE;
      const sourceConfigPath = path.join(tempRoot, "source", "config.json");
      const sourceKeyPath = path.join(tempRoot, "source", "secrets", "master.key");
      const targetKeyPath = path.join(tempRoot, "target", "secrets", "master.key");
      fs.mkdirSync(path.dirname(sourceKeyPath), { recursive: true });
      fs.writeFileSync(sourceKeyPath, "source-master-key", "utf8");

      const sourceConfig = buildSourceConfig();
      sourceConfig.secrets.localEncrypted.keyFilePath = sourceKeyPath;

      copySeededSecretsKey({
        sourceConfigPath,
        sourceConfig,
        sourceEnvEntries: {},
        targetKeyFilePath: targetKeyPath,
      });

      expect(fs.readFileSync(targetKeyPath, "utf8")).toBe("source-master-key");
    } finally {
      if (originalInlineMasterKey === undefined) {
        delete process.env.PAPERCLIP_SECRETS_MASTER_KEY;
      } else {
        process.env.PAPERCLIP_SECRETS_MASTER_KEY = originalInlineMasterKey;
      }
      if (originalKeyFile === undefined) {
        delete process.env.PAPERCLIP_SECRETS_MASTER_KEY_FILE;
      } else {
        process.env.PAPERCLIP_SECRETS_MASTER_KEY_FILE = originalKeyFile;
      }
      fs.rmSync(tempRoot, { recursive: true, force: true });
    }
  });

  it("writes the source inline secrets master key into the seeded worktree instance", () => {
    const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-secrets-"));
    try {
      const sourceConfigPath = path.join(tempRoot, "source", "config.json");
      const targetKeyPath = path.join(tempRoot, "target", "secrets", "master.key");

      copySeededSecretsKey({
        sourceConfigPath,
        sourceConfig: buildSourceConfig(),
        sourceEnvEntries: {
          PAPERCLIP_SECRETS_MASTER_KEY: "inline-source-master-key",
        },
        targetKeyFilePath: targetKeyPath,
      });

      expect(fs.readFileSync(targetKeyPath, "utf8")).toBe("inline-source-master-key");
    } finally {
      fs.rmSync(tempRoot, { recursive: true, force: true });
    }
  });

  it("persists the current agent jwt secret into the worktree env file", async () => {
    const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-jwt-"));
    const repoRoot = path.join(tempRoot, "repo");
    const originalCwd = process.cwd();
    const originalJwtSecret = process.env.PAPERCLIP_AGENT_JWT_SECRET;

    try {
      fs.mkdirSync(repoRoot, { recursive: true });
      process.env.PAPERCLIP_AGENT_JWT_SECRET = "worktree-shared-secret";
      process.chdir(repoRoot);

      await worktreeInitCommand({
        seed: false,
        fromConfig: path.join(tempRoot, "missing", "config.json"),
        home: path.join(tempRoot, ".paperclip-worktrees"),
      });

      const envPath = path.join(repoRoot, ".paperclip", ".env");
      expect(fs.readFileSync(envPath, "utf8")).toContain("PAPERCLIP_AGENT_JWT_SECRET=worktree-shared-secret");
    } finally {
      process.chdir(originalCwd);
      if (originalJwtSecret === undefined) {
        delete process.env.PAPERCLIP_AGENT_JWT_SECRET;
      } else {
        process.env.PAPERCLIP_AGENT_JWT_SECRET = originalJwtSecret;
      }
      fs.rmSync(tempRoot, { recursive: true, force: true });
    }
  });

  it("rebinds same-repo workspace paths onto the current worktree root", () => {
    expect(
      rebindWorkspaceCwd({
        sourceRepoRoot: "/Users/example/paperclip",
        targetRepoRoot: "/Users/example/paperclip-pr-432",
        workspaceCwd: "/Users/example/paperclip",
      }),
    ).toBe("/Users/example/paperclip-pr-432");

    expect(
      rebindWorkspaceCwd({
        sourceRepoRoot: "/Users/example/paperclip",
        targetRepoRoot: "/Users/example/paperclip-pr-432",
        workspaceCwd: "/Users/example/paperclip/packages/db",
      }),
    ).toBe("/Users/example/paperclip-pr-432/packages/db");
  });

  it("does not rebind paths outside the source repo root", () => {
    expect(
      rebindWorkspaceCwd({
        sourceRepoRoot: "/Users/example/paperclip",
        targetRepoRoot: "/Users/example/paperclip-pr-432",
        workspaceCwd: "/Users/example/other-project",
      }),
    ).toBeNull();
  });

  it("copies shared git hooks into a linked worktree git dir", () => {
    const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-hooks-"));
    const repoRoot = path.join(tempRoot, "repo");
    const worktreePath = path.join(tempRoot, "repo-feature");

    try {
      fs.mkdirSync(repoRoot, { recursive: true });
      execFileSync("git", ["init"], { cwd: repoRoot, stdio: "ignore" });
      execFileSync("git", ["config", "user.email", "test@example.com"], { cwd: repoRoot, stdio: "ignore" });
      execFileSync("git", ["config", "user.name", "Test User"], { cwd: repoRoot, stdio: "ignore" });
      fs.writeFileSync(path.join(repoRoot, "README.md"), "# temp\n", "utf8");
      execFileSync("git", ["add", "README.md"], { cwd: repoRoot, stdio: "ignore" });
      execFileSync("git", ["commit", "-m", "Initial commit"], { cwd: repoRoot, stdio: "ignore" });

      const sourceHooksDir = path.join(repoRoot, ".git", "hooks");
      const sourceHookPath = path.join(sourceHooksDir, "pre-commit");
      const sourceTokensPath = path.join(sourceHooksDir, "forbidden-tokens.txt");
      fs.writeFileSync(sourceHookPath, "#!/usr/bin/env bash\nexit 0\n", { encoding: "utf8", mode: 0o755 });
      fs.chmodSync(sourceHookPath, 0o755);
      fs.writeFileSync(sourceTokensPath, "secret-token\n", "utf8");

      execFileSync("git", ["worktree", "add", "--detach", worktreePath], { cwd: repoRoot, stdio: "ignore" });

      const copied = copyGitHooksToWorktreeGitDir(worktreePath);
      const worktreeGitDir = execFileSync("git", ["rev-parse", "--git-dir"], {
        cwd: worktreePath,
        encoding: "utf8",
        stdio: ["ignore", "pipe", "ignore"],
      }).trim();
      const resolvedSourceHooksDir = fs.realpathSync(sourceHooksDir);
      const resolvedTargetHooksDir = fs.realpathSync(path.resolve(worktreePath, worktreeGitDir, "hooks"));
      const targetHookPath = path.join(resolvedTargetHooksDir, "pre-commit");
      const targetTokensPath = path.join(resolvedTargetHooksDir, "forbidden-tokens.txt");

      expect(copied).toMatchObject({
        sourceHooksPath: resolvedSourceHooksDir,
        targetHooksPath: resolvedTargetHooksDir,
        copied: true,
      });
      expect(fs.readFileSync(targetHookPath, "utf8")).toBe("#!/usr/bin/env bash\nexit 0\n");
      expect(fs.statSync(targetHookPath).mode & 0o111).not.toBe(0);
      expect(fs.readFileSync(targetTokensPath, "utf8")).toBe("secret-token\n");
    } finally {
      execFileSync("git", ["worktree", "remove", "--force", worktreePath], { cwd: repoRoot, stdio: "ignore" });
      fs.rmSync(tempRoot, { recursive: true, force: true });
    }
  });

  it("creates and initializes a worktree from the top-level worktree:make command", async () => {
    const tempRoot = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-worktree-make-"));
    const repoRoot = path.join(tempRoot, "repo");
    const fakeHome = path.join(tempRoot, "home");
    const worktreePath = path.join(fakeHome, "paperclip-make-test");
    const originalCwd = process.cwd();
    const homedirSpy = vi.spyOn(os, "homedir").mockReturnValue(fakeHome);

    try {
      fs.mkdirSync(repoRoot, { recursive: true });
      fs.mkdirSync(fakeHome, { recursive: true });
      execFileSync("git", ["init"], { cwd: repoRoot, stdio: "ignore" });
      execFileSync("git", ["config", "user.email", "test@example.com"], { cwd: repoRoot, stdio: "ignore" });
      execFileSync("git", ["config", "user.name", "Test User"], { cwd: repoRoot, stdio: "ignore" });
      fs.writeFileSync(path.join(repoRoot, "README.md"), "# temp\n", "utf8");
      execFileSync("git", ["add", "README.md"], { cwd: repoRoot, stdio: "ignore" });
      execFileSync("git", ["commit", "-m", "Initial commit"], { cwd: repoRoot, stdio: "ignore" });

      process.chdir(repoRoot);

      await worktreeMakeCommand("paperclip-make-test", {
        seed: false,
        home: path.join(tempRoot, ".paperclip-worktrees"),
      });

      expect(fs.existsSync(path.join(worktreePath, ".git"))).toBe(true);
      expect(fs.existsSync(path.join(worktreePath, ".paperclip", "config.json"))).toBe(true);
      expect(fs.existsSync(path.join(worktreePath, ".paperclip", ".env"))).toBe(true);
    } finally {
      process.chdir(originalCwd);
      homedirSpy.mockRestore();
      fs.rmSync(tempRoot, { recursive: true, force: true });
    }
  }, 20_000);
});
@@ -2,6 +2,7 @@ import type { CLIAdapterModule } from "@paperclipai/adapter-utils";
 import { printClaudeStreamEvent } from "@paperclipai/adapter-claude-local/cli";
 import { printCodexStreamEvent } from "@paperclipai/adapter-codex-local/cli";
 import { printCursorStreamEvent } from "@paperclipai/adapter-cursor-local/cli";
+import { printGeminiStreamEvent } from "@paperclipai/adapter-gemini-local/cli";
 import { printOpenCodeStreamEvent } from "@paperclipai/adapter-opencode-local/cli";
 import { printPiStreamEvent } from "@paperclipai/adapter-pi-local/cli";
 import { printOpenClawGatewayStreamEvent } from "@paperclipai/adapter-openclaw-gateway/cli";

@@ -33,6 +34,11 @@ const cursorLocalCLIAdapter: CLIAdapterModule = {
   formatStdoutEvent: printCursorStreamEvent,
 };

+const geminiLocalCLIAdapter: CLIAdapterModule = {
+  type: "gemini_local",
+  formatStdoutEvent: printGeminiStreamEvent,
+};
+
 const openclawGatewayCLIAdapter: CLIAdapterModule = {
   type: "openclaw_gateway",
   formatStdoutEvent: printOpenClawGatewayStreamEvent,

@@ -45,6 +51,7 @@ const adaptersByType = new Map<string, CLIAdapterModule>(
     openCodeLocalCLIAdapter,
     piLocalCLIAdapter,
     cursorLocalCLIAdapter,
+    geminiLocalCLIAdapter,
     openclawGatewayCLIAdapter,
     processCLIAdapter,
     httpCLIAdapter,
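The three hunks above follow one pattern: a new adapter ships as a `CLIAdapterModule` with a `type` discriminator and a `formatStdoutEvent` printer, and becomes reachable only once it is listed in `adaptersByType`. A minimal dispatch sketch under stated assumptions: the map is keyed by each module's `type`, and `formatStdoutEvent` is a plain function property; the helper name and error message below are ours, not the repo's.

```ts
import type { CLIAdapterModule } from "@paperclipai/adapter-utils";

// Whatever event type formatStdoutEvent accepts (assumed to be a function property).
type StdoutEvent = Parameters<CLIAdapterModule["formatStdoutEvent"]>[0];

// Sketch only: look up the adapter registered for a runtime type and hand it
// one parsed stdout event; unknown types fail loudly instead of silently.
function formatStdoutEventFor(
  adaptersByType: Map<string, CLIAdapterModule>,
  runtimeType: string,
  event: StdoutEvent,
): void {
  const adapter = adaptersByType.get(runtimeType);
  if (!adapter) {
    throw new Error(`No CLI adapter registered for type "${runtimeType}"`);
  }
  adapter.formatStdoutEvent(event);
}
```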
@@ -26,6 +26,9 @@ export async function addAllowedHostname(host: string, opts: { config?: string }
     p.log.info(`Hostname ${pc.cyan(normalized)} is already allowed.`);
   } else {
     p.log.success(`Added allowed hostname: ${pc.cyan(normalized)}`);
+    p.log.message(
+      pc.dim("Restart the Paperclip server for this change to take effect."),
+    );
   }

   if (!(config.server.deploymentMode === "authenticated" && config.server.exposure === "private")) {
@@ -75,6 +75,11 @@ export async function bootstrapCeoInvite(opts: {
   }

   const db = createDb(dbUrl);
+  const closableDb = db as typeof db & {
+    $client?: {
+      end?: (options?: { timeout?: number }) => Promise<void>;
+    };
+  };
   try {
     const existingAdminCount = await db
       .select()

@@ -122,5 +127,7 @@ export async function bootstrapCeoInvite(opts: {
   } catch (err) {
     p.log.error(`Could not create bootstrap invite: ${err instanceof Error ? err.message : String(err)}`);
     p.log.info("If using embedded-postgres, start the Paperclip server and run this command again.");
+  } finally {
+    await closableDb.$client?.end?.({ timeout: 5 }).catch(() => undefined);
   }
 }
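The `closableDb` cast above is a small defensive pattern: the db handle is widened with an optional `$client.end()` so the underlying connection is closed when the driver exposes one, and the close path never throws. A standalone sketch of the same idea; the `Closable` type name and helper are ours, while the shape comes from the hunk above.

```ts
// Widen an opaque db handle so an underlying client's end() is awaited when
// present, skipped when absent, and swallowed on error either way.
type Closable = {
  $client?: {
    end?: (options?: { timeout?: number }) => Promise<void>;
  };
};

async function closeDbQuietly(db: unknown): Promise<void> {
  const closable = db as Closable;
  await closable.$client?.end?.({ timeout: 5 }).catch(() => undefined);
}
```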
@@ -66,28 +66,40 @@ export async function doctor(opts: {
   printResult(deploymentAuthResult);

   // 3. Agent JWT check
-  const jwtResult = agentJwtSecretCheck(opts.config);
-  results.push(jwtResult);
-  printResult(jwtResult);
-  await maybeRepair(jwtResult, opts);
+  results.push(
+    await runRepairableCheck({
+      run: () => agentJwtSecretCheck(opts.config),
+      configPath,
+      opts,
+    }),
+  );

   // 4. Secrets adapter check
-  const secretsResult = secretsCheck(config, configPath);
-  results.push(secretsResult);
-  printResult(secretsResult);
-  await maybeRepair(secretsResult, opts);
+  results.push(
+    await runRepairableCheck({
+      run: () => secretsCheck(config, configPath),
+      configPath,
+      opts,
+    }),
+  );

   // 5. Storage check
-  const storageResult = storageCheck(config, configPath);
-  results.push(storageResult);
-  printResult(storageResult);
-  await maybeRepair(storageResult, opts);
+  results.push(
+    await runRepairableCheck({
+      run: () => storageCheck(config, configPath),
+      configPath,
+      opts,
+    }),
+  );

   // 6. Database check
-  const dbResult = await databaseCheck(config, configPath);
-  results.push(dbResult);
-  printResult(dbResult);
-  await maybeRepair(dbResult, opts);
+  results.push(
+    await runRepairableCheck({
+      run: () => databaseCheck(config, configPath),
+      configPath,
+      opts,
+    }),
+  );

   // 7. LLM check
   const llmResult = await llmCheck(config);

@@ -95,10 +107,13 @@
   printResult(llmResult);

   // 8. Log directory check
-  const logResult = logCheck(config, configPath);
-  results.push(logResult);
-  printResult(logResult);
-  await maybeRepair(logResult, opts);
+  results.push(
+    await runRepairableCheck({
+      run: () => logCheck(config, configPath),
+      configPath,
+      opts,
+    }),
+  );

   // 9. Port check
   const portResult = await portCheck(config);

@@ -120,9 +135,9 @@ function printResult(result: CheckResult): void {
 async function maybeRepair(
   result: CheckResult,
   opts: { repair?: boolean; yes?: boolean },
-): Promise<void> {
-  if (result.status === "pass" || !result.canRepair || !result.repair) return;
-  if (!opts.repair) return;
+): Promise<boolean> {
+  if (result.status === "pass" || !result.canRepair || !result.repair) return false;
+  if (!opts.repair) return false;

   let shouldRepair = opts.yes;
   if (!shouldRepair) {

@@ -130,7 +145,7 @@ async function maybeRepair(
       message: `Repair "${result.name}"?`,
       initialValue: true,
     });
-    if (p.isCancel(answer)) return;
+    if (p.isCancel(answer)) return false;
     shouldRepair = answer;
   }

@@ -138,10 +153,30 @@ async function maybeRepair(
     try {
       await result.repair();
       p.log.success(`Repaired: ${result.name}`);
+      return true;
     } catch (err) {
       p.log.error(`Repair failed: ${err instanceof Error ? err.message : String(err)}`);
     }
   }
+  return false;
 }

+async function runRepairableCheck(input: {
+  run: () => CheckResult | Promise<CheckResult>;
+  configPath: string;
+  opts: { repair?: boolean; yes?: boolean };
+}): Promise<CheckResult> {
+  let result = await input.run();
+  printResult(result);
+
+  const repaired = await maybeRepair(result, input.opts);
+  if (!repaired) return result;
+
+  // Repairs may create/update the adjacent .env file or other local resources.
+  loadPaperclipEnvFile(input.configPath);
+  result = await input.run();
+  printResult(result);
+  return result;
+}
+
 function printSummary(results: CheckResult[]): { passed: number; warned: number; failed: number } {
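`runRepairableCheck` gives every repairable check the same lifecycle: run, print, optionally repair, reload the env file, then run and print again so a successful repair clears the failure from the summary. A hypothetical check showing the contract it expects; the `CheckResult` fields (`name`, `status`, `canRepair`, `repair`) are taken from the code above, while the directory check itself is invented for illustration.

```ts
import fs from "node:fs";

// Hypothetical repairable check. On the second run (after repair) the
// directory exists, so the check passes and no longer counts as a failure.
function logDirCheck(logDir: string) {
  const exists = fs.existsSync(logDir);
  return {
    name: "Log directory",
    status: exists ? ("pass" as const) : ("fail" as const),
    canRepair: !exists,
    repair: exists
      ? undefined
      : async () => {
          fs.mkdirSync(logDir, { recursive: true });
        },
  };
}
```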
cli/src/commands/worktree-lib.ts (new file, 218 lines)
@@ -0,0 +1,218 @@
|
||||
import path from "node:path";
import type { PaperclipConfig } from "../config/schema.js";
import { expandHomePrefix } from "../config/home.js";

export const DEFAULT_WORKTREE_HOME = "~/.paperclip-worktrees";
export const WORKTREE_SEED_MODES = ["minimal", "full"] as const;

export type WorktreeSeedMode = (typeof WORKTREE_SEED_MODES)[number];

export type WorktreeSeedPlan = {
  mode: WorktreeSeedMode;
  excludedTables: string[];
  nullifyColumns: Record<string, string[]>;
};

const MINIMAL_WORKTREE_EXCLUDED_TABLES = [
  "activity_log",
  "agent_runtime_state",
  "agent_task_sessions",
  "agent_wakeup_requests",
  "cost_events",
  "heartbeat_run_events",
  "heartbeat_runs",
  "workspace_runtime_services",
];

const MINIMAL_WORKTREE_NULLIFIED_COLUMNS: Record<string, string[]> = {
  issues: ["checkout_run_id", "execution_run_id"],
};

export type WorktreeLocalPaths = {
  cwd: string;
  repoConfigDir: string;
  configPath: string;
  envPath: string;
  homeDir: string;
  instanceId: string;
  instanceRoot: string;
  contextPath: string;
  embeddedPostgresDataDir: string;
  backupDir: string;
  logDir: string;
  secretsKeyFilePath: string;
  storageDir: string;
};

export function isWorktreeSeedMode(value: string): value is WorktreeSeedMode {
  return (WORKTREE_SEED_MODES as readonly string[]).includes(value);
}

export function resolveWorktreeSeedPlan(mode: WorktreeSeedMode): WorktreeSeedPlan {
  if (mode === "full") {
    return {
      mode,
      excludedTables: [],
      nullifyColumns: {},
    };
  }
  return {
    mode,
    excludedTables: [...MINIMAL_WORKTREE_EXCLUDED_TABLES],
    nullifyColumns: {
      ...MINIMAL_WORKTREE_NULLIFIED_COLUMNS,
    },
  };
}

function nonEmpty(value: string | null | undefined): string | null {
  return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}

function isLoopbackHost(hostname: string): boolean {
  const value = hostname.trim().toLowerCase();
  return value === "127.0.0.1" || value === "localhost" || value === "::1";
}

export function sanitizeWorktreeInstanceId(rawValue: string): string {
  const trimmed = rawValue.trim().toLowerCase();
  const normalized = trimmed
    .replace(/[^a-z0-9_-]+/g, "-")
    .replace(/-+/g, "-")
    .replace(/^[-_]+|[-_]+$/g, "");
  return normalized || "worktree";
}

export function resolveSuggestedWorktreeName(cwd: string, explicitName?: string): string {
  return nonEmpty(explicitName) ?? path.basename(path.resolve(cwd));
}

export function resolveWorktreeLocalPaths(opts: {
  cwd: string;
  homeDir?: string;
  instanceId: string;
}): WorktreeLocalPaths {
  const cwd = path.resolve(opts.cwd);
  const homeDir = path.resolve(expandHomePrefix(opts.homeDir ?? DEFAULT_WORKTREE_HOME));
  const instanceRoot = path.resolve(homeDir, "instances", opts.instanceId);
  const repoConfigDir = path.resolve(cwd, ".paperclip");
  return {
    cwd,
    repoConfigDir,
    configPath: path.resolve(repoConfigDir, "config.json"),
    envPath: path.resolve(repoConfigDir, ".env"),
    homeDir,
    instanceId: opts.instanceId,
    instanceRoot,
    contextPath: path.resolve(homeDir, "context.json"),
    embeddedPostgresDataDir: path.resolve(instanceRoot, "db"),
    backupDir: path.resolve(instanceRoot, "data", "backups"),
    logDir: path.resolve(instanceRoot, "logs"),
    secretsKeyFilePath: path.resolve(instanceRoot, "secrets", "master.key"),
    storageDir: path.resolve(instanceRoot, "data", "storage"),
  };
}

export function rewriteLocalUrlPort(rawUrl: string | undefined, port: number): string | undefined {
  if (!rawUrl) return undefined;
  try {
    const parsed = new URL(rawUrl);
    if (!isLoopbackHost(parsed.hostname)) return rawUrl;
    parsed.port = String(port);
    return parsed.toString();
  } catch {
    return rawUrl;
  }
}

export function buildWorktreeConfig(input: {
  sourceConfig: PaperclipConfig | null;
  paths: WorktreeLocalPaths;
  serverPort: number;
  databasePort: number;
  now?: Date;
}): PaperclipConfig {
  const { sourceConfig, paths, serverPort, databasePort } = input;
  const nowIso = (input.now ?? new Date()).toISOString();

  const source = sourceConfig;
  const authPublicBaseUrl = rewriteLocalUrlPort(source?.auth.publicBaseUrl, serverPort);

  return {
    $meta: {
      version: 1,
      updatedAt: nowIso,
      source: "configure",
    },
    ...(source?.llm ? { llm: source.llm } : {}),
    database: {
      mode: "embedded-postgres",
      embeddedPostgresDataDir: paths.embeddedPostgresDataDir,
      embeddedPostgresPort: databasePort,
      backup: {
        enabled: source?.database.backup.enabled ?? true,
        intervalMinutes: source?.database.backup.intervalMinutes ?? 60,
        retentionDays: source?.database.backup.retentionDays ?? 30,
        dir: paths.backupDir,
      },
    },
    logging: {
      mode: source?.logging.mode ?? "file",
      logDir: paths.logDir,
    },
    server: {
      deploymentMode: source?.server.deploymentMode ?? "local_trusted",
      exposure: source?.server.exposure ?? "private",
      host: source?.server.host ?? "127.0.0.1",
      port: serverPort,
      allowedHostnames: source?.server.allowedHostnames ?? [],
      serveUi: source?.server.serveUi ?? true,
    },
    auth: {
      baseUrlMode: source?.auth.baseUrlMode ?? "auto",
      ...(authPublicBaseUrl ? { publicBaseUrl: authPublicBaseUrl } : {}),
      disableSignUp: source?.auth.disableSignUp ?? false,
    },
    storage: {
      provider: source?.storage.provider ?? "local_disk",
      localDisk: {
        baseDir: paths.storageDir,
      },
      s3: {
        bucket: source?.storage.s3.bucket ?? "paperclip",
        region: source?.storage.s3.region ?? "us-east-1",
        endpoint: source?.storage.s3.endpoint,
        prefix: source?.storage.s3.prefix ?? "",
        forcePathStyle: source?.storage.s3.forcePathStyle ?? false,
      },
    },
    secrets: {
      provider: source?.secrets.provider ?? "local_encrypted",
      strictMode: source?.secrets.strictMode ?? false,
      localEncrypted: {
        keyFilePath: paths.secretsKeyFilePath,
      },
    },
  };
}

export function buildWorktreeEnvEntries(paths: WorktreeLocalPaths): Record<string, string> {
  return {
    PAPERCLIP_HOME: paths.homeDir,
    PAPERCLIP_INSTANCE_ID: paths.instanceId,
    PAPERCLIP_CONFIG: paths.configPath,
    PAPERCLIP_CONTEXT: paths.contextPath,
    PAPERCLIP_IN_WORKTREE: "true",
  };
}

function shellEscape(value: string): string {
  return `'${value.replaceAll("'", `'"'"'`)}'`;
}

export function formatShellExports(entries: Record<string, string>): string {
  return Object.entries(entries)
    .filter(([, value]) => typeof value === "string" && value.trim().length > 0)
    .map(([key, value]) => `export ${key}=${shellEscape(value)}`)
    .join("\n");
}
cli/src/commands/worktree.ts (new file, 1111 lines)
File diff suppressed because it is too large.
@@ -25,13 +25,17 @@ function parseEnvFile(contents: string) {
 function renderEnvFile(entries: Record<string, string>) {
   const lines = [
     "# Paperclip environment variables",
-    "# Generated by `paperclipai onboard`",
+    "# Generated by Paperclip CLI commands",
     ...Object.entries(entries).map(([key, value]) => `${key}=${value}`),
     "",
   ];
   return lines.join("\n");
 }

+export function resolvePaperclipEnvFile(configPath?: string): string {
+  return resolveEnvFilePath(configPath);
+}
+
 export function resolveAgentJwtEnvFile(configPath?: string): string {
   return resolveEnvFilePath(configPath);
 }
@@ -82,13 +86,33 @@ export function ensureAgentJwtSecret(configPath?: string): { secret: string; cre
 }

 export function writeAgentJwtEnv(secret: string, filePath = resolveEnvFilePath()): void {
+  mergePaperclipEnvEntries({ [JWT_SECRET_ENV_KEY]: secret }, filePath);
+}
+
+export function readPaperclipEnvEntries(filePath = resolveEnvFilePath()): Record<string, string> {
+  if (!fs.existsSync(filePath)) return {};
+  return parseEnvFile(fs.readFileSync(filePath, "utf-8"));
+}
+
+export function writePaperclipEnvEntries(entries: Record<string, string>, filePath = resolveEnvFilePath()): void {
   const dir = path.dirname(filePath);
   fs.mkdirSync(dir, { recursive: true });

-  const current = fs.existsSync(filePath) ? parseEnvFile(fs.readFileSync(filePath, "utf-8")) : {};
-  current[JWT_SECRET_ENV_KEY] = secret;
-
-  fs.writeFileSync(filePath, renderEnvFile(current), {
+  fs.writeFileSync(filePath, renderEnvFile(entries), {
     mode: 0o600,
   });
 }
+
+export function mergePaperclipEnvEntries(
+  entries: Record<string, string>,
+  filePath = resolveEnvFilePath(),
+): Record<string, string> {
+  const current = readPaperclipEnvEntries(filePath);
+  const next = {
+    ...current,
+    ...Object.fromEntries(
+      Object.entries(entries).filter(([, value]) => typeof value === "string" && value.trim().length > 0),
+    ),
+  };
+  writePaperclipEnvEntries(next, filePath);
+  return next;
+}
@@ -16,6 +16,8 @@ import { registerApprovalCommands } from "./commands/client/approval.js";
 import { registerActivityCommands } from "./commands/client/activity.js";
 import { registerDashboardCommands } from "./commands/client/dashboard.js";
 import { applyDataDirOverride, type DataDirOptionLike } from "./config/data-dir.js";
+import { loadPaperclipEnvFile } from "./config/env.js";
+import { registerWorktreeCommands } from "./commands/worktree.js";

 const program = new Command();
 const DATA_DIR_OPTION_HELP =
@@ -33,6 +35,7 @@ program.hook("preAction", (_thisCommand, actionCommand) => {
     hasConfigOption: optionNames.has("config"),
     hasContextOption: optionNames.has("context"),
   });
+  loadPaperclipEnvFile(options.config);
 });

 program
@@ -132,6 +135,7 @@ registerAgentCommands(program);
 registerApprovalCommands(program);
 registerActivityCommands(program);
 registerDashboardCommands(program);
+registerWorktreeCommands(program);

 const auth = program.command("auth").description("Authentication and bootstrap utilities");
@@ -162,4 +162,3 @@ export async function promptServer(opts?: {
     auth,
   };
 }
@@ -19,6 +19,14 @@ That's it. On first start the server:

Data persists across restarts in `~/.paperclip/instances/default/db/`. To reset local dev data, delete that directory.

If you need to apply pending migrations manually, run:

```sh
pnpm db:migrate
```

When `DATABASE_URL` is unset, this command targets the embedded PostgreSQL instance of your active Paperclip config/instance.

This mode is ideal for local development and one-command installs.

Docker note: the Docker quickstart image also uses embedded PostgreSQL by default. Persist `/paperclip` to keep DB state across container restarts (see `doc/DOCKER.md`).
@@ -124,6 +124,113 @@ When a local agent run has no resolved project/session workspace, Paperclip fall

This path honors `PAPERCLIP_HOME` and `PAPERCLIP_INSTANCE_ID` in non-default setups.

## Worktree-local Instances

When developing from multiple git worktrees, do not point two Paperclip servers at the same embedded PostgreSQL data directory.

Instead, create a repo-local Paperclip config plus an isolated instance for the worktree:

```sh
paperclipai worktree init
# or create the git worktree and initialize it in one step:
pnpm paperclipai worktree:make paperclip-pr-432
```

This command:

- writes repo-local files at `.paperclip/config.json` and `.paperclip/.env`
- creates an isolated instance under `~/.paperclip-worktrees/instances/<worktree-id>/`
- when run inside a linked git worktree, mirrors the effective git hooks into that worktree's private git dir
- picks a free app port and embedded PostgreSQL port
- by default, seeds the isolated DB in `minimal` mode from your main instance via a logical SQL snapshot

Seed modes:

- `minimal` keeps core app state (companies, projects, issues, comments, approvals, and auth state) and preserves the schema for all tables, but omits row data for heavy operational history such as heartbeat runs, wake requests, activity logs, runtime services, and agent session state
- `full` makes a full logical clone of the source instance
- `--no-seed` creates an empty isolated instance

After `worktree init`, both the server and the CLI auto-load the repo-local `.paperclip/.env` when run inside that worktree, so normal commands like `pnpm dev`, `paperclipai doctor`, and `paperclipai db:backup` stay scoped to the worktree instance.

That repo-local env also sets `PAPERCLIP_IN_WORKTREE=true`, which the server can use for worktree-specific UI behavior such as an alternate favicon.
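For reference, the generated `.paperclip/.env` holds only instance-scoping entries. A representative example follows; the values are illustrative, and the real ones come from your worktree's resolved instance paths:

```sh
# .paperclip/.env (illustrative values)
PAPERCLIP_HOME=/Users/you/.paperclip-worktrees
PAPERCLIP_INSTANCE_ID=paperclip-pr-432
PAPERCLIP_CONFIG=/Users/you/src/paperclip-pr-432/.paperclip/config.json
PAPERCLIP_CONTEXT=/Users/you/.paperclip-worktrees/context.json
PAPERCLIP_IN_WORKTREE=true
```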
Print shell exports explicitly when needed:

```sh
paperclipai worktree env
# or:
eval "$(paperclipai worktree env)"
```

### Worktree CLI Reference

**`pnpm paperclipai worktree init [options]`** — Create repo-local config/env and an isolated instance for the current worktree.

| Option | Description |
|---|---|
| `--name <name>` | Display name used to derive the instance id |
| `--instance <id>` | Explicit isolated instance id |
| `--home <path>` | Home root for worktree instances (default: `~/.paperclip-worktrees`) |
| `--from-config <path>` | Source config.json to seed from |
| `--from-data-dir <path>` | Source PAPERCLIP_HOME used when deriving the source config |
| `--from-instance <id>` | Source instance id (default: `default`) |
| `--server-port <port>` | Preferred server port |
| `--db-port <port>` | Preferred embedded Postgres port |
| `--seed-mode <mode>` | Seed profile: `minimal` or `full` (default: `minimal`) |
| `--no-seed` | Skip database seeding from the source instance |
| `--force` | Replace existing repo-local config and isolated instance data |

Examples:

```sh
paperclipai worktree init --no-seed
paperclipai worktree init --seed-mode full
paperclipai worktree init --from-instance default
paperclipai worktree init --from-data-dir ~/.paperclip
paperclipai worktree init --force
```

**`pnpm paperclipai worktree:make <name> [options]`** — Create `~/NAME` as a git worktree, then initialize an isolated Paperclip instance inside it. This combines `git worktree add` with `worktree init` in a single step.

| Option | Description |
|---|---|
| `--start-point <ref>` | Remote ref to base the new branch on (e.g. `origin/main`) |
| `--instance <id>` | Explicit isolated instance id |
| `--home <path>` | Home root for worktree instances (default: `~/.paperclip-worktrees`) |
| `--from-config <path>` | Source config.json to seed from |
| `--from-data-dir <path>` | Source PAPERCLIP_HOME used when deriving the source config |
| `--from-instance <id>` | Source instance id (default: `default`) |
| `--server-port <port>` | Preferred server port |
| `--db-port <port>` | Preferred embedded Postgres port |
| `--seed-mode <mode>` | Seed profile: `minimal` or `full` (default: `minimal`) |
| `--no-seed` | Skip database seeding from the source instance |
| `--force` | Replace existing repo-local config and isolated instance data |

Examples:

```sh
pnpm paperclipai worktree:make paperclip-pr-432
pnpm paperclipai worktree:make my-feature --start-point origin/main
pnpm paperclipai worktree:make experiment --no-seed
```

**`pnpm paperclipai worktree env [options]`** — Print shell exports for the current worktree-local Paperclip instance.

| Option | Description |
|---|---|
| `-c, --config <path>` | Path to config file |
| `--json` | Print JSON instead of shell exports |

Examples:

```sh
pnpm paperclipai worktree env
pnpm paperclipai worktree env --json
eval "$(pnpm paperclipai worktree env)"
```

For project execution worktrees, Paperclip can also run a project-defined provision command after it creates or reuses an isolated git worktree. Configure this on the project's execution workspace policy (`workspaceStrategy.provisionCommand`). The command runs inside the derived worktree and receives `PAPERCLIP_WORKSPACE_*`, `PAPERCLIP_PROJECT_ID`, `PAPERCLIP_AGENT_ID`, and `PAPERCLIP_ISSUE_*` environment variables so each repo can bootstrap itself however it wants.

## Quick Health Checks

In another terminal:
@@ -94,3 +94,53 @@ Canonical mode design and command expectations live in `doc/DEPLOYMENT-MODES.md`

## Further Detail

See [SPEC.md](./SPEC.md) for the full technical specification and [TASKS.md](./TASKS.md) for the task management data model.

---

Paperclip’s core identity is a **control plane for autonomous AI companies**, centered on **companies, org charts, goals, issues/comments, heartbeats, budgets, approvals, and board governance**. The public docs are also explicit about the current boundaries: **tasks/comments are the built-in communication model**, Paperclip is **not a chatbot**, and it is **not a code review tool**. The roadmap already points toward **easier onboarding, cloud agents, easier agent configuration, plugins, better docs, and ClipMart/ClipHub-style reusable companies/templates**.

## What Paperclip should do vs. not do

**Do**

- Stay **board-level and company-level**. Users should manage goals, orgs, budgets, approvals, and outputs.
- Make the first five minutes feel magical: install, answer a few questions, see a CEO do something real.
- Keep work anchored to **issues/comments/projects/goals**, even if the surface feels conversational.
- Treat **agency / internal team / startup** as the same underlying abstraction with different templates and labels.
- Make outputs first-class: files, docs, reports, previews, links, screenshots.
- Provide **hooks into engineering workflows**: worktrees, preview servers, PR links, external review tools.
- Use **plugins** for edge cases like rich chat, knowledge bases, doc editors, custom tracing.

**Do not**

- Do not make the core product a general chat app. The current product definition is explicitly task/comment-centric and “not a chatbot,” and that boundary is valuable.
- Do not build a complete Jira/GitHub replacement. The repo/docs already position Paperclip as organization orchestration, not pull-request review.
- Do not build enterprise-grade RBAC first. The current V1 spec still treats multi-board governance and fine-grained human permissions as out of scope, so the first multi-user version should be coarse and company-scoped.
- Do not lead with raw bash logs and transcripts. The default view should be human-readable intent/progress, with raw detail beneath.
- Do not force users to understand provider/API-key plumbing unless absolutely necessary. There are active onboarding/auth issues already; the friction here is clearly real.

## Specific design goals

1. **Time-to-first-success under 5 minutes**
   A fresh user should go from install to “my CEO completed a first task” in one sitting.

2. **Board-level abstraction always wins**
   The default UI should answer: what is the company doing, who is doing it, why does it matter, what did it cost, and what needs my approval.

3. **Conversation stays attached to work objects**
   “Chat with CEO” should still resolve to strategy threads, decisions, tasks, or approvals.

4. **Progressive disclosure**
   Top layer: human-readable summary. Middle layer: checklist/steps/artifacts. Bottom layer: raw logs/tool calls/transcript.

5. **Output-first**
   Work is not done until the user can see the result: file, document, preview link, screenshot, plan, or PR.

6. **Local-first, cloud-ready**
   The mental model should not change between local solo use and shared/private or public/cloud deployment.

7. **Safe autonomy**
   Auto mode is allowed; hidden token burn is not.

8. **Thin core, rich edges**
   Put optional chat, knowledge, and special surfaces into plugins/extensions rather than bloating the control plane.
@@ -58,7 +58,7 @@ From the release worktree:

```bash
VERSION=X.Y.Z
-claude --print --output-format stream-json --verbose --dangerously-skip-permissions --model claude-opus-4-6 "Use the release-changelog skill to draft or update releases/v${VERSION}.md for Paperclip. Read doc/RELEASING.md and skills/release-changelog/SKILL.md, then generate the stable changelog for v${VERSION} from commits since the last stable tag. Do not create a canary changelog."
+claude --print --output-format stream-json --verbose --dangerously-skip-permissions --model claude-opus-4-6 "Use the release-changelog skill to draft or update releases/v${VERSION}.md for Paperclip. Read doc/RELEASING.md and .agents/skills/release-changelog/SKILL.md, then generate the stable changelog for v${VERSION} from commits since the last stable tag. Do not create a canary changelog."
```

### 3. Verify and publish a canary
@@ -418,5 +418,5 @@ If the release already exists, the script updates it.

## Related Docs

- [doc/PUBLISHING.md](PUBLISHING.md) — low-level npm build and packaging internals
-- [skills/release/SKILL.md](../skills/release/SKILL.md) — agent release coordination workflow
-- [skills/release-changelog/SKILL.md](../skills/release-changelog/SKILL.md) — stable changelog drafting workflow
+- [.agents/skills/release/SKILL.md](../.agents/skills/release/SKILL.md) — maintainer release coordination workflow
+- [.agents/skills/release-changelog/SKILL.md](../.agents/skills/release-changelog/SKILL.md) — stable changelog drafting workflow
@@ -37,7 +37,7 @@ These decisions close open questions from `SPEC.md` for V1.

| Visibility | Full visibility to board and all agents in same company |
| Communication | Tasks + comments only (no separate chat system) |
| Task ownership | Single assignee; atomic checkout required for `in_progress` transition |
-| Recovery | No automatic reassignment; stale work is surfaced, not silently fixed |
+| Recovery | No automatic reassignment; work recovery stays manual/explicit |
| Agent adapters | Built-in `process` and `http` adapters |
| Auth | Mode-dependent human auth (`local_trusted` implicit board in current code; authenticated mode uses sessions), API keys for agents |
| Budget period | Monthly UTC calendar window |
@@ -106,7 +106,6 @@ A lightweight scheduler/worker in the server process handles:

- heartbeat trigger checks
- stuck run detection
- budget threshold checks
-- stale task reporting generation

Separate queue infrastructure is not required for V1.

@@ -502,7 +501,6 @@ Dashboard payload must include:

- open/in-progress/blocked/done issue counts
- month-to-date spend and budget utilization
- pending approvals count
-- stale task count

## 10.9 Error Semantics

@@ -681,7 +679,6 @@ Required UX behaviors:

- global company selector
- quick actions: pause/resume agent, create task, approve/reject request
- conflict toasts on atomic checkout failure
-- clear stale-task indicators
- no silent background failures; every failed run visible in UI

## 15. Operational Requirements

@@ -780,7 +777,6 @@ A release candidate is blocked unless these pass:

- add company selector and org chart view
- add approvals and cost pages
-- add operational dashboard and stale-task surfacing

## Milestone 6: Hardening and Release
doc/experimental/issue-worktree-support.md (new file, 62 lines)
@@ -0,0 +1,62 @@
# Issue worktree support

Status: experimental, runtime-only, not shipping as a user-facing feature yet.

This branch contains the runtime and seeding work needed for issue-scoped worktrees:

- project execution workspace policy support
- issue-level execution workspace settings
- git worktree realization for isolated issue execution
- optional command-based worktree provisioning
- seeded worktree fixes for secrets key compatibility
- seeded project workspace rebinding to the current git worktree

We are intentionally not shipping the UI for this yet. The runtime code remains in place, but the main UI entrypoints are hard-gated off for now.

## What works today

- projects can carry execution workspace policy in the backend
- issues can carry execution workspace settings in the backend
- heartbeat execution can realize isolated git worktrees
- the runtime can run a project-defined provision command inside the derived worktree
- seeded worktree instances can keep local-encrypted secrets working
- seeded worktree instances can rebind same-repo project workspace paths onto the current git worktree

## Hidden UI entrypoints

These are the current user-facing UI surfaces for the feature, now intentionally disabled:

- project settings: `ui/src/components/ProjectProperties.tsx`
  - execution workspace policy controls
  - git worktree base ref / branch template / parent dir
  - provision / teardown command inputs
- issue creation: `ui/src/components/NewIssueDialog.tsx`
  - isolated issue checkout toggle
  - defaulting issue execution workspace settings from project policy
- issue editing: `ui/src/components/IssueProperties.tsx`
  - issue-level workspace mode toggle
  - defaulting issue execution workspace settings when the project changes
- agent/runtime settings: `ui/src/adapters/runtime-json-fields.tsx`
  - the runtime services JSON field, which is part of the broader workspace-runtime support surface

## Why the UI is hidden

- the runtime behavior is still being validated
- the workflow and operator ergonomics are not final
- we do not want to expose a partially baked user-facing feature in issues, projects, or settings

## Re-enable plan

When this is ready to ship:

- re-enable the gated UI sections in the files above
- review wording and defaults for project and issue controls
- decide which agent/runtime settings should remain advanced-only
- add end-to-end product-level verification for the full UI workflow
doc/plans/2026-03-13-features.md (new file, 780 lines)
@@ -0,0 +1,780 @@
# Feature specs

## 1) Guided onboarding + first-job magic

The repo already has `onboard`, `doctor`, `run`, deployment modes, and even agent-oriented onboarding text/skills endpoints, but there are also current onboarding/auth validation issues and an open “onboard failed” report. That means this is not just polish; it is product-critical. ([GitHub][1])

### Product decision

Replace “configuration-first onboarding” with **interview-first onboarding**.

### What we want

- Ask 3–4 questions up front, not 20 settings.
- Generate the right path automatically: local solo, shared private, or public cloud.
- Detect what agent/runtime environment already exists.
- Make it normal to have Claude/OpenClaw/Codex help complete setup.
- End onboarding with a **real first task**, not a blank dashboard.

### What we do not want

- Provider jargon before value.
- “Go find an API key” as the default first instruction.
- A successful install that still leaves users unsure what to do next.

### Proposed UX

On first run, show an interview:

```ts
type OnboardingProfile = {
  useCase: "startup" | "agency" | "internal_team";
  companySource: "new" | "existing";
  deployMode: "local_solo" | "shared_private" | "shared_public";
  autonomyMode: "hands_on" | "hybrid" | "full_auto";
  primaryRuntime: "claude_code" | "codex" | "openclaw" | "other";
};
```

Questions:

1. What are you building?
2. Is this a new company, an existing company, or a service/agency team?
3. Are you working solo on one machine, sharing privately with a team, or deploying publicly?
4. Do you want full auto, hybrid, or tight manual control?

Then Paperclip should:

- detect installed CLIs/providers/subscriptions
- recommend the matching deployment/auth mode
- generate a local `onboarding.txt` / LLM handoff prompt
- offer a button: **“Open this in Claude / copy setup prompt”**
- create starter objects:
  - company
  - company goal
  - CEO
  - founding engineer or equivalent first report
  - first suggested task

### Backend / API

- Add `GET /api/onboarding/recommendation` (sketched below)
- Add `GET /api/onboarding/llm-handoff.txt`
- Reuse existing invite/onboarding/skills patterns for local-first bootstrap
- Persist onboarding answers into instance config for later defaults
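To make the recommendation step concrete, here is a minimal sketch of how the recommendation endpoint could map interview answers onto a deployment/auth suggestion, reusing the `OnboardingProfile` type above. The `Recommendation` shape and the mapping rules are illustrative assumptions, not settled API:

```ts
// Illustrative sketch only: maps interview answers to a suggested setup.
type Recommendation = {
  deploymentMode: "local_trusted" | "authenticated";
  exposure: "private" | "public";
  notes: string[];
};

function recommendSetup(profile: OnboardingProfile): Recommendation {
  // Solo use on one trusted machine keeps the no-auth local mode.
  if (profile.deployMode === "local_solo") {
    return {
      deploymentMode: "local_trusted",
      exposure: "private",
      notes: ["No auth needed on a single trusted machine."],
    };
  }
  // Any shared mode requires authenticated human access.
  return {
    deploymentMode: "authenticated",
    exposure: profile.deployMode === "shared_public" ? "public" : "private",
    notes:
      profile.deployMode === "shared_public"
        ? ["Requires a public base URL plus managed Postgres/object storage."]
        : ["Pair with Tailscale or another private network layer."],
  };
}
```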
### Acceptance criteria

- Fresh install with a supported local runtime completes without manual JSON/env editing.
- User sees first live agent action before leaving onboarding.
- A blank dashboard is no longer the default post-install state.
- If a required dependency is missing, the error is prescriptive and fixable from the UI/CLI.

### Non-goals

- account creation
- enterprise SSO
- perfect provider auto-detection for every runtime

---

## 2) Board command surface, not generic chat

There is a real tension here: the transcript says users want “chat with my CEO,” while the public product definition says Paperclip is **not a chatbot** and V1 communication is **tasks + comments only**. At the same time, the repo is already exploring plugin infrastructure and even a chat plugin via plugin SSE streaming. The clean resolution is: **make the core surface conversational, but keep the data model task/thread-centric; reserve full chat as an optional plugin**. ([GitHub][2])

### Product decision

Build a **Command Composer** backed by issues/comments/approvals, not a separate chat subsystem.

### What we want

- “Talk to the CEO” feeling for the user.
- Every conversation ends up attached to a real company object.
- Strategy discussion can produce issues, artifacts, and approvals.

### What we do not want

- A blank “chat with AI” home screen disconnected from the org.
- Yet another agent-chat product.

### Proposed UX

Add a global composer with modes:

```ts
type ComposerMode = "ask" | "task" | "decision";
type ThreadScope = "company" | "project" | "issue" | "agent";
```

Examples:

- On dashboard: “Ask the CEO for a hiring plan” → creates a `strategy` issue/thread scoped to the company.
- On agent page: “Tell the designer to make this cleaner” → appends an instruction comment to an issue or spawns a new delegated task.
- On approval page: “Why are you asking to hire?” → appends a board comment to the approval context.

Add issue kinds:

```ts
type IssueKind = "task" | "strategy" | "question" | "decision";
```

### Backend / data model

Prefer extending existing `issues` rather than creating `chats` (a mapping sketch follows the list):

- `issues.kind`
- `issues.scope`
- optional `issues.target_agent_id`
- comment metadata: `comment.intent = hint | correction | board_question | board_decision`
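As a sketch of how a composer submission could resolve onto these primitives, reusing the `ComposerMode`, `ThreadScope`, and `IssueKind` types above; the payload shape and field names are hypothetical:

```ts
// Hypothetical sketch: resolve a composer submission onto issue primitives.
type ComposerSubmission = {
  mode: ComposerMode;  // "ask" | "task" | "decision"
  scope: ThreadScope;  // "company" | "project" | "issue" | "agent"
  targetId: string;    // id of the scoped object
  body: string;
};

function toIssuePayload(s: ComposerSubmission) {
  // "ask" becomes a question thread, "decision" a decision thread,
  // everything else lands as a plain task.
  const kind: IssueKind =
    s.mode === "ask" ? "question" : s.mode === "decision" ? "decision" : "task";
  return {
    kind,
    scope: s.scope,
    targetAgentId: s.scope === "agent" ? s.targetId : undefined,
    title: s.body.slice(0, 80),
    body: s.body,
  };
}
```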
### Acceptance criteria

- A user can “ask CEO” from the dashboard and receive a response in a company-scoped thread.
- From that thread, the user can create/approve tasks with one click.
- No separate chat database is required for v1 of this feature.

### Non-goals

- consumer chat UX
- model marketplace
- general-purpose assistant unrelated to company context

---

## 3) Live org visibility + explainability layer

The core product promise is already visibility and governance, but right now the transcript makes clear that the UI is still too close to raw agent execution. The repo already has org charts, activity, heartbeat runs, costs, and agent detail surfaces; the missing piece is the explanatory layer above them. ([GitHub][1])

### Product decision

Default the UI to **human-readable operational summaries**, with raw logs one layer down.

### What we want

- At company level: “who is active, what are they doing, what is moving between teams”
- At agent level: “what is the plan, what step is complete, what outputs were produced”
- At run level: “summary first, transcript second”

### Proposed UX

Company page:

- org chart with live active-state indicators
- delegation animation between nodes when work moves
- current open priorities
- pending approvals
- burn / budget warning strip

Agent page:

- status card
- current issue
- plan checklist
- latest artifact(s)
- summary of last run
- expandable raw trace/logs

Run page:

- **Summary**
- **Steps**
- **Raw transcript / tool calls**

### Backend / API

Generate a run view model from current run/activity data:

```ts
type RunSummary = {
  runId: string;
  headline: string;
  objective: string | null;
  currentStep: string | null;
  completedSteps: string[];
  delegatedTo: { agentId: string; issueId?: string }[];
  artifactIds: string[];
  warnings: string[];
};
```

Phase 1 can derive this server-side from existing run logs/comments. Persist only if needed later.
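A rough sketch of that phase-1 derivation, reusing the `RunSummary` type above. The `RunEvent` shape is an assumption made for illustration; the real input would be whatever the current run log/comment records provide:

```ts
// Sketch: derive a RunSummary from existing run events; the event shape is assumed.
type RunEvent = {
  type: "plan" | "step_done" | "delegate" | "artifact" | "warning";
  text: string;
  refId?: string; // agent or artifact id, when applicable
};

function summarizeRun(runId: string, events: RunEvent[]): RunSummary {
  const completedSteps = events.filter((e) => e.type === "step_done").map((e) => e.text);
  return {
    runId,
    headline: completedSteps.at(-1) ?? "Run in progress",
    objective: events.find((e) => e.type === "plan")?.text ?? null,
    currentStep: null, // phase 1: leave unset unless the adapter reports it
    completedSteps,
    delegatedTo: events
      .filter((e) => e.type === "delegate" && e.refId)
      .map((e) => ({ agentId: e.refId! })),
    artifactIds: events.filter((e) => e.type === "artifact" && e.refId).map((e) => e.refId!),
    warnings: events.filter((e) => e.type === "warning").map((e) => e.text),
  };
}
```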
### Acceptance criteria

- Board can tell what is happening without reading shell commands.
- Raw logs are still accessible, but not the default surface.
- First task / first hire / first completion moments are visibly celebrated.

### Non-goals

- overdesigned animation system
- perfect semantic summarization before core data quality exists

---

## 4) Artifact system: attachments, file browser, previews

This gap is already showing up in the repo. Storage is present, attachment endpoints exist, but current issues show that attachments are still effectively image-centric and comment attachment rendering is incomplete. At the same time, your transcript wants plans, docs, files, and generated web pages surfaced cleanly. ([GitHub][4])

### Product decision

Introduce a first-class **Artifact** model that unifies:

- uploaded/generated files
- workspace files of interest
- preview URLs
- generated docs/reports

### What we want

- plans, specs, CSVs, markdown, PDFs, logs, JSON, HTML outputs
- easy discoverability from the issue/run/company pages
- a lightweight file browser for project workspaces
- preview links for generated websites/apps

### What we do not want

- forcing agents to paste everything inline into comments
- HTML stuffed into comment bodies as a workaround
- a full web IDE

### Phase 1: fix the obvious gaps

- Accept non-image MIME types for issue attachments (see the sketch after this list)
- Attach files to comments correctly
- Show file metadata + download/open on issue page
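For phase 1, the attachment gate could move from an image-only check to a broader document allow-list. A minimal sketch; where the current gate lives and what it looks like are assumptions, and the allowed types are illustrative:

```ts
// Sketch: broaden the attachment gate beyond images (allow-list is illustrative).
const ALLOWED_ATTACHMENT_MIME_PREFIXES = ["image/", "text/"];
const ALLOWED_ATTACHMENT_MIME_TYPES = new Set(["application/json", "application/pdf"]);

function isAllowedAttachment(mimeType: string): boolean {
  // "text/" covers .md, .txt, .csv, and .html; the set covers binary documents.
  return (
    ALLOWED_ATTACHMENT_MIME_PREFIXES.some((p) => mimeType.startsWith(p)) ||
    ALLOWED_ATTACHMENT_MIME_TYPES.has(mimeType)
  );
}
```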
### Phase 2: introduce artifacts

```ts
type ArtifactKind = "attachment" | "workspace_file" | "preview" | "report_link";

interface Artifact {
  id: string;
  companyId: string;
  issueId?: string;
  runId?: string;
  agentId?: string;
  kind: ArtifactKind;
  title: string;
  mimeType?: string;
  filename?: string;
  sizeBytes?: number;
  storageKind: "local_disk" | "s3" | "external_url";
  contentPath?: string;
  previewUrl?: string;
  metadata: Record<string, unknown>;
}
```

### UX

Issue page gets a **Deliverables** section:

- Files
- Reports
- Preview links
- Latest generated artifact highlighted at top

Project page gets a **Files** tab:

- folder tree
- recent changes
- “Open produced files” shortcut

### Preview handling

For HTML/static outputs:

- local deploy → open local preview URL
- shared/public deploy → host via configured preview service or static storage
- preview URL is registered back onto the issue as an artifact

### Acceptance criteria

- Agents can attach `.md`, `.txt`, `.json`, `.csv`, `.pdf`, and `.html`.
- Users can open/download them from the issue page.
- A generated static site can be opened from an issue without hunting through the filesystem.

### Non-goals

- browser IDE
- collaborative docs editor
- full object-storage admin UI

---

## 5) Shared/cloud deployment + cloud runtimes

The repo already has a clear deployment story in docs: `local_trusted`, `authenticated/private`, and `authenticated/public`, plus Tailscale guidance. The roadmap explicitly calls out cloud agents like Cursor / e2b. That means the next step is not inventing a deployment model; it is making the shared/cloud path canonical and production-usable. ([GitHub][5])

### Product decision

Make **shared/private deploy** and **public/cloud deploy** first-class supported modes, and add **remote runtime drivers** for cloud-executed agents.

### What we want

- one instance a team can actually share
- local-first path that upgrades to private/public without a mental model change
- remote agent execution for non-local runtimes

### Proposed architecture

Separate **control plane** from **execution runtime** more explicitly:

```ts
type RuntimeDriver = "local_process" | "remote_sandbox" | "webhook";

interface ExecutionHandle {
  externalRunId: string;
  status: "queued" | "running" | "completed" | "failed" | "cancelled";
  previewUrl?: string;
  logsUrl?: string;
}
```

First remote driver: `remote_sandbox` for e2b-style execution.
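A sketch of what a `remote_sandbox` driver contract could look like, reusing `RuntimeDriver` and `ExecutionHandle` from above. The method names and the declared provider calls are assumptions, not an existing API:

```ts
// Sketch of a runtime driver contract; method names are illustrative assumptions.
interface RuntimeDriverImpl {
  kind: RuntimeDriver;
  start(input: { issueId: string; prompt: string }): Promise<ExecutionHandle>;
  poll(externalRunId: string): Promise<ExecutionHandle>;
  cancel(externalRunId: string): Promise<void>;
}

// Assumed provider-side calls; stand-ins for whatever sandbox SDK is chosen.
declare function createSandboxRun(input: { issueId: string; prompt: string }): Promise<string>;
declare function fetchSandboxRun(id: string): Promise<ExecutionHandle>;
declare function stopSandboxRun(id: string): Promise<void>;

// e2b-style skeleton: the control plane only holds a handle, never the process.
const remoteSandboxDriver: RuntimeDriverImpl = {
  kind: "remote_sandbox",
  async start(input) {
    const externalRunId = await createSandboxRun(input);
    return { externalRunId, status: "queued" };
  },
  async poll(externalRunId) {
    return fetchSandboxRun(externalRunId);
  },
  async cancel(externalRunId) {
    await stopSandboxRun(externalRunId);
  },
};
```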
### Deliverables

- canonical deploy recipes:
  - local solo
  - shared private (Tailscale/private auth)
  - public cloud (managed Postgres + object storage + public URL)
- runtime health page
- adapter/runtime capability matrix
- one official reference deployment path

### UX

New “Deployment” settings page:

- instance mode
- auth/exposure
- storage/database status
- runtime drivers configured
- health and reachability checks

### Acceptance criteria

- Two humans can log into one authenticated/private instance and use it concurrently.
- A public deployment can run agents via at least one remote runtime.
- `doctor` catches missing public/private config and gives concrete fixes.

### Non-goals

- fully managed Paperclip SaaS
- every possible cloud provider in v1

---

## 6) Multi-human collaboration (minimal, not enterprise RBAC)

This is the biggest deliberate departure from the current V1 spec. Publicly, V1 still says “single human board operator” and puts role-based human granularity out of scope. But the transcript is right that shared use is necessary if Paperclip is going to be real for teams. The key is to do a **minimal collaboration model**, not a giant permission system. ([GitHub][2])

### Product decision

Ship **coarse multi-user company memberships**, not fine-grained enterprise RBAC.

### Proposed roles

```ts
type CompanyRole = "owner" | "admin" | "operator" | "viewer";
```

- **owner**: instance/company ownership, user invites, config
- **admin**: manage org, agents, budgets, approvals
- **operator**: create/update issues, interact with agents, view artifacts
- **viewer**: read-only

### Data model

```ts
interface CompanyMembership {
  userId: string;
  companyId: string;
  role: CompanyRole;
  invitedByUserId: string;
  createdAt: string;
}
```
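With memberships this coarse, authorization can stay a single lookup. A sketch reusing the `CompanyRole` type above; the action names are illustrative, not an existing permission vocabulary:

```ts
// Sketch: coarse role-to-capability mapping; action names are illustrative.
type CompanyAction = "manage_company" | "manage_org" | "write_issues" | "read";

const ROLE_ACTIONS: Record<CompanyRole, CompanyAction[]> = {
  owner: ["manage_company", "manage_org", "write_issues", "read"],
  admin: ["manage_org", "write_issues", "read"],
  operator: ["write_issues", "read"],
  viewer: ["read"],
};

function can(role: CompanyRole, action: CompanyAction): boolean {
  return ROLE_ACTIONS[role].includes(action);
}
```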
Stretch goal later:

- optional project/team scoping

### What we want

- shared dashboard for real teams
- user attribution in activity log
- simple invite flow
- company-level isolation preserved

### What we do not want

- per-field ACLs
- SCIM/SSO/enterprise admin consoles
- ten permission toggles per page

### Acceptance criteria

- Team of 3 can use one shared Paperclip instance.
- Every user action is attributed correctly in activity.
- Company membership boundaries are enforced.
- Viewer cannot mutate; operator/admin can.

### Non-goals

- enterprise RBAC
- cross-company matrix permissions
- multi-board governance logic in first cut

---

## 7) Auto mode + interrupt/resume

This is a product behavior issue, not a UI nicety. If agents cannot keep working or accept course correction without restarting, the autonomy model feels fake.

### Product decision

Make auto mode and mid-run interruption first-class runtime semantics.

### What we want

- Auto mode that continues until blocked by approvals, budgets, or explicit pause.
- Mid-run “you missed this” correction without losing session continuity.
- Clear state when an agent is waiting, blocked, or paused.

### Proposed state model

```ts
type RunState =
  | "queued"
  | "running"
  | "waiting_approval"
  | "waiting_input"
  | "paused"
  | "completed"
  | "failed"
  | "cancelled";
```

Add board interjections as resumable input events:

```ts
interface RunMessage {
  runId: string;
  authorUserId: string;
  mode: "hint" | "correction" | "hard_override";
  body: string;
  resumeCurrentSession: boolean;
}
```

### UX

Buttons on active run:

- Pause
- Resume
- Interrupt
- Abort
- Restart from scratch

Interrupt opens a small composer that explicitly offers two paths (sketched below):

- continue current session
- or restart run
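A sketch of that interrupt path, tying the composer choice to the `RunMessage` type above. The `pauseRun`/`appendRunMessage`/`resumeRun`/`restartRun` helpers are assumed, not existing functions:

```ts
// Sketch: interrupt an active run with a board message; helpers are assumed.
declare function pauseRun(runId: string): Promise<void>;
declare function appendRunMessage(msg: RunMessage): Promise<void>;
declare function resumeRun(runId: string): Promise<void>;
declare function restartRun(runId: string): Promise<void>;

async function interruptRun(msg: RunMessage): Promise<void> {
  await pauseRun(msg.runId);    // run moves to "paused"
  await appendRunMessage(msg);  // the correction lands in the session input
  if (msg.resumeCurrentSession) {
    await resumeRun(msg.runId); // same session id: the "continue" path
  } else {
    await restartRun(msg.runId); // fresh run from scratch
  }
}
```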
### Acceptance criteria

- A board comment can resume an active session instead of spawning a fresh one.
- Session ID remains stable for the “continue” path.
- UI clearly distinguishes blocked vs. waiting vs. paused.

### Non-goals

- simultaneous multi-user live editing of the same run transcript
- perfect conversational UX before runtime semantics are fixed

---

## 8) Cost safety + heartbeat/runtime hardening

This is probably the most important immediate workstream. The transcript says token burn is the highest pain, and the repo currently has active issues around budget enforcement evidence, onboarding/auth validation, and circuit-breaker-style waste prevention. Public docs already promise hard budgets, and the issue tracker is pointing at the missing operational protections. ([GitHub][6])

### Product decision

Treat this as a **P0 runtime contract**, not a nice-to-have.

### Part A: deterministic wake gating

Do cheap, explicit work detection before invoking an LLM.

```ts
type WakeReason =
  | "new_assignment"
  | "new_comment"
  | "mention"
  | "approval_resolved"
  | "scheduled_scan"
  | "manual";
```

Rules (a gate sketch follows):

- if no new actionable input exists, do not call the model
- scheduled scan should be a cheap policy check first, not a full reasoning pass
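A minimal sketch of the gate, reusing the `WakeReason` type above. The `WakeCheck` input shape is assumed; the point is only that the decision is deterministic and happens before any token is spent:

```ts
// Sketch: deterministic wake gate; decide before spending tokens.
type WakeCheck = { reason: WakeReason; isNew: boolean };

function shouldInvokeModel(checks: WakeCheck[]): boolean {
  const actionable = checks.filter((c) => c.isNew);
  if (actionable.length === 0) return false; // idle org: no model call
  // A scheduled scan alone does not justify a full reasoning pass; it should
  // run the cheap policy check and only escalate when it finds real input.
  return actionable.some((c) => c.reason !== "scheduled_scan");
}
```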
### Part B: budget contract

Keep the existing public promise, but make it undeniable:

- warning at 80%
- auto-pause at 100%
- visible audit trail
- explicit board override to continue

### Part C: circuit breaker

Add per-agent runtime guards:

```ts
interface CircuitBreakerConfig {
  enabled: boolean;
  maxConsecutiveNoProgress: number;
  maxConsecutiveFailures: number;
  tokenVelocityMultiplier: number;
}
```

Trip when (see the evaluator sketch below):

- no issue/status/comment progress for N runs
- N failures in a row
- token spike vs rolling average
### Part D: refactor heartbeat service

Split current orchestration into modules:

- wake detector
- checkout/lock manager
- adapter runner
- session manager
- cost recorder
- breaker evaluator
- event streamer

### Part E: regression suite

Mandatory automated proofs for:

- onboarding/auth matrix
- 80/100 budget behavior
- no cross-company auth leakage
- no-spurious-wake idle behavior
- active-run resume/interruption
- remote runtime smoke

### Acceptance criteria

- An idle org with no new work does not generate model calls from heartbeat scans.
- 80% shows a warning only.
- 100% pauses the agent and blocks continued execution until override.
- Circuit breaker pauses are visible in audit/activity.
- Runtime modules have explicit contracts and are testable independently.

### Non-goals

- perfect autonomous optimization
- eliminating all wasted calls in every adapter/provider

---

## 9) Project workspaces, previews, and PR handoff — without becoming GitHub

This is the right way to resolve the code-workflow debate. The repo already has worktree-local instances, project `workspaceStrategy.provisionCommand`, and an RFC for adapter-level git worktree isolation. That is the correct architectural direction: **project execution policies and workspace isolation**, not built-in PR review. ([GitHub][7])

### Product decision

Paperclip should manage the **issue → workspace → preview/PR → review handoff** lifecycle, but leave diffs/review/merge to external tools.

### Proposed config

Prefer repo-local project config:

```yaml
# .paperclip/project.yml
execution:
  workspaceStrategy: shared | worktree | ephemeral_container
  deliveryMode: artifact | preview | pull_request
  provisionCommand: "pnpm install"
  teardownCommand: "pnpm clean"
preview:
  command: "pnpm dev --port $PAPERCLIP_PREVIEW_PORT"
  healthPath: "/"
  ttlMinutes: 120
vcs:
  provider: github
  repo: owner/repo
  prPerIssue: true
  baseBranch: main
```

### Rules

- For non-code projects: `deliveryMode=artifact`
- For UI/app work: `deliveryMode=preview`
- For git-backed engineering projects: `deliveryMode=pull_request`
- For git-backed projects with `prPerIssue=true`, one issue maps to one isolated branch/worktree

### UX

Issue page shows:

- workspace link/status
- preview URL if available
- PR URL if created
- “Reopen preview” button with TTL
- lifecycle:
  - `todo`
  - `in_progress`
  - `in_review`
  - `done`

### What we want

- safe parallel agent work on one repo
- previewable output
- external PR review
- project-defined hooks, not hardcoded assumptions

### What we do not want

- built-in diff viewer
- merge queue
- Jira clone
- mandatory PRs for non-code work

### Acceptance criteria

- Multiple engineer agents can work concurrently without workspace contamination.
- When a project is in PR mode, the issue contains branch/worktree/preview/PR metadata.
- Preview can be reopened on demand until TTL expires.

### Non-goals

- replacing GitHub/GitLab
- universal preview hosting for every framework on day one

---

## 10) Plugin system as the escape hatch

The roadmap already includes plugins, GitHub discussions are active around it, and there is an open issue proposing an SSE bridge specifically to enable streaming plugin UIs such as chat, logs, and monitors. This is exactly the right place for optional surfaces. ([GitHub][1])

### Product decision

Keep the control-plane core thin; put optional high-variance experiences into plugins.

### First-party plugin targets

- Chat
- Knowledge base / RAG
- Log tail / live build output
- Custom tracing or queues
- Doc editor / proposal builder

### Plugin manifest

```ts
interface PluginManifest {
  id: string;
  version: string;
  requestedPermissions: (
    | "read_company"
    | "read_issue"
    | "write_issue_comment"
    | "create_issue"
    | "stream_ui"
  )[];
  surfaces: ("company_home" | "issue_panel" | "agent_panel" | "sidebar")[];
  workerEntry: string;
  uiEntry: string;
}
```
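For instance, a first-party chat plugin's manifest might look like this; the id, entry paths, and version are illustrative values, not a shipped plugin:

```ts
// Illustrative manifest instance for a first-party chat plugin.
const chatPluginManifest: PluginManifest = {
  id: "paperclip-chat",
  version: "0.1.0",
  requestedPermissions: ["read_company", "read_issue", "write_issue_comment", "stream_ui"],
  surfaces: ["company_home", "sidebar"],
  workerEntry: "dist/worker.js",
  uiEntry: "dist/ui.js",
};
```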
### Platform requirements

- host ↔ worker action bridge
- SSE/UI streaming
- company-scoped auth
- permission declaration
- surface slots in UI

### Acceptance criteria

- A plugin can stream events to UI in real time.
- A chat plugin can converse without requiring chat to become the core Paperclip product.
- Plugin permissions are company-scoped and auditable.

### Non-goals

- plugins mutating core schema directly
- arbitrary privileged code execution without explicit permissions

---

## Priority order I would use

Given the repo state and the transcript, I would sequence it like this:

**P0**

1. Cost safety + heartbeat hardening
2. Guided onboarding + first-job magic
3. Shared/cloud deployment foundation
4. Artifact phase 1: non-image attachments + deliverables surfacing

**P1**

5. Board command surface
6. Visibility/explainability layer
7. Auto mode + interrupt/resume
8. Minimal multi-user collaboration

**P2**

9. Project workspace / preview / PR lifecycle
10. Plugin system + optional chat plugin
11. Template/preset expansion for startup vs agency vs internal-team onboarding

Why this order: the current repo is already getting pressure on onboarding failures, auth/onboarding validation, budget enforcement, and wasted token burn. If those are shaky, everything else feels impressive but unsafe. ([GitHub][3])

## Bottom line

The best synthesis is:

- **Keep** Paperclip as the board-level control plane.
- **Do not** make chat, code review, or workflow-building the core identity.
- **Do** make the product feel conversational, visible, output-oriented, and shared.
- **Do** make coding workflows an integration surface via workspaces/previews/PR links.
- **Use plugins** for richer edges like chat and knowledge.

That keeps the repo’s current product direction intact while solving almost every pain surfaced in the transcript.

### Key references

- README / positioning / roadmap / product boundaries. ([GitHub][1])
- Product definition. ([GitHub][8])
- V1 implementation spec and explicit non-goals. ([GitHub][2])
- Core concepts and architecture. ([GitHub][9])
- Deployment modes / Tailscale / local-to-cloud path. ([GitHub][5])
- Developing guide: worktree-local instances, provision hooks, onboarding endpoints. ([GitHub][7])
- Current issue pressure: onboarding failure, auth/onboarding validation, budget enforcement, circuit breaker, attachment gaps, plugin chat. ([GitHub][3])

[1]: https://github.com/paperclipai/paperclip "https://github.com/paperclipai/paperclip"
[2]: https://github.com/paperclipai/paperclip/blob/master/doc/SPEC-implementation.md "https://github.com/paperclipai/paperclip/blob/master/doc/SPEC-implementation.md"
[3]: https://github.com/paperclipai/paperclip/issues/704 "https://github.com/paperclipai/paperclip/issues/704"
[4]: https://github.com/paperclipai/paperclip/blob/master/docs/deploy/tailscale-private-access.md "https://github.com/paperclipai/paperclip/blob/master/docs/deploy/tailscale-private-access.md"
[5]: https://github.com/paperclipai/paperclip/blob/master/docs/deploy/deployment-modes.md "https://github.com/paperclipai/paperclip/blob/master/docs/deploy/deployment-modes.md"
[6]: https://github.com/paperclipai/paperclip/issues/692 "https://github.com/paperclipai/paperclip/issues/692"
[7]: https://github.com/paperclipai/paperclip/blob/master/doc/DEVELOPING.md "https://github.com/paperclipai/paperclip/blob/master/doc/DEVELOPING.md"
[8]: https://github.com/paperclipai/paperclip/blob/master/doc/PRODUCT.md "https://github.com/paperclipai/paperclip/blob/master/doc/PRODUCT.md"
[9]: https://github.com/paperclipai/paperclip/blob/master/docs/start/core-concepts.md "https://github.com/paperclipai/paperclip/blob/master/docs/start/core-concepts.md"
doc/plans/agent-chat-ui-and-issue-backed-conversations.md (new file, 329 lines)
@@ -0,0 +1,329 @@
# Agent Chat UI and Issue-Backed Conversations
|
||||
|
||||
## Context
|
||||
|
||||
`PAP-475` asks two related questions:
|
||||
|
||||
1. What UI kit should Paperclip use if we add a chat surface with an agent?
|
||||
2. How should chat fit the product without breaking the current issue-centric model?
|
||||
|
||||
This is not only a component-library decision. In Paperclip today:
|
||||
|
||||
- V1 explicitly says communication is `tasks + comments only`, with no separate chat system.
|
||||
- Issues already carry assignment, audit trail, billing code, project linkage, goal linkage, and active run linkage.
|
||||
- Live run streaming already exists on issue detail pages.
|
||||
- Agent sessions already persist by `taskKey`, and today `taskKey` falls back to `issueId`.
|
||||
- The OpenClaw gateway adapter already supports an issue-scoped session key strategy.
|
||||
|
||||
That means the cheapest useful path is not "add a second messaging product inside Paperclip." It is "add a better conversational UI on top of issue and run primitives we already have."
|
||||
|
||||
## Current Constraints From the Codebase

### Durable work object

The durable object in Paperclip is the issue, not a chat thread.

- `IssueDetail` already combines comments, linked runs, live runs, and activity into one timeline.
- `CommentThread` already renders markdown comments and supports reply/reassignment flows.
- `LiveRunWidget` already renders streaming assistant/tool/system output for active runs.
### Session behavior

Session continuity is already task-shaped.

- `heartbeat.ts` resolves `taskKey` from an explicit `taskKey`, then `taskId`, then `issueId`.
- `agent_task_sessions` stores session state per company + agent + adapter + task key.
- OpenClaw gateway supports `sessionKeyStrategy=issue|fixed|run`, and `issue` already matches the Paperclip mental model well.

That means "chat with the CEO about this issue" naturally maps to one durable session per issue today without inventing a second session system.
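A minimal sketch of that fallback order (the payload shape and helper name are illustrative; the real derivation lives in `server/src/services/heartbeat.ts`):

```ts
// Sketch only: mirrors the fallback order described above,
// not the actual heartbeat.ts implementation.
interface WakePayload {
  taskKey?: string;
  taskId?: string;
  issueId?: string;
}

function deriveTaskKey(payload: WakePayload): string | null {
  return payload.taskKey ?? payload.taskId ?? payload.issueId ?? null;
}
```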
### Billing behavior

Billing is already issue-aware.

- `cost_events` can attach to `issueId`, `projectId`, `goalId`, and `billingCode`.
- heartbeat context already propagates issue linkage into runs and cost rollups.

If chat leaves the issue model, Paperclip would need a second billing story. That is avoidable.
## UI Kit Recommendation

### Recommendation: `assistant-ui`

Use `assistant-ui` as the chat presentation layer.

Why it fits Paperclip:

- It is a real chat UI kit, not just a hook.
- It is composable and aligned with shadcn-style primitives, which matches the current UI stack well.
- It explicitly supports custom backends, which matters because Paperclip talks to agents through issue comments, heartbeats, and run streams rather than direct provider calls.
- It gives us polished chat affordances quickly: message list, composer, streaming text, attachments, thread affordances, and markdown-oriented rendering.

Why not make "the Vercel one" the primary choice:

- Vercel AI SDK is stronger today than the older "just `useChat` over `/api/chat`" framing. Its transport layer is flexible and can support custom protocols.
- But AI SDK is still better understood here as a transport/runtime protocol layer than as the best end-user chat surface for Paperclip.
- Paperclip does not need Vercel to own message state, persistence, or the backend contract. Paperclip already has its own issue, run, and session model.

So the clean split is:

- `assistant-ui` for UI primitives
- Paperclip-owned runtime/store for state, persistence, and transport
- optional AI SDK usage later only if we want its stream protocol or client transport abstraction
## Product Options

### Option A: Separate chat object

Create a new top-level chat/thread model unrelated to issues.

Pros:

- clean mental model if users want freeform conversation
- easy to hide from issue boards

Cons:

- breaks the current V1 product decision that communication is issue-centric
- needs new persistence, billing, session, permissions, activity, and wakeup rules
- creates a second "why does this exist?" object beside issues
- makes "pick up an old chat" a separate retrieval problem

Verdict: not recommended for V1.

### Option B: Every chat is an issue

Treat chat as a UI mode over an issue. The issue remains the durable record.

Pros:

- matches current product spec
- billing, runs, comments, approvals, and activity already work
- sessions already resume on issue identity
- works with all adapters, including OpenClaw, without new agent auth or a second API surface

Cons:

- some chats are not really "tasks" in a board sense
- onboarding and review conversations may clutter normal issue lists

Verdict: best V1 foundation.

### Option C: Hybrid with hidden conversation issues

Back every conversation with an issue, but allow a conversation-flavored issue mode that is hidden from default execution boards unless promoted.

Pros:

- preserves the issue-centric backend
- gives onboarding/review chat a cleaner UX
- preserves billing and session continuity

Cons:

- requires extra UI rules and possibly a small schema or filtering addition
- can become a disguised second system if not kept narrow

Verdict: likely the right product shape after a basic issue-backed MVP.
## Recommended Product Model

### Phase 1 product decision

For the first implementation, chat should be issue-backed.

More specifically:

- the board opens a chat surface for an issue
- sending a message is a comment mutation on that issue
- the assigned agent is woken through the existing issue-comment flow
- streaming output comes from the existing live run stream for that issue
- durable assistant output remains comments and run history, not an extra transcript store

This keeps Paperclip honest about what it is:

- the control plane stays issue-centric
- chat is a better way to interact with issue work, not a new collaboration product

### Onboarding and CEO conversations

For onboarding, weekly reviews, and "chat with the CEO", use a conversation issue rather than a global chat tab.

Suggested shape:

- create a board-initiated issue assigned to the CEO
- mark it as conversation-flavored in UI treatment
- optionally hide it from normal issue boards by default later
- keep all cost/run/session linkage on that issue

This solves several concerns at once:

- no separate API key or direct provider wiring is needed
- the same CEO adapter is used
- old conversations are recovered through normal issue history
- the CEO can still create or update real child issues from the conversation
## Session Model

### V1

Use one durable conversation session per issue.

That already matches current behavior:

- adapter task sessions persist against `taskKey`
- `taskKey` already falls back to `issueId`
- OpenClaw already supports an issue-scoped session key

This means "resume the CEO conversation later" works by reopening the same issue and waking the same agent on the same issue.

### What not to add yet

Do not add multi-thread-per-issue chat in the first pass.

If Paperclip later needs several parallel threads on one issue, then add an explicit conversation identity and derive:

- `taskKey = issue:<issueId>:conversation:<conversationId>`
- OpenClaw `sessionKey = paperclip:conversation:<conversationId>`

Until that requirement becomes real, one issue == one durable conversation is the simpler and better rule.
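If that requirement does arrive, the derivation could look like these hypothetical helpers (not current Paperclip code; the key formats follow the bullets above):

```ts
// Hypothetical future helpers; key formats match the bullets above.
function conversationTaskKey(issueId: string, conversationId?: string): string {
  return conversationId
    ? `issue:${issueId}:conversation:${conversationId}`
    : issueId; // V1 rule: one durable session per issue
}

function openclawSessionKey(conversationId: string): string {
  return `paperclip:conversation:${conversationId}`;
}
```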
## Billing Model

Chat should not invent a separate billing pipeline.

All chat cost should continue to roll up through the issue:

- `cost_events.issueId`
- project and goal rollups through existing relationships
- issue `billingCode` when present

If a conversation is important enough to exist, it is important enough to have a durable issue-backed audit and cost trail.

This is another reason ephemeral freeform chat should not be the default.
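Concretely, chat-driven runs keep emitting the same issue-linked rows. A rough sketch of the shape involved (field names follow the bullets above; the real schema lives in `packages/db` and may differ):

```ts
// Illustrative shape only, not the real packages/db schema.
interface CostEvent {
  issueId: string | null;     // chat runs keep pointing at their issue
  projectId: string | null;   // rollups follow existing relationships
  goalId: string | null;
  billingCode: string | null; // inherited from the issue when present
  costUsd: number;
}
```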
## UI Architecture

### Recommended stack

1. Keep Paperclip as the source of truth for message history and run state.
2. Add `assistant-ui` as the rendering/composer layer.
3. Build a Paperclip runtime adapter (sketched below) that maps:
   - issue comments -> user/assistant messages
   - live run deltas -> streaming assistant messages
   - issue attachments -> chat attachments
4. Keep current markdown rendering and code-block support where possible.
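A minimal sketch of step 3's mapping layer. `IssueComment` and `ChatMessage` are illustrative types, not Paperclip's real models; in practice the target would be assistant-ui's own message shape:

```ts
// Illustrative types; the real comment model and assistant-ui's
// message type would replace these.
interface IssueComment {
  id: string;
  authorKind: "board" | "agent";
  bodyMarkdown: string;
  createdAt: string;
}

interface ChatMessage {
  id: string;
  role: "user" | "assistant";
  content: string;
  createdAt: string;
}

function commentToChatMessage(comment: IssueComment): ChatMessage {
  return {
    id: comment.id,
    role: comment.authorKind === "board" ? "user" : "assistant",
    content: comment.bodyMarkdown,
    createdAt: comment.createdAt,
  };
}
```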
### Interaction flow

1. Board opens issue detail in "Chat" mode.
2. Existing comment history is mapped into chat messages.
3. When the board sends a message (see the sketch after this list):
   - `POST /api/issues/{id}/comments`
   - optionally interrupt the active run if the UX wants "send and replace current response"
4. Existing issue comment wakeup logic wakes the assignee.
5. Existing `/issues/{id}/live-runs` and `/issues/{id}/active-run` data feeds drive streaming.
6. When the run completes, durable state remains in comments/runs/activity as it does now.
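A minimal sketch of step 3, assuming the comment endpoint above accepts a JSON body with a markdown `body` field (the request shape is an assumption):

```ts
// Assumed request shape; the real endpoint contract may differ.
async function sendChatMessage(issueId: string, body: string): Promise<void> {
  const res = await fetch(`/api/issues/${issueId}/comments`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ body }),
  });
  if (!res.ok) throw new Error(`comment failed: ${res.status}`);
  // Wakeup and streaming then flow through the existing issue-comment
  // and live-run paths; no new API surface is involved.
}
```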
### Why this fits the current code

Paperclip already has most of the backend pieces:

- issue comments
- run timeline
- run log and event streaming
- markdown rendering
- attachment support
- assignee wakeups on comments

The missing piece is mostly the presentation and the mapping layer, not a new backend domain.
## Agent Scope

Do not launch this as "chat with every agent."

Start narrower:

- onboarding chat with CEO
- workflow/review chat with CEO
- maybe selected exec roles later

Reasons:

- it keeps the feature from becoming a second inbox/chat product
- it limits permission and UX questions early
- it matches the stated product demand

If direct chat with other agents becomes useful later, the same issue-backed pattern can expand cleanly.
## Recommended Delivery Phases

### Phase 1: Chat UI on existing issues

- add a chat presentation mode to issue detail
- use `assistant-ui`
- map comments + live runs into the chat surface
- no schema change
- no new API surface

This is the highest-leverage step because it tests whether the UX is actually useful before product model expansion.

### Phase 2: Conversation-flavored issues for CEO chat

- add a lightweight conversation classification
- support creation of CEO conversation issues from onboarding and workflow entry points
- optionally hide these from normal backlog/board views by default

The smallest implementation could be a label or issue metadata flag (sketched below). If it becomes important enough, then promote it to a first-class issue subtype later.
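For example, the flag-based approach might look like this. `flavor` is a hypothetical field, not an existing schema column:

```ts
// Hypothetical metadata flag; not an existing Paperclip schema field.
type IssueFlavor = "task" | "conversation";

interface IssueSummary {
  id: string;
  title: string;
  flavor?: IssueFlavor;
}

// Default board view: hide conversation-flavored issues unless promoted.
function boardVisibleIssues(issues: IssueSummary[]): IssueSummary[] {
  return issues.filter((issue) => (issue.flavor ?? "task") === "task");
}
```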
### Phase 3: Promotion and thread splitting only if needed

Only if we later see a real need:

- allow promoting a conversation to a formal task issue
- allow several threads per issue with explicit conversation identity

This should be demand-driven, not designed up front.
## Clear Recommendation

If the question is "what should we use?", the answer is:

- use `assistant-ui` for the chat UI
- do not treat raw Vercel AI SDK UI hooks as the main product answer
- keep chat issue-backed in V1
- use the current issue comment + run + session + billing model rather than inventing a parallel chat subsystem

If the question is "how should we think about chat in Paperclip?", the answer is:

- chat is a mode of interacting with issue-backed agent work
- not a separate product silo
- not an excuse to stop tracing work, cost, and session history back to the issue
## Implementation Notes

### Immediate implementation target

The most defensible first build is:

- add a chat tab or chat-focused layout on issue detail
- back it with the currently assigned agent on that issue
- use `assistant-ui` primitives over existing comments and live run events

### Defer these until proven necessary

- standalone global chat objects
- multi-thread chat inside one issue
- chat with every agent in the org
- a second persistence layer for message history
- separate cost tracking for chats
## References

- V1 communication model: `doc/SPEC-implementation.md`
- Current issue/comment/run UI: `ui/src/pages/IssueDetail.tsx`, `ui/src/components/CommentThread.tsx`, `ui/src/components/LiveRunWidget.tsx`
- Session persistence and task key derivation: `server/src/services/heartbeat.ts`, `packages/db/src/schema/agent_task_sessions.ts`
- OpenClaw session routing: `packages/adapters/openclaw-gateway/README.md`
- assistant-ui docs: <https://www.assistant-ui.com/docs>
- assistant-ui repo: <https://github.com/assistant-ui/assistant-ui>
- AI SDK transport docs: <https://ai-sdk.dev/docs/ai-sdk-ui/transport>
1335
doc/plans/workspace-strategy-and-git-worktrees.md
Normal file
File diff suppressed because it is too large
@@ -249,7 +249,7 @@ Runs local `claude` CLI directly.
   "cwd": "/absolute/or/relative/path",
   "promptTemplate": "You are agent {{agent.id}} ...",
   "model": "optional-model-id",
-  "maxTurnsPerRun": 80,
+  "maxTurnsPerRun": 300,
   "dangerouslySkipPermissions": true,
   "env": {"KEY": "VALUE"},
   "extraArgs": [],
@@ -114,7 +114,7 @@ No section header — these are always at the top, below the company header.
 My Issues
 ```

-- **Inbox** — items requiring the board operator's attention. Badge count on the right. Includes: pending approvals, stale tasks, budget alerts, failed heartbeats. The number is the total unread/unresolved count.
+- **Inbox** — items requiring the board operator's attention. Badge count on the right. Includes: pending approvals, budget alerts, failed heartbeats. The number is the total unread/unresolved count.
 - **My Issues** — issues created by or assigned to the board operator.

 ### 3.3 Work Section
@@ -20,7 +20,7 @@ The `claude_local` adapter runs Anthropic's Claude Code CLI locally. It supports
 | `env` | object | No | Environment variables (supports secret refs) |
 | `timeoutSec` | number | No | Process timeout (0 = no timeout) |
 | `graceSec` | number | No | Grace period before force-kill |
-| `maxTurnsPerRun` | number | No | Max agentic turns per heartbeat |
+| `maxTurnsPerRun` | number | No | Max agentic turns per heartbeat (defaults to `300`) |
 | `dangerouslySkipPermissions` | boolean | No | Skip permission prompts (dev only) |

 ## Prompt Templates
45
docs/adapters/gemini-local.md
Normal file
@@ -0,0 +1,45 @@
---
title: Gemini Local
summary: Gemini CLI local adapter setup and configuration
---

The `gemini_local` adapter runs Google's Gemini CLI locally. It supports session persistence with `--resume`, skills injection, and structured `stream-json` output parsing.

## Prerequisites

- Gemini CLI installed (`gemini` command available)
- `GEMINI_API_KEY` or `GOOGLE_API_KEY` set, or local Gemini CLI auth configured

## Configuration Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `cwd` | string | Yes | Working directory for the agent process (absolute path; created automatically if missing when permissions allow) |
| `model` | string | No | Gemini model to use. Defaults to `auto`. |
| `promptTemplate` | string | No | Prompt used for all runs |
| `instructionsFilePath` | string | No | Markdown instructions file prepended to the prompt |
| `env` | object | No | Environment variables (supports secret refs) |
| `timeoutSec` | number | No | Process timeout (0 = no timeout) |
| `graceSec` | number | No | Grace period before force-kill |
| `yolo` | boolean | No | Pass `--approval-mode yolo` for unattended operation |

## Session Persistence

The adapter persists Gemini session IDs between heartbeats. On the next wake, it resumes the existing conversation with `--resume` so the agent retains context.

Session resume is cwd-aware: if the working directory changed since the last run, a fresh session starts instead.

If resume fails with an unknown session error, the adapter automatically retries with a fresh session.
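The resume behavior reduces to a small decision procedure. A sketch under assumed shapes (`runGemini` and the session record are illustrative, not the adapter's real internals):

```ts
// Illustrative sketch of the resume rules above; not the adapter's code.
interface GeminiSession {
  sessionId: string;
  cwd: string;
}

type GeminiResult = { ok: boolean; unknownSession: boolean };

async function wakeGemini(
  prev: GeminiSession | null,
  cwd: string,
  runGemini: (args: string[]) => Promise<GeminiResult>,
): Promise<void> {
  // cwd-aware: a changed working directory starts a fresh session.
  if (prev && prev.cwd === cwd) {
    const resumed = await runGemini(["--resume", prev.sessionId]);
    if (resumed.ok) return;
    if (!resumed.unknownSession) throw new Error("gemini resume failed");
    // Unknown-session errors fall through to a fresh session retry.
  }
  await runGemini([]);
}
```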
## Skills Injection

The adapter symlinks Paperclip skills into the Gemini global skills directory (`~/.gemini/skills`). Existing user skills are not overwritten.

## Environment Test

Use the "Test Environment" button in the UI to validate the adapter config. It checks:

- Gemini CLI is installed and accessible
- Working directory is absolute and available (auto-created if missing and permitted)
- API key/auth hints (`GEMINI_API_KEY` or `GOOGLE_API_KEY`)
- A live hello probe (`gemini --output-format json "Respond with hello."`) to verify CLI readiness
@@ -20,6 +20,7 @@ When a heartbeat fires, Paperclip:
 |---------|----------|-------------|
 | [Claude Local](/adapters/claude-local) | `claude_local` | Runs Claude Code CLI locally |
 | [Codex Local](/adapters/codex-local) | `codex_local` | Runs OpenAI Codex CLI locally |
+| [Gemini Local](/adapters/gemini-local) | `gemini_local` | Runs Gemini CLI locally |
 | OpenCode Local | `opencode_local` | Runs OpenCode CLI locally (multi-provider `provider/model`) |
 | OpenClaw | `openclaw` | Sends wake payloads to an OpenClaw webhook |
 | [Process](/adapters/process) | `process` | Executes arbitrary shell commands |
@@ -54,7 +55,7 @@ Three registries consume these modules:

 ## Choosing an Adapter

-- **Need a coding agent?** Use `claude_local`, `codex_local`, or `opencode_local`
+- **Need a coding agent?** Use `claude_local`, `codex_local`, `gemini_local`, or `opencode_local`
 - **Need to run a script or command?** Use `process`
 - **Need to call an external service?** Use `http`
 - **Need something custom?** [Create your own adapter](/adapters/creating-an-adapter)
@@ -4,7 +4,7 @@
   "type": "module",
   "scripts": {
     "dev": "node scripts/dev-runner.mjs watch",
-    "dev:watch": "cross-env PAPERCLIP_MIGRATION_PROMPT=never node scripts/dev-runner.mjs watch",
+    "dev:watch": "node scripts/dev-runner.mjs watch",
    "dev:once": "node scripts/dev-runner.mjs dev",
     "dev:server": "pnpm --filter @paperclipai/server dev",
     "dev:ui": "pnpm --filter @paperclipai/ui dev",
@@ -1,5 +1,11 @@
 # @paperclipai/adapter-utils

+## 0.3.0
+
+### Minor Changes
+
+- Stable release preparation for 0.3.0
+
 ## 0.2.7

 ### Patch Changes
@@ -1,6 +1,6 @@
 {
   "name": "@paperclipai/adapter-utils",
-  "version": "0.2.7",
+  "version": "0.3.0",
   "type": "module",
   "exports": {
     ".": "./src/index.ts",
@@ -3,6 +3,7 @@ export type {
   AdapterRuntime,
   UsageSummary,
   AdapterBillingType,
+  AdapterRuntimeServiceReport,
   AdapterExecutionResult,
   AdapterInvocationMeta,
   AdapterExecutionContext,
@@ -21,3 +22,9 @@ export type {
   CLIAdapterModule,
   CreateConfigValues,
 } from "./types.js";
+export {
+  REDACTED_HOME_PATH_USER,
+  redactHomePathUserSegments,
+  redactHomePathUserSegmentsInValue,
+  redactTranscriptEntryPaths,
+} from "./log-redaction.js";
81
packages/adapter-utils/src/log-redaction.ts
Normal file
@@ -0,0 +1,81 @@
import type { TranscriptEntry } from "./types.js";

export const REDACTED_HOME_PATH_USER = "[]";

// Home-directory patterns for macOS, Linux, and Windows paths.
const HOME_PATH_PATTERNS = [
  {
    regex: /\/Users\/[^/\\\s]+/g,
    replace: `/Users/${REDACTED_HOME_PATH_USER}`,
  },
  {
    regex: /\/home\/[^/\\\s]+/g,
    replace: `/home/${REDACTED_HOME_PATH_USER}`,
  },
  {
    regex: /([A-Za-z]:\\Users\\)[^\\/\s]+/g,
    replace: `$1${REDACTED_HOME_PATH_USER}`,
  },
] as const;

function isPlainObject(value: unknown): value is Record<string, unknown> {
  if (typeof value !== "object" || value === null || Array.isArray(value)) return false;
  const proto = Object.getPrototypeOf(value);
  return proto === Object.prototype || proto === null;
}

// Replaces the username segment of any home path in a string.
export function redactHomePathUserSegments(text: string): string {
  let result = text;
  for (const pattern of HOME_PATH_PATTERNS) {
    result = result.replace(pattern.regex, pattern.replace);
  }
  return result;
}

// Recursively redacts strings inside arrays and plain objects;
// other values pass through unchanged.
export function redactHomePathUserSegmentsInValue<T>(value: T): T {
  if (typeof value === "string") {
    return redactHomePathUserSegments(value) as T;
  }
  if (Array.isArray(value)) {
    return value.map((entry) => redactHomePathUserSegmentsInValue(entry)) as T;
  }
  if (!isPlainObject(value)) {
    return value;
  }

  const redacted: Record<string, unknown> = {};
  for (const [key, entry] of Object.entries(value)) {
    redacted[key] = redactHomePathUserSegmentsInValue(entry);
  }
  return redacted as T;
}

// Applies redaction to every text-bearing field of a transcript entry.
export function redactTranscriptEntryPaths(entry: TranscriptEntry): TranscriptEntry {
  switch (entry.kind) {
    case "assistant":
    case "thinking":
    case "user":
    case "stderr":
    case "system":
    case "stdout":
      return { ...entry, text: redactHomePathUserSegments(entry.text) };
    case "tool_call":
      return { ...entry, name: redactHomePathUserSegments(entry.name), input: redactHomePathUserSegmentsInValue(entry.input) };
    case "tool_result":
      return { ...entry, content: redactHomePathUserSegments(entry.content) };
    case "init":
      return {
        ...entry,
        model: redactHomePathUserSegments(entry.model),
        sessionId: redactHomePathUserSegments(entry.sessionId),
      };
    case "result":
      return {
        ...entry,
        text: redactHomePathUserSegments(entry.text),
        subtype: redactHomePathUserSegments(entry.subtype),
        errors: entry.errors.map((error) => redactHomePathUserSegments(error)),
      };
    default:
      return entry;
  }
}
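For instance, redacting a tool result before persisting it (usage sketch; the entry literal is illustrative):

```ts
// With REDACTED_HOME_PATH_USER = "[]", home-path usernames collapse to "[]".
const entry = redactTranscriptEntryPaths({
  kind: "tool_result",
  ts: new Date().toISOString(),
  toolUseId: "tool-1",
  content: "read /Users/alice/project/notes.txt",
  isError: false,
});
// entry.content === "read /Users/[]/project/notes.txt"
```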
@@ -272,7 +272,24 @@ export async function runChildProcess(
   const onLogError = opts.onLogError ?? ((err, id, msg) => console.warn({ err, runId: id }, msg));

   return new Promise<RunProcessResult>((resolve, reject) => {
-    const mergedEnv = ensurePathInEnv({ ...process.env, ...opts.env });
+    const rawMerged: NodeJS.ProcessEnv = { ...process.env, ...opts.env };
+
+    // Strip Claude Code nesting-guard env vars so spawned `claude` processes
+    // don't refuse to start with "cannot be launched inside another session".
+    // These vars leak in when the Paperclip server itself is started from
+    // within a Claude Code session (e.g. `npx paperclipai run` in a terminal
+    // owned by Claude Code) or when cron inherits a contaminated shell env.
+    const CLAUDE_CODE_NESTING_VARS = [
+      "CLAUDECODE",
+      "CLAUDE_CODE_ENTRYPOINT",
+      "CLAUDE_CODE_SESSION",
+      "CLAUDE_CODE_PARENT_SESSION",
+    ] as const;
+    for (const key of CLAUDE_CODE_NESTING_VARS) {
+      delete rawMerged[key];
+    }
+
+    const mergedEnv = ensurePathInEnv(rawMerged);
     void resolveSpawnTarget(command, args, opts.cwd, mergedEnv)
       .then((target) => {
         const child = spawn(target.command, target.args, {
@@ -32,6 +32,27 @@ export interface UsageSummary {

 export type AdapterBillingType = "api" | "subscription" | "unknown";

+export interface AdapterRuntimeServiceReport {
+  id?: string | null;
+  projectId?: string | null;
+  projectWorkspaceId?: string | null;
+  issueId?: string | null;
+  scopeType?: "project_workspace" | "execution_workspace" | "run" | "agent";
+  scopeId?: string | null;
+  serviceName: string;
+  status?: "starting" | "running" | "stopped" | "failed";
+  lifecycle?: "shared" | "ephemeral";
+  reuseKey?: string | null;
+  command?: string | null;
+  cwd?: string | null;
+  port?: number | null;
+  url?: string | null;
+  providerRef?: string | null;
+  ownerAgentId?: string | null;
+  stopPolicy?: Record<string, unknown> | null;
+  healthStatus?: "unknown" | "healthy" | "unhealthy";
+}
+
 export interface AdapterExecutionResult {
   exitCode: number | null;
   signal: string | null;
@@ -51,8 +72,17 @@ export interface AdapterExecutionResult {
   billingType?: AdapterBillingType | null;
   costUsd?: number | null;
   resultJson?: Record<string, unknown> | null;
+  runtimeServices?: AdapterRuntimeServiceReport[];
   summary?: string | null;
   clearSession?: boolean;
+  question?: {
+    prompt: string;
+    choices: Array<{
+      key: string;
+      label: string;
+      description?: string;
+    }>;
+  } | null;
 }

 export interface AdapterSessionCodec {
@@ -167,7 +197,7 @@ export type TranscriptEntry =
   | { kind: "assistant"; ts: string; text: string; delta?: boolean }
   | { kind: "thinking"; ts: string; text: string; delta?: boolean }
   | { kind: "user"; ts: string; text: string }
-  | { kind: "tool_call"; ts: string; name: string; input: unknown }
+  | { kind: "tool_call"; ts: string; name: string; input: unknown; toolUseId?: string }
   | { kind: "tool_result"; ts: string; toolUseId: string; content: string; isError: boolean }
   | { kind: "init"; ts: string; model: string; sessionId: string }
   | { kind: "result"; ts: string; text: string; inputTokens: number; outputTokens: number; cachedTokens: number; costUsd: number; subtype: string; isError: boolean; errors: string[] }
@@ -208,6 +238,12 @@ export interface CreateConfigValues {
   envBindings: Record<string, unknown>;
   url: string;
   bootstrapPrompt: string;
+  payloadTemplateJson?: string;
+  workspaceStrategyType?: string;
+  workspaceBaseRef?: string;
+  workspaceBranchTemplate?: string;
+  worktreeParentDir?: string;
+  runtimeServicesJson?: string;
   maxTurnsPerRun: number;
   heartbeatEnabled: boolean;
   intervalSec: number;
@@ -1,5 +1,16 @@
 # @paperclipai/adapter-claude-local

+## 0.3.0
+
+### Minor Changes
+
+- Stable release preparation for 0.3.0
+
+### Patch Changes
+
+- Updated dependencies
+  - @paperclipai/adapter-utils@0.3.0
+
 ## 0.2.7

 ### Patch Changes
@@ -1,6 +1,6 @@
 {
   "name": "@paperclipai/adapter-claude-local",
-  "version": "0.2.7",
+  "version": "0.3.0",
   "type": "module",
   "exports": {
     ".": "./src/index.ts",
@@ -25,8 +25,13 @@ Core fields:
|
||||
- command (string, optional): defaults to "claude"
|
||||
- extraArgs (string[], optional): additional CLI args
|
||||
- env (object, optional): KEY=VALUE environment variables
|
||||
- workspaceStrategy (object, optional): execution workspace strategy; currently supports { type: "git_worktree", baseRef?, branchTemplate?, worktreeParentDir? }
|
||||
- workspaceRuntime (object, optional): workspace runtime service intents; local host-managed services are realized before Claude starts and exposed back via context/env
|
||||
|
||||
Operational fields:
|
||||
- timeoutSec (number, optional): run timeout in seconds
|
||||
- graceSec (number, optional): SIGTERM grace period in seconds
|
||||
|
||||
Notes:
|
||||
- When Paperclip realizes a workspace/runtime for a run, it injects PAPERCLIP_WORKSPACE_* and PAPERCLIP_RUNTIME_* env vars for agent-side tooling.
|
||||
`;
|
||||
|
||||
@@ -115,14 +115,28 @@ async function buildClaudeRuntimeConfig(input: ClaudeExecutionInput): Promise<Cl
|
||||
const workspaceContext = parseObject(context.paperclipWorkspace);
|
||||
const workspaceCwd = asString(workspaceContext.cwd, "");
|
||||
const workspaceSource = asString(workspaceContext.source, "");
|
||||
const workspaceStrategy = asString(workspaceContext.strategy, "");
|
||||
const workspaceId = asString(workspaceContext.workspaceId, "") || null;
|
||||
const workspaceRepoUrl = asString(workspaceContext.repoUrl, "") || null;
|
||||
const workspaceRepoRef = asString(workspaceContext.repoRef, "") || null;
|
||||
const workspaceBranch = asString(workspaceContext.branchName, "") || null;
|
||||
const workspaceWorktreePath = asString(workspaceContext.worktreePath, "") || null;
|
||||
const workspaceHints = Array.isArray(context.paperclipWorkspaces)
|
||||
? context.paperclipWorkspaces.filter(
|
||||
(value): value is Record<string, unknown> => typeof value === "object" && value !== null,
|
||||
)
|
||||
: [];
|
||||
const runtimeServiceIntents = Array.isArray(context.paperclipRuntimeServiceIntents)
|
||||
? context.paperclipRuntimeServiceIntents.filter(
|
||||
(value): value is Record<string, unknown> => typeof value === "object" && value !== null,
|
||||
)
|
||||
: [];
|
||||
const runtimeServices = Array.isArray(context.paperclipRuntimeServices)
|
||||
? context.paperclipRuntimeServices.filter(
|
||||
(value): value is Record<string, unknown> => typeof value === "object" && value !== null,
|
||||
)
|
||||
: [];
|
||||
const runtimePrimaryUrl = asString(context.paperclipRuntimePrimaryUrl, "");
|
||||
const configuredCwd = asString(config.cwd, "");
|
||||
const useConfiguredInsteadOfAgentHome = workspaceSource === "agent_home" && configuredCwd.length > 0;
|
||||
const effectiveWorkspaceCwd = useConfiguredInsteadOfAgentHome ? "" : workspaceCwd;
|
||||
@@ -183,6 +197,9 @@ async function buildClaudeRuntimeConfig(input: ClaudeExecutionInput): Promise<Cl
|
||||
if (workspaceSource) {
|
||||
env.PAPERCLIP_WORKSPACE_SOURCE = workspaceSource;
|
||||
}
|
||||
if (workspaceStrategy) {
|
||||
env.PAPERCLIP_WORKSPACE_STRATEGY = workspaceStrategy;
|
||||
}
|
||||
if (workspaceId) {
|
||||
env.PAPERCLIP_WORKSPACE_ID = workspaceId;
|
||||
}
|
||||
@@ -192,9 +209,24 @@ async function buildClaudeRuntimeConfig(input: ClaudeExecutionInput): Promise<Cl
|
||||
if (workspaceRepoRef) {
|
||||
env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
|
||||
}
|
||||
if (workspaceBranch) {
|
||||
env.PAPERCLIP_WORKSPACE_BRANCH = workspaceBranch;
|
||||
}
|
||||
if (workspaceWorktreePath) {
|
||||
env.PAPERCLIP_WORKSPACE_WORKTREE_PATH = workspaceWorktreePath;
|
||||
}
|
||||
if (workspaceHints.length > 0) {
|
||||
env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
|
||||
}
|
||||
if (runtimeServiceIntents.length > 0) {
|
||||
env.PAPERCLIP_RUNTIME_SERVICE_INTENTS_JSON = JSON.stringify(runtimeServiceIntents);
|
||||
}
|
||||
if (runtimeServices.length > 0) {
|
||||
env.PAPERCLIP_RUNTIME_SERVICES_JSON = JSON.stringify(runtimeServices);
|
||||
}
|
||||
if (runtimePrimaryUrl) {
|
||||
env.PAPERCLIP_RUNTIME_PRIMARY_URL = runtimePrimaryUrl;
|
||||
}
|
||||
|
||||
for (const [key, value] of Object.entries(envConfig)) {
|
||||
if (typeof value === "string") env[key] = value;
|
||||
|
||||
@@ -50,6 +50,18 @@ function parseEnvBindings(bindings: unknown): Record<string, unknown> {
|
||||
return env;
|
||||
}
|
||||
|
||||
function parseJsonObject(text: string): Record<string, unknown> | null {
|
||||
const trimmed = text.trim();
|
||||
if (!trimmed) return null;
|
||||
try {
|
||||
const parsed = JSON.parse(trimmed);
|
||||
if (typeof parsed !== "object" || parsed === null || Array.isArray(parsed)) return null;
|
||||
return parsed as Record<string, unknown>;
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
export function buildClaudeLocalConfig(v: CreateConfigValues): Record<string, unknown> {
|
||||
const ac: Record<string, unknown> = {};
|
||||
if (v.cwd) ac.cwd = v.cwd;
|
||||
@@ -70,6 +82,18 @@ export function buildClaudeLocalConfig(v: CreateConfigValues): Record<string, un
|
||||
if (Object.keys(env).length > 0) ac.env = env;
|
||||
ac.maxTurnsPerRun = v.maxTurnsPerRun;
|
||||
ac.dangerouslySkipPermissions = v.dangerouslySkipPermissions;
|
||||
if (v.workspaceStrategyType === "git_worktree") {
|
||||
ac.workspaceStrategy = {
|
||||
type: "git_worktree",
|
||||
...(v.workspaceBaseRef ? { baseRef: v.workspaceBaseRef } : {}),
|
||||
...(v.workspaceBranchTemplate ? { branchTemplate: v.workspaceBranchTemplate } : {}),
|
||||
...(v.worktreeParentDir ? { worktreeParentDir: v.worktreeParentDir } : {}),
|
||||
};
|
||||
}
|
||||
const runtimeServices = parseJsonObject(v.runtimeServicesJson ?? "");
|
||||
if (runtimeServices && Array.isArray(runtimeServices.services)) {
|
||||
ac.workspaceRuntime = runtimeServices;
|
||||
}
|
||||
if (v.command) ac.command = v.command;
|
||||
if (v.extraArgs) ac.extraArgs = parseCommaArgs(v.extraArgs);
|
||||
return ac;
|
||||
|
||||
@@ -71,6 +71,12 @@ export function parseClaudeStdoutLine(line: string, ts: string): TranscriptEntry
|
||||
kind: "tool_call",
|
||||
ts,
|
||||
name: typeof block.name === "string" ? block.name : "unknown",
|
||||
toolUseId:
|
||||
typeof block.id === "string"
|
||||
? block.id
|
||||
: typeof block.tool_use_id === "string"
|
||||
? block.tool_use_id
|
||||
: undefined,
|
||||
input: block.input ?? {},
|
||||
});
|
||||
}
|
||||
|
||||
@@ -1,5 +1,16 @@
 # @paperclipai/adapter-codex-local

+## 0.3.0
+
+### Minor Changes
+
+- Stable release preparation for 0.3.0
+
+### Patch Changes
+
+- Updated dependencies
+  - @paperclipai/adapter-utils@0.3.0
+
 ## 0.2.7

 ### Patch Changes
@@ -1,6 +1,6 @@
 {
   "name": "@paperclipai/adapter-codex-local",
-  "version": "0.2.7",
+  "version": "0.3.0",
   "type": "module",
   "exports": {
     ".": "./src/index.ts",
@@ -31,6 +31,8 @@ Core fields:
|
||||
- command (string, optional): defaults to "codex"
|
||||
- extraArgs (string[], optional): additional CLI args
|
||||
- env (object, optional): KEY=VALUE environment variables
|
||||
- workspaceStrategy (object, optional): execution workspace strategy; currently supports { type: "git_worktree", baseRef?, branchTemplate?, worktreeParentDir? }
|
||||
- workspaceRuntime (object, optional): workspace runtime service intents; local host-managed services are realized before Codex starts and exposed back via context/env
|
||||
|
||||
Operational fields:
|
||||
- timeoutSec (number, optional): run timeout in seconds
|
||||
@@ -40,4 +42,5 @@ Notes:
|
||||
- Prompts are piped via stdin (Codex receives "-" prompt argument).
|
||||
- Paperclip auto-injects local skills into Codex personal skills dir ("$CODEX_HOME/skills" or "~/.codex/skills") when missing, so Codex can discover "$paperclip" and related skills.
|
||||
- Some model/tool combinations reject certain effort levels (for example minimal with web search enabled).
|
||||
- When Paperclip realizes a workspace/runtime for a run, it injects PAPERCLIP_WORKSPACE_* and PAPERCLIP_RUNTIME_* env vars for agent-side tooling.
|
||||
`;
|
||||
|
||||
@@ -126,14 +126,28 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
|
||||
const workspaceContext = parseObject(context.paperclipWorkspace);
|
||||
const workspaceCwd = asString(workspaceContext.cwd, "");
|
||||
const workspaceSource = asString(workspaceContext.source, "");
|
||||
const workspaceStrategy = asString(workspaceContext.strategy, "");
|
||||
const workspaceId = asString(workspaceContext.workspaceId, "");
|
||||
const workspaceRepoUrl = asString(workspaceContext.repoUrl, "");
|
||||
const workspaceRepoRef = asString(workspaceContext.repoRef, "");
|
||||
const workspaceBranch = asString(workspaceContext.branchName, "");
|
||||
const workspaceWorktreePath = asString(workspaceContext.worktreePath, "");
|
||||
const workspaceHints = Array.isArray(context.paperclipWorkspaces)
|
||||
? context.paperclipWorkspaces.filter(
|
||||
(value): value is Record<string, unknown> => typeof value === "object" && value !== null,
|
||||
)
|
||||
: [];
|
||||
const runtimeServiceIntents = Array.isArray(context.paperclipRuntimeServiceIntents)
|
||||
? context.paperclipRuntimeServiceIntents.filter(
|
||||
(value): value is Record<string, unknown> => typeof value === "object" && value !== null,
|
||||
)
|
||||
: [];
|
||||
const runtimeServices = Array.isArray(context.paperclipRuntimeServices)
|
||||
? context.paperclipRuntimeServices.filter(
|
||||
(value): value is Record<string, unknown> => typeof value === "object" && value !== null,
|
||||
)
|
||||
: [];
|
||||
const runtimePrimaryUrl = asString(context.paperclipRuntimePrimaryUrl, "");
|
||||
const configuredCwd = asString(config.cwd, "");
|
||||
const useConfiguredInsteadOfAgentHome = workspaceSource === "agent_home" && configuredCwd.length > 0;
|
||||
const effectiveWorkspaceCwd = useConfiguredInsteadOfAgentHome ? "" : workspaceCwd;
|
||||
@@ -192,6 +206,9 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
|
||||
if (workspaceSource) {
|
||||
env.PAPERCLIP_WORKSPACE_SOURCE = workspaceSource;
|
||||
}
|
||||
if (workspaceStrategy) {
|
||||
env.PAPERCLIP_WORKSPACE_STRATEGY = workspaceStrategy;
|
||||
}
|
||||
if (workspaceId) {
|
||||
env.PAPERCLIP_WORKSPACE_ID = workspaceId;
|
||||
}
|
||||
@@ -201,9 +218,24 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
|
||||
if (workspaceRepoRef) {
|
||||
env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
|
||||
}
|
||||
if (workspaceBranch) {
|
||||
env.PAPERCLIP_WORKSPACE_BRANCH = workspaceBranch;
|
||||
}
|
||||
if (workspaceWorktreePath) {
|
||||
env.PAPERCLIP_WORKSPACE_WORKTREE_PATH = workspaceWorktreePath;
|
||||
}
|
||||
if (workspaceHints.length > 0) {
|
||||
env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);
|
||||
}
|
||||
if (runtimeServiceIntents.length > 0) {
|
||||
env.PAPERCLIP_RUNTIME_SERVICE_INTENTS_JSON = JSON.stringify(runtimeServiceIntents);
|
||||
}
|
||||
if (runtimeServices.length > 0) {
|
||||
env.PAPERCLIP_RUNTIME_SERVICES_JSON = JSON.stringify(runtimeServices);
|
||||
}
|
||||
if (runtimePrimaryUrl) {
|
||||
env.PAPERCLIP_RUNTIME_PRIMARY_URL = runtimePrimaryUrl;
|
||||
}
|
||||
for (const [k, v] of Object.entries(envConfig)) {
|
||||
if (typeof v === "string") env[k] = v;
|
||||
}
|
||||
|
||||
@@ -54,6 +54,18 @@ function parseEnvBindings(bindings: unknown): Record<string, unknown> {
|
||||
return env;
|
||||
}
|
||||
|
||||
function parseJsonObject(text: string): Record<string, unknown> | null {
|
||||
const trimmed = text.trim();
|
||||
if (!trimmed) return null;
|
||||
try {
|
||||
const parsed = JSON.parse(trimmed);
|
||||
if (typeof parsed !== "object" || parsed === null || Array.isArray(parsed)) return null;
|
||||
return parsed as Record<string, unknown>;
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
export function buildCodexLocalConfig(v: CreateConfigValues): Record<string, unknown> {
|
||||
const ac: Record<string, unknown> = {};
|
||||
if (v.cwd) ac.cwd = v.cwd;
|
||||
@@ -76,6 +88,18 @@ export function buildCodexLocalConfig(v: CreateConfigValues): Record<string, unk
|
||||
typeof v.dangerouslyBypassSandbox === "boolean"
|
||||
? v.dangerouslyBypassSandbox
|
||||
: DEFAULT_CODEX_LOCAL_BYPASS_APPROVALS_AND_SANDBOX;
|
||||
if (v.workspaceStrategyType === "git_worktree") {
|
||||
ac.workspaceStrategy = {
|
||||
type: "git_worktree",
|
||||
...(v.workspaceBaseRef ? { baseRef: v.workspaceBaseRef } : {}),
|
||||
...(v.workspaceBranchTemplate ? { branchTemplate: v.workspaceBranchTemplate } : {}),
|
||||
...(v.worktreeParentDir ? { worktreeParentDir: v.worktreeParentDir } : {}),
|
||||
};
|
||||
}
|
||||
const runtimeServices = parseJsonObject(v.runtimeServicesJson ?? "");
|
||||
if (runtimeServices && Array.isArray(runtimeServices.services)) {
|
||||
ac.workspaceRuntime = runtimeServices;
|
||||
}
|
||||
if (v.command) ac.command = v.command;
|
||||
if (v.extraArgs) ac.extraArgs = parseCommaArgs(v.extraArgs);
|
||||
return ac;
|
||||
|
||||
@@ -1,4 +1,8 @@
-import type { TranscriptEntry } from "@paperclipai/adapter-utils";
+import {
+  redactHomePathUserSegments,
+  redactHomePathUserSegmentsInValue,
+  type TranscriptEntry,
+} from "@paperclipai/adapter-utils";

 function safeJsonParse(text: string): unknown {
   try {
@@ -39,12 +43,12 @@ function errorText(value: unknown): string {
|
||||
}
|
||||
|
||||
function stringifyUnknown(value: unknown): string {
|
||||
if (typeof value === "string") return value;
|
||||
if (typeof value === "string") return redactHomePathUserSegments(value);
|
||||
if (value === null || value === undefined) return "";
|
||||
try {
|
||||
return JSON.stringify(value, null, 2);
|
||||
return JSON.stringify(redactHomePathUserSegmentsInValue(value), null, 2);
|
||||
} catch {
|
||||
return String(value);
|
||||
return redactHomePathUserSegments(String(value));
|
||||
}
|
||||
}
|
||||
|
||||
@@ -57,22 +61,24 @@ function parseCommandExecutionItem(
|
||||
const command = asString(item.command);
|
||||
const status = asString(item.status);
|
||||
const exitCode = typeof item.exit_code === "number" && Number.isFinite(item.exit_code) ? item.exit_code : null;
|
||||
const output = asString(item.aggregated_output).replace(/\s+$/, "");
|
||||
const safeCommand = redactHomePathUserSegments(command);
|
||||
const output = redactHomePathUserSegments(asString(item.aggregated_output)).replace(/\s+$/, "");
|
||||
|
||||
if (phase === "started") {
|
||||
return [{
|
||||
kind: "tool_call",
|
||||
ts,
|
||||
name: "command_execution",
|
||||
toolUseId: id || command || "command_execution",
|
||||
input: {
|
||||
id,
|
||||
command,
|
||||
command: safeCommand,
|
||||
},
|
||||
}];
|
||||
}
|
||||
|
||||
const lines: string[] = [];
|
||||
if (command) lines.push(`command: ${command}`);
|
||||
if (safeCommand) lines.push(`command: ${safeCommand}`);
|
||||
if (status) lines.push(`status: ${status}`);
|
||||
if (exitCode !== null) lines.push(`exit_code: ${exitCode}`);
|
||||
if (output) {
|
||||
@@ -103,7 +109,7 @@ function parseFileChangeItem(item: Record<string, unknown>, ts: string): Transcr
|
||||
.filter((change): change is Record<string, unknown> => Boolean(change))
|
||||
.map((change) => {
|
||||
const kind = asString(change.kind, "update");
|
||||
const path = asString(change.path, "unknown");
|
||||
const path = redactHomePathUserSegments(asString(change.path, "unknown"));
|
||||
return `${kind} ${path}`;
|
||||
});
|
||||
|
||||
@@ -125,13 +131,13 @@ function parseCodexItem(
|
||||
|
||||
if (itemType === "agent_message") {
|
||||
const text = asString(item.text);
|
||||
if (text) return [{ kind: "assistant", ts, text }];
|
||||
if (text) return [{ kind: "assistant", ts, text: redactHomePathUserSegments(text) }];
|
||||
return [];
|
||||
}
|
||||
|
||||
if (itemType === "reasoning") {
|
||||
const text = asString(item.text);
|
||||
if (text) return [{ kind: "thinking", ts, text }];
|
||||
if (text) return [{ kind: "thinking", ts, text: redactHomePathUserSegments(text) }];
|
||||
return [{ kind: "system", ts, text: phase === "started" ? "reasoning started" : "reasoning completed" }];
|
||||
}
|
||||
|
||||
@@ -147,8 +153,9 @@ function parseCodexItem(
|
||||
return [{
|
||||
kind: "tool_call",
|
||||
ts,
|
||||
name: asString(item.name, "unknown"),
|
||||
input: item.input ?? {},
|
||||
name: redactHomePathUserSegments(asString(item.name, "unknown")),
|
||||
toolUseId: asString(item.id),
|
||||
input: redactHomePathUserSegmentsInValue(item.input ?? {}),
|
||||
}];
|
||||
}
|
||||
|
||||
@@ -160,24 +167,28 @@ function parseCodexItem(
|
||||
asString(item.result) ||
|
||||
stringifyUnknown(item.content ?? item.output ?? item.result);
|
||||
const isError = item.is_error === true || asString(item.status) === "error";
|
||||
return [{ kind: "tool_result", ts, toolUseId, content, isError }];
|
||||
return [{ kind: "tool_result", ts, toolUseId, content: redactHomePathUserSegments(content), isError }];
|
||||
}
|
||||
|
||||
if (itemType === "error" && phase === "completed") {
|
||||
const text = errorText(item.message ?? item.error ?? item);
|
||||
return [{ kind: "stderr", ts, text: text || "error" }];
|
||||
return [{ kind: "stderr", ts, text: redactHomePathUserSegments(text || "error") }];
|
||||
}
|
||||
|
||||
const id = asString(item.id);
|
||||
const status = asString(item.status);
|
||||
const meta = [id ? `id=${id}` : "", status ? `status=${status}` : ""].filter(Boolean).join(" ");
|
||||
return [{ kind: "system", ts, text: `item ${phase}: ${itemType || "unknown"}${meta ? ` (${meta})` : ""}` }];
|
||||
return [{
|
||||
kind: "system",
|
||||
ts,
|
||||
text: redactHomePathUserSegments(`item ${phase}: ${itemType || "unknown"}${meta ? ` (${meta})` : ""}`),
|
||||
}];
|
||||
}
|
||||
|
||||
export function parseCodexStdoutLine(line: string, ts: string): TranscriptEntry[] {
|
||||
const parsed = asRecord(safeJsonParse(line));
|
||||
if (!parsed) {
|
||||
return [{ kind: "stdout", ts, text: line }];
|
||||
return [{ kind: "stdout", ts, text: redactHomePathUserSegments(line) }];
|
||||
}
|
||||
|
||||
const type = asString(parsed.type);
|
||||
@@ -187,8 +198,8 @@ export function parseCodexStdoutLine(line: string, ts: string): TranscriptEntry[
|
||||
return [{
|
||||
kind: "init",
|
||||
ts,
|
||||
model: asString(parsed.model, "codex"),
|
||||
sessionId: threadId,
|
||||
model: redactHomePathUserSegments(asString(parsed.model, "codex")),
|
||||
sessionId: redactHomePathUserSegments(threadId),
|
||||
}];
|
||||
}
|
||||
|
||||
@@ -210,15 +221,15 @@ export function parseCodexStdoutLine(line: string, ts: string): TranscriptEntry[
|
||||
return [{
|
||||
kind: "result",
|
||||
ts,
|
||||
text: asString(parsed.result),
|
||||
text: redactHomePathUserSegments(asString(parsed.result)),
|
||||
inputTokens,
|
||||
outputTokens,
|
||||
cachedTokens,
|
||||
costUsd: asNumber(parsed.total_cost_usd),
|
||||
subtype: asString(parsed.subtype),
|
||||
subtype: redactHomePathUserSegments(asString(parsed.subtype)),
|
||||
isError: parsed.is_error === true,
|
||||
errors: Array.isArray(parsed.errors)
|
||||
? parsed.errors.map(errorText).filter(Boolean)
|
||||
? parsed.errors.map(errorText).map(redactHomePathUserSegments).filter(Boolean)
|
||||
: [],
|
||||
}];
|
||||
}
|
||||
@@ -232,21 +243,21 @@ export function parseCodexStdoutLine(line: string, ts: string): TranscriptEntry[
|
||||
return [{
|
||||
kind: "result",
|
||||
ts,
|
||||
text: asString(parsed.result),
|
||||
text: redactHomePathUserSegments(asString(parsed.result)),
|
||||
inputTokens,
|
||||
outputTokens,
|
||||
cachedTokens,
|
||||
costUsd: asNumber(parsed.total_cost_usd),
|
||||
subtype: asString(parsed.subtype, "turn.failed"),
|
||||
subtype: redactHomePathUserSegments(asString(parsed.subtype, "turn.failed")),
|
||||
isError: true,
|
||||
errors: message ? [message] : [],
|
||||
errors: message ? [redactHomePathUserSegments(message)] : [],
|
||||
}];
|
||||
}
|
||||
|
||||
if (type === "error") {
|
||||
const message = errorText(parsed.message ?? parsed.error ?? parsed);
|
||||
return [{ kind: "stderr", ts, text: message || line }];
|
||||
return [{ kind: "stderr", ts, text: redactHomePathUserSegments(message || line) }];
|
||||
}
|
||||
|
||||
return [{ kind: "stdout", ts, text: line }];
|
||||
return [{ kind: "stdout", ts, text: redactHomePathUserSegments(line) }];
|
||||
}
|
||||
|
||||
@@ -1,5 +1,16 @@
 # @paperclipai/adapter-cursor-local

+## 0.3.0
+
+### Minor Changes
+
+- Stable release preparation for 0.3.0
+
+### Patch Changes
+
+- Updated dependencies
+  - @paperclipai/adapter-utils@0.3.0
+
 ## 0.2.7

 ### Patch Changes
@@ -1,6 +1,6 @@
 {
   "name": "@paperclipai/adapter-cursor-local",
-  "version": "0.2.7",
+  "version": "0.3.0",
   "type": "module",
   "exports": {
     ".": "./src/index.ts",
@@ -142,6 +142,12 @@ function parseAssistantMessage(messageRaw: unknown, ts: string): TranscriptEntry
|
||||
kind: "tool_call",
|
||||
ts,
|
||||
name,
|
||||
toolUseId:
|
||||
asString(part.tool_use_id) ||
|
||||
asString(part.toolUseId) ||
|
||||
asString(part.call_id) ||
|
||||
asString(part.id) ||
|
||||
undefined,
|
||||
input,
|
||||
});
|
||||
continue;
|
||||
@@ -199,6 +205,7 @@ function parseCursorToolCallEvent(event: Record<string, unknown>, ts: string): T
|
||||
kind: "tool_call",
|
||||
ts,
|
||||
name: toolName,
|
||||
toolUseId: callId,
|
||||
input,
|
||||
}];
|
||||
}
|
||||
|
||||
51
packages/adapters/gemini-local/package.json
Normal file
@@ -0,0 +1,51 @@
|
||||
{
|
||||
"name": "@paperclipai/adapter-gemini-local",
|
||||
"version": "0.2.7",
|
||||
"type": "module",
|
||||
"exports": {
|
||||
".": "./src/index.ts",
|
||||
"./server": "./src/server/index.ts",
|
||||
"./ui": "./src/ui/index.ts",
|
||||
"./cli": "./src/cli/index.ts"
|
||||
},
|
||||
"publishConfig": {
|
||||
"access": "public",
|
||||
"exports": {
|
||||
".": {
|
||||
"types": "./dist/index.d.ts",
|
||||
"import": "./dist/index.js"
|
||||
},
|
||||
"./server": {
|
||||
"types": "./dist/server/index.d.ts",
|
||||
"import": "./dist/server/index.js"
|
||||
},
|
||||
"./ui": {
|
||||
"types": "./dist/ui/index.d.ts",
|
||||
"import": "./dist/ui/index.js"
|
||||
},
|
||||
"./cli": {
|
||||
"types": "./dist/cli/index.d.ts",
|
||||
"import": "./dist/cli/index.js"
|
||||
}
|
||||
},
|
||||
"main": "./dist/index.js",
|
||||
"types": "./dist/index.d.ts"
|
||||
},
|
||||
"files": [
|
||||
"dist",
|
||||
"skills"
|
||||
],
|
||||
"scripts": {
|
||||
"build": "tsc",
|
||||
"clean": "rm -rf dist",
|
||||
"typecheck": "tsc --noEmit"
|
||||
},
|
||||
"dependencies": {
|
||||
"@paperclipai/adapter-utils": "workspace:*",
|
||||
"picocolors": "^1.1.1"
|
||||
},
|
||||
"devDependencies": {
|
||||
"@types/node": "^24.6.0",
|
||||
"typescript": "^5.7.3"
|
||||
}
|
||||
}
|
||||
208
packages/adapters/gemini-local/src/cli/format-event.ts
Normal file
@@ -0,0 +1,208 @@
|
||||
import pc from "picocolors";
|
||||
|
||||
function asRecord(value: unknown): Record<string, unknown> | null {
|
||||
if (typeof value !== "object" || value === null || Array.isArray(value)) return null;
|
||||
return value as Record<string, unknown>;
|
||||
}
|
||||
|
||||
function asString(value: unknown, fallback = ""): string {
|
||||
return typeof value === "string" ? value : fallback;
|
||||
}
|
||||
|
||||
function asNumber(value: unknown, fallback = 0): number {
|
||||
return typeof value === "number" && Number.isFinite(value) ? value : fallback;
|
||||
}
|
||||
|
||||
function stringifyUnknown(value: unknown): string {
|
||||
if (typeof value === "string") return value;
|
||||
if (value === null || value === undefined) return "";
|
||||
try {
|
||||
return JSON.stringify(value, null, 2);
|
||||
} catch {
|
||||
return String(value);
|
||||
}
|
||||
}
|
||||
|
||||
function errorText(value: unknown): string {
|
||||
if (typeof value === "string") return value;
|
||||
const rec = asRecord(value);
|
||||
if (!rec) return "";
|
||||
const msg =
|
||||
(typeof rec.message === "string" && rec.message) ||
|
||||
(typeof rec.error === "string" && rec.error) ||
|
||||
(typeof rec.code === "string" && rec.code) ||
|
||||
"";
|
||||
if (msg) return msg;
|
||||
try {
|
||||
return JSON.stringify(rec);
|
||||
} catch {
|
||||
return "";
|
||||
}
|
||||
}
|
||||
|
||||
function printTextMessage(prefix: string, colorize: (text: string) => string, messageRaw: unknown): void {
|
||||
  if (typeof messageRaw === "string") {
    const text = messageRaw.trim();
    if (text) console.log(colorize(`${prefix}: ${text}`));
    return;
  }

  const message = asRecord(messageRaw);
  if (!message) return;

  const directText = asString(message.text).trim();
  if (directText) console.log(colorize(`${prefix}: ${directText}`));

  const content = Array.isArray(message.content) ? message.content : [];
  for (const partRaw of content) {
    const part = asRecord(partRaw);
    if (!part) continue;
    const type = asString(part.type).trim();

    if (type === "output_text" || type === "text" || type === "content") {
      const text = asString(part.text).trim() || asString(part.content).trim();
      if (text) console.log(colorize(`${prefix}: ${text}`));
      continue;
    }

    if (type === "thinking") {
      const text = asString(part.text).trim();
      if (text) console.log(pc.gray(`thinking: ${text}`));
      continue;
    }

    if (type === "tool_call") {
      const name = asString(part.name, asString(part.tool, "tool"));
      console.log(pc.yellow(`tool_call: ${name}`));
      const input = part.input ?? part.arguments ?? part.args;
      if (input !== undefined) console.log(pc.gray(stringifyUnknown(input)));
      continue;
    }

    if (type === "tool_result" || type === "tool_response") {
      const isError = part.is_error === true || asString(part.status).toLowerCase() === "error";
      const contentText =
        asString(part.output) ||
        asString(part.text) ||
        asString(part.result) ||
        stringifyUnknown(part.output ?? part.result ?? part.text ?? part.response);
      console.log((isError ? pc.red : pc.cyan)(`tool_result${isError ? " (error)" : ""}`));
      if (contentText) console.log((isError ? pc.red : pc.gray)(contentText));
    }
  }
}

function printUsage(parsed: Record<string, unknown>) {
  const usage = asRecord(parsed.usage) ?? asRecord(parsed.usageMetadata);
  const usageMetadata = asRecord(usage?.usageMetadata);
  const source = usageMetadata ?? usage ?? {};
  const input = asNumber(source.input_tokens, asNumber(source.inputTokens, asNumber(source.promptTokenCount)));
  const output = asNumber(source.output_tokens, asNumber(source.outputTokens, asNumber(source.candidatesTokenCount)));
  const cached = asNumber(
    source.cached_input_tokens,
    asNumber(source.cachedInputTokens, asNumber(source.cachedContentTokenCount)),
  );
  const cost = asNumber(parsed.total_cost_usd, asNumber(parsed.cost_usd, asNumber(parsed.cost)));
  console.log(pc.blue(`tokens: in=${input} out=${output} cached=${cached} cost=$${cost.toFixed(6)}`));
}

export function printGeminiStreamEvent(raw: string, _debug: boolean): void {
  const line = raw.trim();
  if (!line) return;

  let parsed: Record<string, unknown> | null = null;
  try {
    parsed = JSON.parse(line) as Record<string, unknown>;
  } catch {
    console.log(line);
    return;
  }

  const type = asString(parsed.type);

  if (type === "system") {
    const subtype = asString(parsed.subtype);
    if (subtype === "init") {
      const sessionId =
        asString(parsed.session_id) ||
        asString(parsed.sessionId) ||
        asString(parsed.sessionID) ||
        asString(parsed.checkpoint_id);
      const model = asString(parsed.model);
      const details = [sessionId ? `session: ${sessionId}` : "", model ? `model: ${model}` : ""]
        .filter(Boolean)
        .join(", ");
      console.log(pc.blue(`Gemini init${details ? ` (${details})` : ""}`));
      return;
    }
    if (subtype === "error") {
      const text = errorText(parsed.error ?? parsed.message ?? parsed.detail);
      if (text) console.log(pc.red(`error: ${text}`));
      return;
    }
    console.log(pc.blue(`system: ${subtype || "event"}`));
    return;
  }

  if (type === "assistant") {
    printTextMessage("assistant", pc.green, parsed.message);
    return;
  }

  if (type === "user") {
    printTextMessage("user", pc.gray, parsed.message);
    return;
  }

  if (type === "thinking") {
    const text = asString(parsed.text).trim() || asString(asRecord(parsed.delta)?.text).trim();
    if (text) console.log(pc.gray(`thinking: ${text}`));
    return;
  }

  if (type === "tool_call") {
    const subtype = asString(parsed.subtype).trim().toLowerCase();
    const toolCall = asRecord(parsed.tool_call ?? parsed.toolCall);
    const [toolName] = toolCall ? Object.keys(toolCall) : [];
    if (!toolCall || !toolName) {
      console.log(pc.yellow(`tool_call${subtype ? `: ${subtype}` : ""}`));
      return;
    }
    const payload = asRecord(toolCall[toolName]) ?? {};
    if (subtype === "started" || subtype === "start") {
      console.log(pc.yellow(`tool_call: ${toolName}`));
      console.log(pc.gray(stringifyUnknown(payload.args ?? payload.input ?? payload.arguments ?? payload)));
      return;
    }
    if (subtype === "completed" || subtype === "complete" || subtype === "finished") {
      const isError =
        parsed.is_error === true ||
        payload.is_error === true ||
        payload.error !== undefined ||
        asString(payload.status).toLowerCase() === "error";
      console.log((isError ? pc.red : pc.cyan)(`tool_result${isError ? " (error)" : ""}`));
      console.log((isError ? pc.red : pc.gray)(stringifyUnknown(payload.result ?? payload.output ?? payload.error)));
      return;
    }
    console.log(pc.yellow(`tool_call: ${toolName}${subtype ? ` (${subtype})` : ""}`));
    return;
  }

  if (type === "result") {
    printUsage(parsed);
    const subtype = asString(parsed.subtype, "result");
    const isError = parsed.is_error === true;
    if (subtype || isError) {
      console.log((isError ? pc.red : pc.blue)(`result: subtype=${subtype} is_error=${isError ? "true" : "false"}`));
    }
    return;
  }

  if (type === "error") {
    const text = errorText(parsed.error ?? parsed.message ?? parsed.detail);
    if (text) console.log(pc.red(`error: ${text}`));
    return;
  }

  console.log(line);
}
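A minimal wiring sketch for this printer, assuming the caller streams the Gemini child process stdout line by line (the spawn arguments here are illustrative, not the adapter's actual invocation):

// Hypothetical caller: feed each stdout line to the printer.
import { spawn } from "node:child_process";
import { createInterface } from "node:readline";
import { printGeminiStreamEvent } from "./format-event.js";

const child = spawn("gemini", ["--output-format", "stream-json", "Respond with hello."]);
createInterface({ input: child.stdout }).on("line", (line) => {
  printGeminiStreamEvent(line, false);
});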
packages/adapters/gemini-local/src/cli/index.ts (new file, 1 line)
@@ -0,0 +1 @@
export { printGeminiStreamEvent } from "./format-event.js";
packages/adapters/gemini-local/src/index.ts (new file, 47 lines)
@@ -0,0 +1,47 @@
export const type = "gemini_local";
export const label = "Gemini CLI (local)";
export const DEFAULT_GEMINI_LOCAL_MODEL = "auto";

export const models = [
  { id: DEFAULT_GEMINI_LOCAL_MODEL, label: "Auto" },
  { id: "gemini-2.5-pro", label: "Gemini 2.5 Pro" },
  { id: "gemini-2.5-flash", label: "Gemini 2.5 Flash" },
  { id: "gemini-2.5-flash-lite", label: "Gemini 2.5 Flash Lite" },
  { id: "gemini-2.0-flash", label: "Gemini 2.0 Flash" },
  { id: "gemini-2.0-flash-lite", label: "Gemini 2.0 Flash Lite" },
];

export const agentConfigurationDoc = `# gemini_local agent configuration

Adapter: gemini_local

Use when:
- You want Paperclip to run the Gemini CLI locally on the host machine
- You want Gemini chat sessions resumed across heartbeats with --resume
- You want Paperclip skills injected locally without polluting the global environment

Don't use when:
- You need webhook-style external invocation (use http or openclaw_gateway)
- You only need a one-shot script without an AI coding agent loop (use process)
- Gemini CLI is not installed on the machine that runs Paperclip

Core fields:
- cwd (string, optional): default absolute working directory fallback for the agent process (created if missing when possible)
- instructionsFilePath (string, optional): absolute path to a markdown instructions file prepended to the run prompt
- promptTemplate (string, optional): run prompt template
- model (string, optional): Gemini model id. Defaults to auto.
- sandbox (boolean, optional): run in sandbox mode (default: false, passes --sandbox=none)
- command (string, optional): defaults to "gemini"
- extraArgs (string[], optional): additional CLI args
- env (object, optional): KEY=VALUE environment variables

Operational fields:
- timeoutSec (number, optional): run timeout in seconds
- graceSec (number, optional): SIGTERM grace period in seconds

Notes:
- Runs use positional prompt arguments, not stdin.
- Sessions resume with --resume when the stored session cwd matches the current cwd.
- Paperclip auto-injects local skills into \`~/.gemini/skills/\` via symlinks, so the CLI can discover both credentials and skills in their natural location.
- Authentication can use GEMINI_API_KEY / GOOGLE_API_KEY or local Gemini CLI login.
`;
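An illustrative config object using the fields documented above (all values are hypothetical, not shipped defaults):

// Hypothetical gemini_local agent config sketch.
const exampleAgentConfig = {
  cwd: "/home/agent/workspace",
  instructionsFilePath: "/home/agent/workspace/AGENT.md",
  model: "gemini-2.5-flash",
  sandbox: false, // the adapter then passes --sandbox=none
  command: "gemini",
  env: { GEMINI_API_KEY: "..." }, // or rely on local `gemini` login
  timeoutSec: 0, // no timeout
  graceSec: 20,
};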
packages/adapters/gemini-local/src/server/execute.ts (new file, 436 lines)
@@ -0,0 +1,436 @@
import fs from "node:fs/promises";
import type { Dirent } from "node:fs";
import os from "node:os";
import path from "node:path";
import { fileURLToPath } from "node:url";
import type { AdapterExecutionContext, AdapterExecutionResult } from "@paperclipai/adapter-utils";
import {
  asBoolean,
  asNumber,
  asString,
  asStringArray,
  buildPaperclipEnv,
  ensureAbsoluteDirectory,
  ensureCommandResolvable,
  ensurePathInEnv,
  parseObject,
  redactEnvForLogs,
  renderTemplate,
  runChildProcess,
} from "@paperclipai/adapter-utils/server-utils";
import { DEFAULT_GEMINI_LOCAL_MODEL } from "../index.js";
import {
  describeGeminiFailure,
  detectGeminiAuthRequired,
  isGeminiTurnLimitResult,
  isGeminiUnknownSessionError,
  parseGeminiJsonl,
} from "./parse.js";
import { firstNonEmptyLine } from "./utils.js";

const __moduleDir = path.dirname(fileURLToPath(import.meta.url));
const PAPERCLIP_SKILLS_CANDIDATES = [
  path.resolve(__moduleDir, "../../skills"),
  path.resolve(__moduleDir, "../../../../../skills"),
];

function hasNonEmptyEnvValue(env: Record<string, string>, key: string): boolean {
  const raw = env[key];
  return typeof raw === "string" && raw.trim().length > 0;
}

function resolveGeminiBillingType(env: Record<string, string>): "api" | "subscription" {
  return hasNonEmptyEnvValue(env, "GEMINI_API_KEY") || hasNonEmptyEnvValue(env, "GOOGLE_API_KEY")
    ? "api"
    : "subscription";
}

function renderPaperclipEnvNote(env: Record<string, string>): string {
  const paperclipKeys = Object.keys(env)
    .filter((key) => key.startsWith("PAPERCLIP_"))
    .sort();
  if (paperclipKeys.length === 0) return "";
  return [
    "Paperclip runtime note:",
    `The following PAPERCLIP_* environment variables are available in this run: ${paperclipKeys.join(", ")}`,
    "Do not assume these variables are missing without checking your shell environment.",
    "",
    "",
  ].join("\n");
}

function renderApiAccessNote(env: Record<string, string>): string {
  if (!hasNonEmptyEnvValue(env, "PAPERCLIP_API_URL") || !hasNonEmptyEnvValue(env, "PAPERCLIP_API_KEY")) return "";
  return [
    "Paperclip API access note:",
    "Use run_shell_command with curl to make Paperclip API requests.",
    "GET example:",
    `  run_shell_command({ command: "curl -s -H \\"Authorization: Bearer $PAPERCLIP_API_KEY\\" \\"$PAPERCLIP_API_URL/api/agents/me\\"" })`,
    "POST/PATCH example:",
    `  run_shell_command({ command: "curl -s -X POST -H \\"Authorization: Bearer $PAPERCLIP_API_KEY\\" -H 'Content-Type: application/json' -H \\"X-Paperclip-Run-Id: $PAPERCLIP_RUN_ID\\" -d '{...}' \\"$PAPERCLIP_API_URL/api/issues/{id}/checkout\\"" })`,
    "",
    "",
  ].join("\n");
}

async function resolvePaperclipSkillsDir(): Promise<string | null> {
  for (const candidate of PAPERCLIP_SKILLS_CANDIDATES) {
    const isDir = await fs.stat(candidate).then((s) => s.isDirectory()).catch(() => false);
    if (isDir) return candidate;
  }
  return null;
}

function geminiSkillsHome(): string {
  return path.join(os.homedir(), ".gemini", "skills");
}

/**
 * Inject Paperclip skills directly into `~/.gemini/skills/` via symlinks.
 * This avoids needing GEMINI_CLI_HOME overrides, so the CLI naturally finds
 * both its auth credentials and the injected skills in the real home directory.
 */
async function ensureGeminiSkillsInjected(
  onLog: AdapterExecutionContext["onLog"],
): Promise<void> {
  const skillsDir = await resolvePaperclipSkillsDir();
  if (!skillsDir) return;

  const skillsHome = geminiSkillsHome();
  try {
    await fs.mkdir(skillsHome, { recursive: true });
  } catch (err) {
    await onLog(
      "stderr",
      `[paperclip] Failed to prepare Gemini skills directory ${skillsHome}: ${err instanceof Error ? err.message : String(err)}\n`,
    );
    return;
  }

  let entries: Dirent[];
  try {
    entries = await fs.readdir(skillsDir, { withFileTypes: true });
  } catch (err) {
    await onLog(
      "stderr",
      `[paperclip] Failed to read Paperclip skills from ${skillsDir}: ${err instanceof Error ? err.message : String(err)}\n`,
    );
    return;
  }

  for (const entry of entries) {
    if (!entry.isDirectory()) continue;
    const source = path.join(skillsDir, entry.name);
    const target = path.join(skillsHome, entry.name);
    const existing = await fs.lstat(target).catch(() => null);
    if (existing) continue;

    try {
      await fs.symlink(source, target);
      await onLog("stderr", `[paperclip] Linked Gemini skill: ${entry.name}\n`);
    } catch (err) {
      await onLog(
        "stderr",
        `[paperclip] Failed to link Gemini skill "${entry.name}": ${err instanceof Error ? err.message : String(err)}\n`,
      );
    }
  }
}

export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExecutionResult> {
  const { runId, agent, runtime, config, context, onLog, onMeta, authToken } = ctx;

  const promptTemplate = asString(
    config.promptTemplate,
    "You are agent {{agent.id}} ({{agent.name}}). Continue your Paperclip work.",
  );
  const command = asString(config.command, "gemini");
  const model = asString(config.model, DEFAULT_GEMINI_LOCAL_MODEL).trim();
  const sandbox = asBoolean(config.sandbox, false);

  const workspaceContext = parseObject(context.paperclipWorkspace);
  const workspaceCwd = asString(workspaceContext.cwd, "");
  const workspaceSource = asString(workspaceContext.source, "");
  const workspaceId = asString(workspaceContext.workspaceId, "");
  const workspaceRepoUrl = asString(workspaceContext.repoUrl, "");
  const workspaceRepoRef = asString(workspaceContext.repoRef, "");
  const workspaceHints = Array.isArray(context.paperclipWorkspaces)
    ? context.paperclipWorkspaces.filter(
        (value): value is Record<string, unknown> => typeof value === "object" && value !== null,
      )
    : [];
  const configuredCwd = asString(config.cwd, "");
  const useConfiguredInsteadOfAgentHome = workspaceSource === "agent_home" && configuredCwd.length > 0;
  const effectiveWorkspaceCwd = useConfiguredInsteadOfAgentHome ? "" : workspaceCwd;
  const cwd = effectiveWorkspaceCwd || configuredCwd || process.cwd();
  await ensureAbsoluteDirectory(cwd, { createIfMissing: true });
  await ensureGeminiSkillsInjected(onLog);

  const envConfig = parseObject(config.env);
  const hasExplicitApiKey =
    typeof envConfig.PAPERCLIP_API_KEY === "string" && envConfig.PAPERCLIP_API_KEY.trim().length > 0;
  const env: Record<string, string> = { ...buildPaperclipEnv(agent) };
  env.PAPERCLIP_RUN_ID = runId;
  const wakeTaskId =
    (typeof context.taskId === "string" && context.taskId.trim().length > 0 && context.taskId.trim()) ||
    (typeof context.issueId === "string" && context.issueId.trim().length > 0 && context.issueId.trim()) ||
    null;
  const wakeReason =
    typeof context.wakeReason === "string" && context.wakeReason.trim().length > 0
      ? context.wakeReason.trim()
      : null;
  const wakeCommentId =
    (typeof context.wakeCommentId === "string" && context.wakeCommentId.trim().length > 0 && context.wakeCommentId.trim()) ||
    (typeof context.commentId === "string" && context.commentId.trim().length > 0 && context.commentId.trim()) ||
    null;
  const approvalId =
    typeof context.approvalId === "string" && context.approvalId.trim().length > 0
      ? context.approvalId.trim()
      : null;
  const approvalStatus =
    typeof context.approvalStatus === "string" && context.approvalStatus.trim().length > 0
      ? context.approvalStatus.trim()
      : null;
  const linkedIssueIds = Array.isArray(context.issueIds)
    ? context.issueIds.filter((value): value is string => typeof value === "string" && value.trim().length > 0)
    : [];
  if (wakeTaskId) env.PAPERCLIP_TASK_ID = wakeTaskId;
  if (wakeReason) env.PAPERCLIP_WAKE_REASON = wakeReason;
  if (wakeCommentId) env.PAPERCLIP_WAKE_COMMENT_ID = wakeCommentId;
  if (approvalId) env.PAPERCLIP_APPROVAL_ID = approvalId;
  if (approvalStatus) env.PAPERCLIP_APPROVAL_STATUS = approvalStatus;
  if (linkedIssueIds.length > 0) env.PAPERCLIP_LINKED_ISSUE_IDS = linkedIssueIds.join(",");
  if (effectiveWorkspaceCwd) env.PAPERCLIP_WORKSPACE_CWD = effectiveWorkspaceCwd;
  if (workspaceSource) env.PAPERCLIP_WORKSPACE_SOURCE = workspaceSource;
  if (workspaceId) env.PAPERCLIP_WORKSPACE_ID = workspaceId;
  if (workspaceRepoUrl) env.PAPERCLIP_WORKSPACE_REPO_URL = workspaceRepoUrl;
  if (workspaceRepoRef) env.PAPERCLIP_WORKSPACE_REPO_REF = workspaceRepoRef;
  if (workspaceHints.length > 0) env.PAPERCLIP_WORKSPACES_JSON = JSON.stringify(workspaceHints);

  for (const [key, value] of Object.entries(envConfig)) {
    if (typeof value === "string") env[key] = value;
  }
  if (!hasExplicitApiKey && authToken) {
    env.PAPERCLIP_API_KEY = authToken;
  }
  const billingType = resolveGeminiBillingType(env);
  const runtimeEnv = ensurePathInEnv({ ...process.env, ...env });
  await ensureCommandResolvable(command, cwd, runtimeEnv);

  const timeoutSec = asNumber(config.timeoutSec, 0);
  const graceSec = asNumber(config.graceSec, 20);
  const extraArgs = (() => {
    const fromExtraArgs = asStringArray(config.extraArgs);
    if (fromExtraArgs.length > 0) return fromExtraArgs;
    return asStringArray(config.args);
  })();

  const runtimeSessionParams = parseObject(runtime.sessionParams);
  const runtimeSessionId = asString(runtimeSessionParams.sessionId, runtime.sessionId ?? "");
  const runtimeSessionCwd = asString(runtimeSessionParams.cwd, "");
  const canResumeSession =
    runtimeSessionId.length > 0 &&
    (runtimeSessionCwd.length === 0 || path.resolve(runtimeSessionCwd) === path.resolve(cwd));
  const sessionId = canResumeSession ? runtimeSessionId : null;
  if (runtimeSessionId && !canResumeSession) {
    await onLog(
      "stderr",
      `[paperclip] Gemini session "${runtimeSessionId}" was saved for cwd "${runtimeSessionCwd}" and will not be resumed in "${cwd}".\n`,
    );
  }

  const instructionsFilePath = asString(config.instructionsFilePath, "").trim();
  const instructionsDir = instructionsFilePath ? `${path.dirname(instructionsFilePath)}/` : "";
  let instructionsPrefix = "";
  if (instructionsFilePath) {
    try {
      const instructionsContents = await fs.readFile(instructionsFilePath, "utf8");
      instructionsPrefix =
        `${instructionsContents}\n\n` +
        `The above agent instructions were loaded from ${instructionsFilePath}. ` +
        `Resolve any relative file references from ${instructionsDir}.\n\n`;
      await onLog(
        "stderr",
        `[paperclip] Loaded agent instructions file: ${instructionsFilePath}\n`,
      );
    } catch (err) {
      const reason = err instanceof Error ? err.message : String(err);
      await onLog(
        "stderr",
        `[paperclip] Warning: could not read agent instructions file "${instructionsFilePath}": ${reason}\n`,
      );
    }
  }
  const commandNotes = (() => {
    const notes: string[] = ["Prompt is passed to Gemini as the final positional argument."];
    notes.push("Added --approval-mode yolo for unattended execution.");
    if (!instructionsFilePath) return notes;
    if (instructionsPrefix.length > 0) {
      notes.push(
        `Loaded agent instructions from ${instructionsFilePath}`,
        `Prepended instructions + path directive to prompt (relative references from ${instructionsDir}).`,
      );
      return notes;
    }
    notes.push(
      `Configured instructionsFilePath ${instructionsFilePath}, but file could not be read; continuing without injected instructions.`,
    );
    return notes;
  })();

  const renderedPrompt = renderTemplate(promptTemplate, {
    agentId: agent.id,
    companyId: agent.companyId,
    runId,
    company: { id: agent.companyId },
    agent,
    run: { id: runId, source: "on_demand" },
    context,
  });
  const paperclipEnvNote = renderPaperclipEnvNote(env);
  const apiAccessNote = renderApiAccessNote(env);
  const prompt = `${instructionsPrefix}${paperclipEnvNote}${apiAccessNote}${renderedPrompt}`;

  const buildArgs = (resumeSessionId: string | null) => {
    const args = ["--output-format", "stream-json"];
    if (resumeSessionId) args.push("--resume", resumeSessionId);
    if (model && model !== DEFAULT_GEMINI_LOCAL_MODEL) args.push("--model", model);
    args.push("--approval-mode", "yolo");
    if (sandbox) {
      args.push("--sandbox");
    } else {
      args.push("--sandbox=none");
    }
    if (extraArgs.length > 0) args.push(...extraArgs);
    args.push(prompt);
    return args;
  };

  const runAttempt = async (resumeSessionId: string | null) => {
    const args = buildArgs(resumeSessionId);
    if (onMeta) {
      await onMeta({
        adapterType: "gemini_local",
        command,
        cwd,
        commandNotes,
        commandArgs: args.map((value, index) => (
          index === args.length - 1 ? `<prompt ${prompt.length} chars>` : value
        )),
        env: redactEnvForLogs(env),
        prompt,
        context,
      });
    }

    const proc = await runChildProcess(runId, command, args, {
      cwd,
      env,
      timeoutSec,
      graceSec,
      onLog,
    });
    return {
      proc,
      parsed: parseGeminiJsonl(proc.stdout),
    };
  };

  const toResult = (
    attempt: {
      proc: {
        exitCode: number | null;
        signal: string | null;
        timedOut: boolean;
        stdout: string;
        stderr: string;
      };
      parsed: ReturnType<typeof parseGeminiJsonl>;
    },
    clearSessionOnMissingSession = false,
    isRetry = false,
  ): AdapterExecutionResult => {
    const authMeta = detectGeminiAuthRequired({
      parsed: attempt.parsed.resultEvent,
      stdout: attempt.proc.stdout,
      stderr: attempt.proc.stderr,
    });

    if (attempt.proc.timedOut) {
      return {
        exitCode: attempt.proc.exitCode,
        signal: attempt.proc.signal,
        timedOut: true,
        errorMessage: `Timed out after ${timeoutSec}s`,
        errorCode: authMeta.requiresAuth ? "gemini_auth_required" : null,
        clearSession: clearSessionOnMissingSession,
      };
    }

    const clearSessionForTurnLimit = isGeminiTurnLimitResult(attempt.parsed.resultEvent, attempt.proc.exitCode);

    // On retry, don't fall back to old session ID — the old session was stale
    const canFallbackToRuntimeSession = !isRetry;
    const resolvedSessionId = attempt.parsed.sessionId
      ?? (canFallbackToRuntimeSession ? (runtimeSessionId ?? runtime.sessionId ?? null) : null);
    const resolvedSessionParams = resolvedSessionId
      ? ({
          sessionId: resolvedSessionId,
          cwd,
          ...(workspaceId ? { workspaceId } : {}),
          ...(workspaceRepoUrl ? { repoUrl: workspaceRepoUrl } : {}),
          ...(workspaceRepoRef ? { repoRef: workspaceRepoRef } : {}),
        } as Record<string, unknown>)
      : null;
    const parsedError = typeof attempt.parsed.errorMessage === "string" ? attempt.parsed.errorMessage.trim() : "";
    const stderrLine = firstNonEmptyLine(attempt.proc.stderr);
    const structuredFailure = attempt.parsed.resultEvent
      ? describeGeminiFailure(attempt.parsed.resultEvent)
      : null;
    const fallbackErrorMessage =
      parsedError ||
      structuredFailure ||
      stderrLine ||
      `Gemini exited with code ${attempt.proc.exitCode ?? -1}`;

    return {
      exitCode: attempt.proc.exitCode,
      signal: attempt.proc.signal,
      timedOut: false,
      errorMessage: (attempt.proc.exitCode ?? 0) === 0 ? null : fallbackErrorMessage,
      errorCode: (attempt.proc.exitCode ?? 0) !== 0 && authMeta.requiresAuth ? "gemini_auth_required" : null,
      usage: attempt.parsed.usage,
      sessionId: resolvedSessionId,
      sessionParams: resolvedSessionParams,
      sessionDisplayId: resolvedSessionId,
      provider: "google",
      model,
      billingType,
      costUsd: attempt.parsed.costUsd,
      resultJson: attempt.parsed.resultEvent ?? {
        stdout: attempt.proc.stdout,
        stderr: attempt.proc.stderr,
      },
      summary: attempt.parsed.summary,
      question: attempt.parsed.question,
      clearSession: clearSessionForTurnLimit || Boolean(clearSessionOnMissingSession && !resolvedSessionId),
    };
  };

  const initial = await runAttempt(sessionId);
  if (
    sessionId &&
    !initial.proc.timedOut &&
    (initial.proc.exitCode ?? 0) !== 0 &&
    isGeminiUnknownSessionError(initial.proc.stdout, initial.proc.stderr)
  ) {
    await onLog(
      "stderr",
      `[paperclip] Gemini resume session "${sessionId}" is unavailable; retrying with a fresh session.\n`,
    );
    const retry = await runAttempt(null);
    return toResult(retry, true, true);
  }

  return toResult(initial);
}
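For orientation, a sketch of the argv that buildArgs assembles for a resumed, non-sandboxed run (session ID, model, and prompt are placeholders):

// buildArgs("abc123") with model "gemini-2.5-pro" and sandbox=false yields roughly:
// ["--output-format", "stream-json", "--resume", "abc123", "--model", "gemini-2.5-pro",
//  "--approval-mode", "yolo", "--sandbox=none", "<rendered prompt as final positional argument>"]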
packages/adapters/gemini-local/src/server/index.ts (new file, 70 lines)
@@ -0,0 +1,70 @@
export { execute } from "./execute.js";
export { testEnvironment } from "./test.js";
export {
  parseGeminiJsonl,
  isGeminiUnknownSessionError,
  describeGeminiFailure,
  detectGeminiAuthRequired,
  isGeminiTurnLimitResult,
} from "./parse.js";
import type { AdapterSessionCodec } from "@paperclipai/adapter-utils";

function readNonEmptyString(value: unknown): string | null {
  return typeof value === "string" && value.trim().length > 0 ? value.trim() : null;
}

export const sessionCodec: AdapterSessionCodec = {
  deserialize(raw: unknown) {
    if (typeof raw !== "object" || raw === null || Array.isArray(raw)) return null;
    const record = raw as Record<string, unknown>;
    const sessionId =
      readNonEmptyString(record.sessionId) ??
      readNonEmptyString(record.session_id) ??
      readNonEmptyString(record.sessionID);
    if (!sessionId) return null;
    const cwd =
      readNonEmptyString(record.cwd) ??
      readNonEmptyString(record.workdir) ??
      readNonEmptyString(record.folder);
    const workspaceId = readNonEmptyString(record.workspaceId) ?? readNonEmptyString(record.workspace_id);
    const repoUrl = readNonEmptyString(record.repoUrl) ?? readNonEmptyString(record.repo_url);
    const repoRef = readNonEmptyString(record.repoRef) ?? readNonEmptyString(record.repo_ref);
    return {
      sessionId,
      ...(cwd ? { cwd } : {}),
      ...(workspaceId ? { workspaceId } : {}),
      ...(repoUrl ? { repoUrl } : {}),
      ...(repoRef ? { repoRef } : {}),
    };
  },
  serialize(params: Record<string, unknown> | null) {
    if (!params) return null;
    const sessionId =
      readNonEmptyString(params.sessionId) ??
      readNonEmptyString(params.session_id) ??
      readNonEmptyString(params.sessionID);
    if (!sessionId) return null;
    const cwd =
      readNonEmptyString(params.cwd) ??
      readNonEmptyString(params.workdir) ??
      readNonEmptyString(params.folder);
    const workspaceId = readNonEmptyString(params.workspaceId) ?? readNonEmptyString(params.workspace_id);
    const repoUrl = readNonEmptyString(params.repoUrl) ?? readNonEmptyString(params.repo_url);
    const repoRef = readNonEmptyString(params.repoRef) ?? readNonEmptyString(params.repo_ref);
    return {
      sessionId,
      ...(cwd ? { cwd } : {}),
      ...(workspaceId ? { workspaceId } : {}),
      ...(repoUrl ? { repoUrl } : {}),
      ...(repoRef ? { repoRef } : {}),
    };
  },
  getDisplayId(params: Record<string, unknown> | null) {
    if (!params) return null;
    return (
      readNonEmptyString(params.sessionId) ??
      readNonEmptyString(params.session_id) ??
      readNonEmptyString(params.sessionID)
    );
  },
};
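A quick round-trip sketch of the codec's key normalization (the values are made up): snake_case aliases fold into the canonical camelCase shape.

sessionCodec.deserialize({ session_id: "abc123", workdir: "/work", repo_url: "https://example.com/repo.git" });
// -> { sessionId: "abc123", cwd: "/work", repoUrl: "https://example.com/repo.git" }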
packages/adapters/gemini-local/src/server/parse.ts (new file, 263 lines)
@@ -0,0 +1,263 @@
import { asNumber, asString, parseJson, parseObject } from "@paperclipai/adapter-utils/server-utils";

function collectMessageText(message: unknown): string[] {
  if (typeof message === "string") {
    const trimmed = message.trim();
    return trimmed ? [trimmed] : [];
  }

  const record = parseObject(message);
  const direct = asString(record.text, "").trim();
  const lines: string[] = direct ? [direct] : [];
  const content = Array.isArray(record.content) ? record.content : [];

  for (const partRaw of content) {
    const part = parseObject(partRaw);
    const type = asString(part.type, "").trim();
    if (type === "output_text" || type === "text" || type === "content") {
      const text = asString(part.text, "").trim() || asString(part.content, "").trim();
      if (text) lines.push(text);
    }
  }

  return lines;
}

function readSessionId(event: Record<string, unknown>): string | null {
  return (
    asString(event.session_id, "").trim() ||
    asString(event.sessionId, "").trim() ||
    asString(event.sessionID, "").trim() ||
    asString(event.checkpoint_id, "").trim() ||
    asString(event.thread_id, "").trim() ||
    null
  );
}

function asErrorText(value: unknown): string {
  if (typeof value === "string") return value;
  const rec = parseObject(value);
  const message =
    asString(rec.message, "") ||
    asString(rec.error, "") ||
    asString(rec.code, "") ||
    asString(rec.detail, "");
  if (message) return message;
  try {
    return JSON.stringify(rec);
  } catch {
    return "";
  }
}

function accumulateUsage(
  target: { inputTokens: number; cachedInputTokens: number; outputTokens: number },
  usageRaw: unknown,
) {
  const usage = parseObject(usageRaw);
  const usageMetadata = parseObject(usage.usageMetadata);
  const source = Object.keys(usageMetadata).length > 0 ? usageMetadata : usage;

  target.inputTokens += asNumber(
    source.input_tokens,
    asNumber(source.inputTokens, asNumber(source.promptTokenCount, 0)),
  );
  target.cachedInputTokens += asNumber(
    source.cached_input_tokens,
    asNumber(source.cachedInputTokens, asNumber(source.cachedContentTokenCount, 0)),
  );
  target.outputTokens += asNumber(
    source.output_tokens,
    asNumber(source.outputTokens, asNumber(source.candidatesTokenCount, 0)),
  );
}

export function parseGeminiJsonl(stdout: string) {
  let sessionId: string | null = null;
  const messages: string[] = [];
  let errorMessage: string | null = null;
  let costUsd: number | null = null;
  let resultEvent: Record<string, unknown> | null = null;
  let question: { prompt: string; choices: Array<{ key: string; label: string; description?: string }> } | null = null;
  const usage = {
    inputTokens: 0,
    cachedInputTokens: 0,
    outputTokens: 0,
  };

  for (const rawLine of stdout.split(/\r?\n/)) {
    const line = rawLine.trim();
    if (!line) continue;

    const event = parseJson(line);
    if (!event) continue;

    const foundSessionId = readSessionId(event);
    if (foundSessionId) sessionId = foundSessionId;

    const type = asString(event.type, "").trim();

    if (type === "assistant") {
      messages.push(...collectMessageText(event.message));
      const messageObj = parseObject(event.message);
      const content = Array.isArray(messageObj.content) ? messageObj.content : [];
      for (const partRaw of content) {
        const part = parseObject(partRaw);
        if (asString(part.type, "").trim() === "question") {
          question = {
            prompt: asString(part.prompt, "").trim(),
            choices: (Array.isArray(part.choices) ? part.choices : []).map((choiceRaw) => {
              const choice = parseObject(choiceRaw);
              return {
                key: asString(choice.key, "").trim(),
                label: asString(choice.label, "").trim(),
                description: asString(choice.description, "").trim() || undefined,
              };
            }),
          };
          break; // only one question per message
        }
      }
      continue;
    }

    if (type === "result") {
      resultEvent = event;
      accumulateUsage(usage, event.usage ?? event.usageMetadata);
      const resultText =
        asString(event.result, "").trim() ||
        asString(event.text, "").trim() ||
        asString(event.response, "").trim();
      if (resultText && messages.length === 0) messages.push(resultText);
      costUsd = asNumber(event.total_cost_usd, asNumber(event.cost_usd, asNumber(event.cost, costUsd ?? 0))) || costUsd;
      const isError = event.is_error === true || asString(event.subtype, "").toLowerCase() === "error";
      if (isError) {
        const text = asErrorText(event.error ?? event.message ?? event.result).trim();
        if (text) errorMessage = text;
      }
      continue;
    }

    if (type === "error") {
      const text = asErrorText(event.error ?? event.message ?? event.detail).trim();
      if (text) errorMessage = text;
      continue;
    }

    if (type === "system") {
      const subtype = asString(event.subtype, "").trim().toLowerCase();
      if (subtype === "error") {
        const text = asErrorText(event.error ?? event.message ?? event.detail).trim();
        if (text) errorMessage = text;
      }
      continue;
    }

    if (type === "text") {
      const part = parseObject(event.part);
      const text = asString(part.text, "").trim();
      if (text) messages.push(text);
      continue;
    }

    if (type === "step_finish" || event.usage || event.usageMetadata) {
      accumulateUsage(usage, event.usage ?? event.usageMetadata);
      costUsd = asNumber(event.total_cost_usd, asNumber(event.cost_usd, asNumber(event.cost, costUsd ?? 0))) || costUsd;
      continue;
    }
  }

  return {
    sessionId,
    summary: messages.join("\n\n").trim(),
    usage,
    costUsd,
    errorMessage,
    resultEvent,
    question,
  };
}

export function isGeminiUnknownSessionError(stdout: string, stderr: string): boolean {
  const haystack = `${stdout}\n${stderr}`
    .split(/\r?\n/)
    .map((line) => line.trim())
    .filter(Boolean)
    .join("\n");

  return /unknown\s+session|session\s+.*\s+not\s+found|resume\s+.*\s+not\s+found|checkpoint\s+.*\s+not\s+found|cannot\s+resume|failed\s+to\s+resume/i.test(
    haystack,
  );
}

function extractGeminiErrorMessages(parsed: Record<string, unknown>): string[] {
  const messages: string[] = [];
  const errorMsg = asString(parsed.error, "").trim();
  if (errorMsg) messages.push(errorMsg);

  const raw = Array.isArray(parsed.errors) ? parsed.errors : [];
  for (const entry of raw) {
    if (typeof entry === "string") {
      const msg = entry.trim();
      if (msg) messages.push(msg);
      continue;
    }
    if (typeof entry !== "object" || entry === null || Array.isArray(entry)) continue;
    const obj = entry as Record<string, unknown>;
    const msg = asString(obj.message, "") || asString(obj.error, "") || asString(obj.code, "");
    if (msg) {
      messages.push(msg);
      continue;
    }
    try {
      messages.push(JSON.stringify(obj));
    } catch {
      // skip non-serializable entry
    }
  }

  return messages;
}

export function describeGeminiFailure(parsed: Record<string, unknown>): string | null {
  const status = asString(parsed.status, "");
  const errors = extractGeminiErrorMessages(parsed);

  const detail = errors[0] ?? "";
  const parts = ["Gemini run failed"];
  if (status) parts.push(`status=${status}`);
  if (detail) parts.push(detail);
  return parts.length > 1 ? parts.join(": ") : null;
}

const GEMINI_AUTH_REQUIRED_RE = /(?:not\s+authenticated|please\s+authenticate|api[_ ]?key\s+(?:required|missing|invalid)|authentication\s+required|unauthorized|invalid\s+credentials|not\s+logged\s+in|login\s+required|run\s+`?gemini\s+auth(?:\s+login)?`?\s+first)/i;

export function detectGeminiAuthRequired(input: {
  parsed: Record<string, unknown> | null;
  stdout: string;
  stderr: string;
}): { requiresAuth: boolean } {
  const errors = extractGeminiErrorMessages(input.parsed ?? {});
  const messages = [...errors, input.stdout, input.stderr]
    .join("\n")
    .split(/\r?\n/)
    .map((line) => line.trim())
    .filter(Boolean);

  const requiresAuth = messages.some((line) => GEMINI_AUTH_REQUIRED_RE.test(line));
  return { requiresAuth };
}

export function isGeminiTurnLimitResult(
  parsed: Record<string, unknown> | null | undefined,
  exitCode?: number | null,
): boolean {
  if (exitCode === 53) return true;
  if (!parsed) return false;

  const status = asString(parsed.status, "").trim().toLowerCase();
  if (status === "turn_limit" || status === "max_turns") return true;

  const error = asString(parsed.error, "").trim();
  return /turn\s*limit|max(?:imum)?\s+turns?/i.test(error);
}
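A small sketch of what parseGeminiJsonl extracts from a two-line stream (the event shapes are illustrative):

const sample = [
  '{"type":"assistant","session_id":"abc123","message":{"content":[{"type":"text","text":"Done."}]}}',
  '{"type":"result","usage":{"input_tokens":12,"output_tokens":3},"total_cost_usd":0.0001}',
].join("\n");
const parsed = parseGeminiJsonl(sample);
// parsed.sessionId === "abc123", parsed.summary === "Done.",
// parsed.usage -> { inputTokens: 12, cachedInputTokens: 0, outputTokens: 3 }, parsed.costUsd === 0.0001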
packages/adapters/gemini-local/src/server/test.ts (new file, 223 lines)
@@ -0,0 +1,223 @@
import path from "node:path";
import type {
  AdapterEnvironmentCheck,
  AdapterEnvironmentTestContext,
  AdapterEnvironmentTestResult,
} from "@paperclipai/adapter-utils";
import {
  asBoolean,
  asString,
  asStringArray,
  ensureAbsoluteDirectory,
  ensureCommandResolvable,
  ensurePathInEnv,
  parseObject,
  runChildProcess,
} from "@paperclipai/adapter-utils/server-utils";
import { DEFAULT_GEMINI_LOCAL_MODEL } from "../index.js";
import { detectGeminiAuthRequired, parseGeminiJsonl } from "./parse.js";
import { firstNonEmptyLine } from "./utils.js";

function summarizeStatus(checks: AdapterEnvironmentCheck[]): AdapterEnvironmentTestResult["status"] {
  if (checks.some((check) => check.level === "error")) return "fail";
  if (checks.some((check) => check.level === "warn")) return "warn";
  return "pass";
}

function isNonEmpty(value: unknown): value is string {
  return typeof value === "string" && value.trim().length > 0;
}

function commandLooksLike(command: string, expected: string): boolean {
  const base = path.basename(command).toLowerCase();
  return base === expected || base === `${expected}.cmd` || base === `${expected}.exe`;
}

function summarizeProbeDetail(stdout: string, stderr: string, parsedError: string | null): string | null {
  const raw = parsedError?.trim() || firstNonEmptyLine(stderr) || firstNonEmptyLine(stdout);
  if (!raw) return null;
  const clean = raw.replace(/\s+/g, " ").trim();
  const max = 240;
  return clean.length > max ? `${clean.slice(0, max - 1)}…` : clean;
}

export async function testEnvironment(
  ctx: AdapterEnvironmentTestContext,
): Promise<AdapterEnvironmentTestResult> {
  const checks: AdapterEnvironmentCheck[] = [];
  const config = parseObject(ctx.config);
  const command = asString(config.command, "gemini");
  const cwd = asString(config.cwd, process.cwd());

  try {
    await ensureAbsoluteDirectory(cwd, { createIfMissing: true });
    checks.push({
      code: "gemini_cwd_valid",
      level: "info",
      message: `Working directory is valid: ${cwd}`,
    });
  } catch (err) {
    checks.push({
      code: "gemini_cwd_invalid",
      level: "error",
      message: err instanceof Error ? err.message : "Invalid working directory",
      detail: cwd,
    });
  }

  const envConfig = parseObject(config.env);
  const env: Record<string, string> = {};
  for (const [key, value] of Object.entries(envConfig)) {
    if (typeof value === "string") env[key] = value;
  }
  const runtimeEnv = ensurePathInEnv({ ...process.env, ...env });
  try {
    await ensureCommandResolvable(command, cwd, runtimeEnv);
    checks.push({
      code: "gemini_command_resolvable",
      level: "info",
      message: `Command is executable: ${command}`,
    });
  } catch (err) {
    checks.push({
      code: "gemini_command_unresolvable",
      level: "error",
      message: err instanceof Error ? err.message : "Command is not executable",
      detail: command,
    });
  }

  const configGeminiApiKey = env.GEMINI_API_KEY;
  const hostGeminiApiKey = process.env.GEMINI_API_KEY;
  const configGoogleApiKey = env.GOOGLE_API_KEY;
  const hostGoogleApiKey = process.env.GOOGLE_API_KEY;
  const hasGca = env.GOOGLE_GENAI_USE_GCA === "true" || process.env.GOOGLE_GENAI_USE_GCA === "true";
  if (
    isNonEmpty(configGeminiApiKey) ||
    isNonEmpty(hostGeminiApiKey) ||
    isNonEmpty(configGoogleApiKey) ||
    isNonEmpty(hostGoogleApiKey) ||
    hasGca
  ) {
    const source = hasGca
      ? "Google account login (GCA)"
      : isNonEmpty(configGeminiApiKey) || isNonEmpty(configGoogleApiKey)
        ? "adapter config env"
        : "server environment";
    checks.push({
      code: "gemini_api_key_present",
      level: "info",
      message: "Gemini API credentials are set for CLI authentication.",
      detail: `Detected in ${source}.`,
    });
  } else {
    checks.push({
      code: "gemini_api_key_missing",
      level: "info",
      message: "No explicit API key detected. Gemini CLI may still authenticate via `gemini auth login` (OAuth).",
      hint: "If the hello probe fails with an auth error, set GEMINI_API_KEY or GOOGLE_API_KEY in adapter env, or run `gemini auth login`.",
    });
  }

  const canRunProbe =
    checks.every((check) => check.code !== "gemini_cwd_invalid" && check.code !== "gemini_command_unresolvable");
  if (canRunProbe) {
    if (!commandLooksLike(command, "gemini")) {
      checks.push({
        code: "gemini_hello_probe_skipped_custom_command",
        level: "info",
        message: "Skipped hello probe because command is not `gemini`.",
        detail: command,
        hint: "Use the `gemini` CLI command to run the automatic installation and auth probe.",
      });
    } else {
      const model = asString(config.model, DEFAULT_GEMINI_LOCAL_MODEL).trim();
      const approvalMode = asString(config.approvalMode, asBoolean(config.yolo, false) ? "yolo" : "default");
      const sandbox = asBoolean(config.sandbox, false);
      const extraArgs = (() => {
        const fromExtraArgs = asStringArray(config.extraArgs);
        if (fromExtraArgs.length > 0) return fromExtraArgs;
        return asStringArray(config.args);
      })();

      const args = ["--output-format", "stream-json"];
      if (model && model !== DEFAULT_GEMINI_LOCAL_MODEL) args.push("--model", model);
      if (approvalMode !== "default") args.push("--approval-mode", approvalMode);
      if (sandbox) {
        args.push("--sandbox");
      } else {
        args.push("--sandbox=none");
      }
      if (extraArgs.length > 0) args.push(...extraArgs);
      args.push("Respond with hello.");

      const probe = await runChildProcess(
        `gemini-envtest-${Date.now()}-${Math.random().toString(16).slice(2)}`,
        command,
        args,
        {
          cwd,
          env,
          timeoutSec: 45,
          graceSec: 5,
          onLog: async () => {},
        },
      );
      const parsed = parseGeminiJsonl(probe.stdout);
      const detail = summarizeProbeDetail(probe.stdout, probe.stderr, parsed.errorMessage);
      const authMeta = detectGeminiAuthRequired({
        parsed: parsed.resultEvent,
        stdout: probe.stdout,
        stderr: probe.stderr,
      });

      if (probe.timedOut) {
        checks.push({
          code: "gemini_hello_probe_timed_out",
          level: "warn",
          message: "Gemini hello probe timed out.",
          hint: "Retry the probe. If this persists, verify Gemini can run `Respond with hello.` from this directory manually.",
        });
      } else if ((probe.exitCode ?? 1) === 0) {
        const summary = parsed.summary.trim();
        const hasHello = /\bhello\b/i.test(summary);
        checks.push({
          code: hasHello ? "gemini_hello_probe_passed" : "gemini_hello_probe_unexpected_output",
          level: hasHello ? "info" : "warn",
          message: hasHello
            ? "Gemini hello probe succeeded."
            : "Gemini probe ran but did not return `hello` as expected.",
          ...(summary ? { detail: summary.replace(/\s+/g, " ").trim().slice(0, 240) } : {}),
          ...(hasHello
            ? {}
            : {
                hint: "Try `gemini --output-format json \"Respond with hello.\"` manually to inspect full output.",
              }),
        });
      } else if (authMeta.requiresAuth) {
        checks.push({
          code: "gemini_hello_probe_auth_required",
          level: "warn",
          message: "Gemini CLI is installed, but authentication is not ready.",
          ...(detail ? { detail } : {}),
          hint: "Run `gemini auth` or configure GEMINI_API_KEY / GOOGLE_API_KEY in adapter env/shell, then retry the probe.",
        });
      } else {
        checks.push({
          code: "gemini_hello_probe_failed",
          level: "error",
          message: "Gemini hello probe failed.",
          ...(detail ? { detail } : {}),
          hint: "Run `gemini --output-format json \"Respond with hello.\"` manually in this working directory to debug.",
        });
      }
    }
  }

  return {
    adapterType: ctx.adapterType,
    status: summarizeStatus(checks),
    checks,
    testedAt: new Date().toISOString(),
  };
}
packages/adapters/gemini-local/src/server/utils.ts (new file, 8 lines)
@@ -0,0 +1,8 @@
export function firstNonEmptyLine(text: string): string {
  return (
    text
      .split(/\r?\n/)
      .map((line) => line.trim())
      .find(Boolean) ?? ""
  );
}
packages/adapters/gemini-local/src/ui/build-config.ts (new file, 75 lines)
@@ -0,0 +1,75 @@
import type { CreateConfigValues } from "@paperclipai/adapter-utils";
import { DEFAULT_GEMINI_LOCAL_MODEL } from "../index.js";

function parseCommaArgs(value: string): string[] {
  return value
    .split(",")
    .map((item) => item.trim())
    .filter(Boolean);
}

function parseEnvVars(text: string): Record<string, string> {
  const env: Record<string, string> = {};
  for (const line of text.split(/\r?\n/)) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue;
    const eq = trimmed.indexOf("=");
    if (eq <= 0) continue;
    const key = trimmed.slice(0, eq).trim();
    const value = trimmed.slice(eq + 1);
    if (!/^[A-Za-z_][A-Za-z0-9_]*$/.test(key)) continue;
    env[key] = value;
  }
  return env;
}

function parseEnvBindings(bindings: unknown): Record<string, unknown> {
  if (typeof bindings !== "object" || bindings === null || Array.isArray(bindings)) return {};
  const env: Record<string, unknown> = {};
  for (const [key, raw] of Object.entries(bindings)) {
    if (!/^[A-Za-z_][A-Za-z0-9_]*$/.test(key)) continue;
    if (typeof raw === "string") {
      env[key] = { type: "plain", value: raw };
      continue;
    }
    if (typeof raw !== "object" || raw === null || Array.isArray(raw)) continue;
    const rec = raw as Record<string, unknown>;
    if (rec.type === "plain" && typeof rec.value === "string") {
      env[key] = { type: "plain", value: rec.value };
      continue;
    }
    if (rec.type === "secret_ref" && typeof rec.secretId === "string") {
      env[key] = {
        type: "secret_ref",
        secretId: rec.secretId,
        ...(typeof rec.version === "number" || rec.version === "latest"
          ? { version: rec.version }
          : {}),
      };
    }
  }
  return env;
}

export function buildGeminiLocalConfig(v: CreateConfigValues): Record<string, unknown> {
  const ac: Record<string, unknown> = {};
  if (v.cwd) ac.cwd = v.cwd;
  if (v.instructionsFilePath) ac.instructionsFilePath = v.instructionsFilePath;
  if (v.promptTemplate) ac.promptTemplate = v.promptTemplate;
  ac.model = v.model || DEFAULT_GEMINI_LOCAL_MODEL;
  ac.timeoutSec = 0;
  ac.graceSec = 15;
  const env = parseEnvBindings(v.envBindings);
  const legacy = parseEnvVars(v.envVars);
  for (const [key, value] of Object.entries(legacy)) {
    if (!Object.prototype.hasOwnProperty.call(env, key)) {
      env[key] = { type: "plain", value };
    }
  }
  if (Object.keys(env).length > 0) ac.env = env;
  ac.sandbox = !v.dangerouslyBypassSandbox;

  if (v.command) ac.command = v.command;
  if (v.extraArgs) ac.extraArgs = parseCommaArgs(v.extraArgs);
  return ac;
}
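The legacy env textarea accepts one KEY=VALUE line per entry, with `#` comments; a sketch of the parsing (the input text is made up):

parseEnvVars("# comment\nGEMINI_API_KEY=abc\nBAD KEY=x\nEMPTY=");
// -> { GEMINI_API_KEY: "abc", EMPTY: "" } ("# comment" and "BAD KEY" are skipped)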
packages/adapters/gemini-local/src/ui/index.ts (new file, 2 lines)
@@ -0,0 +1,2 @@
export { parseGeminiStdoutLine } from "./parse-stdout.js";
export { buildGeminiLocalConfig } from "./build-config.js";
packages/adapters/gemini-local/src/ui/parse-stdout.ts (new file, 274 lines)
@@ -0,0 +1,274 @@
|
||||
import type { TranscriptEntry } from "@paperclipai/adapter-utils";
|
||||
|
||||
function safeJsonParse(text: string): unknown {
|
||||
try {
|
||||
return JSON.parse(text);
|
||||
} catch {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
function asRecord(value: unknown): Record<string, unknown> | null {
|
||||
if (typeof value !== "object" || value === null || Array.isArray(value)) return null;
|
||||
return value as Record<string, unknown>;
|
||||
}
|
||||
|
||||
function asString(value: unknown, fallback = ""): string {
|
||||
return typeof value === "string" ? value : fallback;
|
||||
}
|
||||
|
||||
function asNumber(value: unknown, fallback = 0): number {
|
||||
return typeof value === "number" && Number.isFinite(value) ? value : fallback;
|
||||
}
|
||||
|
||||
function stringifyUnknown(value: unknown): string {
|
||||
if (typeof value === "string") return value;
|
||||
if (value === null || value === undefined) return "";
|
||||
try {
|
||||
return JSON.stringify(value, null, 2);
|
||||
} catch {
|
||||
return String(value);
|
||||
}
|
||||
}
|
||||
|
||||
function errorText(value: unknown): string {
|
||||
if (typeof value === "string") return value;
|
||||
const rec = asRecord(value);
|
||||
if (!rec) return "";
|
||||
const msg =
|
||||
(typeof rec.message === "string" && rec.message) ||
|
||||
(typeof rec.error === "string" && rec.error) ||
|
||||
(typeof rec.code === "string" && rec.code) ||
|
||||
"";
|
||||
if (msg) return msg;
|
||||
try {
|
||||
return JSON.stringify(rec);
|
||||
} catch {
|
||||
return "";
|
||||
}
|
||||
}
|
||||
|
||||
function collectTextEntries(messageRaw: unknown, ts: string, kind: "assistant" | "user"): TranscriptEntry[] {
|
||||
if (typeof messageRaw === "string") {
|
||||
const text = messageRaw.trim();
|
||||
return text ? [{ kind, ts, text }] : [];
|
||||
}
|
||||
|
||||
const message = asRecord(messageRaw);
|
||||
if (!message) return [];
|
||||
|
||||
const entries: TranscriptEntry[] = [];
|
||||
const directText = asString(message.text).trim();
|
||||
if (directText) entries.push({ kind, ts, text: directText });
|
||||
|
||||
const content = Array.isArray(message.content) ? message.content : [];
|
||||
for (const partRaw of content) {
|
||||
const part = asRecord(partRaw);
|
||||
if (!part) continue;
|
||||
const type = asString(part.type).trim();
|
||||
if (type !== "output_text" && type !== "text" && type !== "content") continue;
|
||||
const text = asString(part.text).trim() || asString(part.content).trim();
|
||||
if (text) entries.push({ kind, ts, text });
|
||||
}
|
||||
|
||||
return entries;
|
||||
}
|
||||
|
||||
function parseAssistantMessage(messageRaw: unknown, ts: string): TranscriptEntry[] {
|
||||
if (typeof messageRaw === "string") {
|
||||
const text = messageRaw.trim();
|
||||
return text ? [{ kind: "assistant", ts, text }] : [];
|
||||
}
|
||||
|
||||
const message = asRecord(messageRaw);
|
||||
if (!message) return [];
|
||||
|
||||
const entries: TranscriptEntry[] = [];
|
||||
const directText = asString(message.text).trim();
|
||||
if (directText) entries.push({ kind: "assistant", ts, text: directText });
|
||||
|
||||
const content = Array.isArray(message.content) ? message.content : [];
|
||||
for (const partRaw of content) {
|
||||
const part = asRecord(partRaw);
|
||||
if (!part) continue;
|
||||
const type = asString(part.type).trim();
|
||||
|
||||
if (type === "output_text" || type === "text" || type === "content") {
|
||||
const text = asString(part.text).trim() || asString(part.content).trim();
|
||||
if (text) entries.push({ kind: "assistant", ts, text });
|
||||
continue;
|
||||
}
|
||||
|
||||
if (type === "thinking") {
|
||||
const text = asString(part.text).trim();
|
||||
      if (text) entries.push({ kind: "thinking", ts, text });
      continue;
    }

    if (type === "tool_call") {
      const name = asString(part.name, asString(part.tool, "tool"));
      entries.push({
        kind: "tool_call",
        ts,
        name,
        input: part.input ?? part.arguments ?? part.args ?? {},
      });
      continue;
    }

    if (type === "tool_result" || type === "tool_response") {
      const toolUseId =
        asString(part.tool_use_id) ||
        asString(part.toolUseId) ||
        asString(part.call_id) ||
        asString(part.id) ||
        "tool_result";
      const contentText =
        asString(part.output) ||
        asString(part.text) ||
        asString(part.result) ||
        stringifyUnknown(part.output ?? part.result ?? part.text ?? part.response);
      const isError = part.is_error === true || asString(part.status).toLowerCase() === "error";
      entries.push({
        kind: "tool_result",
        ts,
        toolUseId,
        content: contentText,
        isError,
      });
    }
  }

  return entries;
}

function parseTopLevelToolEvent(parsed: Record<string, unknown>, ts: string): TranscriptEntry[] {
  const subtype = asString(parsed.subtype).trim().toLowerCase();
  const callId = asString(parsed.call_id, asString(parsed.callId, asString(parsed.id, "tool_call")));
  const toolCall = asRecord(parsed.tool_call ?? parsed.toolCall);
  if (!toolCall) {
    return [{ kind: "system", ts, text: `tool_call${subtype ? ` (${subtype})` : ""}` }];
  }

  const [toolName] = Object.keys(toolCall);
  if (!toolName) {
    return [{ kind: "system", ts, text: `tool_call${subtype ? ` (${subtype})` : ""}` }];
  }
  const payload = asRecord(toolCall[toolName]) ?? {};

  if (subtype === "started" || subtype === "start") {
    return [{
      kind: "tool_call",
      ts,
      name: toolName,
      input: payload.args ?? payload.input ?? payload.arguments ?? payload,
    }];
  }

  if (subtype === "completed" || subtype === "complete" || subtype === "finished") {
    const result = payload.result ?? payload.output ?? payload.error;
    const isError =
      parsed.is_error === true ||
      payload.is_error === true ||
      payload.error !== undefined ||
      asString(payload.status).toLowerCase() === "error";
    return [{
      kind: "tool_result",
      ts,
      toolUseId: callId,
      content: result !== undefined ? stringifyUnknown(result) : `${toolName} completed`,
      isError,
    }];
  }

  return [{ kind: "system", ts, text: `tool_call${subtype ? ` (${subtype})` : ""}: ${toolName}` }];
}

function readSessionId(parsed: Record<string, unknown>): string {
  return (
    asString(parsed.session_id) ||
    asString(parsed.sessionId) ||
    asString(parsed.sessionID) ||
    asString(parsed.checkpoint_id) ||
    asString(parsed.thread_id)
  );
}

function readUsage(parsed: Record<string, unknown>) {
  const usage = asRecord(parsed.usage) ?? asRecord(parsed.usageMetadata);
  const usageMetadata = asRecord(usage?.usageMetadata);
  const source = usageMetadata ?? usage ?? {};
  return {
    inputTokens: asNumber(source.input_tokens, asNumber(source.inputTokens, asNumber(source.promptTokenCount))),
    outputTokens: asNumber(source.output_tokens, asNumber(source.outputTokens, asNumber(source.candidatesTokenCount))),
    cachedTokens: asNumber(
      source.cached_input_tokens,
      asNumber(source.cachedInputTokens, asNumber(source.cachedContentTokenCount)),
    ),
  };
}

export function parseGeminiStdoutLine(line: string, ts: string): TranscriptEntry[] {
  const parsed = asRecord(safeJsonParse(line));
  if (!parsed) {
    return [{ kind: "stdout", ts, text: line }];
  }

  const type = asString(parsed.type);

  if (type === "system") {
    const subtype = asString(parsed.subtype);
    if (subtype === "init") {
      const sessionId = readSessionId(parsed);
      return [{ kind: "init", ts, model: asString(parsed.model, "gemini"), sessionId }];
    }
    if (subtype === "error") {
      const text = errorText(parsed.error ?? parsed.message ?? parsed.detail);
      return [{ kind: "stderr", ts, text: text || "error" }];
    }
    return [{ kind: "system", ts, text: `system: ${subtype || "event"}` }];
  }

  if (type === "assistant") {
    return parseAssistantMessage(parsed.message, ts);
  }

  if (type === "user") {
    return collectTextEntries(parsed.message, ts, "user");
  }

  if (type === "thinking") {
    const text = asString(parsed.text).trim() || asString(asRecord(parsed.delta)?.text).trim();
    return text ? [{ kind: "thinking", ts, text }] : [];
  }

  if (type === "tool_call") {
    return parseTopLevelToolEvent(parsed, ts);
  }

  if (type === "result") {
    const usage = readUsage(parsed);
    const errors = parsed.is_error === true
      ? [errorText(parsed.error ?? parsed.message ?? parsed.result)].filter(Boolean)
      : [];
    return [{
      kind: "result",
      ts,
      text: asString(parsed.result) || asString(parsed.text) || asString(parsed.response),
      inputTokens: usage.inputTokens,
      outputTokens: usage.outputTokens,
      cachedTokens: usage.cachedTokens,
      costUsd: asNumber(parsed.total_cost_usd, asNumber(parsed.cost_usd, asNumber(parsed.cost))),
      subtype: asString(parsed.subtype, "result"),
      isError: parsed.is_error === true,
      errors,
    }];
  }

  if (type === "error") {
    const text = errorText(parsed.error ?? parsed.message ?? parsed.detail);
    return [{ kind: "stderr", ts, text: text || "error" }];
  }

  return [{ kind: "stdout", ts, text: line }];
}
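
For illustration only, not part of the commit: feeding one JSONL stdout line through the parser might look like the sketch below. The sample event is hypothetical; the entry shape is assumed from the TranscriptEntry usage above.

// Hypothetical driver (sketch): parse one stdout line into transcript entries.
const sample = '{"type":"result","result":"done","usage":{"input_tokens":12,"output_tokens":34}}';
for (const entry of parseGeminiStdoutLine(sample, new Date().toISOString())) {
  console.log(entry.kind, entry); // "result" with text "done" and the token counts mapped by readUsage
}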

packages/adapters/gemini-local/tsconfig.json (new file, 8 lines)
@@ -0,0 +1,8 @@
{
  "extends": "../../../tsconfig.json",
  "compilerOptions": {
    "outDir": "dist",
    "rootDir": "src"
  },
  "include": ["src"]
}

packages/adapters/openclaw-gateway/CHANGELOG.md (new file, 12 lines)
@@ -0,0 +1,12 @@
# @paperclipai/adapter-openclaw-gateway

## 0.3.0

### Minor Changes

- Stable release preparation for 0.3.0

### Patch Changes

- Updated dependencies
  - @paperclipai/adapter-utils@0.3.0

@@ -1,6 +1,6 @@
{
  "name": "@paperclipai/adapter-openclaw-gateway",
  "version": "0.2.7",
  "version": "0.3.0",
  "type": "module",
  "exports": {
    ".": "./src/index.ts",

@@ -31,6 +31,7 @@ Gateway connect identity fields:

Request behavior fields:
- payloadTemplate (object, optional): additional fields merged into gateway agent params
- workspaceRuntime (object, optional): desired runtime service intents; Paperclip forwards these in a standardized paperclip.workspaceRuntime block for remote execution environments
- timeoutSec (number, optional): adapter timeout in seconds (default 120)
- waitTimeoutMs (number, optional): agent.wait timeout override (default timeoutSec * 1000)
- autoPairOnFirstConnect (boolean, optional): on first "pairing required", attempt device.pair.list/device.pair.approve via shared auth, then retry once (default true)
@@ -39,4 +40,15 @@ Request behavior fields:

Session routing fields:
- sessionKeyStrategy (string, optional): issue (default), fixed, or run
- sessionKey (string, optional): fixed session key when strategy=fixed (default paperclip)

Standard outbound payload additions:
- paperclip (object): standardized Paperclip context added to every gateway agent request
- paperclip.workspace (object, optional): resolved execution workspace for this run
- paperclip.workspaces (array, optional): additional workspace hints Paperclip exposed to the run
- paperclip.workspaceRuntime (object, optional): normalized runtime service intent config for the workspace

Standard result metadata supported:
- meta.runtimeServices (array, optional): normalized adapter-managed runtime service reports
- meta.previewUrl (string, optional): shorthand single preview URL
- meta.previewUrls (string[], optional): shorthand multiple preview URLs
`;
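
For illustration, not part of the commit: a config exercising the documented fields might look like the following sketch. All values are placeholders, and the nested service field names (serviceName, port) are assumed from the runtime-service report shape used elsewhere in this diff.

// Hypothetical adapter config (sketch; values are placeholders).
const exampleConfig = {
  url: "wss://gateway.example.internal",   // gateway endpoint (placeholder)
  timeoutSec: 180,                         // adapter timeout
  waitTimeoutMs: 180_000,                  // agent.wait override
  autoPairOnFirstConnect: true,
  sessionKeyStrategy: "fixed",
  sessionKey: "paperclip",
  payloadTemplate: { priority: "normal" }, // merged into gateway agent params
  workspaceRuntime: { services: [{ serviceName: "preview", port: 3000 }] },
};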

@@ -1,4 +1,8 @@
import type { AdapterExecutionContext, AdapterExecutionResult } from "@paperclipai/adapter-utils";
import type {
  AdapterExecutionContext,
  AdapterExecutionResult,
  AdapterRuntimeServiceReport,
} from "@paperclipai/adapter-utils";
import { asNumber, asString, buildPaperclipEnv, parseObject } from "@paperclipai/adapter-utils/server-utils";
import crypto, { randomUUID } from "node:crypto";
import { WebSocket } from "ws";

@@ -411,6 +415,58 @@ function appendWakeText(baseText: string, wakeText: string): string {
  return trimmedBase.length > 0 ? `${trimmedBase}\n\n${wakeText}` : wakeText;
}

function buildStandardPaperclipPayload(
  ctx: AdapterExecutionContext,
  wakePayload: WakePayload,
  paperclipEnv: Record<string, string>,
  payloadTemplate: Record<string, unknown>,
): Record<string, unknown> {
  const templatePaperclip = parseObject(payloadTemplate.paperclip);
  const workspace = asRecord(ctx.context.paperclipWorkspace);
  const workspaces = Array.isArray(ctx.context.paperclipWorkspaces)
    ? ctx.context.paperclipWorkspaces.filter((entry): entry is Record<string, unknown> => Boolean(asRecord(entry)))
    : [];
  const configuredWorkspaceRuntime = parseObject(ctx.config.workspaceRuntime);
  const runtimeServiceIntents = Array.isArray(ctx.context.paperclipRuntimeServiceIntents)
    ? ctx.context.paperclipRuntimeServiceIntents.filter(
        (entry): entry is Record<string, unknown> => Boolean(asRecord(entry)),
      )
    : [];

  const standardPaperclip: Record<string, unknown> = {
    runId: ctx.runId,
    companyId: ctx.agent.companyId,
    agentId: ctx.agent.id,
    agentName: ctx.agent.name,
    taskId: wakePayload.taskId,
    issueId: wakePayload.issueId,
    issueIds: wakePayload.issueIds,
    wakeReason: wakePayload.wakeReason,
    wakeCommentId: wakePayload.wakeCommentId,
    approvalId: wakePayload.approvalId,
    approvalStatus: wakePayload.approvalStatus,
    apiUrl: paperclipEnv.PAPERCLIP_API_URL ?? null,
  };

  if (workspace) {
    standardPaperclip.workspace = workspace;
  }
  if (workspaces.length > 0) {
    standardPaperclip.workspaces = workspaces;
  }
  if (runtimeServiceIntents.length > 0 || Object.keys(configuredWorkspaceRuntime).length > 0) {
    standardPaperclip.workspaceRuntime = {
      ...configuredWorkspaceRuntime,
      ...(runtimeServiceIntents.length > 0 ? { services: runtimeServiceIntents } : {}),
    };
  }

  return {
    ...templatePaperclip,
    ...standardPaperclip,
  };
}

function normalizeUrl(input: string): URL | null {
  try {
    return new URL(input);

@@ -835,6 +891,91 @@ function parseUsage(value: unknown): AdapterExecutionResult["usage"] | undefined
  };
}

function extractRuntimeServicesFromMeta(meta: Record<string, unknown> | null): AdapterRuntimeServiceReport[] {
  if (!meta) return [];
  const reports: AdapterRuntimeServiceReport[] = [];

  const runtimeServices = Array.isArray(meta.runtimeServices)
    ? meta.runtimeServices.filter((entry): entry is Record<string, unknown> => Boolean(asRecord(entry)))
    : [];
  for (const entry of runtimeServices) {
    const serviceName = nonEmpty(entry.serviceName) ?? nonEmpty(entry.name);
    if (!serviceName) continue;
    const rawStatus = nonEmpty(entry.status)?.toLowerCase();
    const status =
      rawStatus === "starting" || rawStatus === "running" || rawStatus === "stopped" || rawStatus === "failed"
        ? rawStatus
        : "running";
    const rawLifecycle = nonEmpty(entry.lifecycle)?.toLowerCase();
    const lifecycle = rawLifecycle === "shared" ? "shared" : "ephemeral";
    const rawScopeType = nonEmpty(entry.scopeType)?.toLowerCase();
    const scopeType =
      rawScopeType === "project_workspace" ||
      rawScopeType === "execution_workspace" ||
      rawScopeType === "agent"
        ? rawScopeType
        : "run";
    const rawHealth = nonEmpty(entry.healthStatus)?.toLowerCase();
    const healthStatus =
      rawHealth === "healthy" || rawHealth === "unhealthy" || rawHealth === "unknown"
        ? rawHealth
        : status === "running"
          ? "healthy"
          : "unknown";

    reports.push({
      id: nonEmpty(entry.id),
      projectId: nonEmpty(entry.projectId),
      projectWorkspaceId: nonEmpty(entry.projectWorkspaceId),
      issueId: nonEmpty(entry.issueId),
      scopeType,
      scopeId: nonEmpty(entry.scopeId),
      serviceName,
      status,
      lifecycle,
      reuseKey: nonEmpty(entry.reuseKey),
      command: nonEmpty(entry.command),
      cwd: nonEmpty(entry.cwd),
      port: parseOptionalPositiveInteger(entry.port),
      url: nonEmpty(entry.url),
      providerRef: nonEmpty(entry.providerRef) ?? nonEmpty(entry.previewId),
      ownerAgentId: nonEmpty(entry.ownerAgentId),
      stopPolicy: asRecord(entry.stopPolicy),
      healthStatus,
    });
  }

  const previewUrl = nonEmpty(meta.previewUrl);
  if (previewUrl) {
    reports.push({
      serviceName: "preview",
      status: "running",
      lifecycle: "ephemeral",
      scopeType: "run",
      url: previewUrl,
      providerRef: nonEmpty(meta.previewId) ?? previewUrl,
      healthStatus: "healthy",
    });
  }

  const previewUrls = Array.isArray(meta.previewUrls)
    ? meta.previewUrls.filter((entry): entry is string => typeof entry === "string" && entry.trim().length > 0)
    : [];
  previewUrls.forEach((url, index) => {
    reports.push({
      serviceName: index === 0 ? "preview" : `preview-${index + 1}`,
      status: "running",
      lifecycle: "ephemeral",
      scopeType: "run",
      url,
      providerRef: `${url}#${index}`,
      healthStatus: "healthy",
    });
  });

  return reports;
}

function extractResultText(value: unknown): string | null {
  const record = asRecord(value);
  if (!record) return null;

@@ -924,6 +1065,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec

  const templateMessage = nonEmpty(payloadTemplate.message) ?? nonEmpty(payloadTemplate.text);
  const message = templateMessage ? appendWakeText(templateMessage, wakeText) : wakeText;
  const paperclipPayload = buildStandardPaperclipPayload(ctx, wakePayload, paperclipEnv, payloadTemplate);

  const agentParams: Record<string, unknown> = {
    ...payloadTemplate,
@@ -1188,12 +1330,24 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
    null;
  const summary = summaryFromEvents || summaryFromPayload || null;

  const meta = asRecord(asRecord(acceptedPayload?.result)?.meta) ?? asRecord(acceptedPayload?.meta);
  const agentMeta = asRecord(meta?.agentMeta);
  const usage = parseUsage(agentMeta?.usage ?? meta?.usage);
  const provider = nonEmpty(agentMeta?.provider) ?? nonEmpty(meta?.provider) ?? "openclaw";
  const model = nonEmpty(agentMeta?.model) ?? nonEmpty(meta?.model) ?? null;
  const costUsd = asNumber(agentMeta?.costUsd ?? meta?.costUsd, 0);
  const acceptedResult = asRecord(acceptedPayload?.result);
  const latestPayload = asRecord(latestResultPayload);
  const latestResult = asRecord(latestPayload?.result);
  const acceptedMeta = asRecord(acceptedResult?.meta) ?? asRecord(acceptedPayload?.meta);
  const latestMeta = asRecord(latestResult?.meta) ?? asRecord(latestPayload?.meta);
  const mergedMeta = {
    ...(acceptedMeta ?? {}),
    ...(latestMeta ?? {}),
  };
  const agentMeta =
    asRecord(mergedMeta.agentMeta) ??
    asRecord(acceptedMeta?.agentMeta) ??
    asRecord(latestMeta?.agentMeta);
  const usage = parseUsage(agentMeta?.usage ?? mergedMeta.usage);
  const runtimeServices = extractRuntimeServicesFromMeta(agentMeta ?? mergedMeta);
  const provider = nonEmpty(agentMeta?.provider) ?? nonEmpty(mergedMeta.provider) ?? "openclaw";
  const model = nonEmpty(agentMeta?.model) ?? nonEmpty(mergedMeta.model) ?? null;
  const costUsd = asNumber(agentMeta?.costUsd ?? mergedMeta.costUsd, 0);

  await ctx.onLog(
    "stdout",
@@ -1209,6 +1363,7 @@ export async function execute(ctx: AdapterExecutionContext): Promise<AdapterExec
    ...(usage ? { usage } : {}),
    ...(costUsd > 0 ? { costUsd } : {}),
    resultJson: asRecord(latestResultPayload),
    ...(runtimeServices.length > 0 ? { runtimeServices } : {}),
    ...(summary ? { summary } : {}),
  };
} catch (err) {

@@ -1,5 +1,17 @@
import type { CreateConfigValues } from "@paperclipai/adapter-utils";

function parseJsonObject(text: string): Record<string, unknown> | null {
  const trimmed = text.trim();
  if (!trimmed) return null;
  try {
    const parsed = JSON.parse(trimmed);
    if (typeof parsed !== "object" || parsed === null || Array.isArray(parsed)) return null;
    return parsed as Record<string, unknown>;
  } catch {
    return null;
  }
}

export function buildOpenClawGatewayConfig(v: CreateConfigValues): Record<string, unknown> {
  const ac: Record<string, unknown> = {};
  if (v.url) ac.url = v.url;
@@ -8,5 +20,11 @@ export function buildOpenClawGatewayConfig(v: CreateConfigValues): Record<string
  ac.sessionKeyStrategy = "issue";
  ac.role = "operator";
  ac.scopes = ["operator.admin"];
  const payloadTemplate = parseJsonObject(v.payloadTemplateJson ?? "");
  if (payloadTemplate) ac.payloadTemplate = payloadTemplate;
  const runtimeServices = parseJsonObject(v.runtimeServicesJson ?? "");
  if (runtimeServices && Array.isArray(runtimeServices.services)) {
    ac.workspaceRuntime = runtimeServices;
  }
  return ac;
}

@@ -1,5 +1,16 @@
# @paperclipai/adapter-opencode-local

## 0.3.0

### Minor Changes

- Stable release preparation for 0.3.0

### Patch Changes

- Updated dependencies
  - @paperclipai/adapter-utils@0.3.0

## 0.2.7

### Patch Changes

@@ -1,6 +1,6 @@
{
  "name": "@paperclipai/adapter-opencode-local",
  "version": "0.2.7",
  "version": "0.3.0",
  "type": "module",
  "exports": {
    ".": "./src/index.ts",

@@ -7,6 +7,7 @@ import {
} from "@paperclipai/adapter-utils/server-utils";

const MODELS_CACHE_TTL_MS = 60_000;
const MODELS_DISCOVERY_TIMEOUT_MS = 20_000;

function resolveOpenCodeCommand(input: unknown): string {
  const envOverride =
@@ -115,14 +116,14 @@ export async function discoverOpenCodeModels(input: {
    {
      cwd,
      env: runtimeEnv,
      timeoutSec: 20,
      timeoutSec: MODELS_DISCOVERY_TIMEOUT_MS / 1000,
      graceSec: 3,
      onLog: async () => {},
    },
  );

  if (result.timedOut) {
    throw new Error("`opencode models` timed out.");
    throw new Error(`\`opencode models\` timed out after ${MODELS_DISCOVERY_TIMEOUT_MS / 1000}s.`);
  }
  if ((result.exitCode ?? 1) !== 0) {
    const detail = firstNonEmptyLine(result.stderr) || firstNonEmptyLine(result.stdout);

@@ -50,6 +50,7 @@ function parseToolUse(parsed: Record<string, unknown>, ts: string): TranscriptEn
    kind: "tool_call",
    ts,
    name: toolName,
    toolUseId: asString(part.callID) || asString(part.id) || undefined,
    input,
  };

packages/adapters/pi-local/CHANGELOG.md (new file, 12 lines)
@@ -0,0 +1,12 @@
# @paperclipai/adapter-pi-local

## 0.3.0

### Minor Changes

- Stable release preparation for 0.3.0

### Patch Changes

- Updated dependencies
  - @paperclipai/adapter-utils@0.3.0

@@ -1,6 +1,6 @@
{
  "name": "@paperclipai/adapter-pi-local",
  "version": "0.1.0",
  "version": "0.3.0",
  "type": "module",
  "exports": {
    ".": "./src/index.ts",

@@ -1,5 +1,17 @@
# @paperclipai/db

## 0.3.0

### Minor Changes

- Stable release preparation for 0.3.0

### Patch Changes

- Updated dependencies [6077ae6]
- Updated dependencies
  - @paperclipai/shared@0.3.0

## 0.2.7

### Patch Changes

@@ -1,6 +1,6 @@
{
  "name": "@paperclipai/db",
  "version": "0.2.7",
  "version": "0.3.0",
  "type": "module",
  "exports": {
    ".": "./src/index.ts",

@@ -1,6 +1,6 @@
import { existsSync, mkdirSync, readdirSync, statSync, unlinkSync } from "node:fs";
import { writeFile } from "node:fs/promises";
import { resolve } from "node:path";
import { readFile, writeFile } from "node:fs/promises";
import { basename, resolve } from "node:path";
import postgres from "postgres";

export type RunDatabaseBackupOptions = {
@@ -9,6 +9,9 @@ export type RunDatabaseBackupOptions = {
  retentionDays: number;
  filenamePrefix?: string;
  connectTimeoutSeconds?: number;
  includeMigrationJournal?: boolean;
  excludeTables?: string[];
  nullifyColumns?: Record<string, string[]>;
};

export type RunDatabaseBackupResult = {
@@ -17,6 +20,50 @@ export type RunDatabaseBackupResult = {
  prunedCount: number;
};

export type RunDatabaseRestoreOptions = {
  connectionString: string;
  backupFile: string;
  connectTimeoutSeconds?: number;
};

type SequenceDefinition = {
  sequence_schema: string;
  sequence_name: string;
  data_type: string;
  start_value: string;
  minimum_value: string;
  maximum_value: string;
  increment: string;
  cycle_option: "YES" | "NO";
  owner_schema: string | null;
  owner_table: string | null;
  owner_column: string | null;
};

type TableDefinition = {
  schema_name: string;
  tablename: string;
};

const DRIZZLE_SCHEMA = "drizzle";
const DRIZZLE_MIGRATIONS_TABLE = "__drizzle_migrations";

const STATEMENT_BREAKPOINT = "-- paperclip statement breakpoint 69f6f3f1-42fd-46a6-bf17-d1d85f8f3900";

function sanitizeRestoreErrorMessage(error: unknown): string {
  if (error && typeof error === "object") {
    const record = error as Record<string, unknown>;
    const firstLine = typeof record.message === "string"
      ? record.message.split(/\r?\n/, 1)[0]?.trim()
      : "";
    const detail = typeof record.detail === "string" ? record.detail.trim() : "";
    const severity = typeof record.severity === "string" ? record.severity.trim() : "";
    const message = firstLine || detail || (error instanceof Error ? error.message : String(error));
    return severity ? `${severity}: ${message}` : message;
  }
  return error instanceof Error ? error.message : String(error);
}

function timestamp(date: Date = new Date()): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  return `${date.getFullYear()}${pad(date.getMonth() + 1)}${pad(date.getDate())}-${pad(date.getHours())}${pad(date.getMinutes())}${pad(date.getSeconds())}`;
@@ -47,10 +94,60 @@ function formatBackupSize(sizeBytes: number): string {
  return `${(sizeBytes / (1024 * 1024)).toFixed(1)}M`;
}

function formatSqlLiteral(value: string): string {
  const sanitized = value.replace(/\u0000/g, "");
  let tag = "$paperclip$";
  while (sanitized.includes(tag)) {
    tag = `$paperclip_${Math.random().toString(36).slice(2, 8)}$`;
  }
  return `${tag}${sanitized}${tag}`;
}
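
Aside, for illustration (not part of the commit): PostgreSQL dollar-quoted literals need no quote escaping, so the collision loop above only has to guarantee the tag never occurs in the value. A sketch of the expected behavior:

// Assumed outputs, for illustration only:
formatSqlLiteral("O'Brien");
//   -> $paperclip$O'Brien$paperclip$   (single quotes need no escaping inside dollar quotes)
formatSqlLiteral("uses $paperclip$ inside");
//   -> $paperclip_ab12cd$uses $paperclip$ inside$paperclip_ab12cd$   (ab12cd stands in for the random suffix)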

function normalizeTableNameSet(values: string[] | undefined): Set<string> {
  return new Set(
    (values ?? [])
      .map((value) => value.trim())
      .filter((value) => value.length > 0),
  );
}

function normalizeNullifyColumnMap(values: Record<string, string[]> | undefined): Map<string, Set<string>> {
  const out = new Map<string, Set<string>>();
  if (!values) return out;
  for (const [tableName, columns] of Object.entries(values)) {
    const normalizedTable = tableName.trim();
    if (normalizedTable.length === 0) continue;
    const normalizedColumns = new Set(
      columns
        .map((column) => column.trim())
        .filter((column) => column.length > 0),
    );
    if (normalizedColumns.size > 0) {
      out.set(normalizedTable, normalizedColumns);
    }
  }
  return out;
}

function quoteIdentifier(value: string): string {
  return `"${value.replaceAll("\"", "\"\"")}"`;
}

function quoteQualifiedName(schemaName: string, objectName: string): string {
  return `${quoteIdentifier(schemaName)}.${quoteIdentifier(objectName)}`;
}

function tableKey(schemaName: string, tableName: string): string {
  return `${schemaName}.${tableName}`;
}

export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise<RunDatabaseBackupResult> {
  const filenamePrefix = opts.filenamePrefix ?? "paperclip";
  const retentionDays = Math.max(1, Math.trunc(opts.retentionDays));
  const connectTimeout = Math.max(1, Math.trunc(opts.connectTimeoutSeconds ?? 5));
  const includeMigrationJournal = opts.includeMigrationJournal === true;
  const excludedTableNames = normalizeTableNameSet(opts.excludeTables);
  const nullifiedColumnsByTable = normalizeNullifyColumnMap(opts.nullifyColumns);
  const sql = postgres(opts.connectionString, { max: 1, connect_timeout: connectTimeout });

  try {
@@ -58,13 +155,35 @@ export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise

    const lines: string[] = [];
    const emit = (line: string) => lines.push(line);
    const emitStatement = (statement: string) => {
      emit(statement);
      emit(STATEMENT_BREAKPOINT);
    };
    const emitStatementBoundary = () => {
      emit(STATEMENT_BREAKPOINT);
    };

    emit("-- Paperclip database backup");
    emit(`-- Created: ${new Date().toISOString()}`);
    emit("");
    emit("BEGIN;");
    emitStatement("BEGIN;");
    emitStatement("SET LOCAL session_replication_role = replica;");
    emitStatement("SET LOCAL client_min_messages = warning;");
    emit("");

    const allTables = await sql<TableDefinition[]>`
      SELECT table_schema AS schema_name, table_name AS tablename
      FROM information_schema.tables
      WHERE table_type = 'BASE TABLE'
        AND (
          table_schema = 'public'
          OR (${includeMigrationJournal}::boolean AND table_schema = ${DRIZZLE_SCHEMA} AND table_name = ${DRIZZLE_MIGRATIONS_TABLE})
        )
      ORDER BY table_schema, table_name
    `;
    const tables = allTables;
    const includedTableNames = new Set(tables.map(({ schema_name, tablename }) => tableKey(schema_name, tablename)));

    // Get all enums
    const enums = await sql<{ typname: string; labels: string[] }[]>`
      SELECT t.typname, array_agg(e.enumlabel ORDER BY e.enumsortorder) AS labels
@@ -78,23 +197,65 @@ export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise

    for (const e of enums) {
      const labels = e.labels.map((l) => `'${l.replace(/'/g, "''")}'`).join(", ");
      emit(`CREATE TYPE "public"."${e.typname}" AS ENUM (${labels});`);
      emitStatement(`CREATE TYPE "public"."${e.typname}" AS ENUM (${labels});`);
    }
    if (enums.length > 0) emit("");

    // Get tables in dependency order (referenced tables first)
    const tables = await sql<{ tablename: string }[]>`
      SELECT c.relname AS tablename
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
      WHERE n.nspname = 'public'
        AND c.relkind = 'r'
        AND c.relname != '__drizzle_migrations'
      ORDER BY c.relname
    const allSequences = await sql<SequenceDefinition[]>`
      SELECT
        s.sequence_schema,
        s.sequence_name,
        s.data_type,
        s.start_value,
        s.minimum_value,
        s.maximum_value,
        s.increment,
        s.cycle_option,
        tblns.nspname AS owner_schema,
        tbl.relname AS owner_table,
        attr.attname AS owner_column
      FROM information_schema.sequences s
      JOIN pg_class seq ON seq.relname = s.sequence_name
      JOIN pg_namespace n ON n.oid = seq.relnamespace AND n.nspname = s.sequence_schema
      LEFT JOIN pg_depend dep ON dep.objid = seq.oid AND dep.deptype = 'a'
      LEFT JOIN pg_class tbl ON tbl.oid = dep.refobjid
      LEFT JOIN pg_namespace tblns ON tblns.oid = tbl.relnamespace
      LEFT JOIN pg_attribute attr ON attr.attrelid = tbl.oid AND attr.attnum = dep.refobjsubid
      WHERE s.sequence_schema = 'public'
        OR (${includeMigrationJournal}::boolean AND s.sequence_schema = ${DRIZZLE_SCHEMA})
      ORDER BY s.sequence_schema, s.sequence_name
    `;
    const sequences = allSequences.filter(
      (seq) => !seq.owner_table || includedTableNames.has(tableKey(seq.owner_schema ?? "public", seq.owner_table)),
    );

    const schemas = new Set<string>();
    for (const table of tables) schemas.add(table.schema_name);
    for (const seq of sequences) schemas.add(seq.sequence_schema);
    const extraSchemas = [...schemas].filter((schemaName) => schemaName !== "public");
    if (extraSchemas.length > 0) {
      emit("-- Schemas");
      for (const schemaName of extraSchemas) {
        emitStatement(`CREATE SCHEMA IF NOT EXISTS ${quoteIdentifier(schemaName)};`);
      }
      emit("");
    }

    if (sequences.length > 0) {
      emit("-- Sequences");
      for (const seq of sequences) {
        const qualifiedSequenceName = quoteQualifiedName(seq.sequence_schema, seq.sequence_name);
        emitStatement(`DROP SEQUENCE IF EXISTS ${qualifiedSequenceName} CASCADE;`);
        emitStatement(
          `CREATE SEQUENCE ${qualifiedSequenceName} AS ${seq.data_type} INCREMENT BY ${seq.increment} MINVALUE ${seq.minimum_value} MAXVALUE ${seq.maximum_value} START WITH ${seq.start_value}${seq.cycle_option === "YES" ? " CYCLE" : " NO CYCLE"};`,
        );
      }
      emit("");
    }

    // Get full CREATE TABLE DDL via column info
    for (const { tablename } of tables) {
    for (const { schema_name, tablename } of tables) {
      const qualifiedTableName = quoteQualifiedName(schema_name, tablename);
      const columns = await sql<{
        column_name: string;
        data_type: string;
@@ -108,12 +269,12 @@ export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise
        SELECT column_name, data_type, udt_name, is_nullable, column_default,
          character_maximum_length, numeric_precision, numeric_scale
        FROM information_schema.columns
        WHERE table_schema = 'public' AND table_name = ${tablename}
        WHERE table_schema = ${schema_name} AND table_name = ${tablename}
        ORDER BY ordinal_position
      `;

      emit(`-- Table: ${tablename}`);
      emit(`DROP TABLE IF EXISTS "${tablename}" CASCADE;`);
      emit(`-- Table: ${schema_name}.${tablename}`);
      emitStatement(`DROP TABLE IF EXISTS ${qualifiedTableName} CASCADE;`);

      const colDefs: string[] = [];
      for (const col of columns) {
@@ -149,7 +310,7 @@ export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise
        JOIN pg_class t ON t.oid = c.conrelid
        JOIN pg_namespace n ON n.oid = t.relnamespace
        JOIN pg_attribute a ON a.attrelid = t.oid AND a.attnum = ANY(c.conkey)
        WHERE n.nspname = 'public' AND t.relname = ${tablename} AND c.contype = 'p'
        WHERE n.nspname = ${schema_name} AND t.relname = ${tablename} AND c.contype = 'p'
        GROUP BY c.conname
      `;
      for (const p of pk) {
@@ -157,17 +318,31 @@ export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise
        colDefs.push(`  CONSTRAINT "${p.constraint_name}" PRIMARY KEY (${cols})`);
      }

      emit(`CREATE TABLE "${tablename}" (`);
      emit(`CREATE TABLE ${qualifiedTableName} (`);
      emit(colDefs.join(",\n"));
      emit(");");
      emitStatementBoundary();
      emit("");
    }

    const ownedSequences = sequences.filter((seq) => seq.owner_table && seq.owner_column);
    if (ownedSequences.length > 0) {
      emit("-- Sequence ownership");
      for (const seq of ownedSequences) {
        emitStatement(
          `ALTER SEQUENCE ${quoteQualifiedName(seq.sequence_schema, seq.sequence_name)} OWNED BY ${quoteQualifiedName(seq.owner_schema ?? "public", seq.owner_table!)}.${quoteIdentifier(seq.owner_column!)};`,
        );
      }
      emit("");
    }

    // Foreign keys (after all tables created)
    const fks = await sql<{
    const allForeignKeys = await sql<{
      constraint_name: string;
      source_schema: string;
      source_table: string;
      source_columns: string[];
      target_schema: string;
      target_table: string;
      target_columns: string[];
      update_rule: string;
@@ -175,137 +350,157 @@ export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise
    }[]>`
      SELECT
        c.conname AS constraint_name,
        srcn.nspname AS source_schema,
        src.relname AS source_table,
        array_agg(sa.attname ORDER BY array_position(c.conkey, sa.attnum)) AS source_columns,
        tgtn.nspname AS target_schema,
        tgt.relname AS target_table,
        array_agg(ta.attname ORDER BY array_position(c.confkey, ta.attnum)) AS target_columns,
        CASE c.confupdtype WHEN 'a' THEN 'NO ACTION' WHEN 'r' THEN 'RESTRICT' WHEN 'c' THEN 'CASCADE' WHEN 'n' THEN 'SET NULL' WHEN 'd' THEN 'SET DEFAULT' END AS update_rule,
        CASE c.confdeltype WHEN 'a' THEN 'NO ACTION' WHEN 'r' THEN 'RESTRICT' WHEN 'c' THEN 'CASCADE' WHEN 'n' THEN 'SET NULL' WHEN 'd' THEN 'SET DEFAULT' END AS delete_rule
      FROM pg_constraint c
      JOIN pg_class src ON src.oid = c.conrelid
      JOIN pg_namespace srcn ON srcn.oid = src.relnamespace
      JOIN pg_class tgt ON tgt.oid = c.confrelid
      JOIN pg_namespace n ON n.oid = src.relnamespace
      JOIN pg_namespace tgtn ON tgtn.oid = tgt.relnamespace
      JOIN pg_attribute sa ON sa.attrelid = src.oid AND sa.attnum = ANY(c.conkey)
      JOIN pg_attribute ta ON ta.attrelid = tgt.oid AND ta.attnum = ANY(c.confkey)
      WHERE c.contype = 'f' AND n.nspname = 'public'
      GROUP BY c.conname, src.relname, tgt.relname, c.confupdtype, c.confdeltype
      ORDER BY src.relname, c.conname
      WHERE c.contype = 'f' AND (
        srcn.nspname = 'public'
        OR (${includeMigrationJournal}::boolean AND srcn.nspname = ${DRIZZLE_SCHEMA})
      )
      GROUP BY c.conname, srcn.nspname, src.relname, tgtn.nspname, tgt.relname, c.confupdtype, c.confdeltype
      ORDER BY srcn.nspname, src.relname, c.conname
    `;
    const fks = allForeignKeys.filter(
      (fk) => includedTableNames.has(tableKey(fk.source_schema, fk.source_table))
        && includedTableNames.has(tableKey(fk.target_schema, fk.target_table)),
    );

    if (fks.length > 0) {
      emit("-- Foreign keys");
      for (const fk of fks) {
        const srcCols = fk.source_columns.map((c) => `"${c}"`).join(", ");
        const tgtCols = fk.target_columns.map((c) => `"${c}"`).join(", ");
        emit(
          `ALTER TABLE "${fk.source_table}" ADD CONSTRAINT "${fk.constraint_name}" FOREIGN KEY (${srcCols}) REFERENCES "${fk.target_table}" (${tgtCols}) ON UPDATE ${fk.update_rule} ON DELETE ${fk.delete_rule};`,
        emitStatement(
          `ALTER TABLE ${quoteQualifiedName(fk.source_schema, fk.source_table)} ADD CONSTRAINT "${fk.constraint_name}" FOREIGN KEY (${srcCols}) REFERENCES ${quoteQualifiedName(fk.target_schema, fk.target_table)} (${tgtCols}) ON UPDATE ${fk.update_rule} ON DELETE ${fk.delete_rule};`,
        );
      }
      emit("");
    }

    // Unique constraints
    const uniques = await sql<{
    const allUniqueConstraints = await sql<{
      constraint_name: string;
      schema_name: string;
      tablename: string;
      column_names: string[];
    }[]>`
      SELECT c.conname AS constraint_name,
        n.nspname AS schema_name,
        t.relname AS tablename,
        array_agg(a.attname ORDER BY array_position(c.conkey, a.attnum)) AS column_names
      FROM pg_constraint c
      JOIN pg_class t ON t.oid = c.conrelid
      JOIN pg_namespace n ON n.oid = t.relnamespace
      JOIN pg_attribute a ON a.attrelid = t.oid AND a.attnum = ANY(c.conkey)
      WHERE n.nspname = 'public' AND c.contype = 'u'
      GROUP BY c.conname, t.relname
      ORDER BY t.relname, c.conname
      WHERE c.contype = 'u' AND (
        n.nspname = 'public'
        OR (${includeMigrationJournal}::boolean AND n.nspname = ${DRIZZLE_SCHEMA})
      )
      GROUP BY c.conname, n.nspname, t.relname
      ORDER BY n.nspname, t.relname, c.conname
    `;
    const uniques = allUniqueConstraints.filter((entry) => includedTableNames.has(tableKey(entry.schema_name, entry.tablename)));

    if (uniques.length > 0) {
      emit("-- Unique constraints");
      for (const u of uniques) {
        const cols = u.column_names.map((c) => `"${c}"`).join(", ");
        emit(`ALTER TABLE "${u.tablename}" ADD CONSTRAINT "${u.constraint_name}" UNIQUE (${cols});`);
        emitStatement(`ALTER TABLE ${quoteQualifiedName(u.schema_name, u.tablename)} ADD CONSTRAINT "${u.constraint_name}" UNIQUE (${cols});`);
      }
      emit("");
    }

    // Indexes (non-primary, non-unique-constraint)
    const indexes = await sql<{ indexdef: string }[]>`
      SELECT indexdef
    const allIndexes = await sql<{ schema_name: string; tablename: string; indexdef: string }[]>`
      SELECT schemaname AS schema_name, tablename, indexdef
      FROM pg_indexes
      WHERE schemaname = 'public'
        AND indexname NOT IN (
          SELECT conname FROM pg_constraint
          WHERE connamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public')
      WHERE (
        schemaname = 'public'
        OR (${includeMigrationJournal}::boolean AND schemaname = ${DRIZZLE_SCHEMA})
      )
      ORDER BY tablename, indexname
        AND indexname NOT IN (
          SELECT conname FROM pg_constraint c
          JOIN pg_namespace n ON n.oid = c.connamespace
          WHERE n.nspname = pg_indexes.schemaname
        )
      ORDER BY schemaname, tablename, indexname
    `;
    const indexes = allIndexes.filter((entry) => includedTableNames.has(tableKey(entry.schema_name, entry.tablename)));

    if (indexes.length > 0) {
      emit("-- Indexes");
      for (const idx of indexes) {
        emit(`${idx.indexdef};`);
        emitStatement(`${idx.indexdef};`);
      }
      emit("");
    }

    // Dump data for each table
    for (const { tablename } of tables) {
      const count = await sql<{ n: number }[]>`
        SELECT count(*)::int AS n FROM ${sql(tablename)}
      `;
      if ((count[0]?.n ?? 0) === 0) continue;
    for (const { schema_name, tablename } of tables) {
      const qualifiedTableName = quoteQualifiedName(schema_name, tablename);
      const count = await sql.unsafe<{ n: number }[]>(`SELECT count(*)::int AS n FROM ${qualifiedTableName}`);
      if (excludedTableNames.has(tablename) || (count[0]?.n ?? 0) === 0) continue;

      // Get column info for this table
      const cols = await sql<{ column_name: string; data_type: string }[]>`
        SELECT column_name, data_type
        FROM information_schema.columns
        WHERE table_schema = 'public' AND table_name = ${tablename}
        WHERE table_schema = ${schema_name} AND table_name = ${tablename}
        ORDER BY ordinal_position
      `;
      const colNames = cols.map((c) => `"${c.column_name}"`).join(", ");

      emit(`-- Data for: ${tablename} (${count[0]!.n} rows)`);
      emit(`-- Data for: ${schema_name}.${tablename} (${count[0]!.n} rows)`);

      const rows = await sql`SELECT * FROM ${sql(tablename)}`.values();
      const rows = await sql.unsafe(`SELECT * FROM ${qualifiedTableName}`).values();
      const nullifiedColumns = nullifiedColumnsByTable.get(tablename) ?? new Set<string>();
      for (const row of rows) {
        const values = row.map((val: unknown) => {
        const values = row.map((rawValue: unknown, index) => {
          const columnName = cols[index]?.column_name;
          const val = columnName && nullifiedColumns.has(columnName) ? null : rawValue;
          if (val === null || val === undefined) return "NULL";
          if (typeof val === "boolean") return val ? "true" : "false";
          if (typeof val === "number") return String(val);
          if (val instanceof Date) return `'${val.toISOString()}'`;
          if (typeof val === "object") return `'${JSON.stringify(val).replace(/'/g, "''")}'`;
          return `'${String(val).replace(/'/g, "''")}'`;
          if (val instanceof Date) return formatSqlLiteral(val.toISOString());
          if (typeof val === "object") return formatSqlLiteral(JSON.stringify(val));
          return formatSqlLiteral(String(val));
        });
        emit(`INSERT INTO "${tablename}" (${colNames}) VALUES (${values.join(", ")});`);
        emitStatement(`INSERT INTO ${qualifiedTableName} (${colNames}) VALUES (${values.join(", ")});`);
      }
      emit("");
    }

    // Sequence values
    const sequences = await sql<{ sequence_name: string }[]>`
      SELECT sequence_name
      FROM information_schema.sequences
      WHERE sequence_schema = 'public'
      ORDER BY sequence_name
    `;

    if (sequences.length > 0) {
      emit("-- Sequence values");
      for (const seq of sequences) {
        const val = await sql<{ last_value: string }[]>`
          SELECT last_value::text FROM ${sql(seq.sequence_name)}
        `;
        if (val[0]) {
          emit(`SELECT setval('"${seq.sequence_name}"', ${val[0].last_value});`);
        const qualifiedSequenceName = quoteQualifiedName(seq.sequence_schema, seq.sequence_name);
        const val = await sql.unsafe<{ last_value: string; is_called: boolean }[]>(
          `SELECT last_value::text, is_called FROM ${qualifiedSequenceName}`,
        );
        const skipSequenceValue =
          seq.owner_table !== null
            && excludedTableNames.has(seq.owner_table);
        if (val[0] && !skipSequenceValue) {
          emitStatement(`SELECT setval('${qualifiedSequenceName.replaceAll("'", "''")}', ${val[0].last_value}, ${val[0].is_called ? "true" : "false"});`);
        }
      }
      emit("");
    }

    emit("COMMIT;");
    emitStatement("COMMIT;");
    emit("");

    // Write the backup file
@@ -326,6 +521,36 @@ export async function runDatabaseBackup(opts: RunDatabaseBackupOptions): Promise
  }
}

export async function runDatabaseRestore(opts: RunDatabaseRestoreOptions): Promise<void> {
  const connectTimeout = Math.max(1, Math.trunc(opts.connectTimeoutSeconds ?? 5));
  const sql = postgres(opts.connectionString, { max: 1, connect_timeout: connectTimeout });

  try {
    await sql`SELECT 1`;
    const contents = await readFile(opts.backupFile, "utf8");
    const statements = contents
      .split(STATEMENT_BREAKPOINT)
      .map((statement) => statement.trim())
      .filter((statement) => statement.length > 0);

    for (const statement of statements) {
      await sql.unsafe(statement).execute();
    }
  } catch (error) {
    const statementPreview = typeof error === "object" && error !== null && typeof (error as Record<string, unknown>).query === "string"
      ? String((error as Record<string, unknown>).query)
          .split(/\r?\n/)
          .map((line) => line.trim())
          .find((line) => line.length > 0 && !line.startsWith("--"))
      : null;
    throw new Error(
      `Failed to restore ${basename(opts.backupFile)}: ${sanitizeRestoreErrorMessage(error)}${statementPreview ? ` [statement: ${statementPreview.slice(0, 120)}]` : ""}`,
    );
  } finally {
    await sql.end();
  }
}

export function formatDatabaseBackupResult(result: RunDatabaseBackupResult): string {
  const size = formatBackupSize(result.sizeBytes);
  const pruned = result.prunedCount > 0 ? `; pruned ${result.prunedCount} old backup(s)` : "";

@@ -10,6 +10,10 @@ const MIGRATIONS_FOLDER = fileURLToPath(new URL("./migrations", import.meta.url)
const DRIZZLE_MIGRATIONS_TABLE = "__drizzle_migrations";
const MIGRATIONS_JOURNAL_JSON = fileURLToPath(new URL("./migrations/meta/_journal.json", import.meta.url));

function createUtilitySql(url: string) {
  return postgres(url, { max: 1, onnotice: () => {} });
}

function isSafeIdentifier(value: string): boolean {
  return /^[A-Za-z_][A-Za-z0-9_]*$/.test(value);
}
@@ -223,7 +227,7 @@ async function applyPendingMigrationsManually(
    journalEntries.map((entry) => [entry.fileName, normalizeFolderMillis(entry.folderMillis)]),
  );

  const sql = postgres(url, { max: 1 });
  const sql = createUtilitySql(url);
  try {
    const { migrationTableSchema, columnNames } = await ensureMigrationJournalTable(sql);
    const qualifiedTable = `${quoteIdentifier(migrationTableSchema)}.${quoteIdentifier(DRIZZLE_MIGRATIONS_TABLE)}`;
@@ -472,7 +476,7 @@ export async function reconcilePendingMigrationHistory(
    return { repairedMigrations: [], remainingMigrations: [] };
  }

  const sql = postgres(url, { max: 1 });
  const sql = createUtilitySql(url);
  const repairedMigrations: string[] = [];

  try {
@@ -579,7 +583,7 @@ async function discoverMigrationTableSchema(sql: ReturnType<typeof postgres>): P
}

export async function inspectMigrations(url: string): Promise<MigrationState> {
  const sql = postgres(url, { max: 1 });
  const sql = createUtilitySql(url);

  try {
    const availableMigrations = await listMigrationFiles();
@@ -642,7 +646,7 @@ export async function applyPendingMigrations(url: string): Promise<void> {
  const initialState = await inspectMigrations(url);
  if (initialState.status === "upToDate") return;

  const sql = postgres(url, { max: 1 });
  const sql = createUtilitySql(url);

  try {
    const db = drizzlePg(sql);
@@ -680,7 +684,7 @@ export type MigrationBootstrapResult =
  | { migrated: false; reason: "not-empty-no-migration-journal"; tableCount: number };

export async function migratePostgresIfEmpty(url: string): Promise<MigrationBootstrapResult> {
  const sql = postgres(url, { max: 1 });
  const sql = createUtilitySql(url);

  try {
    const migrationTableSchema = await discoverMigrationTableSchema(sql);
@@ -719,14 +723,14 @@ export async function ensurePostgresDatabase(
    throw new Error(`Unsafe database name: ${databaseName}`);
  }

  const sql = postgres(url, { max: 1 });
  const sql = createUtilitySql(url);
  try {
    const existing = await sql<{ one: number }[]>`
      select 1 as one from pg_database where datname = ${databaseName} limit 1
    `;
    if (existing.length > 0) return "exists";

    await sql.unsafe(`create database "${databaseName}"`);
    await sql.unsafe(`create database "${databaseName}" encoding 'UTF8' lc_collate 'C' lc_ctype 'C' template template0`);
    return "created";
  } finally {
    await sql.end();

@@ -12,8 +12,10 @@ export {
} from "./client.js";
export {
  runDatabaseBackup,
  runDatabaseRestore,
  formatDatabaseBackupResult,
  type RunDatabaseBackupOptions,
  type RunDatabaseBackupResult,
  type RunDatabaseRestoreOptions,
} from "./backup-lib.js";
export * from "./schema/index.js";

@@ -1,21 +1,29 @@
import { applyPendingMigrations, inspectMigrations } from "./client.js";
import { resolveMigrationConnection } from "./migration-runtime.js";

const url = process.env.DATABASE_URL;
async function main(): Promise<void> {
  const resolved = await resolveMigrationConnection();

if (!url) {
  throw new Error("DATABASE_URL is required for db:migrate");
}
  console.log(`Migrating database via ${resolved.source}`);

const before = await inspectMigrations(url);
if (before.status === "upToDate") {
  console.log("No pending migrations");
} else {
  console.log(`Applying ${before.pendingMigrations.length} pending migration(s)...`);
  await applyPendingMigrations(url);
  try {
    const before = await inspectMigrations(resolved.connectionString);
    if (before.status === "upToDate") {
      console.log("No pending migrations");
      return;
    }

const after = await inspectMigrations(url);
if (after.status !== "upToDate") {
  throw new Error(`Migrations incomplete: ${after.pendingMigrations.join(", ")}`);
    console.log(`Applying ${before.pendingMigrations.length} pending migration(s)...`);
    await applyPendingMigrations(resolved.connectionString);

    const after = await inspectMigrations(resolved.connectionString);
    if (after.status !== "upToDate") {
      throw new Error(`Migrations incomplete: ${after.pendingMigrations.join(", ")}`);
    }
    console.log("Migrations complete");
  } finally {
    await resolved.stop();
  }
  console.log("Migrations complete");
}

await main();

packages/db/src/migration-runtime.ts (new file, 135 lines)
@@ -0,0 +1,135 @@
import { existsSync, readFileSync, rmSync } from "node:fs";
import { createRequire } from "node:module";
import path from "node:path";
import { fileURLToPath, pathToFileURL } from "node:url";
import { ensurePostgresDatabase } from "./client.js";
import { resolveDatabaseTarget } from "./runtime-config.js";

type EmbeddedPostgresInstance = {
  initialise(): Promise<void>;
  start(): Promise<void>;
  stop(): Promise<void>;
};

type EmbeddedPostgresCtor = new (opts: {
  databaseDir: string;
  user: string;
  password: string;
  port: number;
  persistent: boolean;
  initdbFlags?: string[];
  onLog?: (message: unknown) => void;
  onError?: (message: unknown) => void;
}) => EmbeddedPostgresInstance;

export type MigrationConnection = {
  connectionString: string;
  source: string;
  stop: () => Promise<void>;
};

function readRunningPostmasterPid(postmasterPidFile: string): number | null {
  if (!existsSync(postmasterPidFile)) return null;
  try {
    const pid = Number(readFileSync(postmasterPidFile, "utf8").split("\n")[0]?.trim());
    if (!Number.isInteger(pid) || pid <= 0) return null;
    process.kill(pid, 0);
    return pid;
  } catch {
    return null;
  }
}

function readPidFilePort(postmasterPidFile: string): number | null {
  if (!existsSync(postmasterPidFile)) return null;
  try {
    const lines = readFileSync(postmasterPidFile, "utf8").split("\n");
    const port = Number(lines[3]?.trim());
    return Number.isInteger(port) && port > 0 ? port : null;
  } catch {
    return null;
  }
}

async function loadEmbeddedPostgresCtor(): Promise<EmbeddedPostgresCtor> {
  const require = createRequire(import.meta.url);
  const resolveCandidates = [
    path.resolve(fileURLToPath(new URL("../..", import.meta.url))),
    path.resolve(fileURLToPath(new URL("../../server", import.meta.url))),
    path.resolve(fileURLToPath(new URL("../../cli", import.meta.url))),
    process.cwd(),
  ];

  try {
    const resolvedModulePath = require.resolve("embedded-postgres", { paths: resolveCandidates });
    const mod = await import(pathToFileURL(resolvedModulePath).href);
    return mod.default as EmbeddedPostgresCtor;
  } catch {
    throw new Error(
      "Embedded PostgreSQL support requires dependency `embedded-postgres`. Reinstall dependencies and try again.",
    );
  }
}

async function ensureEmbeddedPostgresConnection(
  dataDir: string,
  preferredPort: number,
): Promise<MigrationConnection> {
  const EmbeddedPostgres = await loadEmbeddedPostgresCtor();
  const postmasterPidFile = path.resolve(dataDir, "postmaster.pid");
  const runningPid = readRunningPostmasterPid(postmasterPidFile);
  const runningPort = readPidFilePort(postmasterPidFile);

  if (runningPid) {
    const port = runningPort ?? preferredPort;
    const adminConnectionString = `postgres://paperclip:paperclip@127.0.0.1:${port}/postgres`;
    await ensurePostgresDatabase(adminConnectionString, "paperclip");
    return {
      connectionString: `postgres://paperclip:paperclip@127.0.0.1:${port}/paperclip`,
      source: `embedded-postgres@${port}`,
      stop: async () => {},
    };
  }

  const instance = new EmbeddedPostgres({
    databaseDir: dataDir,
    user: "paperclip",
    password: "paperclip",
    port: preferredPort,
    persistent: true,
    initdbFlags: ["--encoding=UTF8", "--locale=C"],
    onLog: () => {},
    onError: () => {},
  });

  if (!existsSync(path.resolve(dataDir, "PG_VERSION"))) {
    await instance.initialise();
  }
  if (existsSync(postmasterPidFile)) {
    rmSync(postmasterPidFile, { force: true });
  }
  await instance.start();

  const adminConnectionString = `postgres://paperclip:paperclip@127.0.0.1:${preferredPort}/postgres`;
  await ensurePostgresDatabase(adminConnectionString, "paperclip");

  return {
    connectionString: `postgres://paperclip:paperclip@127.0.0.1:${preferredPort}/paperclip`,
    source: `embedded-postgres@${preferredPort}`,
    stop: async () => {
      await instance.stop();
    },
  };
}

export async function resolveMigrationConnection(): Promise<MigrationConnection> {
  const target = resolveDatabaseTarget();
  if (target.mode === "postgres") {
    return {
      connectionString: target.connectionString,
      source: target.source,
      stop: async () => {},
    };
  }

  return ensureEmbeddedPostgresConnection(target.dataDir, target.port);
}
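
For illustration, not part of the commit: callers are expected to treat the returned handle as a resource, since stop() shuts down an embedded server the resolver may have started and is a no-op for an external Postgres. A sketch of the intended call pattern:

const conn = await resolveMigrationConnection();
try {
  console.log(`connecting via ${conn.source}`); // e.g. "embedded-postgres@5432"
  // ... run migrations or queries against conn.connectionString ...
} finally {
  await conn.stop(); // releases an embedded instance; no-op otherwise
}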

packages/db/src/migration-status.ts (new file, 45 lines)
@@ -0,0 +1,45 @@
import { inspectMigrations } from "./client.js";
import { resolveMigrationConnection } from "./migration-runtime.js";

const jsonMode = process.argv.includes("--json");

async function main(): Promise<void> {
  const connection = await resolveMigrationConnection();

  try {
    const state = await inspectMigrations(connection.connectionString);
    const payload =
      state.status === "upToDate"
        ? {
            source: connection.source,
            status: "upToDate" as const,
            tableCount: state.tableCount,
            pendingMigrations: [] as string[],
          }
        : {
            source: connection.source,
            status: "needsMigrations" as const,
            tableCount: state.tableCount,
            pendingMigrations: state.pendingMigrations,
            reason: state.reason,
          };

    if (jsonMode) {
      console.log(JSON.stringify(payload));
      return;
    }

    if (payload.status === "upToDate") {
      console.log(`Database is up to date via ${payload.source}`);
      return;
    }

    console.log(
      `Pending migrations via ${payload.source}: ${payload.pendingMigrations.join(", ")}`,
    );
  } finally {
    await connection.stop();
  }
}

await main();

packages/db/src/migrations/0026_lying_pete_wisdom.sql (new file, 39 lines)
@@ -0,0 +1,39 @@
CREATE TABLE "workspace_runtime_services" (
|
||||
"id" uuid PRIMARY KEY NOT NULL,
|
||||
"company_id" uuid NOT NULL,
|
||||
"project_id" uuid,
|
||||
"project_workspace_id" uuid,
|
||||
"issue_id" uuid,
|
||||
"scope_type" text NOT NULL,
|
||||
"scope_id" text,
|
||||
"service_name" text NOT NULL,
|
||||
"status" text NOT NULL,
|
||||
"lifecycle" text NOT NULL,
|
||||
"reuse_key" text,
|
||||
"command" text,
|
||||
"cwd" text,
|
||||
"port" integer,
|
||||
"url" text,
|
||||
"provider" text NOT NULL,
|
||||
"provider_ref" text,
|
||||
"owner_agent_id" uuid,
|
||||
"started_by_run_id" uuid,
|
||||
"last_used_at" timestamp with time zone DEFAULT now() NOT NULL,
|
||||
"started_at" timestamp with time zone DEFAULT now() NOT NULL,
|
||||
"stopped_at" timestamp with time zone,
|
||||
"stop_policy" jsonb,
|
||||
"health_status" text DEFAULT 'unknown' NOT NULL,
|
||||
"created_at" timestamp with time zone DEFAULT now() NOT NULL,
|
||||
"updated_at" timestamp with time zone DEFAULT now() NOT NULL
|
||||
);
|
||||
--> statement-breakpoint
|
||||
ALTER TABLE "workspace_runtime_services" ADD CONSTRAINT "workspace_runtime_services_company_id_companies_id_fk" FOREIGN KEY ("company_id") REFERENCES "public"."companies"("id") ON DELETE no action ON UPDATE no action;--> statement-breakpoint
|
||||
ALTER TABLE "workspace_runtime_services" ADD CONSTRAINT "workspace_runtime_services_project_id_projects_id_fk" FOREIGN KEY ("project_id") REFERENCES "public"."projects"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
|
||||
ALTER TABLE "workspace_runtime_services" ADD CONSTRAINT "workspace_runtime_services_project_workspace_id_project_workspaces_id_fk" FOREIGN KEY ("project_workspace_id") REFERENCES "public"."project_workspaces"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
|
||||
ALTER TABLE "workspace_runtime_services" ADD CONSTRAINT "workspace_runtime_services_issue_id_issues_id_fk" FOREIGN KEY ("issue_id") REFERENCES "public"."issues"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
|
||||
ALTER TABLE "workspace_runtime_services" ADD CONSTRAINT "workspace_runtime_services_owner_agent_id_agents_id_fk" FOREIGN KEY ("owner_agent_id") REFERENCES "public"."agents"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
|
||||
ALTER TABLE "workspace_runtime_services" ADD CONSTRAINT "workspace_runtime_services_started_by_run_id_heartbeat_runs_id_fk" FOREIGN KEY ("started_by_run_id") REFERENCES "public"."heartbeat_runs"("id") ON DELETE set null ON UPDATE no action;--> statement-breakpoint
|
||||
CREATE INDEX "workspace_runtime_services_company_workspace_status_idx" ON "workspace_runtime_services" USING btree ("company_id","project_workspace_id","status");--> statement-breakpoint
|
||||
CREATE INDEX "workspace_runtime_services_company_project_status_idx" ON "workspace_runtime_services" USING btree ("company_id","project_id","status");--> statement-breakpoint
|
||||
CREATE INDEX "workspace_runtime_services_run_idx" ON "workspace_runtime_services" USING btree ("started_by_run_id");--> statement-breakpoint
|
||||
CREATE INDEX "workspace_runtime_services_company_updated_idx" ON "workspace_runtime_services" USING btree ("company_id","updated_at");

packages/db/src/migrations/0027_tranquil_tenebrous.sql (new file, 2 lines)
@@ -0,0 +1,2 @@
ALTER TABLE "issues" ADD COLUMN "execution_workspace_settings" jsonb;--> statement-breakpoint
ALTER TABLE "projects" ADD COLUMN "execution_workspace_policy" jsonb;

packages/db/src/migrations/meta/0026_snapshot.json (new file, 6193 lines)
File diff suppressed because it is too large
packages/db/src/migrations/meta/0027_snapshot.json (new file, 6205 lines)
File diff suppressed because it is too large

packages/db/src/migrations/meta/_journal.json
@@ -183,6 +183,20 @@
      "when": 1772807461603,
      "tag": "0025_nasty_salo",
      "breakpoints": true
    },
    {
      "idx": 26,
      "version": "7",
      "when": 1773089625430,
      "tag": "0026_lying_pete_wisdom",
      "breakpoints": true
    },
    {
      "idx": 27,
      "version": "7",
      "when": 1773150731736,
      "tag": "0027_tranquil_tenebrous",
      "breakpoints": true
    }
  ]
}

packages/db/src/runtime-config.test.ts (new file, 108 lines)
@@ -0,0 +1,108 @@
import fs from "node:fs";
import os from "node:os";
import path from "node:path";
import { afterEach, describe, expect, it } from "vitest";
import { resolveDatabaseTarget } from "./runtime-config.js";

const ORIGINAL_CWD = process.cwd();
const ORIGINAL_ENV = { ...process.env };

function writeJson(filePath: string, value: unknown) {
  fs.mkdirSync(path.dirname(filePath), { recursive: true });
  fs.writeFileSync(filePath, JSON.stringify(value, null, 2));
}

function writeText(filePath: string, value: string) {
  fs.mkdirSync(path.dirname(filePath), { recursive: true });
  fs.writeFileSync(filePath, value);
}

afterEach(() => {
  process.chdir(ORIGINAL_CWD);
  for (const key of Object.keys(process.env)) {
    if (!(key in ORIGINAL_ENV)) delete process.env[key];
  }
  for (const [key, value] of Object.entries(ORIGINAL_ENV)) {
    if (value === undefined) delete process.env[key];
    else process.env[key] = value;
  }
});
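
// Note: the afterEach above restores both the working directory and a full
// process.env snapshot. Keys added during a test are deleted and original
// values are written back, so cases that set DATABASE_URL or PAPERCLIP_CONFIG
// cannot leak state into later tests.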

describe("resolveDatabaseTarget", () => {
  it("uses DATABASE_URL from process env first", () => {
    process.env.DATABASE_URL = "postgres://env-user:env-pass@db.example.com:5432/paperclip";

    const target = resolveDatabaseTarget();

    expect(target).toMatchObject({
      mode: "postgres",
      connectionString: "postgres://env-user:env-pass@db.example.com:5432/paperclip",
      source: "DATABASE_URL",
    });
  });

  it("uses DATABASE_URL from repo-local .paperclip/.env", () => {
    const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-db-runtime-"));
    const projectDir = path.join(tempDir, "repo");
    fs.mkdirSync(projectDir, { recursive: true });
    process.chdir(projectDir);
    delete process.env.PAPERCLIP_CONFIG;
    writeJson(path.join(projectDir, ".paperclip", "config.json"), {
      database: { mode: "embedded-postgres", embeddedPostgresPort: 54329 },
    });
    writeText(
      path.join(projectDir, ".paperclip", ".env"),
      'DATABASE_URL="postgres://file-user:file-pass@db.example.com:6543/paperclip"\n',
    );

    const target = resolveDatabaseTarget();

    expect(target).toMatchObject({
      mode: "postgres",
      connectionString: "postgres://file-user:file-pass@db.example.com:6543/paperclip",
      source: "paperclip-env",
    });
  });

  it("uses config postgres connection string when configured", () => {
    const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-db-runtime-"));
    const configPath = path.join(tempDir, "instance", "config.json");
    process.env.PAPERCLIP_CONFIG = configPath;
    writeJson(configPath, {
      database: {
        mode: "postgres",
        connectionString: "postgres://cfg-user:cfg-pass@db.example.com:5432/paperclip",
      },
    });

    const target = resolveDatabaseTarget();

    expect(target).toMatchObject({
      mode: "postgres",
      connectionString: "postgres://cfg-user:cfg-pass@db.example.com:5432/paperclip",
      source: "config.database.connectionString",
    });
  });

  it("falls back to embedded postgres settings from config", () => {
    const tempDir = fs.mkdtempSync(path.join(os.tmpdir(), "paperclip-db-runtime-"));
    const configPath = path.join(tempDir, "instance", "config.json");
    process.env.PAPERCLIP_CONFIG = configPath;
    writeJson(configPath, {
      database: {
        mode: "embedded-postgres",
        embeddedPostgresDataDir: "~/paperclip-test-db",
        embeddedPostgresPort: 55444,
      },
    });

    const target = resolveDatabaseTarget();

    expect(target).toMatchObject({
      mode: "embedded-postgres",
      dataDir: path.resolve(os.homedir(), "paperclip-test-db"),
      port: 55444,
      source: "embedded-postgres@55444",
    });
  });
});

packages/db/src/runtime-config.ts (new file, 267 lines)
@@ -0,0 +1,267 @@
import { existsSync, readFileSync } from "node:fs";
import os from "node:os";
import path from "node:path";

const DEFAULT_INSTANCE_ID = "default";
const CONFIG_BASENAME = "config.json";
const ENV_BASENAME = ".env";
const INSTANCE_ID_RE = /^[a-zA-Z0-9_-]+$/;

type PartialConfig = {
  database?: {
    mode?: "embedded-postgres" | "postgres";
    connectionString?: string;
    embeddedPostgresDataDir?: string;
    embeddedPostgresPort?: number;
    pgliteDataDir?: string;
    pglitePort?: number;
  };
};

export type ResolvedDatabaseTarget =
  | {
      mode: "postgres";
      connectionString: string;
      source: "DATABASE_URL" | "paperclip-env" | "config.database.connectionString";
      configPath: string;
      envPath: string;
    }
  | {
      mode: "embedded-postgres";
      dataDir: string;
      port: number;
      source: `embedded-postgres@${number}`;
      configPath: string;
      envPath: string;
    };

function expandHomePrefix(value: string): string {
  if (value === "~") return os.homedir();
  if (value.startsWith("~/")) return path.resolve(os.homedir(), value.slice(2));
  return value;
}

function resolvePaperclipHomeDir(): string {
  const envHome = process.env.PAPERCLIP_HOME?.trim();
  if (envHome) return path.resolve(expandHomePrefix(envHome));
  return path.resolve(os.homedir(), ".paperclip");
}

function resolvePaperclipInstanceId(): string {
  const raw = process.env.PAPERCLIP_INSTANCE_ID?.trim() || DEFAULT_INSTANCE_ID;
  if (!INSTANCE_ID_RE.test(raw)) {
    throw new Error(`Invalid PAPERCLIP_INSTANCE_ID '${raw}'.`);
  }
  return raw;
}

function resolveDefaultConfigPath(): string {
  return path.resolve(
    resolvePaperclipHomeDir(),
    "instances",
    resolvePaperclipInstanceId(),
    CONFIG_BASENAME,
  );
}

function resolveDefaultEmbeddedPostgresDir(): string {
  return path.resolve(resolvePaperclipHomeDir(), "instances", resolvePaperclipInstanceId(), "db");
}

function resolveHomeAwarePath(value: string): string {
  return path.resolve(expandHomePrefix(value));
}

function findConfigFileFromAncestors(startDir: string): string | null {
  let currentDir = path.resolve(startDir);

  while (true) {
    const candidate = path.resolve(currentDir, ".paperclip", CONFIG_BASENAME);
    if (existsSync(candidate)) return candidate;

    const nextDir = path.resolve(currentDir, "..");
    if (nextDir === currentDir) return null;
    currentDir = nextDir;
  }
}

function resolvePaperclipConfigPath(): string {
  if (process.env.PAPERCLIP_CONFIG?.trim()) {
    return path.resolve(process.env.PAPERCLIP_CONFIG.trim());
  }
  return findConfigFileFromAncestors(process.cwd()) ?? resolveDefaultConfigPath();
}

function resolvePaperclipEnvPath(configPath: string): string {
  return path.resolve(path.dirname(configPath), ENV_BASENAME);
}

function parseEnvFile(contents: string): Record<string, string> {
  const entries: Record<string, string> = {};

  for (const rawLine of contents.split(/\r?\n/)) {
    const line = rawLine.trim();
    if (!line || line.startsWith("#")) continue;

    const match = rawLine.match(/^\s*(?:export\s+)?([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*)\s*$/);
    if (!match) continue;

    const [, key, rawValue] = match;
    const value = rawValue.trim();
    if (!value) {
      entries[key] = "";
      continue;
    }

    if (
      (value.startsWith("\"") && value.endsWith("\"")) ||
      (value.startsWith("'") && value.endsWith("'"))
    ) {
      entries[key] = value.slice(1, -1);
      continue;
    }

    entries[key] = value.replace(/\s+#.*$/, "").trim();
  }

  return entries;
}

function readEnvEntries(envPath: string): Record<string, string> {
  if (!existsSync(envPath)) return {};
  return parseEnvFile(readFileSync(envPath, "utf8"));
}

function migrateLegacyConfig(raw: unknown): PartialConfig | null {
  if (typeof raw !== "object" || raw === null || Array.isArray(raw)) return null;

  const config = { ...(raw as Record<string, unknown>) };
  const databaseRaw = config.database;
  if (typeof databaseRaw !== "object" || databaseRaw === null || Array.isArray(databaseRaw)) {
    return config as PartialConfig;
  }

  // Map the legacy "pglite" mode onto the embedded-postgres settings,
  // carrying over the old data dir and port when the new keys are unset.
  const database = { ...(databaseRaw as Record<string, unknown>) };
  if (database.mode === "pglite") {
    database.mode = "embedded-postgres";

    if (
      typeof database.embeddedPostgresDataDir !== "string" &&
      typeof database.pgliteDataDir === "string"
    ) {
      database.embeddedPostgresDataDir = database.pgliteDataDir;
    }
    if (
      typeof database.embeddedPostgresPort !== "number" &&
      typeof database.pglitePort === "number" &&
      Number.isFinite(database.pglitePort)
    ) {
      database.embeddedPostgresPort = database.pglitePort;
    }
  }

  config.database = database;
  return config as PartialConfig;
}

function asPositiveInt(value: unknown): number | null {
  if (typeof value !== "number" || !Number.isFinite(value)) return null;
  const rounded = Math.trunc(value);
  return rounded > 0 ? rounded : null;
}

function readConfig(configPath: string): PartialConfig | null {
  if (!existsSync(configPath)) return null;

  let parsed: unknown;
  try {
    parsed = JSON.parse(readFileSync(configPath, "utf8"));
  } catch (err) {
    throw new Error(
      `Failed to parse config at ${configPath}: ${err instanceof Error ? err.message : String(err)}`,
    );
  }

  const migrated = migrateLegacyConfig(parsed);
  if (migrated === null || typeof migrated !== "object" || Array.isArray(migrated)) {
    throw new Error(`Invalid config at ${configPath}: expected a JSON object`);
  }

  const database =
    typeof migrated.database === "object" &&
    migrated.database !== null &&
    !Array.isArray(migrated.database)
      ? migrated.database
      : undefined;

  return {
    database: database
      ? {
          mode: database.mode === "postgres" ? "postgres" : "embedded-postgres",
          connectionString:
            typeof database.connectionString === "string" ? database.connectionString : undefined,
          embeddedPostgresDataDir:
            typeof database.embeddedPostgresDataDir === "string"
              ? database.embeddedPostgresDataDir
              : undefined,
          embeddedPostgresPort: asPositiveInt(database.embeddedPostgresPort) ?? undefined,
          pgliteDataDir: typeof database.pgliteDataDir === "string" ? database.pgliteDataDir : undefined,
          pglitePort: asPositiveInt(database.pglitePort) ?? undefined,
        }
      : undefined,
  };
}

export function resolveDatabaseTarget(): ResolvedDatabaseTarget {
  const configPath = resolvePaperclipConfigPath();
  const envPath = resolvePaperclipEnvPath(configPath);
  const envEntries = readEnvEntries(envPath);

  const envUrl = process.env.DATABASE_URL?.trim();
  if (envUrl) {
    return {
      mode: "postgres",
      connectionString: envUrl,
      source: "DATABASE_URL",
      configPath,
      envPath,
    };
  }

  const fileEnvUrl = envEntries.DATABASE_URL?.trim();
  if (fileEnvUrl) {
    return {
      mode: "postgres",
      connectionString: fileEnvUrl,
      source: "paperclip-env",
      configPath,
      envPath,
    };
  }

  const config = readConfig(configPath);
  const connectionString = config?.database?.connectionString?.trim();
  if (config?.database?.mode === "postgres" && connectionString) {
    return {
      mode: "postgres",
      connectionString,
      source: "config.database.connectionString",
      configPath,
      envPath,
    };
  }

  const port = config?.database?.embeddedPostgresPort ?? 54329;
  const dataDir = resolveHomeAwarePath(
    config?.database?.embeddedPostgresDataDir ?? resolveDefaultEmbeddedPostgresDir(),
  );

  return {
    mode: "embedded-postgres",
    dataDir,
    port,
    source: `embedded-postgres@${port}`,
    configPath,
    envPath,
  };
}
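
A minimal consumer sketch (not part of the diff): the relative import specifier matches the one used by runtime-config.test.ts, though a published package entry point may differ, and the logging is illustrative only.

// Usage sketch for the resolver above, under the assumptions stated in the
// lead-in; the branch on mode mirrors the ResolvedDatabaseTarget union.
import { resolveDatabaseTarget } from "./runtime-config.js";

const target = resolveDatabaseTarget();
if (target.mode === "postgres") {
  // Connection string already resolved from env, .paperclip/.env, or config.
  console.log(`postgres via ${target.source}: ${target.connectionString}`);
} else {
  console.log(`embedded postgres on port ${target.port}, data in ${target.dataDir}`);
}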

packages/db/src/schema/index.ts
@@ -13,6 +13,7 @@ export { agentTaskSessions } from "./agent_task_sessions.js";
 export { agentWakeupRequests } from "./agent_wakeup_requests.js";
 export { projects } from "./projects.js";
 export { projectWorkspaces } from "./project_workspaces.js";
+export { workspaceRuntimeServices } from "./workspace_runtime_services.js";
 export { projectGoals } from "./project_goals.js";
 export { goals } from "./goals.js";
 export { issues } from "./issues.js";

packages/db/src/schema/issues.ts
@@ -40,6 +40,7 @@ export const issues = pgTable(
   requestDepth: integer("request_depth").notNull().default(0),
   billingCode: text("billing_code"),
   assigneeAdapterOverrides: jsonb("assignee_adapter_overrides").$type<Record<string, unknown>>(),
+  executionWorkspaceSettings: jsonb("execution_workspace_settings").$type<Record<string, unknown>>(),
   startedAt: timestamp("started_at", { withTimezone: true }),
   completedAt: timestamp("completed_at", { withTimezone: true }),
   cancelledAt: timestamp("cancelled_at", { withTimezone: true }),

packages/db/src/schema/projects.ts
@@ -1,4 +1,4 @@
-import { pgTable, uuid, text, timestamp, date, index } from "drizzle-orm/pg-core";
+import { pgTable, uuid, text, timestamp, date, index, jsonb } from "drizzle-orm/pg-core";
 import { companies } from "./companies.js";
 import { goals } from "./goals.js";
 import { agents } from "./agents.js";
@@ -15,6 +15,7 @@ export const projects = pgTable(
   leadAgentId: uuid("lead_agent_id").references(() => agents.id),
   targetDate: date("target_date"),
   color: text("color"),
+  executionWorkspacePolicy: jsonb("execution_workspace_policy").$type<Record<string, unknown>>(),
   archivedAt: timestamp("archived_at", { withTimezone: true }),
   createdAt: timestamp("created_at", { withTimezone: true }).notNull().defaultNow(),
   updatedAt: timestamp("updated_at", { withTimezone: true }).notNull().defaultNow(),
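
A small sketch of what the $type annotations above buy at a call site (not from the diff; the "reuse" key is a hypothetical example, as is the helper name).

// drizzle types the column as Record<string, unknown> | null on selects,
// so consumers narrow it themselves before use.
import type { projects } from "./projects.js";

type ProjectRow = typeof projects.$inferSelect;

function workspaceReusePolicy(project: ProjectRow): string | undefined {
  const policy = project.executionWorkspacePolicy;
  // "reuse" is an assumed key; nothing in this change fixes the jsonb shape.
  return typeof policy?.reuse === "string" ? policy.reuse : undefined;
}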

packages/db/src/schema/workspace_runtime_services.ts (new file, 64 lines)
@@ -0,0 +1,64 @@
import {
  index,
  integer,
  jsonb,
  pgTable,
  text,
  timestamp,
  uuid,
} from "drizzle-orm/pg-core";
import { companies } from "./companies.js";
import { projects } from "./projects.js";
import { projectWorkspaces } from "./project_workspaces.js";
import { issues } from "./issues.js";
import { agents } from "./agents.js";
import { heartbeatRuns } from "./heartbeat_runs.js";

export const workspaceRuntimeServices = pgTable(
  "workspace_runtime_services",
  {
    id: uuid("id").primaryKey(),
    companyId: uuid("company_id").notNull().references(() => companies.id),
    projectId: uuid("project_id").references(() => projects.id, { onDelete: "set null" }),
    projectWorkspaceId: uuid("project_workspace_id").references(() => projectWorkspaces.id, { onDelete: "set null" }),
    issueId: uuid("issue_id").references(() => issues.id, { onDelete: "set null" }),
    scopeType: text("scope_type").notNull(),
    scopeId: text("scope_id"),
    serviceName: text("service_name").notNull(),
    status: text("status").notNull(),
    lifecycle: text("lifecycle").notNull(),
    reuseKey: text("reuse_key"),
    command: text("command"),
    cwd: text("cwd"),
    port: integer("port"),
    url: text("url"),
    provider: text("provider").notNull(),
    providerRef: text("provider_ref"),
    ownerAgentId: uuid("owner_agent_id").references(() => agents.id, { onDelete: "set null" }),
    startedByRunId: uuid("started_by_run_id").references(() => heartbeatRuns.id, { onDelete: "set null" }),
    lastUsedAt: timestamp("last_used_at", { withTimezone: true }).notNull().defaultNow(),
    startedAt: timestamp("started_at", { withTimezone: true }).notNull().defaultNow(),
    stoppedAt: timestamp("stopped_at", { withTimezone: true }),
    stopPolicy: jsonb("stop_policy").$type<Record<string, unknown>>(),
    healthStatus: text("health_status").notNull().default("unknown"),
    createdAt: timestamp("created_at", { withTimezone: true }).notNull().defaultNow(),
    updatedAt: timestamp("updated_at", { withTimezone: true }).notNull().defaultNow(),
  },
  (table) => ({
    companyWorkspaceStatusIdx: index("workspace_runtime_services_company_workspace_status_idx").on(
      table.companyId,
      table.projectWorkspaceId,
      table.status,
    ),
    companyProjectStatusIdx: index("workspace_runtime_services_company_project_status_idx").on(
      table.companyId,
      table.projectId,
      table.status,
    ),
    runIdx: index("workspace_runtime_services_run_idx").on(table.startedByRunId),
    companyUpdatedIdx: index("workspace_runtime_services_company_updated_idx").on(
      table.companyId,
      table.updatedAt,
    ),
  }),
);
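
As a usage sketch only (not part of the diff): a lookup shaped to hit the company/workspace/status composite index defined above. The node-postgres driver type, the helper name, and the "running" status literal are assumptions, not values from this change.

import { and, eq } from "drizzle-orm";
import type { NodePgDatabase } from "drizzle-orm/node-postgres";
import { workspaceRuntimeServices } from "./workspace_runtime_services.js";

// Fetch the services still marked running for one workspace; the WHERE
// clause mirrors workspace_runtime_services_company_workspace_status_idx.
// "running" is an assumed status value; the column is free-form text here.
export async function listRunningServices(
  db: NodePgDatabase,
  companyId: string,
  projectWorkspaceId: string,
) {
  return db
    .select()
    .from(workspaceRuntimeServices)
    .where(
      and(
        eq(workspaceRuntimeServices.companyId, companyId),
        eq(workspaceRuntimeServices.projectWorkspaceId, projectWorkspaceId),
        eq(workspaceRuntimeServices.status, "running"),
      ),
    );
}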

Some files were not shown because too many files have changed in this diff