Mirror of https://github.com/glittercowboy/get-shit-done (synced 2026-04-25 17:25:23 +02:00)

Compare commits: fix/2047-b ... docs/2115- (38 commits)
| SHA1 |
|---|
| 091793d2c6 |
| 06daaf4c68 |
| 4ad7ecc6c6 |
| 9d5d7d76e7 |
| bae220c5ad |
| 8961322141 |
| 3c2cc7189a |
| 9ff6ca20cf |
| 73be20215e |
| ae17848ef1 |
| f425bf9142 |
| 319663deb7 |
| 868e3d488f |
| 3f3fd0a723 |
| 21ebeb8713 |
| 53995faa8f |
| 9ac7b7f579 |
| ff0b06b43a |
| 72e789432e |
| 23763f920b |
| 9435c4dd38 |
| f34dc66fa9 |
| 1f7ca6b9e8 |
| 6b0e3904c2 |
| aa4532b820 |
| 0e1711b460 |
| b84dfd4c9b |
| 5a302f477a |
| 6c2795598a |
| 7a674c81b7 |
| 5c0e801322 |
| 96eef85c40 |
| 2b4b48401c |
| f8cf54bd01 |
| cc04baa524 |
| 46cc28251a |
| 7857d35dc1 |
| 2a08f11f46 |
CHANGELOG.md (27 lines changed)

@@ -6,8 +6,35 @@ Format follows [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).

## [Unreleased]

## [1.35.0] - 2026-04-10

### Added

- **Cline runtime support** — First-class Cline runtime via rules-based integration. Installs to `~/.cline/` or `./.cline/` as `.clinerules`. No custom slash commands — uses rules. `--cline` flag. (#1605 follow-up)
- **CodeBuddy runtime support** — Skills-based install to `~/.codebuddy/skills/gsd-*/SKILL.md`. `--codebuddy` flag.
- **Qwen Code runtime support** — Skills-based install to `~/.qwen/skills/gsd-*/SKILL.md`, same open standard as Claude Code 2.1.88+. `QWEN_CONFIG_DIR` env var for custom paths. `--qwen` flag.
- **`/gsd-from-gsd2` command** (`gsd:from-gsd2`) — Reverse migration from GSD-2 format (`.gsd/` with Milestone→Slice→Task hierarchy) back to v1 `.planning/` format. Flags: `--dry-run` (preview only), `--force` (overwrite existing `.planning/`), `--path <dir>` (specify GSD-2 root). Produces `PROJECT.md`, `REQUIREMENTS.md`, `ROADMAP.md`, `STATE.md`, and sequential phase dirs. Flattens Milestone→Slice hierarchy to sequential phase numbers (M001/S01→phase 01, M001/S02→phase 02, M002/S01→phase 03, etc.).
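The flattening rule in that entry is a positional enumeration; a minimal sketch in JavaScript (a hypothetical helper, not the shipped implementation, which also writes the migrated files):

```javascript
// Flatten an ordered GSD-2 Milestone→Slice list into sequential v1 phase
// numbers: M001/S01 -> 01, M001/S02 -> 02, M002/S01 -> 03, ...
function flattenToPhases(slices) {
  const phases = {};
  slices.forEach(({ milestone, slice }, index) => {
    phases[`${milestone}/${slice}`] = String(index + 1).padStart(2, '0');
  });
  return phases;
}
```

Phase numbers are assigned purely by position, so sorting the input by milestone then slice is what preserves the hierarchy's order.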
- **`/gsd-ai-integration-phase` command** (`gsd:ai-integration-phase`) — AI framework selection wizard for integrating AI/LLM capabilities into a project phase. Interactive decision matrix with domain-specific failure modes and eval criteria. Produces `AI-SPEC.md` with framework recommendation, implementation guidance, and evaluation strategy. Runs parallel specialist agents: domain-researcher, framework-selector, ai-researcher, and eval-planner.
- **`/gsd-eval-review` command** (`gsd:eval-review`) — Retroactive audit of an implemented AI phase's evaluation coverage. Checks implementation against `AI-SPEC.md` evaluation plan. Scores each eval dimension as COVERED/PARTIAL/MISSING. Produces `EVAL-REVIEW.md` with findings, gaps, and remediation guidance.
- **Review model configuration** — Per-CLI model selection for /gsd-review via `review.models.<cli>` config keys. Falls back to CLI defaults when not set. (#1849)
- **Statusline now surfaces GSD milestone/phase/status** — when no `in_progress` todo is active, `gsd-statusline.js` reads `.planning/STATE.md` (walking up from the workspace dir) and fills the middle slot with `<milestone> · <status> · <phase> (N/total)`. Gracefully degrades when fields are missing; identical to previous behavior when there is no STATE.md or an active todo wins the slot. Uses the YAML frontmatter added for #628.
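The middle-slot composition described in that entry can be sketched as follows (field and parameter names are assumed for illustration; this is not the actual `gsd-statusline.js` code):

```javascript
// Build the statusline middle slot: "<milestone> · <status> · <phase> (N/total)".
// Missing fields are skipped so the slot degrades gracefully.
function middleSlot({ milestone, status, phase, n, total } = {}) {
  const parts = [milestone, status].filter(Boolean);
  if (phase) parts.push(total ? `${phase} (${n}/${total})` : phase);
  return parts.join(' · ');
}
```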
- **Qwen Code and Cursor CLI peer reviewers** — Added as reviewers in `/gsd-review` with `--qwen` and `--cursor` flags. (#1966)

### Changed

- **Worktree safety — `git clean` prohibition** — `gsd-executor` now prohibits `git clean` in worktree context to prevent deletion of prior wave output. (#2075)
- **Executor deletion verification** — Pre-merge deletion checks added to catch missing artifacts before executor commit. (#2070)
- **Hard reset in worktree branch check** — `--hard` flag in `worktree_branch_check` now correctly resets the file tree, not just HEAD. (#2073)

### Fixed

- **Context7 MCP CLI fallback** — Handles `tools: []` response that previously broke Context7 availability detection. (#1885)
- **`Agent` tool in gsd-autonomous** — Added `Agent` to `allowed-tools` to unblock subagent spawning. (#2043)
- **`intel.enabled` in config-set whitelist** — Config key now accepted by `config-set` without validation error. (#2021)
- **`writeSettings` null guard** — Guards against null `settingsPath` for Cline runtime to prevent crash on install. (#2046)
- **Shell hook absolute paths** — `.sh` hooks now receive absolute quoted paths in `buildHookCommand`, fixing path resolution in non-standard working directories. (#2045)
- **`processAttribution` runtime-aware** — Was hardcoded to `'claude'`; now reads actual runtime from environment.
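A sketch of the runtime-aware behavior; `GSD_RUNTIME` is an assumed variable name for illustration, since the entry does not name the environment source:

```javascript
// Resolve the active runtime from the environment instead of hardcoding
// 'claude'; fall back to the previous default when nothing is set.
function detectRuntime(env = process.env) {
  return env.GSD_RUNTIME || 'claude'; // GSD_RUNTIME is a hypothetical name
}
```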
- **`AskUserQuestion` plain-text fallback** — Non-Claude runtimes now receive plain-text numbered lists instead of broken TUI menus.
- **iOS app scaffold uses XcodeGen** — Prevents SPM execution errors in generated iOS scaffolds. (#2023)
- **`acceptance_criteria` hard gate** — Enforced as a hard gate in executor: plans missing acceptance criteria are rejected before execution begins. (#1958)
- **`normalizePhaseName` preserves letter suffix case** — Phase names with letter suffixes (e.g., `1a`, `2B`) now preserve original case. (#1963)
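The suffix-case rule can be pinned down with a sketch (assumed input shape; the shipped `normalizePhaseName` likely handles more forms):

```javascript
// Zero-pad the numeric part of a phase name to two digits while preserving
// the letter suffix exactly as written: '1a' -> '01a', '2B' -> '02B'.
function normalizePhaseName(name) {
  const match = /^(\d+)([A-Za-z]*)$/.exec(String(name).trim());
  if (!match) return name; // not a number+suffix form; leave unchanged
  return match[1].padStart(2, '0') + match[2];
}
```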

## [1.34.2] - 2026-04-06
README.md (16 lines changed)

@@ -4,7 +4,7 @@

**English** · [Português](README.pt-BR.md) · [简体中文](README.zh-CN.md) · [日本語](README.ja-JP.md) · [한국어](README.ko-KR.md)

**A light-weight and powerful meta-prompting, context engineering and spec-driven development system for Claude Code, OpenCode, Gemini CLI, Kilo, Codex, Copilot, Cursor, Windsurf, Antigravity, Augment, Trae, CodeBuddy, and Cline.**
**A light-weight and powerful meta-prompting, context engineering and spec-driven development system for Claude Code, OpenCode, Gemini CLI, Kilo, Codex, Copilot, Cursor, Windsurf, Antigravity, Augment, Trae, Qwen Code, Cline, and CodeBuddy.**

**Solves context rot — the quality degradation that happens as Claude fills its context window.**

@@ -106,17 +106,17 @@ npx get-shit-done-cc@latest
```

The installer prompts you to choose:

1. **Runtime** — Claude Code, OpenCode, Gemini, Kilo, Codex, Copilot, Cursor, Windsurf, Antigravity, Augment, Trae, CodeBuddy, Cline, or all (interactive multi-select — pick multiple runtimes in a single install session)
1. **Runtime** — Claude Code, OpenCode, Gemini, Kilo, Codex, Copilot, Cursor, Windsurf, Antigravity, Augment, Trae, Qwen Code, CodeBuddy, Cline, or all (interactive multi-select — pick multiple runtimes in a single install session)
2. **Location** — Global (all projects) or local (current project only)

Verify with:

- Claude Code / Gemini / Copilot / Antigravity: `/gsd-help`
- Claude Code / Gemini / Copilot / Antigravity / Qwen Code: `/gsd-help`
- OpenCode / Kilo / Augment / Trae / CodeBuddy: `/gsd-help`
- Codex: `$gsd-help`
- Cline: GSD installs via `.clinerules` — verify by checking `.clinerules` exists

> [!NOTE]
> Claude Code 2.1.88+ and Codex install as skills (`skills/gsd-*/SKILL.md`). Older Claude Code versions use `commands/gsd/`. Cline uses `.clinerules` for configuration. The installer handles all formats automatically.
> Claude Code 2.1.88+, Qwen Code, and Codex install as skills (`skills/gsd-*/SKILL.md`). Older Claude Code versions use `commands/gsd/`. Cline uses `.clinerules` for configuration. The installer handles all formats automatically.

> [!TIP]
> For source-based installs or environments where npm is unavailable, see **[docs/manual-update.md](docs/manual-update.md)**.

@@ -175,6 +175,10 @@ npx get-shit-done-cc --augment --local # Install to ./.augment/
npx get-shit-done-cc --trae --global # Install to ~/.trae/
npx get-shit-done-cc --trae --local # Install to ./.trae/

# Qwen Code
npx get-shit-done-cc --qwen --global # Install to ~/.qwen/
npx get-shit-done-cc --qwen --local # Install to ./.qwen/

# CodeBuddy
npx get-shit-done-cc --codebuddy --global # Install to ~/.codebuddy/
npx get-shit-done-cc --codebuddy --local # Install to ./.codebuddy/

@@ -188,7 +192,7 @@ npx get-shit-done-cc --all --global # Install to all directories
```

Use `--global` (`-g`) or `--local` (`-l`) to skip the location prompt.
Use `--claude`, `--opencode`, `--gemini`, `--kilo`, `--codex`, `--copilot`, `--cursor`, `--windsurf`, `--antigravity`, `--augment`, `--trae`, `--codebuddy`, `--cline`, or `--all` to skip the runtime prompt.
Use `--claude`, `--opencode`, `--gemini`, `--kilo`, `--codex`, `--copilot`, `--cursor`, `--windsurf`, `--antigravity`, `--augment`, `--trae`, `--qwen`, `--codebuddy`, `--cline`, or `--all` to skip the runtime prompt.
Use `--sdk` to also install the GSD SDK CLI (`gsd-sdk`) for headless autonomous execution.

</details>

@@ -850,6 +854,7 @@ npx get-shit-done-cc --windsurf --global --uninstall
npx get-shit-done-cc --antigravity --global --uninstall
npx get-shit-done-cc --augment --global --uninstall
npx get-shit-done-cc --trae --global --uninstall
npx get-shit-done-cc --qwen --global --uninstall
npx get-shit-done-cc --codebuddy --global --uninstall
npx get-shit-done-cc --cline --global --uninstall

@@ -865,6 +870,7 @@ npx get-shit-done-cc --windsurf --local --uninstall
npx get-shit-done-cc --antigravity --local --uninstall
npx get-shit-done-cc --augment --local --uninstall
npx get-shit-done-cc --trae --local --uninstall
npx get-shit-done-cc --qwen --local --uninstall
npx get-shit-done-cc --codebuddy --local --uninstall
npx get-shit-done-cc --cline --local --uninstall
```
@@ -17,6 +17,29 @@ Spawned by `discuss-phase` via `Task()`. You do NOT present output directly to t
- Return structured markdown output for the main agent to synthesize
</role>

<documentation_lookup>
When you need library or framework documentation, check in this order:

1. If Context7 MCP tools (`mcp__context7__*`) are available in your environment, use them:
   - Resolve library ID: `mcp__context7__resolve-library-id` with `libraryName`
   - Fetch docs: `mcp__context7__get-library-docs` with `context7CompatibleLibraryId` and `topic`

2. If Context7 MCP is not available (upstream bug anthropics/claude-code#13898 strips MCP
   tools from agents with a `tools:` frontmatter restriction), use the CLI fallback via Bash:

   Step 1 — Resolve library ID:
   ```bash
   npx --yes ctx7@latest library <name> "<query>"
   ```
   Step 2 — Fetch documentation:
   ```bash
   npx --yes ctx7@latest docs <libraryId> "<query>"
   ```

Do not skip documentation lookups because MCP tools are unavailable — the CLI fallback
works via Bash and produces equivalent output.
</documentation_lookup>

<input>
Agent receives via prompt:
@@ -16,6 +16,29 @@ You are a GSD AI researcher. Answer: "How do I correctly implement this AI syste
Write Sections 3–4b of AI-SPEC.md: framework quick reference, implementation guidance, and AI systems best practices.
</role>

<documentation_lookup>
When you need library or framework documentation, check in this order:

1. If Context7 MCP tools (`mcp__context7__*`) are available in your environment, use them:
   - Resolve library ID: `mcp__context7__resolve-library-id` with `libraryName`
   - Fetch docs: `mcp__context7__get-library-docs` with `context7CompatibleLibraryId` and `topic`

2. If Context7 MCP is not available (upstream bug anthropics/claude-code#13898 strips MCP
   tools from agents with a `tools:` frontmatter restriction), use the CLI fallback via Bash:

   Step 1 — Resolve library ID:
   ```bash
   npx --yes ctx7@latest library <name> "<query>"
   ```
   Step 2 — Fetch documentation:
   ```bash
   npx --yes ctx7@latest docs <libraryId> "<query>"
   ```

Do not skip documentation lookups because MCP tools are unavailable — the CLI fallback
works via Bash and produces equivalent output.
</documentation_lookup>

<required_reading>
Read `~/.claude/get-shit-done/references/ai-frameworks.md` for framework profiles and known pitfalls before fetching docs.
</required_reading>
@@ -16,6 +16,29 @@ You are a GSD domain researcher. Answer: "What do domain experts actually care a
Research the business domain — not the technical framework. Write Section 1b of AI-SPEC.md.
</role>

<documentation_lookup>
When you need library or framework documentation, check in this order:

1. If Context7 MCP tools (`mcp__context7__*`) are available in your environment, use them:
   - Resolve library ID: `mcp__context7__resolve-library-id` with `libraryName`
   - Fetch docs: `mcp__context7__get-library-docs` with `context7CompatibleLibraryId` and `topic`

2. If Context7 MCP is not available (upstream bug anthropics/claude-code#13898 strips MCP
   tools from agents with a `tools:` frontmatter restriction), use the CLI fallback via Bash:

   Step 1 — Resolve library ID:
   ```bash
   npx --yes ctx7@latest library <name> "<query>"
   ```
   Step 2 — Fetch documentation:
   ```bash
   npx --yes ctx7@latest docs <libraryId> "<query>"
   ```

Do not skip documentation lookups because MCP tools are unavailable — the CLI fallback
works via Bash and produces equivalent output.
</documentation_lookup>

<required_reading>
Read `~/.claude/get-shit-done/references/ai-evals.md` — specifically the rubric design and domain expert sections.
</required_reading>
@@ -22,12 +22,32 @@ Your job: Execute the plan completely, commit each task, create SUMMARY.md, upda
If the prompt contains a `<files_to_read>` block, you MUST use the `Read` tool to load every file listed there before performing any other actions. This is your primary context.
</role>

<mcp_tool_usage>
Use all tools available in your environment, including MCP servers. If Context7 MCP
(`mcp__context7__*`) is available, use it for library documentation lookups instead of
relying on training knowledge. Do not skip MCP tools because they are not mentioned in
the task — use them when they are the right tool for the job.
</mcp_tool_usage>

<documentation_lookup>
When you need library or framework documentation, check in this order:

1. If Context7 MCP tools (`mcp__context7__*`) are available in your environment, use them:
   - Resolve library ID: `mcp__context7__resolve-library-id` with `libraryName`
   - Fetch docs: `mcp__context7__get-library-docs` with `context7CompatibleLibraryId` and `topic`

2. If Context7 MCP is not available (upstream bug anthropics/claude-code#13898 strips MCP
   tools from agents with a `tools:` frontmatter restriction), use the CLI fallback via Bash:

   Step 1 — Resolve library ID:
   ```bash
   npx --yes ctx7@latest library <name> "<query>"
   ```
   Example: `npx --yes ctx7@latest library react "useEffect hook"`

   Step 2 — Fetch documentation:
   ```bash
   npx --yes ctx7@latest docs <libraryId> "<query>"
   ```
   Example: `npx --yes ctx7@latest docs /facebook/react "useEffect hook"`

Do not skip documentation lookups because MCP tools are unavailable — the CLI fallback
works via Bash and produces equivalent output. Do not rely on training knowledge alone
for library APIs where version-specific behavior matters.
</documentation_lookup>

<project_context>
Before executing, discover project context:

@@ -193,6 +213,10 @@ Track auto-fix attempts per task. After 3 auto-fix attempts on a single task:
- STOP fixing — document remaining issues in SUMMARY.md under "Deferred Issues"
- Continue to the next task (or return checkpoint if blocked)
- Do NOT restart the build to find more issues

**Extended examples and edge case guide:**
For detailed deviation rule examples, checkpoint examples, and edge case decision guidance:
@~/.claude/get-shit-done/references/executor-examples.md
</deviation_rules>

<analysis_paralysis_guard>

@@ -342,6 +366,9 @@ git add src/types/user.ts
| `fix` | Bug fix, error correction |
| `test` | Test-only changes (TDD RED) |
| `refactor` | Code cleanup, no behavior change |
| `perf` | Performance improvement, no behavior change |
| `docs` | Documentation only |
| `style` | Formatting, whitespace, no logic change |
| `chore` | Config, tooling, dependencies |

**4. Commit:**

@@ -377,6 +404,31 @@ Intentional deletions (e.g., removing a deprecated file as part of the task) are
**7. Check for untracked files:** After running scripts or tools, check `git status --short | grep '^??'`. For any new untracked files: commit if intentional, add to `.gitignore` if generated/runtime output. Never leave generated files untracked.
</task_commit_protocol>

<destructive_git_prohibition>
**NEVER run `git clean` inside a worktree. This is an absolute rule with no exceptions.**

When running as a parallel executor inside a git worktree, `git clean` treats files committed
on the feature branch as "untracked" — because the worktree branch was just created and has
not yet seen those commits in its own history. Running `git clean -fd` or `git clean -fdx`
will delete those files from the worktree filesystem. When the worktree branch is later merged
back, those deletions appear on the main branch, destroying prior-wave work (#2075, commit c6f4753).

**Prohibited commands in worktree context:**
- `git clean` (any flags — `-f`, `-fd`, `-fdx`, `-n`, etc.)
- `git rm` on files not explicitly created by the current task
- `git checkout -- .` or `git restore .` (blanket working-tree resets that discard files)
- `git reset --hard` except inside the `<worktree_branch_check>` step at agent startup

If you need to discard changes to a specific file you modified during this task, use:
```bash
git checkout -- path/to/specific/file
```
Never use blanket reset or clean operations that affect the entire working tree.

To inspect what is untracked vs. genuinely new, use `git status --short` and evaluate each
file individually. If a file appears untracked but is not part of your task, leave it alone.
</destructive_git_prohibition>

<summary_creation>
After all tasks complete, create `{phase}-{plan}-SUMMARY.md` at `.planning/phases/XX-name/`.
@@ -34,6 +34,29 @@ If the prompt contains a `<files_to_read>` block, you MUST use the `Read` tool t
Claims tagged `[ASSUMED]` signal to the planner and discuss-phase that the information needs user confirmation before becoming a locked decision. Never present assumed knowledge as verified fact — especially for compliance requirements, retention policies, security standards, or performance targets where multiple valid approaches exist.
</role>

<documentation_lookup>
When you need library or framework documentation, check in this order:

1. If Context7 MCP tools (`mcp__context7__*`) are available in your environment, use them:
   - Resolve library ID: `mcp__context7__resolve-library-id` with `libraryName`
   - Fetch docs: `mcp__context7__get-library-docs` with `context7CompatibleLibraryId` and `topic`

2. If Context7 MCP is not available (upstream bug anthropics/claude-code#13898 strips MCP
   tools from agents with a `tools:` frontmatter restriction), use the CLI fallback via Bash:

   Step 1 — Resolve library ID:
   ```bash
   npx --yes ctx7@latest library <name> "<query>"
   ```
   Step 2 — Fetch documentation:
   ```bash
   npx --yes ctx7@latest docs <libraryId> "<query>"
   ```

Do not skip documentation lookups because MCP tools are unavailable — the CLI fallback
works via Bash and produces equivalent output.
</documentation_lookup>

<project_context>
Before researching, discover project context:

@@ -253,6 +276,12 @@ Priority: Context7 > Exa (verified) > Firecrawl (official docs) > Official GitHu

**Primary recommendation:** [one-liner actionable guidance]

## Architectural Responsibility Map

| Capability | Primary Tier | Secondary Tier | Rationale |
|------------|-------------|----------------|-----------|
| [capability] | [tier] | [tier or —] | [why this tier owns it] |

## Standard Stack

### Core

@@ -497,6 +526,33 @@ cat "$phase_dir"/*-CONTEXT.md 2>/dev/null
- User decided "simple UI, no animations" → don't research animation libraries
- Marked as Claude's discretion → research options and recommend

## Step 1.5: Architectural Responsibility Mapping

Before diving into framework-specific research, map each capability in this phase to its standard architectural tier owner. This is a pure reasoning step — no tool calls needed.

**For each capability in the phase description:**

1. Identify what the capability does (e.g., "user authentication", "data visualization", "file upload")
2. Determine which architectural tier owns the primary responsibility:

   | Tier | Examples |
   |------|----------|
   | **Browser / Client** | DOM manipulation, client-side routing, local storage, service workers |
   | **Frontend Server (SSR)** | Server-side rendering, hydration, middleware, auth cookies |
   | **API / Backend** | REST/GraphQL endpoints, business logic, auth, data validation |
   | **CDN / Static** | Static assets, edge caching, image optimization |
   | **Database / Storage** | Persistence, queries, migrations, caching layers |

3. Record the mapping in a table:

   | Capability | Primary Tier | Secondary Tier | Rationale |
   |------------|-------------|----------------|-----------|
   | [capability] | [tier] | [tier or —] | [why this tier owns it] |

**Output:** Include an `## Architectural Responsibility Map` section in RESEARCH.md immediately after the Summary section. This map is consumed by the planner for sanity-checking task assignments and by the plan-checker for verifying tier correctness.

**Why this matters:** Multi-tier applications frequently have capabilities misassigned during planning — e.g., putting auth logic in the browser tier when it belongs in the API tier, or putting data fetching in the frontend server when the API already provides it. Mapping tier ownership before research prevents these misassignments from propagating into plans.

## Step 2: Identify Research Domains

Based on phase description, identify what needs investigating:
@@ -338,6 +338,8 @@ issue:
- `"future enhancement"`, `"placeholder"`, `"basic version"`, `"minimal"`
- `"will be wired later"`, `"dynamic in future"`, `"skip for now"`
- `"not wired to"`, `"not connected to"`, `"stub"`
- `"too complex"`, `"too difficult"`, `"challenging"`, `"non-trivial"` (when used to justify omission)
- Time estimates used as scope justification: `"would take"`, `"hours"`, `"days"`, `"minutes"` (in sizing context)
2. For each match, cross-reference with the CONTEXT.md decision it claims to implement
3. Compare: does the task deliver what D-XX actually says, or a reduced version?
4. If reduced: BLOCKER — the planner must either deliver fully or propose phase split

@@ -369,6 +371,54 @@ Plans reduce {N} user decisions. Options:
2. Split phase: [suggested grouping of D-XX into sub-phases]
```

## Dimension 7c: Architectural Tier Compliance

**Question:** Do plan tasks assign capabilities to the correct architectural tier as defined in the Architectural Responsibility Map?

**Skip if:** No RESEARCH.md exists for this phase, or RESEARCH.md has no `## Architectural Responsibility Map` section. Output: "Dimension 7c: SKIPPED (no responsibility map found)"

**Process:**
1. Read the phase's RESEARCH.md and extract the `## Architectural Responsibility Map` table
2. For each plan task, identify which capability it implements and which tier it targets (inferred from file paths, action description, and artifacts)
3. Cross-reference against the responsibility map — does the task place work in the tier that owns the capability?
4. Flag any tier mismatch where a task assigns logic to a tier that doesn't own the capability

**Red flags:**
- Auth validation logic placed in browser/client tier when responsibility map assigns it to API tier
- Data persistence logic in frontend server when it belongs in database tier
- Business rule enforcement in CDN/static tier when it belongs in API tier
- Server-side rendering logic assigned to API tier when frontend server owns it

**Severity:** WARNING for potential tier mismatches. BLOCKER if a security-sensitive capability (auth, access control, input validation) is assigned to a less-trusted tier than the responsibility map specifies.

**Example — tier mismatch:**
```yaml
issue:
  dimension: architectural_tier_compliance
  severity: blocker
  description: "Task places auth token validation in browser tier, but Architectural Responsibility Map assigns auth to API tier"
  plan: "01"
  task: 2
  capability: "Authentication token validation"
  expected_tier: "API / Backend"
  actual_tier: "Browser / Client"
  fix_hint: "Move token validation to API route handler per Architectural Responsibility Map"
```

**Example — non-security mismatch (warning):**
```yaml
issue:
  dimension: architectural_tier_compliance
  severity: warning
  description: "Task places data formatting in API tier, but Architectural Responsibility Map assigns it to Frontend Server"
  plan: "02"
  task: 1
  capability: "Date/currency formatting for display"
  expected_tier: "Frontend Server (SSR)"
  actual_tier: "API / Backend"
  fix_hint: "Consider moving display formatting to frontend server per Architectural Responsibility Map"
```

## Dimension 8: Nyquist Compliance

Skip if: `workflow.nyquist_validation` is explicitly set to `false` in config.json (absent key = enabled), phase has no RESEARCH.md, or RESEARCH.md has no "Validation Architecture" section. Output: "Dimension 8: SKIPPED (nyquist_validation disabled or not applicable)"
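The absent-key-means-enabled rule above is easy to break with a plain truthiness check; a minimal sketch of the intended test (hypothetical helper, not checker code):

```javascript
// Nyquist validation is enabled unless the key is explicitly false.
// A plain `if (config.workflow.nyquist_validation)` would wrongly treat an
// absent key as disabled; comparing !== false keeps absent keys enabled.
function nyquistEnabled(config = {}) {
  return config.workflow?.nyquist_validation !== false;
}
```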
@@ -859,6 +909,7 @@ Plan verification complete when:
- [ ] No tasks contradict locked decisions
- [ ] Deferred ideas not included in plans
- [ ] Overall status determined (passed | issues_found)
- [ ] Architectural tier compliance checked (tasks match responsibility map tiers)
- [ ] Cross-plan data contracts checked (no conflicting transforms on shared data)
- [ ] CLAUDE.md compliance checked (plans respect project conventions)
- [ ] Structured issues returned (if any found)
@@ -35,12 +35,15 @@ If the prompt contains a `<files_to_read>` block, you MUST use the `Read` tool t
|
||||
- Return structured results to orchestrator
|
||||
</role>
|
||||
|
||||
<mcp_tool_usage>
|
||||
Use all tools available in your environment, including MCP servers. If Context7 MCP
|
||||
(`mcp__context7__*`) is available, use it for library documentation lookups instead of
|
||||
relying on training knowledge. Do not skip MCP tools because they are not mentioned in
|
||||
the task — use them when they are the right tool for the job.
|
||||
</mcp_tool_usage>
|
||||
<documentation_lookup>
|
||||
For library docs: use Context7 MCP (`mcp__context7__*`) if available. If not (upstream
|
||||
bug #13898 strips MCP from `tools:`-restricted agents), use the Bash CLI fallback:
|
||||
```bash
|
||||
npx --yes ctx7@latest library <name> "<query>" # resolve library ID
|
||||
npx --yes ctx7@latest docs <libraryId> "<query>" # fetch docs
|
||||
```
|
||||
Do not skip — the CLI fallback works via Bash and produces equivalent output.
|
||||
</documentation_lookup>
|
||||
|
||||
<project_context>
|
||||
Before planning, discover project context:
|
||||
@@ -95,38 +98,47 @@ The orchestrator provides user decisions in `<user_decisions>` tags from `/gsd-d
|
||||
- "v1", "v2", "simplified version", "static for now", "hardcoded for now"
|
||||
- "future enhancement", "placeholder", "basic version", "minimal implementation"
- "will be wired later", "dynamic in future phase", "skip for now"
- Any language that reduces a CONTEXT.md decision to less than what the user decided
- Any language that reduces a source artifact decision to less than what was specified

**The rule:** If D-XX says "display cost calculated from billing table in impulses", the plan MUST deliver cost calculated from billing table in impulses. NOT "static label /min" as a "v1".

**When the phase is too complex to implement ALL decisions:**
**When the plan set cannot cover all source items within context budget:**

Do NOT silently simplify decisions. Instead:
Do NOT silently omit features. Instead:

1. **Create a decision coverage matrix** mapping every D-XX to a plan/task
2. **If any D-XX cannot fit** within the plan budget (too many tasks, too complex):
1. **Create a multi-source coverage audit** (see below) covering ALL four artifact types
2. **If any item cannot fit** within the plan budget (context cost exceeds capacity):
   - Return `## PHASE SPLIT RECOMMENDED` to the orchestrator
   - Propose how to split: which D-XX groups form natural sub-phases
   - Example: "D-01 to D-19 = Phase 17a (processing core), D-20 to D-27 = Phase 17b (billing + config UX)"
3. The orchestrator will present the split to the user for approval
   - Propose how to split: which item groups form natural sub-phases
3. The orchestrator presents the split to the user for approval
4. After approval, plan each sub-phase within budget

**Why this matters:** The user spent time making decisions. Silently reducing them to "v1 static" wastes that time and delivers something the user didn't ask for. Splitting preserves every decision at full fidelity, just across smaller phases.
## Multi-Source Coverage Audit (MANDATORY in every plan set)

**Decision coverage matrix (MANDATORY in every plan set):**
@planner-source-audit.md for full format, examples, and gap-handling rules.

Before finalizing plans, produce internally:
Audit ALL four source types before finalizing: **GOAL** (ROADMAP phase goal), **REQ** (phase_req_ids from REQUIREMENTS.md), **RESEARCH** (RESEARCH.md features/constraints), **CONTEXT** (D-XX decisions from CONTEXT.md).

```
D-XX | Plan | Task | Full/Partial | Notes
D-01 | 01 | 1 | Full |
D-02 | 01 | 2 | Full |
D-23 | 03 | 1 | PARTIAL | ← BLOCKER: must be Full or split phase
```
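The audit rule above can be sketched as a small check over the matrix rows. This is an illustrative sketch — `auditCoverage`, the row shape, and the field names are hypothetical, not part of the GSD codebase:

```javascript
// Hypothetical coverage-audit check: every source item must map to a row,
// and no row may be PARTIAL. Row/field names are illustrative only.
function auditCoverage(sourceItems, matrixRows) {
  const covered = new Map(matrixRows.map(r => [r.id, r.coverage]));
  const missing = sourceItems.filter(id => !covered.has(id));
  const partial = sourceItems.filter(id => covered.get(id) === 'PARTIAL');
  return { missing, partial, clean: missing.length === 0 && partial.length === 0 };
}

const result = auditCoverage(
  ['D-01', 'D-02', 'D-23'],
  [
    { id: 'D-01', plan: '01', task: 1, coverage: 'Full' },
    { id: 'D-02', plan: '01', task: 2, coverage: 'Full' },
    { id: 'D-23', plan: '03', task: 1, coverage: 'PARTIAL' },
  ]
);
// result.partial is ['D-23'] — a blocker: fix the task or split the phase
```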
Every item must be COVERED by a plan. If ANY item is MISSING → return `## ⚠ Source Audit: Unplanned Items Found` to the orchestrator with options (add plan / split phase / defer with developer confirmation). Never finalize silently with gaps.

If ANY decision is "Partial" → either fix the task to deliver fully, or return PHASE SPLIT RECOMMENDED.
Exclusions (not gaps): Deferred Ideas in CONTEXT.md, items scoped to other phases, RESEARCH.md "out of scope" items.
</scope_reduction_prohibition>

<planner_authority_limits>
## The Planner Does Not Decide What Is Too Hard

@planner-source-audit.md for constraint examples.

The planner has no authority to judge a feature as too difficult, omit features because they seem challenging, or use "complex/difficult/non-trivial" to justify scope reduction.

**Only three legitimate reasons to split or flag:**
1. **Context cost:** implementation would consume >50% of a single agent's context window
2. **Missing information:** required data not present in any source artifact
3. **Dependency conflict:** feature cannot be built until another phase ships

If a feature has none of these three constraints, it gets planned. Period.
</planner_authority_limits>

<philosophy>

## Solo Developer + Claude Workflow
@@ -134,7 +146,7 @@ If ANY decision is "Partial" → either fix the task to deliver fully, or return
Planning for ONE person (the user) and ONE implementer (Claude).
- No teams, stakeholders, ceremonies, coordination overhead
- User = visionary/product owner, Claude = builder
- Estimate effort in Claude execution time, not human dev time
- Estimate effort in context window cost, not time

## Plans Are Prompts

@@ -162,7 +174,8 @@ Plan -> Execute -> Ship -> Learn -> Repeat
**Anti-enterprise patterns (delete if seen):**
- Team structures, RACI matrices, stakeholder management
- Sprint ceremonies, change management processes
- Human dev time estimates (hours, days, weeks)
- Time estimates in human units (see `<planner_authority_limits>`)
- Complexity/difficulty as scope justification (see `<planner_authority_limits>`)
- Documentation for documentation's sake

</philosophy>
@@ -243,13 +256,19 @@ Every task has four required fields:

## Task Sizing

Each task: **15-60 minutes** Claude execution time.
Each task targets **10–30% context consumption**.

| Duration | Action |
|----------|--------|
| < 15 min | Too small — combine with related task |
| 15-60 min | Right size |
| > 60 min | Too large — split |
| Context Cost | Action |
|--------------|--------|
| < 10% context | Too small — combine with a related task |
| 10-30% context | Right size — proceed |
| > 30% context | Too large — split into two tasks |

**Context cost signals (use these, not time estimates):**
- Files modified: 0-3 = ~10-15%, 4-6 = ~20-30%, 7+ = ~40%+ (split)
- New subsystem: ~25-35%
- Migration + data transform: ~30-40%
- Pure config/wiring: ~5-10%
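A rough way to read the signals above as a single estimator. The function name, its parameters, and the band strings are illustrative assumptions mirroring the heuristics in this document, not GSD code:

```javascript
// Illustrative mapping of the context-cost signals to a rough band.
// Thresholds follow the bullet list above; shape is hypothetical.
function estimateContextCost({ filesModified = 0, newSubsystem = false, migration = false, pureConfig = false }) {
  if (pureConfig) return { band: '5-10%', action: 'combine with a related task' };
  if (migration) return { band: '30-40%', action: 'proceed or split' };
  if (newSubsystem) return { band: '25-35%', action: 'proceed' };
  if (filesModified >= 7) return { band: '40%+', action: 'split' };
  if (filesModified >= 4) return { band: '20-30%', action: 'proceed' };
  return { band: '10-15%', action: 'proceed' };
}

// A 5-file task lands in the "right size" 10-30% window:
const est = estimateContextCost({ filesModified: 5 });
```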

**Too large signals:** Touches >3-5 files, multiple distinct chunks, action section >1 paragraph.

@@ -265,17 +284,9 @@ When a plan creates new interfaces consumed by subsequent tasks:

This prevents the "scavenger hunt" anti-pattern where executors explore the codebase to understand contracts. They receive the contracts in the plan itself.

## Specificity Examples
## Specificity

| TOO VAGUE | JUST RIGHT |
|-----------|------------|
| "Add authentication" | "Add JWT auth with refresh rotation using jose library, store in httpOnly cookie, 15min access / 7day refresh" |
| "Create the API" | "Create POST /api/projects endpoint accepting {name, description}, validates name length 3-50 chars, returns 201 with project object" |
| "Style the dashboard" | "Add Tailwind classes to Dashboard.tsx: grid layout (3 cols on lg, 1 on mobile), card shadows, hover states on action buttons" |
| "Handle errors" | "Wrap API calls in try/catch, return {error: string} on 4xx/5xx, show toast via sonner on client" |
| "Set up the database" | "Add User and Project models to schema.prisma with UUID ids, email unique constraint, createdAt/updatedAt timestamps, run prisma db push" |

**Test:** Could a different Claude instance execute without asking clarifying questions? If not, add specificity.
**Test:** Could a different Claude instance execute without asking clarifying questions? If not, add specificity. See @~/.claude/get-shit-done/references/planner-antipatterns.md for vague-vs-specific comparison table.

## TDD Detection

@@ -333,49 +344,9 @@ Record in `user_setup` frontmatter. Only include what Claude literally cannot do
- `creates`: What this produces
- `has_checkpoint`: Requires user interaction?

**Example with 6 tasks:**
**Example:** A→C, B→D, C+D→E, E→F(checkpoint). Waves: {A,B} → {C,D} → {E} → {F}.

```
Task A (User model): needs nothing, creates src/models/user.ts
Task B (Product model): needs nothing, creates src/models/product.ts
Task C (User API): needs Task A, creates src/api/users.ts
Task D (Product API): needs Task B, creates src/api/products.ts
Task E (Dashboard): needs Task C + D, creates src/components/Dashboard.tsx
Task F (Verify UI): checkpoint:human-verify, needs Task E

Graph:
A --> C --\
            --> E --> F
B --> D --/

Wave analysis:
Wave 1: A, B (independent roots)
Wave 2: C, D (depend only on Wave 1)
Wave 3: E (depends on Wave 2)
Wave 4: F (checkpoint, depends on Wave 3)
```
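The wave analysis above is a topological leveling of the dependency graph: each task joins the earliest wave in which all of its dependencies are complete. A minimal sketch, assuming tasks are given as a name → dependencies map (the `computeWaves` function itself is illustrative, not part of GSD):

```javascript
// Assign each task the earliest wave where all its `needs` are done.
// Throws if the dependency graph contains a cycle.
function computeWaves(tasks) {
  const wave = {};
  let assigned = 0;
  const total = Object.keys(tasks).length;
  while (assigned < total) {
    let progress = false;
    for (const [name, needs] of Object.entries(tasks)) {
      if (wave[name] !== undefined) continue;
      if (needs.every(n => wave[n] !== undefined)) {
        // Roots start in wave 1; others go one wave after their latest dependency.
        wave[name] = needs.length ? Math.max(...needs.map(n => wave[n])) + 1 : 1;
        assigned++;
        progress = true;
      }
    }
    if (!progress) throw new Error('Dependency cycle detected');
  }
  return wave;
}

const waves = computeWaves({ A: [], B: [], C: ['A'], D: ['B'], E: ['C', 'D'], F: ['E'] });
// → { A: 1, B: 1, C: 2, D: 2, E: 3, F: 4 }, matching Waves 1-4 above
```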

## Vertical Slices vs Horizontal Layers

**Vertical slices (PREFER):**
```
Plan 01: User feature (model + API + UI)
Plan 02: Product feature (model + API + UI)
Plan 03: Order feature (model + API + UI)
```
Result: All three run parallel (Wave 1)

**Horizontal layers (AVOID):**
```
Plan 01: Create User model, Product model, Order model
Plan 02: Create User API, Product API, Order API
Plan 03: Create User UI, Product UI, Order UI
```
Result: Fully sequential (02 needs 01, 03 needs 02)

**When vertical slices work:** Features are independent, self-contained, no cross-feature dependencies.

**When horizontal layers necessary:** Shared foundation required (auth before protected features), genuine type dependencies, infrastructure setup.
**Prefer vertical slices** (User feature: model+API+UI) over horizontal layers (all models → all APIs → all UIs). Vertical = parallel. Horizontal = sequential. Use horizontal only when shared foundation is required.

## File Ownership for Parallel Execution

@@ -401,11 +372,11 @@ Plans should complete within ~50% context (not 80%). No context anxiety, quality

**Each plan: 2-3 tasks maximum.**

| Task Complexity | Tasks/Plan | Context/Task | Total |
|-----------------|------------|--------------|-------|
| Simple (CRUD, config) | 3 | ~10-15% | ~30-45% |
| Complex (auth, payments) | 2 | ~20-30% | ~40-50% |
| Very complex (migrations) | 1-2 | ~30-40% | ~30-50% |
| Context Weight | Tasks/Plan | Context/Task | Total |
|----------------|------------|--------------|-------|
| Light (CRUD, config) | 3 | ~10-15% | ~30-45% |
| Medium (auth, payments) | 2 | ~20-30% | ~40-50% |
| Heavy (migrations, multi-subsystem) | 1-2 | ~30-40% | ~30-50% |

## Split Signals

@@ -416,7 +387,7 @@ Plans should complete within ~50% context (not 80%). No context anxiety, quality
- Checkpoint + implementation in same plan
- Discovery + implementation in same plan

**CONSIDER splitting:** >5 files total, complex domains, uncertainty about approach, natural semantic boundaries.
**CONSIDER splitting:** >5 files total, natural semantic boundaries, context cost estimate exceeds 40% for a single plan. See `<planner_authority_limits>` for prohibited split reasons.

## Granularity Calibration

@@ -426,22 +397,7 @@ Plans should complete within ~50% context (not 80%). No context anxiety, quality
| Standard | 3-5 | 2-3 |
| Fine | 5-10 | 2-3 |

Derive plans from actual work. Granularity determines compression tolerance, not a target. Don't pad small work to hit a number. Don't compress complex work to look efficient.

## Context Per Task Estimates

| Files Modified | Context Impact |
|----------------|----------------|
| 0-3 files | ~10-15% (small) |
| 4-6 files | ~20-30% (medium) |
| 7+ files | ~40%+ (split) |

| Complexity | Context/Task |
|------------|--------------|
| Simple CRUD | ~15% |
| Business logic | ~25% |
| Complex algorithms | ~40% |
| Domain modeling | ~35% |
Derive plans from actual work. Granularity determines compression tolerance, not a target.

</scope_estimation>

@@ -794,36 +750,10 @@ When Claude tries CLI/API and gets auth error → creates checkpoint → user au

**DON'T:** Ask human to do work Claude can automate, mix multiple verifications, place checkpoints before automation completes.

## Anti-Patterns
## Anti-Patterns and Extended Examples

**Bad - Asking human to automate:**
```xml
<task type="checkpoint:human-action">
  <action>Deploy to Vercel</action>
  <instructions>Visit vercel.com, import repo, click deploy...</instructions>
</task>
```
Why bad: Vercel has a CLI. Claude should run `vercel --yes`.

**Bad - Too many checkpoints:**
```xml
<task type="auto">Create schema</task>
<task type="checkpoint:human-verify">Check schema</task>
<task type="auto">Create API</task>
<task type="checkpoint:human-verify">Check API</task>
```
Why bad: Verification fatigue. Combine into one checkpoint at end.

**Good - Single verification checkpoint:**
```xml
<task type="auto">Create schema</task>
<task type="auto">Create API</task>
<task type="auto">Create UI</task>
<task type="checkpoint:human-verify">
  <what-built>Complete auth flow (schema + API + UI)</what-built>
  <how-to-verify>Test full flow: register, login, access protected page</how-to-verify>
</task>
```
For checkpoint anti-patterns, specificity comparison tables, context section anti-patterns, and scope reduction patterns:
@~/.claude/get-shit-done/references/planner-antipatterns.md

</checkpoints>

@@ -1023,6 +953,8 @@ cat "$phase_dir"/*-DISCOVERY.md 2>/dev/null # From mandatory discovery
**If CONTEXT.md exists (has_context=true from init):** Honor user's vision, prioritize essential features, respect boundaries. Locked decisions — do not revisit.

**If RESEARCH.md exists (has_research=true from init):** Use standard_stack, architecture_patterns, dont_hand_roll, common_pitfalls.

**Architectural Responsibility Map sanity check:** If RESEARCH.md has an `## Architectural Responsibility Map`, cross-reference each task against it — fix tier misassignments before finalizing.
</step>

<step name="break_into_tasks">

@@ -32,6 +32,29 @@ Your files feed the roadmap:
**Be comprehensive but opinionated.** "Use X because Y" not "Options are X, Y, Z."
</role>

<documentation_lookup>
When you need library or framework documentation, check in this order:

1. If Context7 MCP tools (`mcp__context7__*`) are available in your environment, use them:
   - Resolve library ID: `mcp__context7__resolve-library-id` with `libraryName`
   - Fetch docs: `mcp__context7__get-library-docs` with `context7CompatibleLibraryId` and `topic`

2. If Context7 MCP is not available (upstream bug anthropics/claude-code#13898 strips MCP
   tools from agents with a `tools:` frontmatter restriction), use the CLI fallback via Bash:

   Step 1 — Resolve library ID:
   ```bash
   npx --yes ctx7@latest library <name> "<query>"
   ```
   Step 2 — Fetch documentation:
   ```bash
   npx --yes ctx7@latest docs <libraryId> "<query>"
   ```

Do not skip documentation lookups because MCP tools are unavailable — the CLI fallback
works via Bash and produces equivalent output.
</documentation_lookup>

<philosophy>

## Training Data = Hypothesis

@@ -27,6 +27,29 @@ If the prompt contains a `<files_to_read>` block, you MUST use the `Read` tool t
- Return structured result to orchestrator
</role>

<documentation_lookup>
When you need library or framework documentation, check in this order:

1. If Context7 MCP tools (`mcp__context7__*`) are available in your environment, use them:
   - Resolve library ID: `mcp__context7__resolve-library-id` with `libraryName`
   - Fetch docs: `mcp__context7__get-library-docs` with `context7CompatibleLibraryId` and `topic`

2. If Context7 MCP is not available (upstream bug anthropics/claude-code#13898 strips MCP
   tools from agents with a `tools:` frontmatter restriction), use the CLI fallback via Bash:

   Step 1 — Resolve library ID:
   ```bash
   npx --yes ctx7@latest library <name> "<query>"
   ```
   Step 2 — Fetch documentation:
   ```bash
   npx --yes ctx7@latest docs <libraryId> "<query>"
   ```

Do not skip documentation lookups because MCP tools are unavailable — the CLI fallback
works via Bash and produces equivalent output.
</documentation_lookup>

<project_context>
Before researching, discover project context:

110 bin/install.js
@@ -70,6 +70,7 @@ const hasCursor = args.includes('--cursor');
const hasWindsurf = args.includes('--windsurf');
const hasAugment = args.includes('--augment');
const hasTrae = args.includes('--trae');
const hasQwen = args.includes('--qwen');
const hasCodebuddy = args.includes('--codebuddy');
const hasCline = args.includes('--cline');
const hasBoth = args.includes('--both'); // Legacy flag, keeps working
@@ -79,7 +80,7 @@ const hasUninstall = args.includes('--uninstall') || args.includes('-u');
// Runtime selection - can be set by flags or interactive prompt
let selectedRuntimes = [];
if (hasAll) {
  selectedRuntimes = ['claude', 'kilo', 'opencode', 'gemini', 'codex', 'copilot', 'antigravity', 'cursor', 'windsurf', 'augment', 'trae', 'codebuddy', 'cline'];
  selectedRuntimes = ['claude', 'kilo', 'opencode', 'gemini', 'codex', 'copilot', 'antigravity', 'cursor', 'windsurf', 'augment', 'trae', 'qwen', 'codebuddy', 'cline'];
} else if (hasBoth) {
  selectedRuntimes = ['claude', 'opencode'];
} else {
@@ -94,6 +95,7 @@ if (hasAll) {
  if (hasWindsurf) selectedRuntimes.push('windsurf');
  if (hasAugment) selectedRuntimes.push('augment');
  if (hasTrae) selectedRuntimes.push('trae');
  if (hasQwen) selectedRuntimes.push('qwen');
  if (hasCodebuddy) selectedRuntimes.push('codebuddy');
  if (hasCline) selectedRuntimes.push('cline');
}
@@ -144,6 +146,7 @@ function getDirName(runtime) {
  if (runtime === 'windsurf') return '.windsurf';
  if (runtime === 'augment') return '.augment';
  if (runtime === 'trae') return '.trae';
  if (runtime === 'qwen') return '.qwen';
  if (runtime === 'codebuddy') return '.codebuddy';
  if (runtime === 'cline') return '.cline';
  return '.claude';
@@ -178,6 +181,7 @@ function getConfigDirFromHome(runtime, isGlobal) {
  if (runtime === 'windsurf') return "'.windsurf'";
  if (runtime === 'augment') return "'.augment'";
  if (runtime === 'trae') return "'.trae'";
  if (runtime === 'qwen') return "'.qwen'";
  if (runtime === 'codebuddy') return "'.codebuddy'";
  if (runtime === 'cline') return "'.cline'";
  return "'.claude'";
@@ -342,6 +346,16 @@ function getGlobalDir(runtime, explicitDir = null) {
    return path.join(os.homedir(), '.trae');
  }

  if (runtime === 'qwen') {
    if (explicitDir) {
      return expandTilde(explicitDir);
    }
    if (process.env.QWEN_CONFIG_DIR) {
      return expandTilde(process.env.QWEN_CONFIG_DIR);
    }
    return path.join(os.homedir(), '.qwen');
  }

  if (runtime === 'codebuddy') {
    // CodeBuddy: --config-dir > CODEBUDDY_CONFIG_DIR > ~/.codebuddy
    if (explicitDir) {
@@ -384,7 +398,7 @@ const banner = '\n' +
  '\n' +
  ' Get Shit Done ' + dim + 'v' + pkg.version + reset + '\n' +
  ' A meta-prompting, context engineering and spec-driven\n' +
  ' development system for Claude Code, OpenCode, Gemini, Kilo, Codex, Copilot, Antigravity, Cursor, Windsurf, Augment, Trae, Cline and CodeBuddy by TÂCHES.\n';
  ' development system for Claude Code, OpenCode, Gemini, Kilo, Codex, Copilot, Antigravity, Cursor, Windsurf, Augment, Trae, Qwen Code, Cline and CodeBuddy by TÂCHES.\n';

// Parse --config-dir argument
function parseConfigDirArg() {
@@ -422,7 +436,7 @@ if (hasUninstall) {

// Show help if requested
if (hasHelp) {
  console.log(` ${yellow}Usage:${reset} npx get-shit-done-cc [options]\n\n ${yellow}Options:${reset}\n ${cyan}-g, --global${reset} Install globally (to config directory)\n ${cyan}-l, --local${reset} Install locally (to current directory)\n ${cyan}--claude${reset} Install for Claude Code only\n ${cyan}--opencode${reset} Install for OpenCode only\n ${cyan}--gemini${reset} Install for Gemini only\n ${cyan}--kilo${reset} Install for Kilo only\n ${cyan}--codex${reset} Install for Codex only\n ${cyan}--copilot${reset} Install for Copilot only\n ${cyan}--antigravity${reset} Install for Antigravity only\n ${cyan}--cursor${reset} Install for Cursor only\n ${cyan}--windsurf${reset} Install for Windsurf only\n ${cyan}--augment${reset} Install for Augment only\n ${cyan}--trae${reset} Install for Trae only\n ${cyan}--cline${reset} Install for Cline only\n ${cyan}--codebuddy${reset} Install for CodeBuddy only\n ${cyan}--all${reset} Install for all runtimes\n ${cyan}-u, --uninstall${reset} Uninstall GSD (remove all GSD files)\n ${cyan}-c, --config-dir <path>${reset} Specify custom config directory\n ${cyan}-h, --help${reset} Show this help message\n ${cyan}--force-statusline${reset} Replace existing statusline config\n\n ${yellow}Examples:${reset}\n ${dim}# Interactive install (prompts for runtime and location)${reset}\n npx get-shit-done-cc\n\n ${dim}# Install for Claude Code globally${reset}\n npx get-shit-done-cc --claude --global\n\n ${dim}# Install for Gemini globally${reset}\n npx get-shit-done-cc --gemini --global\n\n ${dim}# Install for Kilo globally${reset}\n npx get-shit-done-cc --kilo --global\n\n ${dim}# Install for Codex globally${reset}\n npx get-shit-done-cc --codex --global\n\n ${dim}# Install for Copilot globally${reset}\n npx get-shit-done-cc --copilot --global\n\n ${dim}# Install for Copilot locally${reset}\n npx get-shit-done-cc --copilot --local\n\n ${dim}# Install for Antigravity globally${reset}\n npx get-shit-done-cc --antigravity --global\n\n ${dim}# Install for Antigravity locally${reset}\n npx get-shit-done-cc --antigravity --local\n\n ${dim}# Install for Cursor globally${reset}\n npx get-shit-done-cc --cursor --global\n\n ${dim}# Install for Cursor locally${reset}\n npx get-shit-done-cc --cursor --local\n\n ${dim}# Install for Windsurf globally${reset}\n npx get-shit-done-cc --windsurf --global\n\n ${dim}# Install for Windsurf locally${reset}\n npx get-shit-done-cc --windsurf --local\n\n ${dim}# Install for Augment globally${reset}\n npx get-shit-done-cc --augment --global\n\n ${dim}# Install for Augment locally${reset}\n npx get-shit-done-cc --augment --local\n\n ${dim}# Install for Trae globally${reset}\n npx get-shit-done-cc --trae --global\n\n ${dim}# Install for Trae locally${reset}\n npx get-shit-done-cc --trae --local\n\n ${dim}# Install for Cline locally${reset}\n npx get-shit-done-cc --cline --local\n\n ${dim}# Install for CodeBuddy globally${reset}\n npx get-shit-done-cc --codebuddy --global\n\n ${dim}# Install for CodeBuddy locally${reset}\n npx get-shit-done-cc --codebuddy --local\n\n ${dim}# Install for all runtimes globally${reset}\n npx get-shit-done-cc --all --global\n\n ${dim}# Install to custom config directory${reset}\n npx get-shit-done-cc --kilo --global --config-dir ~/.kilo-work\n\n ${dim}# Install to current project only${reset}\n npx get-shit-done-cc --claude --local\n\n ${dim}# Uninstall GSD from Cursor globally${reset}\n npx get-shit-done-cc --cursor --global --uninstall\n\n ${yellow}Notes:${reset}\n The --config-dir option is useful when you have multiple configurations.\n It takes priority over CLAUDE_CONFIG_DIR / OPENCODE_CONFIG_DIR / GEMINI_CONFIG_DIR / KILO_CONFIG_DIR / CODEX_HOME / COPILOT_CONFIG_DIR / ANTIGRAVITY_CONFIG_DIR / CURSOR_CONFIG_DIR / WINDSURF_CONFIG_DIR / AUGMENT_CONFIG_DIR / TRAE_CONFIG_DIR / CLINE_CONFIG_DIR / CODEBUDDY_CONFIG_DIR environment variables.\n`);
  console.log(` ${yellow}Usage:${reset} npx get-shit-done-cc [options]\n\n ${yellow}Options:${reset}\n ${cyan}-g, --global${reset} Install globally (to config directory)\n ${cyan}-l, --local${reset} Install locally (to current directory)\n ${cyan}--claude${reset} Install for Claude Code only\n ${cyan}--opencode${reset} Install for OpenCode only\n ${cyan}--gemini${reset} Install for Gemini only\n ${cyan}--kilo${reset} Install for Kilo only\n ${cyan}--codex${reset} Install for Codex only\n ${cyan}--copilot${reset} Install for Copilot only\n ${cyan}--antigravity${reset} Install for Antigravity only\n ${cyan}--cursor${reset} Install for Cursor only\n ${cyan}--windsurf${reset} Install for Windsurf only\n ${cyan}--augment${reset} Install for Augment only\n ${cyan}--trae${reset} Install for Trae only\n ${cyan}--qwen${reset} Install for Qwen Code only\n ${cyan}--cline${reset} Install for Cline only\n ${cyan}--codebuddy${reset} Install for CodeBuddy only\n ${cyan}--all${reset} Install for all runtimes\n ${cyan}-u, --uninstall${reset} Uninstall GSD (remove all GSD files)\n ${cyan}-c, --config-dir <path>${reset} Specify custom config directory\n ${cyan}-h, --help${reset} Show this help message\n ${cyan}--force-statusline${reset} Replace existing statusline config\n\n ${yellow}Examples:${reset}\n ${dim}# Interactive install (prompts for runtime and location)${reset}\n npx get-shit-done-cc\n\n ${dim}# Install for Claude Code globally${reset}\n npx get-shit-done-cc --claude --global\n\n ${dim}# Install for Gemini globally${reset}\n npx get-shit-done-cc --gemini --global\n\n ${dim}# Install for Kilo globally${reset}\n npx get-shit-done-cc --kilo --global\n\n ${dim}# Install for Codex globally${reset}\n npx get-shit-done-cc --codex --global\n\n ${dim}# Install for Copilot globally${reset}\n npx get-shit-done-cc --copilot --global\n\n ${dim}# Install for Copilot locally${reset}\n npx get-shit-done-cc --copilot --local\n\n ${dim}# Install for Antigravity globally${reset}\n npx get-shit-done-cc --antigravity --global\n\n ${dim}# Install for Antigravity locally${reset}\n npx get-shit-done-cc --antigravity --local\n\n ${dim}# Install for Cursor globally${reset}\n npx get-shit-done-cc --cursor --global\n\n ${dim}# Install for Cursor locally${reset}\n npx get-shit-done-cc --cursor --local\n\n ${dim}# Install for Windsurf globally${reset}\n npx get-shit-done-cc --windsurf --global\n\n ${dim}# Install for Windsurf locally${reset}\n npx get-shit-done-cc --windsurf --local\n\n ${dim}# Install for Augment globally${reset}\n npx get-shit-done-cc --augment --global\n\n ${dim}# Install for Augment locally${reset}\n npx get-shit-done-cc --augment --local\n\n ${dim}# Install for Trae globally${reset}\n npx get-shit-done-cc --trae --global\n\n ${dim}# Install for Trae locally${reset}\n npx get-shit-done-cc --trae --local\n\n ${dim}# Install for Cline locally${reset}\n npx get-shit-done-cc --cline --local\n\n ${dim}# Install for CodeBuddy globally${reset}\n npx get-shit-done-cc --codebuddy --global\n\n ${dim}# Install for CodeBuddy locally${reset}\n npx get-shit-done-cc --codebuddy --local\n\n ${dim}# Install for all runtimes globally${reset}\n npx get-shit-done-cc --all --global\n\n ${dim}# Install to custom config directory${reset}\n npx get-shit-done-cc --kilo --global --config-dir ~/.kilo-work\n\n ${dim}# Install to current project only${reset}\n npx get-shit-done-cc --claude --local\n\n ${dim}# Uninstall GSD from Cursor globally${reset}\n npx get-shit-done-cc --cursor --global --uninstall\n\n ${yellow}Notes:${reset}\n The --config-dir option is useful when you have multiple configurations.\n It takes priority over CLAUDE_CONFIG_DIR / OPENCODE_CONFIG_DIR / GEMINI_CONFIG_DIR / KILO_CONFIG_DIR / CODEX_HOME / COPILOT_CONFIG_DIR / ANTIGRAVITY_CONFIG_DIR / CURSOR_CONFIG_DIR / WINDSURF_CONFIG_DIR / AUGMENT_CONFIG_DIR / TRAE_CONFIG_DIR / QWEN_CONFIG_DIR / CLINE_CONFIG_DIR / CODEBUDDY_CONFIG_DIR environment variables.\n`);
  process.exit(0);
}

@@ -3939,7 +3953,10 @@ function copyCommandsAsClaudeSkills(srcDir, skillsDir, prefix, pathPrefix, runti
  content = content.replace(/~\/\.claude\//g, pathPrefix);
  content = content.replace(/\$HOME\/\.claude\//g, pathPrefix);
  content = content.replace(/\.\/\.claude\//g, `./${getDirName(runtime)}/`);
  content = processAttribution(content, getCommitAttribution('claude'));
  content = content.replace(/~\/\.qwen\//g, pathPrefix);
  content = content.replace(/\$HOME\/\.qwen\//g, pathPrefix);
  content = content.replace(/\.\/\.qwen\//g, `./${getDirName(runtime)}/`);
  content = processAttribution(content, getCommitAttribution(runtime));
  content = convertClaudeCommandToClaudeSkill(content, skillName);

  fs.writeFileSync(path.join(skillDir, 'SKILL.md'), content);
@@ -4057,6 +4074,7 @@ function copyWithPathReplacement(srcDir, destDir, pathPrefix, runtime, isCommand
  const isWindsurf = runtime === 'windsurf';
  const isAugment = runtime === 'augment';
  const isTrae = runtime === 'trae';
  const isQwen = runtime === 'qwen';
  const isCline = runtime === 'cline';
  const dirName = getDirName(runtime);

@@ -4085,6 +4103,9 @@ function copyWithPathReplacement(srcDir, destDir, pathPrefix, runtime, isCommand
    content = content.replace(globalClaudeRegex, pathPrefix);
    content = content.replace(globalClaudeHomeRegex, pathPrefix);
    content = content.replace(localClaudeRegex, `./${dirName}/`);
    content = content.replace(/~\/\.qwen\//g, pathPrefix);
    content = content.replace(/\$HOME\/\.qwen\//g, pathPrefix);
    content = content.replace(/\.\/\.qwen\//g, `./${dirName}/`);
  }
  content = processAttribution(content, getCommitAttribution(runtime));

@@ -4128,6 +4149,10 @@ function copyWithPathReplacement(srcDir, destDir, pathPrefix, runtime, isCommand
  } else if (isCline) {
    content = convertClaudeToCliineMarkdown(content);
    fs.writeFileSync(destPath, content);
  } else if (isQwen) {
    content = content.replace(/CLAUDE\.md/g, 'QWEN.md');
    content = content.replace(/\bClaude Code\b/g, 'Qwen Code');
    fs.writeFileSync(destPath, content);
  } else {
    fs.writeFileSync(destPath, content);
  }
@@ -4172,6 +4197,13 @@ function copyWithPathReplacement(srcDir, destDir, pathPrefix, runtime, isCommand
    jsContent = jsContent.replace(/CLAUDE\.md/g, '.clinerules');
    jsContent = jsContent.replace(/\bClaude Code\b/g, 'Cline');
    fs.writeFileSync(destPath, jsContent);
  } else if (isQwen && (entry.name.endsWith('.cjs') || entry.name.endsWith('.js'))) {
    let jsContent = fs.readFileSync(srcPath, 'utf8');
    jsContent = jsContent.replace(/\.claude\/skills\//g, '.qwen/skills/');
    jsContent = jsContent.replace(/\.claude\//g, '.qwen/');
    jsContent = jsContent.replace(/CLAUDE\.md/g, 'QWEN.md');
    jsContent = jsContent.replace(/\bClaude Code\b/g, 'Qwen Code');
    fs.writeFileSync(destPath, jsContent);
  } else {
    fs.copyFileSync(srcPath, destPath);
  }
@@ -4349,6 +4381,7 @@ function uninstall(isGlobal, runtime = 'claude') {
  const isWindsurf = runtime === 'windsurf';
  const isAugment = runtime === 'augment';
  const isTrae = runtime === 'trae';
  const isQwen = runtime === 'qwen';
  const isCodebuddy = runtime === 'codebuddy';
  const dirName = getDirName(runtime);

@@ -4372,6 +4405,1 @@ function uninstall(isGlobal, runtime = 'claude') {
  if (runtime === 'windsurf') runtimeLabel = 'Windsurf';
  if (runtime === 'augment') runtimeLabel = 'Augment';
  if (runtime === 'trae') runtimeLabel = 'Trae';
  if (runtime === 'qwen') runtimeLabel = 'Qwen Code';
  if (runtime === 'codebuddy') runtimeLabel = 'CodeBuddy';

  console.log(` Uninstalling GSD from ${cyan}${runtimeLabel}${reset} at ${cyan}${locationLabel}${reset}\n`);
@@ -4502,6 +4536,31 @@ function uninstall(isGlobal, runtime = 'claude') {
        console.log(` ${green}✓${reset} Removed ${skillCount} Antigravity skills`);
      }
    }
  } else if (isQwen) {
    const skillsDir = path.join(targetDir, 'skills');
    if (fs.existsSync(skillsDir)) {
      let skillCount = 0;
      const entries = fs.readdirSync(skillsDir, { withFileTypes: true });
      for (const entry of entries) {
        if (entry.isDirectory() && entry.name.startsWith('gsd-')) {
          fs.rmSync(path.join(skillsDir, entry.name), { recursive: true });
          skillCount++;
        }
      }
      if (skillCount > 0) {
        removedCount++;
        console.log(` ${green}✓${reset} Removed ${skillCount} Qwen Code skills`);
      }
    }

    const legacyCommandsDir = path.join(targetDir, 'commands', 'gsd');
    if (fs.existsSync(legacyCommandsDir)) {
      const savedLegacyArtifacts = preserveUserArtifacts(legacyCommandsDir, ['dev-preferences.md']);
      fs.rmSync(legacyCommandsDir, { recursive: true });
      removedCount++;
      console.log(` ${green}✓${reset} Removed legacy commands/gsd/`);
      restoreUserArtifacts(legacyCommandsDir, savedLegacyArtifacts);
    }
  } else if (isGemini) {
    // Gemini: still uses commands/gsd/
    const gsdCommandsDir = path.join(targetDir, 'commands', 'gsd');
@@ -5298,6 +5357,7 @@ function install(isGlobal, runtime = 'claude') {
  const isWindsurf = runtime === 'windsurf';
  const isAugment = runtime === 'augment';
  const isTrae = runtime === 'trae';
  const isQwen = runtime === 'qwen';
  const isCodebuddy = runtime === 'codebuddy';
  const isCline = runtime === 'cline';
  const dirName = getDirName(runtime);
@@ -5338,6 +5398,1 @@ function install(isGlobal, runtime = 'claude') {
  if (isWindsurf) runtimeLabel = 'Windsurf';
  if (isAugment) runtimeLabel = 'Augment';
  if (isTrae) runtimeLabel = 'Trae';
  if (isQwen) runtimeLabel = 'Qwen Code';
  if (isCodebuddy) runtimeLabel = 'CodeBuddy';
  if (isCline) runtimeLabel = 'Cline';

@@ -5447,6 +5508,29 @@ function install(isGlobal, runtime = 'claude') {
    } else {
      failures.push('skills/gsd-*');
    }
  } else if (isQwen) {
    const skillsDir = path.join(targetDir, 'skills');
    const gsdSrc = path.join(src, 'commands', 'gsd');
    copyCommandsAsClaudeSkills(gsdSrc, skillsDir, 'gsd', pathPrefix, runtime, isGlobal);
    if (fs.existsSync(skillsDir)) {
      const count = fs.readdirSync(skillsDir, { withFileTypes: true })
        .filter(e => e.isDirectory() && e.name.startsWith('gsd-')).length;
      if (count > 0) {
        console.log(` ${green}✓${reset} Installed ${count} skills to skills/`);
      } else {
        failures.push('skills/gsd-*');
      }
    } else {
      failures.push('skills/gsd-*');
    }

    const legacyCommandsDir = path.join(targetDir, 'commands', 'gsd');
    if (fs.existsSync(legacyCommandsDir)) {
      const savedLegacyArtifacts = preserveUserArtifacts(legacyCommandsDir, ['dev-preferences.md']);
      fs.rmSync(legacyCommandsDir, { recursive: true });
||||
console.log(` ${green}✓${reset} Removed legacy commands/gsd/ directory`);
|
||||
restoreUserArtifacts(legacyCommandsDir, savedLegacyArtifacts);
|
||||
}
|
||||
} else if (isCodebuddy) {
|
||||
const skillsDir = path.join(targetDir, 'skills');
|
||||
const gsdSrc = path.join(src, 'commands', 'gsd');
|
||||
@@ -6188,6 +6272,7 @@ function finishInstall(settingsPath, settings, statuslineCommand, shouldInstallS
|
||||
if (runtime === 'augment') program = 'Augment';
|
||||
if (runtime === 'trae') program = 'Trae';
|
||||
if (runtime === 'cline') program = 'Cline';
|
||||
if (runtime === 'qwen') program = 'Qwen Code';
|
||||
|
||||
let command = '/gsd-new-project';
|
||||
if (runtime === 'opencode') command = '/gsd-new-project';
|
||||
@@ -6200,6 +6285,7 @@ function finishInstall(settingsPath, settings, statuslineCommand, shouldInstallS
|
||||
if (runtime === 'augment') command = '/gsd-new-project';
|
||||
if (runtime === 'trae') command = '/gsd-new-project';
|
||||
if (runtime === 'cline') command = '/gsd-new-project';
|
||||
if (runtime === 'qwen') command = '/gsd-new-project';
|
||||
console.log(`
|
||||
${green}Done!${reset} Open a blank directory in ${program} and run ${cyan}${command}${reset}.
|
||||
|
||||
@@ -6289,10 +6375,11 @@ function promptRuntime(callback) {
|
||||
'9': 'gemini',
|
||||
'10': 'kilo',
|
||||
'11': 'opencode',
|
||||
'12': 'trae',
|
||||
'13': 'windsurf'
|
||||
'12': 'qwen',
|
||||
'13': 'trae',
|
||||
'14': 'windsurf'
|
||||
};
|
||||
const allRuntimes = ['claude', 'antigravity', 'augment', 'cline', 'codebuddy', 'codex', 'copilot', 'cursor', 'gemini', 'kilo', 'opencode', 'trae', 'windsurf'];
|
||||
const allRuntimes = ['claude', 'antigravity', 'augment', 'cline', 'codebuddy', 'codex', 'copilot', 'cursor', 'gemini', 'kilo', 'opencode', 'qwen', 'trae', 'windsurf'];
|
||||
|
||||
console.log(` ${yellow}Which runtime(s) would you like to install for?${reset}\n\n ${cyan}1${reset}) Claude Code ${dim}(~/.claude)${reset}
|
||||
${cyan}2${reset}) Antigravity ${dim}(~/.gemini/antigravity)${reset}
|
||||
@@ -6305,9 +6392,10 @@ function promptRuntime(callback) {
|
||||
${cyan}9${reset}) Gemini ${dim}(~/.gemini)${reset}
|
||||
${cyan}10${reset}) Kilo ${dim}(~/.config/kilo)${reset}
|
||||
${cyan}11${reset}) OpenCode ${dim}(~/.config/opencode)${reset}
|
||||
${cyan}12${reset}) Trae ${dim}(~/.trae)${reset}
|
||||
${cyan}13${reset}) Windsurf ${dim}(~/.codeium/windsurf)${reset}
|
||||
${cyan}14${reset}) All
|
||||
${cyan}12${reset}) Qwen Code ${dim}(~/.qwen)${reset}
|
||||
${cyan}13${reset}) Trae ${dim}(~/.trae)${reset}
|
||||
${cyan}14${reset}) Windsurf ${dim}(~/.codeium/windsurf)${reset}
|
||||
${cyan}15${reset}) All
|
||||
|
||||
${dim}Select multiple: 1,2,6 or 1 2 6${reset}
|
||||
`);
|
||||
@@ -6318,7 +6406,7 @@ function promptRuntime(callback) {
|
||||
const input = answer.trim() || '1';
|
||||
|
||||
// "All" shortcut
|
||||
if (input === '14') {
|
||||
if (input === '15') {
|
||||
callback(allRuntimes);
|
||||
return;
|
||||
}
|
||||
|
||||
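The multi-select input accepted above (`1,2,6` or `1 2 6`) could be parsed along these lines. This is a sketch, not the installer's actual code; `parseRuntimeSelection` is a hypothetical helper and the abbreviated `runtimeMap` below only mirrors part of the real menu:

```javascript
// Parse runtime-menu input like "1,2,6" or "1 2 6" into runtime names.
// Unknown tokens are dropped; repeated selections are deduplicated.
function parseRuntimeSelection(input, runtimeMap) {
  const tokens = input.trim().split(/[\s,]+/).filter(Boolean);
  const runtimes = tokens.map(t => runtimeMap[t]).filter(Boolean);
  return [...new Set(runtimes)];
}

const runtimeMap = { '1': 'claude', '2': 'antigravity', '12': 'qwen' };
parseRuntimeSelection('1, 12 2', runtimeMap); // ['claude', 'qwen', 'antigravity']
```

Splitting on the combined `[\s,]+` class is what lets both separator styles shown in the prompt work with one code path.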
22
commands/gsd/extract_learnings.md
Normal file
@@ -0,0 +1,22 @@
---
name: gsd:extract-learnings
description: Extract decisions, lessons, patterns, and surprises from completed phase artifacts
argument-hint: <phase-number>
allowed-tools:
- Read
- Write
- Bash
- Grep
- Glob
- Agent
type: prompt
---
<objective>
Extract structured learnings from completed phase artifacts (PLAN.md, SUMMARY.md, VERIFICATION.md, UAT.md, STATE.md) into a LEARNINGS.md file that captures decisions, lessons learned, patterns discovered, and surprises encountered.
</objective>

<execution_context>
@~/.claude/get-shit-done/workflows/extract_learnings.md
</execution_context>

Execute the extract-learnings workflow from @~/.claude/get-shit-done/workflows/extract_learnings.md end-to-end.
45
commands/gsd/from-gsd2.md
Normal file
@@ -0,0 +1,45 @@
---
name: gsd:from-gsd2
description: Import a GSD-2 (.gsd/) project back to GSD v1 (.planning/) format
argument-hint: "[--path <dir>] [--force]"
allowed-tools:
- Read
- Write
- Bash
type: prompt
---

<objective>
Reverse-migrate a GSD-2 project (`.gsd/` directory) back to GSD v1 (`.planning/`) format.

Maps the GSD-2 hierarchy (Milestone → Slice → Task) to the GSD v1 hierarchy (Milestone sections in ROADMAP.md → Phase → Plan), preserving completion state, research files, and summaries.
</objective>

<process>

1. **Locate the .gsd/ directory** — check the current working directory (or `--path` argument):
   ```bash
   node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" from-gsd2 --dry-run
   ```
   If no `.gsd/` is found, report the error and stop.

2. **Show the dry-run preview** — present the full file list and migration statistics to the user. Ask for confirmation before writing anything.

3. **Run the migration** after confirmation:
   ```bash
   node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" from-gsd2
   ```
   Use `--force` if `.planning/` already exists and the user has confirmed overwrite.

4. **Report the result** — show the `filesWritten` count, `planningDir` path, and the preview summary.

</process>

<notes>
- The migration is non-destructive: `.gsd/` is never modified or removed.
- Pass `--path <dir>` to migrate a project at a different path than the current directory.
- Slices are numbered sequentially across all milestones (M001/S01 → phase 01, M001/S02 → phase 02, M002/S01 → phase 03, etc.).
- Tasks within each slice become plans (T01 → plan 01, T02 → plan 02, etc.).
- Completed slices and tasks carry their done state into ROADMAP.md checkboxes and SUMMARY.md files.
- GSD-2 cost/token ledger, database state, and VS Code extension state cannot be migrated.
</notes>
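The sequential slice numbering described in the notes can be sketched as follows. This is an illustrative helper, not the importer's actual code; `flattenToPhases` and its input shape are assumptions:

```javascript
// Flatten a GSD-2 Milestone → Slice hierarchy into sequential v1 phase numbers.
// Milestones and their slices are assumed to arrive in order.
function flattenToPhases(milestones) {
  const phases = [];
  let n = 0;
  for (const m of milestones) {
    for (const s of m.slices) {
      n += 1;
      // Phase numbers are zero-padded two-digit strings, matching ".planning/" dirs.
      phases.push({ milestone: m.id, slice: s.id, phase: String(n).padStart(2, '0') });
    }
  }
  return phases;
}

const result = flattenToPhases([
  { id: 'M001', slices: [{ id: 'S01' }, { id: 'S02' }] },
  { id: 'M002', slices: [{ id: 'S01' }] },
]);
// M001/S01 → phase 01, M001/S02 → phase 02, M002/S01 → phase 03
```

The counter never resets between milestones, which is what makes M002/S01 become phase 03 rather than restarting at 01.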
@@ -14,7 +14,9 @@ No arguments needed — reads STATE.md, ROADMAP.md, and phase directories to det

Designed for rapid multi-project workflows where remembering which phase/step you're on is overhead.

Supports `--force` flag to bypass safety gates (checkpoint, error state, verification failures).
Supports `--force` flag to bypass safety gates (checkpoint, error state, verification failures, and prior-phase completeness scan).

Before routing to the next step, scans all prior phases for incomplete work: plans that ran without producing summaries, verification failures without overrides, and phases where discussion happened but planning never ran. When incomplete work is found, shows a structured report and offers three options: defer the gaps to the backlog and continue, stop and resolve manually, or force advance without recording. When prior phases are clean, routes silently with no interruption.
</objective>

<execution_context>
@@ -1,7 +1,7 @@
---
name: gsd:review
description: Request cross-AI peer review of phase plans from external AI CLIs
argument-hint: "--phase N [--gemini] [--claude] [--codex] [--opencode] [--all]"
argument-hint: "--phase N [--gemini] [--claude] [--codex] [--opencode] [--qwen] [--cursor] [--all]"
allowed-tools:
- Read
- Write
@@ -11,7 +11,7 @@ allowed-tools:
---

<objective>
Invoke external AI CLIs (Gemini, Claude, Codex, OpenCode) to independently review phase plans.
Invoke external AI CLIs (Gemini, Claude, Codex, OpenCode, Qwen Code, Cursor) to independently review phase plans.
Produces a structured REVIEWS.md with per-reviewer feedback that can be fed back into
planning via /gsd-plan-phase --reviews.

@@ -30,6 +30,8 @@ Phase number: extracted from $ARGUMENTS (required)
- `--claude` — Include Claude CLI review (uses separate session)
- `--codex` — Include Codex CLI review
- `--opencode` — Include OpenCode review (uses model from user's OpenCode config)
- `--qwen` — Include Qwen Code review (Alibaba Qwen models)
- `--cursor` — Include Cursor agent review
- `--all` — Include all available CLIs
</context>
@@ -593,6 +593,31 @@ Ingest an external plan file into the GSD planning system with conflict detectio

---

### `/gsd-from-gsd2`

Reverse migration from GSD-2 format (`.gsd/` with Milestone→Slice→Task hierarchy) back to v1 `.planning/` format.

| Flag | Required | Description |
|------|----------|-------------|
| `--dry-run` | No | Preview what would be migrated without writing anything |
| `--force` | No | Overwrite existing `.planning/` directory |
| `--path <dir>` | No | Specify GSD-2 root directory (defaults to current directory) |

**Flattening:** Milestone→Slice hierarchy is flattened to sequential phase numbers (M001/S01→phase 01, M001/S02→phase 02, M002/S01→phase 03, etc.).

**Produces:** `PROJECT.md`, `REQUIREMENTS.md`, `ROADMAP.md`, `STATE.md`, and sequential phase directories in `.planning/`.

**Safety:** Guards against overwriting an existing `.planning/` directory without `--force`.

```bash
/gsd-from-gsd2                                # Migrate .gsd/ in current directory
/gsd-from-gsd2 --dry-run                      # Preview migration without writing
/gsd-from-gsd2 --force                        # Overwrite existing .planning/
/gsd-from-gsd2 --path /path/to/gsd2-project   # Specify GSD-2 root
```

---

### `/gsd-quick`

Execute ad-hoc task with GSD guarantees.
@@ -900,6 +925,37 @@ Query, inspect, or refresh queryable codebase intelligence files stored in `.pla

---

## AI Integration Commands

### `/gsd-ai-integration-phase`

AI framework selection wizard for integrating AI/LLM capabilities into a project phase. Presents an interactive decision matrix, surfaces domain-specific failure modes and eval criteria, and produces `AI-SPEC.md` with a framework recommendation, implementation guidance, and evaluation strategy.

**Produces:** `{phase}-AI-SPEC.md` in the phase directory
**Spawns:** 3 parallel specialist agents: domain-researcher, framework-selector, and eval-planner

```bash
/gsd-ai-integration-phase      # Wizard for the current phase
/gsd-ai-integration-phase 3    # Wizard for a specific phase
```

---

### `/gsd-eval-review`

Retroactive audit of an implemented AI phase's evaluation coverage. Checks implementation against the `AI-SPEC.md` evaluation plan produced by `/gsd-ai-integration-phase`. Scores each eval dimension as COVERED/PARTIAL/MISSING.

**Prerequisites:** Phase has been executed and has an `AI-SPEC.md`
**Produces:** `{phase}-EVAL-REVIEW.md` with findings, gaps, and remediation guidance

```bash
/gsd-eval-review      # Audit current phase
/gsd-eval-review 3    # Audit a specific phase
```

---

## Update Commands

### `/gsd-update`
@@ -1023,6 +1079,8 @@ Cross-AI peer review of phase plans from external AI CLIs.
| `--codex` | Include Codex CLI review |
| `--coderabbit` | Include CodeRabbit review |
| `--opencode` | Include OpenCode review (via GitHub Copilot) |
| `--qwen` | Include Qwen Code review (Alibaba Qwen models) |
| `--cursor` | Include Cursor agent review |
| `--all` | Include all available CLIs |

**Produces:** `{phase}-REVIEWS.md` — consumable by `/gsd-plan-phase --reviews`
@@ -360,6 +360,36 @@ Settings for the security enforcement feature (v1.31). All follow the **absent =

---

## Review Settings

Configure per-CLI model selection for `/gsd-review`. When set, overrides the CLI's default model for that reviewer.

| Setting | Type | Default | Description |
|---------|------|---------|-------------|
| `review.models.gemini` | string | (CLI default) | Model used when `--gemini` reviewer is invoked |
| `review.models.claude` | string | (CLI default) | Model used when `--claude` reviewer is invoked |
| `review.models.codex` | string | (CLI default) | Model used when `--codex` reviewer is invoked |
| `review.models.opencode` | string | (CLI default) | Model used when `--opencode` reviewer is invoked |
| `review.models.qwen` | string | (CLI default) | Model used when `--qwen` reviewer is invoked |
| `review.models.cursor` | string | (CLI default) | Model used when `--cursor` reviewer is invoked |

### Example

```json
{
  "review": {
    "models": {
      "gemini": "gemini-2.5-pro",
      "qwen": "qwen-max"
    }
  }
}
```

Falls back to each CLI's configured default when a key is absent. Added in v1.35.0 (#1849).
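The lookup-with-fallback behavior could be implemented along these lines. `resolveReviewModel` is a hypothetical helper for illustration, not GSD's actual settings loader:

```javascript
// Resolve the model for a reviewer CLI from the review.models config block.
// Returns null when no override is set, meaning "use the CLI's own default".
function resolveReviewModel(config, cli) {
  const models = (config.review && config.review.models) || {};
  return models[cli] || null;
}

const config = { review: { models: { gemini: 'gemini-2.5-pro', qwen: 'qwen-max' } } };
resolveReviewModel(config, 'qwen');  // 'qwen-max'
resolveReviewModel(config, 'codex'); // null, so the codex CLI keeps its default
```

Returning `null` rather than a hardcoded model name keeps the "absent = CLI default" contract: the caller simply omits any model flag when no override exists.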

---

## Manager Passthrough Flags

Configure per-step flags that `/gsd-manager` appends to each dispatched command. This allows customizing how the manager runs discuss, plan, and execute steps without manual flag entry.
108
docs/FEATURES.md
@@ -102,6 +102,11 @@
- [Hard Stop Safety Gates in /gsd-next](#101-hard-stop-safety-gates-in-gsd-next)
- [Adaptive Model Preset](#102-adaptive-model-preset)
- [Post-Merge Hunk Verification](#103-post-merge-hunk-verification)
- [v1.35.0 Features](#v1350-features)
  - [New Runtime Support (Cline, CodeBuddy, Qwen Code)](#104-new-runtime-support-cline-codebuddy-qwen-code)
  - [GSD-2 Reverse Migration](#105-gsd-2-reverse-migration)
  - [AI Integration Phase Wizard](#106-ai-integration-phase-wizard)
  - [AI Eval Review](#107-ai-eval-review)
- [v1.32 Features](#v132-features)
  - [STATE.md Consistency Gates](#69-statemd-consistency-gates)
  - [Autonomous `--to N` Flag](#70-autonomous---to-n-flag)
@@ -917,7 +922,7 @@ fix(03-01): correct auth token expiry
**Purpose:** Run GSD across multiple AI coding agent runtimes.

**Requirements:**
- REQ-RUNTIME-01: System MUST support Claude Code, OpenCode, Gemini CLI, Kilo, Codex, Copilot, Antigravity, Trae, Cline, Augment Code
- REQ-RUNTIME-01: System MUST support Claude Code, OpenCode, Gemini CLI, Kilo, Codex, Copilot, Antigravity, Trae, Cline, Augment Code, CodeBuddy, Qwen Code
- REQ-RUNTIME-02: Installer MUST transform content per runtime (tool names, paths, frontmatter)
- REQ-RUNTIME-03: Installer MUST support interactive and non-interactive (`--claude --global`) modes
- REQ-RUNTIME-04: Installer MUST support both global and local installation
@@ -926,12 +931,12 @@ fix(03-01): correct auth token expiry

**Runtime Transformations:**

| Aspect | Claude Code | OpenCode | Gemini | Kilo | Codex | Copilot | Antigravity | Trae | Cline | Augment |
|--------|------------|----------|--------|-------|-------|---------|-------------|------|-------|---------|
| Commands | Slash commands | Slash commands | Slash commands | Slash commands | Skills (TOML) | Slash commands | Skills | Skills | Rules | Skills |
| Agent format | Claude native | `mode: subagent` | Claude native | `mode: subagent` | Skills | Tool mapping | Skills | Skills | Rules | Skills |
| Hook events | `PostToolUse` | N/A | `AfterTool` | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Config | `settings.json` | `opencode.json(c)` | `settings.json` | `kilo.json(c)` | TOML | Instructions | Config | Config | Config | Config |
| Aspect | Claude Code | OpenCode | Gemini | Kilo | Codex | Copilot | Antigravity | Trae | Cline | Augment | CodeBuddy | Qwen Code |
|--------|------------|----------|--------|-------|-------|---------|-------------|------|-------|---------|-----------|-----------|
| Commands | Slash commands | Slash commands | Slash commands | Slash commands | Skills (TOML) | Slash commands | Skills | Skills | Rules | Skills | Skills | Skills |
| Agent format | Claude native | `mode: subagent` | Claude native | `mode: subagent` | Skills | Tool mapping | Skills | Skills | Rules | Skills | Skills | Skills |
| Hook events | `PostToolUse` | N/A | `AfterTool` | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A |
| Config | `settings.json` | `opencode.json(c)` | `settings.json` | `kilo.json(c)` | TOML | Instructions | Config | Config | `.clinerules` | Config | Config | Config |
---

@@ -1068,9 +1073,9 @@ When verification returns `human_needed`, items are persisted as a trackable HUM

### 42. Cross-AI Peer Review

**Command:** `/gsd-review --phase N [--gemini] [--claude] [--codex] [--coderabbit] [--all]`
**Command:** `/gsd-review --phase N [--gemini] [--claude] [--codex] [--coderabbit] [--opencode] [--qwen] [--cursor] [--all]`

**Purpose:** Invoke external AI CLIs (Gemini, Claude, Codex, CodeRabbit) to independently review phase plans. Produces structured REVIEWS.md with per-reviewer feedback.
**Purpose:** Invoke external AI CLIs (Gemini, Claude, Codex, CodeRabbit, OpenCode, Qwen Code, Cursor) to independently review phase plans. Produces structured REVIEWS.md with per-reviewer feedback.

**Requirements:**
- REQ-REVIEW-01: System MUST detect available AI CLIs on the system
@@ -2179,3 +2184,88 @@ Test suite that scans all agent, workflow, and command files for embedded inject
- REQ-PATCH-VERIFY-01: Reapply-patches MUST verify each hunk was applied after the merge
- REQ-PATCH-VERIFY-02: Dropped or partial hunks MUST be reported to the user with file and line context
- REQ-PATCH-VERIFY-03: Verification MUST run after all patches are applied, not per-patch
---

## v1.35.0 Features

- [New Runtime Support (Cline, CodeBuddy, Qwen Code)](#104-new-runtime-support-cline-codebuddy-qwen-code)
- [GSD-2 Reverse Migration](#105-gsd-2-reverse-migration)
- [AI Integration Phase Wizard](#106-ai-integration-phase-wizard)
- [AI Eval Review](#107-ai-eval-review)

---

### 104. New Runtime Support (Cline, CodeBuddy, Qwen Code)

**Part of:** `npx get-shit-done-cc`

**Purpose:** Extend GSD installation to Cline, CodeBuddy, and Qwen Code runtimes.

**Requirements:**
- REQ-CLINE-02: Cline install MUST write `.clinerules` to `~/.cline/` (global) or `./.cline/` (local). No custom slash commands — rules-based integration only. Flag: `--cline`.
- REQ-CODEBUDDY-01: CodeBuddy install MUST deploy skills to `~/.codebuddy/skills/gsd-*/SKILL.md`. Flag: `--codebuddy`.
- REQ-QWEN-01: Qwen Code install MUST deploy skills to `~/.qwen/skills/gsd-*/SKILL.md`, following the open standard used by Claude Code 2.1.88+. `QWEN_CONFIG_DIR` env var overrides the default path. Flag: `--qwen`.

**Runtime summary:**

| Runtime | Install Format | Config Path | Flag |
|---------|---------------|-------------|------|
| Cline | `.clinerules` | `~/.cline/` or `./.cline/` | `--cline` |
| CodeBuddy | Skills (`SKILL.md`) | `~/.codebuddy/skills/` | `--codebuddy` |
| Qwen Code | Skills (`SKILL.md`) | `~/.qwen/skills/` | `--qwen` |

---

### 105. GSD-2 Reverse Migration

**Command:** `/gsd-from-gsd2 [--dry-run] [--force] [--path <dir>]`

**Purpose:** Migrate a project from GSD-2 format (`.gsd/` directory with Milestone→Slice→Task hierarchy) back to the v1 `.planning/` format, restoring full compatibility with all GSD v1 commands.

**Requirements:**
- REQ-FROM-GSD2-01: Importer MUST read `.gsd/` from the specified or current directory
- REQ-FROM-GSD2-02: Milestone→Slice hierarchy MUST be flattened to sequential phase numbers (M001/S01→phase 01, M001/S02→phase 02, M002/S01→phase 03, etc.)
- REQ-FROM-GSD2-03: System MUST guard against overwriting an existing `.planning/` directory without `--force`
- REQ-FROM-GSD2-04: `--dry-run` MUST preview all changes without writing any files
- REQ-FROM-GSD2-05: Migration MUST produce `PROJECT.md`, `REQUIREMENTS.md`, `ROADMAP.md`, `STATE.md`, and sequential phase directories

**Flags:**

| Flag | Description |
|------|-------------|
| `--dry-run` | Preview migration output without writing files |
| `--force` | Overwrite an existing `.planning/` directory |
| `--path <dir>` | Specify the GSD-2 root directory |

---

### 106. AI Integration Phase Wizard

**Command:** `/gsd-ai-integration-phase [N]`

**Purpose:** Guide developers through selecting, integrating, and planning evaluation for AI/LLM capabilities in a project phase. Produces a structured `AI-SPEC.md` that feeds into planning and verification.

**Requirements:**
- REQ-AISPEC-01: Wizard MUST present an interactive decision matrix covering framework selection, model choice, and integration approach
- REQ-AISPEC-02: System MUST surface domain-specific failure modes and eval criteria relevant to the project type
- REQ-AISPEC-03: System MUST spawn 3 parallel specialist agents: domain-researcher, framework-selector, and eval-planner
- REQ-AISPEC-04: Output MUST produce `{phase}-AI-SPEC.md` with framework recommendation, implementation guidance, and evaluation strategy

**Produces:** `{phase}-AI-SPEC.md` in the phase directory

---

### 107. AI Eval Review

**Command:** `/gsd-eval-review [N]`

**Purpose:** Retroactively audit an executed AI phase's evaluation coverage against the `AI-SPEC.md` plan. Identifies gaps between planned and implemented evaluation before the phase is closed.

**Requirements:**
- REQ-EVALREVIEW-01: Review MUST read `AI-SPEC.md` from the specified phase
- REQ-EVALREVIEW-02: Each eval dimension MUST be scored as COVERED, PARTIAL, or MISSING
- REQ-EVALREVIEW-03: Output MUST include findings, gap descriptions, and remediation guidance
- REQ-EVALREVIEW-04: `EVAL-REVIEW.md` MUST be written to the phase directory

**Produces:** `{phase}-EVAL-REVIEW.md` with scored eval dimensions, gap analysis, and remediation steps
@@ -868,6 +868,40 @@ The installer auto-configures `resolve_model_ids: "omit"` for Gemini CLI, OpenCo

See the [Configuration Reference](CONFIGURATION.md#non-claude-runtimes-codex-opencode-gemini-cli-kilo) for the full explanation.

### Installing for Cline

Cline uses a rules-based integration — GSD installs as `.clinerules` rather than slash commands.

```bash
# Global install (applies to all projects)
npx get-shit-done-cc --cline --global

# Local install (this project only)
npx get-shit-done-cc --cline --local
```

Global installs write to `~/.cline/`. Local installs write to `./.cline/`. No custom slash commands are registered — GSD rules are loaded automatically by Cline from the rules file.

### Installing for CodeBuddy

CodeBuddy uses a skills-based integration.

```bash
npx get-shit-done-cc --codebuddy --global
```

Skills are installed to `~/.codebuddy/skills/gsd-*/SKILL.md`.

### Installing for Qwen Code

Qwen Code uses the same open skills standard as Claude Code 2.1.88+.

```bash
npx get-shit-done-cc --qwen --global
```

Skills are installed to `~/.qwen/skills/gsd-*/SKILL.md`. Use the `QWEN_CONFIG_DIR` environment variable to override the default install path.

### Using Claude Code with Non-Anthropic Providers (OpenRouter, Local)

If GSD subagents call Anthropic models and you're paying through OpenRouter or a local provider, switch to the `inherit` profile: `/gsd-set-profile inherit`. This makes all agents use your current session model instead of specific Anthropic models. See also `/gsd-settings` → Model Profile → Inherit.
@@ -839,6 +839,9 @@ GSDアップデート後にローカルの変更を復元します。
| `--claude` | Claude CLIレビューを含める(別セッション) |
| `--codex` | Codex CLIレビューを含める |
| `--coderabbit` | CodeRabbitレビューを含める |
| `--opencode` | OpenCodeレビューを含める(GitHub Copilot経由) |
| `--qwen` | Qwen Codeレビューを含める(Alibaba Qwenモデル) |
| `--cursor` | Cursorエージェントレビューを含める |
| `--all` | 利用可能なすべてのCLIを含める |

**生成物:** `{phase}-REVIEWS.md` — `/gsd-plan-phase --reviews` で利用可能

@@ -1049,9 +1049,9 @@ fix(03-01): correct auth token expiry

### 42. クロス AI ピアレビュー

**コマンド:** `/gsd-review --phase N [--gemini] [--claude] [--codex] [--coderabbit] [--all]`
**コマンド:** `/gsd-review --phase N [--gemini] [--claude] [--codex] [--coderabbit] [--opencode] [--qwen] [--cursor] [--all]`

**目的:** 外部の AI CLI(Gemini、Claude、Codex、CodeRabbit)を呼び出して、フェーズプランを独立してレビューします。レビュアーごとのフィードバックを含む構造化された REVIEWS.md を生成します。
**目的:** 外部の AI CLI(Gemini、Claude、Codex、CodeRabbit、OpenCode、Qwen Code、Cursor)を呼び出して、フェーズプランを独立してレビューします。レビュアーごとのフィードバックを含む構造化された REVIEWS.md を生成します。

**要件:**
- REQ-REVIEW-01: システムはシステム上で利用可能な AI CLI を検出しなければならない
@@ -839,6 +839,9 @@ GSD 업데이트 후 로컬 수정사항을 복원합니다.
| `--claude` | Claude CLI 리뷰 포함 (별도 세션) |
| `--codex` | Codex CLI 리뷰 포함 |
| `--coderabbit` | CodeRabbit 리뷰 포함 |
| `--opencode` | OpenCode 리뷰 포함 (GitHub Copilot 경유) |
| `--qwen` | Qwen Code 리뷰 포함 (Alibaba Qwen 모델) |
| `--cursor` | Cursor 에이전트 리뷰 포함 |
| `--all` | 사용 가능한 모든 CLI 포함 |

**생성 파일:** `{phase}-REVIEWS.md` — `/gsd-plan-phase --reviews`에서 사용 가능

@@ -1049,9 +1049,9 @@ fix(03-01): correct auth token expiry

### 42. Cross-AI Peer Review

**명령어:** `/gsd-review --phase N [--gemini] [--claude] [--codex] [--coderabbit] [--all]`
**명령어:** `/gsd-review --phase N [--gemini] [--claude] [--codex] [--coderabbit] [--opencode] [--qwen] [--cursor] [--all]`

**목적:** 외부 AI CLI(Gemini, Claude, Codex, CodeRabbit)를 호출하여 페이즈 계획을 독립적으로 검토합니다. 검토자별 피드백이 담긴 구조화된 REVIEWS.md를 생성합니다.
**목적:** 외부 AI CLI(Gemini, Claude, Codex, CodeRabbit, OpenCode, Qwen Code, Cursor)를 호출하여 페이즈 계획을 독립적으로 검토합니다. 검토자별 피드백이 담긴 구조화된 REVIEWS.md를 생성합니다.

**요구사항:**
- REQ-REVIEW-01: 시스템에서 사용 가능한 AI CLI를 감지해야 합니다.
@@ -154,6 +154,10 @@
 * learnings copy                        Copy from current project's LEARNINGS.md
 * learnings prune --older-than <dur>    Remove entries older than duration (e.g. 90d)
 * learnings delete <id>                 Delete a learning by ID
 *
 * GSD-2 Migration:
 *   from-gsd2 [--path <dir>] [--force] [--dry-run]
 *     Import a GSD-2 (.gsd/) project back to GSD v1 (.planning/) format
 */

const fs = require('fs');
@@ -634,6 +638,11 @@ async function runCommand(command, args, cwd, raw, defaultValue) {
break;
}

case 'skill-manifest': {
init.cmdSkillManifest(cwd, args, raw);
break;
}

case 'history-digest': {
commands.cmdHistoryDigest(cwd, raw);
break;
@@ -1070,6 +1079,14 @@ async function runCommand(command, args, cwd, raw, defaultValue) {
break;
}

// ─── GSD-2 Reverse Migration ───────────────────────────────────────────

case 'from-gsd2': {
const gsd2Import = require('./lib/gsd2-import.cjs');
gsd2Import.cmdFromGsd2(args.slice(1), cwd, raw);
break;
}

default:
error(`Unknown command: ${command}`);
}
@@ -25,8 +25,13 @@ const VALID_CONFIG_KEYS = new Set([
'workflow.use_worktrees',
'workflow.code_review',
'workflow.code_review_depth',
'workflow.code_review_command',
'workflow.plan_bounce',
'workflow.plan_bounce_script',
'workflow.plan_bounce_passes',
'git.branching_strategy', 'git.base_branch', 'git.phase_branch_template', 'git.milestone_branch_template', 'git.quick_branch_template',
'planning.commit_docs', 'planning.search_gitignored',
'workflow.cross_ai_execution', 'workflow.cross_ai_command', 'workflow.cross_ai_timeout',
'workflow.subagent_timeout',
'hooks.context_warnings',
'features.thinking_partner',
@@ -36,6 +41,8 @@ const VALID_CONFIG_KEYS = new Set([
'project_code', 'phase_naming',
'manager.flags.discuss', 'manager.flags.plan', 'manager.flags.execute',
'response_language',
'intel.enabled',
'claude_md_path',
]);

/**
@@ -63,6 +70,7 @@ const CONFIG_KEY_SUGGESTIONS = {
'hooks.research_questions': 'workflow.research_before_questions',
'workflow.research_questions': 'workflow.research_before_questions',
'workflow.codereview': 'workflow.code_review',
'workflow.review_command': 'workflow.code_review_command',
'workflow.review': 'workflow.code_review',
'workflow.code_review_level': 'workflow.code_review_depth',
'workflow.review_depth': 'workflow.code_review_depth',
@@ -153,6 +161,10 @@ function buildNewProjectConfig(userChoices) {
skip_discuss: false,
code_review: true,
code_review_depth: 'standard',
code_review_command: null,
plan_bounce: false,
plan_bounce_script: null,
plan_bounce_passes: 2,
},
hooks: {
context_warnings: true,
@@ -160,6 +172,7 @@ function buildNewProjectConfig(userChoices) {
project_code: null,
phase_naming: 'sequential',
agent_skills: {},
claude_md_path: './CLAUDE.md',
};

// Three-level deep merge: hardcoded <- userDefaults <- choices
@@ -159,14 +159,25 @@ function findProjectRoot(startDir) {
 * @param {number} opts.maxAgeMs - max age in ms before removal (default: 5 min)
 * @param {boolean} opts.dirsOnly - if true, only remove directories (default: false)
 */
/**
 * Dedicated GSD temp directory: path.join(os.tmpdir(), 'gsd').
 * Created on first use. Keeps GSD temp files isolated from the system
 * temp directory so reap scans only GSD files (#1975).
 */
const GSD_TEMP_DIR = path.join(require('os').tmpdir(), 'gsd');

function ensureGsdTempDir() {
  fs.mkdirSync(GSD_TEMP_DIR, { recursive: true });
}

function reapStaleTempFiles(prefix = 'gsd-', { maxAgeMs = 5 * 60 * 1000, dirsOnly = false } = {}) {
  try {
    const tmpDir = require('os').tmpdir();
    ensureGsdTempDir();
    const now = Date.now();
    const entries = fs.readdirSync(tmpDir);
    const entries = fs.readdirSync(GSD_TEMP_DIR);
    for (const entry of entries) {
      if (!entry.startsWith(prefix)) continue;
      const fullPath = path.join(tmpDir, entry);
      const fullPath = path.join(GSD_TEMP_DIR, entry);
      try {
        const stat = fs.statSync(fullPath);
        if (now - stat.mtimeMs > maxAgeMs) {
@@ -195,7 +206,8 @@ function output(result, raw, rawValue) {
  // Write to tmpfile and output the path prefixed with @file: so callers can detect it.
  if (json.length > 50000) {
    reapStaleTempFiles();
    const tmpPath = path.join(require('os').tmpdir(), `gsd-${Date.now()}.json`);
    ensureGsdTempDir();
    const tmpPath = path.join(GSD_TEMP_DIR, `gsd-${Date.now()}.json`);
    fs.writeFileSync(tmpPath, json, 'utf-8');
    data = '@file:' + tmpPath;
  } else {
@@ -313,7 +325,7 @@ function loadConfig(cwd) {
  // Section containers that hold nested sub-keys
  'git', 'workflow', 'planning', 'hooks', 'features',
  // Internal keys loadConfig reads but config-set doesn't expose
  'model_overrides', 'agent_skills', 'context_window', 'resolve_model_ids',
  'model_overrides', 'agent_skills', 'context_window', 'resolve_model_ids', 'claude_md_path',
  // Deprecated keys (still accepted for migration, not in config-set)
  'depth', 'multiRepo',
]);
@@ -374,6 +386,7 @@ function loadConfig(cwd) {
    agent_skills: parsed.agent_skills || {},
    manager: parsed.manager || {},
    response_language: get('response_language') || null,
    claude_md_path: get('claude_md_path') || null,
  };
} catch {
  // Fall back to ~/.gsd/defaults.json only for truly pre-project contexts (#1683)
@@ -1578,6 +1591,7 @@ module.exports = {
  findProjectRoot,
  detectSubRepos,
  reapStaleTempFiles,
  GSD_TEMP_DIR,
  MODEL_ALIAS_MAP,
  CONFIG_DEFAULTS,
  planningDir,

511 get-shit-done/bin/lib/gsd2-import.cjs Normal file
@@ -0,0 +1,511 @@
'use strict';

/**
 * gsd2-import — Reverse migration from GSD-2 (.gsd/) to GSD v1 (.planning/)
 *
 * Reads a GSD-2 project directory structure and produces a complete
 * .planning/ artifact tree in GSD v1 format.
 *
 * GSD-2 hierarchy: Milestone → Slice → Task
 * GSD v1 hierarchy: Milestone (in ROADMAP.md) → Phase → Plan
 *
 * Mapping rules:
 * - Slices are numbered sequentially across all milestones (01, 02, …)
 * - Tasks within a slice become plans (01-01, 01-02, …)
 * - Completed slices ([x] in ROADMAP) → [x] phases in ROADMAP.md
 * - Tasks with a SUMMARY file → SUMMARY.md written
 * - Slice RESEARCH.md → phase XX-RESEARCH.md
 */

const fs = require('node:fs');
const path = require('node:path');

// ─── Utilities ──────────────────────────────────────────────────────────────

function readOptional(filePath) {
  try { return fs.readFileSync(filePath, 'utf8'); } catch { return null; }
}

function zeroPad(n, width = 2) {
  return String(n).padStart(width, '0');
}

function slugify(title) {
  return title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-|-$/g, '');
}

// ─── GSD-2 Parser ───────────────────────────────────────────────────────────

/**
 * Find the .gsd/ directory starting from a project root.
 * Returns the absolute path or null if not found.
 */
function findGsd2Root(startPath) {
  if (path.basename(startPath) === '.gsd' && fs.existsSync(startPath)) {
    return startPath;
  }
  const candidate = path.join(startPath, '.gsd');
  if (fs.existsSync(candidate) && fs.statSync(candidate).isDirectory()) {
    return candidate;
  }
  return null;
}

/**
 * Parse the ## Slices section from a GSD-2 milestone ROADMAP.md.
 * Each slice entry looks like:
 *   - [x] **S01: Title** `risk:medium` `depends:[S00]`
 */
function parseSlicesFromRoadmap(content) {
  const slices = [];
  const sectionMatch = content.match(/## Slices\n([\s\S]*?)(?:\n## |\n# |$)/);
  if (!sectionMatch) return slices;

  for (const line of sectionMatch[1].split('\n')) {
    const m = line.match(/^- \[([x ])\]\s+\*\*(\w+):\s*([^*]+)\*\*/);
    if (!m) continue;
    slices.push({ done: m[1] === 'x', id: m[2].trim(), title: m[3].trim() });
  }
  return slices;
}
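For reference, the slice-line regex above can be exercised against the documented entry format. This is a minimal sketch; the sample ROADMAP line is assumed, not taken from a real project:

```javascript
// Sample GSD-2 ROADMAP.md slice entry in the format documented above (illustrative data).
const line = '- [x] **S01: Login flow** `risk:medium` `depends:[S00]`';

// Same pattern as parseSlicesFromRoadmap: checkbox state, slice id, title.
const m = line.match(/^- \[([x ])\]\s+\*\*(\w+):\s*([^*]+)\*\*/);
const slice = { done: m[1] === 'x', id: m[2].trim(), title: m[3].trim() };

console.log(slice);
// → { done: true, id: 'S01', title: 'Login flow' }
```

Note that `([^*]+)` stops at the closing `**`, so trailing tags like `` `risk:medium` `` never leak into the title.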
/**
 * Parse the milestone title from the first heading in a GSD-2 ROADMAP.md.
 * Format: # M001: Title
 */
function parseMilestoneTitle(content) {
  const m = content.match(/^# \w+:\s*(.+)/m);
  return m ? m[1].trim() : null;
}

/**
 * Parse a task title from a GSD-2 T##-PLAN.md.
 * Format: # T01: Title
 */
function parseTaskTitle(content, fallback) {
  const m = content.match(/^# \w+:\s*(.+)/m);
  return m ? m[1].trim() : fallback;
}

/**
 * Parse the ## Description body from a GSD-2 task plan.
 */
function parseTaskDescription(content) {
  const m = content.match(/## Description\n+([\s\S]+?)(?:\n## |\n# |$)/);
  return m ? m[1].trim() : '';
}

/**
 * Parse ## Must-Haves items from a GSD-2 task plan.
 */
function parseTaskMustHaves(content) {
  const m = content.match(/## Must-Haves\n+([\s\S]+?)(?:\n## |\n# |$)/);
  if (!m) return [];
  return m[1].split('\n')
    .map(l => l.match(/^- \[[ x]\]\s*(.+)/))
    .filter(Boolean)
    .map(match => match[1].trim());
}

/**
 * Read all task plan files from a GSD-2 tasks/ directory.
 */
function readTasksDir(tasksDir) {
  if (!fs.existsSync(tasksDir)) return [];

  return fs.readdirSync(tasksDir)
    .filter(f => f.endsWith('-PLAN.md'))
    .sort()
    .map(tf => {
      const tid = tf.replace('-PLAN.md', '');
      const plan = readOptional(path.join(tasksDir, tf));
      const summary = readOptional(path.join(tasksDir, `${tid}-SUMMARY.md`));
      return {
        id: tid,
        title: plan ? parseTaskTitle(plan, tid) : tid,
        description: plan ? parseTaskDescription(plan) : '',
        mustHaves: plan ? parseTaskMustHaves(plan) : [],
        plan,
        summary,
        done: !!summary,
      };
    });
}

/**
 * Parse a complete GSD-2 .gsd/ directory into a structured representation.
 */
function parseGsd2(gsdDir) {
  const data = {
    projectContent: readOptional(path.join(gsdDir, 'PROJECT.md')),
    requirements: readOptional(path.join(gsdDir, 'REQUIREMENTS.md')),
    milestones: [],
  };

  const milestonesBase = path.join(gsdDir, 'milestones');
  if (!fs.existsSync(milestonesBase)) return data;

  const milestoneIds = fs.readdirSync(milestonesBase)
    .filter(d => fs.statSync(path.join(milestonesBase, d)).isDirectory())
    .sort();

  for (const mid of milestoneIds) {
    const mDir = path.join(milestonesBase, mid);
    const roadmapContent = readOptional(path.join(mDir, `${mid}-ROADMAP.md`));
    const slicesDir = path.join(mDir, 'slices');

    const sliceInfos = roadmapContent ? parseSlicesFromRoadmap(roadmapContent) : [];

    const slices = sliceInfos.map(info => {
      const sDir = path.join(slicesDir, info.id);
      const hasSDir = fs.existsSync(sDir);
      return {
        id: info.id,
        title: info.title,
        done: info.done,
        plan: hasSDir ? readOptional(path.join(sDir, `${info.id}-PLAN.md`)) : null,
        summary: hasSDir ? readOptional(path.join(sDir, `${info.id}-SUMMARY.md`)) : null,
        research: hasSDir ? readOptional(path.join(sDir, `${info.id}-RESEARCH.md`)) : null,
        context: hasSDir ? readOptional(path.join(sDir, `${info.id}-CONTEXT.md`)) : null,
        tasks: hasSDir ? readTasksDir(path.join(sDir, 'tasks')) : [],
      };
    });

    data.milestones.push({
      id: mid,
      title: roadmapContent ? (parseMilestoneTitle(roadmapContent) ?? mid) : mid,
      research: readOptional(path.join(mDir, `${mid}-RESEARCH.md`)),
      slices,
    });
  }

  return data;
}

// ─── Artifact Builders ──────────────────────────────────────────────────────

/**
 * Build a GSD v1 PLAN.md from a GSD-2 task.
 */
function buildPlanMd(task, phasePrefix, planPrefix, phaseSlug, milestoneTitle) {
  const lines = [
    '---',
    `phase: "${phasePrefix}"`,
    `plan: "${planPrefix}"`,
    'type: "implementation"',
    '---',
    '',
    '<objective>',
    task.title,
    '</objective>',
    '',
    '<context>',
    `Phase: ${phasePrefix} (${phaseSlug}) — Milestone: ${milestoneTitle}`,
  ];

  if (task.description) {
    lines.push('', task.description);
  }

  lines.push('</context>');

  if (task.mustHaves.length > 0) {
    lines.push('', '<must_haves>');
    for (const mh of task.mustHaves) {
      lines.push(`- ${mh}`);
    }
    lines.push('</must_haves>');
  }

  return lines.join('\n') + '\n';
}

/**
 * Build a GSD v1 SUMMARY.md from a GSD-2 task summary.
 * Strips the GSD-2 frontmatter and preserves the body.
 */
function buildSummaryMd(task, phasePrefix, planPrefix) {
  const raw = task.summary || '';
  // Strip GSD-2 frontmatter block (--- ... ---) if present
  const bodyMatch = raw.match(/^---[\s\S]*?---\n+([\s\S]*)$/);
  const body = bodyMatch ? bodyMatch[1].trim() : raw.trim();

  return [
    '---',
    `phase: "${phasePrefix}"`,
    `plan: "${planPrefix}"`,
    '---',
    '',
    body || 'Task completed (migrated from GSD-2).',
    '',
  ].join('\n');
}

/**
 * Build a GSD v1 XX-CONTEXT.md from a GSD-2 slice.
 */
function buildContextMd(slice, phasePrefix) {
  const lines = [
    `# Phase ${phasePrefix} Context`,
    '',
    `Migrated from GSD-2 slice ${slice.id}: ${slice.title}`,
  ];

  const extra = slice.context || '';
  if (extra.trim()) {
    lines.push('', extra.trim());
  }

  return lines.join('\n') + '\n';
}

/**
 * Build the GSD v1 ROADMAP.md with milestone-sectioned format.
 */
function buildRoadmapMd(milestones, phaseMap) {
  const lines = ['# Roadmap', ''];

  for (const milestone of milestones) {
    lines.push(`## ${milestone.id}: ${milestone.title}`, '');
    const mPhases = phaseMap.filter(p => p.milestoneId === milestone.id);
    for (const { slice, phaseNum } of mPhases) {
      const prefix = zeroPad(phaseNum);
      const slug = slugify(slice.title);
      const check = slice.done ? 'x' : ' ';
      lines.push(`- [${check}] **Phase ${prefix}: ${slug}** — ${slice.title}`);
    }
    lines.push('');
  }

  return lines.join('\n');
}

/**
 * Build the GSD v1 STATE.md reflecting the current position in the project.
 */
function buildStateMd(phaseMap) {
  const currentEntry = phaseMap.find(p => !p.slice.done);
  const totalPhases = phaseMap.length;
  const donePhases = phaseMap.filter(p => p.slice.done).length;
  const pct = totalPhases > 0 ? Math.round((donePhases / totalPhases) * 100) : 0;

  const currentPhaseNum = currentEntry ? zeroPad(currentEntry.phaseNum) : zeroPad(totalPhases);
  const currentSlug = currentEntry ? slugify(currentEntry.slice.title) : 'complete';
  const status = currentEntry ? 'Ready to plan' : 'All phases complete';

  const filled = Math.round(pct / 10);
  const bar = `[${'█'.repeat(filled)}${'░'.repeat(10 - filled)}]`;
  const today = new Date().toISOString().split('T')[0];

  return [
    '# Project State',
    '',
    '## Project Reference',
    '',
    'See: .planning/PROJECT.md',
    '',
    `**Current focus:** Phase ${currentPhaseNum} (${currentSlug})`,
    '',
    '## Current Position',
    '',
    `Phase: ${currentPhaseNum} of ${zeroPad(totalPhases)} (${currentSlug})`,
    `Status: ${status}`,
    `Last activity: ${today} — Migrated from GSD-2`,
    '',
    `Progress: ${bar} ${pct}%`,
    '',
    '## Accumulated Context',
    '',
    '### Decisions',
    '',
    'Migrated from GSD-2. Review PROJECT.md for key decisions.',
    '',
    '### Blockers/Concerns',
    '',
    'None.',
    '',
    '## Session Continuity',
    '',
    `Last session: ${today}`,
    'Stopped at: Migration from GSD-2 completed',
    'Resume file: None',
    '',
  ].join('\n');
}

// ─── Transformer ─────────────────────────────────────────────────────────────

/**
 * Convert parsed GSD-2 data into a map of relative path → file content.
 * All paths are relative to the .planning/ root.
 */
function buildPlanningArtifacts(gsd2Data) {
  const artifacts = new Map();

  // Passthrough files
  artifacts.set('PROJECT.md', gsd2Data.projectContent || '# Project\n\n(Migrated from GSD-2)\n');
  if (gsd2Data.requirements) {
    artifacts.set('REQUIREMENTS.md', gsd2Data.requirements);
  }

  // Minimal valid v1 config
  artifacts.set('config.json', JSON.stringify({ version: 1 }, null, 2) + '\n');

  // Build sequential phase map: flatten Milestones → Slices into numbered phases
  const phaseMap = [];
  let phaseNum = 1;
  for (const milestone of gsd2Data.milestones) {
    for (const slice of milestone.slices) {
      phaseMap.push({ milestoneId: milestone.id, milestoneTitle: milestone.title, slice, phaseNum });
      phaseNum++;
    }
  }

  artifacts.set('ROADMAP.md', buildRoadmapMd(gsd2Data.milestones, phaseMap));
  artifacts.set('STATE.md', buildStateMd(phaseMap));

  for (const { slice, phaseNum, milestoneTitle } of phaseMap) {
    const prefix = zeroPad(phaseNum);
    const slug = slugify(slice.title);
    const dir = `phases/${prefix}-${slug}`;

    artifacts.set(`${dir}/${prefix}-CONTEXT.md`, buildContextMd(slice, prefix));

    if (slice.research) {
      artifacts.set(`${dir}/${prefix}-RESEARCH.md`, slice.research);
    }

    for (let i = 0; i < slice.tasks.length; i++) {
      const task = slice.tasks[i];
      const planPrefix = zeroPad(i + 1);

      artifacts.set(
        `${dir}/${prefix}-${planPrefix}-PLAN.md`,
        buildPlanMd(task, prefix, planPrefix, slug, milestoneTitle)
      );

      if (task.done && task.summary) {
        artifacts.set(
          `${dir}/${prefix}-${planPrefix}-SUMMARY.md`,
          buildSummaryMd(task, prefix, planPrefix)
        );
      }
    }
  }

  return artifacts;
}
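The Milestone→Slice flattening above is the core of the mapping rule (M001/S01→phase 01, M001/S02→phase 02, M002/S01→phase 03). A minimal standalone sketch of just that rule, with the two helpers copied from the utilities above and hypothetical milestone data (`phaseDirs` is an illustrative name, not part of the module):

```javascript
const zeroPad = (n, width = 2) => String(n).padStart(width, '0');
const slugify = (t) => t.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-|-$/g, '');

// Hypothetical GSD-2 input: two milestones, three slices total.
const milestones = [
  { id: 'M001', slices: [{ title: 'Auth Flow' }, { title: 'User Profiles' }] },
  { id: 'M002', slices: [{ title: 'Billing' }] },
];

// Slices are numbered sequentially across milestone boundaries.
let phaseNum = 1;
const phaseDirs = [];
for (const m of milestones) {
  for (const slice of m.slices) {
    phaseDirs.push(`phases/${zeroPad(phaseNum)}-${slugify(slice.title)}`);
    phaseNum++;
  }
}

console.log(phaseDirs);
// → ['phases/01-auth-flow', 'phases/02-user-profiles', 'phases/03-billing']
```

The milestone grouping survives only in ROADMAP.md section headings; on disk, phases are a flat, sequential list.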
// ─── Preview ─────────────────────────────────────────────────────────────────

/**
 * Format a dry-run preview string for display before writing.
 */
function buildPreview(gsd2Data, artifacts) {
  const lines = ['Preview — files that will be created in .planning/:'];

  for (const rel of artifacts.keys()) {
    lines.push(` ${rel}`);
  }

  const totalSlices = gsd2Data.milestones.reduce((s, m) => s + m.slices.length, 0);
  const doneSlices = gsd2Data.milestones.reduce((s, m) => s + m.slices.filter(sl => sl.done).length, 0);
  const allTasks = gsd2Data.milestones.flatMap(m => m.slices.flatMap(sl => sl.tasks));
  const doneTasks = allTasks.filter(t => t.done).length;

  lines.push('');
  lines.push(`Milestones: ${gsd2Data.milestones.length}`);
  lines.push(`Phases (slices): ${totalSlices} (${doneSlices} completed)`);
  lines.push(`Plans (tasks): ${allTasks.length} (${doneTasks} completed)`);
  lines.push('');
  lines.push('Cannot migrate automatically:');
  lines.push(' - GSD-2 cost/token ledger (no v1 equivalent)');
  lines.push(' - GSD-2 database state (rebuilt from files on first /gsd-health)');
  lines.push(' - VS Code extension state');

  return lines.join('\n');
}

// ─── Writer ───────────────────────────────────────────────────────────────────

/**
 * Write all artifacts to the .planning/ directory.
 */
function writePlanningDir(artifacts, planningRoot) {
  for (const [rel, content] of artifacts) {
    const absPath = path.join(planningRoot, rel);
    fs.mkdirSync(path.dirname(absPath), { recursive: true });
    fs.writeFileSync(absPath, content, 'utf8');
  }
}

// ─── Command Handler ──────────────────────────────────────────────────────────

/**
 * Entry point called from gsd-tools.cjs.
 * Supports: --force, --dry-run, --path <dir>
 */
function cmdFromGsd2(args, cwd, raw) {
  const { output, error } = require('./core.cjs');

  const force = args.includes('--force');
  const dryRun = args.includes('--dry-run');

  const pathIdx = args.indexOf('--path');
  const projectDir = pathIdx >= 0 && args[pathIdx + 1]
    ? path.resolve(cwd, args[pathIdx + 1])
    : cwd;

  const gsdDir = findGsd2Root(projectDir);
  if (!gsdDir) {
    return output({ success: false, error: `No .gsd/ directory found in ${projectDir}` }, raw);
  }

  const planningRoot = path.join(path.dirname(gsdDir), '.planning');
  if (fs.existsSync(planningRoot) && !force) {
    return output({
      success: false,
      error: `.planning/ already exists at ${planningRoot}. Pass --force to overwrite.`,
    }, raw);
  }

  const gsd2Data = parseGsd2(gsdDir);
  const artifacts = buildPlanningArtifacts(gsd2Data);
  const preview = buildPreview(gsd2Data, artifacts);

  if (dryRun) {
    return output({ success: true, dryRun: true, preview }, raw);
  }

  writePlanningDir(artifacts, planningRoot);

  return output({
    success: true,
    planningDir: planningRoot,
    filesWritten: artifacts.size,
    milestones: gsd2Data.milestones.length,
    preview,
  }, raw);
}

module.exports = {
  findGsd2Root,
  parseGsd2,
  buildPlanningArtifacts,
  buildPreview,
  writePlanningDir,
  cmdFromGsd2,
  // Exported for unit tests
  parseSlicesFromRoadmap,
  parseMilestoneTitle,
  parseTaskTitle,
  parseTaskDescription,
  parseTaskMustHaves,
  buildPlanMd,
  buildSummaryMd,
  buildContextMd,
  buildRoadmapMd,
  buildStateMd,
  slugify,
  zeroPad,
};
@@ -1513,6 +1513,105 @@ function cmdAgentSkills(cwd, agentType, raw) {
  process.exit(0);
}

/**
 * Generate a skill manifest from a skills directory.
 *
 * Scans the given skills directory for subdirectories containing SKILL.md,
 * extracts frontmatter (name, description) and trigger conditions from the
 * body text, and returns an array of skill descriptors.
 *
 * @param {string} skillsDir - Absolute path to the skills directory
 * @returns {Array<{name: string, description: string, triggers: string[], path: string}>}
 */
function buildSkillManifest(skillsDir) {
  const { extractFrontmatter } = require('./frontmatter.cjs');

  if (!fs.existsSync(skillsDir)) return [];

  let entries;
  try {
    entries = fs.readdirSync(skillsDir, { withFileTypes: true });
  } catch {
    return [];
  }

  const manifest = [];
  for (const entry of entries) {
    if (!entry.isDirectory()) continue;

    const skillMdPath = path.join(skillsDir, entry.name, 'SKILL.md');
    if (!fs.existsSync(skillMdPath)) continue;

    let content;
    try {
      content = fs.readFileSync(skillMdPath, 'utf-8');
    } catch {
      continue;
    }

    const frontmatter = extractFrontmatter(content);
    const name = frontmatter.name || entry.name;
    const description = frontmatter.description || '';

    // Extract trigger lines from body text (after frontmatter)
    const triggers = [];
    const bodyMatch = content.match(/^---[\s\S]*?---\s*\n([\s\S]*)$/);
    if (bodyMatch) {
      const body = bodyMatch[1];
      const triggerLines = body.match(/^TRIGGER\s+when:\s*(.+)$/gmi);
      if (triggerLines) {
        for (const line of triggerLines) {
          const m = line.match(/^TRIGGER\s+when:\s*(.+)$/i);
          if (m) triggers.push(m[1].trim());
        }
      }
    }

    manifest.push({
      name,
      description,
      triggers,
      path: entry.name,
    });
  }

  // Sort by name for deterministic output
  manifest.sort((a, b) => a.name.localeCompare(b.name));
  return manifest;
}

/**
 * Command: generate skill manifest JSON.
 *
 * Options:
 *   --skills-dir <path>  Path to skills directory (required)
 *   --write              Also write to .planning/skill-manifest.json
 */
function cmdSkillManifest(cwd, args, raw) {
  const skillsDirIdx = args.indexOf('--skills-dir');
  const skillsDir = skillsDirIdx >= 0 && args[skillsDirIdx + 1]
    ? args[skillsDirIdx + 1]
    : null;

  if (!skillsDir) {
    output([], raw);
    return;
  }

  const manifest = buildSkillManifest(skillsDir);

  // Optionally write to .planning/skill-manifest.json
  if (args.includes('--write')) {
    const planningDir = path.join(cwd, '.planning');
    if (fs.existsSync(planningDir)) {
      const manifestPath = path.join(planningDir, 'skill-manifest.json');
      fs.writeFileSync(manifestPath, JSON.stringify(manifest, null, 2), 'utf-8');
    }
  }

  output(manifest, raw);
}

module.exports = {
  cmdInitExecutePhase,
  cmdInitPlanPhase,
@@ -1533,4 +1632,6 @@ module.exports = {
  detectChildRepos,
  buildAgentSkillsBlock,
  cmdAgentSkills,
  buildSkillManifest,
  cmdSkillManifest,
};
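The trigger extraction in `buildSkillManifest` above is a two-step match: strip the frontmatter, then scan the body for `TRIGGER when:` lines case-insensitively. A small sketch with an assumed SKILL.md body (sample content, not from a real skill):

```javascript
// Sample SKILL.md content with frontmatter and two trigger lines (illustrative only).
const content = [
  '---',
  'name: gsd-deploy',
  'description: Deploys the project',
  '---',
  'Use when shipping.',
  'TRIGGER when: user asks to deploy',
  'trigger WHEN: CI is green and a release is tagged',
].join('\n');

// Same two-step match as buildSkillManifest: strip frontmatter, then scan body lines.
const triggers = [];
const bodyMatch = content.match(/^---[\s\S]*?---\s*\n([\s\S]*)$/);
if (bodyMatch) {
  const triggerLines = bodyMatch[1].match(/^TRIGGER\s+when:\s*(.+)$/gmi) || [];
  for (const line of triggerLines) {
    const m = line.match(/^TRIGGER\s+when:\s*(.+)$/i);
    if (m) triggers.push(m[1].trim());
  }
}

console.log(triggers);
// → ['user asks to deploy', 'CI is green and a release is tagged']
```

The second per-line match is needed because a regex with the `g` flag returns whole-match strings, not capture groups.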
@@ -12,7 +12,7 @@
const fs = require('fs');
const path = require('path');
const os = require('os');
const { output, error, safeReadFile } = require('./core.cjs');
const { output, error, safeReadFile, loadConfig } = require('./core.cjs');

// ─── Constants ────────────────────────────────────────────────────────────────

@@ -870,7 +870,13 @@ function cmdGenerateClaudeProfile(cwd, options, raw) {
} else if (options.output) {
  targetPath = path.isAbsolute(options.output) ? options.output : path.join(cwd, options.output);
} else {
  targetPath = path.join(cwd, 'CLAUDE.md');
  // Read claude_md_path from config, default to ./CLAUDE.md
  let configClaudeMdPath = './CLAUDE.md';
  try {
    const config = loadConfig(cwd);
    if (config.claude_md_path) configClaudeMdPath = config.claude_md_path;
  } catch { /* use default */ }
  targetPath = path.isAbsolute(configClaudeMdPath) ? configClaudeMdPath : path.join(cwd, configClaudeMdPath);
}

let action;
@@ -944,7 +950,13 @@ function cmdGenerateClaudeMd(cwd, options, raw) {

let outputPath = options.output;
if (!outputPath) {
  outputPath = path.join(cwd, 'CLAUDE.md');
  // Read claude_md_path from config, default to ./CLAUDE.md
  let configClaudeMdPath = './CLAUDE.md';
  try {
    const config = loadConfig(cwd);
    if (config.claude_md_path) configClaudeMdPath = config.claude_md_path;
  } catch { /* use default */ }
  outputPath = path.isAbsolute(configClaudeMdPath) ? configClaudeMdPath : path.join(cwd, configClaudeMdPath);
} else if (!path.isAbsolute(outputPath)) {
  outputPath = path.join(cwd, outputPath);
}

110 get-shit-done/references/executor-examples.md Normal file
@@ -0,0 +1,110 @@
# Executor Extended Examples

> Reference file for gsd-executor agent. Loaded on-demand via `@` reference.
> For sub-200K context windows, this content is stripped from the agent prompt and available here for on-demand loading.

## Deviation Rule Examples

### Rule 1 — Auto-fix bugs

**Examples of Rule 1 triggers:**
- Wrong queries returning incorrect data
- Logic errors in conditionals
- Type errors and type mismatches
- Null pointer exceptions / undefined access
- Broken validation (accepts invalid input)
- Security vulnerabilities (XSS, SQL injection)
- Race conditions in async code
- Memory leaks from uncleaned resources

### Rule 2 — Auto-add missing critical functionality

**Examples of Rule 2 triggers:**
- Missing error handling (unhandled promise rejections, no try/catch on I/O)
- No input validation on user-facing endpoints
- Missing null checks before property access
- No auth on protected routes
- Missing authorization checks (user can access other users' data)
- No CSRF/CORS configuration
- No rate limiting on public endpoints
- Missing DB indexes on frequently queried columns
- No error logging (failures silently swallowed)

### Rule 3 — Auto-fix blocking issues

**Examples of Rule 3 triggers:**
- Missing dependency not in package.json
- Wrong types preventing compilation
- Broken imports (wrong path, wrong export name)
- Missing env var required at runtime
- DB connection error (wrong URL, missing credentials)
- Build config error (wrong entry point, missing loader)
- Missing referenced file (import points to non-existent module)
- Circular dependency preventing module load

### Rule 4 — Ask about architectural changes

**Examples of Rule 4 triggers:**
- New DB table (not just adding a column)
- Major schema changes (renaming tables, changing relationships)
- New service layer (adding a queue, cache, or message bus)
- Switching libraries/frameworks (e.g., replacing Express with Fastify)
- Changing auth approach (switching from session to JWT)
- New infrastructure (adding Redis, S3, etc.)
- Breaking API changes (removing or renaming endpoints)

## Edge Case Decision Guide

| Scenario | Rule | Rationale |
|----------|------|-----------|
| Missing validation on input | Rule 2 | Security requirement |
| Crashes on null input | Rule 1 | Bug — incorrect behavior |
| Need new database table | Rule 4 | Architectural decision |
| Need new column on existing table | Rule 1 or 2 | Depends on context |
| Pre-existing linting warnings | Out of scope | Not caused by current task |
| Unrelated test failures | Out of scope | Not caused by current task |

**Decision heuristic:** "Does this affect correctness, security, or ability to complete the current task?"
- YES → Rules 1-3 (fix automatically)
- MAYBE → Rule 4 (ask the user)
- NO → Out of scope (log to deferred-items.md)

## Checkpoint Examples

### Good checkpoint placement

```xml
<!-- Automate everything, then verify at the end -->
<task type="auto">Create database schema</task>
<task type="auto">Create API endpoints</task>
<task type="auto">Create UI components</task>
<task type="checkpoint:human-verify">
  <what-built>Complete auth flow (schema + API + UI)</what-built>
  <how-to-verify>
    1. Visit http://localhost:3000/register
    2. Create account with test@example.com
    3. Log in with those credentials
    4. Verify dashboard loads with user name
  </how-to-verify>
</task>
```

### Bad checkpoint placement

```xml
<!-- Too many checkpoints — causes verification fatigue -->
<task type="auto">Create schema</task>
<task type="checkpoint:human-verify">Check schema</task>
<task type="auto">Create API</task>
<task type="checkpoint:human-verify">Check API</task>
<task type="auto">Create UI</task>
<task type="checkpoint:human-verify">Check UI</task>
```

### Auth gate handling

When an auth error occurs during `type="auto"` execution:
1. Recognize it as an auth gate (not a bug) — indicators: "Not authenticated", "401", "403", "Please run X login"
2. STOP the current task
3. Return a `checkpoint:human-action` with exact auth steps
4. In SUMMARY.md, document auth gates as normal flow, not deviations
89 get-shit-done/references/planner-antipatterns.md Normal file
@@ -0,0 +1,89 @@
# Planner Anti-Patterns and Specificity Examples

> Reference file for gsd-planner agent. Loaded on-demand via `@` reference.
> For sub-200K context windows, this content is stripped from the agent prompt and available here for on-demand loading.

## Checkpoint Anti-Patterns

### Bad — Asking human to automate

```xml
<task type="checkpoint:human-action">
  <action>Deploy to Vercel</action>
  <instructions>Visit vercel.com, import repo, click deploy...</instructions>
</task>
```

**Why bad:** Vercel has a CLI. Claude should run `vercel --yes`. Never ask the user to do what Claude can automate via CLI/API.

### Bad — Too many checkpoints

```xml
<task type="auto">Create schema</task>
<task type="checkpoint:human-verify">Check schema</task>
<task type="auto">Create API</task>
<task type="checkpoint:human-verify">Check API</task>
```

**Why bad:** Verification fatigue. Users should not be asked to verify every small step. Combine into one checkpoint at the end of meaningful work.

### Good — Single verification checkpoint

```xml
<task type="auto">Create schema</task>
<task type="auto">Create API</task>
<task type="auto">Create UI</task>
<task type="checkpoint:human-verify">
  <what-built>Complete auth flow (schema + API + UI)</what-built>
  <how-to-verify>Test full flow: register, login, access protected page</how-to-verify>
</task>
```

### Bad — Mixing checkpoints with implementation

A plan should not interleave multiple checkpoint types with implementation tasks. Checkpoints belong at natural verification boundaries, not scattered throughout.

## Specificity Examples

| TOO VAGUE | JUST RIGHT |
|-----------|------------|
| "Add authentication" | "Add JWT auth with refresh rotation using jose library, store in httpOnly cookie, 15min access / 7day refresh" |
| "Create the API" | "Create POST /api/projects endpoint accepting {name, description}, validates name length 3-50 chars, returns 201 with project object" |
| "Style the dashboard" | "Add Tailwind classes to Dashboard.tsx: grid layout (3 cols on lg, 1 on mobile), card shadows, hover states on action buttons" |
| "Handle errors" | "Wrap API calls in try/catch, return {error: string} on 4xx/5xx, show toast via sonner on client" |
| "Set up the database" | "Add User and Project models to schema.prisma with UUID ids, email unique constraint, createdAt/updatedAt timestamps, run prisma db push" |

**Specificity test:** Could a different Claude instance execute the task without asking clarifying questions? If not, add more detail.

## Context Section Anti-Patterns

### Bad — Reflexive SUMMARY chaining

```markdown
<context>
@.planning/phases/01-foundation/01-01-SUMMARY.md
@.planning/phases/01-foundation/01-02-SUMMARY.md <!-- Does Plan 02 actually need Plan 01's output? -->
@.planning/phases/01-foundation/01-03-SUMMARY.md <!-- Chain grows, context bloats -->
</context>
```

**Why bad:** Plans are often independent. Reflexive chaining (02 refs 01, 03 refs 02...) wastes context. Only reference prior SUMMARY files when the plan genuinely uses types/exports from that prior plan or a decision from it affects the current plan.

### Good — Selective context

```markdown
<context>
@.planning/PROJECT.md
@.planning/STATE.md
@.planning/phases/01-foundation/01-01-SUMMARY.md <!-- Uses User type defined in Plan 01 -->
</context>
```

## Scope Reduction Anti-Patterns

**Prohibited language in task actions:**
- "v1", "v2", "simplified version", "static for now", "hardcoded for now"
- "future enhancement", "placeholder", "basic version", "minimal implementation"
- "will be wired later", "dynamic in future phase", "skip for now"

If a decision from CONTEXT.md says "display cost calculated from billing table in impulses", the plan must deliver exactly that. Not "static label /min" as a "v1". If the phase is too complex, recommend a phase split instead of silently reducing scope.

get-shit-done/references/planner-source-audit.md (new file)
@@ -0,0 +1,73 @@
# Planner Source Audit & Authority Limits

Reference for `agents/gsd-planner.md` — extended rules for multi-source coverage audits and planner authority constraints.

## Multi-Source Coverage Audit Format

Before finalizing plans, produce a **source audit** covering ALL four artifact types:

```
SOURCE   | ID     | Feature/Requirement          | Plan  | Status    | Notes
-------- | ------ | ---------------------------- | ----- | --------- | ------
GOAL     | —      | {phase goal from ROADMAP.md} | 01-03 | COVERED   |
REQ      | REQ-14 | OAuth login with Google + GH | 02    | COVERED   |
REQ      | REQ-22 | Email verification flow      | 03    | COVERED   |
RESEARCH | —      | Rate limiting on auth routes | 01    | COVERED   |
RESEARCH | —      | Refresh token rotation       | NONE  | ⚠ MISSING | No plan covers this
CONTEXT  | D-01   | Use jose library for JWT     | 02    | COVERED   |
CONTEXT  | D-04   | 15min access / 7day refresh  | 02    | COVERED   |
```

### Four Source Types

1. **GOAL** — The `goal:` field from ROADMAP.md for this phase. The primary success condition.
2. **REQ** — Every REQ-ID in `phase_req_ids`. Cross-reference REQUIREMENTS.md for descriptions.
3. **RESEARCH** — Technical approaches, discovered constraints, and features identified in RESEARCH.md. Exclude items explicitly marked "out of scope" or "future work" by the researcher.
4. **CONTEXT** — Every D-XX decision from CONTEXT.md `<decisions>` section.

### What is NOT a Gap

Do not flag these as MISSING:
- Items in `## Deferred Ideas` in CONTEXT.md — developer chose to defer these
- Items scoped to a different phase via `phase_req_ids` — not assigned to this phase
- Items in RESEARCH.md explicitly marked "out of scope" or "future work" by the researcher

### Handling MISSING Items

If ANY row is `⚠ MISSING`, do NOT finalize the plan set silently. Return to the orchestrator:

```
## ⚠ Source Audit: Unplanned Items Found

The following items from source artifacts have no corresponding plan:

1. **{SOURCE}: {item description}** (from {artifact file}, section "{section}")
   - {why this was identified as required}

Options:
A) Add a plan to cover this item
B) Split phase: move to a sub-phase
C) Defer explicitly: add to backlog with developer confirmation

→ Awaiting developer decision before finalizing plan set.
```

If ALL rows are COVERED → return `## PLANNING COMPLETE` as normal.

---

## Authority Limits — Constraint Examples

The planner's only legitimate reasons to split or flag a feature are **constraints**, not judgments about difficulty:

**Valid (constraints):**
- ✓ "This task touches 9 files and would consume ~45% context — split into two tasks"
- ✓ "No API key or endpoint is defined in any source artifact — need developer input"
- ✓ "This feature depends on the auth system built in Phase 03, which is not yet complete"

**Invalid (difficulty judgments):**
- ✗ "This is complex and would be difficult to implement correctly"
- ✗ "Integrating with an external service could take a long time"
- ✗ "This is a challenging feature that might be better left to a future phase"

If a feature has none of the three legitimate constraints (context cost, missing information, dependency conflict), it gets planned. Period.
@@ -310,6 +310,14 @@ Set via `learnings.*` namespace (e.g., `"learnings": { "max_inject": 5 }`). Used
|-----|------|---------|----------------|-------------|
| `learnings.max_inject` | number | `10` | Any positive integer | Maximum number of global learning entries to inject into agent prompts per session |

### Intel Fields

Set via `intel.*` namespace (e.g., `"intel": { "enabled": true }`). Controls the queryable codebase intelligence system consumed by `/gsd-intel`.

| Key | Type | Default | Allowed Values | Description |
|-----|------|---------|----------------|-------------|
| `intel.enabled` | boolean | `false` | `true`, `false` | Enable queryable codebase intelligence system. When `true`, `/gsd-intel` commands build and query a JSON index in `.planning/intel/`. |
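These fields can be written with the same `gsd-tools.cjs config-set` subcommand this changeset uses for `workflow.*` keys. A hypothetical usage sketch: the key names are real, but whether `config-set` accepts bare numbers and booleans in exactly this form is an assumption.

```shell
# Hypothetical sketch: set the learnings/intel fields via config-set.
# Key names come from the tables above; argument handling is assumed.
node ~/.claude/get-shit-done/bin/gsd-tools.cjs config-set learnings.max_inject 5
node ~/.claude/get-shit-done/bin/gsd-tools.cjs config-set intel.enabled true
```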

### Manager Fields

Set via `manager.*` namespace (e.g., `"manager": { "flags": { "discuss": "--auto" } }`).

@@ -11,7 +11,14 @@
     "security_asvs_level": 1,
     "security_block_on": "high",
     "discuss_mode": "discuss",
-    "research_before_questions": false
+    "research_before_questions": false,
+    "code_review_command": null,
+    "plan_bounce": false,
+    "plan_bounce_script": null,
+    "plan_bounce_passes": 2,
+    "cross_ai_execution": false,
+    "cross_ai_command": "",
+    "cross_ai_timeout": 300
   },
   "planning": {
     "commit_docs": true,
@@ -44,5 +51,6 @@
     "context_warnings": true
   },
   "project_code": null,
-  "agent_skills": {}
+  "agent_skills": {},
+  "claude_md_path": "./CLAUDE.md"
 }

@@ -38,6 +38,18 @@ Template for `.planning/phases/XX-name/{phase_num}-RESEARCH.md` - comprehensive
**If no CONTEXT.md exists:** Write "No user constraints - all decisions at Claude's discretion"
</user_constraints>

<architectural_responsibility_map>
## Architectural Responsibility Map

Map each phase capability to its standard architectural tier owner before diving into framework research. This prevents tier misassignment from propagating into plans.

| Capability | Primary Tier | Secondary Tier | Rationale |
|------------|-------------|----------------|-----------|
| [capability from phase description] | [Browser/Client, Frontend Server, API/Backend, CDN/Static, or Database/Storage] | [secondary tier or —] | [why this tier owns it] |

**If single-tier application:** Write "Single-tier application — all capabilities reside in [tier]" and omit the table.
</architectural_responsibility_map>

<research_summary>
## Summary

@@ -113,6 +113,15 @@ Phase: "API documentation"

<answer_validation>
**IMPORTANT: Answer validation** — After every AskUserQuestion call, check whether the response is empty or whitespace-only, and handle it according to the cases below.

**Exception — "Other" with empty text:** If the user selected "Other" (or "Chat more") and the response body is empty or whitespace-only, this is NOT an empty answer — it is a signal that the user wants to type freeform input. In this case:
1. Output a single plain-text line: "What would you like to discuss?"
2. STOP generating. Do not call any tools. Do not output any further text.
3. Wait for the user's next message.
4. After receiving their message, reflect it back and continue.
Do NOT retry the AskUserQuestion or generate more questions when "Other" is selected with empty text.

**All other empty responses:** If the response is empty or whitespace-only (and the user did NOT select "Other"):
1. Retry the question once with the same parameters
2. If still empty, present the options as a plain-text numbered list and ask the user to type their choice number
Never proceed with an empty answer.

@@ -57,6 +57,8 @@ Parse `$ARGUMENTS` before loading any context:
- First positional token → `PHASE_ARG`
- Optional `--wave N` → `WAVE_FILTER`
- Optional `--gaps-only` keeps its current meaning
- Optional `--cross-ai` → `CROSS_AI_FORCE=true` (force all plans through cross-AI execution)
- Optional `--no-cross-ai` → `CROSS_AI_DISABLED=true` (disable cross-AI for this run, overrides config and frontmatter)

If `--wave` is absent, preserve the current behavior of executing all incomplete waves in the phase.
</step>
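The flag handling above can be sketched as a plain shell loop. The variable names come from this step; the parsing loop itself is an illustrative assumption, not the command's actual implementation.

```shell
# Sketch: parse "$ARGUMENTS"-style tokens into the variables named above.
set -- 8 --wave 2 --cross-ai   # example invocation: phase 8, wave 2, force cross-AI
PHASE_ARG="" WAVE_FILTER="" GAPS_ONLY=false CROSS_AI_FORCE=false CROSS_AI_DISABLED=false
while [ $# -gt 0 ]; do
  case "$1" in
    --wave) WAVE_FILTER="$2"; shift ;;          # consume the N argument
    --gaps-only) GAPS_ONLY=true ;;
    --cross-ai) CROSS_AI_FORCE=true ;;
    --no-cross-ai) CROSS_AI_DISABLED=true ;;
    *) if [ -z "$PHASE_ARG" ]; then PHASE_ARG="$1"; fi ;;  # first positional token
  esac
  shift
done
echo "$PHASE_ARG $WAVE_FILTER $CROSS_AI_FORCE"   # prints: 8 2 true
```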

@@ -93,6 +95,12 @@ When `CONTEXT_WINDOW >= 500000` (1M-class models), subagent prompts include rich
- Verifier agents receive all PLAN.md, SUMMARY.md, CONTEXT.md files plus REQUIREMENTS.md
- This enables cross-phase awareness and history-aware verification

When `CONTEXT_WINDOW < 200000` (sub-200K models), subagent prompts are thinned to reduce static overhead:
- Executor agents omit extended deviation rule examples and checkpoint examples from inline prompt — load on-demand via @~/.claude/get-shit-done/references/executor-examples.md
- Planner agents omit extended anti-pattern lists and specificity examples from inline prompt — load on-demand via @~/.claude/get-shit-done/references/planner-antipatterns.md
- Core rules and decision logic remain inline; only verbose examples and edge-case lists are extracted
- This reduces executor static overhead by ~40% while preserving behavioral correctness

**If `phase_found` is false:** Error — phase directory not found.
**If `plan_count` is 0:** Error — no plans found in phase.
**If `state_exists` is false but `.planning/` exists:** Offer reconstruct or continue.
@@ -243,6 +251,77 @@ Report:
```
</step>

<step name="cross_ai_delegation">
**Optional step 2.5 — Delegate plans to an external AI runtime.**

This step runs after plan discovery and before normal wave execution. It identifies plans that should be delegated to an external AI command and executes them via stdin-based prompt delivery. Plans handled here are removed from the execute_waves plan list so the normal executor skips them.

**Activation logic:**

1. If `CROSS_AI_DISABLED` is true (`--no-cross-ai` flag): skip this step entirely.
2. If `CROSS_AI_FORCE` is true (`--cross-ai` flag): mark ALL incomplete plans for cross-AI execution.
3. Otherwise: check each plan's frontmatter for `cross_ai: true` AND verify config `workflow.cross_ai_execution` is `true`. Plans matching both conditions are marked for cross-AI.

```bash
CROSS_AI_ENABLED=$(node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.cross_ai_execution --default false 2>/dev/null)
CROSS_AI_CMD=$(node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.cross_ai_command --default "" 2>/dev/null)
CROSS_AI_TIMEOUT=$(node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.cross_ai_timeout --default 300 2>/dev/null)
```

**If no plans are marked for cross-AI:** Skip to execute_waves.

**If plans are marked but `cross_ai_command` is empty:** Error — tell user to set `workflow.cross_ai_command` via `gsd-tools.cjs config-set workflow.cross_ai_command "<command>"`.

**For each cross-AI plan (sequentially):**

1. **Construct the task prompt** from the plan file:
   - Extract `<objective>` and `<tasks>` sections from the PLAN.md
   - Append PROJECT.md context (project name, description, tech stack)
   - Format as a self-contained execution prompt

2. **Check for dirty working tree before execution:**
   ```bash
   if ! git diff --quiet HEAD 2>/dev/null; then
     echo "WARNING: dirty working tree detected — the external AI command may produce uncommitted changes that conflict with existing modifications"
   fi
   ```

3. **Run the external command** from the project root, writing the prompt to stdin. Never shell-interpolate the prompt — always pipe via stdin to prevent injection:
   ```bash
   printf '%s\n' "$TASK_PROMPT" | timeout "${CROSS_AI_TIMEOUT}s" ${CROSS_AI_CMD} > "$CANDIDATE_SUMMARY" 2>"$ERROR_LOG"
   EXIT_CODE=$?
   ```

4. **Evaluate the result:**

   **Success (exit 0 + valid summary):**
   - Read `$CANDIDATE_SUMMARY` and validate it contains meaningful content (not empty, has at least a heading and description — a valid SUMMARY.md structure)
   - Write it as the plan's SUMMARY.md file
   - Update STATE.md plan status to complete
   - Update ROADMAP.md progress
   - Mark plan as handled — skip it in execute_waves

   **Failure (non-zero exit or invalid summary):**
   - Display the error output and exit code
   - Warn: "The external command may have left uncommitted changes or partial edits in the working tree. Review `git status` and `git diff` before proceeding."
   - Offer three choices:
     - **retry** — run the same plan through cross-AI again
     - **skip** — fall back to normal executor for this plan (re-add to execute_waves list)
     - **abort** — stop execution entirely, preserve state for resume

5. **After all cross-AI plans processed:** Remove successfully handled plans from the incomplete plan list so execute_waves skips them. Any skipped-to-fallback plans remain in the list for normal executor processing.
</step>
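The delegation loop above can be sketched end to end in shell, with `cat` standing in for the external AI command. `CROSS_AI_CMD`/`CROSS_AI_TIMEOUT` mirror the config reads shown in this step; the "valid summary" heuristic (non-empty, has at least a heading) is the one the step describes. The stand-in command and literal prompt are illustrative assumptions.

```shell
# Sketch of one cross-AI plan execution with a stand-in external command.
CROSS_AI_CMD="cat"          # placeholder for workflow.cross_ai_command
CROSS_AI_TIMEOUT=300
TASK_PROMPT="# Summary
Implemented the plan."
CANDIDATE_SUMMARY=$(mktemp)
ERROR_LOG=$(mktemp)

# Pipe the prompt via stdin (never shell-interpolate it) under a timeout.
printf '%s\n' "$TASK_PROMPT" | timeout "${CROSS_AI_TIMEOUT}s" ${CROSS_AI_CMD} > "$CANDIDATE_SUMMARY" 2>"$ERROR_LOG"
EXIT_CODE=$?

# Success requires exit 0 AND a minimally valid SUMMARY.md: non-empty with a heading.
if [ "$EXIT_CODE" -eq 0 ] && [ -s "$CANDIDATE_SUMMARY" ] && grep -q '^#' "$CANDIDATE_SUMMARY"; then
  echo "cross-ai: success"
else
  echo "cross-ai: failure (exit $EXIT_CODE)"
fi
```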
|
||||
|
||||

<step name="execute_waves">
Execute each selected wave in sequence. Within a wave: parallel if `PARALLELIZATION=true`, sequential if `false`.

@@ -382,6 +461,12 @@ Execute each selected wave in sequence. Within a wave: parallel if `PARALLELIZAT
auto-detects worktree mode (`.git` is a file, not a directory) and skips
shared file updates automatically. The orchestrator updates them centrally
after merge.

REQUIRED: SUMMARY.md MUST be committed before you return. In worktree mode the
git_commit_metadata step in execute-plan.md commits SUMMARY.md and REQUIREMENTS.md
only (STATE.md and ROADMAP.md are excluded automatically). Do NOT skip or defer
this commit — the orchestrator force-removes the worktree after you return, and
any uncommitted SUMMARY.md will be permanently lost (#2070).
</parallel_execution>
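The worktree detection mentioned above rests on a git convention: in a linked worktree `.git` is a plain file containing a `gitdir:` pointer, while the main checkout has a `.git` directory. A minimal sketch (the function name and simulated directories are illustrative):

```shell
# Sketch: distinguish a linked worktree from a main checkout by the shape of .git.
detect_git_mode() {
  if [ -f "$1/.git" ]; then echo "worktree"     # .git is a file: linked worktree
  elif [ -d "$1/.git" ]; then echo "main"       # .git is a directory: main checkout
  else echo "none"
  fi
}

main_co=$(mktemp -d); mkdir "$main_co/.git"                    # simulate main checkout
wt_co=$(mktemp -d); echo "gitdir: /elsewhere" > "$wt_co/.git"  # simulate linked worktree
detect_git_mode "$main_co"   # prints: main
detect_git_mode "$wt_co"     # prints: worktree
```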

<execution_context>
@@ -389,6 +474,7 @@ Execute each selected wave in sequence. Within a wave: parallel if `PARALLELIZAT
@~/.claude/get-shit-done/templates/summary.md
@~/.claude/get-shit-done/references/checkpoints.md
@~/.claude/get-shit-done/references/tdd.md
${CONTEXT_WINDOW < 200000 ? '' : '@~/.claude/get-shit-done/references/executor-examples.md'}
</execution_context>

<files_to_read>
@@ -556,6 +642,17 @@ Execute each selected wave in sequence. Within a wave: parallel if `PARALLELIZAT
  fi
fi

# Safety net: commit any uncommitted SUMMARY.md before force-removing the worktree.
# This guards against executors that skipped the git_commit_metadata step (#2070).
UNCOMMITTED_SUMMARY=$(git -C "$WT" ls-files --modified --others --exclude-standard -- "*SUMMARY.md" 2>/dev/null || true)
if [ -n "$UNCOMMITTED_SUMMARY" ]; then
  echo "⚠ SUMMARY.md was not committed by executor — committing now to prevent data loss"
  git -C "$WT" add -- "*SUMMARY.md" 2>/dev/null || true
  git -C "$WT" commit --no-verify -m "docs(recovery): rescue uncommitted SUMMARY.md before worktree removal (#2070)" 2>/dev/null || true
  # Re-merge the recovery commit
  git merge "$WT_BRANCH" --no-edit -m "chore: merge rescued SUMMARY.md from executor worktree ($WT_BRANCH)" 2>/dev/null || true
fi

# Remove the worktree
git worktree remove "$WT" --force 2>/dev/null || true

@@ -188,32 +188,12 @@ Auth errors during execution are NOT failures — they're expected interaction p
 
 ## Deviation Rules
 
-You WILL discover unplanned work. Apply automatically, track all for Summary.
-
-| Rule | Trigger | Action | Permission |
-|------|---------|--------|------------|
-| **1: Bug** | Broken behavior, errors, wrong queries, type errors, security vulns, race conditions, leaks | Fix → test → verify → track `[Rule 1 - Bug]` | Auto |
-| **2: Missing Critical** | Missing essentials: error handling, validation, auth, CSRF/CORS, rate limiting, indexes, logging | Add → test → verify → track `[Rule 2 - Missing Critical]` | Auto |
-| **3: Blocking** | Prevents completion: missing deps, wrong types, broken imports, missing env/config/files, circular deps | Fix blocker → verify proceeds → track `[Rule 3 - Blocking]` | Auto |
-| **4: Architectural** | Structural change: new DB table, schema change, new service, switching libs, breaking API, new infra | STOP → present decision (below) → track `[Rule 4 - Architectural]` | Ask user |
-
-**Rule 4 format:**
-```
-⚠️ Architectural Decision Needed
-
-Current task: [task name]
-Discovery: [what prompted this]
-Proposed change: [modification]
-Why needed: [rationale]
-Impact: [what this affects]
-Alternatives: [other approaches]
-
-Proceed with proposed change? (yes / different approach / defer)
-```
-
-**Priority:** Rule 4 (STOP) > Rules 1-3 (auto) > unsure → Rule 4
-**Edge cases:** missing validation → R2 | null crash → R1 | new table → R4 | new column → R1/2
-**Heuristic:** Affects correctness/security/completion? → R1-3. Maybe? → R4.
+Apply deviation rules from the gsd-executor agent definition (single source of truth):
+- **Rules 1-3** (bugs, missing critical, blockers): auto-fix, test, verify, track as deviations
+- **Rule 4** (architectural changes): STOP, present decision to user, await approval
+- **Scope boundary**: do not auto-fix pre-existing issues unrelated to current task
+- **Fix attempt limit**: max 3 retries per deviation before escalating
+- **Priority**: Rule 4 (STOP) > Rules 1-3 (auto) > unsure → Rule 4
 
 </deviation_rules>
@@ -266,59 +246,13 @@ If a commit is BLOCKED by a hook:
 <task_commit>
 ## Task Commit Protocol
 
-After each task (verification passed, done criteria met), commit immediately.
-
-**1. Check:** `git status --short`
-
-**2. Stage individually** (NEVER `git add .` or `git add -A`):
-```bash
-git add src/api/auth.ts
-git add src/types/user.ts
-```
-
-**3. Commit type:**
-
-| Type | When | Example |
-|------|------|---------|
-| `feat` | New functionality | feat(08-02): create user registration endpoint |
-| `fix` | Bug fix | fix(08-02): correct email validation regex |
-| `test` | Test-only (TDD RED) | test(08-02): add failing test for password hashing |
-| `refactor` | No behavior change (TDD REFACTOR) | refactor(08-02): extract validation to helper |
-| `perf` | Performance | perf(08-02): add database index |
-| `docs` | Documentation | docs(08-02): add API docs |
-| `style` | Formatting | style(08-02): format auth module |
-| `chore` | Config/deps | chore(08-02): add bcrypt dependency |
-
-**4. Format:** `{type}({phase}-{plan}): {description}` with bullet points for key changes.
-
-<sub_repos_commit_flow>
-**Sub-repos mode:** If `sub_repos` is configured (non-empty array from init context), use `commit-to-subrepo` instead of standard git commit. This routes files to their correct sub-repo based on path prefix.
-
-```bash
-node ~/.claude/get-shit-done/bin/gsd-tools.cjs commit-to-subrepo "{type}({phase}-{plan}): {description}" --files file1 file2 ...
-```
-
-The command groups files by sub-repo prefix and commits atomically to each. Returns JSON: `{ committed: true, repos: { "backend": { hash: "abc", files: [...] }, ... } }`.
-
-Record hashes from each repo in the response for SUMMARY tracking.
-
-**If `sub_repos` is empty or not set:** Use standard git commit flow below.
-</sub_repos_commit_flow>
-
-**5. Record hash:**
-```bash
-TASK_COMMIT=$(git rev-parse --short HEAD)
-TASK_COMMITS+=("Task ${TASK_NUM}: ${TASK_COMMIT}")
-```
-
-**6. Check for untracked generated files:**
-```bash
-git status --short | grep '^??'
-```
-If new untracked files appeared after running scripts or tools, decide for each:
-- **Commit it** — if it's a source file, config, or intentional artifact
-- **Add to .gitignore** — if it's a generated/runtime output (build artifacts, `.env` files, cache files, compiled output)
-- Do NOT leave generated files untracked
+Follow the task commit protocol from the gsd-executor agent definition (single source of truth):
+- Stage files individually (NEVER `git add .` or `git add -A`)
+- Format: `{type}({phase}-{plan}): {concise description}` with bullet points for key changes
+- Types: feat, fix, test, refactor, perf, docs, style, chore
+- Sub-repos: use `commit-to-subrepo` when `sub_repos` is configured
+- Record commit hash for SUMMARY tracking
+- Check for untracked generated files after each commit
 
 </task_commit>


get-shit-done/workflows/extract_learnings.md (new file)
@@ -0,0 +1,232 @@
<purpose>
Extract decisions, lessons learned, patterns discovered, and surprises encountered from completed phase artifacts into a structured LEARNINGS.md file. Captures institutional knowledge that would otherwise be lost between phases.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<objective>
Analyze completed phase artifacts (PLAN.md, SUMMARY.md, VERIFICATION.md, UAT.md, STATE.md) and extract structured learnings into 4 categories: decisions, lessons, patterns, and surprises. Each extracted item includes source attribution. The output is a LEARNINGS.md file with YAML frontmatter containing metadata about the extraction.
</objective>

<process>

<step name="initialize">
Parse arguments and load project state:

```bash
INIT=$(node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" init phase-op "${PHASE_ARG}")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Parse from init JSON: `phase_found`, `phase_dir`, `phase_number`, `phase_name`, `padded_phase`.

If phase not found, exit with error: "Phase {PHASE_ARG} not found."
</step>
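Pulling the fields out of the init JSON can be sketched with node, which this workflow already depends on. The field names come from this step; the sample JSON payload and the node one-liner are illustrative assumptions.

```shell
# Sketch: extract fields from the init JSON returned by gsd-tools.cjs.
INIT='{"phase_found":true,"phase_dir":".planning/phases/03-auth","phase_number":3,"phase_name":"auth","padded_phase":"03"}'
PHASE_DIR=$(printf '%s' "$INIT" | node -e 'const j=JSON.parse(require("fs").readFileSync(0,"utf8"));console.log(j.phase_dir)')
PHASE_FOUND=$(printf '%s' "$INIT" | node -e 'const j=JSON.parse(require("fs").readFileSync(0,"utf8"));console.log(j.phase_found)')
echo "$PHASE_FOUND $PHASE_DIR"   # prints: true .planning/phases/03-auth
```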

<step name="collect_artifacts">
Read the phase artifacts. PLAN.md and SUMMARY.md are required; VERIFICATION.md, UAT.md, and STATE.md are optional.

**Required artifacts:**
- `${PHASE_DIR}/*-PLAN.md` — all plan files for the phase
- `${PHASE_DIR}/*-SUMMARY.md` — all summary files for the phase

If the PLAN.md or SUMMARY.md files are missing, exit with error: "Required artifacts missing. PLAN.md and SUMMARY.md are required for learning extraction."

**Optional artifacts (read if available, skip if not found):**
- `${PHASE_DIR}/*-VERIFICATION.md` — verification results
- `${PHASE_DIR}/*-UAT.md` — user acceptance test results
- `.planning/STATE.md` — project state with decisions and blockers

Track which optional artifacts are missing for the `missing_artifacts` frontmatter field.
</step>
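The required/optional checks above can be sketched with plain globs. The paths and the error message are from this step; the glob-based existence checks and the simulated phase directory are illustrative assumptions.

```shell
# Sketch: verify required artifacts and track missing optional ones.
PHASE_DIR=$(mktemp -d)
touch "$PHASE_DIR/03-01-PLAN.md" "$PHASE_DIR/03-01-SUMMARY.md"   # simulate a phase dir

# Required: at least one *-PLAN.md and one *-SUMMARY.md must exist.
if ! ls "$PHASE_DIR"/*-PLAN.md >/dev/null 2>&1 || ! ls "$PHASE_DIR"/*-SUMMARY.md >/dev/null 2>&1; then
  echo "Required artifacts missing. PLAN.md and SUMMARY.md are required for learning extraction."
  exit 1
fi

# Optional: record what is absent for the missing_artifacts frontmatter field.
MISSING_ARTIFACTS=""
for opt in VERIFICATION UAT; do
  ls "$PHASE_DIR"/*-"$opt".md >/dev/null 2>&1 || MISSING_ARTIFACTS="$MISSING_ARTIFACTS $opt.md"
done
echo "missing:$MISSING_ARTIFACTS"
```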

<step name="extract_learnings">
Analyze all collected artifacts and extract learnings into 4 categories:

### 1. Decisions
Technical and architectural decisions made during the phase. Look for:
- Explicit decisions documented in PLAN.md or SUMMARY.md
- Technology choices and their rationale
- Trade-offs that were evaluated
- Design decisions recorded in STATE.md

Each decision entry must include:
- **What** was decided
- **Why** it was decided (rationale)
- **Source:** attribution to the artifact where the decision was found (e.g., "Source: 03-01-PLAN.md")

### 2. Lessons
Things learned during execution that were not known beforehand. Look for:
- Unexpected complexity in SUMMARY.md
- Issues discovered during verification in VERIFICATION.md
- Failed approaches documented in SUMMARY.md
- UAT feedback that revealed gaps

Each lesson entry must include:
- **What** was learned
- **Context** for the lesson
- **Source:** attribution to the originating artifact

### 3. Patterns
Reusable patterns, approaches, or techniques discovered. Look for:
- Successful implementation patterns in SUMMARY.md
- Testing patterns from VERIFICATION.md or UAT.md
- Workflow patterns that worked well
- Code organization patterns from PLAN.md

Each pattern entry must include:
- **Pattern** name/description
- **When to use** it
- **Source:** attribution to the originating artifact

### 4. Surprises
Unexpected findings, behaviors, or outcomes. Look for:
- Things that took longer or shorter than estimated
- Unexpected dependencies or interactions
- Edge cases not anticipated in planning
- Performance or behavior that differed from expectations

Each surprise entry must include:
- **What** was surprising
- **Impact** of the surprise
- **Source:** attribution to the originating artifact
</step>

<step name="capture_thought_integration">
If the `capture_thought` tool is available in the current session, capture each extracted learning as a thought with metadata:

```
capture_thought({
  category: "decision" | "lesson" | "pattern" | "surprise",
  phase: PHASE_NUMBER,
  content: LEARNING_TEXT,
  source: ARTIFACT_NAME
})
```

If `capture_thought` is not available (e.g., runtime does not support it), gracefully skip this step and continue. The LEARNINGS.md file is the primary output — capture_thought is a supplementary integration for runtimes that support thought capture. The workflow must not fail or warn if capture_thought is unavailable.
</step>
|
||||
|
||||
<step name="write_learnings">
|
||||
Write the LEARNINGS.md file to the phase directory. If a previous LEARNINGS.md exists, overwrite it (replace the file entirely).
|
||||
|
||||
Output path: `${PHASE_DIR}/${PADDED_PHASE}-LEARNINGS.md`
|
||||
|
||||
The file must have YAML frontmatter with these fields:
|
||||
```yaml
|
||||
---
|
||||
phase: {PHASE_NUMBER}
|
||||
phase_name: "{PHASE_NAME}"
|
||||
project: "{PROJECT_NAME}"
|
||||
generated: "{ISO_DATE}"
|
||||
counts:
|
||||
decisions: {N}
|
||||
lessons: {N}
|
||||
patterns: {N}
|
||||
surprises: {N}
|
||||
missing_artifacts:
|
||||
- "{ARTIFACT_NAME}"
|
||||
---
|
||||
```

The body follows this structure:
```markdown
# Phase {PHASE_NUMBER} Learnings: {PHASE_NAME}

## Decisions

### {Decision Title}
{What was decided}

**Rationale:** {Why}
**Source:** {artifact file}

---

## Lessons

### {Lesson Title}
{What was learned}

**Context:** {context}
**Source:** {artifact file}

---

## Patterns

### {Pattern Name}
{Description}

**When to use:** {applicability}
**Source:** {artifact file}

---

## Surprises

### {Surprise Title}
{What was surprising}

**Impact:** {impact description}
**Source:** {artifact file}
```
</step>

<step name="update_state">
Update STATE.md to reflect the learning extraction:

```bash
node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" state update "Last Activity" "$(date +%Y-%m-%d)"
```
</step>

<step name="report">
```
---------------------------------------------------------------

## Learnings Extracted: Phase {X} — {Name}

Decisions: {N}
Lessons: {N}
Patterns: {N}
Surprises: {N}
Total: {N}

Output: {PHASE_DIR}/{PADDED_PHASE}-LEARNINGS.md

Missing artifacts: {list or "none"}

Next steps:
- Review extracted learnings for accuracy
- /gsd-progress — see overall project state
- /gsd-execute-phase {next} — continue to next phase

---------------------------------------------------------------
```
</step>

</process>

<success_criteria>
- [ ] Phase artifacts located and read successfully
- [ ] All 4 categories extracted: decisions, lessons, patterns, surprises
- [ ] Each extracted item has source attribution
- [ ] LEARNINGS.md written with correct YAML frontmatter
- [ ] Missing optional artifacts tracked in frontmatter
- [ ] capture_thought integration attempted if tool available
- [ ] STATE.md updated with extraction activity
- [ ] User receives summary report
</success_criteria>

<critical_rules>
- PLAN.md and SUMMARY.md are required — exit with clear error if missing
- VERIFICATION.md, UAT.md, and STATE.md are optional — extract from them if present, skip gracefully if not found
- Every extracted learning must have source attribution back to the originating artifact
- Running extract-learnings twice on the same phase must overwrite (replace) the previous LEARNINGS.md, not append
- Do not fabricate learnings — only extract what is explicitly documented in artifacts
- If capture_thought is unavailable, the workflow must not fail — graceful degradation to file-only output
- LEARNINGS.md frontmatter must include counts for all 4 categories and list any missing_artifacts
</critical_rules>

@@ -345,7 +345,7 @@ Usage: `/gsd-ship 4` or `/gsd-ship 4 --draft`

---

**`/gsd-review --phase N [--gemini] [--claude] [--codex] [--coderabbit] [--all]`**
**`/gsd-review --phase N [--gemini] [--claude] [--codex] [--coderabbit] [--opencode] [--qwen] [--cursor] [--all]`**
Cross-AI peer review — invoke external AI CLIs to independently review phase plans.

- Detects available CLIs (gemini, claude, codex, coderabbit)

@@ -202,7 +202,7 @@ Workspace created: $TARGET_PATH
Branch: $BRANCH_NAME

Next steps:
  cd $TARGET_PATH
  cd "$TARGET_PATH"
  /gsd-new-project   # Initialize GSD in the workspace
```

@@ -215,7 +215,7 @@ Workspace created with $SUCCESS_COUNT of $TOTAL_COUNT repos: $TARGET_PATH
Failed: repo3 (branch already exists), repo4 (not a git repo)

Next steps:
  cd $TARGET_PATH
  cd "$TARGET_PATH"
  /gsd-new-project   # Initialize GSD in the workspace
```

@@ -225,7 +225,7 @@ Use AskUserQuestion:
- header: "Initialize GSD"
- question: "Would you like to initialize a GSD project in the new workspace?"
- options:
  - "Yes — run /gsd-new-project" → tell user to `cd $TARGET_PATH` first, then run `/gsd-new-project`
  - "Yes — run /gsd-new-project" → tell user to `cd "$TARGET_PATH"` first, then run `/gsd-new-project`
  - "No — I'll set it up later" → done

</process>

@@ -82,12 +82,56 @@ Use `--force` to bypass this check.
```
Exit.

**Consecutive-call guard:**
After passing all gates, check a counter file `.planning/.next-call-count`:
- If file exists and count >= 6: prompt "You've called /gsd-next {N} times consecutively. Continue? [y/N]"
- If user says no, exit
- Increment the counter
- By convention, the counter file is reset (deleted) when any non-`/gsd-next` command runs; other workflows don't need to implement this explicitly, and documenting the convention here is sufficient
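
The guard reduces to a small amount of shell. A sketch, using the file name and threshold from above (prompt handling elided):

```bash
# Minimal sketch of the consecutive-call guard.
mkdir -p .planning
COUNT_FILE=".planning/.next-call-count"
COUNT=$(cat "$COUNT_FILE" 2>/dev/null || echo 0)
if [ "$COUNT" -ge 6 ]; then
  printf 'You have called /gsd-next %s times consecutively. Continue? [y/N] ' "$COUNT"
  # read the answer here and exit unless it is y/Y
fi
echo $((COUNT + 1)) > "$COUNT_FILE"
```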

**Prior-phase completeness scan:**
After passing all three hard-stop gates, scan all phases that precede the current phase in ROADMAP.md order for incomplete work. Use the existing `gsd-tools.cjs phase json <N>` output to inspect each prior phase.

Detect three categories of incomplete work:
1. **Plans without summaries** — a PLAN.md exists in a prior phase directory but no matching SUMMARY.md exists (execution started but not completed).
2. **Verification failures not overridden** — a prior phase has a VERIFICATION.md with `FAIL` items that have no override annotation.
3. **CONTEXT.md without plans** — a prior phase directory has a CONTEXT.md but no PLAN.md files (discussion happened, planning never ran).
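
Category 1 comes down to a filename pairing check. A sketch, assuming the `*-PLAN.md` / `*-SUMMARY.md` naming convention used elsewhere in this workflow:

```bash
# Sketch: flag plans in a prior phase directory that never produced a summary.
PHASE_DIR=".planning/phases/01"
mkdir -p "$PHASE_DIR"
touch "$PHASE_DIR/01-1-PLAN.md"   # demo data: executed plan, no SUMMARY.md
for plan in "$PHASE_DIR"/*-PLAN.md; do
  summary="${plan%-PLAN.md}-SUMMARY.md"
  [ -f "$summary" ] || echo "incomplete: $plan"
done
```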

If no incomplete prior work is found, continue to `determine_next_action` silently with no interruption.

If incomplete prior work is found, show a structured completeness report:
```
⚠ Prior phase has incomplete work

Phase {N} — "{name}" has unresolved items:
• Plan {N}-{M} ({slug}): executed but no SUMMARY.md
[... additional items ...]

Advancing before resolving these may cause:
• Verification gaps — future phase verification won't have visibility into what prior phases shipped
• Context loss — plans that ran without summaries leave no record for future agents

Options:
[C] Continue and defer these items to backlog
[S] Stop and resolve manually (recommended)
[F] Force advance without recording deferral

Choice [S]:
```

**If the user chooses "Stop" (S or Enter/default):** Exit without routing.

**If the user chooses "Continue and defer" (C):**
1. For each incomplete item, create a backlog entry in `ROADMAP.md` under `## Backlog` using the existing `999.x` numbering scheme:
```markdown
### Phase 999.{N}: Follow-up — Phase {src} incomplete plans (BACKLOG)

**Goal:** Resolve plans that ran without producing summaries during Phase {src} execution
**Source phase:** {src}
**Deferred at:** {date} during /gsd-next advancement to Phase {dest}
**Plans:**
- [ ] {N}-{M}: {slug} (ran, no SUMMARY.md)
```
2. Commit the deferral record:
```bash
node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" commit "docs: defer incomplete Phase {src} items to backlog"
```
3. Continue routing to `determine_next_action` immediately — no second prompt.

**If the user chooses "Force" (F):** Continue to `determine_next_action` without recording deferral.
</step>

<step name="determine_next_action">

@@ -46,7 +46,7 @@ Parse JSON for: `researcher_model`, `planner_model`, `checker_model`, `research_

## 2. Parse and Normalize Arguments

Extract from $ARGUMENTS: phase number (integer or decimal like `2.1`), flags (`--research`, `--skip-research`, `--gaps`, `--skip-verify`, `--skip-ui`, `--prd <filepath>`, `--reviews`, `--text`).
Extract from $ARGUMENTS: phase number (integer or decimal like `2.1`), flags (`--research`, `--skip-research`, `--gaps`, `--skip-verify`, `--skip-ui`, `--prd <filepath>`, `--reviews`, `--text`, `--bounce`, `--skip-bounce`).

Set `TEXT_MODE=true` if `--text` is present in $ARGUMENTS OR `text_mode` from init JSON is `true`. When `TEXT_MODE` is active, replace every `AskUserQuestion` call with a plain-text numbered list and ask the user to type their choice number. This is required for Claude Code remote sessions (`/rc` mode) where TUI menus don't work through the Claude App.

@@ -719,41 +719,70 @@ Task(
## 9. Handle Planner Return

- **`## PLANNING COMPLETE`:** Display plan count. If `--skip-verify` or `plan_checker_enabled` is false (from init): skip to step 13. Otherwise: step 10.
- **`## PHASE SPLIT RECOMMENDED`:** The planner determined the phase is too complex to implement all user decisions without simplifying them. Handle in step 9b.
- **`## PHASE SPLIT RECOMMENDED`:** The planner determined the phase exceeds the context budget for full-fidelity implementation of all source items. Handle in step 9b.
- **`## ⚠ Source Audit: Unplanned Items Found`:** The planner's multi-source coverage audit found items from REQUIREMENTS.md, RESEARCH.md, ROADMAP goal, or CONTEXT.md decisions that are not covered by any plan. Handle in step 9c.
- **`## CHECKPOINT REACHED`:** Present to user, get response, spawn continuation (step 12)
- **`## PLANNING INCONCLUSIVE`:** Show attempts, offer: Add context / Retry / Manual

## 9b. Handle Phase Split Recommendation

When the planner returns `## PHASE SPLIT RECOMMENDED`, it means the phase has too many decisions to implement at full fidelity within the plan budget. The planner proposes groupings.
When the planner returns `## PHASE SPLIT RECOMMENDED`, it means the phase's source items exceed the context budget for full-fidelity implementation. The planner proposes groupings.

**Extract from planner return:**
- Proposed sub-phases (e.g., "17a: processing core (D-01 to D-19)", "17b: billing + config UX (D-20 to D-27)")
- Which D-XX decisions go in each sub-phase
- Why the split is necessary (decision count, complexity estimate)
- Which source items (REQ-IDs, D-XX decisions, RESEARCH items) go in each sub-phase
- Why the split is necessary (context cost estimate, file count)

**Present to user:**
```
## Phase {X} is too complex for full-fidelity implementation
## Phase {X} exceeds context budget for full-fidelity implementation

The planner found {N} decisions that cannot all be implemented without
simplifying some. Instead of reducing your decisions, we recommend splitting:
The planner found {N} source items that exceed the context budget when
planned at full fidelity. Instead of reducing scope, we recommend splitting:

**Option 1: Split into sub-phases**
- Phase {X}a: {name} — {D-XX to D-YY} ({N} decisions)
- Phase {X}b: {name} — {D-XX to D-YY} ({M} decisions)
- Phase {X}a: {name} — {items} ({N} source items, ~{P}% context)
- Phase {X}b: {name} — {items} ({M} source items, ~{Q}% context)

**Option 2: Proceed anyway** (planner will attempt all, quality may degrade)
**Option 2: Proceed anyway** (planner will attempt all, quality may degrade past 50% context)

**Option 3: Prioritize** — you choose which decisions to implement now,
**Option 3: Prioritize** — you choose which items to implement now,
rest become a follow-up phase
```

Use AskUserQuestion with these 3 options.

**If "Split":** Use `/gsd-insert-phase` to create the sub-phases, then replan each.
**If "Proceed":** Return to planner with instruction to attempt all decisions at full fidelity, accepting more plans/tasks.
**If "Prioritize":** Use AskUserQuestion (multiSelect) to let user pick which D-XX are "now" vs "later". Create CONTEXT.md for each sub-phase with the selected decisions.
**If "Proceed":** Return to planner with instruction to attempt all items at full fidelity, accepting more plans/tasks.
**If "Prioritize":** Use AskUserQuestion (multiSelect) to let user pick which items are "now" vs "later". Create CONTEXT.md for each sub-phase with the selected items.

## 9c. Handle Source Audit Gaps

When the planner returns `## ⚠ Source Audit: Unplanned Items Found`, it means items from REQUIREMENTS.md, RESEARCH.md, ROADMAP goal, or CONTEXT.md decisions have no corresponding plan.

**Extract from planner return:**
- Each unplanned item with its source artifact and section
- The planner's suggested options (A: add plan, B: split phase, C: defer with confirmation)

**Present each gap to user.** For each unplanned item:

```
## ⚠ Unplanned: {item description}

Source: {RESEARCH.md / REQUIREMENTS.md / ROADMAP goal / CONTEXT.md}
Details: {why the planner flagged this}

Options:
1. Add a plan to cover this item (recommended)
2. Split phase — move to a sub-phase with related items
3. Defer — add to backlog (developer confirms this is intentional)
```

Use AskUserQuestion for each gap (or batch if multiple gaps).

**If "Add plan":** Return to planner (step 8) with instruction to add plans covering the missing items, preserving existing plans.
**If "Split":** Use `/gsd-insert-phase` for overflow items, then replan.
**If "Defer":** Record in CONTEXT.md `## Deferred Ideas` with developer's confirmation. Proceed to step 10.

## 10. Spawn gsd-plan-checker Agent

@@ -901,6 +930,77 @@ Display: `Max iterations reached. {N} issues remain:` + issue list

Offer: 1) Force proceed, 2) Provide guidance and retry, 3) Abandon

## 12.5. Plan Bounce (Optional External Refinement)

**Skip if:** `--skip-bounce` flag, `--gaps` flag, or bounce is not activated.

**Activation:** Bounce runs when `--bounce` flag is present OR `workflow.plan_bounce` config is `true`. The `--skip-bounce` flag always wins (disables bounce even if config enables it). The `--gaps` flag also disables bounce (gap-closure mode should not modify plans externally).

**Prerequisites:** `workflow.plan_bounce_script` must be set to a valid script path. If bounce is activated but no script is configured, display warning and skip:
```
⚠ Plan bounce activated but no script configured.
Set workflow.plan_bounce_script to the path of your refinement script.
Skipping bounce step.
```

**Read pass count:**
```bash
BOUNCE_PASSES=$(node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.plan_bounce_passes --default 2)
BOUNCE_SCRIPT=$(node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.plan_bounce_script)
```

Display banner:
```
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► BOUNCING PLANS (External Refinement)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

Script: ${BOUNCE_SCRIPT}
Max passes: ${BOUNCE_PASSES}
```

**For each PLAN.md file in the phase directory:**

1. **Backup:** Copy `*-PLAN.md` to `*-PLAN.pre-bounce.md`
```bash
cp "${PLAN_FILE}" "${PLAN_FILE%.md}.pre-bounce.md"
```

2. **Invoke bounce script:**
```bash
"${BOUNCE_SCRIPT}" "${PLAN_FILE}" "${BOUNCE_PASSES}"
```

3. **Validate bounced plan — YAML frontmatter integrity:**
After the script returns, check that the bounced file still has valid YAML frontmatter (opening and closing `---` delimiters with parseable content between them). If the bounced plan breaks YAML frontmatter validation, restore the original from the pre-bounce.md backup and continue to the next plan:
```
⚠ Bounced plan ${PLAN_FILE} has broken YAML frontmatter — restoring original from pre-bounce backup.
```

4. **Handle script failure:** If the bounce script exits non-zero, restore the original plan from the pre-bounce.md backup and continue to the next plan:
```
⚠ Bounce script failed for ${PLAN_FILE} (exit code ${EXIT_CODE}) — restoring original from pre-bounce backup.
```
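
The frontmatter check in step 3 does not need a full YAML parser. A sketch (the actual gsd-tools validation may be stricter):

```bash
# Sketch: a plan passes if line 1 is '---' and a closing '---' follows later.
PLAN_FILE="demo-PLAN.md"
printf -- '---\nphase: 1\n---\n# Plan body\n' > "$PLAN_FILE"   # demo file
if [ "$(head -n1 "$PLAN_FILE")" = "---" ] && tail -n +2 "$PLAN_FILE" | grep -qx -- '---'; then
  echo "frontmatter ok"
else
  echo "restore ${PLAN_FILE%.md}.pre-bounce.md over $PLAN_FILE"
fi
```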

**After all plans are bounced:**

5. **Re-run plan checker on bounced plans:** Spawn gsd-plan-checker (same as step 10) on all modified plans. If a bounced plan fails the checker, restore original from its pre-bounce.md backup:
```
⚠ Bounced plan ${PLAN_FILE} failed checker validation — restoring original from pre-bounce backup.
```

6. **Commit surviving bounced plans:** If at least one plan survived both the frontmatter validation and the checker re-run, commit the changes:
```bash
node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" commit "refactor(${padded_phase}): bounce plans through external refinement" --files "${PHASE_DIR}/*-PLAN.md"
```

Display summary:
```
Plan bounce complete: {survived}/{total} plans refined
```

**Clean up:** Remove all `*-PLAN.pre-bounce.md` backup files after the bounce step completes (whether plans survived or were restored).
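
The cleanup is a single glob removal matching the backup naming from step 1:

```bash
# Sketch: drop every pre-bounce backup left in the phase directory.
PHASE_DIR=".planning/phases/07"
mkdir -p "$PHASE_DIR"
touch "$PHASE_DIR/07-1-PLAN.pre-bounce.md"   # demo backup
rm -f "$PHASE_DIR"/*-PLAN.pre-bounce.md
```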

## 13. Requirements Coverage Gate

After plans pass the checker (or checker is skipped), verify that all phase requirements are covered by at least one plan.
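
A sketch of such a gate, assuming requirement IDs (e.g. `REQ-01`) appear verbatim in plan files:

```bash
# Sketch: report requirement IDs that no plan in the phase mentions.
PHASE_DIR="phase07-demo"
mkdir -p "$PHASE_DIR"
printf 'REQ-01\nREQ-02\n' > "$PHASE_DIR/reqs.txt"                # demo requirement list
printf 'Covers REQ-01 in task 2\n' > "$PHASE_DIR/07-1-PLAN.md"   # demo plan
while read -r req; do
  grep -q "$req" "$PHASE_DIR"/*-PLAN.md || echo "uncovered: $req"
done < "$PHASE_DIR/reqs.txt"
```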

@@ -43,7 +43,7 @@ Cannot remove workspace "$WORKSPACE_NAME" — the following repos have uncommitt
- repo2

Commit or stash changes in these repos before removing the workspace:
  cd $WORKSPACE_PATH/repo1
  cd "$WORKSPACE_PATH/repo1"
  git stash   # or git commit
```

@@ -20,6 +20,8 @@ command -v claude >/dev/null 2>&1 && echo "claude:available" || echo "claude:mis
command -v codex >/dev/null 2>&1 && echo "codex:available" || echo "codex:missing"
command -v coderabbit >/dev/null 2>&1 && echo "coderabbit:available" || echo "coderabbit:missing"
command -v opencode >/dev/null 2>&1 && echo "opencode:available" || echo "opencode:missing"
command -v qwen >/dev/null 2>&1 && echo "qwen:available" || echo "qwen:missing"
command -v cursor >/dev/null 2>&1 && echo "cursor:available" || echo "cursor:missing"
```

Parse flags from `$ARGUMENTS`:
@@ -28,6 +30,8 @@ Parse flags from `$ARGUMENTS`:
- `--codex` → include Codex
- `--coderabbit` → include CodeRabbit
- `--opencode` → include OpenCode
- `--qwen` → include Qwen Code
- `--cursor` → include Cursor
- `--all` → include all available
- No flags → include all available

@@ -38,6 +42,8 @@ No external AI CLIs found. Install at least one:
- codex: https://github.com/openai/codex
- claude: https://github.com/anthropics/claude-code
- opencode: https://opencode.ai (leverages GitHub Copilot subscription models)
- qwen: https://github.com/nicepkg/qwen-code (Alibaba Qwen models)
- cursor: https://cursor.com (Cursor IDE agent mode)

Then run /gsd-review again.
```
@@ -50,6 +56,9 @@ Determine which CLI to skip based on the current runtime environment:
if [ "$ANTIGRAVITY_AGENT" = "1" ]; then
  # Antigravity is a separate client — all CLIs are external, skip none
  SELF_CLI="none"
elif [ -n "$CURSOR_SESSION_ID" ]; then
  # Running inside Cursor agent — skip cursor for independence
  SELF_CLI="cursor"
elif [ -n "$CLAUDE_CODE_ENTRYPOINT" ]; then
  # Running inside Claude Code CLI — skip claude for independence
  SELF_CLI="claude"
@@ -197,6 +206,22 @@ if [ ! -s /tmp/gsd-review-opencode-{phase}.md ]; then
fi
```

**Qwen Code:**
```bash
qwen "$(cat /tmp/gsd-review-prompt-{phase}.md)" 2>/dev/null > /tmp/gsd-review-qwen-{phase}.md
if [ ! -s /tmp/gsd-review-qwen-{phase}.md ]; then
  echo "Qwen review failed or returned empty output." > /tmp/gsd-review-qwen-{phase}.md
fi
```

**Cursor:**
```bash
cat /tmp/gsd-review-prompt-{phase}.md | cursor agent -p --mode ask --trust 2>/dev/null > /tmp/gsd-review-cursor-{phase}.md
if [ ! -s /tmp/gsd-review-cursor-{phase}.md ]; then
  echo "Cursor review failed or returned empty output." > /tmp/gsd-review-cursor-{phase}.md
fi
```

If a CLI fails, log the error and continue with remaining CLIs.

Display progress:
@@ -216,7 +241,7 @@ Combine all review responses into `{phase_dir}/{padded_phase}-REVIEWS.md`:
```markdown
---
phase: {N}
reviewers: [gemini, claude, codex, coderabbit, opencode]
reviewers: [gemini, claude, codex, coderabbit, opencode, qwen, cursor]
reviewed_at: {ISO timestamp}
plans_reviewed: [{list of PLAN.md files}]
---
@@ -253,6 +278,18 @@ plans_reviewed: [{list of PLAN.md files}]

---

## Qwen Review

{qwen review content}

---

## Cursor Review

{cursor review content}

---

## Consensus Summary

{synthesize common concerns across all reviewers}

@@ -159,6 +159,68 @@ Report: "PR #{number} created: {url}"
</step>

<step name="optional_review">

**External code review command (automated sub-step):**

Before prompting the user, check if an external review command is configured:

```bash
REVIEW_CMD=$(node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" config-get workflow.code_review_command --default "" 2>/dev/null)
```

If `REVIEW_CMD` is non-empty and not `"null"`, run the external review:

1. **Generate diff and stats:**
```bash
DIFF=$(git diff ${BASE_BRANCH}...HEAD)
DIFF_STATS=$(git diff --stat ${BASE_BRANCH}...HEAD)
```

2. **Load phase context from STATE.md:**
```bash
STATE_STATUS=$(node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" state load 2>/dev/null | head -20)
```

3. **Build review prompt and pipe to command via stdin:**
Construct a review prompt containing the diff, diff stats, and phase context, then pipe it to the configured command:
```bash
REVIEW_PROMPT="You are reviewing a pull request.\n\nDiff stats:\n${DIFF_STATS}\n\nPhase context:\n${STATE_STATUS}\n\nFull diff:\n${DIFF}\n\nRespond with JSON: { \"verdict\": \"APPROVED\" or \"REVISE\", \"confidence\": 0-100, \"summary\": \"...\", \"issues\": [{\"severity\": \"...\", \"file\": \"...\", \"line_range\": \"...\", \"description\": \"...\", \"suggestion\": \"...\"}] }"
REVIEW_OUTPUT=$(printf '%b' "${REVIEW_PROMPT}" | timeout 120 ${REVIEW_CMD} 2>/tmp/gsd-review-stderr.log)   # printf '%b' expands the \n escapes; plain echo would not
REVIEW_EXIT=$?
```

4. **Handle timeout (120s) and failure:**
If `REVIEW_EXIT` is non-zero or the command times out:
```bash
if [ $REVIEW_EXIT -ne 0 ]; then
  REVIEW_STDERR=$(cat /tmp/gsd-review-stderr.log 2>/dev/null)
  echo "WARNING: External review command failed (exit ${REVIEW_EXIT}). stderr: ${REVIEW_STDERR}"
  echo "Continuing with manual review flow..."
fi
```
On failure, warn with stderr output and fall through to the manual review flow below.

5. **Parse JSON result:**
If the command succeeded, parse the JSON output and report the verdict:
```bash
# Parse verdict and summary from REVIEW_OUTPUT JSON
VERDICT=$(echo "${REVIEW_OUTPUT}" | node -e "
let d=''; process.stdin.on('data',c=>d+=c); process.stdin.on('end',()=>{
  try { const r=JSON.parse(d); console.log(r.verdict); }
  catch(e) { console.log('INVALID_JSON'); }
});
")
```
- If `verdict` is `"APPROVED"`: report approval with confidence and summary.
- If `verdict` is `"REVISE"`: report issues found, list each issue with severity, file, line_range, description, and suggestion.
- If JSON is invalid (`INVALID_JSON`): warn "External review returned invalid JSON" with stderr and continue.

Regardless of the external review result, fall through to the manual review options below.

---

**Manual review options:**

Ask if user wants to trigger a code review:

@@ -1,20 +1,120 @@
#!/usr/bin/env node
// gsd-hook-version: {{GSD_VERSION}}
// Claude Code Statusline - GSD Edition
// Shows: model | current task | directory | context usage
// Shows: model | current task (or GSD state) | directory | context usage

const fs = require('fs');
const path = require('path');
const os = require('os');

// Read JSON from stdin
let input = '';
// Timeout guard: if stdin doesn't close within 3s (e.g. pipe issues on
// Windows/Git Bash), exit silently instead of hanging. See #775.
const stdinTimeout = setTimeout(() => process.exit(0), 3000);
process.stdin.setEncoding('utf8');
process.stdin.on('data', chunk => input += chunk);
process.stdin.on('end', () => {
// --- GSD state reader -------------------------------------------------------

/**
 * Walk up from dir looking for .planning/STATE.md.
 * Returns parsed state object or null.
 */
function readGsdState(dir) {
  const home = os.homedir();
  let current = dir;
  for (let i = 0; i < 10; i++) {
    const candidate = path.join(current, '.planning', 'STATE.md');
    if (fs.existsSync(candidate)) {
      try {
        return parseStateMd(fs.readFileSync(candidate, 'utf8'));
      } catch (e) {
        return null;
      }
    }
    const parent = path.dirname(current);
    if (parent === current || current === home) break;
    current = parent;
  }
  return null;
}

/**
 * Parse STATE.md frontmatter + Phase line from body.
 * Returns { status, milestone, milestoneName, phaseNum, phaseTotal, phaseName }
 */
function parseStateMd(content) {
  const state = {};

  // YAML frontmatter between --- markers
  const fmMatch = content.match(/^---\n([\s\S]*?)\n---/);
  if (fmMatch) {
    for (const line of fmMatch[1].split('\n')) {
      const m = line.match(/^(\w+):\s*(.+)/);
      if (!m) continue;
      const [, key, val] = m;
      const v = val.trim().replace(/^["']|["']$/g, '');
      if (key === 'status') state.status = v === 'null' ? null : v;
      if (key === 'milestone') state.milestone = v === 'null' ? null : v;
      if (key === 'milestone_name') state.milestoneName = v === 'null' ? null : v;
    }
  }

  // Phase: N of M (name) or Phase: none active (...)
  const phaseMatch = content.match(/^Phase:\s*(\d+)\s+of\s+(\d+)(?:\s+\(([^)]+)\))?/m);
  if (phaseMatch) {
    state.phaseNum = phaseMatch[1];
    state.phaseTotal = phaseMatch[2];
    state.phaseName = phaseMatch[3] || null;
  }

  // Fallback: parse Status: from body when frontmatter is absent
  if (!state.status) {
    const bodyStatus = content.match(/^Status:\s*(.+)/m);
    if (bodyStatus) {
      const raw = bodyStatus[1].trim().toLowerCase();
      if (raw.includes('ready to plan') || raw.includes('planning')) state.status = 'planning';
      else if (raw.includes('execut')) state.status = 'executing';
      else if (raw.includes('complet') || raw.includes('archived')) state.status = 'complete';
    }
  }

  return state;
}

/**
 * Format GSD state into display string.
 * Format: "v1.9 Code Quality · executing · fix-graphiti-deployment (1/5)"
 * Gracefully degrades when parts are missing.
 */
function formatGsdState(s) {
  const parts = [];

  // Milestone: version + name (skip placeholder "milestone")
  if (s.milestone || s.milestoneName) {
    const ver = s.milestone || '';
    const name = (s.milestoneName && s.milestoneName !== 'milestone') ? s.milestoneName : '';
    const ms = [ver, name].filter(Boolean).join(' ');
    if (ms) parts.push(ms);
  }

  // Status
  if (s.status) parts.push(s.status);

  // Phase
  if (s.phaseNum && s.phaseTotal) {
    const phase = s.phaseName
      ? `${s.phaseName} (${s.phaseNum}/${s.phaseTotal})`
      : `ph ${s.phaseNum}/${s.phaseTotal}`;
    parts.push(phase);
  }

  return parts.join(' · ');
}

// --- stdin ------------------------------------------------------------------

function runStatusline() {
  let input = '';
  // Timeout guard: if stdin doesn't close within 3s (e.g. pipe issues on
  // Windows/Git Bash), exit silently instead of hanging. See #775.
  const stdinTimeout = setTimeout(() => process.exit(0), 3000);
  process.stdin.setEncoding('utf8');
  process.stdin.on('data', chunk => input += chunk);
  process.stdin.on('end', () => {
    clearTimeout(stdinTimeout);
    try {
      const data = JSON.parse(input);
@@ -94,6 +194,9 @@ process.stdin.on('end', () => {
      }
    }

    // GSD state (milestone · status · phase) — shown when no todo task
    const gsdStateStr = task ? '' : formatGsdState(readGsdState(dir) || {});

    // GSD update available?
    // Check shared cache first (#1421), fall back to runtime-specific cache for
    // backward compatibility with older gsd-check-update.js versions.
@@ -115,8 +218,14 @@ process.stdin.on('end', () => {

    // Output
    const dirname = path.basename(dir);
    if (task) {
      process.stdout.write(`${gsdUpdate}\x1b[2m${model}\x1b[0m │ \x1b[1m${task}\x1b[0m │ \x1b[2m${dirname}\x1b[0m${ctx}`);
    const middle = task
      ? `\x1b[1m${task}\x1b[0m`
      : gsdStateStr
        ? `\x1b[2m${gsdStateStr}\x1b[0m`
        : null;

    if (middle) {
      process.stdout.write(`${gsdUpdate}\x1b[2m${model}\x1b[0m │ ${middle} │ \x1b[2m${dirname}\x1b[0m${ctx}`);
    } else {
      process.stdout.write(`${gsdUpdate}\x1b[2m${model}\x1b[0m │ \x1b[2m${dirname}\x1b[0m${ctx}`);
    }
@@ -124,3 +233,9 @@ process.stdin.on('end', () => {
      // Silent fail - don't break statusline on parse errors
    }
  });
}

// Export helpers for unit tests. Harmless when run as a script.
module.exports = { readGsdState, parseStateMd, formatGsdState };

if (require.main === module) runStatusline();

@@ -15,6 +15,7 @@ import { GSD } from './index.js';
import { CLITransport } from './cli-transport.js';
import { WSTransport } from './ws-transport.js';
import { InitRunner } from './init-runner.js';
import { validateWorkstreamName } from './workstream-utils.js';

// ─── Parsed CLI args ─────────────────────────────────────────────────────────

@@ -29,6 +30,8 @@ export interface ParsedCliArgs {
wsPort: number | undefined;
model: string | undefined;
maxBudget: number | undefined;
/** Workstream name for multi-workstream projects. Routes .planning/ to .planning/workstreams/<name>/. */
ws: string | undefined;
help: boolean;
version: boolean;
}
@@ -43,6 +46,7 @@ export function parseCliArgs(argv: string[]): ParsedCliArgs {
options: {
'project-dir': { type: 'string', default: process.cwd() },
'ws-port': { type: 'string' },
ws: { type: 'string' },
model: { type: 'string' },
'max-budget': { type: 'string' },
init: { type: 'string' },
@@ -69,6 +73,7 @@ export function parseCliArgs(argv: string[]): ParsedCliArgs {
wsPort: values['ws-port'] ? Number(values['ws-port']) : undefined,
model: values.model as string | undefined,
maxBudget: values['max-budget'] ? Number(values['max-budget']) : undefined,
ws: values.ws as string | undefined,
help: values.help as boolean,
version: values.version as boolean,
};
@@ -92,6 +97,7 @@ Options:
--init <input>       Bootstrap from a PRD before running (auto only)
                     Accepts @path/to/prd.md or "description text"
--project-dir <dir>  Project directory (default: cwd)
--ws <name>          Route .planning/ to .planning/workstreams/<name>/
--ws-port <port>     Enable WebSocket transport on <port>
--model <model>      Override LLM model
--max-budget <n>     Max budget per step in USD
@@ -194,6 +200,13 @@ export async function main(argv: string[] = process.argv.slice(2)): Promise<void
return;
}

// Validate --ws flag if provided
if (args.ws !== undefined && !validateWorkstreamName(args.ws)) {
console.error(`Error: Invalid workstream name "${args.ws}". Use alphanumeric, hyphens, underscores, or dots only.`);
process.exitCode = 1;
return;
}

if (args.command !== 'run' && args.command !== 'init' && args.command !== 'auto') {
console.error('Error: Expected "gsd-sdk run <prompt>", "gsd-sdk auto", or "gsd-sdk init [input]"');
console.error(USAGE);
@@ -226,6 +239,7 @@ export async function main(argv: string[] = process.argv.slice(2)): Promise<void
projectDir: args.projectDir,
model: args.model,
maxBudgetUsd: args.maxBudget,
workstream: args.ws,
});

// Wire CLI transport
@@ -296,6 +310,7 @@ export async function main(argv: string[] = process.argv.slice(2)): Promise<void
model: args.model,
maxBudgetUsd: args.maxBudget,
autoMode: true,
workstream: args.ws,
});

// Wire CLI transport (always active)
@@ -384,6 +399,7 @@ export async function main(argv: string[] = process.argv.slice(2)): Promise<void
projectDir: args.projectDir,
model: args.model,
maxBudgetUsd: args.maxBudget,
workstream: args.ws,
});

// Wire CLI transport (always active)
@@ -7,6 +7,7 @@

import { readFile } from 'node:fs/promises';
import { join } from 'node:path';
import { relPlanningPath } from './workstream-utils.js';

// ─── Types ───────────────────────────────────────────────────────────────────

@@ -99,15 +100,25 @@ export const CONFIG_DEFAULTS: GSDConfig = {
 * Returns full defaults when file is missing or empty.
 * Throws on malformed JSON with a helpful error message.
 */
export async function loadConfig(projectDir: string): Promise<GSDConfig> {
const configPath = join(projectDir, '.planning', 'config.json');
export async function loadConfig(projectDir: string, workstream?: string): Promise<GSDConfig> {
const configPath = join(projectDir, relPlanningPath(workstream), 'config.json');
const rootConfigPath = join(projectDir, '.planning', 'config.json');

let raw: string;
try {
raw = await readFile(configPath, 'utf-8');
} catch {
// File missing — normal for new projects
return structuredClone(CONFIG_DEFAULTS);
// If workstream config missing, fall back to root config
if (workstream) {
try {
raw = await readFile(rootConfigPath, 'utf-8');
} catch {
return structuredClone(CONFIG_DEFAULTS);
}
} else {
// File missing — normal for new projects
return structuredClone(CONFIG_DEFAULTS);
}
}

const trimmed = raw.trim();
@@ -25,6 +25,7 @@ import {
DEFAULT_TRUNCATION_OPTIONS,
type TruncationOptions,
} from './context-truncation.js';
import { relPlanningPath } from './workstream-utils.js';

// ─── File manifest per phase ─────────────────────────────────────────────────

@@ -77,8 +78,8 @@ export class ContextEngine {
private readonly logger?: GSDLogger;
private readonly truncation: TruncationOptions;

constructor(projectDir: string, logger?: GSDLogger, truncation?: Partial<TruncationOptions>) {
this.planningDir = join(projectDir, '.planning');
constructor(projectDir: string, logger?: GSDLogger, truncation?: Partial<TruncationOptions>, workstream?: string) {
this.planningDir = join(projectDir, relPlanningPath(workstream));
this.logger = logger;
this.truncation = { ...DEFAULT_TRUNCATION_OPTIONS, ...truncation };
}
@@ -39,16 +39,19 @@ export class GSDTools {
private readonly projectDir: string;
private readonly gsdToolsPath: string;
private readonly timeoutMs: number;
private readonly workstream?: string;

constructor(opts: {
projectDir: string;
gsdToolsPath?: string;
timeoutMs?: number;
workstream?: string;
}) {
this.projectDir = opts.projectDir;
this.gsdToolsPath =
opts.gsdToolsPath ?? resolveGsdToolsPath(opts.projectDir);
this.timeoutMs = opts.timeoutMs ?? DEFAULT_TIMEOUT_MS;
this.workstream = opts.workstream;
}

// ─── Core exec ───────────────────────────────────────────────────────────
@@ -58,7 +61,8 @@ export class GSDTools {
 * Handles the `@file:` prefix pattern for large results.
 */
async exec(command: string, args: string[] = []): Promise<unknown> {
const fullArgs = [this.gsdToolsPath, command, ...args];
const wsArgs = this.workstream ? ['--ws', this.workstream] : [];
const fullArgs = [this.gsdToolsPath, command, ...args, ...wsArgs];

return new Promise<unknown>((resolve, reject) => {
const child = execFile(
@@ -160,7 +164,8 @@ export class GSDTools {
 * Use for commands like `config-set` that return plain text, not JSON.
 */
async execRaw(command: string, args: string[] = []): Promise<string> {
const fullArgs = [this.gsdToolsPath, command, ...args, '--raw'];
const wsArgs = this.workstream ? ['--ws', this.workstream] : [];
const fullArgs = [this.gsdToolsPath, command, ...args, ...wsArgs, '--raw'];

return new Promise<string>((resolve, reject) => {
const child = execFile(
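The argv assembly in these two hunks reduces to one rule: the optional `--ws <name>` pair is spliced in after the caller's args, and `--raw` (execRaw only) stays last. A tiny sketch of that ordering (the `buildArgs` name is hypothetical, introduced here for illustration only):

```javascript
// Sketch of the argv ordering in exec()/execRaw() above: the optional --ws
// pair follows the caller's args, and --raw (execRaw only) comes last.
// buildArgs is a hypothetical helper, not part of the SDK.
function buildArgs(gsdToolsPath, command, args, workstream, raw = false) {
  const wsArgs = workstream ? ['--ws', workstream] : [];
  const rawArgs = raw ? ['--raw'] : [];
  return [gsdToolsPath, command, ...args, ...wsArgs, ...rawArgs];
}

module.exports = { buildArgs };
```

Keeping `--raw` after the workstream pair means `gsd-tools.cjs` sees the same trailing flag position whether or not a workstream is active.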
@@ -44,6 +44,7 @@ export class GSD {
private readonly defaultMaxBudgetUsd: number;
private readonly defaultMaxTurns: number;
private readonly autoMode: boolean;
private readonly workstream?: string;
readonly eventStream: GSDEventStream;

constructor(options: GSDOptions) {
@@ -54,6 +55,7 @@ export class GSD {
this.defaultMaxBudgetUsd = options.maxBudgetUsd ?? 5.0;
this.defaultMaxTurns = options.maxTurns ?? 50;
this.autoMode = options.autoMode ?? false;
this.workstream = options.workstream;
this.eventStream = new GSDEventStream();
}

@@ -75,7 +77,7 @@ export class GSD {
const plan = await parsePlanFile(absolutePlanPath);

// Load project config
const config = await loadConfig(this.projectDir);
const config = await loadConfig(this.projectDir, this.workstream);

// Try to load agent definition for tool restrictions
const agentDef = await this.loadAgentDefinition();
@@ -117,6 +119,7 @@ export class GSD {
return new GSDTools({
projectDir: this.projectDir,
gsdToolsPath: this.gsdToolsPath,
workstream: this.workstream,
});
}

@@ -133,8 +136,8 @@ export class GSD {
async runPhase(phaseNumber: string, options?: PhaseRunnerOptions): Promise<PhaseRunnerResult> {
const tools = this.createTools();
const promptFactory = new PromptFactory();
const contextEngine = new ContextEngine(this.projectDir);
const config = await loadConfig(this.projectDir);
const contextEngine = new ContextEngine(this.projectDir, undefined, undefined, this.workstream);
const config = await loadConfig(this.projectDir, this.workstream);

// Auto mode: force auto_advance on and skip_discuss off so self-discuss kicks in
if (this.autoMode) {
@@ -314,6 +317,9 @@ export { CLITransport } from './cli-transport.js';
export { WSTransport } from './ws-transport.js';
export type { WSTransportOptions } from './ws-transport.js';

// Workstream utilities
export { validateWorkstreamName, relPlanningPath } from './workstream-utils.js';

// Init workflow
export { InitRunner } from './init-runner.js';
export type { InitRunnerDeps } from './init-runner.js';

@@ -207,6 +207,8 @@ export interface GSDOptions {
maxTurns?: number;
/** Enable auto mode: sets auto_advance=true, skip_discuss=false in workflow config. */
autoMode?: boolean;
/** Workstream name. Routes all .planning/ paths to .planning/workstreams/<name>/. */
workstream?: string;
}

// ─── S02: Event stream types ─────────────────────────────────────────────────
sdk/src/workstream-utils.ts (new file, 32 lines)
@@ -0,0 +1,32 @@
/**
 * Workstream utility functions for multi-workstream project support.
 *
 * When --ws <name> is provided, all .planning/ paths are routed to
 * .planning/workstreams/<name>/ instead.
 */

import { join } from 'node:path';

/**
 * Validate a workstream name.
 * Allowed: alphanumeric, hyphens, underscores, dots.
 * Disallowed: empty, spaces, slashes, special chars, path traversal.
 */
export function validateWorkstreamName(name: string): boolean {
  if (!name || name.length === 0) return false;
  // Only allow alphanumeric, hyphens, underscores, dots
  // Must not be ".." or start with ".." (path traversal)
  if (name === '..' || name.startsWith('../')) return false;
  return /^[a-zA-Z0-9][a-zA-Z0-9._-]*$/.test(name);
}

/**
 * Return the relative planning directory path.
 *
 * - Without workstream: `.planning`
 * - With workstream: `.planning/workstreams/<name>`
 */
export function relPlanningPath(workstream?: string): string {
  if (!workstream) return '.planning';
  return join('.planning', 'workstreams', workstream);
}
sdk/src/ws-flag.test.ts (new file, 285 lines)
@@ -0,0 +1,285 @@
/**
 * Tests for --ws (workstream) flag support.
 *
 * Validates:
 * - CLI parsing of --ws flag
 * - Workstream name validation
 * - GSDOptions.workstream propagation
 * - GSDTools workstream-aware invocation
 * - Config path resolution with workstream
 * - ContextEngine workstream-aware planning dir
 */

import { describe, it, expect, beforeEach, afterEach } from 'vitest';
import { mkdir, writeFile, rm } from 'node:fs/promises';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

// ─── Workstream name validation ─────────────────────────────────────────────

import { validateWorkstreamName } from './workstream-utils.js';

describe('validateWorkstreamName', () => {
  it('accepts alphanumeric names', () => {
    expect(validateWorkstreamName('frontend')).toBe(true);
    expect(validateWorkstreamName('backend2')).toBe(true);
  });

  it('accepts names with hyphens', () => {
    expect(validateWorkstreamName('my-feature')).toBe(true);
  });

  it('accepts names with underscores', () => {
    expect(validateWorkstreamName('my_feature')).toBe(true);
  });

  it('accepts names with dots', () => {
    expect(validateWorkstreamName('v1.0')).toBe(true);
  });

  it('rejects empty strings', () => {
    expect(validateWorkstreamName('')).toBe(false);
  });

  it('rejects names with spaces', () => {
    expect(validateWorkstreamName('my feature')).toBe(false);
  });

  it('rejects names with slashes', () => {
    expect(validateWorkstreamName('my/feature')).toBe(false);
  });

  it('rejects names with special characters', () => {
    expect(validateWorkstreamName('feat@ure')).toBe(false);
    expect(validateWorkstreamName('feat!ure')).toBe(false);
    expect(validateWorkstreamName('feat#ure')).toBe(false);
  });

  it('rejects path traversal attempts', () => {
    expect(validateWorkstreamName('..')).toBe(false);
    expect(validateWorkstreamName('../etc')).toBe(false);
  });
});

// ─── relPlanningPath helper ─────────────────────────────────────────────────

import { relPlanningPath } from './workstream-utils.js';

describe('relPlanningPath', () => {
  it('returns .planning/ in flat mode (no workstream)', () => {
    expect(relPlanningPath()).toBe('.planning');
    expect(relPlanningPath(undefined)).toBe('.planning');
  });

  it('returns .planning/workstreams/<name>/ with workstream', () => {
    expect(relPlanningPath('frontend')).toBe('.planning/workstreams/frontend');
    expect(relPlanningPath('api-v2')).toBe('.planning/workstreams/api-v2');
  });
});

// ─── CLI --ws flag parsing ──────────────────────────────────────────────────

import { parseCliArgs } from './cli.js';

describe('parseCliArgs --ws flag', () => {
  it('parses --ws flag', () => {
    const result = parseCliArgs(['run', 'build auth', '--ws', 'frontend']);

    expect(result.ws).toBe('frontend');
  });

  it('ws is undefined when not provided', () => {
    const result = parseCliArgs(['run', 'build auth']);

    expect(result.ws).toBeUndefined();
  });

  it('works with other flags', () => {
    const result = parseCliArgs([
      'run', 'build auth',
      '--ws', 'backend',
      '--model', 'claude-sonnet-4-6',
      '--project-dir', '/tmp/test',
    ]);

    expect(result.ws).toBe('backend');
    expect(result.model).toBe('claude-sonnet-4-6');
    expect(result.projectDir).toBe('/tmp/test');
  });
});

// ─── GSDOptions.workstream ──────────────────────────────────────────────────

describe('GSDOptions.workstream', () => {
  it('GSD class accepts workstream option', async () => {
    // This is a compile-time check -- if the type is wrong, TS will fail
    const { GSD } = await import('./index.js');
    const gsd = new GSD({
      projectDir: '/tmp/test-ws',
      workstream: 'frontend',
    });
    // If we get here without a type error, the option is accepted
    expect(gsd).toBeDefined();
  });
});

// ─── GSDTools workstream injection ──────────────────────────────────────────

describe('GSDTools workstream injection', () => {
  let tmpDir: string;
  let fixtureDir: string;

  beforeEach(async () => {
    tmpDir = join(tmpdir(), `gsd-ws-test-${Date.now()}-${Math.random().toString(36).slice(2)}`);
    fixtureDir = join(tmpDir, 'fixtures');
    await mkdir(fixtureDir, { recursive: true });
    await mkdir(join(tmpDir, '.planning'), { recursive: true });
  });

  afterEach(async () => {
    await rm(tmpDir, { recursive: true, force: true });
  });

  async function createScript(name: string, code: string): Promise<string> {
    const scriptPath = join(fixtureDir, name);
    await writeFile(scriptPath, code, { mode: 0o755 });
    return scriptPath;
  }

  it('passes --ws flag to gsd-tools.cjs when workstream is set', async () => {
    const { GSDTools } = await import('./gsd-tools.js');

    // Script echoes its arguments as JSON
    const scriptPath = await createScript(
      'echo-args.cjs',
      'process.stdout.write(JSON.stringify(process.argv.slice(2)));',
    );

    const tools = new GSDTools({
      projectDir: tmpDir,
      gsdToolsPath: scriptPath,
      workstream: 'frontend',
    });

    const result = await tools.exec('state', ['load']) as string[];

    // Should contain --ws frontend in the arguments
    expect(result).toContain('--ws');
    expect(result).toContain('frontend');
  });

  it('does not pass --ws when workstream is undefined', async () => {
    const { GSDTools } = await import('./gsd-tools.js');

    const scriptPath = await createScript(
      'echo-args-no-ws.cjs',
      'process.stdout.write(JSON.stringify(process.argv.slice(2)));',
    );

    const tools = new GSDTools({
      projectDir: tmpDir,
      gsdToolsPath: scriptPath,
    });

    const result = await tools.exec('state', ['load']) as string[];

    expect(result).not.toContain('--ws');
  });
});

// ─── Config workstream-aware path ───────────────────────────────────────────

import { loadConfig } from './config.js';

describe('loadConfig with workstream', () => {
  let tmpDir: string;

  beforeEach(async () => {
    tmpDir = join(tmpdir(), `gsd-config-ws-${Date.now()}-${Math.random().toString(36).slice(2)}`);
    await mkdir(tmpDir, { recursive: true });
  });

  afterEach(async () => {
    await rm(tmpDir, { recursive: true, force: true });
  });

  it('loads config from workstream path when workstream is provided', async () => {
    const wsDir = join(tmpDir, '.planning', 'workstreams', 'frontend');
    await mkdir(wsDir, { recursive: true });
    await writeFile(
      join(wsDir, 'config.json'),
      JSON.stringify({ model_profile: 'performance' }),
    );

    const config = await loadConfig(tmpDir, 'frontend');

    expect(config.model_profile).toBe('performance');
  });

  it('falls back to root config when workstream config is missing', async () => {
    // Create root config but no workstream config
    await mkdir(join(tmpDir, '.planning'), { recursive: true });
    await writeFile(
      join(tmpDir, '.planning', 'config.json'),
      JSON.stringify({ model_profile: 'balanced' }),
    );

    const config = await loadConfig(tmpDir, 'frontend');

    expect(config.model_profile).toBe('balanced');
  });

  it('loads from root .planning/ when workstream is undefined', async () => {
    await mkdir(join(tmpDir, '.planning'), { recursive: true });
    await writeFile(
      join(tmpDir, '.planning', 'config.json'),
      JSON.stringify({ model_profile: 'economy' }),
    );

    const config = await loadConfig(tmpDir);

    expect(config.model_profile).toBe('economy');
  });
});

// ─── ContextEngine workstream-aware planning dir ────────────────────────────

describe('ContextEngine with workstream', () => {
  let tmpDir: string;

  beforeEach(async () => {
    tmpDir = join(tmpdir(), `gsd-ctx-ws-${Date.now()}-${Math.random().toString(36).slice(2)}`);
    await mkdir(tmpDir, { recursive: true });
  });

  afterEach(async () => {
    await rm(tmpDir, { recursive: true, force: true });
  });

  it('resolves files from workstream planning dir', async () => {
    const { ContextEngine } = await import('./context-engine.js');
    const { PhaseType } = await import('./types.js');

    const wsDir = join(tmpDir, '.planning', 'workstreams', 'backend');
    await mkdir(wsDir, { recursive: true });
    await writeFile(join(wsDir, 'STATE.md'), '# State\nPhase: 01');

    const engine = new ContextEngine(tmpDir, undefined, undefined, 'backend');
    const files = await engine.resolveContextFiles(PhaseType.Execute);

    expect(files.state).toContain('Phase: 01');
  });

  it('resolves files from root .planning/ without workstream', async () => {
    const { ContextEngine } = await import('./context-engine.js');
    const { PhaseType } = await import('./types.js');

    await mkdir(join(tmpDir, '.planning'), { recursive: true });
    await writeFile(join(tmpDir, '.planning', 'STATE.md'), '# State\nPhase: 02');

    const engine = new ContextEngine(tmpDir);
    const files = await engine.resolveContextFiles(PhaseType.Execute);

    expect(files.state).toContain('Phase: 02');
  });
});
@@ -12,7 +12,7 @@

process.env.GSD_TEST_MODE = '1';

const { describe, test, beforeEach, afterEach } = require('node:test');
const { describe, test, before, beforeEach, afterEach } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');
@@ -20,8 +20,22 @@ const os = require('os');
const { execFileSync } = require('child_process');

const INSTALL_SRC = path.join(__dirname, '..', 'bin', 'install.js');
const BUILD_SCRIPT = path.join(__dirname, '..', 'scripts', 'build-hooks.js');
const { install, copyCommandsAsClaudeSkills } = require(INSTALL_SRC);

// ─── Ensure hooks/dist/ is populated before install tests ────────────────────
// With --test-concurrency=4, other install tests (bug-1834, bug-1924) run
// build-hooks.js concurrently. That script creates hooks/dist/ empty first,
// then copies files — creating a window where this test sees an empty dir and
// install() fails with "directory is empty" → process.exit(1).

before(() => {
  execFileSync(process.execPath, [BUILD_SCRIPT], {
    encoding: 'utf-8',
    stdio: 'pipe',
  });
});

// ─── #1736: local install deploys commands/gsd/ ─────────────────────────────

describe('#1736: local Claude install populates .claude/commands/gsd/', () => {
tests/bug-2075-worktree-deletion-safeguards.test.cjs (new file, 219 lines)
@@ -0,0 +1,219 @@
/**
 * Regression tests for #2075: gsd-executor worktree merge systematically
 * deletes prior-wave committed files.
 *
 * Three failure modes documented in issue #2075:
 *
 * Failure Mode B (PRIMARY — unaddressed before this fix):
 *   Executor agent runs `git clean` inside the worktree, removing files
 *   committed on the feature branch. git clean treats them as "untracked"
 *   from the worktree's perspective and deletes them. The executor then
 *   commits only its own deliverables; the subsequent merge brings the
 *   deletions onto the main branch.
 *
 * Failure Mode A (partially addressed in PR #1982):
 *   Worktree created from wrong branch base. Audit all worktree-spawning
 *   workflows for worktree_branch_check presence.
 *
 * Failure Mode C:
 *   Stale content from wrong base overwrites shared files. Covered by
 *   the --hard reset in the worktree_branch_check.
 *
 * Defense-in-depth (from #1977):
 *   Post-commit deletion check: already in gsd-executor.md (--diff-filter=D).
 *   Pre-merge deletion check: already in execute-phase.md (--diff-filter=D).
 */

'use strict';

const { describe, test } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');

const EXECUTOR_AGENT_PATH = path.join(__dirname, '..', 'agents', 'gsd-executor.md');
const EXECUTE_PHASE_PATH = path.join(__dirname, '..', 'get-shit-done', 'workflows', 'execute-phase.md');
const QUICK_PATH = path.join(__dirname, '..', 'get-shit-done', 'workflows', 'quick.md');
const DIAGNOSE_PATH = path.join(__dirname, '..', 'get-shit-done', 'workflows', 'diagnose-issues.md');

describe('bug-2075: worktree deletion safeguards', () => {

  describe('Failure Mode B: git clean prohibition in executor agent', () => {
    test('gsd-executor.md explicitly prohibits git clean in worktree context', () => {
      const content = fs.readFileSync(EXECUTOR_AGENT_PATH, 'utf-8');

      // Must have an explicit prohibition section mentioning git clean
      const prohibitsGitClean = (
        content.includes('git clean') &&
        (
          /NEVER.*git clean/i.test(content) ||
          /git clean.*NEVER/i.test(content) ||
          /do not.*git clean/i.test(content) ||
          /git clean.*prohibited/i.test(content) ||
          /prohibited.*git clean/i.test(content) ||
          /forbidden.*git clean/i.test(content) ||
          /git clean.*forbidden/i.test(content) ||
          /must not.*git clean/i.test(content) ||
          /git clean.*must not/i.test(content)
        )
      );

      assert.ok(
        prohibitsGitClean,
        'gsd-executor.md must explicitly prohibit git clean — running it inside a worktree deletes files committed on the feature branch (#2075 Failure Mode B)'
      );
    });

    test('gsd-executor.md git clean prohibition explains the worktree data-loss risk', () => {
      const content = fs.readFileSync(EXECUTOR_AGENT_PATH, 'utf-8');

      // The prohibition must be accompanied by a reason — not just a bare rule
      // Look for the word "worktree" near the git clean prohibition
      const gitCleanIdx = content.indexOf('git clean');
      assert.ok(gitCleanIdx > -1, 'gsd-executor.md must mention git clean (to prohibit it)');

      // Extract context around the git clean mention (500 chars either side)
      const contextStart = Math.max(0, gitCleanIdx - 500);
      const contextEnd = Math.min(content.length, gitCleanIdx + 500);
      const context = content.slice(contextStart, contextEnd);

      const hasWorktreeRationale = (
        /worktree/i.test(context) ||
        /delete/i.test(context) ||
        /untracked/i.test(context)
      );

      assert.ok(
        hasWorktreeRationale,
        'The git clean prohibition in gsd-executor.md must explain why: git clean in a worktree deletes files that appear untracked but are committed on the feature branch'
      );
    });
  });

  describe('Failure Mode A: worktree_branch_check audit across all worktree-spawning workflows', () => {
    test('execute-phase.md has worktree_branch_check block with --hard reset', () => {
      const content = fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');

      const blockMatch = content.match(/<worktree_branch_check>([\s\S]*?)<\/worktree_branch_check>/);
      assert.ok(
        blockMatch,
        'execute-phase.md must contain a <worktree_branch_check> block'
      );

      const block = blockMatch[1];
      assert.ok(
        block.includes('reset --hard'),
        'execute-phase.md worktree_branch_check must use git reset --hard (not --soft)'
      );
      assert.ok(
        !block.includes('reset --soft'),
        'execute-phase.md worktree_branch_check must not use git reset --soft'
      );
    });

    test('quick.md has worktree_branch_check block with --hard reset', () => {
      const content = fs.readFileSync(QUICK_PATH, 'utf-8');

      const blockMatch = content.match(/<worktree_branch_check>([\s\S]*?)<\/worktree_branch_check>/);
      assert.ok(
        blockMatch,
        'quick.md must contain a <worktree_branch_check> block'
      );

      const block = blockMatch[1];
      assert.ok(
        block.includes('reset --hard'),
        'quick.md worktree_branch_check must use git reset --hard (not --soft)'
      );
      assert.ok(
        !block.includes('reset --soft'),
        'quick.md worktree_branch_check must not use git reset --soft'
      );
    });

    test('diagnose-issues.md has worktree_branch_check instruction for spawned agents', () => {
      const content = fs.readFileSync(DIAGNOSE_PATH, 'utf-8');

      assert.ok(
        content.includes('worktree_branch_check'),
        'diagnose-issues.md must include worktree_branch_check instruction for spawned debug agents'
      );

      assert.ok(
        content.includes('reset --hard'),
        'diagnose-issues.md worktree_branch_check must instruct agents to use git reset --hard'
      );
    });
  });

  describe('Defense-in-depth: post-commit deletion check (from #1977)', () => {
    test('gsd-executor.md task_commit_protocol has post-commit deletion verification', () => {
      const content = fs.readFileSync(EXECUTOR_AGENT_PATH, 'utf-8');

      assert.ok(
        content.includes('--diff-filter=D'),
        'gsd-executor.md must include --diff-filter=D to detect accidental file deletions after each commit'
      );

      // Must have a warning about unexpected deletions
      assert.ok(
        content.includes('DELETIONS') || content.includes('WARNING'),
        'gsd-executor.md must emit a warning when a commit includes unexpected file deletions'
      );
    });
  });

  describe('Defense-in-depth: pre-merge deletion check (from #1977)', () => {
    test('execute-phase.md worktree merge section has pre-merge deletion check', () => {
      const content = fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');

      const worktreeCleanupStart = content.indexOf('Worktree cleanup');
      assert.ok(
        worktreeCleanupStart > -1,
        'execute-phase.md must have a worktree cleanup section'
      );

      const cleanupSection = content.slice(worktreeCleanupStart);

      assert.ok(
        cleanupSection.includes('--diff-filter=D'),
        'execute-phase.md worktree cleanup must use --diff-filter=D to block deletion-introducing merges'
      );

      // Deletion check must appear before git merge
      const deletionCheckIdx = cleanupSection.indexOf('--diff-filter=D');
      const gitMergeIdx = cleanupSection.indexOf('git merge');
      assert.ok(
        deletionCheckIdx < gitMergeIdx,
        '--diff-filter=D deletion check must appear before git merge in the worktree cleanup section'
      );

      assert.ok(
        cleanupSection.includes('BLOCKED') || cleanupSection.includes('deletion'),
        'execute-phase.md must block or warn when the worktree branch contains file deletions'
      );
    });

    test('quick.md worktree merge section has pre-merge deletion check', () => {
      const content = fs.readFileSync(QUICK_PATH, 'utf-8');

      const mergeIdx = content.indexOf('git merge');
      assert.ok(mergeIdx > -1, 'quick.md must contain a git merge operation');

      // Find the worktree cleanup block (starts after "Worktree cleanup")
      const worktreeCleanupStart = content.indexOf('Worktree cleanup');
      assert.ok(
        worktreeCleanupStart > -1,
        'quick.md must have a worktree cleanup section'
      );

      const cleanupSection = content.slice(worktreeCleanupStart);

      assert.ok(
        cleanupSection.includes('--diff-filter=D') || cleanupSection.includes('diff-filter'),
        'quick.md worktree cleanup must check for file deletions before merging'
      );
    });
  });

});
201  tests/claude-md-path.test.cjs  Normal file
@@ -0,0 +1,201 @@
/**
 * Tests for configurable claude_md_path setting (#2010)
 */

const { describe, test, beforeEach, afterEach } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');
const { runGsdTools, createTempProject, cleanup } = require('./helpers.cjs');

describe('claude_md_path config key', () => {
  let tmpDir;

  beforeEach(() => {
    tmpDir = createTempProject();
  });

  afterEach(() => {
    cleanup(tmpDir);
  });

  test('claude_md_path is in VALID_CONFIG_KEYS', () => {
    const { VALID_CONFIG_KEYS } = require('../get-shit-done/bin/lib/config.cjs');
    assert.ok(VALID_CONFIG_KEYS.has('claude_md_path'));
  });

  test('config template includes claude_md_path', () => {
    const templatePath = path.join(__dirname, '..', 'get-shit-done', 'templates', 'config.json');
    const template = JSON.parse(fs.readFileSync(templatePath, 'utf-8'));
    assert.strictEqual(template.claude_md_path, './CLAUDE.md');
  });

  test('config-get claude_md_path returns default value when not set', () => {
    // Create a config.json without claude_md_path
    const configPath = path.join(tmpDir, '.planning', 'config.json');
    fs.writeFileSync(configPath, JSON.stringify({ mode: 'interactive' }), 'utf-8');

    const result = runGsdTools('config-get claude_md_path --default ./CLAUDE.md', tmpDir, { HOME: tmpDir });
    assert.ok(result.success, `Expected success but got error: ${result.error}`);
    assert.strictEqual(JSON.parse(result.output), './CLAUDE.md');
  });

  test('config-set claude_md_path works', () => {
    const configPath = path.join(tmpDir, '.planning', 'config.json');
    fs.writeFileSync(configPath, JSON.stringify({ mode: 'interactive' }), 'utf-8');

    const setResult = runGsdTools('config-set claude_md_path .claude/CLAUDE.md', tmpDir, { HOME: tmpDir });
    assert.ok(setResult.success, `Expected success but got error: ${setResult.error}`);

    const getResult = runGsdTools('config-get claude_md_path', tmpDir, { HOME: tmpDir });
    assert.ok(getResult.success, `Expected success but got error: ${getResult.error}`);
    assert.strictEqual(JSON.parse(getResult.output), '.claude/CLAUDE.md');
  });

  test('buildNewProjectConfig includes claude_md_path default', () => {
    // Use config-new-project which calls buildNewProjectConfig
    const result = runGsdTools('config-new-project', tmpDir, { HOME: tmpDir });
    assert.ok(result.success, `Expected success but got error: ${result.error}`);

    const configPath = path.join(tmpDir, '.planning', 'config.json');
    const config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
    assert.strictEqual(config.claude_md_path, './CLAUDE.md');
  });
});

describe('cmdGenerateClaudeProfile reads claude_md_path from config', () => {
  let tmpDir;

  beforeEach(() => {
    tmpDir = createTempProject();
  });

  afterEach(() => {
    cleanup(tmpDir);
  });

  test('uses claude_md_path from config when no --output or --global', () => {
    // Set up config with custom claude_md_path
    const configPath = path.join(tmpDir, '.planning', 'config.json');
    const customPath = '.claude/CLAUDE.md';
    fs.writeFileSync(configPath, JSON.stringify({ claude_md_path: customPath }), 'utf-8');

    // Create the target directory
    fs.mkdirSync(path.join(tmpDir, '.claude'), { recursive: true });

    // Create a minimal analysis file
    const analysisPath = path.join(tmpDir, '.planning', 'analysis.json');
    const analysis = {
      dimensions: {
        communication_style: { rating: 'terse-direct', confidence: 'HIGH' },
      },
      data_source: 'test',
    };
    fs.writeFileSync(analysisPath, JSON.stringify(analysis), 'utf-8');

    const result = runGsdTools(
      ['generate-claude-profile', '--analysis', analysisPath],
      tmpDir,
      { HOME: tmpDir }
    );
    assert.ok(result.success, `Expected success but got error: ${result.error}`);

    const parsed = JSON.parse(result.output);
    const realTmpDir = fs.realpathSync(tmpDir);
    const expectedPath = path.join(realTmpDir, customPath);
    assert.strictEqual(parsed.claude_md_path, expectedPath);
    assert.ok(fs.existsSync(expectedPath), `Expected file at ${expectedPath}`);
  });

  test('--output flag overrides claude_md_path from config', () => {
    // Set up config with custom claude_md_path
    const configPath = path.join(tmpDir, '.planning', 'config.json');
    fs.writeFileSync(configPath, JSON.stringify({ claude_md_path: '.claude/CLAUDE.md' }), 'utf-8');

    // Create analysis file
    const analysisPath = path.join(tmpDir, '.planning', 'analysis.json');
    const analysis = {
      dimensions: {
        communication_style: { rating: 'terse-direct', confidence: 'HIGH' },
      },
      data_source: 'test',
    };
    fs.writeFileSync(analysisPath, JSON.stringify(analysis), 'utf-8');

    const outputFile = 'custom-output.md';
    const result = runGsdTools(
      ['generate-claude-profile', '--analysis', analysisPath, '--output', outputFile],
      tmpDir,
      { HOME: tmpDir }
    );
    assert.ok(result.success, `Expected success but got error: ${result.error}`);

    const parsed = JSON.parse(result.output);
    const realTmpDir = fs.realpathSync(tmpDir);
    assert.strictEqual(parsed.claude_md_path, path.join(realTmpDir, outputFile));
  });
});

describe('cmdGenerateClaudeMd reads claude_md_path from config', () => {
  let tmpDir;

  beforeEach(() => {
    tmpDir = createTempProject();
    // Create minimal project files so generate-claude-md has something to read
    fs.writeFileSync(
      path.join(tmpDir, '.planning', 'PROJECT.md'),
      ['# Test Project', '', 'A test project.'].join('\n'),
      'utf-8'
    );
  });

  afterEach(() => {
    cleanup(tmpDir);
  });

  test('uses claude_md_path from config when no --output', () => {
    // Set up config with custom claude_md_path
    const configPath = path.join(tmpDir, '.planning', 'config.json');
    const customPath = '.claude/CLAUDE.md';
    fs.writeFileSync(configPath, JSON.stringify({ claude_md_path: customPath }), 'utf-8');

    // Create the target directory
    fs.mkdirSync(path.join(tmpDir, '.claude'), { recursive: true });

    const result = runGsdTools('generate-claude-md', tmpDir, { HOME: tmpDir });
    assert.ok(result.success, `Expected success but got error: ${result.error}`);

    const parsed = JSON.parse(result.output);
    const realTmpDir = fs.realpathSync(tmpDir);
    const expectedPath = path.join(realTmpDir, customPath);
    assert.strictEqual(parsed.claude_md_path, expectedPath);
    assert.ok(fs.existsSync(expectedPath), `Expected file at ${expectedPath}`);
  });

  test('--output flag overrides claude_md_path from config', () => {
    // Set up config with custom claude_md_path
    const configPath = path.join(tmpDir, '.planning', 'config.json');
    fs.writeFileSync(configPath, JSON.stringify({ claude_md_path: '.claude/CLAUDE.md' }), 'utf-8');

    const outputFile = 'my-custom.md';
    const result = runGsdTools(['generate-claude-md', '--output', outputFile], tmpDir, { HOME: tmpDir });
    assert.ok(result.success, `Expected success but got error: ${result.error}`);

    const parsed = JSON.parse(result.output);
    const realTmpDir = fs.realpathSync(tmpDir);
    assert.strictEqual(parsed.claude_md_path, path.join(realTmpDir, outputFile));
  });

  test('defaults to ./CLAUDE.md when config has no claude_md_path', () => {
    // Set up config without claude_md_path
    const configPath = path.join(tmpDir, '.planning', 'config.json');
    fs.writeFileSync(configPath, JSON.stringify({ mode: 'interactive' }), 'utf-8');

    const result = runGsdTools('generate-claude-md', tmpDir, { HOME: tmpDir });
    assert.ok(result.success, `Expected success but got error: ${result.error}`);

    const parsed = JSON.parse(result.output);
    const realTmpDir = fs.realpathSync(tmpDir);
    assert.strictEqual(parsed.claude_md_path, path.join(realTmpDir, 'CLAUDE.md'));
  });
});
140  tests/code-review-command.test.cjs  Normal file
@@ -0,0 +1,140 @@
/**
 * Tests for code_review_command hook in ship workflow (#1876)
 *
 * Validates that the external code review command integration is properly
 * wired into config, templates, and the ship workflow.
 */

const { describe, test, beforeEach, afterEach } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');
const { createTempProject, cleanup, runGsdTools } = require('./helpers.cjs');

const CONFIG_CJS_PATH = path.join(__dirname, '..', 'get-shit-done', 'bin', 'lib', 'config.cjs');
const SHIP_MD_PATH = path.join(__dirname, '..', 'get-shit-done', 'workflows', 'ship.md');
const CONFIG_TEMPLATE_PATH = path.join(__dirname, '..', 'get-shit-done', 'templates', 'config.json');

describe('code_review_command config key', () => {
  test('workflow.code_review_command is in VALID_CONFIG_KEYS', () => {
    const { VALID_CONFIG_KEYS } = require(CONFIG_CJS_PATH);
    assert.ok(
      VALID_CONFIG_KEYS.has('workflow.code_review_command'),
      'workflow.code_review_command must be in VALID_CONFIG_KEYS'
    );
  });

  test('config-set accepts workflow.code_review_command', () => {
    const tmpDir = createTempProject();
    try {
      // Create config.json first
      fs.mkdirSync(path.join(tmpDir, '.planning'), { recursive: true });
      fs.writeFileSync(
        path.join(tmpDir, '.planning', 'config.json'),
        JSON.stringify({ workflow: {} }, null, 2)
      );

      const result = runGsdTools(
        ['config-set', 'workflow.code_review_command', 'my-review-tool --review'],
        tmpDir,
        { HOME: tmpDir }
      );
      assert.ok(result.success, 'config-set should succeed');

      const parsed = JSON.parse(result.output);
      assert.strictEqual(parsed.updated, true);
      assert.strictEqual(parsed.key, 'workflow.code_review_command');
      assert.strictEqual(parsed.value, 'my-review-tool --review');
    } finally {
      cleanup(tmpDir);
    }
  });
});

describe('config template', () => {
  test('config.json template has code_review_command under workflow section', () => {
    const template = JSON.parse(fs.readFileSync(CONFIG_TEMPLATE_PATH, 'utf-8'));
    assert.ok(template.workflow, 'template must have workflow section');
    assert.ok(
      'code_review_command' in template.workflow,
      'workflow section must contain code_review_command key'
    );
    assert.strictEqual(
      template.workflow.code_review_command,
      null,
      'code_review_command default should be null'
    );
  });
});

describe('ship workflow code_review_command integration', () => {
  const shipContent = fs.readFileSync(SHIP_MD_PATH, 'utf-8');

  test('ship.md contains code_review_command config check', () => {
    assert.ok(
      shipContent.includes('code_review_command'),
      'ship.md must reference code_review_command'
    );
  });

  test('ship.md has external review sub-step that reads config', () => {
    assert.ok(
      shipContent.includes('config-get') && shipContent.includes('workflow.code_review_command'),
      'ship.md must read workflow.code_review_command from config'
    );
  });

  test('ship.md generates diff against base branch for review', () => {
    assert.ok(
      shipContent.includes('git diff') && shipContent.includes('BASE_BRANCH'),
      'ship.md must generate a diff using BASE_BRANCH for the external review'
    );
  });

  test('ship.md has JSON parsing for external review output', () => {
    assert.ok(
      shipContent.includes('verdict') && shipContent.includes('APPROVED'),
      'ship.md must parse JSON output with verdict field'
    );
    assert.ok(
      shipContent.includes('REVISE'),
      'ship.md must handle REVISE verdict'
    );
  });

  test('ship.md has timeout handling for external review command (120s)', () => {
    assert.ok(
      shipContent.includes('120') || shipContent.includes('timeout'),
      'ship.md must have timeout handling (120s) for external review command'
    );
  });

  test('ship.md has stderr capture on failure', () => {
    assert.ok(
      shipContent.includes('stderr'),
      'ship.md must capture stderr on external review command failure'
    );
  });

  test('ship.md pipes review prompt to command via stdin', () => {
    assert.ok(
      shipContent.includes('stdin'),
      'ship.md must pipe the review prompt to the command via stdin'
    );
  });

  test('ship.md includes diff stats in review prompt', () => {
    assert.ok(
      shipContent.includes('diff --stat') || shipContent.includes('diffstat') || shipContent.includes('--stat'),
      'ship.md must include diff stats in the review prompt'
    );
  });

  test('ship.md falls through to existing review flow on failure', () => {
    // The external review should not block the existing manual review options
    assert.ok(
      shipContent.includes('AskUserQuestion') || shipContent.includes('Skip review'),
      'ship.md must still offer the existing manual review flow after external review'
    );
  });
});
@@ -252,6 +252,16 @@ describe('config-set command', () => {
     assert.strictEqual(config.git.base_branch, 'master');
   });
 
+  test('sets intel.enabled to opt into the intel subsystem', () => {
+    writeConfig(tmpDir, {});
+
+    const result = runGsdTools('config-set intel.enabled true', tmpDir);
+    assert.ok(result.success, `Command failed: ${result.error}`);
+
+    const config = readConfig(tmpDir);
+    assert.strictEqual(config.intel.enabled, true);
+  });
+
   test('errors when no key path provided', () => {
     const result = runGsdTools('config-set', tmpDir);
     assert.strictEqual(result.success, false);
@@ -1629,9 +1629,11 @@ describe('findProjectRoot', () => {
 // ─── reapStaleTempFiles ─────────────────────────────────────────────────────
 
 describe('reapStaleTempFiles', () => {
+  const gsdTmpDir = path.join(os.tmpdir(), 'gsd');
+
   test('removes stale gsd-*.json files older than maxAgeMs', () => {
-    const tmpDir = os.tmpdir();
-    const stalePath = path.join(tmpDir, `gsd-reap-test-${Date.now()}.json`);
+    fs.mkdirSync(gsdTmpDir, { recursive: true });
+    const stalePath = path.join(gsdTmpDir, `gsd-reap-test-${Date.now()}.json`);
     fs.writeFileSync(stalePath, '{}');
     // Set mtime to 10 minutes ago
     const oldTime = new Date(Date.now() - 10 * 60 * 1000);
@@ -1643,8 +1645,8 @@ describe('reapStaleTempFiles', () => {
   });
 
   test('preserves fresh gsd-*.json files', () => {
-    const tmpDir = os.tmpdir();
-    const freshPath = path.join(tmpDir, `gsd-reap-fresh-${Date.now()}.json`);
+    fs.mkdirSync(gsdTmpDir, { recursive: true });
+    const freshPath = path.join(gsdTmpDir, `gsd-reap-fresh-${Date.now()}.json`);
     fs.writeFileSync(freshPath, '{}');
 
     reapStaleTempFiles('gsd-reap-fresh-', { maxAgeMs: 5 * 60 * 1000 });
@@ -1655,8 +1657,8 @@ describe('reapStaleTempFiles', () => {
   });
 
   test('removes stale temp directories when present', () => {
-    const tmpDir = os.tmpdir();
-    const staleDir = fs.mkdtempSync(path.join(tmpDir, 'gsd-reap-dir-'));
+    fs.mkdirSync(gsdTmpDir, { recursive: true });
+    const staleDir = fs.mkdtempSync(path.join(gsdTmpDir, 'gsd-reap-dir-'));
     fs.writeFileSync(path.join(staleDir, 'data.jsonl'), 'test');
     // Set mtime to 10 minutes ago
     const oldTime = new Date(Date.now() - 10 * 60 * 1000);
173  tests/cross-ai-execution.test.cjs  Normal file
@@ -0,0 +1,173 @@
const { test, describe } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');

const CONFIG_PATH = path.join(__dirname, '..', 'get-shit-done', 'bin', 'lib', 'config.cjs');
const EXECUTE_PHASE_PATH = path.join(__dirname, '..', 'get-shit-done', 'workflows', 'execute-phase.md');
const CONFIG_TEMPLATE_PATH = path.join(__dirname, '..', 'get-shit-done', 'templates', 'config.json');

describe('cross-AI execution', () => {

  describe('config keys', () => {
    test('workflow.cross_ai_execution is in VALID_CONFIG_KEYS', () => {
      const { VALID_CONFIG_KEYS } = require(CONFIG_PATH);
      assert.ok(VALID_CONFIG_KEYS.has('workflow.cross_ai_execution'),
        'VALID_CONFIG_KEYS must include workflow.cross_ai_execution');
    });

    test('workflow.cross_ai_command is in VALID_CONFIG_KEYS', () => {
      const { VALID_CONFIG_KEYS } = require(CONFIG_PATH);
      assert.ok(VALID_CONFIG_KEYS.has('workflow.cross_ai_command'),
        'VALID_CONFIG_KEYS must include workflow.cross_ai_command');
    });

    test('workflow.cross_ai_timeout is in VALID_CONFIG_KEYS', () => {
      const { VALID_CONFIG_KEYS } = require(CONFIG_PATH);
      assert.ok(VALID_CONFIG_KEYS.has('workflow.cross_ai_timeout'),
        'VALID_CONFIG_KEYS must include workflow.cross_ai_timeout');
    });
  });

  describe('config template defaults', () => {
    test('config template has cross_ai_execution default', () => {
      const template = JSON.parse(fs.readFileSync(CONFIG_TEMPLATE_PATH, 'utf-8'));
      assert.strictEqual(template.workflow.cross_ai_execution, false,
        'cross_ai_execution should default to false');
    });

    test('config template has cross_ai_command default', () => {
      const template = JSON.parse(fs.readFileSync(CONFIG_TEMPLATE_PATH, 'utf-8'));
      assert.strictEqual(template.workflow.cross_ai_command, '',
        'cross_ai_command should default to empty string');
    });

    test('config template has cross_ai_timeout default', () => {
      const template = JSON.parse(fs.readFileSync(CONFIG_TEMPLATE_PATH, 'utf-8'));
      assert.strictEqual(template.workflow.cross_ai_timeout, 300,
        'cross_ai_timeout should default to 300 seconds');
    });
  });

  describe('execute-phase.md cross-AI step', () => {
    let content;

    test('execute-phase.md has a cross-AI execution step', () => {
      content = fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');
      assert.ok(content.includes('<step name="cross_ai_delegation">'),
        'execute-phase.md must have a step named cross_ai_delegation');
    });

    test('cross-AI step appears between discover_and_group_plans and execute_waves', () => {
      content = content || fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');
      const discoverIdx = content.indexOf('<step name="discover_and_group_plans">');
      const crossAiIdx = content.indexOf('<step name="cross_ai_delegation">');
      const executeIdx = content.indexOf('<step name="execute_waves">');
      assert.ok(discoverIdx < crossAiIdx, 'cross_ai_delegation must come after discover_and_group_plans');
      assert.ok(crossAiIdx < executeIdx, 'cross_ai_delegation must come before execute_waves');
    });

    test('cross-AI step handles --cross-ai flag', () => {
      content = content || fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');
      assert.ok(content.includes('--cross-ai'),
        'execute-phase.md must reference --cross-ai flag');
    });

    test('cross-AI step handles --no-cross-ai flag', () => {
      content = content || fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');
      assert.ok(content.includes('--no-cross-ai'),
        'execute-phase.md must reference --no-cross-ai flag');
    });

    test('cross-AI step uses stdin-based prompt delivery', () => {
      content = content || fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');
      // The step must describe piping prompt via stdin, not shell interpolation
      assert.ok(content.includes('stdin'),
        'cross-AI step must describe stdin-based prompt delivery');
    });

    test('cross-AI step validates summary output', () => {
      content = content || fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');
      // The step must describe validating the captured summary
      const crossAiSection = content.substring(
        content.indexOf('<step name="cross_ai_delegation">'),
        content.indexOf('</step>', content.indexOf('<step name="cross_ai_delegation">')) + '</step>'.length
      );
      assert.ok(
        crossAiSection.includes('SUMMARY') && crossAiSection.includes('valid'),
        'cross-AI step must validate the summary output'
      );
    });

    test('cross-AI step warns about dirty working tree', () => {
      content = content || fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');
      const crossAiSection = content.substring(
        content.indexOf('<step name="cross_ai_delegation">'),
        content.indexOf('</step>', content.indexOf('<step name="cross_ai_delegation">')) + '</step>'.length
      );
      assert.ok(
        crossAiSection.includes('dirty') || crossAiSection.includes('uncommitted') || crossAiSection.includes('working tree'),
        'cross-AI step must warn about dirty/uncommitted changes from external command'
      );
    });

    test('cross-AI step reads cross_ai_command from config', () => {
      content = content || fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');
      const crossAiSection = content.substring(
        content.indexOf('<step name="cross_ai_delegation">'),
        content.indexOf('</step>', content.indexOf('<step name="cross_ai_delegation">')) + '</step>'.length
      );
      assert.ok(
        crossAiSection.includes('cross_ai_command'),
        'cross-AI step must read cross_ai_command from config'
      );
    });

    test('cross-AI step reads cross_ai_timeout from config', () => {
      content = content || fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');
      const crossAiSection = content.substring(
        content.indexOf('<step name="cross_ai_delegation">'),
        content.indexOf('</step>', content.indexOf('<step name="cross_ai_delegation">')) + '</step>'.length
      );
      assert.ok(
        crossAiSection.includes('cross_ai_timeout'),
        'cross-AI step must read cross_ai_timeout from config'
      );
    });

    test('cross-AI step handles failure with retry/skip/abort', () => {
      content = content || fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');
      const crossAiSection = content.substring(
        content.indexOf('<step name="cross_ai_delegation">'),
        content.indexOf('</step>', content.indexOf('<step name="cross_ai_delegation">')) + '</step>'.length
      );
      assert.ok(crossAiSection.includes('retry'), 'cross-AI step must offer retry on failure');
      assert.ok(crossAiSection.includes('skip'), 'cross-AI step must offer skip on failure');
      assert.ok(crossAiSection.includes('abort'), 'cross-AI step must offer abort on failure');
    });

    test('cross-AI step skips normal executor for handled plans', () => {
      content = content || fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');
      const crossAiSection = content.substring(
        content.indexOf('<step name="cross_ai_delegation">'),
        content.indexOf('</step>', content.indexOf('<step name="cross_ai_delegation">')) + '</step>'.length
      );
      assert.ok(
        crossAiSection.includes('skip') && (crossAiSection.includes('executor') || crossAiSection.includes('execute_waves')),
        'cross-AI step must describe skipping normal executor for cross-AI handled plans'
      );
    });

    test('parse_args step includes --cross-ai and --no-cross-ai', () => {
      content = content || fs.readFileSync(EXECUTE_PHASE_PATH, 'utf-8');
      const parseArgsSection = content.substring(
        content.indexOf('<step name="parse_args"'),
        content.indexOf('</step>', content.indexOf('<step name="parse_args"')) + '</step>'.length
      );
      assert.ok(parseArgsSection.includes('--cross-ai'),
        'parse_args step must parse --cross-ai flag');
      assert.ok(parseArgsSection.includes('--no-cross-ai'),
        'parse_args step must parse --no-cross-ai flag');
    });
  });
});
222  tests/cursor-reviewer.test.cjs  Normal file
@@ -0,0 +1,222 @@
|
||||
/**
|
||||
* Cursor CLI Reviewer Tests (#1960)
|
||||
*
|
||||
* Verifies that /gsd-review includes Cursor CLI as a peer reviewer:
|
||||
* - review.md workflow contains cursor detection, flag parsing, self-detection, invocation
|
||||
* - commands/gsd/review.md command file mentions --cursor flag
|
||||
* - help.md lists --cursor in the /gsd-review signature
|
||||
* - docs/COMMANDS.md has --cursor flag row
|
||||
* - docs/FEATURES.md has Cursor in the review section
|
||||
* - i18n docs mirror the same content
|
||||
* - REVIEWS.md template includes Cursor Review section
|
||||
*/
|
||||
|
||||
const { test, describe } = require('node:test');
|
||||
const assert = require('node:assert/strict');
|
||||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
|
||||
const ROOT = path.join(__dirname, '..');
|
||||
|
||||
describe('Cursor CLI reviewer in /gsd-review (#1960)', () => {
|
||||
|
||||
// --- review.md workflow ---
|
||||
|
||||
describe('review.md workflow', () => {
|
||||
const reviewPath = path.join(ROOT, 'get-shit-done', 'workflows', 'review.md');
|
||||
let content;
|
||||
|
||||
test('review.md exists', () => {
|
||||
assert.ok(fs.existsSync(reviewPath), 'review.md should exist');
|
||||
content = fs.readFileSync(reviewPath, 'utf-8');
|
||||
});
|
||||
|
||||
test('contains cursor CLI detection via command -v', () => {
|
||||
const c = fs.readFileSync(reviewPath, 'utf-8');
|
||||
assert.ok(
|
||||
c.includes('command -v cursor'),
|
||||
'review.md should detect cursor CLI via "command -v cursor"'
|
||||
);
|
||||
});
|
||||
|
||||
test('contains --cursor flag parsing', () => {
|
||||
const c = fs.readFileSync(reviewPath, 'utf-8');
|
||||
assert.ok(
|
||||
c.includes('--cursor'),
|
||||
'review.md should parse --cursor flag'
|
||||
);
|
||||
});
|
||||
|
||||
test('contains CURSOR_SESSION_ID self-detection', () => {
|
||||
const c = fs.readFileSync(reviewPath, 'utf-8');
|
||||
assert.ok(
|
||||
c.includes('CURSOR_SESSION_ID'),
|
||||
'review.md should detect self-CLI via CURSOR_SESSION_ID env var'
|
||||
);
|
||||
});
|
||||
|
||||
test('contains cursor agent invocation command', () => {
|
||||
const c = fs.readFileSync(reviewPath, 'utf-8');
|
||||
assert.ok(
|
||||
c.includes('cursor agent -p --mode ask --trust'),
|
||||
'review.md should invoke cursor via "cursor agent -p --mode ask --trust"'
|
||||
);
|
||||
});
|
||||
|
||||
test('contains Cursor Review section in REVIEWS.md template', () => {
|
||||
const c = fs.readFileSync(reviewPath, 'utf-8');
|
||||
assert.ok(
|
||||
c.includes('Cursor Review'),
|
||||
'review.md should include a "Cursor Review" section in the REVIEWS.md template'
|
||||
);
|
||||
});
|
||||
|
||||
test('lists cursor in the reviewers frontmatter array', () => {
|
||||
const c = fs.readFileSync(reviewPath, 'utf-8');
|
||||
assert.ok(
|
||||
/reviewers:.*cursor/.test(c),
|
||||
'review.md should list cursor in the reviewers array'
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
// --- commands/gsd/review.md ---
|
||||
|
||||
describe('commands/gsd/review.md', () => {
|
||||
const cmdPath = path.join(ROOT, 'commands', 'gsd', 'review.md');
|
||||
|
||||
test('mentions --cursor flag', () => {
|
||||
const c = fs.readFileSync(cmdPath, 'utf-8');
|
||||
assert.ok(
|
||||
c.includes('--cursor'),
|
||||
'commands/gsd/review.md should mention --cursor flag'
|
||||
);
|
||||
});
|
||||
|
||||
test('mentions Cursor in objective or context', () => {
|
||||
const c = fs.readFileSync(cmdPath, 'utf-8');
|
||||
assert.ok(
|
||||
c.includes('Cursor'),
|
||||
'commands/gsd/review.md should mention Cursor'
|
||||
);
|
||||
});
|
||||
});
|
||||
|
// --- help.md ---

describe('help.md', () => {
  const helpPath = path.join(ROOT, 'get-shit-done', 'workflows', 'help.md');

  test('lists --cursor in /gsd-review signature', () => {
    const c = fs.readFileSync(helpPath, 'utf-8');
    assert.ok(
      c.includes('--cursor'),
      'help.md should list --cursor in the /gsd-review command signature'
    );
  });
});

// --- docs/COMMANDS.md ---

describe('docs/COMMANDS.md', () => {
  const docsPath = path.join(ROOT, 'docs', 'COMMANDS.md');

  test('has --cursor flag row', () => {
    const c = fs.readFileSync(docsPath, 'utf-8');
    assert.ok(
      c.includes('--cursor'),
      'docs/COMMANDS.md should have a --cursor flag row'
    );
  });
});

// --- docs/FEATURES.md ---

describe('docs/FEATURES.md', () => {
  const featPath = path.join(ROOT, 'docs', 'FEATURES.md');

  test('has --cursor in review command signature', () => {
    const c = fs.readFileSync(featPath, 'utf-8');
    assert.ok(
      c.includes('--cursor'),
      'docs/FEATURES.md should include --cursor in the review command signature'
    );
  });

  test('mentions Cursor in the review purpose', () => {
    const c = fs.readFileSync(featPath, 'utf-8');
    assert.ok(
      c.includes('Cursor'),
      'docs/FEATURES.md should mention Cursor in the review section'
    );
  });
});

// --- i18n: ja-JP ---

describe('docs/ja-JP/COMMANDS.md', () => {
  const jaPath = path.join(ROOT, 'docs', 'ja-JP', 'COMMANDS.md');

  test('has --cursor flag row', () => {
    const c = fs.readFileSync(jaPath, 'utf-8');
    assert.ok(
      c.includes('--cursor'),
      'docs/ja-JP/COMMANDS.md should have a --cursor flag row'
    );
  });
});

describe('docs/ja-JP/FEATURES.md', () => {
  const jaPath = path.join(ROOT, 'docs', 'ja-JP', 'FEATURES.md');

  test('has --cursor in review command signature', () => {
    const c = fs.readFileSync(jaPath, 'utf-8');
    assert.ok(
      c.includes('--cursor'),
      'docs/ja-JP/FEATURES.md should include --cursor in the review command signature'
    );
  });

  test('mentions Cursor in the review section', () => {
    const c = fs.readFileSync(jaPath, 'utf-8');
    assert.ok(
      /Cursor/i.test(c),
      'docs/ja-JP/FEATURES.md should mention Cursor in the review section'
    );
  });
});

// --- i18n: ko-KR ---

describe('docs/ko-KR/COMMANDS.md', () => {
  const koPath = path.join(ROOT, 'docs', 'ko-KR', 'COMMANDS.md');

  test('has --cursor flag row', () => {
    const c = fs.readFileSync(koPath, 'utf-8');
    assert.ok(
      c.includes('--cursor'),
      'docs/ko-KR/COMMANDS.md should have a --cursor flag row'
    );
  });
});

describe('docs/ko-KR/FEATURES.md', () => {
  const koPath = path.join(ROOT, 'docs', 'ko-KR', 'FEATURES.md');

  test('has --cursor in review command signature', () => {
    const c = fs.readFileSync(koPath, 'utf-8');
    assert.ok(
      c.includes('--cursor'),
      'docs/ko-KR/FEATURES.md should include --cursor in the review command signature'
    );
  });

  test('mentions Cursor in the review section', () => {
    const c = fs.readFileSync(koPath, 'utf-8');
    assert.ok(
      /Cursor/i.test(c),
      'docs/ko-KR/FEATURES.md should mention Cursor in the review section'
    );
  });
});
});
168  tests/extract-learnings.test.cjs  Normal file
@@ -0,0 +1,168 @@
/**
 * Extract-Learnings Command & Workflow Tests
 *
 * Validates command file existence, frontmatter correctness, workflow content,
 * 4 learning categories, capture_thought handling, graceful degradation,
 * LEARNINGS.md output, and missing artifact handling.
 */

const { describe, test } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');

const COMMAND_PATH = path.join(__dirname, '..', 'commands', 'gsd', 'extract_learnings.md');
const WORKFLOW_PATH = path.join(__dirname, '..', 'get-shit-done', 'workflows', 'extract_learnings.md');

describe('extract-learnings command', () => {
  test('command file exists', () => {
    assert.ok(fs.existsSync(COMMAND_PATH), 'commands/gsd/extract_learnings.md should exist');
  });

  test('command file has correct name frontmatter', () => {
    const content = fs.readFileSync(COMMAND_PATH, 'utf-8');
    assert.ok(content.includes('name: gsd:extract-learnings'), 'Command must have name: gsd:extract-learnings');
  });

  test('command file has description frontmatter', () => {
    const content = fs.readFileSync(COMMAND_PATH, 'utf-8');
    assert.ok(content.includes('description:'), 'Command must have description frontmatter');
  });

  test('command file has argument-hint for phase-number', () => {
    const content = fs.readFileSync(COMMAND_PATH, 'utf-8');
    assert.ok(content.includes('argument-hint:'), 'Command must have argument-hint');
    assert.ok(content.includes('<phase-number>'), 'argument-hint must reference <phase-number>');
  });

  test('command file has allowed-tools list', () => {
    const content = fs.readFileSync(COMMAND_PATH, 'utf-8');
    assert.ok(content.includes('allowed-tools:'), 'Command must have allowed-tools');
    assert.ok(content.includes('Read'), 'allowed-tools must include Read');
    assert.ok(content.includes('Write'), 'allowed-tools must include Write');
    assert.ok(content.includes('Bash'), 'allowed-tools must include Bash');
    assert.ok(content.includes('Grep'), 'allowed-tools must include Grep');
    assert.ok(content.includes('Glob'), 'allowed-tools must include Glob');
    assert.ok(content.includes('Agent'), 'allowed-tools must include Agent');
  });

  test('command file has type: prompt', () => {
    const content = fs.readFileSync(COMMAND_PATH, 'utf-8');
    assert.ok(content.includes('type: prompt'), 'Command must have type: prompt');
  });

  test('command references the workflow via execution_context', () => {
    const content = fs.readFileSync(COMMAND_PATH, 'utf-8');
    assert.ok(
      content.includes('workflows/extract_learnings.md'),
      'Command must reference workflows/extract_learnings.md in execution_context'
    );
  });
});

describe('extract-learnings workflow', () => {
  test('workflow file exists', () => {
    assert.ok(fs.existsSync(WORKFLOW_PATH), 'workflows/extract_learnings.md should exist');
  });

  test('workflow has objective tag', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('<objective>'), 'Workflow must have <objective> tag');
    assert.ok(content.includes('</objective>'), 'Workflow must close <objective> tag');
  });

  test('workflow has process tag', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('<process>'), 'Workflow must have <process> tag');
    assert.ok(content.includes('</process>'), 'Workflow must close <process> tag');
  });

  test('workflow has step tags', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('<step name='), 'Workflow must have named step tags');
    assert.ok(content.includes('</step>'), 'Workflow must close step tags');
  });

  test('workflow has success_criteria tag', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('<success_criteria>'), 'Workflow must have <success_criteria> tag');
    assert.ok(content.includes('</success_criteria>'), 'Workflow must close <success_criteria> tag');
  });

  test('workflow has critical_rules tag', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('<critical_rules>'), 'Workflow must have <critical_rules> tag');
    assert.ok(content.includes('</critical_rules>'), 'Workflow must close <critical_rules> tag');
  });

  test('workflow reads required artifacts (PLAN.md and SUMMARY.md)', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('PLAN.md'), 'Workflow must reference PLAN.md');
    assert.ok(content.includes('SUMMARY.md'), 'Workflow must reference SUMMARY.md');
  });

  test('workflow reads optional artifacts (VERIFICATION.md, UAT.md, STATE.md)', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('VERIFICATION.md'), 'Workflow must reference VERIFICATION.md');
    assert.ok(content.includes('UAT.md'), 'Workflow must reference UAT.md');
    assert.ok(content.includes('STATE.md'), 'Workflow must reference STATE.md');
  });

  test('workflow extracts all 4 learning categories', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.toLowerCase().includes('decision'), 'Workflow must extract decisions');
    assert.ok(content.toLowerCase().includes('lesson'), 'Workflow must extract lessons');
    assert.ok(content.toLowerCase().includes('pattern'), 'Workflow must extract patterns');
    assert.ok(content.toLowerCase().includes('surprise'), 'Workflow must extract surprises');
  });

  test('workflow handles capture_thought tool availability', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('capture_thought'), 'Workflow must reference capture_thought tool');
  });

  test('workflow degrades gracefully when capture_thought is unavailable', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(
      content.includes('graceful') || content.includes('not available') || content.includes('unavailable') || content.includes('fallback'),
      'Workflow must handle graceful degradation when capture_thought is unavailable'
    );
  });

  test('workflow outputs LEARNINGS.md', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('LEARNINGS.md'), 'Workflow must output LEARNINGS.md');
  });

  test('workflow handles missing artifacts gracefully', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(
      content.includes('missing') || content.includes('not found') || content.includes('optional'),
      'Workflow must handle missing artifacts'
    );
  });

  test('workflow includes source attribution for extracted items', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(
      content.includes('source') || content.includes('attribution') || content.includes('Source:'),
      'Workflow must include source attribution for extracted items'
    );
  });

  test('workflow specifies LEARNINGS.md YAML frontmatter fields', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('phase'), 'LEARNINGS.md frontmatter must include phase');
    assert.ok(content.includes('phase_name'), 'LEARNINGS.md frontmatter must include phase_name');
    assert.ok(content.includes('generated'), 'LEARNINGS.md frontmatter must include generated');
    assert.ok(content.includes('missing_artifacts'), 'LEARNINGS.md frontmatter must include missing_artifacts');
  });

  test('workflow supports overwriting previous LEARNINGS.md on re-run', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(
      content.includes('overwrit') || content.includes('replace'),
      'Workflow must support overwriting previous LEARNINGS.md'
    );
  });
});
249  tests/gsd-statusline.test.cjs  Normal file
@@ -0,0 +1,249 @@
/**
 * Tests for gsd-statusline.js GSD state display helpers.
 *
 * Covers:
 * - parseStateMd across YAML-frontmatter, body-fallback, and partial formats
 * - formatGsdState graceful degradation when fields are missing
 * - readGsdState walk-up search with proper bounds
 */

'use strict';

const { test, describe } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('node:fs');
const os = require('node:os');
const path = require('node:path');

const { parseStateMd, formatGsdState, readGsdState } = require('../hooks/gsd-statusline.js');

// ─── parseStateMd ───────────────────────────────────────────────────────────

describe('parseStateMd', () => {
  test('parses full YAML frontmatter', () => {
    const content = [
      '---',
      'status: executing',
      'milestone: v1.9',
      'milestone_name: Code Quality',
      '---',
      '',
      '# State',
      'Phase: 1 of 5 (fix-graphiti-deployment)',
    ].join('\n');

    const s = parseStateMd(content);
    assert.equal(s.status, 'executing');
    assert.equal(s.milestone, 'v1.9');
    assert.equal(s.milestoneName, 'Code Quality');
    assert.equal(s.phaseNum, '1');
    assert.equal(s.phaseTotal, '5');
    assert.equal(s.phaseName, 'fix-graphiti-deployment');
  });

  test('treats literal "null" values as null', () => {
    const content = [
      '---',
      'status: null',
      'milestone: null',
      'milestone_name: null',
      '---',
    ].join('\n');

    const s = parseStateMd(content);
    assert.equal(s.status, null);
    assert.equal(s.milestone, null);
    assert.equal(s.milestoneName, null);
  });

  test('strips surrounding quotes from frontmatter values', () => {
    const content = [
      '---',
      'milestone_name: "Code Quality"',
      "milestone: 'v1.9'",
      '---',
    ].join('\n');

    const s = parseStateMd(content);
    assert.equal(s.milestone, 'v1.9');
    assert.equal(s.milestoneName, 'Code Quality');
  });

  test('parses phase without name', () => {
    const content = [
      '---',
      'status: planning',
      '---',
      'Phase: 3 of 10',
    ].join('\n');

    const s = parseStateMd(content);
    assert.equal(s.phaseNum, '3');
    assert.equal(s.phaseTotal, '10');
    assert.equal(s.phaseName, null);
  });

  test('falls back to body Status when frontmatter is missing', () => {
    const content = [
      '# State',
      'Status: Ready to plan',
    ].join('\n');

    const s = parseStateMd(content);
    assert.equal(s.status, 'planning');
  });

  test('body fallback recognizes executing state', () => {
    const content = 'Status: Executing phase 2';
    assert.equal(parseStateMd(content).status, 'executing');
  });

  test('body fallback recognizes complete state', () => {
    const content = 'Status: Complete';
    assert.equal(parseStateMd(content).status, 'complete');
  });

  test('body fallback recognizes archived as complete', () => {
    const content = 'Status: Archived';
    assert.equal(parseStateMd(content).status, 'complete');
  });

  test('returns empty object for empty content', () => {
    const s = parseStateMd('');
    assert.deepEqual(s, {});
  });

  test('returns partial state when only some fields present', () => {
    const content = [
      '---',
      'milestone: v2.0',
      '---',
    ].join('\n');

    const s = parseStateMd(content);
    assert.equal(s.milestone, 'v2.0');
    assert.equal(s.status, undefined);
    assert.equal(s.phaseNum, undefined);
  });
});

// ─── formatGsdState ─────────────────────────────────────────────────────────

describe('formatGsdState', () => {
  test('formats full state with milestone name, status, and phase name', () => {
    const out = formatGsdState({
      milestone: 'v1.9',
      milestoneName: 'Code Quality',
      status: 'executing',
      phaseNum: '1',
      phaseTotal: '5',
      phaseName: 'fix-graphiti-deployment',
    });
    assert.equal(out, 'v1.9 Code Quality · executing · fix-graphiti-deployment (1/5)');
  });

  test('skips placeholder "milestone" value in milestoneName', () => {
    const out = formatGsdState({
      milestone: 'v1.0',
      milestoneName: 'milestone',
      status: 'planning',
    });
    assert.equal(out, 'v1.0 · planning');
  });

  test('uses short phase form when phase name is missing', () => {
    const out = formatGsdState({
      milestone: 'v2.0',
      status: 'executing',
      phaseNum: '3',
      phaseTotal: '7',
    });
    assert.equal(out, 'v2.0 · executing · ph 3/7');
  });

  test('omits phase entirely when phaseNum/phaseTotal missing', () => {
    const out = formatGsdState({
      milestone: 'v1.0',
      status: 'planning',
    });
    assert.equal(out, 'v1.0 · planning');
  });

  test('handles milestone version only (no name)', () => {
    const out = formatGsdState({
      milestone: 'v1.9',
      status: 'executing',
    });
    assert.equal(out, 'v1.9 · executing');
  });

  test('handles milestone name only (no version)', () => {
    const out = formatGsdState({
      milestoneName: 'Foundations',
      status: 'planning',
    });
    assert.equal(out, 'Foundations · planning');
  });

  test('returns empty string for empty state', () => {
    assert.equal(formatGsdState({}), '');
  });

  test('returns only available parts when everything else is missing', () => {
    assert.equal(formatGsdState({ status: 'planning' }), 'planning');
  });
});

// ─── readGsdState ───────────────────────────────────────────────────────────

describe('readGsdState', () => {
  const tmpRoot = fs.mkdtempSync(path.join(os.tmpdir(), 'gsd-statusline-test-'));

  test('finds STATE.md in the starting directory', () => {
    const proj = fs.mkdtempSync(path.join(tmpRoot, 'proj-'));
    fs.mkdirSync(path.join(proj, '.planning'), { recursive: true });
    fs.writeFileSync(
      path.join(proj, '.planning', 'STATE.md'),
      '---\nstatus: executing\nmilestone: v1.0\n---\n'
    );

    const s = readGsdState(proj);
    assert.equal(s.status, 'executing');
    assert.equal(s.milestone, 'v1.0');
  });

  test('walks up to find STATE.md in a parent directory', () => {
    const proj = fs.mkdtempSync(path.join(tmpRoot, 'proj-'));
    fs.mkdirSync(path.join(proj, '.planning'), { recursive: true });
    fs.writeFileSync(
      path.join(proj, '.planning', 'STATE.md'),
      '---\nstatus: planning\n---\n'
    );

    const nested = path.join(proj, 'src', 'components', 'deep');
    fs.mkdirSync(nested, { recursive: true });

    const s = readGsdState(nested);
    assert.equal(s.status, 'planning');
  });

  test('returns null when no STATE.md exists in the walk-up chain', () => {
    const proj = fs.mkdtempSync(path.join(tmpRoot, 'proj-'));
    const nested = path.join(proj, 'src');
    fs.mkdirSync(nested, { recursive: true });

    assert.equal(readGsdState(nested), null);
  });

  test('handles an empty STATE.md without crashing', () => {
    const proj = fs.mkdtempSync(path.join(tmpRoot, 'proj-'));
    fs.mkdirSync(path.join(proj, '.planning'), { recursive: true });
    fs.writeFileSync(path.join(proj, '.planning', 'STATE.md'), '');

    const s = readGsdState(proj);
    // An empty file yields an empty state object, not null; the function
    // only returns null when no file is found.
    assert.deepEqual(s, {});
  });
});
550  tests/gsd2-import.test.cjs  Normal file
@@ -0,0 +1,550 @@
|
||||
'use strict';
|
||||
|
||||
const { describe, it, test, beforeEach, afterEach } = require('node:test');
|
||||
const assert = require('node:assert/strict');
|
||||
const fs = require('node:fs');
|
||||
const path = require('node:path');
|
||||
const { createTempDir, cleanup, runGsdTools } = require('./helpers.cjs');
|
||||
|
||||
const {
|
||||
findGsd2Root,
|
||||
parseSlicesFromRoadmap,
|
||||
parseMilestoneTitle,
|
||||
parseTaskTitle,
|
||||
parseTaskDescription,
|
||||
parseTaskMustHaves,
|
||||
parseGsd2,
|
||||
buildPlanningArtifacts,
|
||||
buildRoadmapMd,
|
||||
buildStateMd,
|
||||
slugify,
|
||||
zeroPad,
|
||||
} = require('../get-shit-done/bin/lib/gsd2-import.cjs');
|
||||
|
||||
// ─── Fixture Builders ──────────────────────────────────────────────────────
|
||||
|
||||
/** Build a minimal but complete GSD-2 .gsd/ directory in tmpDir. */
|
||||
function makeGsd2Project(tmpDir, opts = {}) {
|
||||
const gsdDir = path.join(tmpDir, '.gsd');
|
||||
const m001Dir = path.join(gsdDir, 'milestones', 'M001');
|
||||
const s01Dir = path.join(m001Dir, 'slices', 'S01');
|
||||
const s02Dir = path.join(m001Dir, 'slices', 'S02');
|
||||
const s01TasksDir = path.join(s01Dir, 'tasks');
|
||||
|
||||
fs.mkdirSync(s01TasksDir, { recursive: true });
|
||||
|
||||
fs.writeFileSync(path.join(gsdDir, 'PROJECT.md'), '# My Project\n\nA test project.\n');
|
||||
fs.writeFileSync(path.join(gsdDir, 'REQUIREMENTS.md'), [
|
||||
'# Requirements',
|
||||
'',
|
||||
'## Active',
|
||||
'',
|
||||
'### R001 — Do the thing',
|
||||
'',
|
||||
'- Status: active',
|
||||
'- Description: The core requirement.',
|
||||
'',
|
||||
].join('\n'));
|
||||
|
||||
const roadmap = [
|
||||
'# M001: Foundation',
|
||||
'',
|
||||
'**Vision:** Build the foundation.',
|
||||
'',
|
||||
'## Success Criteria',
|
||||
'',
|
||||
'- It works.',
|
||||
'',
|
||||
'## Slices',
|
||||
'',
|
||||
'- [x] **S01: Setup** `risk:low` `depends:[]`',
|
||||
' > After this: setup complete',
|
||||
'- [ ] **S02: Auth System** `risk:medium` `depends:[S01]`',
|
||||
' > After this: auth works',
|
||||
].join('\n');
|
||||
fs.writeFileSync(path.join(m001Dir, 'M001-ROADMAP.md'), roadmap);
|
||||
|
||||
// S01 — completed slice with research and a done task
|
||||
fs.writeFileSync(path.join(s01Dir, 'S01-PLAN.md'), [
|
||||
'# S01: Setup',
|
||||
'',
|
||||
'**Goal:** Set up the project.',
|
||||
'',
|
||||
'## Tasks',
|
||||
'- [x] **T01: Init**',
|
||||
].join('\n'));
|
||||
fs.writeFileSync(path.join(s01Dir, 'S01-RESEARCH.md'), '# Research\n\nSome research.\n');
|
||||
fs.writeFileSync(path.join(s01Dir, 'S01-SUMMARY.md'), '---\nstatus: done\n---\n\nSlice done.\n');
|
||||
|
||||
fs.writeFileSync(path.join(s01TasksDir, 'T01-PLAN.md'), [
|
||||
'# T01: Init Project',
|
||||
'',
|
||||
'**Slice:** S01 — **Milestone:** M001',
|
||||
'',
|
||||
'## Description',
|
||||
'Initialize the project structure.',
|
||||
'',
|
||||
'## Must-Haves',
|
||||
'- [x] package.json exists',
|
||||
'- [x] tsconfig.json exists',
|
||||
'',
|
||||
'## Files',
|
||||
'- `package.json`',
|
||||
'- `tsconfig.json`',
|
||||
].join('\n'));
|
||||
fs.writeFileSync(path.join(s01TasksDir, 'T01-SUMMARY.md'), [
|
||||
'---',
|
||||
'status: done',
|
||||
'completed_at: 2025-01-15',
|
||||
'---',
|
||||
'',
|
||||
'# T01: Init Project',
|
||||
'',
|
||||
'Set up package.json and tsconfig.json.',
|
||||
].join('\n'));
|
||||
|
||||
// S02 — not started: slice appears in roadmap but no slice directory
|
||||
if (opts.withS02Dir) {
|
||||
fs.mkdirSync(path.join(s02Dir, 'tasks'), { recursive: true });
|
||||
fs.writeFileSync(path.join(s02Dir, 'S02-PLAN.md'), [
|
||||
'# S02: Auth System',
|
||||
'',
|
||||
'**Goal:** Add authentication.',
|
||||
'',
|
||||
'## Tasks',
|
||||
'- [ ] **T01: JWT middleware**',
|
||||
].join('\n'));
|
||||
fs.writeFileSync(path.join(s02Dir, 'tasks', 'T01-PLAN.md'), [
|
||||
'# T01: JWT Middleware',
|
||||
'',
|
||||
'**Slice:** S02 — **Milestone:** M001',
|
||||
'',
|
||||
'## Description',
|
||||
'Implement JWT token validation middleware.',
|
||||
'',
|
||||
'## Must-Haves',
|
||||
'- [ ] validateToken() returns 401 on invalid JWT',
|
||||
].join('\n'));
|
||||
}
|
||||
|
||||
return gsdDir;
|
||||
}
|
||||
|
||||
/** Build a two-milestone GSD-2 project. */
|
||||
function makeTwoMilestoneProject(tmpDir) {
|
||||
const gsdDir = path.join(tmpDir, '.gsd');
|
||||
const m001Dir = path.join(gsdDir, 'milestones', 'M001');
|
||||
const m002Dir = path.join(gsdDir, 'milestones', 'M002');
|
||||
|
||||
fs.mkdirSync(path.join(m001Dir, 'slices', 'S01', 'tasks'), { recursive: true });
|
||||
fs.mkdirSync(path.join(m002Dir, 'slices', 'S01', 'tasks'), { recursive: true });
|
||||
|
||||
fs.writeFileSync(path.join(gsdDir, 'PROJECT.md'), '# Multi-milestone Project\n');
|
||||
|
||||
fs.writeFileSync(path.join(m001Dir, 'M001-ROADMAP.md'), [
|
||||
'# M001: Alpha',
|
||||
'',
|
||||
'## Slices',
|
||||
'',
|
||||
'- [x] **S01: Core** `risk:low` `depends:[]`',
|
||||
'- [x] **S02: API** `risk:low` `depends:[S01]`',
|
||||
].join('\n'));
|
||||
|
||||
fs.writeFileSync(path.join(m002Dir, 'M002-ROADMAP.md'), [
|
||||
'# M002: Beta',
|
||||
'',
|
||||
'## Slices',
|
||||
'',
|
||||
'- [ ] **S01: Dashboard** `risk:medium` `depends:[]`',
|
||||
].join('\n'));
|
||||
|
||||
return gsdDir;
|
||||
}
|
||||
|
||||
// ─── Unit Tests ────────────────────────────────────────────────────────────
|
||||
|
||||
describe('parseSlicesFromRoadmap', () => {
|
||||
test('parses done and pending slices', () => {
|
||||
const content = [
|
||||
'## Slices',
|
||||
'',
|
||||
'- [x] **S01: Setup** `risk:low` `depends:[]`',
|
||||
'- [ ] **S02: Auth System** `risk:medium` `depends:[S01]`',
|
||||
].join('\n');
|
||||
const slices = parseSlicesFromRoadmap(content);
|
||||
assert.strictEqual(slices.length, 2);
|
||||
assert.deepStrictEqual(slices[0], { done: true, id: 'S01', title: 'Setup' });
|
||||
assert.deepStrictEqual(slices[1], { done: false, id: 'S02', title: 'Auth System' });
|
||||
});
|
||||
|
||||
test('returns empty array when no Slices section', () => {
|
||||
const slices = parseSlicesFromRoadmap('# M001: Title\n\n## Success Criteria\n\n- Works.');
|
||||
assert.strictEqual(slices.length, 0);
|
||||
});
|
||||
|
||||
test('ignores non-slice lines in the section', () => {
|
||||
const content = [
|
||||
'## Slices',
|
||||
'',
|
||||
'Some intro text.',
|
||||
'- [x] **S01: Core** `risk:low` `depends:[]`',
|
||||
' > After this: done',
|
||||
].join('\n');
|
||||
const slices = parseSlicesFromRoadmap(content);
|
||||
assert.strictEqual(slices.length, 1);
|
||||
assert.strictEqual(slices[0].id, 'S01');
|
||||
});
|
||||
});
|
||||
|
||||
describe('parseMilestoneTitle', () => {
|
||||
test('extracts title from first heading', () => {
|
||||
assert.strictEqual(parseMilestoneTitle('# M001: Foundation\n\nBody.'), 'Foundation');
|
||||
});
|
||||
|
||||
test('returns null when heading absent', () => {
|
||||
assert.strictEqual(parseMilestoneTitle('No heading here.'), null);
|
||||
});
|
||||
});
|
||||
|
||||
describe('parseTaskTitle', () => {
|
||||
test('extracts title from task plan', () => {
|
||||
assert.strictEqual(parseTaskTitle('# T01: Init Project\n\nBody.', 'T01'), 'Init Project');
|
||||
});
|
||||
|
||||
test('falls back to provided default', () => {
|
||||
assert.strictEqual(parseTaskTitle('No heading.', 'T01'), 'T01');
|
||||
});
|
||||
});
|
||||
|
||||
describe('parseTaskDescription', () => {
|
||||
test('extracts description body', () => {
|
||||
const content = [
|
||||
'# T01: Title',
|
||||
'',
|
||||
'## Description',
|
||||
'Do the thing.',
|
||||
'',
|
||||
'## Must-Haves',
|
||||
].join('\n');
|
||||
assert.strictEqual(parseTaskDescription(content), 'Do the thing.');
|
||||
});
|
||||
|
||||
test('returns empty string when section absent', () => {
|
||||
assert.strictEqual(parseTaskDescription('# T01: Title\n\nNo sections.'), '');
|
||||
});
|
||||
});
|
||||
|
||||
describe('parseTaskMustHaves', () => {
|
||||
test('parses checked and unchecked items', () => {
|
||||
const content = [
|
||||
'## Must-Haves',
|
||||
'- [x] File exists',
|
||||
'- [ ] Tests pass',
|
||||
].join('\n');
|
||||
const mh = parseTaskMustHaves(content);
|
||||
assert.deepStrictEqual(mh, ['File exists', 'Tests pass']);
|
||||
});
|
||||
|
||||
test('returns empty array when section absent', () => {
|
||||
assert.deepStrictEqual(parseTaskMustHaves('# T01: Title\n\nNo sections.'), []);
|
||||
});
|
||||
});
|
||||
|
||||
describe('slugify', () => {
|
||||
test('lowercases and replaces non-alphanumeric with hyphens', () => {
|
||||
assert.strictEqual(slugify('Auth System'), 'auth-system');
|
||||
assert.strictEqual(slugify('My Feature (v2)'), 'my-feature-v2');
|
||||
});
|
||||
|
||||
test('strips leading/trailing hyphens', () => {
|
||||
assert.strictEqual(slugify(' spaces '), 'spaces');
|
||||
});
|
||||
});
|
||||
|
||||
describe('zeroPad', () => {
|
||||
test('pads to 2 digits by default', () => {
|
||||
assert.strictEqual(zeroPad(1), '01');
|
||||
assert.strictEqual(zeroPad(12), '12');
|
||||
});
|
||||
});
|
||||
|
||||
// ─── Integration Tests ─────────────────────────────────────────────────────
|
||||
|
||||
describe('parseGsd2', () => {
|
||||
let tmpDir;
|
||||
beforeEach(() => { tmpDir = createTempDir('gsd2-parse-'); });
|
||||
afterEach(() => { cleanup(tmpDir); });
|
||||
|
||||
test('reads project and requirements passthroughs', () => {
|
||||
const gsdDir = makeGsd2Project(tmpDir);
|
||||
const data = parseGsd2(gsdDir);
|
||||
assert.ok(data.projectContent.includes('My Project'));
|
||||
assert.ok(data.requirements.includes('R001'));
|
||||
});
|
||||
|
||||
test('parses milestone with slices', () => {
|
||||
const gsdDir = makeGsd2Project(tmpDir);
|
||||
const data = parseGsd2(gsdDir);
|
||||
assert.strictEqual(data.milestones.length, 1);
|
||||
assert.strictEqual(data.milestones[0].id, 'M001');
|
||||
assert.strictEqual(data.milestones[0].title, 'Foundation');
|
||||
assert.strictEqual(data.milestones[0].slices.length, 2);
|
||||
});
|
||||
|
||||
test('marks S01 as done, S02 as not done', () => {
|
||||
const gsdDir = makeGsd2Project(tmpDir);
|
||||
const data = parseGsd2(gsdDir);
|
||||
const [s01, s02] = data.milestones[0].slices;
|
||||
assert.strictEqual(s01.done, true);
|
||||
assert.strictEqual(s02.done, false);
|
||||
});
|
||||
|
||||
test('reads research for completed slice', () => {
|
||||
const gsdDir = makeGsd2Project(tmpDir);
|
||||
const data = parseGsd2(gsdDir);
|
||||
assert.ok(data.milestones[0].slices[0].research.includes('Some research'));
|
||||
});
|
||||
|
||||
test('reads tasks from tasks/ directory', () => {
|
||||
const gsdDir = makeGsd2Project(tmpDir);
|
||||
const data = parseGsd2(gsdDir);
|
||||
const tasks = data.milestones[0].slices[0].tasks;
|
||||
assert.strictEqual(tasks.length, 1);
|
||||
assert.strictEqual(tasks[0].id, 'T01');
|
||||
assert.strictEqual(tasks[0].title, 'Init Project');
|
||||
assert.strictEqual(tasks[0].done, true);
|
||||
});
|
||||
|
||||
test('parses task must-haves', () => {
|
||||
const gsdDir = makeGsd2Project(tmpDir);
|
||||
const data = parseGsd2(gsdDir);
|
||||
const mh = data.milestones[0].slices[0].tasks[0].mustHaves;
|
||||
assert.deepStrictEqual(mh, ['package.json exists', 'tsconfig.json exists']);
|
||||
});
|
||||
|
||||
test('handles missing .gsd/milestones/ gracefully', () => {
|
||||
const gsdDir = path.join(tmpDir, '.gsd');
|
||||
fs.mkdirSync(gsdDir, { recursive: true });
|
||||
fs.writeFileSync(path.join(gsdDir, 'PROJECT.md'), '# Empty\n');
|
||||
const data = parseGsd2(gsdDir);
|
||||
assert.strictEqual(data.milestones.length, 0);
|
||||
});
|
||||
|
||||
test('slice with no directory has empty tasks list', () => {
|
||||
const gsdDir = makeGsd2Project(tmpDir);
|
||||
const data = parseGsd2(gsdDir);
|
||||
// S02 has no slice directory in the default fixture
|
||||
const s02 = data.milestones[0].slices[1];
|
||||
assert.strictEqual(s02.tasks.length, 0);
|
||||
assert.strictEqual(s02.research, null);
|
||||
});
|
||||
});
|
||||
|
||||
describe('buildPlanningArtifacts', () => {
  let tmpDir;
  beforeEach(() => { tmpDir = createTempDir('gsd2-artifacts-'); });
  afterEach(() => { cleanup(tmpDir); });

  test('produces PROJECT.md, REQUIREMENTS.md, ROADMAP.md, STATE.md, config.json', () => {
    const gsdDir = makeGsd2Project(tmpDir);
    const data = parseGsd2(gsdDir);
    const artifacts = buildPlanningArtifacts(data);
    assert.ok(artifacts.has('PROJECT.md'));
    assert.ok(artifacts.has('REQUIREMENTS.md'));
    assert.ok(artifacts.has('ROADMAP.md'));
    assert.ok(artifacts.has('STATE.md'));
    assert.ok(artifacts.has('config.json'));
  });

  test('S01 (done) maps to phase 01 with PLAN and SUMMARY', () => {
    const gsdDir = makeGsd2Project(tmpDir);
    const data = parseGsd2(gsdDir);
    const artifacts = buildPlanningArtifacts(data);
    assert.ok(artifacts.has('phases/01-setup/01-CONTEXT.md'));
    assert.ok(artifacts.has('phases/01-setup/01-RESEARCH.md'));
    assert.ok(artifacts.has('phases/01-setup/01-01-PLAN.md'));
    assert.ok(artifacts.has('phases/01-setup/01-01-SUMMARY.md'));
  });

  test('S02 (pending) maps to phase 02 with only CONTEXT and PLAN', () => {
    const gsdDir = makeGsd2Project(tmpDir, { withS02Dir: true });
    const data = parseGsd2(gsdDir);
    const artifacts = buildPlanningArtifacts(data);
    assert.ok(artifacts.has('phases/02-auth-system/02-CONTEXT.md'));
    assert.ok(artifacts.has('phases/02-auth-system/02-01-PLAN.md'));
    assert.ok(!artifacts.has('phases/02-auth-system/02-01-SUMMARY.md'), 'no summary for pending task');
  });

  test('ROADMAP.md marks S01 done, S02 pending', () => {
    const gsdDir = makeGsd2Project(tmpDir);
    const data = parseGsd2(gsdDir);
    const artifacts = buildPlanningArtifacts(data);
    const roadmap = artifacts.get('ROADMAP.md');
    assert.ok(roadmap.includes('[x]'));
    assert.ok(roadmap.includes('[ ]'));
  });

  test('PLAN.md includes frontmatter with phase and plan keys', () => {
    const gsdDir = makeGsd2Project(tmpDir);
    const data = parseGsd2(gsdDir);
    const artifacts = buildPlanningArtifacts(data);
    const plan = artifacts.get('phases/01-setup/01-01-PLAN.md');
    assert.ok(plan.includes('phase: "01"'));
    assert.ok(plan.includes('plan: "01"'));
    assert.ok(plan.includes('type: "implementation"'));
  });

  test('SUMMARY.md strips GSD-2 frontmatter and adds v1 frontmatter', () => {
    const gsdDir = makeGsd2Project(tmpDir);
    const data = parseGsd2(gsdDir);
    const artifacts = buildPlanningArtifacts(data);
    const summary = artifacts.get('phases/01-setup/01-01-SUMMARY.md');
    assert.ok(summary.includes('phase: "01"'));
    assert.ok(summary.includes('plan: "01"'));
    // GSD-2 frontmatter field should not appear
    assert.ok(!summary.includes('completed_at:'));
    // Body content should be preserved
    assert.ok(summary.includes('Init Project'));
  });

  test('config.json is valid JSON', () => {
    const gsdDir = makeGsd2Project(tmpDir);
    const data = parseGsd2(gsdDir);
    const artifacts = buildPlanningArtifacts(data);
    assert.doesNotThrow(() => JSON.parse(artifacts.get('config.json')));
  });

  test('multi-milestone: slices numbered sequentially across milestones', () => {
    const gsdDir = makeTwoMilestoneProject(tmpDir);
    const data = parseGsd2(gsdDir);
    const artifacts = buildPlanningArtifacts(data);
    // M001/S01 → phase 01, M001/S02 → phase 02, M002/S01 → phase 03
    assert.ok(artifacts.has('phases/01-core/01-CONTEXT.md'));
    assert.ok(artifacts.has('phases/02-api/02-CONTEXT.md'));
    assert.ok(artifacts.has('phases/03-dashboard/03-CONTEXT.md'));
  });
});

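The multi-milestone numbering asserted above amounts to a simple flattening pass over the Milestone→Slice hierarchy. This is a hypothetical sketch, not the project's actual implementation; the name `flattenToPhases` and the exact record shape are assumptions drawn from the fixtures in these tests.

```javascript
// Hypothetical sketch: flatten Milestone→Slice into sequential phase numbers.
function flattenToPhases(milestones) {
  const phaseMap = [];
  let phaseNum = 0;
  for (const m of milestones) {
    for (const slice of m.slices) {
      phaseNum += 1; // numbering continues across milestone boundaries
      phaseMap.push({ milestoneId: m.id, milestoneTitle: m.title, slice, phaseNum });
    }
  }
  return phaseMap;
}

const phases = flattenToPhases([
  { id: 'M001', title: 'Alpha', slices: [{ title: 'Core' }, { title: 'API' }] },
  { id: 'M002', title: 'Beta', slices: [{ title: 'Dashboard' }] },
]);
// M001/S01 → phase 1, M001/S02 → phase 2, M002/S01 → phase 3
```

The key design point the tests pin down is that phase numbers do not reset per milestone.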
describe('buildRoadmapMd', () => {
  test('produces milestone sections with checked/unchecked phases', () => {
    const milestones = [{ id: 'M001', title: 'Alpha', slices: [] }];
    const phaseMap = [
      { milestoneId: 'M001', milestoneTitle: 'Alpha', slice: { done: true, title: 'Core' }, phaseNum: 1 },
      { milestoneId: 'M001', milestoneTitle: 'Alpha', slice: { done: false, title: 'API' }, phaseNum: 2 },
    ];
    const roadmap = buildRoadmapMd(milestones, phaseMap);
    assert.ok(roadmap.includes('## M001: Alpha'));
    assert.ok(roadmap.includes('[x]'));
    assert.ok(roadmap.includes('[ ]'));
    assert.ok(roadmap.includes('Phase 01: core'));
    assert.ok(roadmap.includes('Phase 02: api'));
  });
});

describe('buildStateMd', () => {
  test('sets current phase to first incomplete slice', () => {
    const phaseMap = [
      { milestoneId: 'M001', milestoneTitle: 'Alpha', slice: { done: true, title: 'Core' }, phaseNum: 1 },
      { milestoneId: 'M001', milestoneTitle: 'Alpha', slice: { done: false, title: 'API Layer' }, phaseNum: 2 },
    ];
    const state = buildStateMd(phaseMap);
    assert.ok(state.includes('Phase: 02'));
    assert.ok(state.includes('api-layer'));
    assert.ok(state.includes('Ready to plan'));
  });

  test('reports all complete when all slices done', () => {
    const phaseMap = [
      { milestoneId: 'M001', milestoneTitle: 'Alpha', slice: { done: true, title: 'Core' }, phaseNum: 1 },
    ];
    const state = buildStateMd(phaseMap);
    assert.ok(state.includes('All phases complete'));
  });
});

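The STATE.md behavior these tests assert boils down to picking the first incomplete slice and slugifying its title. A minimal sketch, assuming illustrative names (`currentPhaseLine`, `slugify`) rather than the repo's real helpers:

```javascript
// Hypothetical sketch of the "first incomplete slice wins" selection.
function slugify(title) {
  return title.toLowerCase().replace(/\s+/g, '-');
}

function currentPhaseLine(phaseMap) {
  const next = phaseMap.find(p => !p.slice.done);
  if (!next) return 'All phases complete';
  const num = String(next.phaseNum).padStart(2, '0');
  return `Phase: ${num} (${slugify(next.slice.title)}) - Ready to plan`;
}

const line = currentPhaseLine([
  { slice: { done: true, title: 'Core' }, phaseNum: 1 },
  { slice: { done: false, title: 'API Layer' }, phaseNum: 2 },
]);
// → 'Phase: 02 (api-layer) - Ready to plan'
```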
// ─── CLI Integration Tests ──────────────────────────────────────────────────

describe('gsd-tools from-gsd2 CLI', () => {
  let tmpDir;
  beforeEach(() => { tmpDir = createTempDir('gsd2-cli-'); });
  afterEach(() => { cleanup(tmpDir); });

  test('--dry-run returns preview without writing files', () => {
    makeGsd2Project(tmpDir);
    const result = runGsdTools(['from-gsd2', '--dry-run', '--raw'], tmpDir);
    assert.ok(result.success, result.error);
    const parsed = JSON.parse(result.output);
    assert.strictEqual(parsed.dryRun, true);
    assert.ok(parsed.preview.includes('PROJECT.md'));
    assert.ok(!fs.existsSync(path.join(tmpDir, '.planning')), 'no files written in dry-run');
  });

  test('writes .planning/ directory with correct structure', () => {
    makeGsd2Project(tmpDir);
    const result = runGsdTools(['from-gsd2', '--raw'], tmpDir);
    assert.ok(result.success, result.error);
    const parsed = JSON.parse(result.output);
    assert.strictEqual(parsed.success, true);
    assert.ok(parsed.filesWritten > 0);
    assert.ok(fs.existsSync(path.join(tmpDir, '.planning', 'ROADMAP.md')));
    assert.ok(fs.existsSync(path.join(tmpDir, '.planning', 'STATE.md')));
    assert.ok(fs.existsSync(path.join(tmpDir, '.planning', 'PROJECT.md')));
    assert.ok(fs.existsSync(path.join(tmpDir, '.planning', 'phases', '01-setup', '01-01-PLAN.md')));
  });

  test('errors when no .gsd/ directory present', () => {
    const result = runGsdTools(['from-gsd2', '--raw'], tmpDir);
    const parsed = JSON.parse(result.output);
    assert.strictEqual(parsed.success, false);
    assert.ok(parsed.error.includes('No .gsd/'));
  });

  test('errors when .planning/ already exists without --force', () => {
    makeGsd2Project(tmpDir);
    fs.mkdirSync(path.join(tmpDir, '.planning'), { recursive: true });
    const result = runGsdTools(['from-gsd2', '--raw'], tmpDir);
    const parsed = JSON.parse(result.output);
    assert.strictEqual(parsed.success, false);
    assert.ok(parsed.error.includes('already exists'));
  });

  test('--force overwrites existing .planning/', () => {
    makeGsd2Project(tmpDir);
    fs.mkdirSync(path.join(tmpDir, '.planning'), { recursive: true });
    fs.writeFileSync(path.join(tmpDir, '.planning', 'OLD.md'), 'old content');
    const result = runGsdTools(['from-gsd2', '--force', '--raw'], tmpDir);
    const parsed = JSON.parse(result.output);
    assert.strictEqual(parsed.success, true);
    assert.ok(fs.existsSync(path.join(tmpDir, '.planning', 'ROADMAP.md')));
  });

  test('--path resolves target directory', () => {
    const projectDir = path.join(tmpDir, 'myproject');
    fs.mkdirSync(projectDir, { recursive: true });
    makeGsd2Project(projectDir);
    // Run from tmpDir but point at projectDir
    const result = runGsdTools(['from-gsd2', '--path', projectDir, '--dry-run', '--raw'], tmpDir);
    assert.ok(result.success, result.error);
    const parsed = JSON.parse(result.output);
    assert.strictEqual(parsed.dryRun, true);
    assert.ok(parsed.preview.includes('PROJECT.md'));
  });

  test('completion state: S01 done → [x] in ROADMAP.md', () => {
    makeGsd2Project(tmpDir);
    runGsdTools(['from-gsd2', '--raw'], tmpDir);
    const roadmap = fs.readFileSync(path.join(tmpDir, '.planning', 'ROADMAP.md'), 'utf8');
    assert.ok(roadmap.includes('[x]'));
    // S02 is pending
    assert.ok(roadmap.includes('[ ]'));
  });

  test('SUMMARY.md written for completed task, not for pending', () => {
    makeGsd2Project(tmpDir, { withS02Dir: true });
    runGsdTools(['from-gsd2', '--raw'], tmpDir);
    // S01/T01 is done → SUMMARY exists
    assert.ok(fs.existsSync(path.join(tmpDir, '.planning', 'phases', '01-setup', '01-01-SUMMARY.md')));
    // S02/T01 is pending → no SUMMARY
    assert.ok(!fs.existsSync(path.join(tmpDir, '.planning', 'phases', '02-auth-system', '02-01-SUMMARY.md')));
  });
});
@@ -29,10 +29,11 @@ const runtimeMap = {
   '9': 'gemini',
   '10': 'kilo',
   '11': 'opencode',
-  '12': 'trae',
-  '13': 'windsurf'
+  '12': 'qwen',
+  '13': 'trae',
+  '14': 'windsurf'
 };
-const allRuntimes = ['claude', 'antigravity', 'augment', 'cline', 'codebuddy', 'codex', 'copilot', 'cursor', 'gemini', 'kilo', 'opencode', 'trae', 'windsurf'];
+const allRuntimes = ['claude', 'antigravity', 'augment', 'cline', 'codebuddy', 'codex', 'copilot', 'cursor', 'gemini', 'kilo', 'opencode', 'qwen', 'trae', 'windsurf'];
 
 /**
  * Simulate the parsing logic from promptRuntime without requiring readline.
@@ -41,7 +42,7 @@ const allRuntimes = ['claude', 'antigravity', 'augment', 'cline', 'codebuddy', '
 function parseRuntimeInput(input) {
   input = input.trim() || '1';
 
-  if (input === '14') {
+  if (input === '15') {
     return allRuntimes;
   }
 
@@ -89,16 +90,20 @@ describe('multi-runtime selection parsing', () => {
     assert.deepStrictEqual(parseRuntimeInput('11'), ['opencode']);
   });
 
+  test('single choice for qwen', () => {
+    assert.deepStrictEqual(parseRuntimeInput('12'), ['qwen']);
+  });
+
   test('single choice for trae', () => {
-    assert.deepStrictEqual(parseRuntimeInput('12'), ['trae']);
+    assert.deepStrictEqual(parseRuntimeInput('13'), ['trae']);
   });
 
   test('single choice for windsurf', () => {
-    assert.deepStrictEqual(parseRuntimeInput('13'), ['windsurf']);
+    assert.deepStrictEqual(parseRuntimeInput('14'), ['windsurf']);
   });
 
-  test('choice 14 returns all runtimes', () => {
-    assert.deepStrictEqual(parseRuntimeInput('14'), allRuntimes);
+  test('choice 15 returns all runtimes', () => {
+    assert.deepStrictEqual(parseRuntimeInput('15'), allRuntimes);
   });
 
   test('empty input defaults to claude', () => {
@@ -107,13 +112,13 @@ describe('multi-runtime selection parsing', () => {
   });
 
   test('invalid choices are ignored, falls back to claude if all invalid', () => {
-    assert.deepStrictEqual(parseRuntimeInput('15'), ['claude']);
+    assert.deepStrictEqual(parseRuntimeInput('16'), ['claude']);
     assert.deepStrictEqual(parseRuntimeInput('0'), ['claude']);
     assert.deepStrictEqual(parseRuntimeInput('abc'), ['claude']);
   });
 
   test('invalid choices mixed with valid are filtered out', () => {
-    assert.deepStrictEqual(parseRuntimeInput('1,15,7'), ['claude', 'copilot']);
+    assert.deepStrictEqual(parseRuntimeInput('1,16,7'), ['claude', 'copilot']);
     assert.deepStrictEqual(parseRuntimeInput('abc 3 xyz'), ['augment']);
   });
 
@@ -129,7 +134,7 @@ describe('multi-runtime selection parsing', () => {
 });
 
 describe('install.js source contains multi-select support', () => {
-  test('runtimeMap is defined with all 13 runtimes', () => {
+  test('runtimeMap is defined with all 14 runtimes', () => {
     for (const [key, name] of Object.entries(runtimeMap)) {
       assert.ok(
         installSrc.includes(`'${key}': '${name}'`),
@@ -146,21 +151,25 @@ describe('install.js source contains multi-select support', () => {
     }
   });
 
-  test('all shortcut uses option 14', () => {
+  test('all shortcut uses option 15', () => {
     assert.ok(
-      installSrc.includes("if (input === '14')"),
-      'all shortcut uses option 14'
+      installSrc.includes("if (input === '15')"),
+      'all shortcut uses option 15'
     );
   });
 
-  test('prompt lists Trae as option 12 and All as option 14', () => {
+  test('prompt lists Qwen Code as option 12, Trae as option 13 and All as option 15', () => {
     assert.ok(
-      installSrc.includes('12${reset}) Trae'),
-      'prompt lists Trae as option 12'
+      installSrc.includes('12${reset}) Qwen Code'),
+      'prompt lists Qwen Code as option 12'
    );
    assert.ok(
-      installSrc.includes('14${reset}) All'),
-      'prompt lists All as option 14'
+      installSrc.includes('13${reset}) Trae'),
+      'prompt lists Trae as option 13'
    );
+    assert.ok(
+      installSrc.includes('15${reset}) All'),
+      'prompt lists All as option 15'
+    );
   });

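Pieced together from the diff above, the selection parser works roughly as follows. The full numeric map is inferred from the `allRuntimes` ordering and the numbered test cases (only entries 9 through 14 appear verbatim in the hunk), so treat this as a reconstruction sketch, not the authoritative install.js source:

```javascript
// Reconstructed sketch of the runtime-selection parser after the Qwen addition.
// Map keys 1-8 are inferred from allRuntimes ordering; '15' is the "all" shortcut.
const runtimeMap = {
  '1': 'claude', '2': 'antigravity', '3': 'augment', '4': 'cline', '5': 'codebuddy',
  '6': 'codex', '7': 'copilot', '8': 'cursor', '9': 'gemini', '10': 'kilo',
  '11': 'opencode', '12': 'qwen', '13': 'trae', '14': 'windsurf',
};
const allRuntimes = Object.values(runtimeMap);

function parseRuntimeInput(input) {
  input = input.trim() || '1'; // empty input defaults to choice 1 (claude)
  if (input === '15') return allRuntimes;
  // accept comma- or space-separated choices, drop anything unrecognized
  const chosen = input.split(/[\s,]+/).map(tok => runtimeMap[tok]).filter(Boolean);
  return chosen.length > 0 ? chosen : ['claude'];
}
```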
@@ -1,11 +1,11 @@
 /**
- * GSD Tools Tests - /gsd-next safety gates and consecutive-call guard
+ * GSD Tools Tests - /gsd-next safety gates and prior-phase completeness scan
  *
  * Validates that the next workflow includes three hard-stop safety gates
- * (checkpoint, error state, verification), a consecutive-call budget guard,
- * and a --force bypass flag.
+ * (checkpoint, error state, verification), a prior-phase completeness scan
+ * replacing the old consecutive-call counter, and a --force bypass flag.
  *
- * Closes: #1732
+ * Closes: #1732, #2089
  */
 
 const { test, describe } = require('node:test');
@@ -13,7 +13,7 @@ const assert = require('node:assert/strict');
 const fs = require('fs');
 const path = require('path');
 
-describe('/gsd-next safety gates (#1732)', () => {
+describe('/gsd-next safety gates (#1732, #2089)', () => {
   const workflowPath = path.join(__dirname, '..', 'get-shit-done', 'workflows', 'next.md');
   const commandPath = path.join(__dirname, '..', 'commands', 'gsd', 'next.md');
 
@@ -79,19 +79,72 @@ describe('/gsd-next safety gates (#1732)', () => {
     );
   });
 
-  test('consecutive-call budget guard', () => {
+  test('prior-phase completeness scan replaces consecutive-call counter', () => {
     const content = fs.readFileSync(workflowPath, 'utf8');
     assert.ok(
-      content.includes('.next-call-count'),
-      'workflow should reference .next-call-count counter file'
+      content.includes('Prior-phase completeness scan'),
+      'workflow should have a prior-phase completeness scan section'
     );
     assert.ok(
-      content.includes('6'),
-      'consecutive guard should trigger at count >= 6'
+      !content.includes('.next-call-count'),
+      'workflow must not reference the old .next-call-count counter file'
     );
     assert.ok(
-      content.includes('consecutively'),
-      'guard should mention consecutive calls'
+      !content.includes('consecutively'),
+      'workflow must not reference consecutive call counting'
     );
   });
 
+  test('completeness scan checks plans without summaries', () => {
+    const content = fs.readFileSync(workflowPath, 'utf8');
+    assert.ok(
+      content.includes('Plans without summaries') || content.includes('no SUMMARY.md'),
+      'completeness scan should detect plans that ran without producing summaries'
+    );
+  });
+
+  test('completeness scan checks verification failures in prior phases', () => {
+    const content = fs.readFileSync(workflowPath, 'utf8');
+    assert.ok(
+      content.includes('Verification failures not overridden') ||
+      content.includes('VERIFICATION.md with `FAIL`'),
+      'completeness scan should detect unoverridden FAIL items in prior phase VERIFICATION.md'
+    );
+  });
+
+  test('completeness scan checks CONTEXT.md without plans', () => {
+    const content = fs.readFileSync(workflowPath, 'utf8');
+    assert.ok(
+      content.includes('CONTEXT.md without plans') ||
+      content.includes('CONTEXT.md but no PLAN.md'),
+      'completeness scan should detect phases with discussion but no planning'
+    );
+  });
+
+  test('completeness scan offers Continue, Stop, and Force options', () => {
+    const content = fs.readFileSync(workflowPath, 'utf8');
+    assert.ok(content.includes('[C]'), 'completeness scan should offer [C] Continue option');
+    assert.ok(content.includes('[S]'), 'completeness scan should offer [S] Stop option');
+    assert.ok(content.includes('[F]'), 'completeness scan should offer [F] Force option');
+  });
+
+  test('deferral path creates backlog entry using 999.x scheme', () => {
+    const content = fs.readFileSync(workflowPath, 'utf8');
+    assert.ok(
+      content.includes('999.'),
+      'deferral should use the 999.x backlog numbering scheme'
+    );
+    assert.ok(
+      content.includes('Backlog') || content.includes('BACKLOG'),
+      'deferral should write to the Backlog section of ROADMAP.md'
+    );
+  });
+
+  test('clean prior phases route silently with no interruption', () => {
+    const content = fs.readFileSync(workflowPath, 'utf8');
+    assert.ok(
+      content.includes('silently') || content.includes('no interruption'),
+      'workflow should route without interruption when prior phases are clean'
+    );
+  });
+
@@ -107,7 +160,7 @@ describe('/gsd-next safety gates (#1732)', () => {
     );
   });
 
-  test('command definition documents --force flag', () => {
+  test('command definition documents --force flag and completeness scan', () => {
     const content = fs.readFileSync(commandPath, 'utf8');
     assert.ok(
       content.includes('--force'),
@@ -117,6 +170,10 @@
       content.includes('bypass safety gates'),
       'command definition should explain that --force bypasses safety gates'
     );
+    assert.ok(
+      content.includes('completeness'),
+      'command definition should document the prior-phase completeness scan'
+    );
   });
 
   test('gates exit on first hit', () => {

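The "plans without summaries" condition the new tests describe can be sketched as a filename scan over a phase directory listing. This is a hypothetical helper for illustration; the actual workflow is prose instructions in next.md, not code:

```javascript
// Hypothetical sketch: flag PLAN.md files that never produced a SUMMARY.md.
function plansWithoutSummaries(phaseFiles) {
  return phaseFiles
    .filter(f => f.endsWith('-PLAN.md'))
    .filter(plan => !phaseFiles.includes(plan.replace(/-PLAN\.md$/, '-SUMMARY.md')));
}

const flagged = plansWithoutSummaries([
  '01-01-PLAN.md',
  '01-01-SUMMARY.md',
  '02-01-PLAN.md', // ran without producing a summary
]);
// → ['02-01-PLAN.md']
```

A non-empty result is what would trigger the [C]ontinue / [S]top / [F]orce prompt rather than silent routing.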
177
tests/phase-researcher-app-aware.test.cjs
Normal file
@@ -0,0 +1,177 @@
/**
 * Phase Researcher Application-Aware Tests (#1988)
 *
 * Validates that gsd-phase-researcher maps capabilities to architectural
 * tiers before diving into framework-specific research. Also validates
 * that gsd-planner and gsd-plan-checker consume the Architectural
 * Responsibility Map downstream.
 */

const { test, describe } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');

const AGENTS_DIR = path.join(__dirname, '..', 'agents');
const TEMPLATES_DIR = path.join(__dirname, '..', 'get-shit-done', 'templates');

// ─── Phase Researcher: Architectural Responsibility Mapping ─────────────────

describe('phase-researcher: Architectural Responsibility Mapping', () => {
  const researcherPath = path.join(AGENTS_DIR, 'gsd-phase-researcher.md');
  const content = fs.readFileSync(researcherPath, 'utf-8');

  test('contains Architectural Responsibility Mapping step', () => {
    assert.ok(
      content.includes('Architectural Responsibility Map'),
      'gsd-phase-researcher.md must contain "Architectural Responsibility Map"'
    );
  });

  test('Architectural Responsibility Mapping step comes after Step 1 and before Step 2', () => {
    const step1Pos = content.indexOf('## Step 1:');
    // Look for the step heading specifically (not the output format section)
    const stepARMPos = content.indexOf('## Step 1.5:');
    const step2Pos = content.indexOf('## Step 2:');

    assert.ok(step1Pos !== -1, 'Step 1 must exist');
    assert.ok(stepARMPos !== -1, 'Step 1.5 Architectural Responsibility Mapping step must exist');
    assert.ok(step2Pos !== -1, 'Step 2 must exist');

    assert.ok(
      stepARMPos > step1Pos,
      'Step 1.5 (Architectural Responsibility Mapping) must come after Step 1'
    );
    assert.ok(
      stepARMPos < step2Pos,
      'Step 1.5 (Architectural Responsibility Mapping) must come before Step 2'
    );
  });

  test('step is a pure reasoning step with no tool calls', () => {
    // Extract the ARM section content (between the ARM heading and the next ## Step heading)
    const armHeadingMatch = content.match(/## Step 1\.5[^\n]*Architectural Responsibility Map/);
    assert.ok(armHeadingMatch, 'Must have a Step 1.5 heading for Architectural Responsibility Mapping');

    const armStart = content.indexOf(armHeadingMatch[0]);
    const nextStepMatch = content.indexOf('## Step 2:', armStart);
    const armSection = content.substring(armStart, nextStepMatch);

    // Should not contain tool invocation patterns
    const toolPatterns = [
      /```bash/,
      /node "\$HOME/,
      /gsd-tools\.cjs/,
      /WebSearch/,
      /Context7/,
      /mcp__/,
    ];

    for (const pattern of toolPatterns) {
      assert.ok(
        !pattern.test(armSection),
        `Architectural Responsibility Mapping step must be pure reasoning (no tool calls), but found: ${pattern}`
      );
    }
  });

  test('mentions standard architectural tiers', () => {
    const armStart = content.indexOf('Architectural Responsibility Map');
    const nextStep = content.indexOf('## Step 2:', armStart);
    const armSection = content.substring(armStart, nextStep);

    // Should reference standard tiers
    const tiers = ['browser', 'frontend', 'API', 'database'];
    const foundTiers = tiers.filter(tier =>
      armSection.toLowerCase().includes(tier.toLowerCase())
    );

    assert.ok(
      foundTiers.length >= 3,
      `Must mention at least 3 standard architectural tiers, found: ${foundTiers.join(', ')}`
    );
  });

  test('specifies output format as a table in RESEARCH.md', () => {
    const armStart = content.indexOf('Architectural Responsibility Map');
    const nextStep = content.indexOf('## Step 2:', armStart);
    const armSection = content.substring(armStart, nextStep);

    assert.ok(
      armSection.includes('|') && armSection.includes('Capability'),
      'ARM step must specify a table output format with Capability column'
    );
  });
});

// ─── Planner: Architectural Responsibility Map Sanity Check ─────────────────

describe('planner: Architectural Responsibility Map sanity check', () => {
  const plannerPath = path.join(AGENTS_DIR, 'gsd-planner.md');
  const content = fs.readFileSync(plannerPath, 'utf-8');

  test('references Architectural Responsibility Map', () => {
    assert.ok(
      content.includes('Architectural Responsibility Map'),
      'gsd-planner.md must reference the Architectural Responsibility Map'
    );
  });

  test('includes sanity check against the map', () => {
    // Must mention checking/verifying plan tasks against the responsibility map
    assert.ok(
      content.includes('sanity check') || content.includes('sanity-check'),
      'gsd-planner.md must include a sanity check against the Architectural Responsibility Map'
    );
  });
});

// ─── Plan Checker: Architectural Tier Verification Dimension ────────────────

describe('plan-checker: Architectural Tier verification dimension', () => {
  const checkerPath = path.join(AGENTS_DIR, 'gsd-plan-checker.md');
  const content = fs.readFileSync(checkerPath, 'utf-8');

  test('has verification dimension for architectural tier', () => {
    assert.ok(
      content.includes('Architectural Responsibility Map') ||
      content.includes('Architectural Tier'),
      'gsd-plan-checker.md must have a verification dimension for architectural tier mapping'
    );
  });

  test('verification dimension checks plans against the map', () => {
    // Should have a dimension that references tier/responsibility checking
    assert.ok(
      content.includes('tier owner') || content.includes('tier mismatch') || content.includes('responsibility map'),
      'plan-checker verification dimension must check for tier mismatches against the responsibility map'
    );
  });
});

// ─── Research Template: Architectural Responsibility Map Section ─────────────

describe('research template: Architectural Responsibility Map section', () => {
  const templatePath = path.join(TEMPLATES_DIR, 'research.md');
  const content = fs.readFileSync(templatePath, 'utf-8');

  test('mentions Architectural Responsibility Map section', () => {
    assert.ok(
      content.includes('Architectural Responsibility Map'),
      'Research template must include an Architectural Responsibility Map section'
    );
  });

  test('template includes tier table format', () => {
    const armStart = content.indexOf('Architectural Responsibility Map');
    assert.ok(armStart !== -1, 'ARM section must exist');

    const sectionEnd = content.indexOf('##', armStart + 10);
    const section = content.substring(armStart, sectionEnd !== -1 ? sectionEnd : armStart + 500);

    assert.ok(
      section.includes('|') && (section.includes('Tier') || section.includes('tier')),
      'Research template ARM section must include a table format with Tier column'
    );
  });
});
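Several tests in the file above repeat one pattern: slice out the text between a heading and the next `## Step` heading with `indexOf`/`substring`. A self-contained sketch of that pattern (function name and sample document are illustrative only):

```javascript
// Hypothetical helper mirroring the section-extraction pattern used in the tests:
// grab everything from startMarker up to (but excluding) the next nextMarker.
function extractSection(content, startMarker, nextMarker) {
  const start = content.indexOf(startMarker);
  if (start === -1) return null;
  const end = content.indexOf(nextMarker, start + startMarker.length);
  return content.substring(start, end === -1 ? content.length : end);
}

const doc = [
  '## Step 1: Scan',
  'context gathering...',
  '## Step 1.5: Architectural Responsibility Map',
  '| Capability | Tier |',
  '## Step 2: Research',
  'framework research...',
].join('\n');

const arm = extractSection(doc, '## Step 1.5:', '## Step 2:');
```

Searching for `nextMarker` from `start` onward (not from 0) is what keeps the slice anchored to the right section when the marker text also appears earlier in the document.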
170
tests/plan-bounce.test.cjs
Normal file
@@ -0,0 +1,170 @@
/**
 * Plan Bounce Tests
 *
 * Validates plan bounce hook feature (step 12.5 in plan-phase):
 * - Config key registration (workflow.plan_bounce, workflow.plan_bounce_script, workflow.plan_bounce_passes)
 * - Config template defaults
 * - Workflow step 12.5 content in plan-phase.md
 * - Flag handling (--bounce, --skip-bounce)
 * - Backup/restore pattern (pre-bounce.md)
 * - Frontmatter integrity validation
 * - Re-runs checker on bounced plans
 */

const { test, describe } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');

const GSD_ROOT = path.join(__dirname, '..', 'get-shit-done');
const CONFIG_CJS_PATH = path.join(GSD_ROOT, 'bin', 'lib', 'config.cjs');
const CONFIG_TEMPLATE_PATH = path.join(GSD_ROOT, 'templates', 'config.json');
const PLAN_PHASE_PATH = path.join(GSD_ROOT, 'workflows', 'plan-phase.md');

describe('Plan Bounce: config keys', () => {
  test('workflow.plan_bounce is in VALID_CONFIG_KEYS', () => {
    const content = fs.readFileSync(CONFIG_CJS_PATH, 'utf-8');
    assert.ok(
      content.includes("'workflow.plan_bounce'"),
      'VALID_CONFIG_KEYS should contain workflow.plan_bounce'
    );
  });

  test('workflow.plan_bounce_script is in VALID_CONFIG_KEYS', () => {
    const content = fs.readFileSync(CONFIG_CJS_PATH, 'utf-8');
    assert.ok(
      content.includes("'workflow.plan_bounce_script'"),
      'VALID_CONFIG_KEYS should contain workflow.plan_bounce_script'
    );
  });

  test('workflow.plan_bounce_passes is in VALID_CONFIG_KEYS', () => {
    const content = fs.readFileSync(CONFIG_CJS_PATH, 'utf-8');
    assert.ok(
      content.includes("'workflow.plan_bounce_passes'"),
      'VALID_CONFIG_KEYS should contain workflow.plan_bounce_passes'
    );
  });
});

describe('Plan Bounce: config template defaults', () => {
  test('config template has plan_bounce default (false)', () => {
    const template = JSON.parse(fs.readFileSync(CONFIG_TEMPLATE_PATH, 'utf-8'));
    assert.strictEqual(
      template.workflow.plan_bounce,
      false,
      'config template workflow.plan_bounce should default to false'
    );
  });

  test('config template has plan_bounce_script default (null)', () => {
    const template = JSON.parse(fs.readFileSync(CONFIG_TEMPLATE_PATH, 'utf-8'));
    assert.strictEqual(
      template.workflow.plan_bounce_script,
      null,
      'config template workflow.plan_bounce_script should default to null'
    );
  });

  test('config template has plan_bounce_passes default (2)', () => {
    const template = JSON.parse(fs.readFileSync(CONFIG_TEMPLATE_PATH, 'utf-8'));
    assert.strictEqual(
      template.workflow.plan_bounce_passes,
      2,
      'config template workflow.plan_bounce_passes should default to 2'
    );
  });
});

describe('Plan Bounce: plan-phase.md step 12.5', () => {
  let content;

  test('plan-phase.md contains step 12.5', () => {
    content = fs.readFileSync(PLAN_PHASE_PATH, 'utf-8');
    assert.ok(
      content.includes('## 12.5'),
      'plan-phase.md should contain step 12.5'
    );
  });

  test('step 12.5 references plan bounce', () => {
    content = content || fs.readFileSync(PLAN_PHASE_PATH, 'utf-8');
    // The step title should mention bounce
    assert.ok(
      /## 12\.5.*[Bb]ounce/i.test(content),
      'step 12.5 should reference plan bounce in its title'
    );
  });

  test('plan-phase.md has --bounce flag handling', () => {
    content = content || fs.readFileSync(PLAN_PHASE_PATH, 'utf-8');
    assert.ok(
      content.includes('--bounce'),
      'plan-phase.md should handle --bounce flag'
    );
  });

  test('plan-phase.md has --skip-bounce flag handling', () => {
    content = content || fs.readFileSync(PLAN_PHASE_PATH, 'utf-8');
    assert.ok(
      content.includes('--skip-bounce'),
      'plan-phase.md should handle --skip-bounce flag'
    );
  });

  test('plan-phase.md has backup pattern (pre-bounce.md)', () => {
    content = content || fs.readFileSync(PLAN_PHASE_PATH, 'utf-8');
    assert.ok(
      content.includes('pre-bounce.md'),
      'plan-phase.md should reference pre-bounce.md backup files'
    );
  });

  test('plan-phase.md has frontmatter integrity validation for bounced plans', () => {
    content = content || fs.readFileSync(PLAN_PHASE_PATH, 'utf-8');
    // Should mention YAML frontmatter validation after bounce
    assert.ok(
      /frontmatter.*bounced|bounced.*frontmatter|YAML.*bounce|bounce.*YAML/i.test(content),
      'plan-phase.md should validate frontmatter integrity on bounced plans'
    );
  });

  test('plan-phase.md re-runs checker on bounced plans', () => {
    content = content || fs.readFileSync(PLAN_PHASE_PATH, 'utf-8');
    // Should mention re-running plan checker after bounce
    assert.ok(
      /[Rr]e-run.*checker.*bounce|bounce.*checker.*re-run|checker.*bounced/i.test(content),
      'plan-phase.md should re-run plan checker on bounced plans'
    );
  });

  test('plan-phase.md references plan_bounce config keys', () => {
    content = content || fs.readFileSync(PLAN_PHASE_PATH, 'utf-8');
    assert.ok(
      content.includes('plan_bounce_script'),
      'plan-phase.md should reference plan_bounce_script config'
    );
    assert.ok(
      content.includes('plan_bounce_passes'),
      'plan-phase.md should reference plan_bounce_passes config'
    );
  });

  test('plan-phase.md disables bounce when --gaps flag is present', () => {
    content = content || fs.readFileSync(PLAN_PHASE_PATH, 'utf-8');
    // Should mention that --gaps disables bounce
    assert.ok(
      /--gaps.*bounce|bounce.*--gaps/i.test(content),
      'plan-phase.md should disable bounce when --gaps flag is present'
    );
  });

  test('plan-phase.md restores original on script failure', () => {
    content = content || fs.readFileSync(PLAN_PHASE_PATH, 'utf-8');
    // Should mention restoring from backup on failure
    assert.ok(
      /restore.*original|restore.*pre-bounce|original.*restore/i.test(content),
      'plan-phase.md should restore original plan on script failure'
    );
  });
});
333
tests/planner-language-regression.test.cjs
Normal file
@@ -0,0 +1,333 @@
'use strict';

/**
 * Planner Language Regression Tests (#2091, #2092)
 *
 * Prevents time-based reasoning and complexity-as-scope-justification
 * from leaking back into planning artifacts via future PRs.
 *
 * These tests scan agent definitions, workflow files, and references
 * for prohibited patterns that import human-world constraints into
 * an AI execution context where those constraints do not exist.
 */

const { test, describe } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');

const ROOT = path.join(__dirname, '..');
const AGENTS_DIR = path.join(ROOT, 'agents');
const WORKFLOWS_DIR = path.join(ROOT, 'get-shit-done', 'workflows');
const REFERENCES_DIR = path.join(ROOT, 'get-shit-done', 'references');
const TEMPLATES_DIR = path.join(ROOT, 'get-shit-done', 'templates');

/**
 * Collect all .md files from a directory (non-recursive).
 */
function mdFiles(dir) {
  if (!fs.existsSync(dir)) return [];
  return fs.readdirSync(dir)
    .filter(f => f.endsWith('.md'))
    .map(f => ({ name: f, path: path.join(dir, f) }));
}

/**
 * Collect all .md files recursively.
 */
function mdFilesRecursive(dir) {
  if (!fs.existsSync(dir)) return [];
  const results = [];
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      results.push(...mdFilesRecursive(full));
    } else if (entry.name.endsWith('.md')) {
      results.push({ name: entry.name, path: full });
    }
  }
  return results;
}

/**
 * Files that define planning behavior — agents, workflows, references.
 * These are the files where time-based and complexity-based scope
 * reasoning must never appear.
 */
const PLANNING_FILES = [
  ...mdFiles(AGENTS_DIR),
  ...mdFiles(WORKFLOWS_DIR),
  ...mdFiles(REFERENCES_DIR),
  ...mdFilesRecursive(TEMPLATES_DIR),
];

// -- Prohibited patterns --

/**
 * Time-based task sizing patterns.
 * Matches "15-60 minutes", "X minutes Claude execution time", etc.
 * Does NOT match operational timeouts ("timeout: 5 minutes"),
 * API docs examples ("100 requests per 15 minutes"),
 * or human-readable timeout descriptions in workflow execution steps.
 */
const TIME_SIZING_PATTERNS = [
  // "N-M minutes" in task sizing context (not timeout context)
  /each task[:\s]*\*?\*?\d+[-–]\d+\s*min/i,
  // "minutes Claude execution time" or "minutes execution time"
  /minutes?\s+(claude\s+)?execution\s+time/i,
  // Duration-based sizing table rows: "< 15 min", "15-60 min", "> 60 min"
  /[<>]\s*\d+\s*min\s*\|/i,
];

/**
 * Complexity-as-scope-justification patterns.
 * Matches "too complex to implement", "challenging feature", etc.
 * Does NOT match legitimate uses like:
 * - "complex domains" in research/discovery context (describing what to research)
 * - "non-trivial" in verification context (confirming substantive code exists)
 * - "challenging" in user-profiling context (quoting user reactions)
 */
const COMPLEXITY_SCOPE_PATTERNS = [
  // "too complex to" — always a scope-reduction justification
  /too\s+complex\s+to/i,
  // "too difficult" — always a scope-reduction justification
  /too\s+difficult/i,
  // "is too complex for" — scope justification (e.g. "Phase X is too complex for")
  /is\s+too\s+complex\s+for/i,
];

/**
 * Files allowed to contain certain patterns because they document
 * the prohibition itself, or use the terms in non-scope-reduction context.
 */
const ALLOWLIST = {
  // Plan-checker scans FOR these patterns — it's a detection list, not usage
  'gsd-plan-checker.md': ['complexity_scope', 'time_sizing'],
  // Planner defines the prohibition and the authority limits — uses terms to explain what NOT to do
  'gsd-planner.md': ['complexity_scope'],
  // Debugger uses "30+ minutes" as anti-pattern detection, not task sizing
  'gsd-debugger.md': ['time_sizing'],
  // Doc-writer uses "15 minutes" in API rate limit example, "2 minutes" for doc quality
  'gsd-doc-writer.md': ['time_sizing'],
  // Discovery-phase uses time for level descriptions (operational, not scope)
  'discovery-phase.md': ['time_sizing'],
  // Explore uses "~30 seconds" as operational estimate
  'explore.md': ['time_sizing'],
  // Review uses "up to 5 minutes" for CodeRabbit timeout
  'review.md': ['time_sizing'],
  // Fast uses "under 2 minutes wall time" as operational constraint
  'fast.md': ['time_sizing'],
  // Execute-phase uses "timeout: 5 minutes" for test runner
  'execute-phase.md': ['time_sizing'],
  // Verify-phase uses "timeout: 5 minutes" for test runner
  'verify-phase.md': ['time_sizing'],
  // Map-codebase documents subagent_timeout
  'map-codebase.md': ['time_sizing'],
  // Help documents CodeRabbit timing
  'help.md': ['time_sizing'],
};

function isAllowlisted(fileName, category) {
  const entry = ALLOWLIST[fileName];
  return entry && entry.includes(category);
}

// -- Tests --

describe('Planner language regression — time-based task sizing (#2092)', () => {
  for (const file of PLANNING_FILES) {
    test(`${file.name} must not use time-based task sizing`, () => {
      if (isAllowlisted(file.name, 'time_sizing')) return;

      const content = fs.readFileSync(file.path, 'utf-8');
      for (const pattern of TIME_SIZING_PATTERNS) {
        const match = content.match(pattern);
        assert.ok(
          !match,
          [
            `${file.name} contains time-based task sizing: "${match?.[0]}"`,
            'Task sizing must use context-window percentage, not time units.',
            'See issue #2092 for rationale.',
          ].join('\n')
        );
      }
    });
  }
});

describe('Planner language regression — complexity-as-scope-justification (#2092)', () => {
  for (const file of PLANNING_FILES) {
    test(`${file.name} must not use complexity to justify scope reduction`, () => {
      if (isAllowlisted(file.name, 'complexity_scope')) return;

      const content = fs.readFileSync(file.path, 'utf-8');
      for (const pattern of COMPLEXITY_SCOPE_PATTERNS) {
        const match = content.match(pattern);
        assert.ok(
          !match,
          [
            `${file.name} contains complexity-as-scope-justification: "${match?.[0]}"`,
            'Scope decisions must be based on context cost, missing information,',
            'or dependency conflicts — not perceived difficulty.',
            'See issue #2092 for rationale.',
          ].join('\n')
        );
      }
    });
  }
});

describe('gsd-planner.md — required structural sections (#2091, #2092)', () => {
  let plannerContent;

  test('planner file exists and is readable', () => {
    const plannerPath = path.join(AGENTS_DIR, 'gsd-planner.md');
    assert.ok(fs.existsSync(plannerPath), 'agents/gsd-planner.md must exist');
    plannerContent = fs.readFileSync(plannerPath, 'utf-8');
  });

  test('contains <planner_authority_limits> section', () => {
    assert.ok(
      plannerContent.includes('<planner_authority_limits>'),
      'gsd-planner.md must contain a <planner_authority_limits> section defining what the planner cannot decide'
    );
  });

  test('authority limits prohibit difficulty-based scope decisions', () => {
    assert.ok(
      plannerContent.includes('The planner has no authority to'),
      'planner_authority_limits must explicitly state what the planner cannot decide'
    );
  });

  test('authority limits list three legitimate split reasons: context cost, missing info, dependency', () => {
    assert.ok(
      plannerContent.includes('Context cost') || plannerContent.includes('context cost'),
      'authority limits must list context cost as a legitimate split reason'
    );
    assert.ok(
      plannerContent.includes('Missing information') || plannerContent.includes('missing information'),
      'authority limits must list missing information as a legitimate split reason'
    );
    assert.ok(
      plannerContent.includes('Dependency conflict') || plannerContent.includes('dependency conflict'),
      'authority limits must list dependency conflict as a legitimate split reason'
    );
  });

  test('task sizing uses context percentage, not time units', () => {
    assert.ok(
      plannerContent.includes('context consumption') || plannerContent.includes('context cost'),
      'task sizing must reference context consumption, not time'
    );
    assert.ok(
      !(/each task[:\s]*\*?\*?\d+[-–]\d+\s*min/i.test(plannerContent)),
      'task sizing must not use minutes as sizing unit'
    );
  });

  test('contains multi-source coverage audit (not just D-XX decisions)', () => {
    assert.ok(
      plannerContent.includes('Multi-Source Coverage Audit') ||
        plannerContent.includes('multi-source coverage audit'),
      'gsd-planner.md must contain a multi-source coverage audit, not just D-XX decision matrix'
    );
  });

  test('coverage audit includes all four source types: GOAL, REQ, RESEARCH, CONTEXT', () => {
    // The planner file or its referenced planner-source-audit.md must define all four types.
    // The inline compact version uses **GOAL**, **REQ**, **RESEARCH**, **CONTEXT**.
    const refPath = path.join(ROOT, 'get-shit-done', 'references', 'planner-source-audit.md');
    const combined = plannerContent + (fs.existsSync(refPath) ? fs.readFileSync(refPath, 'utf-8') : '');

    const hasGoal = combined.includes('**GOAL**');
    const hasReq = combined.includes('**REQ**');
    const hasResearch = combined.includes('**RESEARCH**');
    const hasContext = combined.includes('**CONTEXT**');

    assert.ok(hasGoal, 'coverage audit must include GOAL source type (ROADMAP.md phase goal)');
    assert.ok(hasReq, 'coverage audit must include REQ source type (REQUIREMENTS.md)');
    assert.ok(hasResearch, 'coverage audit must include RESEARCH source type (RESEARCH.md)');
    assert.ok(hasContext, 'coverage audit must include CONTEXT source type (CONTEXT.md decisions)');
  });

  test('coverage audit defines MISSING item handling with developer escalation', () => {
    assert.ok(
      plannerContent.includes('Source Audit: Unplanned Items Found') ||
        plannerContent.includes('MISSING'),
      'coverage audit must define handling for MISSING items'
    );
    assert.ok(
      plannerContent.includes('Awaiting developer decision') ||
        plannerContent.includes('developer confirmation'),
      'MISSING items must escalate to developer, not be silently dropped'
    );
  });
});

describe('plan-phase.md — source audit orchestration (#2091)', () => {
  let workflowContent;

  test('plan-phase workflow exists and is readable', () => {
    const workflowPath = path.join(WORKFLOWS_DIR, 'plan-phase.md');
    assert.ok(fs.existsSync(workflowPath), 'workflows/plan-phase.md must exist');
    workflowContent = fs.readFileSync(workflowPath, 'utf-8');
  });

  test('step 9 handles Source Audit return from planner', () => {
    assert.ok(
      workflowContent.includes('Source Audit: Unplanned Items Found'),
      'plan-phase.md step 9 must handle the Source Audit return from the planner'
    );
  });

  test('step 9c exists for source audit gap handling', () => {
    assert.ok(
      workflowContent.includes('9c') && workflowContent.includes('Source Audit'),
      'plan-phase.md must have a step 9c for handling source audit gaps'
    );
  });

  test('step 9b does not use "too complex" language', () => {
    // Extract just step 9b content (between "## 9b" and "## 9c" or "## 10")
    const step9bMatch = workflowContent.match(/## 9b\.([\s\S]*?)(?=## 9c|## 10)/);
    if (step9bMatch) {
      const step9b = step9bMatch[1];
      assert.ok(
        !step9b.includes('too complex'),
        'step 9b must not use "too complex" — use context budget language instead'
      );
    }
  });

  test('phase split recommendation uses context budget framing', () => {
    assert.ok(
      workflowContent.includes('context budget') || workflowContent.includes('context cost'),
      'phase split recommendation must be framed in terms of context budget, not complexity'
    );
  });
});

describe('gsd-plan-checker.md — scope reduction detection includes time/complexity (#2092)', () => {
  let checkerContent;

  test('plan-checker exists and is readable', () => {
    const checkerPath = path.join(AGENTS_DIR, 'gsd-plan-checker.md');
    assert.ok(fs.existsSync(checkerPath), 'agents/gsd-plan-checker.md must exist');
    checkerContent = fs.readFileSync(checkerPath, 'utf-8');
  });

  test('scope reduction scan includes complexity-based justification patterns', () => {
    assert.ok(
      checkerContent.includes('too complex') || checkerContent.includes('too difficult'),
      'plan-checker scope reduction scan must detect complexity-based justification language'
    );
  });

  test('scope reduction scan includes time-based justification patterns', () => {
    assert.ok(
      checkerContent.includes('would take') || checkerContent.includes('hours') || checkerContent.includes('minutes'),
      'plan-checker scope reduction scan must detect time-based justification language'
    );
  });
});
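The regression scan in this file is a regex allowlist/denylist over markdown sources. The core mechanics can be exercised standalone; the patterns below are copied from the test file, while the sample strings are hypothetical:

```javascript
// Minimal standalone check mirroring the regression scan above.
// Patterns are copied from the test file; the sample strings are invented.
const TIME_SIZING = /each task[:\s]*\*?\*?\d+[-–]\d+\s*min/i;
const COMPLEXITY = /too\s+complex\s+to/i;

const bad = 'Each task: 15-30 min of work';
const ok = 'Each task: one context-window percentage budget';

console.log(TIME_SIZING.test(bad)); // true — this line would fail the regression test
console.log(TIME_SIZING.test(ok)); // false
console.log(COMPLEXITY.test('This is too complex to implement now')); // true
```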
137
tests/prompt-thinning.test.cjs
Normal file
@@ -0,0 +1,137 @@
'use strict';

/**
 * Prompt Thinning Tests (#1978)
 *
 * Validates context-window-aware prompt thinning for sub-200K models.
 * When CONTEXT_WINDOW < 200000, agent prompts strip extended examples
 * and anti-pattern lists, referencing them as @-required_reading files instead.
 */

const { test, describe } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');

const EXECUTE_PHASE = path.join(__dirname, '..', 'get-shit-done', 'workflows', 'execute-phase.md');
const EXECUTOR_AGENT = path.join(__dirname, '..', 'agents', 'gsd-executor.md');
const PLANNER_AGENT = path.join(__dirname, '..', 'agents', 'gsd-planner.md');
const EXECUTOR_EXAMPLES_REF = path.join(__dirname, '..', 'get-shit-done', 'references', 'executor-examples.md');
const PLANNER_ANTIPATTERNS_REF = path.join(__dirname, '..', 'get-shit-done', 'references', 'planner-antipatterns.md');

describe('prompt thinning — sub-200K context window support (#1978)', () => {

  describe('execute-phase.md — thinning conditional', () => {
    test('has a CONTEXT_WINDOW < 200000 thinning conditional', () => {
      const content = fs.readFileSync(EXECUTE_PHASE, 'utf-8');
      assert.ok(
        content.includes('CONTEXT_WINDOW < 200000') || content.includes('CONTEXT_WINDOW< 200000'),
        'execute-phase.md must contain a CONTEXT_WINDOW < 200000 conditional for prompt thinning'
      );
    });

    test('preserves the existing CONTEXT_WINDOW >= 500000 enrichment conditional', () => {
      const content = fs.readFileSync(EXECUTE_PHASE, 'utf-8');
      assert.ok(
        content.includes('CONTEXT_WINDOW >= 500000'),
        'execute-phase.md must preserve the existing 500K enrichment conditional'
      );
    });

    test('thinning block references executor-examples.md for on-demand loading', () => {
      const content = fs.readFileSync(EXECUTE_PHASE, 'utf-8');
      assert.ok(
        content.includes('executor-examples.md'),
        'execute-phase.md thinning block must reference executor-examples.md'
      );
    });
  });

  describe('gsd-executor.md — reference to extracted examples', () => {
    test('references executor-examples.md for extended examples', () => {
      const content = fs.readFileSync(EXECUTOR_AGENT, 'utf-8');
      assert.ok(
        content.includes('executor-examples.md'),
        'gsd-executor.md must reference executor-examples.md for extended deviation/checkpoint examples'
      );
    });
  });

  describe('gsd-planner.md — reference to extracted anti-patterns', () => {
    test('references planner-antipatterns.md for extended anti-patterns', () => {
      const content = fs.readFileSync(PLANNER_AGENT, 'utf-8');
      assert.ok(
        content.includes('planner-antipatterns.md'),
        'gsd-planner.md must reference planner-antipatterns.md for extended checkpoint anti-patterns and specificity examples'
      );
    });
  });

  describe('executor-examples.md — extracted reference file', () => {
    test('file exists', () => {
      assert.ok(
        fs.existsSync(EXECUTOR_EXAMPLES_REF),
        'get-shit-done/references/executor-examples.md must exist'
      );
    });

    test('contains deviation rule examples', () => {
      const content = fs.readFileSync(EXECUTOR_EXAMPLES_REF, 'utf-8');
      assert.ok(
        content.includes('Rule 1') || content.includes('RULE 1'),
        'executor-examples.md must contain deviation rule examples'
      );
    });

    test('contains checkpoint examples', () => {
      const content = fs.readFileSync(EXECUTOR_EXAMPLES_REF, 'utf-8');
      assert.ok(
        content.includes('checkpoint') || content.includes('Checkpoint'),
        'executor-examples.md must contain checkpoint examples'
      );
    });

    test('contains edge case examples', () => {
      const content = fs.readFileSync(EXECUTOR_EXAMPLES_REF, 'utf-8');
      assert.ok(
        content.includes('Edge case') || content.includes('edge case') || content.includes('Edge Case'),
        'executor-examples.md must contain edge case guidance'
      );
    });
  });

  describe('planner-antipatterns.md — extracted reference file', () => {
    test('file exists', () => {
      assert.ok(
        fs.existsSync(PLANNER_ANTIPATTERNS_REF),
        'get-shit-done/references/planner-antipatterns.md must exist'
      );
    });

    test('contains checkpoint anti-patterns', () => {
      const content = fs.readFileSync(PLANNER_ANTIPATTERNS_REF, 'utf-8');
      assert.ok(
        content.includes('anti-pattern') || content.includes('Anti-Pattern') || content.includes('Bad'),
        'planner-antipatterns.md must contain checkpoint anti-pattern examples'
      );
    });

    test('contains specificity examples', () => {
      const content = fs.readFileSync(PLANNER_ANTIPATTERNS_REF, 'utf-8');
      assert.ok(
        content.includes('TOO VAGUE') || content.includes('Specificity') || content.includes('specificity'),
        'planner-antipatterns.md must contain specificity examples'
      );
    });
  });

  describe('three-tier consistency', () => {
    test('thinning tier (< 200K), standard tier (200K-500K), and enrichment tier (>= 500K) all coexist', () => {
      const content = fs.readFileSync(EXECUTE_PHASE, 'utf-8');
      const hasThinning = content.includes('CONTEXT_WINDOW < 200000');
      const hasEnrichment = content.includes('CONTEXT_WINDOW >= 500000');
      assert.ok(hasThinning, 'must have thinning conditional (< 200K)');
      assert.ok(hasEnrichment, 'must have enrichment conditional (>= 500K)');
    });
  });
});
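The three-tier behavior these tests enforce (thinning below 200K, standard between 200K and 500K, enrichment at 500K and above) can be sketched as a selection function. `promptTier` is a hypothetical name for illustration, not an API from the installer:

```javascript
// Hypothetical sketch of the three-tier selection the tests above enforce:
// sub-200K models get thinned prompts, 500K+ models get enriched prompts,
// everything in between gets the standard prompt.
function promptTier(contextWindow) {
  if (contextWindow < 200000) return 'thin'; // strip extended examples, reference files instead
  if (contextWindow >= 500000) return 'enriched'; // inline the extended examples
  return 'standard';
}

console.log(promptTier(128000)); // 'thin'
console.log(promptTier(200000)); // 'standard'
console.log(promptTier(1000000)); // 'enriched'
```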
178
tests/qwen-install.test.cjs
Normal file
@@ -0,0 +1,178 @@
process.env.GSD_TEST_MODE = '1';

const { test, describe, beforeEach, afterEach } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('node:fs');
const path = require('node:path');
const os = require('node:os');
const { createTempDir, cleanup } = require('./helpers.cjs');

const {
  getDirName,
  getGlobalDir,
  getConfigDirFromHome,
  install,
  uninstall,
  writeManifest,
} = require('../bin/install.js');

describe('Qwen Code runtime directory mapping', () => {
  test('maps Qwen to .qwen for local installs', () => {
    assert.strictEqual(getDirName('qwen'), '.qwen');
  });

  test('maps Qwen to ~/.qwen for global installs', () => {
    assert.strictEqual(getGlobalDir('qwen'), path.join(os.homedir(), '.qwen'));
  });

  test('returns .qwen config fragments for local and global installs', () => {
    assert.strictEqual(getConfigDirFromHome('qwen', false), "'.qwen'");
    assert.strictEqual(getConfigDirFromHome('qwen', true), "'.qwen'");
  });
});

describe('getGlobalDir (Qwen Code)', () => {
  let originalQwenConfigDir;

  beforeEach(() => {
    originalQwenConfigDir = process.env.QWEN_CONFIG_DIR;
  });

  afterEach(() => {
    if (originalQwenConfigDir !== undefined) {
      process.env.QWEN_CONFIG_DIR = originalQwenConfigDir;
    } else {
      delete process.env.QWEN_CONFIG_DIR;
    }
  });

  test('returns ~/.qwen with no env var or explicit dir', () => {
    delete process.env.QWEN_CONFIG_DIR;
    const result = getGlobalDir('qwen');
    assert.strictEqual(result, path.join(os.homedir(), '.qwen'));
  });

  test('returns explicit dir when provided', () => {
    const result = getGlobalDir('qwen', '/custom/qwen-path');
    assert.strictEqual(result, '/custom/qwen-path');
  });

  test('respects QWEN_CONFIG_DIR env var', () => {
    process.env.QWEN_CONFIG_DIR = '~/custom-qwen';
    const result = getGlobalDir('qwen');
    assert.strictEqual(result, path.join(os.homedir(), 'custom-qwen'));
  });

  test('explicit dir takes priority over QWEN_CONFIG_DIR', () => {
    process.env.QWEN_CONFIG_DIR = '~/from-env';
    const result = getGlobalDir('qwen', '/explicit/path');
    assert.strictEqual(result, '/explicit/path');
  });

  test('does not break other runtimes', () => {
    assert.strictEqual(getGlobalDir('claude'), path.join(os.homedir(), '.claude'));
    assert.strictEqual(getGlobalDir('codex'), path.join(os.homedir(), '.codex'));
  });
});

describe('Qwen Code local install/uninstall', () => {
  let tmpDir;
  let previousCwd;

  beforeEach(() => {
    tmpDir = createTempDir('gsd-qwen-install-');
    previousCwd = process.cwd();
    process.chdir(tmpDir);
  });

  afterEach(() => {
    process.chdir(previousCwd);
    cleanup(tmpDir);
  });

  test('installs GSD into ./.qwen and removes it cleanly', () => {
    const result = install(false, 'qwen');
    const targetDir = path.join(tmpDir, '.qwen');

    assert.strictEqual(result.runtime, 'qwen');
    assert.strictEqual(result.configDir, fs.realpathSync(targetDir));

    assert.ok(fs.existsSync(path.join(targetDir, 'skills', 'gsd-help', 'SKILL.md')));
    assert.ok(fs.existsSync(path.join(targetDir, 'get-shit-done', 'VERSION')));
    assert.ok(fs.existsSync(path.join(targetDir, 'agents')));

    const manifest = writeManifest(targetDir, 'qwen');
    assert.ok(Object.keys(manifest.files).some(file => file.startsWith('skills/gsd-help/')), manifest);

    uninstall(false, 'qwen');

    assert.ok(!fs.existsSync(path.join(targetDir, 'skills', 'gsd-help')), 'Qwen skill directory removed');
    assert.ok(!fs.existsSync(path.join(targetDir, 'get-shit-done')), 'get-shit-done removed');
  });
});

describe('E2E: Qwen Code uninstall skills cleanup', () => {
  let tmpDir;
  let previousCwd;

  beforeEach(() => {
    tmpDir = createTempDir('gsd-qwen-uninstall-');
    previousCwd = process.cwd();
    process.chdir(tmpDir);
  });

  afterEach(() => {
    process.chdir(previousCwd);
    cleanup(tmpDir);
  });

  test('removes all gsd-* skill directories on --qwen --uninstall', () => {
    const targetDir = path.join(tmpDir, '.qwen');
    install(false, 'qwen');

    const skillsDir = path.join(targetDir, 'skills');
    assert.ok(fs.existsSync(skillsDir), 'skills dir exists after install');

    const installedSkills = fs.readdirSync(skillsDir, { withFileTypes: true })
      .filter(e => e.isDirectory() && e.name.startsWith('gsd-'));
    assert.ok(installedSkills.length > 0, `found ${installedSkills.length} gsd-* skill dirs before uninstall`);

    uninstall(false, 'qwen');

    if (fs.existsSync(skillsDir)) {
      const remainingGsd = fs.readdirSync(skillsDir, { withFileTypes: true })
        .filter(e => e.isDirectory() && e.name.startsWith('gsd-'));
      assert.strictEqual(remainingGsd.length, 0,
        `Expected 0 gsd-* skill dirs after uninstall, found: ${remainingGsd.map(e => e.name).join(', ')}`);
    }
  });

  test('preserves non-GSD skill directories during --qwen --uninstall', () => {
    const targetDir = path.join(tmpDir, '.qwen');
    install(false, 'qwen');

    const customSkillDir = path.join(targetDir, 'skills', 'my-custom-skill');
    fs.mkdirSync(customSkillDir, { recursive: true });
    fs.writeFileSync(path.join(customSkillDir, 'SKILL.md'), '# My Custom Skill\n');

    assert.ok(fs.existsSync(path.join(customSkillDir, 'SKILL.md')), 'custom skill exists before uninstall');

    uninstall(false, 'qwen');

    assert.ok(fs.existsSync(path.join(customSkillDir, 'SKILL.md')),
      'Non-GSD skill directory should be preserved after Qwen uninstall');
  });

  test('removes engine directory on --qwen --uninstall', () => {
    const targetDir = path.join(tmpDir, '.qwen');
    install(false, 'qwen');

    assert.ok(fs.existsSync(path.join(targetDir, 'get-shit-done', 'VERSION')),
      'engine exists before uninstall');

    uninstall(false, 'qwen');

    assert.ok(!fs.existsSync(path.join(targetDir, 'get-shit-done')),
      'get-shit-done engine should be removed after Qwen uninstall');
  });
});
286
tests/qwen-skills-migration.test.cjs
Normal file
@@ -0,0 +1,286 @@
|
||||
/**
|
||||
* GSD Tools Tests - Qwen Code Skills Migration
|
||||
*
|
||||
* Tests for installing GSD for Qwen Code using the standard
|
||||
* skills/gsd-xxx/SKILL.md format (same open standard as Claude Code 2.1.88+).
|
||||
*
|
||||
* Uses node:test and node:assert (NOT Jest).
|
||||
*/
|
||||
|
||||
process.env.GSD_TEST_MODE = '1';
|
||||
|
||||
const { test, describe, beforeEach, afterEach } = require('node:test');
|
||||
const assert = require('node:assert/strict');
|
||||
const path = require('path');
|
||||
const os = require('os');
|
||||
const fs = require('fs');
|
||||
|
||||
const {
|
||||
convertClaudeCommandToClaudeSkill,
|
||||
copyCommandsAsClaudeSkills,
|
||||
} = require('../bin/install.js');
|
||||
|
||||
// ─── convertClaudeCommandToClaudeSkill (used by Qwen via copyCommandsAsClaudeSkills) ──
|
||||
|
||||
describe('Qwen Code: convertClaudeCommandToClaudeSkill', () => {
|
||||
  test('preserves allowed-tools multiline YAML list', () => {
    const input = [
      '---',
      'name: gsd:next',
      'description: Advance to the next step',
      'allowed-tools:',
      ' - Read',
      ' - Bash',
      ' - Grep',
      '---',
      '',
      'Body content here.',
    ].join('\n');

    const result = convertClaudeCommandToClaudeSkill(input, 'gsd-next');
    assert.ok(result.includes('allowed-tools:'), 'allowed-tools field is present');
    assert.ok(result.includes('Read'), 'Read tool preserved');
    assert.ok(result.includes('Bash'), 'Bash tool preserved');
    assert.ok(result.includes('Grep'), 'Grep tool preserved');
  });

  test('preserves argument-hint', () => {
    const input = [
      '---',
      'name: gsd:debug',
      'description: Debug issues',
      'argument-hint: "[issue description]"',
      'allowed-tools:',
      ' - Read',
      ' - Bash',
      '---',
      '',
      'Debug body.',
    ].join('\n');

    const result = convertClaudeCommandToClaudeSkill(input, 'gsd-debug');
    assert.ok(result.includes('argument-hint:'), 'argument-hint field is present');
    assert.ok(
      result.includes('[issue description]'),
      'argument-hint value preserved'
    );
  });

  test('converts name format from gsd:xxx to skill naming', () => {
    const input = [
      '---',
      'name: gsd:next',
      'description: Advance workflow',
      '---',
      '',
      'Body.',
    ].join('\n');

    const result = convertClaudeCommandToClaudeSkill(input, 'gsd-next');
    assert.ok(result.includes('name: gsd-next'), 'name uses skill naming convention');
    assert.ok(!result.includes('name: gsd:next'), 'old name format removed');
  });

  test('preserves body content unchanged', () => {
    const body = '\n<objective>\nDo the thing.\n</objective>\n\n<process>\nStep 1.\nStep 2.\n</process>\n';
    const input = [
      '---',
      'name: gsd:test',
      'description: Test command',
      '---',
      body,
    ].join('');

    const result = convertClaudeCommandToClaudeSkill(input, 'gsd-test');
    assert.ok(result.includes('<objective>'), 'objective tag preserved');
    assert.ok(result.includes('Do the thing.'), 'body text preserved');
    assert.ok(result.includes('<process>'), 'process tag preserved');
  });

  test('produces valid SKILL.md frontmatter starting with ---', () => {
    const input = [
      '---',
      'name: gsd:plan',
      'description: Plan a phase',
      '---',
      '',
      'Plan body.',
    ].join('\n');

    const result = convertClaudeCommandToClaudeSkill(input, 'gsd-plan');
    assert.ok(result.startsWith('---\n'), 'frontmatter starts with ---');
    assert.ok(result.includes('\n---\n'), 'frontmatter closes with ---');
  });
});

// ─── copyCommandsAsClaudeSkills (used for Qwen skills install) ─────────────

describe('Qwen Code: copyCommandsAsClaudeSkills', () => {
  let tmpDir;

  beforeEach(() => {
    tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'gsd-qwen-test-'));
  });

  afterEach(() => {
    if (fs.existsSync(tmpDir)) {
      fs.rmSync(tmpDir, { recursive: true });
    }
  });

  test('creates skills/gsd-xxx/SKILL.md directory structure', () => {
    // Create source command files
    const srcDir = path.join(tmpDir, 'src', 'commands', 'gsd');
    fs.mkdirSync(srcDir, { recursive: true });
    fs.writeFileSync(path.join(srcDir, 'quick.md'), [
      '---',
      'name: gsd:quick',
      'description: Execute a quick task',
      'allowed-tools:',
      ' - Read',
      ' - Bash',
      '---',
      '',
      '<objective>Quick task body</objective>',
    ].join('\n'));

    const skillsDir = path.join(tmpDir, 'dest', 'skills');
    copyCommandsAsClaudeSkills(srcDir, skillsDir, 'gsd', '/test/prefix/', 'qwen', false);

    // Verify SKILL.md was created
    const skillPath = path.join(skillsDir, 'gsd-quick', 'SKILL.md');
    assert.ok(fs.existsSync(skillPath), 'gsd-quick/SKILL.md exists');

    // Verify content
    const content = fs.readFileSync(skillPath, 'utf8');
    assert.ok(content.includes('name: gsd-quick'), 'skill name converted');
    assert.ok(content.includes('description:'), 'description present');
    assert.ok(content.includes('allowed-tools:'), 'allowed-tools preserved');
    assert.ok(content.includes('<objective>'), 'body content preserved');
  });

  test('replaces ~/.claude/ paths with pathPrefix', () => {
    const srcDir = path.join(tmpDir, 'src', 'commands', 'gsd');
    fs.mkdirSync(srcDir, { recursive: true });
    fs.writeFileSync(path.join(srcDir, 'next.md'), [
      '---',
      'name: gsd:next',
      'description: Next step',
      '---',
      '',
      'Reference: @~/.claude/get-shit-done/workflows/next.md',
    ].join('\n'));

    const skillsDir = path.join(tmpDir, 'dest', 'skills');
    copyCommandsAsClaudeSkills(srcDir, skillsDir, 'gsd', '$HOME/.qwen/', 'qwen', false);

    const content = fs.readFileSync(path.join(skillsDir, 'gsd-next', 'SKILL.md'), 'utf8');
    assert.ok(content.includes('$HOME/.qwen/'), 'path replaced to .qwen/');
    assert.ok(!content.includes('~/.claude/'), 'old claude path removed');
  });

  test('replaces $HOME/.claude/ paths with pathPrefix', () => {
    const srcDir = path.join(tmpDir, 'src', 'commands', 'gsd');
    fs.mkdirSync(srcDir, { recursive: true });
    fs.writeFileSync(path.join(srcDir, 'plan.md'), [
      '---',
      'name: gsd:plan',
      'description: Plan phase',
      '---',
      '',
      'Reference: $HOME/.claude/get-shit-done/workflows/plan.md',
    ].join('\n'));

    const skillsDir = path.join(tmpDir, 'dest', 'skills');
    copyCommandsAsClaudeSkills(srcDir, skillsDir, 'gsd', '$HOME/.qwen/', 'qwen', false);

    const content = fs.readFileSync(path.join(skillsDir, 'gsd-plan', 'SKILL.md'), 'utf8');
    assert.ok(content.includes('$HOME/.qwen/'), 'path replaced to .qwen/');
    assert.ok(!content.includes('$HOME/.claude/'), 'old claude path removed');
  });

  test('removes stale gsd- skills before installing new ones', () => {
    const srcDir = path.join(tmpDir, 'src', 'commands', 'gsd');
    fs.mkdirSync(srcDir, { recursive: true });
    fs.writeFileSync(path.join(srcDir, 'quick.md'), [
      '---',
      'name: gsd:quick',
      'description: Quick task',
      '---',
      '',
      'Body',
    ].join('\n'));

    const skillsDir = path.join(tmpDir, 'dest', 'skills');
    // Pre-create a stale skill
    fs.mkdirSync(path.join(skillsDir, 'gsd-old-skill'), { recursive: true });
    fs.writeFileSync(path.join(skillsDir, 'gsd-old-skill', 'SKILL.md'), 'old');

    copyCommandsAsClaudeSkills(srcDir, skillsDir, 'gsd', '/test/', 'qwen', false);

    assert.ok(!fs.existsSync(path.join(skillsDir, 'gsd-old-skill')), 'stale skill removed');
    assert.ok(fs.existsSync(path.join(skillsDir, 'gsd-quick', 'SKILL.md')), 'new skill installed');
  });

  test('preserves agent field in frontmatter', () => {
    const srcDir = path.join(tmpDir, 'src', 'commands', 'gsd');
    fs.mkdirSync(srcDir, { recursive: true });
    fs.writeFileSync(path.join(srcDir, 'execute.md'), [
      '---',
      'name: gsd:execute',
      'description: Execute phase',
      'agent: gsd-executor',
      'allowed-tools:',
      ' - Read',
      ' - Bash',
      ' - Task',
      '---',
      '',
      'Execute body',
    ].join('\n'));

    const skillsDir = path.join(tmpDir, 'dest', 'skills');
    copyCommandsAsClaudeSkills(srcDir, skillsDir, 'gsd', '/test/', 'qwen', false);

    const content = fs.readFileSync(path.join(skillsDir, 'gsd-execute', 'SKILL.md'), 'utf8');
    assert.ok(content.includes('agent: gsd-executor'), 'agent field preserved');
  });
});

// ─── Integration: SKILL.md format validation ────────────────────────────────

describe('Qwen Code: SKILL.md format validation', () => {
  test('SKILL.md frontmatter is valid YAML structure', () => {
    const input = [
      '---',
      'name: gsd:review',
      'description: Code review with quality checks',
      'argument-hint: "[PR number or branch]"',
      'agent: gsd-code-reviewer',
      'allowed-tools:',
      ' - Read',
      ' - Grep',
      ' - Bash',
      '---',
      '',
      '<objective>Review code</objective>',
    ].join('\n');

    const result = convertClaudeCommandToClaudeSkill(input, 'gsd-review');

    // Parse the frontmatter
    const fmMatch = result.match(/^---\n([\s\S]*?)\n---/);
    assert.ok(fmMatch, 'has frontmatter block');

    const fmLines = fmMatch[1].split('\n');
    const hasName = fmLines.some(l => l.startsWith('name: gsd-review'));
    const hasDesc = fmLines.some(l => l.startsWith('description:'));
    const hasAgent = fmLines.some(l => l.startsWith('agent:'));
    const hasTools = fmLines.some(l => l.startsWith('allowed-tools:'));

    assert.ok(hasName, 'name field correct');
    assert.ok(hasDesc, 'description field present');
    assert.ok(hasAgent, 'agent field present');
    assert.ok(hasTools, 'allowed-tools field present');
  });
});
219
tests/skill-manifest.test.cjs
Normal file
@@ -0,0 +1,219 @@
/**
 * Tests for skill-manifest command
 * TDD: RED phase — tests written before implementation
 */

const { describe, test, beforeEach, afterEach } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');
const { runGsdTools, createTempProject, cleanup } = require('./helpers.cjs');

describe('skill-manifest', () => {
  let tmpDir;

  beforeEach(() => {
    tmpDir = createTempProject();
  });

  afterEach(() => {
    cleanup(tmpDir);
  });

  test('skill-manifest command exists and returns JSON', () => {
    // Create a skills directory with one skill
    const skillDir = path.join(tmpDir, '.claude', 'skills', 'test-skill');
    fs.mkdirSync(skillDir, { recursive: true });
    fs.writeFileSync(path.join(skillDir, 'SKILL.md'), [
      '---',
      'name: test-skill',
      'description: A test skill',
      '---',
      '',
      '# Test Skill',
    ].join('\n'));

    const result = runGsdTools(['skill-manifest', '--skills-dir', path.join(tmpDir, '.claude', 'skills')], tmpDir);
    assert.ok(result.success, `Command should succeed: ${result.error || result.output}`);

    const manifest = JSON.parse(result.output);
    assert.ok(Array.isArray(manifest), 'Manifest should be an array');
  });

  test('generates manifest with correct structure from SKILL.md frontmatter', () => {
    const skillDir = path.join(tmpDir, '.claude', 'skills', 'my-skill');
    fs.mkdirSync(skillDir, { recursive: true });
    fs.writeFileSync(path.join(skillDir, 'SKILL.md'), [
      '---',
      'name: my-skill',
      'description: Does something useful',
      '---',
      '',
      '# My Skill',
      '',
      'TRIGGER when: user asks about widgets',
    ].join('\n'));

    const result = runGsdTools(['skill-manifest', '--skills-dir', path.join(tmpDir, '.claude', 'skills')], tmpDir);
    assert.ok(result.success, `Command should succeed: ${result.error || result.output}`);

    const manifest = JSON.parse(result.output);
    assert.strictEqual(manifest.length, 1);
    assert.strictEqual(manifest[0].name, 'my-skill');
    assert.strictEqual(manifest[0].description, 'Does something useful');
    assert.strictEqual(manifest[0].path, 'my-skill');
  });

  test('empty skills directory produces empty manifest', () => {
    const skillsDir = path.join(tmpDir, '.claude', 'skills');
    fs.mkdirSync(skillsDir, { recursive: true });

    const result = runGsdTools(['skill-manifest', '--skills-dir', skillsDir], tmpDir);
    assert.ok(result.success, `Command should succeed: ${result.error || result.output}`);

    const manifest = JSON.parse(result.output);
    assert.ok(Array.isArray(manifest), 'Manifest should be an array');
    assert.strictEqual(manifest.length, 0);
  });

  test('skills without SKILL.md are skipped', () => {
    const skillsDir = path.join(tmpDir, '.claude', 'skills');
    // Skill with SKILL.md
    const goodDir = path.join(skillsDir, 'good-skill');
    fs.mkdirSync(goodDir, { recursive: true });
    fs.writeFileSync(path.join(goodDir, 'SKILL.md'), [
      '---',
      'name: good-skill',
      'description: Has a SKILL.md',
      '---',
      '',
      '# Good Skill',
    ].join('\n'));

    // Skill without SKILL.md (just a directory)
    const badDir = path.join(skillsDir, 'bad-skill');
    fs.mkdirSync(badDir, { recursive: true });
    fs.writeFileSync(path.join(badDir, 'README.md'), '# No SKILL.md here');

    const result = runGsdTools(['skill-manifest', '--skills-dir', skillsDir], tmpDir);
    assert.ok(result.success, `Command should succeed: ${result.error || result.output}`);

    const manifest = JSON.parse(result.output);
    assert.strictEqual(manifest.length, 1);
    assert.strictEqual(manifest[0].name, 'good-skill');
  });

  test('manifest includes frontmatter fields from SKILL.md', () => {
    const skillDir = path.join(tmpDir, '.claude', 'skills', 'rich-skill');
    fs.mkdirSync(skillDir, { recursive: true });
    fs.writeFileSync(path.join(skillDir, 'SKILL.md'), [
      '---',
      'name: rich-skill',
      'description: A richly documented skill',
      '---',
      '',
      '# Rich Skill',
      '',
      'TRIGGER when: user mentions databases',
      'DO NOT TRIGGER when: user asks about frontend',
    ].join('\n'));

    const result = runGsdTools(['skill-manifest', '--skills-dir', path.join(tmpDir, '.claude', 'skills')], tmpDir);
    assert.ok(result.success, `Command should succeed: ${result.error || result.output}`);

    const manifest = JSON.parse(result.output);
    assert.strictEqual(manifest.length, 1);

    const skill = manifest[0];
    assert.strictEqual(skill.name, 'rich-skill');
    assert.strictEqual(skill.description, 'A richly documented skill');
    assert.strictEqual(skill.path, 'rich-skill');
    // triggers extracted from body text
    assert.ok(Array.isArray(skill.triggers), 'triggers should be an array');
    assert.ok(skill.triggers.length > 0, 'triggers should have at least one entry');
    assert.ok(skill.triggers.some(t => t.includes('databases')), 'triggers should mention databases');
  });

  test('multiple skills are all included in manifest', () => {
    const skillsDir = path.join(tmpDir, '.claude', 'skills');

    for (const name of ['alpha', 'beta', 'gamma']) {
      const dir = path.join(skillsDir, name);
      fs.mkdirSync(dir, { recursive: true });
      fs.writeFileSync(path.join(dir, 'SKILL.md'), [
        '---',
        `name: ${name}`,
        `description: The ${name} skill`,
        '---',
        '',
        `# ${name}`,
      ].join('\n'));
    }

    const result = runGsdTools(['skill-manifest', '--skills-dir', skillsDir], tmpDir);
    assert.ok(result.success, `Command should succeed: ${result.error || result.output}`);

    const manifest = JSON.parse(result.output);
    assert.strictEqual(manifest.length, 3);
    const names = manifest.map(s => s.name).sort();
    assert.deepStrictEqual(names, ['alpha', 'beta', 'gamma']);
  });

  test('writes manifest to .planning/skill-manifest.json when --write flag is used', () => {
    const skillDir = path.join(tmpDir, '.claude', 'skills', 'write-test');
    fs.mkdirSync(skillDir, { recursive: true });
    fs.writeFileSync(path.join(skillDir, 'SKILL.md'), [
      '---',
      'name: write-test',
      'description: Tests write mode',
      '---',
      '',
      '# Write Test',
    ].join('\n'));

    const result = runGsdTools(['skill-manifest', '--skills-dir', path.join(tmpDir, '.claude', 'skills'), '--write'], tmpDir);
    assert.ok(result.success, `Command should succeed: ${result.error || result.output}`);

    const manifestPath = path.join(tmpDir, '.planning', 'skill-manifest.json');
    assert.ok(fs.existsSync(manifestPath), 'skill-manifest.json should be written to .planning/');

    const manifest = JSON.parse(fs.readFileSync(manifestPath, 'utf-8'));
    assert.strictEqual(manifest.length, 1);
    assert.strictEqual(manifest[0].name, 'write-test');
  });

  test('nonexistent skills directory returns empty manifest', () => {
    const result = runGsdTools(['skill-manifest', '--skills-dir', path.join(tmpDir, 'nonexistent')], tmpDir);
    assert.ok(result.success, `Command should succeed: ${result.error || result.output}`);

    const manifest = JSON.parse(result.output);
    assert.ok(Array.isArray(manifest), 'Manifest should be an array');
    assert.strictEqual(manifest.length, 0);
  });

  test('files in skills directory are ignored (only subdirectories scanned)', () => {
    const skillsDir = path.join(tmpDir, '.claude', 'skills');
    fs.mkdirSync(skillsDir, { recursive: true });
    // A file, not a directory
    fs.writeFileSync(path.join(skillsDir, 'not-a-skill.md'), '# Not a skill');

    // A valid skill directory
    const skillDir = path.join(skillsDir, 'real-skill');
    fs.mkdirSync(skillDir, { recursive: true });
    fs.writeFileSync(path.join(skillDir, 'SKILL.md'), [
      '---',
      'name: real-skill',
      'description: A real skill',
      '---',
      '',
      '# Real Skill',
    ].join('\n'));

    const result = runGsdTools(['skill-manifest', '--skills-dir', skillsDir], tmpDir);
    assert.ok(result.success, `Command should succeed: ${result.error || result.output}`);

    const manifest = JSON.parse(result.output);
    assert.strictEqual(manifest.length, 1);
    assert.strictEqual(manifest[0].name, 'real-skill');
  });
});
136
tests/temp-subdir.test.cjs
Normal file
@@ -0,0 +1,136 @@
/**
 * GSD Tools Tests - dedicated temp subdirectory
 *
 * Tests for issue #1975: GSD temp files should use a dedicated
 * subdirectory (path.join(os.tmpdir(), 'gsd')) instead of writing
 * directly to os.tmpdir().
 */

const { test, describe, beforeEach, afterEach } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');
const os = require('os');

const {
  reapStaleTempFiles,
} = require('../get-shit-done/bin/lib/core.cjs');

const GSD_TEMP_DIR = path.join(os.tmpdir(), 'gsd');

// ─── Dedicated temp subdirectory ────────────────────────────────────────────

describe('dedicated gsd temp subdirectory', () => {
  describe('output() temp file placement', () => {
    // output() writes to tmpfile when JSON > 50KB. We test indirectly by
    // checking that reapStaleTempFiles scans the subdirectory.

    test('gsd temp subdirectory path is os.tmpdir()/gsd', () => {
      // The GSD_TEMP_DIR constant should resolve to <tmpdir>/gsd
      assert.strictEqual(GSD_TEMP_DIR, path.join(os.tmpdir(), 'gsd'));
    });
  });

  describe('reapStaleTempFiles with subdirectory', () => {
    let testPrefix;

    beforeEach(() => {
      testPrefix = `gsd-tempsub-test-${Date.now()}-`;
      // Ensure the gsd subdirectory exists for test setup
      fs.mkdirSync(GSD_TEMP_DIR, { recursive: true });
    });

    test('removes stale files from gsd subdirectory', () => {
      const stalePath = path.join(GSD_TEMP_DIR, `${testPrefix}stale.json`);
      fs.writeFileSync(stalePath, '{}');
      const oldTime = new Date(Date.now() - 10 * 60 * 1000);
      fs.utimesSync(stalePath, oldTime, oldTime);

      reapStaleTempFiles(testPrefix, { maxAgeMs: 5 * 60 * 1000 });

      assert.ok(!fs.existsSync(stalePath), 'stale file in gsd subdir should be removed');
    });

    test('preserves fresh files in gsd subdirectory', () => {
      const freshPath = path.join(GSD_TEMP_DIR, `${testPrefix}fresh.json`);
      fs.writeFileSync(freshPath, '{}');

      reapStaleTempFiles(testPrefix, { maxAgeMs: 5 * 60 * 1000 });

      assert.ok(fs.existsSync(freshPath), 'fresh file in gsd subdir should be preserved');
      // Clean up
      fs.unlinkSync(freshPath);
    });

    test('removes stale directories from gsd subdirectory', () => {
      const staleDir = path.join(GSD_TEMP_DIR, `${testPrefix}dir`);
      fs.mkdirSync(staleDir, { recursive: true });
      const oldTime = new Date(Date.now() - 10 * 60 * 1000);
      fs.utimesSync(staleDir, oldTime, oldTime);

      reapStaleTempFiles(testPrefix, { maxAgeMs: 5 * 60 * 1000 });

      assert.ok(!fs.existsSync(staleDir), 'stale directory in gsd subdir should be removed');
    });

    test('creates gsd subdirectory if it does not exist', () => {
      // Use a unique nested path to avoid interfering with other tests
      const uniqueSubdir = path.join(os.tmpdir(), `gsd-creation-test-${Date.now()}`);

      // Verify it does not exist
      if (fs.existsSync(uniqueSubdir)) {
        fs.rmSync(uniqueSubdir, { recursive: true, force: true });
      }
      assert.ok(!fs.existsSync(uniqueSubdir), 'test subdir should not exist before test');

      // reapStaleTempFiles should not throw even if subdir does not exist
      // (it gets created or handled gracefully)
      assert.doesNotThrow(() => {
        reapStaleTempFiles(`gsd-creation-test-${Date.now()}-`, { maxAgeMs: 0 });
      });
    });

    test('does not scan system tmpdir root for gsd- files', () => {
      // Place a stale file in the OLD location (system tmpdir root)
      const oldLocationPath = path.join(os.tmpdir(), `${testPrefix}old-location.json`);
      fs.writeFileSync(oldLocationPath, '{}');
      const oldTime = new Date(Date.now() - 10 * 60 * 1000);
      fs.utimesSync(oldLocationPath, oldTime, oldTime);

      // reapStaleTempFiles should NOT remove files from the old location
      // because it now only scans the gsd subdirectory
      reapStaleTempFiles(testPrefix, { maxAgeMs: 5 * 60 * 1000 });

      // The file in the old location should still exist (not scanned)
      assert.ok(
        fs.existsSync(oldLocationPath),
        'files in system tmpdir root should NOT be scanned by reapStaleTempFiles'
      );

      // Clean up manually
      fs.unlinkSync(oldLocationPath);
    });

    test('backward compat: reapStaleTempFilesLegacy cleans old location', () => {
      // Place a stale file in the old location (system tmpdir root)
      const oldLocationPath = path.join(os.tmpdir(), `${testPrefix}legacy.json`);
      fs.writeFileSync(oldLocationPath, '{}');
      const oldTime = new Date(Date.now() - 10 * 60 * 1000);
      fs.utimesSync(oldLocationPath, oldTime, oldTime);

      // The legacy reap function should still clean old-location files
      // We import it if exported, or verify the main reap handles both
      const core = require('../get-shit-done/bin/lib/core.cjs');
      if (typeof core.reapStaleTempFilesLegacy === 'function') {
        core.reapStaleTempFilesLegacy(testPrefix, { maxAgeMs: 5 * 60 * 1000 });
        assert.ok(!fs.existsSync(oldLocationPath), 'legacy reap should clean old location');
      } else {
        // If no separate legacy function, the main output() should do a one-time
        // migration sweep. We just verify the export shape is correct.
        assert.ok(typeof core.reapStaleTempFiles === 'function');
        // Clean up manually since we're not testing migration here
        fs.unlinkSync(oldLocationPath);
      }
    });
  });
});