mirror of
https://github.com/stephenleo/bmad-autonomous-development.git
synced 2026-04-25 12:24:56 +02:00
feat: initial release of BAD — BMad Autonomous Development v1.0.0
Standalone BMad module that orchestrates fully autonomous parallel multi-agent pipelines through the full story lifecycle (create → dev → review → PR), driven by the sprint backlog and dependency graph. - Every step runs in a dedicated subagent with a fresh context window - Harness-agnostic: detects Claude Code, Cursor, Copilot, etc. at setup - Configurable models, timers, CI, and merge behaviour per harness - Self-registering via assets/module-setup.md Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
.claude-plugin/marketplace.json

@@ -1,19 +1,33 @@
 {
-  "name": "my-module",
-  "owner": { "name": "Your Name" },
+  "name": "bmad-bad",
+  "owner": { "name": "stephenleo" },
   "license": "MIT",
-  "homepage": "https://github.com/your-org/my-module",
-  "repository": "https://github.com/your-org/my-module",
-  "keywords": ["bmad"],
+  "homepage": "https://github.com/stephenleo/bmad-autonomous-development",
+  "repository": "https://github.com/stephenleo/bmad-autonomous-development",
+  "keywords": [
+    "bmad",
+    "bmad-method",
+    "autonomous development",
+    "agent swarm",
+    "subagent",
+    "parallel development",
+    "agentic",
+    "orchestration",
+    "sprint automation",
+    "story implementation",
+    "pull request automation",
+    "ci-cd",
+    "ai coding"
+  ],
   "plugins": [
     {
-      "name": "my-module",
+      "name": "bmad-bad",
       "source": "./",
-      "description": "TODO: What your module does in one sentence.",
+      "description": "Autonomous development orchestrator for the BMad Method. Runs fully autonomous parallel multi-agent pipelines through the full story lifecycle (create → dev → review → PR) driven by your sprint backlog and dependency graph.",
       "version": "1.0.0",
-      "author": { "name": "Your Name" },
+      "author": { "name": "Marie Stephen Leo" },
       "skills": [
-        "./skills/my-skill"
+        "./skills/bad"
       ]
     }
   ]
LICENSE — 3 changed lines

@@ -1,6 +1,6 @@
 MIT License
 
-Copyright (c) TODO: YEAR YOUR-NAME
+Copyright (c) 2026 Marie Stephen Leo
 
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal
@@ -19,4 +19,3 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 SOFTWARE.
README.md — 154 changed lines

@@ -1,85 +1,95 @@
-# BMad Module Template
+# BAD — BMad Autonomous Development
 
-A minimal template for creating [BMad Method](https://docs.bmad-method.org/) modules. Fork this repo or use it as a GitHub template to start building your own module.
+> 🤖 Autonomous development orchestrator for the BMad Method. Runs fully autonomous parallel multi-agent pipelines through the full story lifecycle (create → dev → review → PR) driven by your sprint backlog and dependency graph.
 
-## Quick Start
+## What It Does
 
-1. Click **Use this template** on GitHub (or fork the repo)
-2. Rename `skills/my-skill/` to your skill name
-3. Edit `skills/my-skill/SKILL.md` with your skill's instructions
-4. Update `.claude-plugin/marketplace.json` with your module info
-5. Update `LICENSE` with your name and year
-6. Replace this README with what your module does
+BAD is a [BMad Method](https://docs.bmad-method.org/) module that automates your entire sprint execution. A lightweight coordinator orchestrates the pipeline — it never reads files or writes code itself. **Every unit of work is delegated to a dedicated subagent with a fresh context window**, keeping each agent fully focused on its single task.
+
+Once your epics and stories are planned, BAD takes over:
+
+1. *(`MODEL_STANDARD` subagent)* Builds a dependency graph from your sprint backlog — maps story dependencies, syncs GitHub PR status, and identifies what's ready to work on
+2. Picks ready stories from the graph, respecting epic ordering and dependencies
+3. Runs up to `MAX_PARALLEL_STORIES` stories simultaneously, each through a sequential 4-step pipeline:
+   - **Step 1** *(`MODEL_STANDARD` subagent)* — `bmad-create-story`: generates the story spec
+   - **Step 2** *(`MODEL_STANDARD` subagent)* — `bmad-dev-story`: implements the code
+   - **Step 3** *(`MODEL_QUALITY` subagent)* — `bmad-code-review`: reviews and fixes the implementation
+   - **Step 4** *(`MODEL_STANDARD` subagent)* — commit, push, open PR, monitor CI, fix any failing checks, resolve code review comments, and resolve merge conflicts
+4. *(`MODEL_STANDARD` subagent)* Optionally auto-merges batch PRs sequentially (lowest story number first), resolving any conflicts
+5. Waits, then loops back for the next batch — until the entire sprint is done
+
+## Requirements
+
+- [BMad Method](https://docs.bmad-method.org/) installed in your project
+- A sprint plan with epics, stories, and `sprint-status.yaml`
+- Git + GitHub CLI (`gh`) installed and authenticated:
+  1. `brew install gh`
+  2. `gh auth login`
+  3. Add to your `.zshrc` so BAD's subagents can connect to GitHub:
+     ```bash
+     export GITHUB_PERSONAL_ACCESS_TOKEN=$(gh auth token)
+     ```
+## Installation
+
+```bash
+npx bmad-method install --custom-content https://github.com/stephenleo/bmad-autonomous-development
+```
+
+Then run setup in your project:
+
+```
+/bad setup
+```
+
+## Usage
+
+```
+/bad
+```
+
+BAD can also be triggered naturally: *"run BAD"*, *"kick off the sprint"*, *"automate the sprint"*, *"start autonomous development"*, *"run the pipeline"*, *"start the dev pipeline"*
+
+Run with optional overrides:
+
+```
+/bad MAX_PARALLEL_STORIES=2 AUTO_PR_MERGE=true MODEL_STANDARD=opus
+```
+
+### Configuration
+
+BAD is configured at install time (`/bad setup`) and stores settings in `_bmad/bad/config.yaml`. All values can be overridden at runtime with `KEY=VALUE` args.
+
+| Variable | Default | Description |
+|---|---|---|
+| `MAX_PARALLEL_STORIES` | `3` | Stories to run per batch |
+| `WORKTREE_BASE_PATH` | `.worktrees` | Git worktree directory |
+| `MODEL_STANDARD` | `sonnet` | Model for create, dev, PR steps |
+| `MODEL_QUALITY` | `opus` | Model for code review |
+| `AUTO_PR_MERGE` | `false` | Auto-merge PRs after each batch |
+| `RUN_CI_LOCALLY` | `false` | Run CI locally instead of GitHub Actions |
+| `WAIT_TIMER_SECONDS` | `3600` | Wait between batches |
+| `RETRO_TIMER_SECONDS` | `600` | Delay before auto-retrospective |
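The `KEY=VALUE` override precedence described above (defaults, then config, then args) can be sketched in shell. `resolve_overrides` is a hypothetical illustration of the idea, not part of BAD:

```shell
# Sketch (an assumption, not BAD's actual implementation) of runtime
# override resolution: start from defaults, let explicit KEY=VALUE args win.
resolve_overrides() {
  MAX_PARALLEL_STORIES=3   # default
  AUTO_PR_MERGE=false      # default
  for arg in "$@"; do
    case $arg in
      MAX_PARALLEL_STORIES=*) MAX_PARALLEL_STORIES=${arg#*=} ;;
      AUTO_PR_MERGE=*)        AUTO_PR_MERGE=${arg#*=} ;;
    esac
  done
}
resolve_overrides MAX_PARALLEL_STORIES=2 AUTO_PR_MERGE=true
echo "$MAX_PARALLEL_STORIES $AUTO_PR_MERGE"   # prints "2 true"
```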
+## Agent Harness Support
+
+BAD is harness-agnostic. Setup detects your installed harnesses (Claude Code, Cursor, GitHub Copilot, etc.) and configures platform-specific settings (models, rate limit thresholds, timer support) accordingly.
+
 ## Structure
 
 ```
-your-module/
+bmad-autonomous-development/
 ├── .claude-plugin/
-│   └── marketplace.json   # Module manifest (required for installation)
+│   └── marketplace.json   # Module manifest
 ├── skills/
-│   └── my-skill/          # Rename to your skill name
-│       ├── SKILL.md       # Skill instructions
-│       ├── prompts/       # Internal capability prompts (optional)
-│       ├── scripts/       # Deterministic scripts (optional)
-│       └── assets/        # Module registration files (optional)
-├── docs/                  # Documentation (optional, GitHub Pages ready)
-├── LICENSE
-└── README.md
+│   └── bad/
+│       ├── SKILL.md       # Main skill — coordinator logic
+│       ├── references/    # Phase-specific reference docs
+│       ├── assets/        # Module registration files
+│       └── scripts/       # Config merge scripts
+└── docs/
 ```
-## Building with BMad Builder
-
-You don't have to write skills from scratch. The [BMad Builder](https://bmad-builder-docs.bmad-method.org/) provides guided tools for creating production-quality skills:
-
-- **[Agent Builder](https://bmad-builder-docs.bmad-method.org/reference/builder-commands)** — Build agent skills through conversational discovery
-- **[Workflow Builder](https://bmad-builder-docs.bmad-method.org/reference/builder-commands)** — Build workflow and utility skills
-- **[Module Builder](https://bmad-builder-docs.bmad-method.org/reference/builder-commands)** — Package skills into an installable module with help system registration
-- **[Build Your First Module](https://bmad-builder-docs.bmad-method.org/tutorials/build-your-first-module)** — Full walkthrough from idea to distributable module
-
-The Module Builder can scaffold registration files (`module.yaml`, `module-help.csv`, merge scripts) so your module integrates with the BMad help system.
-
-## Adding More Skills
-
-Add skill directories under `skills/` and list them in `marketplace.json`:
-
-```json
-"skills": [
-  "./skills/my-agent",
-  "./skills/my-workflow"
-]
-```
-
-## Documentation
-
-A `docs/` folder is included for your module's documentation. Publish it with [GitHub Pages](https://docs.github.com/en/pages) or any static site host. For a richer docs site, consider [Starlight](https://starlight.astro.build/) (used by the official BMad modules).
-
-## Installation
-
-Users install your module with:
-
-```bash
-npx bmad-method install --custom-content https://github.com/your-org/your-module
-```
-
-See [Distribute Your Module](https://bmad-builder-docs.bmad-method.org/how-to/distribute-your-module) for full details on repo structure, the marketplace.json format, and versioning.
-
-## Publishing to the Marketplace
-
-Once your module is stable, you can list it in the [BMad Plugins Marketplace](https://github.com/bmad-code-org/bmad-plugins-marketplace) for broader discovery:
-
-1. Tag a release (e.g., `v1.0.0`)
-2. Open a PR to the marketplace repo adding a registry entry to `registry/community/`
-3. Your module goes through automated validation and manual review
-
-Review the marketplace [contribution guide](https://github.com/bmad-code-org/bmad-plugins-marketplace/blob/main/CONTRIBUTING.md) and [governance policy](https://github.com/bmad-code-org/bmad-plugins-marketplace/blob/main/GOVERNANCE.md) before submitting.
-
-## Resources
-
-- [BMad Method Documentation](https://docs.bmad-method.org/) — Core framework
-- [BMad Builder Documentation](https://bmad-builder-docs.bmad-method.org/) — Build agents, workflows, and modules
-- [BMad Plugins Marketplace](https://github.com/bmad-code-org/bmad-plugins-marketplace) — Registry, categories, and submission process
-
 ## License
 
-MIT — update `LICENSE` with your own copyright.
+MIT © 2026 Marie Stephen Leo
@@ -1,15 +1,100 @@
-# My Module
+# BAD — BMad Autonomous Development
 
-TODO: Replace with your module's documentation.
+> 🤖 Autonomous development orchestrator for the BMad Method. Runs fully autonomous parallel multi-agent pipelines through the full story lifecycle (create → dev → review → PR) driven by your sprint backlog and dependency graph.
 
-## Getting Started
+## Overview
 
-Describe how to install and use your module.
+BAD is a [BMad Method](https://docs.bmad-method.org/) module that automates your entire sprint execution. A lightweight coordinator orchestrates the pipeline — it never reads files or writes code itself. **Every unit of work is delegated to a dedicated subagent with a fresh context window**, keeping each agent fully focused on its single task.
 
-## Skills
+## Requirements
 
-List your module's skills and what they do.
+- [BMad Method](https://docs.bmad-method.org/) installed in your project
+- A sprint plan with epics, stories, and `sprint-status.yaml`
+- Git + GitHub CLI (`gh`) installed and authenticated:
+  1. `brew install gh`
+  2. `gh auth login`
+  3. Add to your `.zshrc` so BAD's subagents can connect to GitHub:
+     ```bash
+     export GITHUB_PERSONAL_ACCESS_TOKEN=$(gh auth token)
+     ```
+## Installation
+
+```bash
+npx bmad-method install --custom-content https://github.com/stephenleo/bmad-autonomous-development
+```
+
+Then run setup in your project:
+
+```
+/bad setup
+```
+
+## Usage
+
+```
+/bad
+```
+
+BAD can also be triggered naturally: *"run BAD"*, *"kick off the sprint"*, *"automate the sprint"*, *"start autonomous development"*, *"run the pipeline"*, *"start the dev pipeline"*
+
+Run with optional runtime overrides:
+
+```
+/bad MAX_PARALLEL_STORIES=2 AUTO_PR_MERGE=true MODEL_STANDARD=opus
+```
+
+## Pipeline
+
+Once your epics and stories are planned, BAD takes over:
+
+1. *(`MODEL_STANDARD` subagent)* Builds a dependency graph from your sprint backlog — maps story dependencies, syncs GitHub PR status, and identifies what's ready to work on
+2. Picks ready stories from the graph, respecting epic ordering and dependencies
+3. Runs up to `MAX_PARALLEL_STORIES` stories simultaneously, each through a sequential 4-step pipeline. **Every step runs in a dedicated subagent with a fresh context window**, keeping the coordinator lean and each agent fully focused on its single task:
+   - **Step 1** *(`MODEL_STANDARD` subagent)* — `bmad-create-story`: generates the story spec
+   - **Step 2** *(`MODEL_STANDARD` subagent)* — `bmad-dev-story`: implements the code
+   - **Step 3** *(`MODEL_QUALITY` subagent)* — `bmad-code-review`: reviews and fixes the implementation
+   - **Step 4** *(`MODEL_STANDARD` subagent)* — commit, push, open PR, monitor CI, fix any failing checks, resolve code review comments, and resolve merge conflicts
+4. *(`MODEL_STANDARD` subagent)* Optionally auto-merges batch PRs sequentially (lowest story number first), resolving any conflicts
+5. Waits, then loops back for the next batch — until the entire sprint is done
 ## Configuration
 
-Describe any configuration options (if applicable).
+BAD is configured at install time (`/bad setup`) and stores settings in `_bmad/bad/config.yaml`. All values can be overridden at runtime with `KEY=VALUE` args.
+
+| Variable | Config Key | Default | Description |
+|---|---|---|---|
+| `MAX_PARALLEL_STORIES` | `max_parallel_stories` | `3` | Stories to run per batch |
+| `WORKTREE_BASE_PATH` | `worktree_base_path` | `.worktrees` | Git worktree directory (relative to repo root) |
+| `MODEL_STANDARD` | `model_standard` | `sonnet` | Model for create, dev, and PR steps |
+| `MODEL_QUALITY` | `model_quality` | `opus` | Model for code review step |
+| `AUTO_PR_MERGE` | `auto_pr_merge` | `false` | Auto-merge PRs sequentially after each batch |
+| `RUN_CI_LOCALLY` | `run_ci_locally` | `false` | Run CI locally instead of GitHub Actions |
+| `WAIT_TIMER_SECONDS` | `wait_timer_seconds` | `3600` | Seconds to wait between batches |
+| `RETRO_TIMER_SECONDS` | `retro_timer_seconds` | `600` | Seconds before auto-retrospective after epic completion |
+| `CONTEXT_COMPACTION_THRESHOLD` | `context_compaction_threshold` | `80` | Context window % at which to compact context |
+| `TIMER_SUPPORT` | `timer_support` | `true` | Use native platform timers; `false` for prompt-based continuation |
+| `API_FIVE_HOUR_THRESHOLD` | `api_five_hour_threshold` | `80` | (Claude Code) 5-hour usage % at which to pause |
+| `API_SEVEN_DAY_THRESHOLD` | `api_seven_day_threshold` | `95` | (Claude Code) 7-day usage % at which to pause |
+| `API_USAGE_THRESHOLD` | `api_usage_threshold` | `80` | (Other harnesses) Generic usage % at which to pause |
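For reference, the stored settings might look roughly like this. The file layout is an assumption for illustration; only the key names come from the Config Key column above:

```yaml
# Hypothetical shape of _bmad/bad/config.yaml after /bad setup
max_parallel_stories: 3
worktree_base_path: .worktrees
model_standard: sonnet
model_quality: opus
auto_pr_merge: false
run_ci_locally: false
wait_timer_seconds: 3600
retro_timer_seconds: 600
```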
+## Agent Harness Support
+
+BAD is harness-agnostic. Setup detects your installed harnesses by checking for their directories at the project root (`.claude/` for Claude Code, `.cursor/` for Cursor, `.github/skills/` for GitHub Copilot, etc.) and configures platform-specific settings accordingly:
+
+- **Claude Code** — native timer support (`CronCreate`), Claude model names (`sonnet`/`opus`/`haiku`), 5-hour and 7-day rate limit thresholds
+- **Other harnesses** — prompt-based continuation, free-text model names, single generic usage threshold
+
+In multi-harness projects, setup runs once per detected harness and stores per-harness model settings (e.g. `claude_model_standard`, `cursor_model_standard`).
+
+## Reconfigure
+
+To update your configuration at any time:
+
+```
+/bad configure
+```
+
+## License
+
+MIT © 2026 Marie Stephen Leo
skills/bad/SKILL.md — 594 lines (new file)

@@ -0,0 +1,594 @@
---
name: bad
description: 'BMad Autonomous Development — orchestrates parallel story implementation pipelines. Builds a dependency graph, updates PR status from GitHub, picks stories from the backlog, and runs each through create → dev → review → PR in parallel using dedicated subagents with fresh context windows. Loops through the entire sprint plan in batches, with optional epic retrospective. Use when the user says "run BAD", "start autonomous development", "automate the sprint", "run the pipeline", "kick off the sprint", or "start the dev pipeline". Run /bad setup or /bad configure to install and configure the module.'
---
# BAD — BMad Autonomous Development

## On Activation

Check if `{project-root}/_bmad/config.yaml` contains a `bad` section. If not — or if the user passed `setup` or `configure` as an argument — load `./assets/module-setup.md` and complete registration before proceeding.

The `setup`/`configure` argument always triggers `./assets/module-setup.md`, even if the module is already registered (for reconfiguration).

After setup completes (or if config already exists), load the `bad` config and continue to Startup below.

You are a **coordinator**. You delegate every step to subagents. You never read files, run git/gh commands, or write to disk yourself.

**Coordinator-only responsibilities:**
- Pick stories from subagent-reported data
- Spawn subagents (in parallel where allowed)
- Manage timers (CronCreate / CronDelete)
- Run Pre-Continuation Checks (requires session stdin JSON — coordinator only)
- Handle user input, print summaries, and send channel notifications

**Everything else** — file reads, git operations, gh commands, disk writes — happens in subagents with fresh context.
## Startup: Capture Channel Context

Before doing anything else, determine how to send notifications:

1. **Check for a connected channel** — look at the current conversation context:
   - If you see a `<channel source="telegram" chat_id="..." ...>` tag, save `NOTIFY_CHAT_ID` and `NOTIFY_SOURCE="telegram"`.
   - If another channel type is connected, save its equivalent identifier.
   - If no channel is connected, set `NOTIFY_SOURCE="terminal"`.

2. **Send the BAD started notification** using the [Notify Pattern](#notify-pattern):
   ```
   🤖 BAD started — building dependency graph...
   ```

Then proceed to Phase 0.

---
## Configuration

Load base values from `_bmad/bad/config.yaml` at startup (via `/bmad-init --module bad --all`). Then parse any `KEY=VALUE` overrides from arguments passed to `/bad` — args win over config. For any variable not in config or args, use the default below.

| Variable | Config Key | Default | Description |
|----------|-----------|---------|-------------|
| `MAX_PARALLEL_STORIES` | `max_parallel_stories` | `3` | Max stories to run in a single batch |
| `WORKTREE_BASE_PATH` | `worktree_base_path` | `.worktrees` | Root directory for git worktrees |
| `MODEL_STANDARD` | `model_standard` | `sonnet` | Model for Steps 1, 2, 4 and Phase 3 (auto-merge) |
| `MODEL_QUALITY` | `model_quality` | `opus` | Model for Step 3 (code review) |
| `RETRO_TIMER_SECONDS` | `retro_timer_seconds` | `600` | Auto-retrospective countdown after epic completion (10 min) |
| `WAIT_TIMER_SECONDS` | `wait_timer_seconds` | `3600` | Post-batch wait before re-checking PR status (1 hr) |
| `CONTEXT_COMPACTION_THRESHOLD` | `context_compaction_threshold` | `80` | Context window % at which to compact/summarise context |
| `TIMER_SUPPORT` | `timer_support` | `true` | When `true`, use native platform timers; when `false`, use prompt-based continuation |
| `API_FIVE_HOUR_THRESHOLD` | `api_five_hour_threshold` | `80` | (Claude Code) 5-hour rate limit % that triggers a pause |
| `API_SEVEN_DAY_THRESHOLD` | `api_seven_day_threshold` | `95` | (Claude Code) 7-day rate limit % that triggers a pause |
| `API_USAGE_THRESHOLD` | `api_usage_threshold` | `80` | (Other harnesses) Generic API usage % that triggers a pause |
| `RUN_CI_LOCALLY` | `run_ci_locally` | `false` | When `true`, skip GitHub Actions and always run the local CI fallback |
| `AUTO_PR_MERGE` | `auto_pr_merge` | `false` | When `true`, auto-merge batch PRs sequentially (lowest → highest) before Phase 4 |

After resolving all values, print the active configuration so the user can confirm before Phase 0 begins:
```
⚙️ BAD config: MAX_PARALLEL_STORIES=3, RUN_CI_LOCALLY=false, AUTO_PR_MERGE=false, MODEL_STANDARD=sonnet, MODEL_QUALITY=opus, TIMER_SUPPORT=true, ...
```

---
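The pause thresholds above all reduce to the same numeric gate: pause when reported usage % meets or exceeds the configured threshold. A minimal sketch, with `should_pause` as a hypothetical helper rather than part of the skill:

```shell
# Hypothetical usage gate for the API_*_THRESHOLD settings above:
# returns success (0) when usage has reached the threshold.
should_pause() {
  usage=$1
  threshold=$2
  [ "$usage" -ge "$threshold" ]
}

if should_pause 85 80; then
  echo "pause"   # prints "pause": 85% usage is over an 80% threshold
fi
```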
## Pipeline

```
Phase 0: Build (or update) dependency graph [subagent]
  └─ bmad-help maps story dependencies
  └─ GitHub updates PR merge status per story
  └─ git pull origin main
  └─ Reports: ready stories, epic completion status
      │
Phase 1: Discover stories [coordinator logic]
  └─ Pick up to MAX_PARALLEL_STORIES from Phase 0 report
  └─ If none ready → skip to Phase 4
      │
Phase 2: Run the pipeline [subagents — stories parallel, steps sequential]
  ├─► Story A ──► Step 1 → Step 2 → Step 3 → Step 4
  ├─► Story B ──► Step 1 → Step 2 → Step 3 → Step 4
  └─► Story C ──► Step 1 → Step 2 → Step 3 → Step 4
      │
Phase 3: Auto-Merge Batch PRs [subagents — sequential]
  └─ One subagent per story (lowest → highest story number)
  └─ Cleanup subagent for branch safety + git pull
      │
Phase 4: Batch Completion & Continuation
  └─ Print batch summary [coordinator]
  └─ Epic completion check [subagent]
  └─ Optional retrospective [subagent]
  └─ Gate & Continue (WAIT_TIMER timer) → Phase 0 → Phase 1
```

---
## Phase 0: Build or Update the Dependency Graph

Spawn a **single `MODEL_STANDARD` subagent** (yolo mode) with these instructions. The coordinator waits for the report.

```
You are the Phase 0 dependency graph builder. Auto-approve all tool calls (yolo mode).

DECIDE how much to run based on whether the graph already exists:

| Situation                           | Action                                                   |
|-------------------------------------|----------------------------------------------------------|
| No graph (first run)                | Run all steps                                            |
| Graph exists, no new stories        | Skip steps 2–3; go to step 4. Preserve all chains.       |
| Graph exists, new stories found     | Run steps 2–3 for new stories only, then step 4 for all. |

BRANCH SAFETY — before anything else, ensure the repo root is on main:
  git branch --show-current
If not main:
  git checkout -- .
  git checkout main
  git pull --ff-only origin main
If checkout fails because a worktree claims the branch:
  git worktree list
  git worktree remove --force <path>
  git checkout main
  git pull --ff-only origin main

STEPS:

1. Read `_bmad-output/implementation-artifacts/sprint-status.yaml`. Note current story
   statuses. Compare against the existing graph (if any) to identify new stories.

2. Read `_bmad-output/planning-artifacts/epics.md` for dependency relationships of
   new stories. (Skip if no new stories.)

3. Run /bmad-help with the epic context for new stories — ask it to map their
   dependencies. Merge the result into the existing graph. (Skip if no new stories.)

4. Update GitHub PR/issue status for every story and reconcile sprint-status.yaml.
   Follow the procedure in `references/phase0-dependency-graph.md` exactly.

5. Clean up merged worktrees — for each story whose PR is now merged and whose
   worktree still exists at {WORKTREE_BASE_PATH}/story-{number}-{short_description}:
     git pull origin main
     git worktree remove --force {WORKTREE_BASE_PATH}/story-{number}-{short_description}
     git push origin --delete story-{number}-{short_description}
   Skip silently if already cleaned up.

6. Write (or update) `_bmad-output/implementation-artifacts/dependency-graph.md`.
   Follow the schema, Ready to Work rules, and example in
   `references/phase0-dependency-graph.md` exactly.

7. Pull latest main (if step 5 didn't already do so):
     git pull origin main

REPORT BACK to the coordinator with this structured summary:
- ready_stories: list of { number, short_description, status } for every story
  marked "Ready to Work: Yes" that is not done
- all_stories_done: true/false — whether every story across every epic is done
- current_epic: name/number of the lowest incomplete epic
- any warnings or blockers worth surfacing
```

The coordinator uses the report to drive Phase 1. No coordinator-side file reads.

📣 **Notify** after Phase 0:
```
📊 Phase 0 complete
Ready: {N} stories — {comma-separated story numbers}
Blocked: {N} stories (if any)
```

---
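The "Ready to Work" rule the graph encodes boils down to: a story is ready once every one of its dependencies is done. A minimal sketch; the `is_ready` helper and its `"dep1,dep2"` input format are assumptions for illustration, not the schema in `references/phase0-dependency-graph.md`:

```shell
# Hypothetical readiness check: $1 is a comma-separated dependency list,
# $2 is a space-separated list of completed story numbers.
is_ready() {
  deps=$1
  done_list=$2
  [ -z "$deps" ] && return 0            # no dependencies: always ready
  for d in $(printf '%s' "$deps" | tr ',' ' '); do
    case " $done_list " in
      *" $d "*) ;;                      # this dependency is done
      *) return 1 ;;                    # any missing dependency blocks it
    esac
  done
}

is_ready "3,4" "1 2 3 4" && echo ready    # prints "ready"
is_ready "3,5" "1 2 3 4" || echo blocked  # prints "blocked": 5 not done
```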
## Phase 1: Discover Stories

Pure coordinator logic — no file reads, no tool calls.

1. From Phase 0's `ready_stories` report, select at most `MAX_PARALLEL_STORIES` stories.
   - **Epic ordering is strictly enforced:** only pick stories from the lowest incomplete epic. Never pick a story from epic N if any story in epic N-1 (or earlier) is not yet merged — check this against the Phase 0 report.
2. **If no stories are ready** → report to the user which stories are blocked (from Phase 0 warnings), then jump to **Phase 4, Step 3 (Gate & Continue)**.

> **Why epic ordering matters:** Stories in later epics build on earlier epics' code and product foundation. Starting epic 3 while epic 2 has open PRs risks merge conflicts and building on code that may still change.

---
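The selection rule above can be sketched as a small filter: find the lowest epic with ready stories, then take at most `MAX_PARALLEL_STORIES` from it. The `epic:story` input format and the `pick_batch` helper are hypothetical, for illustration only:

```shell
# Hypothetical Phase 1 selection: args are a batch size followed by
# ready stories as "epic:story" pairs; prints the selected story numbers.
pick_batch() {
  max=$1
  shift
  # lowest epic number among the ready stories
  lowest=$(printf '%s\n' "$@" | cut -d: -f1 | sort -n | head -1)
  # keep only that epic's stories, capped at $max
  printf '%s\n' "$@" | awk -F: -v e="$lowest" -v m="$max" \
    '$1 == e && n < m { print $2; n++ }'
}

pick_batch 2 2:7 1:3 1:4 1:5
# prints:
# 3
# 4
```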
## Phase 2: Run the Pipeline

Launch all stories' Step 1 subagents **in a single message** (parallel). Each story's steps are **strictly sequential** — do not spawn step N+1 until step N reports success.

**Skip steps based on story status** (from Phase 0 report):

| Status          | Start from | Skip      |
|-----------------|------------|-----------|
| `backlog`       | Step 1     | nothing   |
| `ready-for-dev` | Step 2     | Step 1    |
| `in-progress`   | Step 2     | Step 1    |
| `review`        | Step 3     | Steps 1–2 |
| `done`          | —          | all       |

**After each step:** run **Pre-Continuation Checks** (see `references/pre-continuation-checks.md`) before spawning the next subagent. Pre-Continuation Checks are the only coordinator-side work between steps.

**On failure:** stop that story's pipeline. Report step, story, and error. Other stories continue.
**Exception:** rate/usage limit failures → run Pre-Continuation Checks (which auto-pauses until reset) then retry.

📣 **Notify per story** as each pipeline concludes (Step 4 success or any step failure):
- Success: `✅ Story {number} done — PR #{pr_number}`
- Failure: `❌ Story {number} failed at Step {N} — {brief error}`
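The status table above is effectively a lookup from sprint status to starting step. A sketch, where `start_step_for` is a hypothetical helper and `0` stands in for "nothing to run":

```shell
# Hypothetical mapping of sprint-status values to the pipeline step
# a story should start from (0 = story is done, run nothing).
start_step_for() {
  case $1 in
    backlog)                   echo 1 ;;
    ready-for-dev|in-progress) echo 2 ;;
    review)                    echo 3 ;;
    done)                      echo 0 ;;
    *)                         echo 1 ;;   # unknown status: start from scratch
  esac
}

start_step_for review   # prints "3"
```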
### Step 1: Create Story (`MODEL_STANDARD`)

Spawn with model `MODEL_STANDARD` (yolo mode):
```
You are the Step 1 story creator for story {number}-{short_description}.
Working directory: {repo_root}. Auto-approve all tool calls (yolo mode).

1. Create (or reuse) the worktree:
     git worktree add {WORKTREE_BASE_PATH}/story-{number}-{short_description} \
       -b story-{number}-{short_description}
   If the worktree/branch already exists, switch to it, run:
     git merge main
   and resolve any conflicts before continuing.

2. Change into the worktree directory:
     cd {repo_root}/{WORKTREE_BASE_PATH}/story-{number}-{short_description}

3. Run /bmad-create-story {number}-{short_description}.

4. Update sprint-status.yaml at the REPO ROOT (not the worktree copy):
     _bmad-output/implementation-artifacts/sprint-status.yaml
   Set story {number} status to `ready-for-dev`.

Report: success or failure with error details.
```
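The `story-{number}-{short_description}` name used for both the branch and the worktree can be sketched as a slug helper. `story_branch` is a hypothetical illustration, not part of the skill:

```shell
# Hypothetical helper: derive the story branch/worktree name,
# lowercasing the description and replacing spaces with dashes.
story_branch() {
  number=$1
  desc=$2
  slug=$(printf '%s' "$desc" \
    | tr '[:upper:]' '[:lower:]' \
    | tr ' ' '-' \
    | tr -cd 'a-z0-9-')     # drop anything that is not slug-safe
  printf 'story-%s-%s' "$number" "$slug"
}

story_branch 12 "User Login"   # prints "story-12-user-login"
```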
### Step 2: Develop Story (`MODEL_STANDARD`)

Spawn with model `MODEL_STANDARD` (yolo mode):
```
You are the Step 2 developer for story {number}-{short_description}.
Working directory: {repo_root}/{WORKTREE_BASE_PATH}/story-{number}-{short_description}.
Auto-approve all tool calls (yolo mode).

1. Run /bmad-dev-story {number}-{short_description}.
2. Commit all changes when implementation is complete.
3. Update sprint-status.yaml at the REPO ROOT:
     {repo_root}/_bmad-output/implementation-artifacts/sprint-status.yaml
   Set story {number} status to `review`.

Report: success or failure with error details.
```
### Step 3: Code Review (`MODEL_QUALITY`)

Spawn with model `MODEL_QUALITY` (yolo mode):
```
You are the Step 3 code reviewer for story {number}-{short_description}.
Working directory: {repo_root}/{WORKTREE_BASE_PATH}/story-{number}-{short_description}.
Auto-approve all tool calls (yolo mode).

1. Run /bmad-code-review {number}-{short_description}.
2. Auto-accept all findings and apply fixes using your best engineering judgement.
3. Commit any changes from the review.

Report: success or failure with error details.
```
### Step 4: PR & CI (`MODEL_STANDARD`)

Spawn with model `MODEL_STANDARD` (yolo mode):

```
You are the Step 4 PR and CI agent for story {number}-{short_description}.
Working directory: {repo_root}/{WORKTREE_BASE_PATH}/story-{number}-{short_description}.
Auto-approve all tool calls (yolo mode).

1. Commit all outstanding changes.

2. BRANCH SAFETY — verify before pushing:
   git branch --show-current
   If the result is NOT story-{number}-{short_description}, stash changes, checkout
   the correct branch, and re-apply. Never push to main or create a new branch.

3. Run /commit-commands:commit-push-pr.
   PR title: story-{number}-{short_description} - fixes #{gh_issue_number}
   (look up gh_issue_number from the epic file or sprint-status.yaml; omit "fixes #" if none)
   Add "Fixes #{gh_issue_number}" to the PR description if an issue number exists.

4. CI:
   - If RUN_CI_LOCALLY is true → skip GitHub Actions and run the Local CI Fallback below.
   - Otherwise monitor CI in a loop:
       gh run view
     - Billing/spending limit error → exit loop, run Local CI Fallback
     - CI failed for other reason, or Claude bot left PR comments → fix, push, loop
     - CI green → proceed to step 5

   LOCAL CI FALLBACK (when RUN_CI_LOCALLY=true or billing-limited):
   a. Read all .github/workflows/ files triggered on pull_request events.
   b. Extract and run shell commands from each run: step in order (respecting
      working-directory). If any fail, diagnose, fix, and re-run until all pass.
   c. Commit fixes and push to the PR branch.
   d. Post a PR comment:
      ## Test Results (manual — GitHub Actions skipped: billing/spending limit reached)
      | Check | Status | Notes |
      |-------|--------|-------|
      | `<command>` | ✅ Pass / ❌ Fail | e.g. "42 tests passed" |
      ### Fixes applied
      - [failure] → [fix]
   All rows must show ✅ Pass before this step is considered complete.

5. CODE REVIEW — spawn a dedicated MODEL_DEV subagent (yolo mode) after CI passes:

    ```
    You are the code review agent for story {number}-{short_description}.
    Working directory: {repo_root}/{WORKTREE_BASE_PATH}/story-{number}-{short_description}.
    Auto-approve all tool calls (yolo mode).

    1. Run /code-review:code-review (reads the PR diff via gh pr diff).
    2. For every finding, apply a fix using your best engineering judgement.
       Do not skip or defer any finding — fix them all.
    3. Commit all fixes and push to the PR branch.
    4. If any fixes were pushed, re-run /code-review:code-review once more to confirm
       no new issues were introduced. Repeat fix → commit → push → re-review until
       the review comes back clean.

    Report: clean (no findings or all fixed) or failure with details.
    ```

   Wait for the subagent to report before continuing. If it reports failure,
   stop this story and surface the error.

6. Update sprint-status.yaml at the REPO ROOT:
   {repo_root}/_bmad-output/implementation-artifacts/sprint-status.yaml
   Set story {number} status to `done`.

Report: success or failure, and the PR number/URL if opened.
```

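The CI loop in item 4 is a poll-until-terminal-state machine. The following is a control-flow sketch only: `ci_state` and `apply_fixes_and_push` are hypothetical stand-ins — in the real flow the agent would parse `gh run view` output and perform actual fixes.

```shell
# Control-flow sketch of the Step 4 CI loop.
# ci_state and apply_fixes_and_push are hypothetical stand-ins for the agent's
# real actions (parsing `gh run view`, fixing and pushing).
monitor_ci() {
    while :; do
        state=$(ci_state)
        case "$state" in
            green)   echo "ci-green";       return 0 ;;   # proceed to code review
            billing) echo "local-fallback"; return 0 ;;   # run Local CI Fallback
            failed)  echo "fixing"; apply_fixes_and_push ;;  # fix, push, loop
            *)       sleep 30 ;;                             # checks still running
        esac
    done
}
```

The three terminal-ish outcomes map one-to-one onto the bullets above: green proceeds, billing drops to the local fallback, and failure loops through a fix-and-push cycle.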
---

## Phase 3: Auto-Merge Batch PRs (when `AUTO_PR_MERGE=true`)

After all batch stories complete Phase 2, merge every successful story's PR into `main` — one subagent per story, **sequentially** (lowest story number first).

> **Why sequential:** Merging lowest-first ensures each subsequent merge rebases against a main that already contains its predecessors — keeping conflict resolution incremental and predictable.

**Steps:**

1. Collect all stories from the current batch that reached Step 4 successfully (have a PR). Sort ascending by story number.
2. For each story **sequentially** (wait for each to complete before starting the next):
   - Pull the latest main at the repo root — either via a quick dedicated subagent or as the first action of the merge subagent.
   - Spawn a `MODEL_STANDARD` subagent (yolo mode) with the instructions from `references/phase4-auto-merge.md`.
   - Run Pre-Continuation Checks after the subagent completes. If the subagent fails (unresolvable conflict, CI blocking), report the error and continue to the next story.
3. Print a merge summary (the coordinator formats it from subagent reports):

   ```
   Auto-Merge Results:
   Story | PR | Outcome
   --------|-------|--------
   6.1 | #142 | Merged ✅
   6.2 | #143 | Merged ✅ (conflict resolved: src/foo.ts)
   6.3 | #144 | Failed ❌ (CI blocking — manual merge required)
   ```

   📣 **Notify** after all merges are processed (coordinator formats from subagent reports):

   ```
   🔀 Auto-merge complete
   {story}: ✅ PR #{pr} | {story}: ✅ PR #{pr} (conflict resolved) | {story}: ❌ manual merge needed
   ```

4. Spawn a **cleanup subagent** (`MODEL_STANDARD`, yolo mode):

   ```
   Post-merge cleanup. Auto-approve all tool calls (yolo mode).

   1. Verify sprint-status.yaml at the repo root has status `done` for all merged stories.
      Fix any that are missing.

   2. Repo root branch safety check:
      git branch --show-current
      If not main:
        git checkout -- .
        git checkout main
        git reset --hard origin/main
      If checkout fails because a worktree claims the branch:
        git worktree list
        git worktree remove --force <path>
        git checkout main
        git reset --hard origin/main

   3. Pull main:
      git pull --ff-only origin main

   Report: done or any errors encountered.
   ```

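The branch-safety portion of the cleanup prompt can be sketched as a standalone helper. This is a sketch, and it is destructive by design (it discards local changes); unlike the prompt, it stops short of `git worktree remove --force` and surfaces that case for the agent to handle.

```shell
# Sketch of the cleanup branch-safety check. DESTRUCTIVE: discards local changes.
# The worktree-owns-the-branch case is surfaced rather than force-removed here.
ensure_main() {
    cur=$(git branch --show-current)
    [ "$cur" = "main" ] && return 0
    git checkout -- .                 # drop local modifications
    if ! git checkout main; then
        git worktree list             # a worktree likely claims main
        return 1                      # surface for the force-remove path
    fi
    git reset --hard origin/main
}
```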
---

## Phase 4: Batch Completion & Continuation

### Step 1: Print Batch Summary

The coordinator prints this immediately — no file reads; it formats the table from the Phase 2 step results:

```
Story | Step 1 | Step 2 | Step 3 | Step 4 | Result
--------|--------|--------|--------|--------|-------
9.1 | OK | OK | OK | OK | PR #142
9.2 | OK | OK | FAIL | -- | Review failed: ...
9.3 | OK | OK | OK | OK | PR #143
```

If arriving from Phase 1 with no ready stories:

```
No stories ready to work on.
Blocked stories: {from Phase 0 report}
```

📣 **Notify** with the batch summary (same content, condensed to one line per story):

```
📦 Batch complete — {N} stories
{number} ✅ PR #{pr} | {number} ❌ Step {N} | ...
```

Or if no stories were ready: `⏸ No stories ready — waiting for PRs to merge`

### Step 2: Check for Epic Completion

Spawn an **assessment subagent** (`MODEL_STANDARD`, yolo mode):

```
Epic completion assessment. Auto-approve all tool calls (yolo mode).

Read:
- _bmad-output/planning-artifacts/epics.md
- _bmad-output/implementation-artifacts/sprint-status.yaml

Report back:
- current_epic_complete: true/false (all stories done or have open PRs)
- all_epics_complete: true/false (every story across every epic is done)
- current_epic_name: name/number of the lowest incomplete epic
- next_epic_name: name/number of the next epic (if any)
- stories_remaining: count of non-done stories in the current epic
```

Using the assessment report:

**If `current_epic_complete = true`:**

1. Print: `🎉 Epic {current_epic_name} is complete! Starting retrospective countdown ({RETRO_TIMER_SECONDS ÷ 60} minutes)...`

   📣 **Notify:** `🎉 Epic {current_epic_name} complete! Running retrospective in {RETRO_TIMER_SECONDS ÷ 60} min...`
2. Start a timer using the **[Timer Pattern](#timer-pattern)** with:
   - **Duration:** `RETRO_TIMER_SECONDS`
   - **Fire prompt:** `"BAD_RETRO_TIMER_FIRED — The retrospective countdown has elapsed. Auto-run the retrospective: spawn a MODEL_DEV subagent (yolo mode) to run /bmad-retrospective, accept all changes. Run Pre-Continuation Checks after it completes, then proceed to Phase 4 Step 3."`
   - **[C] label:** `Run retrospective now`
   - **[S] label:** `Skip retrospective`
   - **[C] / FIRED action:** Spawn a MODEL_DEV subagent (yolo mode) to run `/bmad-retrospective`. Accept all changes. Run Pre-Continuation Checks after it completes.
   - **[S] action:** Skip the retrospective.
3. Proceed to Step 3 after the retrospective decision resolves.

### Step 3: Gate & Continue

Using the assessment report from Step 2, follow the applicable branch:

**Branch A — All epics complete (`all_epics_complete = true`):**

```
🏁 All epics are complete — sprint is done! BAD is stopping.
```

📣 **Notify:** `🏁 Sprint complete — all epics done! BAD is stopping.`

**Branch B — More work remains:**

1. Print a status line:
   - Epic just completed: `✅ Epic {current_epic_name} complete. Next up: Epic {next_epic_name} ({stories_remaining} stories remaining).`
   - More stories in current epic: `✅ Batch complete. Ready for the next batch.`
2. Start a timer using the **[Timer Pattern](#timer-pattern)** with:
   - **Duration:** `WAIT_TIMER_SECONDS`
   - **Fire prompt:** `"BAD_WAIT_TIMER_FIRED — The post-batch wait has elapsed. Run Pre-Continuation Checks, then re-run Phase 0, then proceed to Phase 1."`
   - **[C] label:** `Continue now`
   - **[S] label:** `Stop BAD`
   - **[C] / FIRED action:** Run Pre-Continuation Checks, then re-run Phase 0.
   - **[S] action:** Stop BAD, print a final summary, and 📣 **Notify:** `🛑 BAD stopped by user.`
3. After Phase 0 completes:
   - At least one story unblocked → proceed to Phase 1.
   - All stories still blocked → print which PRs are pending (from the Phase 0 report), then restart Branch B for another `WAIT_TIMER_SECONDS` countdown.

---

## Notify Pattern

Use this pattern every time a `📣 Notify:` callout appears **anywhere in this skill** — including inside the Timer Pattern.

**If `NOTIFY_SOURCE="telegram"`:** call `mcp__plugin_telegram_telegram__reply` with:

- `chat_id`: `NOTIFY_CHAT_ID`
- `text`: the message

**If `NOTIFY_SOURCE="terminal"`** (or if the Telegram tool call fails): print the message in the conversation as a normal response.

When a channel is configured, always send both a terminal print and the channel message — the terminal print keeps the in-session transcript readable, and the channel message reaches the user on their device.

---

## Timer Pattern

Both the retrospective and post-batch wait timers use this pattern. The caller supplies the duration, fire prompt, option labels, and actions.

Behaviour depends on `TIMER_SUPPORT`:

---

### If `TIMER_SUPPORT=true` (native platform timers)

**Step 1 — compute the target cron expression** (convert seconds to minutes: `SECONDS ÷ 60`):

```bash
# macOS
date -v +{N}M '+%M %H %d %m *'
# Linux
date -d '+{N} minutes' '+%M %H %d %m *'
```

Save as `CRON_EXPR`. Save `TIMER_START=$(date +%s)`.

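If a single snippet is preferred over per-OS branches, the two `date` variants can be wrapped in one helper that probes GNU `date` (Linux) first and falls back to the BSD/macOS flags. A sketch under that assumption:

```shell
# Compute a one-shot cron expression for N minutes from now.
# Tries GNU date first; falls back to BSD/macOS date if -d is unsupported.
cron_in_minutes() {
    n="$1"
    date -d "+${n} minutes" '+%M %H %d %m *' 2>/dev/null \
        || date -v "+${n}M" '+%M %H %d %m *'
}

CRON_EXPR=$(cron_in_minutes 60)   # e.g. WAIT_TIMER_SECONDS ÷ 60
TIMER_START=$(date +%s)
```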
**Step 2 — create the one-shot timer** via `CronCreate`:

- `cron`: expression from Step 1
- `recurring`: `false`
- `prompt`: the caller-supplied fire prompt

Save the returned job ID as `JOB_ID`.

**Step 3 — print the options menu** (always all three options):

> Timer running (job: {JOB_ID}). I'll act in {N} minutes.
>
> - **[C] Continue** — {C label}
> - **[S] Stop** — {S label}
> - **[M] `<minutes>`** — modify the timer to that many minutes (shorten or extend the countdown)

📣 **Notify** using the [Notify Pattern](#notify-pattern) with the same options so the user can respond from their device:

```
⏱ Timer set — {N} minutes (job: {JOB_ID})

[C] {C label}
[S] {S label}
[M] <minutes> — modify countdown
```

Wait for whichever arrives first — user reply or fired prompt. On any human reply, print elapsed time first:

```bash
ELAPSED=$(( $(date +%s) - TIMER_START ))
echo "⏱ Time elapsed: $((ELAPSED / 60))m $((ELAPSED % 60))s"
```

- **[C]** → `CronDelete(JOB_ID)`, run the [C] action
- **[S]** → `CronDelete(JOB_ID)`, run the [S] action
- **[M] N** → `CronDelete(JOB_ID)`, recompute the cron expression for N minutes from now, `CronCreate` again with the same fire prompt, update `JOB_ID` and `TIMER_START`, print the updated countdown, then 📣 **Notify** using the [Notify Pattern](#notify-pattern):

  ```
  ⏱ Timer updated — {N} minutes (job: {JOB_ID})

  [C] {C label}
  [S] {S label}
  [M] <minutes> — modify countdown
  ```

- **FIRED (no prior reply)** → run the [C] action automatically

---

### If `TIMER_SUPPORT=false` (prompt-based continuation)

Save `TIMER_START=$(date +%s)`. No native timer is created — print the options menu immediately and wait for a user reply:

> Waiting {N} minutes before continuing. Reply when ready.
>
> - **[C] Continue** — {C label}
> - **[S] Stop** — {S label}
> - **[M] N** — remind me after N minutes (reply with `[M] <minutes>`)

📣 **Notify** using the [Notify Pattern](#notify-pattern) with the same options.

On any human reply, print elapsed time first:

```bash
ELAPSED=$(( $(date +%s) - TIMER_START ))
echo "⏱ Time elapsed: $((ELAPSED / 60))m $((ELAPSED % 60))s"
```

- **[C]** → run the [C] action
- **[S]** → run the [S] action
- **[M] N** → update `TIMER_START`, print the updated wait message, 📣 **Notify**, and wait again

---

## Rules

1. **Delegate mode only** — never read files, run git/gh commands, or write to disk yourself. The only platform command the coordinator may run directly is context compaction via Pre-Continuation Checks (when `CONTEXT_COMPACTION_THRESHOLD` is exceeded). All other slash commands and operations are delegated to subagents.
2. **One subagent per step per story** — spawn only after the previous step reports success.
3. **Sequential steps within a story** — Steps 1→2→3→4 run strictly in order.
4. **Parallel stories** — launch all stories' Step 1 in one message (one tool call per story). Phase 3 runs sequentially by design.
5. **Dependency graph is authoritative** — never pick a story whose dependencies are not fully merged. Use Phase 0's report, not your own file reads.
6. **Phase 0 runs before every batch** — always after the Phase 4 wait, always as a fresh subagent.
7. **Confirm success** before spawning the next subagent.
8. **sprint-status.yaml is updated by step subagents** — each step subagent writes to the repo root copy. The coordinator never does this directly.
9. **On failure** — report the error, halt that story. No auto-retry. **Exception:** rate/usage limit failures → run Pre-Continuation Checks (auto-pauses until reset), then retry.
10. **Issue all Step 1 subagent calls in one response** when Phase 2 begins. After each story's Step 1 completes, issue that story's Step 2 — never wait for all stories' Step 1 to finish before issuing any Step 2.
skills/bad/assets/module-help.csv — new file (3 lines)

```csv
module,skill,display-name,menu-code,description,action,args,phase,after,before,required,output-location,outputs
bad,bad,Run BAD Pipeline,RN,"Runs the full autonomous development loop — dependency graph, story pipeline, auto-merge, and continuation",run,[KEY=VALUE...],3-development,bmad-sprint-planning:run,,false,,
bad,bad,Configure BAD,SU,Set up or reconfigure BAD for this project,configure,[setup|configure],anytime,,,false,,
```

skills/bad/assets/module-setup.md — new file (153 lines)

# BAD Module Setup

Standalone module self-registration for BMad Autonomous Development. This file is loaded when:

- The user passes `setup`, `configure`, or `install` as an argument
- The module is not yet registered in `{project-root}/_bmad/bad/config.yaml`

## Overview

Registers BAD into a project. Writes to:

- **`{project-root}/_bmad/config.yaml`** — shared project config (universal settings + harness-specific settings)
- **`{project-root}/_bmad/config.user.yaml`** — personal settings (gitignored): `user_name`, `communication_language`, and any `user_setting: true` variable
- **`{project-root}/_bmad/module-help.csv`** — registers BAD capabilities for the help system

Both config scripts use an anti-zombie pattern — existing `bad` entries are removed before writing fresh ones, so stale values never persist.

`{project-root}` is a **literal token** in config values — never substitute it with an actual path.

## Step 1: Check Existing Config

1. Read `./assets/module.yaml` for module metadata.
2. Check if `{project-root}/_bmad/config.yaml` has a `bad` section — if so, inform the user this is a reconfiguration and show existing values as defaults.
3. Check for inline args (e.g. `accept all defaults`, `--headless`, or `MAX_PARALLEL_STORIES=5`) — map any provided values to config keys, use defaults for the rest, and skip prompting for those keys.

## Step 2: Detect Installed Harnesses

Check for the presence of harness directories at the project root:

| Directory | Harness |
|---|---|
| `.claude/` | `claude-code` |
| `.cursor/` | `cursor` |
| `.github/skills/` | `github-copilot` (use the `/skills/` subfolder to avoid a false positive on a bare `.github/`) |
| `.codex/` | `openai-codex` |
| `.gemini/` | `gemini` |
| `.windsurf/` | `windsurf` |
| `.cline/` | `cline` |

Store all detected harnesses. Determine the **current harness** from this skill's own file path — whichever harness directory contains this running skill is the current harness. Use the current harness to drive the question branch in Step 3.

## Step 3: Collect Configuration

Show defaults in brackets. Present all values together so the user can respond once with only what they want to change. Never say "press enter" or "leave blank".

**Default priority** (highest wins): existing config values > `./assets/module.yaml` defaults.

### Core Config (only if not yet set)

Only collect if no core keys exist in `config.yaml` or `config.user.yaml`:

- `user_name` (default: BMad) — written exclusively to `config.user.yaml`
- `communication_language` and `document_output_language` (default: English — ask as a single language question; both keys get the same answer) — `communication_language` written exclusively to `config.user.yaml`
- `output_folder` (default: `{project-root}/_bmad-output`) — written to the root of `config.yaml`, shared across all modules

### Universal BAD Config

Read from `./assets/module.yaml` and present as a grouped block:

- `max_parallel_stories` — Max stories to run in a single batch [3]
- `worktree_base_path` — Root directory for git worktrees, relative to repo root [.worktrees]
- `auto_pr_merge` — Auto-merge batch PRs sequentially after each batch? [No]
- `run_ci_locally` — Skip GitHub Actions and run CI locally by default? [No]
- `wait_timer_seconds` — Seconds to wait between batches before re-checking PR status [3600]
- `retro_timer_seconds` — Seconds before auto-running the retrospective after epic completion [600]
- `context_compaction_threshold` — Context window % at which to compact/summarise context [80]

### Harness-Specific Config

Run once for the **current harness**. If multiple harnesses are detected, also offer to configure each additional harness in sequence after the current one — label each section clearly.

When configuring multiple harnesses, model and threshold variables are stored with a harness prefix (e.g. `claude_model_standard`, `cursor_model_standard`) so they coexist. Universal variables are shared and asked only once.

#### Claude Code (`claude-code`)

Present as **"Claude Code settings"**:

- `model_standard` — Model for story creation, dev, and PR steps
  - Choose: `sonnet` (default), `haiku`
- `model_quality` — Model for the code review step
  - Choose: `opus` (default), `sonnet`
- `api_five_hour_threshold` — 5-hour API usage % at which to pause [80]
- `api_seven_day_threshold` — 7-day API usage % at which to pause [95]

Automatically write `timer_support: true` — no prompt needed.

#### All Other Harnesses

Present as **"{HarnessName} settings"**:

- `model_standard` — Model for story creation, dev, and PR steps (e.g. `fast`, `gpt-4o-mini`, `flash`)
- `model_quality` — Model for the code review step (e.g. `best`, `o1`, `pro`)
- `api_usage_threshold` — API usage % at which to pause for rate limits [80]

Automatically write `timer_support: false` — no prompt needed. BAD will use prompt-based continuation instead of native timers on these harnesses.

## Step 4: Write Files

Write a temp JSON file with the collected answers, structured as:

```json
{
  "core": { "user_name": "...", "document_output_language": "...", "output_folder": "..." },
  "bad": {
    "max_parallel_stories": "3",
    "worktree_base_path": ".worktrees",
    "auto_pr_merge": false,
    "run_ci_locally": false,
    "wait_timer_seconds": "3600",
    "retro_timer_seconds": "600",
    "context_compaction_threshold": "80",
    "timer_support": true,
    "model_standard": "sonnet",
    "model_quality": "opus",
    "api_five_hour_threshold": "80",
    "api_seven_day_threshold": "95"
  }
}
```

Omit the `core` key if core config already exists. Run both scripts in parallel:

```bash
python3 ./scripts/merge-config.py \
  --config-path "{project-root}/_bmad/config.yaml" \
  --user-config-path "{project-root}/_bmad/config.user.yaml" \
  --module-yaml ./assets/module.yaml \
  --answers {temp-file}

python3 ./scripts/merge-help-csv.py \
  --target "{project-root}/_bmad/module-help.csv" \
  --source ./assets/module-help.csv \
  --module-code bad
```

If either exits non-zero, surface the error and stop.

Run `./scripts/merge-config.py --help` or `./scripts/merge-help-csv.py --help` for full usage.

## Step 5: Create Directories

After writing config, create the worktree base directory at the resolved path of `{project-root}/{worktree_base_path}` if it does not exist. Use the actual resolved path for filesystem operations only — config values must continue to use the literal `{project-root}` token.

Also create `output_folder` and any other `{project-root}/`-prefixed values from the config that don't exist on disk.

## Step 6: Confirm and Greet

Display what was written: config values set, user settings written, help entries registered, and whether this was a fresh install or a reconfiguration.

Then display the module greeting:

> BAD is ready. Run /bad to start. Pass KEY=VALUE args to override config at runtime (e.g. /bad MAX_PARALLEL_STORIES=2).

## Return to Skill

Setup is complete. Resume normal BAD activation — load config from the freshly written files and proceed with whatever the user originally intended.

skills/bad/assets/module.yaml — new file (41 lines)

```yaml
code: bad
name: "BMad Autonomous Development"
description: "Orchestrates parallel BMad story implementation pipelines — automatically runs bmad-create-story, bmad-dev-story, bmad-code-review, and commit/PR in batches, driven by the sprint backlog and dependency graph"
module_version: "1.0.0"
module_greeting: "BAD is ready. Run /bad to start. Pass KEY=VALUE args to override config at runtime (e.g. /bad MAX_PARALLEL_STORIES=2)."

header: "BAD — BMad Autonomous Development"
subheader: "Configure the autonomous development pipeline for this project.\nHarness-specific settings (models, rate limit thresholds, timer support) are detected automatically from your installed harnesses."

max_parallel_stories:
  prompt: "Max stories to run in a single batch"
  default: "3"
  result: "{value}"

worktree_base_path:
  prompt: "Root directory for git worktrees (relative to repo root)"
  default: ".worktrees"
  result: "{value}"

auto_pr_merge:
  prompt: "Auto-merge batch PRs sequentially after each batch?"
  default: false

run_ci_locally:
  prompt: "Skip GitHub Actions and run CI locally by default?"
  default: false

wait_timer_seconds:
  prompt: "Seconds to wait between batches before re-checking PR status"
  default: "3600"
  result: "{value}"

retro_timer_seconds:
  prompt: "Seconds before auto-running retrospective after epic completion"
  default: "600"
  result: "{value}"

context_compaction_threshold:
  prompt: "Context window % at which to compact/summarise context"
  default: "80"
  result: "{value}"
```

skills/bad/references/phase0-dependency-graph.md — new file (73 lines)

# Phase 0: Dependency Graph — Detailed Reference

Read this file during Phase 0 steps 4 and 6.

---

## Step 4: Update GitHub PR/Issue Status and Reconcile sprint-status.yaml

GitHub PR merge status is the **authoritative source of truth** for whether a story is `done`. This step always runs, even on resume.

### PR Status Lookup

Search by branch name:

```bash
gh pr list --search "story-{number}" --state all --json number,title,state,mergedAt
```

### GitHub Issue Number Lookup

Resolve in this order:

1. Check the epic file and `sprint-status.yaml` for an explicit issue reference.
2. If not found, search by the BMad issue title prefix `"Story {number}:"`:

   ```bash
   gh issue list --search "Story 3.1:" --json number,title,state
   ```

   Pick the best match by comparing titles.
3. If still not found, leave the Issue column blank.

### Reconcile sprint-status.yaml from PR Status

After updating PR statuses, sync `_bmad-output/implementation-artifacts/sprint-status.yaml` at the **repo root** to match. For every story whose PR status is now `merged`, set its sprint-status entry to `done` (regardless of what the file currently says).

This repair step handles cases where sprint-status.yaml was reset or reverted by a git operation — **GitHub is always right**.

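As an illustration only — assuming sprint-status.yaml is a flat `story: status` mapping, which the real schema may well not be — the forced `done` update for one story could look like:

```shell
# Sketch: force a story's status to `done` in sprint-status.yaml.
# ASSUMES a flat "  <story>: <status>" mapping; adapt to the real schema.
mark_done() {
    file="$1"; story="$2"
    # Escape the dot in story numbers like 1.1 for the sed pattern.
    re=$(printf '%s' "$story" | sed 's/[.]/\\./g')
    sed -i.bak "s/^\\([[:space:]]*${re}:\\).*/\\1 done/" "$file" && rm -f "$file.bak"
}
```

The anchored `^...:` pattern keeps `1.1` from also matching `1.10`, and `-i.bak` works with both GNU and BSD sed.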
---

## Step 6: Dependency Graph Format

Write from scratch on first run. On subsequent runs, update only the columns that change (Sprint Status, Issue, PR, PR Status, Ready to Work), add new rows for new stories, and preserve all existing dependency chain data.

### Schema

```markdown
# Story Dependency Graph
_Last updated: {ISO timestamp}_

## Stories

| Story | Epic | Title | Sprint Status | Issue | PR | PR Status | Dependencies | Ready to Work |
|-------|------|-------|---------------|-------|-----|-----------|--------------|---------------|
| 1.1 | 1 | ... | done | #10 | #42 | merged | none | ✅ Yes (done) |
| 1.2 | 1 | ... | backlog | #11 | #43 | open | 1.1 | ❌ No (1.1 not merged) |
| 1.3 | 1 | ... | backlog | #12 | — | — | none | ✅ Yes |
| 2.1 | 2 | ... | backlog | #13 | — | — | none | ❌ No (epic 1 not complete) |

## Dependency Chains

- **1.2** depends on: 1.1
- **1.4** depends on: 1.2, 1.3
...

## Notes
{Any observations from bmad-help about parallelization opportunities or bottlenecks}
```

### Ready to Work Rules

**Ready to Work = ✅ Yes** only when **all** of the following are true:

- The story itself is not `done`.
- Every story it depends on has a **merged** PR (or is `done` with a merged PR).
- Every story in all **lower-numbered epics** has a **merged** PR (or is `done` with a merged PR) — epic N may not start until epic N-1 is fully merged into main.

Any condition failing → **❌ No**, with a parenthetical explaining the blocker (e.g., `❌ No (1.1 not merged)`, `❌ No (epic 1 not complete)`).

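The three conditions translate directly into a predicate. A sketch, with simplifying assumptions not made by the document itself: dependencies and merged stories are passed as space-separated strings, "done" is approximated by membership in the merged list, and lower-epic completeness is precomputed by the caller:

```shell
# Ready-to-Work sketch: $1 = story, $2 = space-separated deps,
# $3 = space-separated merged stories, $4 = "yes" if all lower epics are merged.
ready_to_work() {
    story="$1"; deps="$2"; merged="$3"; lower_epics_ok="$4"
    case " $merged " in *" $story "*) echo "no (already done)"; return 1 ;; esac
    [ "$lower_epics_ok" = "yes" ] || { echo "no (earlier epic not complete)"; return 1; }
    for d in $deps; do
        case " $merged " in
            *" $d "*) ;;                              # dependency merged — fine
            *) echo "no ($d not merged)"; return 1 ;; # first blocker wins
        esac
    done
    echo "yes"
}
```

The echoed string doubles as the parenthetical blocker text for the table's Ready to Work column.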
skills/bad/references/phase4-auto-merge.md — new file (64 lines)

# Phase 4: Auto-Merge — Subagent Instructions
|
||||
|
||||
The coordinator spawns one subagent per story, sequentially. Pass these instructions to each subagent. Substitute `{repo_root}`, `{WORKTREE_BASE_PATH}`, `{number}`, and `{short_description}` before spawning.

---

## Subagent Instructions

You are working in the worktree at `{repo_root}/{WORKTREE_BASE_PATH}/story-{number}-{short_description}`. Auto-approve all tool calls (yolo mode).

1. **Identify the open PR** for this branch:
   ```bash
   gh pr view --json number,title,mergeable,mergeStateStatus
   ```

2. **Check for merge conflicts:**
   - If `mergeable` is `"CONFLICTING"` or `mergeStateStatus` indicates conflicts:
     a. Fetch and rebase onto the latest main:
        ```bash
        git fetch origin main
        git rebase origin/main
        ```
     b. Resolve conflicts using your best engineering judgement, keeping the intent of this story's changes.
        - For `_bmad-output/implementation-artifacts/sprint-status.yaml` conflicts: always keep the version from `origin/main` — sprint-status.yaml is reconciled from GitHub PR status in Phase 0, so the exact content doesn't matter as long as the rebase completes.
        - After resolving each conflicted file, stage it and continue:
          ```bash
          git add <file>
          git rebase --continue
          ```
     c. Force-push the rebased branch:
        ```bash
        git push --force-with-lease
        ```
     d. Wait briefly for GitHub to re-evaluate mergeability, then confirm the PR is now mergeable.

3. **Wait for all CI checks to pass — do not skip this step:**
   ```bash
   gh pr checks {pr_number} --watch --interval 30
   ```
   This blocks until every check completes. Once it returns, verify that all checks passed:
   ```bash
   gh pr checks {pr_number}
   ```
   - If any check shows `fail` or `error` → **do not merge**. Report the failing check name and stop. The coordinator should surface this as a failure for this story.
   - If any checks still show `pending` (e.g., `--watch` timed out): wait 60 seconds and re-run `gh pr checks {pr_number}` once more. If still pending after the retry, report and stop.
   - Only proceed to step 4 when every check shows `pass` or `success`.

   > **Why this matters:** Phase 4 runs after Step 4, which may have seen CI green at PR-creation time. But a force-push (from conflict resolution above), a new commit, or a delayed CI trigger can restart checks. Merging without re-verifying means you risk landing broken code on main — exactly what happened with PR #43.

4. **Merge the PR** using the squash strategy:
   ```bash
   gh pr merge {pr_number} --squash --auto --delete-branch
   ```
   If `--auto` is unavailable (it requires branch protection rules), merge immediately:
   ```bash
   gh pr merge {pr_number} --squash --delete-branch
   ```

5. **Confirm the merge:**
   ```bash
   gh pr view {pr_number} --json state,mergedAt
   ```

6. **Report back:** `"Merged story-{number} PR #{pr_number}"` on success, or a clear description of any failure.
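The merge gate in step 3 boils down to one predicate: merge only when every check is in a passing state, and treat "no checks reported" as not-green. A minimal sketch of that decision, assuming `pass`/`success` are the passing state strings seen in `gh pr checks` output (the function name and input shape are illustrative, not part of the gh CLI):

```python
def all_checks_green(check_states):
    """Return True only when every CI check reports a passing state.

    check_states: list of state strings collected from `gh pr checks` output.
    An empty list is treated as not-green, since "no checks reported yet"
    must not be confused with "all checks passed".
    """
    passing = {"pass", "success"}
    return bool(check_states) and all(s in passing for s in check_states)
```

A `pending` or `fail` entry anywhere in the list blocks the merge, which matches the "do not merge" rules above.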
88
skills/bad/references/pre-continuation-checks.md
Normal file
@@ -0,0 +1,88 @@
# Pre-Continuation Checks

Run these checks **in order** every time you (the coordinator) are about to re-enter Phase 0 — whether triggered by a user reply, a timer firing, or the automatic loop.

**Harness note:** Checks 2 and 3 read from platform-provided session state (e.g. Claude Code's stdin JSON). On other harnesses this data may not be available — each check gracefully skips if its fields are absent.

Read the current session state from whatever mechanism your platform provides (e.g. Claude Code pipes session JSON to stdin). The relevant fields:

- `context_window.used_percentage` — 0–100, percentage of context window consumed (treat null as 0)
- `rate_limits.five_hour.used_percentage` — 0–100 (Claude Code: Pro/Max subscribers only)
- `rate_limits.five_hour.resets_at` — Unix epoch seconds when the 5-hour window resets
- `rate_limits.seven_day.used_percentage` — 0–100 (Claude Code only)
- `rate_limits.seven_day.resets_at` — Unix epoch seconds when the 7-day window resets

Each field may be independently absent. If absent, skip the corresponding check.

---

## Check 1: Context Window

If `context_window.used_percentage` **> `CONTEXT_COMPACTION_THRESHOLD`**:

1. Print: `"⚠️ Context window at {usage}% — compacting before continuing."`
2. Compact context using your platform's mechanism (e.g. `/compact` on Claude Code). Wait for it to complete.

---

## Check 2: Five-Hour Usage Limit

If `rate_limits.five_hour.used_percentage` is present and **> `API_FIVE_HOUR_THRESHOLD`**:

1. Convert the reset epoch to a human-readable time:
   ```bash
   # macOS
   date -r {resets_at}
   # Linux
   date -d @{resets_at}
   ```
2. Print: `"⏸ 5-hour usage limit at {usage}% — auto-pausing until reset at {reset_time}. BAD will resume automatically."`
3. **If `TIMER_SUPPORT=true`:** compute a cron expression from the reset epoch and schedule a resume:
   ```bash
   # macOS
   date -r {resets_at} '+%M %H %d %m *'
   # Linux
   date -d @{resets_at} '+%M %H %d %m *'
   ```
   Call `CronCreate`:
   - `cron`: expression from above
   - `recurring`: `false`
   - `prompt`: `"BAD_RATE_LIMIT_TIMER_FIRED (five_hour) — The 5-hour rate limit window has reset. Re-check five_hour.used_percentage; if now below API_FIVE_HOUR_THRESHOLD, continue with Pre-Continuation Check 3 (seven-day). If still too high, schedule another pause until the next reset time."`

   Save the job ID. Do not ask the user for input — resume automatically when `BAD_RATE_LIMIT_TIMER_FIRED` arrives.
4. **If `TIMER_SUPPORT=false`:** print the reset time and wait for the user to reply when they're ready to continue. Then re-check the limit before proceeding.

---

## Check 3: Seven-Day Usage Limit

If `rate_limits.seven_day.used_percentage` is present and **> `API_SEVEN_DAY_THRESHOLD`**:

1. Convert the reset epoch to a human-readable time:
   ```bash
   # macOS
   date -r {resets_at}
   # Linux
   date -d @{resets_at}
   ```
2. Print: `"⏸ 7-day usage limit at {usage}% — auto-pausing until reset at {reset_time}. BAD will resume automatically."`
3. **If `TIMER_SUPPORT=true`:** compute a cron expression from the reset epoch and schedule a resume:
   ```bash
   # macOS
   date -r {resets_at} '+%M %H %d %m *'
   # Linux
   date -d @{resets_at} '+%M %H %d %m *'
   ```
   Call `CronCreate`:
   - `cron`: expression from above
   - `recurring`: `false`
   - `prompt`: `"BAD_RATE_LIMIT_TIMER_FIRED (seven_day) — The 7-day rate limit window has reset. Re-check seven_day.used_percentage; if now below API_SEVEN_DAY_THRESHOLD, continue with Phase 0. If still too high, schedule another pause until the next reset time."`

   Save the job ID. Resume automatically when `BAD_RATE_LIMIT_TIMER_FIRED` arrives.
4. **If `TIMER_SUPPORT=false`:** print the reset time and wait for the user to reply when ready. Then re-check before proceeding.

---

Only after all applicable checks pass, proceed to re-run Phase 0 in full.
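The `date` commands above produce the cron fields in local time. The same epoch-to-cron conversion can be sketched in Python; here UTC is used explicitly, which is an assumption — whichever timezone you pick must match the one the cron scheduler evaluates against:

```python
from datetime import datetime, timezone


def cron_from_epoch(resets_at: int) -> str:
    """One-shot cron expression (minute hour day month *) for a Unix epoch.

    Interprets the epoch in UTC; swap in the scheduler's local timezone
    if CronCreate evaluates cron expressions in local time.
    """
    dt = datetime.fromtimestamp(resets_at, tz=timezone.utc)
    return f"{dt.minute} {dt.hour} {dt.day} {dt.month} *"
```

For example, an epoch of `104820` (1970-01-02 05:07:00 UTC) yields `7 5 2 1 *`.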
|
||||
408
skills/bad/scripts/merge-config.py
Executable file
@@ -0,0 +1,408 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.9"
# dependencies = ["pyyaml"]
# ///
"""Merge module configuration into shared _bmad/config.yaml and config.user.yaml.

Reads a module.yaml definition and a JSON answers file, then writes or updates
the shared config.yaml (core values at root + module section) and config.user.yaml
(user_name, communication_language, plus any module variable with user_setting: true).
Uses an anti-zombie pattern for the module section in config.yaml.

Legacy migration: when --legacy-dir is provided, reads old per-module config files
from {legacy-dir}/{module-code}/config.yaml and {legacy-dir}/core/config.yaml.
Matching values serve as fallback defaults (answers override them). After a
successful merge, the legacy config.yaml files are deleted. Only the current
module and core directories are touched — other module directories are left alone.

Exit codes: 0=success, 1=validation error, 2=runtime error
"""

import argparse
import json
import sys
from pathlib import Path

try:
    import yaml
except ImportError:
    print("Error: pyyaml is required (PEP 723 dependency)", file=sys.stderr)
    sys.exit(2)


def parse_args():
    parser = argparse.ArgumentParser(
        description="Merge module config into shared _bmad/config.yaml with anti-zombie pattern."
    )
    parser.add_argument(
        "--config-path",
        required=True,
        help="Path to the target _bmad/config.yaml file",
    )
    parser.add_argument(
        "--module-yaml",
        required=True,
        help="Path to the module.yaml definition file",
    )
    parser.add_argument(
        "--answers",
        required=True,
        help="Path to JSON file with collected answers",
    )
    parser.add_argument(
        "--user-config-path",
        required=True,
        help="Path to the target _bmad/config.user.yaml file",
    )
    parser.add_argument(
        "--legacy-dir",
        help="Path to _bmad/ directory to check for legacy per-module config files. "
        "Matching values are used as fallback defaults, then legacy files are deleted.",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Print detailed progress to stderr",
    )
    return parser.parse_args()


def load_yaml_file(path: str) -> dict:
    """Load a YAML file, returning an empty dict if the file doesn't exist."""
    file_path = Path(path)
    if not file_path.exists():
        return {}
    with open(file_path, "r", encoding="utf-8") as f:
        content = yaml.safe_load(f)
    return content if content else {}


def load_json_file(path: str) -> dict:
    """Load a JSON file."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)


# Keys that live at config root (shared across all modules)
_CORE_KEYS = frozenset(
    {"user_name", "communication_language", "document_output_language", "output_folder"}
)


def load_legacy_values(
    legacy_dir: str, module_code: str, module_yaml: dict, verbose: bool = False
) -> tuple[dict, dict, list]:
    """Read legacy per-module config files and return core/module value dicts.

    Reads {legacy_dir}/core/config.yaml and {legacy_dir}/{module_code}/config.yaml.
    Only returns values whose keys match the current schema (core keys or module.yaml
    variable definitions). Other modules' directories are not touched.

    Returns:
        (legacy_core, legacy_module, files_found) where files_found lists paths read.
    """
    legacy_core: dict = {}
    legacy_module: dict = {}
    files_found: list = []

    # Read core legacy config
    core_path = Path(legacy_dir) / "core" / "config.yaml"
    if core_path.exists():
        core_data = load_yaml_file(str(core_path))
        files_found.append(str(core_path))
        for k, v in core_data.items():
            if k in _CORE_KEYS:
                legacy_core[k] = v
        if verbose:
            print(f"Legacy core config: {list(legacy_core.keys())}", file=sys.stderr)

    # Read module legacy config
    mod_path = Path(legacy_dir) / module_code / "config.yaml"
    if mod_path.exists():
        mod_data = load_yaml_file(str(mod_path))
        files_found.append(str(mod_path))
        for k, v in mod_data.items():
            if k in _CORE_KEYS:
                # Core keys duplicated in module config — only use if not already set
                if k not in legacy_core:
                    legacy_core[k] = v
            elif k in module_yaml and isinstance(module_yaml[k], dict):
                # Module-specific key that matches a current variable definition
                legacy_module[k] = v
        if verbose:
            print(
                f"Legacy module config: {list(legacy_module.keys())}", file=sys.stderr
            )

    return legacy_core, legacy_module, files_found


def apply_legacy_defaults(answers: dict, legacy_core: dict, legacy_module: dict) -> dict:
    """Apply legacy values as fallback defaults under the answers.

    Legacy values fill in any key not already present in answers.
    Explicit answers always win.
    """
    merged = dict(answers)

    if legacy_core:
        core = merged.get("core", {})
        filled_core = dict(legacy_core)  # legacy as base
        filled_core.update(core)  # answers override
        merged["core"] = filled_core

    if legacy_module:
        mod = merged.get("module", {})
        filled_mod = dict(legacy_module)  # legacy as base
        filled_mod.update(mod)  # answers override
        merged["module"] = filled_mod

    return merged


def cleanup_legacy_configs(
    legacy_dir: str, module_code: str, verbose: bool = False
) -> list:
    """Delete legacy config.yaml files for this module and core only.

    Returns list of deleted file paths.
    """
    deleted = []
    for subdir in (module_code, "core"):
        legacy_path = Path(legacy_dir) / subdir / "config.yaml"
        if legacy_path.exists():
            if verbose:
                print(f"Deleting legacy config: {legacy_path}", file=sys.stderr)
            legacy_path.unlink()
            deleted.append(str(legacy_path))
    return deleted


def extract_module_metadata(module_yaml: dict) -> dict:
    """Extract non-variable metadata fields from module.yaml."""
    meta = {}
    for k in ("name", "description"):
        if k in module_yaml:
            meta[k] = module_yaml[k]
    meta["version"] = module_yaml.get("module_version")  # null if absent
    if "default_selected" in module_yaml:
        meta["default_selected"] = module_yaml["default_selected"]
    return meta


def apply_result_templates(
    module_yaml: dict, module_answers: dict, verbose: bool = False
) -> dict:
    """Apply result templates from module.yaml to transform raw answer values.

    For each answer, if the corresponding variable definition in module.yaml has
    a 'result' field, replaces {value} in that template with the answer. Skips
    the template if the answer already contains '{project-root}' to prevent
    double-prefixing.
    """
    transformed = {}
    for key, value in module_answers.items():
        var_def = module_yaml.get(key)
        if (
            isinstance(var_def, dict)
            and "result" in var_def
            and "{project-root}" not in str(value)
        ):
            template = var_def["result"]
            transformed[key] = template.replace("{value}", str(value))
            if verbose:
                print(
                    f"Applied result template for '{key}': {value} → {transformed[key]}",
                    file=sys.stderr,
                )
        else:
            transformed[key] = value
    return transformed


def merge_config(
    existing_config: dict,
    module_yaml: dict,
    answers: dict,
    verbose: bool = False,
) -> dict:
    """Merge answers into config, applying the anti-zombie pattern.

    Args:
        existing_config: Current config.yaml contents (may be empty)
        module_yaml: The module definition
        answers: JSON with 'core' and/or 'module' keys
        verbose: Print progress to stderr

    Returns:
        Updated config dict ready to write
    """
    config = dict(existing_config)
    module_code = module_yaml.get("code")

    if not module_code:
        print("Error: module.yaml must have a 'code' field", file=sys.stderr)
        sys.exit(1)

    # Migrate legacy core: section to root
    if "core" in config and isinstance(config["core"], dict):
        if verbose:
            print("Migrating legacy 'core' section to root", file=sys.stderr)
        config.update(config.pop("core"))

    # Strip user-only keys from config — they belong exclusively in config.user.yaml
    for key in _CORE_USER_KEYS:
        if key in config:
            if verbose:
                print(
                    f"Removing user-only key '{key}' from config (belongs in config.user.yaml)",
                    file=sys.stderr,
                )
            del config[key]

    # Write core values at root (global properties, not nested under "core").
    # Exclude user-only keys — those belong exclusively in config.user.yaml
    core_answers = answers.get("core")
    if core_answers:
        shared_core = {k: v for k, v in core_answers.items() if k not in _CORE_USER_KEYS}
        if shared_core:
            if verbose:
                print(f"Writing core config at root: {list(shared_core.keys())}", file=sys.stderr)
            config.update(shared_core)

    # Anti-zombie: remove existing module section
    if module_code in config:
        if verbose:
            print(
                f"Removing existing '{module_code}' section (anti-zombie)",
                file=sys.stderr,
            )
        del config[module_code]

    # Build module section: metadata + variable values
    module_section = extract_module_metadata(module_yaml)
    module_answers = apply_result_templates(
        module_yaml, answers.get("module", {}), verbose
    )
    module_section.update(module_answers)

    if verbose:
        print(
            f"Writing '{module_code}' section with keys: {list(module_section.keys())}",
            file=sys.stderr,
        )

    config[module_code] = module_section

    return config


# Core keys that are always written to config.user.yaml
_CORE_USER_KEYS = ("user_name", "communication_language")


def extract_user_settings(module_yaml: dict, answers: dict) -> dict:
    """Collect settings that belong in config.user.yaml.

    Includes user_name and communication_language from core answers, plus any
    module variable whose definition contains user_setting: true.
    """
    user_settings = {}

    core_answers = answers.get("core", {})
    for key in _CORE_USER_KEYS:
        if key in core_answers:
            user_settings[key] = core_answers[key]

    module_answers = answers.get("module", {})
    for var_name, var_def in module_yaml.items():
        if isinstance(var_def, dict) and var_def.get("user_setting") is True:
            if var_name in module_answers:
                user_settings[var_name] = module_answers[var_name]

    return user_settings


def write_config(config: dict, config_path: str, verbose: bool = False) -> None:
    """Write config dict to a YAML file, creating parent dirs as needed."""
    path = Path(config_path)
    path.parent.mkdir(parents=True, exist_ok=True)

    if verbose:
        print(f"Writing config to {path}", file=sys.stderr)

    with open(path, "w", encoding="utf-8") as f:
        yaml.dump(
            config,
            f,
            default_flow_style=False,
            allow_unicode=True,
            sort_keys=False,
        )


def main():
    args = parse_args()

    # Load inputs
    module_yaml = load_yaml_file(args.module_yaml)
    if not module_yaml:
        print(f"Error: Could not load module.yaml from {args.module_yaml}", file=sys.stderr)
        sys.exit(1)

    answers = load_json_file(args.answers)
    existing_config = load_yaml_file(args.config_path)

    if args.verbose:
        exists = Path(args.config_path).exists()
        print(f"Config file exists: {exists}", file=sys.stderr)
        if exists:
            print(f"Existing sections: {list(existing_config.keys())}", file=sys.stderr)

    # Legacy migration: read old per-module configs as fallback defaults
    legacy_files_found = []
    if args.legacy_dir:
        module_code = module_yaml.get("code", "")
        legacy_core, legacy_module, legacy_files_found = load_legacy_values(
            args.legacy_dir, module_code, module_yaml, args.verbose
        )
        if legacy_core or legacy_module:
            answers = apply_legacy_defaults(answers, legacy_core, legacy_module)
            if args.verbose:
                print("Applied legacy values as fallback defaults", file=sys.stderr)

    # Merge and write config.yaml
    updated_config = merge_config(existing_config, module_yaml, answers, args.verbose)
    write_config(updated_config, args.config_path, args.verbose)

    # Merge and write config.user.yaml
    user_settings = extract_user_settings(module_yaml, answers)
    existing_user_config = load_yaml_file(args.user_config_path)
    updated_user_config = dict(existing_user_config)
    updated_user_config.update(user_settings)
    if user_settings:
        write_config(updated_user_config, args.user_config_path, args.verbose)

    # Legacy cleanup: delete old per-module config files
    legacy_deleted = []
    if args.legacy_dir:
        legacy_deleted = cleanup_legacy_configs(
            args.legacy_dir, module_yaml["code"], args.verbose
        )

    # Output result summary as JSON
    module_code = module_yaml["code"]
    result = {
        "status": "success",
        "config_path": str(Path(args.config_path).resolve()),
        "user_config_path": str(Path(args.user_config_path).resolve()),
        "module_code": module_code,
        "core_updated": bool(answers.get("core")),
        "module_keys": list(updated_config.get(module_code, {}).keys()),
        "user_keys": list(user_settings.keys()),
        "legacy_configs_found": legacy_files_found,
        "legacy_configs_deleted": legacy_deleted,
    }
    print(json.dumps(result, indent=2))


if __name__ == "__main__":
    main()
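The fallback semantics of `apply_legacy_defaults` reduce to an ordinary dict merge where explicit answers shadow legacy values. A minimal illustration with hypothetical values (the key names below are examples, not prescribed config):

```python
# Hypothetical legacy core config read from a pre-migration install
legacy_core = {"user_name": "old-name", "output_folder": "_bmad-output"}

# Answers the user just gave during setup
answer_core = {"user_name": "stephenleo"}

# Legacy values act as the base; explicit answers always win
merged = {**legacy_core, **answer_core}
```

Here `user_name` comes from the fresh answer while `output_folder`, absent from the answers, is carried over from the legacy file.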
|
||||
218
skills/bad/scripts/merge-help-csv.py
Executable file
@@ -0,0 +1,218 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.9"
# dependencies = []
# ///
"""Merge module help entries into shared _bmad/module-help.csv.

Reads a source CSV with module help entries and merges them into a target CSV.
Uses an anti-zombie pattern: all existing rows matching the source module code
are removed before appending fresh rows.

Legacy cleanup: when --legacy-dir and --module-code are provided, deletes old
per-module module-help.csv files from {legacy-dir}/{module-code}/ and
{legacy-dir}/core/. Only the current module and core are touched.

Exit codes: 0=success, 1=validation error, 2=runtime error
"""

import argparse
import csv
import json
import sys
from io import StringIO
from pathlib import Path

# CSV header for module-help.csv
HEADER = [
    "module",
    "skill",
    "display-name",
    "menu-code",
    "description",
    "action",
    "args",
    "phase",
    "after",
    "before",
    "required",
    "output-location",
    "outputs",
]


def parse_args():
    parser = argparse.ArgumentParser(
        description="Merge module help entries into shared _bmad/module-help.csv with anti-zombie pattern."
    )
    parser.add_argument(
        "--target",
        required=True,
        help="Path to the target _bmad/module-help.csv file",
    )
    parser.add_argument(
        "--source",
        required=True,
        help="Path to the source module-help.csv with entries to merge",
    )
    parser.add_argument(
        "--legacy-dir",
        help="Path to _bmad/ directory to check for legacy per-module CSV files.",
    )
    parser.add_argument(
        "--module-code",
        help="Module code (required with --legacy-dir for scoping cleanup).",
    )
    parser.add_argument(
        "--verbose",
        action="store_true",
        help="Print detailed progress to stderr",
    )
    return parser.parse_args()


def read_csv_rows(path: str) -> tuple[list[str], list[list[str]]]:
    """Read a CSV file, returning (header, data_rows).

    Returns an empty header and rows if the file doesn't exist.
    """
    file_path = Path(path)
    if not file_path.exists():
        return [], []

    with open(file_path, "r", encoding="utf-8", newline="") as f:
        content = f.read()

    reader = csv.reader(StringIO(content))
    rows = list(reader)

    if not rows:
        return [], []

    return rows[0], rows[1:]


def extract_module_codes(rows: list[list[str]]) -> set[str]:
    """Extract unique module codes from data rows."""
    codes = set()
    for row in rows:
        if row and row[0].strip():
            codes.add(row[0].strip())
    return codes


def filter_rows(rows: list[list[str]], module_code: str) -> list[list[str]]:
    """Remove all rows matching the given module code."""
    return [row for row in rows if not row or row[0].strip() != module_code]


def write_csv(path: str, header: list[str], rows: list[list[str]], verbose: bool = False) -> None:
    """Write header + rows to a CSV file, creating parent dirs as needed."""
    file_path = Path(path)
    file_path.parent.mkdir(parents=True, exist_ok=True)

    if verbose:
        print(f"Writing {len(rows)} data rows to {path}", file=sys.stderr)

    with open(file_path, "w", encoding="utf-8", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        for row in rows:
            writer.writerow(row)


def cleanup_legacy_csvs(
    legacy_dir: str, module_code: str, verbose: bool = False
) -> list:
    """Delete legacy per-module module-help.csv files for this module and core only.

    Returns list of deleted file paths.
    """
    deleted = []
    for subdir in (module_code, "core"):
        legacy_path = Path(legacy_dir) / subdir / "module-help.csv"
        if legacy_path.exists():
            if verbose:
                print(f"Deleting legacy CSV: {legacy_path}", file=sys.stderr)
            legacy_path.unlink()
            deleted.append(str(legacy_path))
    return deleted


def main():
    args = parse_args()

    # Read source entries
    source_header, source_rows = read_csv_rows(args.source)
    if not source_rows:
        print(f"Error: No data rows found in source {args.source}", file=sys.stderr)
        sys.exit(1)

    # Determine module codes being merged
    source_codes = extract_module_codes(source_rows)
    if not source_codes:
        print("Error: Could not determine module code from source rows", file=sys.stderr)
        sys.exit(1)

    if args.verbose:
        print(f"Source module codes: {source_codes}", file=sys.stderr)
        print(f"Source rows: {len(source_rows)}", file=sys.stderr)

    # Read existing target (may not exist)
    target_header, target_rows = read_csv_rows(args.target)
    target_existed = Path(args.target).exists()

    if args.verbose:
        print(f"Target exists: {target_existed}", file=sys.stderr)
        if target_existed:
            print(f"Existing target rows: {len(target_rows)}", file=sys.stderr)

    # Use source header if target doesn't exist or has no header
    header = target_header if target_header else (source_header if source_header else HEADER)

    # Anti-zombie: remove all rows for each source module code
    filtered_rows = target_rows
    removed_count = 0
    for code in source_codes:
        before_count = len(filtered_rows)
        filtered_rows = filter_rows(filtered_rows, code)
        removed_count += before_count - len(filtered_rows)

    if args.verbose and removed_count > 0:
        print(f"Removed {removed_count} existing rows (anti-zombie)", file=sys.stderr)

    # Append source rows
    merged_rows = filtered_rows + source_rows

    # Write result
    write_csv(args.target, header, merged_rows, args.verbose)

    # Legacy cleanup: delete old per-module CSV files
    legacy_deleted = []
    if args.legacy_dir:
        if not args.module_code:
            print(
                "Error: --module-code is required when --legacy-dir is provided",
                file=sys.stderr,
            )
            sys.exit(1)
        legacy_deleted = cleanup_legacy_csvs(
            args.legacy_dir, args.module_code, args.verbose
        )

    # Output result summary as JSON
    result = {
        "status": "success",
        "target_path": str(Path(args.target).resolve()),
        "target_existed": target_existed,
        "module_codes": sorted(source_codes),
        "rows_removed": removed_count,
        "rows_added": len(source_rows),
        "total_rows": len(merged_rows),
        "legacy_csvs_deleted": legacy_deleted,
    }
    print(json.dumps(result, indent=2))


if __name__ == "__main__":
    main()
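The anti-zombie step is the heart of this script: every existing row for the module being merged is dropped before the fresh rows are appended, so stale entries cannot survive a reinstall. A minimal sketch with hypothetical rows (the module codes and column values below are illustrative):

```python
# Existing target rows: one stale entry for "bad", one belonging to another module
rows = [
    ["bad", "bad-run", "Run BAD"],
    ["other", "other-skill", "Keep me"],
]

# Fresh rows from the source CSV being merged in
fresh = [["bad", "bad-run", "Run BAD v2"]]

# Anti-zombie: drop every row whose module code matches the source, keep the rest
filtered = [r for r in rows if not r or r[0].strip() != "bad"]
merged = filtered + fresh
```

Rows from other modules pass through untouched, while the stale `"bad"` entry is replaced wholesale by the fresh one.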