feat(bad): add Monitor tool support for CI polling and PR-merge watching (v1.1.0)

- Add MONITOR_SUPPORT config flag (true for Claude Code, false for Bedrock/Vertex/Foundry)
- Replace manual CI polling loop in Step 4 with Monitor tool when supported
- Replace blind WAIT_TIMER sleep in Phase 4 Branch B with Monitor PR-merge watcher + CronCreate fallback
- Extract Notify, Timer, and Monitor patterns into dedicated reference files for progressive disclosure
- Add sprint task list (TaskCreate) after Phase 0 for live progress tracking in UI
- Update module-setup.md and marketplace.json to register monitor_support alongside timer_support

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Author: stephenleo
Date: 2026-04-10 17:01:22 +08:00
Parent: 02384c6983
Commit: de8cd5b275
9 changed files with 212 additions and 106 deletions

@@ -24,7 +24,7 @@
"name": "bmad-bad",
"source": "./",
"description": "Autonomous development orchestrator for the BMad Method. Runs fully autonomous parallel multi-agent pipelines through the full story lifecycle (create → dev → review → PR) driven by your sprint backlog and dependency graph.",
"version": "1.0.0",
"version": "1.1.0",
"author": { "name": "Marie Stephen Leo" },
"skills": [
"./skills/bad"

@@ -13,7 +13,7 @@ Once your epics and stories are planned, BAD takes over:
1. *(`MODEL_STANDARD` subagent)* Builds a dependency graph from your sprint backlog — maps story dependencies, syncs GitHub PR status, and identifies what's ready to work on
2. Picks ready stories from the graph, respecting epic ordering and dependencies
3. Runs up to `MAX_PARALLEL_STORIES` stories simultaneously — each in its own isolated git worktree — each through a sequential 4-step pipeline:
- **Step 1** *(`MODEL_STANDARD` subagent)*`bmad-create-story`: generates the story spec
- **Step 1** *(`MODEL_STANDARD` subagent)*`bmad-create-story`: generates and validates the story spec
- **Step 2** *(`MODEL_STANDARD` subagent)*`bmad-dev-story`: implements the code
- **Step 3** *(`MODEL_QUALITY` subagent)*`bmad-code-review`: reviews and fixes the implementation
- **Step 4** *(`MODEL_STANDARD` subagent)* — commit, push, open PR, monitor CI, fix any failing checks, resolve code review comments, and resolve merge conflicts
@@ -22,7 +22,7 @@ Once your epics and stories are planned, BAD takes over:
## Requirements
- [BMad Method](https://docs.bmad-method.org/) installed in your project
- [BMad Method](https://docs.bmad-method.org/) installed in your project: `npx bmad-method install --modules bmm,tea`
- A sprint plan with epics, stories, and `sprint-status.yaml`
- Git + GitHub CLI (`gh`) installed and authenticated:
1. `brew install gh`
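The `gh` requirement above can be verified up front before starting a run. A minimal pre-flight sketch (the `check_gh` helper is illustrative, not part of the module):

```shell
# Illustrative pre-flight check for the requirements above: verify the
# GitHub CLI is installed and authenticated before starting a BAD run.
check_gh() {
  if ! command -v gh >/dev/null 2>&1; then
    echo "gh not found - install it first (e.g. brew install gh)" >&2
    return 1
  fi
  if ! gh auth status >/dev/null 2>&1; then
    echo "gh installed but not authenticated - run: gh auth login" >&2
    return 1
  fi
  echo "gh OK"
}
```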

@@ -10,7 +10,7 @@ BAD is a [BMad Method](https://docs.bmad-method.org/) module that automates your
## Requirements
- [BMad Method](https://docs.bmad-method.org/) installed in your project
- [BMad Method](https://docs.bmad-method.org/) installed in your project: `npx bmad-method install --modules bmm,tea`
- A sprint plan with epics, stories, and `sprint-status.yaml`
- Git + GitHub CLI (`gh`) installed and authenticated:
1. `brew install gh`
@@ -53,7 +53,7 @@ Once your epics and stories are planned, BAD takes over:
1. *(`MODEL_STANDARD` subagent)* Builds a dependency graph from your sprint backlog — maps story dependencies, syncs GitHub PR status, and identifies what's ready to work on
2. Picks ready stories from the graph, respecting epic ordering and dependencies
3. Runs up to `MAX_PARALLEL_STORIES` stories simultaneously — each in its own isolated git worktree — each through a sequential 4-step pipeline. **Every step runs in a dedicated subagent with a fresh context window**, keeping the coordinator lean and each agent fully focused on its single task:
- **Step 1** *(`MODEL_STANDARD` subagent)* — `bmad-create-story`: generates the story spec
- **Step 1** *(`MODEL_STANDARD` subagent)* — `bmad-create-story`: generates and validates the story spec
- **Step 2** *(`MODEL_STANDARD` subagent)* — `bmad-dev-story`: implements the code
- **Step 3** *(`MODEL_QUALITY` subagent)* — `bmad-code-review`: reviews and fixes the implementation
- **Step 4** *(`MODEL_STANDARD` subagent)* — commit, push, open PR, monitor CI, fix any failing checks, resolve code review comments, and resolve merge conflicts

@@ -33,7 +33,7 @@ Before doing anything else, determine how to send notifications:
- If another channel type is connected, save its equivalent identifier.
- If no channel is connected, set `NOTIFY_SOURCE="terminal"`.
2. **Send the BAD started notification** using the [Notify Pattern](#notify-pattern):
2. **Send the BAD started notification** using the [Notify Pattern](references/notify-pattern.md):
```
🤖 BAD started — building dependency graph...
```
@@ -56,6 +56,7 @@ Load base values from `_bmad/bad/config.yaml` at startup (via `/bmad-init --modu
| `WAIT_TIMER_SECONDS` | `wait_timer_seconds` | `3600` | Post-batch wait before re-checking PR status (1 hr) |
| `CONTEXT_COMPACTION_THRESHOLD` | `context_compaction_threshold` | `80` | Context window % at which to compact/summarise context |
| `TIMER_SUPPORT` | `timer_support` | `true` | When `true`, use native platform timers; when `false`, use prompt-based continuation |
| `MONITOR_SUPPORT` | `monitor_support` | `true` | When `true`, use the Monitor tool for CI and PR-merge polling; when `false`, fall back to manual polling loops (required for Bedrock/Vertex/Foundry) |
| `API_FIVE_HOUR_THRESHOLD` | `api_five_hour_threshold` | `80` | (Claude Code) 5-hour rate limit % that triggers a pause |
| `API_SEVEN_DAY_THRESHOLD` | `api_seven_day_threshold` | `95` | (Claude Code) 7-day rate limit % that triggers a pause |
| `API_USAGE_THRESHOLD` | `api_usage_threshold` | `80` | (Other harnesses) Generic API usage % that triggers a pause |
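The config keys in the table are flat scalars, so the load-with-default behaviour can be sketched without a YAML parser (the `read_config` helper is illustrative; the real skill loads values via `/bmad-init`):

```shell
# Illustrative reader for flat keys in _bmad/bad/config.yaml: return the
# value after "key:", or the documented default when the key is absent.
# Assumes unquoted scalar values, one "key: value" pair per line.
read_config() {
  key=$1 default=$2 file=$3
  val=$(sed -n "s/^${key}:[[:space:]]*//p" "$file" | head -n1)
  printf '%s\n' "${val:-$default}"
}

# e.g. MONITOR_SUPPORT=$(read_config monitor_support true _bmad/bad/config.yaml)
```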
@@ -172,6 +173,21 @@ Ready: {N} stories — {comma-separated story numbers}
Blocked: {N} stories (if any)
```
After Phase 0 completes, **create the sprint task list** using TaskCreate — one task per item below. This gives a live progress view in the UI that updates as work completes.
```
[ ] Phase 0: Dependency graph
[ ] Phase 1: Story selection
[ ] Story {N}: Step 1 — Create story ← one set per selected story
[ ] Story {N}: Step 2 — Develop
[ ] Story {N}: Step 3 — Code review
[ ] Story {N}: Step 4 — PR & CI
[ ] Phase 3: Auto-merge (if AUTO_PR_MERGE=true)
[ ] Phase 4: Batch summary & continuation
```
Mark Phase 0 and Phase 1 tasks `completed` immediately after creating them (they are already done by this point). Update each story step task to `in_progress` when its subagent is spawned, and `completed` (or `failed`) when it reports back. Update Phase 3 and Phase 4 tasks similarly as they execute.
---
## Phase 1: Discover Stories
@@ -228,7 +244,11 @@ Working directory: {repo_root}. Auto-approve all tool calls (yolo mode).
3. Run /bmad-create-story {number}-{short_description}.
4. Update sprint-status.yaml at the REPO ROOT (not the worktree copy):
4. Run "validate story {number}-{short_description}". For every finding,
apply a fix directly to the story file using your best engineering judgement.
Repeat until no findings remain.
5. Update sprint-status.yaml at the REPO ROOT (not the worktree copy):
_bmad-output/implementation-artifacts/sprint-status.yaml
Set story {number} status to `ready-for-dev`.
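Assuming `sprint-status.yaml` holds flat `story-key: status` entries (the real layout may differ), the status flip in step 5 could be sketched as:

```shell
# Illustrative status update, assuming flat "story-key: status" lines in
# sprint-status.yaml. set_status rewrites one story's status in place.
# Story keys containing sed metacharacters would need escaping first.
set_status() {
  story=$1 status=$2 file=$3
  # sed -i is GNU syntax; BSD/macOS sed needs -i ''
  sed -i "s/^\(${story}:\).*/\1 ${status}/" "$file"
}
```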
@@ -289,7 +309,14 @@ Auto-approve all tool calls (yolo mode).
4. CI:
- If RUN_CI_LOCALLY is true → skip GitHub Actions and run the Local CI Fallback below.
- Otherwise monitor CI in a loop:
- If MONITOR_SUPPORT is true → use the Monitor tool to watch CI status:
Write a poller script:
while true; do gh run view --json status,conclusion 2>&1; sleep 30; done
Start it with Monitor. React to each output line as it arrives:
- conclusion=success → stop Monitor, proceed to step 5
- conclusion=failure or cancelled → stop Monitor, diagnose, fix, push, restart Monitor
- Billing/spending limit error in output → stop Monitor, run Local CI Fallback
- If MONITOR_SUPPORT is false → poll manually in a loop:
gh run view
- Billing/spending limit error → exit loop, run Local CI Fallback
- CI failed for other reason, or Claude bot left PR comments → fix, push, loop
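The `MONITOR_SUPPORT=false` path above can be sketched as a loop plus a pure decision step. Note that a bare `gh run view` needs a run id when scripted, so this sketch selects the latest run with `gh run list` instead; `ci_decision` and `poll_ci` are illustrative names, not part of the skill:

```shell
# Illustrative manual CI polling loop (MONITOR_SUPPORT=false path).
# ci_decision maps one JSON status line to the action the loop takes,
# so the branching above is testable without calling gh.
ci_decision() {
  case $1 in
    *'"conclusion":"success"'*)                                echo proceed ;;
    *'"conclusion":"failure"'* | *'"conclusion":"cancelled"'*) echo fix ;;
    *[Bb]illing* | *"spending limit"*)                         echo local-ci ;;
    *)                                                         echo wait ;;
  esac
}

poll_ci() {
  while :; do
    # Latest run on the current repo; gh run view would need an explicit id.
    out=$(gh run list --limit 1 --json status,conclusion 2>&1)
    action=$(ci_decision "$out")
    [ "$action" = wait ] || { echo "$action"; return 0; }
    sleep 30
  done
}
```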
@@ -443,7 +470,7 @@ Using the assessment report:
1. Print: `🎉 Epic {current_epic_name} is complete! Starting retrospective countdown ({RETRO_TIMER_SECONDS ÷ 60} minutes)...`
📣 **Notify:** `🎉 Epic {current_epic_name} complete! Running retrospective in {RETRO_TIMER_SECONDS ÷ 60} min...`
2. Start a timer using the **[Timer Pattern](#timer-pattern)** with:
2. Start a timer using the **[Timer Pattern](references/timer-pattern.md)** with:
- **Duration:** `RETRO_TIMER_SECONDS`
- **Fire prompt:** `"BAD_RETRO_TIMER_FIRED — The retrospective countdown has elapsed. Auto-run the retrospective: spawn a MODEL_DEV subagent (yolo mode) to run /bmad-retrospective, accept all changes. Run Pre-Continuation Checks after it completes, then proceed to Phase 4 Step 3."`
- **[C] label:** `Run retrospective now`
@@ -467,116 +494,54 @@ Using the assessment report from Step 2, follow the applicable branch:
1. Print a status line:
- Epic just completed: `✅ Epic {current_epic_name} complete. Next up: Epic {next_epic_name} ({stories_remaining} stories remaining).`
- More stories in current epic: `✅ Batch complete. Ready for the next batch.`
2. Start a timer using the **[Timer Pattern](#timer-pattern)** with:
- **Duration:** `WAIT_TIMER_SECONDS`
- **Fire prompt:** `"BAD_WAIT_TIMER_FIRED — The post-batch wait has elapsed. Run Pre-Continuation Checks, then re-run Phase 0, then proceed to Phase 1."`
- **[C] label:** `Continue now`
- **[S] label:** `Stop BAD`
- **[C] / FIRED action:** Run Pre-Continuation Checks, then re-run Phase 0.
- **[S] action:** Stop BAD, print a final summary, and 📣 **Notify:** `🛑 BAD stopped by user.`
2. Start the wait using the **[Monitor Pattern](references/monitor-pattern.md)** (when `MONITOR_SUPPORT=true`) or the **[Timer Pattern](references/timer-pattern.md)** (when `MONITOR_SUPPORT=false`):
**If `MONITOR_SUPPORT=true` — Monitor + CronCreate fallback:**
- Start Monitor with a PR-merge watcher script:
```bash
while true; do gh pr list --state merged --json number,mergedAt --jq '.[] | select(.mergedAt != null) | "MERGED: #\(.number)"'; sleep 60; done
```
Save the Monitor handle as `PR_MONITOR`.
- Also start a CronCreate fallback timer using the [Timer Pattern](references/timer-pattern.md) with:
- **Duration:** `WAIT_TIMER_SECONDS`
- **Fire prompt:** `"BAD_WAIT_TIMER_FIRED — Max wait elapsed. Stop PR_MONITOR, run Pre-Continuation Checks, then re-run Phase 0."`
- **[C] label:** `Continue now`
- **[S] label:** `Stop BAD`
- **[C] / FIRED action:** Stop `PR_MONITOR`, run Pre-Continuation Checks, then re-run Phase 0.
- **[S] action:** Stop `PR_MONITOR`, CronDelete the fallback timer, stop BAD, print final summary, and 📣 **Notify:** `🛑 BAD stopped by user.`
- **On Monitor event (merge detected):** CronDelete the fallback timer, stop `PR_MONITOR`, run Pre-Continuation Checks, re-run Phase 0.
- 📣 **Notify:** `⏳ Watching for PR merges (max wait: {WAIT_TIMER_SECONDS ÷ 60} min)...`
**If `MONITOR_SUPPORT=false` — Timer only:**
- Use the [Timer Pattern](references/timer-pattern.md) with:
- **Duration:** `WAIT_TIMER_SECONDS`
- **Fire prompt:** `"BAD_WAIT_TIMER_FIRED — The post-batch wait has elapsed. Run Pre-Continuation Checks, then re-run Phase 0, then proceed to Phase 1."`
- **[C] label:** `Continue now`
- **[S] label:** `Stop BAD`
- **[C] / FIRED action:** Run Pre-Continuation Checks, then re-run Phase 0.
- **[S] action:** Stop BAD, print a final summary, and 📣 **Notify:** `🛑 BAD stopped by user.`
3. After Phase 0 completes:
- At least one story unblocked → proceed to Phase 1.
- All stories still blocked → print which PRs are pending (from Phase 0 report), restart Branch B for another `WAIT_TIMER_SECONDS` countdown.
- All stories still blocked → print which PRs are pending (from Phase 0 report), restart Branch B for another wait.
---
## Notify Pattern
Use this pattern every time a `📣 Notify:` callout appears **anywhere in this skill** — including inside the Timer Pattern.
**If `NOTIFY_SOURCE="telegram"`:** call `mcp__plugin_telegram_telegram__reply` with:
- `chat_id`: `NOTIFY_CHAT_ID`
- `text`: the message
**If `NOTIFY_SOURCE="terminal"`** (or if the Telegram tool call fails): print the message in the conversation as a normal response.
Always send both a terminal print and a channel message — the terminal print keeps the in-session transcript readable, and the channel message reaches the user on their device.
Read `references/notify-pattern.md` whenever a `📣 Notify:` callout appears. It covers Telegram and terminal output.
---
## Timer Pattern
Both the retrospective and post-batch wait timers use this pattern. The caller supplies the duration, fire prompt, option labels, and actions.
Behaviour depends on `TIMER_SUPPORT`:
Read `references/timer-pattern.md` when instructed to start a timer. It covers both `TIMER_SUPPORT=true` (CronCreate) and `TIMER_SUPPORT=false` (prompt-based) paths.
---
### If `TIMER_SUPPORT=true` (native platform timers)
## Monitor Pattern
**Step 1 — compute target cron expression** (convert seconds to minutes: `SECONDS ÷ 60`):
```bash
# macOS
date -v +{N}M '+%M %H %d %m *'
# Linux
date -d '+{N} minutes' '+%M %H %d %m *'
```
Save as `CRON_EXPR`. Save `TIMER_START=$(date +%s)`.
**Step 2 — create the one-shot timer** via `CronCreate`:
- `cron`: expression from Step 1
- `recurring`: `false`
- `prompt`: the caller-supplied fire prompt
Save the returned job ID as `JOB_ID`.
**Step 3 — print the options menu** (always all three options):
> Timer running (job: {JOB_ID}). I'll act in {N} minutes.
>
> - **[C] Continue** — {C label}
> - **[S] Stop** — {S label}
> - **[M] {N}** — modify the timer to {N} minutes (shorten or extend the countdown)
📣 **Notify** using the [Notify Pattern](#notify-pattern) with the same options so the user can respond from their device:
```
⏱ Timer set — {N} minutes (job: {JOB_ID})
[C] {C label}
[S] {S label}
[M] <minutes> — modify countdown
```
Wait for whichever arrives first — user reply or fired prompt. On any human reply, print elapsed time first:
```bash
ELAPSED=$(( $(date +%s) - TIMER_START ))
echo "⏱ Time elapsed: $((ELAPSED / 60))m $((ELAPSED % 60))s"
```
- **[C]** → `CronDelete(JOB_ID)`, run the [C] action
- **[S]** → `CronDelete(JOB_ID)`, run the [S] action
- **[M] N** → `CronDelete(JOB_ID)`, recompute cron for N minutes from now, `CronCreate` again with same fire prompt, update `JOB_ID` and `TIMER_START`, print updated countdown, then 📣 **Notify** using the [Notify Pattern](#notify-pattern):
```
⏱ Timer updated — {N} minutes (job: {JOB_ID})
[C] {C label}
[S] {S label}
[M] <minutes> — modify countdown
```
- **FIRED (no prior reply)** → run the [C] action automatically
---
### If `TIMER_SUPPORT=false` (prompt-based continuation)
Save `TIMER_START=$(date +%s)`. No native timer is created — print the options menu immediately and wait for user reply:
> Waiting {N} minutes before continuing. Reply when ready.
>
> - **[C] Continue** — {C label}
> - **[S] Stop** — {S label}
> - **[M] N** — remind me after N minutes (reply with `[M] <minutes>`)
📣 **Notify** using the [Notify Pattern](#notify-pattern) with the same options.
On any human reply, print elapsed time first:
```bash
ELAPSED=$(( $(date +%s) - TIMER_START ))
echo "⏱ Time elapsed: $((ELAPSED / 60))m $((ELAPSED % 60))s"
```
- **[C]** → run the [C] action
- **[S]** → run the [S] action
- **[M] N** → update `TIMER_START`, print updated wait message, 📣 **Notify**, and wait again
Read `references/monitor-pattern.md` when `MONITOR_SUPPORT=true`. It covers CI status polling (Step 4) and PR-merge watching (Phase 4 Branch B), plus the `MONITOR_SUPPORT=false` fallback for each.
---

@@ -80,7 +80,7 @@ Present as **"Claude Code settings"**:
- `api_five_hour_threshold` — 5-hour API usage % at which to pause [80]
- `api_seven_day_threshold` — 7-day API usage % at which to pause [95]
Automatically write `timer_support: true` — no prompt needed.
Automatically write `timer_support: true` and `monitor_support: true` — no prompt needed.
#### All Other Harnesses
@@ -90,7 +90,7 @@ Present as **"{HarnessName} settings"**:
- `model_quality` — Model for code review step (e.g. `best`, `o1`, `pro`)
- `api_usage_threshold` — API usage % at which to pause for rate limits [80]
Automatically write `timer_support: false` — no prompt needed. BAD will use prompt-based continuation instead of native timers on this harness.
Automatically write `timer_support: false` and `monitor_support: false` — no prompt needed. BAD will use prompt-based continuation instead of native timers, and manual polling loops instead of the Monitor tool, on this harness.
## Step 4: Write Files
@@ -107,6 +107,7 @@ Write a temp JSON file with collected answers structured as:
"retro_timer_seconds": "600",
"context_compaction_threshold": "80",
"timer_support": true,
"monitor_support": true,
"model_standard": "sonnet",
"model_quality": "opus",
"api_five_hour_threshold": "80",

@@ -1,7 +1,7 @@
code: bad
name: "BMad Autonomous Development"
description: "Orchestrates parallel BMad story implementation pipelines — automatically runs bmad-create-story, bmad-dev-story, bmad-code-review, and commit/PR in batches, driven by the sprint backlog and dependency graph"
module_version: "1.0.0"
module_version: "1.1.0"
module_greeting: "BAD is ready. Run /bad to start. Pass KEY=VALUE args to override config at runtime (e.g. /bad MAX_PARALLEL_STORIES=2)."
header: "BAD — BMad Autonomous Development"

@@ -0,0 +1,46 @@
# Monitor Pattern
Use this pattern when `MONITOR_SUPPORT=true`. It covers two use cases in BAD: CI status polling (Step 4) and PR-merge watching (Phase 4 Branch B). The caller supplies the poll script and the reaction logic.
> **Requires Claude Code v2.1.98+.** Uses the same Bash permission rules. Not available on Amazon Bedrock, Google Vertex AI, or Microsoft Azure Foundry — set `MONITOR_SUPPORT=false` on those platforms.
## How it works
1. **Write a poll script** — a `while true; do ...; sleep N; done` loop that prints status lines to stdout on each iteration.
2. **Start Monitor** — pass the script to the Monitor tool. Claude receives each stdout line as a live event and can react immediately without blocking the conversation.
3. **React to events** — on each line, apply the caller's reaction logic (e.g. CI green → proceed; PR merged → continue).
4. **Stop Monitor** — call stop/cancel on the Monitor handle when done (success, failure, or user override).
## CI status polling (Step 4)
Poll script (run inside the Step 4 subagent):
```bash
while true; do
gh run view --json status,conclusion 2>&1
sleep 30
done
```
React to each output line:
- `"conclusion":"success"` → stop Monitor, proceed to step 5
- `"conclusion":"failure"` or `"conclusion":"cancelled"` → stop Monitor, diagnose, fix, push, restart Monitor
- Billing/spending limit text in output → stop Monitor, run Local CI Fallback
## PR-merge watching (Phase 4 Branch B)
Poll script (run by the coordinator):
```bash
while true; do
  gh pr list --state merged --json number,mergedAt \
--jq '.[] | select(.mergedAt != null) | "MERGED: #\(.number)"'
sleep 60
done
```
React to each output line:
- `MERGED: #N` → CronDelete the fallback timer, stop Monitor, run Pre-Continuation Checks, re-run Phase 0
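A watcher like this re-emits every merged PR it can see on each poll cycle (and note that `gh pr list` only returns open PRs unless `--state merged` is passed), so the reaction side should de-duplicate: only a previously unseen `MERGED: #N` line fires the event. A minimal sketch with illustrative helper names:

```shell
# Illustrative reaction logic for PR-merge watcher output. merged_pr
# extracts the PR number from a "MERGED: #N" event line; handle_line
# dedupes against a scratch file so each merge is acted on only once.
SEEN=$(mktemp)

merged_pr() {
  printf '%s\n' "$1" | sed -n 's/^MERGED: #//p'
}

handle_line() {
  n=$(merged_pr "$1")
  [ -n "$n" ] || return 1            # not a merge event
  grep -qx "$n" "$SEEN" && return 1  # already handled this PR
  echo "$n" >> "$SEEN"
  echo "new merge: #$n"              # caller: stop Monitor, re-run Phase 0
}
```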
## If `MONITOR_SUPPORT=false`
- **CI polling:** use the manual `gh run view` loop in Step 4 (see Step 4 fallback path in SKILL.md).
- **PR-merge watching:** use the CronCreate-only Timer Pattern in Phase 4 Branch B (see fallback path in SKILL.md).

@@ -0,0 +1,11 @@
# Notify Pattern
Use this pattern every time a `📣 Notify:` callout appears **anywhere in the BAD skill** — including inside the Timer Pattern and Monitor Pattern.
**If `NOTIFY_SOURCE="telegram"`:** call `mcp__plugin_telegram_telegram__reply` with:
- `chat_id`: `NOTIFY_CHAT_ID`
- `text`: the message
**If `NOTIFY_SOURCE="terminal"`** (or if the Telegram tool call fails): print the message in the conversation as a normal response.
Always send both a terminal print and a channel message — the terminal print keeps the in-session transcript readable, and the channel message reaches the user on their device.

@@ -0,0 +1,83 @@
# Timer Pattern
Both the retrospective and post-batch wait timers use this pattern. The caller supplies the duration, fire prompt, option labels, and actions.
Behaviour depends on `TIMER_SUPPORT`:
---
## If `TIMER_SUPPORT=true` (native platform timers)
**Step 1 — compute target cron expression** (convert seconds to minutes: `SECONDS ÷ 60`):
```bash
# macOS
date -v +{N}M '+%M %H %d %m *'
# Linux
date -d '+{N} minutes' '+%M %H %d %m *'
```
Save as `CRON_EXPR`. Save `TIMER_START=$(date +%s)`.
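The two date invocations above can be folded into one portable helper, trying the GNU form first and falling back to BSD/macOS. A minimal sketch (`cron_in` is an illustrative name):

```shell
# Illustrative portable version of Step 1: compute a one-shot cron
# expression N minutes from now. Tries GNU date (-d), falls back to
# BSD/macOS date (-v).
cron_in() {
  n=$1
  date -d "+${n} minutes" '+%M %H %d %m *' 2>/dev/null ||
    date -v "+${n}M" '+%M %H %d %m *'
}

CRON_EXPR=$(cron_in 30)
TIMER_START=$(date +%s)
```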
**Step 2 — create the one-shot timer** via `CronCreate`:
- `cron`: expression from Step 1
- `recurring`: `false`
- `prompt`: the caller-supplied fire prompt
Save the returned job ID as `JOB_ID`.
**Step 3 — print the options menu** (always all three options):
> Timer running (job: {JOB_ID}). I'll act in {N} minutes.
>
> - **[C] Continue** — {C label}
> - **[S] Stop** — {S label}
> - **[M] {N}** — modify the timer to {N} minutes (shorten or extend the countdown)
📣 **Notify** (see `references/notify-pattern.md`) with the same options so the user can respond from their device:
```
⏱ Timer set — {N} minutes (job: {JOB_ID})
[C] {C label}
[S] {S label}
[M] <minutes> — modify countdown
```
Wait for whichever arrives first — user reply or fired prompt. On any human reply, print elapsed time first:
```bash
ELAPSED=$(( $(date +%s) - TIMER_START ))
echo "⏱ Time elapsed: $((ELAPSED / 60))m $((ELAPSED % 60))s"
```
- **[C]** → `CronDelete(JOB_ID)`, run the [C] action
- **[S]** → `CronDelete(JOB_ID)`, run the [S] action
- **[M] N** → `CronDelete(JOB_ID)`, recompute cron for N minutes from now, `CronCreate` again with same fire prompt, update `JOB_ID` and `TIMER_START`, print updated countdown, then 📣 **Notify**:
```
⏱ Timer updated — {N} minutes (job: {JOB_ID})
[C] {C label}
[S] {S label}
[M] <minutes> — modify countdown
```
- **FIRED (no prior reply)** → run the [C] action automatically
---
## If `TIMER_SUPPORT=false` (prompt-based continuation)
Save `TIMER_START=$(date +%s)`. No native timer is created — print the options menu immediately and wait for user reply:
> Waiting {N} minutes before continuing. Reply when ready.
>
> - **[C] Continue** — {C label}
> - **[S] Stop** — {S label}
> - **[M] N** — remind me after N minutes (reply with `[M] <minutes>`)
📣 **Notify** (see `references/notify-pattern.md`) with the same options.
On any human reply, print elapsed time first:
```bash
ELAPSED=$(( $(date +%s) - TIMER_START ))
echo "⏱ Time elapsed: $((ELAPSED / 60))m $((ELAPSED % 60))s"
```
- **[C]** → run the [C] action
- **[S]** → run the [S] action
- **[M] N** → update `TIMER_START`, print updated wait message, 📣 **Notify**, and wait again