mirror of
https://github.com/glittercowboy/get-shit-done
synced 2026-04-25 17:25:23 +02:00
feat(commands): add /gsd-audit-fix for autonomous audit-to-fix pipeline (#1814)
* feat(commands): add /gsd-audit-fix autonomous audit-to-fix pipeline

  Chains audit, classify, fix, test, commit into an autonomous pipeline. Runs an
  audit (currently audit-uat), classifies findings as auto-fixable vs manual-only
  (erring on manual when uncertain), spawns executor agents for fixable issues,
  runs tests after each fix, and commits atomically with finding IDs for
  traceability. Supports --max N (cap fixes), --severity (filter threshold),
  --dry-run (classification table only), and --source (audit command).

  Closes #1735

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(commands): address review feedback on audit-fix command (#1735)

  - Change --severity default from high to medium per approved spec
  - Fix pipeline to stop on first test failure instead of continuing
  - Verify gsd-tools.cjs commit usage (confirmed valid — no change needed)
  - Add argument-hint for /gsd-help discoverability
  - Update tests: severity default, stop-on-failure, argument-hint

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(commands): address second-round review feedback on audit-fix (#1735)

  - Replace non-existent gsd-tools.cjs commit with direct git add/commit
  - Scope revert to changed files only instead of git checkout -- .
  - Fix argument-hint to reflect actual supported source values
  - Add type: prompt to command frontmatter

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
commands/gsd/audit-fix.md (new file, 33 lines)
@@ -0,0 +1,33 @@
---
type: prompt
name: gsd:audit-fix
description: Autonomous audit-to-fix pipeline — find issues, classify, fix, test, commit
argument-hint: "--source <audit-uat> [--severity <medium|high|all>] [--max N] [--dry-run]"
allowed-tools:
  - Read
  - Write
  - Edit
  - Bash
  - Grep
  - Glob
  - Agent
  - AskUserQuestion
---

<objective>
Run an audit, classify findings as auto-fixable vs manual-only, then autonomously fix
auto-fixable issues with test verification and atomic commits.

Flags:
- `--max N` — maximum findings to fix (default: 5)
- `--severity high|medium|all` — minimum severity to process (default: medium)
- `--dry-run` — classify findings without fixing (shows classification table)
- `--source <audit>` — which audit to run (default: audit-uat)
</objective>

<execution_context>
@~/.claude/get-shit-done/workflows/audit-fix.md
</execution_context>

<process>
Execute the audit-fix workflow from @~/.claude/get-shit-done/workflows/audit-fix.md end-to-end.
</process>
get-shit-done/workflows/audit-fix.md (new file, 157 lines)
@@ -0,0 +1,157 @@
<purpose>
Autonomous audit-to-fix pipeline. Runs an audit, parses findings, classifies each as
auto-fixable vs manual-only, spawns executor agents for fixable issues, runs tests
after each fix, and commits atomically with finding IDs for traceability.
</purpose>

<available_agent_types>
- gsd-executor — executes a specific, scoped code change
</available_agent_types>

<process>

<step name="parse-arguments">
Extract flags from the user's invocation:

- `--max N` — maximum findings to fix (default: **5**)
- `--severity high|medium|all` — minimum severity to process (default: **medium**)
- `--dry-run` — classify findings without fixing (shows classification table only)
- `--source <audit>` — which audit to run (default: **audit-uat**)

Validate `--source` is a supported audit. Currently supported:
- `audit-uat`

If `--source` is not supported, stop with an error:
```
Error: Unsupported audit source "{source}". Supported sources: audit-uat
```
</step>
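A minimal sketch of the flag handling this step describes (hypothetical `parseFlags` helper, not code the command actually runs; defaults mirror the documented flags):

```javascript
// Illustrative sketch of the parse-arguments step. The helper name and
// structure are assumptions; only the flag names, defaults, and the
// unsupported-source error come from the workflow above.
const SUPPORTED_SOURCES = ['audit-uat'];

function parseFlags(argv) {
  const flags = { max: 5, severity: 'medium', dryRun: false, source: 'audit-uat' };
  for (let i = 0; i < argv.length; i++) {
    switch (argv[i]) {
      case '--max':      flags.max = parseInt(argv[++i], 10); break;
      case '--severity': flags.severity = argv[++i]; break;
      case '--dry-run':  flags.dryRun = true; break;
      case '--source':   flags.source = argv[++i]; break;
    }
  }
  if (!SUPPORTED_SOURCES.includes(flags.source)) {
    // Mirrors the documented error: stop before running any audit.
    throw new Error(
      `Unsupported audit source "${flags.source}". Supported sources: ${SUPPORTED_SOURCES.join(', ')}`
    );
  }
  return flags;
}
```

Validating `--source` up front is what lets the pipeline fail fast instead of partway through a fix run.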
<step name="run-audit">
Invoke the source audit command and capture output.

For `audit-uat` source:
```bash
INIT=$(node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" init audit-uat 2>/dev/null || echo "{}")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Read existing UAT and verification files to extract findings:
- Glob: `.planning/phases/*/*-UAT.md`
- Glob: `.planning/phases/*/*-VERIFICATION.md`

Parse each finding into a structured record:
- **ID** — sequential identifier (F-01, F-02, ...)
- **description** — concise summary of the issue
- **severity** — high, medium, or low
- **file_refs** — specific file paths referenced in the finding
</step>
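The record structure above could be built roughly like this; the input line format is an assumption for illustration, since the actual UAT/VERIFICATION file layout is not specified in this workflow:

```javascript
// Illustrative sketch of turning raw finding lines into the structured
// records described above. The "severity: description (file)" line shape
// is an assumption; real audit files may differ.
function parseFindings(lines) {
  const findings = [];
  for (const line of lines) {
    const m = line.match(/^(high|medium|low):\s*(.+?)(?:\s*\(([^)]+)\))?$/);
    if (!m) continue; // skip anything that is not a recognizable finding
    const id = `F-${String(findings.length + 1).padStart(2, '0')}`;
    findings.push({
      id,                            // sequential identifier (F-01, F-02, ...)
      severity: m[1],                // high, medium, or low
      description: m[2],             // concise summary of the issue
      file_refs: m[3] ? [m[3]] : [], // file paths referenced in the finding
    });
  }
  return findings;
}
```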
<step name="classify-findings">
For each finding, classify as one of:

- **auto-fixable** — clear code change, specific file referenced, testable fix
- **manual-only** — requires design decisions, ambiguous scope, architectural changes, or user input
- **skip** — severity below the `--severity` threshold

**Classification heuristics** (err on manual-only when uncertain):

Auto-fixable signals:
- References a specific file path + line number
- Describes a missing test or assertion
- Missing export, wrong import path, typo in identifier
- Clear single-file change with obvious expected behavior

Manual-only signals:
- Uses words like "consider", "evaluate", "design", "rethink"
- Requires new architecture or API changes
- Ambiguous scope or multiple valid approaches
- Requires user input or design decisions
- Cross-cutting concerns affecting multiple subsystems
- Performance or scalability issues without a clear fix

**When uncertain, always classify as manual-only.**
</step>
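A rough sketch of these heuristics as code. This is illustrative only (in practice the classification is a judgment call made by the agent, not a regex); the point is the precedence: severity filter first, manual-only signals next, auto-fixable signals last, defaulting to manual-only:

```javascript
// Illustrative sketch of the classification heuristics. The keyword
// regexes are assumptions standing in for the agent's judgment.
const SEVERITY_RANK = { low: 0, medium: 1, high: 2 };

function classify(finding, minSeverity) {
  // Below the --severity threshold: skip.
  if (minSeverity !== 'all' &&
      SEVERITY_RANK[finding.severity] < SEVERITY_RANK[minSeverity]) {
    return 'skip';
  }
  const text = finding.description.toLowerCase();
  // Manual-only signals win: design language, architecture, ambiguity.
  if (/\b(consider|evaluate|design|rethink|architect)/.test(text)) {
    return 'manual-only';
  }
  // Auto-fixable signals: a concrete file reference or a clear gap.
  if (finding.file_refs.length > 0 ||
      /\bmissing (test|export|assertion)/.test(text)) {
    return 'auto-fixable';
  }
  return 'manual-only'; // err on manual-only when uncertain
}
```

Note that the manual-only check runs before the auto-fixable check: a finding that names a file but asks you to "consider a redesign" still lands in manual-only.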
<step name="present-classification">
Display the classification table:

```
## Audit-Fix Classification

| # | Finding | Severity | Classification | Reason |
|---|---------|----------|----------------|--------|
| F-01 | Missing export in index.ts | high | auto-fixable | Specific file, clear fix |
| F-02 | No error handling in payment flow | high | manual-only | Requires design decisions |
| F-03 | Test stub with 0 assertions | medium | auto-fixable | Clear test gap |
```

If `--dry-run` was specified, **stop here and exit**. The classification table is the
final output — do not proceed to fixing.
</step>
<step name="fix-loop">
For each **auto-fixable** finding (up to `--max`, ordered by severity desc):

**a. Spawn executor agent:**
```
Task(
  prompt="Fix finding {ID}: {description}. Files: {file_refs}. Make the minimal change to resolve this specific finding. Do not refactor surrounding code.",
  subagent_type="gsd-executor"
)
```

**b. Run tests:**
```bash
npm test 2>&1 | tail -20
```

**c. If tests pass** — commit atomically:
```bash
git add {changed_files}
git commit -m "fix({scope}): resolve {ID} — {description}"
```
The commit message **must** include the finding ID (e.g., F-01) for traceability.

**d. If tests fail** — revert changes, mark the finding as `fix-failed`, and **stop the pipeline**:
```bash
git checkout -- {changed_files} 2>/dev/null
```
Log the failure reason and stop processing — do not continue to the next finding.
A test failure indicates the codebase may be in an unexpected state, so the pipeline
must halt to avoid cascading issues. Remaining auto-fixable findings will appear in the
report as `not-attempted`.
</step>
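The loop's ordering, `--max` cap, and stop-on-first-failure behavior can be sketched as follows. `attemptFix` is a hypothetical stand-in for the spawn/test/commit sequence, injected so the control flow can run in isolation:

```javascript
// Illustrative sketch of the fix-loop control flow only. attemptFix(finding)
// returns true when the executor's change passed tests and was committed,
// false when tests failed and the change was reverted.
function runFixLoop(findings, max, attemptFix) {
  const rank = { low: 0, medium: 1, high: 2 };
  const queue = findings
    .filter((f) => f.classification === 'auto-fixable')
    .sort((a, b) => rank[b.severity] - rank[a.severity]) // severity desc
    .slice(0, max);                                      // respect --max

  const results = [];
  for (const finding of queue) {
    if (attemptFix(finding)) {
      results.push({ id: finding.id, status: 'fixed' });
    } else {
      // First failure halts the pipeline; nothing after it is attempted.
      results.push({ id: finding.id, status: 'fix-failed' });
      break;
    }
  }
  const attempted = new Set(results.map((r) => r.id));
  for (const f of queue) {
    if (!attempted.has(f.id)) results.push({ id: f.id, status: 'not-attempted' });
  }
  return results;
}
```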
<step name="report">
Present the final summary:

```
## Audit-Fix Complete

**Source:** {audit_command}
**Findings:** {total} total, {auto} auto-fixable, {manual} manual-only
**Fixed:** {fixed_count}/{auto} auto-fixable findings
**Failed:** {failed_count} (reverted)

| # | Finding | Status | Commit |
|---|---------|--------|--------|
| F-01 | Missing export | Fixed | abc1234 |
| F-03 | Test stub | Fix failed | (reverted) |

### Manual-only findings (require developer attention):
- F-02: No error handling in payment flow — requires design decisions
```
</step>

</process>

<success_criteria>
- Auto-fixable findings processed sequentially until `--max` is reached or a test failure stops the pipeline
- Tests pass after each committed fix (no broken commits)
- Failed fixes are reverted cleanly (no partial changes left)
- Pipeline stops after the first test failure (no cascading fixes)
- Every commit message contains the finding ID
- Manual-only findings are surfaced for developer attention
- `--dry-run` produces a useful standalone classification table
</success_criteria>
tests/audit-fix-command.test.cjs (new file, 423 lines)
@@ -0,0 +1,423 @@
/**
 * GSD Audit-Fix Command Tests
 *
 * Validates the autonomous audit-to-fix pipeline:
 * - Command file exists with correct frontmatter
 * - Workflow file exists with all required steps
 * - 4 flags documented (--max, --severity, --dry-run, --source)
 * - Classification heuristics (auto-fixable vs manual-only)
 * - --dry-run stops before fixing
 * - Atomic commit with finding ID in message
 * - Test-then-commit pattern
 * - Revert on test failure
 */

const { test, describe } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');

const REPO_ROOT = path.join(__dirname, '..');
const COMMANDS_DIR = path.join(REPO_ROOT, 'commands', 'gsd');
const WORKFLOWS_DIR = path.join(REPO_ROOT, 'get-shit-done', 'workflows');

// ─── 1. Command file — audit-fix.md ──────────────────────────────────────────

describe('AUDIT-FIX: command file', () => {
  const cmdPath = path.join(COMMANDS_DIR, 'audit-fix.md');

  test('command file exists', () => {
    assert.ok(
      fs.existsSync(cmdPath),
      'audit-fix.md must exist in commands/gsd/'
    );
  });

  test('has valid frontmatter with name gsd:audit-fix', () => {
    const content = fs.readFileSync(cmdPath, 'utf-8');
    const frontmatter = content.split('---')[1] || '';
    assert.ok(
      frontmatter.includes('name: gsd:audit-fix'),
      'name must be gsd:audit-fix'
    );
  });

  test('has description in frontmatter', () => {
    const content = fs.readFileSync(cmdPath, 'utf-8');
    const frontmatter = content.split('---')[1] || '';
    assert.ok(
      frontmatter.includes('description:'),
      'must have description in frontmatter'
    );
  });

  test('has allowed-tools list including Agent', () => {
    const content = fs.readFileSync(cmdPath, 'utf-8');
    const frontmatter = content.split('---')[1] || '';
    assert.ok(
      frontmatter.includes('allowed-tools:'),
      'must have allowed-tools in frontmatter'
    );
    assert.ok(
      frontmatter.includes('- Agent'),
      'allowed-tools must include Agent for spawning executor subagents'
    );
  });

  test('has argument-hint in frontmatter for /gsd-help discoverability', () => {
    const content = fs.readFileSync(cmdPath, 'utf-8');
    const frontmatter = content.split('---')[1] || '';
    assert.ok(
      frontmatter.includes('argument-hint:'),
      'must have argument-hint in frontmatter for /gsd-help discoverability'
    );
  });

  test('has type: prompt in frontmatter', () => {
    const content = fs.readFileSync(cmdPath, 'utf-8');
    const frontmatter = content.split('---')[1] || '';
    assert.ok(
      frontmatter.includes('type: prompt'),
      'must have type: prompt in frontmatter'
    );
  });

  test('argument-hint reflects supported source values', () => {
    const content = fs.readFileSync(cmdPath, 'utf-8');
    const frontmatter = content.split('---')[1] || '';
    assert.ok(
      frontmatter.includes('--source <audit-uat>'),
      'argument-hint must show --source <audit-uat> (the only currently supported value)'
    );
    assert.ok(
      !frontmatter.includes('--source <audit|verify>'),
      'argument-hint must not advertise unsupported verify source'
    );
  });

  test('references audit-fix.md workflow', () => {
    const content = fs.readFileSync(cmdPath, 'utf-8');
    assert.ok(
      content.includes('audit-fix.md'),
      'must reference audit-fix.md workflow'
    );
  });

  test('has <objective> section', () => {
    const content = fs.readFileSync(cmdPath, 'utf-8');
    assert.ok(content.includes('<objective>'), 'must have <objective> section');
    assert.ok(content.includes('</objective>'), 'must close <objective> section');
  });
});

// ─── 2. Workflow file — audit-fix.md ──────────────────────────────────────────

describe('AUDIT-FIX: workflow file', () => {
  const wfPath = path.join(WORKFLOWS_DIR, 'audit-fix.md');

  test('workflow file exists', () => {
    assert.ok(
      fs.existsSync(wfPath),
      'audit-fix.md must exist in get-shit-done/workflows/'
    );
  });

  test('has <purpose> section', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(content.includes('<purpose>'), 'must have <purpose> section');
    assert.ok(content.includes('</purpose>'), 'must close <purpose> section');
  });

  test('has <process> section with steps', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(content.includes('<process>'), 'must have <process> section');
    assert.ok(content.includes('</process>'), 'must close <process> section');
  });

  test('has <success_criteria> section', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(content.includes('<success_criteria>'), 'must have <success_criteria> section');
    assert.ok(content.includes('</success_criteria>'), 'must close <success_criteria> section');
  });

  test('has <available_agent_types> listing gsd-executor', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.includes('<available_agent_types>'),
      'must have <available_agent_types> section'
    );
    assert.ok(
      content.includes('gsd-executor'),
      'must list gsd-executor as available agent type'
    );
  });
});

// ─── 3. Flags documented ─────────────────────────────────────────────────────

describe('AUDIT-FIX: all 4 flags documented', () => {
  const cmdPath = path.join(COMMANDS_DIR, 'audit-fix.md');
  const wfPath = path.join(WORKFLOWS_DIR, 'audit-fix.md');

  test('--max flag documented in command', () => {
    const content = fs.readFileSync(cmdPath, 'utf-8');
    assert.ok(
      content.includes('--max'),
      'command must document --max flag'
    );
  });

  test('--severity flag documented in command', () => {
    const content = fs.readFileSync(cmdPath, 'utf-8');
    assert.ok(
      content.includes('--severity'),
      'command must document --severity flag'
    );
  });

  test('--dry-run flag documented in command', () => {
    const content = fs.readFileSync(cmdPath, 'utf-8');
    assert.ok(
      content.includes('--dry-run'),
      'command must document --dry-run flag'
    );
  });

  test('--source flag documented in command', () => {
    const content = fs.readFileSync(cmdPath, 'utf-8');
    assert.ok(
      content.includes('--source'),
      'command must document --source flag'
    );
  });

  test('--max flag documented in workflow with default 5', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(content.includes('--max'), 'workflow must document --max flag');
    assert.ok(
      content.includes('5'),
      'workflow must show default of 5 for --max'
    );
  });

  test('--severity flag documented in workflow with default medium', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(content.includes('--severity'), 'workflow must document --severity flag');
    assert.ok(
      content.includes('medium'),
      'workflow must show default of medium for --severity'
    );
  });

  test('--dry-run flag documented in workflow', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.includes('--dry-run'),
      'workflow must document --dry-run flag'
    );
  });

  test('--source flag documented in workflow with default audit-uat', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(content.includes('--source'), 'workflow must document --source flag');
    assert.ok(
      content.includes('audit-uat'),
      'workflow must show audit-uat as default source'
    );
  });
});

// ─── 4. Classification heuristics ─────────────────────────────────────────────

describe('AUDIT-FIX: classification heuristics documented', () => {
  const wfPath = path.join(WORKFLOWS_DIR, 'audit-fix.md');

  test('documents auto-fixable classification', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.includes('auto-fixable'),
      'must document auto-fixable classification'
    );
  });

  test('documents manual-only classification', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.includes('manual-only'),
      'must document manual-only classification'
    );
  });

  test('errs on manual-only when uncertain', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.toLowerCase().includes('uncertain') &&
      content.includes('manual-only'),
      'must specify to err on manual-only when uncertain'
    );
  });

  test('lists auto-fixable signals (file path, missing test)', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.includes('file path'),
      'must list file path reference as auto-fixable signal'
    );
    assert.ok(
      content.includes('missing test') || content.includes('Missing test'),
      'must list missing test as auto-fixable signal'
    );
  });

  test('lists manual-only signals (design decisions, architecture)', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.includes('design decision') || content.includes('design decisions'),
      'must list design decisions as manual-only signal'
    );
    assert.ok(
      content.includes('architecture') || content.includes('architectural'),
      'must list architecture changes as manual-only signal'
    );
  });
});

// ─── 5. --dry-run stops before fixing ─────────────────────────────────────────

describe('AUDIT-FIX: --dry-run stops before fixing', () => {
  const wfPath = path.join(WORKFLOWS_DIR, 'audit-fix.md');

  test('dry-run explicitly stops after classification', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    // Verify dry-run is mentioned in the context of stopping/exiting
    assert.ok(
      content.includes('dry-run') && (
        content.includes('stop here') ||
        content.includes('stop') ||
        content.includes('exit')
      ),
      'must indicate --dry-run stops after classification'
    );
  });

  test('dry-run does not proceed to fix loop', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    // Find the dry-run stop instruction and verify it comes before the fix-loop step
    const dryRunStopIdx = content.indexOf('dry-run');
    const fixLoopIdx = content.indexOf('fix-loop');
    assert.ok(dryRunStopIdx > -1, 'must mention dry-run');
    assert.ok(fixLoopIdx > -1, 'must have fix-loop step');
    // The dry-run stop instruction should be in the classification step, before fix-loop
    assert.ok(
      dryRunStopIdx < fixLoopIdx,
      'dry-run stop must be documented before the fix-loop step'
    );
  });
});

// ─── 6. Atomic commit with finding ID ─────────────────────────────────────────

describe('AUDIT-FIX: atomic commit with finding ID', () => {
  const wfPath = path.join(WORKFLOWS_DIR, 'audit-fix.md');

  test('commit message pattern includes finding ID', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    // The workflow should show {ID} in the commit message template
    assert.ok(
      content.includes('{ID}') && content.includes('commit'),
      'commit message template must include {ID} placeholder for finding ID'
    );
  });

  test('commit is atomic per finding (one commit per fix)', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    // The fix-loop structure should show commit happening inside the per-finding loop
    assert.ok(
      content.includes('commit') && content.includes('finding'),
      'must commit atomically per finding'
    );
  });

  test('mentions finding ID traceability', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.includes('traceability') || content.includes('finding ID'),
      'must mention finding ID for traceability'
    );
  });
});

// ─── 7. Test-then-commit pattern ──────────────────────────────────────────────

describe('AUDIT-FIX: test-then-commit pattern', () => {
  const wfPath = path.join(WORKFLOWS_DIR, 'audit-fix.md');

  test('runs tests before committing', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.includes('npm test'),
      'must run npm test as part of the fix loop'
    );
  });

  test('tests appear before commit in workflow order', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    // Within the fix-loop step, test must come before commit
    const fixLoopStart = content.indexOf('fix-loop');
    const testIdx = content.indexOf('npm test', fixLoopStart);
    const commitIdx = content.indexOf('git commit', fixLoopStart);
    assert.ok(testIdx > -1, 'must have npm test in fix-loop');
    assert.ok(commitIdx > -1, 'must have git commit in fix-loop');
    assert.ok(
      testIdx < commitIdx,
      'npm test must appear before commit in fix-loop (test-then-commit pattern)'
    );
  });

  test('commit is conditional on tests passing', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.includes('tests pass') || content.includes('If tests pass'),
      'commit must be conditional on tests passing'
    );
  });
});

// ─── 8. Revert on test failure ────────────────────────────────────────────────

describe('AUDIT-FIX: revert on test failure', () => {
  const wfPath = path.join(WORKFLOWS_DIR, 'audit-fix.md');

  test('reverts changes when tests fail', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.includes('git checkout') || content.includes('revert'),
      'must revert changes on test failure'
    );
  });

  test('marks failed fixes as fix-failed', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.includes('fix-failed'),
      'must mark failed fixes as fix-failed'
    );
  });

  test('stops pipeline after first test failure', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    assert.ok(
      content.includes('stop') && content.includes('fix-failed'),
      'must stop the pipeline after the first test failure'
    );
  });

  test('test failure does not leave partial changes', () => {
    const content = fs.readFileSync(wfPath, 'utf-8');
    // git checkout scoped to changed files is the revert mechanism
    assert.ok(
      content.includes('git checkout -- {changed_files}'),
      'must use git checkout -- {changed_files} to clean partial changes on failure'
    );
  });
});