mirror of
https://github.com/glittercowboy/get-shit-done
synced 2026-04-25 17:25:23 +02:00
feat(workflow): add extract-learnings command for phase knowledge capture (#1873)
Add /gsd:extract-learnings command and backing workflow that extracts decisions, lessons, patterns, and surprises from completed phase artifacts into a structured LEARNINGS.md file with YAML frontmatter metadata.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
22  commands/gsd/extract_learnings.md  Normal file

@@ -0,0 +1,22 @@
---
name: gsd:extract-learnings
description: Extract decisions, lessons, patterns, and surprises from completed phase artifacts
argument-hint: <phase-number>
allowed-tools:
  - Read
  - Write
  - Bash
  - Grep
  - Glob
  - Agent
type: prompt
---

<objective>
Extract structured learnings from completed phase artifacts (PLAN.md, SUMMARY.md, VERIFICATION.md, UAT.md, STATE.md) into a LEARNINGS.md file that captures decisions, lessons learned, patterns discovered, and surprises encountered.
</objective>

<execution_context>
@~/.claude/get-shit-done/workflows/extract_learnings.md
</execution_context>

Execute the extract-learnings workflow from @~/.claude/get-shit-done/workflows/extract_learnings.md end-to-end.
232  get-shit-done/workflows/extract_learnings.md  Normal file

@@ -0,0 +1,232 @@
<purpose>
Extract decisions, lessons learned, patterns discovered, and surprises encountered from completed phase artifacts into a structured LEARNINGS.md file. Captures institutional knowledge that would otherwise be lost between phases.
</purpose>

<required_reading>
Read all files referenced by the invoking prompt's execution_context before starting.
</required_reading>

<objective>
Analyze completed phase artifacts (PLAN.md, SUMMARY.md, VERIFICATION.md, UAT.md, STATE.md) and extract structured learnings into 4 categories: decisions, lessons, patterns, and surprises. Each extracted item includes source attribution. The output is a LEARNINGS.md file with YAML frontmatter containing metadata about the extraction.
</objective>

<process>

<step name="initialize">
Parse arguments and load project state:

```bash
INIT=$(node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" init phase-op "${PHASE_ARG}")
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
```

Parse from init JSON: `phase_found`, `phase_dir`, `phase_number`, `phase_name`, `padded_phase`.

If phase not found, exit with error: "Phase {PHASE_ARG} not found."
</step>
<step name="collect_artifacts">
Read the phase artifacts. PLAN.md and SUMMARY.md are required; VERIFICATION.md, UAT.md, and STATE.md are optional.

**Required artifacts:**
- `${PHASE_DIR}/*-PLAN.md` — all plan files for the phase
- `${PHASE_DIR}/*-SUMMARY.md` — all summary files for the phase

If no PLAN.md or SUMMARY.md files are found, exit with error: "Required artifacts missing. PLAN.md and SUMMARY.md are required for learning extraction."

**Optional artifacts (read if available, skip if not found):**
- `${PHASE_DIR}/*-VERIFICATION.md` — verification results
- `${PHASE_DIR}/*-UAT.md` — user acceptance test results
- `.planning/STATE.md` — project state with decisions and blockers

Track which optional artifacts are missing for the `missing_artifacts` frontmatter field.
</step>
<step name="extract_learnings">
Analyze all collected artifacts and extract learnings into 4 categories:

### 1. Decisions
Technical and architectural decisions made during the phase. Look for:
- Explicit decisions documented in PLAN.md or SUMMARY.md
- Technology choices and their rationale
- Trade-offs that were evaluated
- Design decisions recorded in STATE.md

Each decision entry must include:
- **What** was decided
- **Why** it was decided (rationale)
- **Source:** attribution to the artifact where the decision was found (e.g., "Source: 03-01-PLAN.md")

### 2. Lessons
Things learned during execution that were not known beforehand. Look for:
- Unexpected complexity in SUMMARY.md
- Issues discovered during verification in VERIFICATION.md
- Failed approaches documented in SUMMARY.md
- UAT feedback that revealed gaps

Each lesson entry must include:
- **What** was learned
- **Context** for the lesson
- **Source:** attribution to the originating artifact

### 3. Patterns
Reusable patterns, approaches, or techniques discovered. Look for:
- Successful implementation patterns in SUMMARY.md
- Testing patterns from VERIFICATION.md or UAT.md
- Workflow patterns that worked well
- Code organization patterns from PLAN.md

Each pattern entry must include:
- **Pattern** name/description
- **When to use** it
- **Source:** attribution to the originating artifact

### 4. Surprises
Unexpected findings, behaviors, or outcomes. Look for:
- Things that took longer or shorter than estimated
- Unexpected dependencies or interactions
- Edge cases not anticipated in planning
- Performance or behavior that differed from expectations

Each surprise entry must include:
- **What** was surprising
- **Impact** of the surprise
- **Source:** attribution to the originating artifact
</step>
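The four category shapes above share one invariant: every entry carries source attribution. A minimal sketch of a normalized record constructor that enforces this (the function and field names beyond `source` are illustrative assumptions, not mandated by the workflow):

```javascript
// Hypothetical sketch: one record shape for all four learning categories.
// The only hard rule from the workflow is that `source` must be present.
function makeLearning(category, fields) {
  const allowed = ['decision', 'lesson', 'pattern', 'surprise'];
  if (!allowed.includes(category)) {
    throw new Error(`Unknown category: ${category}`);
  }
  if (!fields.source) {
    throw new Error('Every extracted learning must have source attribution.');
  }
  return { category, ...fields };
}
```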
<step name="capture_thought_integration">
If the `capture_thought` tool is available in the current session, capture each extracted learning as a thought with metadata:

```
capture_thought({
  category: "decision" | "lesson" | "pattern" | "surprise",
  phase: PHASE_NUMBER,
  content: LEARNING_TEXT,
  source: ARTIFACT_NAME
})
```

If `capture_thought` is not available (e.g., the runtime does not support it), gracefully skip this step and continue. The LEARNINGS.md file is the primary output — capture_thought is a supplementary integration for runtimes that support thought capture. The workflow must not fail or warn if capture_thought is unavailable.
</step>
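The availability check can be sketched as a plain guard. Here the session tool is modeled as an optional callback; `captureAll` and its return shape are hypothetical, illustrating only the graceful-degradation behavior the step requires:

```javascript
// Hypothetical sketch: invoke capture_thought per learning if available,
// otherwise silently skip (no failure, no warning).
function captureAll(learnings, captureThought) {
  if (typeof captureThought !== 'function') {
    return { captured: 0, skipped: learnings.length }; // graceful no-op
  }
  for (const l of learnings) {
    captureThought({
      category: l.category,
      phase: l.phase,
      content: l.content,
      source: l.source,
    });
  }
  return { captured: learnings.length, skipped: 0 };
}
```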
<step name="write_learnings">
Write the LEARNINGS.md file to the phase directory. If a previous LEARNINGS.md exists, overwrite it (replace the file entirely).

Output path: `${PHASE_DIR}/${PADDED_PHASE}-LEARNINGS.md`

The file must have YAML frontmatter with these fields:

```yaml
---
phase: {PHASE_NUMBER}
phase_name: "{PHASE_NAME}"
project: "{PROJECT_NAME}"
generated: "{ISO_DATE}"
counts:
  decisions: {N}
  lessons: {N}
  patterns: {N}
  surprises: {N}
missing_artifacts:
  - "{ARTIFACT_NAME}"
---
```

The body follows this structure:

```markdown
# Phase {PHASE_NUMBER} Learnings: {PHASE_NAME}

## Decisions

### {Decision Title}
{What was decided}

**Rationale:** {Why}
**Source:** {artifact file}

---

## Lessons

### {Lesson Title}
{What was learned}

**Context:** {context}
**Source:** {artifact file}

---

## Patterns

### {Pattern Name}
{Description}

**When to use:** {applicability}
**Source:** {artifact file}

---

## Surprises

### {Surprise Title}
{What was surprising}

**Impact:** {impact description}
**Source:** {artifact file}
```
</step>
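Rendering the frontmatter is mechanical string assembly. A minimal sketch, assuming the inputs are already computed (the function name and parameter object are illustrative); quoting and two-space indentation follow the template above:

```javascript
// Hypothetical sketch: render the LEARNINGS.md YAML frontmatter.
// Counts must cover all four categories even when a count is zero.
function renderFrontmatter({ phase, phaseName, project, generated, counts, missingArtifacts }) {
  const lines = [
    '---',
    `phase: ${phase}`,
    `phase_name: "${phaseName}"`,
    `project: "${project}"`,
    `generated: "${generated}"`,
    'counts:',
    `  decisions: ${counts.decisions}`,
    `  lessons: ${counts.lessons}`,
    `  patterns: ${counts.patterns}`,
    `  surprises: ${counts.surprises}`,
    'missing_artifacts:',
    ...missingArtifacts.map((a) => `  - "${a}"`),
    '---',
  ];
  return lines.join('\n');
}
```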
<step name="update_state">
Update STATE.md to reflect the learning extraction:

```bash
node "$HOME/.claude/get-shit-done/bin/gsd-tools.cjs" state update "Last Activity" "$(date +%Y-%m-%d)"
```
</step>

<step name="report">
```
---------------------------------------------------------------

## Learnings Extracted: Phase {X} — {Name}

Decisions: {N}
Lessons: {N}
Patterns: {N}
Surprises: {N}
Total: {N}

Output: {PHASE_DIR}/{PADDED_PHASE}-LEARNINGS.md

Missing artifacts: {list or "none"}

Next steps:
- Review extracted learnings for accuracy
- /gsd-progress — see overall project state
- /gsd-execute-phase {next} — continue to next phase

---------------------------------------------------------------
```
</step>

</process>
<success_criteria>
- [ ] Phase artifacts located and read successfully
- [ ] All 4 categories extracted: decisions, lessons, patterns, surprises
- [ ] Each extracted item has source attribution
- [ ] LEARNINGS.md written with correct YAML frontmatter
- [ ] Missing optional artifacts tracked in frontmatter
- [ ] capture_thought integration attempted if tool available
- [ ] STATE.md updated with extraction activity
- [ ] User receives summary report
</success_criteria>

<critical_rules>
- PLAN.md and SUMMARY.md are required — exit with a clear error if missing
- VERIFICATION.md, UAT.md, and STATE.md are optional — extract from them if present, skip gracefully if not found
- Every extracted learning must have source attribution back to the originating artifact
- Running extract-learnings twice on the same phase must overwrite (replace) the previous LEARNINGS.md, not append
- Do not fabricate learnings — only extract what is explicitly documented in artifacts
- If capture_thought is unavailable, the workflow must not fail — degrade gracefully to file-only output
- LEARNINGS.md frontmatter must include counts for all 4 categories and list any missing_artifacts
</critical_rules>
168  tests/extract-learnings.test.cjs  Normal file

@@ -0,0 +1,168 @@
/**
 * Extract-Learnings Command & Workflow Tests
 *
 * Validates command file existence, frontmatter correctness, workflow content,
 * 4 learning categories, capture_thought handling, graceful degradation,
 * LEARNINGS.md output, and missing artifact handling.
 */

const { describe, test } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');

const COMMAND_PATH = path.join(__dirname, '..', 'commands', 'gsd', 'extract_learnings.md');
const WORKFLOW_PATH = path.join(__dirname, '..', 'get-shit-done', 'workflows', 'extract_learnings.md');

describe('extract-learnings command', () => {
  test('command file exists', () => {
    assert.ok(fs.existsSync(COMMAND_PATH), 'commands/gsd/extract_learnings.md should exist');
  });

  test('command file has correct name frontmatter', () => {
    const content = fs.readFileSync(COMMAND_PATH, 'utf-8');
    assert.ok(content.includes('name: gsd:extract-learnings'), 'Command must have name: gsd:extract-learnings');
  });

  test('command file has description frontmatter', () => {
    const content = fs.readFileSync(COMMAND_PATH, 'utf-8');
    assert.ok(content.includes('description:'), 'Command must have description frontmatter');
  });

  test('command file has argument-hint for phase-number', () => {
    const content = fs.readFileSync(COMMAND_PATH, 'utf-8');
    assert.ok(content.includes('argument-hint:'), 'Command must have argument-hint');
    assert.ok(content.includes('<phase-number>'), 'argument-hint must reference <phase-number>');
  });

  test('command file has allowed-tools list', () => {
    const content = fs.readFileSync(COMMAND_PATH, 'utf-8');
    assert.ok(content.includes('allowed-tools:'), 'Command must have allowed-tools');
    assert.ok(content.includes('Read'), 'allowed-tools must include Read');
    assert.ok(content.includes('Write'), 'allowed-tools must include Write');
    assert.ok(content.includes('Bash'), 'allowed-tools must include Bash');
    assert.ok(content.includes('Grep'), 'allowed-tools must include Grep');
    assert.ok(content.includes('Glob'), 'allowed-tools must include Glob');
    assert.ok(content.includes('Agent'), 'allowed-tools must include Agent');
  });

  test('command file has type: prompt', () => {
    const content = fs.readFileSync(COMMAND_PATH, 'utf-8');
    assert.ok(content.includes('type: prompt'), 'Command must have type: prompt');
  });

  test('command references the workflow via execution_context', () => {
    const content = fs.readFileSync(COMMAND_PATH, 'utf-8');
    assert.ok(
      content.includes('workflows/extract_learnings.md'),
      'Command must reference workflows/extract_learnings.md in execution_context'
    );
  });
});

describe('extract-learnings workflow', () => {
  test('workflow file exists', () => {
    assert.ok(fs.existsSync(WORKFLOW_PATH), 'workflows/extract_learnings.md should exist');
  });

  test('workflow has objective tag', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('<objective>'), 'Workflow must have <objective> tag');
    assert.ok(content.includes('</objective>'), 'Workflow must close <objective> tag');
  });

  test('workflow has process tag', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('<process>'), 'Workflow must have <process> tag');
    assert.ok(content.includes('</process>'), 'Workflow must close <process> tag');
  });

  test('workflow has step tags', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('<step name='), 'Workflow must have named step tags');
    assert.ok(content.includes('</step>'), 'Workflow must close step tags');
  });

  test('workflow has success_criteria tag', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('<success_criteria>'), 'Workflow must have <success_criteria> tag');
    assert.ok(content.includes('</success_criteria>'), 'Workflow must close <success_criteria> tag');
  });

  test('workflow has critical_rules tag', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('<critical_rules>'), 'Workflow must have <critical_rules> tag');
    assert.ok(content.includes('</critical_rules>'), 'Workflow must close <critical_rules> tag');
  });

  test('workflow reads required artifacts (PLAN.md and SUMMARY.md)', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('PLAN.md'), 'Workflow must reference PLAN.md');
    assert.ok(content.includes('SUMMARY.md'), 'Workflow must reference SUMMARY.md');
  });

  test('workflow reads optional artifacts (VERIFICATION.md, UAT.md, STATE.md)', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('VERIFICATION.md'), 'Workflow must reference VERIFICATION.md');
    assert.ok(content.includes('UAT.md'), 'Workflow must reference UAT.md');
    assert.ok(content.includes('STATE.md'), 'Workflow must reference STATE.md');
  });

  test('workflow extracts all 4 learning categories', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.toLowerCase().includes('decision'), 'Workflow must extract decisions');
    assert.ok(content.toLowerCase().includes('lesson'), 'Workflow must extract lessons');
    assert.ok(content.toLowerCase().includes('pattern'), 'Workflow must extract patterns');
    assert.ok(content.toLowerCase().includes('surprise'), 'Workflow must extract surprises');
  });

  test('workflow handles capture_thought tool availability', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('capture_thought'), 'Workflow must reference capture_thought tool');
  });

  test('workflow degrades gracefully when capture_thought is unavailable', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(
      content.includes('graceful') || content.includes('not available') || content.includes('unavailable') || content.includes('fallback'),
      'Workflow must handle graceful degradation when capture_thought is unavailable'
    );
  });

  test('workflow outputs LEARNINGS.md', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('LEARNINGS.md'), 'Workflow must output LEARNINGS.md');
  });

  test('workflow handles missing artifacts gracefully', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(
      content.includes('missing') || content.includes('not found') || content.includes('optional'),
      'Workflow must handle missing artifacts'
    );
  });

  test('workflow includes source attribution for extracted items', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(
      content.includes('source') || content.includes('attribution') || content.includes('Source:'),
      'Workflow must include source attribution for extracted items'
    );
  });

  test('workflow specifies LEARNINGS.md YAML frontmatter fields', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(content.includes('phase'), 'LEARNINGS.md frontmatter must include phase');
    assert.ok(content.includes('phase_name'), 'LEARNINGS.md frontmatter must include phase_name');
    assert.ok(content.includes('generated'), 'LEARNINGS.md frontmatter must include generated');
    assert.ok(content.includes('missing_artifacts'), 'LEARNINGS.md frontmatter must include missing_artifacts');
  });

  test('workflow supports overwriting previous LEARNINGS.md on re-run', () => {
    const content = fs.readFileSync(WORKFLOW_PATH, 'utf-8');
    assert.ok(
      content.includes('overwrit') || content.includes('replace'),
      'Workflow must support overwriting previous LEARNINGS.md'
    );
  });
});