mirror of
https://github.com/glittercowboy/get-shit-done
synced 2026-04-25 17:25:23 +02:00
feat(intel): add queryable codebase intelligence system (#1728)
* feat(intel): add queryable codebase intelligence system

  Add persistent codebase intelligence that reduces context overhead:

  - lib/intel.cjs: 654-line CLI module with 13 exports (query, status, diff, snapshot, patch-meta, validate, extract-exports, and more). Reads config.json directly (not via config-get, which hard-exits on missing keys). Default is DISABLED (the user must set intel.enabled: true).
  - gsd-tools.cjs: intel case routing with 7 subcommand dispatches
  - /gsd-intel command: 4 modes (query, status, diff, refresh). The config gate uses the Read tool. Refresh spawns the gsd-intel-updater agent via Task().
  - gsd-intel-updater agent: writes 5 artifacts to .planning/intel/ (files.json, apis.json, deps.json, stack.json, arch.md). Uses gsd-tools intel CLI calls. Completion markers registered in agent-contracts.md.
  - agent-contracts.md: updated with gsd-intel-updater registration

* docs(changelog): add intel system entry for #1688

* test(intel): add comprehensive tests for intel.cjs

  Cover disabled gating, query (keys, values, case-insensitive, multi-file, arch.md text), status (fresh, stale, missing), diff (no baseline, added, changed), snapshot, validate (missing files, invalid JSON, complete store), patch-meta, extract-exports (CJS, ESM named, ESM block, missing file), and gsd-tools CLI routing for intel subcommands. 38 test cases across 10 describe blocks.

* fix(intel): address review feedback -- merge markers, redundant requires, gate docs, update route

  - Remove merge conflict markers from CHANGELOG.md
  - Replace redundant require('path')/require('fs') in isIntelEnabled with top-level bindings
  - Add JSDoc notes explaining why intelPatchMeta and intelExtractExports skip the isIntelEnabled gate
  - Add an 'intel update' CLI route in gsd-tools.cjs and update the help text
  - Fix the stale /gsd: colon reference in the intelUpdate return message
@@ -9,6 +9,7 @@ Format follows [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).

## [1.33.0] - 2026-04-05

### Added

- **Queryable codebase intelligence system** -- Persistent `.planning/intel/` store with structured JSON files (files, exports, symbols, patterns, dependencies). Query via `gsd-tools intel` subcommands. Incremental updates via `gsd-intel-updater` agent. Opt-in; projects without an intel store are unaffected. (#1688)
- **Shared behavioral references** -- Add questioning, domain-probes, and UI-brand reference docs wired into workflows (#1658)
- **Chore / Maintenance issue template** -- Structured template for internal maintenance tasks (#1689)
- **Typed contribution templates** -- Separate Bug, Enhancement, and Feature issue/PR templates with approval gates (#1673)
314
agents/gsd-intel-updater.md
Normal file
@@ -0,0 +1,314 @@
---
name: gsd-intel-updater
description: Analyzes codebase and writes structured intel files to .planning/intel/.
tools: Read, Write, Bash, Glob, Grep
color: cyan
# hooks:
---

<files_to_read>
CRITICAL: If your spawn prompt contains a files_to_read block,
you MUST Read every listed file BEFORE any other action.
Skipping this causes hallucinated context and broken output.
</files_to_read>

> Default files: .planning/intel/stack.json (if it exists), to understand current state before updating.

# GSD Intel Updater

<role>
You are **gsd-intel-updater**, the codebase intelligence agent for the GSD development system. You read project source files and write structured intel to `.planning/intel/`. Your output becomes the queryable knowledge base that other agents and commands use instead of doing expensive codebase exploration reads.

## Core Principle

Write machine-parseable, evidence-based intelligence. Every claim references actual file paths. Prefer structured JSON over prose.

- **Always include file paths.** Every claim must reference the actual code location.
- **Write current state only.** No temporal language ("recently added", "will be changed").
- **Evidence-based.** Read the actual files. Do not guess from file names or directory structures.
- **Cross-platform.** Use the Glob, Read, and Grep tools -- not Bash `ls`, `find`, or `cat`. Bash file commands fail on Windows. Only use Bash for `node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs intel` CLI calls.
- **ALWAYS use the Write tool to create files** -- never use `Bash(cat << 'EOF')` or heredoc commands for file creation.
</role>

<upstream_input>
## Upstream Input

### From `/gsd-intel` Command

- **Spawned by:** `/gsd-intel` command
- **Receives:** Focus directive -- either `full` (all 5 files) or `partial --files <paths>` (update specific file entries only)
- **Input format:** Spawn prompt with `focus: full|partial` directive and project root path

### Config Gate

The `/gsd-intel` command has already confirmed that `intel.enabled` is true before spawning this agent. Proceed directly to Step 1.
</upstream_input>

## Project Scope

When analyzing this project, use ONLY canonical source locations:

- `agents/*.md` -- Agent instruction files
- `commands/gsd/*.md` -- Command files
- `get-shit-done/bin/` -- CLI tooling
- `get-shit-done/workflows/` -- Workflow files
- `get-shit-done/references/` -- Reference docs
- `hooks/*.js` -- Git hooks

EXCLUDE from counts and analysis:

- `.planning/` -- Planning docs, not project code
- `node_modules/`, `dist/`, `build/`, `.git/`

**Count accuracy:** When reporting component counts in stack.json or arch.md, always derive counts by running Glob on the canonical locations above, not from memory or CLAUDE.md. Example: `Glob("agents/*.md")` for the agent count.

## Forbidden Files

When exploring, NEVER read or include in your output:

- `.env` files (except `.env.example` or `.env.template`)
- `*.key`, `*.pem`, `*.pfx`, `*.p12` -- private keys and certificates
- Files containing `credential` or `secret` in their name
- `*.keystore`, `*.jks` -- Java keystores
- `id_rsa`, `id_ed25519` -- SSH keys
- `node_modules/`, `.git/`, `dist/`, `build/` directories

If encountered, skip silently. Do NOT include contents.

## Intel File Schemas

All JSON files include a `_meta` object with `updated_at` (ISO timestamp) and `version` (integer, start at 1, increment on update).

### files.json -- File Graph

```json
{
  "_meta": { "updated_at": "ISO-8601", "version": 1 },
  "entries": {
    "src/index.ts": {
      "exports": ["main", "default"],
      "imports": ["./config", "express"],
      "type": "entry-point"
    }
  }
}
```

**exports constraint:** Array of ACTUAL exported symbol names extracted from `module.exports` or `export` statements. MUST be real identifiers (e.g., `"configLoad"`, `"stateUpdate"`), NOT descriptions (e.g., `"config operations"`). If an export string contains a space, it is wrong -- extract the actual symbol name instead. Use `node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs intel extract-exports <file>` to get accurate exports.
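To illustrate the distinction the constraint draws (symbol names, never descriptions), here is a minimal regex-based sketch of the kind of extraction the `extract-exports` subcommand performs. This is illustrative only -- the real implementation lives in `lib/intel.cjs`, and `sketchExtractExports` is a hypothetical name:

```javascript
// Illustrative sketch (NOT the lib/intel.cjs implementation):
// collect real identifiers from CJS and ESM export statements.
function sketchExtractExports(source) {
  const names = new Set();
  // CJS: exports.foo = ...
  for (const m of source.matchAll(/\bexports\.(\w+)\s*=/g)) names.add(m[1]);
  // CJS: module.exports = { foo, bar: impl }
  const block = source.match(/module\.exports\s*=\s*\{([^}]*)\}/);
  if (block) {
    for (const part of block[1].split(',')) {
      const name = part.split(':')[0].trim();
      if (/^\w+$/.test(name)) names.add(name);
    }
  }
  // ESM: export function/const/class foo, plus default
  for (const m of source.matchAll(/\bexport\s+(?:async\s+)?(?:function|const|let|var|class)\s+(\w+)/g)) {
    names.add(m[1]);
  }
  if (/\bexport\s+default\b/.test(source)) names.add('default');
  return [...names];
}
```

Every returned string is a bare identifier, so the "no spaces" rule above holds by construction.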

Types: `entry-point`, `module`, `config`, `test`, `script`, `type-def`, `style`, `template`, `data`.

### apis.json -- API Surfaces

```json
{
  "_meta": { "updated_at": "ISO-8601", "version": 1 },
  "entries": {
    "GET /api/users": {
      "method": "GET",
      "path": "/api/users",
      "params": ["page", "limit"],
      "file": "src/routes/users.ts",
      "description": "List all users with pagination"
    }
  }
}
```

### deps.json -- Dependency Chains

```json
{
  "_meta": { "updated_at": "ISO-8601", "version": 1 },
  "entries": {
    "express": {
      "version": "^4.18.0",
      "type": "production",
      "used_by": ["src/server.ts", "src/routes/"]
    }
  }
}
```

Types: `production`, `development`, `peer`, `optional`.

Each dependency entry should also include `"invocation": "<method or npm script>"`. Set invocation to the npm script command that uses this dep (e.g. `npm run lint`, `npm test`, `npm run dashboard`). For deps imported via `require()`, set it to `require`. For implicit framework deps, set it to `implicit`. Set `used_by` to the npm script names that invoke them.
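As an illustration of the `invocation` field (a hypothetical entry, not taken from a real project), a dev dependency driven by an npm script would be recorded as:

```json
{
  "eslint": {
    "version": "^9.0.0",
    "type": "development",
    "invocation": "npm run lint",
    "used_by": ["lint"]
  }
}
```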

### stack.json -- Tech Stack

```json
{
  "_meta": { "updated_at": "ISO-8601", "version": 1 },
  "languages": ["TypeScript", "JavaScript"],
  "frameworks": ["Express", "React"],
  "tools": ["ESLint", "Jest", "Docker"],
  "build_system": "npm scripts",
  "test_framework": "Jest",
  "package_manager": "npm",
  "content_formats": ["Markdown (skills, agents, commands)", "YAML (frontmatter config)", "EJS (templates)"]
}
```

Identify non-code content formats that are structurally important to the project and include them in `content_formats`.

### arch.md -- Architecture Summary

```markdown
---
updated_at: "ISO-8601"
---

## Architecture Overview

{pattern name and description}

## Key Components

| Component | Path | Responsibility |
|-----------|------|----------------|

## Data Flow

{entry point} -> {processing} -> {output}

## Conventions

{naming, file organization, import patterns}
```

<execution_flow>
## Exploration Process

### Step 1: Orientation

Glob for project structure indicators:

- `**/package.json`, `**/tsconfig.json`, `**/pyproject.toml`, `**/*.csproj`
- `**/Dockerfile`, `**/.github/workflows/*`
- Entry points: `**/index.*`, `**/main.*`, `**/app.*`, `**/server.*`

### Step 2: Stack Detection

Read package.json, configs, and build files. Write `stack.json`. Then patch its timestamp:

```bash
node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs intel patch-meta .planning/intel/stack.json --cwd <project_root>
```

### Step 3: File Graph

Glob source files (`**/*.ts`, `**/*.js`, `**/*.py`, etc., excluding node_modules/dist/build).
Read key files (entry points, configs, core modules) for imports/exports.
Write `files.json`. Then patch its timestamp:

```bash
node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs intel patch-meta .planning/intel/files.json --cwd <project_root>
```

Focus on files that matter -- entry points, core modules, configs. Skip test files and generated code unless they reveal architecture.

### Step 4: API Surface

Grep for route definitions, endpoint declarations, and CLI command registrations.
Patterns to search: `app.get(`, `router.post(`, `@GetMapping`, `def route`, express route patterns.
Write `apis.json`. If no API endpoints are found, write an empty `entries` object. Then patch its timestamp:

```bash
node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs intel patch-meta .planning/intel/apis.json --cwd <project_root>
```

### Step 5: Dependencies

Read package.json (dependencies, devDependencies), requirements.txt, go.mod, or Cargo.toml.
Cross-reference with actual imports to populate `used_by`.
Write `deps.json`. Then patch its timestamp:

```bash
node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs intel patch-meta .planning/intel/deps.json --cwd <project_root>
```

### Step 6: Architecture

Synthesize patterns from steps 2-5 into a human-readable summary.
Write `arch.md`.

### Step 6.5: Self-Check

Run: `node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs intel validate --cwd <project_root>`

Review the output:

- If `valid: true`: proceed to Step 7
- If errors exist: fix the indicated files before proceeding
- Common fixes: replace descriptive exports with actual symbol names, fix stale timestamps

This step is MANDATORY -- do not skip it.

### Step 7: Snapshot

Run: `node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs intel snapshot --cwd <project_root>`

This writes `.last-refresh.json` with accurate timestamps and hashes. Do NOT write `.last-refresh.json` manually.
</execution_flow>

## Partial Updates

When `focus: partial --files <paths>` is specified:

1. Only update entries in files.json/apis.json/deps.json that reference the given paths
2. Do NOT rewrite stack.json or arch.md (these need full context)
3. Preserve existing entries not related to the specified paths
4. Read existing intel files first, merge updates, write back

## Output Budget

| File | Target | Hard Limit |
|------|--------|------------|
| files.json | <=2000 tokens | 3000 tokens |
| apis.json | <=1500 tokens | 2500 tokens |
| deps.json | <=1000 tokens | 1500 tokens |
| stack.json | <=500 tokens | 800 tokens |
| arch.md | <=1500 tokens | 2000 tokens |

For large codebases, prioritize coverage of key files over exhaustive listing. Include the most important 50-100 source files in files.json rather than attempting to list every file.

<success_criteria>
- [ ] All 5 intel files written to .planning/intel/
- [ ] All JSON files are valid, parseable JSON
- [ ] All entries reference actual file paths verified by Glob/Read
- [ ] .last-refresh.json written with hashes
- [ ] Completion marker returned
</success_criteria>

<structured_returns>
## Completion Protocol

CRITICAL: Your final output MUST end with exactly one completion marker.
Orchestrators pattern-match on these markers to route results. Omitting one causes silent failures.

- `## INTEL UPDATE COMPLETE` - all intel files written successfully
- `## INTEL UPDATE FAILED` - could not complete analysis (disabled, empty project, errors)
</structured_returns>

<critical_rules>

### Context Quality Tiers

| Budget Used | Tier | Behavior |
|-------------|------|----------|
| 0-30% | PEAK | Explore freely, read broadly |
| 30-50% | GOOD | Be selective with reads |
| 50-70% | DEGRADING | Write incrementally, skip non-essential |
| 70%+ | POOR | Finish current file and return immediately |

</critical_rules>

<anti_patterns>

## Anti-Patterns

1. DO NOT guess or assume -- read actual files for evidence
2. DO NOT use Bash for file listing -- use the Glob tool
3. DO NOT read files in node_modules, .git, dist, or build directories
4. DO NOT include secrets or credentials in intel output
5. DO NOT write placeholder data -- every entry must be verified
6. DO NOT exceed the output budget -- prioritize key files over exhaustive listing
7. DO NOT commit the output -- the orchestrator handles commits
8. DO NOT consume more than 50% of context before producing output -- write incrementally

</anti_patterns>
179
commands/gsd/intel.md
Normal file
@@ -0,0 +1,179 @@
---
name: gsd:intel
description: "Query, inspect, or refresh codebase intelligence files in .planning/intel/"
argument-hint: "[query <term>|status|diff|refresh]"
allowed-tools:
- Read
- Bash
- Task
---

**STOP -- DO NOT READ THIS FILE. You are already reading it. This prompt was injected into your context by Claude Code's command system. Using the Read tool on this file wastes tokens. Begin executing Step 0 immediately.**

## Step 0 -- Banner

**Before ANY tool calls**, display this banner:

```
GSD > INTEL
```

Then proceed to Step 1.

## Step 1 -- Config Gate

Check whether intel is enabled by reading `.planning/config.json` directly with the Read tool.

**DO NOT use the gsd-tools config get-value command** -- it hard-exits on missing keys.

1. Read `.planning/config.json` using the Read tool
2. If the file does not exist: display the disabled message below and **STOP**
3. Parse the JSON content. Check whether `config.intel && config.intel.enabled === true`
4. If `intel.enabled` is NOT explicitly `true`: display the disabled message below and **STOP**
5. If `intel.enabled` is `true`: proceed to Step 2

**Disabled message:**

```
GSD > INTEL

Intel system is disabled. To activate:

node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs config-set intel.enabled true

Then run /gsd-intel refresh to build the initial index.
```

---

## Step 2 -- Parse Argument

Parse `$ARGUMENTS` to determine the operation mode:

| Argument | Action |
|----------|--------|
| `query <term>` | Run inline query (Step 2a) |
| `status` | Run inline status check (Step 2b) |
| `diff` | Run inline diff check (Step 2c) |
| `refresh` | Spawn intel-updater agent (Step 3) |
| No argument or unknown | Show usage message |

**Usage message** (shown when there is no argument or an unrecognized one):

```
GSD > INTEL

Usage: /gsd-intel <mode>

Modes:
  query <term>   Search intel files for a term
  status         Show intel file freshness and staleness
  diff           Show changes since last snapshot
  refresh        Rebuild all intel files from codebase analysis
```

### Step 2a -- Query

Run:

```bash
node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs intel query <term>
```

Parse the JSON output and display results:

- If the output contains `"disabled": true`, display the disabled message from Step 1 and **STOP**
- If no matches are found, display: `No intel matches for '<term>'. Try /gsd-intel refresh to build the index.`
- Otherwise, display matching entries grouped by intel file

**STOP** after displaying results. Do not spawn an agent.

### Step 2b -- Status

Run:

```bash
node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs intel status
```

Parse the JSON output and display each intel file with:

- File name
- Last `updated_at` timestamp
- STALE or FRESH status (stale if older than 24 hours or missing)
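The freshness rule can be stated compactly (a sketch; the authoritative logic lives in lib/intel.cjs, and `freshness` is a hypothetical name):

```javascript
// A file is STALE when its _meta.updated_at is missing, unparseable,
// or more than 24 hours old; otherwise it is FRESH.
const STALE_MS = 24 * 60 * 60 * 1000;

function freshness(updatedAt, now = Date.now()) {
  if (!updatedAt) return 'STALE';
  const age = now - Date.parse(updatedAt);
  return Number.isNaN(age) || age > STALE_MS ? 'STALE' : 'FRESH';
}
```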

**STOP** after displaying status. Do not spawn an agent.

### Step 2c -- Diff

Run:

```bash
node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs intel diff
```

Parse the JSON output and display:

- Added entries since last snapshot
- Removed entries since last snapshot
- Changed entries since last snapshot

If no snapshot exists, suggest running `refresh` first.

**STOP** after displaying diff. Do not spawn an agent.

---

## Step 3 -- Refresh (Agent Spawn)

Display before spawning:

```
GSD > Spawning intel-updater agent to analyze codebase...
```

Spawn a Task:

```
Task(
  description="Refresh codebase intelligence files",
  prompt="You are the gsd-intel-updater agent. Your job is to analyze this codebase and write/update intelligence files in .planning/intel/.

Project root: ${CWD}
gsd-tools path: $HOME/.claude/get-shit-done/bin/gsd-tools.cjs

Instructions:
1. Analyze the codebase structure, dependencies, APIs, and architecture
2. Write intel files to .planning/intel/ (files.json, apis.json, deps.json, stack.json, arch.md)
3. Each JSON file must have a _meta object with an updated_at timestamp
4. Use gsd-tools intel extract-exports <file> to analyze source files
5. Use gsd-tools intel patch-meta <file> to update timestamps after writing
6. Use gsd-tools intel validate to check your output

When complete, output: ## INTEL UPDATE COMPLETE
If something fails, output: ## INTEL UPDATE FAILED with details."
)
```

Wait for the agent to complete.

---

## Step 4 -- Post-Refresh Summary

After the agent completes, run:

```bash
node $HOME/.claude/get-shit-done/bin/gsd-tools.cjs intel status
```

Display a summary showing:

- Which intel files were written or updated
- Last update timestamps
- Overall health of the intel index

---

## Anti-Patterns

1. DO NOT spawn an agent for query/status/diff operations -- these are inline CLI calls
2. DO NOT modify intel files directly -- the agent handles writes during refresh
3. DO NOT skip the config gate check
4. DO NOT use the gsd-tools config get-value CLI for the config gate -- it exits on missing keys
@@ -70,6 +70,16 @@
* audit-uat                             Scan all phases for unresolved UAT/verification items
* uat render-checkpoint --file <path>   Render the current UAT checkpoint block
*
* Intel:
* intel query <term>                    Query intel files for a term
* intel status                          Show intel file freshness
* intel update                          Trigger intel refresh (returns agent spawn hint)
* intel diff                            Show changed intel entries since last snapshot
* intel snapshot                        Save current intel state as diff baseline
* intel patch-meta <file>               Update _meta.updated_at in an intel file
* intel validate                        Validate intel file structure
* intel extract-exports <file>          Extract exported symbols from a source file
*
* Scaffolding:
* scaffold context --phase <N>          Create CONTEXT.md template
* scaffold uat --phase <N>              Create UAT.md template

@@ -947,6 +957,45 @@ async function runCommand(command, args, cwd, raw) {
      break;
    }

    // ─── Intel ────────────────────────────────────────────────────────────

    case 'intel': {
      const intel = require('./lib/intel.cjs');
      const subcommand = args[1];
      if (subcommand === 'query') {
        const term = args[2];
        if (!term) error('Usage: gsd-tools intel query <term>');
        const planningDir = path.join(cwd, '.planning');
        core.output(intel.intelQuery(term, planningDir), raw);
      } else if (subcommand === 'status') {
        const planningDir = path.join(cwd, '.planning');
        core.output(intel.intelStatus(planningDir), raw);
      } else if (subcommand === 'diff') {
        const planningDir = path.join(cwd, '.planning');
        core.output(intel.intelDiff(planningDir), raw);
      } else if (subcommand === 'snapshot') {
        const planningDir = path.join(cwd, '.planning');
        core.output(intel.intelSnapshot(planningDir), raw);
      } else if (subcommand === 'patch-meta') {
        const filePath = args[2];
        if (!filePath) error('Usage: gsd-tools intel patch-meta <file-path>');
        core.output(intel.intelPatchMeta(path.resolve(cwd, filePath)), raw);
      } else if (subcommand === 'validate') {
        const planningDir = path.join(cwd, '.planning');
        core.output(intel.intelValidate(planningDir), raw);
      } else if (subcommand === 'extract-exports') {
        const filePath = args[2];
        if (!filePath) error('Usage: gsd-tools intel extract-exports <file-path>');
        core.output(intel.intelExtractExports(path.resolve(cwd, filePath)), raw);
      } else if (subcommand === 'update') {
        const planningDir = path.join(cwd, '.planning');
        core.output(intel.intelUpdate(planningDir), raw);
      } else {
        error('Unknown intel subcommand. Available: query, status, update, diff, snapshot, patch-meta, validate, extract-exports');
      }
      break;
    }

    // ─── Documentation ────────────────────────────────────────────────────

    case 'docs-init': {

660
get-shit-done/bin/lib/intel.cjs
Normal file
@@ -0,0 +1,660 @@
/**
|
||||
* lib/intel.cjs -- Intel storage and query operations for GSD.
|
||||
*
|
||||
* Provides a persistent, queryable intelligence system for project metadata.
|
||||
* Intel files live in .planning/intel/ and store structured data about
|
||||
* the project's files, APIs, dependencies, architecture, and tech stack.
|
||||
*
|
||||
* All public functions gate on intel.enabled config (no-op when false).
|
||||
*/
|
||||
|
||||
'use strict';
|
||||
|
||||
const fs = require('fs');
|
||||
const path = require('path');
|
||||
const crypto = require('crypto');
|
||||
|
||||
// ─── Constants ───────────────────────────────────────────────────────────────
|
||||
|
||||
const INTEL_DIR = '.planning/intel';
|
||||
|
||||
const INTEL_FILES = {
|
||||
files: 'files.json',
|
||||
apis: 'apis.json',
|
||||
deps: 'deps.json',
|
||||
arch: 'arch.md',
|
||||
stack: 'stack.json'
|
||||
};
|
||||
|
||||
// ─── Internal helpers ────────────────────────────────────────────────────────
|
||||
|
||||
/**
|
||||
* Ensure the intel directory exists under the given planning dir.
|
||||
*
|
||||
* @param {string} planningDir - Path to .planning directory
|
||||
* @returns {string} Full path to .planning/intel/
|
||||
*/
|
||||
function ensureIntelDir(planningDir) {
|
||||
const intelPath = path.join(planningDir, 'intel');
|
||||
if (!fs.existsSync(intelPath)) {
|
||||
fs.mkdirSync(intelPath, { recursive: true });
|
||||
}
|
||||
return intelPath;
|
||||
}
|
||||
|
||||
/**
|
||||
* Check whether intel is enabled in the project config.
|
||||
* Reads config.json directly via fs. Returns false by default
|
||||
* (when no config, no intel key, or on error).
|
||||
*
|
||||
* @param {string} planningDir - Path to .planning directory
|
||||
* @returns {boolean}
|
||||
*/
|
||||
function isIntelEnabled(planningDir) {
|
||||
try {
|
||||
const configPath = path.join(planningDir, 'config.json');
|
||||
if (!fs.existsSync(configPath)) return false;
|
||||
const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));
|
||||
if (config && config.intel && config.intel.enabled === true) return true;
|
||||
return false;
|
||||
} catch (_e) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Return the standard disabled response object.
|
||||
* @returns {{ disabled: true, message: string }}
|
||||
*/
|
||||
function disabledResponse() {
|
||||
return { disabled: true, message: 'Intel system disabled. Set intel.enabled=true in config.json to activate.' };
|
||||
}
|
||||
|
||||
/**
|
||||
* Resolve full path to an intel file.
|
||||
* @param {string} planningDir
|
||||
* @param {string} filename
|
||||
* @returns {string}
|
||||
*/
|
||||
function intelFilePath(planningDir, filename) {
|
||||
return path.join(planningDir, 'intel', filename);
|
||||
}
|
||||
|
||||
/**
|
||||
* Safely read and parse a JSON intel file.
|
||||
* Returns null if file doesn't exist or can't be parsed.
|
||||
*
|
||||
* @param {string} filePath
|
||||
* @returns {object|null}
|
||||
*/
|
||||
function safeReadJson(filePath) {
|
||||
try {
|
||||
if (!fs.existsSync(filePath)) return null;
|
||||
return JSON.parse(fs.readFileSync(filePath, 'utf8'));
|
||||
} catch (_e) {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Compute SHA-256 hash of a file's contents.
|
||||
* Returns null if the file doesn't exist.
|
||||
*
|
||||
* @param {string} filePath
|
||||
* @returns {string|null}
|
||||
*/
|
||||
function hashFile(filePath) {
|
||||
try {
|
||||
if (!fs.existsSync(filePath)) return null;
|
||||
const content = fs.readFileSync(filePath, 'utf8');
|
||||
return crypto.createHash('sha256').update(content).digest('hex');
|
||||
} catch (_e) {
|
||||
return null;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Search for a term (case-insensitive) in a JSON object's keys and string values.
|
||||
* Returns an array of matching entries.
|
||||
*
|
||||
* @param {object} data - The JSON data (expects { _meta, entries } or flat object)
|
||||
* @param {string} term - Search term
|
||||
* @returns {Array<{ key: string, value: * }>}
|
||||
*/
|
||||
function searchJsonEntries(data, term) {
|
||||
if (!data || typeof data !== 'object') return [];
|
||||
|
||||
const entries = data.entries || data;
|
||||
if (!entries || typeof entries !== 'object') return [];
|
||||
|
||||
const lowerTerm = term.toLowerCase();
|
||||
const matches = [];
|
||||
|
||||
for (const [key, value] of Object.entries(entries)) {
|
||||
if (key === '_meta') continue;
|
||||
|
||||
// Check key match
|
||||
if (key.toLowerCase().includes(lowerTerm)) {
|
||||
matches.push({ key, value });
|
||||
continue;
|
||||
}
|
||||
|
||||
// Check string value match (recursive for objects)
|
||||
if (matchesInValue(value, lowerTerm)) {
|
||||
matches.push({ key, value });
|
||||
}
|
||||
}
|
||||
|
||||
return matches;
|
||||
}
|
||||
|
||||
/**
|
||||
* Recursively check if a term appears in any string value.
|
||||
*
|
||||
* @param {*} value
|
||||
* @param {string} lowerTerm
|
||||
* @returns {boolean}
|
||||
*/
|
||||
function matchesInValue(value, lowerTerm) {
|
||||
if (typeof value === 'string') {
|
||||
return value.toLowerCase().includes(lowerTerm);
|
||||
}
|
||||
if (Array.isArray(value)) {
|
||||
return value.some(v => matchesInValue(v, lowerTerm));
|
||||
}
|
||||
if (value && typeof value === 'object') {
|
||||
return Object.values(value).some(v => matchesInValue(v, lowerTerm));
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Search for a term in arch.md text content.
|
||||
* Returns matching lines.
|
||||
*
|
||||
* @param {string} filePath - Path to arch.md
|
||||
* @param {string} term - Search term
|
||||
* @returns {string[]}
|
||||
*/
|
||||
function searchArchMd(filePath, term) {
|
||||
try {
|
||||
if (!fs.existsSync(filePath)) return [];
|
||||
const content = fs.readFileSync(filePath, 'utf8');
|
||||
const lowerTerm = term.toLowerCase();
|
||||
const lines = content.split(/\r?\n/);
|
||||
return lines.filter(line => line.toLowerCase().includes(lowerTerm));
|
||||
} catch (_e) {
|
||||
return [];
|
||||
}
|
||||
}
|
||||
|
||||
// ─── Public API ──────────────────────────────────────────────────────────────

/**
 * Query intel files for a search term.
 * Searches across all JSON intel files (keys and values) and arch.md (text lines).
 *
 * @param {string} term - Search term (case-insensitive)
 * @param {string} planningDir - Path to .planning directory
 * @returns {{ matches: Array<{ source: string, entries: Array }>, term: string, total: number } | { disabled: true, message: string }}
 */
function intelQuery(term, planningDir) {
  if (!isIntelEnabled(planningDir)) return disabledResponse();

  const matches = [];
  let total = 0;

  // Search JSON intel files
  for (const [_key, filename] of Object.entries(INTEL_FILES)) {
    if (filename.endsWith('.md')) continue; // Skip arch.md here

    const filePath = intelFilePath(planningDir, filename);
    const data = safeReadJson(filePath);
    if (!data) continue;

    const found = searchJsonEntries(data, term);
    if (found.length > 0) {
      matches.push({ source: filename, entries: found });
      total += found.length;
    }
  }

  // Search arch.md
  const archPath = intelFilePath(planningDir, INTEL_FILES.arch);
  const archMatches = searchArchMd(archPath, term);
  if (archMatches.length > 0) {
    matches.push({ source: INTEL_FILES.arch, entries: archMatches });
    total += archMatches.length;
  }

  return { matches, term, total };
}

/**
 * Report status and staleness of each intel file.
 * A file is considered stale if its updated_at is older than 24 hours.
 *
 * @param {string} planningDir - Path to .planning directory
 * @returns {{ files: object, overall_stale: boolean } | { disabled: true, message: string }}
 */
function intelStatus(planningDir) {
  if (!isIntelEnabled(planningDir)) return disabledResponse();

  const STALE_MS = 24 * 60 * 60 * 1000; // 24 hours
  const now = Date.now();
  const files = {};
  let overallStale = false;

  for (const [_key, filename] of Object.entries(INTEL_FILES)) {
    const filePath = intelFilePath(planningDir, filename);
    const exists = fs.existsSync(filePath);

    if (!exists) {
      files[filename] = { exists: false, updated_at: null, stale: true };
      overallStale = true;
      continue;
    }

    let updatedAt = null;

    if (filename.endsWith('.md')) {
      // For arch.md, use file mtime
      try {
        const stat = fs.statSync(filePath);
        updatedAt = stat.mtime.toISOString();
      } catch (_e) {
        // intentionally silent: fall through on error
      }
    } else {
      // For JSON files, read _meta.updated_at
      const data = safeReadJson(filePath);
      if (data && data._meta && data._meta.updated_at) {
        updatedAt = data._meta.updated_at;
      }
    }

    let stale = true;
    if (updatedAt) {
      const age = now - new Date(updatedAt).getTime();
      stale = age > STALE_MS;
    }

    if (stale) overallStale = true;
    files[filename] = { exists: true, updated_at: updatedAt, stale };
  }

  return { files, overall_stale: overallStale };
}

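The 24-hour staleness rule can be exercised in isolation; a minimal sketch with hypothetical timestamps (a missing timestamp always counts as stale, matching the `let stale = true` default above):

```javascript
const STALE_MS = 24 * 60 * 60 * 1000; // same 24-hour window as intelStatus

// A file is stale when it has no timestamp or the timestamp is over 24 h old.
const isStale = (updatedAt, now) =>
  !updatedAt || (now - new Date(updatedAt).getTime()) > STALE_MS;

const now = Date.now();
console.log(isStale(new Date(now - 1 * 3600000).toISOString(), now));  // false (1 h old)
console.log(isStale(new Date(now - 25 * 3600000).toISOString(), now)); // true (25 h old)
console.log(isStale(null, now));                                       // true (no timestamp)
```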
/**
 * Show changes since the last full refresh by comparing file hashes.
 *
 * @param {string} planningDir - Path to .planning directory
 * @returns {{ changed: string[], added: string[], removed: string[] } | { no_baseline: true } | { disabled: true, message: string }}
 */
function intelDiff(planningDir) {
  if (!isIntelEnabled(planningDir)) return disabledResponse();

  const snapshotPath = intelFilePath(planningDir, '.last-refresh.json');
  const snapshot = safeReadJson(snapshotPath);

  if (!snapshot) {
    return { no_baseline: true };
  }

  const prevHashes = snapshot.hashes || {};
  const changed = [];
  const added = [];
  const removed = [];

  // Check current files against snapshot
  for (const [_key, filename] of Object.entries(INTEL_FILES)) {
    const filePath = intelFilePath(planningDir, filename);
    const currentHash = hashFile(filePath);

    if (currentHash && !prevHashes[filename]) {
      added.push(filename);
    } else if (currentHash && prevHashes[filename] && currentHash !== prevHashes[filename]) {
      changed.push(filename);
    } else if (!currentHash && prevHashes[filename]) {
      removed.push(filename);
    }
  }

  return { changed, added, removed };
}

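The three-way hash comparison reduces to a pure function; a minimal sketch with hypothetical hashes (the real code derives them from files on disk via `hashFile`):

```javascript
// Classify filenames by comparing baseline hashes against current hashes.
function classifyDiff(prevHashes, currentHashes) {
  const changed = [], added = [], removed = [];
  const names = new Set([...Object.keys(prevHashes), ...Object.keys(currentHashes)]);
  for (const name of names) {
    const prev = prevHashes[name];
    const cur = currentHashes[name];
    if (cur && !prev) added.push(name);
    else if (cur && prev && cur !== prev) changed.push(name);
    else if (!cur && prev) removed.push(name);
  }
  return { changed, added, removed };
}

console.log(classifyDiff(
  { 'files.json': 'aaa', 'deps.json': 'bbb' },
  { 'files.json': 'aaa', 'deps.json': 'ccc', 'apis.json': 'ddd' }
)); // { changed: ['deps.json'], added: ['apis.json'], removed: [] }
```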
/**
 * Stub for triggering an intel update.
 * The actual update is performed by the intel-updater agent (PLAN-02).
 *
 * @param {string} planningDir - Path to .planning directory
 * @returns {{ action: string, message: string } | { disabled: true, message: string }}
 */
function intelUpdate(planningDir) {
  if (!isIntelEnabled(planningDir)) return disabledResponse();

  return {
    action: 'spawn_agent',
    message: 'Run gsd-tools intel update or spawn gsd-intel-updater agent for full refresh'
  };
}

/**
 * Save a refresh snapshot with hashes of all current intel files.
 * Called by the intel-updater agent after completing a refresh.
 *
 * @param {string} planningDir - Path to .planning directory
 * @returns {{ saved: boolean, timestamp: string, files: number }}
 */
function saveRefreshSnapshot(planningDir) {
  const intelPath = ensureIntelDir(planningDir);
  const hashes = {};
  let fileCount = 0;

  for (const [_key, filename] of Object.entries(INTEL_FILES)) {
    const filePath = path.join(intelPath, filename);
    const hash = hashFile(filePath);
    if (hash) {
      hashes[filename] = hash;
      fileCount++;
    }
  }

  const timestamp = new Date().toISOString();
  const snapshotPath = path.join(intelPath, '.last-refresh.json');
  fs.writeFileSync(snapshotPath, JSON.stringify({
    hashes,
    timestamp,
    version: 1
  }, null, 2), 'utf8');

  return { saved: true, timestamp, files: fileCount };
}

// ─── CLI Subcommands ─────────────────────────────────────────────────────────

/**
 * Thin wrapper around saveRefreshSnapshot for CLI dispatch.
 * Writes .last-refresh.json with accurate timestamps and hashes.
 *
 * @param {string} planningDir - Path to .planning directory
 * @returns {{ saved: boolean, timestamp: string, files: number } | { disabled: true, message: string }}
 */
function intelSnapshot(planningDir) {
  if (!isIntelEnabled(planningDir)) return disabledResponse();
  return saveRefreshSnapshot(planningDir);
}

/**
 * Validate all intel files for correctness and freshness.
 *
 * @param {string} planningDir - Path to .planning directory
 * @returns {{ valid: boolean, errors: string[], warnings: string[] } | { disabled: true, message: string }}
 */
function intelValidate(planningDir) {
  if (!isIntelEnabled(planningDir)) return disabledResponse();

  const errors = [];
  const warnings = [];
  const STALE_MS = 24 * 60 * 60 * 1000;
  const now = Date.now();

  for (const [key, filename] of Object.entries(INTEL_FILES)) {
    const filePath = intelFilePath(planningDir, filename);

    // Check existence
    if (!fs.existsSync(filePath)) {
      errors.push(`${filename}: file does not exist`);
      continue;
    }

    // Skip non-JSON files (arch.md)
    if (filename.endsWith('.md')) continue;

    // Parse JSON
    let data;
    try {
      data = JSON.parse(fs.readFileSync(filePath, 'utf8'));
    } catch (e) {
      errors.push(`${filename}: invalid JSON — ${e.message}`);
      continue;
    }

    // Check _meta.updated_at recency
    if (data._meta && data._meta.updated_at) {
      const age = now - new Date(data._meta.updated_at).getTime();
      if (age > STALE_MS) {
        warnings.push(`${filename}: _meta.updated_at is ${Math.round(age / 3600000)} hours old (>24 hr)`);
      }
    } else {
      warnings.push(`${filename}: missing _meta.updated_at`);
    }

    // Validate entries are objects with expected fields
    if (data.entries && typeof data.entries === 'object') {
      // files.json: check exports are actual symbol names (no spaces)
      if (key === 'files') {
        for (const [entryPath, entry] of Object.entries(data.entries)) {
          if (entry.exports && Array.isArray(entry.exports)) {
            for (const exp of entry.exports) {
              if (typeof exp === 'string' && exp.includes(' ')) {
                warnings.push(`${filename}: "${entryPath}" export "${exp}" looks like a description (contains space)`);
              }
            }
          }
        }
        // Spot-check first 5 file paths exist on disk
        const entryPaths = Object.keys(data.entries).slice(0, 5);
        for (const ep of entryPaths) {
          if (!fs.existsSync(ep)) {
            warnings.push(`${filename}: entry path "${ep}" does not exist on disk`);
          }
        }
      }

      // deps.json: check entries have version, type, used_by
      if (key === 'deps') {
        for (const [depName, entry] of Object.entries(data.entries)) {
          const missing = [];
          if (!entry.version) missing.push('version');
          if (!entry.type) missing.push('type');
          if (!entry.used_by) missing.push('used_by');
          if (missing.length > 0) {
            warnings.push(`${filename}: "${depName}" missing fields: ${missing.join(', ')}`);
          }
        }
      }
    }
  }

  return { valid: errors.length === 0, errors, warnings };
}

/**
 * Patch _meta.updated_at in a JSON intel file to the current timestamp.
 * Reads the file, updates _meta.updated_at, increments version, writes back.
 *
 * NOTE: Does not gate on isIntelEnabled — operates on arbitrary file paths
 * for use by agents patching individual files outside the intel store.
 *
 * @param {string} filePath - Absolute or relative path to the JSON intel file
 * @returns {{ patched: boolean, file: string, timestamp: string } | { patched: false, error: string }}
 */
function intelPatchMeta(filePath) {
  try {
    if (!fs.existsSync(filePath)) {
      return { patched: false, error: `File not found: ${filePath}` };
    }

    const content = fs.readFileSync(filePath, 'utf8');
    let data;
    try {
      data = JSON.parse(content);
    } catch (e) {
      return { patched: false, error: `Invalid JSON: ${e.message}` };
    }

    if (!data._meta) {
      data._meta = {};
    }

    const timestamp = new Date().toISOString();
    data._meta.updated_at = timestamp;
    data._meta.version = (data._meta.version || 0) + 1;

    fs.writeFileSync(filePath, JSON.stringify(data, null, 2) + '\n', 'utf8');

    return { patched: true, file: filePath, timestamp };
  } catch (e) {
    return { patched: false, error: e.message };
  }
}

/**
 * Extract exports from a JS/CJS file by parsing module.exports or exports.X patterns.
 *
 * NOTE: Does not gate on isIntelEnabled — operates on arbitrary source files
 * for use by agents building intel data from project files.
 *
 * @param {string} filePath - Path to the JS/CJS file
 * @returns {{ file: string, exports: string[], method: string }}
 */
function intelExtractExports(filePath) {
  if (!fs.existsSync(filePath)) {
    return { file: filePath, exports: [], method: 'none' };
  }

  const content = fs.readFileSync(filePath, 'utf8');
  let exports = [];
  let method = 'none';

  // Try module.exports = { ... } pattern (handle multi-line)
  // Find the LAST module.exports assignment (the actual one, not references in code)
  const allMatches = [...content.matchAll(/module\.exports\s*=\s*\{/g)];
  if (allMatches.length > 0) {
    const lastMatch = allMatches[allMatches.length - 1];
    const startIdx = lastMatch.index + lastMatch[0].length;
    // Find matching closing brace by counting braces
    let depth = 1;
    let endIdx = startIdx;
    while (endIdx < content.length && depth > 0) {
      if (content[endIdx] === '{') depth++;
      else if (content[endIdx] === '}') depth--;
      if (depth > 0) endIdx++;
    }
    const block = content.substring(startIdx, endIdx);
    method = 'module.exports';
    // Extract key names from lines like "  keyName," or "  keyName: value,"
    const lines = block.split('\n');
    for (const line of lines) {
      const trimmed = line.trim();
      // Skip comments and empty lines
      if (!trimmed || trimmed.startsWith('//') || trimmed.startsWith('*')) continue;
      // Match identifier at start of line (before comma, colon, end of line)
      const keyMatch = trimmed.match(/^(\w+)\s*[,}:]/) || trimmed.match(/^(\w+)$/);
      if (keyMatch) {
        exports.push(keyMatch[1]);
      }
    }
  }

  // Also try individual exports.X = patterns (only at start of line, not inside strings/regex)
  const individualPattern = /^exports\.(\w+)\s*=/gm;
  let im;
  while ((im = individualPattern.exec(content)) !== null) {
    if (!exports.includes(im[1])) {
      exports.push(im[1]);
      if (method === 'none') method = 'exports.X';
    }
  }

  const hadCjs = exports.length > 0;

  // ESM patterns
  const esmExports = [];

  // export default function X / export default class X
  const defaultNamedPattern = /^export\s+default\s+(?:function|class)\s+(\w+)/gm;
  let em;
  while ((em = defaultNamedPattern.exec(content)) !== null) {
    if (!esmExports.includes(em[1])) esmExports.push(em[1]);
  }

  // export default (without named function/class)
  const defaultAnonPattern = /^export\s+default\s+(?!function\s|class\s)/gm;
  if (defaultAnonPattern.test(content) && esmExports.length === 0) {
    if (!esmExports.includes('default')) esmExports.push('default');
  }

  // export function X( / export async function X(
  const exportFnPattern = /^export\s+(?:async\s+)?function\s+(\w+)\s*\(/gm;
  while ((em = exportFnPattern.exec(content)) !== null) {
    if (!esmExports.includes(em[1])) esmExports.push(em[1]);
  }

  // export const X = / export let X = / export var X =
  const exportVarPattern = /^export\s+(?:const|let|var)\s+(\w+)\s*=/gm;
  while ((em = exportVarPattern.exec(content)) !== null) {
    if (!esmExports.includes(em[1])) esmExports.push(em[1]);
  }

  // export class X
  const exportClassPattern = /^export\s+class\s+(\w+)/gm;
  while ((em = exportClassPattern.exec(content)) !== null) {
    if (!esmExports.includes(em[1])) esmExports.push(em[1]);
  }

  // export { X, Y, Z } — strip "as alias" parts
  const exportBlockPattern = /^export\s*\{([^}]+)\}/gm;
  while ((em = exportBlockPattern.exec(content)) !== null) {
    const items = em[1].split(',');
    for (const item of items) {
      const trimmed = item.trim();
      if (!trimmed) continue;
      // "foo as bar" -> extract "foo"
      const name = trimmed.split(/\s+as\s+/)[0].trim();
      if (name && !esmExports.includes(name)) esmExports.push(name);
    }
  }

  // Merge ESM exports into the result
  for (const e of esmExports) {
    if (!exports.includes(e)) exports.push(e);
  }

  // Determine method
  const hadEsm = esmExports.length > 0;
  if (hadCjs && hadEsm) {
    method = 'mixed';
  } else if (hadEsm && !hadCjs) {
    method = 'esm';
  }

  return { file: filePath, exports, method };
}

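The per-assignment CJS pattern only fires at the start of a line, which is what keeps string literals mentioning `exports.` from polluting the result. A standalone sketch with a hypothetical source snippet:

```javascript
// Same anchored pattern as the individual-exports pass above.
const individualPattern = /^exports\.(\w+)\s*=/gm;

const src = [
  "exports.parse = function (s) { return s.trim(); };",
  "exports.VERSION = '1.0.0';",
  "const note = 'exports.ignored = this never starts a line';",
].join('\n');

const names = [...src.matchAll(individualPattern)].map(m => m[1]);
console.log(names); // [ 'parse', 'VERSION' ]
```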
// ─── Exports ─────────────────────────────────────────────────────────────────

module.exports = {
  // Public API
  intelQuery,
  intelUpdate,
  intelStatus,
  intelDiff,
  saveRefreshSnapshot,

  // CLI subcommands
  intelSnapshot,
  intelValidate,
  intelExtractExports,
  intelPatchMeta,

  // Utilities
  ensureIntelDir,
  isIntelEnabled,

  // Constants
  INTEL_FILES,
  INTEL_DIR
};

@@ -1187,6 +1187,7 @@ describe('E2E: Copilot full install verification', () => {
      'gsd-doc-writer.agent.md',
      'gsd-executor.agent.md',
      'gsd-integration-checker.agent.md',
      'gsd-intel-updater.agent.md',
      'gsd-nyquist-auditor.agent.md',
      'gsd-phase-researcher.agent.md',
      'gsd-plan-checker.agent.md',

603 tests/intel.test.cjs Normal file
@@ -0,0 +1,603 @@
/**
 * Tests for get-shit-done/bin/lib/intel.cjs
 *
 * Covers: query, status, diff, validate, snapshot, patch-meta,
 * extract-exports, enabled/disabled gating, and CLI routing via gsd-tools.
 */

'use strict';

const { describe, test, beforeEach, afterEach } = require('node:test');
const assert = require('node:assert/strict');
const fs = require('fs');
const path = require('path');
const { createTempProject, cleanup, runGsdTools } = require('./helpers.cjs');

const {
  intelQuery,
  intelStatus,
  intelDiff,
  intelValidate,
  intelSnapshot,
  intelPatchMeta,
  intelExtractExports,
  ensureIntelDir,
  isIntelEnabled,
  INTEL_FILES,
} = require('../get-shit-done/bin/lib/intel.cjs');

// ─── Helpers ────────────────────────────────────────────────────────────────

function enableIntel(planningDir) {
  const configPath = path.join(planningDir, 'config.json');
  const config = fs.existsSync(configPath)
    ? JSON.parse(fs.readFileSync(configPath, 'utf8'))
    : {};
  config.intel = { enabled: true };
  fs.writeFileSync(configPath, JSON.stringify(config, null, 2), 'utf8');
}

function writeIntelJson(planningDir, filename, data) {
  const intelPath = path.join(planningDir, 'intel');
  fs.mkdirSync(intelPath, { recursive: true });
  fs.writeFileSync(
    path.join(intelPath, filename),
    JSON.stringify(data, null, 2),
    'utf8'
  );
}

function writeIntelMd(planningDir, filename, content) {
  const intelPath = path.join(planningDir, 'intel');
  fs.mkdirSync(intelPath, { recursive: true });
  fs.writeFileSync(path.join(intelPath, filename), content, 'utf8');
}

// ─── Disabled gating ────────────────────────────────────────────────────────

describe('intel disabled gating', () => {
  let tmpDir;
  let planningDir;

  beforeEach(() => {
    tmpDir = createTempProject();
    planningDir = path.join(tmpDir, '.planning');
  });

  afterEach(() => {
    cleanup(tmpDir);
  });

  test('isIntelEnabled returns false when no config.json exists', () => {
    assert.strictEqual(isIntelEnabled(planningDir), false);
  });

  test('isIntelEnabled returns false when intel.enabled is not set', () => {
    fs.writeFileSync(
      path.join(planningDir, 'config.json'),
      JSON.stringify({ model_profile: 'balanced' }),
      'utf8'
    );
    assert.strictEqual(isIntelEnabled(planningDir), false);
  });

  test('isIntelEnabled returns true when intel.enabled is true', () => {
    enableIntel(planningDir);
    assert.strictEqual(isIntelEnabled(planningDir), true);
  });

  test('intelQuery returns disabled response when intel is off', () => {
    const result = intelQuery('test', planningDir);
    assert.strictEqual(result.disabled, true);
    assert.ok(result.message.includes('disabled'));
  });

  test('intelStatus returns disabled response when intel is off', () => {
    const result = intelStatus(planningDir);
    assert.strictEqual(result.disabled, true);
  });

  test('intelDiff returns disabled response when intel is off', () => {
    const result = intelDiff(planningDir);
    assert.strictEqual(result.disabled, true);
  });

  test('intelValidate returns disabled response when intel is off', () => {
    const result = intelValidate(planningDir);
    assert.strictEqual(result.disabled, true);
  });
});

// ─── ensureIntelDir ─────────────────────────────────────────────────────────

describe('ensureIntelDir', () => {
  let tmpDir;
  let planningDir;

  beforeEach(() => {
    tmpDir = createTempProject();
    planningDir = path.join(tmpDir, '.planning');
  });

  afterEach(() => {
    cleanup(tmpDir);
  });

  test('creates intel directory if it does not exist', () => {
    const intelPath = ensureIntelDir(planningDir);
    assert.ok(fs.existsSync(intelPath));
    assert.ok(intelPath.endsWith('intel'));
  });

  test('returns existing intel directory without error', () => {
    fs.mkdirSync(path.join(planningDir, 'intel'), { recursive: true });
    const intelPath = ensureIntelDir(planningDir);
    assert.ok(fs.existsSync(intelPath));
  });
});

// ─── intelQuery ─────────────────────────────────────────────────────────────

describe('intelQuery', () => {
  let tmpDir;
  let planningDir;

  beforeEach(() => {
    tmpDir = createTempProject();
    planningDir = path.join(tmpDir, '.planning');
    enableIntel(planningDir);
  });

  afterEach(() => {
    cleanup(tmpDir);
  });

  test('returns empty matches when no intel files exist', () => {
    const result = intelQuery('anything', planningDir);
    assert.strictEqual(result.total, 0);
    assert.deepStrictEqual(result.matches, []);
    assert.strictEqual(result.term, 'anything');
  });

  test('finds matches in JSON file keys', () => {
    writeIntelJson(planningDir, 'files.json', {
      _meta: { updated_at: new Date().toISOString() },
      entries: {
        'src/auth/controller.ts': { size: 1024, type: 'typescript' },
        'src/utils/logger.ts': { size: 512, type: 'typescript' },
      },
    });

    const result = intelQuery('auth', planningDir);
    assert.strictEqual(result.total, 1);
    assert.strictEqual(result.matches[0].source, 'files.json');
    assert.strictEqual(result.matches[0].entries[0].key, 'src/auth/controller.ts');
  });

  test('finds matches in JSON file values', () => {
    writeIntelJson(planningDir, 'deps.json', {
      _meta: { updated_at: new Date().toISOString() },
      entries: {
        express: { version: '4.18.0', type: 'runtime', used_by: ['src/server.ts'] },
      },
    });

    const result = intelQuery('express', planningDir);
    assert.strictEqual(result.total, 1);
    assert.strictEqual(result.matches[0].entries[0].key, 'express');
  });

  test('search is case-insensitive', () => {
    writeIntelJson(planningDir, 'files.json', {
      entries: {
        'src/AuthController.ts': { type: 'typescript' },
      },
    });

    const result = intelQuery('authcontroller', planningDir);
    assert.strictEqual(result.total, 1);
  });

  test('finds matches in arch.md text', () => {
    writeIntelMd(planningDir, 'arch.md', [
      '# Architecture',
      '',
      'The system uses a layered architecture with REST API endpoints.',
      'Authentication is handled by JWT tokens.',
    ].join('\n'));

    const result = intelQuery('JWT', planningDir);
    assert.strictEqual(result.total, 1);
    assert.strictEqual(result.matches[0].source, 'arch.md');
  });

  test('searches across multiple intel files', () => {
    writeIntelJson(planningDir, 'files.json', {
      entries: { 'src/auth.ts': { exports: ['authenticate'] } },
    });
    writeIntelJson(planningDir, 'apis.json', {
      entries: { '/api/auth': { method: 'POST', handler: 'authenticate' } },
    });

    const result = intelQuery('auth', planningDir);
    assert.strictEqual(result.total, 2);
    assert.strictEqual(result.matches.length, 2);
  });
});

// ─── intelStatus ────────────────────────────────────────────────────────────

describe('intelStatus', () => {
  let tmpDir;
  let planningDir;

  beforeEach(() => {
    tmpDir = createTempProject();
    planningDir = path.join(tmpDir, '.planning');
    enableIntel(planningDir);
  });

  afterEach(() => {
    cleanup(tmpDir);
  });

  test('reports missing files as stale', () => {
    const result = intelStatus(planningDir);
    assert.strictEqual(result.overall_stale, true);
    assert.strictEqual(result.files['files.json'].exists, false);
    assert.strictEqual(result.files['files.json'].stale, true);
  });

  test('reports fresh files as not stale', () => {
    writeIntelJson(planningDir, 'files.json', {
      _meta: { updated_at: new Date().toISOString() },
      entries: {},
    });

    const result = intelStatus(planningDir);
    assert.strictEqual(result.files['files.json'].exists, true);
    assert.strictEqual(result.files['files.json'].stale, false);
  });

  test('reports old files as stale', () => {
    const oldDate = new Date(Date.now() - 25 * 60 * 60 * 1000).toISOString();
    writeIntelJson(planningDir, 'files.json', {
      _meta: { updated_at: oldDate },
      entries: {},
    });

    const result = intelStatus(planningDir);
    assert.strictEqual(result.files['files.json'].stale, true);
    assert.strictEqual(result.overall_stale, true);
  });
});

// ─── intelDiff ──────────────────────────────────────────────────────────────

describe('intelDiff', () => {
  let tmpDir;
  let planningDir;

  beforeEach(() => {
    tmpDir = createTempProject();
    planningDir = path.join(tmpDir, '.planning');
    enableIntel(planningDir);
  });

  afterEach(() => {
    cleanup(tmpDir);
  });

  test('returns no_baseline when no snapshot exists', () => {
    const result = intelDiff(planningDir);
    assert.strictEqual(result.no_baseline, true);
  });

  test('detects added files since snapshot', () => {
    // Save an empty snapshot
    const intelPath = ensureIntelDir(planningDir);
    fs.writeFileSync(
      path.join(intelPath, '.last-refresh.json'),
      JSON.stringify({ hashes: {}, timestamp: new Date().toISOString(), version: 1 }),
      'utf8'
    );

    // Add a file after snapshot
    writeIntelJson(planningDir, 'files.json', { entries: {} });

    const result = intelDiff(planningDir);
    assert.ok(result.added.includes('files.json'));
  });

  test('detects changed files since snapshot', () => {
    // Write initial file
    writeIntelJson(planningDir, 'files.json', { entries: { a: 1 } });

    // Take snapshot
    intelSnapshot(planningDir);

    // Modify file
    writeIntelJson(planningDir, 'files.json', { entries: { a: 1, b: 2 } });

    const result = intelDiff(planningDir);
    assert.ok(result.changed.includes('files.json'));
  });
});

// ─── intelSnapshot ──────────────────────────────────────────────────────────

describe('intelSnapshot', () => {
  let tmpDir;
  let planningDir;

  beforeEach(() => {
    tmpDir = createTempProject();
    planningDir = path.join(tmpDir, '.planning');
    enableIntel(planningDir);
  });

  afterEach(() => {
    cleanup(tmpDir);
  });

  test('saves snapshot with file hashes', () => {
    writeIntelJson(planningDir, 'files.json', { entries: {} });

    const result = intelSnapshot(planningDir);
    assert.strictEqual(result.saved, true);
    assert.strictEqual(result.files, 1);
    assert.ok(result.timestamp);

    const snapshot = JSON.parse(
      fs.readFileSync(path.join(planningDir, 'intel', '.last-refresh.json'), 'utf8')
    );
    assert.ok(snapshot.hashes['files.json']);
  });
});

// ─── intelValidate ──────────────────────────────────────────────────────────

describe('intelValidate', () => {
  let tmpDir;
  let planningDir;

  beforeEach(() => {
    tmpDir = createTempProject();
    planningDir = path.join(tmpDir, '.planning');
    enableIntel(planningDir);
  });

  afterEach(() => {
    cleanup(tmpDir);
  });

  test('reports errors for missing files', () => {
    const result = intelValidate(planningDir);
    assert.strictEqual(result.valid, false);
    assert.ok(result.errors.length > 0);
    assert.ok(result.errors.some(e => e.includes('does not exist')));
  });

  test('reports warnings for missing _meta.updated_at', () => {
    writeIntelJson(planningDir, 'files.json', { entries: {} });
    writeIntelJson(planningDir, 'apis.json', { entries: {} });
    writeIntelJson(planningDir, 'deps.json', { entries: {} });
    writeIntelJson(planningDir, 'stack.json', { entries: {} });
    writeIntelMd(planningDir, 'arch.md', '# Architecture\n');

    const result = intelValidate(planningDir);
    assert.strictEqual(result.valid, true);
    assert.ok(result.warnings.some(w => w.includes('missing _meta.updated_at')));
  });

  test('reports invalid JSON as error', () => {
    const intelPath = path.join(planningDir, 'intel');
    fs.mkdirSync(intelPath, { recursive: true });
    fs.writeFileSync(path.join(intelPath, 'files.json'), 'not valid json', 'utf8');
    writeIntelJson(planningDir, 'apis.json', { entries: {} });
    writeIntelJson(planningDir, 'deps.json', { entries: {} });
    writeIntelJson(planningDir, 'stack.json', { entries: {} });
    writeIntelMd(planningDir, 'arch.md', '# Architecture\n');

    const result = intelValidate(planningDir);
    assert.strictEqual(result.valid, false);
    assert.ok(result.errors.some(e => e.includes('invalid JSON')));
  });

  test('passes validation with complete fresh intel', () => {
    const now = new Date().toISOString();
    writeIntelJson(planningDir, 'files.json', {
      _meta: { updated_at: now },
      entries: {},
    });
    writeIntelJson(planningDir, 'apis.json', {
      _meta: { updated_at: now },
      entries: {},
    });
    writeIntelJson(planningDir, 'deps.json', {
      _meta: { updated_at: now },
      entries: {},
    });
    writeIntelJson(planningDir, 'stack.json', {
      _meta: { updated_at: now },
      entries: {},
    });
    writeIntelMd(planningDir, 'arch.md', '# Architecture\n');

    const result = intelValidate(planningDir);
    assert.strictEqual(result.valid, true);
    assert.strictEqual(result.errors.length, 0);
  });
});

// ─── intelPatchMeta ─────────────────────────────────────────────────────────

describe('intelPatchMeta', () => {
  let tmpDir;
  let planningDir;

  beforeEach(() => {
    tmpDir = createTempProject();
    planningDir = path.join(tmpDir, '.planning');
  });

  afterEach(() => {
    cleanup(tmpDir);
  });

  test('patches _meta.updated_at and increments version', () => {
    writeIntelJson(planningDir, 'files.json', {
      _meta: { updated_at: '2025-01-01T00:00:00Z', version: 1 },
      entries: {},
    });

    const filePath = path.join(planningDir, 'intel', 'files.json');
    const result = intelPatchMeta(filePath);

    assert.strictEqual(result.patched, true);

    const data = JSON.parse(fs.readFileSync(filePath, 'utf8'));
    assert.strictEqual(data._meta.version, 2);
    assert.notStrictEqual(data._meta.updated_at, '2025-01-01T00:00:00Z');
  });

  test('creates _meta if missing', () => {
    writeIntelJson(planningDir, 'files.json', { entries: {} });

    const filePath = path.join(planningDir, 'intel', 'files.json');
    const result = intelPatchMeta(filePath);

    assert.strictEqual(result.patched, true);

    const data = JSON.parse(fs.readFileSync(filePath, 'utf8'));
    assert.ok(data._meta.updated_at);
    assert.strictEqual(data._meta.version, 1);
  });

  test('returns error for missing file', () => {
    const result = intelPatchMeta('/nonexistent/file.json');
    assert.strictEqual(result.patched, false);
    assert.ok(result.error.includes('not found'));
  });

  test('returns error for invalid JSON', () => {
    const filePath = path.join(tmpDir, 'bad.json');
    fs.writeFileSync(filePath, 'not json', 'utf8');

    const result = intelPatchMeta(filePath);
    assert.strictEqual(result.patched, false);
    assert.ok(result.error.includes('Invalid JSON'));
  });
});

// ─── intelExtractExports ────────────────────────────────────────────────────

describe('intelExtractExports', () => {
|
||||
let tmpDir;
|
||||
|
||||
beforeEach(() => {
|
||||
tmpDir = createTempProject();
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
cleanup(tmpDir);
|
||||
});
|
||||
|
||||
test('extracts CJS module.exports object keys', () => {
|
||||
const filePath = path.join(tmpDir, 'example.cjs');
|
||||
fs.writeFileSync(filePath, [
|
||||
"'use strict';",
|
||||
'function doStuff() {}',
|
||||
'function helper() {}',
|
||||
'module.exports = {',
|
||||
' doStuff,',
|
||||
' helper,',
|
||||
'};',
|
||||
].join('\n'), 'utf8');
|
||||
|
||||
const result = intelExtractExports(filePath);
|
||||
assert.strictEqual(result.method, 'module.exports');
|
||||
assert.ok(result.exports.includes('doStuff'));
|
||||
assert.ok(result.exports.includes('helper'));
|
||||
});
|
||||
|
||||
test('extracts ESM named exports', () => {
|
||||
const filePath = path.join(tmpDir, 'example.mjs');
|
||||
fs.writeFileSync(filePath, [
|
||||
'export function greet() {}',
|
||||
'export const VERSION = "1.0";',
|
||||
'export class Widget {}',
|
||||
].join('\n'), 'utf8');
|
||||
|
||||
const result = intelExtractExports(filePath);
|
||||
assert.strictEqual(result.method, 'esm');
|
||||
assert.ok(result.exports.includes('greet'));
|
||||
assert.ok(result.exports.includes('VERSION'));
|
||||
assert.ok(result.exports.includes('Widget'));
|
||||
});
|
||||
|
||||
test('extracts ESM export block', () => {
|
||||
const filePath = path.join(tmpDir, 'example.js');
|
||||
fs.writeFileSync(filePath, [
|
||||
'function foo() {}',
|
||||
'function bar() {}',
|
||||
'export { foo, bar };',
|
||||
].join('\n'), 'utf8');
|
||||
|
||||
const result = intelExtractExports(filePath);
|
||||
assert.ok(result.exports.includes('foo'));
|
||||
assert.ok(result.exports.includes('bar'));
|
||||
});
|
||||
|
||||
test('returns empty exports for nonexistent file', () => {
|
||||
const result = intelExtractExports('/nonexistent/file.js');
|
||||
assert.deepStrictEqual(result.exports, []);
|
||||
assert.strictEqual(result.method, 'none');
|
||||
});
|
||||
});
|
||||
|
||||
// ─── CLI routing via gsd-tools ──────────────────────────────────────────────
|
||||
|
||||
describe('gsd-tools intel subcommands', () => {
|
||||
let tmpDir;
|
||||
|
||||
beforeEach(() => {
|
||||
tmpDir = createTempProject();
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
cleanup(tmpDir);
|
||||
});
|
||||
|
||||
test('intel status returns disabled message when not enabled', () => {
|
||||
const result = runGsdTools(['intel', 'status'], tmpDir);
|
||||
assert.strictEqual(result.success, true);
|
||||
const output = JSON.parse(result.output);
|
||||
assert.strictEqual(output.disabled, true);
|
||||
});
|
||||
|
||||
test('intel query returns disabled message when not enabled', () => {
|
||||
const result = runGsdTools(['intel', 'query', 'test'], tmpDir);
|
||||
assert.strictEqual(result.success, true);
|
||||
const output = JSON.parse(result.output);
|
||||
assert.strictEqual(output.disabled, true);
|
||||
});
|
||||
|
||||
test('intel status returns file status when enabled', () => {
|
||||
enableIntel(path.join(tmpDir, '.planning'));
|
||||
const result = runGsdTools(['intel', 'status'], tmpDir);
|
||||
assert.strictEqual(result.success, true);
|
||||
const output = JSON.parse(result.output);
|
||||
assert.ok(output.files);
|
||||
assert.strictEqual(output.overall_stale, true);
|
||||
});
|
||||
|
||||
test('intel validate reports errors for missing files when enabled', () => {
|
||||
enableIntel(path.join(tmpDir, '.planning'));
|
||||
const result = runGsdTools(['intel', 'validate'], tmpDir);
|
||||
assert.strictEqual(result.success, true);
|
||||
const output = JSON.parse(result.output);
|
||||
assert.strictEqual(output.valid, false);
|
||||
assert.ok(output.errors.length > 0);
|
||||
});
|
||||
});
|
||||