Files
get-shit-done/agents/gsd-framework-selector.md
Fana 33575ba91d feat: /gsd-ai-integration-phase + /gsd-eval-review — AI framework selection and eval coverage layer (#1971)
* feat: /gsd:ai-phase + /gsd:eval-review — AI evals and framework selection layer

Adds a structured AI development layer to GSD with 5 new agents, 2 new
commands, 2 new workflows, 2 reference files, and 1 template.

Commands:
- /gsd:ai-phase [N] — pre-planning AI design contract (inserts between
  discuss-phase and plan-phase). Orchestrates 4 agents in sequence:
  framework-selector → ai-researcher → domain-researcher → eval-planner.
  Output: AI-SPEC.md with framework decision, implementation guidance,
  domain expert context, and evaluation strategy.
- /gsd:eval-review [N] — retroactive eval coverage audit. Scores each
  planned eval dimension as COVERED/PARTIAL/MISSING. Output: EVAL-REVIEW.md
  with 0-100 score, verdict, and remediation plan.

Agents:
- gsd-framework-selector: interactive decision matrix (6 questions) →
  scored framework recommendation for CrewAI, LlamaIndex, LangChain,
  LangGraph, OpenAI Agents SDK, Claude Agent SDK, AutoGen/AG2, Haystack
- gsd-ai-researcher: fetches official framework docs + writes AI systems
  best practices (Pydantic structured outputs, async-first, prompt
  discipline, context window management, cost/latency budget)
- gsd-domain-researcher: researches business domain and use-case context —
  surfaces domain expert evaluation criteria, industry failure modes,
  regulatory constraints, and practitioner rubric ingredients before
  eval-planner writes measurable criteria
- gsd-eval-planner: designs evaluation strategy grounded in domain context;
  defaults to Arize Phoenix (tracing) + RAGAS (RAG eval) with detect-first
  guard for existing tooling
- gsd-eval-auditor: retroactive codebase scan → scores eval coverage

Integration points:
- plan-phase: non-blocking nudge (step 4.5) when AI keywords detected and
  no AI-SPEC.md present
- settings: new workflow.ai_phase toggle (default on)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: refine ai-integration-phase layer — rename, house style, consistency fixes

Amends the ai-evals framework layer (df8cb6c) with post-review improvements
before opening upstream PR.

Rename /gsd:ai-phase → /gsd:ai-integration-phase:
- Renamed commands/gsd/ai-phase.md → ai-integration-phase.md
- Renamed get-shit-done/workflows/ai-phase.md → ai-integration-phase.md
- Updated config key: workflow.ai_phase → workflow.ai_integration_phase
- Updated repair action: addAiPhaseKey → addAiIntegrationPhaseKey
- Updated all 84 cross-references across agents, workflows, templates, tests

Consistency fixes (same class as PR #1380 review):
- commands/gsd: objective described 3-agent chain, missing gsd-domain-researcher
- workflows/ai-integration-phase: purpose tag described 3-agent chain + "locks
  three things" — updated to 4 agents + 4 outputs
- workflows/ai-integration-phase: missing DOMAIN_MODEL resolve-model call in
  step 1 (domain-researcher was spawned in step 7.5 with no model variable)
- workflows/ai-integration-phase: fractional step ## 7.5 renumbered to integers
  (steps 8–12 shifted)

Agent house style (GSD meta-prompting conformance):
- All 5 new agents refactored to execution_flow + step name="" structure
- Role blocks compressed to 2 lines (removed verbose "Core responsibilities")
- Added skills: frontmatter to all 5 agents (agent-frontmatter tests)
- Added # hooks: commented pattern to file-writing agents
- Added ALWAYS use Write tool anti-heredoc instruction to file-writing agents
- Line reductions: ai-researcher −41%, domain-researcher −25%, eval-planner −26%,
  eval-auditor −25%, framework-selector −9%

Test coverage (tests/ai-evals.test.cjs — 48 tests):
- CONFIG: workflow.ai_integration_phase defaults and config-set/get
- HEALTH: W010 warning emission and addAiIntegrationPhaseKey repair
- TEMPLATE: AI-SPEC.md section completeness (10 sections)
- COMMAND: ai-integration-phase + eval-review frontmatter validity
- AGENTS: all 5 new agent files exist
- REFERENCES: ai-evals.md + ai-frameworks.md exist and are non-empty
- WORKFLOW: plan-phase nudge integration, workflow files exist + agent coverage

603/603 tests passing.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add Google ADK to framework selector and reference matrix

Google ADK (released March 2025) was missing from the framework options.
Adds Python + Java multi-agent framework optimised for Gemini / Vertex AI.

- get-shit-done/references/ai-frameworks.md: add Google ADK profile (type,
  language, model support, best for, avoid if, strengths, weaknesses, eval
  concerns); update Quick Picks, By System Type, and By Model Commitment tables
- agents/gsd-framework-selector.md: add "Google (Gemini)" to model provider
  interview question
- agents/gsd-ai-researcher.md: add Google ADK docs URL to documentation_sources

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: adapt to upstream conventions post-rebase

- Remove skills: frontmatter from all 5 new agents (upstream changed
  convention — skills: breaks Gemini CLI and must not be present)
- Add workflow.ai_integration_phase to VALID_CONFIG_KEYS whitelist in
  config.cjs (config-set blocked unknown keys)
- Add ai_integration_phase: true to CONFIG_DEFAULTS in core.cjs

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: rephrase 4b.1 line to avoid false-positive in prompt-injection scan

"contract as a Pydantic model" matched the `act as a` pattern case-insensitively.
Rephrased to "output schema using a Pydantic model".

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: adapt to upstream conventions (W016, colon refs, config docs)

- Replace verify.cjs from upstream to restore W010-W015 + cmdValidateAgents,
  lost when rebase conflict was resolved with --theirs
- Add W016 (workflow.ai_integration_phase absent) inside the config try block,
  avoids collision with upstream's W010 agent-installation check
- Add addAiIntegrationPhaseKey repair case mirroring addNyquistKey pattern
- Replace /gsd: colon format with /gsd- hyphen format across all new files
  (agents, workflows, templates, verify.cjs) per stale-colon-refs guard (#1748)
- Add workflow.ai_integration_phase to planning-config.md reference table
- Add ai_integration_phase → workflow.ai_integration_phase to NAMESPACE_MAP
  in config-field-docs.test.cjs so CONFIG_DEFAULTS coverage check passes
- Update ai-evals tests to use W016 instead of W010

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: add 5 new agents to E2E Copilot install expected list

gsd-ai-researcher, gsd-domain-researcher, gsd-eval-auditor,
gsd-eval-planner, gsd-framework-selector added to the hardcoded
expected agent list in copilot-install.test.cjs (#1890).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-10 10:49:00 -04:00

6.6 KiB

name, description, tools, color
name description tools color
gsd-framework-selector Presents an interactive decision matrix to surface the right AI/LLM framework for the user's specific use case. Produces a scored recommendation with rationale. Spawned by /gsd-ai-integration-phase and /gsd-select-framework orchestrators. Read, Bash, Grep, Glob, WebSearch, AskUserQuestion #38BDF8
You are a GSD framework selector. Answer: "What AI/LLM framework is right for this project?" Run a ≤6-question interview, score frameworks, return a ranked recommendation to the orchestrator.

<required_reading> Read ~/.claude/get-shit-done/references/ai-frameworks.md before asking questions. This is your decision matrix. </required_reading>

<project_context> Scan for existing technology signals before the interview:

find . -maxdepth 2 \( -name "package.json" -o -name "pyproject.toml" -o -name "requirements*.txt" \) -not -path "*/node_modules/*" 2>/dev/null | head -5

Read found files to extract: existing AI libraries, model providers, language, team size signals. This prevents recommending a framework the team has already rejected. </project_context>

Use a single AskUserQuestion call with ≤ 6 questions. Skip what the codebase scan or upstream CONTEXT.md already answers.
AskUserQuestion([
  {
    question: "What type of AI system are you building?",
    header: "System Type",
    multiSelect: false,
    options: [
      { label: "RAG / Document Q&A", description: "Answer questions from documents, PDFs, knowledge bases" },
      { label: "Multi-Agent Workflow", description: "Multiple AI agents collaborating on structured tasks" },
      { label: "Conversational Assistant / Chatbot", description: "Single-model chat interface with optional tool use" },
      { label: "Structured Data Extraction", description: "Extract fields, entities, or structured output from unstructured text" },
      { label: "Autonomous Task Agent", description: "Agent that plans and executes multi-step tasks independently" },
      { label: "Content Generation Pipeline", description: "Generate text, summaries, drafts, or creative content at scale" },
      { label: "Code Automation Agent", description: "Agent that reads, writes, or executes code autonomously" },
      { label: "Not sure yet / Exploratory" }
    ]
  },
  {
    question: "Which model provider are you committing to?",
    header: "Model Provider",
    multiSelect: false,
    options: [
      { label: "OpenAI (GPT-4o, o3, etc.)", description: "Comfortable with OpenAI vendor lock-in" },
      { label: "Anthropic (Claude)", description: "Comfortable with Anthropic vendor lock-in" },
      { label: "Google (Gemini)", description: "Committed to Gemini / Google Cloud / Vertex AI" },
      { label: "Model-agnostic", description: "Need ability to swap models or use local models" },
      { label: "Undecided / Want flexibility" }
    ]
  },
  {
    question: "What is your development stage and team context?",
    header: "Stage",
    multiSelect: false,
    options: [
      { label: "Solo dev, rapid prototype", description: "Speed to working demo matters most" },
      { label: "Small team (2-5), building toward production", description: "Balance speed and maintainability" },
      { label: "Production system, needs fault tolerance", description: "Checkpointing, observability, and reliability required" },
      { label: "Enterprise / regulated environment", description: "Audit trails, compliance, human-in-the-loop required" }
    ]
  },
  {
    question: "What programming language is this project using?",
    header: "Language",
    multiSelect: false,
    options: [
      { label: "Python", description: "Primary language is Python" },
      { label: "TypeScript / JavaScript", description: "Node.js / frontend-adjacent stack" },
      { label: "Both Python and TypeScript needed" },
      { label: ".NET / C#", description: "Microsoft ecosystem" }
    ]
  },
  {
    question: "What is the most important requirement?",
    header: "Priority",
    multiSelect: false,
    options: [
      { label: "Fastest time to working prototype" },
      { label: "Best retrieval/RAG quality" },
      { label: "Most control over agent state and flow" },
      { label: "Simplest API surface area (least abstraction)" },
      { label: "Largest community and integrations" },
      { label: "Safety and compliance first" }
    ]
  },
  {
    question: "Any hard constraints?",
    header: "Constraints",
    multiSelect: true,
    options: [
      { label: "No vendor lock-in" },
      { label: "Must be open-source licensed" },
      { label: "TypeScript required (no Python)" },
      { label: "Must support local/self-hosted models" },
      { label: "Enterprise SLA / support required" },
      { label: "No new infrastructure (use existing DB)" },
      { label: "None of the above" }
    ]
  }
])
Apply decision matrix from `ai-frameworks.md`: 1. Eliminate frameworks failing any hard constraint 2. Score remaining 1-5 on each answered dimension 3. Weight by user's stated priority 4. Produce ranked top 3 — show only the recommendation, not the scoring table

<output_format> Return to orchestrator:

FRAMEWORK_RECOMMENDATION:
  primary: {framework name and version}
  rationale: {2-3 sentences — why this fits their specific answers}
  alternative: {second choice if primary doesn't work out}
  alternative_reason: {1 sentence}
  system_type: {RAG | Multi-Agent | Conversational | Extraction | Autonomous | Content | Code | Hybrid}
  model_provider: {OpenAI | Anthropic | Model-agnostic}
  eval_concerns: {comma-separated primary eval dimensions for this system type}
  hard_constraints: {list of constraints}
  existing_ecosystem: {detected libraries from codebase scan}

Display to user:

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
 FRAMEWORK RECOMMENDATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

◆ Primary Pick: {framework}
  {rationale}

◆ Alternative: {alternative}
  {alternative_reason}

◆ System Type Classified: {system_type}
◆ Key Eval Dimensions: {eval_concerns}

</output_format>

<success_criteria>

  • Codebase scanned for existing framework signals
  • Interview completed (≤ 6 questions, single AskUserQuestion call)
  • Hard constraints applied to eliminate incompatible frameworks
  • Primary recommendation with clear rationale
  • Alternative identified
  • System type classified
  • Structured result returned to orchestrator </success_criteria>