claude-mem/scripts/cleanup-duplicates.ts
Alex Newman 2fc4153bef refactor: decompose monolithic services into modular architecture (#534)
* docs: add monolith refactor report with system breakdown

Comprehensive analysis of codebase identifying:
- 14 files over 500 lines requiring refactoring
- 3 critical monoliths (SessionStore, SearchManager, worker-service)
- 80% code duplication across agent files
- 5-phase refactoring roadmap with domain-based architecture

* fix: prevent memory_session_id from equaling content_session_id

The bug: memory_session_id was initialized to contentSessionId as a
"placeholder for FK purposes". This caused the SDK resume logic to
inject memory agent messages into the USER's Claude Code transcript,
corrupting their conversation history.

Root cause:
- SessionStore.createSDKSession initialized memory_session_id = contentSessionId
- SDKAgent checked memorySessionId !== contentSessionId but this check
  only worked if the session was fetched fresh from DB

The fix:
- SessionStore: Initialize memory_session_id as NULL, not contentSessionId
- SDKAgent: Simple truthy check !!session.memorySessionId (NULL = fresh start)
- Database migration: Ran UPDATE to set memory_session_id = NULL for 1807
  existing sessions that had the bug
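As a minimal sketch of the before/after (the shapes below are simplified assumptions, not the actual SessionStore/SDKAgent code):

```typescript
// Simplified, hypothetical shapes -- the real SessionStore/SDKAgent differ.
interface SessionRow {
  contentSessionId: string;        // the user's Claude Code session
  memorySessionId: string | null;  // the memory agent's SDK session
}

// Before: memory_session_id was seeded with contentSessionId as an FK
// placeholder, so a fresh session could look resumable and the SDK resume
// logic injected memory-agent messages into the user's transcript.
function createSDKSessionBuggy(contentSessionId: string): SessionRow {
  return { contentSessionId, memorySessionId: contentSessionId };
}

// After: NULL means "no memory session yet", so a plain truthy check
// cleanly distinguishes a fresh start from a resume.
function createSDKSessionFixed(contentSessionId: string): SessionRow {
  return { contentSessionId, memorySessionId: null };
}

function shouldResume(session: SessionRow): boolean {
  return !!session.memorySessionId; // NULL => fresh start
}
```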

Also adds [ALIGNMENT] logging across the session lifecycle to help debug
session continuity issues:
- Hook entry: contentSessionId + promptNumber
- DB lookup: contentSessionId → memorySessionId mapping proof
- Resume decision: shows which memorySessionId will be used for resume
- Capture: logs when memorySessionId is captured from first SDK response

UI: Added "Alignment" quick filter button in LogsModal to show only
alignment logs for debugging session continuity.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor: improve error handling in worker-service.ts

- Fix GENERIC_CATCH anti-patterns by logging full error objects instead of just messages
- Add [ANTI-PATTERN IGNORED] markers for legitimate cases (cleanup, hot paths)
- Simplify error handling comments to be more concise
- Improve httpShutdown() error discrimination for ECONNREFUSED
- Reduce LARGE_TRY_BLOCK issues in initialization code

Part of anti-pattern cleanup plan (132 total issues)
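The shape of the fix, as a hedged sketch (the logger signature below is an assumption, not the project's actual API):

```typescript
// Hypothetical logger and task, for illustration only.
declare const logger: { error(msg: string, err?: unknown): void };
declare function doWork(): void;

try {
  doWork();
} catch (error) {
  // Anti-pattern (GENERIC_CATCH / partial logging): stack and cause are lost.
  logger.error(`doWork failed: ${(error as Error).message}`);

  // Fixed: pass the full error object so the stack trace survives.
  logger.error('doWork failed', error);
}
```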

* refactor: improve error logging in SearchManager.ts

- Pass full error objects to logger instead of just error.message
- Fixes PARTIAL_ERROR_LOGGING anti-patterns (10 instances)
- Better debugging visibility when Chroma queries fail

Part of anti-pattern cleanup (133 remaining)

* refactor: improve error logging across SessionStore and mcp-server

- SessionStore.ts: Fix error logging in column rename utility
- mcp-server.ts: Log full error objects instead of just error.message
- Improve error handling in Worker API calls and tool execution

Part of anti-pattern cleanup (133 remaining)

* Refactor hooks to streamline error handling and loading states

- Simplified useContextPreview by removing its try-catch and checking the response status directly.
- Refactored usePagination to drop its try-catch, improving readability while keeping error handling via response checks.
- Cleaned up useSSE by removing the unnecessary try-catch around JSON parsing.
- Streamlined useSettings saving by removing its try-catch and checking the result for success directly.
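A rough sketch of the pattern these hook changes follow, with hypothetical internals (the real hooks differ):

```typescript
// Before: try-catch swallows HTTP errors and network errors alike.
async function loadPreviewBefore(url: string): Promise<string | null> {
  try {
    const response = await fetch(url);
    return await response.text();
  } catch {
    return null;
  }
}

// After: check the response status directly; the happy path stays flat
// and failures surface through the status check.
async function loadPreviewAfter(url: string): Promise<string | null> {
  const response = await fetch(url);
  if (!response.ok) return null;
  return response.text();
}
```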

* refactor: add error handling back to SearchManager Chroma calls

- Wrap queryChroma calls in try-catch to prevent generator crashes
- Log Chroma errors as warnings and fall back gracefully
- Fixes generator failures when Chroma has issues
- Part of anti-pattern cleanup recovery

* feat: Add generator failure investigation report and observation duplication regression report

- Added a comprehensive investigation report detailing the root cause of the generator failures seen during anti-pattern cleanup, including impact, investigation process, and implemented fixes.
- Documented the critical observation-duplication regression caused by race conditions in the SDK agent, covering symptoms, root-cause analysis, and proposed fixes.

* fix: address PR #528 review comments - atomic cleanup and detector improvements

This commit addresses critical review feedback from PR #528:

## 1. Atomic Message Cleanup (Fix Race Condition)

**Problem**: SessionRoutes.ts generator error handler had race condition
- Queried messages then marked failed in loop
- If crash during loop → partial marking → inconsistent state

**Solution**:
- Added `markSessionMessagesFailed()` to PendingMessageStore.ts
- Single atomic UPDATE statement replaces loop
- Follows existing pattern from `resetProcessingToPending()`

**Files**:
- src/services/sqlite/PendingMessageStore.ts (new method)
- src/services/worker/http/routes/SessionRoutes.ts (use new method)
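A hedged sketch of the atomic version (`markSessionMessagesFailed` is named in this commit; the exact schema and status values below are assumptions, and `unixepoch()` assumes a reasonably recent SQLite):

```typescript
import { Database } from 'bun:sqlite';

// One atomic UPDATE replaces the query-then-loop approach: either every
// matching message is marked failed or none is, so a crash can no longer
// leave the table partially marked.
function markSessionMessagesFailed(db: Database, sessionId: string): number {
  const result = db
    .prepare(
      `UPDATE pending_messages
       SET status = 'failed', failed_at_epoch = unixepoch()
       WHERE session_id = ? AND status IN ('pending', 'processing')`
    )
    .run(sessionId);
  return result.changes; // how many messages were marked failed
}
```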

## 2. Anti-Pattern Detector Improvements

**Problem**: Detector didn't recognize logger.failure() method
- Lines 212 & 335 already included "failure"
- Lines 112-113 (PARTIAL_ERROR_LOGGING detection) did not

**Solution**: Updated regex patterns to include "failure" for consistency

**Files**:
- scripts/anti-pattern-test/detect-error-handling-antipatterns.ts
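Illustrative only (the detector's real patterns differ): the change extends the method-name alternation so `logger.failure(...)` is recognized wherever `error`/`warn` already were.

```typescript
// Before: failure() calls slipped past PARTIAL_ERROR_LOGGING detection.
const errorLogCallBefore = /logger\.(error|warn)\(/;

// After: include "failure" for consistency with the other detector patterns.
const errorLogCallAfter = /logger\.(error|warn|failure)\(/;
```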

## 3. Documentation

**PR Comment**: Added clarification on memory_session_id fix location
- Points to SessionStore.ts:1155
- Explains why NULL initialization prevents message injection bug

## Review Response

Addresses "Must Address Before Merge" items from review:
- Clarified memory_session_id bug fix location (via PR comment)
- Made generator error handler message cleanup atomic
- Deferred comprehensive test suite to follow-up PR (keeps PR focused)

## Testing

- Build passes with no errors
- Anti-pattern detector runs successfully
- Atomic cleanup follows proven pattern from existing methods

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix: FOREIGN KEY constraint and missing failed_at_epoch column

Two critical bugs fixed:

1. Missing failed_at_epoch column in pending_messages table
   - Added migration 20 to create the column
   - Fixes error when trying to mark messages as failed

2. FOREIGN KEY constraint failed when storing observations
   - All three agents (SDK, Gemini, OpenRouter) were passing
     session.contentSessionId instead of session.memorySessionId
   - storeObservationsAndMarkComplete expects memorySessionId
   - Added null check and clear error message
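A sketch of the guard, assuming simplified shapes (`storeObservationsAndMarkComplete` is named in this commit; the wrapper and field types are illustrative):

```typescript
interface AgentSession {
  contentSessionId: string;
  memorySessionId: string | null;
}

declare function storeObservationsAndMarkComplete(
  memorySessionId: string,
  observations: unknown[]
): void;

function persistObservations(session: AgentSession, observations: unknown[]): void {
  // The observations FK references the memory session, not the content session.
  if (!session.memorySessionId) {
    throw new Error(
      `Cannot store observations: memorySessionId not yet captured for ` +
      `content session ${session.contentSessionId}`
    );
  }
  storeObservationsAndMarkComplete(session.memorySessionId, observations);
}
```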

However, observations still not saving - see investigation report.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Refactor hook input parsing to improve error handling

- Added a nested try-catch block in new-hook.ts, save-hook.ts, and summary-hook.ts to handle JSON parsing errors more gracefully.
- Replaced direct error throwing with logging of the error details using logger.error.
- Ensured that the process exits cleanly after handling input in all three hooks.
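A minimal sketch of the nested-parse pattern (the logger shape is assumed; `Bun.stdin.text()` is standard Bun):

```typescript
// Hypothetical logger, for illustration.
declare const logger: { error(msg: string, err?: unknown): void };

async function readHookInput(): Promise<unknown> {
  const raw = await Bun.stdin.text(); // hook payload arrives on stdin
  try {
    return JSON.parse(raw);
  } catch (error) {
    // Log the parse failure instead of throwing, so the hook can still
    // exit cleanly rather than crashing the calling process.
    logger.error('Failed to parse hook input JSON', error);
    return null;
  }
}
```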

* docs: update monolith report post session-logging merge

- SessionStore grew to 2,011 lines (49 methods) - highest priority
- SearchManager reduced to 1,778 lines (improved)
- Agent files reduced by ~45 lines combined
- Added trend indicators and post-merge observations
- Core refactoring proposal remains valid

* refactor(sqlite): decompose SessionStore into modular architecture

Extract the 2011-line SessionStore.ts monolith into focused, single-responsibility
modules following grep-optimized progressive disclosure pattern:

New module structure:
- sessions/ - Session creation and retrieval (create.ts, get.ts, types.ts)
- observations/ - Observation storage and queries (store.ts, get.ts, recent.ts, files.ts, types.ts)
- summaries/ - Summary storage and queries (store.ts, get.ts, recent.ts, types.ts)
- prompts/ - User prompt management (store.ts, get.ts, types.ts)
- timeline/ - Cross-entity timeline queries (queries.ts)
- import/ - Bulk import operations (bulk.ts)
- migrations/ - Database migrations (runner.ts)

New coordinator files:
- Database.ts - ClaudeMemDatabase class with re-exports
- transactions.ts - Atomic cross-entity transactions
- Named re-export facades (Sessions.ts, Observations.ts, etc.)

Key design decisions:
- All functions take `db: Database` as first parameter (functional style)
- Named re-exports instead of index.ts for grep-friendliness
- SessionStore retained as backward-compatible wrapper
- Target file size: 50-150 lines (60% compliance)
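A sketch of the db-first functional style (module path from the list above; the query and record shape are assumptions):

```typescript
// sessions/get.ts (hypothetical contents)
import { Database } from 'bun:sqlite';

export interface SessionRecord {
  id: string;
  project: string;
  created_at_epoch: number;
}

// Every function takes the Database as its first parameter, so modules
// stay stateless, composable, and easy to find by grepping the name.
export function getSessionById(db: Database, id: string): SessionRecord | null {
  return db
    .prepare('SELECT id, project, created_at_epoch FROM sessions WHERE id = ?')
    .get(id) as SessionRecord | null;
}
```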

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(agents): extract shared logic into modular architecture

Consolidate duplicate code across SDKAgent, GeminiAgent, and OpenRouterAgent
into focused utility modules. Total reduction: 500 lines (29%).

New modules in src/services/worker/agents/:
- ResponseProcessor.ts: Atomic DB transactions, Chroma sync, SSE broadcast
- ObservationBroadcaster.ts: SSE event formatting and dispatch
- SessionCleanupHelper.ts: Session state cleanup and stuck message reset
- FallbackErrorHandler.ts: Provider error detection for fallback logic
- types.ts: Shared interfaces (WorkerRef, SSE payloads, StorageResult)

Bug fix: SDKAgent was incorrectly using obs.files instead of obs.files_read
and hardcoding files_modified to empty array.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(search): extract search strategies into modular architecture

Decompose SearchManager into focused strategy pattern with:
- SearchOrchestrator: Coordinates strategy selection and fallback
- ChromaSearchStrategy: Vector semantic search via ChromaDB
- SQLiteSearchStrategy: Filter-only queries for date/project/type
- HybridSearchStrategy: Metadata filtering + semantic ranking
- ResultFormatter: Markdown table formatting for results
- TimelineBuilder: Chronological timeline construction
- Filter modules: DateFilter, ProjectFilter, TypeFilter

SearchManager now delegates to new infrastructure while maintaining
full backward compatibility with existing public API.
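One plausible shape for the strategy contract and fallback loop, offered as a sketch (everything beyond the class names listed above is an assumption):

```typescript
interface SearchQuery {
  text?: string;
  project?: string;
  type?: string;
}

interface SearchResult {
  id: number;
  title: string;
  score: number;
}

interface SearchStrategy {
  /** Whether this strategy can serve the given query. */
  canHandle(query: SearchQuery): boolean;
  search(query: SearchQuery): Promise<SearchResult[]>;
}

// Orchestrator: try the first applicable strategy, fall back to the next
// on failure (e.g. Chroma unavailable -> SQLite filter-only search).
async function runSearch(
  strategies: SearchStrategy[],
  query: SearchQuery
): Promise<SearchResult[]> {
  for (const strategy of strategies) {
    if (!strategy.canHandle(query)) continue;
    try {
      return await strategy.search(query);
    } catch {
      // Fall through to the next strategy.
    }
  }
  return [];
}
```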

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(context): decompose context-generator into modular architecture

Extract 660-line monolith into focused components:
- ContextBuilder: Main orchestrator (~160 lines)
- ContextConfigLoader: Configuration loading
- TokenCalculator: Token budget calculations
- ObservationCompiler: Data retrieval and query building
- MarkdownFormatter/ColorFormatter: Output formatting
- Section renderers: Header, Timeline, Summary, Footer

Maintains full backward compatibility - context-generator.ts now
delegates to new ContextBuilder while preserving public API.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(worker): decompose worker-service into modular infrastructure

Split 2000+ line monolith into focused modules:

Infrastructure:
- ProcessManager: PID files, signal handlers, child process cleanup
- HealthMonitor: Port checks, health polling, version matching
- GracefulShutdown: Coordinated cleanup on exit

Server:
- Server: Express app setup, core routes, route registration
- Middleware: Re-exports from existing middleware
- ErrorHandler: Centralized error handling with AppError class

Integrations:
- CursorHooksInstaller: Full Cursor IDE integration (registry, hooks, MCP)

WorkerService now acts as thin coordinator wiring all components together.
Maintains full backward compatibility with existing public API.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Refactor session queue processing and database interactions

- Implement claim-and-delete pattern in SessionQueueProcessor to simplify message handling and eliminate duplicate processing.
- Update PendingMessageStore to support atomic claim-and-delete operations, removing the need for intermediate processing states.
- Introduce storeObservations method in SessionStore for simplified observation and summary storage without message tracking.
- Remove deprecated methods and clean up session state management in worker agents.
- Adjust response processing to accommodate new storage patterns, ensuring atomic transactions for observations and summaries.
- Remove unnecessary reset logic for stuck messages due to the new queue handling approach.
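A hedged sketch of claim-and-delete (table name follows PendingMessageStore above; the columns and helper name are assumptions; requires SQLite with `RETURNING`, which Bun bundles):

```typescript
import { Database } from 'bun:sqlite';

interface PendingMessage {
  id: number;
  session_id: string;
  payload: string;
}

// Atomically remove a batch of messages and receive the removed rows back.
// Because claim and delete are one statement, no other worker can claim
// the same messages and no intermediate 'processing' state is needed.
function claimAndDelete(db: Database, sessionId: string, limit: number): PendingMessage[] {
  return db
    .prepare(
      `DELETE FROM pending_messages
       WHERE id IN (
         SELECT id FROM pending_messages
         WHERE session_id = ?
         ORDER BY id
         LIMIT ?
       )
       RETURNING id, session_id, payload`
    )
    .all(sessionId, limit) as PendingMessage[];
}
```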

* Add duplicate observation cleanup script

Script to clean up duplicate observations created by the batching bug
where observations were stored once per message ID instead of once per
observation. Includes safety checks to always keep at least one copy.

Usage:
  bun scripts/cleanup-duplicates.ts           # Dry run
  bun scripts/cleanup-duplicates.ts --execute # Delete duplicates
  bun scripts/cleanup-duplicates.ts --aggressive # Ignore time window

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-03 21:22:27 -05:00

#!/usr/bin/env bun
/**
 * Cleanup script for duplicate observations created by the batching bug.
 *
 * The bug: When multiple messages were batched together, observations were stored
 * once per message ID instead of once per observation. For example, if 4 messages
 * were batched and produced 3 observations, those 3 observations were stored
 * 12 times (4×3) instead of 3 times.
 *
 * This script identifies duplicates by matching on:
 * - memory_session_id (same session)
 * - content (same title + subtitle + narrative)
 * - type (same observation type)
 * - created_at_epoch within the selected time window (default 60 seconds)
 *
 * Usage:
 *   bun scripts/cleanup-duplicates.ts               # Dry run (default)
 *   bun scripts/cleanup-duplicates.ts --execute     # Actually delete duplicates
 *   bun scripts/cleanup-duplicates.ts --strict      # 5-second window only
 *   bun scripts/cleanup-duplicates.ts --aggressive  # Ignore time window
 */
import { Database } from 'bun:sqlite';
import { homedir } from 'os';
import { join } from 'path';
const DB_PATH = join(homedir(), '.claude-mem', 'claude-mem.db');
// Time window modes for duplicate detection
const TIME_WINDOW_MODES = {
  strict: 5,      // 5 seconds - only exact duplicates from same batch
  normal: 60,     // 60 seconds - duplicates within same minute
  aggressive: 0,  // 0 = ignore time entirely, match on session+text+type only
};
interface DuplicateGroup {
  memory_session_id: string;
  title: string;
  type: string;
  epoch_bucket: number;
  count: number;
  ids: number[];
  keep_id: number;
  delete_ids: number[];
}

interface ObservationRow {
  id: number;
  memory_session_id: string;
  title: string | null;
  subtitle: string | null;
  narrative: string | null;
  type: string;
  created_at_epoch: number;
}
function main() {
  const dryRun = !process.argv.includes('--execute');
  const aggressive = process.argv.includes('--aggressive');
  const strict = process.argv.includes('--strict');

  // Determine time window
  let windowMode: keyof typeof TIME_WINDOW_MODES = 'normal';
  if (aggressive) windowMode = 'aggressive';
  if (strict) windowMode = 'strict';
  const batchWindowSeconds = TIME_WINDOW_MODES[windowMode];

  console.log('='.repeat(60));
  console.log('Claude-Mem Duplicate Observation Cleanup');
  console.log('='.repeat(60));
  console.log(`Mode: ${dryRun ? 'DRY RUN (use --execute to delete)' : 'EXECUTE'}`);
  console.log(`Database: ${DB_PATH}`);
  console.log(`Time window: ${windowMode} (${batchWindowSeconds === 0 ? 'ignore time' : batchWindowSeconds + ' seconds'})`);
  console.log('');
  console.log('Options:');
  console.log('  --execute     Actually delete duplicates (default: dry run)');
  console.log('  --strict      5-second window (exact batch duplicates only)');
  console.log('  --aggressive  Ignore time, match on session+text+type only');
  console.log('');

  const db = dryRun
    ? new Database(DB_PATH, { readonly: true })
    : new Database(DB_PATH);

  // Get total observation count
  const totalCount = db.prepare('SELECT COUNT(*) as count FROM observations').get() as { count: number };
  console.log(`Total observations in database: ${totalCount.count}`);
  // Find all observations and group by content fingerprint
  const observations = db.prepare(`
    SELECT
      id,
      memory_session_id,
      title,
      subtitle,
      narrative,
      type,
      created_at_epoch
    FROM observations
    ORDER BY memory_session_id, title, type, created_at_epoch
  `).all() as ObservationRow[];

  console.log(`Analyzing ${observations.length} observations for duplicates...`);
  console.log('');

  // Group observations by fingerprint (session + text + type + time bucket)
  const groups = new Map<string, ObservationRow[]>();
  for (const obs of observations) {
    // Skip observations without title (can't dedupe without content identifier)
    if (obs.title === null) continue;

    // Create content hash from title + subtitle + narrative
    const contentKey = `${obs.title}|${obs.subtitle || ''}|${obs.narrative || ''}`;

    // Create fingerprint based on time window mode
    let fingerprint: string;
    if (batchWindowSeconds === 0) {
      // Aggressive mode: ignore time entirely
      fingerprint = `${obs.memory_session_id}|${obs.type}|${contentKey}`;
    } else {
      // Normal/strict mode: include time bucket
      const epochBucket = Math.floor(obs.created_at_epoch / batchWindowSeconds);
      fingerprint = `${obs.memory_session_id}|${obs.type}|${epochBucket}|${contentKey}`;
    }

    if (!groups.has(fingerprint)) {
      groups.set(fingerprint, []);
    }
    groups.get(fingerprint)!.push(obs);
  }
  // Find groups with duplicates
  const duplicateGroups: DuplicateGroup[] = [];
  for (const rows of groups.values()) {
    if (rows.length > 1) {
      // Sort by id to keep the oldest (lowest id)
      rows.sort((a, b) => a.id - b.id);
      const keepId = rows[0].id;
      const deleteIds = rows.slice(1).map(r => r.id);

      // SAFETY: Never delete all copies - always keep at least one
      if (deleteIds.length >= rows.length) {
        throw new Error(`SAFETY VIOLATION: Would delete all ${rows.length} copies! Aborting.`);
      }
      if (!deleteIds.every(id => id !== keepId)) {
        throw new Error(`SAFETY VIOLATION: Delete list contains keep_id ${keepId}! Aborting.`);
      }

      const title = rows[0].title || '';
      duplicateGroups.push({
        memory_session_id: rows[0].memory_session_id,
        title: title.substring(0, 100) + (title.length > 100 ? '...' : ''),
        type: rows[0].type,
        epoch_bucket: batchWindowSeconds > 0 ? Math.floor(rows[0].created_at_epoch / batchWindowSeconds) : 0,
        count: rows.length,
        ids: rows.map(r => r.id),
        keep_id: keepId,
        delete_ids: deleteIds,
      });
    }
  }

  if (duplicateGroups.length === 0) {
    console.log('No duplicate observations found!');
    db.close();
    return;
  }
  // Calculate stats
  const totalDuplicates = duplicateGroups.reduce((sum, g) => sum + g.delete_ids.length, 0);
  const affectedSessions = new Set(duplicateGroups.map(g => g.memory_session_id)).size;

  console.log('DUPLICATE ANALYSIS:');
  console.log('-'.repeat(60));
  console.log(`Duplicate groups found: ${duplicateGroups.length}`);
  console.log(`Total duplicates to remove: ${totalDuplicates}`);
  console.log(`Affected sessions: ${affectedSessions}`);
  console.log(`Observations after cleanup: ${totalCount.count - totalDuplicates}`);
  console.log('');

  // Show sample of duplicates
  console.log('SAMPLE DUPLICATES (first 10 groups):');
  console.log('-'.repeat(60));
  for (const group of duplicateGroups.slice(0, 10)) {
    console.log(`Session: ${group.memory_session_id.substring(0, 20)}...`);
    console.log(`Type: ${group.type}`);
    console.log(`Count: ${group.count} copies (keeping id=${group.keep_id}, deleting ${group.delete_ids.length})`);
    console.log(`Title: "${group.title}"`);
    console.log('');
  }
  if (duplicateGroups.length > 10) {
    console.log(`... and ${duplicateGroups.length - 10} more groups`);
    console.log('');
  }
  // Execute deletion if not dry run
  if (!dryRun) {
    console.log('EXECUTING DELETION...');
    console.log('-'.repeat(60));

    const allDeleteIds = duplicateGroups.flatMap(g => g.delete_ids);

    // Delete in batches of 500 to avoid SQLite limits
    const BATCH_SIZE = 500;
    let deleted = 0;

    db.exec('BEGIN TRANSACTION');
    try {
      for (let i = 0; i < allDeleteIds.length; i += BATCH_SIZE) {
        const batch = allDeleteIds.slice(i, i + BATCH_SIZE);
        const placeholders = batch.map(() => '?').join(',');
        const stmt = db.prepare(`DELETE FROM observations WHERE id IN (${placeholders})`);
        const result = stmt.run(...batch);
        deleted += result.changes;
        console.log(`Deleted batch ${Math.floor(i / BATCH_SIZE) + 1}: ${result.changes} observations`);
      }
      db.exec('COMMIT');
      console.log('');
      console.log(`Successfully deleted ${deleted} duplicate observations!`);

      // Verify final count
      const finalCount = db.prepare('SELECT COUNT(*) as count FROM observations').get() as { count: number };
      console.log(`Final observation count: ${finalCount.count}`);
    } catch (error) {
      db.exec('ROLLBACK');
      console.error('Error during deletion, rolled back:', error);
      process.exit(1);
    }
  } else {
    console.log('DRY RUN COMPLETE');
    console.log('-'.repeat(60));
    console.log('No changes were made. Run with --execute to delete duplicates.');
  }

  db.close();
}

main();