mirror of
https://github.com/thedotmack/claude-mem
synced 2026-04-25 17:15:04 +02:00
* fix: resolve search, database, and docker bugs (#1913, #1916, #1956, #1957, #2048)
  - Fix concept/concepts param mismatch in SearchManager.normalizeParams (#1916)
  - Add FTS5 keyword fallback when ChromaDB is unavailable (#1913, #2048)
  - Add periodic WAL checkpoint and journal_size_limit to prevent unbounded WAL growth (#1956)
  - Add periodic clearFailed() to purge stale pending_messages (#1957)
  - Fix nounset-safe TTY_ARGS expansion in docker/claude-mem/run.sh

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: prevent silent data loss on non-XML responses, add queue info to /health (#1867, #1874)
  - ResponseProcessor: mark messages as failed (with retry) instead of confirming them when the LLM returns non-XML garbage (auth errors, rate limits) (#1874)
  - Health endpoint: include activeSessions count for queue liveness monitoring (#1867)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: cache isFts5Available() at construction time
  Addresses Greptile review: avoid a DDL probe (CREATE + DROP) on every text query. The result is now cached in _fts5Available at construction.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve worker stability bugs — pool deadlock, MCP loopback, restart guard (#1868, #1876, #2053)
  - Replace the flat consecutiveRestarts counter with a time-windowed RestartGuard: only counts restarts within a 60s window (cap=10), decaying after 5min of success. Prevents stranding pending messages on long-running sessions. (#2053)
  - Add idle-session eviction to pool slot allocation: when all slots are full, evict the idlest session (no pending work, oldest activity) to free a slot for new requests, preventing the 60s timeout deadlock. (#1868)
  - Fix the MCP loopback self-check: use process.execPath instead of a bare 'node', which fails on a non-interactive PATH. Fix crash misclassification by removing the false "Generator exited unexpectedly" error log on normal completion. (#1876)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve hooks reliability bugs — summarize exit code, session-init health wait (#1896, #1901, #1903, #1907)
  - Wrap the summarize hook's workerHttpRequest in try/catch to prevent exit code 2 (blocking error) on network failures or malformed responses. Session exit no longer blocks on worker errors. (#1901)
  - Add a health-check wait loop to the UserPromptSubmit session-init command in hooks.json. On Linux/WSL, where hook ordering fires UserPromptSubmit before SessionStart, session-init now waits up to 10s for worker health before proceeding. Also wrap the session-init HTTP call in try/catch. (#1907)
  - Close #1896 as already fixed: the mtime comparison at file-context.ts:255-267 bypasses truncation when the file is newer than the latest observation.
  - Close #1903 as no-repro: hooks.json correctly declares all hook events. The issue was a Claude Code 12.0.1/macOS platform event-dispatch bug.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: security hardening — bearer auth, path validation, rate limits, per-user port (#1932, #1933, #1934, #1935, #1936)
  - Add bearer token auth to all API endpoints: an auto-generated 32-byte token stored at ~/.claude-mem/worker-auth-token (mode 0600). All hook, MCP, viewer, and OpenCode requests include an Authorization header. Health/readiness endpoints are exempt for polling. (#1932, #1933)
  - Add path traversal protection: watch.context.path is validated against the project root and ~/.claude-mem/ before write. Rejects ../../../etc-style attacks. (#1934)
  - Reduce the JSON body limit from 50MB to 5MB. Add an in-memory rate limiter (300 req/min/IP) to prevent abuse. (#1935)
  - Derive the default worker port from the UID (37700 + uid%100) to prevent cross-user data leakage on multi-user macOS. Windows falls back to 37777. Shell hooks use the same formula via id -u. (#1936)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve search project filtering and import Chroma sync (#1911, #1912, #1914, #1918)
  - Fix per-type search endpoints to pass the project filter to Chroma queries and SQLite hydration. searchObservations/Sessions/UserPrompts now use an $or clause matching project + merged_into_project. (#1912)
  - Fix timeline/search methods to pass project to Chroma anchor queries. Prevents cross-project result leakage when the project param is omitted. (#1911)
  - Sync imported observations to ChromaDB after the FTS rebuild. The import endpoint now calls chromaSync.syncObservation() for each imported row, making them visible to MCP search(). (#1914)
  - Fix the session-init cwd fallback to match context.ts (process.cwd()). Prevents a project key mismatch that caused "no previous sessions" on fresh sessions. (#1918)
  - Fix the sync-marketplace restart to include the auth token and per-user port.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve all CodeRabbit and Greptile review comments on PR #2080
  - Fix run.sh comment mismatch (no-op flag vs empty array)
  - Gate session-init on health check success (prevent running when the worker is unreachable)
  - Fix date_desc ordering ignored in FTS session search
  - Age-scope the failed-message purge (1h retention) instead of clearing all
  - Anchor RestartGuard decay to real successes (null init, not Date.now())
  - Add recordSuccess() calls in ResponseProcessor and the completion path
  - Prevent caller headers from overriding the bearer auth token
  - Add lazy cleanup for the rate limiter map to prevent unbounded growth
  - Bound post-import Chroma sync with a concurrency limit of 8
  - Add a doc_type:'observation' filter to Chroma queries feeding observation hydration
  - Add FTS fallback to all specialized search handlers (observations, sessions, prompts, timeline)
  - Add a response.ok check and error handling in viewer saveSettings

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve CodeRabbit round-2 review comments
  - Use the failure timestamp (COALESCE) instead of created_at_epoch for the stale purge
  - Downgrade the _fts5Available flag when FTS table creation fails
  - Escape FTS5 MATCH input by quoting user queries as literal phrases
  - Escape LIKE metacharacters (%, _, \) in prompt text search
  - Add a response.ok check in the initial settings load (matches the save flow)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: resolve CodeRabbit round-3 review comments
  - Include failed_at_epoch in the COALESCE for the age-scoped purge
  - Re-throw FTS5 errors so callers can distinguish failure from no-results
  - Wrap all FTS fallback calls in SearchManager with try/catch

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor: remove bearer auth and platform_source from context inject
  Bearer token auth (#1932/#1933) added friction for all localhost API clients with no benefit — the worker already binds localhost-only (CORS restriction + host binding). Removed the auth-token module, the requireAuth middleware, and Authorization headers from all internal callers.

  platform_source filtering on the /api/context/inject path was never used by any caller and silently filtered out observations. The underlying platform_source column stays; only the query-time filter and its plumbing through ContextBuilder, ObservationCompiler, SearchRoutes, context.ts, and transcripts/processor.ts are removed.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: resolve CodeRabbit + Greptile + claude-review comments on PR #2081
  - middleware.ts: drop 'Authorization' from CORS allowedHeaders (Greptile)
  - middleware.ts: rate limiter falls back to req.socket.remoteAddress; add Retry-After on 429 (claude-review)
  - SearchRoutes.ts: drop the leftover platformSource read+pass in handleContextPreview (Greptile)
  - .docker-blowout-data/: stop tracking the empty SQLite placeholder and gitignore the dir (claude-review)

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: tighten rate limiter — correct boundary + drop dead cleanup branch
  - `entry.count >= RATE_LIMIT_MAX_REQUESTS` so the 300th request is the first rejected (was 301).
  - Removed the `requestCounts.size > 100` lazy-cleanup block — on a localhost-only server the map tops out at 1–2 entries, so the branch was dead code.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: rate limiter correctly allows exactly 300 req/min; doc localhost scope
  - Check `entry.count >= max` BEFORE incrementing so the cap matches the comment: 300 requests pass, the 301st gets 429.
  - Added a comment noting the limiter is effectively a global cap on a localhost-only worker (all callers share the 127.0.0.1/::1 bucket).

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: normalise IPv4-mapped IPv6 in rate limiter client IP
  Strip the `::ffff:` prefix so a localhost caller routed as `::ffff:127.0.0.1` shares a bucket with `127.0.0.1`.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: size-guarded prune of rate limiter map for non-localhost deploys
  Prune expired entries only when the map exceeds 1000 keys and we're already doing a window reset, so the cost is zero on the localhost hot path (1–2 keys) and the map can't grow unbounded if the worker is ever bound on a non-loopback interface.

  Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
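The rate-limiter behavior described in the last several commits (check the counter before incrementing so exactly 300 requests pass per window, and normalize IPv4-mapped IPv6 addresses so `::ffff:127.0.0.1` shares a bucket with `127.0.0.1`) can be sketched roughly as follows. Names here are hypothetical; the actual implementation lives in the worker's middleware.ts and is not reproduced in this file.

```javascript
// Minimal sketch of the fixed rate-limiter logic (hypothetical names).
const WINDOW_MS = 60_000;     // 1-minute window
const MAX_REQUESTS = 300;     // 300 requests pass; the 301st is rejected
const buckets = new Map();    // ip -> { count, windowStart }

// Strip the IPv4-mapped IPv6 prefix so ::ffff:127.0.0.1 and 127.0.0.1
// land in the same bucket.
function normalizeIp(ip) {
  return ip.startsWith('::ffff:') ? ip.slice(7) : ip;
}

function allowRequest(rawIp, now = Date.now()) {
  const ip = normalizeIp(rawIp);
  let entry = buckets.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    entry = { count: 0, windowStart: now }; // start a fresh window
    buckets.set(ip, entry);
  }
  // Check BEFORE incrementing: the cap then matches its documentation.
  if (entry.count >= MAX_REQUESTS) return false;
  entry.count += 1;
  return true;
}
```

On a localhost-only worker every caller normalizes to the same loopback address, so this is effectively a single global bucket, as the commit log notes.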
140 lines
4.7 KiB
JavaScript
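One detail worth noting before the script itself: the worker-restart trigger near the bottom derives the worker port per user (#1936) rather than hard-coding 37777. A minimal sketch of that formula, with a hypothetical helper name:

```javascript
// Per-user port formula from #1936: 37700 + (uid % 100).
// Platforms without process.getuid (Windows) fall back to uid 77,
// which yields the legacy port 37777. Helper name is illustrative only.
function deriveWorkerPort(uid = 77) {
  return 37700 + (uid % 100);
}
```

For example, a Linux user with uid 1000 gets port 37700, while the Windows fallback resolves to 37777.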
#!/usr/bin/env node

/**
 * Protected sync-marketplace script
 *
 * Prevents accidental rsync overwrite when the installed plugin is on the beta branch.
 * If on beta, the user should use the UI to update instead.
 */

const { execSync } = require('child_process');
const { existsSync, readFileSync } = require('fs');
const path = require('path');
const os = require('os');

const INSTALLED_PATH = path.join(os.homedir(), '.claude', 'plugins', 'marketplaces', 'thedotmack');
const CACHE_BASE_PATH = path.join(os.homedir(), '.claude', 'plugins', 'cache', 'thedotmack', 'claude-mem');

function getCurrentBranch() {
  try {
    if (!existsSync(path.join(INSTALLED_PATH, '.git'))) {
      return null;
    }
    return execSync('git rev-parse --abbrev-ref HEAD', {
      cwd: INSTALLED_PATH,
      encoding: 'utf-8',
      stdio: ['pipe', 'pipe', 'pipe']
    }).trim();
  } catch {
    return null;
  }
}

function getGitignoreExcludes(basePath) {
  const gitignorePath = path.join(basePath, '.gitignore');
  if (!existsSync(gitignorePath)) return '';

  const lines = readFileSync(gitignorePath, 'utf-8').split('\n');
  return lines
    .map(line => line.trim())
    .filter(line => line && !line.startsWith('#') && !line.startsWith('!'))
    .map(pattern => `--exclude=${JSON.stringify(pattern)}`)
    .join(' ');
}

const branch = getCurrentBranch();
const isForce = process.argv.includes('--force');

if (branch && branch !== 'main' && !isForce) {
  console.log('');
  console.log('\x1b[33m%s\x1b[0m', `WARNING: Installed plugin is on beta branch: ${branch}`);
  console.log('\x1b[33m%s\x1b[0m', 'Running rsync would overwrite beta code.');
  console.log('');
  console.log('Options:');
  console.log('  1. Use UI at http://localhost:37777 to update beta');
  console.log('  2. Switch to stable in UI first, then run sync');
  console.log('  3. Force rsync: npm run sync-marketplace:force');
  console.log('');
  process.exit(1);
}

// Get version from plugin.json
function getPluginVersion() {
  try {
    const pluginJsonPath = path.join(__dirname, '..', 'plugin', '.claude-plugin', 'plugin.json');
    const pluginJson = JSON.parse(readFileSync(pluginJsonPath, 'utf-8'));
    return pluginJson.version;
  } catch (error) {
    console.error('\x1b[31m%s\x1b[0m', 'Failed to read plugin version:', error.message);
    process.exit(1);
  }
}

// Normal rsync for main branch or fresh install
console.log('Syncing to marketplace...');
try {
  const rootDir = path.join(__dirname, '..');
  const gitignoreExcludes = getGitignoreExcludes(rootDir);

  execSync(
    `rsync -av --delete --exclude=.git --exclude=bun.lock --exclude=package-lock.json ${gitignoreExcludes} ./ ~/.claude/plugins/marketplaces/thedotmack/`,
    { stdio: 'inherit' }
  );

  console.log('Running bun install in marketplace...');
  execSync(
    'cd ~/.claude/plugins/marketplaces/thedotmack/ && bun install',
    { stdio: 'inherit' }
  );

  // Sync to cache folder with version
  const version = getPluginVersion();
  const CACHE_VERSION_PATH = path.join(CACHE_BASE_PATH, version);

  const pluginDir = path.join(rootDir, 'plugin');
  const pluginGitignoreExcludes = getGitignoreExcludes(pluginDir);

  console.log(`Syncing to cache folder (version ${version})...`);
  execSync(
    `rsync -av --delete --exclude=.git ${pluginGitignoreExcludes} plugin/ "${CACHE_VERSION_PATH}/"`,
    { stdio: 'inherit' }
  );

  // Install dependencies in cache directory so the worker can resolve them
  console.log(`Running bun install in cache folder (version ${version})...`);
  execSync('bun install', { cwd: CACHE_VERSION_PATH, stdio: 'inherit' });

  console.log('\x1b[32m%s\x1b[0m', 'Sync complete!');

  // Trigger worker restart after file sync
  console.log('\n🔄 Triggering worker restart...');
  const http = require('http');
  // Use per-user port derivation (#1936); 'os' is already required at the top of the file
  const uid = typeof process.getuid === 'function' ? process.getuid() : 77;
  const workerPort = parseInt(process.env.CLAUDE_MEM_WORKER_PORT || String(37700 + (uid % 100)), 10);
  const req = http.request({
    hostname: '127.0.0.1',
    port: workerPort,
    path: '/api/admin/restart',
    method: 'POST',
    timeout: 2000
  }, (res) => {
    if (res.statusCode === 200) {
      console.log('\x1b[32m%s\x1b[0m', '✓ Worker restart triggered');
    } else {
      console.log('\x1b[33m%s\x1b[0m', `ℹ Worker restart returned status ${res.statusCode}`);
    }
  });
  req.on('error', () => {
    console.log('\x1b[33m%s\x1b[0m', 'ℹ Worker not running, will start on next hook');
  });
  req.on('timeout', () => {
    req.destroy();
    console.log('\x1b[33m%s\x1b[0m', 'ℹ Worker restart timed out');
  });
  req.end();

} catch (error) {
  console.error('\x1b[31m%s\x1b[0m', 'Sync failed:', error.message);
  process.exit(1);
}
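The getGitignoreExcludes helper in the script above can be exercised in isolation. Here is a rough string-based equivalent with the same filtering rules (the function name is hypothetical; the real helper reads the file from disk):

```javascript
// Sketch of the .gitignore -> rsync --exclude conversion used by the script:
// blank lines, comments (#), and negated patterns (!) are skipped, and each
// remaining pattern is quoted via JSON.stringify before being passed to rsync.
function toRsyncExcludes(gitignoreText) {
  return gitignoreText
    .split('\n')
    .map((line) => line.trim())
    .filter((line) => line && !line.startsWith('#') && !line.startsWith('!'))
    .map((pattern) => `--exclude=${JSON.stringify(pattern)}`)
    .join(' ');
}
```

For instance, a file containing `node_modules`, a comment, a `!keep.txt` negation, and `dist` yields only the `node_modules` and `dist` exclude flags. Note that rsync has no concept of gitignore negation, which is why `!` patterns are dropped rather than translated.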