mirror of
https://github.com/thedotmack/claude-mem
synced 2026-04-25 17:15:04 +02:00
fix(mcp): MCP server crashes with Cannot find module 'bun:sqlite' under Node (#1645)
* fix(mcp): MCP server crashes with Cannot find module 'bun:sqlite' under Node

The MCP server bundle (mcp-server.cjs) ships with `#!/usr/bin/env node`, so it must run under Node, but commit 2b60dd29 added an import of `ensureWorkerStarted` from worker-service.ts. That import transitively pulls in DatabaseManager → bun:sqlite, blowing up at top-level require under Node. The bundle ballooned from ~358KB (v11.0.1) to ~1.96MB (v12.0.0) and crashed on every spawn, breaking the MCP server entirely for Codex/MCP-only clients and any flow that boots the MCP tool surface.

Fix:

1. Extract `ensureWorkerStarted` and the Windows spawn-cooldown helpers into a new lightweight module `src/services/worker-spawner.ts` that only imports from infrastructure/ProcessManager, infrastructure/HealthMonitor, shared/*, and utils/logger — no SQLite, no ChromaSync, no DatabaseManager.

2. The new helper takes the worker script path explicitly, so callers running under Node (mcp-server) can pass `worker-service.cjs` while callers already inside the worker (worker-service self-spawn) pass `__filename`. worker-service.ts keeps a thin wrapper for back-compat.

3. mcp-server.ts now imports from worker-spawner.js and resolves WORKER_SCRIPT_PATH via __dirname so the daemon can be auto-started for MCP-only clients without dragging in the entire worker bundle.

4. resolveWorkerRuntimePath() now searches for Bun on every platform (not just Windows). worker-service.cjs requires Bun at runtime, so when the spawner is invoked from a Node process the Unix branch can no longer fall through to process.execPath (= node).

5. spawnDaemon's Unix branch now calls resolveWorkerRuntimePath() instead of hardcoding process.execPath, fixing the same Node-spawning-Node bug for the actual subprocess launch on Linux/macOS.
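The cross-platform lookup in item 4 can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual ProcessManager source; the candidate paths and fallback order are assumptions based on the install locations mentioned later in this PR's review rounds.

```typescript
import { existsSync } from 'node:fs';
import { execSync } from 'node:child_process';

function resolveWorkerRuntimePath(): string | null {
  // Well-known install locations are checked first so the common case never
  // pays for an execSync (Homebrew, apt, Snap, the official Bun installer).
  const candidates = process.platform === 'win32'
    ? [`${process.env.USERPROFILE}\\.bun\\bin\\bun.exe`]
    : ['/opt/homebrew/bin/bun', '/usr/local/bin/bun', '/usr/bin/bun',
       '/snap/bin/bun', `${process.env.HOME}/.bun/bin/bun`];
  for (const candidate of candidates) {
    if (existsSync(candidate)) return candidate;
  }
  // Fall back to a PATH lookup. Crucially, never fall through to
  // process.execPath — under Node that would launch the Bun-only worker
  // bundle with the wrong runtime, which is exactly the original bug.
  try {
    const cmd = process.platform === 'win32' ? 'where bun' : 'which bun';
    return execSync(cmd, { encoding: 'utf-8' }).split('\n')[0].trim() || null;
  } catch {
    return null; // Bun not installed anywhere we can find
  }
}
```

Returning `null` instead of `process.execPath` is the key behavioral point: the caller must surface "Bun not found" rather than silently spawning the worker under Node.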
After:

- mcp-server.cjs is 384KB again with zero `bun:sqlite` references
- node mcp-server.cjs initializes and serves tools/list + tools/call (verified via JSON-RPC against the running worker)
- ProcessManager test suite updated for the new cross-platform Bun resolution behavior; full suite has the same pre-existing failures as main, no regressions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(mcp): address PR #1645 review feedback (round 1)

Per Claude Code Review on PR #1645:

1. mcp-server.ts: log a warning when both __dirname and import.meta.url resolution fail. The cwd() fallback is essentially dead code for the CJS bundle, but if it ever fires it gives the user a breadcrumb instead of a silently-wrong WORKER_SCRIPT_PATH.

2. mcp-server.ts: existsSync check on WORKER_SCRIPT_PATH at module load. Surfaces a clear "worker-service.cjs not found at expected path" log line for partial installs / dev environments instead of letting the failure surface as a generic spawnDaemon error later.

3. ProcessManager.ts: explanatory comment on the Windows `return 0` sentinel in spawnDaemon. Documents that PowerShell Start-Process doesn't return a PID and that callers MUST use `pid === undefined` for failure detection — never falsy checks like `if (!pid)`.

Items 4 (no direct unit tests for the worker-spawner Windows cooldown helpers) and 5 (process-manager.test.ts uses the real ~/.claude-mem path) are deferred — the reviewer flagged the latter as out of scope, and the former needs an injectable-I/O refactor that isn't appropriate for a hotfix PR.

Verified: build clean, mcp-server.cjs still 384KB / zero bun:sqlite, JSON-RPC tools/list still returns the 7-tool surface, ProcessManager test suite still 43/43.
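The return contract documented in item 3 (and later enforced by a dedicated test in this PR) can be sketched as follows — illustrative code, not the actual ProcessManager source:

```typescript
// On Windows, PowerShell Start-Process yields no PID, so spawnDaemon signals
// success with the sentinel 0; failure is `undefined`. Because 0 is falsy,
// `if (!pid)` would misclassify Windows success as failure.
type SpawnResult = number | undefined;

function isSpawnFailure(pid: SpawnResult): boolean {
  // Correct: compare against undefined explicitly — never a falsy check.
  return pid === undefined;
}

const windowsSuccessSentinel: SpawnResult = 0;  // Start-Process gave no PID
const unixSuccess: SpawnResult = 12345;         // real child PID
const spawnFailed: SpawnResult = undefined;     // spawn did not happen
```

The distinction matters precisely because `0` and `undefined` collapse to the same branch under `if (!pid)` but mean opposite things here.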
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(spawner): mkdir CLAUDE_MEM_DATA_DIR before writing Windows cooldown marker

Per CodeRabbit on PR #1645: on a fresh user profile, the data dir may not exist yet when markWorkerSpawnAttempted() runs. writeFileSync would throw ENOENT, the catch would swallow it, and the marker would never be created — defeating the popup-loop protection this helper exists to provide. mkdirSync(dir, { recursive: true }) is a no-op when the directory already exists, so it's safe to call on every spawn attempt.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(spawner): add APPROVED OVERRIDE annotations for cooldown marker catches

Per CodeRabbit on PR #1645: silent catch blocks at spawn-cooldown sites should carry the APPROVED OVERRIDE annotation that the rest of the codebase uses (see ProcessManager.ts:689, BaseRouteHandler.ts:82, ChromaSync.ts:288). Both catches are intentional best-effort:

- markWorkerSpawnAttempted: if mkdir/writeFileSync fails, the worker spawn itself will almost certainly fail too. Surfacing that downstream is far more useful than a noisy log line about a lock file.
- clearWorkerSpawnAttempted: a stale marker is harmless. Worst case is one suppressed retry within the cooldown window, then it self-heals.

No behaviour change. Resolves the second half of CodeRabbit's lines 38-65 comment on worker-spawner.ts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(mcp): address PR #1645 review feedback (round 2)

Round 2 of Claude Code Review feedback on PR #1645:

Build guardrail (most important — protects against the regression this PR fixes):

- scripts/build-hooks.js: post-build check that fails the build if mcp-server.cjs ever contains a `bun:sqlite` reference. This is the exact regression PR #1645 fixed; future contributors will get an immediate, actionable error if a transitive import re-introduces it. Verified the check trips when violated.
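The cooldown-marker fix described in the fix(spawner) commit above amounts to a one-line mkdir before the write. A minimal sketch, using a temp directory as a stand-in for CLAUDE_MEM_DATA_DIR (the marker filename follows the commit message; the surrounding logic is a reconstruction, not the actual source):

```typescript
import { existsSync, mkdirSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Stand-in for CLAUDE_MEM_DATA_DIR; on a fresh profile this may not exist yet.
const dataDir = join(tmpdir(), 'claude-mem-spawner-demo');
const markerPath = join(dataDir, '.worker-start-attempted');

function markWorkerSpawnAttempted(): void {
  // APPROVED OVERRIDE: best-effort — if the filesystem write fails, the
  // spawn itself will almost certainly fail too, and that downstream error
  // is the useful signal.
  try {
    // recursive: true is a no-op when the directory already exists, so this
    // is safe on every spawn attempt. Without it, writeFileSync would throw
    // ENOENT on a fresh profile and the cooldown marker would never appear.
    mkdirSync(dataDir, { recursive: true });
    writeFileSync(markerPath, String(Date.now()));
  } catch {
    /* intentionally swallowed — see APPROVED OVERRIDE above */
  }
}
```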
Code clarity:

- src/servers/mcp-server.ts: drop the dead `_originalLog` capture — it was never restored. Less code is fewer bugs.
- src/servers/mcp-server.ts: elevate the `cwd()` fallback log from WARN to ERROR. Per reviewer: a wrong WORKER_SCRIPT_PATH means worker auto-start silently fails, so the breadcrumb should be loud and searchable.
- src/services/worker-service.ts: extended doc comment on the `ensureWorkerStartedShared(port, __filename)` wrapper explaining why `__filename` is the correct script path here (CJS bundle = compiled worker-service.cjs) and why mcp-server.ts can't use the same trick.
- src/services/infrastructure/ProcessManager.ts: inline comment on the `env.BUN === 'bun'` bare-command guard explaining why it's reachable even though `isBunExecutablePath('bun')` is true (pathExists returns false for relative names, so the second branch is what fires).

Coverage:

- src/services/infrastructure/ProcessManager.ts: add `/usr/bin/bun` to the Linux candidate paths so apt-installed Bun on Debian/Ubuntu is found without falling through to the PATH lookup.

Out-of-scope items (deferred with rationale in PR replies):

- Unit tests for ensureWorkerStarted / Windows cooldown helpers — needs an injectable-I/O refactor unsuitable for a hotfix.
- Sentinel object for the Windows spawnDaemon `0` — broader API change.
- Windows Scoop install path — follow-up for a future PR.
- runOneTimeChromaMigration placement, aggressiveStartupCleanup, console.log redirect timing, platform timeout multiplier — all pre-existing and unrelated to this regression.

Verified: build clean, guardrail trips on simulated violation, mcp-server.cjs still has 0 bun:sqlite refs, ProcessManager tests 43/43.
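The round-2 build guardrail began as a plain substring check and was refined in later rounds (3 and 5) into a regex that flags only real require() calls for any bun:* module. A standalone sketch of the refined check, using the regex described in those rounds (the `checkBundle` wrapper is illustrative):

```typescript
// Trips on actual imports of any Bun-only module, but ignores bare
// "bun:sqlite" mentions in error messages and comments — the false positive
// the round-3 refinement fixed. Handles both quote styles, optional
// whitespace, and minified output.
const bunRequireRegex = /require\(\s*["']bun:[a-z][a-z0-9_-]*["']\s*\)/;

function checkBundle(bundleContent: string): string | null {
  const match = bundleContent.match(bunRequireRegex);
  return match ? match[0] : null; // non-null means the build should fail
}
```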
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(mcp): address PR #1645 review feedback (round 3)

Round 3 of Claude Code Review feedback on PR #1645:

ProcessManager.ts: improve the actionability of "Bun not found" errors. Both Windows and Unix branches of spawnDaemon previously logged a vague "Failed to locate Bun runtime" message when resolveWorkerRuntimePath() returned null. Replaced with an actionable message that names the install URL and explains *why* Bun is required (the worker uses bun:sqlite). The existing null-guard at the call sites already prevents passing null to child_process.spawn — only the error text changed.

scripts/build-hooks.js: refine the bun:sqlite guardrail to match actual require() calls only. The previous coarse `includes('bun:sqlite')` check tripped on its own improved error message, which legitimately mentions "bun:sqlite" by name. Switched to a regex that matches `require("bun:sqlite")` / `require('bun:sqlite')` (with optional whitespace, handling both quote styles and minified output) so error messages and inline comments can reference the module name without false positives. Verified the regex still trips on real violations (both spaced and minified forms) and correctly ignores string-literal mentions.

Other round-3 items (verified, not changed):

- TOOL_ENDPOINT_MAP: the reviewer flagged it as dead code, but it IS used at lines 250 and 263 by the search and timeline tool handlers. False positive — kept as-is.
- `if (!pid)` callsites: grepped src/, zero offenders. The Windows `0` PID sentinel contract is safe; only the inline documentation comment in ProcessManager.ts mentions the anti-pattern.
- callWorkerAPIPost double-wrapping: pre-existing intentional behavior (only used by /api/observations/batch, which returns raw data, not the MCP {content:[...]} shape). Unrelated to this regression.
- Snap path / startParentHeartbeat / main().catch / test for non-existent workerScriptPath / etc. — pre-existing or out of scope for this hotfix, deferred per established disposition.

Verified: build clean, guardrail still trips on real violations, mcp-server.cjs has 0 require("bun:sqlite") calls, JSON-RPC tools/list returns the 7-tool surface, ProcessManager tests 43/43.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(spawnDaemon): contract test for Windows 0 PID success sentinel

Per CodeRabbit nitpick on PR #1645 commit 7a96b3b9: add a focused test that documents the spawnDaemon return contract, so any future contributor who introduces `if (!pid)` against a spawnDaemon return value (or its wrapper) sees a failing assertion explaining why the falsy check is incorrect.

The test deliberately exercises the JS-level semantics rather than mocking PowerShell — a true mocked Windows test would require refactoring spawnDaemon to take an injectable execSync, which is a larger change than this hotfix should carry. The contract assertions here catch the same regression class (treating Windows success as failure) without that refactor.

Verified: bun test tests/infrastructure/process-manager.test.ts now passes 44/44 (was 43/43).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(mcp): address PR #1645 review feedback (round 4)

Round 4 of Claude Code Review feedback on PR #1645 (review of round-3 commit 193286f9):

tests/infrastructure/process-manager.test.ts: replace require('fs') with the already-imported statSync. The reviewer correctly flagged that the file uses ESM-style named imports everywhere else and the inline require() calls would break under strict ESM. Two callsites updated in the touchPidFile test.

src/services/infrastructure/ProcessManager.ts: hoist resolveWorkerRuntimePath() and the `Bun runtime not found` error handling out of both branches in spawnDaemon.
Both Windows and Unix branches need the same Bun lookup, and resolving once before the OS branch split avoids a duplicate execSync('which bun') / `where bun` in the no-well-known-path fallback. The error message is also DRY now — a single source of truth instead of two near-identical strings.

CodeRabbit confirmed in its previous reply that "All actionable items across all four review rounds are fully resolved" — these two minor items from the claude-review of round 3 are the only remaining cleanup.

Verified: build clean, ProcessManager tests still 44/44.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(mcp): address PR #1645 review feedback (round 5)

Round 5 of Claude Code Review feedback on PR #1645:

src/services/worker-spawner.ts: drop `export` from internal helpers. `shouldSkipSpawnOnWindows`, `markWorkerSpawnAttempted`, and `clearWorkerSpawnAttempted` were exported even though they were private in worker-service.ts and nothing outside this module needs them. Removing the `export` keyword keeps the public surface to just `ensureWorkerStarted` and prevents future callers from bypassing the spawn lifecycle.

scripts/build-hooks.js: broaden the guardrail to all bun:* modules. Previously the regex only caught `require("bun:sqlite")`, but every module in the `bun:` namespace (bun:ffi, bun:test, etc.) is Bun-only and would crash mcp-server.cjs the same way under Node. Generalized the regex to `require("bun:[a-z][a-z0-9_-]*")` so a transitive import of any Bun-only module fails the build instead of shipping a broken bundle. Verified the new regex still trips on bun:sqlite, bun:ffi, and bun:test, and correctly ignores string-literal mentions in error messages.

src/servers/mcp-server.ts: attribute the root cause when dirname resolution fails. Previously, if `__dirname`/`import.meta.url` resolution failed and we fell back to `process.cwd()`, the user would see two warnings: an error about the dirname fallback AND a separate warning about the missing worker bundle.
The second warning hides the root cause — someone debugging would assume the install is broken when really it's a dirname-resolution failure. Track the failure with a flag and emit a single root-cause-attributing log line in the existence-check branch instead. The dirname fallback paths are still functionally unreachable in CJS deployment; this just makes the failure mode unmistakable if it ever does fire.

Out of scope (consistent with prior rounds):

- darwin/linux split for the non-Windows candidate paths (benign today)
- Integration test for a non-existent workerScriptPath (test coverage gap deferred since rounds 1-2)
- Deferring the existsSync check to the first ensureWorkerStarted call (the current module-init check is the loud signal we want)

Already addressed in earlier rounds:

- resolveWorkerRuntimePath() called twice in spawnDaemon → hoisted in round 4 (b2c114b4)
- _originalLog dead code → removed in round 2 (7a96b3b9)

Verified: build clean, broadened guardrail trips on bun:sqlite, bun:ffi, and bun:test (and ignores string literals), MCP server serves the 7-tool surface, ProcessManager tests still 44/44.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(mcp): address PR #1645 review feedback (round 6)

Round 6 of Claude Code Review feedback on PR #1645:

src/services/worker-spawner.ts: validate workerScriptPath at entry. Add an empty-string + existsSync guard at the top of ensureWorkerStarted. Without this, a partial install or an upstream path-resolution regression just surfaces as a low-signal child_process error from spawnDaemon. The explicit log line at the entry point makes that class of bug much easier to diagnose. The mcp-server.ts module-init existsSync check already covers this for the MCP-server caller, but defending at the spawner level reinforces the contract for any future caller.
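The round-6 entry guard can be sketched like so — a simplified reconstruction, not the actual spawner source. The real ensureWorkerStarted goes on to run health checks and spawnDaemon; the log wording and the port value are illustrative (as the PR notes later, the path validation short-circuits before any network I/O, so the port is irrelevant here):

```typescript
import { existsSync } from 'node:fs';

function ensureWorkerStarted(port: number, workerScriptPath: string): boolean {
  // Entry-point validation: fail fast with a diagnosable message instead of
  // letting a bad path surface as a low-signal child_process error later.
  if (!workerScriptPath || !existsSync(workerScriptPath)) {
    console.error(
      `[worker-spawner] worker script missing or invalid: "${workerScriptPath}"`
    );
    return false; // short-circuits before any health check or spawn I/O
  }
  // ... health-check fast paths and spawnDaemon would follow here ...
  return true;
}
```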
src/services/worker-spawner.ts: document the SettingsDefaultsManager dependency boundary in the module header. The spawner imports from SettingsDefaultsManager, ProcessManager, and HealthMonitor. None of those currently touch bun:sqlite, but if any of them ever does, the spawner's SQLite-free contract silently breaks — the build guardrail in build-hooks.js is the only thing that catches it. The header comment now flags this so future contributors audit transitive imports when adding helpers from the shared/infrastructure layers.

src/services/infrastructure/ProcessManager.ts: add /snap/bin/bun, the Ubuntu Snap install path, alongside the existing apt path (/usr/bin/bun) and the Homebrew/Linuxbrew paths. The PATH lookup catches it as a fallback, but listing it explicitly avoids paying for an execSync('which bun') in the common case.

src/servers/mcp-server.ts: elevate the missing-bundle log from warn to error. A missing worker-service.cjs means EVERY MCP tool call that needs the worker silently fails. That's a broken-install state, not a transient condition — match the severity of the dirname-fallback branch above (which is already ERROR).

Out of scope (consistent with prior rounds; the reviewer agrees these are appropriately deferred):

- Streaming bundle read in build-hooks.js (a nit at the current 384KB size)
- Unit tests for ensureWorkerStarted / cooldown helpers
- Integration test for a non-existent workerScriptPath

Verified: build clean, broadened guardrail still trips on bun:* imports and ignores string literals, MCP server serves the 7-tool surface, ProcessManager tests still 44/44.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(mcp): defer WORKER_SCRIPT_PATH check to first call (round 7)

Round 7 of Claude Code Review feedback on PR #1645:

src/servers/mcp-server.ts: extract the module-level existsSync check into checkWorkerScriptPath() and call it lazily from ensureWorkerConnection() instead of at module load.
The early-warning intent is preserved (the check still fires before any actual spawn attempt), but tests/tools that import this module without booting the MCP server no longer see noisy ERROR-level log lines for a worker bundle they never intended to start. The check is cheap and idempotent, so calling it on every auto-start attempt is fine. The two failure-mode branches (dirname-resolution failure vs. simple missing bundle) remain unchanged — the function body is identical to the previous module-level if-block, just hoisted into a function and called from ensureWorkerConnection().

False positive (no change needed):

- Reviewer flagged `mkdirSync` as a dead import in worker-spawner.ts, but it IS used at line 71 in markWorkerSpawnAttempted (the round-1 ENOENT fix CodeRabbit explicitly asked for).

Out of scope:

- Volta path (~/.volta/bin/bun) — the PATH fallback handles it; nit per reviewer
- worker-spawner.ts unit tests — needs injectable I/O, deferred consistently since round 1

Verified: build clean, tests 44/44, smoke test serves the 7-tool surface.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(mcp): address PR #1645 review feedback (round 8)

Round 8 of Claude Code Review feedback on PR #1645:

tests/services/worker-spawner.test.ts: NEW FILE — unit tests for the ensureWorkerStarted entry-point validation guards added in round 6. Covers the empty-string and non-existent-path cases without requiring the broader injectable-I/O refactor that the deeper spawn lifecycle tests would need. 2 new passing tests.

src/services/infrastructure/ProcessManager.ts: memoize resolveWorkerRuntimePath() for the no-options call site (which is what spawnDaemon uses). Caches both successful resolutions and the not-found result so repeated spawn attempts (crash loops, health thrashing) don't repeatedly hit statSync on the candidate paths. Tests that pass options bypass the cache entirely, so existing test cases remain deterministic.
Added resetWorkerRuntimePathCache(), exported for test isolation only.

src/servers/mcp-server.ts: rename checkWorkerScriptPath() → warnIfWorkerScriptMissing(). Per reviewer: the old name implied a boolean check, but the function returns void and has side effects. The new name is more accurate.

DEFENDED (no change made):

- Reviewer asked to elevate the process.cwd() fallback to a synchronous throw at module load. This conflicts with round-7 feedback, which asked to defer the existsSync check to first call to avoid noisy test logs. The current lazy approach is the right compromise: it fires before any actual spawn attempt, attributes the root cause, and doesn't pollute test imports. Throwing at module load would crash before stdio is wired up, which is much harder to debug than the lazy log line.
- Reviewer asked to grep for `if (!pid)` callsites — already verified in round 3, zero offenders in src/.

Out of scope:

- Volta path (~/.volta/bin/bun) — the PATH fallback handles it; reviewer marked it as a nit
- Deeper unit tests for the ensureWorkerStarted spawn lifecycle (PID file cleanup, health checks, etc.) — needs injectable I/O, deferred consistently since round 1

Verified: build clean, ProcessManager tests still 44/44, new worker-spawner tests 2/2, smoke test serves 7 tools.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(spawner): clear Windows cooldown marker on all healthy paths (round 9)

Round 9 of PR #1645 review feedback.

src/services/worker-spawner.ts: clear the stale Windows cooldown marker on every healthy-return path. Per CodeRabbit (genuine bug): the .worker-start-attempted marker was previously only cleared after a spawn initiated by ensureWorkerStarted itself succeeded. If a previous auto-start failed and the worker then became healthy via another session or a manual start, the early-return success branches (existing live PID, fast-path health check, port-in-use waitForHealth) would leave the stale marker behind.
A subsequent genuine outage inside the 2-minute cooldown window would then be incorrectly suppressed on Windows. Now clearWorkerSpawnAttempted() is called on all three healthy success paths in addition to the existing post-spawn path. The function is already a no-op on non-Windows, so the change is risk-free for Linux and macOS callers.

src/servers/mcp-server.ts: more actionable error when auto-start fails. Per claude-review: when ensureWorkerStarted returns false (or throws), the caller currently logs a generic "Worker auto-start failed" line. Updated both error sites to explicitly call out which MCP tools will fail (search/timeline/get_observations) and to point at earlier log lines for the specific cause. Helps users distinguish "worker is just not running" from "tools are broken".

DEFENDED (no change):

- Sentinel object for the Windows spawnDaemon 0 PID — broader API change, out of scope, deferred consistently since round 1
- Spawner lifecycle tests beyond input validation — needs injectable I/O, deferred consistently
- Concurrent cooldown-marker race on Windows — pre-existing, out of scope
- stripHardcodedDirname() regex fragility assertion — pre-existing, out of scope

Verified: build clean, ProcessManager tests 44/44, worker-spawner tests 2/2, smoke test serves the 7-tool surface.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(spawner): don't cache null Bun-not-found result (round 10)

Round 10 of PR #1645 review feedback.

src/services/infrastructure/ProcessManager.ts: only cache successful resolveWorkerRuntimePath() results. Genuine bug from claude-review: the round-8 memoization cached BOTH successful resolutions AND the not-found `null` result.
If Bun isn't on PATH at the moment the MCP server first tries to spawn the worker — e.g., on a fresh install where the user installs Bun in another terminal and retries — every subsequent ensureWorkerConnection call would return the cached `null` and fail with a misleading "Bun not found" error even though Bun is now available. The fix is the one-line change the reviewer suggested: only cache when `result !== null`. Crash loops still get the fast-path memoized success; recovery after a fresh Bun install still works.

src/servers/mcp-server.ts: rename warnIfWorkerScriptMissing → errorIfWorkerScriptMissing. Per claude-review: the function uses logger.error but the name says "warn" — a name/level mismatch. Renamed to match. The function still serves the same purpose (a defensive lazy check), just with an accurate name.

DEFENDED (no change):

- Discriminated union for the mcpServerDirResolutionFailed flag — the current approach works, the noise is minimal, and the alternative would add type complexity for a path that's functionally unreachable in CJS deployment
- macOS /usr/local/bin/bun "missing" — already in the Linux/macOS candidate list at line 137 (false positive from reviewer)
- Nix store path — out of scope, the PATH fallback handles it
- Long build-hooks.js error message — the verbosity is intentional; this message only fires on a real regression and the diagnostic value is worth the line wrap
- Spawner lifecycle test coverage gap — needs injectable I/O, deferred consistently

Verified: build clean, ProcessManager tests 44/44, worker-spawner tests 2/2, smoke test serves the 7-tool surface.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(mcp): bundle size budget guardrail (round 11)

Round 11 of PR #1645 review feedback.
scripts/build-hooks.js: secondary bundle-size budget guardrail. Per claude-review: the existing `require("bun:*")` regex catches the specific regression class we already know about, but if esbuild ever changes how it emits external module specifiers, the regex could silently miss the regression. A bundle-size budget catches the structural symptom (worker-service.ts dragged into the bundle blew the size from ~358KB to ~1.96MB) regardless of how the imports look.

Set the ceiling at 600KB. The current size is ~384KB; the broken v12.0.0 bundle was ~1920KB. That leaves plenty of headroom for legitimate growth without incentivizing bundle bloat or false positives. Both guardrails fire independently — one regex-based, one size-based — so a regression has to defeat both to ship.

tests/services/worker-spawner.test.ts: comment about port irrelevance. Per claude-review: the hardcoded port values in the validation-guard tests are arbitrary because the path validation short-circuits before any network I/O. Added a comment explaining this so future readers don't waste time wondering why specific ports were picked.

DEFENDED (no change):

- clearWorkerSpawnAttempted on the unhealthy-live-PID return path: the reviewer asked to clear the marker here too, but the current behavior is correct. The marker tracks "recently attempted a spawn" and exists to prevent rapid PowerShell-popup loops. If a wedged process is currently using the port, the spawn isn't actually happening on this code path (the helper returns false without reaching the spawn step). When the wedged process eventually dies and a subsequent call hits the spawn path, the marker correctly suppresses repeated retry attempts within the 2-minute cooldown. Clearing the marker on the unhealthy-return path would defeat exactly the popup-loop protection the marker exists to provide.
- execSync in lookupBinaryInPath blocks the event loop: pre-existing concern, not introduced by this PR. The reviewer notes it "fires once, result cached".
Not in scope for a hotfix.

- Tracking issue for the spawner lifecycle test gap: out of scope for this PR; the gap is documented in the test file's header comment with a back-reference to PR #1645.

Verified: build clean, both guardrails functional (the bundle is under the new size ceiling), ProcessManager tests 44/44, worker-spawner tests 2/2, smoke test serves the 7-tool surface.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(mcp): eliminate double error log when worker bundle is missing (round 12)

Round 12 of PR #1645 review feedback.

src/servers/mcp-server.ts: errorIfWorkerScriptMissing() now only logs when the dirname-fallback attribution path is needed. Previously a missing worker-service.cjs would produce two ERROR log lines on the same code path:

1. errorIfWorkerScriptMissing() in ensureWorkerConnection()
2. The existsSync guard inside ensureWorkerStarted()

The simple "missing bundle" case is fully covered by the spawner's own existsSync guard. The mcp-server.ts function now ONLY logs when mcpServerDirResolutionFailed is true — that's the mcp-server-specific root-cause attribution that the spawner cannot provide on its own. Net effect: the same single error log per bug class, cleaner triage.

DEFENDED (no change):

- mkdirSync error propagation in markWorkerSpawnAttempted: the reviewer worried that mkdirSync/writeFileSync exceptions could escape, but the entire body is already wrapped in try/catch with an APPROVED OVERRIDE annotation. False positive.
- clearWorkerSpawnAttempted on healthy paths: the reviewer asked a clarifying question, not a change request. The behavior is intentional — the cooldown marker exists to prevent rapid PowerShell-popup loops from a series of failed spawns; a healthy worker means the marker has served its purpose, and a future outage should NOT be suppressed. Will explain in a PR reply.
- __filename ESM concern in the worker-service.ts wrapper: already documented in round 4 with an extended comment about the CJS bundle context and why mcp-server.ts can't use the same trick.
- Spawn lifecycle integration tests: deferred consistently since round 1; the gap is documented in the worker-spawner.test.ts header.

Verified: build clean, ProcessManager tests 44/44, worker-spawner tests 2/2, smoke test serves the 7-tool surface.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test(spawner): add bare-command BUN env override coverage

Final round of PR #1645 review feedback: while preparing to merge, I noticed CodeRabbit's round-5 CHANGES_REQUESTED review on commit 3570d2f0 included an unaddressed nitpick — the env-driven bare-command branch in resolveWorkerRuntimePath() (returning a bare 'bun' unchanged when BUN or BUN_PATH is set that way) had no test coverage and could regress without any failing assertion. Added a focused test that exercises the env: { BUN: 'bun' } branch specifically. 47/47 tests pass (was 46/46).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
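The round-8 memoization and its round-10 correction (cache successful resolutions only, never the not-found null) can be sketched as follows. This is a hypothetical reconstruction: the resolver is stubbed out, and `probeCount` exists only to make the caching behavior observable.

```typescript
// Memoized resolver: repeated spawn attempts (crash loops, health thrashing)
// reuse a successful resolution, but a not-found result is NOT cached, so a
// user who installs Bun after the first failure can recover without a restart.
let cachedRuntimePath: string | null = null;
let probeCount = 0;                     // instrumentation for this sketch only
let probeResult: string | null = null;  // simulate "Bun not installed yet"

function probeForBun(): string | null {
  probeCount++;
  // Stand-in for the real candidate-path scan + PATH lookup.
  return probeResult;
}

function resolveWorkerRuntimePathMemoized(): string | null {
  if (cachedRuntimePath !== null) return cachedRuntimePath; // fast path
  const result = probeForBun();
  if (result !== null) cachedRuntimePath = result; // round-10 fix: cache hits only
  return result;
}
```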
File diff suppressed because one or more lines are too long
@@ -244,6 +244,42 @@ async function buildHooks() {
|
||||
const mcpServerStats = fs.statSync(`${hooksDir}/${MCP_SERVER.name}.cjs`);
|
||||
console.log(`✓ mcp-server built (${(mcpServerStats.size / 1024).toFixed(2)} KB)`);
|
||||
|
||||
// GUARDRAIL (#1645): The MCP server runs under Node, but the entire `bun:`
|
||||
// module namespace (bun:sqlite, bun:ffi, bun:test, etc.) is Bun-only. If
|
||||
// any transitive import in mcp-server.ts ever pulls one in, the bundle
|
||||
// will crash on first require under Node — which is exactly the regression
|
||||
// PR #1645 fixed for `bun:sqlite`. Fail the build instead of shipping a
// broken bundle so future contributors get an immediate signal.
//
// Only flag actual `require("bun:...")` / `require('bun:...')` calls, not
// the bare string — error messages and inline comments may legitimately
// mention `bun:sqlite` by name without re-introducing the import.
const mcpBundleContent = fs.readFileSync(`${hooksDir}/${MCP_SERVER.name}.cjs`, 'utf-8');
const bunRequireRegex = /require\(\s*["']bun:[a-z][a-z0-9_-]*["']\s*\)/;
const bunRequireMatch = mcpBundleContent.match(bunRequireRegex);
if (bunRequireMatch) {
  throw new Error(
    `mcp-server.cjs contains a Bun-only ${bunRequireMatch[0]} call. This means a transitive import in src/servers/mcp-server.ts pulled in code from worker-service.ts (or another module that touches DatabaseManager/ChromaSync). The MCP server runs under Node and cannot load bun:* modules. Audit recent imports in src/servers/mcp-server.ts and src/services/worker-spawner.ts — the spawner module is intentionally lightweight and MUST NOT import anything that touches SQLite or other Bun-only modules. See PR #1645 for context.`
  );
}

// SECONDARY GUARDRAIL (#1645 round 11): bundle size budget. The bun:sqlite
// regex above catches the specific regression class we already know about,
// but esbuild could in theory change how it emits external module specifiers
// and silently slip past the regex. A bundle-size budget catches the
// structural symptom (worker-service.ts dragged into the bundle blew the
// size from ~358KB to ~1.96MB) regardless of how the imports look.
//
// 600KB is a generous ceiling — current size is ~384KB, the broken v12.0.0
// bundle was ~1920KB, and there's plenty of headroom for legitimate growth
// before we'd want to revisit this number.
const MCP_SERVER_MAX_BYTES = 600 * 1024;
if (mcpServerStats.size > MCP_SERVER_MAX_BYTES) {
  throw new Error(
    `mcp-server.cjs is ${(mcpServerStats.size / 1024).toFixed(2)} KB, exceeding the ${(MCP_SERVER_MAX_BYTES / 1024).toFixed(0)} KB budget. This usually means a transitive import pulled worker-service.ts (or another heavy module) into the MCP bundle. The MCP server is supposed to be a thin HTTP wrapper — audit recent imports in src/servers/mcp-server.ts and src/services/worker-spawner.ts. See PR #1645 for context on why this guardrail exists.`
  );
}
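The comment above draws a subtle distinction: the guardrail must catch a genuine `require("bun:...")` call while ignoring strings that merely mention `bun:sqlite`. A standalone sketch of that behavior, using the same regex as the build script (the sample inputs are illustrative, not taken from the real bundle):

```typescript
// Same pattern as the build guardrail; only real require(...) calls match.
const bunRequireRegex = /require\(\s*["']bun:[a-z][a-z0-9_-]*["']\s*\)/;

// A genuine Bun-only require — the guardrail should flag this.
const realRequire = `const { Database } = require("bun:sqlite");`;

// A mention inside an error message — the guardrail should ignore this.
const mentionOnly = `throw new Error("cannot load bun:sqlite under Node");`;

console.log(bunRequireRegex.test(realRequire)); // true
console.log(bunRequireRegex.test(mentionOnly)); // false
```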

// Build context generator
console.log(`\n🔧 Building context generator...`);
await build({
@@ -16,7 +16,6 @@ import { logger } from '../utils/logger.js';
 // CRITICAL: Redirect console to stderr BEFORE other imports
 // MCP uses stdio transport where stdout is reserved for JSON-RPC protocol messages.
 // Any logs to stdout break the protocol (Claude Desktop parses "[2025..." as JSON array).
-const _originalLog = console['log'];
 console['log'] = (...args: any[]) => {
   logger.error('CONSOLE', 'Intercepted console output (MCP protocol protection)', undefined, { args });
 };
@@ -28,11 +27,69 @@ import {
   ListToolsRequestSchema,
 } from '@modelcontextprotocol/sdk/types.js';
 import { getWorkerPort, workerHttpRequest } from '../shared/worker-utils.js';
-import { ensureWorkerStarted } from '../services/worker-service.js';
+import { ensureWorkerStarted } from '../services/worker-spawner.js';
 import { searchCodebase, formatSearchResults } from '../services/smart-file-read/search.js';
 import { parseFile, formatFoldedView, unfoldSymbol } from '../services/smart-file-read/parser.js';
 import { readFile } from 'node:fs/promises';
-import { resolve } from 'node:path';
+import { existsSync } from 'node:fs';
+import { dirname, resolve } from 'node:path';
+import { fileURLToPath } from 'node:url';
+
+// Resolve the path to worker-service.cjs, which lives alongside mcp-server.cjs
+// in the plugin's scripts directory. We need an explicit path because the MCP
+// server runs under Node while the worker must run under Bun, so we can't rely
+// on `__filename` pointing to a self-spawnable script.
+//
+// In the deployed CJS bundle, `__dirname` is always defined — the import.meta
+// fallback only exists to keep the source future-proof against an eventual
+// ESM port. Both fallback branches should be functionally unreachable today.
+let mcpServerDirResolutionFailed = false;
+const mcpServerDir = (() => {
+  if (typeof __dirname !== 'undefined') return __dirname;
+  try {
+    return dirname(fileURLToPath(import.meta.url));
+  } catch {
+    // Last-ditch fallback: cwd is almost certainly wrong, but throwing here
+    // would crash the MCP server before it can serve a single request. Mark
+    // the failure so the existence check below can produce a single, loud,
+    // root-cause-attributing log line instead of a confusing "missing worker
+    // bundle" warning that hides the dirname resolution failure.
+    mcpServerDirResolutionFailed = true;
+    return process.cwd();
+  }
+})();
+const WORKER_SCRIPT_PATH = resolve(mcpServerDir, 'worker-service.cjs');
+
+/**
+ * Surface a clear, actionable error if the worker bundle isn't where we
+ * expect. Without this check, a missing or partial install only fails later
+ * inside spawnDaemon as a generic "failed to spawn" message.
+ *
+ * If dirname resolution itself failed (extremely unlikely in CJS), attribute
+ * the missing-bundle warning to the root cause so the user doesn't waste time
+ * looking for an install bug that doesn't exist.
+ *
+ * Called lazily from `ensureWorkerConnection` (not at module load) so that
+ * tests or tools that import this module without booting the MCP server
+ * don't see noisy ERROR-level log lines for a worker they never intended
+ * to start. The check is cheap and idempotent, so calling it on every
+ * auto-start attempt is fine.
+ */
+function errorIfWorkerScriptMissing(): void {
+  // Only log here when the dirname resolution itself failed — that's the
+  // mcp-server-specific root cause attribution that the spawner cannot
+  // provide. The plain "missing bundle" case is already covered by the
+  // existsSync guard inside ensureWorkerStarted, and logging from both
+  // sites would produce a confusing double-log on the same code path.
+  if (!mcpServerDirResolutionFailed) return;
+  if (existsSync(WORKER_SCRIPT_PATH)) return;
+
+  logger.error(
+    'SYSTEM',
+    'mcp-server: dirname resolution failed (both __dirname and import.meta.url are unavailable). Fell back to process.cwd() and the resolved WORKER_SCRIPT_PATH does not exist. This is the actual problem — the worker bundle is fine, but mcp-server cannot locate it. Worker auto-start will fail until the dirname-resolution path is fixed.',
+    { workerScriptPath: WORKER_SCRIPT_PATH, mcpServerDir }
+  );
+}
 
 /**
  * Map tool names to Worker HTTP endpoints
@@ -156,11 +213,29 @@ async function ensureWorkerConnection(): Promise<boolean> {
 
   logger.warn('SYSTEM', 'Worker not available, attempting auto-start for MCP client');
 
+  // Validate the worker bundle path lazily here (rather than at module load)
+  // so that tests/tools that import this module without booting the MCP
+  // server don't see noisy ERROR-level log lines for a worker they never
+  // intended to start.
+  errorIfWorkerScriptMissing();
+
   try {
     const port = getWorkerPort();
-    return await ensureWorkerStarted(port);
+    const started = await ensureWorkerStarted(port, WORKER_SCRIPT_PATH);
+    if (!started) {
+      logger.error(
+        'SYSTEM',
+        'Worker auto-start returned false — MCP tools that require the worker (search, timeline, get_observations) will fail until the worker is running. Check earlier log lines for the specific failure reason (Bun not found, missing worker bundle, port conflict, etc.).'
+      );
+    }
+    return started;
   } catch (error) {
-    logger.error('SYSTEM', 'Worker auto-start failed', undefined, error as Error);
+    logger.error(
+      'SYSTEM',
+      'Worker auto-start threw — MCP tools that require the worker (search, timeline, get_observations) will fail until the worker is running.',
+      undefined,
+      error as Error
+    );
     return false;
   }
 }
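The `mcpServerDir` IIFE in the hunk above encodes a three-step resolution order: `__dirname` (CJS), then `import.meta.url` (ESM), then `process.cwd()` with a failure flag. A minimal, testable sketch of that order — the `maybeDirname`/`maybeUrl` parameters are stand-ins for the real module-scope values, not part of the actual source:

```typescript
import { dirname } from 'node:path';
import { fileURLToPath } from 'node:url';

// Prefer __dirname (CJS), fall back to import.meta.url (ESM), and flag the
// cwd() fallback so a later check can attribute a "missing worker bundle"
// failure to the real root cause.
function resolveScriptDir(
  maybeDirname: string | undefined,
  maybeUrl: string | undefined
): { dir: string; resolutionFailed: boolean } {
  if (typeof maybeDirname === 'string') return { dir: maybeDirname, resolutionFailed: false };
  try {
    if (maybeUrl) return { dir: dirname(fileURLToPath(maybeUrl)), resolutionFailed: false };
  } catch {
    // malformed URL — fall through to the cwd() fallback below
  }
  return { dir: process.cwd(), resolutionFailed: true };
}
```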
@@ -71,21 +71,62 @@ function lookupBinaryInPath(binaryName: string, platform: NodeJS.Platform): stri
   }
 }
 
+// Memoize the resolved runtime path for the no-options call site (which is
+// what spawnDaemon uses). Caches successful resolutions so repeated spawn
+// attempts (crash loops, health thrashing) don't repeatedly hit `statSync`
+// on the candidate paths.
+//
+// IMPORTANT: only success is cached. A `null` result (Bun not found) is
+// never cached so that a long-running MCP server can recover if the user
+// installs Bun in another terminal between the first failed lookup and a
+// subsequent retry. Caching `null` would permanently break the process
+// until restart. Per PR #1645 round-10 review.
+//
+// `undefined` means "not yet resolved"; tests that pass options bypass the
+// cache entirely.
+let cachedWorkerRuntimePath: string | undefined = undefined;
+
+/**
+ * Reset the memoized runtime path. Exported for test isolation only —
+ * production code never needs to call this.
+ */
+export function resetWorkerRuntimePathCache(): void {
+  cachedWorkerRuntimePath = undefined;
+}
+
 /**
  * Resolve the runtime executable for spawning the worker daemon.
  *
- * Windows must prefer Bun because worker-service.cjs imports bun:sqlite,
- * which is unavailable in Node.js.
+ * worker-service.cjs imports `bun:sqlite`, so it MUST run under Bun on every
+ * platform — not just Windows. When the caller is already running under Bun
+ * (e.g. the worker self-spawning from a hook), we reuse process.execPath to
+ * avoid an extra PATH lookup. Otherwise (notably when the MCP server running
+ * under Node spawns the worker for the first time) we locate the Bun binary
+ * via env vars, well-known install locations, and finally the system PATH.
  */
 export function resolveWorkerRuntimePath(options: RuntimeResolverOptions = {}): string | null {
+  // Memoization fast path — only when called with no injected options. Tests
+  // that pass options always run the full resolution (and never populate or
+  // read the cache) to keep the existing test cases deterministic.
+  const isMemoizable = Object.keys(options).length === 0;
+  if (isMemoizable && cachedWorkerRuntimePath !== undefined) {
+    return cachedWorkerRuntimePath;
+  }
+
+  const result = resolveWorkerRuntimePathUncached(options);
+
+  // Only cache successful resolutions. See the comment on
+  // `cachedWorkerRuntimePath` above for the rationale.
+  if (isMemoizable && result !== null) {
+    cachedWorkerRuntimePath = result;
+  }
+  return result;
+}
+
+function resolveWorkerRuntimePathUncached(options: RuntimeResolverOptions): string | null {
   const platform = options.platform ?? process.platform;
   const execPath = options.execPath ?? process.execPath;
 
-  // Non-Windows currently relies on the runtime that launched worker-service.
-  if (platform !== 'win32') {
-    return execPath;
-  }
-
   // If already running under Bun, reuse it directly.
   if (isBunExecutablePath(execPath)) {
     return execPath;
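The "cache success, never cache null" rule spelled out in the comments above generalizes to a small pattern. A sketch under illustrative names (this is not the actual ProcessManager API — the real code inlines the cache rather than wrapping it):

```typescript
// Wrap a resolver so successful results are memoized but null (not-found)
// results stay retryable — e.g. the user installs Bun after the first lookup.
function memoizeSuccess<T>(resolveOnce: () => T | null): () => T | null {
  let cached: T | undefined;
  return () => {
    if (cached !== undefined) return cached;
    const result = resolveOnce();
    if (result !== null) cached = result; // null is re-resolved on the next call
    return result;
  };
}

// Usage: flips from null to a stable cached value once the lookup succeeds.
let installed = false;
const lookup = memoizeSuccess(() => (installed ? '/usr/local/bin/bun' : null));
```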
@@ -96,7 +137,8 @@ export function resolveWorkerRuntimePath(options: RuntimeResolverOptions = {}):
   const pathExists = options.pathExists ?? existsSync;
   const lookupInPath = options.lookupInPath ?? lookupBinaryInPath;
 
-  const candidatePaths = [
+  const candidatePaths: (string | undefined)[] = platform === 'win32'
+    ? [
       env.BUN,
       env.BUN_PATH,
       path.join(homeDirectory, '.bun', 'bin', 'bun.exe'),
@@ -104,6 +146,16 @@ export function resolveWorkerRuntimePath(options: RuntimeResolverOptions = {}):
       env.USERPROFILE ? path.join(env.USERPROFILE, '.bun', 'bin', 'bun.exe') : undefined,
       env.LOCALAPPDATA ? path.join(env.LOCALAPPDATA, 'bun', 'bun.exe') : undefined,
       env.LOCALAPPDATA ? path.join(env.LOCALAPPDATA, 'bun', 'bin', 'bun.exe') : undefined,
+    ]
+    : [
+      env.BUN,
+      env.BUN_PATH,
+      path.join(homeDirectory, '.bun', 'bin', 'bun'),
+      '/usr/local/bin/bun',
+      '/opt/homebrew/bin/bun',
+      '/home/linuxbrew/.linuxbrew/bin/bun',
+      '/usr/bin/bun', // Debian/Ubuntu apt install path
+      '/snap/bin/bun', // Ubuntu Snap install path
+    ];
 
   for (const candidate of candidatePaths) {
@@ -114,7 +166,11 @@ export function resolveWorkerRuntimePath(options: RuntimeResolverOptions = {}):
       return normalized;
     }
 
-    // Allow command-style values from env (e.g. BUN=bun)
+    // Allow command-style values from env (e.g. BUN=bun). The previous branch
+    // would also match this candidate via isBunExecutablePath('bun') === true,
+    // but pathExists('bun') is false because it's a relative name — so this
+    // branch is what actually fires for the bare-command case. We return the
+    // bare name unchanged so child_process.spawn() resolves it via PATH.
     if (normalized.toLowerCase() === 'bun') {
       return normalized;
     }
@@ -648,16 +704,24 @@ export function spawnDaemon(
     ...extraEnv
   });
 
+  // worker-service.cjs imports `bun:sqlite`, so the spawned runtime MUST be
+  // Bun on every platform — never the current process.execPath, which may be
+  // Node when the caller is the MCP server. Resolve once before the OS branch
+  // split so we don't pay for a duplicate PATH lookup if Bun isn't found at a
+  // well-known path. See resolveWorkerRuntimePath() for the candidate list.
+  const runtimePath = resolveWorkerRuntimePath();
+  if (!runtimePath) {
+    logger.error(
+      'SYSTEM',
+      'Bun runtime not found — install from https://bun.sh and ensure it is on PATH or set BUN env var. The worker daemon requires Bun because it uses bun:sqlite.'
+    );
+    return undefined;
+  }
+
   if (isWindows) {
     // Use PowerShell Start-Process to spawn a hidden, independent process
     // Unlike WMIC, PowerShell inherits environment variables from parent
     // -WindowStyle Hidden prevents console popup
-    const runtimePath = resolveWorkerRuntimePath();
-
-    if (!runtimePath) {
-      logger.error('SYSTEM', 'Failed to locate Bun runtime for Windows worker spawn');
-      return undefined;
-    }
-
     // Use -EncodedCommand to avoid all shell quoting issues with spaces in paths
     const psScript = `Start-Process -FilePath '${runtimePath.replace(/'/g, "''")}' -ArgumentList @('${scriptPath.replace(/'/g, "''")}','--daemon') -WindowStyle Hidden`;
@@ -669,6 +733,13 @@ export function spawnDaemon(
       windowsHide: true,
       env
     });
+    // Windows success sentinel: PowerShell `Start-Process` does not return
+    // the spawned PID, and we don't want to pay for an extra `Get-Process`
+    // round-trip just to discover it. Return 0 (a conventionally invalid
+    // Unix PID) so callers can distinguish "spawn dispatched" from "spawn
+    // failed". Callers MUST use `pid === undefined` to detect failure —
+    // never falsy checks like `if (!pid)`, which would silently treat
+    // success as failure here.
     return 0;
   } catch (error) {
     // APPROVED OVERRIDE: Windows daemon spawn is best-effort; log and let callers fall back to health checks/retry flow.
@@ -681,9 +752,10 @@ export function spawnDaemon(
   // controlling terminal. This prevents SIGHUP from reaching the daemon
   // even if the in-process SIGHUP handler somehow fails (belt-and-suspenders).
   // Fall back to standard detached spawn if setsid is not available.
+  // `runtimePath` was resolved at the top of this function (see comment there).
   const setsidPath = '/usr/bin/setsid';
   if (existsSync(setsidPath)) {
-    const child = spawn(setsidPath, [process.execPath, scriptPath, '--daemon'], {
+    const child = spawn(setsidPath, [runtimePath, scriptPath, '--daemon'], {
       detached: true,
       stdio: 'ignore',
       env
@@ -698,7 +770,7 @@ export function spawnDaemon(
   }
 
   // Fallback: standard detached spawn (macOS, systems without setsid)
-  const child = spawn(process.execPath, [scriptPath, '--daemon'], {
+  const child = spawn(runtimePath, [scriptPath, '--daemon'], {
     detached: true,
     stdio: 'ignore',
    env
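The `return 0` sentinel documented above imposes a contract on every caller of `spawnDaemon`. A sketch of the check callers must use (a hypothetical helper for illustration, not part of ProcessManager):

```typescript
// On Windows, PowerShell Start-Process does not report the spawned PID, so
// spawnDaemon returns 0 as a "spawn dispatched" sentinel. 0 is therefore a
// *success* value; only undefined means the spawn failed.
function spawnSucceeded(pid: number | undefined): boolean {
  return pid !== undefined; // never `if (!pid)` — that misreads the Windows 0 sentinel
}
```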
@@ -10,7 +10,7 @@
  */
 
 import path from 'path';
-import { existsSync, writeFileSync, unlinkSync, statSync } from 'fs';
+import { existsSync } from 'fs';
 import { Client } from '@modelcontextprotocol/sdk/client/index.js';
 import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
 import { getWorkerPort, getWorkerHost } from '../shared/worker-utils.js';
@@ -23,43 +23,11 @@ import { ChromaSync } from './sync/ChromaSync.js';
 import { configureSupervisorSignalHandlers, getSupervisor, startSupervisor } from '../supervisor/index.js';
 import { sanitizeEnv } from '../supervisor/env-sanitizer.js';
 
-// Windows: avoid repeated spawn popups when startup fails (issue #921)
-const WINDOWS_SPAWN_COOLDOWN_MS = 2 * 60 * 1000;
-
-function getWorkerSpawnLockPath(): string {
-  return path.join(SettingsDefaultsManager.get('CLAUDE_MEM_DATA_DIR'), '.worker-start-attempted');
-}
-
-function shouldSkipSpawnOnWindows(): boolean {
-  if (process.platform !== 'win32') return false;
-  const lockPath = getWorkerSpawnLockPath();
-  if (!existsSync(lockPath)) return false;
-  try {
-    const modifiedTimeMs = statSync(lockPath).mtimeMs;
-    return Date.now() - modifiedTimeMs < WINDOWS_SPAWN_COOLDOWN_MS;
-  } catch {
-    return false;
-  }
-}
-
-function markWorkerSpawnAttempted(): void {
-  if (process.platform !== 'win32') return;
-  try {
-    writeFileSync(getWorkerSpawnLockPath(), '', 'utf-8');
-  } catch {
-    // Best-effort lock file — failure to write shouldn't block startup
-  }
-}
-
-function clearWorkerSpawnAttempted(): void {
-  if (process.platform !== 'win32') return;
-  try {
-    const lockPath = getWorkerSpawnLockPath();
-    if (existsSync(lockPath)) unlinkSync(lockPath);
-  } catch {
-    // Best-effort cleanup
-  }
-}
+// Worker spawn / Windows-cooldown helpers are defined in ./worker-spawner.ts
+// so that lightweight consumers (e.g. the MCP server running under Node) can
+// ensure the worker daemon is up without importing this entire module — which
+// transitively pulls in the SQLite database layer via ChromaSync/DatabaseManager.
+import { ensureWorkerStarted as ensureWorkerStartedShared } from './worker-spawner.js';
 
 // Re-export for backward compatibility — canonical implementation in shared/plugin-state.ts
 export { isPluginDisabledInClaudeSettings } from '../shared/plugin-state.js';
@@ -1022,96 +990,22 @@ export class WorkerService {
 
 /**
  * Ensures the worker is started and healthy.
  * This function can be called by both 'start' and 'hook' commands.
+ *
+ * Thin wrapper around the canonical implementation in ./worker-spawner.ts.
+ *
+ * `__filename` is forwarded as the worker script path because, in the CJS
+ * bundle that ships to users, `__filename` always resolves to the compiled
+ * `worker-service.cjs` itself — which is exactly the script the spawner
+ * needs to relaunch as a detached daemon. The MCP server (a separate Node
+ * bundle) cannot rely on its own `__filename` because that would point at
+ * `mcp-server.cjs`, so it computes the worker path explicitly via
+ * `dirname(__filename) + 'worker-service.cjs'` instead.
  *
  * @param port - The TCP port (used for port-in-use checks and daemon spawn)
  * @returns true if worker is healthy (existing or newly started), false on failure
  */
 export async function ensureWorkerStarted(port: number): Promise<boolean> {
-  // Clean stale PID file first (cheap: 1 fs read + 1 signal-0 check)
-  const pidFileStatus = cleanStalePidFile();
-  if (pidFileStatus === 'alive') {
-    logger.info('SYSTEM', 'Worker PID file points to a live process, skipping duplicate spawn');
-    const healthy = await waitForHealth(port, getPlatformTimeout(HOOK_TIMEOUTS.PORT_IN_USE_WAIT));
-    if (healthy) {
-      logger.info('SYSTEM', 'Worker became healthy while waiting on live PID');
-      return true;
-    }
-    logger.warn('SYSTEM', 'Live PID detected but worker did not become healthy before timeout');
-    return false;
-  }
-
-  // Check if worker is already running and healthy.
-  // NOTE: Version mismatch auto-restart intentionally removed (#1435).
-  // The marketplace bundle ships with __DEFAULT_PACKAGE_VERSION__ unbaked, causing
-  // BUILT_IN_VERSION to fall back to "development". This creates a 100% reproducible
-  // mismatch on every hook call, killing a healthy worker and often failing to restart
-  // (cold start exceeds POST_SPAWN_WAIT). A working-but-old worker is strictly better
-  // than a dead worker. Users must manually restart after genuine plugin updates.
-  // See also: #566, #665, #667, #669, #689, #1124, #1145 (same pattern across 8+ releases).
-  if (await waitForHealth(port, 1000)) {
-    // Health passed — worker is listening. Also wait for readiness in case
-    // another hook just spawned it and background init is still running.
-    // This mirrors the fresh-spawn path (line ~1025) so concurrent hooks
-    // don't race past a cold-starting worker's initialization guard.
-    const ready = await waitForReadiness(port, getPlatformTimeout(HOOK_TIMEOUTS.READINESS_WAIT));
-    if (!ready) {
-      logger.warn('SYSTEM', 'Worker is alive but readiness timed out — proceeding anyway');
-    }
-    logger.info('SYSTEM', 'Worker already running and healthy');
-    return true;
-  }
-
-  // Check if port is in use by something else
-  const portInUse = await isPortInUse(port);
-  if (portInUse) {
-    logger.info('SYSTEM', 'Port in use, waiting for worker to become healthy');
-    const healthy = await waitForHealth(port, getPlatformTimeout(HOOK_TIMEOUTS.PORT_IN_USE_WAIT));
-    if (healthy) {
-      logger.info('SYSTEM', 'Worker is now healthy');
-      return true;
-    }
-    logger.error('SYSTEM', 'Port in use but worker not responding to health checks');
-    return false;
-  }
-
-  // Windows: skip spawn if a recent attempt already failed (prevents repeated bun.exe popups, issue #921)
-  if (shouldSkipSpawnOnWindows()) {
-    logger.warn('SYSTEM', 'Worker unavailable on Windows — skipping spawn (recent attempt failed within cooldown)');
-    return false;
-  }
-
-  // Spawn new worker daemon
-  logger.info('SYSTEM', 'Starting worker daemon');
-  markWorkerSpawnAttempted();
-  const pid = spawnDaemon(__filename, port);
-  if (pid === undefined) {
-    logger.error('SYSTEM', 'Failed to spawn worker daemon');
-    return false;
-  }
-
-  // PID file is written by the worker itself after listen() succeeds
-  // This is race-free and works correctly on Windows where cmd.exe PID is useless
-
-  const healthy = await waitForHealth(port, getPlatformTimeout(HOOK_TIMEOUTS.POST_SPAWN_WAIT));
-  if (!healthy) {
-    removePidFile();
-    logger.error('SYSTEM', 'Worker failed to start (health check timeout)');
-    return false;
-  }
-
-  // Health passed (HTTP listening). Now wait for DB + search initialization
-  // so hooks that run immediately after can actually use the worker.
-  const ready = await waitForReadiness(port, getPlatformTimeout(HOOK_TIMEOUTS.READINESS_WAIT));
-  if (!ready) {
-    logger.warn('SYSTEM', 'Worker is alive but readiness timed out — proceeding anyway');
-  }
-
-  clearWorkerSpawnAttempted();
-  // Touch PID file to signal other sessions that a spawn just completed.
-  touchPidFile();
-  logger.info('SYSTEM', 'Worker started successfully');
-  return true;
+  return ensureWorkerStartedShared(port, __filename);
 }
 
 // ============================================================================
207  src/services/worker-spawner.ts  Normal file
@@ -0,0 +1,207 @@
+/**
+ * Worker Spawner - Lightweight worker daemon lifecycle helper
+ *
+ * Extracted from worker-service.ts so that lightweight consumers (like the
+ * MCP server running under Node) can ensure the worker daemon is running
+ * without importing the full worker-service bundle, which transitively pulls
+ * in `bun:sqlite` and the entire database layer.
+ *
+ * This module MUST NOT import anything that touches SQLite, ChromaDB, or the
+ * worker business logic modules. Keep it lean on purpose.
+ *
+ * Dependency boundary note: this file imports from `SettingsDefaultsManager`,
+ * `ProcessManager`, and `HealthMonitor`. None of those currently touch
+ * `bun:sqlite` or any other Bun-only module. If any of them ever does, this
+ * module's SQLite-free contract silently breaks and the build guardrail in
+ * `scripts/build-hooks.js` is the only thing that catches it. Audit transitive
+ * imports here when adding new helpers from the shared/infrastructure layers.
+ */
+
+import path from 'path';
+import { existsSync, mkdirSync, writeFileSync, unlinkSync, statSync } from 'fs';
+import { logger } from '../utils/logger.js';
+import { HOOK_TIMEOUTS } from '../shared/hook-constants.js';
+import { SettingsDefaultsManager } from '../shared/SettingsDefaultsManager.js';
+import {
+  cleanStalePidFile,
+  getPlatformTimeout,
+  removePidFile,
+  spawnDaemon,
+  touchPidFile,
+} from './infrastructure/ProcessManager.js';
+import {
+  isPortInUse,
+  waitForHealth,
+  waitForReadiness,
+} from './infrastructure/HealthMonitor.js';
+
+// Windows: avoid repeated spawn popups when startup fails (issue #921)
+const WINDOWS_SPAWN_COOLDOWN_MS = 2 * 60 * 1000;
+
+function getWorkerSpawnLockPath(): string {
+  return path.join(SettingsDefaultsManager.get('CLAUDE_MEM_DATA_DIR'), '.worker-start-attempted');
+}
+
+// Internal helpers — NOT exported. Only ensureWorkerStarted should be on the
+// public surface; callers must not bypass the lifecycle by calling these
+// directly. See PR #1645 review feedback for context.
+
+function shouldSkipSpawnOnWindows(): boolean {
+  if (process.platform !== 'win32') return false;
+  const lockPath = getWorkerSpawnLockPath();
+  if (!existsSync(lockPath)) return false;
+  try {
+    const modifiedTimeMs = statSync(lockPath).mtimeMs;
+    return Date.now() - modifiedTimeMs < WINDOWS_SPAWN_COOLDOWN_MS;
+  } catch {
+    return false;
+  }
+}
+
+function markWorkerSpawnAttempted(): void {
+  if (process.platform !== 'win32') return;
+  try {
+    const lockPath = getWorkerSpawnLockPath();
+    // Ensure CLAUDE_MEM_DATA_DIR exists before writing the marker. On a fresh
+    // user profile the directory may not exist yet, in which case writeFileSync
+    // would throw ENOENT, the catch would swallow it, and the cooldown marker
+    // would never be created — defeating the popup-loop protection that this
+    // helper exists to provide. recursive: true is a no-op when the dir already
+    // exists, so this is safe to call on every spawn attempt.
+    mkdirSync(path.dirname(lockPath), { recursive: true });
+    writeFileSync(lockPath, '', 'utf-8');
+  } catch {
+    // APPROVED OVERRIDE: best-effort cooldown marker. If we can't even create
+    // the data dir or write the marker, the worker spawn itself is almost
+    // certainly going to fail too — surfacing that downstream gives the user
+    // a far more useful error than a noisy log line about a lock file.
+  }
+}
+
+function clearWorkerSpawnAttempted(): void {
+  if (process.platform !== 'win32') return;
+  try {
+    const lockPath = getWorkerSpawnLockPath();
+    if (existsSync(lockPath)) unlinkSync(lockPath);
+  } catch {
+    // APPROVED OVERRIDE: best-effort cleanup of the cooldown marker after a
+    // successful spawn. A stale marker on disk is harmless — the worst case
+    // is one suppressed retry within the cooldown window, then it self-heals.
+  }
+}
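The cooldown decision in `shouldSkipSpawnOnWindows` boils down to a single mtime comparison. Sketched here with the clock and lock-file mtime injected so it can be exercised off-Windows — an assumption of this sketch, not how the helper is actually factored in the source:

```typescript
// Two-minute window, matching WINDOWS_SPAWN_COOLDOWN_MS above.
const COOLDOWN_MS = 2 * 60 * 1000;

// null mtime = no cooldown marker on disk -> never skip the spawn.
function withinSpawnCooldown(lockMtimeMs: number | null, nowMs: number): boolean {
  if (lockMtimeMs === null) return false;
  return nowMs - lockMtimeMs < COOLDOWN_MS;
}
```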
+
+/**
+ * Ensures the worker is started and healthy.
+ *
+ * @param port - The TCP port (used for port-in-use checks and daemon spawn)
+ * @param workerScriptPath - Absolute path to the worker-service script to spawn.
+ *                           Callers running inside worker-service pass `__filename`.
+ *                           Callers outside (e.g., mcp-server) must resolve the
+ *                           path to worker-service.cjs in the plugin's scripts dir.
+ * @returns true if worker is healthy (existing or newly started), false on failure
+ */
+export async function ensureWorkerStarted(
+  port: number,
+  workerScriptPath: string
+): Promise<boolean> {
+  // Defensive guard: validate the worker script path before any health check
+  // or spawn attempt. Without this, an empty string or missing file just
+  // surfaces as a low-signal child_process error from spawnDaemon. Callers
+  // should always pass a valid path, but a partial install or a regression
+  // in path resolution upstream is much easier to debug with an explicit
+  // log line at the entry point. See PR #1645 review feedback for context.
+  if (!workerScriptPath) {
+    logger.error('SYSTEM', 'ensureWorkerStarted called with empty workerScriptPath — caller bug');
+    return false;
+  }
+  if (!existsSync(workerScriptPath)) {
+    logger.error(
+      'SYSTEM',
+      'ensureWorkerStarted: worker script not found at expected path — likely a partial install or build artifact missing',
+      { workerScriptPath }
+    );
+    return false;
+  }
+
+  // Clean stale PID file first (cheap: 1 fs read + 1 signal-0 check)
+  const pidFileStatus = cleanStalePidFile();
+  if (pidFileStatus === 'alive') {
+    logger.info('SYSTEM', 'Worker PID file points to a live process, skipping duplicate spawn');
+    const healthy = await waitForHealth(port, getPlatformTimeout(HOOK_TIMEOUTS.PORT_IN_USE_WAIT));
+    if (healthy) {
+      // A previous failed spawn may have left a stale Windows cooldown marker
+      // on disk. Now that the worker is confirmed healthy via this alternate
+      // path, clear it so a future genuine outage isn't suppressed for the
+      // remainder of the 2-minute window. Per CodeRabbit on PR #1645.
+      // No-op on non-Windows.
+      clearWorkerSpawnAttempted();
+      logger.info('SYSTEM', 'Worker became healthy while waiting on live PID');
+      return true;
+    }
+    logger.warn('SYSTEM', 'Live PID detected but worker did not become healthy before timeout');
+    return false;
+  }
+
+  // Check if worker is already running and healthy.
+  // NOTE: Version mismatch auto-restart intentionally removed (#1435).
+  if (await waitForHealth(port, 1000)) {
+    // Same rationale as above: clear any stale cooldown marker now that we
+    // know the worker is healthy via the fast-path health check.
+    clearWorkerSpawnAttempted();
+    const ready = await waitForReadiness(port, getPlatformTimeout(HOOK_TIMEOUTS.READINESS_WAIT));
+    if (!ready) {
+      logger.warn('SYSTEM', 'Worker is alive but readiness timed out — proceeding anyway');
+    }
+    logger.info('SYSTEM', 'Worker already running and healthy');
+    return true;
+  }
+
+  // Check if port is in use by something else
+  const portInUse = await isPortInUse(port);
+  if (portInUse) {
+    logger.info('SYSTEM', 'Port in use, waiting for worker to become healthy');
+    const healthy = await waitForHealth(port, getPlatformTimeout(HOOK_TIMEOUTS.PORT_IN_USE_WAIT));
+    if (healthy) {
+      // Same rationale as above.
+      clearWorkerSpawnAttempted();
+      logger.info('SYSTEM', 'Worker is now healthy');
+      return true;
+    }
+    logger.error('SYSTEM', 'Port in use but worker not responding to health checks');
+    return false;
+  }
+
+  // Windows: skip spawn if a recent attempt already failed (issue #921)
+  if (shouldSkipSpawnOnWindows()) {
+    logger.warn('SYSTEM', 'Worker unavailable on Windows — skipping spawn (recent attempt failed within cooldown)');
+    return false;
+  }
+
+  // Spawn new worker daemon
+  logger.info('SYSTEM', 'Starting worker daemon', { workerScriptPath });
+  markWorkerSpawnAttempted();
+  const pid = spawnDaemon(workerScriptPath, port);
+  if (pid === undefined) {
+    logger.error('SYSTEM', 'Failed to spawn worker daemon');
+    return false;
+  }
+
+  // PID file is written by the worker itself after listen() succeeds
+  const healthy = await waitForHealth(port, getPlatformTimeout(HOOK_TIMEOUTS.POST_SPAWN_WAIT));
+  if (!healthy) {
+    removePidFile();
+    logger.error('SYSTEM', 'Worker failed to start (health check timeout)');
+    return false;
+  }
+
+  // Health passed (HTTP listening). Now wait for DB + search initialization
+  const ready = await waitForReadiness(port, getPlatformTimeout(HOOK_TIMEOUTS.READINESS_WAIT));
+  if (!ready) {
+    logger.warn('SYSTEM', 'Worker is alive but readiness timed out — proceeding anyway');
+  }
+
+  clearWorkerSpawnAttempted();
+  touchPidFile();
+  logger.info('SYSTEM', 'Worker started successfully');
+  return true;
+}
|
||||
@@ -1,5 +1,5 @@
 import { describe, it, expect, beforeEach, afterEach } from 'bun:test';
-import { existsSync, readFileSync, mkdirSync, writeFileSync, rmSync } from 'fs';
+import { existsSync, readFileSync, mkdirSync, writeFileSync, rmSync, statSync } from 'fs';
-import { homedir } from 'os';
+import { tmpdir } from 'os';
 import path from 'path';
@@ -229,13 +229,65 @@ describe('ProcessManager', () => {
   });

   describe('resolveWorkerRuntimePath', () => {
-    it('should return current runtime on non-Windows platforms', () => {
+    it('should reuse execPath when already running under Bun on Linux', () => {
       const resolved = resolveWorkerRuntimePath({
         platform: 'linux',
-        execPath: '/usr/bin/node'
+        execPath: '/home/alice/.bun/bin/bun'
       });

-      expect(resolved).toBe('/usr/bin/node');
+      expect(resolved).toBe('/home/alice/.bun/bin/bun');
     });

+    it('should look up Bun on non-Windows when caller is Node (e.g. MCP server)', () => {
+      const resolved = resolveWorkerRuntimePath({
+        platform: 'linux',
+        execPath: '/usr/bin/node',
+        env: {} as NodeJS.ProcessEnv,
+        homeDirectory: '/home/alice',
+        pathExists: candidatePath => candidatePath === '/home/alice/.bun/bin/bun',
+        lookupInPath: () => null
+      });
+
+      expect(resolved).toBe('/home/alice/.bun/bin/bun');
+    });
+
+    it('should preserve bare BUN env command on non-Windows so spawn resolves it via PATH', () => {
+      const resolved = resolveWorkerRuntimePath({
+        platform: 'linux',
+        execPath: '/usr/bin/node',
+        env: { BUN: 'bun' } as NodeJS.ProcessEnv,
+        homeDirectory: '/home/alice',
+        pathExists: () => false,
+        lookupInPath: () => null
+      });
+
+      expect(resolved).toBe('bun');
+    });
+
+    it('should fall back to PATH lookup on non-Windows when no known Bun candidate exists', () => {
+      const resolved = resolveWorkerRuntimePath({
+        platform: 'linux',
+        execPath: '/usr/bin/node',
+        env: {} as NodeJS.ProcessEnv,
+        homeDirectory: '/home/alice',
+        pathExists: () => false,
+        lookupInPath: () => '/custom/bin/bun'
+      });
+
+      expect(resolved).toBe('/custom/bin/bun');
+    });
+
+    it('should return null on non-Windows when Bun cannot be resolved', () => {
+      const resolved = resolveWorkerRuntimePath({
+        platform: 'linux',
+        execPath: '/usr/bin/node',
+        env: {} as NodeJS.ProcessEnv,
+        homeDirectory: '/home/alice',
+        pathExists: () => false,
+        lookupInPath: () => null
+      });
+
+      expect(resolved).toBeNull();
+    });
+
     it('should reuse execPath when already running under Bun on Windows', () => {
@@ -380,7 +432,7 @@ describe('ProcessManager', () => {
     // Wait a bit to ensure measurable mtime difference
     await new Promise(r => setTimeout(r, 50));

-    const statsBefore = require('fs').statSync(PID_FILE);
+    const statsBefore = statSync(PID_FILE);
     const mtimeBefore = statsBefore.mtimeMs;

     // Wait again to ensure mtime advances
@@ -388,7 +440,7 @@ describe('ProcessManager', () => {

     touchPidFile();

-    const statsAfter = require('fs').statSync(PID_FILE);
+    const statsAfter = statSync(PID_FILE);
     const mtimeAfter = statsAfter.mtimeMs;

     expect(mtimeAfter).toBeGreaterThanOrEqual(mtimeBefore);
@@ -439,6 +491,39 @@ describe('ProcessManager', () => {
       try { process.kill(result, 'SIGKILL'); } catch { /* already exited */ }
     }
   });

+  /**
+   * Documents the spawnDaemon return contract for the Windows `0` PID
+   * success sentinel. PowerShell `Start-Process` does not return the spawned
+   * PID, so the Windows branch returns 0 as a "spawn dispatched" sentinel.
+   * Callers MUST use `pid === undefined` to detect failure — never falsy
+   * checks like `if (!pid)`, which would silently treat success as failure
+   * because 0 is falsy in JavaScript.
+   *
+   * This contract test exists so any future contributor introducing
+   * `if (!pid)` against a spawnDaemon return value (or its wrapper) sees a
+   * failing assertion that documents why the falsy check is incorrect.
+   * See PR #1645 review feedback for context.
+   */
+  it('Windows 0 PID success sentinel must NOT be detected via falsy check', () => {
+    const windowsSuccessSentinel: number | undefined = 0;
+    const failureSentinel: number | undefined = undefined;
+
+    // Correct contract: undefined === failure, anything else === success.
+    expect(windowsSuccessSentinel === undefined).toBe(false);
+    expect(failureSentinel === undefined).toBe(true);
+
+    // Demonstrates the bug a future regression would introduce:
+    // `if (!pid)` is true for BOTH the Windows success sentinel AND the
+    // genuine failure sentinel — silently treating success as failure.
+    expect(!windowsSuccessSentinel).toBe(true); // ← this is the trap
+    expect(!failureSentinel).toBe(true);
+
+    // Therefore, callers must use strict undefined comparison.
+    const isFailure = (pid: number | undefined) => pid === undefined;
+    expect(isFailure(windowsSuccessSentinel)).toBe(false);
+    expect(isFailure(failureSentinel)).toBe(true);
+  });
 });

 describe('SIGHUP handling', () => {
31 tests/services/worker-spawner.test.ts Normal file
@@ -0,0 +1,31 @@
+/**
+ * Tests for worker-spawner.ts validation guards.
+ *
+ * These tests cover the entry-point defensive guards in `ensureWorkerStarted`
+ * (empty workerScriptPath, non-existent workerScriptPath). The deeper spawn
+ * lifecycle (PID file cleanup, health checks, daemon spawn, readiness wait)
+ * is not unit-tested here because it requires injectable I/O and a broader
+ * refactor — see PR #1645 review feedback discussion.
+ */
+
+import { describe, it, expect } from 'bun:test';
+import { ensureWorkerStarted } from '../../src/services/worker-spawner.js';
+
+describe('ensureWorkerStarted validation guards', () => {
+  // The port arguments here are arbitrary — both tests short-circuit on the
+  // workerScriptPath validation guards before any network/health-check I/O,
+  // so the port is never actually bound or contacted. Picked from an unlikely
+  // range to prevent confusion if a future test ever does run real health
+  // checks against these instances.
+
+  it('returns false when workerScriptPath is empty string', async () => {
+    const result = await ensureWorkerStarted(39001, '');
+    expect(result).toBe(false);
+  });
+
+  it('returns false when workerScriptPath does not exist on disk', async () => {
+    const bogusPath = '/tmp/__claude-mem-test-nonexistent-worker-script-' + Date.now() + '.cjs';
+    const result = await ensureWorkerStarted(39002, bogusPath);
+    expect(result).toBe(false);
+  });
+});
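The spawnDaemon return contract exercised by the contract test above (undefined means spawn failure; the Windows branch returns a `0` success sentinel because PowerShell `Start-Process` yields no PID) can be demonstrated standalone. The `spawnFailed` helper below is an illustrative name, not from the codebase.

```typescript
// Illustrative sketch of the spawnDaemon return contract (not the project's
// code): undefined => spawn failed; any number, INCLUDING the Windows 0
// sentinel, => spawn dispatched.
type SpawnResult = number | undefined;

// Correct failure detection: strict comparison against undefined.
const spawnFailed = (pid: SpawnResult): boolean => pid === undefined;

const windowsSuccessSentinel: SpawnResult = 0;      // Start-Process yields no PID
const genuineFailure: SpawnResult = undefined;

console.log(spawnFailed(windowsSuccessSentinel));   // false: 0 means "dispatched"
console.log(spawnFailed(genuineFailure));           // true
console.log(!windowsSuccessSentinel);               // true: why `if (!pid)` is the trap
```

A falsy check like `if (!pid)` would classify the Windows success sentinel as a failure, which is exactly the regression the contract test guards against.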