worldmonitor/tests/redis-caching.test.mjs
Sebastien Melki e68a7147dd chore(api): sebuf migration follow-ups (post-#3242) (#3287)
* chore(api-manifest): rewrite brief-why-matters reason as proper internal-helper justification

Carried in from #3248 merge as a band-aid (called out in #3242 review followup
checklist item 7). The endpoint genuinely belongs in internal-helper —
RELAY_SHARED_SECRET-bearer auth, cron-only caller, never reached by dashboards
or partners. Same shape constraint as api/notify.ts.

Replaces the apologetic "filed here to keep the lint green" framing with a
proper structural justification: modeling it as a generated service would
publish internal cron plumbing as user-facing API surface.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(lint): premium-fetch parity check for ServiceClients (closes #3279)

Adds scripts/enforce-premium-fetch.mjs — AST-walks src/, finds every
`new <ServiceClient>(...)` (variable decl OR `this.foo =` assignment),
tracks which methods each instance actually calls, and fails if any
called method targets a path in src/shared/premium-paths.ts
PREMIUM_RPC_PATHS without `{ fetch: premiumFetch }` on the constructor.

Per-call-site analysis (not class-level) keeps the trade/index.ts pattern
clean — publicClient with globalThis.fetch + premiumClient with
premiumFetch on the same TradeServiceClient class — since publicClient
never calls a premium method.
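The trade/index.ts pattern the per-call-site analysis is built to tolerate can be sketched as follows — a hedged illustration only, where the class, its methods, and `premiumFetch` are stand-ins for the repo's actual generated client, not the real code:

```javascript
// Stand-in for a generated ServiceClient: one class, two instances with
// different fetch implementations. Per-call-site analysis only flags the
// instance that actually calls a premium method; class-level analysis
// would wrongly flag publicClient too.
class TradeServiceClient {
  constructor(opts = {}) {
    this.fetch = opts.fetch ?? globalThis.fetch;
  }
  // Public path: safe with either fetch.
  async listPublicQuotes() {
    return { via: this.fetch.label ?? 'global' };
  }
  // Hypothetical premium path: must only be called on an instance
  // constructed with { fetch: premiumFetch }.
  async getProDesk() {
    return { via: this.fetch.label ?? 'global' };
  }
}

// Stand-in for the real premiumFetch wrapper.
const premiumFetch = async () => ({});
premiumFetch.label = 'premium';

const publicClient = new TradeServiceClient({ fetch: globalThis.fetch });
const premiumClient = new TradeServiceClient({ fetch: premiumFetch });
```

Here `publicClient` never calls `getProDesk`, so the lint leaves it alone even though both instances share the class.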

Wired into:
- npm run lint:premium-fetch
- .husky/pre-push (right after lint:rate-limit-policies)
- .github/workflows/lint-code.yml (right after lint:api-contract)

Found and fixed three latent instances of the HIGH(new) #1 class from
#3242 review (silent 401 → empty fallback for signed-in browser pros):

- src/services/correlation-engine/engine.ts — IntelligenceServiceClient
  built with no fetch option called deductSituation. LLM-assessment overlay
  on convergence cards never landed for browser pros without a WM key.
- src/services/economic/index.ts — EconomicServiceClient with
  globalThis.fetch called getNationalDebt. National-debt panel rendered
  empty for browser pros.
- src/services/sanctions-pressure.ts — SanctionsServiceClient with
  globalThis.fetch called listSanctionsPressure. Sanctions-pressure panel
  rendered empty for browser pros.

All three swap to premiumFetch (single shared client, mirrors the
supply-chain/index.ts justification — premiumFetch no-ops safely on
public methods, so the public methods on those clients keep working).

Verification:
- lint:premium-fetch clean (34 ServiceClient classes, 28 premium paths,
  466 src/ files analyzed)
- Negative test: revert any of the three to globalThis.fetch → exit 1
  with file:line and called-premium-method names
- typecheck + typecheck:api clean
- lint:api-contract / lint:rate-limit-policies / lint:boundaries clean
- tests/sanctions-pressure.test.mjs + premium-fetch.test.mts: 16/16 pass

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(military): fetchStaleFallback NEG_TTL=30s parity (closes #3277)

The legacy /api/military-flights handler had NEG_TTL = 30_000ms — a short
suppression window after a failed live + stale read so we don't Redis-hammer
the stale key during sustained relay+seed outages.

Carried into the sebuf list-military-flights handler:
- Module-scoped `staleNegUntil` timestamp (per-isolate on Vercel Edge,
  which is fine — each warm isolate gets its own 30s suppression window).
- Set whenever fetchStaleFallback returns null (key missing, parse fail,
  empty array after staleToProto filter, or thrown error).
- Checked at the entry of fetchStaleFallback before doing the Redis read.
- Test seam `_resetStaleNegativeCacheForTests()` exposed for unit tests.

Test pinned in tests/redis-caching.test.mjs: drives a stale-empty cycle
three times — first read hits Redis, second within window doesn't, after
test-only reset it does again.
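The suppression mechanics can be sketched in isolation — the 30s value comes from the commit, but the read stub and surrounding names here are illustrative, not the actual handler code:

```javascript
// Per-isolate negative-TTL suppression sketch. A null stale read arms a
// 30s window during which fetchStaleFallback skips Redis entirely.
const NEG_TTL_MS = 30_000;
let staleNegUntil = 0;
let redisReads = 0;

// Stub standing in for the real Redis stale-key read; always "fails".
async function readStaleKey() {
  redisReads += 1;
  return null; // key missing / parse fail / empty after filter
}

async function fetchStaleFallback(now = Date.now()) {
  if (now < staleNegUntil) return null; // inside window: skip the Redis read
  const result = await readStaleKey();
  if (result == null) staleNegUntil = now + NEG_TTL_MS; // arm the window
  return result;
}

// Test seam, mirroring the one the commit exposes.
function _resetStaleNegativeCacheForTests() {
  staleNegUntil = 0;
}
```

Driving this three times reproduces the pinned cycle: first call reads Redis, second within the window does not, and after the test-only reset the read happens again.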

Verified: 18/18 redis-caching tests pass, typecheck:api clean,
lint:premium-fetch clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(lint): rate-limit-policies regex → import() (closes #3278)

The previous lint regex-parsed ENDPOINT_RATE_POLICIES from the source
file. That worked because the literal happens to fit a single line per
key today, but a future reformat (multi-line key wrap, formatter swap,
etc.) would silently break the lint without breaking the build —
exactly the failure mode that's worse than no lint at all.

Fix:
- Export ENDPOINT_RATE_POLICIES from server/_shared/rate-limit.ts.
- Convert scripts/enforce-rate-limit-policies.mjs to async + dynamic
  import() of the policy object directly. Same TS module that the
  gateway uses at runtime → no source-of-truth drift possible.
- Run via tsx (already a dev dep, used by test:data) so the .mjs
  shebang can resolve a .ts import.
- npm script swapped to `tsx scripts/...`. .husky/pre-push uses
  `npm run lint:rate-limit-policies` so no hook change needed.
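The import()-over-regex approach can be sketched as follows; a throwaway module stands in for server/_shared/rate-limit.ts so the sketch is self-contained, and the policy entry and route set are made up for illustration:

```javascript
import { mkdtempSync, writeFileSync, rmSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';
import { pathToFileURL } from 'node:url';

// Stand-in policy module (the real lint imports the same TS module the
// gateway uses at runtime, so there is no second source of truth to drift).
const dir = mkdtempSync(join(tmpdir(), 'lint-sketch-'));
const modPath = join(dir, 'rate-limit.mjs');
writeFileSync(modPath, `export const ENDPOINT_RATE_POLICIES = {
  '/api/sanctions/v1/lookup-entities': { limit: 30, windowSec: 60 },
};`);

// Load the live object and diff its keys against the known gateway routes.
const { ENDPOINT_RATE_POLICIES } = await import(pathToFileURL(modPath).href);
const gatewayRoutes = new Set(['/api/sanctions/v1/lookup-entities']);
const stale = Object.keys(ENDPOINT_RATE_POLICIES).filter((k) => !gatewayRoutes.has(k));

rmSync(dir, { recursive: true, force: true });
// A non-empty `stale` is the exit-1 drift case; reformatting the policy
// source cannot break this check, because the property is what's read.
```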

Verified:
- Clean: 6 policies / 182 gateway routes.
- Negative test (rename a key to the original sanctions typo
  /api/sanctions/v1/lookup-entity): exit 1 with the same incident-
  attributed remedy message as before.
- Reformat test (split a single-line entry across multiple lines):
  still passes — the property is what's read, not the source layout.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(shipping/v2): alertThreshold: 0 preserved; drop dead validation branch (#3242 followup)

Before: alert_threshold was a plain int32. proto3 scalar default is 0, so
the handler couldn't distinguish "partner explicitly sent 0 (deliver every
disruption)" from "partner omitted the field (apply legacy default 50)" —
both arrived as 0 and got coerced to 50 by `> 0 ? : 50`. Silent intent-drop
for any partner who wanted every alert. The subsequent `alertThreshold < 0`
branch was also unreachable after that coercion.

After:
- Proto field is `optional int32 alert_threshold` — TS type becomes
  `alertThreshold?: number`, so omitted = undefined and explicit 0 stays 0.
- Handler uses `req.alertThreshold ?? 50` — undefined → 50, any number
  passes through unchanged.
- Dead `< 0 || > 100` runtime check removed; buf.validate `int32.gte = 0,
  int32.lte = 100` already enforces the range at the wire layer.

Partner wire contract: identical for the omit-field and 1..100 cases.
Only behavioural change is explicit 0 — previously impossible to request,
now honored per proto3 optional semantics.
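The coercion change boils down to two one-liners (the request shapes here are hypothetical; 50 is the legacy default named above):

```javascript
// Before: proto3 scalar — omitted and explicit 0 both arrive as 0,
// and the truthiness check bumps both to 50.
const legacyCoerce = (req) => (req.alertThreshold > 0 ? req.alertThreshold : 50);

// After: optional field — omitted arrives as undefined, so ?? applies
// the default only when the partner actually left the field out.
const fixedCoerce = (req) => req.alertThreshold ?? 50;

legacyCoerce({ alertThreshold: 0 }); // 50 — explicit intent silently dropped
fixedCoerce({ alertThreshold: 0 });  // 0  — deliver every disruption
fixedCoerce({});                     // 50 — legacy default still applies
```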

Scoped `buf generate --path worldmonitor/shipping/v2` to avoid the full-
regen `@ts-nocheck` drift Seb documented in the #3242 PR comments.
Re-applied `@ts-nocheck` on the two regenerated files manually.

Tests:
- `alertThreshold 0 coerces to 50` flipped to `alertThreshold 0 preserved`.
- New test: `alertThreshold omitted (undefined) applies legacy default 50`.
- `rejects > 100` test removed — proto/wire validation handles it; direct
  handler calls intentionally bypass wire and the handler no longer carries
  a redundant runtime range check.

Verified: 18/18 shipping-v2-handler tests pass, typecheck + typecheck:api
clean, all 4 custom lints clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(shipping/v2): document missing webhook delivery worker + DNS-rebinding contract (#3242 followup)

#3242 followup checklist item 6 from @koala73 — sanity-check that the
delivery worker honors the re-resolve-and-re-check contract that
isBlockedCallbackUrl explicitly delegates to it.

Audit finding: no delivery worker for shipping/v2 webhooks exists in this
repo. Grep across the entire tree (excluding generated/dist) shows the
only readers of webhook:sub:* records are the registration / inspection /
rotate-secret handlers themselves. No code reads them and POSTs to the
stored callbackUrl. The delivery worker is presumed to live in Railway
(separate repo) or hasn't been built yet — neither is auditable from
this repo.

Refreshes the comment block at the top of webhook-shared.ts to:
- explicitly state DNS rebinding is NOT mitigated at registration
- spell out the four-step contract the delivery worker MUST follow
  (re-validate URL, dns.lookup, re-check resolved IP against patterns,
   fetch with resolved IP + Host header preserved)
- flag the in-repo gap so anyone landing delivery code can't miss it
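The four-step contract can be sketched as below — a hedged illustration only: `lookup` and `fetchByIp` are injected stand-ins rather than the real `dns.lookup` and HTTP client, and the blocked-IP patterns are abbreviated, not whatever webhook-shared.ts actually exports:

```javascript
// Abbreviated stand-in for the exported private-range patterns.
const BLOCKED_IP_PATTERNS = [/^127\./, /^10\./, /^169\.254\./, /^192\.168\./];

// Sketch of the per-send contract the delivery worker must follow.
async function deliverWebhook(callbackUrl, { lookup, fetchByIp }) {
  const url = new URL(callbackUrl);        // 1. re-validate the URL
  const ip = await lookup(url.hostname);   // 2. fresh DNS resolution at send time
  if (BLOCKED_IP_PATTERNS.some((p) => p.test(ip))) {
    return { delivered: false, reason: 'resolved-ip-blocked' }; // 3. re-check the IP
  }
  // 4. connect to the resolved IP with the Host header preserved, so a
  //    rebinding flip between check and connect can't redirect the send.
  await fetchByIp(ip, { host: url.hostname, path: url.pathname });
  return { delivered: true };
}
```

Re-resolving at step 2 on every send is the point: validating only at registration leaves the whole window between registration and delivery open to a DNS-rebinding flip.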

Tracking the gap as #3288 — acceptance there is "delivery worker imports
the patterns + helpers from webhook-shared.ts and applies the four steps
before each send." Action moves to wherever the delivery worker actually
lives (Railway likely).

No code change. Tests + lints unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci(lint): add rate-limit-policies step (greptile P1 #3287)

Pre-push hook ran lint:rate-limit-policies but the CI workflow did not,
so fork PRs and --no-verify pushes bypassed the exact drift check the
lint was added to enforce (closes #3278). Adding it right after
lint:api-contract so it runs in the same context the lint was designed
for.

* refactor(lint): premium-fetch regex → import() + loop classRe (greptile P2 #3287)

Two fragilities greptile flagged on enforce-premium-fetch.mjs:

1. loadPremiumPaths regex-parsed src/shared/premium-paths.ts with
   /'(\/api\/[^']+)'/g — same class of silent drift we just removed
   from enforce-rate-limit-policies in #3278. Reformatting the source
   Set (double quotes, spread, helper-computed entries) would drop
   paths from the lint while leaving the runtime untouched. Fix: flip
   the shebang to `#!/usr/bin/env -S npx tsx` and dynamic-import
   PREMIUM_RPC_PATHS directly, mirroring the rate-limit pattern.
   package.json lint:premium-fetch now invokes via tsx too so the
   npm-script path matches direct execution.

2. loadClientClassMap ran classRe.exec once, silently dropping every
   ServiceClient after the first if a file ever contained more than
   one. Current codegen emits one class per file so this was latent,
   but a template change would ship un-linted classes. Fix: collect
   every class-open match with matchAll, slice each class body with
   the next class's start as the boundary, and scan methods per-body
   so method-to-class binding stays correct even with multiple
   classes per file.
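The matchAll-and-slice fix can be sketched on a two-class fixture (regexes simplified from the real script):

```javascript
// Fixture: a file with two ServiceClient classes. A single classRe.exec
// would only ever see the first one.
const source = `
export class AlphaServiceClient {
  async getAlpha() {}
}
export class BetaServiceClient {
  async getBeta() {}
}`;

const classRe = /export class (\w+ServiceClient)/g;
const methodRe = /async (\w+)\s*\(/g;

// Collect every class-open match, then slice each class body using the
// next class's start as the boundary, so methods bind to the right class.
const opens = [...source.matchAll(classRe)];
const classMap = new Map();
for (const [i, m] of opens.entries()) {
  const start = m.index;
  const end = i + 1 < opens.length ? opens[i + 1].index : source.length;
  const body = source.slice(start, end);
  classMap.set(m[1], [...body.matchAll(methodRe)].map((mm) => mm[1]));
}
// classMap now holds both classes with their own methods — the
// single-exec version would have shipped BetaServiceClient un-linted.
```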

Verification:
- lint:premium-fetch clean (34 classes / 28 premium paths / 466 files
  — identical counts to pre-refactor, so no coverage regression).
- Negative test: revert src/services/economic/index.ts to
  globalThis.fetch → exit 1 with file:line, bound var name, and
  premium method list (getNationalDebt). Restore → clean.
- lint:rate-limit-policies still clean.

* fix(shipping/v2): re-add alertThreshold handler range guard (greptile nit 1 #3287)

Wire-layer buf.validate enforces 0..100, but direct handler invocation
(internal jobs, test harnesses, future transports) bypasses it. Cheap
invariant-at-the-boundary — rejects < 0 or > 100 with ValidationError
before the record is stored.

Tests: restored the rejects-out-of-range cases that were dropped when the
branch was (correctly) deleted as dead code on the previous commit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(lint): premium-fetch method-regex → TS AST (greptile nits 2+5 #3287)

loadClientClassMap:
  The method regex `async (\w+)\s*\([^)]*\)\s*:\s*Promise<[^>]+>\s*\{\s*let
  path = "..."` assumed (a) no nested `)` in arg types, (b) no nested `>`
  in the return type, (c) `let path = "..."` as the literal first statement.
  Any codegen template shift would silently drop methods with the lint still
  passing clean — the same silent-drift class #3287 just closed on the
  premium-paths side.

  Now walks the service_client.ts AST, matches `export class *ServiceClient`,
  iterates `MethodDeclaration` members, and reads the first
  `let path: string = '...'` variable statement as a StringLiteral. Tolerant
  to any reformatting of arg/return types or method shape.

findCalls scope-blindness:
  Added limitation comment — the walker matches `<varName>.<method>()`
  anywhere in the file without respecting scope. Two constructions in
  different function scopes sharing a var name merge their called-method
  sets. No current src/ file hits this; the lint errs cautiously (flags
  both instances). Keeping the walker simple until scope-aware binding
  is needed.

webhook-shared.ts:
  Inlined issue reference (#3288) so the breadcrumb resolves without
  bouncing through an MDX that isn't in the diff.

Verification:
- lint:premium-fetch clean — 34 classes / 28 premium paths / 489 files.
  Pre-refactor: 34 / 28 / 466. Class + path counts identical; file bump
  is from the main-branch rebase, not the refactor.
- Negative test: revert src/services/economic/index.ts premiumFetch →
  globalThis.fetch. Lint exits 1 at `src/services/economic/index.ts:64:7`
  with `premium method(s) called: getNationalDebt`. Restore → clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(lint): rate-limit OpenAPI regex → yaml parser (greptile nit 3 #3287)

Input side (ENDPOINT_RATE_POLICIES) was flipped to live `import()` in
4e79d029. Output side (OpenAPI routes) still regex-scraped top-level
`paths:` keys with `/^\s{4}(\/api\/[^\s:]+):/gm` — hard-coded 4-space
indent. Any YAML formatter change (2-space indent, flow style, line
folding) would silently drop routes and let policy-drift slip through
— same silent-drift class the input-side fix closed.

Now uses the `yaml` package (already a dep) to parse each
.openapi.yaml and reads `doc.paths` directly.

Verification:
- Clean: 6 policies / 189 routes (was 182 — yaml parser picks up a
  handful the regex missed, closing a silent coverage gap).
- Negative test: rename policy key back to /api/sanctions/v1/lookup-entity
  → exits 1 with the same incident-attributed remedy. Restore → clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(codegen): regenerate unified OpenAPI bundle for alert_threshold proto change

The shipping/v2 webhook alert_threshold field was flipped from `int32` to
`optional int32` with an expanded doc comment in f3339464. That comment
now surfaces in the unified docs/api/worldmonitor.openapi.yaml bundle
(introduced by #3341). Regenerated with sebuf v0.11.1 to pick it up.

No behaviour change — bundle-only documentation drift.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-24 18:00:41 +03:00


import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import { mkdtempSync, readFileSync, rmSync, writeFileSync } from 'node:fs';
import { basename, dirname, join, resolve } from 'node:path';
import { tmpdir } from 'node:os';
import { fileURLToPath, pathToFileURL } from 'node:url';
const __dirname = dirname(fileURLToPath(import.meta.url));
const root = resolve(__dirname, '..');
const REDIS_MODULE_URL = pathToFileURL(resolve(root, 'server/_shared/redis.ts')).href;

function jsonResponse(payload, ok = true) {
  return {
    ok,
    async json() {
      return payload;
    },
  };
}

function withEnv(overrides) {
  const previous = new Map();
  for (const [key, value] of Object.entries(overrides)) {
    previous.set(key, process.env[key]);
    if (value == null) {
      delete process.env[key];
    } else {
      process.env[key] = value;
    }
  }
  return () => {
    for (const [key, value] of previous.entries()) {
      if (value == null) {
        delete process.env[key];
      } else {
        process.env[key] = value;
      }
    }
  };
}

async function importRedisFresh() {
  return import(`${REDIS_MODULE_URL}?t=${Date.now()}-${Math.random().toString(16).slice(2)}`);
}

async function importPatchedTsModule(relPath, replacements) {
  const sourcePath = resolve(root, relPath);
  let source = readFileSync(sourcePath, 'utf-8');
  for (const [specifier, targetPath] of Object.entries(replacements)) {
    source = source.replaceAll(`'${specifier}'`, `'${pathToFileURL(targetPath).href}'`);
  }
  const tempDir = mkdtempSync(join(tmpdir(), 'wm-ts-module-'));
  const tempPath = join(tempDir, basename(sourcePath));
  writeFileSync(tempPath, source);
  const module = await import(`${pathToFileURL(tempPath).href}?t=${Date.now()}-${Math.random().toString(16).slice(2)}`);
  return {
    module,
    cleanup() {
      rmSync(tempDir, { recursive: true, force: true });
    },
  };
}
describe('redis caching behavior', { concurrency: 1 }, () => {
  it('coalesces concurrent misses into one upstream fetcher execution', async () => {
    const redis = await importRedisFresh();
    const restoreEnv = withEnv({
      UPSTASH_REDIS_REST_URL: 'https://redis.test',
      UPSTASH_REDIS_REST_TOKEN: 'token',
      VERCEL_ENV: undefined,
      VERCEL_GIT_COMMIT_SHA: undefined,
    });
    const originalFetch = globalThis.fetch;
    let getCalls = 0;
    let setCalls = 0;
    globalThis.fetch = async (url) => {
      const raw = String(url);
      if (raw.includes('/get/')) {
        getCalls += 1;
        return jsonResponse({ result: undefined });
      }
      if (raw.includes('/set/')) {
        setCalls += 1;
        return jsonResponse({ result: 'OK' });
      }
      throw new Error(`Unexpected fetch URL: ${raw}`);
    };
    try {
      let fetcherCalls = 0;
      const fetcher = async () => {
        fetcherCalls += 1;
        await new Promise((resolvePromise) => setTimeout(resolvePromise, 5));
        return { value: 42 };
      };
      const [a, b, c] = await Promise.all([
        redis.cachedFetchJson('military:test:key', 60, fetcher),
        redis.cachedFetchJson('military:test:key', 60, fetcher),
        redis.cachedFetchJson('military:test:key', 60, fetcher),
      ]);
      assert.equal(fetcherCalls, 1, 'concurrent callers should share a single miss fetch');
      assert.deepEqual(a, { value: 42 });
      assert.deepEqual(b, { value: 42 });
      assert.deepEqual(c, { value: 42 });
      assert.equal(getCalls, 3, 'each caller should still attempt one cache read');
      assert.ok(setCalls >= 1, 'at least one cache write should happen after coalesced fetch (data + optional seed-meta)');
    } finally {
      globalThis.fetch = originalFetch;
      restoreEnv();
    }
  });

  it('parses pipeline results and skips malformed entries', async () => {
    const redis = await importRedisFresh();
    const restoreEnv = withEnv({
      UPSTASH_REDIS_REST_URL: 'https://redis.test',
      UPSTASH_REDIS_REST_TOKEN: 'token',
      VERCEL_ENV: undefined,
      VERCEL_GIT_COMMIT_SHA: undefined,
    });
    const originalFetch = globalThis.fetch;
    let pipelineCalls = 0;
    globalThis.fetch = async (_url, init = {}) => {
      pipelineCalls += 1;
      const pipeline = JSON.parse(String(init.body));
      assert.equal(pipeline.length, 3);
      assert.deepEqual(pipeline.map((cmd) => cmd[0]), ['GET', 'GET', 'GET']);
      return jsonResponse([
        { result: JSON.stringify({ details: { id: 'a1' } }) },
        { result: '{ malformed json' },
        { result: JSON.stringify({ details: { id: 'c3' } }) },
      ]);
    };
    try {
      const map = await redis.getCachedJsonBatch(['k1', 'k2', 'k3']);
      assert.equal(pipelineCalls, 1, 'batch lookup should use one pipeline round-trip');
      assert.deepEqual(map.get('k1'), { details: { id: 'a1' } });
      assert.equal(map.has('k2'), false, 'malformed JSON entry should be skipped');
      assert.deepEqual(map.get('k3'), { details: { id: 'c3' } });
    } finally {
      globalThis.fetch = originalFetch;
      restoreEnv();
    }
  });
});
describe('cachedFetchJsonWithMeta source labeling', { concurrency: 1 }, () => {
  it('reports source=cache on Redis hit', async () => {
    const redis = await importRedisFresh();
    const restoreEnv = withEnv({
      UPSTASH_REDIS_REST_URL: 'https://redis.test',
      UPSTASH_REDIS_REST_TOKEN: 'token',
      VERCEL_ENV: undefined,
      VERCEL_GIT_COMMIT_SHA: undefined,
    });
    const originalFetch = globalThis.fetch;
    globalThis.fetch = async (url) => {
      const raw = String(url);
      if (raw.includes('/get/')) {
        return jsonResponse({ result: JSON.stringify({ value: 'cached-data' }) });
      }
      throw new Error(`Unexpected fetch URL: ${raw}`);
    };
    try {
      let fetcherCalled = false;
      const { data, source } = await redis.cachedFetchJsonWithMeta('meta:test:hit', 60, async () => {
        fetcherCalled = true;
        return { value: 'fresh-data' };
      });
      assert.equal(source, 'cache', 'should report source=cache on Redis hit');
      assert.deepEqual(data, { value: 'cached-data' });
      assert.equal(fetcherCalled, false, 'fetcher should not run on cache hit');
    } finally {
      globalThis.fetch = originalFetch;
      restoreEnv();
    }
  });

  it('reports source=fresh on cache miss', async () => {
    const redis = await importRedisFresh();
    const restoreEnv = withEnv({
      UPSTASH_REDIS_REST_URL: 'https://redis.test',
      UPSTASH_REDIS_REST_TOKEN: 'token',
      VERCEL_ENV: undefined,
      VERCEL_GIT_COMMIT_SHA: undefined,
    });
    const originalFetch = globalThis.fetch;
    globalThis.fetch = async (url) => {
      const raw = String(url);
      if (raw.includes('/get/')) return jsonResponse({ result: undefined });
      if (raw.includes('/set/')) return jsonResponse({ result: 'OK' });
      throw new Error(`Unexpected fetch URL: ${raw}`);
    };
    try {
      const { data, source } = await redis.cachedFetchJsonWithMeta('meta:test:miss', 60, async () => {
        return { value: 'fresh-data' };
      });
      assert.equal(source, 'fresh', 'should report source=fresh on cache miss');
      assert.deepEqual(data, { value: 'fresh-data' });
    } finally {
      globalThis.fetch = originalFetch;
      restoreEnv();
    }
  });

  it('reports source=fresh for ALL coalesced concurrent callers', async () => {
    const redis = await importRedisFresh();
    const restoreEnv = withEnv({
      UPSTASH_REDIS_REST_URL: 'https://redis.test',
      UPSTASH_REDIS_REST_TOKEN: 'token',
      VERCEL_ENV: undefined,
      VERCEL_GIT_COMMIT_SHA: undefined,
    });
    const originalFetch = globalThis.fetch;
    globalThis.fetch = async (url) => {
      const raw = String(url);
      if (raw.includes('/get/')) return jsonResponse({ result: undefined });
      if (raw.includes('/set/')) return jsonResponse({ result: 'OK' });
      throw new Error(`Unexpected fetch URL: ${raw}`);
    };
    try {
      let fetcherCalls = 0;
      const fetcher = async () => {
        fetcherCalls += 1;
        await new Promise((r) => setTimeout(r, 10));
        return { value: 'coalesced' };
      };
      const [a, b, c] = await Promise.all([
        redis.cachedFetchJsonWithMeta('meta:test:coalesce', 60, fetcher),
        redis.cachedFetchJsonWithMeta('meta:test:coalesce', 60, fetcher),
        redis.cachedFetchJsonWithMeta('meta:test:coalesce', 60, fetcher),
      ]);
      assert.equal(fetcherCalls, 1, 'only one fetcher should run');
      assert.equal(a.source, 'fresh', 'leader should report fresh');
      assert.equal(b.source, 'fresh', 'follower 1 should report fresh (not cache)');
      assert.equal(c.source, 'fresh', 'follower 2 should report fresh (not cache)');
      assert.deepEqual(a.data, { value: 'coalesced' });
      assert.deepEqual(b.data, { value: 'coalesced' });
      assert.deepEqual(c.data, { value: 'coalesced' });
    } finally {
      globalThis.fetch = originalFetch;
      restoreEnv();
    }
  });

  it('TOCTOU: reports cache when Redis is populated between concurrent reads', async () => {
    const redis = await importRedisFresh();
    const restoreEnv = withEnv({
      UPSTASH_REDIS_REST_URL: 'https://redis.test',
      UPSTASH_REDIS_REST_TOKEN: 'token',
      VERCEL_ENV: undefined,
      VERCEL_GIT_COMMIT_SHA: undefined,
    });
    const originalFetch = globalThis.fetch;
    // First call: cache miss. Second call (from a "different instance"): cache hit.
    let getCalls = 0;
    globalThis.fetch = async (url) => {
      const raw = String(url);
      if (raw.includes('/get/')) {
        getCalls += 1;
        if (getCalls === 1) return jsonResponse({ result: undefined });
        // Simulate another instance populating cache between calls
        return jsonResponse({ result: JSON.stringify({ value: 'from-other-instance' }) });
      }
      if (raw.includes('/set/')) return jsonResponse({ result: 'OK' });
      throw new Error(`Unexpected fetch URL: ${raw}`);
    };
    try {
      // First call: miss → fetcher runs → fresh
      const first = await redis.cachedFetchJsonWithMeta('meta:test:toctou', 60, async () => {
        return { value: 'fetched' };
      });
      assert.equal(first.source, 'fresh');
      assert.deepEqual(first.data, { value: 'fetched' });
      // Second call (fresh module import to clear inflight map): cache hit from other instance
      const redis2 = await importRedisFresh();
      const second = await redis2.cachedFetchJsonWithMeta('meta:test:toctou', 60, async () => {
        throw new Error('fetcher should not run on cache hit');
      });
      assert.equal(second.source, 'cache', 'should report cache when Redis has data');
      assert.deepEqual(second.data, { value: 'from-other-instance' });
    } finally {
      globalThis.fetch = originalFetch;
      restoreEnv();
    }
  });
});
describe('negative-result caching', { concurrency: 1 }, () => {
  it('caches sentinel on null fetcher result and suppresses subsequent upstream calls', async () => {
    const redis = await importRedisFresh();
    const restoreEnv = withEnv({
      UPSTASH_REDIS_REST_URL: 'https://redis.test',
      UPSTASH_REDIS_REST_TOKEN: 'token',
      VERCEL_ENV: undefined,
      VERCEL_GIT_COMMIT_SHA: undefined,
    });
    const originalFetch = globalThis.fetch;
    const store = new Map();
    globalThis.fetch = async (url) => {
      const raw = String(url);
      if (raw.includes('/get/')) {
        const key = decodeURIComponent(raw.split('/get/').pop() || '');
        const val = store.get(key);
        return jsonResponse({ result: val ?? undefined });
      }
      if (raw.includes('/set/')) {
        const parts = raw.split('/set/').pop().split('/');
        const key = decodeURIComponent(parts[0]);
        const value = decodeURIComponent(parts[1]);
        store.set(key, value);
        return jsonResponse({ result: 'OK' });
      }
      throw new Error(`Unexpected fetch URL: ${raw}`);
    };
    try {
      let fetcherCalls = 0;
      const fetcher = async () => {
        fetcherCalls += 1;
        return null;
      };
      const first = await redis.cachedFetchJson('neg:test:suppress', 300, fetcher);
      assert.equal(first, null, 'first call should return null');
      assert.equal(fetcherCalls, 1, 'fetcher should run on first call');
      const redis2 = await importRedisFresh();
      const second = await redis2.cachedFetchJson('neg:test:suppress', 300, fetcher);
      assert.equal(second, null, 'second call should return null from sentinel');
      assert.equal(fetcherCalls, 1, 'fetcher should NOT run again — sentinel suppresses');
    } finally {
      globalThis.fetch = originalFetch;
      restoreEnv();
    }
  });

  it('cachedFetchJsonWithMeta returns data:null source:cache on sentinel hit', async () => {
    const redis = await importRedisFresh();
    const restoreEnv = withEnv({
      UPSTASH_REDIS_REST_URL: 'https://redis.test',
      UPSTASH_REDIS_REST_TOKEN: 'token',
      VERCEL_ENV: undefined,
      VERCEL_GIT_COMMIT_SHA: undefined,
    });
    const originalFetch = globalThis.fetch;
    const store = new Map();
    globalThis.fetch = async (url) => {
      const raw = String(url);
      if (raw.includes('/get/')) {
        const key = decodeURIComponent(raw.split('/get/').pop() || '');
        const val = store.get(key);
        return jsonResponse({ result: val ?? undefined });
      }
      if (raw.includes('/set/')) {
        const parts = raw.split('/set/').pop().split('/');
        const key = decodeURIComponent(parts[0]);
        const value = decodeURIComponent(parts[1]);
        store.set(key, value);
        return jsonResponse({ result: 'OK' });
      }
      throw new Error(`Unexpected fetch URL: ${raw}`);
    };
    try {
      const first = await redis.cachedFetchJsonWithMeta('neg:meta:sentinel', 300, async () => null);
      assert.equal(first.data, null);
      assert.equal(first.source, 'fresh', 'first null result is fresh');
      const redis2 = await importRedisFresh();
      const second = await redis2.cachedFetchJsonWithMeta('neg:meta:sentinel', 300, async () => {
        throw new Error('fetcher should not run on sentinel hit');
      });
      assert.equal(second.data, null, 'sentinel should resolve to null data, not the sentinel string');
      assert.equal(second.source, 'cache', 'sentinel hit should report source=cache');
    } finally {
      globalThis.fetch = originalFetch;
      restoreEnv();
    }
  });

  it('does not cache sentinel when fetcher throws', async () => {
    const redis = await importRedisFresh();
    const restoreEnv = withEnv({
      UPSTASH_REDIS_REST_URL: 'https://redis.test',
      UPSTASH_REDIS_REST_TOKEN: 'token',
      VERCEL_ENV: undefined,
      VERCEL_GIT_COMMIT_SHA: undefined,
    });
    const originalFetch = globalThis.fetch;
    let setCalls = 0;
    globalThis.fetch = async (url) => {
      const raw = String(url);
      if (raw.includes('/get/')) return jsonResponse({ result: undefined });
      if (raw.includes('/set/')) {
        setCalls += 1;
        return jsonResponse({ result: 'OK' });
      }
      throw new Error(`Unexpected fetch URL: ${raw}`);
    };
    try {
      let fetcherCalls = 0;
      const throwingFetcher = async () => {
        fetcherCalls += 1;
        throw new Error('upstream ETIMEDOUT');
      };
      await assert.rejects(() => redis.cachedFetchJson('neg:test:throw', 300, throwingFetcher));
      assert.equal(fetcherCalls, 1);
      assert.equal(setCalls, 0, 'no sentinel should be cached when fetcher throws');
      const redis2 = await importRedisFresh();
      await assert.rejects(() => redis2.cachedFetchJson('neg:test:throw', 300, throwingFetcher));
      assert.equal(fetcherCalls, 2, 'fetcher should run again after a thrown error (no sentinel)');
    } finally {
      globalThis.fetch = originalFetch;
      restoreEnv();
    }
  });
});
describe('theater posture caching behavior', { concurrency: 1 }, () => {
async function importTheaterPosture() {
return importPatchedTsModule('server/worldmonitor/military/v1/get-theater-posture.ts', {
'./_shared': resolve(root, 'server/worldmonitor/military/v1/_shared.ts'),
'../../../_shared/constants': resolve(root, 'server/_shared/constants.ts'),
'../../../_shared/redis': resolve(root, 'server/_shared/redis.ts'),
});
}
function mockOpenSkyResponse() {
return jsonResponse({
states: [
['ae1234', 'RCH001', null, null, null, 50.0, 36.0, 30000, false, 400, 90],
['ae5678', 'DUKE02', null, null, null, 51.0, 35.0, 25000, false, 350, 180],
],
});
}
it('reads live data from Redis without making upstream calls', async () => {
const { module, cleanup } = await importTheaterPosture();
const restoreEnv = withEnv({
UPSTASH_REDIS_REST_URL: 'https://redis.test',
UPSTASH_REDIS_REST_TOKEN: 'token',
});
const originalFetch = globalThis.fetch;
const liveData = { theaters: [{ theater: 'live-test', postureLevel: 'elevated', activeFlights: 5, trackedVessels: 0, activeOperations: [], assessedAt: Date.now() }] };
let openskyFetchCount = 0;
globalThis.fetch = async (url) => {
const raw = String(url);
if (raw.includes('/get/')) {
const key = decodeURIComponent(raw.split('/get/').pop() || '');
if (key === 'theater-posture:sebuf:v1') {
return jsonResponse({ result: JSON.stringify(liveData) });
}
return jsonResponse({ result: undefined });
}
if (raw.includes('opensky-network.org') || raw.includes('wingbits.com')) {
openskyFetchCount += 1;
}
return jsonResponse({}, false);
};
try {
const result = await module.getTheaterPosture({}, {});
assert.equal(openskyFetchCount, 0, 'must not call upstream APIs (Redis-read-only)');
assert.deepEqual(result, liveData, 'should return live Redis data');
} finally {
cleanup();
globalThis.fetch = originalFetch;
restoreEnv();
}
});
it('falls back to stale/backup when both upstreams are down', async () => {
const { module, cleanup } = await importTheaterPosture();
const restoreEnv = withEnv({
LOCAL_API_MODE: 'sidecar',
WS_RELAY_URL: undefined,
WINGBITS_API_KEY: undefined,
UPSTASH_REDIS_REST_URL: 'https://redis.test',
UPSTASH_REDIS_REST_TOKEN: 'token',
VERCEL_ENV: undefined,
VERCEL_GIT_COMMIT_SHA: undefined,
});
const originalFetch = globalThis.fetch;
const staleData = { theaters: [{ theater: 'stale-test', postureLevel: 'normal', activeFlights: 1, trackedVessels: 0, activeOperations: [], assessedAt: 1 }] };
globalThis.fetch = async (url) => {
const raw = String(url);
if (raw.includes('/get/')) {
const key = decodeURIComponent(raw.split('/get/').pop() || '');
if (key === 'theater-posture:sebuf:v1') {
return jsonResponse({ result: undefined });
}
if (key === 'theater_posture:sebuf:stale:v1') {
return jsonResponse({ result: JSON.stringify(staleData) });
}
return jsonResponse({ result: undefined });
}
if (raw.includes('/set/')) {
return jsonResponse({ result: 'OK' });
}
if (raw.includes('opensky-network.org')) {
throw new Error('OpenSky down');
}
return jsonResponse({}, false);
};
try {
const result = await module.getTheaterPosture({}, {});
assert.deepEqual(result, staleData, 'should return stale cache when upstreams fail');
} finally {
cleanup();
globalThis.fetch = originalFetch;
restoreEnv();
}
});
it('returns empty theaters when all tiers exhausted', async () => {
const { module, cleanup } = await importTheaterPosture();
const restoreEnv = withEnv({
LOCAL_API_MODE: 'sidecar',
WS_RELAY_URL: undefined,
WINGBITS_API_KEY: undefined,
UPSTASH_REDIS_REST_URL: 'https://redis.test',
UPSTASH_REDIS_REST_TOKEN: 'token',
VERCEL_ENV: undefined,
VERCEL_GIT_COMMIT_SHA: undefined,
});
const originalFetch = globalThis.fetch;
globalThis.fetch = async (url) => {
const raw = String(url);
if (raw.includes('/get/')) {
// Every key misses — Upstash reports a miss as {"result":null}.
return jsonResponse({ result: null });
}
if (raw.includes('/set/')) {
return jsonResponse({ result: 'OK' });
}
if (raw.includes('opensky-network.org')) {
throw new Error('OpenSky down');
}
return jsonResponse({}, false);
};
try {
const result = await module.getTheaterPosture({}, {});
assert.deepEqual(result, { theaters: [] }, 'should return empty when all tiers exhausted');
} finally {
cleanup();
globalThis.fetch = originalFetch;
restoreEnv();
}
});
it('does not write to Redis (read-only handler)', async () => {
const { module, cleanup } = await importTheaterPosture();
const restoreEnv = withEnv({
UPSTASH_REDIS_REST_URL: 'https://redis.test',
UPSTASH_REDIS_REST_TOKEN: 'token',
});
const originalFetch = globalThis.fetch;
const cacheWrites = [];
globalThis.fetch = async (url) => {
const raw = String(url);
if (raw.includes('/get/')) {
return jsonResponse({ result: null });
}
if (raw.includes('/set/') || raw.includes('/pipeline')) {
cacheWrites.push(raw);
return jsonResponse({ result: 'OK' });
}
return jsonResponse({}, false);
};
try {
await module.getTheaterPosture({}, {});
assert.equal(cacheWrites.length, 0, 'handler must not write to Redis (read-only)');
} finally {
cleanup();
globalThis.fetch = originalFetch;
restoreEnv();
}
});
});
describe('country intel brief caching behavior', { concurrency: 1 }, () => {
async function importCountryIntelBrief() {
return importPatchedTsModule('server/worldmonitor/intelligence/v1/get-country-intel-brief.ts', {
'./_shared': resolve(root, 'server/worldmonitor/intelligence/v1/_shared.ts'),
'../../../_shared/constants': resolve(root, 'server/_shared/constants.ts'),
'../../../_shared/redis': resolve(root, 'server/_shared/redis.ts'),
'../../../_shared/llm-health': resolve(root, 'tests/helpers/llm-health-stub.ts'),
'../../../_shared/llm': resolve(root, 'server/_shared/llm.ts'),
'../../../_shared/hash': resolve(root, 'server/_shared/hash.ts'),
'../../../_shared/premium-check': resolve(root, 'tests/helpers/premium-check-stub.ts'),
'../../../_shared/llm-sanitize.js': resolve(root, 'server/_shared/llm-sanitize.js'),
'../../../_shared/cache-keys': resolve(root, 'server/_shared/cache-keys.ts'),
});
}
// Extract and decode the Redis key from an Upstash REST URL
// shaped like /get/<key> or /set/<key>/<value>.
function parseRedisKey(rawUrl, op) {
const marker = `/${op}/`;
const idx = rawUrl.indexOf(marker);
if (idx === -1) return '';
return decodeURIComponent(rawUrl.slice(idx + marker.length).split('/')[0] || '');
}
function makeCtx(url) {
return { request: new Request(url) };
}
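// Quick sanity coverage for the helper above — a minimal sketch (not part of
// the original suite) exercising parseRedisKey against hand-built
// Upstash-style REST URLs, reusing the file's existing `it`/`assert` imports.
it('parseRedisKey extracts and decodes keys from Upstash-style URLs', () => {
assert.equal(parseRedisKey('https://redis.test/get/ci-sebuf%3Av3%3AIL%3Abase', 'get'), 'ci-sebuf:v3:IL:base');
assert.equal(parseRedisKey('https://redis.test/set/some-key/value', 'set'), 'some-key');
assert.equal(parseRedisKey('https://redis.test/del/some-key', 'get'), '', 'non-matching op yields empty string');
});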
it('uses distinct cache keys for distinct context snapshots', async () => {
const { module, cleanup } = await importCountryIntelBrief();
const restoreEnv = withEnv({
GROQ_API_KEY: 'test-key',
UPSTASH_REDIS_REST_URL: 'https://redis.test',
UPSTASH_REDIS_REST_TOKEN: 'token',
VERCEL_ENV: undefined,
VERCEL_GIT_COMMIT_SHA: undefined,
});
const originalFetch = globalThis.fetch;
const store = new Map();
const setKeys = [];
const userPrompts = [];
let groqCalls = 0;
globalThis.fetch = async (url, init = {}) => {
const raw = String(url);
if (raw === 'https://api.groq.com') {
return jsonResponse({});
}
if (raw.includes('/get/')) {
const key = parseRedisKey(raw, 'get');
// Upstash reports a miss as {"result":null}; mirror that shape.
return jsonResponse({ result: store.get(key) ?? null });
}
if (raw.includes('/set/')) {
const key = parseRedisKey(raw, 'set');
// Upstash REST set URLs are shaped /set/<key>/<value>.
const encodedValue = raw.slice(raw.indexOf('/set/') + 5).split('/')[1] || '';
store.set(key, decodeURIComponent(encodedValue));
if (!key.startsWith('seed-meta:')) setKeys.push(key);
return jsonResponse({ result: 'OK' });
}
if (raw.includes('api.groq.com/openai/v1/chat/completions')) {
groqCalls += 1;
const body = JSON.parse(String(init.body || '{}'));
userPrompts.push(body.messages?.[1]?.content || '');
return jsonResponse({ choices: [{ message: { content: `brief-${groqCalls}` } }] });
}
throw new Error(`Unexpected fetch URL: ${raw}`);
};
try {
const req = { countryCode: 'IL' };
const alpha = await module.getCountryIntelBrief(makeCtx('https://example.com/api/intelligence/v1/get-country-intel-brief?country_code=IL&context=alpha'), req);
const beta = await module.getCountryIntelBrief(makeCtx('https://example.com/api/intelligence/v1/get-country-intel-brief?country_code=IL&context=beta'), req);
const alphaCached = await module.getCountryIntelBrief(makeCtx('https://example.com/api/intelligence/v1/get-country-intel-brief?country_code=IL&context=alpha'), req);
assert.equal(groqCalls, 2, 'different contexts should not share one cache entry');
assert.equal(setKeys.length, 2, 'one cache write per unique context');
assert.notEqual(setKeys[0], setKeys[1], 'context hash should differentiate cache keys');
assert.ok(setKeys[0]?.startsWith('ci-sebuf:v3:IL:'), 'cache key should use v3 country-intel namespace');
assert.ok(setKeys[1]?.startsWith('ci-sebuf:v3:IL:'), 'cache key should use v3 country-intel namespace');
assert.equal(alpha.brief, 'brief-1');
assert.equal(beta.brief, 'brief-2');
assert.equal(alphaCached.brief, 'brief-1', 'same context should hit cache');
assert.match(userPrompts[0], /Context snapshot:\s*alpha/);
assert.match(userPrompts[1], /Context snapshot:\s*beta/);
} finally {
cleanup();
globalThis.fetch = originalFetch;
restoreEnv();
}
});
it('uses base cache key and prompt when context is missing or blank', async () => {
const { module, cleanup } = await importCountryIntelBrief();
const restoreEnv = withEnv({
GROQ_API_KEY: 'test-key',
UPSTASH_REDIS_REST_URL: 'https://redis.test',
UPSTASH_REDIS_REST_TOKEN: 'token',
VERCEL_ENV: undefined,
VERCEL_GIT_COMMIT_SHA: undefined,
});
const originalFetch = globalThis.fetch;
const store = new Map();
const setKeys = [];
const userPrompts = [];
let groqCalls = 0;
globalThis.fetch = async (url, init = {}) => {
const raw = String(url);
if (raw === 'https://api.groq.com') {
return jsonResponse({});
}
if (raw.includes('/get/')) {
const key = parseRedisKey(raw, 'get');
// Upstash reports a miss as {"result":null}; mirror that shape.
return jsonResponse({ result: store.get(key) ?? null });
}
if (raw.includes('/set/')) {
const key = parseRedisKey(raw, 'set');
// Upstash REST set URLs are shaped /set/<key>/<value>.
const encodedValue = raw.slice(raw.indexOf('/set/') + 5).split('/')[1] || '';
store.set(key, decodeURIComponent(encodedValue));
if (!key.startsWith('seed-meta:')) setKeys.push(key);
return jsonResponse({ result: 'OK' });
}
if (raw.includes('api.groq.com/openai/v1/chat/completions')) {
groqCalls += 1;
const body = JSON.parse(String(init.body || '{}'));
userPrompts.push(body.messages?.[1]?.content || '');
return jsonResponse({ choices: [{ message: { content: 'base-brief' } }] });
}
throw new Error(`Unexpected fetch URL: ${raw}`);
};
try {
const req = { countryCode: 'US' };
const first = await module.getCountryIntelBrief(makeCtx('https://example.com/api/intelligence/v1/get-country-intel-brief?country_code=US'), req);
const second = await module.getCountryIntelBrief(makeCtx('https://example.com/api/intelligence/v1/get-country-intel-brief?country_code=US&context=%20%20%20'), req);
assert.equal(groqCalls, 1, 'blank context should reuse base cache entry');
assert.equal(setKeys.length, 1);
assert.ok(setKeys[0]?.endsWith(':base'), 'missing context should use :base cache suffix');
assert.ok(!userPrompts[0]?.includes('Context snapshot:'), 'prompt should omit context block when absent');
assert.equal(first.brief, 'base-brief');
assert.equal(second.brief, 'base-brief');
} finally {
cleanup();
globalThis.fetch = originalFetch;
restoreEnv();
}
});
});
describe('military flights bbox behavior', { concurrency: 1 }, () => {
async function importListMilitaryFlights() {
return importPatchedTsModule('server/worldmonitor/military/v1/list-military-flights.ts', {
'./_shared': resolve(root, 'server/worldmonitor/military/v1/_shared.ts'),
'../../../_shared/constants': resolve(root, 'server/_shared/constants.ts'),
'../../../_shared/redis': resolve(root, 'server/_shared/redis.ts'),
'../../../_shared/relay': resolve(root, 'server/_shared/relay.ts'),
'../../../_shared/response-headers': resolve(root, 'server/_shared/response-headers.ts'),
});
}
const request = {
swLat: 10,
swLon: 10,
neLat: 11,
neLon: 11,
};
it('fetches expanded quantized bbox but returns only flights inside the requested bbox', async () => {
const { module, cleanup } = await importListMilitaryFlights();
const restoreEnv = withEnv({
LOCAL_API_MODE: 'sidecar',
WS_RELAY_URL: undefined,
UPSTASH_REDIS_REST_URL: undefined,
UPSTASH_REDIS_REST_TOKEN: undefined,
});
const originalFetch = globalThis.fetch;
const fetchUrls = [];
globalThis.fetch = async (url) => {
const raw = String(url);
fetchUrls.push(raw);
if (!raw.includes('opensky-network.org/api/states/all')) {
throw new Error(`Unexpected fetch URL: ${raw}`);
}
return jsonResponse({
// OpenSky state vectors: index 5 is longitude, index 6 is latitude.
states: [
['in-bounds', 'RCH123', null, null, null, 10.5, 10.5, 20000, false, 300, 90],
['south-out', 'RCH124', null, null, null, 10.4, 9.7, 22000, false, 280, 95],
['east-out', 'RCH125', null, null, null, 11.3, 10.6, 21000, false, 290, 92],
],
});
};
try {
const result = await module.listMilitaryFlights({}, request);
assert.deepEqual(
result.flights.map((flight) => flight.id),
['IN-BOUNDS'],
'response should not include out-of-viewport flights (hex_code canonical form is uppercase)',
);
assert.equal(fetchUrls.length, 1);
const params = new URL(fetchUrls[0]).searchParams;
assert.equal(params.get('lamin'), '9.5');
assert.equal(params.get('lamax'), '11.5');
assert.equal(params.get('lomin'), '9.5');
assert.equal(params.get('lomax'), '11.5');
} finally {
cleanup();
globalThis.fetch = originalFetch;
restoreEnv();
}
});
it('filters cached quantized-cell results back to the requested bbox', async () => {
const { module, cleanup } = await importListMilitaryFlights();
const restoreEnv = withEnv({
UPSTASH_REDIS_REST_URL: 'https://redis.test',
UPSTASH_REDIS_REST_TOKEN: 'token',
LOCAL_API_MODE: undefined,
WS_RELAY_URL: undefined,
VERCEL_ENV: undefined,
VERCEL_GIT_COMMIT_SHA: undefined,
});
const originalFetch = globalThis.fetch;
let openskyCalls = 0;
let redisGetCalls = 0;
globalThis.fetch = async (url) => {
const raw = String(url);
if (raw.includes('/get/')) {
redisGetCalls += 1;
return jsonResponse({
result: JSON.stringify({
flights: [
{ id: 'cache-in', location: { latitude: 10.2, longitude: 10.2 } },
{ id: 'cache-out', location: { latitude: 9.8, longitude: 10.2 } },
],
clusters: [],
}),
});
}
if (raw.includes('opensky-network.org/api/states/all')) {
// Record the attempted upstream call before failing, so the
// openskyCalls assertion below can surface a cache-bypass regression.
openskyCalls += 1;
}
throw new Error(`Unexpected fetch URL: ${raw}`);
};
try {
const result = await module.listMilitaryFlights({}, request);
assert.equal(redisGetCalls, 1, 'handler should read quantized cache first');
assert.equal(openskyCalls, 0, 'cache hit should avoid upstream fetch');
assert.deepEqual(
result.flights.map((flight) => flight.id),
['cache-in'],
'cached quantized-cell payload must be re-filtered to request bbox',
);
} finally {
cleanup();
globalThis.fetch = originalFetch;
restoreEnv();
}
});
// #3277 — fetchStaleFallback NEG_TTL parity with the legacy
// /api/military-flights handler. Without the negative cache, a sustained
// relay+seed outage would hammer Redis with a stale-key read on every request.
it('suppresses stale Redis read for 30s after a stale-key miss (NEG_TTL parity)', async () => {
const { module, cleanup } = await importListMilitaryFlights();
module._resetStaleNegativeCacheForTests();
const restoreEnv = withEnv({
UPSTASH_REDIS_REST_URL: 'https://redis.test',
UPSTASH_REDIS_REST_TOKEN: 'token',
LOCAL_API_MODE: undefined,
WS_RELAY_URL: undefined,
VERCEL_ENV: undefined,
VERCEL_GIT_COMMIT_SHA: undefined,
});
const originalFetch = globalThis.fetch;
const staleGetCalls = [];
globalThis.fetch = async (url) => {
const raw = String(url);
if (raw.includes('/get/')) {
if (raw.includes('military%3Aflights%3Astale%3Av1')) {
staleGetCalls.push(raw);
}
// Both keys empty — drives cachedFetchJson to call the fetcher
// (which returns null because no relay) and then the handler falls
// through to fetchStaleFallback (which returns null because stale
// is also empty → arms the negative cache).
return jsonResponse({ result: null });
}
throw new Error(`Unexpected fetch URL: ${raw}`);
};
try {
const ctx = { request: new Request('https://wm.test/api/military/v1/list-military-flights') };
// Call 1 — live empty + stale empty. Stale key MUST be read once,
// and the negative cache MUST be armed for the next 30s.
const r1 = await module.listMilitaryFlights(ctx, request);
assert.deepEqual(r1.flights, [], 'no live, no stale → empty response');
assert.equal(staleGetCalls.length, 1, 'first call reads stale key once');
// Call 2 — within the 30s negative-cache window. Live cache may be
// re-checked but the stale key MUST NOT be re-read.
staleGetCalls.length = 0;
const r2 = await module.listMilitaryFlights(ctx, request);
assert.deepEqual(r2.flights, [], 'still empty during negative-cache window');
assert.equal(
staleGetCalls.length,
0,
'second call within NEG_TTL window must not re-read stale key',
);
// Reset the negative cache (simulates wall-clock advance past 30s) →
// stale read should resume.
module._resetStaleNegativeCacheForTests();
const r3 = await module.listMilitaryFlights(ctx, request);
assert.deepEqual(r3.flights, []);
assert.equal(
staleGetCalls.length,
1,
'after negative-cache reset, stale key is re-read',
);
} finally {
cleanup();
globalThis.fetch = originalFetch;
restoreEnv();
}
});
});