worldmonitor/server
Elie Habib da1fa3367b fix(resilience-ranking): chunked warm SET, always-on rebuild, truthful meta (Slice B) (#3124)
* fix(resilience-ranking): chunked warm SET, always-on rebuild, truthful meta

Slice B follow-up to PR #3121. Three coupled production failures observed:

1. Per-country score persistence works (Slice A), but the single 222-SET
   pipeline body (~600KB) exceeds REDIS_PIPELINE_TIMEOUT_MS (5s) on Vercel
   Edge. runRedisPipeline returns []; persistence guard correctly returns
   empty; coverage = 0/222 < 75%; ranking publish silently dropped. Live
   Railway log: "Ranking: 0 ranked, 222 greyed out" → "Rebuilt … with 222
   countries (bulk-call race left ranking:v9 null)" — second call only
   succeeded because Upstash had finally caught up between attempts.

2. The seeder's probe + rebuild block lives inside `if (missing > 0)`. When
   per-country scores survive a cron tick (TTL 6h, cron every 6h), missing=0
   and the rebuild path is skipped. Ranking aggregate then expires alone and
   is never refreshed until scores also expire — multi-hour gaps where
   `resilience:ranking:v9` is gone while seed-meta still claims freshness.

3. `writeRankingSeedMeta` fires whenever finalWarmed > 0, regardless of
   whether the ranking key is actually present. Health endpoint sees fresh
   meta + missing data → EMPTY_ON_DEMAND with a misleading seedAge.

Fixes:
- _shared.ts: split the warm pipeline's SETs into chunks of SET_BATCH=30
  commands so each pipeline body fits well under the timeout. Pad
  missing-batch results
  with empty entries so the per-command alignment stays correct (failed
  batches stay excluded from `warmed`, no proof = no claim).
- seed-resilience-scores.mjs: extract `ensureRankingPresent` helper, call
  it from BOTH the missing>0 and missing===0 branches so the ranking gets
  refreshed every cron. Add a post-rebuild STRLEN verification — rebuild
  HTTP can return 200 with a payload but still skip the SET (coverage gate,
  pipeline failure).
- main(): only writeRankingSeedMeta when result.rankingPresent === true.
  Otherwise log and let the next cron retry.
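
The chunk-and-pad behavior, sketched in isolation (runChunkedPipeline and
runPipeline are illustrative stand-ins for the _shared.ts internals;
SET_BATCH matches the constant above; batches still run sequentially here,
as in this commit):

```javascript
// Sketch: split N SET commands into batches of SET_BATCH, run each batch
// as its own pipeline, and pad failed batches with empty entries so the
// result array stays index-aligned with the command array.
const SET_BATCH = 30;

function chunk(commands, size) {
  const out = [];
  for (let i = 0; i < commands.length; i += size) {
    out.push(commands.slice(i, i + size));
  }
  return out;
}

// runPipeline stands in for the real runRedisPipeline; it returns [] on
// timeout/failure, which is exactly the case the padding guards against.
async function runChunkedPipeline(commands, runPipeline) {
  const batches = chunk(commands, SET_BATCH);
  const results = [];
  for (const batch of batches) {
    const res = await runPipeline(batch);
    if (Array.isArray(res) && res.length === batch.length) {
      results.push(...res);
    } else {
      // Failed batch: pad with empty entries so per-command alignment
      // holds; callers treat empty entries as not-warmed (no proof, no claim).
      results.push(...batch.map(() => ({})));
    }
  }
  return results;
}
```

The padding is what lets the caller keep its existing per-command bookkeeping: a failed batch degrades coverage instead of corrupting alignment.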
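
A rough sketch of the hoisted helper's shape (the helper name and the
STRLEN verification are from this commit; redisCmd and rebuildViaHttp are
hypothetical stand-ins for the seeder's Upstash REST and fetch calls):

```javascript
// Sketch: probe, rebuild via HTTP, then verify with STRLEN that the
// rebuild actually SET the key. HTTP 200 alone is not proof -- the handler
// can skip its SET (coverage gate, pipeline failure).
const RANKING_KEY = 'resilience:ranking:v9';

async function ensureRankingPresent({ redisCmd, rebuildViaHttp, log = console.log }) {
  const existing = await redisCmd('STRLEN', RANKING_KEY);
  if (existing > 0) return true; // short-circuit (removed later in this PR)

  const res = await rebuildViaHttp();
  if (!res.ok) {
    log(`ranking rebuild HTTP failed: ${res.status}`);
    return false;
  }
  // Post-rebuild verification: the key must actually exist now.
  const len = await redisCmd('STRLEN', RANKING_KEY);
  if (!(len > 0)) {
    log('ranking rebuild returned 200 but key is still absent');
    return false;
  }
  return true;
}
```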

Tests:
- resilience-ranking.test.mts: assert pipelines stay ≤30 commands.
- resilience-scores-seed.test.mjs: structural checks that the rebuild is
  hoisted (≥2 callsites of ensureRankingPresent), STRLEN verification is
  present, and meta write is gated on rankingPresent.

Full resilience suite: 373/373 pass (was 370 — 3 new tests).

* fix(resilience-ranking): seeder no longer writes seed-meta (handler is sole writer)

Reviewer P1: ensureRankingPresent() returning true only means the ranking
key exists in Redis — not that THIS cron actually wrote it. The handler
skips both the ranking SET and the meta SET when coverage < 75%, so an
older ranking from a prior cron can linger even though this cron's data
never landed. Under that scenario, the previous commit still wrote a fresh
seed-meta:resilience:ranking, recreating the stale-meta-over-stale-data
failure this PR is meant to eliminate.

Fix: remove seeder-side seed-meta writes entirely. The ranking handler
already writes ranking + meta atomically in the same pipeline when (and
only when) coverage passes the gate. ensureRankingPresent() triggers the
handler every cron, which addresses the original rationale for the seeder
heartbeat (meta going stale during quiet Pro usage) without the seeder
needing to lie.

Consequence on failure:
- Coverage gate trips → handler writes neither ranking nor meta.
- seed-meta stays at its previous timestamp; api/health reports accurate
  staleness (STALE_SEED after maxStaleMin, then CRIT) instead of a fresh
  meta over stale/empty data.

Tests updated: the "meta gated on rankingPresent" assertion is replaced
with "seeder must not SET seed-meta:resilience:ranking" + "no
writeRankingSeedMeta". Comments may still reference the key name for
maintainer clarity — the assertion targets actual SET commands.

Full resilience suite: 373/373 pass.

* fix(resilience-ranking): always refresh + 12h TTL (close timing hole)

Reviewer P1+P2:

- P1: ranking TTL == cron interval (both 6h) left a timing hole. If a cron
  wrote the key near the end of its run and the next cron fired near the
  start of its interval, the key was still alive at probe time →
  ensureRankingPresent() returned early → no rebuild → key expired a short
  while later and stayed absent until a cron eventually ran while the key
  was missing. Multi-hour EMPTY_ON_DEMAND gaps.

- P2: probing only the ranking data key (not seed-meta) meant a partial
  handler pipeline (ranking SET ok, meta SET missed) would self-heal only
  when the ranking itself expired — never during its TTL window.

Fix:

1. Bump RESILIENCE_RANKING_CACHE_TTL_SECONDS from 6h to 12h (2x cron
   interval). A single missed or slow cron no longer causes a gap.
   Server-side and seeder-side constants kept in sync.

2. Replace ensureRankingPresent() with refreshRankingAggregate(): drop the
   'if key present, skip' short-circuit. Rebuild every cron, unconditionally.
   One cheap HTTP call keeps ranking + seed-meta rolling forward together
   and self-heals the partial-pipeline case — handler retries the atomic
   pair every 6h regardless of whether the keys are currently live.

3. Update health.js comment to reflect the new TTL and refresh cadence
   (12h data TTL, 6h refresh, 12h staleness threshold = 2 missed ticks).
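
The timing fix in miniature (constant names are from this commit;
rebuildViaHttp is a hypothetical stand-in for the handler call):

```javascript
// Sketch: TTL is 2x the cron interval, so one missed or slow cron no
// longer leaves a gap; the refresh helper has no probe short-circuit.
const CRON_INTERVAL_SECONDS = 6 * 60 * 60;
const RESILIENCE_RANKING_CACHE_TTL_SECONDS = 12 * 60 * 60; // 2x cron

async function refreshRankingAggregate({ rebuildViaHttp, log = console.log }) {
  // No 'if key present, skip': rebuild every cron, unconditionally, so
  // ranking + seed-meta roll forward together and a partial pipeline
  // self-heals within one cron tick instead of one TTL window.
  const res = await rebuildViaHttp();
  if (!res.ok) log(`ranking refresh failed: ${res.status}`);
  return res.ok;
}
```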

Tests:
- RESILIENCE_RANKING_CACHE_TTL_SECONDS asserts 12h (was 6h).
- New assertion: refreshRankingAggregate must NOT early-return on probe-
  hit, and the rebuild HTTP call must be unconditional in its body.
- DEL-guard test relaxed to allow comments between '{' and the DEL line
  (structural property preserved).

Full resilience suite: 375/375.

* fix(resilience-ranking): parallelize warm batches + atomic rebuild via ?refresh=1

Reviewer P2s:

- Warm path serialized the 8 batch pipelines with `await` in a for-loop,
  adding ~7 extra Upstash round-trips (100-500ms each on Edge) to the warm
  wall-clock. Batches are independent; Promise.all collapses them into one
  slowest-batch window.

- DEL+rebuild created a brief absence window: if the rebuild request failed
  transiently, the ranking stayed absent until the next cron. Now seeder
  calls `/api/resilience/v1/get-resilience-ranking?refresh=1` and the
  handler bypasses its cache-hit early-return, recomputing and SETting
  atomically. On rebuild failure, the existing (possibly stale-but-present)
  ranking is preserved instead of being nuked.
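
The parallelization can be sketched like this (runBatchesParallel and
runPipeline are illustrative names; per-batch padding preserves the
alignment guarantee from the chunking change earlier in this PR):

```javascript
// Sketch: independent batch pipelines run concurrently, so wall-clock
// becomes the slowest single batch instead of the sum of all round-trips.
async function runBatchesParallel(batches, runPipeline) {
  const perBatch = await Promise.all(
    batches.map((batch) =>
      runPipeline(batch)
        .then((res) =>
          Array.isArray(res) && res.length === batch.length
            ? res
            : batch.map(() => ({})) // pad a failed batch, keep alignment
        )
        .catch(() => batch.map(() => ({}))) // a rejection must not sink Promise.all
    )
  );
  return perBatch.flat();
}
```

The per-batch `.catch` matters: without it, one rejected pipeline would reject the whole `Promise.all` and discard the batches that succeeded.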

Handler: read ctx.request.url for the refresh query param; guard the URL
parse with try/catch so an unparseable url falls back to the cached-first
behavior.
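
A minimal sketch of the guarded parse (wantsRefresh is a hypothetical
name; the fallback mirrors the cached-first behavior described above):

```javascript
// Sketch: read the refresh query param defensively; any failure to parse
// ctx.request.url (or a missing request object) falls back to refresh=false,
// i.e. the standard cached-first path.
function wantsRefresh(ctx) {
  try {
    const url = new URL(ctx.request.url, 'http://localhost');
    return url.searchParams.get('refresh') === '1';
  } catch {
    return false; // unparseable url -> cached-first behavior
  }
}
```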

Tests:
- New: ?refresh=1 must bypass the cache-hit early-return (fails on old code,
  passes now).
- DEL-guard test replaced with 'does NOT DEL' + 'uses ?refresh=1'.
- Batch chunking still asserted at SET_BATCH=30.

Full resilience suite: 376/376.

* fix(resilience-ranking): bulk-warm call also needs ?refresh=1 (asymmetric TTL hazard)

Reviewer P1: in the 6h-12h window, per-country score keys have expired
(TTL 6h) but the ranking aggregate is still alive (TTL 12h). The seeder's
bulk-warm call was hitting get-resilience-ranking without ?refresh=1, so
the handler's cache-hit early-return fired and the entire warm path was
skipped. Scores stayed missing; coverage degraded; the only recovery was
the per-country laggard loop (5-request batches) — which silently no-ops
when WM_KEY is absent. This defeated the whole point of the chunked bulk
warm introduced in this PR.

Fix: the bulk-warm fetch at scripts/seed-resilience-scores.mjs:167 now
appends ?refresh=1, matching the rebuild call. Every seeder-initiated hit
on the ranking endpoint forces the handler to route through
warmMissingResilienceScores and its chunked pipeline SET, regardless of
whether the aggregate is still cached.

Test extended: structural assertion now scans ALL occurrences of
get-resilience-ranking in the seeder and requires every one of them to
carry ?refresh=1. Fails the moment a future change adds a bare call.

Full resilience suite: 376/376.

* fix(resilience-ranking): gate ?refresh=1 on seed key + detect partial pipeline publish

Reviewer P1: ?refresh=1 was honored for any caller — including valid Pro
bearer tokens. A full warm is ~222 score computations + chunked pipeline
SETs; a Pro user looping on refresh=1 (or an automated client) could DoS
Upstash quota and Edge budget. Gate refresh behind
WORLDMONITOR_VALID_KEYS / WORLDMONITOR_API_KEY (X-WorldMonitor-Key
header) — the same allowlist the cron uses. Pro bearer tokens get the
standard cache-first path; refresh requires the seed service key.
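
A sketch of the gate, assuming a comma-separated allowlist (refreshAllowed
is a hypothetical name; the env var and header names are from this commit,
but the exact parsing of WORLDMONITOR_VALID_KEYS is an assumption):

```javascript
// Sketch: honor refresh=1 only when the request carries a valid seed key.
// Pro bearer tokens never reach this check and stay on cache-first.
function refreshAllowed(headers, env) {
  const key = headers['x-worldmonitor-key'];
  if (!key) return false;
  // Assumed format: WORLDMONITOR_VALID_KEYS is a comma-separated list,
  // with WORLDMONITOR_API_KEY as a single-key fallback.
  const allow = (env.WORLDMONITOR_VALID_KEYS || env.WORLDMONITOR_API_KEY || '')
    .split(',')
    .map((s) => s.trim())
    .filter(Boolean);
  return allow.includes(key);
}
```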

Reviewer P2: the handler's atomic runRedisPipeline SET of ranking + meta
is non-transactional on Upstash REST — either SET can fail independently.
If the ranking landed but meta missed, the seeder's STRLEN verify would
pass (ranking present) while /api/health stays stuck on stale meta.

Two-part fix:
- Handler inspects pipelineResult[0] and [1] and logs a warning when
  either SET didn't return OK. Ops-greppable signal.
- Seeder's verify now checks BOTH keys in parallel: STRLEN on ranking
  data, and GET + fetchedAt freshness (<5min) on seed-meta. Partial
  publish logs a warning; next cron retries (SET is idempotent).
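
The dual verify in sketch form (verifyPublish is a hypothetical name; the
key names and the 5-minute freshness window are from this commit; redisCmd
stands in for the seeder's Upstash REST call):

```javascript
// Sketch: check both halves of the atomic pair in parallel -- STRLEN on
// the ranking data, and fetchedAt freshness on seed-meta. A mismatch is
// the partial-publish signature; the next cron retries (SET is idempotent).
const FRESH_WINDOW_MS = 5 * 60 * 1000;

async function verifyPublish({ redisCmd, now = Date.now() }) {
  const [rankingLen, metaRaw] = await Promise.all([
    redisCmd('STRLEN', 'resilience:ranking:v9'),
    redisCmd('GET', 'seed-meta:resilience:ranking'),
  ]);
  const rankingOk = rankingLen > 0;
  let metaOk = false;
  try {
    const meta = JSON.parse(metaRaw);
    metaOk = now - Date.parse(meta.fetchedAt) < FRESH_WINDOW_MS;
  } catch {
    metaOk = false; // absent or unparseable meta counts as stale
  }
  if (rankingOk !== metaOk) {
    console.warn('partial ranking publish detected; next cron retries');
  }
  return { rankingOk, metaOk };
}
```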

Tests:
- New: ?refresh=1 without/with-wrong X-WorldMonitor-Key must NOT trigger
  recompute (falls back to cached response). Existing bypass test updated
  to carry a valid seed key header.

Full resilience suite: 376/376 + 1 new = 377/377.
2026-04-16 12:48:41 +04:00