fix(supply-chain): split chokepoint transit data + close silent zero-state cache (#3185)

* fix(supply-chain): split chokepoint transit data + close silent zero-state cache

Production supply-chain panel was rendering 13 empty chokepoints because
the getChokepointStatus RPC silently cached zero-state for 5 minutes:

1. supply_chain:transit-summaries:v1 grew to ~500 KB (180 days × 14 fields
   of history per chokepoint, across 13 chokepoints).
2. REDIS_OP_TIMEOUT_MS is 1.5 s. Vercel Sydney edge → Upstash for a 500 KB
   GET consistently exceeded the budget; getCachedJson caught the AbortError
   and returned null.
3. The 500 KB portwatch fallback read hit the same timeout.
4. summaries = {} → every summaries[cp.id] was undefined → 13 chokepoints
   got the zero-state default → cached as a non-null success response for
   REDIS_CACHE_TTL (5 min) instead of NEG_SENTINEL (120 s).
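
Mechanically, the pin in step 4 comes from the success-TTL rule: only a literal null takes the negative-cache path. A minimal model of that behavior (names and shapes are illustrative, not the real cachedFetchJson):

```typescript
// Minimal model of the pre-fix failure: a fetch-through cache that only
// takes the negative-cache path on `null`, so an all-zero response built
// from an empty summaries map is cached as a healthy result for 5 minutes.
interface Summary { todayTotal: number }

const NEG_SENTINEL_TTL_S = 120; // negative cache
const REDIS_CACHE_TTL_S = 300;  // success cache

function ttlForResult(result: unknown): number {
  // Pre-fix rule: anything non-null counts as success.
  return result === null ? NEG_SENTINEL_TTL_S : REDIS_CACHE_TTL_S;
}

// A timed-out Redis read surfaced as `null`, which became `{}` downstream:
const summaries: Record<string, Summary> = {};

// Every lookup misses, so each chokepoint gets the zero-state default...
const ids = ['cp-1', 'cp-2', 'cp-3']; // illustrative subset of the 13
const rows = ids.map(id => summaries[id] ?? { todayTotal: 0 });

// ...and the assembled (non-null) response is pinned for 5 minutes:
const ttl = ttlForResult({ chokepoints: rows });
console.log(ttl); // 300
```

The fix below changes the routing, not the TTLs: a missing summary is flagged upstream-unavailable so the null/negative path fires instead.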

Fix (one PR, per docs/plans/chokepoint-rpc-payload-split.md):

- ais-relay.cjs: split seedTransitSummaries output.
  - supply_chain:transit-summaries:v1 — compact (~30 KB, no history).
  - supply_chain:transit-summaries:history:v1:{id} — per chokepoint
    (~35 KB each, 13 keys). Both under the 1.5 s Redis read budget.
- New RPC GetChokepointHistory: lazy-loaded on card expand.
- get-chokepoint-status.ts: drop the 500 KB portwatch/corridorrisk/
  chokepoint_transits fallback reads. Treat a null transit-summaries
  read as upstreamUnavailable=true so cachedFetchJson writes NEG_SENTINEL
  (2 min) instead of a 5-min zero-state pin. Omit history from the
  response (proto field stays declared; empty array).
- server/_shared/redis.ts: tag AbortError timeouts with [REDIS-TIMEOUT]
  key=… timeoutMs=… so log drains / Sentry-Vercel integration pick up
  large-payload timeouts instead of them being silently swallowed.
- SupplyChainPanel.ts + MapPopup.ts: lazy-fetch history on card expand
  via fetchChokepointHistory; session-scoped cache; graceful "History
  unavailable" on empty/error. PRO gating on the map popup unchanged.
- Gateway: cache-tier entry for /get-chokepoint-history (slow).
- Tests: regression guards for upstreamUnavailable gate + per-id key
  shape + handler wiring + proto query annotations.
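
The storage split in the first bullet can be sketched as follows; the key names come from this PR, while the record shapes and the chokepoint ID 'x1' are illustrative:

```typescript
// One compact summary blob (read on every status call) plus one small
// per-chokepoint history key (read only on card expand).
const TRANSIT_SUMMARY_KEY = 'supply_chain:transit-summaries:v1'; // compact, no history
const HISTORY_KEY_PREFIX = 'supply_chain:transit-summaries:history:v1:';

interface DayCount { date: string; total: number }
interface FullSummary { todayTotal: number; history: DayCount[] }

// Split one combined record into the compact row (no history, goes into
// TRANSIT_SUMMARY_KEY) and the per-id history payload.
function split(id: string, s: FullSummary, now: number) {
  const { history, ...compact } = s;
  return {
    summaryEntry: compact,
    historyKey: HISTORY_KEY_PREFIX + id,
    historyPayload: { chokepointId: id, history, fetchedAt: now },
  };
}

const ex = split('x1', { todayTotal: 4, history: [{ date: '2026-01-01', total: 4 }] }, Date.now());
console.log(ex.historyKey); // supply_chain:transit-summaries:history:v1:x1
```

Each piece then fits comfortably under the 1.5 s Redis read budget, which is the whole point of the split.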

Audit included in plan: no other RPC consumer's read stack exceeds 200 KB
besides displacement:summary:v1:2026 (724 KB, same risk, flagged for a
follow-up PR). wildfire:fires:v1 at 1.7 MB loads via bootstrap (3 s
timeout, different path) — monitor but out of scope.

Expected impact:
- supply_chain:chokepoints:v4 payload drops from ~508 KB to <100 KB.
- supply_chain:transit-summaries:v1 drops from ~502 KB to <50 KB.
- RPC Redis reads stay well under 1.5 s in the hot path.
- Silent zero-state pinning is closed off: a null read now takes the 2-min
  negative cache and self-heals on the next relay tick.
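
The upstreamUnavailable gate described above, simplified from the get-chokepoint-status change in this PR (signature condensed; counts stand in for the real warning/disruption arrays):

```typescript
// A missing compact summary now routes to the 2-minute negative cache
// instead of being assembled into a zero-state "success".
function isUpstreamUnavailable(
  summaries: Record<string, unknown>,
  navFailed: boolean, vesselFailed: boolean,
  warningCount: number, disruptionCount: number,
): boolean {
  const transitSummariesMissing = Object.keys(summaries).length === 0;
  return transitSummariesMissing
    || (navFailed && vesselFailed)
    || (navFailed && disruptionCount === 0)
    || (vesselFailed && warningCount === 0);
}

// Empty summaries alone now trip the gate, regardless of nav/vessel health:
console.log(isUpstreamUnavailable({}, false, false, 5, 5)); // true
```

The pre-existing nav/vessel conditions are unchanged; the split summary read just becomes one more input to the same gate.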

* fix(supply-chain): address PR #3185 review — stop caching empty/error + fix partial coverage

Two P1 regressions caught in review:

1. Client cache poisoning on empty/error (MapPopup.ts, SupplyChainPanel.ts)
   An empty array is truthy in JS, so MapPopup's `!cached && !inflight` branch
   never fired once we cached []. Neither did `cached && cached.length`, so
   the popup stayed stuck on "Loading transit history..." for the session.
   SupplyChainPanel had an explicit `cached && !cached.length` branch but
   still never retried, so the same transient became session-sticky there too.

   Fix: cache ONLY non-empty successful responses. Empty/error show the
   "History unavailable" placeholder but leave the cache untouched, so the
   next re-expand retries. The /get-chokepoint-history gateway tier is
   "slow" (5-min CF edge cache) → retries stay cheap.
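
The wedge is easiest to see as a truth table over the two branch guards. A reduced sketch of the popup's decision logic (not the real MapPopup code):

```typescript
// With cached = [], neither the "render chart" nor the "start fetch"
// branch fires, so the component shows the loading state forever.
type History = number[];

function nextAction(cached: History | undefined, inflight: boolean): 'chart' | 'fetch' | 'loading' {
  if (cached && cached.length) return 'chart'; // non-empty cache: render
  if (!cached && !inflight) return 'fetch';    // no cache entry: fetch
  return 'loading';                            // inflight, or cached []
}

console.log(nextAction([], false));        // 'loading': [] is truthy but has length 0, so it wedges
console.log(nextAction(undefined, false)); // 'fetch': the post-fix state after an empty/error result
```

Refusing to cache empty/error results keeps every failure in the `undefined` state, which is the only state that retries.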

2. Partial portwatch coverage treated as healthy (ais-relay.cjs)
   seedTransitSummaries iterated Object.entries(pw), so if seed-portwatch
   dropped N of 13 chokepoints (ArcGIS reject/empty), summaries had <13 keys.
   The get-chokepoint-status upstreamUnavailable gate fires only on fully-empty
   summaries, so the N missing chokepoints fell through to zero-state rows
   that got pinned in cache for 5 minutes.

   Fix: iterate CANONICAL_IDS (Object.keys(CHOKEPOINT_THREAT_LEVELS)) and
   fill zero-state for any ID missing from pw. Shape is consistently 13
   keys. Track pwCovered → envelope + seed-meta recordCount reflect real
   upstream coverage (not shape size), so health.js can distinguish 13/13
   healthy from 10/13 partial. Warn-log on shortfall.
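
The canonical-ID fix reduces to the following shape (a toy three-ID threat-level map stands in for the real 13-entry CHOKEPOINT_THREAT_LEVELS, and the record is trimmed to one field):

```typescript
// Iterate the full canonical ID set and zero-fill anything portwatch
// dropped, tracking real upstream coverage separately from shape size.
const CHOKEPOINT_THREAT_LEVELS: Record<string, string> = {
  a: 'elevated', b: 'normal', c: 'normal',
};

interface PwEntry { wowChangePct: number }

function seedSummaries(pw: Record<string, PwEntry>) {
  const CANONICAL_IDS = Object.keys(CHOKEPOINT_THREAT_LEVELS);
  let pwCovered = 0;
  const summaries: Record<string, { wowChangePct: number }> = {};
  for (const cpId of CANONICAL_IDS) {
    const cpData = pw[cpId];
    if (cpData) pwCovered++;
    summaries[cpId] = { wowChangePct: cpData?.wowChangePct ?? 0 }; // zero-state for missing IDs
  }
  // Shape is always |CANONICAL_IDS|; pwCovered reflects real coverage.
  return { summaries, pwCovered };
}

const { summaries, pwCovered } = seedSummaries({ a: { wowChangePct: 3 } });
console.log(Object.keys(summaries).length, pwCovered); // 3 1
```

Binding the seed-meta recordCount to pwCovered (not the key count) is what lets health checks see 10/13 as a shortfall instead of a healthy 13-key shape.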

Tests: new regression guards
- panel must NOT cache empty arrays (historyCache.set with []).
- writer must iterate CANONICAL_IDS, not Object.entries(pw).
- seed-meta recordCount binds to pwCovered.

5718/5718 data tests pass. typecheck + typecheck:api clean.
Author: Elie Habib
Date: 2026-04-18 23:14:00 +04:00
Committed by: GitHub
Parent: d1e084061d
Commit: 3c47c1b222
19 changed files with 633 additions and 119 deletions

File diff suppressed because one or more lines are too long

@@ -53,6 +53,41 @@ paths:
application/json:
schema:
$ref: '#/components/schemas/Error'
/api/supply-chain/v1/get-chokepoint-history:
get:
tags:
- SupplyChainService
summary: GetChokepointHistory
description: |-
GetChokepointHistory returns transit-count history for a single chokepoint,
loaded lazily on card expand. Keeps the status RPC compact (no 180-day
history per chokepoint on every call).
operationId: GetChokepointHistory
parameters:
- name: chokepointId
in: query
required: false
schema:
type: string
responses:
"200":
description: Successful response
content:
application/json:
schema:
$ref: '#/components/schemas/GetChokepointHistoryResponse'
"400":
description: Validation error
content:
application/json:
schema:
$ref: '#/components/schemas/ValidationError'
default:
description: Error response
content:
application/json:
schema:
$ref: '#/components/schemas/Error'
/api/supply-chain/v1/get-critical-minerals:
get:
tags:
@@ -623,6 +658,29 @@ components:
type: string
hazardAlertName:
type: string
GetChokepointHistoryRequest:
type: object
properties:
chokepointId:
type: string
required:
- chokepointId
description: |-
GetChokepointHistory returns the transit-count history for a single
chokepoint. Loaded lazily on card expand so the main chokepoint-status
response can stay compact (no 180-day history per chokepoint).
GetChokepointHistoryResponse:
type: object
properties:
chokepointId:
type: string
history:
type: array
items:
$ref: '#/components/schemas/TransitDayCount'
fetchedAt:
type: string
format: int64
GetCriticalMineralsRequest:
type: object
GetCriticalMineralsResponse:

@@ -0,0 +1,23 @@
syntax = "proto3";
package worldmonitor.supply_chain.v1;
import "buf/validate/validate.proto";
import "sebuf/http/annotations.proto";
import "worldmonitor/supply_chain/v1/supply_chain_data.proto";
// GetChokepointHistory returns the transit-count history for a single
// chokepoint. Loaded lazily on card expand so the main chokepoint-status
// response can stay compact (no 180-day history per chokepoint).
message GetChokepointHistoryRequest {
string chokepoint_id = 1 [
(buf.validate.field).required = true,
(sebuf.http.query) = {name: "chokepointId"}
];
}
message GetChokepointHistoryResponse {
string chokepoint_id = 1;
repeated TransitDayCount history = 2;
int64 fetched_at = 3;
}

@@ -5,6 +5,7 @@ package worldmonitor.supply_chain.v1;
import "sebuf/http/annotations.proto";
import "worldmonitor/supply_chain/v1/get_shipping_rates.proto";
import "worldmonitor/supply_chain/v1/get_chokepoint_status.proto";
import "worldmonitor/supply_chain/v1/get_chokepoint_history.proto";
import "worldmonitor/supply_chain/v1/get_critical_minerals.proto";
import "worldmonitor/supply_chain/v1/get_shipping_stress.proto";
import "worldmonitor/supply_chain/v1/get_country_chokepoint_index.proto";
@@ -25,6 +26,13 @@ service SupplyChainService {
option (sebuf.http.config) = {path: "/get-chokepoint-status", method: HTTP_METHOD_GET};
}
// GetChokepointHistory returns transit-count history for a single chokepoint,
// loaded lazily on card expand. Keeps the status RPC compact (no 180-day
// history per chokepoint on every call).
rpc GetChokepointHistory(GetChokepointHistoryRequest) returns (GetChokepointHistoryResponse) {
option (sebuf.http.config) = {path: "/get-chokepoint-history", method: HTTP_METHOD_GET};
}
rpc GetCriticalMinerals(GetCriticalMineralsRequest) returns (GetCriticalMineralsResponse) {
option (sebuf.http.config) = {path: "/get-critical-minerals", method: HTTP_METHOD_GET};
}

@@ -7367,7 +7367,13 @@ setInterval(() => {
}, CHOKEPOINT_TRANSIT_INTERVAL_MS).unref?.();
// --- Pre-assembled Transit Summaries (Railway advantage: avoids large Redis reads on Vercel) ---
// Split storage: compact summary (no history, ~30KB) + per-id history keys (~35KB each).
// The compact summary is read on every /api/supply-chain/v1/get-chokepoint-status call.
// History keys are read only on card expand via /get-chokepoint-history. Before this
// split the combined payload was ~500KB and timed out at Vercel edge's 1.5s Redis read
// budget (docs/plans/chokepoint-rpc-payload-split.md).
const TRANSIT_SUMMARY_REDIS_KEY = 'supply_chain:transit-summaries:v1';
const TRANSIT_SUMMARY_HISTORY_KEY_PREFIX = 'supply_chain:transit-summaries:history:v1:';
const TRANSIT_SUMMARY_TTL = 3600; // 1h — 6x interval; survives ~5 consecutive missed pings
const TRANSIT_SUMMARY_INTERVAL_MS = 10 * 60 * 1000;
@@ -7424,10 +7430,22 @@ async function seedTransitSummaries() {
const now = Date.now();
const summaries = {};
// Iterate the canonical chokepoint ID set rather than whatever pw happens to
// carry today. If seed-portwatch dropped 3 of 13 (flaky ArcGIS), those 3
// would otherwise vanish from summaries and the RPC would render zero-state
// rows for them — which get-chokepoint-status treats as healthy because its
// upstreamUnavailable gate fires only on fully-empty summaries. By emitting
// all 13 with zero-state for missing IDs, the shape is consistent and the
// coverage shortfall surfaces via the `pwCovered/N` log + recordCount only.
+const CANONICAL_IDS = Object.keys(CHOKEPOINT_THREAT_LEVELS);
+let pwCovered = 0;
-for (const [cpId, cpData] of Object.entries(pw)) {
+for (const cpId of CANONICAL_IDS) {
+const cpData = pw[cpId];
+if (cpData) pwCovered++;
const threatLevel = CHOKEPOINT_THREAT_LEVELS[cpId] || 'normal';
-const anomaly = detectTrafficAnomalyRelay(cpData.history, threatLevel);
+const history = cpData?.history ?? [];
+const anomaly = detectTrafficAnomalyRelay(history, threatLevel);
// Get relay transit counts for this chokepoint
let relayTransit = null;
@@ -7448,13 +7466,15 @@ async function seedTransitSummaries() {
}
const cr = latestCorridorRiskData?.[cpId];
// Compact summary: no history field. Consumed by get-chokepoint-status on
// every request, so keep it small.
summaries[cpId] = {
todayTotal: relayTransit?.total ?? 0,
todayTanker: relayTransit?.tanker ?? 0,
todayCargo: relayTransit?.cargo ?? 0,
todayOther: relayTransit?.other ?? 0,
-wowChangePct: cpData.wowChangePct ?? 0,
-history: cpData.history ?? [],
+wowChangePct: cpData?.wowChangePct ?? 0,
riskLevel: cr?.riskLevel ?? '',
incidentCount7d: cr?.incidentCount7d ?? 0,
disruptionPct: cr?.disruptionPct ?? 0,
@@ -7462,11 +7482,31 @@ async function seedTransitSummaries() {
riskReportAction: cr?.riskReportAction ?? '',
anomaly,
};
// Per-id history key — only fetched on card expand via GetChokepointHistory.
// Write best-effort: a failure here doesn't block the summary publish. An
// empty history key just means the chart is unavailable for that chokepoint
// until the next successful relay tick.
const historyPayload = { chokepointId: cpId, history, fetchedAt: now };
const historyOk = await envelopeWrite(
`${TRANSIT_SUMMARY_HISTORY_KEY_PREFIX}${cpId}`,
historyPayload,
TRANSIT_SUMMARY_TTL,
{ recordCount: history.length, sourceVersion: 'transit-summaries-history' },
);
if (!historyOk) console.warn(`[TransitSummary] history write failed for ${cpId}`);
}
-const ok = await envelopeWrite(TRANSIT_SUMMARY_REDIS_KEY, { summaries, fetchedAt: now }, TRANSIT_SUMMARY_TTL, { recordCount: Object.keys(summaries).length, sourceVersion: 'transit-summaries' });
-await upstashSet('seed-meta:supply_chain:transit-summaries', { fetchedAt: now, recordCount: Object.keys(summaries).length }, 604800);
-console.log(`[TransitSummary] Seeded ${Object.keys(summaries).length} summaries (redis: ${ok ? 'OK' : 'FAIL'})`);
+if (pwCovered < CANONICAL_IDS.length) {
+console.warn(`[TransitSummary] portwatch coverage shortfall: ${pwCovered}/${CANONICAL_IDS.length} — missing chokepoints will publish zero-state until next upstream success`);
+}
+const ok = await envelopeWrite(TRANSIT_SUMMARY_REDIS_KEY, { summaries, fetchedAt: now }, TRANSIT_SUMMARY_TTL, { recordCount: pwCovered, sourceVersion: 'transit-summaries' });
+// seed-meta recordCount = pwCovered (actual upstream coverage), not the
+// canonical-shape key count. Lets api/health.js detect a coverage shortfall
+// as a freshness anomaly rather than being masked by the always-13 shape.
+await upstashSet('seed-meta:supply_chain:transit-summaries', { fetchedAt: now, recordCount: pwCovered }, 604800);
+console.log(`[TransitSummary] Seeded ${pwCovered}/${CANONICAL_IDS.length} from portwatch + per-id history (redis: ${ok ? 'OK' : 'FAIL'})`);
}
// Seed transit summaries every 10 min (same as transit counter)

@@ -80,7 +80,17 @@ export async function getCachedJson(key: string, raw = false): Promise<unknown |
// through unchanged (unwrapEnvelope returns {_seed: null, data: raw}).
return unwrapEnvelope(JSON.parse(data.result)).data;
} catch (err) {
-console.warn('[redis] getCachedJson failed:', errMsg(err));
+// AbortError = timeout; structured + errored so log drains (e.g. Sentry via
+// Vercel integration) pick it up. Large-payload timeouts used to silently
+// return null and let downstream callers cache zero-state — see
+// docs/plans/chokepoint-rpc-payload-split.md for the incident that added
+// this tag.
+const isTimeout = err instanceof Error && err.name === 'AbortError';
+if (isTimeout) {
+console.error(`[REDIS-TIMEOUT] getCachedJson key=${key} timeoutMs=${REDIS_OP_TIMEOUT_MS}`);
+} else {
+console.warn('[redis] getCachedJson failed:', errMsg(err));
+}
return null;
}
}

@@ -168,6 +168,7 @@ const RPC_CACHE_TIER: Record<string, CacheTier> = {
'/api/forecast/v1/get-simulation-package': 'slow',
'/api/forecast/v1/get-simulation-outcome': 'slow',
'/api/supply-chain/v1/get-chokepoint-status': 'medium',
'/api/supply-chain/v1/get-chokepoint-history': 'slow',
'/api/news/v1/list-feed-digest': 'slow',
'/api/intelligence/v1/get-country-facts': 'daily',
'/api/intelligence/v1/list-security-advisories': 'slow',

@@ -0,0 +1,42 @@
import type {
ServerContext,
GetChokepointHistoryRequest,
GetChokepointHistoryResponse,
TransitDayCount,
} from '../../../../src/generated/server/worldmonitor/supply_chain/v1/service_server';
import { getCachedJson } from '../../../_shared/redis';
import { CANONICAL_CHOKEPOINTS } from './_chokepoint-ids';
const HISTORY_KEY_PREFIX = 'supply_chain:transit-summaries:history:v1:';
const VALID_IDS = new Set(CANONICAL_CHOKEPOINTS.map(c => c.id));
interface HistoryPayload {
chokepointId: string;
history: TransitDayCount[];
fetchedAt: number;
}
export async function getChokepointHistory(
_ctx: ServerContext,
req: GetChokepointHistoryRequest,
): Promise<GetChokepointHistoryResponse> {
const id = String(req.chokepointId || '').trim();
if (!id || !VALID_IDS.has(id)) {
return { chokepointId: '', history: [], fetchedAt: '0' };
}
try {
const payload = await getCachedJson(`${HISTORY_KEY_PREFIX}${id}`, true) as HistoryPayload | null;
if (!payload || !Array.isArray(payload.history)) {
return { chokepointId: id, history: [], fetchedAt: '0' };
}
return {
chokepointId: id,
history: payload.history,
fetchedAt: String(payload.fetchedAt ?? 0),
};
} catch {
return { chokepointId: id, history: [], fetchedAt: '0' };
}
}

@@ -15,17 +15,19 @@ import type {
import { cachedFetchJson, getCachedJson, setCachedJson } from '../../../_shared/redis';
import { listNavigationalWarnings } from '../../maritime/v1/list-navigational-warnings';
import { getVesselSnapshot } from '../../maritime/v1/get-vessel-snapshot';
import type { PortWatchData } from './_portwatch-upstream';
import { CANONICAL_CHOKEPOINTS } from './_chokepoint-ids';
// @ts-expect-error — .mjs module, no declaration file
-import { computeDisruptionScore, scoreToStatus, SEVERITY_SCORE, THREAT_LEVEL, detectTrafficAnomaly } from './_scoring.mjs';
+import { computeDisruptionScore, scoreToStatus, SEVERITY_SCORE, THREAT_LEVEL } from './_scoring.mjs';
import { type ThreatLevel, threatLevelToWarRiskTier } from './_insurance-tier';
import { CHOKEPOINT_STATUS_KEY as REDIS_CACHE_KEY } from '../../../_shared/cache-keys';
const TRANSIT_SUMMARIES_KEY = 'supply_chain:transit-summaries:v1';
-const PORTWATCH_FALLBACK_KEY = 'supply_chain:portwatch:v1';
-const CORRIDORRISK_FALLBACK_KEY = 'supply_chain:corridorrisk:v1';
-const TRANSIT_COUNTS_FALLBACK_KEY = 'supply_chain:chokepoint_transits:v1';
const FLOWS_KEY = 'energy:chokepoint-flows:v1';
+// NOTE: historical fallback via supply_chain:portwatch:v1 / corridorrisk / chokepoint_transits
+// was removed — those keys are ~500KB each, and reading them on top of the already-large
+// transit-summaries payload was causing Vercel-edge Redis timeouts (1.5s budget) and pinning
+// a silent zero-state cache. Today the ais-relay writer is authoritative for the compact
+// summary; if it's missing we fail-fast via upstreamUnavailable so cachedFetchJson writes
+// NEG_SENTINEL (120s) instead of caching a fake 5-min healthy-but-empty response.
+// See docs/plans/chokepoint-rpc-payload-split.md.
const REDIS_CACHE_TTL = 300; // 5 min
const THREAT_CONFIG_MAX_AGE_DAYS = 120;
const NEARBY_CHOKEPOINT_RADIUS_KM = 300;
@@ -67,13 +69,15 @@ interface ChokepointConfig {
type DirectionLabel = 'eastbound' | 'westbound' | 'northbound' | 'southbound';
// Compact summary written by ais-relay.cjs — no history array; per-id history
// lives in `supply_chain:transit-summaries:history:v1:{id}` and is served by
// GetChokepointHistory on card expand.
interface PreBuiltTransitSummary {
todayTotal: number;
todayTanker: number;
todayCargo: number;
todayOther: number;
wowChangePct: number;
-history: import('./_portwatch-upstream').TransitDayCount[];
riskLevel: string;
incidentCount7d: number;
disruptionPct: number;
@@ -239,47 +243,7 @@ interface ChokepointFetchResult {
upstreamUnavailable: boolean;
}
interface CorridorRiskEntry { riskLevel: string; incidentCount7d: number; disruptionPct: number; riskSummary: string; riskReportAction: string }
interface RelayTransitEntry { tanker: number; cargo: number; other: number; total: number }
interface FlowEstimateEntry { currentMbd: number; baselineMbd: number; flowRatio: number; disrupted: boolean; source: string; hazardAlertLevel: string | null; hazardAlertName: string | null }
interface RelayTransitPayload { transits: Record<string, RelayTransitEntry>; fetchedAt: number }
-function buildFallbackSummaries(
-portwatch: PortWatchData | null,
-corridorRisk: Record<string, CorridorRiskEntry> | null,
-transitData: RelayTransitPayload | null,
-chokepoints: ChokepointConfig[],
-): Record<string, PreBuiltTransitSummary> {
-const summaries: Record<string, PreBuiltTransitSummary> = {};
-const relayMap = new Map<string, RelayTransitEntry>();
-if (transitData?.transits) {
-for (const [relayName, entry] of Object.entries(transitData.transits)) {
-const canonical = CANONICAL_CHOKEPOINTS.find(c => c.relayName === relayName);
-if (canonical) relayMap.set(canonical.id, entry);
-}
-}
-for (const cp of chokepoints) {
-const pw = portwatch?.[cp.id];
-const cr = corridorRisk?.[cp.id];
-const relay = relayMap.get(cp.id);
-const anomaly = detectTrafficAnomaly(pw?.history ?? [], cp.threatLevel);
-summaries[cp.id] = {
-todayTotal: relay?.total ?? 0,
-todayTanker: relay?.tanker ?? 0,
-todayCargo: relay?.cargo ?? 0,
-todayOther: relay?.other ?? 0,
-wowChangePct: pw?.wowChangePct ?? 0,
-history: pw?.history ?? [],
-riskLevel: cr?.riskLevel ?? '',
-incidentCount7d: cr?.incidentCount7d ?? 0,
-disruptionPct: cr?.disruptionPct ?? 0,
-riskSummary: cr?.riskSummary ?? '',
-riskReportAction: cr?.riskReportAction ?? '',
-anomaly,
-};
-}
-return summaries;
-}
async function fetchChokepointData(): Promise<ChokepointFetchResult> {
const ctx = makeInternalCtx();
@@ -294,22 +258,22 @@ async function fetchChokepointData(): Promise<ChokepointFetchResult> {
getCachedJson(FLOWS_KEY, true).catch(() => null) as Promise<Record<string, FlowEstimateEntry> | null>,
]);
-let summaries = transitSummariesData?.summaries ?? {};
+const summaries = transitSummariesData?.summaries ?? {};
+const transitSummariesMissing = Object.keys(summaries).length === 0;
-// Fallback: if pre-built summaries are empty, read raw upstream keys directly
-if (Object.keys(summaries).length === 0) {
-const [portwatch, corridorRisk, transitCounts] = await Promise.all([
-getCachedJson(PORTWATCH_FALLBACK_KEY, true).catch(() => null) as Promise<PortWatchData | null>,
-getCachedJson(CORRIDORRISK_FALLBACK_KEY, true).catch(() => null) as Promise<Record<string, CorridorRiskEntry> | null>,
-getCachedJson(TRANSIT_COUNTS_FALLBACK_KEY, true).catch(() => null) as Promise<RelayTransitPayload | null>,
-]);
-if (portwatch && Object.keys(portwatch).length > 0) {
-summaries = buildFallbackSummaries(portwatch, corridorRisk, transitCounts, CHOKEPOINTS);
-}
-}
const warnings = navResult.warnings || [];
const disruptions: AisDisruption[] = vesselResult.snapshot?.disruptions || [];
-const upstreamUnavailable = (navFailed && vesselFailed) || (navFailed && disruptions.length === 0) || (vesselFailed && warnings.length === 0);
+// Treat a missing compact summary as upstream-unavailable so the outer
+// cachedFetchJson caches NEG_SENTINEL (120s neg TTL) rather than pinning a
+// healthy-but-zero response for the full REDIS_CACHE_TTL (5min). Before this
+// gate, a single Redis read timeout silently published 13 zero-state
+// chokepoints to supply_chain:chokepoints:v4 and the panel stayed empty
+// until that cache expired. See docs/plans/chokepoint-rpc-payload-split.md.
+const upstreamUnavailable = transitSummariesMissing
+|| (navFailed && vesselFailed)
+|| (navFailed && disruptions.length === 0)
+|| (vesselFailed && warnings.length === 0);
const warningsByChokepoint = groupWarningsByChokepoint(warnings);
const disruptionsByChokepoint = groupDisruptionsByChokepoint(disruptions);
const threatConfigFresh = isThreatConfigFresh();
@@ -366,7 +330,10 @@ async function fetchChokepointData(): Promise<ChokepointFetchResult> {
todayCargo: ts.todayCargo,
todayOther: ts.todayOther,
wowChangePct: ts.wowChangePct,
-history: ts.history,
+// History is served separately by GetChokepointHistory (lazy-loaded on
+// card expand) — field stays declared for proto compat but is empty
+// on the main status response.
+history: [],
riskLevel: ts.riskLevel,
incidentCount7d: ts.incidentCount7d,
disruptionPct: ts.disruptionPct,

@@ -2,6 +2,7 @@ import type { SupplyChainServiceHandler } from '../../../../src/generated/server
import { getShippingRates } from './get-shipping-rates';
import { getChokepointStatus } from './get-chokepoint-status';
import { getChokepointHistory } from './get-chokepoint-history';
import { getCriticalMinerals } from './get-critical-minerals';
import { getShippingStress } from './get-shipping-stress';
import { getCountryChokepointIndex } from './get-country-chokepoint-index';
@@ -14,6 +15,7 @@ import { getRouteImpact } from './get-route-impact';
export const supplyChainHandler: SupplyChainServiceHandler = {
getShippingRates,
getChokepointStatus,
getChokepointHistory,
getCriticalMinerals,
getShippingStress,
getCountryChokepointIndex,

@@ -12,7 +12,8 @@ import { escapeHtml, sanitizeUrl } from '@/utils/sanitize';
import { isMobileDevice, getCSSColor } from '@/utils';
import { TransitChart } from '@/utils/transit-chart';
import { HS2RingChart } from '@/utils/hs2-ring-chart';
-import type { GetChokepointStatusResponse } from '@/services/supply-chain';
+import type { GetChokepointStatusResponse, TransitDayCount } from '@/services/supply-chain';
+import { fetchChokepointHistory } from '@/services/supply-chain';
import { t } from '@/services/i18n';
import { fetchHotspotContext, formatArticleDate, extractDomain, type GdeltArticle } from '@/services/gdelt-intel';
import { getWingbitsLiveFlight } from '@/services/wingbits';
@@ -225,6 +226,10 @@ export class MapPopup {
private repairShips: RepairShip[] = [];
private chokepointData: GetChokepointStatusResponse | null = null;
private transitChart: TransitChart | null = null;
// Session-scoped cache: history is now lazy-loaded via GetChokepointHistory
// when a waterway popup opens (main status RPC omits it to keep payloads small).
private static historyCache = new Map<string, TransitDayCount[]>();
private static historyInflight = new Set<string>();
private isMobileSheet = false;
private sheetTouchStartY: number | null = null;
private sheetCurrentOffset = 0;
@@ -272,12 +277,46 @@ export class MapPopup {
c => c.id === waterway.chokepointId,
);
const chartEl = this.popup.querySelector<HTMLElement>('[data-transit-chart]');
-if (chartEl && cp?.transitSummary?.history?.length) {
-this.transitChart = new TransitChart();
-this.transitChart.mount(chartEl, cp.transitSummary.history);
const cpId = cp?.id ?? '';
const isPro = hasPremiumAccess(getAuthState());
if (chartEl && cpId && isPro) {
const cached = MapPopup.historyCache.get(cpId);
if (cached && cached.length) {
this.transitChart = new TransitChart();
this.transitChart.mount(chartEl, cached);
} else if (!MapPopup.historyInflight.has(cpId)) {
// We cache ONLY non-empty successful responses. An empty-array result
// or error is not cached, so re-opening the popup retries. Caching []
// would poison the chokepoint for the session — empty-array is
// truthy in JS, so `cached && cached.length` is false AND
// `!cached` is also false → neither branch fires, popup stuck on
// "Loading…". The /get-chokepoint-history gateway tier is "slow"
// (5-min CF edge cache) so retries stay cheap.
MapPopup.historyInflight.add(cpId);
void fetchChokepointHistory(cpId).then(resp => {
MapPopup.historyInflight.delete(cpId);
const liveEl = this.popup?.querySelector<HTMLElement>('[data-transit-chart]');
if (!liveEl) return;
if (resp.history.length) {
MapPopup.historyCache.set(cpId, resp.history);
liveEl.textContent = '';
this.transitChart = new TransitChart();
this.transitChart.mount(liveEl, resp.history);
} else {
liveEl.textContent = t('components.supplyChain.historyUnavailable') || 'History unavailable';
}
}).catch(() => {
MapPopup.historyInflight.delete(cpId);
const liveEl = this.popup?.querySelector<HTMLElement>('[data-transit-chart]');
if (liveEl) liveEl.textContent = t('components.supplyChain.historyUnavailable') || 'History unavailable';
});
}
}
-// Track PRO gate impression for transit chart
-if (cp?.transitSummary?.history?.length && !hasPremiumAccess(getAuthState())) {
+// Track PRO gate impression for transit chart — we always render the gate
+// for non-PRO users on chokepoints (history is a PRO feature); this
+// doesn't depend on whether history has resolved.
+if (cpId && !isPro) {
trackGateHit('chokepoint-transit-chart');
}
@@ -1293,7 +1332,10 @@ export class MapPopup {
const cp = this.chokepointData?.chokepoints?.find(
c => c.id === waterway.chokepointId,
);
-const hasChart = !!(cp?.transitSummary?.history?.length);
+// Chart is now lazy-loaded via GetChokepointHistory on popup mount. Always
+// render the section for any known chokepoint; the initial placeholder
+// swaps to a chart (PRO) or "History unavailable" as the fetch resolves.
+const hasChart = !!cp;
const isPro = hasPremiumAccess(getAuthState());
const sectors = CHOKEPOINT_HS2_SECTORS[waterway.chokepointId];
@@ -1307,7 +1349,7 @@ export class MapPopup {
let chartSection = '';
if (hasChart) {
if (isPro) {
-chartSection = `<div data-transit-chart="${escapeHtml(waterway.name)}" style="margin-top:10px;min-height:200px"></div>`;
+chartSection = `<div data-transit-chart="${escapeHtml(waterway.name)}" style="margin-top:10px;min-height:200px;display:flex;align-items:center;justify-content:center;color:var(--text-dim,#888);font-size:12px">${t('components.supplyChain.loadingHistory') || 'Loading transit history\u2026'}</div>`;
} else {
chartSection = `
<div class="sector-pro-gate" data-gate="chokepoint-transit-chart" style="position:relative;overflow:hidden;border-radius:6px;margin-top:10px;min-height:120px;background:var(--surface-elevated, #111)">

@@ -5,7 +5,8 @@ import type {
GetCriticalMineralsResponse,
GetShippingStressResponse,
} from '@/services/supply-chain';
-import { fetchBypassOptions } from '@/services/supply-chain';
+import { fetchBypassOptions, fetchChokepointHistory } from '@/services/supply-chain';
+import type { TransitDayCount } from '@/services/supply-chain';
import type { ScenarioResult } from '@/config/scenario-templates';
import { SCENARIO_TEMPLATES } from '@/config/scenario-templates';
import { TransitChart } from '@/utils/transit-chart';
@@ -32,6 +33,11 @@ export class SupplyChainPanel extends Panel {
private transitChart = new TransitChart();
private chartObserver: MutationObserver | null = null;
private chartMountTimer: ReturnType<typeof setTimeout> | null = null;
// Session-scoped cache for lazy-loaded transit histories (keyed by chokepoint id).
// Populated on first card expand via fetchChokepointHistory; reused across re-renders
// so we don't refetch 35KB per expand/collapse cycle.
private historyCache = new Map<string, TransitDayCount[]>();
private historyInflight = new Set<string>();
private bypassUnsubscribe: (() => void) | null = null;
private bypassGateTracked = false;
private onDismissScenario: (() => void) | null = null;
@@ -159,9 +165,46 @@ export class SupplyChainPanel extends Panel {
const mountTransitChart = (): boolean => {
const el = this.content.querySelector(`[data-chart-cp="${expandedCpName}"]`) as HTMLElement | null;
if (!el) return false;
-if (cp?.transitSummary?.history?.length) {
-this.transitChart.mount(el, cp.transitSummary.history);
const cpId = cp?.id ?? '';
if (!cpId) { el.textContent = t('components.supplyChain.historyUnavailable') || 'History unavailable'; return true; }
const cached = this.historyCache.get(cpId);
if (cached && cached.length) {
el.removeAttribute('style');
el.style.marginTop = '8px';
el.style.minHeight = '200px';
el.textContent = '';
this.transitChart.mount(el, cached);
return true;
}
// NOTE: we do NOT cache empty/error results — a transient deploy-window
// miss or a brief Redis error would otherwise poison the chokepoint for
// the entire session. Each re-expand retries; the /get-chokepoint-history
// gateway tier is "slow" (5-min CF edge cache) so retries stay cheap.
if (this.historyInflight.has(cpId)) return true;
this.historyInflight.add(cpId);
void fetchChokepointHistory(cpId).then(resp => {
this.historyInflight.delete(cpId);
// Still mounted? Re-query — DOM may have re-rendered since fetch started.
const liveEl = this.content.querySelector(`[data-chart-cp-id="${cpId}"]`) as HTMLElement | null;
if (!liveEl) return;
if (resp.history.length) {
this.historyCache.set(cpId, resp.history);
liveEl.removeAttribute('style');
liveEl.style.marginTop = '8px';
liveEl.style.minHeight = '200px';
liveEl.textContent = '';
this.transitChart.mount(liveEl, resp.history);
} else {
liveEl.textContent = t('components.supplyChain.historyUnavailable') || 'History unavailable';
}
}).catch(() => {
this.historyInflight.delete(cpId);
const liveEl = this.content.querySelector(`[data-chart-cp-id="${cpId}"]`) as HTMLElement | null;
if (liveEl) liveEl.textContent = t('components.supplyChain.historyUnavailable') || 'History unavailable';
});
return true;
};
@@ -299,8 +342,12 @@ export class SupplyChainPanel extends Panel {
const actionRow = expanded && ts?.riskReportAction
? `<div class="sc-routing-advisory">${escapeHtml(ts.riskReportAction)}</div>`
: '';
-const chartPlaceholder = expanded && ts?.history?.length
-? `<div data-chart-cp="${escapeHtml(cp.name)}" style="margin-top:8px;min-height:200px"></div>`
+// Always render the chart placeholder when expanded — history is now
+// lazy-loaded via GetChokepointHistory RPC (see mountTransitChart below).
+// The placeholder shows a loading hint that's swapped to a chart once
+// history resolves, or to a graceful "unavailable" message on empty.
+const chartPlaceholder = expanded
+? `<div data-chart-cp="${escapeHtml(cp.name)}" data-chart-cp-id="${escapeHtml(cp.id)}" style="margin-top:8px;min-height:200px;display:flex;align-items:center;justify-content:center;color:var(--text-dim,#888);font-size:12px">${t('components.supplyChain.loadingHistory') || 'Loading transit history\u2026'}</div>`
: '';
const tier = cp.warRiskTier ?? 'WAR_RISK_TIER_NORMAL';

View File

@@ -102,6 +102,16 @@ export interface FlowEstimate {
hazardAlertName: string;
}
export interface GetChokepointHistoryRequest {
chokepointId: string;
}
export interface GetChokepointHistoryResponse {
chokepointId: string;
history: TransitDayCount[];
fetchedAt: string;
}
export interface GetCriticalMineralsRequest {
}
@@ -415,6 +425,31 @@ export class SupplyChainServiceClient {
return await resp.json() as GetChokepointStatusResponse;
}
async getChokepointHistory(req: GetChokepointHistoryRequest, options?: SupplyChainServiceCallOptions): Promise<GetChokepointHistoryResponse> {
let path = "/api/supply-chain/v1/get-chokepoint-history";
const params = new URLSearchParams();
if (req.chokepointId != null && req.chokepointId !== "") params.set("chokepointId", String(req.chokepointId));
const url = this.baseURL + path + (params.toString() ? "?" + params.toString() : "");
const headers: Record<string, string> = {
"Content-Type": "application/json",
...this.defaultHeaders,
...options?.headers,
};
const resp = await this.fetchFn(url, {
method: "GET",
headers,
signal: options?.signal,
});
if (!resp.ok) {
return this.handleError(resp);
}
return await resp.json() as GetChokepointHistoryResponse;
}
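The GET-with-query assembly in the generated method can be shown as a pure helper for illustration — a sketch under the assumption that only a non-empty `chokepointId` produces a query string (the `baseURL` value is a placeholder):

```typescript
// Hypothetical standalone version of the URL construction used by
// getChokepointHistory: empty/absent ids yield a bare path, so the
// server's `?? ""` default and buf.validate.required do the rejecting.
function buildHistoryUrl(baseURL: string, chokepointId: string): string {
  const params = new URLSearchParams();
  if (chokepointId != null && chokepointId !== '') {
    params.set('chokepointId', String(chokepointId));
  }
  const qs = params.toString();
  return baseURL + '/api/supply-chain/v1/get-chokepoint-history' + (qs ? '?' + qs : '');
}
```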
async getCriticalMinerals(req: GetCriticalMineralsRequest, options?: SupplyChainServiceCallOptions): Promise<GetCriticalMineralsResponse> {
let path = "/api/supply-chain/v1/get-critical-minerals";
const url = this.baseURL + path;

View File

@@ -102,6 +102,16 @@ export interface FlowEstimate {
hazardAlertName: string;
}
export interface GetChokepointHistoryRequest {
chokepointId: string;
}
export interface GetChokepointHistoryResponse {
chokepointId: string;
history: TransitDayCount[];
fetchedAt: string;
}
export interface GetCriticalMineralsRequest {
}
@@ -368,6 +378,7 @@ export interface RouteDescriptor {
export interface SupplyChainServiceHandler {
getShippingRates(ctx: ServerContext, req: GetShippingRatesRequest): Promise<GetShippingRatesResponse>;
getChokepointStatus(ctx: ServerContext, req: GetChokepointStatusRequest): Promise<GetChokepointStatusResponse>;
getChokepointHistory(ctx: ServerContext, req: GetChokepointHistoryRequest): Promise<GetChokepointHistoryResponse>;
getCriticalMinerals(ctx: ServerContext, req: GetCriticalMineralsRequest): Promise<GetCriticalMineralsResponse>;
getShippingStress(ctx: ServerContext, req: GetShippingStressRequest): Promise<GetShippingStressResponse>;
getCountryChokepointIndex(ctx: ServerContext, req: GetCountryChokepointIndexRequest): Promise<GetCountryChokepointIndexResponse>;
@@ -457,6 +468,53 @@ export function createSupplyChainServiceRoutes(
}
},
},
{
method: "GET",
path: "/api/supply-chain/v1/get-chokepoint-history",
handler: async (req: Request): Promise<Response> => {
try {
const pathParams: Record<string, string> = {};
const url = new URL(req.url, "http://localhost");
const params = url.searchParams;
const body: GetChokepointHistoryRequest = {
chokepointId: params.get("chokepointId") ?? "",
};
if (options?.validateRequest) {
const bodyViolations = options.validateRequest("getChokepointHistory", body);
if (bodyViolations) {
throw new ValidationError(bodyViolations);
}
}
const ctx: ServerContext = {
request: req,
pathParams,
headers: Object.fromEntries(req.headers.entries()),
};
const result = await handler.getChokepointHistory(ctx, body);
return new Response(JSON.stringify(result as GetChokepointHistoryResponse), {
status: 200,
headers: { "Content-Type": "application/json" },
});
} catch (err: unknown) {
if (err instanceof ValidationError) {
return new Response(JSON.stringify({ violations: err.violations }), {
status: 400,
headers: { "Content-Type": "application/json" },
});
}
if (options?.onError) {
return options.onError(err, req);
}
const message = err instanceof Error ? err.message : String(err);
return new Response(JSON.stringify({ message }), {
status: 500,
headers: { "Content-Type": "application/json" },
});
}
},
},
{
method: "GET",
path: "/api/supply-chain/v1/get-critical-minerals",

View File

@@ -947,6 +947,8 @@
"corridorDisruption": "Corridor Disruption",
"corridor": "Corridor",
"loadingCorridors": "Loading corridor data...",
"loadingHistory": "Loading transit history…",
"historyUnavailable": "Transit history unavailable",
"mineral": "Mineral",
"topProducers": "Top Producers",
"risk": "Risk",

View File

@@ -4,6 +4,7 @@ import {
SupplyChainServiceClient,
type GetShippingRatesResponse,
type GetChokepointStatusResponse,
type GetChokepointHistoryResponse,
type GetCriticalMineralsResponse,
type GetShippingStressResponse,
type GetCountryChokepointIndexResponse,
@@ -19,6 +20,7 @@ import {
type ShippingRatePoint,
type ChokepointExposureEntry,
type BypassOption,
type TransitDayCount,
} from '@/generated/client/worldmonitor/supply_chain/v1/service_client';
import { createCircuitBreaker } from '@/utils';
import { getHydratedData } from '@/services/bootstrap';
@@ -26,6 +28,7 @@ import { getHydratedData } from '@/services/bootstrap';
export type {
GetShippingRatesResponse,
GetChokepointStatusResponse,
GetChokepointHistoryResponse,
GetCriticalMineralsResponse,
GetShippingStressResponse,
GetCountryChokepointIndexResponse,
@@ -41,6 +44,7 @@ export type {
ShippingRatePoint,
ChokepointExposureEntry,
BypassOption,
TransitDayCount,
};
const client = new SupplyChainServiceClient(getRpcBaseUrl(), { fetch: (...args) => globalThis.fetch(...args) });
@@ -79,6 +83,22 @@ export async function fetchChokepointStatus(): Promise<GetChokepointStatusRespon
}
}
/**
* Lazy-load transit history for a single chokepoint. Main status RPC returns
* transitSummary.history = [] to keep the payload under the 1.5s Redis read
* budget; this call pulls the ~35KB per-id history key only when a card is
* expanded. See docs/plans/chokepoint-rpc-payload-split.md.
*/
export async function fetchChokepointHistory(
chokepointId: string,
): Promise<GetChokepointHistoryResponse> {
try {
return await client.getChokepointHistory({ chokepointId });
} catch {
return { chokepointId, history: [], fetchedAt: '0' };
}
}
export async function fetchCriticalMinerals(): Promise<GetCriticalMineralsResponse> {
const hydrated = getHydratedData('minerals') as GetCriticalMineralsResponse | undefined;
if (hydrated?.minerals?.length) return hydrated;

View File

@@ -0,0 +1,86 @@
import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import { readFileSync } from 'node:fs';
import { dirname, resolve } from 'node:path';
import { fileURLToPath } from 'node:url';
const __dirname = dirname(fileURLToPath(import.meta.url));
const root = resolve(__dirname, '..');
const handlerSrc = readFileSync(
resolve(root, 'server/worldmonitor/supply-chain/v1/get-chokepoint-history.ts'),
'utf-8',
);
const handlerMapSrc = readFileSync(
resolve(root, 'server/worldmonitor/supply-chain/v1/handler.ts'),
'utf-8',
);
describe('get-chokepoint-history handler (source analysis)', () => {
it('reads from the per-id history key prefix', () => {
assert.match(handlerSrc, /supply_chain:transit-summaries:history:v1:/);
});
it('uses getCachedJson in raw mode (unprefixed key)', () => {
assert.match(handlerSrc, /getCachedJson\(`\$\{HISTORY_KEY_PREFIX\}\$\{id\}`,\s*true\)/);
});
it('validates chokepointId against the canonical set', () => {
assert.match(handlerSrc, /CANONICAL_CHOKEPOINTS/);
assert.match(handlerSrc, /VALID_IDS\.has\(id\)/);
});
it('returns empty history with fetchedAt=0 on invalid id, missing key, or error', () => {
// Invalid id branch
assert.match(handlerSrc, /!id\s*\|\|\s*!VALID_IDS\.has\(id\)/);
// Missing key / non-array branch
assert.match(handlerSrc, /!payload\s*\|\|\s*!Array\.isArray\(payload\.history\)/);
// Catch block returns empty history (all three paths return fetchedAt '0')
const emptyReturns = [...handlerSrc.matchAll(/fetchedAt:\s*'0'/g)];
assert.ok(emptyReturns.length >= 3, `expected 3+ fetchedAt:'0' returns, got ${emptyReturns.length}`);
});
it('is wired into the SupplyChainService handler map', () => {
assert.match(handlerMapSrc, /import\s+\{\s*getChokepointHistory\s*\}/);
assert.match(handlerMapSrc, /\bgetChokepointHistory,/);
});
});
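The assertions above pin down three empty-response paths in the handler. A minimal sketch of the shape they expect — identifiers and the reduced `VALID_IDS` set are assumptions, and the async `getCachedJson` read is stood in for by a synchronous `readKey` callback:

```typescript
// Hypothetical sketch: invalid id, missing/malformed key, and read error
// all collapse to the same empty shape with fetchedAt '0', which the
// client renders as "Transit history unavailable".
interface TransitDayCount { date: string; total: number }
interface HistoryResponse { chokepointId: string; history: TransitDayCount[]; fetchedAt: string }

const HISTORY_KEY_PREFIX = 'supply_chain:transit-summaries:history:v1:';
const VALID_IDS = new Set(['suez', 'hormuz', 'panama']); // assumed subset of the 13 canonical IDs

const empty = (id: string): HistoryResponse => ({ chokepointId: id, history: [], fetchedAt: '0' });

function getChokepointHistorySketch(
  id: string,
  readKey: (key: string) => unknown, // stands in for getCachedJson(key, true)
): HistoryResponse {
  if (!id || !VALID_IDS.has(id)) return empty(id); // invalid-id branch
  try {
    const payload = readKey(`${HISTORY_KEY_PREFIX}${id}`) as { history?: unknown } | null;
    if (!payload || !Array.isArray(payload.history)) return empty(id); // missing-key branch
    return { chokepointId: id, history: payload.history as TransitDayCount[], fetchedAt: String(Date.now()) };
  } catch {
    return empty(id); // read-error branch
  }
}
```

Returning one uniform empty shape (rather than a 4xx/5xx) keeps the CF edge cache and the client's no-throw contract simple.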
describe('proto wiring', () => {
const protoSrc = readFileSync(
resolve(root, 'proto/worldmonitor/supply_chain/v1/service.proto'),
'utf-8',
);
const historyProto = readFileSync(
resolve(root, 'proto/worldmonitor/supply_chain/v1/get_chokepoint_history.proto'),
'utf-8',
);
it('service.proto imports and registers GetChokepointHistory', () => {
assert.match(protoSrc, /import "worldmonitor\/supply_chain\/v1\/get_chokepoint_history\.proto"/);
assert.match(protoSrc, /rpc GetChokepointHistory\(GetChokepointHistoryRequest\) returns \(GetChokepointHistoryResponse\)/);
assert.match(protoSrc, /path:\s*"\/get-chokepoint-history",\s*method:\s*HTTP_METHOD_GET/);
});
it('GetChokepointHistoryRequest requires chokepoint_id as a query param', () => {
assert.match(historyProto, /\(buf\.validate\.field\)\.required\s*=\s*true/);
assert.match(historyProto, /\(sebuf\.http\.query\)\s*=\s*\{name:\s*"chokepointId"\}/);
});
it('GetChokepointHistoryResponse carries chokepoint_id, history, fetched_at', () => {
assert.match(historyProto, /string chokepoint_id\s*=\s*1/);
assert.match(historyProto, /repeated TransitDayCount history\s*=\s*2/);
assert.match(historyProto, /int64 fetched_at\s*=\s*3/);
});
});
describe('Redis timeout observability', () => {
const redisSrc = readFileSync(resolve(root, 'server/_shared/redis.ts'), 'utf-8');
it('logs [REDIS-TIMEOUT] with key and timeoutMs on AbortError', () => {
// Grepable tag that log drains / Sentry-Vercel integration can pick up —
// before this, large-payload timeouts silently returned null and consumers
// cached zero-state. See docs/plans/chokepoint-rpc-payload-split.md.
assert.match(redisSrc, /isTimeout\s*=\s*err instanceof Error && err\.name === 'AbortError'/);
assert.match(redisSrc, /\[REDIS-TIMEOUT\] getCachedJson key=\$\{key\} timeoutMs=\$\{REDIS_OP_TIMEOUT_MS\}/);
});
});
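The tagging this test greps for can be sketched in isolation. Assumptions: the function name, the `log` injection, and the `[REDIS-ERROR]` branch are illustrative — the source only confirms the `[REDIS-TIMEOUT]` shape and the AbortError classification:

```typescript
// Hypothetical sketch of the AbortError classification in getCachedJson's
// catch path: timeouts get a grepable tag with key + budget before the
// function returns null, instead of being swallowed silently.
const REDIS_OP_TIMEOUT_MS = 1500; // the 1.5s per-op budget described above

function tagRedisFailure(err: unknown, key: string, log: (line: string) => void): null {
  const isTimeout = err instanceof Error && err.name === 'AbortError';
  if (isTimeout) {
    log(`[REDIS-TIMEOUT] getCachedJson key=${key} timeoutMs=${REDIS_OP_TIMEOUT_MS}`);
  } else {
    // Assumed companion tag for non-timeout failures, kept distinct so
    // drains can alert on timeouts specifically.
    log(`[REDIS-ERROR] getCachedJson key=${key}: ${err instanceof Error ? err.message : String(err)}`);
  }
  return null; // consumers still receive null, but the timeout is now visible
}
```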

View File

@@ -81,19 +81,43 @@ describe('SupplyChainPanel transit chart mount contract', () => {
assert.ok(body.includes('disconnect'), 'observer callback must disconnect itself');
});
-it('mountTransitChart checks for chart element and transit history before mounting', () => {
-// The mount function should guard against missing DOM elements and missing data
+it('mountTransitChart lazy-loads history via fetchChokepointHistory and mounts on resolve', () => {
+// After the payload-split: history is NOT part of the main status RPC.
+// mountTransitChart must (1) find the chart element by cp name,
+// (2) check a session cache, (3) call fetchChokepointHistory on miss,
+// (4) mount the chart on the live element when the fetch resolves.
assert.ok(
panelSrc.includes('querySelector(`[data-chart-cp='),
'must query for chart container element by chokepoint name'
);
assert.ok(
-panelSrc.includes('transitSummary?.history?.length'),
-'must check transitSummary.history exists before mounting'
+panelSrc.includes('fetchChokepointHistory('),
+'must lazy-fetch history via fetchChokepointHistory RPC'
);
assert.ok(
panelSrc.includes('this.historyCache'),
'must cache history results for the session (avoid refetch on re-expand)'
);
assert.ok(
panelSrc.includes('transitChart.mount('),
-'must call transitChart.mount with element and history data'
+'must call transitChart.mount when history resolves'
);
});
it('does NOT cache empty/error results — session-sticky regression guard', () => {
// Caching [] or on error would poison the chokepoint for the whole
// session (transient miss → never retries). Only cache on non-empty
// success. Empty/error show the "unavailable" placeholder but leave
// the cache untouched so the next re-expand retries.
assert.ok(
!panelSrc.match(/historyCache\.set\([^,]+,\s*\[\]\)/),
'panel must NOT cache empty arrays'
);
// The success branch gates the set() on resp.history.length — match the
// conditional-set pattern inside the .then() block.
assert.ok(
/if\s*\(resp\.history\.length\)\s*\{[\s\S]*?historyCache\.set\(/.test(panelSrc),
'panel must only cache on resp.history.length > 0'
);
});

View File

@@ -45,19 +45,54 @@ describe('seedTransitSummaries (relay)', () => {
assert.match(relaySrc, /seed-meta:supply_chain:transit-summaries/);
});
-it('summary object includes all required fields', () => {
+it('compact summary object includes all stat fields (history split out)', () => {
assert.match(relaySrc, /todayTotal:/);
assert.match(relaySrc, /todayTanker:/);
assert.match(relaySrc, /todayCargo:/);
assert.match(relaySrc, /todayOther:/);
assert.match(relaySrc, /wowChangePct:/);
-assert.match(relaySrc, /history:/);
assert.match(relaySrc, /riskLevel:/);
assert.match(relaySrc, /incidentCount7d:/);
assert.match(relaySrc, /disruptionPct:/);
assert.match(relaySrc, /anomaly/);
});
it('compact summary object does NOT inline history (payload-split guard)', () => {
// Matches the `summaries[cpId] = { ... }` block specifically — history
// belongs to the per-id key now, not the compact summary.
const block = relaySrc.match(/summaries\[cpId\]\s*=\s*\{([\s\S]*?)\};/);
assert.ok(block, 'compact summary assignment not found');
assert.doesNotMatch(block[1], /\bhistory:/);
});
it('writes per-id history keys via envelopeWrite', () => {
assert.match(relaySrc, /TRANSIT_SUMMARY_HISTORY_KEY_PREFIX/);
assert.match(relaySrc, /supply_chain:transit-summaries:history:v1:/);
// Per-id payload includes chokepointId, history, fetchedAt
assert.match(relaySrc, /chokepointId:\s*cpId,\s*history,\s*fetchedAt:\s*now/);
});
it('iterates the canonical chokepoint ID set (not Object.entries(pw))', () => {
// Partial-coverage regression guard: iterating over whatever pw carries
// silently drops missing chokepoints. RPC sees a partial summaries shape
// and caches zero-state rows for 5 min since upstreamUnavailable only
// fires on fully-empty. Writer must emit all 13 canonical IDs with
// zero-state fill for missing upstream data.
assert.match(relaySrc, /CANONICAL_IDS\s*=\s*Object\.keys\(CHOKEPOINT_THREAT_LEVELS\)/);
assert.match(relaySrc, /for\s*\(const cpId of CANONICAL_IDS\)/);
assert.doesNotMatch(relaySrc, /for\s*\(const \[cpId, cpData\] of Object\.entries\(pw\)\)/);
});
it('records actual upstream coverage (pwCovered) in seed-meta + envelope', () => {
// seed-meta recordCount must reflect pwCovered, not the always-13 canonical
// shape size — otherwise health.js can't distinguish healthy 13/13 from
// partial-upstream 10/13.
assert.match(relaySrc, /let\s+pwCovered\s*=\s*0/);
assert.match(relaySrc, /if\s*\(cpData\)\s*pwCovered\+\+/);
assert.match(relaySrc, /recordCount:\s*pwCovered/);
assert.match(relaySrc, /coverage shortfall/);
});
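The canonical-iteration rule these two tests pin down can be sketched as a pure function. The real writer lives in ais-relay.cjs; this TypeScript version uses hypothetical names and a two-field summary shape for brevity:

```typescript
// Hypothetical sketch: iterate ALL canonical IDs (never Object.entries(pw)),
// zero-state-fill entries missing from upstream, and count pwCovered from
// what upstream actually delivered so seed-meta can flag 10/13 coverage.
interface UpstreamEntry { history: number[]; wowChangePct: number }
interface CompactSummary { todayTotal: number; wowChangePct: number } // history intentionally absent

function buildCompactSummaries(
  canonicalIds: string[],
  pw: Record<string, UpstreamEntry | undefined>,
): { summaries: Record<string, CompactSummary>; pwCovered: number } {
  const summaries: Record<string, CompactSummary> = {};
  let pwCovered = 0;
  for (const cpId of canonicalIds) {
    const cpData = pw[cpId];
    if (cpData) pwCovered++;
    const history = cpData?.history ?? []; // concrete array even when upstream is missing
    summaries[cpId] = {
      todayTotal: history.length ? history[history.length - 1] : 0,
      wowChangePct: cpData?.wowChangePct ?? 0,
    };
  }
  return { summaries, pwCovered };
}
```

Emitting all 13 IDs keeps the RPC's summaries shape total (no `undefined` lookups → no accidental zero-state caching), while `pwCovered` preserves the real upstream signal for health checks.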
it('reads latestCorridorRiskData for riskLevel/incidentCount7d/disruptionPct', () => {
assert.match(relaySrc, /latestCorridorRiskData\?\.\[cpId\]/);
assert.match(relaySrc, /cr\?\.riskLevel/);
@@ -66,12 +101,19 @@ describe('seedTransitSummaries (relay)', () => {
});
it('reads pw from Redis for history and wowChangePct', () => {
-assert.match(relaySrc, /cpData\.history/);
-assert.match(relaySrc, /cpData\.wowChangePct/);
+// After canonical-coverage refactor, cpData is nullable (missing upstream),
+// so access is `cpData?.history` / `cpData?.wowChangePct` with zero-state
+// fallback for missing IDs.
+assert.match(relaySrc, /cpData\?\.history/);
+assert.match(relaySrc, /cpData\?\.wowChangePct/);
});
-it('calls detectTrafficAnomalyRelay with history and threat level', () => {
-assert.match(relaySrc, /detectTrafficAnomalyRelay\(cpData\.history,\s*threatLevel\)/);
+it('calls detectTrafficAnomalyRelay with local history binding', () => {
+// history is bound from `cpData?.history ?? []` before the anomaly call,
+// so detectTrafficAnomalyRelay runs on a concrete array even when the
+// canonical chokepoint is missing from this cycle's portwatch payload.
+assert.match(relaySrc, /const history = cpData\?\.history \?\? \[\]/);
+assert.match(relaySrc, /detectTrafficAnomalyRelay\(history,\s*threatLevel\)/);
});
it('wraps summaries in { summaries, fetchedAt } envelope', () => {
@@ -234,16 +276,13 @@ describe('get-chokepoint-status handler (source analysis)', () => {
assert.match(handlerSrc, /getCachedJson\(TRANSIT_SUMMARIES_KEY/);
});
-it('imports PortWatchData for fallback assembly', () => {
-assert.match(handlerSrc, /import.*PortWatchData/);
-});
it('does NOT import CorridorRiskData (uses local interface)', () => {
assert.doesNotMatch(handlerSrc, /import.*CorridorRiskData/);
});
-it('imports CANONICAL_CHOKEPOINTS for fallback relay-name mapping', () => {
-assert.match(handlerSrc, /import.*CANONICAL_CHOKEPOINTS/);
+it('does NOT import PortWatchData or CANONICAL_CHOKEPOINTS (fallback path removed)', () => {
+// Fallback against raw 500KB portwatch/corridorrisk keys was removed —
+// the compact transit-summaries key is authoritative; missing key now
+// surfaces as upstreamUnavailable=true rather than triggering a large
+// secondary read that times out at the 1.5s Redis budget.
+assert.doesNotMatch(handlerSrc, /import.*PortWatchData/);
+assert.doesNotMatch(handlerSrc, /import\s*\{\s*CANONICAL_CHOKEPOINTS\s*\}/);
});
it('does NOT import portwatchNameToId or corridorRiskNameToId', () => {
@@ -251,6 +290,22 @@ describe('get-chokepoint-status handler (source analysis)', () => {
assert.doesNotMatch(handlerSrc, /import.*corridorRiskNameToId/);
});
it('treats missing transit-summaries as upstreamUnavailable (silent-cache regression guard)', () => {
// Regression guard for the silent zero-state cache bug: before this fix,
// a null transit-summaries read produced 13 zero-state chokepoints that
// were cached for 5 min (REDIS_CACHE_TTL). Now we mark upstreamUnavailable
// so cachedFetchJson writes NEG_SENTINEL (120s) and retries on next poll.
assert.match(handlerSrc, /transitSummariesMissing/);
assert.match(handlerSrc, /const upstreamUnavailable\s*=\s*transitSummariesMissing/);
});
it('omits history from the transit summary response (lazy-loaded via GetChokepointHistory)', () => {
// Main status response no longer carries 180-day history per chokepoint —
// clients lazy-fetch via GetChokepointHistory on card expand. Field stays
// declared for proto compat but is always empty in this RPC.
assert.match(handlerSrc, /history:\s*\[\],\s*\n\s*riskLevel:\s*ts\.riskLevel/);
});
it('defines PreBuiltTransitSummary interface with all required fields', () => {
assert.match(handlerSrc, /interface PreBuiltTransitSummary/);
assert.match(handlerSrc, /todayTotal:\s*number/);
@@ -474,26 +529,20 @@ describe('CHOKEPOINT_THREAT_LEVELS relay-handler sync', () => {
});
// ---------------------------------------------------------------------------
-// 8. Handler reads pre-built summaries first, falls back to raw keys
+// 8. Handler reads ONLY the compact transit-summaries key (no fallback)
// ---------------------------------------------------------------------------
describe('handler transit data strategy', () => {
-it('reads TRANSIT_SUMMARIES_KEY as primary source', () => {
+it('reads TRANSIT_SUMMARIES_KEY as the only transit source', () => {
assert.match(handlerSrc, /TRANSIT_SUMMARIES_KEY/);
});
-it('has fallback keys for portwatch, corridorrisk, and transit counts', () => {
-assert.match(relaySrc, /PORTWATCH_FALLBACK_KEY/);
-assert.match(relaySrc, /CORRIDORRISK_FALLBACK_KEY/);
-assert.match(relaySrc, /TRANSIT_COUNTS_FALLBACK_KEY/);
-});
-it('fallback triggers only when pre-built summaries are empty', () => {
-assert.match(handlerSrc, /Object\.keys\(summaries\)\.length === 0/);
-});
-it('fallback builds summaries with detectTrafficAnomaly', () => {
-assert.match(handlerSrc, /buildFallbackSummaries/);
-assert.match(handlerSrc, /detectTrafficAnomaly/);
+it('does NOT reference removed fallback keys (portwatch / corridorrisk / chokepoint_transits)', () => {
+// Previously each of these was a ~500KB secondary read that stacked on
+// top of the 1.5s Redis read budget and timed out. Removed in payload-split PR.
+assert.doesNotMatch(handlerSrc, /PORTWATCH_FALLBACK_KEY/);
+assert.doesNotMatch(handlerSrc, /CORRIDORRISK_FALLBACK_KEY/);
+assert.doesNotMatch(handlerSrc, /TRANSIT_COUNTS_FALLBACK_KEY/);
+assert.doesNotMatch(handlerSrc, /buildFallbackSummaries/);
});
it('does NOT call getPortWatchTransits or fetchCorridorRisk (no upstream fetch)', () => {