feat(energy-atlas): live tanker map layer + contract (parity PR 3, plan U7-U8) (#3402)

* feat(energy-atlas): live tanker map layer + contract (PR 3, plan U7-U8)

Lands the third and final parity-push surface — per-vessel tanker positions
inside chokepoint bounding boxes, refreshed every 60s. Closes the visual
gap with peer reference energy-intel sites for the live AIS tanker view.

Per docs/plans/2026-04-25-003-feat-energy-parity-pushup-plan.md PR 3.
Codex-approved through 8 review rounds against origin/main @ 050073354.

U7 — Contract changes (relay + handler + proto + gateway + rate-limit + test):

- scripts/ais-relay.cjs: parallel `tankerReports` Map populated for AIS
  ship type 80-89 (tanker class) per ITU-R M.1371. SEPARATE from the
  existing `candidateReports` Map (military-only) so the existing
  military-detection consumer's contract stays unchanged. Snapshot
  endpoint extended to accept `bbox=swLat,swLon,neLat,neLon` + `tankers=true`
  query params, with bbox-filtering applied server-side. Tanker reports
  cleaned up on the same retention window as candidate reports; capped
  at 200 per response (10× headroom for global storage).
- proto/worldmonitor/maritime/v1/{get_,}vessel_snapshot.proto:
  - new `bool include_tankers = 6` request field
  - new `repeated SnapshotCandidateReport tanker_reports = 7` response
    field (reuses existing message shape; parallel to candidate_reports)
- server/worldmonitor/maritime/v1/get-vessel-snapshot.ts: REPLACES the
  prior 5-minute `with|without` cache with a request-keyed cache —
  (includeCandidates, includeTankers, quantizedBbox) — at 60s TTL for
  the live-tanker path and 5min TTL for the existing density/disruption
  consumers. Also adds 1° bbox quantization for cache-key reuse and a
  10° max-bbox guard (BboxTooLargeError) to prevent malicious clients
  from pulling all tankers through one query.
- server/gateway.ts: NEW `'live'` cache tier. CacheTier union extended;
  TIER_HEADERS + TIER_CDN_CACHE both gain entries with `s-maxage=60,
  stale-while-revalidate=60`. RPC_CACHE_TIER maps the maritime endpoint
  from `'no-store'` to `'live'` so the CDN absorbs concurrent identical
  requests across all viewers (without this, N viewers × 6 chokepoints
  hit AISStream upstream linearly).
- server/_shared/rate-limit.ts: ENDPOINT_RATE_POLICIES entry for the
  maritime endpoint at 60 req/min/IP — enough headroom for one user's
  6-chokepoint tab plus refreshes; flags only true scrape-class traffic.
- tests/route-cache-tier.test.mjs: regex extended to include `live` so
  the every-route-has-an-explicit-tier check still recognises the new
  mapping. Without this, the new tier would silently drop the maritime
  route from the validator's route map.
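The relay-side classification and bbox filtering above can be sketched as follows. This is a minimal TypeScript illustration under stated assumptions; the actual ais-relay.cjs identifiers and report shape differ, and `TankerReport`, `isTankerShipType`, and `filterByBbox` are hypothetical names:

```typescript
interface TankerReport {
  mmsi: number;
  lat: number;
  lon: number;
  shipType: number;
}

// ITU-R M.1371 assigns AIS ship types 80-89 to the tanker class.
function isTankerShipType(shipType: number): boolean {
  return shipType >= 80 && shipType <= 89;
}

// Server-side filter for the snapshot endpoint's
// `bbox=swLat,swLon,neLat,neLon` query param, with the 200-per-response cap.
function filterByBbox(
  reports: TankerReport[],
  bbox: { swLat: number; swLon: number; neLat: number; neLon: number },
  cap = 200,
): TankerReport[] {
  return reports
    .filter((r) =>
      r.lat >= bbox.swLat && r.lat <= bbox.neLat &&
      r.lon >= bbox.swLon && r.lon <= bbox.neLon)
    .slice(0, cap);
}
```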

U8 — LiveTankersLayer consumer:

- src/services/live-tankers.ts: per-chokepoint fetcher with 60s in-memory
  cache. Promise.allSettled — never .all — so one chokepoint failing
  doesn't blank the whole layer (failed zones serve last-known data).
  Sources bbox centroids from src/config/chokepoint-registry.ts
  (CORRECT location — server/.../_chokepoint-ids.ts strips lat/lon).
  Default chokepoint set: hormuz_strait, suez, bab_el_mandeb,
  malacca_strait, panama, bosphorus.
- src/components/DeckGLMap.ts: new `createLiveTankersLayer()` ScatterplotLayer
  styled by speed (anchored amber when speed < 0.5 kn, underway cyan,
  unknown gray); new `loadLiveTankers()` async loader with abort-controller
  cancellation. Layer instantiated when `mapLayers.liveTankers && this.liveTankers.length > 0`.
- src/config/map-layer-definitions.ts: `LayerDefinition` for `liveTankers`
  with `renderers: ['flat'], deckGLOnly: true` (matches existing
  storageFacilities/fuelShortages pattern). Added to `VARIANT_LAYER_ORDER.energy`
  near `ais` so getLayersForVariant() and sanitizeLayersForVariant()
  include it on the energy variant — without this addition the layer
  would be silently stripped even when toggled on.
- src/types/index.ts: `liveTankers?: boolean` on the MapLayers union.
- src/config/panels.ts: ENERGY_MAP_LAYERS + ENERGY_MOBILE_MAP_LAYERS
  both gain `liveTankers: true`. Default `false` everywhere else.
- src/services/maritime/index.ts: existing snapshot consumer pinned to
  `includeTankers: false` to satisfy the proto's new required field;
  preserves identical behavior for the AIS-density / military-detection
  surfaces.
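The allSettled fan-out in src/services/live-tankers.ts can be sketched like this. Names are illustrative (the real service wires the Connect-RPC client, the chokepoint registry, and the in-memory TTL cache):

```typescript
interface ChokepointZone { id: string; lat: number; lon: number; }
interface ZoneResult { id: string; tankers: unknown[]; }

async function fetchAllZones(
  zones: ChokepointZone[],
  fetchZone: (z: ChokepointZone) => Promise<unknown[]>,
  lastKnown: Map<string, unknown[]>,
): Promise<ZoneResult[]> {
  // Promise.allSettled (never .all) so one failing chokepoint
  // doesn't blank the whole layer.
  const settled = await Promise.allSettled(zones.map((z) => fetchZone(z)));
  return settled.map((res, i) => {
    const id = zones[i].id;
    if (res.status === 'fulfilled') {
      lastKnown.set(id, res.value); // remember for future failures
      return { id, tankers: res.value };
    }
    // Failed zone: serve last-known data instead of an empty layer.
    return { id, tankers: lastKnown.get(id) ?? [] };
  });
}
```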

Tests:
- npm run typecheck clean.
- 5 unit tests in tests/live-tankers-service.test.mjs cover the default
  chokepoint set (rejects ids that aren't in CHOKEPOINT_REGISTRY), the
  60s cache TTL pin (must match gateway 'live' tier s-maxage), and bbox
  derivation (±2° padding, total span under the 10° handler guard).
- tests/route-cache-tier.test.mjs continues to pass after the regex
  extension; the new maritime tier is correctly extracted.
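The bbox-derivation invariant those tests pin can be sketched as follows (illustrative names; padding and guard values are from the commit message):

```typescript
const BBOX_PAD_DEGREES = 2;   // ±2° padding around the chokepoint centroid
const MAX_BBOX_DEGREES = 10;  // handler-side max-bbox guard

function bboxAroundCentroid(lat: number, lon: number) {
  return {
    swLat: lat - BBOX_PAD_DEGREES,
    swLon: lon - BBOX_PAD_DEGREES,
    neLat: lat + BBOX_PAD_DEGREES,
    neLon: lon + BBOX_PAD_DEGREES,
  };
}

// A padded box spans 2 × 2° = 4° per dimension, always under the 10° guard.
function bboxSpanOk(b: { swLat: number; swLon: number; neLat: number; neLon: number }): boolean {
  return (b.neLat - b.swLat) <= MAX_BBOX_DEGREES && (b.neLon - b.swLon) <= MAX_BBOX_DEGREES;
}
```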

Defense in depth:
- THREE-layer cache (CDN 'live' tier → handler bbox-keyed 60s → service
  in-memory 60s) means concurrent users hit the relay sub-linearly.
- Server-side 200-vessel cap on tanker_reports + client-side cap;
  protects layer render perf even on a runaway relay payload.
- Bbox-size guard (10° max) prevents a single global-bbox query from
  exfiltrating every tanker.
- Per-IP rate limit at 60/min covers normal use; flags scrape-class only.
- Existing military-detection contract preserved: `candidate_reports`
  field semantics unchanged; consumers self-select via include_tankers
  vs include_candidates rather than the response field changing meaning.
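The three-layer freshness contract can be stated as a simple invariant (constants from the commit message; names are illustrative, not the real configuration identifiers):

```typescript
const GATEWAY_LIVE_S_MAXAGE_S = 60;   // 'live' tier s-maxage (CDN layer)
const HANDLER_LIVE_TTL_MS = 60_000;   // request-keyed handler cache, live path
const SERVICE_CACHE_TTL_MS = 60_000;  // client-side in-memory cache

// The 60s freshness contract holds only if all three layers agree;
// a longer inner TTL would serve stale fixes through a fresh CDN hit.
const contractHolds =
  GATEWAY_LIVE_S_MAXAGE_S * 1000 === HANDLER_LIVE_TTL_MS &&
  HANDLER_LIVE_TTL_MS === SERVICE_CACHE_TTL_MS;
```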

* fix(energy-atlas): wire LiveTankers loop + 400 bbox-range guard (PR3 review)

Three findings from review of #3402:

P1 — loadLiveTankers() was never called (DeckGLMap.ts:2999):
- Add ensureLiveTankersLoop() / stopLiveTankersLoop() helpers paired with
  the layer-enabled / layer-disabled branches in updateLayers(). The
  ensure helper kicks an immediate load + a 60s setInterval; idempotent
  so calling it on every layers update is safe.
- Wire stopLiveTankersLoop() into destroy() and into the layer-disabled
  branch so we don't hammer the relay when the layer is off.
- Layer factory now runs only when liveTankers.length > 0; ensureLoop
  fires on every observed-enabled tick so first-paint kicks the load
  even before the first tanker arrives.
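The idempotent loop pair can be sketched like this (the real DeckGLMap methods also hold the abort controller and layer wiring; `load` is a stand-in for loadLiveTankers):

```typescript
const REFRESH_MS = 60_000;
let liveTankersTimer: ReturnType<typeof setInterval> | null = null;

function ensureLiveTankersLoop(load: () => void): void {
  if (liveTankersTimer !== null) return; // idempotent: safe on every layers update
  load();                                // immediate load for first paint
  liveTankersTimer = setInterval(load, REFRESH_MS);
}

function stopLiveTankersLoop(): void {
  if (liveTankersTimer !== null) {
    clearInterval(liveTankersTimer);
    liveTankersTimer = null;
  }
}
```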

P1 — bbox lat/lon range guard (get-vessel-snapshot.ts:253):
- Out-of-range bboxes (e.g. ne_lat=200) previously passed the size
  guard (200-195=5° < 10°) but failed at the relay, which silently
  drops the bbox param and returns a global capped subset — making
  the layer appear to "work" with stale phantom data.
- Add isValidLatLon() check inside extractAndValidateBbox(): every
  corner must satisfy [-90, 90] / [-180, 180] before the size guard
  runs. Failure throws BboxValidationError.

P2 — BboxTooLargeError surfaced as 500 instead of 400:
- server/error-mapper.ts maps errors to HTTP status by checking
  `'statusCode' in error`. The previous BboxTooLargeError extended
  Error without that property, so the mapper fell through to
  "unhandled error" → 500.
- Rename to BboxValidationError, add `readonly statusCode = 400`.
  Mapper now surfaces it as HTTP 400 with a descriptive reason.
- Keep BboxTooLargeError as a backwards-compat alias so existing
  imports / tests don't break.
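The mapper contract this fix targets can be sketched as follows. The error class mirrors the one in the diff below; `httpStatusFor` is an illustrative stand-in for the real mapErrorToResponse branch in server/error-mapper.ts:

```typescript
class BboxValidationError extends Error {
  readonly statusCode = 400;
  constructor(reason: string) {
    super(`bbox invalid: ${reason}`);
    this.name = 'BboxValidationError';
  }
}

function httpStatusFor(err: unknown): number {
  // The mapper branches on `'statusCode' in error`; a plain Error
  // falls through to the unhandled-error 500 path.
  if (err instanceof Error && 'statusCode' in err) {
    return (err as { statusCode: number }).statusCode;
  }
  return 500;
}
```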

Tests:
- Updated tests/server-handlers.test.mjs structural test to pin the
  new class name + statusCode + lat/lon range checks. 24 tests pass.
- typecheck (src + api) clean.

* fix(energy-atlas): thread AbortSignal through fetchLiveTankers (PR3 review #2)

P2 — AbortController was created + aborted but signal was never passed
into the actual fetch path (DeckGLMap.ts:3048 / live-tankers.ts:100):
- Toggling the layer off, destroying the map, or starting a new refresh
  did not actually cancel in-flight network work. A slow older refresh
  could complete after a newer one and overwrite this.liveTankers with
  stale data.

Threading:
- fetchLiveTankers() now accepts `options.signal: AbortSignal`. Signal
  is passed through to client.getVesselSnapshot() per chokepoint via
  the Connect-RPC client's standard `{ signal }` option.
- Per-zone abort handling: bail early if signal is already aborted
  before the fetch starts (saves a wasted RPC + cache write); re-check
  after the fetch resolves so a slow resolver can't clobber cache
  after the caller cancelled.
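The two abort checks can be sketched per zone as follows (`rpc` stands in for the Connect-RPC client.getVesselSnapshot call with its `{ signal }` option; names are illustrative):

```typescript
async function fetchZoneGuarded<T>(
  rpc: (signal: AbortSignal) => Promise<T>,
  cache: Map<string, T>,
  key: string,
  signal: AbortSignal,
): Promise<T | null> {
  // Bail before the RPC starts: saves a wasted call and a cache write.
  if (signal.aborted) return null;
  const result = await rpc(signal);
  // Re-check after resolution: a slow resolver must not clobber the
  // cache after the caller has already cancelled.
  if (signal.aborted) return null;
  cache.set(key, result);
  return result;
}
```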

Stale-result race guard in DeckGLMap.loadLiveTankers:
- Capture controller in a local before storing on this.liveTankersAbort.
- After fetchLiveTankers resolves, drop the result if EITHER:
  - controller.signal is now aborted (newer load cancelled this one)
  - this.liveTankersAbort points to a different controller (a newer
    load already started + replaced us in the field)
- Without these guards, an older fetch that completed despite
  signal.aborted could still write to this.liveTankers and call
  updateLayers, racing with the newer load.
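The guard pair can be sketched like this, with the DeckGLMap instance state modeled as a mutable holder (illustrative names; the real method also triggers updateLayers):

```typescript
interface LoaderState { abort: AbortController | null; data: string[]; }

async function loadLiveTankersOnce(
  state: LoaderState,
  fetcher: (signal: AbortSignal) => Promise<string[]>,
): Promise<void> {
  state.abort?.abort();                     // cancel any older in-flight load
  const controller = new AbortController(); // capture in a local first
  state.abort = controller;
  const result = await fetcher(controller.signal);
  // Drop the result if a newer load cancelled this one...
  if (controller.signal.aborted) return;
  // ...or already replaced us in the field.
  if (state.abort !== controller) return;
  state.data = result;
}
```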

Tests: 1 new signature-pin test in tests/live-tankers-service.test.mjs
verifies fetchLiveTankers accepts options.signal — guards against future
edits silently dropping the parameter and re-introducing the race.
6 tests pass. typecheck clean.

* fix(energy-atlas): bound vessel-snapshot cache via LRU eviction (PR3 review)

Greptile P2 finding: the in-process cache Map grows unbounded across the
serverless instance lifetime. Each distinct (includeCandidates,
includeTankers, quantizedBbox) triple creates a slot that's never evicted.
With 1° quantization and a misbehaving client the keyspace is ~64,000
entries — realistic load is ~12, so a 128-slot cap leaves 10x headroom
while making OOM impossible.

Implementation:
- SNAPSHOT_CACHE_MAX_SLOTS = 128.
- evictIfNeeded() walks insertion order and evicts the first slot whose
  inFlight is null. Slots with active fetches are skipped to avoid
  orphaning awaiting callers; we accept brief over-cap growth until
  in-flight settles.
- touchSlot() re-inserts a slot at the end of Map insertion order on
  hit / in-flight join / fresh write so it counts as most-recently-used.

This commit is contained in:
Elie Habib
2026-04-25 17:56:23 +04:00
committed by GitHub
parent 0bca368a7d
commit 5c955691a9
21 changed files with 824 additions and 45 deletions


@@ -96,6 +96,10 @@ export const ENDPOINT_RATE_POLICIES: Record<string, EndpointRatePolicy> = {
// inline Upstash INCR. Gateway now enforces the same budget with per-IP
// keying in checkEndpointRateLimit.
'/api/scenario/v1/run-scenario': { limit: 10, window: '60 s' },
// Live tanker map (Energy Atlas): one user with 6 chokepoints × 1 call/min
// = 6 req/min/IP base load. 60/min headroom covers tab refreshes + zoom
// pans within a single user without flagging legitimate traffic.
'/api/maritime/v1/get-vessel-snapshot': { limit: 60, window: '60 s' },
};
const endpointLimiters = new Map<string, Ratelimit>();


@@ -26,11 +26,16 @@ export const serverOptions: ServerOptions = { onError: mapErrorToResponse };
// NOTE: This map is shared across all domain bundles (~3KB). Kept centralised for
// single-source-of-truth maintainability; the size is negligible vs handler code.
type CacheTier = 'fast' | 'medium' | 'slow' | 'slow-browser' | 'static' | 'daily' | 'no-store';
type CacheTier = 'fast' | 'medium' | 'slow' | 'slow-browser' | 'static' | 'daily' | 'no-store' | 'live';
// Three-tier caching: browser (max-age) → CF edge (s-maxage) → Vercel CDN (CDN-Cache-Control).
// CF ignores Vary: Origin so it may pin a single ACAO value, but this is acceptable
// since production traffic is same-origin and preview deployments hit Vercel CDN directly.
//
// 'live' tier (60s) is for endpoints with strict freshness contracts — the
// energy-atlas live-tanker map layer requires position fixes to refresh on
// the order of one minute. Every shorter-than-medium tier is custom; we keep
// the existing tiers untouched so unrelated endpoints aren't impacted.
const TIER_HEADERS: Record<CacheTier, string> = {
fast: 'public, max-age=60, s-maxage=300, stale-while-revalidate=60, stale-if-error=600',
medium: 'public, max-age=120, s-maxage=600, stale-while-revalidate=120, stale-if-error=900',
@@ -39,6 +44,7 @@ const TIER_HEADERS: Record<CacheTier, string> = {
static: 'public, max-age=600, s-maxage=3600, stale-while-revalidate=600, stale-if-error=14400',
daily: 'public, max-age=3600, s-maxage=14400, stale-while-revalidate=7200, stale-if-error=172800',
'no-store': 'no-store',
live: 'public, max-age=30, s-maxage=60, stale-while-revalidate=60, stale-if-error=300',
};
// Vercel CDN-specific cache TTLs — CDN-Cache-Control overrides Cache-Control for
@@ -52,10 +58,14 @@ const TIER_CDN_CACHE: Record<CacheTier, string | null> = {
static: 'public, s-maxage=14400, stale-while-revalidate=3600, stale-if-error=28800',
daily: 'public, s-maxage=86400, stale-while-revalidate=14400, stale-if-error=172800',
'no-store': null,
live: 'public, s-maxage=60, stale-while-revalidate=60, stale-if-error=300',
};
const RPC_CACHE_TIER: Record<string, CacheTier> = {
'/api/maritime/v1/get-vessel-snapshot': 'no-store',
// 'live' tier — bbox-quantized + tanker-aware caching upstream of the
// 60s in-handler cache, absorbing identical-bbox requests at the CDN
// before they hit this Vercel function. Energy Atlas live-tanker layer.
'/api/maritime/v1/get-vessel-snapshot': 'live',
'/api/market/v1/list-market-quotes': 'medium',
'/api/market/v1/list-crypto-quotes': 'medium',


@@ -27,10 +27,27 @@ const SEVERITY_MAP: Record<string, AisDisruptionSeverity> = {
high: 'AIS_DISRUPTION_SEVERITY_HIGH',
};
// Cache the two variants separately — candidate reports materially change
// payload size, and clients with no position callbacks should not have to
// wait on or pay for the heavier payload.
const SNAPSHOT_CACHE_TTL_MS = 300_000; // 5 min -- matches client poll interval
// In-process cache TTLs.
//
// The base snapshot (no candidates, no tankers, no bbox) is the high-traffic
// path consumed by the AIS-density layer + military-detection consumers. It
// re-uses the existing 5-minute cache because density / disruptions only
// change once per relay cycle.
//
// Tanker (live-tanker map layer) and bbox-filtered responses MUST refresh
// every 60s to honor the live-tanker freshness contract — anything longer
// shows stale vessel positions and collapses distinct bboxes onto one
// payload, defeating the bbox parameter entirely.
const SNAPSHOT_CACHE_TTL_BASE_MS = 300_000; // 5 min for non-bbox / non-tanker reads
const SNAPSHOT_CACHE_TTL_LIVE_MS = 60_000; // 60 s for live tanker / bbox reads
// 1° bbox quantization for cache-key reuse: a user panning a few decimal
// degrees should hit the same cache slot as another user nearby. Done
// server-side so the gateway 'live' tier sees identical query strings and
// the CDN absorbs the request before it reaches this handler.
function quantize(v: number): number {
return Math.floor(v);
}
interface SnapshotCacheSlot {
snapshot: VesselSnapshot | undefined;
@@ -38,28 +55,91 @@ interface SnapshotCacheSlot {
inFlight: Promise<VesselSnapshot | undefined> | null;
}
const cache: Record<'with' | 'without', SnapshotCacheSlot> = {
with: { snapshot: undefined, timestamp: 0, inFlight: null },
without: { snapshot: undefined, timestamp: 0, inFlight: null },
};
// Cache keyed by request shape: candidates, tankers, and quantized bbox.
// Replaces the prior `with|without` keying which would silently serve
// stale tanker data and collapse distinct bboxes.
//
// LRU-bounded: each distinct (includeCandidates, includeTankers, quantizedBbox)
// triple creates a slot. With 1° quantization and a misbehaving client, the
// keyspace is ~64,000 (180×360); without a cap the Map would grow unbounded
// across the lifetime of the serverless instance. Realistic load is ~12 slots
// (6 chokepoints × 2 flag combos), so a 128-slot cap leaves >10x headroom for
// edge panning while making OOM impossible.
const SNAPSHOT_CACHE_MAX_SLOTS = 128;
const cache = new Map<string, SnapshotCacheSlot>();
async function fetchVesselSnapshot(includeCandidates: boolean): Promise<VesselSnapshot | undefined> {
const slot = cache[includeCandidates ? 'with' : 'without'];
function touchSlot(key: string, slot: SnapshotCacheSlot): void {
// Move to end of insertion order so it's most-recently-used. Map iteration
// order = insertion order, so the first entry is the LRU candidate.
cache.delete(key);
cache.set(key, slot);
}
function evictIfNeeded(): void {
if (cache.size < SNAPSHOT_CACHE_MAX_SLOTS) return;
// Walk insertion order; evict the first slot that has no in-flight fetch.
// An in-flight slot is still in use by an awaiting caller — evicting it
// would orphan the promise.
for (const [k, s] of cache) {
if (s.inFlight === null) {
cache.delete(k);
return;
}
}
// All slots in flight — nothing to evict. Caller still inserts; we
// accept temporary growth past the cap until in-flight settles.
}
function cacheKeyFor(
includeCandidates: boolean,
includeTankers: boolean,
bbox: { swLat: number; swLon: number; neLat: number; neLon: number } | null,
): string {
const c = includeCandidates ? '1' : '0';
const t = includeTankers ? '1' : '0';
if (!bbox) return `${c}${t}|null`;
const sl = quantize(bbox.swLat);
const so = quantize(bbox.swLon);
const nl = quantize(bbox.neLat);
const no = quantize(bbox.neLon);
return `${c}${t}|${sl},${so},${nl},${no}`;
}
function ttlFor(includeTankers: boolean, bbox: unknown): number {
return includeTankers || bbox ? SNAPSHOT_CACHE_TTL_LIVE_MS : SNAPSHOT_CACHE_TTL_BASE_MS;
}
async function fetchVesselSnapshot(
includeCandidates: boolean,
includeTankers: boolean,
bbox: { swLat: number; swLon: number; neLat: number; neLon: number } | null,
): Promise<VesselSnapshot | undefined> {
const key = cacheKeyFor(includeCandidates, includeTankers, bbox);
let slot = cache.get(key);
if (!slot) {
evictIfNeeded();
slot = { snapshot: undefined, timestamp: 0, inFlight: null };
cache.set(key, slot);
}
const now = Date.now();
if (slot.snapshot && (now - slot.timestamp) < SNAPSHOT_CACHE_TTL_MS) {
const ttl = ttlFor(includeTankers, bbox);
if (slot.snapshot && (now - slot.timestamp) < ttl) {
touchSlot(key, slot);
return slot.snapshot;
}
if (slot.inFlight) {
touchSlot(key, slot);
return slot.inFlight;
}
slot.inFlight = fetchVesselSnapshotFromRelay(includeCandidates);
slot.inFlight = fetchVesselSnapshotFromRelay(includeCandidates, includeTankers, bbox);
try {
const result = await slot.inFlight;
if (result) {
slot.snapshot = result;
slot.timestamp = Date.now();
touchSlot(key, slot);
}
return result ?? slot.snapshot; // serve stale on relay failure
} finally {
@@ -87,13 +167,31 @@ function toCandidateReport(raw: any): SnapshotCandidateReport | null {
};
}
async function fetchVesselSnapshotFromRelay(includeCandidates: boolean): Promise<VesselSnapshot | undefined> {
async function fetchVesselSnapshotFromRelay(
includeCandidates: boolean,
includeTankers: boolean,
bbox: { swLat: number; swLon: number; neLat: number; neLon: number } | null,
): Promise<VesselSnapshot | undefined> {
try {
const relayBaseUrl = getRelayBaseUrl();
if (!relayBaseUrl) return undefined;
const params = new URLSearchParams();
params.set('candidates', includeCandidates ? 'true' : 'false');
if (includeTankers) params.set('tankers', 'true');
if (bbox) {
// Quantized bbox: prevents the relay from caching one URL per
// floating-point pixel as users pan. Same quantization as the
// handler-side cache key so they stay consistent.
const sl = quantize(bbox.swLat);
const so = quantize(bbox.swLon);
const nl = quantize(bbox.neLat);
const no = quantize(bbox.neLon);
params.set('bbox', `${sl},${so},${nl},${no}`);
}
const response = await fetch(
`${relayBaseUrl}/ais/snapshot?candidates=${includeCandidates ? 'true' : 'false'}`,
`${relayBaseUrl}/ais/snapshot?${params.toString()}`,
{
headers: getRelayHeaders(),
signal: AbortSignal.timeout(10000),
@@ -141,6 +239,9 @@ async function fetchVesselSnapshotFromRelay(includeCandidates: boolean): Promise
const candidateReports = (includeCandidates && Array.isArray(data.candidateReports))
? data.candidateReports.map(toCandidateReport).filter((r: SnapshotCandidateReport | null): r is SnapshotCandidateReport => r !== null)
: [];
const tankerReports = (includeTankers && Array.isArray(data.tankerReports))
? data.tankerReports.map(toCandidateReport).filter((r: SnapshotCandidateReport | null): r is SnapshotCandidateReport => r !== null)
: [];
return {
snapshotAt: Date.now(),
@@ -153,6 +254,7 @@ async function fetchVesselSnapshotFromRelay(includeCandidates: boolean): Promise
messages: Number.isFinite(Number(rawStatus.messages)) ? Number(rawStatus.messages) : 0,
},
candidateReports,
tankerReports,
};
} catch {
return undefined;
@@ -163,14 +265,79 @@ async function fetchVesselSnapshotFromRelay(includeCandidates: boolean): Promise
// RPC handler
// ========================================================================
// Bbox-size guard: reject requests where either dimension exceeds 10°. This
// prevents a malicious or buggy client from requesting a global box and
// pulling every tanker through one query.
const MAX_BBOX_DEGREES = 10;
/**
* 400-class bbox validation error. Carries `statusCode = 400` so
* server/error-mapper.ts surfaces it as HTTP 400 (the mapper branches
* on `'statusCode' in error`; a plain Error would fall through to
* "unhandled error" → 500). Used for both the size guard and the
* lat/lon range guard.
*
* Range checks matter because the relay silently DROPS a malformed
* bbox param and serves a global capped tanker subset — making the
* layer appear to "work" with stale data instead of failing loudly.
*/
export class BboxValidationError extends Error {
readonly statusCode = 400;
constructor(reason: string) {
super(`bbox invalid: ${reason}`);
this.name = 'BboxValidationError';
}
}
// Backwards-compatible alias for tests that imported BboxTooLargeError.
// Prefer BboxValidationError for new code.
export const BboxTooLargeError = BboxValidationError;
function isValidLatLon(lat: number, lon: number): boolean {
return (
Number.isFinite(lat) && Number.isFinite(lon) &&
lat >= -90 && lat <= 90 && lon >= -180 && lon <= 180
);
}
function extractAndValidateBbox(req: GetVesselSnapshotRequest): { swLat: number; swLon: number; neLat: number; neLon: number } | null {
const sw = { lat: Number(req.swLat), lon: Number(req.swLon) };
const ne = { lat: Number(req.neLat), lon: Number(req.neLon) };
// All zeroes (the default for unset proto doubles) → no bbox.
if (sw.lat === 0 && sw.lon === 0 && ne.lat === 0 && ne.lon === 0) {
return null;
}
if (!isValidLatLon(sw.lat, sw.lon)) {
throw new BboxValidationError('sw corner outside lat/lon domain (-90..90 / -180..180)');
}
if (!isValidLatLon(ne.lat, ne.lon)) {
throw new BboxValidationError('ne corner outside lat/lon domain (-90..90 / -180..180)');
}
if (sw.lat > ne.lat || sw.lon > ne.lon) {
throw new BboxValidationError('sw corner must be south-west of ne corner');
}
if (ne.lat - sw.lat > MAX_BBOX_DEGREES || ne.lon - sw.lon > MAX_BBOX_DEGREES) {
throw new BboxValidationError(`each dimension must be ≤ ${MAX_BBOX_DEGREES} degrees`);
}
return { swLat: sw.lat, swLon: sw.lon, neLat: ne.lat, neLon: ne.lon };
}
export async function getVesselSnapshot(
_ctx: ServerContext,
req: GetVesselSnapshotRequest,
): Promise<GetVesselSnapshotResponse> {
try {
const snapshot = await fetchVesselSnapshot(Boolean(req.includeCandidates));
const bbox = extractAndValidateBbox(req);
const snapshot = await fetchVesselSnapshot(
Boolean(req.includeCandidates),
Boolean(req.includeTankers),
bbox,
);
return { snapshot };
} catch {
} catch (err) {
// BboxValidationError carries statusCode=400; rethrowing lets the
// gateway error-mapper surface it as HTTP 400 with the reason string.
if (err instanceof BboxValidationError) throw err;
return { snapshot: undefined };
}
}


@@ -257,7 +257,14 @@ async function fetchChokepointData(): Promise<ChokepointFetchResult> {
const [navResult, vesselResult, transitSummariesData, flowsData] = await Promise.all([
listNavigationalWarnings(ctx, { area: '', pageSize: 0, cursor: '' }).catch((): ListNavigationalWarningsResponse => { navFailed = true; return { warnings: [], pagination: undefined }; }),
getVesselSnapshot(ctx, { neLat: 90, neLon: 180, swLat: -90, swLon: -180, includeCandidates: false }).catch((): GetVesselSnapshotResponse => { vesselFailed = true; return { snapshot: undefined }; }),
// All-zero bbox = "no filter, full snapshot" per the new bbox extractor
// in get-vessel-snapshot.ts. Previously this passed (-90, -180, 90, 180)
// because the handler ignored bbox entirely; the new 10° max-bbox guard
// (added for the live-tanker contract) would reject that range. This
// call doesn't need bbox filtering — it wants the global density +
// disruption surface — so pass zeros and skip both candidate and tanker
// payload tiers.
getVesselSnapshot(ctx, { neLat: 0, neLon: 0, swLat: 0, swLon: 0, includeCandidates: false, includeTankers: false }).catch((): GetVesselSnapshotResponse => { vesselFailed = true; return { snapshot: undefined }; }),
getCachedJson(TRANSIT_SUMMARIES_KEY, true).catch(() => null) as Promise<TransitSummariesPayload | null>,
getCachedJson(FLOWS_KEY, true).catch(() => null) as Promise<Record<string, FlowEstimateEntry> | null>,
]);