Mechanical fixes across 13 files:
- isNaN() → Number.isNaN() (all values already numeric from parseFloat/parseInt)
- let → const where never reassigned
- Math.pow() → ** operator
- Unnecessary continue in for loop
- Useless string escape in test description
- Missing parseInt radix parameter
- Remove unused private class member (write-only counter)
- Prefix unused function parameter with _
Config: suppress noImportantStyles (CSS !important is intentional) and
useLiteralKeys (bracket notation used for computed/dynamic keys) in
biome.json. Remaining 49 warnings are all noExcessiveCognitiveComplexity
(already configured as warn, safe to address incrementally).
When a seed fetches data but validation rejects it (e.g. FIRMS API
returns 0 fires due to timeout), extend the existing key's TTL
instead of letting it expire. Old data survives until the next
successful fetch. Applies to all seeds using runSeed().
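A minimal sketch of this keep-stale-on-failure pattern (the redis client shape and the function name are illustrative assumptions, not the actual runSeed API):

```javascript
// Sketch: when validation rejects freshly fetched data, extend the
// existing key's TTL instead of letting the old value expire.
// `redis` is a hypothetical client with set/get/expire; names are
// illustrative, not the project's actual helpers.
async function seedWithTtlExtension(redis, key, fetchFn, validateFn, ttlSec) {
  const data = await fetchFn();
  if (validateFn(data)) {
    await redis.set(key, JSON.stringify(data), ttlSec);
    return { wrote: true };
  }
  // Validation failed (e.g. API returned 0 rows due to a timeout):
  // keep serving the previous value until the next successful fetch.
  const existing = await redis.get(key);
  if (existing !== null) await redis.expire(key, ttlSec);
  return { wrote: false, extended: existing !== null };
}
```

The old value is never overwritten on a bad fetch; only its expiry is pushed out.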
* fix(trade): align flows cache key with seed (US vs World, not China)
The seed writes trade:flows:v1:840:000:10 (US vs World) but the
data-loader requested trade:flows:v1:840:156:10 (US vs China),
causing perpetual cache misses and a hidden Flows tab.
* feat(seed): add bilateral trade flow pairs (US-China, US-Germany, etc.)
Seed now writes both reporter-vs-World AND key bilateral pairs
so switching between global and bilateral views hits warm cache.
* feat(seed): add World-China and World-US bilateral flow pairs
* fix(trade): revert flows to US-China (840/156), seed now covers this key
The bilateral seed entries now write trade:flows:v1:840:156:10,
so the original US-China request hits cache. Keeps the panel
showing bilateral data consistent with the tariffs partner.
* fix(aviation): stop Vercel from calling AviationStack directly
- get-airport-ops-summary: read from relay seed cache (aviation:delays:intl:v3)
instead of calling fetchAviationStackDelays() on every cache miss
- list-airport-flights + get-flight-status: proxy through Railway relay
/aviationstack endpoint instead of calling AviationStack from Vercel edge
- Add /aviationstack proxy endpoint to ais-relay with 2min in-memory cache
Vercel should NEVER call external paid APIs directly. Railway relay is
the sole egress point for AviationStack (gold standard).
* fix(config): update aviationStack feature to require WS_RELAY_URL
Aviation handlers now proxy through Railway relay instead of calling
AviationStack directly. Update runtime-config to reflect the actual
dependency.
Hourly cron + 3600s TTL = data expires right as the next seed starts,
causing a ~30s EMPTY window. Bumped to 4800s (80min) so old data
persists while the new seed runs.
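The invariant behind this fix (and the later 1h-TTL/1h-cron fix) can be stated as a one-line check; the helper name and default buffer are illustrative assumptions:

```javascript
// Sketch: a seeded key must outlive the cron interval by enough buffer
// to cover seed runtime and container cold starts.
function ttlCoversCronGap(cronIntervalSec, ttlSec, bufferSec = 600) {
  return ttlSec >= cronIntervalSec + bufferSec;
}
```

`ttlCoversCronGap(3600, 3600)` is false (the EMPTY-window case above); `ttlCoversCronGap(3600, 4800)` is true.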
* feat(supply-chain): add SCFI, CCFI, and BDI freight indices to shipping tab
Transform the Shipping Rates tab from 2 lagging monthly FRED indices into
a real-time freight cost dashboard with container and bulk shipping rates.
Seed script: add fetchSCFI/fetchCCFI (SSE JSON API) and fetchBDI (HandyBulk
HTML scrape) with inline history accumulation using source observation dates.
Handler: make cache-only (seed is sole aggregator, no FRED fallback on miss).
Panel: group indices into Container Rates, Bulk Shipping, Economic Indicators.
Tests: 26 functional tests with fixture data for parsers, history, and handler.
* fix(supply-chain): use raw Redis read and correct SCFI composite unit
- Handler: switch from cachedFetchJson (env-prefixed) to getCachedJson(key, true)
so preview deployments read the unprefixed seed key correctly
- Seed: SCFI composite is a dimensionless index, not USD/TEU (route-level unit)
- Tests: update assertions to match both fixes
* feat(trade): add US Treasury customs revenue to Trade Policy panel
US customs duties revenue spiked 4-5x under Trump tariffs (from
$7B/month to $27-31B/month) but the WTO tariff data only goes to
2024. Adds Treasury MTS data showing monthly customs revenue.
- Add GetCustomsRevenue RPC (proto, handler, cache tier)
- Add Treasury fetch to seed-supply-chain-trade.mjs (free API, no key)
- Add Revenue tab to TradePolicyPanel with FYTD YoY comparison
- Fix WTO gate: per-tab gating so Revenue works without WTO key
- Wire bootstrap hydration, health, seed-health tracking
* test(trade): add customs revenue feature tests
22 structural tests covering:
- Handler: raw key mode, empty-cache behavior, correct Redis key
- Seed: Treasury API URL, classification filter, timeout, row
validation, amount conversion, sort order, seed-meta naming
- Panel: WTO gate fix (per-tab not panel-wide), revenue tab
defaults when WTO key missing, dynamic FYTD comparison
- Client: no WTO feature gate, bootstrap hydration, type exports
* fix(trade): align FYTD comparison by fiscal month count
Prior FY comparison was filtering by calendar month, which compared
5 months of FY2026 (Oct-Feb) against only 2 months of FY2025
(Jan-Feb), inflating the YoY percentage. Now takes the first N
months of the prior FY matching the current FY month count.
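The fiscal-month alignment could look like this sketch (row shape and field names are assumptions for illustration, not the handler's actual types):

```javascript
// Sketch: compare FYTD totals over the same number of fiscal months.
// `rows` are assumed sorted by fiscal month within each fiscal year.
function fytdComparison(rows, currentFy) {
  const current = rows.filter((r) => r.fy === currentFy);
  const prior = rows
    .filter((r) => r.fy === currentFy - 1)
    .slice(0, current.length); // first N fiscal months of the prior FY
  const sum = (xs) => xs.reduce((acc, r) => acc + r.amount, 0);
  const currentTotal = sum(current);
  const priorTotal = sum(prior);
  return {
    currentTotal,
    priorTotal,
    yoyPct: priorTotal ? ((currentTotal - priorTotal) / priorTotal) * 100 : null,
  };
}
```

Taking `slice(0, current.length)` instead of filtering by calendar month is what keeps a 5-month FYTD from being compared against a 2-month one.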
* fix(trade): register treasury_revenue DataSourceId and localize revenue tab
- Add treasury_revenue to DataSourceId union type so freshness
tracking actually works (was silently ignored)
- Register in data-freshness.ts source config + gap messages
- Add i18n keys: revenue tab label, empty state, unavailable banner
- Update infoTooltip to include Revenue tab description
* fix(trade): complete revenue tab localization
Use t() for all remaining hardcoded strings: footer source labels,
FYTD summary headline, prior-year comparison, and table column
headers. Wire the fytdLabel/vsPriorFy keys that were added but
not used.
* fix(test): update revenue source assertion for localized string
Kalshi multi-outcome events return market titles like "Before 2035",
"Rhode Island", "Johnny Depp" which are meaningless without the parent
event context. Now combines event title with market title when the
market title is short and doesn't contain a question mark.
Before: "Before 2035" (KALSHI)
After: "Will AGI be achieved?: Before 2035" (KALSHI)
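A sketch of the title-combining heuristic (the 40-character cutoff is an assumed threshold, not necessarily the real one):

```javascript
// Sketch: prefix the parent event title when the market title is short
// and is not itself a question. Threshold is illustrative.
function displayTitle(eventTitle, marketTitle) {
  const isStandalone = marketTitle.includes('?') || marketTitle.length > 40;
  if (isStandalone || !eventTitle) return marketTitle;
  return `${eventTitle}: ${marketTitle}`;
}
```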
The corridorrisk raw key (2h TTL) expires between hourly seed cycles,
causing health to report EMPTY even though data flows correctly through
transit-summaries:v1.
- Increase CORRIDOR_RISK_TTL from 2h to 4h (3 retries before expiry)
- Add corridorrisk to ON_DEMAND_KEYS (WARN instead of CRIT when empty)
The top-level import crashes seed-forecasts on Railway when the
package isn't installed. Dynamic import defers the load to when
S3 mode is actually used, allowing the seed to run without the
SDK when R2 is not configured.
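The deferred-load pattern could be sketched like this (`loadOptional` is an illustrative helper, not the seed's actual code):

```javascript
// Sketch: defer an optional dependency to first use so the script can
// start even when the package is not installed.
async function loadOptional(specifier) {
  try {
    return await import(specifier);
  } catch (err) {
    if (err.code === 'ERR_MODULE_NOT_FOUND' || err.code === 'MODULE_NOT_FOUND') {
      return null; // package absent: caller decides whether that is fatal
    }
    throw err; // real errors (syntax, runtime) still propagate
  }
}
```

Only when S3 mode is actually selected would the seed attempt the import and fail with a clear message if it returns null.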
* feat(forecast): add AI Forecasts prediction module (Pro-tier)
MiroFish-inspired prediction engine that generates structured forecasts
across 6 domains (conflict, market, supply chain, political, military,
infrastructure) using existing WorldMonitor data streams.
- Proto definitions for ForecastService with GetForecasts RPC
- Dedicated seed script (seed-forecasts.mjs) with 6 domain detectors,
cross-domain cascade resolver, prediction market calibration, and
trend detection via prior snapshot comparison
- Premium-gated RPC handler (PREMIUM_RPC_PATHS enforcement)
- Lazy-loaded ForecastPanel with domain filters, probability bars,
trend arrows, signal evidence, and cascade links
- Health monitoring integration (seed-meta freshness tracking)
- Refresh scheduler with API key guard
* test(forecast): add 47 unit tests for forecast detectors and utilities
Covers forecastId, normalize, resolveCascades, calibrateWithMarkets,
computeTrends, and smoke tests for all 6 domain detectors. Exports
testable functions from seed script with direct-run guard.
* fix(forecast): domain mismatch 'infra' vs 'infrastructure', add panel category
- Seed script used 'infra' but ForecastPanel filtered on 'infrastructure',
causing Infra tab to show zero results
- Added 'forecast' to intelligence category in PANEL_CATEGORY_MAP
* fix(forecast): move CSS to one-time injection, improve type safety
- P2: Move style block from setContent to one-time document.head injection
to prevent CSS accumulation on repeated renders
- P3: Replace +toFixed(3) with Math.round for readability in seed script
- P3: Use Forecast type instead of any[] in RPC handler filter
* fix(forecast): handle sebuf proto data shapes from Redis
Detectors now normalize CII scores from server-side proto format
(combinedScore, TREND_DIRECTION_RISING, region) to uniform shape.
Outage severity handles proto enum format (SEVERITY_LEVEL_HIGH).
Added confidence floor of 0.3 for single-source predictions.
Verified against live Redis: 2 predictions generated (Iran infra
shutdown, IL political instability).
* feat(forecast): unlock AI Forecasts on web, lock desktop only (trial)
- Remove forecast RPC from PREMIUM_RPC_PATHS (web access is free)
- Panel locked on desktop only (same as oref-sirens/telegram-intel)
- Remove API key guards from data-loader and refresh scheduler
- Web users get full access during trial period
* chore: regenerate proto types with make generate
Re-ran make generate after rebasing on main. Plugin v0.7.0 dropped
@ts-nocheck from its output, so it was re-added to all 50 generated files.
Fixed 4 type errors from proto codegen changes:
- MarketSource enum -> string union type
- TemporalAnomalyProto -> TemporalAnomaly rename
- webcam lastUpdated number -> string
* chore: add proto freshness check to pre-push hook
Runs make generate before push and compares checksums of generated files.
If proto types are stale, blocks push with instructions to regenerate.
Skips gracefully if buf CLI is not installed.
* fix(forecast): use chokepoints v4 key, include ciiContribution in unrest
- P1: Switch chokepoints input from stale v2 to active v4 Redis key,
matching bootstrap.js and cache-keys.ts
- P2: Add ciiContribution to unrest component fallback chain in
normalizeCiiEntry so political detector reads the correct sebuf field
* feat(forecast): Phase 2 LLM scenario enrichment + confidence model
MiroFish-inspired enhancements:
- LLM scenario narratives via Groq/OpenRouter (narrative-only, no numeric
adjustment). Evidence-grounded prompts with mandatory signal citation
and few-shot examples from MiroFish's SECTION_SYSTEM_PROMPT_TEMPLATE.
- Top-4 predictions batched into single LLM call for cost efficiency.
- News context from newsInsights attached to all predictions for LLM
prompt grounding (NOT in signals, cannot affect confidence).
- Deterministic confidence model: source diversity via SIGNAL_TO_SOURCE
mapping (deduplicates cii+cii_delta, theater+indicators) + calibration
agreement from prediction market drift. Floor 0.2, ceiling 1.0.
- Output validation: rejects scenarios without signal references.
- Truncated JSON repair for small model output.
- Structured JSON logging for LLM calls.
- Redis cache for LLM scenarios (1h TTL).
- 23 new tests (70 total), all passing.
- Live-tested: OpenRouter gemini-2.5-flash produces evidence-grounded
scenario narratives from real WorldMonitor data.
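The deterministic confidence model described above might be sketched as follows (the signal-to-source table and constants are illustrative assumptions, not the seed's actual values):

```javascript
// Sketch: map signal types to their underlying sources, deduplicate so
// correlated signals (cii + cii_delta) count once, then scale and clamp.
const SIGNAL_TO_SOURCE = {
  cii: 'cii', cii_delta: 'cii',          // both derive from the CII feed
  theater: 'theaters', indicators: 'theaters',
  outage: 'outages', market: 'markets',
};
function computeConfidence(signalTypes, { floor = 0.2, ceiling = 1.0 } = {}) {
  const sources = new Set(signalTypes.map((t) => SIGNAL_TO_SOURCE[t] ?? t));
  const raw = 0.2 + 0.15 * sources.size; // more independent sources, higher confidence
  return Math.min(ceiling, Math.max(floor, raw));
}
```

Dedupe is the point: two signals from the same feed should not read as two independent corroborations.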
* feat(forecast): Phase 3 multi-perspective scenarios, projections, data-driven cascades
MiroFish-inspired enhancements:
- Multi-perspective LLM analysis: top-2 predictions get strategic,
regional, and contrarian viewpoints via combined LLM call
- Probability projections: domain-specific decay curves (h24/d7/d30)
anchored to timeHorizon so probability equals projections[timeHorizon]
- Data-driven cascade rules: moved from hardcoded array to JSON config
(scripts/data/cascade-rules.json) with schema validation, named
predicate evaluators, unknown key rejection, and fallback to defaults
- 4 new cascade paths: infrastructure->supply_chain, infrastructure->market
(both requiresSeverity:total), conflict->political, political->market
- Proto: added Perspectives and Projections messages to Forecast
- ForecastPanel: renders projections row and conditional perspectives toggle
- 89 tests (19 new), all passing
- Live-tested: OpenRouter produces perspectives from real data
* feat(forecast): Phase 4 data utilization + entity graph
Fixes data gaps that prevented 4 of 6 detectors from firing:
- Input normalizers: chokepoint v4 shape + GPS hexes-to-zones mapping
- Chokepoint warm-ping (production-only, requires WM_API_BASE_URL)
- Lowered CII conflict threshold from 70 to 60, gated on level=high|critical
4 new standalone detectors:
- UCDP conflict zones (10+ events per country)
- Cyber threat concentration (5+ threats per country)
- GPS jamming in maritime shipping zones (5 regions)
- Prediction markets as signals (60-90% probability markets)
Entity-relationship graph (file-based, 38 nodes):
- Countries, theaters, commodities, chokepoints, alliances
- Alias table resolves both ISO codes and display names
- Graph cascade discovery links predictions across entities
Result: 51 predictions (up from 1-2), spanning conflict, infrastructure,
and supply chain domains. 112 tests, all passing.
* fix(forecast): redis cache format, signal source mapping, type safety
Fresh-eyes audit fixes:
- BUG: redisSet used wrong Upstash API format (POST body with {value,ex}
instead of command array ['SET',key,value,'EX',ttl]). LLM cache writes
were silently failing, causing fresh LLM calls every run.
- BUG: prediction_market signal type missing from SIGNAL_TO_SOURCE,
inflating confidence for market-derived predictions.
- CLEANUP: Remove unnecessary (f as any) casts in ForecastPanel since
generated Forecast type already has projections/perspectives fields.
- CLEANUP: Bump health maxStaleMin from 60 to 90 to avoid false STALE
alerts when LLM calls add latency to seed runs.
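The Upstash fix above amounts to posting the Redis command as a JSON array; a sketch of the corrected request builder (`buildSetRequest` is an illustrative name):

```javascript
// Sketch of the fix: Upstash's REST API takes a Redis command as a JSON
// array in the POST body, not an object.
function buildSetRequest(baseUrl, token, key, value, ttlSec) {
  return {
    url: baseUrl,
    init: {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'application/json',
      },
      // Correct: ['SET', key, value, 'EX', ttl]. The broken version
      // posted { value, ex }, which failed silently.
      body: JSON.stringify(['SET', key, value, 'EX', ttlSec]),
    },
  };
}
```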
* feat(forecast): headline-entity matching with news corroboration signals
Uses entity graph aliases to match headlines to predictions by
country/theater (excludes commodity/infrastructure nodes to prevent
false positives). Predictions with matching headlines get a
news_corroboration signal visible in the panel.
Also fixes buildUserPrompt to merge unique headlines from ALL
predictions in the LLM batch (was only reading preds[0].newsContext).
Live-tested: 13 of 51 predictions now have corroborating headlines
(Iran, Israel, Syria, Ukraine, etc). 116 tests, all passing.
* feat(forecast): add country-codes.json for headline-entity matching
56 countries with ISO codes, full names, and scoring keywords (extracted
from src/config/countries.ts + UCDP-relevant additions). Used by
attachNewsContext for richer headline matching via getSearchTermsForRegion
which combines country-codes + entity graph + keyword aliases.
14/57 predictions now have news corroboration (limited by headline
coverage, not matching quality: only 8 headlines currently available).
* feat(forecast): read 300 headlines from news digest instead of 8
Read news:digest:v1:full:en (300 headlines across 16 categories) instead
of just news:insights:v1 topStories (8 headlines). Fallback to topStories
if digest is unavailable.
Result: news corroboration jumped from 25% to 64% (38/59 predictions).
* fix(forecast): handle parenthetical country names in headline matching
Strip suffixes like '(Zaire)', '(Burma)', '(Soviet Union)' from UCDP
region names before matching against country-codes.json. Also use
includes() for reverse name lookup to catch partial matches.
Corroboration: 64% -> 69% (41/59). Remaining 18 unmatched are countries
with no current English-language news coverage.
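A sketch of the parenthetical stripping plus partial-match fallback (function and list names are illustrative):

```javascript
// Sketch: strip a trailing parenthetical like "(Zaire)" before lookup,
// then fall back to a partial-name match for cases like "DR Congo".
function matchCountry(regionName, countryNames) {
  const cleaned = regionName.replace(/\s*\([^)]*\)\s*$/, '').trim();
  const exact = countryNames.find(
    (n) => n.toLowerCase() === cleaned.toLowerCase(),
  );
  if (exact) return exact;
  // Reverse lookup with includes() to catch partial matches
  return countryNames.find(
    (n) => cleaned.toLowerCase().includes(n.toLowerCase()),
  ) ?? null;
}
```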
* fix(forecast): cache validated LLM output, add digest test, log cache errors
Fresh-eyes audit fixes:
- Combined LLM cache now stores only validated items (was caching raw
unvalidated output, serving potentially invalid scenarios on cache hit)
- redisSet logs warnings on failure (was silently swallowing all errors)
- Added digest-based test for attachNewsContext (primary path was untested)
- Fixed test arity: attachNewsContext(preds, news, digest) with 3 params
* fix(forecast): remove dead confidenceFromSources, reduce warm-ping timeout
- P2: Remove confidenceFromSources (dead code, computeConfidence overwrites
all initial confidence values). Inline the formula in original detectors.
- P3: Reduce warm-ping timeout from 30s to 15s (non-critical step)
- P3: Add trial status comment on forecast panel config
* fix(forecast): resolve ISO codes to country names, fix market detector, safe pre-push
P1 fixes from code review:
- CII ISO codes (IL, IR) now resolved to full country names (Israel, Iran)
via country-codes.json. Prevents substring false positives (IL matching
Chile) in event correlation. Uses word-boundary regex for matching.
- Market detector CII-to-theater mapping now uses entity graph traversal
instead of broken theater-name substring matching. Iran correctly maps
to Middle East theater via graph links.
- Pre-push hook no longer runs destructive git checkout on proto freshness
failure. Reports mismatch and exits without modifying worktree.
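The ISO-resolution fix could be sketched like this (the code table here is a tiny illustrative subset):

```javascript
// Sketch: resolve two-letter ISO codes to full names before correlating,
// using a word-boundary regex so "IL" cannot match inside "Chile" and
// "Iran" does not match inside "Iranian".
const ISO_TO_NAME = { IL: 'Israel', IR: 'Iran', CL: 'Chile' };
function mentionsCountry(text, isoCode) {
  const name = ISO_TO_NAME[isoCode];
  if (!name) return false;
  return new RegExp(`\\b${name}\\b`, 'i').test(text);
}
```

A naive `text.toLowerCase().includes(code.toLowerCase())` would flag "Chile" for IL, which is exactly the false positive this fix removes.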
* feat(forecast): add structured scenario pipeline and trace export
* fix(forecast): hydrate bootstrap and trim generated drift
* fix(forecast): keep required supply-chain contract updates
* fix(ci): add forecasts to cache-keys registry and regenerate proto
Add forecasts entry to BOOTSTRAP_CACHE_KEYS and BOOTSTRAP_TIERS in
cache-keys.ts to match api/bootstrap.js. Regenerate SupplyChain proto
to fix duplicate TransitDayCount and add riskSummary/riskReportAction.
* fix(data): restore bootstrap and cache test coverage
* fix: resolve linting and test failures
- Remove dead writeSeedMeta/estimateRecordCount functions from redis.ts
(intentionally removed from cachedFetchJson; seed-meta now written
only by explicit seed flows, not generic cache reads)
- Fix globe dayNight test to match actual code (forces dayNight: false
+ hideLayerToggle, not catalog-based exclusion)
- Fix country-geometry test mock URL from CDN to /data/countries.geojson
(source changed to use local bundled file)
* fix(lint): remove duplicate llm-health key in redis-caching test
Duplicate object key '../../../_shared/llm-health' caused the stub
to be overwritten by the real module. Removed the second entry so
the test correctly uses the stub.
Validation now accepts an empty ACLED events array when humanitarian or
pizzint data was fetched. Previously the seed wrote the extra keys
(humanitarian, pizzint) but skipped the canonical key because
validateFn required non-empty events.
The standalone seed-usni-fleet.mjs cannot reach USNI because:
1. USNI Cloudflare blocks Node.js TLS fingerprint (JA3)
2. curl is not installed on Railway cron containers
3. Froxy residential proxy is IP-whitelisted to the relay fixed IP
Move the USNI seed loop back into ais-relay.cjs where it has access to
curl + the whitelisted proxy. Uses orefCurlFetch for the fetch, same
pattern as the OREF alerts loop. Writes to the same Redis keys
(usni-fleet:sebuf:v1, stale:v1, seed-meta:military:usni-fleet).
6h seed interval, 7h TTL, 7d stale TTL (unchanged from standalone).
* fix(seeds): improve resilience and fix dead APIs across seed scripts
- Fix wrong domain in seed-service-statuses (worldmonitor.app to api.worldmonitor.app)
- Fix Kalshi API domain migration (trading-api.kalshi.com to api.elections.kalshi.com)
- Replace dead trending APIs (gitterapp.com, herokuapp.com) with OSSInsight + GitHub Search
- Fix case-sensitive HTML detection in seed-usni-fleet (lowercase doctype not matched)
- Add Promise.allSettled rejection logging across 8 seed scripts
- Wrap fetch loops in try-catch (seed-supply-chain-trade, seed-economy) so a single
network error no longer kills the entire function
- Update list-trending-repos.ts RPC handler to match seed changes
* fix(seeds): correct OSSInsight response parsing and period-aware GitHub Search fallback
- OSSInsight returns {data: {rows: [...]}} not {data: [...]}, fix both seed and handler
- GitHub Search fallback now respects period parameter (daily=1d, weekly=7d, monthly=30d)
* fix(seeds): correct OSSInsight period values (past_week/past_month, not past_7_days/past_28_days)
Kalshi public market data endpoints require no authentication. Remove
the KALSHI_API_KEY gate that was disabling Kalshi entirely when the
env var was missing, and drop the Authorization header.
Rewrite the Vercel RPC handler to read from Railway-seeded Redis only
(gold standard), removing the fallback that fetched directly from
Gamma/Kalshi APIs on Vercel edge. Handler goes from 330 to 85 lines.
Double all prediction timing values to reduce Railway cron cost:
- Redis TTL: 15min -> 30min
- Health maxStaleMin: 15min -> 30min
- Client hydration freshness: 20min -> 40min
- Railway cron: 10min -> 20min (requires dashboard update)
* feat(advisories): gold standard migration for security advisories
Move security advisories from client-side RSS fetching (24 feeds per
page load) to Railway cron seed with Redis-read-only Vercel handler.
- Add seed script fetching via relay RSS proxy with domain allowlist
- Add ListSecurityAdvisories proto, handler, and RPC cache tier
- Add bootstrap hydration key for instant page load
- Rewrite client service: bootstrap -> RPC fallback, no browser RSS
- Wire health.js, seed-health.js, and dataSize tracking
* fix(advisories): empty RPC returns ok:true, use full country map
P1 fixes from Codex review:
- Return ok:true for empty-but-successful RPC responses so the panel
clears to empty instead of stuck loading on cold environments
- Replace 50-entry hardcoded country map with 251-entry shared config
generated from the project GeoJSON + aliases, matching coverage of
the old client-side nameToCountryCode matcher
* fix(advisories): add Cote d'Ivoire and other missing country aliases
Adds 14 missing aliases including "cote d ivoire" (US State Dept
title format), common article-prefixed names (the Bahamas, the
Gambia), and alternative official names (Czechia, Eswatini, Cabo
Verde, Timor-Leste).
* fix(proto): inject @ts-nocheck via Makefile generate target
buf generate does not emit @ts-nocheck, but tsc strict mode rejects
the generated code. Adding a post-generation sed step in the Makefile
ensures both CI proto-freshness (make generate + diff) and CI
typecheck (tsc --noEmit) pass consistently.
Node 20's fetch() (undici) tries IPv6 first. Railway containers don't
support IPv6 (IPV6_NDISC failures in network trace), causing all seed
services to crash.
Fix: set NODE_OPTIONS=--dns-result-order=ipv4first via nixpacks.toml
so all Railway services prefer IPv4. Keeps Node 20 for import attributes.
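As it might appear in nixpacks.toml (a sketch; the exact table placement is an assumption):

```toml
# Prefer IPv4 so undici's fetch() doesn't attempt IPv6 on Railway.
[variables]
NODE_OPTIONS = "--dns-result-order=ipv4first"
```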
* test: rewrite transit chart test as structural contract verification
Replace fragile source-string extraction + new Function() compilation
with structural pattern checks on the source code. Tests verify:
- render() clears chart before content change
- clearTransitChart() cancels timer, disconnects observer, destroys chart
- MutationObserver setup for DOM readiness detection
- Fallback timer for no-op renders (100-500ms range)
- Both callbacks (observer + timer) clean up each other
- Tab switch and collapse clear chart state
- Mount function guards against missing element/data
Replaces PR #1634's approach, which was brittle (method body extraction,
TypeScript cast stripping, sandboxed execution).
* fix: log fetch error cause in seed retry and FATAL handlers
Node 20 fetch() throws TypeError('fetch failed') with the real error
hidden in err.cause (DNS, TLS, timeout). The current logging only shows
'fetch failed' which is useless for diagnosis. Now logs err.cause.message
in both withRetry() retries and FATAL catch blocks.
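A sketch of the cause-unwrapping helper (`describeFetchError` is an illustrative name, not the project's actual function):

```javascript
// Sketch: surface the root cause Node's fetch() hides inside err.cause,
// so logs show the DNS/TLS/timeout detail instead of just "fetch failed".
function describeFetchError(err) {
  const cause = err?.cause?.message;
  return cause
    ? `${err.message} (cause: ${cause})`
    : String(err?.message ?? err);
}
```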
* fix(usni-fleet): add Node.js HTTP CONNECT proxy fallback, detect Cloudflare HTML
curl is not available in Railway's Railpack v0.18.0 containers. The seed
was failing with ENOENT on curl, then getting Cloudflare-blocked on
Node.js direct.
- Add fetchViaHttpProxy: Node.js HTTP CONNECT tunnel through residential
proxy (no curl dependency). Uses the same RESIDENTIAL_PROXY_AUTH env.
- Add Cloudflare HTML detection: reject early when response starts with
<!DOCTYPE instead of passing HTML to JSON.parse.
- Fallback chain: curl direct -> curl+proxy -> Node.js+proxy -> Node.js direct
- Add nixpacks.toml with curl for future Railpack builds
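The early-reject check could be as simple as this sketch (name and prefix length are illustrative):

```javascript
// Sketch: detect an HTML response (likely a Cloudflare challenge page)
// before handing the body to JSON.parse. Case-insensitive, since a
// lowercase doctype previously slipped through.
function looksLikeHtml(body) {
  const head = body.trimStart().slice(0, 15).toLowerCase();
  return head.startsWith('<!doctype') || head.startsWith('<html');
}
```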
* fix: use ESM import for node:http (require breaks in .mjs)
Previously, every merge to main triggered a Vercel build even for
scripts-only changes (seed scripts, relay updates). Now checks if
any web-relevant files changed on main too, skipping the build when
only scripts/, docs/, .github/, etc. are modified.
* refactor(seeds): extract USNI fleet seed from relay with proxy support
USNI's Cloudflare blocks Railway IPs (JA3 fingerprinting), causing
the relay's USNI seed to fail with HTML response instead of JSON.
Changes:
- New seed-usni-fleet.mjs: standalone script with HTTP CONNECT proxy
support via RESIDENTIAL_PROXY_AUTH (falls back to direct fetch)
- Removed ~240 lines of USNI code from ais-relay.cjs
- TTL bumped from 21600s (6h) to 25200s (7h) for cold start buffer
- Exact same parsing logic (hull types, region coords, vessel extraction,
strike group detection, battle force summary)
- runSeed pattern with lock, validation, seed-meta, verification
Deploy: Railway cron service, 6h interval, needs RESIDENTIAL_PROXY_AUTH
* fix(seeds): only use RESIDENTIAL_PROXY_AUTH for USNI proxy
* fix(seeds): use OREF_PROXY_AUTH for USNI proxy (the one proxy available)
PR #1596 removed the feeds but left the domains in the allowlist.
The relay still accepted proxy requests for these 403-blocked domains
from clients with cached old bundles. Removed:
- breakingdefense.com (403)
- www.arabnews.com (403)
- www.aei.org (403)
- mymodernmet.com (403)
Updated all 3 copies: shared/, scripts/shared/, api/
TTL (1h) equaled cron interval (1h), leaving zero buffer for container
cold starts. Health saw EMPTY records=0 during the gap between key
expiry and next seed run. 70min TTL covers the cold start window.
- Log which providers are available before calling
- Log HTTP error response body (shows Groq 429 reason)
- Log when response is empty/short
- Log success with response length
- Log explicit "All providers failed" at end of loop
Warm-ping requests from Railway IPs were getting Cloudflare bot
challenge HTML instead of JSON. The relay's service-statuses warm-ping
works because it sends Origin: 'https://worldmonitor.app'. Added the
same header to seed-infra.mjs and seed-military-maritime-news.mjs.
Fixes: usniFleet STALE_SEED, cableHealth EMPTY_ON_DEMAND
- Supply chain: add severity floor (critical=0.55, high=0.35) and raise
multiplier from 0.7 to 0.9. Hormuz at 80/100 RED now produces ~60-75%
instead of 47%.
- GPS jamming: raise normalization range from 30 to 60 hexes, multiplier
from 0.5 to 0.7, add 10% bonus for 20+ hexes. 53 hexes now produces
~72% instead of 49%.
- LLM: log when no API keys configured (was silently skipping all providers)
- Logging: add per-domain breakdown and top-5 predictions to pipeline output
for Railway log diagnosis
Railway's Railpack builder sets NODE_OPTIONS=--expose-gc which Node 18
rejects during mise install. Node 20+ accepts this flag. All Railway
seed services using scripts/package.json will now build with Node 20.
The CorridorRisk API provides rich intelligence that we were storing
but not displaying. Now surfaced in the panel:
- risk_summary: live intelligence narrative shown in the description
area (e.g. "Armed confrontations are active across the Persian Gulf
with 52% of events classified as armed clashes")
- risk_report.action: routing recommendation shown when card is
expanded (e.g. "Recommend REROUTING via Cape of Good Hope for all
non-essential Gulf cargo")
Changes:
- Proto: add risk_summary and risk_report_action to TransitSummary
- Relay: extract risk_report.action in seedCorridorRisk, pass both
fields through seedTransitSummaries
- Handler: pass through to API response + include in description
- UI: riskSummary in risk row, riskReportAction in expanded view
corridorrisk: EMPTY in health despite relay running new code. The seed
produces zero log output, making it impossible to diagnose. Added:
- Log fetch start ("Fetching...") and in-flight skip
- Log HTTP error with response body and content-type
- Detect HTML responses (Cloudflare challenge) before JSON.parse
- Increase timeout from 10s to 15s for slow Railway regions
* feat(forecast): add AI Forecasts prediction module (Pro-tier)
MiroFish-inspired prediction engine that generates structured forecasts
across 6 domains (conflict, market, supply chain, political, military,
infrastructure) using existing WorldMonitor data streams.
- Proto definitions for ForecastService with GetForecasts RPC
- Dedicated seed script (seed-forecasts.mjs) with 6 domain detectors,
cross-domain cascade resolver, prediction market calibration, and
trend detection via prior snapshot comparison
- Premium-gated RPC handler (PREMIUM_RPC_PATHS enforcement)
- Lazy-loaded ForecastPanel with domain filters, probability bars,
trend arrows, signal evidence, and cascade links
- Health monitoring integration (seed-meta freshness tracking)
- Refresh scheduler with API key guard
* test(forecast): add 47 unit tests for forecast detectors and utilities
Covers forecastId, normalize, resolveCascades, calibrateWithMarkets,
computeTrends, and smoke tests for all 6 domain detectors. Exports
testable functions from seed script with direct-run guard.
* fix(forecast): domain mismatch 'infra' vs 'infrastructure', add panel category
- Seed script used 'infra' but ForecastPanel filtered on 'infrastructure',
causing Infra tab to show zero results
- Added 'forecast' to intelligence category in PANEL_CATEGORY_MAP
* fix(forecast): move CSS to one-time injection, improve type safety
- P2: Move style block from setContent to one-time document.head injection
to prevent CSS accumulation on repeated renders
- P3: Replace +toFixed(3) with Math.round for readability in seed script
- P3: Use Forecast type instead of any[] in RPC handler filter
* fix(forecast): handle sebuf proto data shapes from Redis
Detectors now normalize CII scores from server-side proto format
(combinedScore, TREND_DIRECTION_RISING, region) to uniform shape.
Outage severity handles proto enum format (SEVERITY_LEVEL_HIGH).
Added confidence floor of 0.3 for single-source predictions.
Verified against live Redis: 2 predictions generated (Iran infra
shutdown, IL political instability).
* feat(forecast): unlock AI Forecasts on web, lock desktop only (trial)
- Remove forecast RPC from PREMIUM_RPC_PATHS (web access is free)
- Panel locked on desktop only (same as oref-sirens/telegram-intel)
- Remove API key guards from data-loader and refresh scheduler
- Web users get full access during trial period
* chore: regenerate proto types with make generate
Re-ran make generate after rebasing on main. Plugin v0.7.0 dropped
@ts-nocheck from output, added it back to all 50 generated files.
Fixed 4 type errors from proto codegen changes:
- MarketSource enum -> string union type
- TemporalAnomalyProto -> TemporalAnomaly rename
- webcam lastUpdated number -> string
* fix(forecast): use chokepoints v4 key, include ciiContribution in unrest
- P1: Switch chokepoints input from stale v2 to active v4 Redis key,
matching bootstrap.js and cache-keys.ts
- P2: Add ciiContribution to unrest component fallback chain in
normalizeCiiEntry so political detector reads the correct sebuf field
* feat(forecast): Phase 2 LLM scenario enrichment + confidence model
MiroFish-inspired enhancements:
- LLM scenario narratives via Groq/OpenRouter (narrative-only, no numeric
adjustment). Evidence-grounded prompts with mandatory signal citation
and few-shot examples from MiroFish's SECTION_SYSTEM_PROMPT_TEMPLATE.
- Top-4 predictions batched into single LLM call for cost efficiency.
- News context from newsInsights attached to all predictions for LLM
prompt grounding (NOT in signals, cannot affect confidence).
- Deterministic confidence model: source diversity via SIGNAL_TO_SOURCE
mapping (deduplicates cii+cii_delta, theater+indicators) + calibration
agreement from prediction market drift. Floor 0.2, ceiling 1.0.
- Output validation: rejects scenarios without signal references.
- Truncated JSON repair for small model output.
- Structured JSON logging for LLM calls.
- Redis cache for LLM scenarios (1h TTL).
- 23 new tests (70 total), all passing.
- Live-tested: OpenRouter gemini-2.5-flash produces evidence-grounded
scenario narratives from real WorldMonitor data.
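The deterministic confidence model above can be sketched as follows. The SIGNAL_TO_SOURCE entries and the weighting formula here are illustrative assumptions; only the source-deduplication behavior and the 0.2 floor / 1.0 ceiling come from the commit.

```javascript
// Illustrative sketch of the confidence model: count distinct upstream
// sources (not raw signals), then clamp to [0.2, 1.0]. Mapping values
// and weights are assumptions, not the real WorldMonitor config.
const SIGNAL_TO_SOURCE = {
  cii: 'cii',
  cii_delta: 'cii',        // deduplicates with plain cii
  theater: 'theater',
  indicators: 'theater',   // deduplicates with theater
  prediction_market: 'market',
};

function computeConfidence(signals, marketAgreement = 0) {
  const sources = new Set(
    signals.map((s) => SIGNAL_TO_SOURCE[s.type] ?? s.type)
  );
  // Assumed formula: base + diversity bonus + calibration agreement.
  const raw = 0.2 + 0.15 * sources.size + 0.1 * marketAgreement;
  return Math.min(1.0, Math.max(0.2, raw)); // floor 0.2, ceiling 1.0
}
```

With this shape, cii and cii_delta signals together contribute a single source, so stacking correlated signals cannot inflate confidence.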
* feat(forecast): Phase 3 multi-perspective scenarios, projections, data-driven cascades
MiroFish-inspired enhancements:
- Multi-perspective LLM analysis: top-2 predictions get strategic,
regional, and contrarian viewpoints via combined LLM call
- Probability projections: domain-specific decay curves (h24/d7/d30)
anchored to timeHorizon so probability equals projections[timeHorizon]
- Data-driven cascade rules: moved from hardcoded array to JSON config
(scripts/data/cascade-rules.json) with schema validation, named
predicate evaluators, unknown key rejection, and fallback to defaults
- 4 new cascade paths: infrastructure->supply_chain, infrastructure->market
(both requiresSeverity:total), conflict->political, political->market
- Proto: added Perspectives and Projections messages to Forecast
- ForecastPanel: renders projections row and conditional perspectives toggle
- 89 tests (19 new), all passing
- Live-tested: OpenRouter produces perspectives from real data
* feat(forecast): Phase 4 data utilization + entity graph
Fixes data gaps that prevented 4 of 6 detectors from firing:
- Input normalizers: chokepoint v4 shape + GPS hexes-to-zones mapping
- Chokepoint warm-ping (production-only, requires WM_API_BASE_URL)
- Lowered CII conflict threshold from 70 to 60, gated on level=high|critical
4 new standalone detectors:
- UCDP conflict zones (10+ events per country)
- Cyber threat concentration (5+ threats per country)
- GPS jamming in maritime shipping zones (5 regions)
- Prediction markets as signals (60-90% probability markets)
Entity-relationship graph (file-based, 38 nodes):
- Countries, theaters, commodities, chokepoints, alliances
- Alias table resolves both ISO codes and display names
- Graph cascade discovery links predictions across entities
Result: 51 predictions (up from 1-2), spanning conflict, infrastructure,
and supply chain domains. 112 tests, all passing.
* fix(forecast): redis cache format, signal source mapping, type safety
Fresh-eyes audit fixes:
- BUG: redisSet used wrong Upstash API format (POST body with {value,ex}
instead of command array ['SET',key,value,'EX',ttl]). LLM cache writes
were silently failing, causing fresh LLM calls every run.
- BUG: prediction_market signal type missing from SIGNAL_TO_SOURCE,
inflating confidence for market-derived predictions.
- CLEANUP: Remove unnecessary (f as any) casts in ForecastPanel since
generated Forecast type already has projections/perspectives fields.
- CLEANUP: Bump health maxStaleMin from 60 to 90 to avoid false STALE
alerts when LLM calls add latency to seed runs.
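The Upstash format bug above is easy to reproduce: the REST endpoint expects a Redis command array as the POST body, not a `{value, ex}` object. A minimal sketch, assuming the standard Upstash env var names (`buildSetBody` and this `redisSet` shape are illustrative, not the project's actual helper):

```javascript
// Wrong (what the bug sent, silently ignored): { value, ex: ttlSeconds }
// Right: the literal Redis command array, JSON-encoded.
function buildSetBody(key, value, ttlSeconds) {
  return JSON.stringify(['SET', key, JSON.stringify(value), 'EX', ttlSeconds]);
}

async function redisSet(key, value, ttlSeconds) {
  const res = await fetch(process.env.UPSTASH_REDIS_REST_URL, {
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.UPSTASH_REDIS_REST_TOKEN}` },
    body: buildSetBody(key, value, ttlSeconds),
  });
  if (!res.ok) console.warn(`redisSet failed for ${key}: ${res.status}`);
  return res.ok;
}
```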
* feat(forecast): headline-entity matching with news corroboration signals
Uses entity graph aliases to match headlines to predictions by
country/theater (excludes commodity/infrastructure nodes to prevent
false positives). Predictions with matching headlines get a
news_corroboration signal visible in the panel.
Also fixes buildUserPrompt to merge unique headlines from ALL
predictions in the LLM batch (was only reading preds[0].newsContext).
Live-tested: 13 of 51 predictions now have corroborating headlines
(Iran, Israel, Syria, Ukraine, etc). 116 tests, all passing.
* feat(forecast): add country-codes.json for headline-entity matching
56 countries with ISO codes, full names, and scoring keywords (extracted
from src/config/countries.ts + UCDP-relevant additions). Used by
attachNewsContext for richer headline matching via getSearchTermsForRegion
which combines country-codes + entity graph + keyword aliases.
14/57 predictions now have news corroboration (limited by headline
coverage, not matching quality: only 8 headlines currently available).
* feat(forecast): read 300 headlines from news digest instead of 8
Read news:digest:v1:full:en (300 headlines across 16 categories) instead
of just news:insights:v1 topStories (8 headlines). Fallback to topStories
if digest is unavailable.
Result: news corroboration jumped from 25% to 64% (38/59 predictions).
* fix(forecast): handle parenthetical country names in headline matching
Strip suffixes like '(Zaire)', '(Burma)', '(Soviet Union)' from UCDP
region names before matching against country-codes.json. Also use
includes() for reverse name lookup to catch partial matches.
Corroboration: 64% -> 69% (41/59). Remaining 18 unmatched are countries
with no current English-language news coverage.
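The suffix-stripping fix above can be sketched as below; `normalizeRegionName` and `findCountry` are hypothetical helper names for illustration, not the actual functions.

```javascript
// Strip a trailing parenthetical like '(Zaire)' or '(Burma)' from UCDP
// region names before matching against country-codes entries.
function normalizeRegionName(name) {
  return name.replace(/\s*\([^)]*\)\s*$/, '').trim();
}

// Reverse name lookup with includes() so 'DR Congo' still matches 'Congo'.
function findCountry(countries, regionName) {
  const norm = normalizeRegionName(regionName).toLowerCase();
  return countries.find(
    (c) =>
      norm.includes(c.name.toLowerCase()) ||
      c.name.toLowerCase().includes(norm)
  );
}
```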
* fix(forecast): cache validated LLM output, add digest test, log cache errors
Fresh-eyes audit fixes:
- Combined LLM cache now stores only validated items (was caching raw
unvalidated output, serving potentially invalid scenarios on cache hit)
- redisSet logs warnings on failure (was silently swallowing all errors)
- Added digest-based test for attachNewsContext (primary path was untested)
- Fixed test arity: attachNewsContext(preds, news, digest) with 3 params
* fix(forecast): remove dead confidenceFromSources, reduce warm-ping timeout
- P2: Remove confidenceFromSources (dead code, computeConfidence overwrites
all initial confidence values). Inline the formula in original detectors.
- P3: Reduce warm-ping timeout from 30s to 15s (non-critical step)
- P3: Add trial status comment on forecast panel config
* fix(forecast): resolve ISO codes to country names, fix market detector, safe pre-push
P1 fixes from code review:
- CII ISO codes (IL, IR) now resolved to full country names (Israel, Iran)
via country-codes.json. Prevents substring false positives (IL matching
Chile) in event correlation. Uses word-boundary regex for matching.
- Market detector CII-to-theater mapping now uses entity graph traversal
instead of broken theater-name substring matching. Iran correctly maps
to Middle East theater via graph links.
- Pre-push hook no longer runs destructive git checkout on proto freshness
failure. Reports mismatch and exits without modifying worktree.
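The substring false positive above is worth seeing concretely: a case-insensitive substring check makes the ISO code 'IL' match 'Chile'. Resolving the code to a full name first and matching on word boundaries avoids it. A sketch, with a two-entry excerpt standing in for country-codes.json:

```javascript
// Illustrative excerpt; the real mapping lives in country-codes.json.
const ISO_TO_NAME = { IL: 'Israel', IR: 'Iran' };

function headlineMentions(headline, isoCode) {
  // Old bug: 'chile'.includes('il') === true.
  // Fix: resolve to the full name and use a word-boundary regex.
  const name = ISO_TO_NAME[isoCode] ?? isoCode;
  return new RegExp(`\\b${name}\\b`, 'i').test(headline);
}
```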
* fix(seeds): rethrow non-fetch failures in runSeed()
Split runSeed() into two phases so only upstream fetch errors get
the graceful TTL-extension path. Redis publish, seed-meta, and
verification failures now rethrow (exit 1) so monitoring catches them.
* fix(seeds): separate fetch from publish errors in standalone scripts
Split seed-airport-delays, seed-military-flights, and
seed-service-statuses into two phases matching runSeed() pattern:
- Phase 1: upstream fetch errors are graceful (extend TTL, exit 0)
- Phase 2: Redis publish/verify errors propagate (exit 1)
* fix(seeds): make Redis SET throw on failure so publish errors propagate
Local redisSet() returned false instead of throwing, silently masking
Redis write failures. writeExtraKey() also warned instead of throwing.
Both now throw on non-OK responses, ensuring Phase 2 catch fires.
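The behavioral change above, in miniature: throw on a non-OK response instead of returning false, so the caller's Phase 2 catch (publish errors exit 1) actually fires. `checkSetResponse` is a hypothetical extraction of the response check for illustration:

```javascript
// Throw so publish failures propagate to the outer catch handler,
// instead of returning false and being silently ignored.
function checkSetResponse(key, status, body) {
  if (status !== 200 || body?.result !== 'OK') {
    throw new Error(`Redis SET failed for ${key}: ${status} ${JSON.stringify(body)}`);
  }
  return true;
}
```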
* fix(seed): treat empty Redis key after successful RPC as publish failure
When cachedFetchJson() silently swallows a Redis write failure, the
warm-ping script now throws instead of warning, reaching the outer
catch handler (exit 1) so monitoring detects the issue.
* perf: reduce uncached API calls via client-side circuit breaker caches
Add client-side circuit breaker caches with IndexedDB persistence to the
top 3 uncached API endpoints (CF analytics: 10.5M uncached requests/day):
- classify-events (5.37M/day): 6hr cache per normalized title, shouldCache
guards against caching null/transient failures
- get-population-exposure (3.45M/day): 6hr cache per coordinate key
(toFixed(4) for ~11m precision), 64-entry LRU
- summarize-article (1.68M/day): 2hr cache per headline-set hash via
buildSummaryCacheKey, eliminates both cache-check and summarize RPCs
Fix workbox-*.js getting no-cache headers (3.62M/day): exclude from SPA
catch-all regex in vercel.json, add explicit immutable cache rule for
content-hashed workbox files.
Migrate USNI fleet fetch from Vercel edge to Railway relay (gold standard):
- Add seedUSNIFleet() loop to ais-relay.cjs (6hr interval, gzip support)
- Make server handler Redis-read-only (435 lines reduced to 38)
- Move usniFleet from ON_DEMAND to BOOTSTRAP_KEYS in health.js
- Add persistCache + shouldCache to client breaker
Estimated reduction: ~14.3M uncached requests/day.
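The coordinate cache key for get-population-exposure works by quantization: toFixed(4) is roughly 11m of precision, so repeat lookups near the same point share one entry. A minimal sketch with a toy Map-based LRU (the real breaker's LRU is not shown here):

```javascript
// Quantize coordinates so nearby lookups collapse to one cache key.
function coordKey(lat, lon) {
  return `${lat.toFixed(4)},${lon.toFixed(4)}`;
}

// Minimal LRU for illustration: Map preserves insertion order, so the
// first key is always the least recently used.
class TinyLru {
  constructor(max = 64) { this.max = max; this.map = new Map(); }
  get(k) {
    if (!this.map.has(k)) return undefined;
    const v = this.map.get(k);
    this.map.delete(k); this.map.set(k, v); // refresh recency
    return v;
  }
  set(k, v) {
    this.map.delete(k);
    this.map.set(k, v);
    if (this.map.size > this.max) this.map.delete(this.map.keys().next().value);
  }
}
```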
* fix: address code review findings (P1 + P2)
P1: Include SummarizeOptions in summary cache key to prevent cross-option
cache pollution (e.g. cloud summary replayed after user disables cloud LLMs).
P2: Document that forceRefresh is intentionally ignored now that USNI
fetching moved to Railway relay (Vercel is Redis-read-only).
* fix: reject forceRefresh explicitly instead of silently ignoring it
Return an error response with explanation when forceRefresh=true is sent,
rather than silently returning cached data. Makes the behavior regression
visible to any caller instead of masking it.
* fix(build): set worker.format to 'es' for Vite 6 compatibility
Vite 6 defaults worker.format to 'iife', which fails with code-splitting
workers (analysis.worker.ts uses dynamic imports). Setting 'es' fixes
the Vercel production build.
* fix(test): update deploy-config test for workbox regex exclusion
The SPA catch-all regex test hard-coded the old pattern without the
workbox exclusion. Update to match the new vercel.json source pattern.
* feat(seeds): add Railway seed scripts for economic and trade endpoints
Two new seed scripts to eliminate Vercel edge external API calls:
seed-economy.mjs:
- EIA energy prices (WTI, Brent) -> economic:energy:v1:all
- EIA energy capacity (Solar, Wind, Coal) -> economic:capacity:v1:COL,SUN,WND:20
- FRED series (10 series) -> economic:fred:v1:<id>:120
- Macro signals (Yahoo, Alternative.me, Mempool) -> economic:macro-signals:v1
seed-supply-chain-trade.mjs:
- Shipping rates (FRED) -> supply_chain:shipping:v2
- Trade barriers (WTO tariff gap) -> trade:barriers:v1:tariff-gap:50
- Trade restrictions (WTO MFN overview) -> trade:restrictions:v1:tariff-overview:50
- Trade flows (WTO, 15 major reporters) -> trade:flows:v1:<reporter>:000:10
- Tariff trends (WTO, 15 major reporters) -> trade:tariffs:v1:<reporter>:all:10
Cache keys match handler patterns exactly so cachedFetchJson finds
pre-seeded data and avoids live external API calls from Vercel edge.
* feat(seeds): add seed-aviation.mjs for airport ops and aviation news
Seeds 2 aviation endpoints with predictable default params:
- getAirportOpsSummary (AviationStack + NOTAM) -> aviation:ops-summary:v1:CDG,ESB,FRA,IST,LHR,SAW
- listAviationNews (9 RSS feeds, 24h window) -> aviation:news::24:v1
NOT seeded (inherently on-demand, user-specific inputs):
- getFlightStatus: specific flight number lookup
- trackAircraft: bounding-box or icao24 queries
- listAirportFlights: arbitrary airport+direction+limit combos
- getCarrierOps: depends on listAirportFlights with variable params
* feat(seeds): add seed-conflict-intel.mjs for ACLED, HAPI, and PizzINT
Seeds 3 conflict/intelligence endpoints with predictable default params:
- listAcledEvents (all countries, last 30 days) -> conflict:acled:v1:all:0:0
- getHumanitarianSummary (20 top conflict countries) -> conflict:humanitarian:v1:<CC>
- getPizzintStatus (base + GDELT variants) -> intel:pizzint:v1:base, intel:pizzint:v1:gdelt
NOT seeded (inherently on-demand, LLM or user-specific inputs):
- classifyEvent: per-headline LLM classification
- deductSituation: per-query LLM deduction
- getCountryIntelBrief: per-country LLM brief with context hash
- getCountryFacts: per-country REST Countries + Wikidata + Wikipedia
- searchGdeltDocuments: per-query GDELT search
Requires: ACLED_EMAIL, ACLED_KEY, UPSTASH_REDIS_REST_URL/TOKEN
* feat(seeds): add seed-research.mjs for arXiv, HN, tech events, trending repos
Seeds 4 research endpoints:
- listArxivPapers (cs.AI, cs.CL, cs.CR) -> research:arxiv:v1:<cat>::50
- listHackernewsItems (top, best feeds) -> research:hackernews:v1:<feed>:30
- listTechEvents (Techmeme ICS + dev.events RSS) -> research:tech-events:v1
- listTrendingRepos (python, javascript, typescript) -> research:trending:v1:<lang>:daily:50
The tech events key is also seeded by the relay; this script provides
backup hydration and ensures the key is warm even if the relay hasn't run yet.
Requires: UPSTASH_REDIS_REST_URL/TOKEN
* feat(seeds): add seed-military-maritime-news.mjs for USNI and nav warnings
Seeds 2 endpoints with predictable default params:
- USNI Fleet Report (WordPress JSON API) -> usni-fleet:sebuf:v1 + stale backup
- Navigational Warnings (NGA broadcast, all areas) -> maritime:navwarnings:v1:all
NOT seeded (inherently on-demand):
- getAircraftDetails/batch: per-icao24 Wingbits lookup
- listMilitaryFlights: bounding-box query (quantized 1-degree grid)
- getVesselSnapshot: in-memory cache, reads from relay /ais-snapshot
- listFeedDigest: per-feed-URL RSS caching (hundreds of feeds, relay proxied)
- summarizeArticle: per-article LLM summarization
Requires: UPSTASH_REDIS_REST_URL/TOKEN
* feat(seeds): add seed-infra.mjs warm-ping for service statuses and cable health
Uses warm-ping pattern (calls Vercel RPC from Railway) because:
- list-service-statuses: 30 status page parsers with 8 custom formats
- get-cable-health: NGA text analysis with cable name matching + proximity
Replicating this logic in a standalone script is fragile and duplicative.
NOT seeded (on-demand):
- search-imagery: per-bbox/datetime STAC query
- get-giving-summary: hardcoded baselines, no external fetches
- get-webcam-image: per-webcamId Windy API lookup
* fix(seeds): move secondary key writes before process.exit, fix data shapes
Critical bugs found in code review:
1. runSeed() calls process.exit(0) after primary key write, so .then()
callbacks were dead code. All secondary keys (FRED, macro signals,
trade data, HAPI summaries, pizzint, HN, trending, etc.) were NEVER
written. Fix: move writeExtraKey calls inside fetchAll() before return.
2. FRED cache key used :120 suffix but handler default is :0 (req.limit||0).
Fixed to :0 so seed matches handler cache key for default requests.
3. USNI and nav warnings seed parsers produced wrong data shapes vs handler
(different field names, missing fields). Converted to warm-ping pattern
(like seed-infra.mjs) to avoid shape divergence.
* fix(seeds): reduce GDELT 429 rate limiting in seed-gdelt-intel
Problems from logs: every topic fetch hit 429, runs took 3-5min, and
the 4th run failed fatally after 12min of cascading retries.
Fixes:
- Increase inter-topic delay: 12s -> 20s (GDELT needs longer cooldown)
- Increase initial backoff: 10s -> 20s, with 15s increments per retry
- Graceful degradation: exhausted retries return empty topic instead of
throwing (prevents withRetry from restarting ALL topics from scratch)
- Align TTL with health.js: 3600s -> 7200s (matches maxStaleMin:120)
- Validation allows partial success (3/6 topics minimum)
Cron interval should also be increased from 30min to 2h on Railway
to match the new 2h TTL.
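The per-topic retry policy above (20s initial backoff, +15s per retry, degrade to an empty topic when exhausted) can be sketched as below. The function shape and names are illustrative; delays and the fetcher are parameterized so the policy is testable without real waits.

```javascript
// Retry a single topic with linear backoff; on exhaustion return an
// empty topic instead of throwing, so withRetry-style wrappers don't
// restart ALL topics from scratch.
async function fetchTopicWithBackoff(fetchTopic, topic, {
  retries = 3,
  initialDelayMs = 20_000,
  incrementMs = 15_000,
  sleep = (ms) => new Promise((r) => setTimeout(r, ms)),
} = {}) {
  let delay = initialDelayMs;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fetchTopic(topic);
    } catch (err) {
      if (attempt === retries) {
        console.warn(`topic ${topic} exhausted retries: ${err.message}`);
        return { topic, articles: [] }; // graceful degradation
      }
      await sleep(delay);
      delay += incrementMs; // 20s, 35s, 50s, ...
    }
  }
}
```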
* fix(seeds): 4 bugs from review - ACLED auth, NOTAM key, infra precedence, curated events
P1: ACLED auth used wrong endpoint (api/acled/token) and env vars (ACLED_KEY).
Fixed to match server/acled-auth.ts: ACLED_EMAIL+ACLED_PASSWORD via /oauth/token,
with ACLED_ACCESS_TOKEN static fallback.
P1: Aviation NOTAM key was aviation:notam-closures:v1, handler reads
aviation:notam:closures:v2. Fixed key to match _shared.ts.
P2: Infra warm-ping had an operator precedence bug in nullish coalescing:
(a ?? b) ? c : d instead of a ?? (b ? c : d). Added parens.
P2: Research seed missed curated conferences that the handler appends
(CURATED_EVENTS in list-tech-events.ts). Added same curated events so
seeded data matches what the handler would produce.
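The P2 precedence bug above, in miniature: the conditional operator binds looser than ??, so an unparenthesized `a ?? b ? c : d` parses as `(a ?? b) ? c : d`. The function names below are invented to show the two parses side by side:

```javascript
// Intended: use override if present, otherwise pick by the enabled flag.
function pickLabel(override, enabled) {
  return override ?? (enabled ? 'on' : 'off');
}

// Unintended parse: the truthiness of (override ?? enabled) selects the
// branch, so a truthy override always yields 'on'.
function pickLabelBuggy(override, enabled) {
  return (override ?? enabled) ? 'on' : 'off';
}
```

The two agree until an override is set with enabled=false, at which point the buggy parse discards the override's value.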
* fix(seeds): add seed-meta freshness metadata for all secondary keys
Added writeExtraKeyWithMeta() to _seed-utils.mjs that writes both the
data key and a seed-meta:<key> freshness metadata entry. All secondary
key writes in seed scripts now use this helper so health.js can track
freshness for: energy capacity, FRED series, macro signals, trade
barriers/restrictions/flows/tariffs, aviation news, HAPI summaries,
PizzINT, arXiv categories, HN feeds, tech events, trending repos.
Previously only the primary key per script got seed-meta (via runSeed),
leaving secondary keys operationally invisible to health monitoring.
* fix(seeds): align seed-meta keys with health.js conventions
P1: writeExtraKeyWithMeta wrote seed-meta:<full-cache-key> (e.g.,
seed-meta:economic:macro-signals:v1), but health.js expects normalized
names without version suffixes (seed-meta:economic:macro-signals).
Fixed by stripping trailing :v\d+ from key. Added metaKeyOverride
param for cases needing explicit control.
P1: shipping seed used runSeed('supply-chain', 'shipping-trade', ...)
producing seed-meta:supply-chain:shipping-trade, but health.js expects
seed-meta:supply_chain:shipping. Fixed domain/resource to match.
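The normalization above can be sketched as below; `metaKeyFor` is a hypothetical helper name, and only the trailing-`:vN` strip plus the override escape hatch come from the commit.

```javascript
// Derive the seed-meta key from a cache key: strip a trailing :vN
// segment so names match health.js conventions; allow an explicit
// override for keys that need full control.
function metaKeyFor(cacheKey, metaKeyOverride) {
  if (metaKeyOverride) return `seed-meta:${metaKeyOverride}`;
  return `seed-meta:${cacheKey.replace(/:v\d+$/, '')}`;
}
```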
* fix(seeds): only write seed-meta after successful data key write
writeExtraKey() now returns false on failure. writeExtraKeyWithMeta()
skips seed-meta write when the data write fails, preventing false-positive
health reports for keys like macro-signals and tech-events.
* fix(seeds): extend TTL on stale data instead of crashing on fetch errors
Seed scripts crashed with process.exit(1) when upstream APIs returned
errors (e.g., Wingbits 401), causing Redis keys to expire and panels
to lose data. Now all seeds gracefully extend TTL on existing keys and
exit 0, keeping stale data alive until the API recovers.
- Add shared extendExistingTtl() helper to _seed-utils.mjs
- Update runSeed() catch block (fixes 24 scripts using it)
- Fix fetch-gpsjam.mjs, seed-airport-delays.mjs,
seed-military-flights.mjs, seed-service-statuses.mjs
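The graceful-degradation path above can be sketched as two pieces: a TTL-extension call (using the Upstash command-array format; the helper shape is illustrative) and the two-phase exit policy from the earlier runSeed() split.

```javascript
// Re-arm the TTL on an existing key so stale data outlives the outage.
// EXPIRE returns 1 if the key existed, 0 if it had already expired.
async function extendExistingTtl(key, ttlSeconds) {
  const res = await fetch(process.env.UPSTASH_REDIS_REST_URL, {
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.UPSTASH_REDIS_REST_TOKEN}` },
    body: JSON.stringify(['EXPIRE', key, ttlSeconds]),
  });
  const body = await res.json().catch(() => null);
  return body?.result === 1;
}

// Illustrative exit policy: Phase 1 (upstream fetch) failures degrade
// gracefully; Phase 2 (publish/verify) failures must reach monitoring.
function seedExitCode(phase, error) {
  if (!error) return 0;
  return phase === 'fetch' ? 0 : 1;
}
```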
* fix(seeds): preserve per-key TTLs when extending stale military data
THEATER_POSTURE_BACKUP_KEY has a 7-day TTL (604800s) but was being
extended with STALE_TTL (86400s), shortening it from 7 days to 1 day
during upstream outages. Now each key group gets its original TTL.
* fix(supply-chain): increase Redis timeout for PortWatch and remove content height cap
Root cause: getCachedJson has a 1500ms timeout, but the PortWatch
payload (~149KB for 13 chokepoints x 175 days) exceeds this on
high-latency Edge regions. The fetch silently times out and returns
null, so the handler builds responses with empty transit summaries.
Fix: add optional timeoutMs param to getCachedJson, use 5000ms for
the PortWatch fetch. Also remove the 300px max-height on
.economic-content so the Supply Chain panel fills available height.
* refactor(supply-chain): move transit summary assembly to Railway relay
Vercel Edge was reading 3 large Redis keys (PortWatch 149KB, transit
counts, CorridorRisk) and assembling transit summaries on every request.
The 1500ms Redis timeout caused the 149KB PortWatch fetch to silently
fail on high-latency Edge regions (Mumbai bom1), leaving all transit
data empty.
Now Railway builds the pre-assembled transit summaries (including
anomaly detection) and writes them to a single key. Vercel reads
ONE small pre-built key instead of 3 raw keys.
Flow: Railway seeds PortWatch + transit counts -> builds summaries ->
writes supply_chain:transit-summaries:v1 -> Vercel reads it.
This follows the gold standard: "Vercel reads Redis ONLY; Railway
makes ALL external API calls and data assembly."
* test(supply-chain): add sync tests for relay threat levels and name mappings
detectTrafficAnomalyRelay and CHOKEPOINT_THREAT_LEVELS in the relay are
duplicated from _scoring.mjs and get-chokepoint-status.ts because
ais-relay.cjs is CJS. Added sync tests that validate:
- Every canonical chokepoint has a relay threat level
- Relay threat levels match handler CHOKEPOINTS config
- RELAY_NAME_TO_ID covers all canonical chokepoints
This catches drift between the two source-of-truth files.
* fix(ui): restore bounded scroll on economic-content with flex layout
The previous fix replaced max-height: 300px with flex: 1 1 auto, but
.panel-content was not a flex container so the flex rule was ignored.
This caused tabs to scroll away with the content.
Fix: use :has(.economic-content) to make .panel-content a flex column
only for panels with tabbed economic content. Tabs stay pinned, content
area scrolls independently.
* feat(supply-chain): fix CorridorRisk API integration (open beta, no key needed)
The CorridorRisk API is in open beta at corridorrisk.io/api/corridors
(not api.corridorrisk.io/v1/corridors). No API key required during beta.
Changes:
- Fix URL to corridorrisk.io/api/corridors
- Remove API key requirement (open beta)
- Update name matching for actual API names (e.g. "Persian Gulf &
Strait of Hormuz" -> hormuz_strait)
- Derive riskLevel from score (>=70 critical, >=50 high, etc.)
- Store riskScore, vesselCount, eventCount7d, riskSummary
- Feed CorridorRisk data into transit summaries
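The score-to-level derivation can be sketched as below. The >=70 critical and >=50 high cut points are stated above; the lower bands behind the "etc." are assumptions for illustration.

```javascript
// Derive riskLevel from the CorridorRisk score. The two top thresholds
// are from the commit; 'elevated'/'normal' bands are assumed.
function deriveRiskLevel(score) {
  if (score >= 70) return 'critical';
  if (score >= 50) return 'high';
  if (score >= 30) return 'elevated';
  return 'normal';
}
```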
* test(supply-chain): comprehensive transit summary integration tests
75 tests across 10 suites covering:
- Relay seedTransitSummaries assembly (Redis key, fields, triggers)
- CorridorRisk name mapping and risk level derivation from score
- Handler reads pre-built summaries (not raw upstream keys)
- Handler isolation: no PortWatchData/CorridorRiskData/CANONICAL_CHOKEPOINTS imports
- detectTrafficAnomalyRelay sync with _scoring.mjs (side-by-side execution)
- detectTrafficAnomaly edge cases (boundaries, threat levels, unsorted history)
- CHOKEPOINT_THREAT_LEVELS relay-handler sync validation
* fix(supply-chain): hydrate transit summaries from Redis on relay restart
After relay restart, latestPortwatchData and latestCorridorRiskData are
null. The initial seedTransitSummaries call (35s after boot) would
return early with no data, leaving the transit-summaries:v1 key stale
until the next PortWatch seed completes (up to 6 hours later).
Fix: seedTransitSummaries now reads persisted PortWatch and CorridorRisk
data from Redis when in-memory state is empty. This covers the cold-start
gap so Vercel always has fresh transit summaries.
Also adds 5 tests validating the hydration path order and assignment.
* fix(supply-chain): add fallback to raw Redis keys when pre-built summaries are empty
P1: If supply_chain:transit-summaries:v1 is absent (relay not deployed,
restart in progress, or transient PortWatch failure), the handler now
falls back to reading the raw portwatch, corridorrisk, and transit count
keys directly and assembling summaries on the fly.
This ensures corridor risk data (riskLevel, incidentCount7d, disruptionPct)
is never silently zeroed out, and users keep history/counts even during
the 6-hour PortWatch re-seed window.
Strategy: pre-built summaries (fast path) -> raw keys fallback (slow path)
-> all-zero defaults (last resort).
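The three-tier read strategy can be sketched as below. The function and key names are illustrative (the real handler's shape differs); dependencies are injected so the fallback order is testable.

```javascript
// Read strategy: pre-built summaries (fast path) -> raw-key assembly
// (slow path) -> all-zero defaults (last resort). A tier is used only
// if it yields a non-empty result.
async function loadTransitSummaries(readKey, assembleFromRaw, defaults) {
  const prebuilt = await readKey('supply_chain:transit-summaries:v1');
  if (prebuilt && Object.keys(prebuilt).length > 0) return prebuilt;

  // Reads the raw portwatch/corridorrisk/transit-count keys directly.
  const assembled = await assembleFromRaw();
  if (assembled && Object.keys(assembled).length > 0) return assembled;

  return defaults;
}
```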