The CorridorRisk API provides rich intelligence that we were storing
but not displaying. Now surfaced in the panel:
- risk_summary: live intelligence narrative shown in the description
area (e.g. "Armed confrontations are active across the Persian Gulf
with 52% of events classified as armed clashes")
- risk_report.action: routing recommendation shown when card is
expanded (e.g. "Recommend REROUTING via Cape of Good Hope for all
non-essential Gulf cargo")
Changes:
- Proto: add risk_summary and risk_report_action to TransitSummary
- Relay: extract risk_report.action in seedCorridorRisk, pass both
fields through seedTransitSummaries
- Handler: pass through to API response + include in description
- UI: riskSummary in risk row, riskReportAction in expanded view
- Add /api/forecast/v1/get-forecasts to RPC_CACHE_TIER as 'medium'
(the route-cache-tier test requires that every GET route have an explicit entry)
- Fix transitSummary test regex to accept optional field syntax (?:)
from proto codegen v0.7.0
* feat(forecast): add AI Forecasts prediction module (Pro-tier)
MiroFish-inspired prediction engine that generates structured forecasts
across 6 domains (conflict, market, supply chain, political, military,
infrastructure) using existing WorldMonitor data streams.
- Proto definitions for ForecastService with GetForecasts RPC
- Dedicated seed script (seed-forecasts.mjs) with 6 domain detectors,
cross-domain cascade resolver, prediction market calibration, and
trend detection via prior snapshot comparison
- Premium-gated RPC handler (PREMIUM_RPC_PATHS enforcement)
- Lazy-loaded ForecastPanel with domain filters, probability bars,
trend arrows, signal evidence, and cascade links
- Health monitoring integration (seed-meta freshness tracking)
- Refresh scheduler with API key guard
* test(forecast): add 47 unit tests for forecast detectors and utilities
Covers forecastId, normalize, resolveCascades, calibrateWithMarkets,
computeTrends, and smoke tests for all 6 domain detectors. Exports
testable functions from seed script with direct-run guard.
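A minimal sketch of the export-with-direct-run-guard pattern described above, so tests can import the helpers without triggering the seed. The forecastId format is an assumption for illustration, not the real implementation.

```javascript
// Testable helper exported from the seed script (format is illustrative).
function forecastId(domain, key) {
  return `${domain}:${String(key).toLowerCase().replace(/[^a-z0-9]+/g, '-')}`;
}

// Direct-run guard (CJS form shown; an .mjs seed script would compare
// import.meta.url against process.argv[1] instead). The seed pipeline
// only runs when the file is executed directly, never on import.
if (typeof require !== 'undefined' && require.main === module) {
  // ... run the full seed pipeline here ...
}
```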
* fix(forecast): domain mismatch 'infra' vs 'infrastructure', add panel category
- Seed script used 'infra' but ForecastPanel filtered on 'infrastructure',
causing Infra tab to show zero results
- Added 'forecast' to intelligence category in PANEL_CATEGORY_MAP
* fix(forecast): move CSS to one-time injection, improve type safety
- P2: Move style block from setContent to one-time document.head injection
to prevent CSS accumulation on repeated renders
- P3: Replace +toFixed(3) with Math.round for readability in seed script
- P3: Use Forecast type instead of any[] in RPC handler filter
* fix(forecast): handle sebuf proto data shapes from Redis
Detectors now normalize CII scores from server-side proto format
(combinedScore, TREND_DIRECTION_RISING, region) to uniform shape.
Outage severity handles proto enum format (SEVERITY_LEVEL_HIGH).
Added confidence floor of 0.3 for single-source predictions.
Verified against live Redis: 2 predictions generated (Iran infra
shutdown, IL political instability).
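A hedged sketch of the normalization described above. The field names (combinedScore, the TREND_DIRECTION_ prefix) come from the commit text; everything else, including the exact output shape, is an assumption.

```javascript
// Normalize a CII entry from the server-side proto shape to a uniform one.
function normalizeCiiEntry(entry) {
  const score = entry.score ?? entry.combinedScore ?? 0;
  const trend = String(entry.trend ?? '')
    .replace(/^TREND_DIRECTION_/, '') // proto enum prefix, per the commit
    .toLowerCase();
  return { region: entry.region ?? 'unknown', score, trend };
}

// Confidence floor of 0.3 for single-source predictions.
function applyConfidenceFloor(confidence, sourceCount) {
  return sourceCount <= 1 ? Math.max(confidence, 0.3) : confidence;
}
```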
* feat(forecast): unlock AI Forecasts on web, lock desktop only (trial)
- Remove forecast RPC from PREMIUM_RPC_PATHS (web access is free)
- Panel locked on desktop only (same as oref-sirens/telegram-intel)
- Remove API key guards from data-loader and refresh scheduler
- Web users get full access during trial period
* chore: regenerate proto types with make generate
Re-ran make generate after rebasing on main. Plugin v0.7.0 dropped
@ts-nocheck from the output; re-added it to all 50 generated files.
Fixed 4 type errors from proto codegen changes:
- MarketSource enum -> string union type
- TemporalAnomalyProto -> TemporalAnomaly rename
- webcam lastUpdated number -> string
* fix(forecast): use chokepoints v4 key, include ciiContribution in unrest
- P1: Switch chokepoints input from stale v2 to active v4 Redis key,
matching bootstrap.js and cache-keys.ts
- P2: Add ciiContribution to unrest component fallback chain in
normalizeCiiEntry so political detector reads the correct sebuf field
* feat(forecast): Phase 2 LLM scenario enrichment + confidence model
MiroFish-inspired enhancements:
- LLM scenario narratives via Groq/OpenRouter (narrative-only, no numeric
adjustment). Evidence-grounded prompts with mandatory signal citation
and few-shot examples from MiroFish's SECTION_SYSTEM_PROMPT_TEMPLATE.
- Top-4 predictions batched into single LLM call for cost efficiency.
- News context from newsInsights attached to all predictions for LLM
prompt grounding (NOT in signals, cannot affect confidence).
- Deterministic confidence model: source diversity via SIGNAL_TO_SOURCE
mapping (deduplicates cii+cii_delta, theater+indicators) + calibration
agreement from prediction market drift. Floor 0.2, ceiling 1.0.
- Output validation: rejects scenarios without signal references.
- Truncated JSON repair for small model output.
- Structured JSON logging for LLM calls.
- Redis cache for LLM scenarios (1h TTL).
- 23 new tests (70 total), all passing.
- Live-tested: OpenRouter gemini-2.5-flash produces evidence-grounded
scenario narratives from real WorldMonitor data.
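The deterministic confidence model above can be sketched as follows. The SIGNAL_TO_SOURCE dedup pairs (cii/cii_delta, theater/indicators) and the 0.2 floor / 1.0 ceiling come from the commit; the weighting constants are invented for the sketch.

```javascript
// Map signal types to their upstream source so correlated signals
// (e.g. cii and cii_delta) count once toward diversity.
const SIGNAL_TO_SOURCE = {
  cii: 'cii',
  cii_delta: 'cii',
  theater: 'theater',
  indicators: 'theater',
};

function computeConfidence(signals, marketAgreement = 0) {
  // Source diversity counts distinct upstream sources, not raw signals.
  const sources = new Set(signals.map((s) => SIGNAL_TO_SOURCE[s.type] ?? s.type));
  const diversity = Math.min(sources.size / 4, 1); // saturates at 4 sources
  const raw = 0.2 + 0.6 * diversity + 0.2 * marketAgreement;
  return Math.min(1, Math.max(0.2, raw)); // floor 0.2, ceiling 1.0
}
```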
* feat(forecast): Phase 3 multi-perspective scenarios, projections, data-driven cascades
MiroFish-inspired enhancements:
- Multi-perspective LLM analysis: top-2 predictions get strategic,
regional, and contrarian viewpoints via combined LLM call
- Probability projections: domain-specific decay curves (h24/d7/d30)
anchored to timeHorizon so probability equals projections[timeHorizon]
- Data-driven cascade rules: moved from hardcoded array to JSON config
(scripts/data/cascade-rules.json) with schema validation, named
predicate evaluators, unknown key rejection, and fallback to defaults
- 4 new cascade paths: infrastructure->supply_chain, infrastructure->market
(both requiresSeverity:total), conflict->political, political->market
- Proto: added Perspectives and Projections messages to Forecast
- ForecastPanel: renders projections row and conditional perspectives toggle
- 89 tests (19 new), all passing
- Live-tested: OpenRouter produces perspectives from real data
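The horizon-anchoring invariant above (probability equals projections[timeHorizon]) can be sketched like this. The decay multipliers are invented for illustration; only the invariant itself comes from the commit.

```javascript
// Per-horizon decay curves; the diagonal is 1 so the anchored horizon
// reproduces the base probability exactly (multiplier values invented).
const DECAY = {
  h24: { h24: 1, d7: 0.7, d30: 0.45 },
  d7: { h24: 1.2, d7: 1, d30: 0.8 },
  d30: { h24: 1.4, d7: 1.15, d30: 1 },
};

function projectProbability(probability, timeHorizon) {
  const curve = DECAY[timeHorizon] ?? DECAY.d7;
  const clamp = (p) => Math.min(0.99, Math.max(0.01, p));
  return {
    h24: clamp(probability * curve.h24),
    d7: clamp(probability * curve.d7),
    d30: clamp(probability * curve.d30),
  };
}
```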
* feat(forecast): Phase 4 data utilization + entity graph
Fixes data gaps that prevented 4 of 6 detectors from firing:
- Input normalizers: chokepoint v4 shape + GPS hexes-to-zones mapping
- Chokepoint warm-ping (production-only, requires WM_API_BASE_URL)
- Lowered CII conflict threshold from 70 to 60, gated on level=high|critical
4 new standalone detectors:
- UCDP conflict zones (10+ events per country)
- Cyber threat concentration (5+ threats per country)
- GPS jamming in maritime shipping zones (5 regions)
- Prediction markets as signals (60-90% probability markets)
Entity-relationship graph (file-based, 38 nodes):
- Countries, theaters, commodities, chokepoints, alliances
- Alias table resolves both ISO codes and display names
- Graph cascade discovery links predictions across entities
Result: 51 predictions (up from 1-2), spanning conflict, infrastructure,
and supply chain domains. 112 tests, all passing.
* fix(forecast): redis cache format, signal source mapping, type safety
Fresh-eyes audit fixes:
- BUG: redisSet used wrong Upstash API format (POST body with {value,ex}
instead of command array ['SET',key,value,'EX',ttl]). LLM cache writes
were silently failing, causing fresh LLM calls every run.
- BUG: prediction_market signal type missing from SIGNAL_TO_SOURCE,
inflating confidence for market-derived predictions.
- CLEANUP: Remove unnecessary (f as any) casts in ForecastPanel since
generated Forecast type already has projections/perspectives fields.
- CLEANUP: Bump health maxStaleMin from 60 to 90 to avoid false STALE
alerts when LLM calls add latency to seed runs.
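The corrected write path can be sketched as below, assuming Upstash's REST API where a command is POSTed as a JSON array (the wrong-shape vs right-shape contrast is exactly the bug described above). Env var names are illustrative.

```javascript
// Correct shape: a command array, not a {value, ex} body.
function buildSetCommand(key, value, ttlSeconds) {
  return ['SET', key, value, 'EX', ttlSeconds];
}

async function redisSet(key, value, ttlSeconds) {
  const res = await fetch(process.env.UPSTASH_REDIS_REST_URL, {
    method: 'POST',
    headers: { Authorization: `Bearer ${process.env.UPSTASH_REDIS_REST_TOKEN}` },
    body: JSON.stringify(buildSetCommand(key, value, ttlSeconds)),
  });
  // Surface failures instead of silently dropping cache writes.
  if (!res.ok) console.warn(`redisSet failed for ${key}: ${res.status}`);
  return res.ok;
}
```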
* feat(forecast): headline-entity matching with news corroboration signals
Uses entity graph aliases to match headlines to predictions by
country/theater (excludes commodity/infrastructure nodes to prevent
false positives). Predictions with matching headlines get a
news_corroboration signal visible in the panel.
Also fixes buildUserPrompt to merge unique headlines from ALL
predictions in the LLM batch (was only reading preds[0].newsContext).
Live-tested: 13 of 51 predictions now have corroborating headlines
(Iran, Israel, Syria, Ukraine, etc). 116 tests, all passing.
* feat(forecast): add country-codes.json for headline-entity matching
56 countries with ISO codes, full names, and scoring keywords (extracted
from src/config/countries.ts + UCDP-relevant additions). Used by
attachNewsContext for richer headline matching via getSearchTermsForRegion
which combines country-codes + entity graph + keyword aliases.
14/57 predictions now have news corroboration (limited by headline
coverage, not matching quality: only 8 headlines currently available).
* feat(forecast): read 300 headlines from news digest instead of 8
Read news:digest:v1:full:en (300 headlines across 16 categories) instead
of just news:insights:v1 topStories (8 headlines). Fallback to topStories
if digest is unavailable.
Result: news corroboration jumped from 25% to 64% (38/59 predictions).
* fix(forecast): handle parenthetical country names in headline matching
Strip suffixes like '(Zaire)', '(Burma)', '(Soviet Union)' from UCDP
region names before matching against country-codes.json. Also use
includes() for reverse name lookup to catch partial matches.
Corroboration: 64% -> 69% (41/59). Remaining 18 unmatched are countries
with no current English-language news coverage.
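The suffix strip and includes() reverse lookup described above, as a small sketch (the countries list shape is an assumption):

```javascript
// Drop a trailing parenthetical like '(Zaire)' before lookup.
function stripParenthetical(name) {
  return name.replace(/\s*\([^)]*\)\s*$/, '').trim();
}

// Reverse name lookup using includes() so partial matches still hit.
function findCountryByName(countries, ucdpName) {
  const clean = stripParenthetical(ucdpName);
  return (
    countries.find(
      (c) => c.name === clean || clean.includes(c.name) || c.name.includes(clean)
    ) ?? null
  );
}
```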
* fix(forecast): cache validated LLM output, add digest test, log cache errors
Fresh-eyes audit fixes:
- Combined LLM cache now stores only validated items (was caching raw
unvalidated output, serving potentially invalid scenarios on cache hit)
- redisSet logs warnings on failure (was silently swallowing all errors)
- Added digest-based test for attachNewsContext (primary path was untested)
- Fixed test arity: attachNewsContext(preds, news, digest) with 3 params
* fix(forecast): remove dead confidenceFromSources, reduce warm-ping timeout
- P2: Remove confidenceFromSources (dead code, computeConfidence overwrites
all initial confidence values). Inline the formula in original detectors.
- P3: Reduce warm-ping timeout from 30s to 15s (non-critical step)
- P3: Add trial status comment on forecast panel config
* fix(forecast): resolve ISO codes to country names, fix market detector, safe pre-push
P1 fixes from code review:
- CII ISO codes (IL, IR) now resolved to full country names (Israel, Iran)
via country-codes.json. Prevents substring false positives (IL matching
Chile) in event correlation. Uses word-boundary regex for matching.
- Market detector CII-to-theater mapping now uses entity graph traversal
instead of broken theater-name substring matching. Iran correctly maps
to Middle East theater via graph links.
- Pre-push hook no longer runs destructive git checkout on proto freshness
failure. Reports mismatch and exits without modifying worktree.
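The ISO-resolution plus word-boundary matching from the first fix above can be sketched like this. The two-entry ISO table is a sample; the real lookup goes through country-codes.json.

```javascript
const ISO_TO_NAME = { IL: 'Israel', IR: 'Iran' }; // sample entries

function mentionsCountry(text, codeOrName) {
  // Resolve ISO code to a full name first, then match on word
  // boundaries, so 'IL' can never substring-match 'Chile'.
  const name = ISO_TO_NAME[codeOrName] ?? codeOrName;
  const escaped = name.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  return new RegExp(`\\b${escaped}\\b`, 'i').test(text);
}
```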
* perf: reduce uncached API calls via client-side circuit breaker caches
Add client-side circuit breaker caches with IndexedDB persistence to the
top 3 uncached API endpoints (CF analytics: 10.5M uncached requests/day):
- classify-events (5.37M/day): 6hr cache per normalized title, shouldCache
guards against caching null/transient failures
- get-population-exposure (3.45M/day): 6hr cache per coordinate key
(toFixed(4) for ~11m precision), 64-entry LRU
- summarize-article (1.68M/day): 2hr cache per headline-set hash via
buildSummaryCacheKey, eliminates both cache-check and summarize RPCs
Fix workbox-*.js getting no-cache headers (3.62M/day): exclude from SPA
catch-all regex in vercel.json, add explicit immutable cache rule for
content-hashed workbox files.
Migrate USNI fleet fetch from Vercel edge to Railway relay (gold standard):
- Add seedUSNIFleet() loop to ais-relay.cjs (6hr interval, gzip support)
- Make server handler Redis-read-only (435 lines reduced to 38)
- Move usniFleet from ON_DEMAND to BOOTSTRAP_KEYS in health.js
- Add persistCache + shouldCache to client breaker
Estimated reduction: ~14.3M uncached requests/day.
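An in-memory sketch of the cache pattern the bullets above describe: TTL per key, a small LRU bound, and a shouldCache guard so null or transient failures are never cached. The real client also persists to IndexedDB; this sketch omits that.

```javascript
class BreakerCache {
  constructor(maxEntries, ttlMs) {
    this.max = maxEntries;
    this.ttl = ttlMs;
    this.map = new Map(); // insertion order doubles as LRU order
  }
  get(key) {
    const hit = this.map.get(key);
    if (!hit || Date.now() > hit.expires) {
      this.map.delete(key);
      return undefined;
    }
    this.map.delete(key);
    this.map.set(key, hit); // refresh LRU position
    return hit.value;
  }
  set(key, value, shouldCache = (v) => v != null) {
    if (!shouldCache(value)) return; // never cache null/transient failures
    if (this.map.size >= this.max) {
      this.map.delete(this.map.keys().next().value); // evict oldest
    }
    this.map.set(key, { value, expires: Date.now() + this.ttl });
  }
}
```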
* fix: address code review findings (P1 + P2)
P1: Include SummarizeOptions in summary cache key to prevent cross-option
cache pollution (e.g. cloud summary replayed after user disables cloud LLMs).
P2: Document that forceRefresh is intentionally ignored now that USNI
fetching moved to Railway relay (Vercel is Redis-read-only).
* fix: reject forceRefresh explicitly instead of silently ignoring it
Return an error response with explanation when forceRefresh=true is sent,
rather than silently returning cached data. Makes the behavior regression
visible to any caller instead of masking it.
* fix(build): set worker.format to 'es' for Vite 6 compatibility
Vite 6 defaults worker.format to 'iife' which fails with code-splitting
workers (analysis.worker.ts uses dynamic imports). Setting 'es' fixes
the Vercel production build.
* fix(test): update deploy-config test for workbox regex exclusion
The SPA catch-all regex test hard-coded the old pattern without the
workbox exclusion. Update to match the new vercel.json source pattern.
* fix(supply-chain): increase Redis timeout for PortWatch and remove content height cap
Root cause: getCachedJson has a 1500ms timeout, but the PortWatch
payload (~149KB for 13 chokepoints x 175 days) exceeds this on
high-latency Edge regions. The fetch silently times out and returns
null, so the handler builds responses with empty transit summaries.
Fix: add optional timeoutMs param to getCachedJson, use 5000ms for
the PortWatch fetch. Also remove the 300px max-height on
.economic-content so the Supply Chain panel fills available height.
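The optional-timeout mechanism above can be sketched as a race against a timer that resolves null, which is exactly the "silently times out and returns null" behavior being tuned (the helper name here is illustrative):

```javascript
// Race a promise against a timeout; resolves null if the timeout wins.
async function withTimeout(promise, timeoutMs) {
  let timer;
  const timeout = new Promise((resolve) => {
    timer = setTimeout(() => resolve(null), timeoutMs);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    clearTimeout(timer); // avoid a dangling timer when the data wins
  }
}
```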
* refactor(supply-chain): move transit summary assembly to Railway relay
Vercel Edge was reading 3 large Redis keys (PortWatch 149KB, transit
counts, CorridorRisk) and assembling transit summaries on every request.
The 1500ms Redis timeout caused the 149KB PortWatch fetch to silently
fail on high-latency Edge regions (Mumbai bom1), leaving all transit
data empty.
Now Railway builds the pre-assembled transit summaries (including
anomaly detection) and writes them to a single key. Vercel reads
ONE small pre-built key instead of 3 raw keys.
Flow: Railway seeds PortWatch + transit counts -> builds summaries ->
writes supply_chain:transit-summaries:v1 -> Vercel reads it.
This follows the gold standard: "Vercel reads Redis ONLY; Railway
makes ALL external API calls and data assembly."
* test(supply-chain): add sync tests for relay threat levels and name mappings
detectTrafficAnomalyRelay and CHOKEPOINT_THREAT_LEVELS in the relay are
duplicated from _scoring.mjs and get-chokepoint-status.ts because
ais-relay.cjs is CJS. Added sync tests that validate:
- Every canonical chokepoint has a relay threat level
- Relay threat levels match handler CHOKEPOINTS config
- RELAY_NAME_TO_ID covers all canonical chokepoints
This catches drift between the two source-of-truth files.
* fix(ui): restore bounded scroll on economic-content with flex layout
The previous fix replaced max-height: 300px with flex: 1 1 auto, but
.panel-content was not a flex container so the flex rule was ignored.
This caused tabs to scroll away with the content.
Fix: use :has(.economic-content) to make .panel-content a flex column
only for panels with tabbed economic content. Tabs stay pinned, content
area scrolls independently.
* feat(supply-chain): fix CorridorRisk API integration (open beta, no key needed)
The CorridorRisk API is in open beta at corridorrisk.io/api/corridors
(not api.corridorrisk.io/v1/corridors). No API key required during beta.
Changes:
- Fix URL to corridorrisk.io/api/corridors
- Remove API key requirement (open beta)
- Update name matching for actual API names (e.g. "Persian Gulf &
Strait of Hormuz" -> hormuz_strait)
- Derive riskLevel from score (>=70 critical, >=50 high, etc.)
- Store riskScore, vesselCount, eventCount7d, riskSummary
- Feed CorridorRisk data into transit summaries
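The score-to-level derivation above, as a sketch. The >=70 critical and >=50 high thresholds come from the commit; the lower bands are assumptions.

```javascript
function deriveRiskLevel(score) {
  if (score >= 70) return 'critical';
  if (score >= 50) return 'high';
  if (score >= 30) return 'medium'; // assumed band
  return 'low';                     // assumed band
}
```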
* test(supply-chain): comprehensive transit summary integration tests
75 tests across 10 suites covering:
- Relay seedTransitSummaries assembly (Redis key, fields, triggers)
- CorridorRisk name mapping and risk level derivation from score
- Handler reads pre-built summaries (not raw upstream keys)
- Handler isolation: no PortWatchData/CorridorRiskData/CANONICAL_CHOKEPOINTS imports
- detectTrafficAnomalyRelay sync with _scoring.mjs (side-by-side execution)
- detectTrafficAnomaly edge cases (boundaries, threat levels, unsorted history)
- CHOKEPOINT_THREAT_LEVELS relay-handler sync validation
* fix(supply-chain): hydrate transit summaries from Redis on relay restart
After relay restart, latestPortwatchData and latestCorridorRiskData are
null. The initial seedTransitSummaries call (35s after boot) would
return early with no data, leaving the transit-summaries:v1 key stale
until the next PortWatch seed completes (6+ hours later).
Fix: seedTransitSummaries now reads persisted PortWatch and CorridorRisk
data from Redis when in-memory state is empty. This covers the cold-start
gap so Vercel always has fresh transit summaries.
Also adds 5 tests validating the hydration path order and assignment.
* fix(supply-chain): add fallback to raw Redis keys when pre-built summaries are empty
P1: If supply_chain:transit-summaries:v1 is absent (relay not deployed,
restart in progress, or transient PortWatch failure), the handler now
falls back to reading the raw portwatch, corridorrisk, and transit count
keys directly and assembling summaries on the fly.
This ensures corridor risk data (riskLevel, incidentCount7d, disruptionPct)
is never silently zeroed out, and users keep history/counts even during
the 6-hour PortWatch re-seed window.
Strategy: pre-built summaries (fast path) -> raw keys fallback (slow path)
-> all-zero defaults (last resort).
* feat(supply-chain): detect AIS dark-transit anomalies in war zones
When PortWatch history shows >50% traffic drop in war_zone or critical
chokepoints, surface it as intelligence: "Traffic down X% vs 30-day
baseline — vessels may be transiting dark (AIS off)".
The absence of AIS signals in conflict zones like Hormuz is itself a
signal (vessels disabling transponders to avoid targeting).
Changes:
- Add detectTrafficAnomaly() comparing 7-day vs 30-day baseline
- Boost disruption score by 10 when traffic anomaly detected
- Show WoW% from PortWatch even when real-time AIS counts are 0
- 6 new tests for anomaly detection edge cases
* fix(supply-chain): clamp disruptionScore to 100 and dedupe anomaly function
P1: disruptionScore could exceed 100 when anomalyBonus was added on top
of a max-score base, rendering "110/100" in the UI. Now clamped before
assignment, not just for status.
P2: detectTrafficAnomaly was duplicated in test file, so regressions in
the real code path would go undetected. Moved function into _scoring.mjs
(pure, no server deps). Both handler and tests import the same function.
* fix(supply-chain): require 37 days for traffic anomaly detection
detectTrafficAnomaly needs 7 recent + 30 baseline days. The threshold
was 30, which would use a partial baseline (23 days). Now correctly
requires 37 rows before signaling.
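A sketch of the baseline comparison with the corrected 37-row requirement (7 recent + 30 baseline days). Input shape and the >50% drop trigger are assumptions consistent with the commits above.

```javascript
// rows: daily vessel totals, oldest first. Returns the drop vs the
// 30-day baseline when it exceeds 50%, else null.
function detectTrafficAnomaly(rows) {
  if (rows.length < 37) return null; // need 7 recent + 30 baseline days
  const avg = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const recentAvg = avg(rows.slice(-7));
  const baselineAvg = avg(rows.slice(-37, -7));
  if (baselineAvg === 0) return null;
  const dropPct = Math.round((1 - recentAvg / baselineAvg) * 100);
  return dropPct > 50 ? { dropPct } : null;
}
```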
* feat: harness engineering P0 - linting, testing, architecture docs
Add foundational infrastructure for agent-first development:
- AGENTS.md: agent entry point with progressive disclosure to deeper docs
- ARCHITECTURE.md: 12-section system reference with source-file refs and ownership rule
- Biome 2.4.7 linter with project-tuned rules, CI workflow (lint-code.yml)
- Architectural boundary lint enforcing forward-only dependency direction (lint-boundaries.mjs)
- Unit test CI workflow (test.yml), all 1083 tests passing
- Fixed 9 pre-existing test failures (bootstrap sync, deploy-config headers, globe parity, redis mocks, geometry URL, import.meta.env null safety)
- Fixed 12 architectural boundary violations (types moved to proper layers)
- Added 3 missing cache tier entries in gateway.ts
- Synced cache-keys.ts with bootstrap.js
- Renamed docs/architecture.mdx to "Design Philosophy" with cross-references
- Deprecated legacy docs/Docs_To_Review/ARCHITECTURE.md
- Harness engineering roadmap tracking doc
* fix: address PR review feedback on harness-engineering-p0
- countries-geojson.test.mjs: skip gracefully when CDN unreachable
instead of failing CI on network issues
- country-geometry-overrides.test.mts: relax timing assertion
(250ms -> 2000ms) for constrained CI environments
- lint-boundaries.mjs: implement the documented api/ boundary check
(was documented but missing, causing false green)
* fix(lint): scan api/ .ts files in boundary check
The api/ boundary check only scanned .js/.mjs files, missing the 25
sebuf RPC .ts edge functions. Now scans .ts files with correct rules:
- Legacy .js: fully self-contained (no server/ or src/ imports)
- RPC .ts: may import server/ and src/generated/ (bundled at deploy),
but blocks imports from src/ application code
* fix(lint): detect import() type expressions in boundary lint
- Move AppContext back to app/app-context.ts (aggregate type that
references components/services/utils belongs at the top, not types/)
- Move HappyContentCategory and TechHQ to types/ (simple enums/interfaces)
- Boundary lint now catches import('@/layer') expressions, not just
from '@/layer' imports
- correlation-engine imports of AppContext marked boundary-ignore
(type-only imports of top-level aggregate)
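The two import shapes the boundary lint now catches can be sketched with a pair of regexes (patterns illustrative, not the real lint's):

```javascript
const IMPORT_PATTERNS = [
  /from\s+['"](@\/[^'"]+)['"]/g,          // static: import x from '@/layer'
  /import\(\s*['"](@\/[^'"]+)['"]\s*\)/g, // dynamic/type: import('@/layer')
];

function findLayerImports(source) {
  const found = [];
  for (const re of IMPORT_PATTERNS) {
    for (const m of source.matchAll(re)) found.push(m[1]);
  }
  return found;
}
```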
* fix(supply-chain): correct PortWatch ArcGIS service URL, field names, and chokepoint mappings
The PortWatch seed was returning no data because the ArcGIS service name,
WHERE clause fields, date field, and chokepoint names were all wrong.
Verified all 12 chokepoints return 175 days of data against the live API.
Added error logging to pwFetchAllPages for future debugging.
* fix(supply-chain): sync geofence names with relayName renames
CHOKEPOINT_GEOFENCES in ais-relay.cjs still used old names
('Strait of Malacca', 'Bab el-Mandeb', 'Strait of Gibraltar')
while _chokepoint-ids.ts relayName was updated. buildRelayLookup
does exact string match, so these 3 chokepoints had zero transit
counts despite relay data being present.
Rename geofence entries to match the new relayName values and
update corresponding test assertions.
* fix(supply-chain): correct PortWatch ArcGIS URL, field names, and chokepoint mappings
The PortWatch seed was failing silently because:
1. Wrong service name: portal_chokepoint_daily -> Daily_Chokepoints_Data
2. Wrong query fields: chokepoint/observation_date -> portname/date (epoch)
3. Wrong data model: expected one row per vessel type, actual schema has
all counts as columns (n_tanker, n_cargo, n_total) per row
4. Wrong chokepoint names: e.g. "Strait of Malacca" -> "Malacca Strait",
"Bab el-Mandeb" -> "Bab el-Mandeb Strait", "Bosphorus" -> "Bosporus Strait"
5. Removed Dardanelles (not in PortWatch dataset)
Discovered via IMF PortWatch ArcGIS service directory and returnDistinctValues
query on the portname field.
* feat(supply-chain): add Korea, Dover, Kerch, Lombok chokepoints
Extend from 10 to 14 monitored chokepoints using PortWatch data
availability. All 4 new straits have IMF PortWatch coverage.
- Korea Strait: Japan-Korea trade, busiest East Asia corridor
- Dover Strait: world's busiest shipping lane
- Kerch Strait: war_zone (Russia controls, Ukraine grain restricted)
- Lombok Strait: Malacca bypass for VLCCs
Added to: handler config, canonical ID map, PortWatch seed names,
AIS relay transit counter, tests.
* docs: update maritime docs and changelog for 14 chokepoints + transit intelligence
- maritime-intelligence.mdx: 9 -> 14 chokepoints, add data source descriptions,
add chart rendering note
- changelog.mdx + CHANGELOG.md: add [Unreleased] section for #1560 and #1572
* fix(tests): update portwatch test for pre-aggregated column model
pwClassifyVesselType was removed when switching to pre-aggregated
n_tanker/n_cargo/n_total columns. Update test to verify the new
field names instead.
* fix(supply-chain): sync canonical PortWatch names with actual ArcGIS feed
P1: Dardanelles has no PortWatch data (0 rows). Set portwatchName to empty
string so it won't attempt fetch or show phantom zero history.
P2: portwatchNameToId() returned undefined for Malacca Strait, Bab el-Mandeb
Strait, Gibraltar Strait, Bosporus Strait because canonical map used
old names instead of actual ArcGIS portname values.
Fixed mappings:
Strait of Malacca -> Malacca Strait
Bab el-Mandeb -> Bab el-Mandeb Strait
Strait of Gibraltar -> Gibraltar Strait
Bosphorus -> Bosporus Strait
Dardanelles -> '' (not in PortWatch)
* refactor(supply-chain): merge Dardanelles into Turkish Straits
IMF PortWatch tracks Bosphorus+Dardanelles as a single corridor
(Bosporus Strait). Keeping them separate caused double-counting in
AIS transit data and left Dardanelles with permanently empty history.
- Merge into single "Turkish Straits" entry (id stays 'bosphorus')
- Absorb all Dardanelles keywords (canakkale, gallipoli, aegean)
- Single wider AIS geofence (lat 40.70, lon 28.0, radius 1.5)
- 14 -> 13 chokepoints
- Update docs, changelog, tests
* fix: rename Turkish Straits to Bosporus Strait (match PortWatch naming)
* feat(supply-chain): replace S&P Global with 3 free maritime data sources
Replace expensive S&P Global Maritime API with IMF PortWatch (vessel transit
counts), CorridorRisk (risk intelligence), and AISStream chokepoint crossing
counter. All external API calls run on Railway relay, Vercel reads Redis only.
- Add 4 new chokepoints (10 total): Cape of Good Hope, Gibraltar, Bosphorus, Dardanelles
- Add TransitSummary proto (field 14) with today counts, WoW%, 180d history, risk context
- Add D3 multi-line chart (tanker vs cargo) with expandable chokepoint cards
- Add crossing detection with enter+dwell+exit semantics, 30min cooldown, 5min min dwell
- Add PortWatch seed loop (6h), CorridorRisk seed loop (1h), transit seed loop (10min)
- Add canonical chokepoint ID map for cross-source name resolution
- 177 tests passing across 6 test files
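The enter+dwell+exit crossing semantics above can be sketched as a small per-vessel state machine. The 5min minimum dwell and 30min cooldown come from the commit; everything else is illustrative.

```javascript
const MIN_DWELL_MS = 5 * 60 * 1000;  // minimum dwell inside the geofence
const COOLDOWN_MS = 30 * 60 * 1000;  // per-vessel cooldown between counts

function createCrossingTracker() {
  const state = new Map(); // vesselId -> { enteredAt, lastCountedAt }
  return function onPosition(vesselId, insideGeofence, now) {
    const s = state.get(vesselId) ?? { enteredAt: null, lastCountedAt: -Infinity };
    let counted = false;
    if (insideGeofence) {
      if (s.enteredAt === null) s.enteredAt = now; // enter
    } else if (s.enteredAt !== null) {
      const dwell = now - s.enteredAt;             // exit
      if (dwell >= MIN_DWELL_MS && now - s.lastCountedAt >= COOLDOWN_MS) {
        counted = true;                            // genuine crossing
        s.lastCountedAt = now;
      }
      s.enteredAt = null;
    }
    state.set(vesselId, s);
    return counted;
  };
}
```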
* fix(supply-chain): address P2 review findings
- Discard partial PortWatch pagination results on mid-page failure (prevents
truncated history with wrong WoW numbers cached for 6h)
- Rename "Transit today" to "24h" label (rolling 24h window, not calendar day)
- Fix chart label from "30d" to "180d" (matches actual PortWatch query range)
- Add 30s initial seed for chokepoint transits on relay cold start (prevents
10min gap of zero transit data)
* feat(supply-chain): swap D3 chart for TradingView lightweight-charts
Replace hand-rolled D3 SVG transit chart with lightweight-charts v5 canvas
rendering for Bloomberg-quality time-series visualization.
- Add TransitChart helper class with mount/destroy lifecycle, theme listener,
and autoSize support
- Use MutationObserver (not rAF) to mount chart after setContent debounce
- Clean up chart on tab switch, collapse, and re-render (no orphaned canvases)
- Respond to theme-changed events via chart.applyOptions()
- D3 stays for other 5 components (ProgressCharts, RenewableEnergy, etc.)
* feat(supply-chain): add geo coords and trade routes for 4 new chokepoints
Cherry-pick from PR #1511: Cape of Good Hope, Gibraltar, Bosphorus, and
Dardanelles map-layer coordinates and trade route definitions.
* fix(supply-chain): health.js v2->v4 key + double cache TTLs for missed seeds
- health.js chokepoints key was still v2, now v4 (matches handler + bootstrap)
- PortWatch TTL: 21600s (6h) -> 43200s (12h), seed interval stays 6h
- CorridorRisk TTL: 3600s (1h) -> 7200s (2h), seed interval stays 1h
- Ensures one missed seed run doesn't expire the key and cause empty data
Moves isProviderAvailable() check from before cachedFetchJson() to inside
the fetcher callback. This ensures cache hits still serve valid data during
provider outages instead of returning empty results.
Changes:
- classify-event: health gate moved inside cachedFetchJson callback
- deduct-situation: same
- get-country-intel-brief: same
- summarize-article: same
- _batch-classify: break -> return results on health gate failure
- callLlm (llm.ts): health gate added to provider chain
- local-api-server: /api/llm-health endpoint + startup warmup
Scope cleanup per review:
- Reverted LlmStatusIndicator (extracted to #1528)
- Reverted ACLED credential cleanup (extracted to #1530)
- Reverted isSidecar -> isLocalDeployment rename (extracted to #1532)
Co-authored-by: Elie Habib <elie.habib@gmail.com>
Broadens the Ollama host allowlist bypass to include Docker mode alongside
sidecar. Both are trusted local deployments where Ollama can safely bind
to non-localhost addresses.
Extracted from PR #1522 (scope split).
Co-authored-by: Jon Torrez <jrtorrez31337@users.noreply.github.com>
Railway deploys with rootDirectory=scripts/, so ../shared/ resolves to
/shared/ which doesn't exist. Move the canonical file to scripts/data/
and update all four consumers.
- Move GEOPOLITICAL_TAGS, TECH_TAGS, FINANCE_TAGS, and EXCLUDE_KEYWORDS
to shared/prediction-tags.json so seed, RPC handler, and client all
reference a single source of truth
- Remove open_interest proto field (always 0 for Polymarket, never
displayed in UI) and corresponding openInterest assignments
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Elie Habib <elie.habib@gmail.com>
Kalshi trading API returns 401 without authentication. Disable all
Kalshi fetches when KALSHI_API_KEY is not set, and pass it as a
Bearer token when present. Seed logs "disabled" instead of spamming
401 errors on every run.
* feat(predictions): add Kalshi as prediction market data source
* fix(predictions): address Kalshi integration review feedback
- Gate Kalshi fetch behind category check to avoid wasted calls on tech-scoped requests
- Replace fragile double-cast bootstrap typing with BootstrapMarket interface
- Fix zero-price falsy bug in seed script using Number.isFinite guard
- Align RPC market selection with seed script (highest-volume via single-pass loop)
- Raise Kalshi volume threshold to 5000 for signal quality parity
- Add missing .prediction-source badge CSS with per-source color variants
* fix(predictions): address P1/P2 review items for Kalshi integration
- Apply isExcluded() filter and volume threshold (5000) to live Kalshi
RPC path so cache-miss results match seed curation quality
- Include FINANCE_TAGS in seed allTags so 'markets' tag is fetched
- Align Kalshi title mapping (market.title || event.title) between
seed and RPC handler
- Remove silent geopolitical fallback for finance variant so missing
finance bootstrap falls through to RPC fetch
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(predictions): prefer yes_sub_title for Kalshi multi-contract events
For multi-contract Kalshi events (e.g. papal election candidates),
market.title is the generic event question while yes_sub_title
identifies the specific contract. Use yes_sub_title when present
in both seed and RPC paths so titles are accurate and consistent.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(predictions): use general Kalshi trading API subdomain
Switch from api.elections.kalshi.com (elections-only) to
trading-api.kalshi.com so economy, crypto, and other non-election
markets are included in the finance variant.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Elie Habib <elie.habib@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(acled): add OAuth token manager with automatic refresh
ACLED access tokens expire every 24 hours, but WorldMonitor stores a
static ACLED_ACCESS_TOKEN with no refresh logic, causing all ACLED
API calls to fail after the first day.
This commit adds `acled-auth.ts`, an OAuth token manager that:
- Exchanges ACLED_EMAIL + ACLED_PASSWORD for an access token (24h)
and refresh token (14d) via the official ACLED OAuth endpoint
- Caches tokens in memory and auto-refreshes before expiry
- Falls back to static ACLED_ACCESS_TOKEN for backward compatibility
- Deduplicates concurrent refresh attempts
- Degrades gracefully when no credentials are configured
The only change to the existing `acled.ts` is replacing the synchronous
`process.env.ACLED_ACCESS_TOKEN` read with an async call to the new
`getAcledAccessToken()` helper.
Fixes #1283
Relates to #290
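A sketch of the token-manager behavior described above (cache, refresh-before-expiry, concurrent-refresh dedup, graceful fallback). The OAuth exchange is injected here so the pattern is self-contained; the factory name, margin, and signatures are illustrative, not the exact `acled-auth.ts` API:

```typescript
type TokenResponse = { accessToken: string; expiresInMs: number };

function createTokenManager(
  exchange: () => Promise<TokenResponse>,
  staticFallback?: string,
) {
  const MARGIN_MS = 5 * 60 * 1000; // refresh 5 minutes before expiry
  let cached: { token: string; expiresAt: number } | null = null;
  let inflight: Promise<string | undefined> | null = null;

  return async function getToken(): Promise<string | undefined> {
    // Fast path: cached token still comfortably valid.
    if (cached && Date.now() < cached.expiresAt - MARGIN_MS) return cached.token;
    // Deduplicate concurrent refreshes: every caller awaits one request.
    if (!inflight) {
      inflight = exchange()
        .then((t) => {
          cached = { token: t.accessToken, expiresAt: Date.now() + t.expiresInMs };
          return t.accessToken;
        })
        .catch(() => staticFallback) // degrade gracefully to the static token
        .finally(() => { inflight = null; });
    }
    return inflight;
  };
}
```

In the real module the exchange would POST ACLED_EMAIL/ACLED_PASSWORD to the ACLED OAuth endpoint, and `staticFallback` would be the legacy ACLED_ACCESS_TOKEN env var.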
* fix: address review feedback on ACLED OAuth PR
- Use Redis (Upstash) as L2 token cache to survive Vercel Edge cold starts
(in-memory cache retained as fast-path L1)
- Add CHROME_UA User-Agent header on OAuth token exchange and refresh
- Update seed script to use OAuth flow via getAcledToken() helper
instead of raw process.env.ACLED_ACCESS_TOKEN
- Add security comment to .env.example about plaintext password trade-offs
- Sidecar ACLED_ACCESS_TOKEN case is a validation probe (tests user-provided
value, not process.env); data fetching delegates to handler modules
* feat(sidecar): add ACLED_EMAIL/ACLED_PASSWORD to env allowlist and validation
- Add ACLED_EMAIL and ACLED_PASSWORD to ALLOWED_ENV_KEYS set
- Add ACLED_EMAIL validation case (store-only, verified with password)
- Add ACLED_PASSWORD validation case with OAuth token exchange via
acleddata.com/api/acled/user/login
- On successful login, store obtained OAuth token in ACLED_ACCESS_TOKEN
- Follows existing validation patterns (Cloudflare challenge handling,
auth failure detection, User-Agent header)
* fix: address remaining review feedback (duplicate OAuth, em dashes, emoji)
- Extract shared ACLED OAuth helper into scripts/shared/acled-oauth.mjs
- Remove ~55 lines of duplicate OAuth logic from seed-unrest-events.mjs,
now imports getAcledToken from the shared helper
- Replace em dashes with ASCII dashes in acled-auth.ts section comments
- Replace em dash with parentheses in sidecar validation message
- Remove emoji from .env.example security note
Addresses koala73's second review: MEDIUM (duplicate OAuth), LOW (em
dashes), LOW (emoji).
* fix: align sidecar OAuth endpoint, fix L1/L2 cache, cleanup artifacts
- Sidecar: switch from /api/acled/user/login (JSON) to /oauth/token
(URL-encoded) to match server/_shared/acled-auth.ts exactly
- acled-auth.ts: check L2 Redis when L1 is expired, not only when L1
is null (fixes stale L1 skipping fresher L2 from another isolate)
- acled-oauth.mjs: remove stray backslash on line 9
- seed-unrest-events.mjs: remove extra blank line at line 13
---------
Co-authored-by: Elie Habib <elie.habib@gmail.com>
Co-authored-by: RepairYourTech <30200484+RepairYourTech@users.noreply.github.com>
* fix(predictions): replace volume-only sort with composite scoring, add finance variant and region ranking
The prediction panel was surfacing irrelevant near-certain markets (1%/99% meme
markets like celebrity presidential bids) because the discrepancy filter was
inverted and sorting was by volume alone.
- Replace broken discrepancy filter with composite scoring (60% uncertainty +
40% log-scaled volume) in seed script
- Add meme candidate detection and sports/entertainment keyword exclusion
- Add finance variant with dedicated tags for economy/trade/rates topics
- Add region-aware soft ranking outside circuit breaker cache
- Add input validation (category max 50, query max 100) in RPC handler
- Skip events without markets instead of defaulting to yesPrice=50
- Per-bucket relaxation safety valve when <15 markets pass strict filters
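The composite score above can be sketched as follows; the 60/40 weights come from the commit message, but the log-volume normalization constant is an assumption, not the exact value in `_prediction-scoring.mjs`:

```typescript
// Composite score: reward uncertain markets (prices near 50%) and liquid
// markets, with volume log-scaled so mega-markets cannot dominate.
function scoreMarket(yesPrice: number, volume: number): number {
  // Uncertainty peaks at 1.0 at 50% and falls to 0 at 0% or 100%.
  const uncertainty = 1 - Math.abs(yesPrice - 50) / 50;
  // Log-scale volume; the /7 normalizer (~1.0 near 10M volume) is illustrative.
  const volumeScore = Math.min(Math.log10(1 + volume) / 7, 1);
  return 0.6 * uncertainty + 0.4 * volumeScore;
}
```

Under this scoring, a 50/50 market beats a 99/1 market at equal volume, which is exactly the inversion of the old volume-only sort that let near-certain meme markets surface.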
* fix(predictions): apply region sort before truncation, add RPC fallback scoring, validate finance seed
- Keep 25 candidates from bootstrap/RPC, apply region sort, then slice to 15
(previously sliced to 15 first, making region boost ineffective for markets
ranked 16-25)
- Add client-side uncertainty scoring + near-certain filter (10-90%) for RPC
fallback path (previously fell back to Gamma's volume-only ordering)
- Include finance array in seed validation (previously only checked
geopolitical/tech, allowing broken finance data to ship silently)
* test(predictions): add 54 unit tests for scoring, filtering, and region tagging
Extract pure prediction scoring functions into shared module
(_prediction-scoring.mjs) for testability. Tests cover parseYesPrice,
isExcluded, isMemeCandidate, tagRegions, shouldInclude, scoreMarket,
filterAndScore, isExpired, plus regression tests for the meme market
surfacing bug that motivated this fix.
Statuspage.io uses the "major" indicator for any partial system outage (e.g.
1 region out of 63 being down). Previously this mapped to MAJOR_OUTAGE,
showing as red "OUTAGE" in the panel. Now "major" maps to PARTIAL_OUTAGE,
and the frontend displays PARTIAL_OUTAGE as "DEGRADED" (yellow) which
better reflects limited-scope issues.
Only "major_outage" (component-level) and "critical" now trigger the red
OUTAGE display.
* fix(tech-events): prevent partial fetch results from being cached
Techmeme ICS and dev.events RSS fetches on Vercel edge can partially
fail (timeout, truncation), returning only 1 event instead of 20+.
The handler cached this partial result for 6 hours, causing the Tech
Events panel to show empty.
- Add 8s AbortSignal.timeout on both external fetches
- Require a minimum of 5 events before caching (at least the curated count)
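The two guards above can be sketched like this; the helper names and the curated-count threshold mirror the commit message, not the handler's exact code (and a later commit in this log relaxes the threshold):

```typescript
const MIN_EVENTS = 5; // at least the curated fallback count

// Pure decision: only complete-looking results may enter the 6h cache.
function shouldCacheEvents(events: unknown[], minEvents = MIN_EVENTS): boolean {
  return events.length >= minEvents;
}

async function fetchEventsGuarded(
  url: string,
  parse: (text: string) => unknown[],
  cache: (events: unknown[]) => Promise<void>,
): Promise<unknown[]> {
  // Abort slow upstream fetches instead of letting the edge function hang.
  const res = await fetch(url, { signal: AbortSignal.timeout(8000) });
  const events = parse(await res.text());
  // A partial fetch (e.g. 1 event after truncation) must not be cached.
  if (shouldCacheEvents(events)) await cache(events);
  return events;
}
```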
* fix(tech-events): remove MIN_EVENTS threshold and add diagnostic logging
The MIN_EVENTS=5 threshold caused empty results when both external
sources fail on Vercel edge (only 4 curated events available). Now
any non-empty result is cached. Added detailed logging to diagnose why
Techmeme ICS and dev.events RSS fetches fail on Vercel edge.
Also removed past STEP Dubai 2026 event.
* fix(tech-events): route fetches through Railway relay when direct fails
Vercel edge functions cannot reliably reach Techmeme ICS and dev.events
RSS (datacenter IP blocking). Added fetchTextWithRelay() that tries
direct fetch first, then falls back to Railway relay proxy (/rss endpoint)
which fetches from a different IP. Same pattern used by news feed digest
and other handlers that hit blocked external sources.
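A sketch of that direct-then-relay fallback. The fetcher is injected so the pattern is self-contained; the `/rss?url=` query shape follows the commit message, but the relay's exact contract is an assumption:

```typescript
type TextResponse = { ok: boolean; status: number; text(): Promise<string> };
type Fetcher = (url: string) => Promise<TextResponse>;

async function fetchTextWithRelay(
  url: string,
  relayBase: string,
  doFetch: Fetcher,
): Promise<string> {
  try {
    // Try the direct fetch first; it is cheaper when the source allows it.
    const res = await doFetch(url);
    if (res.ok) return await res.text();
    throw new Error(`direct fetch failed: ${res.status}`);
  } catch {
    // Datacenter-IP blocking or timeout: retry via the relay proxy, which
    // fetches the same URL from a different egress IP.
    const res = await doFetch(`${relayBase}/rss?url=${encodeURIComponent(url)}`);
    if (!res.ok) throw new Error(`relay fetch failed: ${res.status}`);
    return res.text();
  }
}
```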
* feat(tech-events): gold standard pipeline with Railway seed + bootstrap hydration
Full data pipeline overhaul to match project conventions:
- Add tech events seed loop to ais-relay.cjs: fetches Techmeme ICS +
dev.events RSS every 6h from Railway (avoids Vercel IP blocking),
parses both sources, merges with curated fallback events, writes to
Redis (data key + bootstrap key + seed-meta)
- Register in api/bootstrap.js BOOTSTRAP_CACHE_KEYS (SLOW tier)
- Register in api/health.js BOOTSTRAP_KEYS + SEED_META (420min stale)
- Restructure RPC handler: reads from single broad Redis key (populated
by seed), applies geocoding + filtering in-memory per request params.
Fallback fetcher only runs on cold start before first seed
- TechEventsPanel: check getHydratedData('techEvents') from bootstrap
before falling back to RPC call
- data-loader: use hydrated bootstrap data for map layer, RPC fallback
* feat(desktop): compile domain handlers + add in-memory sidecar cache
The sidecar was broken for all 23 sebuf/RPC domain routes because
the build script (build-sidecar-handlers.mjs) never existed on main
while package.json already referenced it. This adds the missing script
and an in-memory TTL+LRU cache so the sidecar doesn't need Upstash Redis.
- Add scripts/build-sidecar-handlers.mjs (esbuild multi-entry, 23 domains)
- Add server/_shared/sidecar-cache.ts (500 entries, 50MB max, lazy sweep)
- Modify redis.ts getCachedJson/setCachedJson to use dynamic import for
sidecar cache when LOCAL_API_MODE=tauri-sidecar (zero cost on Vercel Edge)
- Update tauri.conf.json beforeDevCommand to compile handlers
- Add gitignore pattern for compiled api/*/v1/[rpc].js
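A minimal TTL+LRU cache in the spirit of `sidecar-cache.ts`; the real module also enforces the 50MB byte budget and lazy sweeping, which this sketch omits:

```typescript
// Map insertion order doubles as the LRU order: get() re-inserts the
// entry, so the first key in iteration order is always least recent.
class TtlLruCache<V> {
  private map = new Map<string, { value: V; expiresAt: number }>();
  constructor(private maxEntries = 500) {}

  get(key: string): V | undefined {
    const entry = this.map.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.map.delete(key); // expired: drop lazily on read
      return undefined;
    }
    this.map.delete(key); // refresh recency
    this.map.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V, ttlMs: number): void {
    this.map.delete(key);
    this.map.set(key, { value, expiresAt: Date.now() + ttlMs });
    // Evict least-recently-used entries past the cap.
    while (this.map.size > this.maxEntries) {
      const oldest = this.map.keys().next().value as string;
      this.map.delete(oldest);
    }
  }
}
```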
* fix(desktop): gate premium panel fetches and open footer links in browser
Skip oref-sirens and telegram-intel HTTP requests on desktop when
WORLDMONITOR_API_KEY is not present. Use absolute URLs for footer
links on desktop so the Tauri external link handler opens them in
the system browser instead of navigating within the webview.
* fix(desktop): cloud proxy, bootstrap timeouts, and panel data fixes
- Set Origin header on cloud proxy requests (fixes 401 from API key validator)
- Strip If-None-Match/If-Modified-Since headers (fixes stale 304 responses)
- Add cloud-preferred routing for market/economic/news/infrastructure/research
- Enable cloud fallback via LOCAL_API_CLOUD_FALLBACK env var in main.rs
- Increase bootstrap timeouts on desktop (8s/12s vs 3s/5s) for sidecar proxy hops
- Force per-feed RSS fallback on desktop (server digest has fewer categories)
- Add finance feeds to commodity variant (client + server)
- Remove desktop diagnostics from ServiceStatusPanel (show cloud statuses only)
- Restore DeductionPanel CSS from PR #1162
- Deduplicate repeated sidecar error logs
Replace "WorldMonitor" with "World Monitor" in all user-facing display
text across blog posts, docs, layouts, structured data, footer, offline
page, and X-Title headers. Technical identifiers (User-Agent strings,
X-WorldMonitor-Key headers, @WorldMonitorApp handle, function names)
are preserved unchanged. Also adds anchors color to Mintlify docs config
to fix blue link color in dark mode.
* feat(intel): add country facts section and right-click context menu
Add a Country Facts section (expanded view only) showing head of state,
population, capital, languages, currencies, area, and Wikipedia summary
with thumbnail. Data sourced from RestCountries API and Wikidata/Wikipedia
with 24h server-side caching via cachedFetchJson.
Add right-click context menu on both DeckGL and Globe maps with "Open
Country Brief" and "Copy Coordinates" actions. Menu dismisses on click
outside or Escape, clamped to viewport bounds.
* fix(intel): address review findings for country facts and context menu
- Show noFacts state on RPC failure instead of leaving panel on "Loading..."
- Extract contextmenu handler to named bound method in DeckGLMap and
GlobeMap so it can be removed in destroy(), preventing listener leaks
on map mode switches
- Constrain Wikidata SPARQL to current head of state by filtering out
statements with an end date qualifier (pq:P582)
* fix(intel): scope Wikidata title to country's head-of-state office
Use pq:P1039 (position held) qualifier on the P35 statement instead of
querying the person's arbitrary offices via wdt:P39. This ensures the
title returned (e.g. "President") is the one associated with the
head-of-state role for the specific country, not an unrelated position.
* fix(i18n): translate country facts and context menu keys for all locales
* fix(intel): use correct Wikidata qualifier P39 for head-of-state office title
P1039 (subject has role) is never set on P35 statements. P39 (position
held) is the actual qualifier used, returning values like "president"
for US and "President of the French Republic" for FR.
Convert s3:// thumbnail URLs to https://<bucket>.s3.amazonaws.com/<key>
so they pass the img-src CSP directive. Replace inline onerror handler
with event delegation to avoid script-src CSP violation.
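The URL rewrite above is small enough to sketch directly; this assumes the standard virtual-hosted `<bucket>.s3.amazonaws.com` form and passes anything that is not an `s3://` URL through unchanged:

```typescript
// Convert s3://bucket/key to an https URL that satisfies img-src CSP.
function s3ToHttps(url: string): string {
  const match = /^s3:\/\/([^/]+)\/(.+)$/.exec(url);
  if (!match) return url; // already https (or unrecognized): pass through
  const [, bucket, key] = match;
  return `https://${bucket}.s3.amazonaws.com/${key}`;
}
```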
- Add AVIATIONSTACK and NOTAM proto enum values for accurate source attribution
- AviationStack flight data alerts now show "Flight Data" instead of "Computed"
- NOTAM closure/restriction alerts now show "NOTAM"
- Remove generateSimulatedDelay() fallback that produced fake random alerts
- Reduce all aviation cache TTLs from 2h to 30min for fresher data
- Reduce relay seed interval from 1h to 30min, TTL from 4h to 1h
- Reduce seed freshness threshold from 45min to 20min
- Update health check maxStaleMin from 90 to 60min
- Update all 21 locale files with new source labels
* fix(map): fix satellite imagery STAC backend and merge into Orbital Surveillance layer
The satellite imagery layer was broken because the backend fetched
catalog.json from Capella's S3 bucket which returns 404. Replaced with
Element 84's Earth Search STAC API (Sentinel-2 + Sentinel-1 data).
Also merged the separate Satellite Imagery layer into the existing
Orbital Surveillance layer since they are complementary features.
Adds bbox/datetime snapping for better cache hit rates.
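The bbox snapping could look like the sketch below: round the viewport outward to a coarse grid so nearby pans share one cache key. The 0.5-degree grid size is an assumption, not the value the handler actually uses:

```typescript
// Snap a [west, south, east, north] bbox outward to a grid so slightly
// different viewports produce identical STAC queries (and cache keys).
function snapBbox(
  bbox: [number, number, number, number],
  grid = 0.5,
): [number, number, number, number] {
  const [w, s, e, n] = bbox;
  return [
    Math.floor(w / grid) * grid, // expand west/south down...
    Math.floor(s / grid) * grid,
    Math.ceil(e / grid) * grid,  // ...and east/north up, never shrinking
    Math.ceil(n / grid) * grid,
  ];
}
```

Datetime snapping would follow the same idea, bucketing timestamps to (say) the hour.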
* fix: address PR review findings for satellite imagery merge
P1: Decouple imagery fetch from satellite TLE loading. Imagery
footprints now load asynchronously (fire-and-forget) so toggling
Orbital Surveillance stays fast.
P2: Migrate old satelliteImagery URL param to satellites so existing
shared links/bookmarks preserve overlay state.
P2: Map legacy source values (e.g. "capella") to all collections
instead of returning empty results.
* fix: only refresh imagery on viewport move if scenes already loaded
Prevents imagery API calls on every pan/zoom for users who only want
orbital tracking. Viewport imagery refresh only triggers after the
initial load has already populated scenes.
* fix: restore notamOverlay entries lost during rebase conflict resolution
* fix(health): fix riskScores seeding gap and seed-meta key mismatch
- Switch RPC handler to cachedFetchJsonWithMeta so stale key is refreshed
on every successful response (cache hit or miss), not just cache misses
- Fix seed-meta key mismatch: health.js and seed-health.js now check
seed-meta:risk:scores:sebuf (matching what cachedFetchJson writes)
- Add warm-ping loop in relay (8min interval) to keep RPC cache fresh
- Remove dead startCiiSeedLoop and 345 lines of unused CII seed code
* fix(scoring): await stale key write to prevent edge runtime drop
Edge/serverless runtimes may terminate the isolate before a
fire-and-forget Redis write completes. Await the setCachedJson
call so the stale key TTL is guaranteed to be extended.
* feat(map): merge NOTAM closures into Aviation layer, fix click popup
Consolidate the separate "NOTAM Closures" toggle into the "Aviation"
layer so users get a single checkbox for flight delays, NOTAM rings,
and aircraft positions.
- Remove notamOverlay from MapLayers, all variants, URL state, registry
- Render NOTAM rings under flights toggle in both DeckGL and Globe maps
- Wire notam-overlay-layer click to flight popup (was missing entirely)
- Broaden NOTAM detection: restrictions (RA/RO, TFR, danger areas)
render as major severity; closures remain severe
- Add restrictedIcaos to LoadedNotamResult for severity distinction
* fix(aviation): separate restriction NOTAMs from closures in all consumers
Restrictions (TFR, danger areas) were being added to closedIcaoCodes,
causing ops-summary to report them as full closures and CII scoring to
apply the closure penalty (+20 instead of +10).
- Keep closedIcaoCodes for real closures only, restrictedIcaoCodes separate
- Restrictions use delayType 'general' (not 'closure') so downstream code
(popup labels, globe rings, CII scoring) treats them correctly
- ops-summary now shows RESTRICTED flag instead of CLOSED for restrictions
- buildNotamAlert/mergeNotamWithExistingAlert accept delayType param
* feat(natural): add tropical cyclone tracking from NHC and GDACS
Integrate NHC ArcGIS REST API (15 storm slots across AT/EP/CP basins)
and GDACS TC field extraction to provide real-time tropical cyclone data
with forecast tracks, uncertainty cones, and historical track paths.
- Proto: add optional TC fields (storm_id, wind_kt, pressure_mb, etc.)
plus ForecastPoint, PastTrackPoint, CoordRing messages
- Server/seed: NHC two-pass query (forecast points then detail layers),
GDACS wind/pressure parsing, Saffir-Simpson classification, dedup
strategy (NHC > GDACS > EONET), pressureMb validation (850-1050),
advisory date with Number.isFinite guard
- Globe: dashed red forecast track, per-segment wind-colored past track,
semi-transparent orange forecast cone polygon
- Popup: TC details panel with color-coded category badge, wind/pressure
- Frontend mapper: forward all TC fields, convert CoordRing to number[][][]
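The Saffir-Simpson classification and the 850-1050 mb pressure guard above are simple threshold checks; these are the standard category thresholds (1-minute sustained wind in knots), with illustrative function names:

```typescript
// Saffir-Simpson category from sustained wind in knots.
function saffirSimpsonCategory(windKt: number): number {
  if (windKt >= 137) return 5;
  if (windKt >= 113) return 4;
  if (windKt >= 96) return 3;
  if (windKt >= 83) return 2;
  if (windKt >= 64) return 1;
  return 0; // below hurricane strength (tropical storm / depression)
}

// Reject obviously bogus central-pressure values before they reach the UI.
function isValidPressureMb(p: unknown): boolean {
  return typeof p === 'number' && Number.isFinite(p) && p >= 850 && p <= 1050;
}
```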
* fix(natural): improve GDACS dedup, NHC classification, and TC popup i18n
- GDACS dedup now checks name + geographic proximity instead of name-only
- NHC classification uses stormtype field for subtropical/post-tropical
- TC popup labels use t() for localization instead of hardcoded English
* feat(map): add cyclone-specific deck.gl layers for 2D map
- Storm center ScatterplotLayer with Saffir-Simpson wind coloring
- Past track PathLayer with per-segment wind-speed color ramp
- Forecast track PathLayer with dashed line via PathStyleExtension
- Cone PolygonLayer for forecast uncertainty visualization
- Tooltip and click routing for all new storm layer IDs
* fix(map): remove click routing for synthetic storm track/cone layers
Track and cone layers carry lightweight objects without full NaturalEvent
fields. Clicking them would pass incomplete data to the popup renderer.
Only storm-centers-layer (which holds the full NaturalEvent) routes to
the natEvent popup. Tracks and cones remain tooltip-only.
* fix(map): attach parent NaturalEvent to synthetic storm layers for clicks
Synthetic track/cone objects now carry _event reference to the parent
NaturalEvent. Click handler unwraps _event before passing to popup,
so clicking any storm element opens the full TC popup.
* feat(map): add NOTAM overlay + satellite imagery integration
NOTAM Overlay:
- Expand airport monitoring from MENA-only to 64 global airports
- Add ScatterplotLayer (55km red rings) on flat map for airspace closures
- Add CSS-pulsing ring markers on globe for closures
- Independent of flights layer toggle (works when flights OFF)
- Bump NOTAM cache key v1 to v2
Satellite Imagery:
- Add Capella SAR STAC catalog proxy at /api/imagery/v1
- SSRF protection via URL allowlist + bbox/datetime validation
- SatelliteImageryPanel with preview thumbnails and scene metadata
- PolygonLayer footprints on flat map with viewport-triggered search
- Polygon footprints on globe with "Search this area" button
- Full variant only, default disabled
Layer key propagation across all 23+ files including variants,
harnesses, registry, URL state, and renderer channels.
* fix(imagery): wire panel data flow, fix viewport race, add datetime filter
P1 fixes:
- Imagery scenes now flow through MapContainer.setOnImageryUpdate()
callback, making data available to both renderers and panel
- Add version guard to fetchImageryForViewport() preventing stale
responses from overwriting newer viewport data
- Wire SatelliteImageryPanel.update() and setOnSearchArea() in
panel-layout.ts (panel was previously unhooked)
- Globe mode "Search this area" fetches via MapContainer.getBbox()
P2 fix:
- search-imagery.ts now filters STAC items by datetime range when
the client provides the datetime parameter
Also:
- Add MapContainer.getBbox() for viewport-aware imagery fetching
- Add DeckGLMap.getBbox() public method
- Data-loader layer toggle triggers initial imagery fetch
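The version guard in the P1 fixes above is a standard pattern: each viewport fetch bumps a counter, and only the response matching the latest counter is applied. A minimal sketch, with illustrative names:

```typescript
// Wrap a loader so a slow, stale response can never overwrite the
// result of a newer viewport fetch.
function createViewportFetcher<T>(
  load: (bbox: string) => Promise<T>,
  apply: (scenes: T) => void,
) {
  let version = 0;
  return async function fetchForViewport(bbox: string): Promise<void> {
    const myVersion = ++version;
    const scenes = await load(bbox);
    if (myVersion === version) apply(scenes); // drop stale responses
  };
}
```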
* fix(imagery): complete source filter + fix date-only end bound
- Filter STAC items by constellation when source param is provided,
making the API contract match actual behavior
- Date-only end bounds (YYYY-MM-DD without T) now include the full
day (23:59:59.999Z) instead of only midnight
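The date-only end-bound fix above amounts to a one-line normalization; the function name is illustrative:

```typescript
// A bare YYYY-MM-DD end bound should cover the whole day, not just
// its first millisecond (midnight), which silently excluded same-day scenes.
function normalizeEndDatetime(end: string): string {
  if (/^\d{4}-\d{2}-\d{2}$/.test(end)) return `${end}T23:59:59.999Z`;
  return end; // full timestamps pass through unchanged
}
```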
- Sort bboxes by area (smallest first) so AE matches before SA for Dubai coords
- Explicit outage severity matching (no catch-all inflating unknown severities)
- Add 'united arab emirates' to AE keywords for ACLED/UCDP normalization
- Add CU/MX/BR/AE to client-side TIER1_NAMES (was showing raw ISO codes)
- Add UAE geo attribution test verifying bbox overlap resolution
Cuba is experiencing a severe humanitarian crisis (grid collapse, 20h+
blackouts, protests, UN collapse warning) but was completely absent from
CII because it was not in TIER1_COUNTRIES, CURATED_COUNTRIES, or any
server-side scoring config. Added with baseline risk 45, multiplier 2.0.
Move theaterPosture from SLOW (2h CDN) to FAST tier (20min/10min after
PR #1314) so military posture data stays fresh. Increase risk scores
breaker TTL to 30min to match health.js maxStaleMin, and reduce
localStorage staleness from 24h to 1h to prevent stale risk data in UI.