- Add IndexNow verification key file (public/a7f3e9d1b2c44e8f9a0b1c2d3e4f5a6b.txt)
- Update sitemap.xml lastmod to 2026-03-19 to signal freshness to crawlers
- Add lastmod dates to blog sitemap via @astrojs/sitemap serialize()
- Add scripts/seo-indexnow-submit.mjs to resubmit all 23 URLs to IndexNow
(run after deploy: node scripts/seo-indexnow-submit.mjs)
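A minimal sketch of such a submit script, following the public IndexNow protocol (HOST and the URL list are placeholders; the key matches the verification file committed above):

```javascript
// Sketch of a bulk IndexNow submission per the public IndexNow protocol.
// HOST and the submitted URLs are placeholders, not the real site values.
const HOST = 'example.com';
const KEY = 'a7f3e9d1b2c44e8f9a0b1c2d3e4f5a6b';

function buildIndexNowPayload(host, key, urls) {
  return {
    host,
    key,
    keyLocation: `https://${host}/${key}.txt`,
    urlList: urls,
  };
}

async function submitToIndexNow(urls) {
  const res = await fetch('https://api.indexnow.org/indexnow', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json; charset=utf-8' },
    body: JSON.stringify(buildIndexNowPayload(HOST, KEY, urls)),
  });
  // The spec treats both 200 and 202 as accepted.
  if (res.status !== 200 && res.status !== 202) {
    throw new Error(`IndexNow rejected submission: HTTP ${res.status}`);
  }
}
```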
* fix(gdelt): reduce topics 6→4 to cut 429 rate-limit pressure
Drops sanctions and intelligence topics (covered by other data sources).
Keeps military, cyber, nuclear, maritime as core high-signal topics.
Happy-path runtime drops from ~2.5min to ~1.5min. Worst-case retry
storm is now 3 gaps instead of 5, significantly reducing total backoff
duration per run on the 2h cron cycle.
Lowers validation threshold from ≥3 to ≥2 of 4 topics.
* fix(gdelt): reduce cron to 4h and extend TTL to 6h
Data from GDELT's 24h window doesn't turn over fast enough to justify
2h polling. Switching to 4h halves Railway runs (12/day → 6/day) and
doubles the cooldown between IP hits, reducing 429 pressure.
TTL bumped 4h→6h so cached data outlives the 4h cron gap.
health.js maxStaleMin 200→300 (5h, warning window before 6h expiry).
seed-health.js intervalMin 100→150 (150×2=300 = maxStaleMin).
Railway cron schedule needs updating to: 0 */4 * * *
* fix(gdelt): sync UI topic list with seed and require all 4 topics for write
P1: Remove sanctions and intelligence from INTEL_TOPICS in
src/services/gdelt-intel.ts so the panel no longer renders tabs that
can never be hydrated from the 4-topic Redis payload. Tabs now match
the seed: military, cyber, nuclear, maritime.
P2: Raise validation threshold back to ≥4 (all topics required).
With ≥2 a partial run would overwrite the complete snapshot with
incomplete data, making missing tabs show blank panels until the next
full run. Requiring all 4 means a partial run extends the existing
TTL instead of replacing good data with bad.
Audit revealed 6 mismatches where data TTLs or maxStaleMin thresholds were
too tight relative to cron intervals, causing spurious STALE_SEED warnings.
gdeltIntel: maxStaleMin 180 → 240 (1h cron now has 4x buffer vs 3x)
securityAdvisories: TTL 70m → 120m, maxStaleMin 90 → 120 (2x cron buffer)
techEvents: TTL 360m → 480m, maxStaleMin 420 → 480 (1h buffer for 6h cron)
forecasts: TTL 80m → 105m (outlives maxStaleMin:90 when cron is delayed)
correlationCards: TTL 10m → 20m (data outlives maxStaleMin:15)
usniFleet: maxStaleMin 420 → 480 (extra buffer for relay-seeded key)
* fix(health): prevent cableHealth EMPTY/CRIT when NGA has no active warnings
When the NGA broadcast-warn API returns 0 active warnings, computeHealthMap
produces an empty cables object. The handler wrote recordCount:0 to seed-meta,
causing health.js to report EMPTY (CRIT) even though the feed is operational.
Zero cable disruptions is a valid healthy state, not missing data.
Write Math.max(count, 1) to seed-meta recordCount so health.js only fires
CRIT if the feed is genuinely broken (no seed-meta at all), not when NGA
reports no disruptions.
* fix(health): fix cableHealth EMPTY/CRIT — TTL shorter than warm-ping interval
Root cause: CACHE_TTL was 600s (10 min) but the relay warm-ping runs every
30 min. The cable-health-v1 Redis key expired 20 min before the next ping,
causing health.js to see a missing key → EMPTY → CRIT for 20 of every 30 min.
Fix: increase CACHE_TTL to 3600s (1h) to match health.js maxStaleMin:60 and
outlive the warm-ping interval with margin.
Also reverts the earlier incorrect Math.max(count,1) change — health.js reads
the cached payload directly, not meta.recordCount.
* fix(health): fix spending EMPTY/CRIT — TTL matched seed interval exactly
SPENDING_CACHE_TTL was 3600s = SPENDING_SEED_INTERVAL_MS (1h). At exact
equality the key expires the moment the next seed runs, causing a window
where health.js sees EMPTY → CRIT. Double the TTL to 2h so the key always
outlives the seeder.
* fix(health): fix weather CACHE_TTL matching seed interval exactly
WEATHER_CACHE_TTL was 900s = WEATHER_SEED_INTERVAL_MS (15 min). Same
TTL=interval race as spending: key can expire at the exact moment the
seeder fires, leaving a window where health.js sees EMPTY. Double TTL
to 1800s (30 min) to eliminate the race.
* fix(health): fix 5 remaining TTL < 2x interval races in ais-relay
Ensure every Redis key TTL is at least 2× its seeder interval so a single
slow/missed cycle never causes EMPTY/CRIT flapping:
- USNI: 7h → 12h (interval 6h, was 1.17x)
- THEATER_POSTURE: 15m → 20m (interval 10m, was 1.5x)
- CYBER: 3h → 4h (interval 2h, was 1.5x)
- CHOKEPOINT_TRANSIT: 15m → 20m (interval 10m, was 1.5x)
- TRANSIT_SUMMARY: 15m → 20m (interval 10m, was 1.5x)
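The rule these changes enforce can be sketched as a small audit helper (names and config shapes are illustrative, not the actual relay code):

```javascript
// Invariant: every Redis key's TTL must be at least 2x its seeder
// interval, so a single slow or missed cycle never lets the key expire.
const MIN_TTL_FACTOR = 2;

function hasSafeTtl(ttlMin, intervalMin) {
  return ttlMin >= MIN_TTL_FACTOR * intervalMin;
}

// Returns the names of keys whose TTLs are still too tight.
function auditTtls(configs) {
  return Object.entries(configs)
    .filter(([, c]) => !hasSafeTtl(c.ttlMin, c.intervalMin))
    .map(([name]) => name);
}
```

For example, the pre-fix USNI config (TTL 420m, interval 360m) fails the check, while the post-fix 720m passes.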
* test: update transit-summaries TTL assertion to match new 2x minimum rule
* fix(posture): compute vessel counts server-side from AIS stream
trackedVessels was hardcoded to 0 in seedTheaterPosture(). The relay
has live AIS data in the vessels Map but never used it for posture.
Now counts military vessels (shipType 35/50-59, naval prefixes, MMSI
patterns) within each theater's bounds using the same identification
logic as isLikelyMilitaryCandidate(). Vessels seen in the last 6h
contribute to both the vessel count and the combined posture level.
This ensures the bootstrap theater posture data shows accurate vessel
presence regardless of whether the client has AIS toggled on.
* perf(posture): use candidateReports instead of iterating all vessels
candidateReports is already pre-filtered for military candidates on
AIS message arrival. No need to re-apply isLikelyMilitaryCandidate
on every vessel for every theater. Reduces ~90k function calls to a
simple bounds check on the candidate set.
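A sketch of the resulting per-theater count (the report and bounds shapes here are assumed for illustration; the relay uses its own structures):

```javascript
// Count pre-filtered military candidates seen within the age window and
// inside a theater's bounding box. candidateReports is already filtered
// on AIS arrival, so this loop is only a bounds + freshness check.
function countRecentInBounds(candidateReports, bounds, nowMs, maxAgeMs = 6 * 3600e3) {
  let count = 0;
  for (const r of candidateReports) {
    if (nowMs - r.lastSeen > maxAgeMs) continue; // stale track, skip
    if (r.lat < bounds.minLat || r.lat > bounds.maxLat) continue;
    if (r.lon < bounds.minLon || r.lon > bounds.maxLon) continue;
    count++;
  }
  return count;
}
```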
* fix(posture): address vessel count issues in theater posture
1. Remove early exit on flights.length === 0 so vessel-only scenarios
still seed posture (P1 — Codex review comment)
2. Add isStrictMilitaryVessel() to filter candidateReports to shipType
35/55 and named naval vessels only — drops tugs, pilot boats, SAR
craft (shipType 50-59) that inflate counts in maritime theaters
3. Cap vessel contribution at floor(elevated_threshold / 2) to prevent
naval traffic from dominating flight-calibrated posture thresholds
4. Update seed-meta recordCount and log to include vessel counts
* fix(health): increase gdeltIntel maxStaleMin from 120 to 180
Seeder runs every ~120min but threshold was exactly 120min, causing
STALE/CRIT flapping whenever there's any timing jitter. 180min gives
a 60min buffer to prevent oscillation.
* fix(health): increase gdeltIntel CACHE_TTL to 4h to match maxStaleMin:180
CACHE_TTL was 7200s (120min) while health.js maxStaleMin was raised to
180min. When a seed run is delayed past 120min the data key expires,
health evaluates EMPTY/CRIT before the stale check can ever fire, making
the 60min warning buffer unreachable. Setting TTL to 14400s (240min)
ensures the key outlives the stale threshold so STALE_SEED triggers
before EMPTY/CRIT on delayed runs.
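The ordering constraint behind both gdeltIntel fixes can be sketched as a simplified model of the health evaluation (names are illustrative):

```javascript
// Health can only report STALE while the data key still exists in Redis;
// once the key's TTL lapses the check short-circuits to EMPTY/CRIT.
// Keeping ttlMin above maxStaleMin makes the warning window reachable.
function healthState(ageMin, ttlMin, maxStaleMin) {
  if (ageMin >= ttlMin) return 'EMPTY';     // key expired -> CRIT
  if (ageMin > maxStaleMin) return 'STALE'; // warning fires first
  return 'OK';
}
```

With the old 120min TTL a run delayed to 200min reports EMPTY; with 240min it reports STALE as intended.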
* feat(forecast): cluster situations in world state
* feat(forecast): add report continuity history
* fix(forecast): stabilize report continuity matching
* feat(widgets): add Exa web search + fix widget API endpoints
- Replace Tavily with Exa as primary stock-news search provider
(Exa → Brave → SerpAPI → Google News RSS cascade)
- Add search_web tool to widget agent so AI can fetch live data
about any topic beyond the pre-defined RPC catalog
- Exa primary (type:auto + content snippets), Brave fallback
- Fix all widget tool endpoints: /rpc/... paths were hitting
Vercel catch-all and returning SPA HTML instead of JSON data
- Fix wm-widget-shell min-height causing fixed-size border that
clipped AI widget content
- Add HTML response guard in tool handler
- Update env key: TAVILY_API_KEYS → EXA_API_KEYS throughout
* fix(stock-news): use type 'neural' for Exa search (type 'news' is invalid)
* feat(forecast): add actor continuity to world state
* fix(forecast): report full actor continuity counts
* feat(forecast): add branch continuity to world state
* fix(railway): detect and fix wrong startCommand on seed services
All seed services use rootDirectory="scripts" in Railway, so the
correct startCommand is `node seed-<name>.mjs` (no scripts/ prefix).
A startCommand like `node scripts/seed-radiation-watch.mjs` causes
MODULE_NOT_FOUND at runtime because it resolves to
scripts/scripts/seed-radiation-watch.mjs which does not exist.
Extends railway-set-watch-paths.mjs to:
- Read startCommand alongside watchPatterns per service instance
- Validate it matches the expected `node <name>.mjs` form
- Fix it in the same serviceInstanceUpdate mutation if wrong
Run `node scripts/railway-set-watch-paths.mjs` to repair any affected
services (seed-radiation-watch, seed-climate-anomalies, seed-sanctions-pressure).
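The validation added to railway-set-watch-paths.mjs amounts to a check like this (function names are illustrative):

```javascript
// With rootDirectory="scripts", Railway already runs from inside scripts/,
// so the startCommand must not repeat the prefix.
function expectedStartCommand(serviceName) {
  return `node ${serviceName}.mjs`;
}

function needsRepair(serviceName, startCommand) {
  return startCommand !== expectedStartCommand(serviceName);
}
```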
* fix(railway): filter serviceInstances by target environment
The query used serviceInstances(first: 1) without filtering by
environment, so edges[0] could return a different environment's
config than the one being updated. Now passes envId to match the
mutation target.
* feat(widgets): add PRO interactive widgets via iframe srcdoc
Introduces a PRO tier for AI-generated widgets that supports full JS
execution (Chart.js, sortable tables, animated counters) via sandboxed
iframes — no Docker, no build step required.
Key design decisions:
- Server returns <body> + inline <script> only; client builds the full
<!DOCTYPE html> skeleton with CSP guaranteed as the first <head> child
so the AI can never inject or bypass the security policy
- sandbox="allow-scripts" only — no allow-same-origin, no allow-forms
- PRO HTML stored in separate wm-pro-html-{id} localStorage key to
isolate 80KB quota pressure from the main widget metadata array
- Raw localStorage.setItem() for PRO writes with HTML-first write order
and metadata rollback on failure (bypasses saveToStorage which swallows
QuotaExceededError)
- Separate PRO_WIDGET_KEY env var + x-pro-key header gate on Railway
- Separate rate limit bucket (20/hr PRO vs 10/hr basic)
- Claude Sonnet 4.6 (8192 tokens, 10 turns, 120s) for PRO vs Haiku for
basic; health endpoint exposes proKeyConfigured for modal preflight
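A sketch of the client-side shell assembly (the CSP directives shown are illustrative, not the shipped policy):

```javascript
// The server returns only bodyHtml (including its inline <script>); the
// client owns the document skeleton, so the CSP meta tag is always the
// first <head> child and AI output can never precede or replace it.
function buildProSrcdoc(bodyHtml, csp) {
  return [
    '<!DOCTYPE html><html><head>',
    `<meta http-equiv="Content-Security-Policy" content="${csp}">`,
    '</head><body>',
    bodyHtml,
    '</body></html>',
  ].join('');
}

// Applied to the iframe with sandbox="allow-scripts" only — no
// allow-same-origin, no allow-forms.
```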
* feat(pro): gate finance panels and widget buttons behind wm-pro-key
The PRO localStorage key now unlocks the three previously desktop-only
finance panels (stock-analysis, stock-backtest, daily-market-brief) on
the web variant, giving PRO users access without needing WORLDMONITOR_API_KEY.
Button visibility is now cleanly separated by key:
- wm-widget-key only → basic "Create with AI" button
- wm-pro-key only → PRO "Create Interactive" button only
- both keys → both buttons
- no key → neither button
Widget boot loader also accepts either key so PRO-only users see their
saved interactive widgets on page load.
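The matrix above reduces to two independent predicates, sketched here (names are illustrative):

```javascript
// Each button's visibility depends only on its own key.
function visibleButtons({ hasWidgetKey, hasProKey }) {
  return {
    createWithAI: hasWidgetKey,   // basic builder
    createInteractive: hasProKey, // PRO interactive builder
  };
}

// The boot loader accepts either key so PRO-only users still see their
// saved interactive widgets on page load.
function canBootWidgets({ hasWidgetKey, hasProKey }) {
  return hasWidgetKey || hasProKey;
}
```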
* fix(widgets): inject Chart.js CDN into PRO iframe shell so new Chart() is defined
* feat(health): restore seed-meta tracking for riskScores, serviceStatuses, cableHealth, chokepoints
These 4 keys were reporting STALE_SEED / going untracked because their
warm-ping loops never wrote seed-meta. PR #1649 removed seed-meta from
cachedFetchJson but no replacement tracking was added, so health.js
lost visibility into their freshness.
Changes:
- ais-relay.cjs: seedCiiWarmPing() writes seed-meta:intelligence:risk-scores after success
- ais-relay.cjs: seedServiceStatuses() writes seed-meta:infra:service-statuses after success
- ais-relay.cjs: new startChokepointWarmPingLoop() — 30 min warm-ping for supply_chain:chokepoints:v4
- ais-relay.cjs: new startCableHealthWarmPingLoop() — 30 min warm-ping + seed-meta:cable-health write
- get-cable-health.ts: switch to cachedFetchJsonWithMeta, write seed-meta:cable-health on fresh fetch
- api/health.js: re-add SEED_META entries for serviceStatuses (30 min), cableHealth (60 min), riskScores (15 min)
- api/health.js: remove riskScores, serviceStatuses, cableHealth from ON_DEMAND_KEYS — they now have proper freshness tracking
* fix(health): only write seed-meta on genuinely fresh data (P1 review fixes)
Fixes two P1 issues from PR review:
1. seedCableHealthWarmPing() was writing seed-meta:cable-health after
any 200 response, defeating the source==='fresh' guard already present
in getCableHealth(). Removed the relay write — the handler owns it.
2. seedServiceStatuses() was writing seed-meta:infra:service-statuses
after any 200, but listServiceStatuses() can return in-memory fallback
statuses on upstream scrape failures with a 200. The relay write was
advancing fetchedAt even when stale fallback data was returned.
Fix: switch handler to cachedFetchJsonWithMeta and only write seed-meta
when source==='fresh' (i.e. upstream status pages were actually scraped).
Removed the relay write entirely.
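The fresh-only guard boils down to deriving the seed-meta write from the fetch source (the result shape of cachedFetchJsonWithMeta is assumed from the description above):

```javascript
// Returns the seed-meta payload to write, or null when the result came
// from cache or an in-memory fallback — in which case fetchedAt must not
// advance, so health still sees the real staleness.
function seedMetaUpdate(result) {
  if (result.source !== 'fresh') return null;
  return { fetchedAt: Date.now(), recordCount: result.data.length };
}
```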
* fix(health): only write risk-scores seed-meta when data is present
The CII warm-ping wrote seed-meta after any 200 response, but the RPC
can return cached/stale fallback data with 0 scores during upstream
outages. This masked staleness in health checks. Now only writes
seed-meta when count > 0 (meaningful data received).
* fix(seeds): extend WB key TTLs in relay drop guard
The ais-relay seedWorldBank() percentage-drop guard returned early
without extending TTLs. During persistent partial WB outages, this
would let all 6 keys (3 data + 3 seed-meta) expire after 7 days.
Now extends TTLs on all WB keys before returning, matching the
standalone seed-wb-indicators.mjs behavior.
* fix(seeds): check upstashExpire return values in WB drop guard
upstashExpire returns false on failure (never throws), so the prior
code always logged success. Now checks all 6 return values and logs
partial failure count if any EXPIRE calls fail.
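Since upstashExpire resolves to false instead of throwing, the guard has to inspect results explicitly — roughly (names illustrative):

```javascript
// Count EXPIRE calls that reported failure (upstashExpire returns false
// on failure, never throws).
function countExpireFailures(results) {
  return results.filter((ok) => ok !== true).length;
}

async function extendAllTtls(upstashExpire, keys, ttlSec) {
  const results = await Promise.all(keys.map((k) => upstashExpire(k, ttlSec)));
  const failed = countExpireFailures(results);
  if (failed > 0) {
    console.warn(`[wb] TTL extension partial failure: ${failed}/${keys.length} EXPIRE calls failed`);
  }
  return failed;
}
```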
* fix(ucdp): never overwrite existing data with empty results
The standalone UCDP seed was writing 0 events to Redis when the API
returned empty, overwriting the last known good data. Health then
reported EMPTY_DATA CRIT even though valid data existed before.
Now extends TTL on both the data key and seed-meta when 0 events
are produced, preserving the last good payload until the next
successful fetch.
* fix(ucdp): verify EXPIRE responses before logging success
Check HTTP status of both EXPIRE calls. Log warnings on failure
instead of always claiming TTL was extended.
Health semantics:
- Add faaDelays + gpsjam to EMPTY_DATA_OK_KEYS (0 records = calm, not error)
- Fix EMPTY_DATA_OK_KEYS branch to still check seed-meta freshness
(prevents stale empty caches from staying green indefinitely)
Seed guards:
- seed-airport-delays: fix meta key in fetch-failure path
(seed-meta:aviation:delays → seed-meta:aviation:faa + seed-meta:aviation:notam)
- seed-military-flights: add full TTL extension on zero-flights branch
(was exiting without preserving any derived data TTLs)
- seed-wb-indicators: add percentage-drop guard (new count < 50% of cached
= likely partial API failure, extend TTL instead of overwriting)
- ais-relay.cjs: same percentage-drop guard for WB dual writer
Codex-reviewed plan (5 rounds, approved).
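The percentage-drop guard reduces to a single predicate (the 50% threshold comes from the description above; the name is illustrative):

```javascript
// A new count below half of the cached count is treated as a likely
// partial API failure: extend TTLs instead of overwriting good data.
function looksLikePartialFailure(newCount, cachedCount) {
  if (cachedCount === 0) return false; // no baseline to protect
  return newCount < cachedCount * 0.5;
}
```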
- Strip enrichmentMeta from bootstrap.js forecasts payload (seed-internal, not for clients)
- Rename quietDomainBonus → priorityDomainBonus (it applies to priority domains, not quiet ones)
- Extract cyber score formula magic numbers into named constants (CYBER_SCORE_TYPE_MULTIPLIER, etc.)
- Pre-compute analysisPriority in rankForecastsForAnalysis to avoid double-call per comparison
- Log when filterPublishedForecasts weak-fallback gate suppresses forecasts
- Log how many fallback narratives populateFallbackNarratives applies
- Add // penalties comment header in computeAnalysisPriority
* feat(thermal): add thermal escalation seeded service
Cherry-picked from codex/thermal-escalation-phase1 and retargeted
to main. Includes thermal escalation seed script, RPC handler,
proto definitions, bootstrap/health/seed-health wiring, gateway
cache tier, client service, and tests.
* fix(thermal): wire data-loader, fix typing, recalculate summary
Wire fetchThermalEscalations into data-loader.ts with panel forwarding,
freshness tracking, and variant gating. Fix seed-health intervalMin from
90 to 180 to match 3h TTL. Replace 8 as-any casts with typed interface.
Recalculate summary counts after maxItems slice.
* fix(thermal): enforce maxItems on hydrated data + fix bootstrap keys
Codex P2: hydration branch now slices clusters to maxItems before
mapping, matching the RPC fallback behavior.
Also add thermalEscalation to bootstrap.js BOOTSTRAP_CACHE_KEYS and
SLOW_KEYS (was lost during conflict resolution).
* fix(thermal): recalculate summary on sliced hydrated clusters
When maxItems truncates the cluster array from bootstrap hydration,
the summary was still using the original full-set counts. Now
recalculates clusterCount, elevatedCount, spikeCount, etc. on the
sliced array, matching the handler's behavior.
seed-sanctions-pressure.mjs imports fast-xml-parser to parse OFAC SDN
XML feeds, but the package was never added to scripts/package.json.
Railway deploys crash with ERR_MODULE_NOT_FOUND on startup.
* feat(sanctions): add OFAC sanctions pressure intelligence
* fix(sanctions): strip _state from API response, fix code/name alignment, cap seed limit
- trimResponse now destructures _state before spreading to prevent seed
internals leaking to API clients during the atomicPublish→afterPublish window
- buildLocationMap and extractPartyCountries now sort (code, name) as aligned
pairs instead of calling uniqueSorted independently on each array; fixes
code↔name mispairing for OFAC-specific codes like XC (Crimea) where
alphabetic order of codes and names diverges
- DEFAULT_RECENT_LIMIT reduced from 120 to 60 to match MAX_ITEMS_LIMIT so
seeded entries beyond the handler cap are not written unnecessarily
- Add tests/sanctions-pressure.test.mjs covering all three invariants
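The aligned-pair fix is roughly (a sketch; the real helpers live in the sanctions seed):

```javascript
// Sort (code, name) as tuples so each code keeps its own name. Sorting
// the two arrays independently mispairs entries like XC (Crimea), whose
// alphabetic position differs between the code and name orderings.
function sortAlignedPairs(codes, names) {
  const pairs = codes.map((code, i) => [code, names[i]]);
  pairs.sort((a, b) => a[0].localeCompare(b[0]));
  return {
    codes: pairs.map(([code]) => code),
    names: pairs.map(([, name]) => name),
  };
}
```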
* fix(sanctions): register sanctions:pressure:v1 in health.js BOOTSTRAP_KEYS and SEED_META
Adds sanctionsPressure to health.js so the health endpoint monitors the
seeded key for emptiness (CRIT) and freshness via seed-meta:sanctions:pressure
(maxStaleMin: 720 matches 12h seed TTL). Without this, health was blind to
stale or missing sanctions data.
When upstream APIs fail and seeds extend existing data key TTLs, the
seed-meta key was left untouched. Health checks use seed-meta fetchedAt
to determine staleness, so preserved data still triggered STALE_SEED
warnings even though the data was valid.
Now all TTL extension paths include the corresponding seed-meta key:
- _seed-utils.mjs runSeed() (fetch failure + validation skip)
- fetch-gpsjam.mjs (Wingbits 500 fallback)
- seed-airport-delays.mjs (FAA fetch failure)
- seed-military-flights.mjs (OpenSky fetch failure)
- seed-service-statuses.mjs (RPC fetch failure)
* fix(ucdp): add page error logging, page-0 fallback, and TTL extension on empty
Three resilience improvements for UCDP seed loop:
1. Log actual error messages on page fetch failures instead of silently
swallowing them. Enables diagnosing API outages vs rate limits.
2. Fall back to page 0 data when all newest-page fetches fail. Page 0
is already fetched during version discovery, so this is free. Provides
partial (older) data instead of writing 0 events.
3. When 0 events remain after processing, extend existing Redis key TTL
instead of overwriting with empty payload. Preserves stale-but-valid
data for the next cycle rather than causing EMPTY_DATA CRIT in health.
* fix(ucdp): remove page-0 fallback, stop seed-meta on failed fetches
P1 fixes from review:
- Remove page-0 fallback that overwrote last known good cache with
stale historical data. Extend existing key TTL instead.
- Stop writing fresh seed-meta timestamps when no new payload is
written (both all-pages-failed and empty-after-filtering branches).
Health checks should reflect actual data freshness, not failed attempts.
Add 6 targeted source-analysis tests verifying:
- Error logging on page failures
- No page-0 data injection
- TTL extension on failure branches
- seed-meta only written on successful publish