- Add World Bank seed loop to ais-relay (24h interval) for techReadiness,
progressData, and renewableEnergy — previously manual-only with no cron
- Add seed-meta writes for UCDP and GPSJAM relay loops so health endpoint
can track freshness (was always showing STALE/unknown)
- Switch theater posture and service statuses RPC URLs from worldmonitor.app
to api.worldmonitor.app to bypass Cloudflare bot protection (403)
- Adjust UCDP maxStaleMin from 60 to 420 to match 6h relay interval
Railway rootDirectory isolates build context — postinstall cp from
../shared/ fails because parent dirs aren't in the Nixpacks image.
Commit JSON/CJS configs directly into scripts/shared/.
- Remove useless postinstall from scripts/package.json
- Remove scripts/shared/ from .gitignore
- Commit all shared config files into scripts/shared/
- Add sync test to catch drift between shared/ and scripts/shared/
Railway deploys seed services with rootDirectory=scripts/, placing files
at /app/ without the parent shared/ directory. The createRequire +
require('../shared/X.json') pattern resolves to /shared/ which doesn't
exist in the container.
- Add loadSharedConfig() to _seed-utils.mjs: tries ../shared/ (local)
then ./shared/ (Railway) with clear error on miss
- Add requireShared() to ais-relay.cjs with same dual-path fallback
- Add postinstall to scripts/package.json that copies ../shared/ into
./shared/ during Railway build
- Update all 6 seed scripts to use loadSharedConfig instead of
createRequire + require
- Add scripts/shared/ to .gitignore
Fixes crash introduced by #1212 (shared JSON consolidation).
* fix(aviation): unify NOTAM status logic between map and ops table
Both endpoints now use a shared NOTAM loader (seed-first with live
fallback) so they see the same closure snapshot. When an airport has
a NOTAM *and* real flight data, the new mergeNotamWithExistingAlert()
preserves observed stats instead of hard-replacing with severe/closure.
This fixes DXB showing NORMAL in the ops table but SEVERE on the map.
- Extract loadNotamClosures() to _shared.ts (used by both handlers)
- Add mergeNotamWithExistingAlert() for NOTAM + flight data merge
- Align ops-summary severity: NOTAM floor = moderate (operating) or
severe (no flights), matching the map's merge logic
- Fix OMAD → OMAA ICAO typo in seed MENA list (AUH)
* fix: resolve strict tsconfig type errors in API build
* fix(aviation): preserve closure delayType for NOTAM-closed airports
Downstream consumers (MapPopup, data-loader, country-instability) rely
on delayType === 'closure' as the only closure signal. Always set
delayType to closure in mergeNotamWithExistingAlert() since the NOTAM
confirms the airport is closed — severity is still nuanced by flight data.
* fix: eliminate frontend external API calls, enforce gold standard pattern
- Polymarket: remove browser fan-out (536→105 lines), bootstrap → RPC only
- USASpending: remove direct API calls, read from bootstrap hydration
- NWS Weather: remove direct API calls, read from bootstrap hydration
- Nominatim: proxy through api/reverse-geocode.js with Redis cache + SSRF clamping
- Add seed scripts for weather alerts (15min) and spending (60min)
- Wire both seed loops into ais-relay.cjs
- Register weatherAlerts + spending in bootstrap.js and health.js
- Add 4 missing standalone keys to health.js (cyberThreatsRpc, militaryBases, temporalAnomalies, displacement)
* fix: resolve reload regressions and null-cache poisoning from #1217
- Weather/Spending: fall back to `/api/bootstrap?keys=` on scheduled
reloads after the one-shot `getHydratedData()` is consumed
- Prediction: add client-side bootstrap filter for country markets
when RPC fails (server skips bootstrap for query-based requests)
- Reverse-geocode: restore abort/timeout guard so transient network
errors don't permanently poison the in-memory cache
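The poisoning guard reduces to: cache successes only, and bound the fetch with an abort timeout. A hedged sketch assuming a plain in-memory Map keyed by coordinates; the real handler's cache and signatures may differ.

```javascript
const cache = new Map();

async function reverseGeocode(key, doFetch, timeoutMs = 5000) {
  if (cache.has(key)) return cache.get(key);
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), timeoutMs);
  try {
    const result = await doFetch(ctrl.signal);
    cache.set(key, result); // cache successes only
    return result;
  } catch {
    // Transient failure: return a fallback WITHOUT writing it to the cache,
    // so the next call retries instead of serving a poisoned entry.
    return null;
  } finally {
    clearTimeout(timer);
  }
}
```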
* feat(infrastructure): expand submarine cables to 86 via TeleGeography API seed
- Add `seed-submarine-cables.mjs` Railway cron script fetching 86 strategic
cables from TeleGeography API (was 19 hand-curated)
- Update `geo.ts` static baseline with full cable data (routes, landing points,
owners, RFS year, regions)
- Update `get-cable-health.ts` cable name/landing mappings for new slug-based IDs
- Add `data?.cables?.length` to `_seed-utils.mjs` record count heuristic
- Update `map-harness.ts` cable ID references
- Remove GitHub Actions workflows for UCDP and WB indicators (Railway cron only)
* fix(infrastructure): cable route matching, name false positives, validation threshold
- Fix route geometry: only strip numeric suffix when result matches a known
cable slug, preventing seamewe-6→seamewe, farice-1→farice, etc.
- Fix name matching: use word-boundary regex instead of substring includes;
disambiguate short names (ACE→ACE CABLE, SAFE→SAFE CABLE, PEACE→PEACE CABLE,
TEAMS→TEAMS CABLE) to prevent false matches on common NGA words
- Raise validation threshold from 50 to 75 (88% success required) to prevent
heavily partial upstream results from overwriting good cached data
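The word-boundary matching with short-name disambiguation can be sketched as below. The disambiguation entries come from the message; the helper name and regex details are illustrative.

```javascript
// Short names that false-match inside common words get an expanded form.
const DISAMBIGUATE = {
  ACE: 'ACE CABLE',
  SAFE: 'SAFE CABLE',
  PEACE: 'PEACE CABLE',
  TEAMS: 'TEAMS CABLE',
};

function cableNameMatches(cableName, text) {
  const needle = DISAMBIGUATE[cableName.toUpperCase()] ?? cableName;
  // Escape regex metacharacters, then require word boundaries so "ACE"
  // no longer matches inside words like "surface" or "place".
  const escaped = needle.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  return new RegExp(`\\b${escaped}\\b`, 'i').test(text);
}
```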
* fix(infrastructure): tie validation threshold to 90% of configured cable count
Dynamic threshold based on CABLE_REGIONS length instead of a hardcoded number.
Currently requires >= 78 of 86 cables (90%).
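A minimal sketch of the dynamic threshold; the 90% ratio and the 86-cable count come from the message, the function names are illustrative:

```javascript
// Derive the minimum accepted cable count from the configured list length
// instead of a hardcoded number.
function minValidCables(configuredCount, ratio = 0.9) {
  return Math.ceil(configuredCount * ratio);
}

// Reject heavily partial upstream results before they overwrite cached data.
function seedIsValid(fetchedCount, configuredCount) {
  return fetchedCount >= minValidCables(configuredCount);
}

// With 86 configured cables: minValidCables(86) === 78
```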
Adding a new item (crypto, ETF, stablecoin, gulf symbol, etc.) previously
required editing 2-4 files because the same list was hardcoded independently
in seed scripts, RPC handlers, and frontend config. Following the proven
shared/crypto.json pattern, extract 6 new shared JSON configs so each list
has a single source of truth.
New shared configs:
- shared/stablecoins.json (ids + coinpaprika mappings)
- shared/etfs.json (BTC spot ETF tickers + issuers)
- shared/gulf.json (GCC indices, currencies, oil benchmarks)
- shared/sectors.json (sector ETF symbols + names)
- shared/commodities.json (VIX, gold, oil, gas, silver, copper)
- shared/stocks.json (market symbols + yahoo-only set)
All seed scripts, RPC handlers, and frontend config now import from
these shared JSON files instead of maintaining independent copies.
- Use resolved-flights-only denominator (landed+active+cancelled+diverted)
instead of all flights including scheduled/unknown. DXB was showing 15%
cancelled (NORMAL) when the real rate among resolved flights is ~58% (MAJOR).
- Add flight_date=today filter to AviationStack API calls to avoid mixing
historical/future flights into today's cancellation stats.
- Factor cancellation rate into ops summary table severity (was ignored,
only delay minutes were considered). Uses shared severityFromCancelRate()
to avoid threshold duplication.
- Add minimum resolved threshold (>=10) before using resolved denominator
to prevent extreme percentages from tiny samples.
- Add 12 major airports to AviationStack monitoring: YVR, SCL, DUB, LIS,
ATH, WAW, CAN, TPE, MNL, AMM, KWI, CMN (40→52 airports).
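The resolved-denominator rule can be sketched as below. The >= 10 minimum comes from the message; falling back to the all-flights denominator for tiny samples is an assumption, and the real severityFromCancelRate() cutoffs are not shown here.

```javascript
const MIN_RESOLVED = 10;

// Cancellation rate over flights with a known outcome, not all flights.
function cancelRate(counts) {
  const resolved =
    (counts.landed ?? 0) + (counts.active ?? 0) +
    (counts.cancelled ?? 0) + (counts.diverted ?? 0);
  // Tiny resolved samples produce extreme percentages; fall back to total.
  const denom = resolved >= MIN_RESOLVED ? resolved : counts.total;
  return denom ? (counts.cancelled ?? 0) / denom : 0;
}
```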
* fix: three panel issues — Tech Readiness toggle, Crypto top 10, FIRMS key check
1. #1132 — Add tech-readiness to FULL_PANELS so it appears in the
Settings toggle list for Full/Geopolitical variant users.
2. #979 — Expand crypto panel from 4 coins to top 10 by market cap
(BTC, ETH, USDT, BNB, SOL, XRP, USDC, ADA, DOGE, TRX) across
client config, server metadata, CoinPaprika fallback map, and
seed script.
3. #997 — Check isFeatureAvailable('nasaFirms') before loading FIRMS
data. When the API key is missing, show a clear "not configured"
message instead of the generic "No fire data available".
Closes #1132, closes #979, closes #997
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: replace stablecoins with AVAX/LINK, remove duplicate key, revert FIRMS change
- Replace USDT/USDC (stablecoins pegged ~$1) with AVAX and LINK
- Remove duplicate 'usd-coin' key in COINPAPRIKA_ID_MAP
- Add CoinPaprika fallback IDs for avalanche-2 and chainlink
- Revert FIRMS API key gating (handled differently now)
- Add sync comments across the 3 crypto config locations
* fix: update AIS relay + seed CoinPaprika fallback for all 10 coins
The AIS relay (primary seeder) still had the old 4-coin list.
The seed script's CoinPaprika fallback map was also missing the
new coins. Both now have all 10 entries.
* refactor: DRY crypto config into shared/crypto.json
Single source of truth for crypto IDs, metadata, and CoinPaprika
fallback mappings. All 4 consumers now import from shared/crypto.json:
- src/config/markets.ts (client)
- server/worldmonitor/market/v1/_shared.ts (server)
- scripts/seed-crypto-quotes.mjs (seed script)
- scripts/ais-relay.cjs (primary relay seeder)
Adding a new coin now requires editing only shared/crypto.json.
* chore: fix pre-existing markdown lint errors in README.md
Add blank lines between headings and lists per MD022/MD032 rules.
* fix: correct CoinPaprika XRP mapping and add crypto config test
- Fix xrp-ripple → xrp-xrp (current CoinPaprika id)
- Add tests/crypto-config.test.mjs: validates every coin has meta,
coinpaprika mapping, unique symbols, no stablecoins, and valid
id format — bad fallback ids now fail fast
* test: validate CoinPaprika ids against live API
The regex-only check wouldn't have caught the xrp-ripple typo.
New test fetches /v1/coins from CoinPaprika and asserts every
configured id exists. Gracefully skips if API is unreachable.
* fix(test): handle network failures in CoinPaprika API validation
Wrap fetch in try-catch so DNS failures, timeouts, and rate limits
skip gracefully instead of failing the test suite.
---------
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Elie Habib <elie.habib@gmail.com>
* feat(intelligence): server-side batch AI classification for news headlines
Move LLM classification from per-client RPCs to a server-side seed loop.
The relay batch-classifies digest titles every 15min via any OpenAI-compatible
endpoint, caches results in Redis, and the digest handler enriches items
from cache before serving — eliminating ~80 classify-event calls per client.
- Remove duplicate digest branch in data-loader.ts (dead code)
- Add _batch-classify.ts with provider-agnostic batch LLM classify
- Update classify-event.ts to use LLM_API_KEY/LLM_API_URL/LLM_MODEL env vars
- Add upstashMGet helper + classify seed loop to ais-relay.cjs
- Add enrichWithAiCache to list-feed-digest.ts (single batch Redis read)
- Preserve high-confidence keyword hits (>= 0.9) via upgrade rule
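The cache-enrichment upgrade rule can be sketched as below. Only the >= 0.9 keyword-preservation rule comes from the message; the item shape and field names are assumptions.

```javascript
// Merge a cached LLM classification into a digest item, keeping
// near-certain keyword hits instead of overwriting them.
function enrichWithAiCache(item, cached) {
  if (!cached) return item;
  // Upgrade rule: high-confidence keyword classifications win.
  if (item.source === 'keyword' && item.confidence >= 0.9) return item;
  return { ...item, category: cached.category, confidence: cached.confidence, source: 'llm' };
}
```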
* fix(intelligence): use protocol-aware transport in relay LLM fetch, tolerate wrapped JSON in classify-event
- classifyFetchLlm: pick http/https module based on URL protocol so
http://localhost works for local Ollama/vLLM during dev
- classify-event.ts: extract JSON object from fenced/wrapped model output
(matches relay's tolerant parsing for OpenAI-compatible providers)
* perf(edge): reduce unnecessary Vercel edge invocations (#6 findings)
Phase 1 — Client-only fixes:
- Remove predictions double getHydratedData read from data-loader.ts;
fetchPredictions() handles hydration internally
- Fix UCDP delete-on-read race: read hydratedUcdp once in data-loader,
pass to both fetchUcdpClassifications() and fetchUcdpEvents()
Phase 2 — Batch RPCs (proto + server + client):
- Add GetHumanitarianSummaryBatch RPC: replaces 20-request HAPI fanout
with single batch call (getCachedJsonBatch + per-key Redis caching)
- Add GetFredSeriesBatch RPC: replaces 7-request FRED fanout with
single batch call (same pattern)
- Both batch RPCs have 404 deploy-skew fallback to per-item calls
Phase 3 — Seed gap:
- Add seed-service-statuses.mjs standalone seed script
- Add 15-min warm-ping loop in AIS relay for service statuses
- Remove serviceStatuses from ON_DEMAND_KEYS in health.js
Net savings: up to 28 edge calls eliminated on cold miss per page load.
* fix(edge): address code review findings (P1–P3)
P1: Fix dead 404 deploy-skew fallback — circuit breaker was swallowing
the ApiError before the catch block could detect it. Move 404 fallback
inside the breaker callback so it executes before the breaker catches.
P2: Replace 172-line seed-service-statuses.mjs (duplicated parser logic)
with a 60-line warm-ping that triggers the existing RPC handler.
P2: Extract shared ISO2_TO_ISO3 mapping to conflict/v1/_shared.ts,
eliminating duplication between single and batch HAPI handlers.
P3: Remove unnecessary UPSTASH_ENABLED guard from relay warm-ping
(it calls Vercel RPC, not Redis directly).
P3: Clean up unused per-series FRED breakers and fetchSingleFredSeries
(replaced by batch breaker). Update getFredStatus() accordingly.
* fix(edge): use concurrent fetches in batch handlers
HAPI batch: replace serial loop with groups of 5 concurrent fetches
using Promise.allSettled for partial-success resilience. Bump client
timeout to 60s (4 rounds × 15s upstream timeout worst case).
FRED batch: replace serial loop with fully parallel Promise.allSettled
(max 10 series, each hits separate FRED endpoint).
Both changes prevent empty-result regression on cold cache that the
serial approach caused when upstream latency exceeded the client timeout.
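The grouped-concurrency pattern used for the HAPI batch can be sketched as below; group size 5 per the message, helper name illustrative:

```javascript
// Run fn over items in groups of `groupSize`, using Promise.allSettled so
// one failed fetch yields a null slot instead of sinking the whole batch.
async function mapInGroups(items, groupSize, fn) {
  const results = [];
  for (let i = 0; i < items.length; i += groupSize) {
    const group = items.slice(i, i + groupSize);
    const settled = await Promise.allSettled(group.map(fn));
    results.push(...settled.map(s => (s.status === 'fulfilled' ? s.value : null)));
  }
  return results;
}
```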
* fix(cyber): prevent AbuseIPDB quota burn when Redis rate check fails
The catch block in fetchAbuseIpDb() was falling through to the API call
when the Redis rate-limit check failed (e.g. Redis down, first run with
no key). With a 10-minute cron interval, this could exhaust the 100
calls/day free-plan limit in under 17 hours.
Now returns early with { ok: false, threats: [] } so the other 4 IOC
sources still seed normally while AbuseIPDB is safely skipped.
* fix(seeds): respect API rate limits and log fetch failures
1. seed-fire-detections.mjs: increase delay from 200ms to 6s between
FIRMS API calls. Free tier allows 10 req/min; 27 calls at 200ms
exceeded this and caused silent failures.
2. ais-relay.cjs (positive events): increase GDELT delay from 500ms to
5.5s to respect the documented 1 req/5s rate limit.
3. ais-relay.cjs (cyber fetchers): replace 5 silent `catch { return [] }`
blocks with `console.warn` logging so failures are visible in Railway
logs. Dead code today (cyber loop disabled) but sets the right example
for contributors.
* fix(seeds): extend FIRMS lock TTL and restore AbuseIPDB resilience
P1: seed-fire-detections.mjs — the 6s FIRMS pacing makes the job take
~162s minimum, exceeding the default 120s lock TTL. Extend lockTtlMs
to 300s (5 min) to prevent overlapping cron invocations.
P2: seed-cyber-threats.mjs — revert the early return on Redis rate-check
failure. A transient Redis blip should not permanently disable AbuseIPDB
for that run. Instead, log a warning and proceed with caution. The 2h
rate-limit interval + 10-min cron means at most 1 extra call per Redis
outage window, well within the 100/day budget.
* fix(wildfire): extend lock TTL to 10 min for worst-case FIRMS timeouts
27 calls × (6s pacing + 30s per-request timeout) = 972s worst case.
300s lock was still too short under partial upstream slowness.
* Add Pakistan–Afghanistan hotspot and conflict zone
Introduce a new INTEL_HOTSPOTS entry (pak_afghan) to track Pakistan–Afghanistan border tensions, including location, keywords, agencies, status, escalation indicators, and humanitarian significance. Also add a CONFLICT_ZONES polygon for 'Pakistan–Afghanistan War' with center, intensity, parties, startDate (Feb 21, 2026), key developments, and displacement/casualty notes to enable monitoring of cross-border strikes, TTP activity, and regional instability.
* Update conflict zone center coordinates
Adjust the center coordinates for the specified conflict zone in src/config/geo.ts from [50, 30] to [69, 31.8] to better reflect the actual Pakistan/Afghanistan border region and improve map centering/visualization accuracy.
* Add country boundary overrides (Pakistan)
Support optional country boundary overrides by loading public/data/country-boundary-overrides.geojson and replacing main country geometries when ISO codes match. Add a script (scripts/fetch-pakistan-boundary-override.mjs) to fetch Pakistan's de facto boundary from Natural Earth and write the override file, and document the override workflow in CONTRIBUTING.md. The country-geometry service now attempts to apply overrides and updates cached polygons/bboxes; failures are ignored since overrides are optional.
* fix: neutralize language, parallel override loading, fetch timeout
- Rename conflict zone from "War" to "Border Conflict", intensity high→medium
- Rewrite description to factual language (no "open war" claim)
- Load country boundary overrides in parallel with main GeoJSON
- Neutralize comments/docs: reference Natural Earth source, remove political terms
- Add 60s timeout to Natural Earth fetch script (~24MB download)
- Add trailing newline to GeoJSON override file
* refactor: serve country boundary overrides from R2 CDN
Move country-boundary-overrides.geojson from public/data/ to R2 bucket
(worldmonitor-maps) to avoid serving large static files through Vercel.
Update fetch URL, docs, and script with rclone upload instructions.
* fix: use maps.worldmonitor.app for R2 override URL (CF-proxied)
* fix(geo): bound optional country override fetch
---------
Co-authored-by: Elie Habib <elie.habib@gmail.com>
* feat: premium panel gating, code cleanup, and backend simplifications
Recovered stranded changes from fix/desktop-premium-error-unification.
Premium gating:
- Add premium field ('locked'|'enhanced') to PanelConfig and LayerDefinition
- Panel.showLocked() with lock icon, CTA button, and _locked guard
- PRO badge for enhanced panels when no WM API key
- Exponential backoff auto-retry on showError() (15s→30s→60s→180s cap)
- Gate oref-sirens and telegram-intel panels behind WM API key
- Lock gpsJamming and iranAttacks layer toggles, badge ciiChoropleth
- Add tauri-titlebar drag region for custom titlebar
Code cleanup:
- Extract inline CSS from AirlineIntelPanel, WorldClockPanel to panels.css
- Remove unused showGeoError() from CountryBriefPage
- Remove dead geocodeFailed/retryBtn/closeBtn locale keys (20 files)
- Clean up var names and inline styles across 6 components
Backend:
- Remove seed-meta throttle from redis.ts (unnecessary complexity)
- Risk scores: call handler functions directly instead of raw Redis reads
- Update OpenRouter model to gpt-oss-safeguard-20b:nitro
- Add direct UCDP API fetching with version probing
Config:
- Remove titleBarStyle: Overlay from tauri.conf.json
- Add build:pro and build-sidecar-handlers to build:desktop
- Remove DXB/RUH from default aviation watchlist
- Simplify reverse-geocode (remove AbortController wrapper)
* fix: cast handler requests to any for API tsconfig compat
* fix: revert stale changes that conflict with merged PRs
Reverts files to main versions where old branch changes would
overwrite intentional fixes from PRs #1134, #1138, #1144, #1154:
- news/_shared.ts: keep gemini-2.5-flash model (not stale gpt-oss)
- redis.ts: keep seed-meta throttle from PR #1138
- reverse-geocode.ts: keep AbortController timeout from PR #1134
- CountryBriefPage.ts: keep showGeoError() from PR #1134
- country-intel.ts: keep showGeoError usage from PR #1134
- get-risk-scores.ts: revert non-existent imports
- watchlist.ts: keep DXB/RUH airports from PR #1144
- locales: restore geocodeFailed/retryBtn/closeBtn keys
* fix: restore caller messages in Panel errors and vessel expansion in popups
- Move UCDP direct-fetch cooldown after successful fetch to avoid
suppressing all data for 10 minutes on a single failure
- Use caller-provided messages in showError/showRetrying instead of
discarding them; respect autoRetrySeconds parameter
- Restore cluster-toggle click handler and expandable vessel list
in military cluster popups
* fix(health): add seed-meta tracking for all bootstrap keys missing freshness data
6 bootstrap keys had no SEED_META entries in health.js, so health
endpoint could never track their freshness (always seedStale: true).
health.js:
- Add progressData + renewableEnergy to BOOTSTRAP_KEYS (missed in PR #1159)
- Add SEED_META entries for: positiveGeoEvents, riskScores, iranEvents,
ucdpEvents, sectors, techReadiness, progressData, renewableEnergy
ais-relay.cjs:
- Add seed-meta writes for positive events (GDELT), risk scores (CII),
and sectors — these loops had no freshness tracking
- iranEvents and ucdpEvents already write seed-meta via their seed scripts
* fix(seed): add seed-meta writes to seed-wb-indicators.mjs
The seed script wrote data keys but never wrote seed-meta keys,
causing health endpoint to report STALE_SEED for techReadiness,
progressData, and renewableEnergy indefinitely.
* fix(economic): seed all WB indicators on Railway, never call WB API from frontend
Extends seed-wb-indicators.mjs to pre-compute progress data (4 indicators)
and renewable energy data (EG.ELC.RNEW.ZS) alongside tech readiness rankings.
Frontend callers (progress-data.ts, renewable-energy-data.ts, getTechReadinessRankings,
getCountryComparison) now read exclusively from bootstrap/Redis seed data.
Zero Vercel Edge → World Bank API calls remain.
* fix: address code review findings (P1+P2)
- Fix triple JSON.parse in seed verification (P1)
- Graceful fallback for renewable data fetch failure (P2)
- Use Map lookup instead of Array.find in progress-data (P2)
- Update regression test for bootstrap-only getTechReadinessRankings (P2)
Manual seed script for Iran conflict events. Reads from
scripts/data/iran-events-latest.json, geocodes via LOCATION_COORDS,
seeds to conflict:iran-events:v1. Data file stays gitignored.
- Exit 0 on failure so Railway cron doesn't restart the container
- Wait 3s after RPC warm before re-reading digest from Redis
- Fall back to existing insights (LKG) when digest key is missing
* feat(seeds): add BIS data seed job and relax health thresholds
Add seed-bis-data.mjs that fetches all 3 BIS datasets (policy rates,
exchange rates, credit-to-GDP) in parallel and writes to Redis. This
keeps the cache warm instead of relying on on-demand RPC calls.
Relax BIS health thresholds from 1440min (24h) to 2880min (48h) since
BIS data is monthly/quarterly — 24h was too aggressive.
* fix(health): relax minerals and giving thresholds to 7 days
Both are static/hardcoded data with no external API calls.
2880min (48h) was too aggressive for annual data.
* fix(gpsjam): write seed-meta for health freshness tracking
The fetch-gpsjam script seeded Redis data but never wrote
seed-meta:intelligence:gpsjam, causing health to report STALE_SEED.
* data(iran): import 100 events + add 27 geocoded locations
Import latest LiveUAMap events (March 5-6, 2026) covering
US-Israeli strikes on Iran, Iranian retaliatory attacks on
Gulf states and Israel, and regional diplomatic developments.
New LOCATION_COORDS: Paveh, Parchin, Rasht, Khorramabad,
Damavand, Parand, Javanrud, Basra, Karbala, Nakhchivan,
Koya, Elad, Juffair, Hodeidah, Sana'a, Ma'ameer, Pakdasht,
Alborz, Khor al-Zubair, Prince Sultan AB, Ben Gurion,
Tel Nof, Azerbaijan, Yemen.
* fix: remove seed script and event data from git tracking
These files are already in .gitignore but were committed previously.
Event data belongs in Redis only, not in the repo.
* feat(api): add comprehensive health check endpoint for UptimeRobot
Checks all 44 Redis cache keys (33 bootstrap + 11 standalone) plus
17 seed-meta freshness timestamps in a single Redis pipeline.
- Returns HEALTHY/DEGRADED/UNHEALTHY with per-key status
- Distinguishes seed-backed keys (STALE_SEED) from on-demand keys (EMPTY_ON_DEMAND)
- No auth required, ?compact=1 for minimal payload
- UptimeRobot: keyword monitor on "HEALTHY", HTTP 503 on UNHEALTHY
* feat(market): add CoinPaprika fallback for crypto/stablecoin data
CoinGecko 429 rate limiting causes seed and RPC failures.
Added CoinPaprika (free, 250K req/mo, no key) as automatic fallback
when CoinGecko fails. Also adds CoinGecko Pro key support.
- _shared.ts: fetchCryptoMarkets() unified wrapper (CoinGecko → CoinPaprika)
- list-crypto-quotes.ts: use fetchCryptoMarkets instead of direct CoinGecko
- list-stablecoin-markets.ts: same, removed duplicate CoinGecko fetch
- seed-crypto-quotes.mjs: CoinPaprika fallback + Pro key support
- seed-stablecoin-markets.mjs: same
- ais-relay.cjs: both seedCryptoQuotes and seedStablecoinMarkets
At midnight UTC, FIRMS API returns 0 fire detections due to date
rollover. The validateFn correctly rejects empty data, but previously
this threw a FATAL error and crashed. Now it exits cleanly (code 0),
preserving existing cached data in Redis for the next successful run.
* perf: reduce Vercel data transfer costs with CDN optimization
- Increase polling intervals (markets 8→12min, feeds 15→20min, crypto 8→12min)
- Increase background tab hiddenMultiplier from 10→30 (polls 3x less when hidden)
- Double CDN s-maxage TTLs across all cache tiers in gateway
- Add CDN-Cache-Control header for Cloudflare-specific longer edge caching
- Add ETag generation + 304 Not Modified support in gateway (zero-byte revalidation)
- Add CDN-Cache-Control to bootstrap endpoint
- Add explicit SPA rewrite rule in vercel.json for CF proxy compatibility
- Add Cache-Control headers for /map-styles/, /data/, /textures/ static paths
* fix: prevent CF from caching SPA HTML + reduce Polymarket bandwidth 95%
- vercel.json: apply no-cache headers to ALL SPA routes (same regex as
rewrite rule), not just / and /index.html — prevents CF proxy from
caching stale HTML that references old content-hashed bundle filenames
- Polymarket: add server-side aggregation via Railway seed script that
fetches all tags once and writes to Redis, eliminating 11-request
fan-out per user per poll cycle
- Bootstrap: add predictions to hydration keys for zero-cost page load
- RPC handler: read Railway-seeded bootstrap key before falling back to
live Gamma API fetch
- Client: 3-strategy waterfall (bootstrap → RPC → fan-out fallback)
Root cause: AbuseIPDB has 100 calls/day limit. The cyber seed cron runs
every 2h with a 2h TTL — tight race causes Vercel handler fallthrough
to live fetches when the key expires between cron runs.
Three fixes:
1. Rate-guard AbuseIPDB in seed-cyber-threats.mjs: checks Redis key
`rate:abuseipdb:last-call` before calling API, uses cached threats
from `cache:abuseipdb:threats` between calls (2h minimum interval)
2. Disable duplicate cyber seed loop in ais-relay.cjs (standalone cron
handles it — avoids 12 extra AbuseIPDB calls/day)
3. Increase seed TTL from 2h to 3h to survive 1 missed cron cycle
The cyber seed wrote to `cyber:threats:v2:0:::` but the handler reads
from `cyber:threats:v2` — seed data was invisible to the handler, causing
every request to fall through to live AbuseIPDB/OTX/URLhaus fetches and
burning API quotas.
Additionally, 4 market domains (crypto, gulf, stablecoins, ETF flows) had
handler-side seed-reading code but no corresponding seed functions in the
Railway relay. All requests fell through to live CoinGecko/Yahoo fetches.
Changes:
- Fix CYBER_RPC_KEY to match handler's REDIS_CACHE_KEY
- Add seed-meta:cyber:threats write with fetchedAt timestamp
- Add seedGulfQuotes() — Yahoo Finance, 14 symbols, 1h interval
- Add seedEtfFlows() — Yahoo Finance, 10 BTC ETFs, 1h interval
- Add seedCryptoQuotes() — CoinGecko, 4 coins, 30min interval
- Add seedStablecoinMarkets() — CoinGecko, 5 stablecoins, 30min interval
- All new seeds write both data key and seed-meta key
- Wire all into seedAllMarketData() loop
When the 34MB data file isn't available locally, the script now:
1. Checks /data/ (Railway volume mount)
2. Checks scripts/data/ (local)
3. Downloads from Cloudflare R2 bucket (worldmonitor-data)
4. Falls back to Redis check (skip if data already seeded)
R2 bucket: worldmonitor-data/seed-data/military-bases-final.json
Requires CLOUDFLARE_R2_TOKEN or CLOUDFLARE_API_TOKEN env var on Railway.
- Check Railway volume mount (/data/) first, then local scripts/data/
- If no file found, check if Redis already has active data — skip gracefully
- No more crash when deployed as cron without the 34MB data file
The data uses Redis GEO/HASH keys with no TTL (persists indefinitely).
Re-seeding only needed when base data changes or Redis is wiped.
* feat: enhance support for HLS streams and update font styles
* chore: add .vercelignore to exclude large local build artifacts from Vercel deploys
* chore: include node types in tsconfig to fix server type errors on Vercel build
* fix(middleware): guard optional variant OG lookup to satisfy strict TS
* fix: desktop build and live channels handle null safety
- scripts/build-sidecar-sebuf.mjs: Skip building the [domain]/v1/[rpc].ts handler removed in #785
- src/live-channels-window.ts: Add optional chaining for handle property to prevent null errors
- src-tauri/Cargo.lock: Bump version to 2.5.24
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
* fix: address review issues on PR #1020
- Remove AGENTS.md (project guidelines belong to repo owner)
- Restore tracking script in index.html (accidentally removed)
- Revert tsconfig.json "node" types (leaks Node globals to frontend)
- Add protocol validation to isHlsUrl() (security: block non-http URIs)
- Revert Cargo.lock version bump (release management concern)
* fix: address P2/P3 review findings
- Preserve hlsUrl for HLS-only channels in refreshChannelInfo (was
incorrectly clearing the stream URL on every refresh cycle)
- Replace deprecated .substr() with .substring()
- Extract duplicated HLS display name logic into getChannelDisplayName()
---------
Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Co-authored-by: Elie Habib <elie.habib@gmail.com>
* fix: allow zero fire detections in seed validation
FIRMS NRT data has a rolling window — at certain hours, all 9 monitored
regions can legitimately return 0 active fire detections. The strict
length > 0 validation caused CRASHED status on Railway cron runs
during these periods. Structure-only validation is sufficient.
* fix: add rate-limit-aware retry for CoinGecko 429s
The default withRetry (1s/2s/4s backoff) is too short for CoinGecko
rate limits. New fetchWithRateLimitRetry uses 10s/20s/30s/40s/50s
delays with up to 5 attempts specifically for 429 responses.
* fix: add 429 rate-limit retry to all Yahoo and CoinGecko seed scripts
Yahoo Finance and CoinGecko both return 429 when rate limited. The
default withRetry (1s/2s/4s) is too fast for rate limits. Added
per-request 429-specific retry with longer backoff:
- Yahoo: 5s/10s/15s/20s (4 attempts per symbol)
- CoinGecko: 10s/20s/30s/40s/50s (5 attempts)
Scripts updated: seed-etf-flows, seed-gulf-quotes, seed-commodity-quotes,
seed-market-quotes, seed-stablecoin-markets.
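The 429-specific backoff can be sketched as below. The delay schedule follows the CoinGecko numbers above; the exact attempt accounting is an assumption, and fetchImpl/sleepImpl are injectable so the sketch is testable without the network.

```javascript
const sleep = ms => new Promise(r => setTimeout(r, ms));

// Retry ONLY on 429, waiting the next delay in the schedule before each retry.
// Non-429 responses (including errors like 500) return immediately.
async function fetchWithRateLimitRetry(url, {
  delaysMs = [10_000, 20_000, 30_000, 40_000, 50_000],
  fetchImpl = fetch,
  sleepImpl = sleep,
} = {}) {
  let res = await fetchImpl(url);
  for (const delay of delaysMs) {
    if (res.status !== 429) break;
    await sleepImpl(delay);
    res = await fetchImpl(url);
  }
  return res; // may still be 429 after the schedule is exhausted
}
```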
The GPS jamming data pipeline had no scheduled seed — fetch-gpsjam.mjs
existed as a standalone script but was never wired into the relay's
setInterval-based seed system. Redis key intelligence:gpsjam:v1 was
always empty, forcing the edge function to fall back to direct
gpsjam.org fetches (without lat/lon pre-computation).
Adds startGpsJamSeedLoop() that runs every 6 hours:
- Fetches manifest + CSV from gpsjam.org
- Parses H3 hex data with min-aircraft threshold
- Converts H3→lat/lon via h3-js (pre-computed for frontend)
- Classifies regions for conflict zone tagging
- Writes enriched data to Redis with 24h TTL
* feat: add seed-first pattern to 15 RPC handlers with Railway seed scripts
Migrate handlers from direct external API calls to seed-first pattern:
Railway cron seeds Redis → handlers read from Redis → fallback to live
fetch if seed stale and SEED_FALLBACK_* env enabled.
Handlers updated: earthquakes, fire-detections, internet-outages,
climate-anomalies, unrest-events, cyber-threats, market-quotes,
commodity-quotes, crypto-quotes, etf-flows, gulf-quotes,
stablecoin-markets, natural-events, displacement-summary, risk-scores.
Also adds:
- scripts/_seed-utils.mjs (shared seed framework with atomic publish,
distributed locks, retry, freshness metadata)
- 13 seed scripts for Railway cron
- api/seed-health.js monitoring endpoint
- scripts/validate-seed-migration.mjs post-deploy validation
- Restored multi-source CII in get-risk-scores (8 sources: ACLED,
UCDP, outages, climate, cyber, fires, GPS, Iran)
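The seed-first read path can be sketched like this; the freshness-metadata fields and option names are assumptions, not the actual `_seed-utils.mjs` schema:

```javascript
// Hypothetical seed-first handler core: prefer the Railway-seeded Redis
// value, and only fall back to a live fetch when the seed is stale AND
// the per-domain SEED_FALLBACK_* flag is enabled.
async function seedFirst(redis, key, { maxStaleMin, fallbackEnabled, liveFetch }) {
  const raw = await redis.get(key);
  if (raw) {
    const seed = JSON.parse(raw);
    const ageMin = (Date.now() - seed.seededAt) / 60_000;
    if (ageMin <= maxStaleMin) return { data: seed.data, source: 'seed' };
    if (!fallbackEnabled) return { data: seed.data, source: 'stale-seed' };
  }
  if (!fallbackEnabled) throw new Error(`no seed for ${key} and fallback disabled`);
  return { data: await liveFetch(), source: 'live' };
}
```

Serving a stale seed when fallback is disabled (rather than erroring) is a design choice assumed here: stale data beats an empty panel.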
* feat: add seed scripts for market quotes, commodity quotes & airport delays
New seed scripts:
- seed-market-quotes.mjs: 28 symbols via Yahoo Finance + Finnhub
- seed-commodity-quotes.mjs: 6 commodity futures via Yahoo Finance
- seed-airport-delays.mjs: FAA + NOTAM airport closure data
Handler changes (seed-first pattern):
- list-market-quotes.ts: read from seed data before live fetch
- list-commodity-quotes.ts: read from seed data before live fetch
- list-airport-delays.ts: seed-first for FAA and NOTAM data
Other changes:
- ais-relay.cjs: add DISABLE_RELAY_MARKET_SEED guard for cutover
- _seed-utils.mjs: add sleep, parseYahooChart, writeExtraKey helpers
- seed-health.js: monitor 4 new seed domains
- validate-seed-migration.mjs: add new domains to validation
* fix: extract digest items from category buckets in seed-insights
The news digest Redis key stores items nested in category buckets
({ categories: { politics: { items: [...] }, ... } }), not as a
flat array. The script checked digest.items, which is undefined for
the nested shape, causing "Digest has no items" errors on every run.
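The fix amounts to flattening the buckets; the object shape comes from the commit text, while the helper name is illustrative:

```javascript
// Pull items out of the category buckets
// ({ categories: { politics: { items: [...] }, ... } }),
// falling back to the flat-array shape if it ever appears.
function extractDigestItems(digest) {
  if (Array.isArray(digest?.items)) return digest.items; // legacy flat shape
  return Object.values(digest?.categories ?? {})
    .flatMap((bucket) => bucket?.items ?? []);
}
```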
Railway marks cron jobs as "failed" when the Node.js process doesn't
exit cleanly. The seed scripts relied on natural event loop drain,
but undici's connection pool keeps handles alive, causing Railway to
kill the process and mark it as failed.
Changes:
- Add process.exit(0) on success and lock-skip paths in runSeed()
- Fix recordCount for crypto (.quotes) and stablecoin (.stablecoins)
- Add writeExtraKey, sleep, parseYahooChart shared utilities
- Add extraKeys option to runSeed for bootstrap hydration keys
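A sketch of the explicit-exit fix: without `process.exit(0)`, undici's keep-alive sockets hold the event loop open and Railway kills the "hung" cron. The real `runSeed()` has a richer signature; the injectable `exit` parameter is an assumption added here only to make the exit paths visible:

```javascript
// Both the success path and the lock-skip path must exit 0 explicitly;
// relying on natural event-loop drain left live handles behind.
async function runSeed(name, seedFn, exit = process.exit) {
  try {
    const result = await seedFn();
    if (result?.lockSkipped) {
      console.log(`[${name}] another instance holds the lock, skipping`);
    }
    exit(0); // success and lock-skip both exit cleanly
  } catch (err) {
    console.error(`[${name}] seed failed:`, err);
    exit(1); // let Railway see a genuine failure
  }
}
```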
The RSS_ALLOWED_DOMAINS refactor missed the redirect handler at line 4755,
causing ReferenceError: allowedDomains is not defined every time an RSS
feed returns a 301/302 redirect. This crashes the entire relay process.
* perf(rss): route RSS direct to Railway, skip Vercel middleman
Vercel /api/rss-proxy has 65% error rate (207K failed invocations/12h).
Route browser RSS requests directly to Railway (proxy.worldmonitor.app)
via Cloudflare CDN, eliminating Vercel as middleman.
- Add VITE_RSS_DIRECT_TO_RELAY feature flag (default off) for staged rollout
- Centralize RSS proxy URL in rssProxyUrl() with desktop/dev/prod routing
- Make Railway /rss public (skip auth, keep rate limiting with CF-Connecting-IP)
- Add wildcard *.worldmonitor.app CORS + always emit Vary: Origin on /rss
- Extract ~290 RSS domains to shared/rss-allowed-domains.cjs (single source of truth)
- Convert Railway domain check to Set for O(1) lookups
- Remove rss-proxy from KEYED_CLOUD_API_PATTERN (no longer needs API key header)
- Add edge function test for shared domain list import
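The centralized routing helper might look roughly like this; `proxy.worldmonitor.app` and the flag name come from the commit text, but the parameterization and URL paths are assumptions:

```javascript
// Single place that decides where browser RSS requests go:
// direct-to-relay (behind the VITE_RSS_DIRECT_TO_RELAY flag) or the
// legacy Vercel /api/rss-proxy route.
function rssProxyUrl(feedUrl, { directToRelay = false, dev = false } = {}) {
  const encoded = encodeURIComponent(feedUrl);
  if (directToRelay) {
    // Railway relay via Cloudflare CDN, skipping Vercel entirely
    return `https://proxy.worldmonitor.app/rss?url=${encoded}`;
  }
  const base = dev ? 'http://localhost:3000' : '';
  return `${base}/api/rss-proxy?url=${encoded}`;
}
```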
* fix(edge): replace node:module with JSON import for edge-compatible RSS domains
api/_rss-allowed-domains.js used createRequire from node:module, which is
unsupported in the Vercel Edge Runtime, breaking all edge functions (including
api/gpsjam). Replaced with JSON import attribute syntax that works in both
esbuild (Vercel build) and Node.js 22+ (tests).
Also fixed middleware.ts TS18048 error where VARIANT_OG[variant] could be
undefined.
* test(edge): add guard against node: built-in imports in api/ files
Scans ALL api/*.js files (including _ helpers) for node: module imports,
which are unsupported in the Vercel Edge Runtime. This would have caught the
createRequire(node:module) bug before it reached Vercel.
* fix(edge): inline domain array and remove NextResponse reference
- Replace `import ... with { type: 'json' }` in _rss-allowed-domains.js
with inline array — Vercel esbuild doesn't support import attributes
- Replace `NextResponse.next()` with bare `return` in middleware.ts —
NextResponse was never imported
* ci(pre-push): add esbuild bundle check and edge function tests
The pre-push hook now catches Vercel build failures locally:
- esbuild bundles each api/*.js entrypoint (catches import attribute
syntax, missing modules, and other bundler errors)
- runs edge function test suite (node: imports, module isolation)
* fix: add circuit breaker + bootstrap to CII risk scores
Same pattern as theater posture (#948): replace fragile in-memory cache
+ manual persistent-cache with circuit breaker (SWR, IndexedDB, cooldown)
and bootstrap hydration. Eliminates learning-mode delay on cold start
and survives RPC failures without blanking the panel.
* fix: add localStorage sync prime for CII risk scores
getCachedScores() is called synchronously by country-intel.ts as a
fallback during learning mode. Without localStorage priming, the
breaker's async IndexedDB hydration hasn't completed yet, so the
synchronous call returns null.
- Add shape validator (isValidCiiEntry) for untrusted localStorage data
- Add loadFromStorage/saveToStorage with 24h staleness ceiling
- Prime breaker synchronously at module load from localStorage
- Skip priming for empty cii arrays to avoid cached-empty trap
- Save to localStorage on both bootstrap and RPC success paths
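A sketch of the synchronous load-and-validate path; `isValidCiiEntry` and the 24h ceiling follow the commit text, but the storage key, payload shape, and entry fields are assumptions:

```javascript
// Synchronous localStorage read with a staleness ceiling, a shape check
// for untrusted data, and the cached-empty-array guard.
const STORAGE_KEY = 'cii-risk-scores-v1';
const MAX_AGE_MS = 24 * 60 * 60 * 1000; // 24h staleness ceiling

function isValidCiiEntry(e) {
  return Boolean(e) && typeof e.iso3 === 'string' && typeof e.score === 'number';
}

function loadFromStorage(storage = globalThis.localStorage) {
  try {
    const raw = storage?.getItem(STORAGE_KEY);
    if (!raw) return null;
    const { savedAt, cii } = JSON.parse(raw);
    if (Date.now() - savedAt > MAX_AGE_MS) return null;       // too stale to prime
    if (!Array.isArray(cii) || cii.length === 0) return null; // cached-empty trap
    return cii.every(isValidCiiEntry) ? cii : null;           // untrusted data
  } catch {
    return null; // corrupt JSON: behave as a cache miss
  }
}
```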
* feat: Railway CII seed + bootstrap hydration for instant panel render
- Add 8-source CII seed to Railway (ACLED, UCDP, outages, climate, cyber, fires, GPS, Iran strikes)
- Neuter Vercel handler to read-only (returns Railway-seeded cache, never recomputes)
- Register riskScores in bootstrap FAST tier for CDN-cached delivery
- Add early CII hydration in data-loader before intelligence signals
- Add CIIPanel.renderFromCached() for instant render from bootstrap data
- Refactor cached-risk-scores.ts: circuit breaker + localStorage sync prime + bootstrap hydration
- Progressive enhancement: cached render → full 18-source local recompute (no spinner)
* fix: remove duplicate riskScores key in BOOTSTRAP_TIERS after merge
The AU Smartraveller RSS feeds have been consistently returning 503
from both Vercel edge and Railway relay. Remove all references from
security-advisories feeds, rss-proxy allowed domains, and relay allowlist.
GDELT GEO API had 99.9% timeout rate on Vercel Edge (746 invocations, ~31s
sequential calls vs 25s edge limit). Move fetching to Railway cron (15min),
write to Redis, have Vercel serve read-only from cache with bootstrap hydration.
- Add startPositiveEventsSeedLoop() to ais-relay.cjs (3 queries, dedup, classify)
- Rewrite handler to cache-read-only pattern (matches UCDP)
- Register bootstrap key in FAST_KEYS for instant first render
- Wire getHydratedData() in data-loader before RPC fallback
Add --compressed to all OREF curl requests (~90% bandwidth reduction).
Introduce 3-tier bootstrap: local file (Railway volume) → Redis → upstream,
so restarts never need to re-fetch the full AlertsHistory.json through the
paid residential proxy. Local file is kept in sync after every poll cycle
and upstream bootstrap. OREF_DATA_DIR env var opts in to local persistence.
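The 3-tier fallback can be sketched as follows; OREF_DATA_DIR and AlertsHistory.json come from the commit text, while the Redis key and function shape are assumptions:

```javascript
// Tiered bootstrap: local file (Railway volume) → Redis → upstream, so a
// restart never re-fetches the full history through the residential proxy.
import { readFile } from 'node:fs/promises';
import { join } from 'node:path';

async function bootstrapAlertsHistory({ redis, fetchUpstream }) {
  // Tier 1: local volume, opted in via OREF_DATA_DIR; no network at all.
  const dir = process.env.OREF_DATA_DIR;
  if (dir) {
    try {
      return JSON.parse(await readFile(join(dir, 'AlertsHistory.json'), 'utf8'));
    } catch { /* missing or corrupt file: fall through to Redis */ }
  }
  // Tier 2: Redis cache shared across instances.
  const cached = await redis.get('oref:alerts-history:v1');
  if (cached) return JSON.parse(cached);
  // Tier 3: full re-fetch through the paid residential proxy.
  return fetchUpstream();
}
```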
- Wrap all 4 behavioral it() blocks in try/finally so clearAllCircuitBreakers()
always runs on assertion failure (P2 — leaked breaker state between tests)
- Add assert.ok(fnStart !== -1) guards for fetchHapiSummary, fetchPositiveGdeltArticles,
and fetchGdeltArticles so renames produce a clear diagnostic (P2 — silent false-positives)
- Fix misleading comment in seed-wb-indicators.mjs: WLD/EAS are 3-char codes and
aren't filtered by iso3.length !== 3 (P3)
- Add timeout-minutes: 10 and permissions: contents: read to seed GHA workflow (P3)