Mirror of https://github.com/koala73/worldmonitor.git
Synced 2026-05-13 18:46:21 +02:00
Latest commit: 26ecf3d91dc5ef9b2e1db36edfaab161efcc3622
138 commits

26ecf3d91d  feat(seeds): add BIS data seed job and relax health thresholds (#1131)

* feat(seeds): add BIS data seed job and relax health thresholds
  Add seed-bis-data.mjs that fetches all 3 BIS datasets (policy rates, exchange rates, credit-to-GDP) in parallel and writes to Redis. This keeps the cache warm instead of relying on on-demand RPC calls. Relax BIS health thresholds from 1440min (24h) to 2880min (48h) since BIS data is monthly/quarterly — 24h was too aggressive.
* fix(health): relax minerals and giving thresholds to 7 days
  Both are static/hardcoded data with no external API calls. 2880min (48h) was too aggressive for annual data.
* fix(gpsjam): write seed-meta for health freshness tracking
  The fetch-gpsjam script seeded Redis data but never wrote seed-meta:intelligence:gpsjam, causing health to report STALE_SEED.

7c760c575a  fix(health): resolve bisCredit empty data and theater posture warnings (#1124)

804e4128f6  fix(cyber): suppress MaxListenersExceededWarning in GeoIP hydration (#1120)

setMaxListeners on the AbortSignal to match the concurrent fetch count, preventing 100+ warning lines in Railway logs.

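The fix above can be sketched as follows; the `targets` array and the commented-out `lookupGeo` call are illustrative stand-ins, not the repo's code.

```javascript
import { setMaxListeners } from 'node:events';

// Each fetch() registers an 'abort' listener on the shared signal; above the
// default cap of 10, Node emits MaxListenersExceededWarning. Raising the cap
// to the fan-out size silences the warning without hiding real leaks elsewhere.
const controller = new AbortController();
const targets = Array.from({ length: 150 }, (_, i) => `203.0.113.${i % 254}`); // placeholder IPs

setMaxListeners(targets.length, controller.signal);
// const results = await Promise.all(targets.map((ip) => lookupGeo(ip, controller.signal)));
```
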
5e25bb1386  fix(health): resolve all critical health check failures (#1111)

## Summary
- Reclassify 10 on-demand keys (BIS, supply chain, theater posture, etc.) from BOOTSTRAP → STANDALONE + ON_DEMAND to stop false CRITs
- Fix seed-insights Railway OOM by correcting service-level settings
- Unify LLM fallback chain (Groq → OpenRouter → Ollama) in seed-insights
- Switch OpenRouter model to `openai/gpt-oss-safeguard-20b:nitro`
- Fix GDELT v2/geo → v1/gkg_geojson for unrestEvents and positiveGeoEvents (the v2 endpoint is dead)
- Add seed-meta writes for marketQuotes/commodityQuotes in the AIS relay (zero extra Yahoo calls)
- Remove the aggressive coord filter in cyber threats that dropped all threats when GeoIP was rate-limited

## Health impact
- 6 false CRITs eliminated (reclassified as on-demand)
- marketQuotes/commodityQuotes STALE_SEED → OK (seed-meta tracking)
- unrestEvents EMPTY_DATA → OK (GDELT v1 fix)
- positiveGeoEvents EMPTY_DATA → OK (GDELT v1 fix in relay)
- cyberThreats resilience improved (coord filter removal)

1262e79b38  fix: remove data files from git tracking (#1114)

* data(iran): import 100 events + add 27 geocoded locations
  Import latest LiveUAMap events (March 5-6, 2026) covering US-Israeli strikes on Iran, Iranian retaliatory attacks on Gulf states and Israel, and regional diplomatic developments. New LOCATION_COORDS: Paveh, Parchin, Rasht, Khorramabad, Damavand, Parand, Javanrud, Basra, Karbala, Nakhchivan, Koya, Elad, Juffair, Hodeidah, Sana'a, Ma'ameer, Pakdasht, Alborz, Khor al-Zubair, Prince Sultan AB, Ben Gurion, Tel Nof, Azerbaijan, Yemen.
* fix: remove seed script and event data from git tracking
  These files are already in .gitignore but were committed previously. Event data belongs in Redis only, not in the repo.

d5cabc6ecd  feat(market): add CoinPaprika fallback for crypto/stablecoin data (#1092)

* feat(api): add comprehensive health check endpoint for UptimeRobot
  Checks all 44 Redis cache keys (33 bootstrap + 11 standalone) plus 17 seed-meta freshness timestamps in a single Redis pipeline.
  - Returns HEALTHY/DEGRADED/UNHEALTHY with per-key status
  - Distinguishes seed-backed keys (STALE_SEED) from on-demand keys (EMPTY_ON_DEMAND)
  - No auth required, ?compact=1 for minimal payload
  - UptimeRobot: keyword monitor on "HEALTHY", HTTP 503 on UNHEALTHY
* feat(market): add CoinPaprika fallback for crypto/stablecoin data
  CoinGecko 429 rate limiting causes seed and RPC failures. Added CoinPaprika (free, 250K req/mo, no key) as an automatic fallback when CoinGecko fails. Also adds CoinGecko Pro key support.
  - _shared.ts: fetchCryptoMarkets() unified wrapper (CoinGecko → CoinPaprika)
  - list-crypto-quotes.ts: use fetchCryptoMarkets instead of direct CoinGecko
  - list-stablecoin-markets.ts: same, removed duplicate CoinGecko fetch
  - seed-crypto-quotes.mjs: CoinPaprika fallback + Pro key support
  - seed-stablecoin-markets.mjs: same
  - ais-relay.cjs: both seedCryptoQuotes and seedStablecoinMarkets

314d341563  fix: gracefully skip seed write when validation fails (empty data) (#1089)

At midnight UTC, the FIRMS API returns 0 fire detections due to date rollover. The validateFn correctly rejects empty data, but previously this threw a FATAL error and crashed. Now it exits cleanly (code 0), preserving existing cached data in Redis for the next successful run.

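The graceful-skip behavior can be sketched like this; `runSeed` and `validateFn` are names taken from the commit message, but the shape of the function is an assumption.

```javascript
// Minimal sketch: reject-but-don't-crash when validation fails on empty data.
function runSeed(payload, validateFn) {
  if (!validateFn(payload)) {
    // Empty-but-well-formed data (e.g. FIRMS right after the UTC date
    // rollover): keep whatever is already cached in Redis and exit cleanly.
    console.log('validation rejected payload; skipping write, keeping cached data');
    return { status: 'skipped', exitCode: 0 };
  }
  // ... write payload to Redis here ...
  return { status: 'written', exitCode: 0 };
}
```
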
320786f82a  fix: prevent CF caching SPA HTML + Polymarket bandwidth optimization (#1058)

* perf: reduce Vercel data transfer costs with CDN optimization
  - Increase polling intervals (markets 8→12min, feeds 15→20min, crypto 8→12min)
  - Increase background tab hiddenMultiplier from 10→30 (polls 3x less when hidden)
  - Double CDN s-maxage TTLs across all cache tiers in the gateway
  - Add CDN-Cache-Control header for Cloudflare-specific longer edge caching
  - Add ETag generation + 304 Not Modified support in the gateway (zero-byte revalidation)
  - Add CDN-Cache-Control to the bootstrap endpoint
  - Add explicit SPA rewrite rule in vercel.json for CF proxy compatibility
  - Add Cache-Control headers for /map-styles/, /data/, /textures/ static paths
* fix: prevent CF from caching SPA HTML + reduce Polymarket bandwidth 95%
  - vercel.json: apply no-cache headers to ALL SPA routes (same regex as the rewrite rule), not just / and /index.html — prevents the CF proxy from caching stale HTML that references old content-hashed bundle filenames
  - Polymarket: add server-side aggregation via a Railway seed script that fetches all tags once and writes to Redis, eliminating the 11-request fan-out per user per poll cycle
  - Bootstrap: add predictions to hydration keys for zero-cost page load
  - RPC handler: read the Railway-seeded bootstrap key before falling back to a live Gamma API fetch
  - Client: 3-strategy waterfall (bootstrap → RPC → fan-out fallback)

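The ETag + 304 revalidation mentioned above can be sketched as follows; the hash choice and response shape are assumptions, not the gateway's actual implementation.

```javascript
import { createHash } from 'node:crypto';

// Strong ETag derived from a hash of the response body.
function etagFor(body) {
  return `"${createHash('sha256').update(body).digest('hex').slice(0, 16)}"`;
}

// If the client's If-None-Match matches the current ETag, answer 304 with no
// body (zero-byte revalidation); otherwise send the full payload with its ETag.
function respond(body, ifNoneMatch) {
  const etag = etagFor(body);
  if (ifNoneMatch === etag) {
    return { status: 304, headers: { ETag: etag } };
  }
  return { status: 200, headers: { ETag: etag }, body };
}
```
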
478df641fa  fix: rate-guard AbuseIPDB calls and disable duplicate cyber seed loop (#1055)

Root cause: AbuseIPDB has a 100 calls/day limit. The cyber seed cron runs every 2h with a 2h TTL — this tight race causes Vercel handler fallthrough to live fetches when the key expires between cron runs.

Three fixes:
1. Rate-guard AbuseIPDB in seed-cyber-threats.mjs: check the Redis key `rate:abuseipdb:last-call` before calling the API, and use cached threats from `cache:abuseipdb:threats` between calls (2h minimum interval)
2. Disable the duplicate cyber seed loop in ais-relay.cjs (the standalone cron handles it — avoids 12 extra AbuseIPDB calls/day)
3. Increase the seed TTL from 2h to 3h to survive 1 missed cron cycle

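Fix 1 can be sketched like this; a `Map` stands in for Redis, and the key names follow the commit message.

```javascript
const redis = new Map(); // stand-in for the real Redis client
const TWO_HOURS_MS = 2 * 60 * 60 * 1000;

// Only allow an AbuseIPDB call if at least 2h have passed since the last one;
// otherwise callers should reuse cache:abuseipdb:threats.
function shouldCallAbuseIpdb(now = Date.now()) {
  const last = redis.get('rate:abuseipdb:last-call');
  if (last !== undefined && now - last < TWO_HOURS_MS) {
    return false; // too soon: reuse cached threats instead
  }
  redis.set('rate:abuseipdb:last-call', now);
  return true;
}
```
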
86c1d1a807  fix: correct cyber seed Redis key mismatch and add missing market seed functions (#1054)

The cyber seed wrote to `cyber:threats:v2:0:::` but the handler reads from `cyber:threats:v2` — seed data was invisible to the handler, causing every request to fall through to live AbuseIPDB/OTX/URLhaus fetches and burning API quotas. Additionally, 4 market domains (crypto, gulf, stablecoins, ETF flows) had handler-side seed-reading code but no corresponding seed functions in the Railway relay; all requests fell through to live CoinGecko/Yahoo fetches.

Changes:
- Fix CYBER_RPC_KEY to match the handler's REDIS_CACHE_KEY
- Add a seed-meta:cyber:threats write with a fetchedAt timestamp
- Add seedGulfQuotes() — Yahoo Finance, 14 symbols, 1h interval
- Add seedEtfFlows() — Yahoo Finance, 10 BTC ETFs, 1h interval
- Add seedCryptoQuotes() — CoinGecko, 4 coins, 30min interval
- Add seedStablecoinMarkets() — CoinGecko, 5 stablecoins, 30min interval
- All new seeds write both the data key and the seed-meta key
- Wire all into the seedAllMarketData() loop

6128efcdef  fix: add R2 fallback for military bases seed data (#1053)

When the 34MB data file isn't available locally, the script now:
1. Checks /data/ (Railway volume mount)
2. Checks scripts/data/ (local)
3. Downloads from the Cloudflare R2 bucket (worldmonitor-data)
4. Falls back to a Redis check (skip if data already seeded)

R2 bucket: worldmonitor-data/seed-data/military-bases-final.json. Requires the CLOUDFLARE_R2_TOKEN or CLOUDFLARE_API_TOKEN env var on Railway.

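The four-step lookup order above amounts to a probe chain; a generic sketch, where the probe functions are stand-ins for the volume check, local check, R2 download, and Redis check:

```javascript
// Try each source in order and return the first one that is available;
// returning null means "nothing to do", not a crash.
async function resolveSeedSource(probes) {
  for (const { name, probe } of probes) {
    if (await probe()) return name;
  }
  return null; // nothing available: skip seeding rather than crash
}
```
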
2b48350b07  fix: make seed-military-bases resilient to missing data file (#1051)

- Check the Railway volume mount (/data/) first, then local scripts/data/
- If no file is found, check whether Redis already has active data — skip gracefully
- No more crash when deployed as a cron without the 34MB data file

The data uses Redis GEO/HASH keys with no TTL (persists indefinitely). Re-seeding is only needed when base data changes or Redis is wiped.

309eeea6fc  feat: add 24 geocoder locations and auto-rotate CDN cache buster for Iran events (#1047)

- Add missing locations to the seed script: Bukan, Saqqez, Sardasht, Marivan, Baneh, Sulaymaniyah, Riffa, Al-Kharj, Al-Jawf, Mehrabad, Mahallati, Tehransar, Borujerdi, Incirlik, Aqaba, Ashkelon, Jerusalem, Sri Lanka, Tabriz, Yazd, Hatay, Najaf, Hazmieh
- Replace the hardcoded ?_v=9 cache-bust with time-based rotation (2-min buckets) so the CDN cache refreshes automatically after Redis imports
- Update iran-events-latest.json with Mar 5 import data (100 events)

02f3fe77a9  feat: Arabic font support and HLS live streaming UI (#1020)

* feat: enhance support for HLS streams and update font styles
* chore: add .vercelignore to exclude large local build artifacts from Vercel deploys
* chore: include node types in tsconfig to fix server type errors on Vercel build
* fix(middleware): guard optional variant OG lookup to satisfy strict TS
* fix: desktop build and live channels handle null safety
  - scripts/build-sidecar-sebuf.mjs: skip building the removed [domain]/v1/[rpc].ts (removed in #785)
  - src/live-channels-window.ts: add optional chaining for the handle property to prevent null errors
  - src-tauri/Cargo.lock: bump version to 2.5.24
* fix: address review issues on PR #1020
  - Remove AGENTS.md (project guidelines belong to the repo owner)
  - Restore the tracking script in index.html (accidentally removed)
  - Revert tsconfig.json "node" types (leaks Node globals to the frontend)
  - Add protocol validation to isHlsUrl() (security: block non-http URIs)
  - Revert the Cargo.lock version bump (release management concern)
* fix: address P2/P3 review findings
  - Preserve hlsUrl for HLS-only channels in refreshChannelInfo (it was incorrectly clearing the stream URL on every refresh cycle)
  - Replace deprecated .substr() with .substring()
  - Extract duplicated HLS display-name logic into getChannelDisplayName()

Co-authored-by: Qwen-Coder <qwen-coder@alibabacloud.com>
Co-authored-by: Elie Habib <elie.habib@gmail.com>

f06db59720  fix: graceful degradation for seed scripts with missing keys or downed sources (#1045)

- seed-unrest-events: relax validation (ACLED missing + GDELT 404 = crash), console.warn → console.log for non-fatal failures
- seed-natural-events: relax validation, console.error → console.log
- seed-climate-anomalies: relax validation, console.error → console.log
- seed-internet-outages: console.error → console.log for missing key

Railway tags console.warn/error as severity:error, making healthy runs look like crashes in the dashboard.

32d3023815  fix: query 3 VIIRS satellites sequentially for fire detections (#1036)

SNPP and NOAA20 frequently return 0 rows (data gaps). The previous single-source parallel approach hit all 9 regions simultaneously, causing FIRMS rate limits and timeouts on Railway.

Changes:
- Query SNPP + NOAA20 + NOAA21 with deduplication
- Sequential requests with a 200ms delay (avoids rate limiting)
- 30s timeout (was 15s)
- Restore strict validation (length > 0)

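The sequential-with-dedup approach can be sketched as follows; `fetchSource` is a stand-in for a FIRMS area request, and the dedup key is an assumption about what identifies a detection.

```javascript
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Query satellites one at a time with a gap between requests (instead of a
// parallel fan-out), merging and deduplicating detections as we go.
async function fetchAllSatellites(satellites, fetchSource, delayMs = 200) {
  const seen = new Set();
  const merged = [];
  for (const sat of satellites) {
    for (const d of await fetchSource(sat)) {
      const key = `${d.lat}:${d.lon}:${d.acqTime}`;
      if (!seen.has(key)) {
        seen.add(key);
        merged.push(d);
      }
    }
    await sleep(delayMs);
  }
  return merged;
}
```
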
2f2486cc8e  fix: harden seed scripts with 429 rate-limit retry and relaxed validation (#1026)

* fix: allow zero fire detections in seed validation
  FIRMS NRT data has a rolling window — at certain hours, all 9 monitored regions can legitimately return 0 active fire detections. The strict length > 0 validation caused CRASHED status on Railway cron runs during these periods. Structure-only validation is sufficient.
* fix: add rate-limit-aware retry for CoinGecko 429s
  The default withRetry (1s/2s/4s backoff) is too short for CoinGecko rate limits. The new fetchWithRateLimitRetry uses 10s/20s/30s/40s/50s delays with up to 5 attempts specifically for 429 responses.
* fix: add 429 rate-limit retry to all Yahoo and CoinGecko seed scripts
  Yahoo Finance and CoinGecko both return 429 when rate limited, and the default withRetry (1s/2s/4s) is too fast for rate limits. Added per-request 429-specific retry with longer backoff:
  - Yahoo: 5s/10s/15s/20s (4 attempts per symbol)
  - CoinGecko: 10s/20s/30s/40s/50s (5 attempts)
  Scripts updated: seed-etf-flows, seed-gulf-quotes, seed-commodity-quotes, seed-market-quotes, seed-stablecoin-markets.

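A minimal sketch of the 429-specific retry described above, using the CoinGecko-style delay schedule from the commit message; `doFetch` is a stand-in (and the test shortens the delays).

```javascript
// Retry only on HTTP 429, waiting progressively longer between attempts.
// Non-429 responses (success or hard failure) are returned immediately.
async function fetchWithRateLimitRetry(doFetch, delaysMs = [10_000, 20_000, 30_000, 40_000, 50_000]) {
  let res = await doFetch();
  for (const ms of delaysMs) {
    if (res.status !== 429) break;
    await new Promise((resolve) => setTimeout(resolve, ms));
    res = await doFetch();
  }
  return res;
}
```
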
b001d25527  fix(relay): add GPS jamming seed loop to Railway relay (#1016)

The GPS jamming data pipeline had no scheduled seed — fetch-gpsjam.mjs existed as a standalone script but was never wired into the relay's setInterval-based seed system. The Redis key intelligence:gpsjam:v1 was always empty, forcing the edge function to fall back to direct gpsjam.org fetches (without lat/lon pre-computation).

Adds startGpsJamSeedLoop(), which runs every 6 hours:
- Fetches manifest + CSV from gpsjam.org
- Parses H3 hex data with a min-aircraft threshold
- Converts H3 → lat/lon via h3-js (pre-computed for the frontend)
- Classifies regions for conflict-zone tagging
- Writes enriched data to Redis with a 24h TTL

4a8ab3855f  fix: seed-insights digest shape extraction (#1011)

* feat: add seed-first pattern to 15 RPC handlers with Railway seed scripts
  Migrate handlers from direct external API calls to the seed-first pattern: Railway cron seeds Redis → handlers read from Redis → fall back to a live fetch if the seed is stale and the SEED_FALLBACK_* env is enabled.
  Handlers updated: earthquakes, fire-detections, internet-outages, climate-anomalies, unrest-events, cyber-threats, market-quotes, commodity-quotes, crypto-quotes, etf-flows, gulf-quotes, stablecoin-markets, natural-events, displacement-summary, risk-scores.
  Also adds:
  - scripts/_seed-utils.mjs (shared seed framework with atomic publish, distributed locks, retry, freshness metadata)
  - 13 seed scripts for Railway cron
  - api/seed-health.js monitoring endpoint
  - scripts/validate-seed-migration.mjs post-deploy validation
  - Restored multi-source CII in get-risk-scores (8 sources: ACLED, UCDP, outages, climate, cyber, fires, GPS, Iran)
* feat: add seed scripts for market quotes, commodity quotes & airport delays
  New seed scripts:
  - seed-market-quotes.mjs: 28 symbols via Yahoo Finance + Finnhub
  - seed-commodity-quotes.mjs: 6 commodity futures via Yahoo Finance
  - seed-airport-delays.mjs: FAA + NOTAM airport closure data
  Handler changes (seed-first pattern):
  - list-market-quotes.ts: read from seed data before live fetch
  - list-commodity-quotes.ts: read from seed data before live fetch
  - list-airport-delays.ts: seed-first for FAA and NOTAM data
  Other changes:
  - ais-relay.cjs: add DISABLE_RELAY_MARKET_SEED guard for cutover
  - _seed-utils.mjs: add sleep, parseYahooChart, writeExtraKey helpers
  - seed-health.js: monitor 4 new seed domains
  - validate-seed-migration.mjs: add new domains to validation
* fix: extract digest items from category buckets in seed-insights
  The news digest Redis key stores items nested in category buckets ({ categories: { politics: { items: [...] }, ... } }), not as a flat array. The script was checking for digest.items, which is undefined, causing "Digest has no items" errors on every run.

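The seed-first read path described above can be sketched like this; a `Map` stands in for Redis, and the flag name is illustrative of the SEED_FALLBACK_* env gating.

```javascript
// Seed-first read: fresh seed from Redis wins; a live fetch happens only when
// the seed is stale/missing AND the fallback flag is on; a stale seed is still
// preferred over returning nothing when fallback is disabled.
async function readSeedFirst({ cache, key, maxAgeMs, fallbackEnabled, liveFetch, now = Date.now() }) {
  const seeded = cache.get(key);
  if (seeded && now - seeded.fetchedAt <= maxAgeMs) return seeded.data; // fresh seed
  if (fallbackEnabled) return liveFetch();
  return seeded ? seeded.data : null; // stale seed beats nothing
}
```
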
40abcae887  feat: CII Railway seed — pre-compute instability scores from 8 sources (#996)

Adds seedCiiScores() to ais-relay.cjs, running every 10 minutes:
- Reads 7 Redis sources (UCDP, outages, climate, cyber, fires, GPS jam, Iran)
- Calls the ACLED API directly for protests/riots/battles
- Computes simplified CII scores for 20 TIER1 countries
- Writes to risk:scores:sebuf:v1 (TTL 900s) + a stale key (TTL 3600s)

Frontend bootstrap hydration (already on main) consumes these scores for instant CII panel render on page load.

124085edd6  fix: add process.exit(0) to seed scripts for Railway cron compatibility (#999)

Railway marks cron jobs as "failed" when the Node.js process doesn't exit cleanly. The seed scripts relied on natural event-loop drain, but undici's connection pool keeps handles alive, causing Railway to kill the process and mark it as failed.

Changes:
- Add process.exit(0) on success and lock-skip paths in runSeed()
- Fix recordCount for crypto (.quotes) and stablecoin (.stablecoins)
- Add writeExtraKey, sleep, parseYahooChart shared utilities
- Add an extraKeys option to runSeed for bootstrap hydration keys

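The explicit-exit pattern can be sketched as below; `exitFn` is injectable here purely so the behavior can be exercised without killing the process (real scripts would use `process.exit` directly), and the function shape is an assumption.

```javascript
// Exit explicitly on both paths so undici's keep-alive handles can't hold the
// event loop open and make Railway mark the cron run as failed.
async function runSeed(job, exitFn = process.exit) {
  try {
    const { recordCount } = await job();
    console.log(`seeded ${recordCount} records`);
    exitFn(0);
  } catch (err) {
    console.error(err);
    exitFn(1);
  }
}
```
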
80b8071356  feat: server-side AI insights via Railway cron + bootstrap hydration (#1003)

Move the heavy AI insights pipeline (clustering, scoring, LLM brief) from client-side (15-40s per user) to a 5-min Railway cron job. The frontend reads pre-computed insights instantly via bootstrap hydration, with graceful fallback to the existing client-side pipeline.

- Add _clustering.mjs: Jaccard clustering + importance scoring (pure JS)
- Add seed-insights.mjs: Railway cron reads the digest, clusters, calls Groq/OpenRouter for the brief, writes to Redis with LKG preservation
- Register the insights key in the bootstrap.js FAST_KEYS tier
- Add insights-loader.ts: module-level cached bootstrap reader
- Modify InsightsPanel.ts: server-first path (2-step progress) with client fallback (4-step, unchanged behavior)
- Add unit tests for clustering (12) and insights-loader (7)

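Jaccard clustering of headlines, as named above, can be sketched in pure JS; the tokenization and the 0.4 threshold are assumptions, not the repo's values.

```javascript
// Jaccard similarity between two token lists: |A ∩ B| / |A ∪ B|.
function jaccard(a, b) {
  const setA = new Set(a);
  const setB = new Set(b);
  let inter = 0;
  for (const t of setA) if (setB.has(t)) inter += 1;
  return inter / (setA.size + setB.size - inter || 1);
}

// Greedy single-pass clustering: each headline joins the first cluster whose
// vocabulary is similar enough, otherwise it starts a new cluster.
function clusterHeadlines(headlines, threshold = 0.4) {
  const clusters = [];
  for (const h of headlines) {
    const tokens = h.toLowerCase().split(/\W+/).filter(Boolean);
    const home = clusters.find((c) => jaccard(c.tokens, tokens) >= threshold);
    if (home) {
      home.items.push(h);
      home.tokens.push(...tokens); // grow the cluster vocabulary
    } else {
      clusters.push({ tokens, items: [h] });
    }
  }
  return clusters;
}
```
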
42cd258f5a  fix: RSS redirect crash — allowedDomains was renamed but redirect handler not updated (#995)

The RSS_ALLOWED_DOMAINS refactor missed the redirect handler at line 4755, causing "ReferenceError: allowedDomains is not defined" every time an RSS feed returns a 301/302 redirect. This crashes the entire relay process.

898ac7b1c4  perf(rss): route RSS direct to Railway, skip Vercel middleman (#961)

* perf(rss): route RSS direct to Railway, skip Vercel middleman
  Vercel /api/rss-proxy has a 65% error rate (207K failed invocations/12h). Route browser RSS requests directly to Railway (proxy.worldmonitor.app) via Cloudflare CDN, eliminating Vercel as middleman.
  - Add VITE_RSS_DIRECT_TO_RELAY feature flag (default off) for staged rollout
  - Centralize the RSS proxy URL in rssProxyUrl() with desktop/dev/prod routing
  - Make Railway /rss public (skip auth, keep rate limiting with CF-Connecting-IP)
  - Add wildcard *.worldmonitor.app CORS + always emit Vary: Origin on /rss
  - Extract ~290 RSS domains to shared/rss-allowed-domains.cjs (single source of truth)
  - Convert the Railway domain check to a Set for O(1) lookups
  - Remove rss-proxy from KEYED_CLOUD_API_PATTERN (no longer needs an API key header)
  - Add an edge function test for the shared domain-list import
* fix(edge): replace node:module with JSON import for edge-compatible RSS domains
  api/_rss-allowed-domains.js used createRequire from node:module, which is unsupported in Vercel Edge Runtime, breaking all edge functions (including api/gpsjam). Replaced with JSON import attribute syntax that works in both esbuild (Vercel build) and Node.js 22+ (tests). Also fixed a middleware.ts TS18048 error where VARIANT_OG[variant] could be undefined.
* test(edge): add guard against node: built-in imports in api/ files
  Scans ALL api/*.js files (including _ helpers) for node: module imports, which are unsupported in Vercel Edge Runtime. This would have caught the createRequire(node:module) bug before it reached Vercel.
* fix(edge): inline domain array and remove NextResponse reference
  - Replace `import ... with { type: 'json' }` in _rss-allowed-domains.js with an inline array — Vercel esbuild doesn't support import attributes
  - Replace `NextResponse.next()` with a bare `return` in middleware.ts — NextResponse was never imported
* ci(pre-push): add esbuild bundle check and edge function tests
  The pre-push hook now catches Vercel build failures locally:
  - esbuild bundles each api/*.js entrypoint (catches import attribute syntax, missing modules, and other bundler errors)
  - runs the edge function test suite (node: imports, module isolation)

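The `node:` built-in import guard can be sketched as a source-text scan; the regex shape is an assumption, covering both the ESM `from 'node:x'` and CJS `require('node:x')` forms.

```javascript
// Return every node: built-in specifier imported or required in a source file.
// Vercel Edge Runtime supports neither, so any match should fail the test.
function findNodeBuiltinImports(source) {
  const pattern = /(?:from\s+|require\(\s*)['"](node:[^'"]+)['"]/g;
  return [...source.matchAll(pattern)].map((m) => m[1]);
}
```
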
78a14306d9  feat: add seed-first pattern to 15 RPC handlers with Railway seed scripts (#989)

Migrate handlers from direct external API calls to the seed-first pattern: Railway cron seeds Redis → handlers read from Redis → fall back to a live fetch if the seed is stale and the SEED_FALLBACK_* env is enabled.

Handlers updated: earthquakes, fire-detections, internet-outages, climate-anomalies, unrest-events, cyber-threats, market-quotes, commodity-quotes, crypto-quotes, etf-flows, gulf-quotes, stablecoin-markets, natural-events, displacement-summary, risk-scores.

Also adds:
- scripts/_seed-utils.mjs (shared seed framework with atomic publish, distributed locks, retry, freshness metadata)
- 13 seed scripts for Railway cron
- api/seed-health.js monitoring endpoint
- scripts/validate-seed-migration.mjs post-deploy validation
- Restored multi-source CII in get-risk-scores (8 sources: ACLED, UCDP, outages, climate, cyber, fires, GPS, Iran)

c7942b800a  feat: Railway CII seed + bootstrap hydration for instant panel render (#984)

* fix: add circuit breaker + bootstrap to CII risk scores
  Same pattern as theater posture (#948): replace the fragile in-memory cache + manual persistent-cache with a circuit breaker (SWR, IndexedDB, cooldown) and bootstrap hydration. Eliminates the learning-mode delay on cold start and survives RPC failures without blanking the panel.
* fix: add localStorage sync prime for CII risk scores
  getCachedScores() is called synchronously by country-intel.ts as a fallback during learning mode. Without localStorage priming, the breaker's async IndexedDB hydration hasn't run yet and returns null.
  - Add a shape validator (isValidCiiEntry) for untrusted localStorage data
  - Add loadFromStorage/saveToStorage with a 24h staleness ceiling
  - Prime the breaker synchronously at module load from localStorage
  - Skip priming for empty cii arrays to avoid the cached-empty trap
  - Save to localStorage on both bootstrap and RPC success paths
* feat: Railway CII seed + bootstrap hydration for instant panel render
  - Add an 8-source CII seed to Railway (ACLED, UCDP, outages, climate, cyber, fires, GPS, Iran strikes)
  - Neuter the Vercel handler to read-only (returns the Railway-seeded cache, never recomputes)
  - Register riskScores in the bootstrap FAST tier for CDN-cached delivery
  - Add early CII hydration in data-loader before intelligence signals
  - Add CIIPanel.renderFromCached() for instant render from bootstrap data
  - Refactor cached-risk-scores.ts: circuit breaker + localStorage sync prime + bootstrap hydration
  - Progressive enhancement: cached render → full 18-source local recompute (no spinner)
* fix: remove duplicate riskScores key in BOOTSTRAP_TIERS after merge

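A minimal circuit-breaker sketch in the spirit of the pattern used above; the thresholds, injected clock, and last-known-good fallback are assumptions, not the repo's implementation.

```javascript
// After maxFailures consecutive errors the breaker "opens" and serves the
// last-known-good value for cooldownMs instead of calling out again.
class CircuitBreaker {
  constructor({ maxFailures = 2, cooldownMs = 5 * 60_000, now = Date.now } = {}) {
    Object.assign(this, { maxFailures, cooldownMs, now, failures: 0, openedAt: null, lastGood: null });
  }

  async execute(fn) {
    if (this.openedAt !== null && this.now() - this.openedAt < this.cooldownMs) {
      return this.lastGood; // open: serve cached value, skip the call entirely
    }
    try {
      const value = await fn();
      this.failures = 0;
      this.openedAt = null;
      this.lastGood = value;
      return value;
    } catch {
      this.failures += 1;
      if (this.failures >= this.maxFailures) this.openedAt = this.now();
      return this.lastGood; // degrade to last-known-good on failure
    }
  }
}
```
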
5709ed45a2  fix: remove smartraveller.gov.au feeds causing 503 errors (#982)

The AU Smartraveller RSS feeds have been consistently returning 503 from both Vercel edge and Railway relay. Remove all references from security-advisories feeds, rss-proxy allowed domains, and the relay allowlist.

9b46bf6f73  perf(positive-events): move GDELT fetch to Railway seed, serve from Redis cache (#957)

The GDELT GEO API had a 99.9% timeout rate on Vercel Edge (746 invocations; ~31s of sequential calls vs the 25s edge limit). Move fetching to a Railway cron (15min), write to Redis, and have Vercel serve read-only from cache with bootstrap hydration.

- Add startPositiveEventsSeedLoop() to ais-relay.cjs (3 queries, dedup, classify)
- Rewrite the handler to a cache-read-only pattern (matches UCDP)
- Register the bootstrap key in FAST_KEYS for instant first render
- Wire getHydratedData() in data-loader before the RPC fallback

a80b462306  perf(oref): reduce proxy bandwidth with gzip + local file persistence (#928)

Add --compressed to all OREF curl requests (~90% bandwidth reduction). Introduce a 3-tier bootstrap (local file on the Railway volume → Redis → upstream) so restarts never need to re-fetch the full AlertsHistory.json through the paid residential proxy. The local file is kept in sync after every poll cycle and upstream bootstrap. The OREF_DATA_DIR env var opts in to local persistence.

6ec076c8d3  test(circuit-breakers): harden regression tests with try/finally and existence guards (#911)

- Wrap all 4 behavioral it() blocks in try/finally so clearAllCircuitBreakers() always runs on assertion failure (P2 — leaked breaker state between tests)
- Add assert.ok(fnStart !== -1) guards for fetchHapiSummary, fetchPositiveGdeltArticles, and fetchGdeltArticles so renames produce a clear diagnostic (P2 — silent false positives)
- Fix a misleading comment in seed-wb-indicators.mjs: WLD/EAS are 3-char codes and aren't filtered by iso3.length !== 3 (P3)
- Add timeout-minutes: 10 and permissions: contents: read to the seed GHA workflow (P3)

07aca2c396  feat(conflict): seed 100 Iran events + add 20 geocoding locations (#899)

- Import latest LiveUAMap Iran events (100 events, March 2026)
- Add missing LOCATION_COORDS: Khomein, Markazi, Kashan, Qom, Ahvaz, Dezful, Khorramshahr, Ilam, Laar, Kermanshah, Fujairah, Hermel, Amman, Jeddah, Dhahran, Al Minhad, Galilee, Evin
- Bump the cache-bust param _v=8 → _v=9 to bypass stale CDN/IndexedDB

a5b2af8e11  feat(tech-readiness): bootstrap hydration via Railway seed + bootstrap key (#889)

* feat(tech-readiness): bootstrap hydration via Railway seed + bootstrap key
  Add pre-computed TechReadiness rankings to the bootstrap payload so the panel renders immediately on first load instead of waiting for 4 slow World Bank RPC calls (which can trip circuit breakers on cold starts, causing a persistent "No data available" until the 5-min cooldown expires).
  - scripts/seed-wb-indicators.mjs: new Railway seed script that fetches IT.NET.USER.ZS / IT.CEL.SETS.P2 / IT.NET.BBND.P2 / GB.XPD.RSDV.GD.ZS for all countries, computes rankings (same weights as the frontend getTechReadinessRankings), and writes economic:worldbank-techreadiness:v1 to Redis with a 7-day TTL
  - api/bootstrap.js: register the techReadiness key in BOOTSTRAP_CACHE_KEYS and SLOW_KEYS (s-maxage=3600, appropriate for annual WB data)
  - src/services/economic/index.ts: fast path in getTechReadinessRankings() returns getHydratedData('techReadiness') immediately on first page load; country-specific comparison requests still use live RPCs
* ci: add weekly GHA workflow for WB tech readiness seed

40be228713  fix(cyber): seed cyber threats on Railway + fix Cloudflare 500 errors (#880)

Railway seeding:
- Add a full cyber threats seed loop in scripts/ais-relay.cjs (5 IOC sources: Feodo, URLhaus, C2IntelFeeds, AlienVault OTX, AbuseIPDB)
- GeoIP hydration via ipinfo.io → freeipapi.com with a FIFO-capped cache (2048)
- Write both the RPC cache key (cyber:threats:v2:0:::) and the bootstrap key (cyber:threats-bootstrap:v2) with a 3h TTL
- Register cyberThreats in api/bootstrap.js BOOTSTRAP_CACHE_KEYS + SLOW_KEYS

Cloudflare 500 fixes:
- error-mapper.ts: map SyntaxError → 400 (req.json() on a malformed POST body)
- summarize-article.ts: reduce LLM timeout 30s → 25s (it was equal to the edge budget)
- intelligence/_shared.ts: reduce UPSTREAM_TIMEOUT_MS 30_000 → 25_000
- cyber/_shared.ts: reduce source/geo timeouts and concurrency to fit the edge budget

e7f5a5b8e5  fix(market): add bootstrap hydration for markets & commodities panels (#867)

The markets and commodities panels showed "Failed to load" because they relied entirely on the listMarketQuotes RPC, while sectors worked via bootstrap hydration. Both panels also shared a single circuit breaker, so 2 transient failures across both calls triggered a 5-minute cooldown.

- Add bootstrap Redis keys (market:stocks-bootstrap:v1 and market:commodities-bootstrap:v1) to the Railway seed and bootstrap API
- Hydrate markets/commodities from bootstrap on page load (same pattern as sectors)
- Split the circuit breaker: separate stockBreaker and commodityBreaker so commodity failures don't kill market retries, and vice versa

e6ab1883ca  fix(market): parse comma-separated query params and align Railway cache keys (#856)

* fix(market): parse comma-separated query params and align Railway cache keys
  Two bugs caused all market panels to show "Failed to load":
  1. Sebuf codegen assigns `params.get("symbols")` (a string) to fields typed as `string[]`. At runtime handlers receive "AAPL,AMZN,..." instead of ["AAPL","AMZN",...]. This causes:
     - `[..."string"]` spreading into characters → garbage Redis cache keys
     - `symbols.filter()` → TypeError (strings lack .filter())
     - Handlers fall through to catch → return an empty `{ quotes: [] }`
  2. The frontend routes commodities and sectors through the `listMarketQuotes` RPC (via `fetchMultipleStocks`), constructing Redis keys like `market:quotes:v1:^VIX,CL=F,...`. But Railway seeds wrote to `market:commodities:v1:...` and `market:sectors:v1` — different key prefixes → permanent cache miss → fallback to Yahoo from a Vercel IP → 429 rate limit → empty data.
  Fix:
  - Add a `parseStringArray()` helper that normalizes string|string[] → string[]
  - Apply it to all market handlers (quotes, commodities, crypto, stablecoins)
  - The Railway seed now also writes under `market:quotes:v1:` keys matching what the Vercel handler constructs for commodity and sector symbols
* fix(economic): add 20s client-side timeout to all RPC calls
  All EconomicServiceClient calls (FRED, World Bank, EIA, BIS) lacked AbortSignal timeouts. If Vercel hangs or is slow, the circuit breaker's execute() awaits forever, keeping panels stuck in a "Fetching" state. Add AbortSignal.timeout(20_000) to every client call so the circuit breaker can catch the AbortError and fall through to cached/default data.

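The `parseStringArray()` helper described above is small enough to sketch in full; the exact filtering rules are an assumption.

```javascript
// Normalize string | string[] | undefined into a clean string[]:
// "AAPL, AMZN" and ["AAPL", "AMZN"] both come out as ['AAPL', 'AMZN'].
function parseStringArray(value) {
  if (Array.isArray(value)) {
    return value.filter((s) => typeof s === 'string' && s.length > 0);
  }
  if (typeof value === 'string') {
    return value.split(',').map((s) => s.trim()).filter((s) => s.length > 0);
  }
  return [];
}
```
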
6c4901f5da  fix(aviation): move AviationStack fetching to Railway relay, reduce to 40 airports (#858)

AviationStack API calls cost ~$100/day because each cache miss triggered 114 individual API calls from Vercel Edge (where isolates don't share in-flight dedup). This moves all AviationStack fetching to the Railway relay (like market data, OREF, UCDP) and reduces coverage to 40 top international hubs (down from 114).

- Add an AVIATIONSTACK_AIRPORTS constant (40 curated IATA codes)
- Add startAviationSeedLoop() to ais-relay.cjs (2h interval, 4h TTL)
- Make the Vercel handler cache-read-only (getCachedJson + simulation fallback)
- Delete the Vercel cron (warm-aviation-cache.ts) and remove it from vercel.json

411b015e0b  fix(market+feeds): Railway market data cron + complete missing tech feed categories (#850)

* fix(tech): add missing dev/ipo/producthunt feed categories + market debug logging
  The Developer, IPO & SPAC, and Product Hunt panels showed UNAVAILABLE on tech.worldmonitor.app because these categories had no server-side feed definitions in _feeds.ts. The client fell back to per-feed RSS proxy mode gated behind a disabled feature flag, resulting in empty panels.
  - Add dev (4 feeds), ipo (2 feeds), and producthunt (1 feed) to server-side VARIANT_FEEDS.tech so the digest endpoint includes them
  - Add ipo and producthunt to the client-side tech variant FEEDS so loadNews() iterates and renders these categories from the digest
  - Add console.warn logging to the Finnhub, Yahoo direct, and Yahoo relay failure paths in _shared.ts (all errors were silently swallowed, making market data debugging impossible)
* fix(market+feeds): add Railway market data cron + missing hardware/outages feed categories
  Market data: Yahoo Finance returns HTTP 429 from Vercel edge IPs; the Railway relay has a different IP that Yahoo does not rate-limit. Add a periodic seed job (5min interval) that fetches quotes from Finnhub/Yahoo and writes to Redis, so Vercel handlers serve from cache via cachedFetchJson.
  - seedMarketQuotes: 25 stocks via Finnhub + 3 indices via Yahoo (staggered)
  - seedCommodityQuotes: 6 commodities via Yahoo (staggered 150ms)
  - seedSectorSummary: 12 sector ETFs via Finnhub, Yahoo fallback
  - Redis keys match the Vercel handler construction exactly (verified)
  - TTL 1800s survives 5 missed seed cycles
  - CHROME_UA hoisted to top level (it was defined after the market code)
  Feed categories: hardware and outages were missing from server-side VARIANT_FEEDS.tech, causing UNAVAILABLE panels on tech.worldmonitor.app.

|
|
67cdf009fd |
fix(relay): add exponential backoff for failing RSS feeds (#853)
RSS feeds that fail (socket hang up, timeout, non-2xx) were retried every 60s indefinitely, hammering broken upstreams. Adds per-feed exponential backoff: 1min → 2min → 4min → 8min → 15min cap.

- Separate rssBackoffUntil/rssFailureCount maps (no response cache mutation)
- Stale successful data served during backoff (BACKOFF-STALE)
- 503 + Retry-After header when no stale data available
- Failure count preserved across backoff expiry for fast re-escalation
- Reset on success (2xx or 304 revalidation) |
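The schedule above (doubling from 1 minute up to a 15-minute cap, count preserved across expiry, reset on success) can be sketched as follows. The map names mirror the commit message; the functions themselves are illustrative, not the relay's actual code:

```javascript
// Sketch of per-feed exponential backoff state, assuming these helper names.
const BACKOFF_BASE_MS = 60_000;      // 1 minute
const BACKOFF_CAP_MS = 15 * 60_000;  // 15 minute ceiling

const rssFailureCount = new Map();   // url -> consecutive failure count
const rssBackoffUntil = new Map();   // url -> timestamp before which the feed is skipped

function backoffDelayMs(failures) {
  // failures=1 -> 1min, 2 -> 2min, 3 -> 4min, 4 -> 8min, 5+ -> 15min cap
  return Math.min(BACKOFF_BASE_MS * 2 ** (failures - 1), BACKOFF_CAP_MS);
}

function recordFailure(url, now = Date.now()) {
  const failures = (rssFailureCount.get(url) ?? 0) + 1;
  rssFailureCount.set(url, failures);       // preserved across expiry for fast re-escalation
  rssBackoffUntil.set(url, now + backoffDelayMs(failures));
}

function recordSuccess(url) {
  rssFailureCount.delete(url);              // reset on 2xx or 304 revalidation
  rssBackoffUntil.delete(url);
}

function inBackoff(url, now = Date.now()) {
  return (rssBackoffUntil.get(url) ?? 0) > now;
}
```

Note that the fifth consecutive failure already hits the cap (16min uncapped, clamped to 15).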
||
|
|
f1faf07144 |
fix(market+tech): Yahoo relay fallback + RSS digest relay for blocked feeds (#835)
* fix(market): route Yahoo Finance through Railway relay to bypass 429 rate limits

Yahoo Finance returns 429 from all Vercel edge IPs, causing empty market data across MARKETS, COMMODITIES, and HEATMAP panels. Empty rate-limited responses were also cached at full 8-min TTL, compounding the outage.

- Add /yahoo-chart proxy endpoint to Railway relay with 5-min in-memory cache
- Add relay fallback to fetchYahooQuote(): direct Yahoo → relay → null
- Return null for all empty quote results (120s negative cache vs 480s)

* fix: remove unused yahooRateLimited variable

* fix(tech-panels): route RSS digest through Railway relay when direct fetch fails

Server-side digest builder fetches RSS feeds directly from Vercel edge, but many tech sites (a16z, Stratechery, EU Startups, etc.) block Vercel IPs. This caused vcblogs, regionalStartups, unicorns, accelerators, and policy categories to return 0 items → UNAVAILABLE panels.

Add Railway relay fallback to fetchAndParseRss(): direct fetch → on failure → relay /rss proxy → parse. Same pattern as Yahoo chart fix. |
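The fallback chain (direct Yahoo → relay → null, with null signalling the short negative-cache TTL) can be sketched with injected fetchers. This is illustrative; the real logic lives inside fetchYahooQuote():

```javascript
// Sketch of the direct -> relay -> null chain; fetchers are injected stand-ins.
async function fetchQuoteWithFallback(symbol, fetchDirect, fetchViaRelay) {
  try {
    const direct = await fetchDirect(symbol);
    if (direct) return { quote: direct, source: 'direct' };
  } catch {
    // fall through to the relay (e.g. HTTP 429 from a Vercel edge IP)
  }
  try {
    const relayed = await fetchViaRelay(symbol);
    if (relayed) return { quote: relayed, source: 'relay' };
  } catch {
    // fall through to null
  }
  return null; // caller caches this for 120s instead of the full 480s TTL
}
```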
||
|
|
37f07a6af2 |
fix(prod): CORS fallback, rate-limit bump, RSS proxy allowlist (#814)
- Add wildcard CORS headers in vercel.json for /api/* routes so Vercel infra 500s (which bypass edge function code) still include CORS headers
- Bump rate limit from 300 to 600 req/60s in both rate-limit files to accommodate dashboard init burst (~30-40 parallel requests)
- Add smartraveller.gov.au (bare + www) to Railway relay RSS allowlist
- Add redirect hostname validation in fetchWithRedirects to prevent SSRF via open redirects on allowed domains |
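The redirect-validation idea is: before following a 3xx Location, re-check the target hostname against the same allowlist used for the original URL, so an open redirect on an allowed domain cannot reach arbitrary hosts. A minimal sketch, assuming a hypothetical `ALLOWED_HOSTS` set (the real relay's allowlist is larger):

```javascript
// Sketch of redirect hostname validation; the allowlist here is illustrative.
const ALLOWED_HOSTS = new Set(['smartraveller.gov.au', 'www.smartraveller.gov.au']);

function isAllowedRedirect(locationHeader, baseUrl) {
  let target;
  try {
    target = new URL(locationHeader, baseUrl); // resolves relative Location values
  } catch {
    return false;                              // unparseable -> refuse to follow
  }
  if (target.protocol !== 'https:' && target.protocol !== 'http:') return false;
  // An open redirect on an allowed domain must not escape the allowlist.
  return ALLOWED_HOSTS.has(target.hostname);
}
```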
||
|
|
9c5ad83651 |
feat(conflict): seed 100 Iran war events and expand geocoder (#792)
Add 26 new locations to seed script geocoder (Beersheba, Akrotiri, Bandar Abbas, Kerman, Natanz, Beirut, Baalbek, Ras Tanura, Ras Laffan, Quneitra, etc.) and bump CDN cache-bust _v=7 → _v=8. |
||
|
|
392349ee27 |
fix(relay): deduplicate UCDP constants crashing Railway container (#766)
PR #760 added a second UCDP implementation block (HTTP relay handler) that redeclared const UCDP_PAGE_SIZE, UCDP_VIOLENCE_TYPE_MAP, and functions ucdpFetchPage/ucdpDiscoverVersion already declared by the Redis seeder block — causing SyntaxError on startup and crash-loop. Rename relay-specific identifiers with RELAY prefix; shared constants (UCDP_PAGE_SIZE, UCDP_TRAILING_WINDOW_MS) are reused from block 1. |
||
|
|
b423995363 |
feat(conflict): wire UCDP (#760)
* feat(conflict): wire UCDP API access token across full stack

UCDP API now requires an `x-ucdp-access-token` header. Renames the stub `UC_DP_KEY` to `UCDP_ACCESS_TOKEN` (matching ACLED convention) and wires it through Rust keychain, sidecar allowlist + verification, handler fetch headers, feature toggles, and desktop settings UI.

- Rename UC_DP_KEY → UCDP_ACCESS_TOKEN in type system and labels
- Add ucdpConflicts feature toggle with required secret
- Add UCDP_ACCESS_TOKEN to Rust SUPPORTED_SECRET_KEYS (24→25)
- Add sidecar ALLOWED_ENV_KEYS entry + validation with dynamic GED version probing
- Handler sends x-ucdp-access-token header when token is present
- UC_DP_KEY fallback in handler for one-release migration window
- Update .env.example, desktop-readiness, and docs

* feat(conflict): pre-fetch UCDP events via Railway cron + Redis cache

Replace the 228-line edge handler that fetched UCDP GED API on every request with a thin Redis reader. The heavy fetch logic (version discovery, paginated backward fetch, 1-year trailing window filter) now runs as a setInterval loop in the Railway relay (ais-relay.cjs) every 6 hours, writing to Redis key conflict:ucdp-events:v1.

Changes:
- Add UCDP seed loop to ais-relay.cjs (6h interval, 6 pages, 2K cap)
- Rewrite list-ucdp-events.ts as thin Redis reader (35 lines)
- Add conflict:ucdp-events:v1 to bootstrap batch keys
- Protect key from cache-purge via durable data prefix
- Add manual-only seed-ucdp-events workflow + standalone script
- Rename panel "UCDP Events" → "Armed Conflict Events" in locale
- Add 24h TTL + 25h staleness check as safety nets |
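A sketch of what such a "thin Redis reader" might look like. The key name matches the commit, while `redisGet`, the payload shape (`seededAt`), and the return shape are assumptions for illustration:

```javascript
// Sketch of a thin reader over the seeded key with the 25h staleness safety net.
const UCDP_KEY = 'conflict:ucdp-events:v1';
const MAX_AGE_MS = 25 * 60 * 60 * 1000; // 25h guard over the 24h TTL

async function readUcdpEvents(redisGet, now = Date.now()) {
  const raw = await redisGet(UCDP_KEY);
  if (!raw) return { events: [], stale: true };       // seed loop hasn't run yet
  const payload = JSON.parse(raw);
  const stale = now - (payload.seededAt ?? 0) > MAX_AGE_MS;
  return { events: stale ? [] : payload.events, stale };
}
```

The handler does no upstream fetching at all; if the relay's 6-hour seed loop stops, the staleness check keeps the endpoint from serving day-old data indefinitely.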
||
|
|
16673d7110 | fix(desktop-package): detect linux node target from host arch (#742) | ||
|
|
b279e881a2 |
feat(rag): worker-side vector store with opt-in Headline Memory (#675)
* Add Security Advisories panel with government travel alerts (#460) * feat: add Security Advisories panel with government travel advisory feeds Adds a new panel aggregating travel/security advisories from official government foreign affairs agencies (US State Dept, AU DFAT Smartraveller, UK FCDO, NZ MFAT). Advisories are categorized by severity level (Do Not Travel, Reconsider, Caution, Normal) with filter tabs by source country. Includes summary counts, auto-refresh, and persistent caching via the existing data-freshness system. * chore: update package-lock.json * fix: event delegation, localization, and cleanup for SecurityAdvisories panel P1 fixes: - Use event delegation on this.content (bound once in constructor) instead of direct addEventListener after each innerHTML replacement — prevents memory leaks and stale listener issues on re-render - Use setContent() consistently instead of mixing with this.content.innerHTML - Add securityAdvisories translations to all 16 non-English locale files (panels name, component strings, common.all key) - Revert unrelated package-lock.json version bump P2 fixes: - Deduplicate loadSecurityAdvisories — loadIntelligenceData now calls the shared method instead of inlining duplicate fetch+set logic - Add Accept header to fetch calls for better content negotiation * feat(advisories): add US embassy alerts, CDC, ECDC, and WHO health feeds Adds 21 new advisory RSS feeds: - 13 US Embassy per-country security alerts (TH, AE, DE, UA, MX, IN, PK, CO, PL, BD, IT, DO, MM) - CDC Travel Notices - 5 ECDC feeds (epidemiological, threats, risk assessments, avian flu, publications) - 2 WHO feeds (global news, Africa emergencies) Panel gains a Health filter tab for CDC/ECDC/WHO sources. All new domains added to RSS proxy allowlist. i18n "health" key added across all 17 locales. * feat(cache): add negative-result caching to cachedFetchJson (#466) When upstream APIs return errors (HTTP 403, 429, timeout), fetchers return null. 
Previously null results were not cached, causing repeated request storms against broken APIs every refresh cycle. Now caches a sentinel value ('__WM_NEG__') with a short 2-minute TTL on null results. Subsequent requests within that window get null immediately without hitting upstream. Thrown errors (transient) skip sentinel caching and retry immediately. Also filters sentinels from getCachedJsonBatch pipeline reads and fixes theater posture coalescing test (expected 2 OpenSky fetches for 2 theater query regions, not 1). * feat: convert 52 API endpoints from POST to GET for edge caching (#468) * feat: convert 52 API endpoints from POST to GET for edge caching Convert all cacheable sebuf RPC endpoints to HTTP GET with query/path parameters, enabling CDN edge caching to reduce costs. Flatten nested request types (TimeRange, PaginationRequest, BoundingBox) into scalar query params. Add path params for resource lookups (GetFredSeries, GetHumanitarianSummary, GetCountryStockIndex, GetCountryIntelBrief, GetAircraftDetails). Rewrite router with hybrid static/dynamic matching for path param support. Kept as POST: SummarizeArticle, ClassifyEvent, RecordBaselineSnapshot, GetAircraftDetailsBatch, RegisterInterest. Generated with sebuf v0.9.0 (protoc-gen-ts-client, protoc-gen-ts-server). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: add rate_limited field to market response protos The rateLimited field was hand-patched into generated files on main but never declared in the proto definitions. Regenerating wiped it out, breaking the build. Now properly defined in both ListEtfFlowsResponse and ListMarketQuotesResponse protos. 
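The sentinel pattern from the negative-caching commit might look roughly like this. The sentinel value and 2-minute TTL come from the message; the cache interface is an illustrative stand-in for the real Redis helpers:

```javascript
// Sketch of negative-result caching: a sentinel marks "upstream returned nothing".
const NEG_SENTINEL = '__WM_NEG__';
const NEG_TTL_S = 120; // 2 minutes, vs. the normal (longer) positive TTL

async function cachedFetchJson(key, ttlS, fetcher, cache) {
  const hit = await cache.get(key);
  if (hit === NEG_SENTINEL) return null;       // recent known-bad: skip upstream entirely
  if (hit !== undefined && hit !== null) return JSON.parse(hit);

  // Thrown errors propagate uncached: transient failures retry on the next call.
  const fresh = await fetcher();
  if (fresh === null) {
    await cache.set(key, NEG_SENTINEL, NEG_TTL_S);
    return null;
  }
  await cache.set(key, JSON.stringify(fresh), ttlS);
  return fresh;
}
```

Batch readers must filter the sentinel out, which is why the commit also patches getCachedJsonBatch.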
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * chore: remove accidentally committed .planning files Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * feat: add Cloudflare edge caching infrastructure for api.worldmonitor.app (#471) Route web production RPC traffic through api.worldmonitor.app via fetch interceptor (installWebApiRedirect). Add default Cache-Control headers (s-maxage=300, stale-while-revalidate=60) on GET 200 responses, with no-store override for real-time endpoints (vessel snapshot). Update CORS to allow GET method. Skip Vercel bot middleware for API subdomain using hostname check (non-spoofable, replacing CF-Ray header approach). Update desktop cloud fallback to route through api.worldmonitor.app. * fix(beta): eagerly load T5-small model when beta mode is enabled BETA_MODE now couples the badge AND model loading — the summarization-beta model starts loading on startup instead of waiting for the first summarization call. * fix: move 5 path-param endpoints to query params for Vercel routing (#472) Vercel's `api/[domain]/v1/[rpc].ts` captures one dynamic segment. Path params like `/get-humanitarian-summary/SA` add an extra segment that has no matching route file, causing 404 on both OPTIONS preflight and direct requests. These endpoints were broken in production. 
Changes: - Remove `{param}` from 5 service.proto HTTP paths - Add `(sebuf.http.query)` annotations to request message fields - Update generated client/server code to use URLSearchParams - Update OpenAPI specs (YAML + JSON) to declare query params - Add early-return guards in 4 handlers for missing required params - Add happy.worldmonitor.app to runtime.ts redirect hosts Affected endpoints: - GET /api/conflict/v1/get-humanitarian-summary?country_code=SA - GET /api/economic/v1/get-fred-series?series_id=T10Y2Y&limit=120 - GET /api/market/v1/get-country-stock-index?country_code=US - GET /api/intelligence/v1/get-country-intel-brief?country_code=US - GET /api/military/v1/get-aircraft-details?icao24=a12345 * fix(security-advisories): route feeds through RSS proxy to avoid CORS blocks (#473) - Advisory feeds were fetched directly from the browser, hitting CORS on all 21 feeds (US State Dept, AU Smartraveller, US Embassies, ECDC, CDC, WHO). Route through /api/rss-proxy on web, keep proxyUrl for desktop. - Fix double slash in ECDC Avian Influenza URL (323//feed → 323/feed) - Add feeds.news24.com to RSS proxy allowlist (was returning 403) * feat(cache): tiered edge Cache-Control aligned to upstream TTLs (#474) * fix: move 5 path-param endpoints to query params for Vercel routing Vercel's `api/[domain]/v1/[rpc].ts` captures one dynamic segment. Path params like `/get-humanitarian-summary/SA` add an extra segment that has no matching route file, causing 404 on both OPTIONS preflight and direct requests. These endpoints were broken in production. 
Changes: - Remove `{param}` from 5 service.proto HTTP paths - Add `(sebuf.http.query)` annotations to request message fields - Update generated client/server code to use URLSearchParams - Update OpenAPI specs (YAML + JSON) to declare query params - Add early-return guards in 4 handlers for missing required params - Add happy.worldmonitor.app to runtime.ts redirect hosts Affected endpoints: - GET /api/conflict/v1/get-humanitarian-summary?country_code=SA - GET /api/economic/v1/get-fred-series?series_id=T10Y2Y&limit=120 - GET /api/market/v1/get-country-stock-index?country_code=US - GET /api/intelligence/v1/get-country-intel-brief?country_code=US - GET /api/military/v1/get-aircraft-details?icao24=a12345 * feat(cache): add tiered edge Cache-Control aligned to upstream TTLs Replace flat s-maxage=300 with 5 tiers (fast/medium/slow/static/no-store) mapped per-endpoint to respect upstream Redis TTLs. Adds stale-if-error resilience headers and X-No-Cache plumbing for future degraded responses. X-Cache-Tier debug header gated behind ?_debug query param. * fix(tech): use rss() for CISA feed, drop build from pre-push hook (#475) - CISA Advisories used dead rss.worldmonitor.app domain (404), switch to rss() helper - Remove Vite build from pre-push hook (tsc already catches errors) * fix(desktop): enable click-to-play YouTube embeds + CISA feed fixes (#476) * fix(tech): use rss() for CISA feed, drop build from pre-push hook - CISA Advisories used dead rss.worldmonitor.app domain (404), switch to rss() helper - Remove Vite build from pre-push hook (tsc already catches errors) * fix(desktop): enable click-to-play for YouTube embeds in WKWebView WKWebView blocks programmatic autoplay in cross-origin iframes regardless of allow attributes, Permissions-Policy, mute-first retries, or secure context. Documented all 10 approaches tested in docs/internal/. 
Changes: - Switch sidecar embed origin from 127.0.0.1 to localhost (secure context) - Add MutationObserver + retry chain as best-effort autoplay attempts - Use postMessage('*') to fix tauri://localhost cross-origin messaging - Make sidecar play overlay non-interactive (pointer-events:none) - Fix .webcam-iframe pointer-events:none blocking clicks in grid view - Add expand button to grid cells for switching to single view on desktop - Add http://localhost:* to CSP frame-src in index.html and tauri.conf.json * fix(gateway): convert stale POST requests to GET for backwards compat (#477) Stale cached client bundles still send POST to endpoints converted to GET in PR #468, causing 404s. The gateway now parses the POST JSON body into query params and retries the match as GET. * feat(proxy): add Cloudflare edge caching for proxy.worldmonitor.app (#478) Add CDN-Cache-Control headers to all proxy endpoints so Cloudflare can cache responses at the edge independently of browser Cache-Control: - RSS: 600s edge + stale-while-revalidate=300 (browser: 300s) - UCDP: 3600s edge (matches browser) - OpenSky: 15s edge (browser: 30s) for fresher flight data - WorldBank: 1800s/86400s edge (matches browser) - Polymarket: 120s edge (matches browser) - Telegram: 10s edge (matches browser) - AIS snapshot: 2s edge (matches browser) Also fixes: - Vary header merging: sendCompressed/sendPreGzipped now merge existing Vary: Origin instead of overwriting, preventing cross-origin cache poisoning at the edge - Stale fallback responses (OpenSky, WorldBank, Polymarket, RSS) now set Cache-Control: no-store + CDN-Cache-Control: no-store to prevent edge caching of degraded responses - All no-cache branches get CDN-Cache-Control: no-store - /opensky-reset gets no-store (state-changing endpoint) * fix(sentry): add noise filters for 4 unresolved issues (#479) - Tighten AbortError filter to match "AbortError: The operation was aborted" - Filter "The user aborted a request" (normal navigation cancellation) - 
Filter UltraViewer service worker injection errors (/uv/service/) - Filter Huawei WebView __isInQueue__ injection * feat: configurable VITE_WS_API_URL + harden POST→GET shim (#480) * fix(gateway): harden POST→GET shim with scalar guard and size limit - Only convert string/number/boolean values to query params (skip objects, nested arrays, __proto__ etc.) to prevent prototype pollution vectors - Skip body parsing for Content-Length > 1MB to avoid memory pressure * feat: make API base URL configurable via VITE_WS_API_URL Replace hardcoded api.worldmonitor.app with VITE_WS_API_URL env var. When empty, installWebApiRedirect() is skipped entirely — relative /api/* calls stay on the same domain (local installs). When set, browser fetch is redirected to that URL. Also adds VITE_WS_API_URL and VITE_WS_RELAY_URL hostnames to APP_HOSTS allowlist dynamically. * fix(analytics): use greedy regex in PostHog ingest rewrites (#481) Vercel's :path* wildcard doesn't match trailing slashes that PostHog SDK appends (e.g. /ingest/s/?compression=...), causing 404s. Switch to :path(.*) which matches all path segments including trailing slashes. Ref: PostHog/posthog#17596 * perf(proxy): increase AIS snapshot edge TTL from 2s to 10s (#482) With 20k requests/30min (60% of proxy traffic) and per-PoP caching, a 2s edge TTL expires before the next request from the same PoP arrives, resulting in near-zero cache hits. 10s allows same-PoP dedup while keeping browser TTL at 2s for fresh vessel positions. * fix(markets): commodities panel showing stocks instead of commodities (#483) The shared circuit breaker (cacheTtlMs: 0) cached the stocks response, then the stale-while-revalidate path returned that cached stocks data for the subsequent commodities fetch. Skip SWR when caching is disabled. 
* feat(gateway): complete edge cache tier coverage + degraded-response policy (#484) - Add 11 missing GET routes to RPC_CACHE_TIER map (8 slow, 3 medium) - Add response-headers side-channel (WeakMap) so handlers can signal X-No-Cache without codegen changes; wire into military-flights and positive-geo-events handlers on upstream failure - Add env-controlled per-endpoint tier override (CACHE_TIER_OVERRIDE_*) for incident response rollback - Add VITE_WS_API_URL hostname allowlist (*.worldmonitor.app + localhost) - Fix fetch.bind(globalThis) in positive-events-geo.ts (deferred lambda) - Add CI test asserting every generated GET route has an explicit cache tier entry (prevents silent default-tier drift) * chore: bump version to 2.5.20 + changelog Covers PRs #452–#484: Cloudflare edge caching, commodities SWR fix, security advisories panel, settings redesign, 52 POST→GET migrations. * fix(rss): remove stale indianewsnetwork.com from proxy allowlist (#486) Feed has no <pubDate> fields and latest content is from April 2022. Not referenced in any feed config — only in the proxy domain allowlist. * feat(i18n): add Korean (한국어) localization (#487) - Add ko.json with all 1606 translation keys matching en.json structure - Register 'ko' in SUPPORTED_LANGUAGES, LANGUAGES display array, and locale map - Korean appears as 🇰🇷 한국어 in the language dropdown * feat: add Polish tv livestreams (#488) * feat(rss): add Axios (api.axios.com/feed) as US news source (#494) Add api.axios.com to proxy allowlist and CSP connect-src, register Axios feed under US category as Tier 2 mainstream source. * perf: bootstrap endpoint + polling optimization (#495) * perf: bootstrap endpoint + polling optimization (phases 3-4) Replace 15+ individual RPC calls on startup with a single /api/bootstrap batch call that fetches pre-cached data from Redis. 
Consolidate 6 panel setInterval timers into the central RefreshScheduler for hidden-tab awareness (10x multiplier) and adaptive backoff (up to 4x for unchanged data). Convert IntelligenceGapBadge from 10s polling to event-driven updates with 60s safety fallback. * fix(bootstrap): inline Redis + cache keys in edge function Vercel Edge Functions cannot resolve cross-directory TypeScript imports from server/_shared/. Inline getCachedJsonBatch and BOOTSTRAP_CACHE_KEYS directly in api/bootstrap.js. Add sync test to ensure inlined keys stay in sync with the canonical server/_shared/cache-keys.ts registry. * test: add Edge Function module isolation guard for all api/*.js files Prevents any Edge Function from importing from ../server/ or ../src/ which breaks Vercel builds. Scans all 12 non-helper Edge Functions. * fix(bootstrap): read unprefixed cache keys on all environments Preview deploys set VERCEL_ENV=preview which caused getKeyPrefix() to prefix Redis keys with preview:<sha>:, but handlers only write to unprefixed keys on production. Bootstrap is a read-only consumer of production cache — always read unprefixed keys. 
* fix(bootstrap): wire sectors hydration + add coverage guard - Wire getHydratedData('sectors') in data-loader to skip Yahoo Finance fetch when bootstrap provides sector data - Add test ensuring every bootstrap key has a getHydratedData consumer — prevents adding keys without wiring them * fix(server): resolve 25 TypeScript errors + add server typecheck to CI - _shared.ts: remove unused `delay` variable - list-etf-flows.ts: add missing `rateLimited` field to 3 return literals - list-market-quotes.ts: add missing `rateLimited` field to 4 return literals - get-cable-health.ts: add non-null assertions for regex groups and array access - list-positive-geo-events.ts: add non-null assertion for array index - get-chokepoint-status.ts: add required fields to request objects - CI: run `typecheck:api` (tsconfig.api.json) alongside `typecheck` to catch server/ TS errors before merge * feat(military): server-side military bases 125K + rate limiting (#496) * feat(military): server-side military bases with 125K entries + rate limiting (#485) Migrate military bases from 224 static client-side entries to 125,380 server-side entries stored in Redis GEO sorted sets, served via bbox-filtered GEOSEARCH endpoint with server-side clustering. 
Data pipeline: - Pizzint/Polyglobe: 79,156 entries (Supabase extraction) - OpenStreetMap: 45,185 entries - MIRTA: 821 entries - Curated strategic: 218 entries - 277 proximity duplicates removed Server: - ListMilitaryBases RPC with GEOSEARCH + HMGET + tier/filter/clustering - Antimeridian handling (split bbox queries) - Blue-green Redis deployment with atomic version pointer switch - geoSearchByBox() + getHashFieldsBatch() helpers in redis.ts Security: - @upstash/ratelimit: 60 req/min sliding window per IP - IP spoofing fix: prioritize x-real-ip (Vercel-injected) over x-forwarded-for - Require API key for non-browser requests (blocks unauthenticated curl/scripts) - Input validation: allowlisted types/kinds, regex country, clamped bbox/zoom Frontend: - Viewport-driven loading with bbox quantization + debounce - Server-side grid clustering at low zoom levels - Enriched popup with kind, category badges (airforce/naval/nuclear/space) - Static 224 bases kept as search fallback + initial render * fix(military): fallback to production Redis keys in preview deployments Preview deployments prefix Redis keys with `preview:{sha}:` but military bases data is seeded to unprefixed (production) keys. When the prefixed `military:bases:active` key is missing, fall back to the unprefixed key and use raw (unprefixed) keys for geo/meta lookups. * fix: remove unused 'remaining' destructure in rate-limit (TS6133) * ci: add typecheck:api to pre-push hook to catch server-side TS errors * debug(military): add X-Bases-Debug response header for preview diagnostics * fix(bases): trigger initial server fetch on map load fetchServerBases() was only called on moveend — if the user never panned/zoomed, the API was never called and only the 224 static fallback bases showed. 
* perf(military): debounce base fetches + upgrade edge cache to static tier (#497)

- Add 300ms debounce on moveend to prevent rapid pan flooding
- Fixes stale-bbox bug where pendingFetch returns old viewport data
- Upgrade edge cache tier from medium (5min) to static (1hr) — bases are static infrastructure, aligned with server-side cachedFetchJson TTL
- Keep error logging in catch blocks for production diagnostics

* fix(cyber): make GeoIP centroid fallback jitter deterministic (#498)

Replace Math.random() jitter with DJB2 hash seeded by the threat indicator (IP/URL), so the same threat always maps to the same coordinates across requests while different threats from the same country still spread out. Closes #203

Co-authored-by: Chris Chen <fuleinist@users.noreply.github.com>

* fix: use cross-env for Windows-compatible npm scripts (#499)

Replace direct `VAR=value command` syntax with cross-env/cross-env-shell so dev, build, test, and desktop scripts work on Windows PowerShell/CMD.

Co-authored-by: facusturla <facusturla@users.noreply.github.com>

* feat(live-news): add CBC News to optional North America channels (#502)

YouTube handle @CBCNews with fallback video ID 5vfaDsMhCF4. |
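The deterministic-jitter idea from the GeoIP fix can be sketched with a DJB2 hash. The ±0.5° jitter radius and the way the hash is split into two offsets are assumed here for illustration, not the production values:

```javascript
// Sketch: hash-seeded jitter so the same indicator always lands on the same point.
function djb2(str) {
  let h = 5381;
  for (let i = 0; i < str.length; i++) {
    h = ((h * 33) ^ str.charCodeAt(i)) >>> 0; // keep as unsigned 32-bit
  }
  return h;
}

function jitteredCentroid(indicator, lat, lon) {
  const h = djb2(indicator);
  // two values in [0, 1) derived from the low and high 16 bits of the hash
  const a = (h & 0xffff) / 0x10000;
  const b = ((h >>> 16) & 0xffff) / 0x10000;
  return {
    lat: lat + (a - 0.5), // deterministic offset, assumed +/-0.5 degrees
    lon: lon + (b - 0.5),
  };
}
```

Unlike Math.random(), the same threat indicator now maps to identical coordinates on every request, so map markers stop jumping between refreshes.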
* fix(bootstrap): harden hydration cache + polling review fixes (#504) - Filter null/undefined values before storing in hydration cache to prevent future consumers using !== undefined from misinterpreting null as valid data - Debounce wm:intelligence-updated event handler via requestAnimationFrame to coalesce rapid alert generation into a single render pass - Include alert IDs in StrategicRiskPanel change fingerprint so content changes are detected even when alert count stays the same - Replace JSON.stringify change detection in ServiceStatusPanel with lightweight name:status fingerprint - Document max effective refresh interval (40x base) in scheduler * fix(geo): tokenization-based keyword matching to prevent false positives (#503) * fix(geo): tokenization-based keyword matching to prevent false positives Replace String.includes() with tokenization-based Set.has() matching across the geo-tagging pipeline. Prevents false positives like "assad" matching inside "ambassador" and "hts" matching inside "rights". - Add src/utils/keyword-match.ts as single source of truth - Decompose possessives/hyphens ("Assad's" → includes "assad") - Support multi-word phrase matching ("white house" as contiguous) - Remove false-positive-prone DC keywords ('house', 'us ') - Update 9 consumer files across geo-hub, map, CII, and asset systems - Add 44 tests covering false positives, true positives, edge cases Co-authored-by: karim <mirakijka@gmail.com> Fixes #324 * fix(geo): add inflection suffix matching + fix test imports Address code review feedback: P1a: Add suffix-aware matching for plurals and demonyms so existing keyword lists don't regress (houthi→houthis, ukraine→ukrainian, iran→iranian, israel→israeli, russia→russian, taiwan→taiwanese). Uses curated suffix list + e-dropping rule to avoid false positives. P1b: Expand conflictTopics arrays in DeckGLMap and Map with demonym forms so "Iranian senate..." correctly registers as conflict topic. 
P2: Replace inline test functions with real module import via tsx. Tests now exercise the production keyword-match.ts directly. * fix: wire geo-keyword tests into test:data command The .mts test file wasn't covered by `node --test tests/*.test.mjs`. Add `npx tsx --test tests/*.test.mts` so test:data runs both suites. * fix: cross-platform test:data + pin tsx in devDependencies - Use tsx as test runner for both .mjs and .mts (single invocation) - Removes ; separator which breaks on Windows cmd.exe - Add tsx to devDependencies so it works in offline/CI environments * fix(geo): multi-word demonym matching + short-keyword suffix guard - Add wordMatches() for suffix-aware phrase matching so "South Korean" matches keyword "south korea" and "North Korean" matches "north korea" - Add MIN_SUFFIX_KEYWORD_LEN=4 guard so short keywords like "ai", "us", "hts" only do exact-match (prevents "ais"→"ai", "uses"→"us" false positives) - Add 5 new tests covering both fixes (58 total, all passing) * fix(geo): support plural demonyms in keyword matching Add compound suffixes (ians, eans, ans, ns, is) to handle plural demonym forms like "Iranians"→"iran", "Ukrainians"→"ukraine", "Russians"→"russia", "Israelis"→"israel". Adds 5 new tests (63 total). --------- Co-authored-by: karim <mirakijka@gmail.com> * chore: strip 61 debug console.log calls from 20 service files (#501) * chore: strip 61 debug console.log calls from services Remove development/tracing console.log statements from 20 files. These add noise to production browser consoles and increase bundle size. Preserved: all console.error (error handling) and console.warn (warnings). Preserved: debug-gated logs in runtime.ts (controlled by verbose flag). Removed: debugInjectTestEvents() from geo-convergence.ts (test-only code). Removed: logSummary()/logReport() methods that were pure console.log wrappers. 
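The tokenization-based matching described in the geo fix above can be sketched as follows. `tokenize` and `matchesKeyword` are illustrative stand-ins for keyword-match.ts, and the suffix/inflection handling added in the later commits is omitted for brevity:

```javascript
// Sketch: word-token matching instead of String.includes(), so "assad" no longer
// fires inside "ambassador" and "hts" no longer fires inside "rights".
function tokenize(text) {
  return text
    .toLowerCase()
    .replace(/'s\b/g, '')     // possessives: "Assad's" -> "assad"
    .split(/[^a-z0-9]+/)      // hyphens and punctuation become token boundaries
    .filter(Boolean);
}

function matchesKeyword(text, keyword) {
  const tokens = tokenize(text);
  const parts = tokenize(keyword);
  if (parts.length === 1) return tokens.includes(parts[0]);
  // multi-word phrases ("white house") must appear as a contiguous token run
  outer: for (let i = 0; i + parts.length <= tokens.length; i++) {
    for (let j = 0; j < parts.length; j++) {
      if (tokens[i + j] !== parts[j]) continue outer;
    }
    return true;
  }
  return false;
}
```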
* fix: remove orphaned stubs and remaining debug logs from stripped services - Remove empty logReport() method and unused startTime variable (parallel-analysis.ts) - Remove orphaned console.group/console.groupEnd pair (parallel-analysis.ts) - Remove empty logSignalSummary() export (signal-aggregator.ts) - Remove logSignalSummary import/call and 3 remaining console.logs (InsightsPanel.ts) - Remove no-op logDirectFetchBlockedOnce() and dead infrastructure (prediction/index.ts) * fix: generalize Vercel preview origin regex + include filters in bases cache key (#506) - api/_api-key.js: preview URL pattern was user-specific (-elie-), rejecting other collaborators' Vercel preview deployments. Generalized to match any worldmonitor-*.vercel.app origin. - military-bases.ts: client cache key only checked bbox/zoom, ignoring type/kind/country filters. Switching filters without panning returned stale results. Unified into single cacheKey string. * fix(prediction): filter stale/expired markets from Polymarket panel (#507) Prediction panel was showing expired markets (e.g. "Will US strike Iran on Feb 9" at 0%). Root causes: no active/archived API filters, no end_date_min param, no client-side expiry guard, and sub-market selection picking highest volume before filtering expired ones. 
- Add active=true, archived=false, end_date_min API params to all 3 Gamma API call sites (events, markets, probe) - Pre-filter sub-markets by closed/expired BEFORE volume selection in both fetchPredictions() and fetchCountryMarkets() - Add defense-in-depth isExpired() client-side filter on final results - Propagate endDate through all market object paths including sebuf fallback - Show expiry date in PredictionPanel UI with new .prediction-meta layout - Add "closes" i18n key to all 18 locale files - Add endDate to server handler GammaMarket/GammaEvent interfaces and map to proto closesAt field * fix(relay): guard proxy handlers against ERR_HTTP_HEADERS_SENT crash (#509) Polymarket and World Bank proxy handlers had unguarded res.writeHead() calls in error/timeout callbacks that race with the response callback. When upstream partially responds then times out, both paths write headers → process crash. Replace 5 raw writeHead+end calls with safeEnd() which checks res.headersSent before writing. * feat(breaking-news): add active alert banner with audio for critical/high RSS items (#508) RSS items classified as critical/high threat now trigger a full-width breaking news banner with audio alert, auto-dismiss (60s/30s by severity), visibility-aware timer pause, dedup, and a toggle in the Intelligence Findings dropdown. * fix(sentry): filter Android OEM WebView bridge injection errors (#510) Add ignoreErrors pattern for LIDNotifyId, onWebViewAppeared, and onGetWiFiBSSID — native bridge functions injected by Lenovo/Huawei device SDKs into Chrome Mobile WebView. No stack frames in our code. * chore: add validated telegram channels list (global + ME + Iran + cyber) (#249) * feat(conflict): add Iran Attacks map layer + strip debug logs (#511) * chore: strip 61 debug console.log calls from services Remove development/tracing console.log statements from 20 files. These add noise to production browser consoles and increase bundle size. 
Preserved: all console.error (error handling) and console.warn (warnings). Preserved: debug-gated logs in runtime.ts (controlled by verbose flag). Removed: debugInjectTestEvents() from geo-convergence.ts (test-only code). Removed: logSummary()/logReport() methods that were pure console.log wrappers.

* fix: remove orphaned stubs and remaining debug logs from stripped services

- Remove empty logReport() method and unused startTime variable (parallel-analysis.ts)
- Remove orphaned console.group/console.groupEnd pair (parallel-analysis.ts)
- Remove empty logSignalSummary() export (signal-aggregator.ts)
- Remove logSignalSummary import/call and 3 remaining console.logs (InsightsPanel.ts)
- Remove no-op logDirectFetchBlockedOnce() and dead infrastructure (prediction/index.ts)

* feat(conflict): add Iran Attacks map layer

Adds a new Iran-focused conflict events layer that aggregates real-time events, geocodes via a 40-city lookup table, caches 15min in Redis, and renders as a toggleable DeckGL ScatterplotLayer with severity coloring.

- New proto + codegen for ListIranEvents RPC
- Server handler with HTML parsing, city geocoding, category mapping
- Frontend service with circuit breaker
- DeckGL ScatterplotLayer with severity-based color/size
- MapPopup with sanitized source links
- iranAttacks toggle across all variants, harnesses, and URL state

* fix: resolve bootstrap 401 and 429 rate limiting on page init (#512)

Same-origin browser requests don't send an Origin header (per the CORS spec), causing validateApiKey to reject them. Extract the origin from Referer as a fallback. Increase the rate limit from 60 to 200 req/min to accommodate the ~50 requests fired during page initialization.

* fix(relay): prevent Polymarket OOM via request deduplication (#513)

Concurrent Polymarket requests for the same cache key each fired independent https.get() calls. With 12 categories × multiple clients, 740 requests piled up in 10s, all buffering response bodies → 4.1GB heap → OOM crash on Railway.

Fix: an in-flight promise map deduplicates concurrent requests to the same cache key. 429/error responses are negative-cached for 30s to prevent retry storms.

* fix(threat-classifier): add military/conflict keyword gaps and news-to-conflict bridge (#514)

Breaking news headlines like "Israel's strike on Iran" were classified as info level because the keyword classifier lacked standalone conflict phrases. Additionally, the conflict instability score depended solely on ACLED data (1-7 day lag) with no bridge from real-time breaking news.

- Add 3 critical + 18 high contextual military/conflict keywords
- Preserve threat classification on semantically merged clusters
- Add news-derived conflict floor when ACLED/HAPI report zero signal
- Upsert news events by cluster ID to prevent duplicates
- Extract newsEventIndex to module-level Map for serialization safety

* fix(breaking-news): let critical alerts bypass global cooldown and replace HIGH alerts (#516)

The global cooldown (60s) was blocking critical alerts when a less important HIGH alert fired from an earlier RSS batch. Added a priority-aware cooldown so critical alerts always break through. The banner now auto-dismisses HIGH alerts when a CRITICAL arrives. Added Iran/strikes keywords to the classifier.

* fix(rate-limit): increase sliding window to 300 req/min (#515)

App init fires many concurrent classify-event, summarize-article, and record-baseline-snapshot calls, exhausting the 200/min limit and causing 429s. Bump to 300 as a temporary measure while client-side batching is implemented.

* fix(breaking-news): fix fake pubDate fallback and filter noisy think-tank alerts (#517)

Two bugs caused a stale CrisisWatch article to fire as a breaking alert:

1. Non-standard pubDate format ("Friday, February 27, 2026 - 12:38") failed to parse → fallback was `new Date()` (NOW) → day-old articles appeared as "just now" and passed the recency gate on every fetch
2. Tier 3+ sources (think tanks) fired alerts on keyword-only matches like "War" in policy analysis titles — too noisy for breaking alerts

Fix: parsePubDate() handles non-standard formats and falls back to epoch (not now). Tier 3+ sources require LLM classification to fire.

* fix: make iran-events handler read-only from Redis (#518)

Remove the server-side LiveUAMap scraper (blocked by Cloudflare 403 on Vercel IPs). The handler now reads a pre-populated Redis cache pushed from local browser scraping. Change the cache tier from slow to fast to prevent the CDN from serving stale empty responses for 30+ minutes.

* fix(relay): Polymarket circuit breaker + concurrency limiter (OOM fix) (#519)

* fix(rate-limit): increase sliding window to 300 req/min

App init fires many concurrent classify-event, summarize-article, and record-baseline-snapshot calls, exhausting the 200/min limit and causing 429s. Bump to 300 as a temporary measure while client-side batching is implemented.

* fix(relay): add Polymarket circuit breaker + concurrency limiter to prevent OOM

Railway relay OOM crash: 280 Polymarket 429 errors in 8s, heap hit 3.7GB. Multiple unique cache keys bypassed per-key dedup, flooding upstream.

- Circuit breaker: trips after 5 consecutive failures, 60s cooldown
- Concurrent upstream limiter: max 3 simultaneous requests
- Negative cache TTL: 30s → 60s to reduce retry frequency
- Upstream slot freed on response.on('end'), not headers, preventing body buffer accumulation past the concurrency cap

* fix(relay): guard against double-finalization on Polymarket timeout

request.destroy() in the timeout handler also fires request.on('error'), causing a double decrement of polymarketActiveUpstream (the counter goes negative, disabling the concurrency cap) and a double circuit breaker trip. Add a finalized guard so decrement + failure accounting happens exactly once per request regardless of which error path fires first.
* fix(threat-classifier): stagger AI classification requests to avoid Groq 429 (#520) flushBatch() fired up to 20 classifyEvent RPCs simultaneously via Promise.all, instantly hitting Groq's ~30 req/min rate limit. - Sequential execution with 2s min-gap between requests (~28 req/min) - waitForGap() enforces hard floor + jitter across batch boundaries - batchInFlight guard prevents concurrent flush loops - 429/5xx: requeue failed job (with retry cap) + remaining untouched jobs - Queue cap at 100 items with warn on overflow * fix(relay): regenerate package-lock.json with telegram dependency The lockfile was missing resolved entries for the telegram package, causing Railway to skip installation despite it being in package.json. * chore: trigger deploy to flush CDN cache for iran-events endpoint * Revert "fix(relay): regenerate package-lock.json with telegram dependency" This reverts commit |
||
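The safeEnd() guard from #509 reduces to a headers-sent check before any write. A minimal sketch (the helper's actual signature and status handling in the relay may differ):

```javascript
// Guarded response finalizer: when an error/timeout callback races the
// normal response callback, only the first caller writes headers.
// Later callers become no-ops instead of crashing with
// ERR_HTTP_HEADERS_SENT.
function safeEnd(res, statusCode, body) {
  if (res.headersSent) return false; // another path already responded
  res.writeHead(statusCode, { 'Content-Type': 'application/json' });
  res.end(body);
  return true;
}
```

Both the timeout handler and the upstream error handler can then call safeEnd() unconditionally; whichever fires first wins.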
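The in-flight promise map from the #513 OOM fix can be sketched as below (names like `fetchDeduped` and the 30s TTL placement are illustrative, not the relay's actual identifiers):

```javascript
// Concurrent callers for the same cache key share one upstream promise;
// failures are negative-cached briefly so retry storms don't hammer
// the upstream after a 429.
const inFlight = new Map();      // key -> pending Promise
const negativeCache = new Map(); // key -> expiry timestamp (ms)
const NEGATIVE_TTL_MS = 30_000;

async function fetchDeduped(key, upstreamFetch) {
  const bad = negativeCache.get(key);
  if (bad && bad > Date.now()) throw new Error(`negative-cached: ${key}`);

  if (inFlight.has(key)) return inFlight.get(key); // join existing request

  const p = upstreamFetch(key)
    .catch((err) => {
      negativeCache.set(key, Date.now() + NEGATIVE_TTL_MS);
      throw err; // all joined callers see the same failure
    })
    .finally(() => inFlight.delete(key));
  inFlight.set(key, p);
  return p;
}
```

With this shape, N concurrent requests for one key buffer a single response body instead of N.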
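The #519 circuit breaker (trip after 5 consecutive failures, 60s cooldown) is a small state machine; an illustrative sketch with assumed names, not the relay's actual code:

```javascript
// Circuit breaker: after FAILURE_THRESHOLD consecutive upstream
// failures, reject all calls until the cooldown elapses.
const FAILURE_THRESHOLD = 5;
const COOLDOWN_MS = 60_000;
let consecutiveFailures = 0;
let trippedUntil = 0;

function breakerAllows(now = Date.now()) {
  return now >= trippedUntil;
}

function recordResult(ok, now = Date.now()) {
  if (ok) {
    consecutiveFailures = 0; // any success resets the streak
    return;
  }
  consecutiveFailures++;
  if (consecutiveFailures >= FAILURE_THRESHOLD) {
    trippedUntil = now + COOLDOWN_MS; // trip: short-circuit for 60s
    consecutiveFailures = 0;
  }
}
```

The companion "finalized" guard from the same PR is the same idea at request granularity: a per-request boolean ensures the failure is recorded (and the concurrency slot freed) exactly once, even when `request.destroy()` fires the error handler after the timeout handler.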
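The #520 stagger replaces Promise.all with a sequential loop gated by a minimum inter-request gap. A sketch under assumed names (`waitForGap`, `flushBatch` mirror the commit message; internals are guesses):

```javascript
// Enforce a ~2s floor between upstream calls (~30 req/min) with a
// little jitter so retried batches don't re-align into bursts.
const MIN_GAP_MS = 2000;
let lastRequestAt = 0;

const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function waitForGap() {
  const jitter = Math.random() * 250;
  const earliest = lastRequestAt + MIN_GAP_MS + jitter;
  const wait = earliest - Date.now();
  if (wait > 0) await sleep(wait);
  lastRequestAt = Date.now();
}

async function flushBatch(jobs, classify) {
  for (const job of jobs) { // sequential, not Promise.all
    await waitForGap();
    await classify(job);
  }
}
```

Because `lastRequestAt` is shared, the gap also holds across batch boundaries, which is what keeps back-to-back flushes under the provider's rate limit.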
|
|
98150d639d |
feat: persist OREF history to Redis + retry bootstrap (#674)
* feat: persist OREF history to Redis + retry bootstrap on failure

OREF history was lost on every container restart — single curl call with no retry, no persistence. The panel showed "0 alerts" until history re-accumulated over hours.

Changes to scripts/ais-relay.cjs:
- Add Upstash Redis REST helpers (upstashGet/upstashSet) using https.request with HTTPS-only validation, 5s timeout, never-throw semantics
- Persist history to Redis after each poll mutation (version-deduped, concurrent-guarded, 200-wave hard cap, 7d TTL matching purge window)
- Bootstrap from Redis first on startup (schema validation, 7d purge filter, 24h count recompute, lastAlertsJson seeding to prevent duplicate waves)
- Fall through to upstream retry if Redis data is empty or all stale
- Upstream retry: 3 attempts with exponential backoff + jitter (~70s budget)
- Expose redisEnabled + bootstrapSource in /health endpoint
- Preserves main's totalHistoryCount field and 7-day history retention

Failure matrix (upstream/Redis):
- UP/UP: Redis first (instant), poll refreshes + persists
- UP/DOWN: bootstrap from upstream, persist fails silently
- DOWN/UP: bootstrap from Redis cache
- DOWN/DOWN: 3 retries, then empty history

* fix: increment persist version after upstream bootstrap to seed Redis

Without this, _persistVersion stays at 0 after bootstrap, matching _lastPersistedVersion — orefPersistHistory() skips the write. A restart before any new alerts would lose all bootstrapped history. |
||
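The upstream retry described in #674 (3 attempts, exponential backoff plus jitter) can be sketched as follows; the function name and parameters are assumptions, not the actual ais-relay.cjs identifiers:

```javascript
// Retry a bootstrap fetch with exponential backoff + jitter.
// Falls through to an empty history only after all attempts fail.
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function bootstrapWithRetry(fetchHistory, attempts = 3, baseMs = 1000) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fetchHistory();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        // baseMs, 2*baseMs, 4*baseMs... plus up to 500ms of jitter
        await sleep(baseMs * 2 ** i + Math.random() * 500);
      }
    }
  }
  throw lastErr; // caller starts with empty history (DOWN/DOWN case)
}
```

The jitter keeps several restarting containers from retrying in lockstep against an already-struggling upstream.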
|
|
cae3c08436 |
feat(oref): add 1,478 Hebrew→English location translations + wire sirens into breaking news banner (#661)
- Generate static location map from eladnava/pikud-haoref-api cities.json (1,478 entries)
- Lazy-load translations in oref-alerts.ts with retry on failure
- Add dispatchOrefBreakingAlert() with stable dedupe key and global cooldown bypass
- Wire oref siren alerts into breaking news banner on initial fetch and polling updates |
||
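The stable-dedupe-key plus cooldown-bypass dispatch from #661 can be sketched like this (field names, the key shape, and the 60s constant are illustrative assumptions):

```javascript
// Each siren gets a stable key so re-fetches and polling updates don't
// re-fire the banner; critical alerts skip the global cooldown.
const firedKeys = new Set();
let lastAlertAt = 0;
const COOLDOWN_MS = 60_000;

function shouldDispatch(alert, now = Date.now()) {
  const key = `${alert.area}|${alert.category}|${alert.time}`; // stable per siren
  if (firedKeys.has(key)) return false; // already shown
  if (alert.severity !== 'critical' &&
      now - lastAlertAt < COOLDOWN_MS) return false; // cooldown, bypassed for critical
  firedKeys.add(key);
  lastAlertAt = now;
  return true;
}
```

Deriving the key from the alert's own fields (rather than fetch order) is what makes it survive the initial-fetch vs. polling-update overlap.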
|
|
c6b94a55bf |
fix(oref): grab newest history records and preserve bootstrap data (#653)
OREF AlertsHistory.json returns records newest-first, but the bootstrap used .slice(-500) which took the oldest 500 — all outside the 24h window. The poll loop then purged them all, leaving historyCount24h = 0.

Three fixes:
- Use .slice(0, 500) to take the newest 500 records from OREF history
- Extend history purge from 24h to 7 days so bootstrap data persists
- Add totalHistoryCount field for badge fallback when 24h count is zero |
||
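The core of the #653 fix is a one-line slice direction plus the widened retention window. A minimal illustration (record field names are assumptions):

```javascript
// OREF history arrives newest-first, so the bootstrap must slice from
// the front (.slice(0, 500)), not the back (.slice(-500)), then purge
// anything older than the 7-day retention window.
const RETENTION_MS = 7 * 24 * 60 * 60 * 1000;

function bootstrapHistory(records, now = Date.now()) {
  return records
    .slice(0, 500) // newest 500 — .slice(-500) took the oldest 500
    .filter((r) => now - Date.parse(r.alertDate) <= RETENTION_MS);
}
```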
|
|
88215cb517 | feat: add Redis caching for GPS jamming data (#646) | ||
|
|
36e36d8b57 |
Cost/traffic hardening, runtime fallback controls, and PostHog removal (#638)
- Remove PostHog analytics runtime and configuration - Add API rate limiting (api/_rate-limit.js) - Harden traffic controls across edge functions - Add runtime fallback controls and data-loader improvements - Add military base data scripts (fetch-mirta-bases, fetch-osm-bases) - Gitignore large raw data files - Settings playground prototypes |