Mirror of https://github.com/koala73/worldmonitor.git
Synced 2026-05-14 02:56:21 +02:00
HEAD: 4a8ab3855f7be8e3f7ea614ef31ca3dec86fe2de
120 commits
4a8ab3855f fix: seed-insights digest shape extraction (#1011)
* feat: add seed-first pattern to 15 RPC handlers with Railway seed scripts
Migrate handlers from direct external API calls to seed-first pattern:
Railway cron seeds Redis → handlers read from Redis → fallback to live
fetch if seed stale and SEED_FALLBACK_* env enabled.
Handlers updated: earthquakes, fire-detections, internet-outages,
climate-anomalies, unrest-events, cyber-threats, market-quotes,
commodity-quotes, crypto-quotes, etf-flows, gulf-quotes,
stablecoin-markets, natural-events, displacement-summary, risk-scores.
Also adds:
- scripts/_seed-utils.mjs (shared seed framework with atomic publish,
distributed locks, retry, freshness metadata)
- 13 seed scripts for Railway cron
- api/seed-health.js monitoring endpoint
- scripts/validate-seed-migration.mjs post-deploy validation
- Restored multi-source CII in get-risk-scores (8 sources: ACLED,
UCDP, outages, climate, cyber, fires, GPS, Iran)
* feat: add seed scripts for market quotes, commodity quotes & airport delays
New seed scripts:
- seed-market-quotes.mjs: 28 symbols via Yahoo Finance + Finnhub
- seed-commodity-quotes.mjs: 6 commodity futures via Yahoo Finance
- seed-airport-delays.mjs: FAA + NOTAM airport closure data
Handler changes (seed-first pattern):
- list-market-quotes.ts: read from seed data before live fetch
- list-commodity-quotes.ts: read from seed data before live fetch
- list-airport-delays.ts: seed-first for FAA and NOTAM data
Other changes:
- ais-relay.cjs: add DISABLE_RELAY_MARKET_SEED guard for cutover
- _seed-utils.mjs: add sleep, parseYahooChart, writeExtraKey helpers
- seed-health.js: monitor 4 new seed domains
- validate-seed-migration.mjs: add new domains to validation
* fix: extract digest items from category buckets in seed-insights
The news digest Redis key stores items nested in category buckets
({ categories: { politics: { items: [...] }, ... } }), not as a
flat array. The script was checking digest.items, which is
undefined, causing "Digest has no items" errors on every run.
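A minimal sketch of the shape fix described above, assuming the bucket layout from the commit message (the function name and the legacy-array fallback are illustrative):

```javascript
// Collect digest items across the nested category buckets
// ({ categories: { politics: { items: [...] }, ... } }) instead of reading a
// (nonexistent) top-level digest.items array.
function extractDigestItems(digest) {
  if (Array.isArray(digest?.items)) return digest.items; // legacy flat shape
  const buckets = digest?.categories ?? {};
  return Object.values(buckets).flatMap((bucket) => bucket?.items ?? []);
}
```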

40abcae887 feat: CII Railway seed — pre-compute instability scores from 8 sources (#996)
Adds seedCiiScores() to ais-relay.cjs that runs every 10 minutes:
- Reads 7 Redis sources (UCDP, outages, climate, cyber, fires, GPS jam, Iran)
- Calls the ACLED API directly for protests/riots/battles
- Computes simplified CII scores for 20 TIER1 countries
- Writes to risk:scores:sebuf:v1 (TTL 900s) plus a stale key (TTL 3600s)

Frontend bootstrap hydration (already on main) consumes these scores for instant CII panel render on page load.

124085edd6 fix: add process.exit(0) to seed scripts for Railway cron compatibility (#999)
Railway marks cron jobs as "failed" when the Node.js process doesn't exit cleanly. The seed scripts relied on natural event-loop drain, but undici's connection pool keeps handles alive, causing Railway to kill the process and mark it as failed.

Changes:
- Add process.exit(0) on success and lock-skip paths in runSeed()
- Fix recordCount for crypto (.quotes) and stablecoin (.stablecoins)
- Add writeExtraKey, sleep, parseYahooChart shared utilities
- Add extraKeys option to runSeed for bootstrap hydration keys
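A sketch of the clean-exit fix, assuming a simplified runSeed shape; `exit` is injectable here only so the sketch can be exercised without killing the process (the real script would call process.exit directly):

```javascript
// Exit explicitly on success and lock-skip paths: undici's pooled sockets
// would otherwise keep the event loop (and the Railway cron job) alive.
async function runSeedWithExit(seedFn, exit = (code) => process.exit(code)) {
  try {
    await seedFn(); // may also resolve early when another run holds the lock
    exit(0);        // success and lock-skip both exit 0
  } catch (err) {
    console.error('seed failed:', err);
    exit(1);
  }
}
```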

80b8071356 feat: server-side AI insights via Railway cron + bootstrap hydration (#1003)
Move the heavy AI insights pipeline (clustering, scoring, LLM brief) from client-side (15-40s per user) to a 5-min Railway cron job. The frontend reads pre-computed insights instantly via bootstrap hydration, with graceful fallback to the existing client-side pipeline.
- Add _clustering.mjs: Jaccard clustering + importance scoring (pure JS)
- Add seed-insights.mjs: Railway cron reads digest, clusters, calls Groq/OpenRouter for brief, writes to Redis with LKG preservation
- Register insights key in bootstrap.js FAST_KEYS tier
- Add insights-loader.ts: module-level cached bootstrap reader
- Modify InsightsPanel.ts: server-first path (2-step progress) with client fallback (4-step, unchanged behavior)
- Add unit tests for clustering (12) and insights-loader (7)
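A minimal sketch of the Jaccard-clustering idea named above; the threshold, tokenization, and greedy grouping are illustrative assumptions, not the repo's actual _clustering.mjs:

```javascript
// Jaccard similarity over token sets: |A ∩ B| / |A ∪ B|.
function jaccard(a, b) {
  const A = new Set(a), B = new Set(b);
  let inter = 0;
  for (const t of A) if (B.has(t)) inter++;
  const union = A.size + B.size - inter;
  return union === 0 ? 0 : inter / union;
}

// Greedily assign each headline to the first cluster above the threshold,
// merging token sets as the cluster grows.
function clusterHeadlines(headlines, threshold = 0.5) {
  const clusters = [];
  for (const h of headlines) {
    const tokens = h.toLowerCase().split(/\W+/).filter(Boolean);
    const hit = clusters.find((c) => jaccard(c.tokens, tokens) >= threshold);
    if (hit) {
      hit.items.push(h);
      hit.tokens = [...new Set([...hit.tokens, ...tokens])];
    } else {
      clusters.push({ tokens, items: [h] });
    }
  }
  return clusters.map((c) => c.items);
}
```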

42cd258f5a fix: RSS redirect crash — allowedDomains was renamed but redirect handler not updated (#995)
The RSS_ALLOWED_DOMAINS refactor missed the redirect handler at line 4755, causing "ReferenceError: allowedDomains is not defined" every time an RSS feed returns a 301/302 redirect. This crashes the entire relay process.

898ac7b1c4 perf(rss): route RSS direct to Railway, skip Vercel middleman (#961)
* perf(rss): route RSS direct to Railway, skip Vercel middleman
Vercel /api/rss-proxy has 65% error rate (207K failed invocations/12h).
Route browser RSS requests directly to Railway (proxy.worldmonitor.app)
via Cloudflare CDN, eliminating Vercel as middleman.
- Add VITE_RSS_DIRECT_TO_RELAY feature flag (default off) for staged rollout
- Centralize RSS proxy URL in rssProxyUrl() with desktop/dev/prod routing
- Make Railway /rss public (skip auth, keep rate limiting with CF-Connecting-IP)
- Add wildcard *.worldmonitor.app CORS + always emit Vary: Origin on /rss
- Extract ~290 RSS domains to shared/rss-allowed-domains.cjs (single source of truth)
- Convert Railway domain check to Set for O(1) lookups
- Remove rss-proxy from KEYED_CLOUD_API_PATTERN (no longer needs API key header)
- Add edge function test for shared domain list import
* fix(edge): replace node:module with JSON import for edge-compatible RSS domains
api/_rss-allowed-domains.js used createRequire from node:module which is
unsupported in Vercel Edge Runtime, breaking all edge functions (including
api/gpsjam). Replaced with JSON import attribute syntax that works in both
esbuild (Vercel build) and Node.js 22+ (tests).
Also fixed middleware.ts TS18048 error where VARIANT_OG[variant] could be
undefined.
* test(edge): add guard against node: built-in imports in api/ files
Scans ALL api/*.js files (including _ helpers) for node: module imports
which are unsupported in Vercel Edge Runtime. This would have caught the
createRequire(node:module) bug before it reached Vercel.
* fix(edge): inline domain array and remove NextResponse reference
- Replace `import ... with { type: 'json' }` in _rss-allowed-domains.js
with inline array — Vercel esbuild doesn't support import attributes
- Replace `NextResponse.next()` with bare `return` in middleware.ts —
NextResponse was never imported
* ci(pre-push): add esbuild bundle check and edge function tests
The pre-push hook now catches Vercel build failures locally:
- esbuild bundles each api/*.js entrypoint (catches import attribute
syntax, missing modules, and other bundler errors)
- runs edge function test suite (node: imports, module isolation)
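The node:-builtin guard described above could be sketched like this; the regex and function name are a simplified illustration, not the repo's actual edge test:

```javascript
// Flag `node:` built-in imports (import or require), which the Vercel Edge
// Runtime does not support. Scanning source text keeps the check cheap enough
// to run in a pre-push hook.
function findNodeBuiltinImports(source) {
  const pattern = /(?:import\s[^'"]*['"]|require\(\s*['"])(node:[\w/-]+)['"]/g;
  const hits = [];
  let match;
  while ((match = pattern.exec(source)) !== null) hits.push(match[1]);
  return hits;
}
```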

78a14306d9 feat: add seed-first pattern to 15 RPC handlers with Railway seed scripts (#989)
Migrate handlers from direct external API calls to seed-first pattern: Railway cron seeds Redis → handlers read from Redis → fallback to live fetch if seed stale and SEED_FALLBACK_* env enabled.

Handlers updated: earthquakes, fire-detections, internet-outages, climate-anomalies, unrest-events, cyber-threats, market-quotes, commodity-quotes, crypto-quotes, etf-flows, gulf-quotes, stablecoin-markets, natural-events, displacement-summary, risk-scores.

Also adds:
- scripts/_seed-utils.mjs (shared seed framework with atomic publish, distributed locks, retry, freshness metadata)
- 13 seed scripts for Railway cron
- api/seed-health.js monitoring endpoint
- scripts/validate-seed-migration.mjs post-deploy validation
- Restored multi-source CII in get-risk-scores (8 sources: ACLED, UCDP, outages, climate, cyber, fires, GPS, Iran)
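An illustrative sketch of the seed-first read path; the helper names and the seededAt/data envelope are assumptions, not the repo's actual shapes:

```javascript
// Serve the Redis seed while fresh; serve it stale when live fallback is not
// enabled; only hit the external API when the seed is stale AND the
// SEED_FALLBACK_* flag opted in.
async function readSeedFirst({ redisGet, liveFetch, key, maxAgeMs, fallbackEnabled }) {
  const raw = await redisGet(key);
  if (raw) {
    const seed = JSON.parse(raw);
    const fresh = Date.now() - (seed.seededAt ?? 0) <= maxAgeMs;
    if (fresh || !fallbackEnabled) {
      return { data: seed.data, source: fresh ? 'seed' : 'stale-seed' };
    }
  }
  if (!fallbackEnabled) return { data: null, source: 'none' };
  return { data: await liveFetch(), source: 'live' };
}
```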

c7942b800a feat: Railway CII seed + bootstrap hydration for instant panel render (#984)
* fix: add circuit breaker + bootstrap to CII risk scores
Same pattern as theater posture (#948): replace fragile in-memory cache + manual persistent-cache with circuit breaker (SWR, IndexedDB, cooldown) and bootstrap hydration. Eliminates learning-mode delay on cold start and survives RPC failures without blanking the panel.
* fix: add localStorage sync prime for CII risk scores
getCachedScores() is called synchronously by country-intel.ts as a fallback during learning mode. Without localStorage priming, the breaker's async IndexedDB hydration hasn't run yet and returns null.
- Add shape validator (isValidCiiEntry) for untrusted localStorage data
- Add loadFromStorage/saveToStorage with 24h staleness ceiling
- Prime breaker synchronously at module load from localStorage
- Skip priming for empty cii arrays to avoid cached-empty trap
- Save to localStorage on both bootstrap and RPC success paths
* feat: Railway CII seed + bootstrap hydration for instant panel render
- Add 8-source CII seed to Railway (ACLED, UCDP, outages, climate, cyber, fires, GPS, Iran strikes)
- Neuter Vercel handler to read-only (returns Railway-seeded cache, never recomputes)
- Register riskScores in bootstrap FAST tier for CDN-cached delivery
- Add early CII hydration in data-loader before intelligence signals
- Add CIIPanel.renderFromCached() for instant render from bootstrap data
- Refactor cached-risk-scores.ts: circuit breaker + localStorage sync prime + bootstrap hydration
- Progressive enhancement: cached render → full 18-source local recompute (no spinner)
* fix: remove duplicate riskScores key in BOOTSTRAP_TIERS after merge
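A sketch of the localStorage sync-prime idea from the commit above, with a shape validator and 24h staleness ceiling; the field names (savedAt, iso3, score) are illustrative assumptions:

```javascript
const MAX_AGE_MS = 24 * 60 * 60 * 1000; // 24h staleness ceiling

// Validate untrusted stored entries before trusting them.
function isValidCiiEntry(entry) {
  return !!entry && typeof entry.iso3 === 'string' && Number.isFinite(entry.score);
}

// Parse the raw localStorage payload synchronously at module load, so a
// fallback caller gets data before async IndexedDB hydration has run.
function loadFromStorage(raw, now = Date.now()) {
  try {
    const parsed = JSON.parse(raw);
    if (!parsed || now - parsed.savedAt > MAX_AGE_MS) return null; // too stale
    const cii = Array.isArray(parsed.cii) ? parsed.cii.filter(isValidCiiEntry) : [];
    return cii.length > 0 ? cii : null; // skip empties: cached-empty trap
  } catch {
    return null; // malformed/untrusted data is ignored
  }
}
```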

5709ed45a2 fix: remove smartraveller.gov.au feeds causing 503 errors (#982)
The AU Smartraveller RSS feeds have been consistently returning 503 from both Vercel edge and Railway relay. Remove all references from security-advisories feeds, rss-proxy allowed domains, and relay allowlist.

9b46bf6f73 perf(positive-events): move GDELT fetch to Railway seed, serve from Redis cache (#957)
The GDELT GEO API had a 99.9% timeout rate on Vercel Edge (746 invocations, ~31s of sequential calls vs the 25s edge limit). Move fetching to Railway cron (15min), write to Redis, and have Vercel serve read-only from cache with bootstrap hydration.
- Add startPositiveEventsSeedLoop() to ais-relay.cjs (3 queries, dedup, classify)
- Rewrite handler to cache-read-only pattern (matches UCDP)
- Register bootstrap key in FAST_KEYS for instant first render
- Wire getHydratedData() in data-loader before RPC fallback

a80b462306 perf(oref): reduce proxy bandwidth with gzip + local file persistence (#928)
Add --compressed to all OREF curl requests (~90% bandwidth reduction). Introduce a 3-tier bootstrap: local file (Railway volume) → Redis → upstream, so restarts never need to re-fetch the full AlertsHistory.json through the paid residential proxy. The local file is kept in sync after every poll cycle and upstream bootstrap. The OREF_DATA_DIR env var opts in to local persistence.

6ec076c8d3 test(circuit-breakers): harden regression tests with try/finally and existence guards (#911)
- Wrap all 4 behavioral it() blocks in try/finally so clearAllCircuitBreakers() always runs on assertion failure (P2 — leaked breaker state between tests)
- Add assert.ok(fnStart !== -1) guards for fetchHapiSummary, fetchPositiveGdeltArticles, and fetchGdeltArticles so renames produce a clear diagnostic (P2 — silent false positives)
- Fix misleading comment in seed-wb-indicators.mjs: WLD/EAS are 3-char codes and aren't filtered by iso3.length !== 3 (P3)
- Add timeout-minutes: 10 and permissions: contents: read to the seed GHA workflow (P3)

07aca2c396 feat(conflict): seed 100 Iran events + add 20 geocoding locations (#899)
- Import latest LiveUAMap Iran events (100 events, March 2026)
- Add missing LOCATION_COORDS: Khomein, Markazi, Kashan, Qom, Ahvaz, Dezful, Khorramshahr, Ilam, Laar, Kermanshah, Fujairah, Hermel, Amman, Jeddah, Dhahran, Al Minhad, Galilee, Evin
- Bump cache-bust param _v=8 → _v=9 to bypass stale CDN/IndexedDB

a5b2af8e11 feat(tech-readiness): bootstrap hydration via Railway seed + bootstrap key (#889)
* feat(tech-readiness): bootstrap hydration via Railway seed + bootstrap key
Add pre-computed TechReadiness rankings to the bootstrap payload so the
panel renders immediately on first load instead of waiting for 4 slow
World Bank RPC calls (which can trip circuit breakers on cold starts,
causing persistent "No data available" until the 5-min cooldown expires).
- scripts/seed-wb-indicators.mjs: new Railway seed script that fetches
IT.NET.USER.ZS / IT.CEL.SETS.P2 / IT.NET.BBND.P2 / GB.XPD.RSDV.GD.ZS
for all countries, computes rankings (same weights as the frontend
getTechReadinessRankings), and writes economic:worldbank-techreadiness:v1
to Redis with a 7-day TTL
- api/bootstrap.js: register techReadiness key in BOOTSTRAP_CACHE_KEYS
and SLOW_KEYS (s-maxage=3600, appropriate for annual WB data)
- src/services/economic/index.ts: fast-path in getTechReadinessRankings()
returns getHydratedData('techReadiness') immediately on first page load;
country-specific comparison requests still use live RPCs
* ci: add weekly GHA workflow for WB tech readiness seed

40be228713 fix(cyber): seed cyber threats on Railway + fix Cloudflare 500 errors (#880)
Railway seeding:
- Add full cyber threats seed loop in scripts/ais-relay.cjs (5 IOC sources: Feodo, URLhaus, C2IntelFeeds, AlienVault OTX, AbuseIPDB)
- GeoIP hydration via ipinfo.io → freeipapi.com with FIFO-capped cache (2048)
- Write both the RPC cache key (cyber:threats:v2:0:::) and the bootstrap key (cyber:threats-bootstrap:v2) with 3h TTL
- Register cyberThreats in api/bootstrap.js BOOTSTRAP_CACHE_KEYS + SLOW_KEYS

Cloudflare 500 fixes:
- error-mapper.ts: map SyntaxError → 400 (req.json() on malformed POST body)
- summarize-article.ts: reduce LLM timeout 30s → 25s (was equal to the edge budget)
- intelligence/_shared.ts: reduce UPSTREAM_TIMEOUT_MS 30_000 → 25_000
- cyber/_shared.ts: reduce source/geo timeouts and concurrency to fit the edge budget

e7f5a5b8e5 fix(market): add bootstrap hydration for markets & commodities panels (#867)
Markets and commodities panels showed "Failed to load" because they relied entirely on the listMarketQuotes RPC, while sectors worked via bootstrap hydration. Both also shared a single circuit breaker — 2 transient failures across both calls triggered a 5-minute cooldown.
- Add bootstrap Redis keys (market:stocks-bootstrap:v1 and market:commodities-bootstrap:v1) to the Railway seed and bootstrap API
- Hydrate markets/commodities from bootstrap on page load (same pattern as sectors)
- Split the circuit breaker: separate stockBreaker and commodityBreaker so commodity failures don't kill market retries and vice versa
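A minimal circuit-breaker sketch matching the split described above: each breaker tracks its own failures, so commodity failures can't trip stocks. The thresholds and factory shape are illustrative, not the repo's implementation:

```javascript
// Open after maxFailures consecutive failures; reject fast until the
// cooldown expires; reset the failure count on any success.
function createBreaker({ maxFailures = 2, cooldownMs = 5 * 60_000 } = {}) {
  let failures = 0;
  let openUntil = 0;
  return {
    async execute(fn) {
      if (Date.now() < openUntil) throw new Error('breaker open');
      try {
        const result = await fn();
        failures = 0;
        return result;
      } catch (err) {
        if (++failures >= maxFailures) openUntil = Date.now() + cooldownMs;
        throw err;
      }
    },
  };
}

// Separate instances: one panel's outage cannot cool down the other.
const stockBreaker = createBreaker();
const commodityBreaker = createBreaker();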
||
|
|
e6ab1883ca |
fix(market): parse comma-separated query params and align Railway cache keys (#856)
* fix(market): parse comma-separated query params and align Railway cache keys
Two bugs causing all market panels to show "Failed to load":
1. Sebuf codegen assigns `params.get("symbols")` (a string) to fields
typed as `string[]`. At runtime handlers receive "AAPL,AMZN,..."
instead of ["AAPL","AMZN",...]. This causes:
- `[..."string"]` spreading into characters → garbage Redis cache keys
- `symbols.filter()` → TypeError (strings lack .filter())
- Handlers fall through to catch → return empty `{ quotes: [] }`
2. Frontend routes commodities and sectors through `listMarketQuotes`
RPC (via `fetchMultipleStocks`), constructing Redis keys like
`market:quotes:v1:^VIX,CL=F,...`. But Railway seeds wrote to
`market:commodities:v1:...` and `market:sectors:v1` — different
key prefixes → permanent cache miss → fallback to Yahoo from
Vercel IP → 429 rate limit → empty data.
Fix:
- Add `parseStringArray()` helper that normalizes string|string[] → string[]
- Apply to all market handlers (quotes, commodities, crypto, stablecoins)
- Railway seed now also writes under `market:quotes:v1:` keys matching
what the Vercel handler constructs for commodity and sector symbols
* fix(economic): add 20s client-side timeout to all RPC calls
All EconomicServiceClient calls (FRED, World Bank, EIA, BIS) lacked
AbortSignal timeouts. If Vercel hangs or is slow, the circuit breaker's
execute() awaits forever, keeping panels stuck in "Fetching" state.
Add AbortSignal.timeout(20_000) to every client call so the circuit
breaker can catch the AbortError and fall through to cached/default data.
|
||
|
|
6c4901f5da |
fix(aviation): move AviationStack fetching to Railway relay, reduce to 40 airports (#858)
AviationStack API calls cost ~$100/day because each cache miss triggered 114 individual API calls from Vercel Edge (where isolates don't share in-flight dedup). This moves all AviationStack fetching to the Railway relay (like market data, OREF, UCDP) and reduces to 40 top international hubs (down from 114). - Add AVIATIONSTACK_AIRPORTS constant (40 curated IATA codes) - Add startAviationSeedLoop() to ais-relay.cjs (2h interval, 4h TTL) - Make Vercel handler cache-read-only (getCachedJson + simulation fallback) - Delete Vercel cron (warm-aviation-cache.ts) and remove from vercel.json |
||
|
|
411b015e0b |
fix(market+feeds): Railway market data cron + complete missing tech feed categories (#850)
* fix(tech): add missing dev/ipo/producthunt feed categories + market debug logging Developer, IPO & SPAC, and Product Hunt panels showed UNAVAILABLE on tech.worldmonitor.app because these categories had no server-side feed definitions in _feeds.ts. The client fell back to per-feed RSS proxy mode gated behind a disabled feature flag, resulting in empty panels. - Add dev (4 feeds), ipo (2 feeds), producthunt (1 feed) to server-side VARIANT_FEEDS.tech so the digest endpoint includes them - Add ipo and producthunt to client-side tech variant FEEDS so loadNews() iterates and renders these categories from the digest - Add console.warn logging to Finnhub, Yahoo direct, and Yahoo relay failure paths in _shared.ts (all errors were silently swallowed, making market data debugging impossible) * fix(market+feeds): add Railway market data cron + missing hardware/outages feed categories Market data: Yahoo Finance returns HTTP 429 from Vercel edge IPs. Railway relay has a different IP that Yahoo does not rate-limit. Add periodic seed job (5min interval) that fetches quotes from Finnhub/Yahoo and writes to Redis, so Vercel handlers serve from cache via cachedFetchJson. - seedMarketQuotes: 25 stocks via Finnhub + 3 indices via Yahoo (staggered) - seedCommodityQuotes: 6 commodities via Yahoo (staggered 150ms) - seedSectorSummary: 12 sector ETFs via Finnhub, Yahoo fallback - Redis keys match Vercel handler construction exactly (verified) - TTL 1800s survives 5 missed seed cycles - CHROME_UA hoisted to top-level (was defined after market code) Feed categories: hardware and outages were missing from server-side VARIANT_FEEDS.tech, causing UNAVAILABLE panels on tech.worldmonitor.app. |
||
|
|
67cdf009fd |
fix(relay): add exponential backoff for failing RSS feeds (#853)
RSS feeds that fail (socket hang up, timeout, non-2xx) were retried every 60s indefinitely, hammering broken upstreams. Adds per-feed exponential backoff: 1min → 2min → 4min → 8min → 15min cap. - Separate rssBackoffUntil/rssFailureCount maps (no response cache mutation) - Stale successful data served during backoff (BACKOFF-STALE) - 503 + Retry-After header when no stale data available - Failure count preserved across backoff expiry for fast re-escalation - Reset on success (2xx or 304 revalidation) |
||
|
|
f1faf07144 |
fix(market+tech): Yahoo relay fallback + RSS digest relay for blocked feeds (#835)
* fix(market): route Yahoo Finance through Railway relay to bypass 429 rate limits Yahoo Finance returns 429 from all Vercel edge IPs, causing empty market data across MARKETS, COMMODITIES, and HEATMAP panels. Empty rate-limited responses were also cached at full 8-min TTL, compounding the outage. - Add /yahoo-chart proxy endpoint to Railway relay with 5-min in-memory cache - Add relay fallback to fetchYahooQuote(): direct Yahoo → relay → null - Return null for all empty quote results (120s negative cache vs 480s) * fix: remove unused yahooRateLimited variable * fix(tech-panels): route RSS digest through Railway relay when direct fetch fails Server-side digest builder fetches RSS feeds directly from Vercel edge, but many tech sites (a16z, Stratechery, EU Startups, etc.) block Vercel IPs. This caused vcblogs, regionalStartups, unicorns, accelerators, and policy categories to return 0 items → UNAVAILABLE panels. Add Railway relay fallback to fetchAndParseRss(): direct fetch → on failure → relay /rss proxy → parse. Same pattern as Yahoo chart fix. |
||
|
|
37f07a6af2 |
fix(prod): CORS fallback, rate-limit bump, RSS proxy allowlist (#814)
- Add wildcard CORS headers in vercel.json for /api/* routes so Vercel infra 500s (which bypass edge function code) still include CORS headers - Bump rate limit from 300 to 600 req/60s in both rate-limit files to accommodate dashboard init burst (~30-40 parallel requests) - Add smartraveller.gov.au (bare + www) to Railway relay RSS allowlist - Add redirect hostname validation in fetchWithRedirects to prevent SSRF via open redirects on allowed domains |
||
|
|
9c5ad83651 |
feat(conflict): seed 100 Iran war events and expand geocoder (#792)
Add 26 new locations to seed script geocoder (Beersheba, Akrotiri, Bandar Abbas, Kerman, Natanz, Beirut, Baalbek, Ras Tanura, Ras Laffan, Quneitra, etc.) and bump CDN cache-bust _v=7 → _v=8. |
||
|
|
392349ee27 |
fix(relay): deduplicate UCDP constants crashing Railway container (#766)
PR #760 added a second UCDP implementation block (HTTP relay handler) that redeclared const UCDP_PAGE_SIZE, UCDP_VIOLENCE_TYPE_MAP, and functions ucdpFetchPage/ucdpDiscoverVersion already declared by the Redis seeder block — causing SyntaxError on startup and crash-loop. Rename relay-specific identifiers with RELAY prefix; shared constants (UCDP_PAGE_SIZE, UCDP_TRAILING_WINDOW_MS) are reused from block 1. |
||
|
|
b423995363 |
feat(conflict): wire UCDP (#760)
* feat(conflict): wire UCDP API access token across full stack UCDP API now requires an `x-ucdp-access-token` header. Renames the stub `UC_DP_KEY` to `UCDP_ACCESS_TOKEN` (matching ACLED convention) and wires it through Rust keychain, sidecar allowlist + verification, handler fetch headers, feature toggles, and desktop settings UI. - Rename UC_DP_KEY → UCDP_ACCESS_TOKEN in type system and labels - Add ucdpConflicts feature toggle with required secret - Add UCDP_ACCESS_TOKEN to Rust SUPPORTED_SECRET_KEYS (24→25) - Add sidecar ALLOWED_ENV_KEYS entry + validation with dynamic GED version probing - Handler sends x-ucdp-access-token header when token is present - UC_DP_KEY fallback in handler for one-release migration window - Update .env.example, desktop-readiness, and docs * feat(conflict): pre-fetch UCDP events via Railway cron + Redis cache Replace the 228-line edge handler that fetched UCDP GED API on every request with a thin Redis reader. The heavy fetch logic (version discovery, paginated backward fetch, 1-year trailing window filter) now runs as a setInterval loop in the Railway relay (ais-relay.cjs) every 6 hours, writing to Redis key conflict:ucdp-events:v1. Changes: - Add UCDP seed loop to ais-relay.cjs (6h interval, 6 pages, 2K cap) - Rewrite list-ucdp-events.ts as thin Redis reader (35 lines) - Add conflict:ucdp-events:v1 to bootstrap batch keys - Protect key from cache-purge via durable data prefix - Add manual-only seed-ucdp-events workflow + standalone script - Rename panel "UCDP Events" → "Armed Conflict Events" in locale - Add 24h TTL + 25h staleness check as safety nets |
||
|
|
16673d7110 | fix(desktop-package): detect linux node target from host arch (#742) | ||
|
|
b279e881a2 |
feat(rag): worker-side vector store with opt-in Headline Memory (#675)
* Add Security Advisories panel with government travel alerts (#460) * feat: add Security Advisories panel with government travel advisory feeds Adds a new panel aggregating travel/security advisories from official government foreign affairs agencies (US State Dept, AU DFAT Smartraveller, UK FCDO, NZ MFAT). Advisories are categorized by severity level (Do Not Travel, Reconsider, Caution, Normal) with filter tabs by source country. Includes summary counts, auto-refresh, and persistent caching via the existing data-freshness system. * chore: update package-lock.json * fix: event delegation, localization, and cleanup for SecurityAdvisories panel P1 fixes: - Use event delegation on this.content (bound once in constructor) instead of direct addEventListener after each innerHTML replacement — prevents memory leaks and stale listener issues on re-render - Use setContent() consistently instead of mixing with this.content.innerHTML - Add securityAdvisories translations to all 16 non-English locale files (panels name, component strings, common.all key) - Revert unrelated package-lock.json version bump P2 fixes: - Deduplicate loadSecurityAdvisories — loadIntelligenceData now calls the shared method instead of inlining duplicate fetch+set logic - Add Accept header to fetch calls for better content negotiation * feat(advisories): add US embassy alerts, CDC, ECDC, and WHO health feeds Adds 21 new advisory RSS feeds: - 13 US Embassy per-country security alerts (TH, AE, DE, UA, MX, IN, PK, CO, PL, BD, IT, DO, MM) - CDC Travel Notices - 5 ECDC feeds (epidemiological, threats, risk assessments, avian flu, publications) - 2 WHO feeds (global news, Africa emergencies) Panel gains a Health filter tab for CDC/ECDC/WHO sources. All new domains added to RSS proxy allowlist. i18n "health" key added across all 17 locales. * feat(cache): add negative-result caching to cachedFetchJson (#466) When upstream APIs return errors (HTTP 403, 429, timeout), fetchers return null. 
Previously null results were not cached, causing repeated request storms against broken APIs every refresh cycle. Now caches a sentinel value ('__WM_NEG__') with a short 2-minute TTL on null results. Subsequent requests within that window get null immediately without hitting upstream. Thrown errors (transient) skip sentinel caching and retry immediately. Also filters sentinels from getCachedJsonBatch pipeline reads and fixes theater posture coalescing test (expected 2 OpenSky fetches for 2 theater query regions, not 1). * feat: convert 52 API endpoints from POST to GET for edge caching (#468) * feat: convert 52 API endpoints from POST to GET for edge caching Convert all cacheable sebuf RPC endpoints to HTTP GET with query/path parameters, enabling CDN edge caching to reduce costs. Flatten nested request types (TimeRange, PaginationRequest, BoundingBox) into scalar query params. Add path params for resource lookups (GetFredSeries, GetHumanitarianSummary, GetCountryStockIndex, GetCountryIntelBrief, GetAircraftDetails). Rewrite router with hybrid static/dynamic matching for path param support. Kept as POST: SummarizeArticle, ClassifyEvent, RecordBaselineSnapshot, GetAircraftDetailsBatch, RegisterInterest. Generated with sebuf v0.9.0 (protoc-gen-ts-client, protoc-gen-ts-server). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: add rate_limited field to market response protos The rateLimited field was hand-patched into generated files on main but never declared in the proto definitions. Regenerating wiped it out, breaking the build. Now properly defined in both ListEtfFlowsResponse and ListMarketQuotesResponse protos. 
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * chore: remove accidentally committed .planning files Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * feat: add Cloudflare edge caching infrastructure for api.worldmonitor.app (#471) Route web production RPC traffic through api.worldmonitor.app via fetch interceptor (installWebApiRedirect). Add default Cache-Control headers (s-maxage=300, stale-while-revalidate=60) on GET 200 responses, with no-store override for real-time endpoints (vessel snapshot). Update CORS to allow GET method. Skip Vercel bot middleware for API subdomain using hostname check (non-spoofable, replacing CF-Ray header approach). Update desktop cloud fallback to route through api.worldmonitor.app. * fix(beta): eagerly load T5-small model when beta mode is enabled BETA_MODE now couples the badge AND model loading — the summarization-beta model starts loading on startup instead of waiting for the first summarization call. * fix: move 5 path-param endpoints to query params for Vercel routing (#472) Vercel's `api/[domain]/v1/[rpc].ts` captures one dynamic segment. Path params like `/get-humanitarian-summary/SA` add an extra segment that has no matching route file, causing 404 on both OPTIONS preflight and direct requests. These endpoints were broken in production. 
Changes: - Remove `{param}` from 5 service.proto HTTP paths - Add `(sebuf.http.query)` annotations to request message fields - Update generated client/server code to use URLSearchParams - Update OpenAPI specs (YAML + JSON) to declare query params - Add early-return guards in 4 handlers for missing required params - Add happy.worldmonitor.app to runtime.ts redirect hosts Affected endpoints: - GET /api/conflict/v1/get-humanitarian-summary?country_code=SA - GET /api/economic/v1/get-fred-series?series_id=T10Y2Y&limit=120 - GET /api/market/v1/get-country-stock-index?country_code=US - GET /api/intelligence/v1/get-country-intel-brief?country_code=US - GET /api/military/v1/get-aircraft-details?icao24=a12345 * fix(security-advisories): route feeds through RSS proxy to avoid CORS blocks (#473) - Advisory feeds were fetched directly from the browser, hitting CORS on all 21 feeds (US State Dept, AU Smartraveller, US Embassies, ECDC, CDC, WHO). Route through /api/rss-proxy on web, keep proxyUrl for desktop. - Fix double slash in ECDC Avian Influenza URL (323//feed → 323/feed) - Add feeds.news24.com to RSS proxy allowlist (was returning 403) * feat(cache): tiered edge Cache-Control aligned to upstream TTLs (#474) * fix: move 5 path-param endpoints to query params for Vercel routing Vercel's `api/[domain]/v1/[rpc].ts` captures one dynamic segment. Path params like `/get-humanitarian-summary/SA` add an extra segment that has no matching route file, causing 404 on both OPTIONS preflight and direct requests. These endpoints were broken in production. 
Changes: - Remove `{param}` from 5 service.proto HTTP paths - Add `(sebuf.http.query)` annotations to request message fields - Update generated client/server code to use URLSearchParams - Update OpenAPI specs (YAML + JSON) to declare query params - Add early-return guards in 4 handlers for missing required params - Add happy.worldmonitor.app to runtime.ts redirect hosts Affected endpoints: - GET /api/conflict/v1/get-humanitarian-summary?country_code=SA - GET /api/economic/v1/get-fred-series?series_id=T10Y2Y&limit=120 - GET /api/market/v1/get-country-stock-index?country_code=US - GET /api/intelligence/v1/get-country-intel-brief?country_code=US - GET /api/military/v1/get-aircraft-details?icao24=a12345 * feat(cache): add tiered edge Cache-Control aligned to upstream TTLs Replace flat s-maxage=300 with 5 tiers (fast/medium/slow/static/no-store) mapped per-endpoint to respect upstream Redis TTLs. Adds stale-if-error resilience headers and X-No-Cache plumbing for future degraded responses. X-Cache-Tier debug header gated behind ?_debug query param. * fix(tech): use rss() for CISA feed, drop build from pre-push hook (#475) - CISA Advisories used dead rss.worldmonitor.app domain (404), switch to rss() helper - Remove Vite build from pre-push hook (tsc already catches errors) * fix(desktop): enable click-to-play YouTube embeds + CISA feed fixes (#476) * fix(tech): use rss() for CISA feed, drop build from pre-push hook - CISA Advisories used dead rss.worldmonitor.app domain (404), switch to rss() helper - Remove Vite build from pre-push hook (tsc already catches errors) * fix(desktop): enable click-to-play for YouTube embeds in WKWebView WKWebView blocks programmatic autoplay in cross-origin iframes regardless of allow attributes, Permissions-Policy, mute-first retries, or secure context. Documented all 10 approaches tested in docs/internal/. 
Changes:
- Switch sidecar embed origin from 127.0.0.1 to localhost (secure context)
- Add MutationObserver + retry chain as best-effort autoplay attempts
- Use postMessage('*') to fix tauri://localhost cross-origin messaging
- Make sidecar play overlay non-interactive (pointer-events:none)
- Fix .webcam-iframe pointer-events:none blocking clicks in grid view
- Add expand button to grid cells for switching to single view on desktop
- Add http://localhost:* to CSP frame-src in index.html and tauri.conf.json

* fix(gateway): convert stale POST requests to GET for backwards compat (#477)
Stale cached client bundles still send POST to endpoints converted to GET in PR #468, causing 404s. The gateway now parses the POST JSON body into query params and retries the match as GET.

* feat(proxy): add Cloudflare edge caching for proxy.worldmonitor.app (#478)
Add CDN-Cache-Control headers to all proxy endpoints so Cloudflare can cache responses at the edge independently of browser Cache-Control:
- RSS: 600s edge + stale-while-revalidate=300 (browser: 300s)
- UCDP: 3600s edge (matches browser)
- OpenSky: 15s edge (browser: 30s) for fresher flight data
- WorldBank: 1800s/86400s edge (matches browser)
- Polymarket: 120s edge (matches browser)
- Telegram: 10s edge (matches browser)
- AIS snapshot: 2s edge (matches browser)

Also fixes:
- Vary header merging: sendCompressed/sendPreGzipped now merge existing Vary: Origin instead of overwriting, preventing cross-origin cache poisoning at the edge
- Stale fallback responses (OpenSky, WorldBank, Polymarket, RSS) now set Cache-Control: no-store + CDN-Cache-Control: no-store to prevent edge caching of degraded responses
- All no-cache branches get CDN-Cache-Control: no-store
- /opensky-reset gets no-store (state-changing endpoint)

* fix(sentry): add noise filters for 4 unresolved issues (#479)
- Tighten AbortError filter to match "AbortError: The operation was aborted"
- Filter "The user aborted a request" (normal navigation cancellation)
- Filter UltraViewer service worker injection errors (/uv/service/)
- Filter Huawei WebView __isInQueue__ injection

* feat: configurable VITE_WS_API_URL + harden POST→GET shim (#480)

* fix(gateway): harden POST→GET shim with scalar guard and size limit
- Only convert string/number/boolean values to query params (skip objects, nested arrays, __proto__ etc.) to prevent prototype pollution vectors
- Skip body parsing for Content-Length > 1MB to avoid memory pressure

* feat: make API base URL configurable via VITE_WS_API_URL
Replace hardcoded api.worldmonitor.app with VITE_WS_API_URL env var. When empty, installWebApiRedirect() is skipped entirely — relative /api/* calls stay on the same domain (local installs). When set, browser fetch is redirected to that URL. Also adds VITE_WS_API_URL and VITE_WS_RELAY_URL hostnames to APP_HOSTS allowlist dynamically.

* fix(analytics): use greedy regex in PostHog ingest rewrites (#481)
Vercel's :path* wildcard doesn't match trailing slashes that PostHog SDK appends (e.g. /ingest/s/?compression=...), causing 404s. Switch to :path(.*) which matches all path segments including trailing slashes. Ref: PostHog/posthog#17596

* perf(proxy): increase AIS snapshot edge TTL from 2s to 10s (#482)
With 20k requests/30min (60% of proxy traffic) and per-PoP caching, a 2s edge TTL expires before the next request from the same PoP arrives, resulting in near-zero cache hits. 10s allows same-PoP dedup while keeping browser TTL at 2s for fresh vessel positions.

* fix(markets): commodities panel showing stocks instead of commodities (#483)
The shared circuit breaker (cacheTtlMs: 0) cached the stocks response, then the stale-while-revalidate path returned that cached stocks data for the subsequent commodities fetch. Skip SWR when caching is disabled.
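The POST→GET shim hardening above (scalar guard, size limit) can be sketched roughly like this — all names here are illustrative, not the gateway's actual identifiers, and the 1MB threshold is taken from the commit description:

```typescript
// Sketch of the POST→GET shim's body handling (assumed names).
// Only own, scalar-valued fields become query params — objects, arrays,
// and prototype-ish keys are skipped to close prototype-pollution vectors.
const MAX_SHIM_BODY_BYTES = 1_000_000; // skip parsing bodies over ~1MB

function bodyToQueryParams(rawBody: string): URLSearchParams | null {
  if (new TextEncoder().encode(rawBody).byteLength > MAX_SHIM_BODY_BYTES) return null;
  let parsed: unknown;
  try {
    parsed = JSON.parse(rawBody);
  } catch {
    return null; // not JSON — nothing to convert
  }
  if (typeof parsed !== "object" || parsed === null || Array.isArray(parsed)) return null;
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(parsed)) {
    // JSON.parse creates "__proto__" as an own property; skip it explicitly.
    if (key === "__proto__" || key === "constructor" || key === "prototype") continue;
    const t = typeof value;
    if (t === "string" || t === "number" || t === "boolean") {
      params.set(key, String(value)); // scalars only; nested structures dropped
    }
  }
  return params;
}
```

A converted request would then be retried as `GET /rpc?${params}` against the route table.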
* feat(gateway): complete edge cache tier coverage + degraded-response policy (#484)
- Add 11 missing GET routes to RPC_CACHE_TIER map (8 slow, 3 medium)
- Add response-headers side-channel (WeakMap) so handlers can signal X-No-Cache without codegen changes; wire into military-flights and positive-geo-events handlers on upstream failure
- Add env-controlled per-endpoint tier override (CACHE_TIER_OVERRIDE_*) for incident response rollback
- Add VITE_WS_API_URL hostname allowlist (*.worldmonitor.app + localhost)
- Fix fetch.bind(globalThis) in positive-events-geo.ts (deferred lambda)
- Add CI test asserting every generated GET route has an explicit cache tier entry (prevents silent default-tier drift)

* chore: bump version to 2.5.20 + changelog
Covers PRs #452–#484: Cloudflare edge caching, commodities SWR fix, security advisories panel, settings redesign, 52 POST→GET migrations.

* fix(rss): remove stale indianewsnetwork.com from proxy allowlist (#486)
Feed has no <pubDate> fields and latest content is from April 2022. Not referenced in any feed config — only in the proxy domain allowlist.

* feat(i18n): add Korean (한국어) localization (#487)
- Add ko.json with all 1606 translation keys matching en.json structure
- Register 'ko' in SUPPORTED_LANGUAGES, LANGUAGES display array, and locale map
- Korean appears as 🇰🇷 한국어 in the language dropdown

* feat: add Polish tv livestreams (#488)

* feat(rss): add Axios (api.axios.com/feed) as US news source (#494)
Add api.axios.com to proxy allowlist and CSP connect-src, register Axios feed under US category as Tier 2 mainstream source.

* perf: bootstrap endpoint + polling optimization (#495)

* perf: bootstrap endpoint + polling optimization (phases 3-4)
Replace 15+ individual RPC calls on startup with a single /api/bootstrap batch call that fetches pre-cached data from Redis.
Consolidate 6 panel setInterval timers into the central RefreshScheduler for hidden-tab awareness (10x multiplier) and adaptive backoff (up to 4x for unchanged data). Convert IntelligenceGapBadge from 10s polling to event-driven updates with 60s safety fallback.

* fix(bootstrap): inline Redis + cache keys in edge function
Vercel Edge Functions cannot resolve cross-directory TypeScript imports from server/_shared/. Inline getCachedJsonBatch and BOOTSTRAP_CACHE_KEYS directly in api/bootstrap.js. Add sync test to ensure inlined keys stay in sync with the canonical server/_shared/cache-keys.ts registry.

* test: add Edge Function module isolation guard for all api/*.js files
Prevents any Edge Function from importing from ../server/ or ../src/ which breaks Vercel builds. Scans all 12 non-helper Edge Functions.

* fix(bootstrap): read unprefixed cache keys on all environments
Preview deploys set VERCEL_ENV=preview which caused getKeyPrefix() to prefix Redis keys with preview:<sha>:, but handlers only write to unprefixed keys on production. Bootstrap is a read-only consumer of production cache — always read unprefixed keys.
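The batch hydration step behind getCachedJsonBatch might look like the following sketch — the function name matches the commit, but the body and key names are assumptions; the raw values would come from a single Redis MGET round trip:

```typescript
// Hypothetical sketch of bootstrap hydration: raw strings from one
// Redis MGET are parsed defensively so a single corrupt cache entry
// cannot poison the whole /api/bootstrap payload.
function hydrateBatch(keys: string[], raw: (string | null)[]): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  keys.forEach((key, i) => {
    const value = raw[i];
    if (value == null) return; // missing key → client falls back to a live fetch
    try {
      out[key] = JSON.parse(value);
    } catch {
      // corrupt entry: omit it rather than fail the whole bootstrap call
    }
  });
  return out;
}
```

Missing or unparseable keys are simply absent from the response, which is what lets the client-side getHydratedData consumers fall back to their normal fetch paths.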
* fix(bootstrap): wire sectors hydration + add coverage guard
- Wire getHydratedData('sectors') in data-loader to skip Yahoo Finance fetch when bootstrap provides sector data
- Add test ensuring every bootstrap key has a getHydratedData consumer — prevents adding keys without wiring them

* fix(server): resolve 25 TypeScript errors + add server typecheck to CI
- _shared.ts: remove unused `delay` variable
- list-etf-flows.ts: add missing `rateLimited` field to 3 return literals
- list-market-quotes.ts: add missing `rateLimited` field to 4 return literals
- get-cable-health.ts: add non-null assertions for regex groups and array access
- list-positive-geo-events.ts: add non-null assertion for array index
- get-chokepoint-status.ts: add required fields to request objects
- CI: run `typecheck:api` (tsconfig.api.json) alongside `typecheck` to catch server/ TS errors before merge

* feat(military): server-side military bases 125K + rate limiting (#496)

* feat(military): server-side military bases with 125K entries + rate limiting (#485)
Migrate military bases from 224 static client-side entries to 125,380 server-side entries stored in Redis GEO sorted sets, served via bbox-filtered GEOSEARCH endpoint with server-side clustering.
Data pipeline:
- Pizzint/Polyglobe: 79,156 entries (Supabase extraction)
- OpenStreetMap: 45,185 entries
- MIRTA: 821 entries
- Curated strategic: 218 entries
- 277 proximity duplicates removed

Server:
- ListMilitaryBases RPC with GEOSEARCH + HMGET + tier/filter/clustering
- Antimeridian handling (split bbox queries)
- Blue-green Redis deployment with atomic version pointer switch
- geoSearchByBox() + getHashFieldsBatch() helpers in redis.ts

Security:
- @upstash/ratelimit: 60 req/min sliding window per IP
- IP spoofing fix: prioritize x-real-ip (Vercel-injected) over x-forwarded-for
- Require API key for non-browser requests (blocks unauthenticated curl/scripts)
- Input validation: allowlisted types/kinds, regex country, clamped bbox/zoom

Frontend:
- Viewport-driven loading with bbox quantization + debounce
- Server-side grid clustering at low zoom levels
- Enriched popup with kind, category badges (airforce/naval/nuclear/space)
- Static 224 bases kept as search fallback + initial render

* fix(military): fallback to production Redis keys in preview deployments
Preview deployments prefix Redis keys with `preview:{sha}:` but military bases data is seeded to unprefixed (production) keys. When the prefixed `military:bases:active` key is missing, fall back to the unprefixed key and use raw (unprefixed) keys for geo/meta lookups.

* fix: remove unused 'remaining' destructure in rate-limit (TS6133)

* ci: add typecheck:api to pre-push hook to catch server-side TS errors

* debug(military): add X-Bases-Debug response header for preview diagnostics

* fix(bases): trigger initial server fetch on map load
fetchServerBases() was only called on moveend — if the user never panned/zoomed, the API was never called and only the 224 static fallback bases showed.
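The antimeridian handling mentioned under "Server:" (split bbox queries) boils down to detecting a viewport that crosses the ±180° line and issuing two GEOSEARCH queries instead of one. A minimal sketch, with an assumed bbox shape:

```typescript
// A bbox whose west edge is numerically east of its east edge crosses
// the antimeridian and must be split into two boxes that Redis
// GEOSEARCH (which assumes west <= east) can handle.
interface BBox { west: number; south: number; east: number; north: number }

function splitBBoxAtAntimeridian(box: BBox): BBox[] {
  if (box.west <= box.east) return [box]; // normal box → single query
  return [
    { ...box, east: 180 },  // western half: west edge → +180°
    { ...box, west: -180 }, // eastern half: -180° → east edge
  ];
}
```

The handler would run one GEOSEARCH per returned box and merge the results before clustering.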
* perf(military): debounce base fetches + upgrade edge cache to static tier (#497)
- Add 300ms debounce on moveend to prevent rapid pan flooding
- Fixes stale-bbox bug where pendingFetch returns old viewport data
- Upgrade edge cache tier from medium (5min) to static (1hr) — bases are static infrastructure, aligned with server-side cachedFetchJson TTL
- Keep error logging in catch blocks for production diagnostics

* fix(cyber): make GeoIP centroid fallback jitter deterministic (#498)
Replace Math.random() jitter with DJB2 hash seeded by the threat indicator (IP/URL), so the same threat always maps to the same coordinates across requests while different threats from the same country still spread out. Closes #203

Co-authored-by: Chris Chen <fuleinist@users.noreply.github.com>

* fix: use cross-env for Windows-compatible npm scripts (#499)
Replace direct `VAR=value command` syntax with cross-env/cross-env-shell so dev, build, test, and desktop scripts work on Windows PowerShell/CMD.

Co-authored-by: facusturla <facusturla@users.noreply.github.com>

* feat(live-news): add CBC News to optional North America channels (#502)
YouTube handle @CBCNews with fallback video ID 5vfaDsMhCF4.
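The deterministic jitter from #498 can be sketched as follows — the DJB2 hash is the one named in the commit, but the helper names, spread, and bit-splitting are illustrative:

```typescript
// DJB2 (xor variant) hash of the threat indicator. Same input → same
// hash → same jitter, so a given IP/URL always maps to the same point.
function djb2(input: string): number {
  let hash = 5381;
  for (let i = 0; i < input.length; i++) {
    hash = ((hash * 33) ^ input.charCodeAt(i)) >>> 0; // keep as uint32
  }
  return hash;
}

// Offset a country centroid by up to ±spreadDeg/2 in each axis,
// deriving two pseudo-random values from disjoint 16-bit hash halves.
function jitteredCentroid(lat: number, lon: number, indicator: string, spreadDeg = 1.5) {
  const h = djb2(indicator);
  const u = (h & 0xffff) / 0x10000;
  const v = ((h >>> 16) & 0xffff) / 0x10000;
  return {
    lat: lat + (u - 0.5) * spreadDeg,
    lon: lon + (v - 0.5) * spreadDeg,
  };
}
```

Different indicators from the same country still land on different points, which preserves the visual spread the old Math.random() jitter provided.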
* fix(bootstrap): harden hydration cache + polling review fixes (#504)
- Filter null/undefined values before storing in hydration cache to prevent future consumers using !== undefined from misinterpreting null as valid data
- Debounce wm:intelligence-updated event handler via requestAnimationFrame to coalesce rapid alert generation into a single render pass
- Include alert IDs in StrategicRiskPanel change fingerprint so content changes are detected even when alert count stays the same
- Replace JSON.stringify change detection in ServiceStatusPanel with lightweight name:status fingerprint
- Document max effective refresh interval (40x base) in scheduler

* fix(geo): tokenization-based keyword matching to prevent false positives (#503)

* fix(geo): tokenization-based keyword matching to prevent false positives
Replace String.includes() with tokenization-based Set.has() matching across the geo-tagging pipeline. Prevents false positives like "assad" matching inside "ambassador" and "hts" matching inside "rights".
- Add src/utils/keyword-match.ts as single source of truth
- Decompose possessives/hyphens ("Assad's" → includes "assad")
- Support multi-word phrase matching ("white house" as contiguous)
- Remove false-positive-prone DC keywords ('house', 'us ')
- Update 9 consumer files across geo-hub, map, CII, and asset systems
- Add 44 tests covering false positives, true positives, edge cases

Co-authored-by: karim <mirakijka@gmail.com>
Fixes #324

* fix(geo): add inflection suffix matching + fix test imports
Address code review feedback:

P1a: Add suffix-aware matching for plurals and demonyms so existing keyword lists don't regress (houthi→houthis, ukraine→ukrainian, iran→iranian, israel→israeli, russia→russian, taiwan→taiwanese). Uses curated suffix list + e-dropping rule to avoid false positives.

P1b: Expand conflictTopics arrays in DeckGLMap and Map with demonym forms so "Iranian senate..." correctly registers as conflict topic.
P2: Replace inline test functions with real module import via tsx. Tests now exercise the production keyword-match.ts directly.

* fix: wire geo-keyword tests into test:data command
The .mts test file wasn't covered by `node --test tests/*.test.mjs`. Add `npx tsx --test tests/*.test.mts` so test:data runs both suites.

* fix: cross-platform test:data + pin tsx in devDependencies
- Use tsx as test runner for both .mjs and .mts (single invocation)
- Removes ; separator which breaks on Windows cmd.exe
- Add tsx to devDependencies so it works in offline/CI environments

* fix(geo): multi-word demonym matching + short-keyword suffix guard
- Add wordMatches() for suffix-aware phrase matching so "South Korean" matches keyword "south korea" and "North Korean" matches "north korea"
- Add MIN_SUFFIX_KEYWORD_LEN=4 guard so short keywords like "ai", "us", "hts" only do exact-match (prevents "ais"→"ai", "uses"→"us" false positives)
- Add 5 new tests covering both fixes (58 total, all passing)

* fix(geo): support plural demonyms in keyword matching
Add compound suffixes (ians, eans, ans, ns, is) to handle plural demonym forms like "Iranians"→"iran", "Ukrainians"→"ukraine", "Russians"→"russia", "Israelis"→"israel". Adds 5 new tests (63 total).

---------

Co-authored-by: karim <mirakijka@gmail.com>

* chore: strip 61 debug console.log calls from 20 service files (#501)

* chore: strip 61 debug console.log calls from services
Remove development/tracing console.log statements from 20 files. These add noise to production browser consoles and increase bundle size.
Preserved: all console.error (error handling) and console.warn (warnings).
Preserved: debug-gated logs in runtime.ts (controlled by verbose flag).
Removed: debugInjectTestEvents() from geo-convergence.ts (test-only code).
Removed: logSummary()/logReport() methods that were pure console.log wrappers.
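The core of the tokenization-based matching from #503 can be sketched as below. This is a minimal illustration of the idea; the real src/utils/keyword-match.ts additionally layers suffix/demonym handling (P1a) and the short-keyword guard on top:

```typescript
// Exact-token matching instead of String.includes(): "assad" can no
// longer match inside "ambassador", and multi-word phrases like
// "white house" must appear as contiguous tokens.
function tokenize(text: string): string[] {
  return text
    .toLowerCase()
    .replace(/'s\b/g, "") // possessives: "assad's" → "assad"
    .split(/[^a-z0-9]+/)  // split on every non-alphanumeric run
    .filter(Boolean);
}

function matchesKeyword(text: string, keyword: string): boolean {
  const tokens = tokenize(text);
  const phrase = tokenize(keyword);
  if (phrase.length === 0) return false;
  if (phrase.length === 1) return tokens.includes(phrase[0]);
  // Multi-word phrase: must appear as a contiguous token run.
  for (let i = 0; i + phrase.length <= tokens.length; i++) {
    if (phrase.every((word, j) => tokens[i + j] === word)) return true;
  }
  return false;
}
```

In the production version, single-token lookups would go through a Set built once per keyword list, which is what makes the per-article matching cheap.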
* fix: remove orphaned stubs and remaining debug logs from stripped services
- Remove empty logReport() method and unused startTime variable (parallel-analysis.ts)
- Remove orphaned console.group/console.groupEnd pair (parallel-analysis.ts)
- Remove empty logSignalSummary() export (signal-aggregator.ts)
- Remove logSignalSummary import/call and 3 remaining console.logs (InsightsPanel.ts)
- Remove no-op logDirectFetchBlockedOnce() and dead infrastructure (prediction/index.ts)

* fix: generalize Vercel preview origin regex + include filters in bases cache key (#506)
- api/_api-key.js: preview URL pattern was user-specific (-elie-), rejecting other collaborators' Vercel preview deployments. Generalized to match any worldmonitor-*.vercel.app origin.
- military-bases.ts: client cache key only checked bbox/zoom, ignoring type/kind/country filters. Switching filters without panning returned stale results. Unified into single cacheKey string.

* fix(prediction): filter stale/expired markets from Polymarket panel (#507)
Prediction panel was showing expired markets (e.g. "Will US strike Iran on Feb 9" at 0%). Root causes: no active/archived API filters, no end_date_min param, no client-side expiry guard, and sub-market selection picking highest volume before filtering expired ones.
- Add active=true, archived=false, end_date_min API params to all 3 Gamma API call sites (events, markets, probe)
- Pre-filter sub-markets by closed/expired BEFORE volume selection in both fetchPredictions() and fetchCountryMarkets()
- Add defense-in-depth isExpired() client-side filter on final results
- Propagate endDate through all market object paths including sebuf fallback
- Show expiry date in PredictionPanel UI with new .prediction-meta layout
- Add "closes" i18n key to all 18 locale files
- Add endDate to server handler GammaMarket/GammaEvent interfaces and map to proto closesAt field

* fix(relay): guard proxy handlers against ERR_HTTP_HEADERS_SENT crash (#509)
Polymarket and World Bank proxy handlers had unguarded res.writeHead() calls in error/timeout callbacks that race with the response callback. When upstream partially responds then times out, both paths write headers → process crash. Replace 5 raw writeHead+end calls with safeEnd() which checks res.headersSent before writing.

* feat(breaking-news): add active alert banner with audio for critical/high RSS items (#508)
RSS items classified as critical/high threat now trigger a full-width breaking news banner with audio alert, auto-dismiss (60s/30s by severity), visibility-aware timer pause, dedup, and a toggle in the Intelligence Findings dropdown.

* fix(sentry): filter Android OEM WebView bridge injection errors (#510)
Add ignoreErrors pattern for LIDNotifyId, onWebViewAppeared, and onGetWiFiBSSID — native bridge functions injected by Lenovo/Huawei device SDKs into Chrome Mobile WebView. No stack frames in our code.

* chore: add validated telegram channels list (global + ME + Iran + cyber) (#249)

* feat(conflict): add Iran Attacks map layer + strip debug logs (#511)

* chore: strip 61 debug console.log calls from services
Remove development/tracing console.log statements from 20 files. These add noise to production browser consoles and increase bundle size.
Preserved: all console.error (error handling) and console.warn (warnings).
Preserved: debug-gated logs in runtime.ts (controlled by verbose flag).
Removed: debugInjectTestEvents() from geo-convergence.ts (test-only code).
Removed: logSummary()/logReport() methods that were pure console.log wrappers.

* fix: remove orphaned stubs and remaining debug logs from stripped services
- Remove empty logReport() method and unused startTime variable (parallel-analysis.ts)
- Remove orphaned console.group/console.groupEnd pair (parallel-analysis.ts)
- Remove empty logSignalSummary() export (signal-aggregator.ts)
- Remove logSignalSummary import/call and 3 remaining console.logs (InsightsPanel.ts)
- Remove no-op logDirectFetchBlockedOnce() and dead infrastructure (prediction/index.ts)

* feat(conflict): add Iran Attacks map layer
Adds a new Iran-focused conflict events layer that aggregates real-time events, geocodes via 40-city lookup table, caches 15min in Redis, and renders as a toggleable DeckGL ScatterplotLayer with severity coloring.
- New proto + codegen for ListIranEvents RPC
- Server handler with HTML parsing, city geocoding, category mapping
- Frontend service with circuit breaker
- DeckGL ScatterplotLayer with severity-based color/size
- MapPopup with sanitized source links
- iranAttacks toggle across all variants, harnesses, and URL state

* fix: resolve bootstrap 401 and 429 rate limiting on page init (#512)
Same-origin browser requests don't send Origin header (per CORS spec), causing validateApiKey to reject them. Extract origin from Referer as fallback. Increase rate limit from 60 to 200 req/min to accommodate the ~50 requests fired during page initialization.

* fix(relay): prevent Polymarket OOM via request deduplication (#513)
Concurrent Polymarket requests for the same cache key each fired independent https.get() calls. With 12 categories × multiple clients, 740 requests piled up in 10s, all buffering response bodies → 4.1GB heap → OOM crash on Railway.
Fix: in-flight promise map deduplicates concurrent requests to the same cache key. 429/error responses are negative-cached for 30s to prevent retry storms.

* fix(threat-classifier): add military/conflict keyword gaps and news-to-conflict bridge (#514)
Breaking news headlines like "Israel's strike on Iran" were classified as info level because the keyword classifier lacked standalone conflict phrases. Additionally, the conflict instability score depended solely on ACLED data (1-7 day lag) with no bridge from real-time breaking news.
- Add 3 critical + 18 high contextual military/conflict keywords
- Preserve threat classification on semantically merged clusters
- Add news-derived conflict floor when ACLED/HAPI report zero signal
- Upsert news events by cluster ID to prevent duplicates
- Extract newsEventIndex to module-level Map for serialization safety

* fix(breaking-news): let critical alerts bypass global cooldown and replace HIGH alerts (#516)
Global cooldown (60s) was blocking critical alerts when a less important HIGH alert fired from an earlier RSS batch. Added priority-aware cooldown so critical alerts always break through. Banner now auto-dismisses HIGH alerts when a CRITICAL arrives. Added Iran/strikes keywords to classifier.

* fix(rate-limit): increase sliding window to 300 req/min (#515)
App init fires many concurrent classify-event, summarize-article, and record-baseline-snapshot calls, exhausting the 200/min limit and causing 429s. Bump to 300 as a temporary measure while client-side batching is implemented.

* fix(breaking-news): fix fake pubDate fallback and filter noisy think-tank alerts (#517)
Two bugs caused a stale CrisisWatch article to fire as a breaking alert:
1. Non-standard pubDate format ("Friday, February 27, 2026 - 12:38") failed to parse → fallback was `new Date()` (NOW) → day-old articles appeared as "just now" and passed the recency gate on every fetch
2. Tier 3+ sources (think tanks) firing alerts on keyword-only matches like "War" in policy analysis titles — too noisy for breaking alerts

Fix: parsePubDate() handles non-standard formats and falls back to epoch (not now). Tier 3+ sources require LLM classification to fire.

* fix: make iran-events handler read-only from Redis (#518)
Remove server-side LiveUAMap scraper (blocked by Cloudflare 403 on Vercel IPs). Handler now reads pre-populated Redis cache pushed from local browser scraping. Change cache tier from slow to fast to prevent CDN from serving stale empty responses for 30+ minutes.

* fix(relay): Polymarket circuit breaker + concurrency limiter (OOM fix) (#519)

* fix(rate-limit): increase sliding window to 300 req/min
App init fires many concurrent classify-event, summarize-article, and record-baseline-snapshot calls, exhausting the 200/min limit and causing 429s. Bump to 300 as a temporary measure while client-side batching is implemented.

* fix(relay): add Polymarket circuit breaker + concurrency limiter to prevent OOM
Railway relay OOM crash: 280 Polymarket 429 errors in 8s, heap hit 3.7GB. Multiple unique cache keys bypassed per-key dedup, flooding upstream.
- Circuit breaker: trips after 5 consecutive failures, 60s cooldown
- Concurrent upstream limiter: max 3 simultaneous requests
- Negative cache TTL: 30s → 60s to reduce retry frequency
- Upstream slot freed on response.on('end'), not headers, preventing body buffer accumulation past the concurrency cap

* fix(relay): guard against double-finalization on Polymarket timeout
request.destroy() in timeout handler also fires request.on('error'), causing double decrement of polymarketActiveUpstream (counter goes negative, disabling concurrency cap) and double circuit breaker trip. Add finalized guard so decrement + failure accounting happens exactly once per request regardless of which error path fires first.
* fix(threat-classifier): stagger AI classification requests to avoid Groq 429 (#520)
flushBatch() fired up to 20 classifyEvent RPCs simultaneously via Promise.all, instantly hitting Groq's ~30 req/min rate limit.
- Sequential execution with 2s min-gap between requests (~28 req/min)
- waitForGap() enforces hard floor + jitter across batch boundaries
- batchInFlight guard prevents concurrent flush loops
- 429/5xx: requeue failed job (with retry cap) + remaining untouched jobs
- Queue cap at 100 items with warn on overflow

* fix(relay): regenerate package-lock.json with telegram dependency
The lockfile was missing resolved entries for the telegram package, causing Railway to skip installation despite it being in package.json.

* chore: trigger deploy to flush CDN cache for iran-events endpoint

* Revert "fix(relay): regenerate package-lock.json with telegram dependency"
This reverts commit
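The waitForGap() pacing from #520 might look roughly like this — the function name comes from the commit, but the body and jitter amount are assumptions:

```typescript
// Pure helper: how long to sleep so consecutive classify calls stay at
// least minGapMs apart. Kept separate from the clock for testability.
const MIN_GAP_MS = 2000; // ~28 req/min, under Groq's ~30 req/min limit

function gapDelayMs(lastRequestAt: number, now: number, minGapMs = MIN_GAP_MS): number {
  return Math.max(0, lastRequestAt + minGapMs - now);
}

let lastRequestAt = 0;
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Called before each sequential classifyEvent RPC; jitter spreads
// requests across batch boundaries so flushes don't align.
async function waitForGap(): Promise<void> {
  const jitter = Math.random() * 250;
  await sleep(gapDelayMs(lastRequestAt, Date.now()) + jitter);
  lastRequestAt = Date.now();
}
```

Because lastRequestAt survives between flushBatch() invocations, the 2s floor holds across batch boundaries, not just within one batch.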
98150d639d
feat: persist OREF history to Redis + retry bootstrap (#674)
* feat: persist OREF history to Redis + retry bootstrap on failure
OREF history was lost on every container restart — single curl call with no retry, no persistence. Panel showed "0 alerts" until history re-accumulated over hours.

Changes to scripts/ais-relay.cjs:
- Add Upstash Redis REST helpers (upstashGet/upstashSet) using https.request with HTTPS-only validation, 5s timeout, never-throw semantics
- Persist history to Redis after each poll mutation (version-deduped, concurrent-guarded, 200-wave hard cap, 7d TTL matching purge window)
- Bootstrap from Redis first on startup (schema validation, 7d purge filter, 24h count recompute, lastAlertsJson seeding to prevent duplicate waves)
- Fall through to upstream retry if Redis data is empty or all stale
- Upstream retry: 3 attempts with exponential backoff + jitter (~70s budget)
- Expose redisEnabled + bootstrapSource in /health endpoint
- Preserves main's totalHistoryCount field and 7-day history retention

Failure matrix:
- UP/UP: Redis first (instant), poll refreshes + persists
- UP/DOWN: Bootstrap from upstream, persist fails silently
- DOWN/UP: Bootstrap from Redis cache
- DOWN/DOWN: 3 retries then empty history

* fix: increment persist version after upstream bootstrap to seed Redis
Without this, _persistVersion stays at 0 after bootstrap, matching _lastPersistedVersion — orefPersistHistory() skips the write. A restart before any new alerts would lose all bootstrapped history.
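The "3 attempts with exponential backoff + jitter" retry can be sketched as below. The base delay and jitter formula are assumptions chosen to roughly fit the ~70s budget mentioned above; the relay itself is plain JavaScript with the same shape:

```typescript
// Exponential backoff with jitter: delays grow per attempt, and the
// random component de-synchronizes simultaneous container restarts.
function backoffDelayMs(attempt: number, baseMs = 5000): number {
  const exponential = baseMs * 2 ** attempt; // 5s, 10s, 20s, ...
  return exponential + Math.random() * baseMs;
}

async function fetchWithRetry<T>(fn: () => Promise<T>, attempts = 3, baseMs = 5000): Promise<T | null> {
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch {
      if (attempt < attempts - 1) {
        await new Promise((r) => setTimeout(r, backoffDelayMs(attempt, baseMs)));
      }
    }
  }
  return null; // DOWN/DOWN row of the failure matrix: give up, start empty
}
```

Returning null instead of throwing matches the never-throw posture described for the Redis helpers: the poller starts with empty history rather than crashing the relay.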
cae3c08436
feat(oref): add 1,478 Hebrew→English location translations + wire sirens into breaking news banner (#661)
- Generate static location map from eladnava/pikud-haoref-api cities.json (1,478 entries)
- Lazy-load translations in oref-alerts.ts with retry on failure
- Add dispatchOrefBreakingAlert() with stable dedupe key and global cooldown bypass
- Wire oref siren alerts into breaking news banner on initial fetch and polling updates
c6b94a55bf
fix(oref): grab newest history records and preserve bootstrap data (#653)
OREF AlertsHistory.json returns records newest-first, but the bootstrap used .slice(-500) which took the oldest 500 — all outside the 24h window. The poll loop then purged them all, leaving historyCount24h = 0.

Three fixes:
- Use .slice(0, 500) to take the newest 500 records from OREF history
- Extend history purge from 24h to 7 days so bootstrap data persists
- Add totalHistoryCount field for badge fallback when 24h count is zero
88215cb517
feat: add Redis caching for GPS jamming data (#646)
36e36d8b57
Cost/traffic hardening, runtime fallback controls, and PostHog removal (#638)
- Remove PostHog analytics runtime and configuration
- Add API rate limiting (api/_rate-limit.js)
- Harden traffic controls across edge functions
- Add runtime fallback controls and data-loader improvements
- Add military base data scripts (fetch-mirta-bases, fetch-osm-bases)
- Gitignore large raw data files
- Settings playground prototypes
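The sliding-window rate limiting added here (and later tuned in #512/#515) can be illustrated with an in-memory sketch. The production code runs against Redis via @upstash/ratelimit; everything below — names, the per-IP timestamp list — is illustrative only:

```typescript
// In-memory sliding window: keep each IP's request timestamps, drop
// those older than the window, reject once the count hits the limit.
const WINDOW_MS = 60_000;
const LIMIT = 300; // per #515: raised from 200 to absorb app-init bursts

const hits = new Map<string, number[]>(); // ip → request timestamps

function allowRequest(ip: string, now = Date.now(), limit = LIMIT): boolean {
  const windowStart = now - WINDOW_MS;
  const recent = (hits.get(ip) ?? []).filter((t) => t > windowStart);
  if (recent.length >= limit) {
    hits.set(ip, recent);
    return false; // caller responds 429
  }
  recent.push(now);
  hits.set(ip, recent);
  return true;
}
```

A Redis-backed version is necessary in production because each edge function instance has its own memory; a shared store is the only way the count is global per IP.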
11f241ea5f
feat(rss): add conditional GET (ETag/If-Modified-Since) to Railway relay (#625)
When RSS cache expires, send If-None-Match/If-Modified-Since headers on revalidation. Upstream 304 responses refresh the cache timestamp and serve cached body with zero bandwidth, cutting egress ~80-95% for feeds that support conditional GET. |
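The conditional-GET bookkeeping described above amounts to storing the upstream validators next to the cached body and replaying them on revalidation. A sketch with an assumed cache-entry shape:

```typescript
// Cache entry carries the validators the upstream sent with the body.
interface CacheEntry { body: string; etag?: string; lastModified?: string; fetchedAt: number }

// Headers to send when revalidating an expired entry.
function revalidationHeaders(entry: CacheEntry): Record<string, string> {
  const headers: Record<string, string> = {};
  if (entry.etag) headers["If-None-Match"] = entry.etag;
  if (entry.lastModified) headers["If-Modified-Since"] = entry.lastModified;
  return headers;
}

// 304 → body unchanged upstream: refresh the timestamp, zero bandwidth.
// 200 → replace the body. Anything else → keep serving the stale body.
function applyRevalidation(entry: CacheEntry, status: number, freshBody: string | null, now: number): CacheEntry {
  if (status === 304) return { ...entry, fetchedAt: now };
  if (status === 200 && freshBody !== null) return { ...entry, body: freshBody, fetchedAt: now };
  return entry;
}
```

The egress saving comes entirely from the 304 branch: the feed body (often tens of KB) never leaves the upstream server when it hasn't changed.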
45d9bac10c
feat(oref): show history waves timeline with translation and NaN fix (#618)
- Fetch and display alert history waves in OrefSirensPanel (cap 50 most recent)
- Last-hour waves highlighted with amber border and RECENT badge
- Translate Hebrew history alerts via existing translateAlerts pipeline
- Guard formatAlertTime/formatWaveTime against NaN from unparseable OREF dates
- Cap relay history bootstrap to 500 records
- Add 3-minute TTL to prevent re-fetching history on every 10s poll
- Remove dead .oref-footer/.oref-history CSS; add i18n key for history summary
c19b4ecdd0
fix(relay): increase OREF curl maxBuffer to prevent ENOBUFS (#609)
* fix(relay): increase OREF curl maxBuffer to 10MB to prevent ENOBUFS
The AlertsHistory.json response exceeds execFileSync's default 1MB buffer, causing spawnSync ENOBUFS on the Railway container at startup.

* fix(relay): use curl -o tmpfile for OREF history instead of stdout buffer
Large AlertsHistory.json overflows the execFileSync stdout buffer (ENOBUFS). Now writes to a temp file via curl -o, reads with fs.readFileSync, cleans up. Live alerts (tiny payload) still use the stdout path.
506135f716
fix(aviation): route NOTAM through relay + improve intl logging (#599)
Root cause: the ICAO NOTAM API times out from Vercel edge (>10s), and AviationStack alerts were indistinguishable from simulated ones in the logs.

Changes:
- Add /notam proxy endpoint to Railway relay (25s timeout, 30min cache)
- Route fetchNotamClosures through relay when WS_RELAY_URL is set
- Fall back to direct ICAO call (20s timeout) when no relay
- Log cache hits with real vs simulated alert counts
- Send all MENA airports in single NOTAM request (was batched by 20)

Requires: ICAO_API_KEY env var on Railway relay
a2a18f0646
fix(polymarket): add queue backpressure and response limit slicing (#593)
- Add POLYMARKET_MAX_QUEUED=20 cap to prevent unbounded queue growth under sustained load (rejects with negative cache when full)
- Use requestedLimit to slice cached responses — callers requesting limit=20 now get 20 items instead of the full 50-item upstream payload
- Hoist PROXY_STRIP_KEYS Set to module level (avoids per-call allocation)
560fb685aa
fix(relay): stop Polymarket cache stampede from concurrent limit + CDN bypass (#592)
Three issues caused a continuous MISS every 5 seconds:

1. Concurrent limit rejection poisoned the cache: 11 tags fire via Promise.all but POLYMARKET_MAX_CONCURRENT=3, so 8 tags got negative-cached with an empty [] (5 min TTL). Those 8 tags NEVER got a positive cache because they were always throttled. Fix: replace reject-with-negative-cache with a proper queue — excess requests wait for a slot instead of being silently rejected.

2. Cache key fragmentation: fetchPredictions(limit=20) and fetchCountryMarkets(limit=30) created separate cache entries for the same tag. Fix: normalize to a canonical limit=50 upstream; the cache key is shared regardless of the caller's requested limit.

3. CDN bypass: the end_date_min timestamp in the query string made every URL unique, preventing Vercel CDN caching entirely. Fix: strip end_date_min, active, archived from proxy params (the relay ignores them anyway).
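The "proper queue" from fix 1, combined with the MAX_QUEUED cap added in #593, is essentially a counting semaphore with a bounded waiter list. A sketch with assumed names (the relay's values POLYMARKET_MAX_CONCURRENT=3 and POLYMARKET_MAX_QUEUED=20 come from the commits):

```typescript
// Excess requests wait for a slot instead of being negative-cached;
// only requests beyond the queue cap are rejected outright.
const MAX_CONCURRENT = 3;
const MAX_QUEUED = 20;

let active = 0;
const waiters: Array<() => void> = [];

function acquireSlot(): Promise<void> {
  if (active < MAX_CONCURRENT) {
    active++;
    return Promise.resolve();
  }
  if (waiters.length >= MAX_QUEUED) {
    return Promise.reject(new Error("queue full")); // this one gets negative-cached
  }
  return new Promise((resolve) => waiters.push(() => { active++; resolve(); }));
}

function releaseSlot(): void {
  active--;
  const next = waiters.shift(); // hand the freed slot to the oldest waiter
  if (next) next();
}
```

A caller wraps its upstream fetch in `await acquireSlot(); try { ... } finally { releaseSlot(); }`, which is what keeps concurrency bounded even when requests fail.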
bd5e05e236
fix(relay): delay Telegram connect 60s on startup to prevent AUTH_KEY_DUPLICATED (#587)
Railway zero-downtime deploys start the new container before the old one receives SIGTERM. Both containers connect with the same session string simultaneously, triggering Telegram's AUTH_KEY_DUPLICATED which permanently invalidates the session. A 60s startup delay gives the old container time to disconnect gracefully. Configurable via TELEGRAM_STARTUP_DELAY_MS env. |
ff98e3eac7
feat: add GPS/GNSS jamming map layer + CII integration (#570)
* feat: add GPS/GNSS jamming data ingestion from gpsjam.org
- scripts/fetch-gpsjam.mjs: standalone fetcher that downloads daily H3 hex data, filters medium/high interference, converts to lat/lon via h3-js, and writes JSON. Can be run on cron.
- api/gpsjam.js: Vercel Edge Function that proxies gpsjam.org data with 1hr cache, returns medium/high hexes for frontend consumption.
- src/services/gps-interference.ts: frontend service that fetches from the Edge API, converts H3→lat/lon, and classifies by conflict region.
- h3-js added as dependency for hex→coordinate conversion.

* feat: add GPS jamming map layer, CII integration, and country brief signals
Wire gpsjam.org data into map visualization, instability scoring, and country intelligence. ScatterplotLayer renders high (red) and medium (orange) interference hexes. CII security score incorporates jamming counts per country via h3→country geocoding with cache. Country briefs show jamming zone chip. Full i18n across 18 locales including popup labels. Data loads with intelligence signals cycle (15min), gated by 1hr client-side cache.
||
|
|
81f82cdfe9 |
feat(relay): bootstrap OREF 24h history on startup (#582)
* fix(relay): improve OREF curl error logging with stderr capture
The -s flag silenced curl errors. Add -S to show errors, capture stderr via stdio pipes, and log curl's actual error message instead of the generic "Command failed" from execFileSync.
* feat(relay): bootstrap OREF 24h history on startup and add missing headers
- Fetch AlertsHistory.json once on startup to populate orefState.history immediately instead of starting empty
- Add the X-Requested-With: XMLHttpRequest header required by Akamai WAF
- Add an IST→UTC date converter handling DST ambiguity
- Redact proxy credentials from error logs and client responses
- Fix historyCount24h to count individual alert records, not snapshots
- Guard the historyCount24h reducer for both array and string data shapes
- Add input validation to orefDateToUTC for malformed dates |
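One way the IST→UTC converter with input validation could look is sketched below. Only the function name `orefDateToUTC` comes from the commit; the input shape ("YYYY-MM-DD HH:mm:ss"), the round-trip check, and the offset handling are assumptions. Israel is UTC+2 in standard time and UTC+3 in daylight time, so the sketch tries both offsets and keeps the one that round-trips through the Asia/Jerusalem time zone.

```javascript
// Convert an Israel-local timestamp string to a UTC Date.
// Returns null for malformed input or a nonexistent local time
// (the spring-forward gap). Assumed input: "YYYY-MM-DD HH:mm:ss".
function orefDateToUTC(dateStr) {
  const m = /^(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})$/.exec(String(dateStr));
  if (!m) return null; // input validation for malformed dates
  const [Y, Mo, D, H, Mi, S] = m.slice(1).map(Number);
  for (const offset of [2, 3]) { // IST standard, then daylight
    const utcMs = Date.UTC(Y, Mo - 1, D, H - offset, Mi, S);
    // 'sv-SE' formats as "YYYY-MM-DD HH:mm:ss"; if the candidate UTC
    // instant renders back to the input in Asia/Jerusalem, the offset fits.
    const roundTrip = new Date(utcMs).toLocaleString('sv-SE', {
      timeZone: 'Asia/Jerusalem',
    });
    if (roundTrip === dateStr) return new Date(utcMs);
  }
  return null;
}
```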
||
|
|
b99e9c0efb |
fix(relay): add timeouts and logging to Telegram poll loop (#578)
GramJS getEntity/getMessages have no built-in timeout. When the first channel hangs (FLOOD_WAIT, MTProto stall), telegramPollInFlight stays true forever, blocking all future polls — zero messages collected, zero errors logged, and the frontend shows "No messages available".
- Add a 15s per-channel timeout on getEntity + getMessages calls
- Add a 3-min overall poll-cycle timeout
- Force-clear a stuck in-flight flag after 3.5 minutes
- Detect FLOOD_WAIT errors and break the loop early
- Log a per-cycle summary: channels polled, new msgs, errors, duration
- Track media-only messages separately (no text → not a bug)
- Expose lastError, pollInFlight, pollInFlightSince on the /status endpoint |
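A timeout wrapper for calls that never settle on their own can be sketched with Promise.race. This is the generic pattern, not the relay's exact implementation; `withTimeout` is a hypothetical helper name.

```javascript
// Race a promise against a timer so a stalled call (e.g. a hung
// GramJS getEntity/getMessages) rejects instead of hanging forever.
function withTimeout(promise, ms, label) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms,
    );
  });
  // Clear the timer either way so it cannot keep the process alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

A per-channel call would then look like `await withTimeout(client.getMessages(entity), 15_000, 'getMessages')`, letting the poll loop log the error and move on to the next channel.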
||
|
|
e08b7b1673 |
fix(relay): replace nixpacks.toml with railpack.json for curl (#571)
Railway uses Railpack (not Nixpacks). nixpacks.toml in scripts/ was silently skipped. Use railpack.json at repo root with deploy.aptPackages to install curl at runtime for OREF polling. |
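Based on the commit's description (repo-root railpack.json with a deploy.aptPackages key), the config would look roughly like this; the exact Railpack schema beyond that key is not specified in the commit.

```json
{
  "deploy": {
    "aptPackages": ["curl"]
  }
}
```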
||
|
|
e26f7cae52 |
fix(relay): increase Polymarket cache TTL to 10 minutes (#568)
* fix(relay): increase Polymarket cache TTL to 10 minutes
All requests were MISS with the 2-min TTL under concurrent load. Bump to a 10-min cache and a 5-min negative cache to reduce upstream pressure.
* fix(relay): normalize Polymarket cache key from canonical params
Using raw url.search as the cache key meant ?tag=fed&endpoint=events and ?endpoint=events&tag_slug=fed produced different keys for the same upstream request — defeating both the cache and inflight dedup, and causing 121 MISS entries in 3 seconds. Build the cache key from parsed canonical params (endpoint + sorted query string) so all equivalent requests share one cache entry. |
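The canonical-key construction could be sketched as below. The `endpoint`, `tag`, and `tag_slug` param names come from the commit message; the aliasing of `tag` to `tag_slug` and the sorting details are illustrative assumptions.

```javascript
// Build a cache key from parsed params (endpoint + sorted query
// string) rather than raw url.search, so equivalent requests
// collapse onto one cache entry.
function canonicalCacheKey(rawUrl) {
  const url = new URL(rawUrl, 'http://relay.local');
  const endpoint = url.searchParams.get('endpoint') ?? '';
  const params = [];
  for (const [key, value] of url.searchParams) {
    if (key === 'endpoint') continue;
    // tag and tag_slug address the same upstream filter (assumption)
    params.push([key === 'tag' ? 'tag_slug' : key, value]);
  }
  params.sort(([a], [b]) => a.localeCompare(b));
  return `${endpoint}?${params.map(([k, v]) => `${k}=${v}`).join('&')}`;
}
```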
||
|
|
691810e3cd | fix(relay): install curl in Railway container for OREF polling (#567) | ||
|
|
8d64830525 | fix(relay): upstreamWs → upstreamSocket in graceful shutdown (#565) | ||
|
|
afc5a7a34a |
fix(relay): replace smart quotes crashing relay on startup (#563)
* fix(relay): replace Unicode smart quotes crashing the Node.js CJS parser
* fix(relay): await Telegram disconnect + guard startup poll |
||
|
|
6476b91ed2 |
fix(relay): add graceful shutdown + poll concurrency guard for Telegram (#562)
- SIGTERM/SIGINT handler disconnects the Telegram client before the container dies
- telegramPollInFlight guard prevents overlapping poll cycles
- Mid-poll AUTH_KEY_DUPLICATED now permanently disables (was a reconnect loop) |
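The guard plus shutdown-handler pattern can be sketched as follows. `telegramPollInFlight` is the flag named in the commit; everything around it (function names, the `client` stand-in) is illustrative.

```javascript
let telegramPollInFlight = false;
let shuttingDown = false;

// Run one poll cycle unless another is already in flight or the
// process is shutting down. Returns whether the cycle actually ran.
async function pollOnce(client, doPoll) {
  if (telegramPollInFlight || shuttingDown) return false;
  telegramPollInFlight = true;
  try {
    await doPoll(client);
    return true;
  } finally {
    telegramPollInFlight = false;
  }
}

// Disconnect the Telegram client before the container dies so the
// session is not left live when the replacement container connects.
function installShutdownHandlers(client) {
  const shutdown = async (signal) => {
    shuttingDown = true;
    console.log(`[relay] ${signal} received, disconnecting Telegram client`);
    try { await client.disconnect(); } catch {}
    process.exit(0);
  };
  process.on('SIGTERM', shutdown);
  process.on('SIGINT', shutdown);
}
```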
||
|
|
efc1945bb8 |
feat(live-news): move YouTube proxy scraping to Railway relay
Vercel serverless cannot use node:http/https for HTTP CONNECT proxy tunnels. Move the residential-proxy YouTube scraping to the Railway relay (ais-relay.cjs), which has full Node.js access.
- Add a /youtube-live route to the relay with proxy + direct-fetch fallback
- Add a 5-min in-memory cache for channel lookups, 1hr for oembed
- Revert Vercel api/youtube/live.js to the edge runtime — it now proxies to Railway first and falls back to a direct scrape |
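An in-memory TTL cache like the one described (5 min for channel lookups, 1hr for oembed) can be sketched in a few lines; this is a generic illustration, not the relay's actual cache.

```javascript
// Tiny TTL cache: entries expire after ttlMs and are lazily evicted
// on the next read.
function createTtlCache() {
  const store = new Map();
  return {
    get(key) {
      const entry = store.get(key);
      if (!entry) return undefined;
      if (Date.now() > entry.expiresAt) {
        store.delete(key);
        return undefined;
      }
      return entry.value;
    },
    set(key, value, ttlMs) {
      store.set(key, { value, expiresAt: Date.now() + ttlMs });
    },
  };
}
```

The route handler would then try `cache.get(...)` before scraping, writing channel lookups with a 5-minute TTL and oembed responses with a 1-hour TTL.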
||
|
|
59e52ba865 |
fix(relay): use execFileSync for OREF curl to avoid shell injection (#546)
Proxy credentials with special characters (semicolons, dollar signs) were interpolated into a shell command via execSync. Switch to execFileSync, which passes args directly without shell parsing. |