* fix(supply-chain): correct PortWatch ArcGIS service URL, field names, and chokepoint mappings
The PortWatch seed was returning no data because the ArcGIS service name,
WHERE clause fields, date field, and chokepoint names were all wrong.
Verified all 12 chokepoints return 175 days of data against the live API.
Added error logging to pwFetchAllPages for future debugging.
* fix(supply-chain): sync geofence names with relayName renames
CHOKEPOINT_GEOFENCES in ais-relay.cjs still used old names
('Strait of Malacca', 'Bab el-Mandeb', 'Strait of Gibraltar')
while _chokepoint-ids.ts relayName was updated. buildRelayLookup
does exact string match, so these 3 chokepoints had zero transit
counts despite relay data being present.
Rename geofence entries to match the new relayName values and
update corresponding test assertions.
The Wingbits function silently returned null on missing API key,
HTTP errors, and caught exceptions. Now logs the specific failure
reason so issues are visible in Railway logs.
* fix(supply-chain): correct PortWatch ArcGIS URL, field names, and chokepoint mappings
The PortWatch seed was failing silently because:
1. Wrong service name: portal_chokepoint_daily -> Daily_Chokepoints_Data
2. Wrong query fields: chokepoint/observation_date -> portname/date (epoch)
3. Wrong data model: expected one row per vessel type, actual schema has
all counts as columns (n_tanker, n_cargo, n_total) per row
4. Wrong chokepoint names: e.g. "Strait of Malacca" -> "Malacca Strait",
"Bab el-Mandeb" -> "Bab el-Mandeb Strait", "Bosphorus" -> "Bosporus Strait"
5. Removed Dardanelles (not in PortWatch dataset)
Discovered via IMF PortWatch ArcGIS service directory and returnDistinctValues
query on the portname field.
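The corrected query shape can be sketched as follows. The helper name and base URL host are placeholders, but the field names (portname, epoch-millisecond `date`, per-row n_* count columns) are the ones described above:

```javascript
// Hypothetical sketch of the corrected ArcGIS query; the host is a placeholder.
function buildPortWatchQuery(portname, sinceMs) {
  const params = new URLSearchParams({
    // `date` is stored as epoch milliseconds, so compare numerically
    where: `portname = '${portname}' AND date >= ${sinceMs}`,
    // counts are columns on each daily row, not one row per vessel type
    outFields: 'portname,date,n_tanker,n_cargo,n_total',
    orderByFields: 'date ASC',
    f: 'json',
  });
  return `https://example-arcgis-host/Daily_Chokepoints_Data/FeatureServer/0/query?${params}`;
}
```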
* feat(supply-chain): add Korea, Dover, Kerch, Lombok chokepoints
Extend from 10 to 14 monitored chokepoints based on PortWatch data
availability; all 4 new straits have IMF PortWatch coverage.
- Korea Strait: Japan-Korea trade, busiest East Asia corridor
- Dover Strait: world's busiest shipping lane
- Kerch Strait: war_zone (Russia controls, Ukraine grain restricted)
- Lombok Strait: Malacca bypass for VLCCs
Added to: handler config, canonical ID map, PortWatch seed names,
AIS relay transit counter, tests.
* docs: update maritime docs and changelog for 14 chokepoints + transit intelligence
- maritime-intelligence.mdx: 9 -> 14 chokepoints, add data source descriptions,
add chart rendering note
- changelog.mdx + CHANGELOG.md: add [Unreleased] section for #1560 and #1572
* fix(tests): update portwatch test for pre-aggregated column model
pwClassifyVesselType was removed when switching to pre-aggregated
n_tanker/n_cargo/n_total columns. Update test to verify the new
field names instead.
* fix(supply-chain): sync canonical PortWatch names with actual ArcGIS feed
P1: Dardanelles has no PortWatch data (0 rows). Set portwatchName to an empty
string so the seed won't attempt a fetch or show phantom zero history.
P2: portwatchNameToId() returned undefined for Malacca Strait, Bab el-Mandeb
Strait, Gibraltar Strait, and Bosporus Strait because the canonical map used
old names instead of the actual ArcGIS portname values.
Fixed mappings:
Strait of Malacca -> Malacca Strait
Bab el-Mandeb -> Bab el-Mandeb Strait
Strait of Gibraltar -> Gibraltar Strait
Bosphorus -> Bosporus Strait
Dardanelles -> '' (not in PortWatch)
* refactor(supply-chain): merge Dardanelles into Turkish Straits
IMF PortWatch tracks Bosphorus+Dardanelles as a single corridor
(Bosporus Strait). Keeping them separate caused double-counting in
AIS transit data and left Dardanelles with permanently empty history.
- Merge into single "Turkish Straits" entry (id stays 'bosphorus')
- Absorb all Dardanelles keywords (canakkale, gallipoli, aegean)
- Single wider AIS geofence (lat 40.70, lon 28.0, radius 1.5)
- 14 -> 13 chokepoints
- Update docs, changelog, tests
* fix: rename Turkish Straits to Bosporus Strait (match PortWatch naming)
* feat(correlation): server-side correlation engine seed + bootstrap hydration
Move correlation card computation from client-side (per-browser, 10-30s delay)
to server-side (Railway cron, instant via bootstrap). Seed script reads 8 Redis
keys, runs 4 adapter signal collectors (military, escalation, economic, disaster),
clusters/scores/generates cards, writes to Redis with 10min TTL.
- New: scripts/seed-correlation.mjs (pure JS port of correlation engine)
- bootstrap.js: add correlationCards to FAST_KEYS tier
- health.js + seed-health.js: register for monitoring (maxStaleMin: 15)
- CorrelationPanel: consume bootstrap on construction, show "Analyzing..." only
after live engine has run (not for bootstrap-only cards)
- _seed-utils.mjs: support opts.recordCount override (function or number)
* fix(correlation): stale timestamp fallback + coordinate-based country resolution
P1: news stories lacked per-story pubDate, causing Date.now() fallback on
every seed run. Now _clustering.mjs propagates pubDate through to
enrichedStories, and seed-correlation reads s.pubDate then generatedAt.
P2: normalizeToCode dropped signals with unparseable country names.
Added centroid-based coordinate fallback (haversine nearest-match within
800km) matching the live engine's getCountryAtCoordinates behavior.
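The centroid fallback described above amounts to a nearest-neighbor search under the haversine distance with an 800km cutoff. A minimal sketch, using a tiny illustrative centroid table rather than the full COUNTRY_CENTROIDS map:

```javascript
// Illustrative subset; the real map covers all supported countries
const SAMPLE_CENTROIDS = {
  FR: [46.2, 2.2],
  DE: [51.2, 10.4],
  PL: [51.9, 19.1],
};

// Great-circle distance in km between two [lat, lon] points
function haversineKm([lat1, lon1], [lat2, lon2]) {
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a));
}

// Nearest centroid within 800km, or null when nothing is close enough
function countryForCoords(lat, lon, centroids = SAMPLE_CENTROIDS) {
  let best = null;
  let bestKm = 800;
  for (const [code, centroid] of Object.entries(centroids)) {
    const km = haversineKm([lat, lon], centroid);
    if (km < bestKm) { bestKm = km; best = code; }
  }
  return best;
}
```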
* fix(correlation): add 11 missing country centroids to coordinate fallback
CI, CR, CV, CY, GA, IS, LA, SZ, TL, TT, XK were in the normalization
maps but missing from COUNTRY_CENTROIDS, causing coordinate-only signals
in those countries to be misclassified or dropped during bootstrap.
* fix(correlation): align protest/outage field names with actual Redis schema
Codex review P1 findings: seed-correlation read wrong field names from
Redis data.
Protests (unrest:events:v1): p.time -> p.occurredAt, p.lat/lon ->
p.location.latitude/longitude, severity enum SEVERITY_LEVEL_* mapping.
Outages (infra:outages:v1): o.pubDate -> o.detectedAt, o.lat/lon ->
o.location.latitude/longitude, severity enum OUTAGE_SEVERITY_* mapping.
Both escalation and disaster adapters updated. Old field names kept as
fallbacks for data shape compatibility.
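The field alignment with legacy fallbacks can be sketched as small accessor helpers; the helper names are hypothetical, but the field paths match the commit:

```javascript
// New schema field first, old name kept as fallback
function protestTimestamp(p) {
  return p.occurredAt ?? p.time ?? null;
}

// Nested location object first, flat lat/lon as fallback
function protestCoords(p) {
  const lat = p.location?.latitude ?? p.lat;
  const lon = p.location?.longitude ?? p.lon;
  return Number.isFinite(lat) && Number.isFinite(lon) ? [lat, lon] : null;
}
```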
* feat(supply-chain): replace S&P Global with 3 free maritime data sources
Replace expensive S&P Global Maritime API with IMF PortWatch (vessel transit
counts), CorridorRisk (risk intelligence), and AISStream chokepoint crossing
counter. All external API calls run on Railway relay, Vercel reads Redis only.
- Add 4 new chokepoints (10 total): Cape of Good Hope, Gibraltar, Bosphorus, Dardanelles
- Add TransitSummary proto (field 14) with today counts, WoW%, 180d history, risk context
- Add D3 multi-line chart (tanker vs cargo) with expandable chokepoint cards
- Add crossing detection with enter+dwell+exit semantics, 30min cooldown, and 5min minimum dwell
- Add PortWatch seed loop (6h), CorridorRisk seed loop (1h), transit seed loop (10min)
- Add canonical chokepoint ID map for cross-source name resolution
- 177 tests passing across 6 test files
* fix(supply-chain): address P2 review findings
- Discard partial PortWatch pagination results on mid-page failure (prevents
truncated history with wrong WoW numbers cached for 6h)
- Rename the "Transit today" label to "24h" (rolling 24h window, not a calendar day)
- Fix chart label from "30d" to "180d" (matches actual PortWatch query range)
- Add 30s initial seed for chokepoint transits on relay cold start (prevents
10min gap of zero transit data)
* feat(supply-chain): swap D3 chart for TradingView lightweight-charts
Replace hand-rolled D3 SVG transit chart with lightweight-charts v5 canvas
rendering for Bloomberg-quality time-series visualization.
- Add TransitChart helper class with mount/destroy lifecycle, theme listener,
and autoSize support
- Use MutationObserver (not rAF) to mount chart after setContent debounce
- Clean up chart on tab switch, collapse, and re-render (no orphaned canvases)
- Respond to theme-changed events via chart.applyOptions()
- D3 stays for other 5 components (ProgressCharts, RenewableEnergy, etc.)
* feat(supply-chain): add geo coords and trade routes for 4 new chokepoints
Cherry-pick from PR #1511: Cape of Good Hope, Gibraltar, Bosphorus, and
Dardanelles map-layer coordinates and trade route definitions.
* fix(supply-chain): health.js v2->v4 key + double cache TTLs for missed seeds
- health.js chokepoints key was still v2, now v4 (matches handler + bootstrap)
- PortWatch TTL: 21600s (6h) -> 43200s (12h), seed interval stays 6h
- CorridorRisk TTL: 3600s (1h) -> 7200s (2h), seed interval stays 1h
- Ensures one missed seed run doesn't expire the key and cause empty data
The Windy API returns 400 at an offset of ~1050 globally, regardless of
bounding box size. Quadrant-splitting on 400 caused infinite recursion,
since every sub-region hit the same limit.
On 400: keep cameras already fetched, stop paginating. The 10K cap
split is retained with MAX_SPLIT_DEPTH=3 to prevent runaway recursion.
Mixing || and ?? in the same expression without explicit grouping is
a JS syntax error. This broke ALL Railway seed scripts after #1556.
Refactored to use ?? throughout with explicit Array.isArray guard so
non-topic seeds correctly fall through to their own length checks.
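The underlying JS rule: `a || b ?? c` is an early SyntaxError unless one side is parenthesized. Since the Function constructor parses its body without evaluating it, the error can be observed without breaking the probing file itself (the helper name is illustrative):

```javascript
// Returns true when `src` parses as a valid expression, false on SyntaxError
function parsesAsJs(src) {
  try {
    new Function(`return ${src};`);
    return true;
  } catch (e) {
    if (e instanceof SyntaxError) return false;
    throw e;
  }
}
```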
- Rename `const seedMeta` to `seedMetaVal` to avoid shadowing the
`seedMeta()` function, which caused "Cannot access before initialization"
- Auto-split into quadrants when Windy API returns 400 (offset limit ~1000),
instead of only splitting at the 10K safety cap
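The shadowing bug above is a temporal-dead-zone issue and is easy to reproduce in isolation. Names mirror the commit, but this is a minimal sketch rather than the actual seed code:

```javascript
function seedMeta() {
  return { lastRun: 0 };
}

// The inner `const seedMeta` shadows the outer function for the entire block,
// so the call on the first line hits the temporal dead zone.
function brokenSeedLoop() {
  const current = seedMeta(); // ReferenceError: Cannot access 'seedMeta' before initialization
  const seedMeta = current;
  return seedMeta;
}

// Renamed binding: no shadowing, outer function resolves normally
function fixedSeedLoop() {
  const seedMetaVal = seedMeta();
  return seedMetaVal;
}
```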
* feat: seed GDELT intelligence topics to Redis with bootstrap hydration
Add standalone seed script that pre-populates all 6 Live Intelligence
topics (military, cyber, nuclear, sanctions, intelligence, maritime)
from the GDELT Doc API into Redis. Frontend consumes bootstrap data
lazily via the service layer, falling back to RPC if unavailable.
- scripts/seed-gdelt-intel.mjs: new seed script with per-topic 429 retry
- api/bootstrap.js: register gdeltIntel in FAST_KEYS
- api/health.js: register in BOOTSTRAP_KEYS + SEED_META + dataSize
- api/seed-health.js: register in SEED_DOMAINS
- scripts/_seed-utils.mjs: add topics to recordCount detection
- src/services/gdelt-intel.ts: lazy bootstrap consumption in service layer
* fix(seed): align staleness thresholds and strengthen GDELT validation
- seed-health intervalMin 30→60 so staleness (120min) matches health.js maxStaleMin
- validate requires ≥3/6 topics populated (not just military)
- recordCount sums articles across topics instead of reporting topic count
Gate LlmStatusIndicator behind isDesktopRuntime() so Vercel web
never fires a wasted 404 fetch to /api/llm-health (sidecar-only).
Add Kharg Island, Qom, Andisheh, and Ankara to the Iran events
geolocation map for accurate event placement.
Three issues caused intermittent DEGRADED health:
1. When ICAO API returned empty (timeout, challenge page), the relay
seed updated seed-meta but did not refresh the data key TTL. After
1h the data key expired, health saw EMPTY, reported CRIT.
Fix: call EXPIRE on the data key to extend TTL on empty response.
2. health.js dataSize() did not recognize the closedIcaos array field,
falling back to Object.keys count (always 2). Now properly counts
the closure array length.
3. 0 airport closures is the normal healthy state, but health treated
it as EMPTY_DATA (CRIT). Added an EMPTY_DATA_OK_KEYS set so NOTAM
closures with 0 records report OK when the key exists or seed-meta
is fresh.
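The zero-records allowlist can be sketched as follows; the evaluateKey helper is hypothetical and simplified to just the fields relevant here:

```javascript
// Keys for which zero records is a healthy state
const EMPTY_DATA_OK_KEYS = new Set(['aviation:notam:closures:v2']);

function evaluateKey(key, recordCount, keyExists, seedMetaFresh) {
  if (recordCount > 0) return 'OK';
  // Empty but allowlisted: OK as long as the key exists or seed-meta is fresh
  if (EMPTY_DATA_OK_KEYS.has(key) && (keyExists || seedMetaFresh)) return 'OK';
  return 'CRIT';
}
```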
Add missing seed-meta write for intlDelays in ais-relay, add untracked
SEED_META entries (intlDelays, faaDelays, theaterPosture) to health.js,
add 6 missing domains to seed-health.js, and return 503 when degraded.
ICAO API rejects %2C-encoded commas with 403. The manual seed script
passes literal commas in the locations query param, which works.
Match that behavior.
Railway deploys with rootDirectory=scripts/, so ../shared/ resolves to
/shared/ which doesn't exist. Move the canonical file to scripts/data/
and update all four consumers.
NOTAM closures data (aviation:notam:closures:v2) had no scheduled seeder,
causing health endpoint to report CRIT/EMPTY. The manual seed script
(seed-airport-delays.mjs) was never deployed as a cron, and the RPC
handler only populates on-demand with a 30min TTL.
Add a 30-minute seed loop to the AIS relay that fetches from ICAO API
and writes closures to Redis, matching the pattern of all other relay
seed loops. Also add seed-meta tracking in health.js (maxStaleMin: 90).
- Move GEOPOLITICAL_TAGS, TECH_TAGS, FINANCE_TAGS, and EXCLUDE_KEYWORDS
to shared/prediction-tags.json so seed, RPC handler, and client all
reference a single source of truth
- Remove open_interest proto field (always 0 for Polymarket, never
displayed in UI) and corresponding openInterest assignments
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Elie Habib <elie.habib@gmail.com>
Kalshi trading API returns 401 without authentication. Disable all
Kalshi fetches when KALSHI_API_KEY is not set, and pass it as a
Bearer token when present. Seed logs "disabled" instead of spamming
401 errors on every run.
* feat(predictions): add Kalshi as prediction market data source
* fix(predictions): address Kalshi integration review feedback
- Gate Kalshi fetch behind category check to avoid wasted calls on tech-scoped requests
- Replace fragile double-cast bootstrap typing with BootstrapMarket interface
- Fix zero-price falsy bug in seed script using Number.isFinite guard
- Align RPC market selection with seed script (highest-volume via single-pass loop)
- Raise Kalshi volume threshold to 5000 for signal quality parity
- Add missing .prediction-source badge CSS with per-source color variants
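The falsy-zero bug above is the classic `||` pitfall: a legitimate 0 price is treated as missing. A minimal sketch of the broken and guarded versions (the real parser in the shared scoring module handles more input shapes):

```javascript
// Broken: `||` turns a legitimate 0 price into null
function parseYesPriceBroken(raw) {
  return Number(raw) || null;
}

// Guarded: keeps 0, rejects NaN and Infinity
function parseYesPriceGuarded(raw) {
  const n = Number(raw);
  return Number.isFinite(n) ? n : null;
}
```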
* fix(predictions): address P1/P2 review items for Kalshi integration
- Apply isExcluded() filter and volume threshold (5000) to live Kalshi
RPC path so cache-miss results match seed curation quality
- Include FINANCE_TAGS in seed allTags so 'markets' tag is fetched
- Align Kalshi title mapping (market.title || event.title) between
seed and RPC handler
- Remove silent geopolitical fallback for finance variant so missing
finance bootstrap falls through to RPC fetch
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(predictions): prefer yes_sub_title for Kalshi multi-contract events
For multi-contract Kalshi events (e.g. papal election candidates),
market.title is the generic event question while yes_sub_title
identifies the specific contract. Use yes_sub_title when present
in both seed and RPC paths so titles are accurate and consistent.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(predictions): use general Kalshi trading API subdomain
Switch from api.elections.kalshi.com (elections-only) to
trading-api.kalshi.com so economy, crypto, and other non-election
markets are included in the finance variant.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
---------
Co-authored-by: Elie Habib <elie.habib@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* fix(acled): add OAuth token manager with automatic refresh
ACLED access tokens expire every 24 hours, but WorldMonitor stores a
static ACLED_ACCESS_TOKEN with no refresh logic — causing all ACLED
API calls to fail after the first day.
This commit adds `acled-auth.ts`, an OAuth token manager that:
- Exchanges ACLED_EMAIL + ACLED_PASSWORD for an access token (24h)
and refresh token (14d) via the official ACLED OAuth endpoint
- Caches tokens in memory and auto-refreshes before expiry
- Falls back to static ACLED_ACCESS_TOKEN for backward compatibility
- Deduplicates concurrent refresh attempts
- Degrades gracefully when no credentials are configured
The only change to the existing `acled.ts` is replacing the synchronous
`process.env.ACLED_ACCESS_TOKEN` read with an async call to the new
`getAcledAccessToken()` helper.
Fixes #1283
Relates to #290
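The caching and refresh-deduplication behavior can be sketched as follows. The exchange callback and expiry margin are stand-ins; the real acled-auth.ts additionally persists tokens to Redis and falls back to the static env token:

```javascript
function createTokenManager(exchange, marginMs = 60_000) {
  let cached = null;   // { token, expiresAt }
  let inFlight = null; // shared promise dedupes concurrent refreshes
  return async function getToken() {
    const now = Date.now();
    // Fast path: cached token still valid with margin to spare
    if (cached && cached.expiresAt - marginMs > now) return cached.token;
    if (!inFlight) {
      inFlight = exchange()
        .then((t) => {
          cached = t;
          return t.token;
        })
        .finally(() => {
          inFlight = null; // allow a retry after success or failure
        });
    }
    return inFlight;
  };
}
```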
* fix: address review feedback on ACLED OAuth PR
- Use Redis (Upstash) as L2 token cache to survive Vercel Edge cold starts
(in-memory cache retained as fast-path L1)
- Add CHROME_UA User-Agent header on OAuth token exchange and refresh
- Update seed script to use OAuth flow via getAcledToken() helper
instead of raw process.env.ACLED_ACCESS_TOKEN
- Add security comment to .env.example about plaintext password trade-offs
- Sidecar ACLED_ACCESS_TOKEN case is a validation probe (tests user-provided
value, not process.env) — data fetching delegates to handler modules
* feat(sidecar): add ACLED_EMAIL/ACLED_PASSWORD to env allowlist and validation
- Add ACLED_EMAIL and ACLED_PASSWORD to ALLOWED_ENV_KEYS set
- Add ACLED_EMAIL validation case (store-only, verified with password)
- Add ACLED_PASSWORD validation case with OAuth token exchange via
acleddata.com/api/acled/user/login
- On successful login, store obtained OAuth token in ACLED_ACCESS_TOKEN
- Follows existing validation patterns (Cloudflare challenge handling,
auth failure detection, User-Agent header)
* fix: address remaining review feedback (duplicate OAuth, em dashes, emoji)
- Extract shared ACLED OAuth helper into scripts/shared/acled-oauth.mjs
- Remove ~55 lines of duplicate OAuth logic from seed-unrest-events.mjs,
now imports getAcledToken from the shared helper
- Replace em dashes with ASCII dashes in acled-auth.ts section comments
- Replace em dash with parentheses in sidecar validation message
- Remove emoji from .env.example security note
Addresses koala73's second review: MEDIUM (duplicate OAuth), LOW (em
dashes), LOW (emoji).
* fix: align sidecar OAuth endpoint, fix L1/L2 cache, cleanup artifacts
- Sidecar: switch from /api/acled/user/login (JSON) to /oauth/token
(URL-encoded) to match server/_shared/acled-auth.ts exactly
- acled-auth.ts: check L2 Redis when L1 is expired, not only when L1
is null (fixes stale L1 skipping fresher L2 from another isolate)
- acled-oauth.mjs: remove stray backslash on line 9
- seed-unrest-events.mjs: remove extra blank line at line 13
---------
Co-authored-by: Elie Habib <elie.habib@gmail.com>
Co-authored-by: RepairYourTech <30200484+RepairYourTech@users.noreply.github.com>
* fix(predictions): replace volume-only sort with composite scoring, add finance variant and region ranking
The prediction panel was surfacing irrelevant near-certain markets (1%/99% meme
markets like celebrity presidential bids) because the discrepancy filter was
inverted and sorting was by volume alone.
- Replace broken discrepancy filter with composite scoring (60% uncertainty +
40% log-scaled volume) in seed script
- Add meme candidate detection and sports/entertainment keyword exclusion
- Add finance variant with dedicated tags for economy/trade/rates topics
- Add region-aware soft ranking outside circuit breaker cache
- Add input validation (category max 50, query max 100) in RPC handler
- Skip events without markets instead of defaulting to yesPrice=50
- Per-bucket relaxation safety valve when <15 markets pass strict filters
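The composite score can be sketched directly from the 60/40 split described above. The volume normalization constant is an assumption; the real logic lives in the seed script:

```javascript
// 60% uncertainty + 40% log-scaled volume; maxLogVolume caps the volume term
function scoreMarketSketch(yesPrice, volume, maxLogVolume = Math.log10(10_000_000)) {
  // uncertainty peaks at a 50% price and falls to 0 at 0%/100%
  const uncertainty = 1 - Math.abs(yesPrice - 50) / 50;
  const volumeScore = Math.min(Math.log10(Math.max(volume, 1)) / maxLogVolume, 1);
  return 0.6 * uncertainty + 0.4 * volumeScore;
}
```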
* fix(predictions): apply region sort before truncation, add RPC fallback scoring, validate finance seed
- Keep 25 candidates from bootstrap/RPC, apply region sort, then slice to 15
(previously sliced to 15 first, making region boost ineffective for markets
ranked 16-25)
- Add client-side uncertainty scoring + near-certain filter (10-90%) for RPC
fallback path (previously fell back to Gamma's volume-only ordering)
- Include finance array in seed validation (previously only checked
geopolitical/tech, allowing broken finance data to ship silently)
* test(predictions): add 54 unit tests for scoring, filtering, and region tagging
Extract pure prediction scoring functions into shared module
(_prediction-scoring.mjs) for testability. Tests cover parseYesPrice,
isExcluded, isMemeCandidate, tagRegions, shouldInclude, scoreMarket,
filterAndScore, isExpired, plus regression tests for the meme market
surfacing bug that motivated this fix.
* fix(tech-events): prevent partial fetch results from being cached
Techmeme ICS and dev.events RSS fetches on Vercel edge can partially
fail (timeout, truncation), returning only 1 event instead of 20+.
The handler cached this partial result for 6 hours, causing the Tech
Events panel to show empty.
- Add 8s AbortSignal.timeout on both external fetches
- Require minimum 5 events before caching (at least curated count)
* fix(tech-events): remove MIN_EVENTS threshold and add diagnostic logging
The MIN_EVENTS=5 threshold caused empty results when both external
sources fail on Vercel edge (only 4 curated events available). Now
any events > 0 are cached. Added detailed logging to diagnose why
Techmeme ICS and dev.events RSS fetches fail on Vercel edge.
Also removed past STEP Dubai 2026 event.
* fix(tech-events): route fetches through Railway relay when direct fails
Vercel edge functions cannot reliably reach Techmeme ICS and dev.events
RSS (datacenter IP blocking). Added fetchTextWithRelay() that tries
direct fetch first, then falls back to Railway relay proxy (/rss endpoint)
which fetches from a different IP. Same pattern used by news feed digest
and other handlers that hit blocked external sources.
* feat(tech-events): gold standard pipeline with Railway seed + bootstrap hydration
Full data pipeline overhaul to match project conventions:
- Add tech events seed loop to ais-relay.cjs: fetches Techmeme ICS +
dev.events RSS every 6h from Railway (avoids Vercel IP blocking),
parses both sources, merges with curated fallback events, writes to
Redis (data key + bootstrap key + seed-meta)
- Register in api/bootstrap.js BOOTSTRAP_CACHE_KEYS (SLOW tier)
- Register in api/health.js BOOTSTRAP_KEYS + SEED_META (420min stale)
- Restructure RPC handler: reads from single broad Redis key (populated
by seed), applies geocoding + filtering in-memory per request params.
Fallback fetcher only runs on cold start before first seed
- TechEventsPanel: check getHydratedData('techEvents') from bootstrap
before falling back to RPC call
- data-loader: use hydrated bootstrap data for map layer, RPC fallback
* feat(desktop): compile domain handlers + add in-memory sidecar cache
The sidecar was broken for all 23 sebuf/RPC domain routes because
the build script (build-sidecar-handlers.mjs) never existed on main
while package.json already referenced it. This adds the missing script
and an in-memory TTL+LRU cache so the sidecar doesn't need Upstash Redis.
- Add scripts/build-sidecar-handlers.mjs (esbuild multi-entry, 23 domains)
- Add server/_shared/sidecar-cache.ts (500 entries, 50MB max, lazy sweep)
- Modify redis.ts getCachedJson/setCachedJson to use dynamic import for
sidecar cache when LOCAL_API_MODE=tauri-sidecar (zero cost on Vercel Edge)
- Update tauri.conf.json beforeDevCommand to compile handlers
- Add gitignore pattern for compiled api/*/v1/[rpc].js
* fix(desktop): gate premium panel fetches and open footer links in browser
Skip oref-sirens and telegram-intel HTTP requests on desktop when
WORLDMONITOR_API_KEY is not present. Use absolute URLs for footer
links on desktop so the Tauri external link handler opens them in
the system browser instead of navigating within the webview.
* fix(desktop): cloud proxy, bootstrap timeouts, and panel data fixes
- Set Origin header on cloud proxy requests (fixes 401 from API key validator)
- Strip If-None-Match/If-Modified-Since headers (fixes stale 304 responses)
- Add cloud-preferred routing for market/economic/news/infrastructure/research
- Enable cloud fallback via LOCAL_API_CLOUD_FALLBACK env var in main.rs
- Increase bootstrap timeouts on desktop (8s/12s vs 3s/5s) for sidecar proxy hops
- Force per-feed RSS fallback on desktop (server digest has fewer categories)
- Add finance feeds to commodity variant (client + server)
- Remove desktop diagnostics from ServiceStatusPanel (show cloud statuses only)
- Restore DeductionPanel CSS from PR #1162
- Deduplicate repeated sidecar error logs
Replace "WorldMonitor" with "World Monitor" in all user-facing display
text across blog posts, docs, layouts, structured data, footer, offline
page, and X-Title headers. Technical identifiers (User-Agent strings,
X-WorldMonitor-Key headers, @WorldMonitorApp handle, function names)
are preserved unchanged. Also adds anchors color to Mintlify docs config
to fix blue link color in dark mode.
Checks VERCEL_GIT_PULL_REQUEST_ID before proceeding.
Branch pushes without an open PR are skipped (exit 0),
eliminating wasted build minutes from 378+ feature branches.
Production (main) always builds.
Seed services and relay were redeploying on every push (blog, frontend, etc)
because no watchPatterns were configured. Added utility script that sets
watchPatterns via Railway GraphQL API so services only redeploy when their
actual source files change. Already applied to all 23 services.
* feat(blog): add Astro blog at /blog with 16 SEO-optimized posts
Adds a static Astro blog built during Vercel deploy and served at
worldmonitor.app/blog. Includes 16 marketing/SEO posts covering
features, use cases, and comparisons from customer perspectives.
- blog-site/: Astro static site with content collections, RSS, sitemap
- Vercel build pipeline: build:blog builds Astro and copies to public/blog/
- vercel.json: exclude /blog from SPA catch-all rewrite and no-cache headers
- vercel.json: ignoreCommand triggers deploy on blog-site/ changes
- Cache: /blog/_astro/* immutable, blog HTML uses Vercel defaults
* fix(blog): fix markdown lint errors in blog posts
Add blank lines around headings (MD022) and lists (MD032) across
all 16 blog post files to pass markdownlint checks.
* fix(ci): move ignoreCommand to script to stay under 256 char limit
Vercel schema validates ignoreCommand max length at 256 characters.
Move the logic to scripts/vercel-ignore.sh and reference it inline.
* fix(blog): address PR review findings
- Add blog sitemap to robots.txt for SEO discovery
- Use www.worldmonitor.app consistently (canonical domain)
- Clean public/blog/ before copy to prevent stale files
- Use npm ci for hermetic CI builds
* fix(blog): move blog dependency install to postinstall phase
Separates dependency installation from compilation. Blog deps are
now installed during npm install (postinstall hook), not during build.
- Add AVIATIONSTACK and NOTAM proto enum values for accurate source attribution
- AviationStack flight data alerts now show "Flight Data" instead of "Computed"
- NOTAM closure/restriction alerts now show "NOTAM"
- Remove generateSimulatedDelay() fallback that produced fake random alerts
- Reduce all aviation cache TTLs from 2h to 30min for fresher data
- Reduce relay seed interval from 1h to 30min, TTL from 4h to 1h
- Reduce seed freshness threshold from 45min to 20min
- Update health check maxStaleMin from 90 to 60min
- Update all 21 locale files with new source labels
* fix(health): fix riskScores seeding gap and seed-meta key mismatch
- Switch RPC handler to cachedFetchJsonWithMeta so stale key is refreshed
on every successful response (cache hit or miss), not just cache misses
- Fix seed-meta key mismatch: health.js and seed-health.js now check
seed-meta:risk:scores:sebuf (matching what cachedFetchJson writes)
- Add warm-ping loop in relay (8min interval) to keep RPC cache fresh
- Remove dead startCiiSeedLoop and 345 lines of unused CII seed code
* fix(scoring): await stale key write to prevent edge runtime drop
Edge/serverless runtimes may terminate the isolate before a
fire-and-forget Redis write completes. Await the setCachedJson
call so the stale key TTL is guaranteed to be extended.
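The fix is simply awaiting the write before the handler returns; setCachedJson here is a stand-in injected for illustration:

```javascript
async function respondWithScores(scores, setCachedJson) {
  // before: setCachedJson('risk:scores:stale', scores); // fire-and-forget, may be dropped
  await setCachedJson('risk:scores:stale', scores); // guaranteed to finish before the response
  return { status: 200, body: scores };
}
```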
* feat(natural): add tropical cyclone tracking from NHC and GDACS
Integrate NHC ArcGIS REST API (15 storm slots across AT/EP/CP basins)
and GDACS TC field extraction to provide real-time tropical cyclone data
with forecast tracks, uncertainty cones, and historical track paths.
- Proto: add optional TC fields (storm_id, wind_kt, pressure_mb, etc.)
plus ForecastPoint, PastTrackPoint, CoordRing messages
- Server/seed: NHC two-pass query (forecast points then detail layers),
GDACS wind/pressure parsing, Saffir-Simpson classification, dedup
strategy (NHC > GDACS > EONET), pressureMb validation (850-1050),
advisory date with Number.isFinite guard
- Globe: dashed red forecast track, per-segment wind-colored past track,
semi-transparent orange forecast cone polygon
- Popup: TC details panel with color-coded category badge, wind/pressure
- Frontend mapper: forward all TC fields, convert CoordRing to number[][][]
* fix(natural): improve GDACS dedup, NHC classification, and TC popup i18n
- GDACS dedup now checks name + geographic proximity instead of name-only
- NHC classification uses stormtype field for subtropical/post-tropical
- TC popup labels use t() for localization instead of hardcoded English
* feat(map): add cyclone-specific deck.gl layers for 2D map
- Storm center ScatterplotLayer with Saffir-Simpson wind coloring
- Past track PathLayer with per-segment wind-speed color ramp
- Forecast track PathLayer with dashed line via PathStyleExtension
- Cone PolygonLayer for forecast uncertainty visualization
- Tooltip and click routing for all new storm layer IDs
* fix(map): remove click routing for synthetic storm track/cone layers
Track and cone layers carry lightweight objects without full NaturalEvent
fields. Clicking them would pass incomplete data to the popup renderer.
Only storm-centers-layer (which holds the full NaturalEvent) routes to
the natEvent popup. Tracks and cones remain tooltip-only.
* fix(map): attach parent NaturalEvent to synthetic storm layers for clicks
Synthetic track/cone objects now carry _event reference to the parent
NaturalEvent. Click handler unwraps _event before passing to popup,
so clicking any storm element opens the full TC popup.
* feat(map): add NOTAM overlay + satellite imagery integration
NOTAM Overlay:
- Expand airport monitoring from MENA-only to 64 global airports
- Add ScatterplotLayer (55km red rings) on flat map for airspace closures
- Add CSS-pulsing ring markers on globe for closures
- Independent of flights layer toggle (works when flights OFF)
- Bump NOTAM cache key v1 to v2
Satellite Imagery:
- Add Capella SAR STAC catalog proxy at /api/imagery/v1
- SSRF protection via URL allowlist + bbox/datetime validation
- SatelliteImageryPanel with preview thumbnails and scene metadata
- PolygonLayer footprints on flat map with viewport-triggered search
- Polygon footprints on globe with "Search this area" button
- Full variant only, default disabled
Layer key propagation across all 23+ files including variants,
harnesses, registry, URL state, and renderer channels.
* fix(imagery): wire panel data flow, fix viewport race, add datetime filter
P1 fixes:
- Imagery scenes now flow through MapContainer.setOnImageryUpdate()
callback, making data available to both renderers and panel
- Add version guard to fetchImageryForViewport() preventing stale
responses from overwriting newer viewport data
- Wire SatelliteImageryPanel.update() and setOnSearchArea() in
panel-layout.ts (panel was previously unhooked)
- Globe mode "Search this area" fetches via MapContainer.getBbox()
P2 fix:
- search-imagery.ts now filters STAC items by datetime range when
the client provides the datetime parameter
Also:
- Add MapContainer.getBbox() for viewport-aware imagery fetching
- Add DeckGLMap.getBbox() public method
- Data-loader layer toggle triggers initial imagery fetch
* fix(imagery): complete source filter + fix date-only end bound
- Filter STAC items by constellation when source param is provided,
making the API contract match actual behavior
- Date-only end bounds (YYYY-MM-DD without T) now include the full
day (23:59:59.999Z) instead of only midnight
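The date-only end-bound fix can be sketched as a small normalizer; the function name is hypothetical:

```javascript
// A date-only end value (no T component) should cover the whole day
function normalizeEndBound(end) {
  if (/^\d{4}-\d{2}-\d{2}$/.test(end)) {
    return `${end}T23:59:59.999Z`; // full day instead of only midnight
  }
  return end; // already has a time component
}
```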
Add Yehud, Sitra, Sanandaj, Ma'ameer, Northern Cyprus to
LOCATION_COORDS for geolocating new Iran conflict events.
74 events seeded to Redis from LiveUAMap import.
* fix(seed): add new locations and day-ago parsing for Iran events
Add 11 new location coordinates (al-Kharj, Petah Tikva, Beersheba,
Oman, Oslo, Aghdasiyeh, Rey, Beirut, Azraq) and support "a day ago"
/ "N days ago" relative time parsing.
* feat(pro): add early access promotional banner to dashboard
Thin, dismissible top banner promoting WorldMonitor Pro with a
"Reserve your spot" CTA linking to /pro. Dismissal persists for 7 days
via a localStorage timestamp. Slide-down animation, responsive,
light/dark theme compatible via CSS variables.
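The 7-day dismissal logic is essentially a timestamp comparison against localStorage. A sketch with an injectable storage object (the key name is an assumption, and injecting `storage` keeps it testable outside a browser):

```javascript
// 7-day banner dismissal via a stored timestamp. `storage` is any
// localStorage-like object (getItem/setItem); the key is illustrative.
const DISMISS_KEY = 'pro-banner-dismissed-at';
const DISMISS_MS = 7 * 24 * 60 * 60 * 1000;

function isBannerDismissed(storage, now = Date.now()) {
  const ts = Number(storage.getItem(DISMISS_KEY));
  // Missing key -> Number(null) === 0, so the ts > 0 guard rejects it.
  return Number.isFinite(ts) && ts > 0 && now - ts < DISMISS_MS;
}

function dismissBanner(storage, now = Date.now()) {
  storage.setItem(DISMISS_KEY, String(now));
}
```

In the browser this would be called with `window.localStorage` directly.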
Remove patterns that match zero satellites in CelesTrak:
- OFEK/EROS (Israel), IGS (Japan) — classified
- LACROSSE/TOPAZ (US NRO) — retired/listed as USA-*
- KONDOR/PERSONA/BARS-M/RESURS-P (Russia) — listed as COSMOS 2xxx
- HISEA/SUPERVIEW (China), CSO-/HELIOS (France) — not in groups
- RISAT/EOS-0x (India) — not in resource group
- Add CelesTrak 'active' group (~6000 sats, filtered down)
- Add Israeli (OFEK, EROS), Indian (RISAT, CARTOSAT, EOS), Japanese (IGS),
Turkish (GOKTURK, RASAT), French (CSO, HELIOS), US NRO (LACROSSE, TOPAZ, USA-*)
- Add Russian (KONDOR, PERSONA, BARS-M, RESURS-P), Chinese (HISEA, SUPERVIEW, ZIYUAN)
- Widen COSMOS regex to 2[4-9]xx for newer Russian recon sats
- Add country colors for IL, IN, JP, TR on globe
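The "2[4-9]xx" widening above translates to a pattern like the following; the exact production regex is an assumption, this just illustrates the matched range:

```javascript
// Widened COSMOS filter: match COSMOS 2400-2999 designations, covering the
// newer Russian recon sats while excluding older COSMOS 2xxx series.
const COSMOS_RECON = /^COSMOS 2[4-9]\d{2}\b/;
```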
Railway Nixpacks images don't include curl. Replaced curlFetchJson()
with proxyFetchJson() using Node.js http/https/tls modules for
HTTP CONNECT proxy tunneling to OpenSky.
Track ~80-120 intelligence-relevant satellites on the 3D globe using CelesTrak
TLE data and client-side SGP4 propagation (satellite.js). Satellites render at
actual orbital altitude with country-coded colors, 15-min orbit trails, and
ground footprint projections.
Architecture: Railway seeds TLEs every 2h → Redis → Vercel CDN (1h cache) →
browser does SGP4 math every 3s (zero server cost for real-time movement).
- New relay seed loop (ais-relay.cjs) fetching military + resource groups
- New edge handler (api/satellites.js) with 10min cache + negative cache
- Frontend service with circuit breaker and propagation lifecycle
- GlobeMap integration: markers, trails (pathsData), footprints, tooltips
- Layer registry as globe-only "Orbital Surveillance" with i18n (21 locales)
- Full documentation at docs/ORBITAL_SURVEILLANCE.md with roadmap
- Fix pre-existing SearchModal TS error (non-null assertion)
* feat(seeds): standalone military flights seed + relay cleanup
- Create scripts/seed-military-flights.mjs as standalone Railway cron seed
with 3-tier fallback: OpenSky auth → OpenSky anonymous → Wingbits
- Remove military flights seed from ais-relay.cjs (452 lines)
- Theater posture seed remains in relay with its own OpenSky + Wingbits fallback
- Standalone seed writes military:flights:v1, stale, and theater posture keys
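The 3-tier fallback can be sketched as a loop over injected fetchers, where an empty result or a thrown error advances to the next tier. Tier names and the result shape are illustrative, not the seed's exact interface:

```javascript
// Try each tier in order (e.g. OpenSky auth -> OpenSky anon -> Wingbits);
// a tier "wins" only if it returns a non-empty state list.
async function fetchWithFallback(tiers) {
  for (const { name, fetch } of tiers) {
    try {
      const states = await fetch();
      if (states && states.length > 0) return { source: name, states };
      console.warn(`[seed] ${name} returned no aircraft, trying next tier`);
    } catch (err) {
      console.warn(`[seed] ${name} failed: ${err.message}, trying next tier`);
    }
  }
  return { source: null, states: [] };
}
```

Logging the reason each tier was skipped matters here: a silent fallthrough is exactly the "0 aircraft" debugging problem described below.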
* feat(opensky): HTTP CONNECT tunnel via residential proxy + better logging
- New OPENSKY_PROXY_AUTH env var (falls back to OREF_PROXY_AUTH)
- _openskyProxyConnect() helper for HTTP CONNECT tunneling in relay
- Updated _attemptOpenSkyTokenFetch() and _openskyRawFetch() to route
through proxy when OPENSKY_PROXY_AUTH is set
- /opensky-diag now shows proxyEnabled status
- Startup log shows (via proxy) or (direct)
- seed-military-flights.mjs: curl-based proxy for OpenSky auth + anon
- seed-military-flights.mjs: verbose Wingbits logging (response shape,
per-area flight counts, sample data) to debug 0-aircraft issue
- Better HTTP error logging: status code + response body on non-2xx
* fix(wingbits): use correct response key 'data' instead of 'flights'
Wingbits API returns { alias, data: [...] } not { flights: [...] }.
This caused 0 aircraft from Wingbits in both standalone seed and relay
theater posture. Also fixed field mappings: 'c' (country), 'ra' (timestamp),
'og' (onGround) match actual Wingbits response format.
Verified locally: 3761 raw → 52 military matches from WESTERN region alone.
* fix(wingbits): correct field mapping + wingbits-first fetch order
- 'c' field is internal Wingbits classification (A0-C2), NOT country code
Removed from originCountry mapping to avoid false matches
- Wingbits now tier 1 (no proxy, fast, reliable), OpenSky supplements
via proxy as tier 2/3 for additional aircraft coverage
- Verified: Wingbits returns altitude in feet, speed in knots already
(ft→m→ft round-trip through unified pipeline is correct)
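Putting the two Wingbits fixes together, the per-record conversion might look like the sketch below. Only the `{ data: [...] }` envelope, the 'ra' (timestamp) and 'og' (onGround) fields, the feet/knots units, and the rule that 'c' must NOT become a country come from the commits above; the remaining field names (`hex`, `flight`, `lat`, `lon`, `alt_ft`, `gs_kt`) are assumptions for illustration. The output index order follows the public OpenSky state-vector layout (icao24, callsign, origin_country, time_position, last_contact, lon, lat, baro_altitude, on_ground, velocity):

```javascript
const FT_TO_M = 0.3048;
const KT_TO_MS = 0.514444;

// Convert one Wingbits record into an OpenSky-style state array so it can
// flow through the existing military detection pipeline unchanged.
function wingbitsToOpenSkyState(w) {
  return [
    w.hex,                                         // icao24
    (w.flight || '').trim(),                       // callsign
    '',                                            // origin_country: unknown;
                                                   // 'c' is a size class (A0-C2), not a country
    w.ra,                                          // time_position
    w.ra,                                          // last_contact
    w.lon,
    w.lat,
    w.alt_ft != null ? w.alt_ft * FT_TO_M : null,  // baro altitude, feet -> meters
    Boolean(w.og),                                 // on_ground
    w.gs_kt != null ? w.gs_kt * KT_TO_MS : null,   // velocity, knots -> m/s
  ];
}
```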
When OpenSky is unavailable (both proxy auth and anonymous API),
Wingbits now serves as fallback. When both sources are available,
Wingbits supplements OpenSky with additional aircraft (deduped by
icao24). Wingbits data is converted to OpenSky state array format
for unified processing through the existing military detection pipeline.
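The icao24 dedup described above is a straightforward set-based merge where OpenSky states win and Wingbits fills the gaps. A minimal sketch (state arrays in OpenSky format, index 0 = icao24):

```javascript
// Merge two state lists, keeping the OpenSky entry when both sources
// report the same aircraft (deduped by icao24 at index 0).
function mergeStates(openskyStates, wingbitsStates) {
  const seen = new Set(openskyStates.map(s => s[0]));
  const merged = openskyStates.slice();
  for (const s of wingbitsStates) {
    if (!seen.has(s[0])) {
      seen.add(s[0]);
      merged.push(s);
    }
  }
  return merged;
}
```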
When Railway's OpenSky OAuth2 auth times out (IP-level rate limiting),
the military flights seed now falls back to OpenSky's anonymous public
API endpoint directly. This ensures data flows even when the
authenticated proxy is blocked.