seed-sanctions-pressure.mjs imports fast-xml-parser to parse OFAC SDN
XML feeds, but the package was never added to scripts/package.json.
Railway deploys crash with ERR_MODULE_NOT_FOUND on startup.
* feat(sanctions): add OFAC sanctions pressure intelligence
* fix(sanctions): strip _state from API response, fix code/name alignment, cap seed limit
- trimResponse now destructures _state before spreading so seed
internals cannot leak to API clients during the atomicPublish→afterPublish window
- buildLocationMap and extractPartyCountries now sort (code, name) as aligned
pairs instead of calling uniqueSorted independently on each array; fixes
code↔name mispairing for OFAC-specific codes like XC (Crimea) where
alphabetic order of codes and names diverges
- DEFAULT_RECENT_LIMIT reduced from 120 to 60 to match MAX_ITEMS_LIMIT so
seeded entries beyond the handler cap are not written unnecessarily
- Add tests/sanctions-pressure.test.mjs covering all three invariants
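The aligned-pair fix can be sketched as follows (hypothetical helper; the real logic lives in buildLocationMap/extractPartyCountries in the seed script):

```javascript
// Sorting codes and names independently breaks pairing whenever their
// alphabetic orders diverge — e.g. OFAC's XC (Crimea): 'XC' sorts after
// 'UA', but 'Crimea' sorts before 'Ukraine'. Sorting (code, name) as a
// unit keeps each code attached to its name.
function sortAlignedPairs(codes, names) {
  const pairs = codes.map((code, i) => ({ code, name: names[i] }));
  // De-duplicate on code, then sort once so each code keeps its name.
  const seen = new Map();
  for (const p of pairs) if (!seen.has(p.code)) seen.set(p.code, p);
  return [...seen.values()].sort((a, b) => a.code.localeCompare(b.code));
}
```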
* fix(sanctions): register sanctions:pressure:v1 in health.js BOOTSTRAP_KEYS and SEED_META
Adds sanctionsPressure to health.js so the health endpoint monitors the
seeded key for emptiness (CRIT) and freshness via seed-meta:sanctions:pressure
(maxStaleMin: 720 matches 12h seed TTL). Without this, health was blind to
stale or missing sanctions data.
When upstream APIs fail and seeds extend existing data key TTLs, the
seed-meta key was left untouched. Health checks use seed-meta fetchedAt
to determine staleness, so preserved data still triggered STALE_SEED
warnings even though the data was valid.
Now all TTL extension paths include the corresponding seed-meta key:
- _seed-utils.mjs runSeed() (fetch failure + validation skip)
- fetch-gpsjam.mjs (Wingbits 500 fallback)
- seed-airport-delays.mjs (FAA fetch failure)
- seed-military-flights.mjs (OpenSky fetch failure)
- seed-service-statuses.mjs (RPC fetch failure)
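One plausible shape for the shared pattern (key names illustrative; real keys follow `seed-meta:<domain>:<name>`):

```javascript
// When a fetch fails and the old data key is kept alive, the matching
// seed-meta key needs the same EXPIRE — otherwise it can outlive or
// undershoot the data it describes, and health reports diverge from
// what is actually in Redis.
function buildTtlExtensionCommands(dataKey, seedMetaKey, ttlSeconds) {
  return [
    ['EXPIRE', dataKey, String(ttlSeconds)],
    ['EXPIRE', seedMetaKey, String(ttlSeconds)],
  ];
}
```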
* fix(ucdp): add page error logging, page-0 fallback, and TTL extension on empty
Three resilience improvements for UCDP seed loop:
1. Log actual error messages on page fetch failures instead of silently
swallowing them. Enables diagnosing API outages vs rate limits.
2. Fall back to page 0 data when all newest-page fetches fail. Page 0
is already fetched during version discovery, so this is free. Provides
partial (older) data instead of writing 0 events.
3. When 0 events remain after processing, extend existing Redis key TTL
instead of overwriting with empty payload. Preserves stale-but-valid
data for the next cycle rather than causing EMPTY_DATA CRIT in health.
* fix(ucdp): remove page-0 fallback, stop seed-meta on failed fetches
P1 fixes from review:
- Remove page-0 fallback that overwrote last known good cache with
stale historical data. Extend existing key TTL instead.
- Stop writing fresh seed-meta timestamps when no new payload is
written (both all-pages-failed and empty-after-filtering branches).
Health checks should reflect actual data freshness, not failed attempts.
Add 6 targeted source-analysis tests verifying:
- Error logging on page failures
- No page-0 data injection
- TTL extension on failure branches
- seed-meta only written on successful publish
Mechanical fixes across 13 files:
- isNaN() → Number.isNaN() (all values already numeric from parseFloat/parseInt)
- let → const where never reassigned
- Math.pow() → ** operator
- Unnecessary continue in for loop
- Useless string escape in test description
- Missing parseInt radix parameter
- Remove unused private class member (write-only counter)
- Prefix unused function parameter with _
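Why the isNaN swap is behavior-preserving here:

```javascript
// Global isNaN() coerces its argument, so isNaN('not-a-number') is true
// even though the string is not NaN. Number.isNaN() only returns true
// for the actual NaN value — which is exactly what parseFloat/parseInt
// produce on bad input, so the stricter form is safe for these call sites.
const raw = 'not-a-number';
const parsed = parseFloat(raw);            // NaN
const legacy = isNaN(raw);                 // true (coerces the string)
const strict = Number.isNaN(raw);          // false (a string is not NaN)
const strictParsed = Number.isNaN(parsed); // true
```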
Config: suppress noImportantStyles (CSS !important is intentional) and
useLiteralKeys (bracket notation used for computed/dynamic keys) in
biome.json. Remaining 49 warnings are all noExcessiveCognitiveComplexity
(already configured as warn, safe to address incrementally).
When a seed fetches data but validation rejects it (e.g. FIRMS API
returns 0 fires due to timeout), extend the existing key's TTL
instead of letting it expire. Old data survives until the next
successful fetch. Applies to all seeds using runSeed().
* fix(trade): align flows cache key with seed (US vs World, not China)
The seed writes trade:flows:v1:840:000:10 (US vs World) but the
data-loader requested trade:flows:v1:840:156:10 (US vs China),
causing perpetual cache misses and a hidden Flows tab.
* feat(seed): add bilateral trade flow pairs (US-China, US-Germany, etc.)
Seed now writes both reporter-vs-World AND key bilateral pairs
so switching between global and bilateral views hits warm cache.
* feat(seed): add World-China and World-US bilateral flow pairs
* fix(trade): revert flows to US-China (840/156), seed now covers this key
The bilateral seed entries now write trade:flows:v1:840:156:10,
so the original US-China request hits cache. Keeps the panel
showing bilateral data consistent with the tariffs partner.
* fix(aviation): stop Vercel from calling AviationStack directly
- get-airport-ops-summary: read from relay seed cache (aviation:delays:intl:v3)
instead of calling fetchAviationStackDelays() on every cache miss
- list-airport-flights + get-flight-status: proxy through Railway relay
/aviationstack endpoint instead of calling AviationStack from Vercel edge
- Add /aviationstack proxy endpoint to ais-relay with 2min in-memory cache
Vercel should NEVER call external paid APIs directly. Railway relay is
the sole egress point for AviationStack (gold standard).
* fix(config): update aviationStack feature to require WS_RELAY_URL
Aviation handlers now proxy through Railway relay instead of calling
AviationStack directly. Update runtime-config to reflect the actual
dependency.
Hourly cron + 3600s TTL = data expires right as the next seed starts,
causing a ~30s EMPTY window. Bumped to 4800s (80min) so old data
persists while the new seed runs.
* feat(supply-chain): add SCFI, CCFI, and BDI freight indices to shipping tab
Transform the Shipping Rates tab from 2 lagging monthly FRED indices into
a real-time freight cost dashboard with container and bulk shipping rates.
Seed script: add fetchSCFI/fetchCCFI (SSE JSON API) and fetchBDI (HandyBulk
HTML scrape) with inline history accumulation using source observation dates.
Handler: make cache-only (seed is sole aggregator, no FRED fallback on miss).
Panel: group indices into Container Rates, Bulk Shipping, Economic Indicators.
Tests: 26 functional tests with fixture data for parsers, history, and handler.
* fix(supply-chain): use raw Redis read and correct SCFI composite unit
- Handler: switch from cachedFetchJson (env-prefixed) to getCachedJson(key, true)
so preview deployments read the unprefixed seed key correctly
- Seed: SCFI composite is a dimensionless index, not USD/TEU (route-level unit)
- Tests: update assertions to match both fixes
* feat(trade): add US Treasury customs revenue to Trade Policy panel
US customs duties revenue spiked 4-5x under Trump tariffs (from
$7B/month to $27-31B/month) but the WTO tariff data only goes to
2024. Adds Treasury MTS data showing monthly customs revenue.
- Add GetCustomsRevenue RPC (proto, handler, cache tier)
- Add Treasury fetch to seed-supply-chain-trade.mjs (free API, no key)
- Add Revenue tab to TradePolicyPanel with FYTD YoY comparison
- Fix WTO gate: per-tab gating so Revenue works without WTO key
- Wire bootstrap hydration, health, seed-health tracking
* test(trade): add customs revenue feature tests
22 structural tests covering:
- Handler: raw key mode, empty-cache behavior, correct Redis key
- Seed: Treasury API URL, classification filter, timeout, row
validation, amount conversion, sort order, seed-meta naming
- Panel: WTO gate fix (per-tab not panel-wide), revenue tab
defaults when WTO key missing, dynamic FYTD comparison
- Client: no WTO feature gate, bootstrap hydration, type exports
* fix(trade): align FYTD comparison by fiscal month count
Prior FY comparison was filtering by calendar month, which compared
5 months of FY2026 (Oct-Feb) against only 2 months of FY2025
(Jan-Feb), inflating the YoY percentage. Now takes the first N
months of the prior FY matching the current FY month count.
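The alignment rule can be sketched as (hypothetical row shape: `{ fy, amount }`, sorted oldest-first within each fiscal year):

```javascript
// Compare the first N months of the prior FY, where N is how many
// months the current FY has reported so far — instead of filtering by
// calendar month, which mismatches month counts across fiscal years.
function fytdComparison(currentFyRows, priorFyRows) {
  const n = currentFyRows.length;
  const current = currentFyRows.reduce((s, r) => s + r.amount, 0);
  const prior = priorFyRows.slice(0, n).reduce((s, r) => s + r.amount, 0);
  return { current, prior, yoyPct: ((current - prior) / prior) * 100 };
}
```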
* fix(trade): register treasury_revenue DataSourceId and localize revenue tab
- Add treasury_revenue to DataSourceId union type so freshness
tracking actually works (was silently ignored)
- Register in data-freshness.ts source config + gap messages
- Add i18n keys: revenue tab label, empty state, unavailable banner
- Update infoTooltip to include Revenue tab description
* fix(trade): complete revenue tab localization
Use t() for all remaining hardcoded strings: footer source labels,
FYTD summary headline, prior-year comparison, and table column
headers. Wire the fytdLabel/vsPriorFy keys that were added but
not used.
* fix(test): update revenue source assertion for localized string
Kalshi multi-outcome events return market titles like "Before 2035",
"Rhode Island", "Johnny Depp" which are meaningless without the parent
event context. Now combines event title with market title when the
market title is short and doesn't contain a question mark.
Before: "Before 2035" (KALSHI)
After: "Will AGI be achieved?: Before 2035" (KALSHI)
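A minimal sketch of the combining rule (the 30-character cutoff is an assumption; the real threshold lives in the handler):

```javascript
// Combine event title with market title when the market title is short
// and is not itself a question — short fragments like 'Before 2035' are
// meaningless without the parent event context.
function combineKalshiTitle(eventTitle, marketTitle) {
  const isShort = marketTitle.length <= 30;
  const isQuestion = marketTitle.includes('?');
  if (isShort && !isQuestion) return `${eventTitle}: ${marketTitle}`;
  return marketTitle;
}
```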
The corridorrisk raw key (2h TTL) expires between hourly seed cycles,
causing health to report EMPTY even though data flows correctly through
transit-summaries:v1.
- Increase CORRIDOR_RISK_TTL from 2h to 4h (3 retries before expiry)
- Add corridorrisk to ON_DEMAND_KEYS (WARN instead of CRIT when empty)
The top-level import crashes seed-forecasts on Railway when the
package isn't installed. Dynamic import defers the load to when
S3 mode is actually used, allowing the seed to run without the
SDK when R2 is not configured.
* feat(forecast): add AI Forecasts prediction module (Pro-tier)
MiroFish-inspired prediction engine that generates structured forecasts
across 6 domains (conflict, market, supply chain, political, military,
infrastructure) using existing WorldMonitor data streams.
- Proto definitions for ForecastService with GetForecasts RPC
- Dedicated seed script (seed-forecasts.mjs) with 6 domain detectors,
cross-domain cascade resolver, prediction market calibration, and
trend detection via prior snapshot comparison
- Premium-gated RPC handler (PREMIUM_RPC_PATHS enforcement)
- Lazy-loaded ForecastPanel with domain filters, probability bars,
trend arrows, signal evidence, and cascade links
- Health monitoring integration (seed-meta freshness tracking)
- Refresh scheduler with API key guard
* test(forecast): add 47 unit tests for forecast detectors and utilities
Covers forecastId, normalize, resolveCascades, calibrateWithMarkets,
computeTrends, and smoke tests for all 6 domain detectors. Exports
testable functions from seed script with direct-run guard.
* fix(forecast): domain mismatch 'infra' vs 'infrastructure', add panel category
- Seed script used 'infra' but ForecastPanel filtered on 'infrastructure',
causing Infra tab to show zero results
- Added 'forecast' to intelligence category in PANEL_CATEGORY_MAP
* fix(forecast): move CSS to one-time injection, improve type safety
- P2: Move style block from setContent to one-time document.head injection
to prevent CSS accumulation on repeated renders
- P3: Replace +toFixed(3) with Math.round for readability in seed script
- P3: Use Forecast type instead of any[] in RPC handler filter
* fix(forecast): handle sebuf proto data shapes from Redis
Detectors now normalize CII scores from server-side proto format
(combinedScore, TREND_DIRECTION_RISING, region) to uniform shape.
Outage severity handles proto enum format (SEVERITY_LEVEL_HIGH).
Added confidence floor of 0.3 for single-source predictions.
Verified against live Redis: 2 predictions generated (Iran infra
shutdown, IL political instability).
* feat(forecast): unlock AI Forecasts on web, lock desktop only (trial)
- Remove forecast RPC from PREMIUM_RPC_PATHS (web access is free)
- Panel locked on desktop only (same as oref-sirens/telegram-intel)
- Remove API key guards from data-loader and refresh scheduler
- Web users get full access during trial period
* chore: regenerate proto types with make generate
Re-ran make generate after rebasing on main. Plugin v0.7.0 dropped
@ts-nocheck from the output, so it was added back to all 50 generated files.
Fixed 4 type errors from proto codegen changes:
- MarketSource enum -> string union type
- TemporalAnomalyProto -> TemporalAnomaly rename
- webcam lastUpdated number -> string
* chore: add proto freshness check to pre-push hook
Runs make generate before push and compares checksums of generated files.
If proto types are stale, blocks push with instructions to regenerate.
Skips gracefully if buf CLI is not installed.
* fix(forecast): use chokepoints v4 key, include ciiContribution in unrest
- P1: Switch chokepoints input from stale v2 to active v4 Redis key,
matching bootstrap.js and cache-keys.ts
- P2: Add ciiContribution to unrest component fallback chain in
normalizeCiiEntry so political detector reads the correct sebuf field
* feat(forecast): Phase 2 LLM scenario enrichment + confidence model
MiroFish-inspired enhancements:
- LLM scenario narratives via Groq/OpenRouter (narrative-only, no numeric
adjustment). Evidence-grounded prompts with mandatory signal citation
and few-shot examples from MiroFish's SECTION_SYSTEM_PROMPT_TEMPLATE.
- Top-4 predictions batched into single LLM call for cost efficiency.
- News context from newsInsights attached to all predictions for LLM
prompt grounding (NOT in signals, cannot affect confidence).
- Deterministic confidence model: source diversity via SIGNAL_TO_SOURCE
mapping (deduplicates cii+cii_delta, theater+indicators) + calibration
agreement from prediction market drift. Floor 0.2, ceiling 1.0.
- Output validation: rejects scenarios without signal references.
- Truncated JSON repair for small model output.
- Structured JSON logging for LLM calls.
- Redis cache for LLM scenarios (1h TTL).
- 23 new tests (70 total), all passing.
- Live-tested: OpenRouter gemini-2.5-flash produces evidence-grounded
scenario narratives from real WorldMonitor data.
* feat(forecast): Phase 3 multi-perspective scenarios, projections, data-driven cascades
MiroFish-inspired enhancements:
- Multi-perspective LLM analysis: top-2 predictions get strategic,
regional, and contrarian viewpoints via combined LLM call
- Probability projections: domain-specific decay curves (h24/d7/d30)
anchored to timeHorizon so probability equals projections[timeHorizon]
- Data-driven cascade rules: moved from hardcoded array to JSON config
(scripts/data/cascade-rules.json) with schema validation, named
predicate evaluators, unknown key rejection, and fallback to defaults
- 4 new cascade paths: infrastructure->supply_chain, infrastructure->market
(both requiresSeverity:total), conflict->political, political->market
- Proto: added Perspectives and Projections messages to Forecast
- ForecastPanel: renders projections row and conditional perspectives toggle
- 89 tests (19 new), all passing
- Live-tested: OpenRouter produces perspectives from real data
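The anchoring invariant for projections can be sketched as (decay weights here are illustrative, not the real per-domain values):

```javascript
// A domain-specific decay curve gives relative weights for h24/d7/d30;
// the curve is then scaled so the bucket matching the prediction's
// timeHorizon equals its probability exactly.
function projectProbability(probability, timeHorizon, decay = { h24: 1.0, d7: 0.8, d30: 0.6 }) {
  const scale = probability / decay[timeHorizon];
  return {
    h24: Math.min(1, decay.h24 * scale),
    d7: Math.min(1, decay.d7 * scale),
    d30: Math.min(1, decay.d30 * scale),
  };
}
```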
* feat(forecast): Phase 4 data utilization + entity graph
Fixes data gaps that prevented 4 of 6 detectors from firing:
- Input normalizers: chokepoint v4 shape + GPS hexes-to-zones mapping
- Chokepoint warm-ping (production-only, requires WM_API_BASE_URL)
- Lowered CII conflict threshold from 70 to 60, gated on level=high|critical
4 new standalone detectors:
- UCDP conflict zones (10+ events per country)
- Cyber threat concentration (5+ threats per country)
- GPS jamming in maritime shipping zones (5 regions)
- Prediction markets as signals (60-90% probability markets)
Entity-relationship graph (file-based, 38 nodes):
- Countries, theaters, commodities, chokepoints, alliances
- Alias table resolves both ISO codes and display names
- Graph cascade discovery links predictions across entities
Result: 51 predictions (up from 1-2), spanning conflict, infrastructure,
and supply chain domains. 112 tests, all passing.
* fix(forecast): redis cache format, signal source mapping, type safety
Fresh-eyes audit fixes:
- BUG: redisSet used wrong Upstash API format (POST body with {value,ex}
instead of command array ['SET',key,value,'EX',ttl]). LLM cache writes
were silently failing, causing fresh LLM calls every run.
- BUG: prediction_market signal type missing from SIGNAL_TO_SOURCE,
inflating confidence for market-derived predictions.
- CLEANUP: Remove unnecessary (f as any) casts in ForecastPanel since
generated Forecast type already has projections/perspectives fields.
- CLEANUP: Bump health maxStaleMin from 60 to 90 to avoid false STALE
alerts when LLM calls add latency to seed runs.
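The corrected request shape, sketched as a pure command builder (the surrounding fetch call to the Upstash REST endpoint is omitted):

```javascript
// Upstash's REST API executes a Redis command posted as a JSON array,
// e.g. ['SET', key, value, 'EX', ttl]. The earlier {value, ex} body was
// not a valid command, so cache writes silently failed.
function buildUpstashSetCommand(key, value, ttlSeconds) {
  return ['SET', key, JSON.stringify(value), 'EX', String(ttlSeconds)];
}
```

The array is then POSTed as the JSON body with the usual bearer-token Authorization header.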
* feat(forecast): headline-entity matching with news corroboration signals
Uses entity graph aliases to match headlines to predictions by
country/theater (excludes commodity/infrastructure nodes to prevent
false positives). Predictions with matching headlines get a
news_corroboration signal visible in the panel.
Also fixes buildUserPrompt to merge unique headlines from ALL
predictions in the LLM batch (was only reading preds[0].newsContext).
Live-tested: 13 of 51 predictions now have corroborating headlines
(Iran, Israel, Syria, Ukraine, etc). 116 tests, all passing.
* feat(forecast): add country-codes.json for headline-entity matching
56 countries with ISO codes, full names, and scoring keywords (extracted
from src/config/countries.ts + UCDP-relevant additions). Used by
attachNewsContext for richer headline matching via getSearchTermsForRegion
which combines country-codes + entity graph + keyword aliases.
14/57 predictions now have news corroboration (limited by headline
coverage, not matching quality: only 8 headlines currently available).
* feat(forecast): read 300 headlines from news digest instead of 8
Read news:digest:v1:full:en (300 headlines across 16 categories) instead
of just news:insights:v1 topStories (8 headlines). Fallback to topStories
if digest is unavailable.
Result: news corroboration jumped from 25% to 64% (38/59 predictions).
* fix(forecast): handle parenthetical country names in headline matching
Strip suffixes like '(Zaire)', '(Burma)', '(Soviet Union)' from UCDP
region names before matching against country-codes.json. Also use
includes() for reverse name lookup to catch partial matches.
Corroboration: 64% -> 69% (41/59). Remaining 18 unmatched are countries
with no current English-language news coverage.
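The normalization step can be sketched as:

```javascript
// UCDP region names carry a historical alias in a trailing parenthetical
// (e.g. '(Zaire)', '(Burma)') that country-codes.json does not have, so
// strip it before lookup.
function stripParenthetical(name) {
  return name.replace(/\s*\([^)]*\)\s*$/, '').trim();
}
```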
* fix(forecast): cache validated LLM output, add digest test, log cache errors
Fresh-eyes audit fixes:
- Combined LLM cache now stores only validated items (was caching raw
unvalidated output, serving potentially invalid scenarios on cache hit)
- redisSet logs warnings on failure (was silently swallowing all errors)
- Added digest-based test for attachNewsContext (primary path was untested)
- Fixed test arity: attachNewsContext(preds, news, digest) with 3 params
* fix(forecast): remove dead confidenceFromSources, reduce warm-ping timeout
- P2: Remove confidenceFromSources (dead code, computeConfidence overwrites
all initial confidence values). Inline the formula in original detectors.
- P3: Reduce warm-ping timeout from 30s to 15s (non-critical step)
- P3: Add trial status comment on forecast panel config
* fix(forecast): resolve ISO codes to country names, fix market detector, safe pre-push
P1 fixes from code review:
- CII ISO codes (IL, IR) now resolved to full country names (Israel, Iran)
via country-codes.json. Prevents substring false positives (IL matching
Chile) in event correlation. Uses word-boundary regex for matching.
- Market detector CII-to-theater mapping now uses entity graph traversal
instead of broken theater-name substring matching. Iran correctly maps
to Middle East theater via graph links.
- Pre-push hook no longer runs destructive git checkout on proto freshness
failure. Reports mismatch and exits without modifying worktree.
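The matching fix can be sketched as (the resolver map here is a two-entry stand-in for country-codes.json):

```javascript
// A plain substring test makes the ISO code 'IL' match 'Chile'.
// Resolving the code to its full country name and matching with a
// word-boundary regex avoids the false positive.
const ISO_TO_NAME = { IL: 'Israel', IR: 'Iran' };
function mentionsCountry(text, isoCode) {
  const name = ISO_TO_NAME[isoCode] ?? isoCode;
  return new RegExp(`\\b${name}\\b`, 'i').test(text);
}
```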
* feat(forecast): add structured scenario pipeline and trace export
* fix(forecast): hydrate bootstrap and trim generated drift
* fix(forecast): keep required supply-chain contract updates
* fix(ci): add forecasts to cache-keys registry and regenerate proto
Add forecasts entry to BOOTSTRAP_CACHE_KEYS and BOOTSTRAP_TIERS in
cache-keys.ts to match api/bootstrap.js. Regenerate SupplyChain proto
to fix duplicate TransitDayCount and add riskSummary/riskReportAction.
* fix(data): restore bootstrap and cache test coverage
* fix: resolve linting and test failures
- Remove dead writeSeedMeta/estimateRecordCount functions from redis.ts
(intentionally removed from cachedFetchJson; seed-meta now written
only by explicit seed flows, not generic cache reads)
- Fix globe dayNight test to match actual code (forces dayNight: false
+ hideLayerToggle, not catalog-based exclusion)
- Fix country-geometry test mock URL from CDN to /data/countries.geojson
(source changed to use local bundled file)
* fix(lint): remove duplicate llm-health key in redis-caching test
Duplicate object key '../../../_shared/llm-health' caused the stub
to be overwritten by the real module. Removed the second entry so
the test correctly uses the stub.
Validation now accepts an empty ACLED events array when humanitarian or
pizzint data was fetched. Previously the seed wrote extra keys
(humanitarian, pizzint) but skipped the canonical key because
validateFn required non-empty events.
The standalone seed-usni-fleet.mjs cannot reach USNI because:
1. USNI Cloudflare blocks Node.js TLS fingerprint (JA3)
2. curl is not installed on Railway cron containers
3. Froxy residential proxy is IP-whitelisted to the relay fixed IP
Move the USNI seed loop back into ais-relay.cjs where it has access to
curl + the whitelisted proxy. Uses orefCurlFetch for the fetch, same
pattern as the OREF alerts loop. Writes to the same Redis keys
(usni-fleet:sebuf:v1, stale:v1, seed-meta:military:usni-fleet).
6h seed interval, 7h TTL, 7d stale TTL (unchanged from standalone).
* fix(seeds): improve resilience and fix dead APIs across seed scripts
- Fix wrong domain in seed-service-statuses (worldmonitor.app to api.worldmonitor.app)
- Fix Kalshi API domain migration (trading-api.kalshi.com to api.elections.kalshi.com)
- Replace dead trending APIs (gitterapp.com, herokuapp.com) with OSSInsight + GitHub Search
- Fix case-sensitive HTML detection in seed-usni-fleet (lowercase doctype not matched)
- Add Promise.allSettled rejection logging across 8 seed scripts
- Wrap fetch loops in try-catch (seed-supply-chain-trade, seed-economy) so a single
network error no longer kills the entire function
- Update list-trending-repos.ts RPC handler to match seed changes
* fix(seeds): correct OSSInsight response parsing and period-aware GitHub Search fallback
- OSSInsight returns {data: {rows: [...]}} not {data: [...]}, fix both seed and handler
- GitHub Search fallback now respects period parameter (daily=1d, weekly=7d, monthly=30d)
* fix(seeds): correct OSSInsight period values (past_week/past_month, not past_7_days/past_28_days)
Kalshi public market data endpoints require no authentication. Remove
the KALSHI_API_KEY gate that was disabling Kalshi entirely when the
env var was missing, and drop the Authorization header.
Rewrite the Vercel RPC handler to read from Railway-seeded Redis only
(gold standard), removing the fallback that fetched directly from
Gamma/Kalshi APIs on Vercel edge. Handler goes from 330 to 85 lines.
Double all prediction timing values to reduce Railway cron cost:
- Redis TTL: 15min -> 30min
- Health maxStaleMin: 15min -> 30min
- Client hydration freshness: 20min -> 40min
- Railway cron: 10min -> 20min (requires dashboard update)
* feat(advisories): gold standard migration for security advisories
Move security advisories from client-side RSS fetching (24 feeds per
page load) to Railway cron seed with Redis-read-only Vercel handler.
- Add seed script fetching via relay RSS proxy with domain allowlist
- Add ListSecurityAdvisories proto, handler, and RPC cache tier
- Add bootstrap hydration key for instant page load
- Rewrite client service: bootstrap -> RPC fallback, no browser RSS
- Wire health.js, seed-health.js, and dataSize tracking
* fix(advisories): empty RPC returns ok:true, use full country map
P1 fixes from Codex review:
- Return ok:true for empty-but-successful RPC responses so the panel
clears to empty instead of stuck loading on cold environments
- Replace 50-entry hardcoded country map with 251-entry shared config
generated from the project GeoJSON + aliases, matching coverage of
the old client-side nameToCountryCode matcher
* fix(advisories): add Cote d'Ivoire and other missing country aliases
Adds 14 missing aliases including "cote d ivoire" (US State Dept
title format), common article-prefixed names (the Bahamas, the
Gambia), and alternative official names (Czechia, Eswatini, Cabo
Verde, Timor-Leste).
* fix(proto): inject @ts-nocheck via Makefile generate target
buf generate does not emit @ts-nocheck, but tsc strict mode rejects
the generated code. Adding a post-generation sed step in the Makefile
ensures both CI proto-freshness (make generate + diff) and CI
typecheck (tsc --noEmit) pass consistently.
Node 20's fetch() (undici) tries IPv6 first. Railway containers don't
support IPv6 (IPV6_NDISC failures in network trace), causing all seed
services to crash.
Fix: set NODE_OPTIONS=--dns-result-order=ipv4first via nixpacks.toml
so all Railway services prefer IPv4. Keeps Node 20 for import attributes.
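A minimal sketch of the nixpacks.toml fragment (the `[variables]` table is nixpacks' mechanism for injecting env vars; the flag itself is the real Node option named above):

```toml
[variables]
NODE_OPTIONS = "--dns-result-order=ipv4first"
```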
* test: rewrite transit chart test as structural contract verification
Replace fragile source-string extraction + new Function() compilation
with structural pattern checks on the source code. Tests verify:
- render() clears chart before content change
- clearTransitChart() cancels timer, disconnects observer, destroys chart
- MutationObserver setup for DOM readiness detection
- Fallback timer for no-op renders (100-500ms range)
- Both callbacks (observer + timer) clean up each other
- Tab switch and collapse clear chart state
- Mount function guards against missing element/data
Replaces PR #1634's approach which was brittle (method body extraction,
TypeScript cast stripping, sandboxed execution).
* fix: log fetch error cause in seed retry and FATAL handlers
Node 20 fetch() throws TypeError('fetch failed') with the real error
hidden in err.cause (DNS, TLS, timeout). The current logging only shows
'fetch failed' which is useless for diagnosis. Now logs err.cause.message
in both withRetry() retries and FATAL catch blocks.
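The logging helper can be sketched as:

```javascript
// Node's fetch wraps the real failure (DNS, TLS, timeout) in err.cause,
// so surface that instead of the generic 'fetch failed' message.
function describeFetchError(err) {
  const cause = err && err.cause && err.cause.message;
  return cause ? `${err.message} (cause: ${cause})` : (err?.message ?? String(err));
}
```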
* fix(usni-fleet): add Node.js HTTP CONNECT proxy fallback, detect Cloudflare HTML
curl is not available in Railway's Railpack v0.18.0 containers. The seed
was failing with ENOENT on curl, then getting Cloudflare-blocked on
Node.js direct.
- Add fetchViaHttpProxy: Node.js HTTP CONNECT tunnel through residential
proxy (no curl dependency). Uses the same RESIDENTIAL_PROXY_AUTH env.
- Add Cloudflare HTML detection: reject early when response starts with
<!DOCTYPE instead of passing HTML to JSON.parse.
- Fallback chain: curl direct -> curl+proxy -> Node.js+proxy -> Node.js direct
- Add nixpacks.toml with curl for future Railpack builds
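The early-reject guard can be sketched as:

```javascript
// A Cloudflare challenge page comes back as HTML, so refuse to
// JSON.parse anything that starts with a doctype or <html> tag.
// Case-insensitive to catch lowercase '<!doctype html>'.
function looksLikeHtml(body) {
  const head = body.trimStart().slice(0, 15).toLowerCase();
  return head.startsWith('<!doctype') || head.startsWith('<html');
}
```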
* fix: use ESM import for node:http (require breaks in .mjs)
Previously, every merge to main triggered a Vercel build even for
scripts-only changes (seed scripts, relay updates). Now checks if
any web-relevant files changed on main too, skipping the build when
only scripts/, docs/, .github/, etc. are modified.