* ci: skip typecheck for scripts-only PRs; fix vercel-ignore empty SHA
Typecheck workflow:
- Add paths-ignore for scripts/** and .github/** on pull_request and push.
Seed scripts are plain .mjs — not TypeScript — so typechecking adds ~2min
with zero coverage benefit for scripts-only changes.
vercel-ignore.sh:
- When VERCEL_GIT_PREVIOUS_SHA is empty or invalid (can happen on incremental
PR pushes), fall back to git merge-base HEAD origin/main instead of defaulting
to exit 1 (build). This was causing Vercel to deploy on scripts-only PRs even
though the ignore script correctly excludes scripts/ from web-relevant paths.
* fix(ci): remove .github/** from typecheck paths-ignore to unblock PR
* security: add unicode safety guard to hooks and CI
* fix(unicode-safety): drop FE0F, PUA; fix col tracking; scan .husky/
- Remove FE0F (emoji presentation selector) from suspicious set — it
false-positives on ASCII keycap sequences (#️⃣ etc.) in source strings
- Remove Private Use Area (E000–F8FF) check — not a parser attack vector
and legitimately used by icon font string literals
- Fix column tracking for astral-plane characters (cp > 0xFFFF): increment
by 2 to match UTF-16 editor column positions
- Remove now-unused prevCp variable
- Add .husky/ to SCAN_ROOTS and '' to INCLUDED_EXTENSIONS so extensionless
hook scripts (pre-commit, pre-push) are included in full-repo scans
---------
Co-authored-by: Elie Habib <elie.habib@gmail.com>
* fix(gdelt): exponential backoff + post-exhaust cooldown for 429s
Military and cyber queries consistently 429 all retry attempts
because GDELT's rate limit window exceeds the previous 50s max
backoff. Old linear 20/35/50s was too short.
Changes:
- Backoff: 60s, 120s, 240s (exponential) instead of 20/35/50s
- POST_EXHAUST_DELAY: 2min cooldown after a topic exhausts all
retries before moving to the next topic, giving GDELT's sliding
window time to reset
- `exhausted: true` flag on retry-exhausted results to trigger cooldown
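A minimal sketch of the retry schedule this commit describes. The 60/120/240s delays, the 2min cooldown, and the `exhausted: true` flag come from the commit; `fetchTopic` and the surrounding control flow are hypothetical stand-ins, not the repo's actual fetchWithRetry:

```javascript
const BACKOFFS_MS = [60_000, 120_000, 240_000]; // exponential, replaces 20/35/50s
const POST_EXHAUST_DELAY_MS = 2 * 60_000;       // cooldown after a topic exhausts retries

async function fetchWithBackoff(fetchTopic) {
  for (let attempt = 0; attempt <= BACKOFFS_MS.length; attempt++) {
    const res = await fetchTopic();
    if (res.status !== 429) return { articles: res.articles ?? [], exhausted: false };
    if (attempt < BACKOFFS_MS.length) {
      await new Promise((r) => setTimeout(r, BACKOFFS_MS[attempt]));
    }
  }
  // Every attempt hit 429: flag so the caller can sleep POST_EXHAUST_DELAY_MS
  // before moving to the next topic
  return { articles: [], exhausted: true };
}
```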
* fix(gdelt): scope post-exhaust cooldown to 429 exhaustion only
exhausted:true was set for any terminal error (500, timeout, bad JSON),
causing the 2min rate-limit cooldown to fire for non-rate-limit failures.
Only mark exhausted:true when 429 was the reason the topic gave up.
- seed-economy: skip Redis write for macro-signals when totalCount=0 so
Yahoo failures don't overwrite previously cached good data
- seed-economy: raise MACRO_TTL 900→1800s so valid data survives two
missed 15-min cron cycles
- health: tighten macroSignals maxStaleMin 60→20 to alert before TTL
expires (was 4x longer than the data lifetime)
- ais-relay: add one retry pass for commodity symbols that return null,
with 3s cooldown, to recover from transient Yahoo 429 rate limits
* fix(health): adjust gdeltIntel maxStaleMin for 6h cron; fix silent EXPIRE no-op on expired keys
- gdeltIntel maxStaleMin: 150 → 420 (6h cron + 1h grace). The 150 threshold was
calibrated for the old 2h cron — with 6h intervals it fires STALE throughout
most of each cycle, masking the signal entirely.
- _seed-utils extendExistingTtl: EXPIRE returns 0 (no-op) on expired/missing keys,
but the log always said "Extended TTL on N key(s)" regardless. Added per-result
checking: keys that returned 0 now emit a WARNING so the death-spiral condition
(validate fails + key expired + EXPIRE is silently a no-op) is visible in logs
rather than silently passing as if TTL was extended.
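The per-result checking above can be sketched as follows. Redis EXPIRE does return 0 for a missing or expired key, which is the behavior being detected; the helper name matches the commit, but the client shape and log wording are assumptions:

```javascript
// Extend TTLs and surface any EXPIRE no-ops instead of always logging success.
async function extendExistingTtl(redis, keys, ttlSeconds) {
  const results = await Promise.all(keys.map((k) => redis.expire(k, ttlSeconds)));
  const missed = keys.filter((_, i) => results[i] === 0); // 0 = key expired/missing
  if (missed.length > 0) {
    console.warn(`WARNING: EXPIRE no-op on ${missed.length} key(s): ${missed.join(', ')}`);
  }
  console.log(`Extended TTL on ${keys.length - missed.length} key(s)`);
  return missed;
}
```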
* fix(seed-health): align gdelt-intel intervalMin to 210 (420min maxStaleMin / 2)
Codex flagged mismatch: health.js allows 420min before flagging gdelt-intel
stale, but seed-health.js still used intervalMin: 150 (flags after 300min).
Ops tooling monitoring seed-health would generate spurious alerts for most
of each 6h cron cycle. Align to 210min per the maxStaleMin/2 convention.
* fix(widgets): reduce design drift between AI widgets and default panels
- System prompt: add explicit visual design rules with anti-patterns
(no font-family overrides, CSS vars only, max 4px radius, compact rows)
- Restore 6 intel topics (sanctions + intelligence), fix disp-stat → disp-stat-box class name
- Add correct HTML pattern examples for rows, tables, stats grids
- widget-sanitizer.ts PRO iframe: monospace font stack, CSS variable palette,
font-family:inherit!important on *, table/th/td baseline styles, change-positive/negative classes
- main.css: enforce font-family:inherit!important on .wm-widget-generated and all descendants
so inline style="font-family:..." from AI output cannot override the monospace look
* fix(widgets): correct table wrapper structure, ban PRO style blocks
- trade-tariffs-table is a wrapper div around <table>, not a class on
the table itself; fix example and add explicit anti-pattern note
- PRO widget prompt: disallow <style> blocks in body content since
they load after the iframe head CSS and can override the monospace
font and dark palette guardrails (source-order wins)
* feat: effective tariff rate source
* fix(trade): extract parse helpers, fix tests, add health monitoring
- Extract htmlToPlainText/toIsoDate/parseBudgetLabEffectiveTariffHtml
to scripts/_trade-parse-utils.mjs so tests can import directly
- Fix toIsoDate to use month-name lookup instead of fragile
new Date(\`\${text} UTC\`) which is not spec-guaranteed
- Replace new Function() test reconstruction with direct ESM import
- Add test fixtures for parser patterns 2 and 3 (previously untested)
- Add tariffTrendsUs to health.js STANDALONE_KEYS + SEED_META
(key trade:tariffs:v1:840:all:10, maxStaleMin 900 = 2.5x the 6h TTL)
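The month-name-lookup approach can be sketched as below. The motivation is real: `new Date()` parsing of non-ISO strings is implementation-defined, so a table is safer. The regex and return shape here are illustrative, not the exact helper in _trade-parse-utils.mjs:

```javascript
const MONTHS = {
  january: '01', february: '02', march: '03', april: '04', may: '05', june: '06',
  july: '07', august: '08', september: '09', october: '10', november: '11', december: '12',
};

// e.g. "March 19, 2026" -> "2026-03-19"; returns null on anything unrecognized
function toIsoDate(text) {
  const m = /^([A-Za-z]+)\s+(\d{1,2}),\s*(\d{4})$/.exec(text.trim());
  if (!m) return null;
  const month = MONTHS[m[1].toLowerCase()];
  if (!month) return null;
  return `${m[3]}-${month}-${m[2].padStart(2, '0')}`;
}
```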
* fix(test): update sourceVersion assertion for budgetlab addition
---------
Co-authored-by: Elie Habib <elie.habib@gmail.com>
Adds India (IN) to country-boundary-overrides.geojson using the same
Natural Earth 50m Admin 0 Countries dataset already used for Pakistan.
The override system automatically replaces the base geometry on app load,
providing a higher-resolution MultiPolygon boundary (1518 points).
Renames the fetch script to reflect its broader purpose and updates
doc references in CONTRIBUTING.md and maps-and-geocoding.mdx.
Fixes koala73/worldmonitor#1721
Co-authored-by: stablegenius49 <185121704+stablegenius49@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Elie Habib <elie.habib@gmail.com>
* fix(gdelt): relax validate threshold from 4 to 2 topics to prevent STALE on partial 429s
When GDELT rate-limits any single topic, fetchWithRetry returns { articles: [] }.
With validate requiring all 4 topics populated, a single 429 causes atomicPublish
to skip the write entirely — seed-meta TTL is NOT extended. After 2-3 consecutive
cron runs that fail validation, seedAge exceeds maxStaleMin:300 and health reports
STALE_SEED.
Fix: require only 2/4 topics to have articles. A partial snapshot is far better
than no write at all — it keeps seed-meta fresh and prevents recurring STALE alerts.
Also fix stale comment claiming cron runs every 4h (actual cron is 2h).
* fix(gdelt): merge previous snapshot for 429-failed topics instead of overwriting with empty
The earlier validate relaxation (>= 2) still overwrote previously cached articles
for rate-limited topics with empty arrays, causing blank tabs and empty RPC responses
for up to the 6h TTL even though good data was available.
Fix: after fetching all topics, read the existing canonical key from Redis. For any
topic that returned 0 articles due to 429 exhaustion, restore the previous snapshot's
articles for that topic before publishing. The validate >= 2 threshold remains as a
safety net for first-run or total GDELT outages.
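The restore step can be sketched like this, assuming per-topic results carry the `exhausted` flag from the earlier backoff commit. The snapshot shape and function name are illustrative:

```javascript
// For any topic that gave up on 429 with zero articles, keep the previous
// snapshot's articles rather than publishing an empty array over good data.
function mergeWithPrevious(fresh, previous) {
  const merged = {};
  for (const [topic, result] of Object.entries(fresh)) {
    const useCached =
      result.exhausted && result.articles.length === 0 && previous?.[topic]?.length > 0;
    merged[topic] = useCached ? previous[topic] : result.articles;
  }
  return merged;
}
```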
* feat(aviation): add Wingbits as civilian flight fallback when OpenSky is unreachable
OpenSky is completely unreachable from Railway (direct fetch blocked,
residential proxy returning CONNECT 422). Wingbits is healthy with
2000+ flights per cycle.
Adds /wingbits/track relay endpoint that accepts a viewport bbox,
queries Wingbits /v1/flights, and returns all flights (no military
filter) as PositionSample objects. Wires it as third source in
trackAircraft after OpenSky relay and direct OpenSky attempts fail.
OpenSky remains first in the waterfall so it auto-recovers when the
proxy is fixed.
* fix(aviation): scale Wingbits bbox width by cos(lat) and read all onGround variants
P1: longitude degree is only 60 nm at the equator; multiply by cos(centerLat)
to avoid over-expanding the bbox at high latitudes.
P2: honour f.og ?? f.gr ?? f.onGround to match all Wingbits field variants
(mirrors seed-military-flights.mjs:871).
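Both fixes can be sketched in a few lines. The `f.og ?? f.gr ?? f.onGround` chain is from the commit; the span-conversion helper is a plausible reconstruction of the cos(lat) scaling, not the repo's exact code:

```javascript
// Convert a viewport's longitude span to nautical miles for the Wingbits query.
// A longitude degree is ~60 nm only at the equator; without the cos(lat) factor
// the span is over-estimated at high latitudes.
function lonSpanNm(westDeg, eastDeg, centerLatDeg) {
  const degSpan = eastDeg - westDeg;
  return degSpan * 60 * Math.cos((centerLatDeg * Math.PI) / 180);
}

// Honour every Wingbits onGround field variant; ?? falls through only on
// null/undefined, so an explicit `og: false` is respected.
const isOnGround = (f) => Boolean(f.og ?? f.gr ?? f.onGround ?? false);
```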
- Add IndexNow verification key file (public/a7f3e9d1b2c44e8f9a0b1c2d3e4f5a6b.txt)
- Update sitemap.xml lastmod to 2026-03-19 to signal freshness to crawlers
- Add lastmod dates to blog sitemap via @astrojs/sitemap serialize()
- Add scripts/seo-indexnow-submit.mjs to resubmit all 23 URLs to IndexNow
(run after deploy: node scripts/seo-indexnow-submit.mjs)
* fix(gdelt): reduce topics 6→4 to cut 429 rate-limit pressure
Drops sanctions and intelligence topics (covered by other data sources).
Keeps military, cyber, nuclear, maritime as core high-signal topics.
Happy-path runtime drops from ~2.5min to ~1.5min. Worst-case retry
storm is now 3 gaps instead of 5, significantly reducing total backoff
duration per run on the 2h cron cycle.
Lowers validation threshold from ≥3 to ≥2 of 4 topics.
* fix(gdelt): reduce cron to 4h and extend TTL to 6h
Data from GDELT's 24h window doesn't turn over fast enough to justify
2h polling. Switching to 4h halves Railway runs (12/day → 6/day) and
doubles the cooldown between IP hits, reducing 429 pressure.
TTL bumped 4h→6h so cached data outlives the 4h cron gap.
health.js maxStaleMin 200→300 (5h, warning window before 6h expiry).
seed-health.js intervalMin 100→150 (150×2=300 = maxStaleMin).
Railway cron schedule needs updating to: 0 */4 * * *
* fix(gdelt): sync UI topic list with seed and require all 4 topics for write
P1: Remove sanctions and intelligence from INTEL_TOPICS in
src/services/gdelt-intel.ts so the panel no longer renders tabs that
can never be hydrated from the 4-topic Redis payload. Tabs now match
the seed: military, cyber, nuclear, maritime.
P2: Raise validation threshold back to >=4 (all topics required).
With >=2 a partial run would overwrite the complete snapshot with
incomplete data, making missing tabs show blank panels until the next
full run. Requiring all 4 means a partial run extends the existing
TTL instead of replacing good data with bad.
Audit revealed 6 mismatches where data TTLs or maxStaleMin thresholds were
too tight relative to cron intervals, causing spurious STALE_SEED warnings.
- gdeltIntel: maxStaleMin 180 → 240 (1h cron now has 4x buffer vs 3x)
- securityAdvisories: TTL 70m → 120m, maxStaleMin 90 → 120 (2x cron buffer)
- techEvents: TTL 360m → 480m, maxStaleMin 420 → 480 (1h buffer for 6h cron)
- forecasts: TTL 80m → 105m (outlives maxStaleMin:90 when cron is delayed)
- correlationCards: TTL 10m → 20m (data outlives maxStaleMin:15)
- usniFleet: maxStaleMin 420 → 480 (extra buffer for relay-seeded key)
* fix(health): prevent cableHealth EMPTY/CRIT when NGA has no active warnings
When the NGA broadcast-warn API returns 0 active warnings, computeHealthMap
produces an empty cables object. The handler wrote recordCount:0 to seed-meta,
causing health.js to report EMPTY (CRIT) even though the feed is operational.
Zero cable disruptions is a valid healthy state, not missing data.
Write Math.max(count, 1) so health.js only fires CRIT if the feed is
genuinely broken (no seed-meta at all), not when NGA reports no disruptions.
* fix(health): fix cableHealth EMPTY/CRIT — TTL shorter than warm-ping interval
Root cause: CACHE_TTL was 600s (10 min) but the relay warm-ping runs every
30 min. The cable-health-v1 Redis key expired 20 min before the next ping,
causing health.js to see a missing key → EMPTY → CRIT for 20 of every 30 min.
Fix: increase CACHE_TTL to 3600s (1h) to match health.js maxStaleMin:60 and
outlive the warm-ping interval with margin.
Also reverts the earlier incorrect Math.max(count,1) change — health.js reads
the cached payload directly, not meta.recordCount.
* fix(health): fix spending EMPTY/CRIT — TTL matched seed interval exactly
SPENDING_CACHE_TTL was 3600s = SPENDING_SEED_INTERVAL_MS (1h). At exact
equality the key expires the moment the next seed runs, causing a window
where health.js sees EMPTY → CRIT. Double the TTL to 2h so the key always
outlives the seeder.
* fix(health): fix weather CACHE_TTL matching seed interval exactly
WEATHER_CACHE_TTL was 900s = WEATHER_SEED_INTERVAL_MS (15 min). Same
TTL=interval race as spending: key can expire at the exact moment the
seeder fires, leaving a window where health.js sees EMPTY. Double TTL
to 1800s (30 min) to eliminate the race.
* fix(health): fix 5 remaining TTL < 2x interval races in ais-relay
Ensure every Redis key TTL is at least 2× its seeder interval so a single
slow/missed cycle never causes EMPTY/CRIT flapping:
- USNI: 7h → 12h (interval 6h, was 1.17x)
- THEATER_POSTURE: 15m → 20m (interval 10m, was 1.5x)
- CYBER: 3h → 4h (interval 2h, was 1.5x)
- CHOKEPOINT_TRANSIT: 15m → 20m (interval 10m, was 1.5x)
- TRANSIT_SUMMARY: 15m → 20m (interval 10m, was 1.5x)
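The invariant these TTL commits converge on can be stated as a one-line check. The 2x rule is from the commits; the function names are illustrative:

```javascript
// Every cached key's TTL should be at least twice its seeder interval,
// so a single slow or missed cycle never lets the key expire.
function minSafeTtlSeconds(seedIntervalSeconds) {
  return 2 * seedIntervalSeconds;
}

function checkTtl(name, ttlSeconds, intervalSeconds) {
  if (ttlSeconds < minSafeTtlSeconds(intervalSeconds)) {
    return `${name}: TTL ${ttlSeconds}s < 2x interval ${intervalSeconds}s (EMPTY/CRIT flap risk)`;
  }
  return null;
}
```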
* test: update transit-summaries TTL assertion to match new 2x minimum rule
* fix(posture): compute vessel counts server-side from AIS stream
trackedVessels was hardcoded to 0 in seedTheaterPosture(). The relay
has live AIS data in the vessels Map but never used it for posture.
Now counts military vessels (shipType 35/50-59, naval prefixes, MMSI
patterns) within each theater's bounds using the same identification
logic as isLikelyMilitaryCandidate(). Vessels seen in the last 6h
contribute to both the vessel count and the combined posture level.
This ensures the bootstrap theater posture data shows accurate vessel
presence regardless of whether the client has AIS toggled on.
* perf(posture): use candidateReports instead of iterating all vessels
candidateReports is already pre-filtered for military candidates on
AIS message arrival. No need to re-apply isLikelyMilitaryCandidate
on every vessel for every theater. Reduces ~90k function calls to a
simple bounds check on the candidate set.
* fix(posture): address vessel count issues in theater posture
1. Remove early exit on flights.length === 0 so vessel-only scenarios
still seed posture (P1 — Codex review comment)
2. Add isStrictMilitaryVessel() to filter candidateReports to shipType
35/55 and named naval vessels only — drops tugs, pilot boats, SAR
craft (shipType 50-59) that inflate counts in maritime theaters
3. Cap vessel contribution at floor(elevated_threshold / 2) to prevent
naval traffic from dominating flight-calibrated posture thresholds
4. Update seed-meta recordCount and log to include vessel counts
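Item 3's cap can be sketched as below; the floor(elevated_threshold / 2) ceiling is from the commit, while the function and parameter names are assumptions:

```javascript
// Cap how much naval traffic can contribute to posture, so vessel counts
// cannot dominate thresholds that were calibrated against flight activity.
function vesselContribution(vesselCount, elevatedThreshold) {
  return Math.min(vesselCount, Math.floor(elevatedThreshold / 2));
}
```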
* fix(health): increase gdeltIntel maxStaleMin from 120 to 180
Seeder runs every ~120min but threshold was exactly 120min, causing
STALE/CRIT flapping whenever there's any timing jitter. 180min gives
a 60min buffer to prevent oscillation.
* fix(health): increase gdeltIntel CACHE_TTL to 4h to match maxStaleMin:180
CACHE_TTL was 7200s (120min) while health.js maxStaleMin was raised to
180min. When a seed run is delayed past 120min the data key expires,
health evaluates EMPTY/CRIT before the stale check can ever fire, making
the 60min warning buffer unreachable. Setting TTL to 14400s (240min)
ensures the key outlives the stale threshold so STALE_SEED triggers
before EMPTY/CRIT on delayed runs.
* feat(forecast): cluster situations in world state
* feat(forecast): add report continuity history
* fix(forecast): stabilize report continuity matching
* feat(widgets): add Exa web search + fix widget API endpoints
- Replace Tavily with Exa as primary stock-news search provider
(Exa → Brave → SerpAPI → Google News RSS cascade)
- Add search_web tool to widget agent so AI can fetch live data
about any topic beyond the pre-defined RPC catalog
- Exa primary (type:auto + content snippets), Brave fallback
- Fix all widget tool endpoints: /rpc/... paths were hitting
Vercel catch-all and returning SPA HTML instead of JSON data
- Fix wm-widget-shell min-height causing fixed-size border that
clipped AI widget content
- Add HTML response guard in tool handler
- Update env key: TAVILY_API_KEYS → EXA_API_KEYS throughout
* fix(stock-news): use type 'neural' for Exa search (type 'news' is invalid)
* feat(forecast): add actor continuity to world state
* fix(forecast): report full actor continuity counts
* feat(forecast): add branch continuity to world state
* fix(railway): detect and fix wrong startCommand on seed services
All seed services use rootDirectory="scripts" in Railway, so the
correct startCommand is `node seed-<name>.mjs` (no scripts/ prefix).
A startCommand like `node scripts/seed-radiation-watch.mjs` causes
MODULE_NOT_FOUND at runtime because it resolves to
scripts/scripts/seed-radiation-watch.mjs which does not exist.
Extends railway-set-watch-paths.mjs to:
- Read startCommand alongside watchPatterns per service instance
- Validate it matches the expected `node <name>.mjs` form
- Fix it in the same serviceInstanceUpdate mutation if wrong
Run `node scripts/railway-set-watch-paths.mjs` to repair any affected
services (seed-radiation-watch, seed-climate-anomalies, seed-sanctions-pressure).
* fix(railway): filter serviceInstances by target environment
The query used serviceInstances(first: 1) without filtering by
environment, so edges[0] could return a different environment's
config than the one being updated. Now passes envId to match the
mutation target.
* feat(widgets): add PRO interactive widgets via iframe srcdoc
Introduces a PRO tier for AI-generated widgets that supports full JS
execution (Chart.js, sortable tables, animated counters) via sandboxed
iframes — no Docker, no build step required.
Key design decisions:
- Server returns <body> + inline <script> only; client builds the full
<!DOCTYPE html> skeleton with CSP guaranteed as the first <head> child
so the AI can never inject or bypass the security policy
- sandbox="allow-scripts" only — no allow-same-origin, no allow-forms
- PRO HTML stored in separate wm-pro-html-{id} localStorage key to
isolate 80KB quota pressure from the main widget metadata array
- Raw localStorage.setItem() for PRO writes with HTML-first write order
and metadata rollback on failure (bypasses saveToStorage which swallows
QuotaExceededError)
- Separate PRO_WIDGET_KEY env var + x-pro-key header gate on Railway
- Separate rate limit bucket (20/hr PRO vs 10/hr basic)
- Claude Sonnet 4.6 (8192 tokens, 10 turns, 120s) for PRO vs Haiku for
basic; health endpoint exposes proKeyConfigured for modal preflight
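The CSP-first skeleton assembly can be sketched as below: the server returns only body content, and the client owns the document shell. The exact policy string and shell markup here are assumptions; only the "CSP as first head child, sandbox=allow-scripts only" design is from the commit:

```javascript
// Build the iframe srcdoc so the CSP meta tag is guaranteed to be the first
// <head> child: nothing the AI emits can precede (and thus escape) the policy.
function buildProSrcdoc(aiBody) {
  const csp =
    `<meta http-equiv="Content-Security-Policy" ` +
    `content="default-src 'none'; script-src 'unsafe-inline'; style-src 'unsafe-inline'">`;
  return `<!DOCTYPE html><html><head>${csp}</head><body>${aiBody}</body></html>`;
}

// Usage sketch: no allow-same-origin, no allow-forms on the sandbox.
// iframe.setAttribute('sandbox', 'allow-scripts');
// iframe.srcdoc = buildProSrcdoc(serverResponse.body);
```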
* feat(pro): gate finance panels and widget buttons behind wm-pro-key
The PRO localStorage key now unlocks the three previously desktop-only
finance panels (stock-analysis, stock-backtest, daily-market-brief) on
the web variant, giving PRO users access without needing WORLDMONITOR_API_KEY.
Button visibility is now cleanly separated by key:
- wm-widget-key only → basic "Create with AI" button
- wm-pro-key only → PRO "Create Interactive" button only
- both keys → both buttons
- no key → neither button
Widget boot loader also accepts either key so PRO-only users see their
saved interactive widgets on page load.
* fix(widgets): inject Chart.js CDN into PRO iframe shell so new Chart() is defined
* feat(health): restore seed-meta tracking for riskScores, serviceStatuses, cableHealth, chokepoints
These 4 keys were reporting STALE_SEED / going untracked because their
warm-ping loops never wrote seed-meta. PR #1649 removed seed-meta from
cachedFetchJson but no replacement tracking was added, so health.js
lost visibility into their freshness.
Changes:
- ais-relay.cjs: seedCiiWarmPing() writes seed-meta:intelligence:risk-scores after success
- ais-relay.cjs: seedServiceStatuses() writes seed-meta:infra:service-statuses after success
- ais-relay.cjs: new startChokepointWarmPingLoop() — 30 min warm-ping for supply_chain:chokepoints:v4
- ais-relay.cjs: new startCableHealthWarmPingLoop() — 30 min warm-ping + seed-meta:cable-health write
- get-cable-health.ts: switch to cachedFetchJsonWithMeta, write seed-meta:cable-health on fresh fetch
- api/health.js: re-add SEED_META entries for serviceStatuses (30 min), cableHealth (60 min), riskScores (15 min)
- api/health.js: remove riskScores, serviceStatuses, cableHealth from ON_DEMAND_KEYS — they now have proper freshness tracking
* fix(health): only write seed-meta on genuinely fresh data (P1 review fixes)
Fixes two P1 issues from PR review:
1. seedCableHealthWarmPing() was writing seed-meta:cable-health after
any 200 response, defeating the source==='fresh' guard already present
in getCableHealth(). Removed the relay write — the handler owns it.
2. seedServiceStatuses() was writing seed-meta:infra:service-statuses
after any 200, but listServiceStatuses() can return in-memory fallback
statuses on upstream scrape failures with a 200. The relay write was
advancing fetchedAt even when stale fallback data was returned.
Fix: switch handler to cachedFetchJsonWithMeta and only write seed-meta
when source==='fresh' (i.e. upstream status pages were actually scraped).
Removed the relay write entirely.
* fix(health): only write risk-scores seed-meta when data is present
The CII warm-ping wrote seed-meta after any 200 response, but the RPC
can return cached/stale fallback data with 0 scores during upstream
outages. This masked staleness in health checks. Now only writes
seed-meta when count > 0 (meaningful data received).
* fix(seeds): extend WB key TTLs in relay drop guard
The ais-relay seedWorldBank() percentage-drop guard returned early
without extending TTLs. During persistent partial WB outages, this
would let all 6 keys (3 data + 3 seed-meta) expire after 7 days.
Now extends TTLs on all WB keys before returning, matching the
standalone seed-wb-indicators.mjs behavior.
* fix(seeds): check upstashExpire return values in WB drop guard
upstashExpire returns false on failure (never throws), so the prior
code always logged success. Now checks all 6 return values and logs
partial failure count if any EXPIRE calls fail.
* fix(ucdp): never overwrite existing data with empty results
The standalone UCDP seed was writing 0 events to Redis when the API
returned empty, overwriting the last known good data. Health then
reported EMPTY_DATA CRIT even though valid data existed before.
Now extends TTL on both the data key and seed-meta when 0 events
are produced, preserving the last good payload until the next
successful fetch.
* fix(ucdp): verify EXPIRE responses before logging success
Check HTTP status of both EXPIRE calls. Log warnings on failure
instead of always claiming TTL was extended.
Health semantics:
- Add faaDelays + gpsjam to EMPTY_DATA_OK_KEYS (0 records = calm, not error)
- Fix EMPTY_DATA_OK_KEYS branch to still check seed-meta freshness
(prevents stale empty caches from staying green indefinitely)
Seed guards:
- seed-airport-delays: fix meta key in fetch-failure path
(seed-meta:aviation:delays -> seed-meta:aviation:faa + seed-meta:aviation:notam)
- seed-military-flights: add full TTL extension on zero-flights branch
(was exiting without preserving any derived data TTLs)
- seed-wb-indicators: add percentage-drop guard (new count < 50% of cached
= likely partial API failure, extend TTL instead of overwriting)
- ais-relay.cjs: same percentage-drop guard for WB dual writer
Codex-reviewed plan (5 rounds, approved).
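The percentage-drop guard above can be sketched in one predicate. The 50% threshold is from the commit; the function name and write-skip framing are illustrative:

```javascript
// If the fresh count collapses to under half of the cached count, treat it as
// a likely partial upstream failure: extend TTL instead of overwriting.
function shouldSkipWrite(newCount, cachedCount) {
  if (cachedCount === 0) return false; // nothing cached to protect
  return newCount < cachedCount * 0.5;
}
```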
- Strip enrichmentMeta from bootstrap.js forecasts payload (seed-internal, not for clients)
- Rename quietDomainBonus -> priorityDomainBonus (it applies to priority domains, not quiet ones)
- Extract cyber score formula magic numbers into named constants (CYBER_SCORE_TYPE_MULTIPLIER, etc.)
- Pre-compute analysisPriority in rankForecastsForAnalysis to avoid double-call per comparison
- Log when filterPublishedForecasts weak-fallback gate suppresses forecasts
- Log how many fallback narratives populateFallbackNarratives applies
- Add // penalties comment header in computeAnalysisPriority
* feat(thermal): add thermal escalation seeded service
Cherry-picked from codex/thermal-escalation-phase1 and retargeted
to main. Includes thermal escalation seed script, RPC handler,
proto definitions, bootstrap/health/seed-health wiring, gateway
cache tier, client service, and tests.
* fix(thermal): wire data-loader, fix typing, recalculate summary
Wire fetchThermalEscalations into data-loader.ts with panel forwarding,
freshness tracking, and variant gating. Fix seed-health intervalMin from
90 to 180 to match 3h TTL. Replace 8 as-any casts with typed interface.
Recalculate summary counts after maxItems slice.
* fix(thermal): enforce maxItems on hydrated data + fix bootstrap keys
Codex P2: hydration branch now slices clusters to maxItems before
mapping, matching the RPC fallback behavior.
Also add thermalEscalation to bootstrap.js BOOTSTRAP_CACHE_KEYS and
SLOW_KEYS (was lost during conflict resolution).
* fix(thermal): recalculate summary on sliced hydrated clusters
When maxItems truncates the cluster array from bootstrap hydration,
the summary was still using the original full-set counts. Now
recalculates clusterCount, elevatedCount, spikeCount, etc. on the
sliced array, matching the handler's behavior.