mirror of
https://github.com/koala73/worldmonitor.git
synced 2026-04-25 17:14:57 +02:00
* feat(seed-contract): PR 2a — runSeed envelope dual-write + 91 seeders migrated
Opt-in contract path in runSeed: when opts.declareRecords is provided, write the
{_seed, data} envelope to the canonical key alongside the legacy seed-meta:*
record (dual-write). State machine: OK / OK_ZERO / RETRY with a zeroIsValid opt.
declareRecords throwing or returning a non-integer → hard fail (contract violation).
extraKeys[*] supports per-key declareRecords; each extra key writes its own
envelope. Legacy seeders (no declareRecords) are entirely unchanged.
Migrated all 91 scripts/seed-*.mjs to contract mode. Each exports
declareRecords returning the canonical record count, and passes
schemaVersion: 1 + maxStaleMin (matched to api/health.js SEED_META, or 2.5x
interval where no registry entry exists). Contract conformance reports 84/86
seeders with full descriptor (2 pre-existing warnings).
Legacy seed-meta keys still written so unmigrated readers keep working;
follow-up slices flip health.js + readers to envelope-first.
Tests: 61/61 PR 1 tests still pass.
Next slices for PR 2:
- api/health.js registry collapse + 15 seed-bundle-*.mjs canonicalKey wiring
- reader migration (mcp, resilience, aviation, displacement, regional-snapshot)
- direct writers — ais-relay.cjs, consumer-prices-core publish.ts
- public-boundary stripSeedEnvelope + test migration
Plan: docs/plans/2026-04-14-002-fix-runseed-zero-record-lockout-plan.md
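The state machine and envelope described above can be sketched roughly like this (a hedged sketch, not the repo's implementation — field names beyond `_seed`/`data`, `state`, and `recordCount` are assumptions):

```javascript
// Hypothetical sketch of the runSeed contract path.
// resolveState: declareRecords must return an integer; zero is only OK when
// the seeder opts in via zeroIsValid — otherwise zero means RETRY.
function resolveState(declared, { zeroIsValid = false } = {}) {
  if (!Number.isInteger(declared)) {
    // Contract violation: throwing or returning a non-integer is a hard fail.
    throw new Error(`contract violation: declareRecords returned ${declared}`);
  }
  if (declared > 0) return 'OK';
  return zeroIsValid ? 'OK_ZERO' : 'RETRY';
}

// buildEnvelope: the {_seed, data} shape written to the canonical key.
function buildEnvelope(data, declared, opts = {}) {
  return {
    _seed: {
      state: resolveState(declared, opts),
      recordCount: declared,
      schemaVersion: opts.schemaVersion ?? 1,
      fetchedAt: new Date().toISOString(),
    },
    data,
  };
}
```

A RETRY outcome would leave the previously published value in place rather than dual-write a zero-record snapshot.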
* fix(seed-contract): unwrap envelopes in internal cross-seed readers
After PR 2a enveloped 91 canonical keys as {_seed, data}, every script-side
reader that returned the raw parsed JSON started silently handing callers the
envelope instead of the bare payload. WoW baselines (bigmac, grocery-basket,
fear-greed) saw undefined .countries / .composite; seed-climate-anomalies saw
undefined .normals from climate:zone-normals:v1; seed-thermal-escalation saw
undefined .fireDetections from wildfire:fires:v1; seed-forecasts' ~40-key
pipeline batch returned envelopes for every input.
Fix: route every script-side reader through unwrapEnvelope(...).data. Legacy
bare-shape values pass through unchanged (unwrapEnvelope returns
{_seed: null, data: raw} for any non-envelope shape).
Changed:
- scripts/_seed-utils.mjs: import unwrapEnvelope; redisGet, readSeedSnapshot,
verifySeedKey all unwrap. Exported new readCanonicalValue() helper for
cross-seed consumers.
- 18 seed-*.mjs scripts with local redisGet-style helpers or inline fetch
patched to unwrap via the envelope source module (subagent sweep).
- scripts/seed-forecasts.mjs pipeline batch: parse() unwraps each result.
- scripts/seed-energy-spine.mjs redisMget: unwraps each result.
Tests:
- tests/seed-utils-envelope-reads.test.mjs: 7 new cases covering envelope
+ legacy + null paths for readSeedSnapshot and verifySeedKey.
- Full seed suite: 67/67 pass (was 61, +6 new).
Addresses both of the user's P1 findings on PR #3097.
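The unwrap contract this relies on amounts to the following (a sketch; the real helper lives in the repo's envelope source module):

```javascript
// Sketch of unwrapEnvelope: envelopes yield their payload, while anything
// else is treated as a legacy bare-shape value and passed through untouched.
function unwrapEnvelope(value) {
  const isEnvelope =
    value !== null &&
    typeof value === 'object' &&
    !Array.isArray(value) &&
    '_seed' in value &&
    'data' in value;
  if (isEnvelope) return { _seed: value._seed, data: value.data };
  return { _seed: null, data: value };
}
```

Routing every reader through `unwrapEnvelope(...).data` is what makes the migration non-breaking for keys that are still bare.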
* feat(seed-contract): envelope-aware reads in server + api helpers
Every RPC and public-boundary reader now automatically strips _seed from
contract-mode canonical keys. Legacy bare-shape values pass through unchanged
(unwrapEnvelope no-ops on non-envelope shapes).
Changed helpers (one-place fix — unblocks ~60 call sites):
- server/_shared/redis.ts: getRawJson, getCachedJson, getCachedJsonBatch
unwrap by default. cachedFetchJson inherits via getCachedJson.
- api/_upstash-json.js: readJsonFromUpstash unwraps (covers api/mcp.ts
tool responses + all its canonical-key reads).
- api/bootstrap.js: getCachedJsonBatch unwraps (public-boundary —
clients never see envelope metadata).
Left intentionally unchanged:
- api/health.js / api/seed-health.js: read only seed-meta:* keys which
remain bare-shape during dual-write. unwrapEnvelope already imported at
the meta-read boundary (PR 1) as a defensive no-op.
Tests: 67/67 seed tests pass. typecheck + typecheck:api clean.
This is the blast-radius fix the PR #3097 review called out — external
readers that would otherwise see {_seed, data} after the writer side
migrated.
* fix(test): strip export keyword in vm.runInContext'd seed source
cross-source-signals-regulatory.test.mjs loads scripts/seed-cross-source-signals.mjs
via vm.runInContext, which cannot parse ESM `export` syntax. PR 2a added
`export function declareRecords` to every seeder, which broke this test's
static-analysis approach.
Fix: strip the `export` keyword from the declareRecords line in the
preprocessed source string so the function body still evaluates as a plain
declaration.
Full test:data suite: 5307/5307 pass. typecheck + typecheck:api clean.
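A minimal version of that strip looks something like this (the exact regex the test uses is an assumption):

```javascript
// Remove the ESM `export` keyword from the declareRecords declaration so the
// preprocessed source parses as a plain script inside vm.runInContext, while
// leaving every other export in the file untouched.
function stripDeclareRecordsExport(source) {
  return source.replace(/^export\s+(?=(?:async\s+)?function\s+declareRecords\b)/m, '');
}
```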
* feat(seed-contract): consumer-prices publish.ts writes envelopes
Wrap the 5 canonical keys written by consumer-prices-core/src/jobs/publish.ts
(overview, movers:7d/30d, freshness, categories:7d/30d/90d, retailer-spread,
basket-series) in {_seed, data} envelopes. Legacy seed-meta:<key> writes
preserved for dual-write.
Inlined a buildEnvelope helper (10 lines) rather than taking a cross-package
dependency — consumer-prices-core is a standalone npm package. Documented the
four-file parity contract (mjs source, ts mirror, js edge mirror, this copy).
Contract fields: sourceVersion='consumer-prices-core-publish-v1', schemaVersion=1,
state='OK' (recordCount>0) or 'OK_ZERO' (legitimate zero).
Typecheck: no new errors in publish.ts.
* fix(seed-contract): 3 more server-side readers unwrap envelopes
Found during final audit:
- server/worldmonitor/resilience/v1/_shared.ts: resilience score reader
parsed cached GetResilienceScoreResponse raw. Contract-mode seed-resilience-scores
now envelopes those keys.
- server/worldmonitor/resilience/v1/get-resilience-ranking.ts: p05/p95
interval lookup parsed raw from seed-resilience-scores' extra-key path.
- server/worldmonitor/infrastructure/v1/_shared.ts: mgetJson() used for
count-source keys (wildfire:fires:v1, news:insights:v1) which are both
contract-mode now.
All three now unwrap via server/_shared/seed-envelope. Legacy shapes pass
through unchanged.
Typecheck clean.
* feat(seed-contract): ais-relay.cjs direct writes produce envelopes
32 canonical-key write sites in scripts/ais-relay.cjs now produce {_seed, data}
envelopes. Inlined buildEnvelope() (CJS module can't require ESM source) +
envelopeWrite(key, data, ttlSeconds, meta) wrapper. Enveloped keys span market
bootstrap, aviation, cyber-threats, theater-posture, weather-alerts, economic
spending/fred/worldbank, tech-events, corridor-risk, usni-fleet, shipping-stress,
social:reddit, wsb-tickers, pizzint, product-catalog, chokepoint transits,
ucdp-events, satellites, oref.
Left bare (not seeded data keys): seed-meta:* (dual-write legacy),
classifyCacheKey LLM cache, notam:prev-closed-state internal state,
wm:notif:scan-dedup flags.
Updated tests/ucdp-seed-resilience.test.mjs regex to accept both upstashSet
(pre-contract) and envelopeWrite (post-contract) call patterns.
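The wrapper pattern reads roughly like this (a sketch — the raw setter's `(key, json, ttlSeconds)` signature and the simplified state rule are assumptions):

```javascript
// CJS-style envelopeWrite: wrap data in {_seed, data} before handing it to
// the existing raw string setter, so every write site changes one call name
// instead of repeating the envelope construction 32 times.
function makeEnvelopeWrite(set) {
  return function envelopeWrite(key, data, ttlSeconds, meta = {}) {
    const envelope = {
      _seed: {
        state: meta.recordCount > 0 ? 'OK' : 'OK_ZERO', // simplified state rule
        recordCount: meta.recordCount ?? 0,
        fetchedAt: new Date().toISOString(),
      },
      data,
    };
    return set(key, JSON.stringify(envelope), ttlSeconds);
  };
}
```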
* feat(seed-contract): 15 bundle files add canonicalKey for envelope gate
54 bundle sections across 12 files now declare canonicalKey alongside the
existing seedMetaKey. _bundle-runner.mjs (from PR 1) prefers canonicalKey
when both are present — gates section runs on envelope._seed.fetchedAt
read directly from the data key, eliminating the meta-outlives-data class
of bugs.
Files touched:
- climate (5), derived-signals (2), ecb-eu (3), energy-sources (6),
health (2), imf-extended (4), macro (10), market-backup (9),
portwatch (4), relay-backup (2), resilience-recovery (5), static-ref (2)
Skipped (14 sections, 3 whole bundles): multi-key writers, dynamic
templated keys (displacement year-scoped), or non-runSeed orchestrators
(regional brief cron, resilience-scores' 222-country publish, validation/
benchmark scripts). These continue to use seedMetaKey or their own gate.
seedMetaKey preserved everywhere — dual-write. _bundle-runner.mjs falls
back to legacy when canonicalKey is absent.
All 15 bundles pass node --check. test:data: 5307/5307. typecheck:all: clean.
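The gate preference can be sketched as follows (assumed shapes; the real _bundle-runner reads Redis asynchronously):

```javascript
// Prefer the envelope's own fetchedAt on the canonical data key; fall back to
// the legacy seed-meta record when a section declares no canonicalKey.
// `values` stands in for already-fetched Redis JSON, keyed by Redis key.
function sectionGateTimestamp(section, values) {
  if (section.canonicalKey) {
    const fetchedAt = values[section.canonicalKey]?._seed?.fetchedAt;
    if (fetchedAt) return Date.parse(fetchedAt);
  }
  const meta = values[section.seedMetaKey];
  return meta?.fetchedAt ? Date.parse(meta.fetchedAt) : 0;
}
```

Gating on the data key itself is what closes the meta-outlives-data gap: a meta record can no longer vouch for a canonical key that has already expired.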
* fix(seed-contract): 4 PR #3097 review P1s — transform/declareRecords mismatches + envelope leaks
Addresses both P1 findings and the extra-key seed-meta leak surfaced in review:
1. runSeed helper-level invariant: seed-meta:* keys NEVER envelope.
scripts/_seed-utils.mjs exports shouldEnvelopeKey(key) — returns false for
any key starting with 'seed-meta:'. Both atomicPublish (canonical) and
writeExtraKey (extras) gate the envelope wrap through this helper. Fixes
seed-iea-oil-stocks' ANALYSIS_META_EXTRA_KEY silently getting enveloped,
which broke health.js parsing the value as bare {fetchedAt, recordCount}.
Also defends against any future manual writeExtraKey(..., envelopeMeta)
call that happens to target a seed-meta:* key.
2. seed-token-panels canonical + extras fixed.
publishTransform returns data.defi (the defi panel itself, shape {tokens}).
Old declareRecords counted data.defi.tokens + data.ai.tokens + data.other.tokens
on the transformed payload → 0 → RETRY path → canonical market:defi-tokens:v1
never wrote, and because runSeed returned before the extraKeys loop,
market:ai-tokens:v1 + market:other-tokens:v1 stayed stale too.
New: declareRecords counts data.tokens on the transformed shape. AI_KEY +
OTHER_KEY extras reuse the same function (transforms return structurally
identical panels). Added isMain guard so test imports don't fire runSeed.
3. api/product-catalog.js cached reader unwraps envelope.
ais-relay.cjs now envelopes product-catalog:v2 via envelopeWrite(). The
edge reader did raw JSON.parse(result) and returned {_seed, data} to
clients, breaking the cached path. Fix: import unwrapEnvelope from
./_seed-envelope.js, apply after JSON.parse. One site — :238-241 is
downstream of getFromCache(), so the single reader fix covers both.
4. Regression lock tests/seed-contract-transform-regressions.test.mjs (11 cases):
- shouldEnvelopeKey invariant: seed-meta:* false, canonical true
- Token-panels declareRecords works on transformed shape (canonical + both extras)
- Explicit repro of pre-fix buggy signature returning 0 — guards against revert
- resolveRecordCount accepts 0, rejects non-integer
- Product-catalog envelope unwrap returns bare shape; legacy passes through
Verification:
- npm run test:data → 5318/5318 pass (was 5307 — 11 new regressions)
- npm run typecheck:all → clean
- node --check on every modified script
iea-oil-stocks canonical declareRecords was NOT broken (user confirmed during
review — buildIndex preserves .members); only its ANALYSIS_META_EXTRA_KEY
was affected, now covered generically by commit 1's helper invariant.
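The helper-level invariant from item 1 is small enough to sketch in full (behavior as the commit describes it):

```javascript
// seed-meta:* keys must stay bare during dual-write; every other key
// (canonical or extra) gets the {_seed, data} envelope wrap.
function shouldEnvelopeKey(key) {
  return typeof key === 'string' && !key.startsWith('seed-meta:');
}
```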
* fix(seed-contract): seed-token-panels validateFn also runs on post-transform shape
Review finding: fixing declareRecords wasn't sufficient — atomicPublish() runs
validateFn(publishData) on the transformed payload too. seed-token-panels'
validate() checked data.defi/.ai/.other on the transformed {tokens} shape,
returned false, and runSeed took the early skipped-write branch (before even
reaching the declareRecords RETRY logic). Net effect: same as before the
declareRecords fix — canonical + both extras stayed stale.
Fix: validate() now checks the canonical defi panel directly (Array.isArray(data?.tokens)
&& has at least one t.price > 0). AI/OTHER panels are validated implicitly by
their own extraKey declareRecords on write.
Audited the other 9 seeders with publishTransform (bls-series, bis-extended,
bis-data, gdelt-intel, trade-flows, iea-oil-stocks, jodi-gas, sanctions-pressure,
forecasts): all validateFn's correctly target the post-transform shape. Only
token-panels regressed.
Added 4 regression tests (tests/seed-contract-transform-regressions.test.mjs):
- validate accepts transformed panel with priced tokens
- validate rejects all-zero-price tokens
- validate rejects empty/missing tokens
- Explicit pre-fix repro (buggy old signature fails on transformed shape)
Verification:
- npm run test:data → 5322/5322 pass (was 5318; +4 new)
- npm run typecheck:all → clean
- node --check clean
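The corrected validate() amounts to this (a sketch built from the commit's description; token field names as stated):

```javascript
// Post-transform shape is the defi panel itself: { tokens: [...] }. The panel
// is valid only when at least one token carries a positive price.
function validateTokenPanel(data) {
  return Array.isArray(data?.tokens) && data.tokens.some((t) => Number(t?.price) > 0);
}
```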
* feat(seed-contract): add /api/seed-contract-probe validation endpoint
Single machine-readable gate for 'is PR #3097 working in production'.
Replaces the curl/jq ritual with one authenticated edge call that returns
HTTP 200 ok:true or 503 + failing check list.
What it validates:
- 8 canonical keys have {_seed, data} envelopes with required data fields
and minRecords floors (fsi-eu, zone-normals, 3 token panels + minRecords
guard against token-panels RETRY regression, product-catalog, wildfire,
earthquakes).
- 2 seed-meta:* keys remain BARE (shouldEnvelopeKey invariant; guards
against iea-oil-stocks ANALYSIS_META_EXTRA_KEY-class regressions).
- /api/product-catalog + /api/bootstrap responses contain no '_seed' leak.
Auth: x-probe-secret header must match RELAY_SHARED_SECRET (reuses existing
Vercel↔Railway internal trust boundary).
Probe logic is exported (checkProbe, checkPublicBoundary, DEFAULT_PROBES) for
hermetic testing. tests/seed-contract-probe.test.mjs covers every branch:
envelope pass/fail on field/records/shape, bare pass/fail on shape/field,
missing/malformed JSON, Redis non-2xx, boundary seed-leak detection,
DEFAULT_PROBES sanity (seed-meta invariant present, token-panels minRecords
guard present).
Usage:
curl -H "x-probe-secret: $RELAY_SHARED_SECRET" \
https://api.worldmonitor.app/api/seed-contract-probe
PR 3 will extend the probe with a stricter mode that asserts seed-meta:*
keys are GONE (not just bare) once legacy dual-write is removed.
Verification:
- tests/seed-contract-probe.test.mjs → 15/15 pass
- npm run test:data → 5338/5338 (was 5322; +16 new incl. conformance)
- npm run typecheck:all → clean
* fix(seed-contract): tighten probe — minRecords on AI/OTHER + cache-path source header
Review P2 findings: the probe's stated guards were weaker than advertised.
1. market:ai-tokens:v1 + market:other-tokens:v1 probes claimed to guard the
token-panels extra-key RETRY regression but only checked shape='envelope'
+ dataHas:['tokens']. If an extra-key declareRecords regressed to 0, both
probes would still pass because checkProbe() only inspects _seed.recordCount
when minRecords is set. Now both enforce minRecords: 1.
2. /api/product-catalog boundary check only asserted no '_seed' leak — which
is also true for the static fallback path. A broken cached reader
(getFromCache returning null or throwing) could serve fallback silently
and still pass this probe. Now:
- api/product-catalog.js emits X-Product-Catalog-Source: cache|dodo|fallback
on the response (the json() helper gained an optional source param wired
to each of the three branches).
- checkPublicBoundary declaratively requires that header's value match
'cache' for /api/product-catalog, so a fallback-serve fails the probe
with reason 'source:fallback!=cache' or 'source:missing!=cache'.
Test updates (tests/seed-contract-probe.test.mjs):
- Boundary check reworked to use a BOUNDARY_CHECKS config with optional
requireSourceHeader per endpoint.
- New cases: served-from-cache passes, served-from-fallback fails with source
mismatch, missing header fails, seed-leak still takes precedence, bad
status fails.
- Token-panels sanity test now asserts minRecords≥1 on all 3 panels.
Verification:
- tests/seed-contract-probe.test.mjs → 17/17 pass (was 15, +2 net)
- npm run test:data → 5340/5340
- npm run typecheck:all → clean
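Put together, the per-key check the two probe commits describe behaves roughly like this (a sketch — the probe's real config format is an assumption):

```javascript
// One probe: either an envelope with required data fields and an optional
// recordCount floor, or a bare value that must NOT be enveloped.
function checkProbe(probe, value) {
  const failures = [];
  const isEnvelope = !!value && typeof value === 'object' && '_seed' in value && 'data' in value;
  if (probe.shape === 'envelope') {
    if (!isEnvelope) {
      failures.push(`${probe.key}: expected envelope`);
    } else {
      for (const field of probe.dataHas || []) {
        if (!value.data || !(field in value.data)) failures.push(`${probe.key}: data missing ${field}`);
      }
      // minRecords is only enforced when set — exactly why the AI/OTHER token
      // panels needed minRecords: 1 added to actually guard the RETRY regression.
      if (probe.minRecords != null && !(value._seed?.recordCount >= probe.minRecords)) {
        failures.push(`${probe.key}: recordCount < ${probe.minRecords}`);
      }
    }
  } else if (isEnvelope) {
    failures.push(`${probe.key}: seed-meta key unexpectedly enveloped`);
  }
  return failures;
}
```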
393 lines
16 KiB
JavaScript
#!/usr/bin/env node
// @ts-check
/**
 * Hyperliquid perp positioning flow seeder.
 *
 * Polls the public Hyperliquid /info endpoint every 5 minutes, computes a
 * 4-component composite "positioning stress" score (funding / volume / OI /
 * basis) per asset, and publishes a self-contained snapshot — current metrics
 * plus short per-asset sparkline arrays for funding, OI and score.
 *
 * Used as a leading indicator for commodities / crypto / FX in CommoditiesPanel.
 */

import { loadEnvFile, runSeed, readSeedSnapshot } from './_seed-utils.mjs';

loadEnvFile(import.meta.url);

export const CANONICAL_KEY = 'market:hyperliquid:flow:v1';
export const CACHE_TTL_SECONDS = 2700; // 9× cron cadence (5 min); honest grace window
export const SPARK_MAX = 60; // 5h @ 5min
export const HYPERLIQUID_URL = 'https://api.hyperliquid.xyz/info';
export const REQUEST_TIMEOUT_MS = 15_000;
export const MIN_NOTIONAL_USD_24H = 500_000;
export const STALE_SYMBOL_DROP_AFTER_POLLS = 3;
export const VOLUME_BASELINE_MIN_SAMPLES = 12; // 1h @ 5min cadence — minimum history to score volume spike
export const MAX_UPSTREAM_UNIVERSE = 2000; // defensive cap; Hyperliquid has ~200 perps today

// Hardcoded symbol whitelist — never iterate the full universe.
// `class`: scoring threshold class. `display`: UI label. `group`: panel section.
export const ASSETS = [
  { symbol: 'BTC', class: 'crypto', display: 'BTC', group: 'crypto' },
  { symbol: 'ETH', class: 'crypto', display: 'ETH', group: 'crypto' },
  { symbol: 'SOL', class: 'crypto', display: 'SOL', group: 'crypto' },
  { symbol: 'PAXG', class: 'commodity', display: 'PAXG (gold)', group: 'metals' },
  { symbol: 'xyz:CL', class: 'commodity', display: 'WTI Crude', group: 'oil' },
  { symbol: 'xyz:BRENTOIL', class: 'commodity', display: 'Brent Crude', group: 'oil' },
  { symbol: 'xyz:GOLD', class: 'commodity', display: 'Gold', group: 'metals' },
  { symbol: 'xyz:SILVER', class: 'commodity', display: 'Silver', group: 'metals' },
  { symbol: 'xyz:PLATINUM', class: 'commodity', display: 'Platinum', group: 'metals' },
  { symbol: 'xyz:PALLADIUM', class: 'commodity', display: 'Palladium', group: 'metals' },
  { symbol: 'xyz:COPPER', class: 'commodity', display: 'Copper', group: 'industrial' },
  { symbol: 'xyz:NATGAS', class: 'commodity', display: 'Natural Gas', group: 'gas' },
  { symbol: 'xyz:EUR', class: 'commodity', display: 'EUR', group: 'fx' },
  { symbol: 'xyz:JPY', class: 'commodity', display: 'JPY', group: 'fx' },
];

// Risk weights — must sum to 1.0
export const WEIGHTS = { funding: 0.30, volume: 0.25, oi: 0.25, basis: 0.20 };

export const THRESHOLDS = {
  crypto: { funding: 0.001, volume: 5.0, oi: 0.20, basis: 0.05 },
  commodity: { funding: 0.0005, volume: 3.0, oi: 0.15, basis: 0.03 },
};

export const ALERT_THRESHOLD = 60;

// ── Pure scoring helpers ──────────────────────────────────────────────────────

export function clamp(x, lo = 0, hi = 100) {
  if (!Number.isFinite(x)) return 0;
  return Math.max(lo, Math.min(hi, x));
}

export function scoreFunding(rate, threshold) {
  if (!Number.isFinite(rate) || threshold <= 0) return 0;
  return clamp((Math.abs(rate) / threshold) * 100);
}

export function scoreVolume(currentVol, avgVol, threshold) {
  if (!Number.isFinite(currentVol) || !(avgVol > 0) || threshold <= 0) return 0;
  return clamp(((currentVol / avgVol) / threshold) * 100);
}

export function scoreOi(currentOi, prevOi, threshold) {
  if (!Number.isFinite(currentOi) || !(prevOi > 0) || threshold <= 0) return 0;
  return clamp((Math.abs(currentOi - prevOi) / prevOi / threshold) * 100);
}

export function scoreBasis(mark, oracle, threshold) {
  if (!Number.isFinite(mark) || !(oracle > 0) || threshold <= 0) return 0;
  return clamp((Math.abs(mark - oracle) / oracle / threshold) * 100);
}

/**
 * Compute composite score and alerts for one asset.
 *
 * `prevAsset` may be null/undefined for cold start; in that case OI delta and
 * volume spike are scored as 0 (we lack baselines).
 *
 * Per-asset `warmup` is TRUE until the volume baseline has VOLUME_BASELINE_MIN_SAMPLES
 * and there is a prior OI to compute delta against — NOT just on the first poll after
 * cold start. Without this, the "warming up" badge flips to false on poll 2 while the
 * score is still missing most of its baseline.
 *
 * @param {{ symbol: string; display: string; class: 'crypto'|'commodity'; group: string }} meta
 * @param {Record<string, string>} ctx
 * @param {any} prevAsset
 * @param {{ coldStart?: boolean }} [opts]
 */
export function computeAsset(meta, ctx, prevAsset, opts = {}) {
  const t = THRESHOLDS[meta.class];
  const fundingRate = Number(ctx.funding);
  const currentOi = Number(ctx.openInterest);
  const markPx = Number(ctx.markPx);
  const oraclePx = Number(ctx.oraclePx);
  const dayNotional = Number(ctx.dayNtlVlm);
  const prevOi = prevAsset?.openInterest ?? null;
  const prevVolSamples = /** @type {number[]} */ ((prevAsset?.sparkVol || []).filter(
    /** @param {unknown} v */ (v) => Number.isFinite(v)
  ));

  const fundingScore = scoreFunding(fundingRate, t.funding);

  // Volume spike scored against the MOST RECENT 12 samples in sparkVol.
  // sparkVol is newest-at-tail (see shiftAndAppend), so we must slice(-N) — NOT
  // slice(0, N), which would anchor the baseline to the oldest window and never
  // update after the first hour.
  let volumeScore = 0;
  const volumeBaselineReady = prevVolSamples.length >= VOLUME_BASELINE_MIN_SAMPLES;
  if (dayNotional >= MIN_NOTIONAL_USD_24H && volumeBaselineReady) {
    const recent = prevVolSamples.slice(-VOLUME_BASELINE_MIN_SAMPLES);
    const avg = recent.reduce((a, b) => a + b, 0) / recent.length;
    volumeScore = scoreVolume(dayNotional, avg, t.volume);
  }

  const oiScore = prevOi != null ? scoreOi(currentOi, prevOi, t.oi) : 0;
  const basisScore = scoreBasis(markPx, oraclePx, t.basis);

  const composite = clamp(
    fundingScore * WEIGHTS.funding +
    volumeScore * WEIGHTS.volume +
    oiScore * WEIGHTS.oi +
    basisScore * WEIGHTS.basis,
  );

  const sparkFunding = shiftAndAppend(prevAsset?.sparkFunding, Number.isFinite(fundingRate) ? fundingRate : 0);
  const sparkOi = shiftAndAppend(prevAsset?.sparkOi, Number.isFinite(currentOi) ? currentOi : 0);
  const sparkScore = shiftAndAppend(prevAsset?.sparkScore, composite);
  const sparkVol = shiftAndAppend(prevAsset?.sparkVol, Number.isFinite(dayNotional) ? dayNotional : 0);

  // Warmup stays TRUE until both baselines are usable — cold-start OR insufficient
  // volume history OR missing prior OI. Clears only when the asset can produce all
  // four component scores.
  const warmup = opts.coldStart === true || !volumeBaselineReady || prevOi == null;

  const alerts = [];
  if (composite >= ALERT_THRESHOLD) {
    alerts.push(`HIGH RISK ${composite.toFixed(0)}/100`);
  }

  return {
    symbol: meta.symbol,
    display: meta.display,
    class: meta.class,
    group: meta.group,
    funding: Number.isFinite(fundingRate) ? fundingRate : null,
    openInterest: Number.isFinite(currentOi) ? currentOi : null,
    markPx: Number.isFinite(markPx) ? markPx : null,
    oraclePx: Number.isFinite(oraclePx) ? oraclePx : null,
    dayNotional: Number.isFinite(dayNotional) ? dayNotional : null,
    fundingScore,
    volumeScore,
    oiScore,
    basisScore,
    composite,
    sparkFunding,
    sparkOi,
    sparkScore,
    sparkVol,
    stale: false,
    staleSince: null,
    missingPolls: 0,
    alerts,
    warmup,
  };
}

function shiftAndAppend(prev, value) {
  const arr = Array.isArray(prev) ? prev.slice(-(SPARK_MAX - 1)) : [];
  arr.push(value);
  return arr;
}

// ── Hyperliquid client ────────────────────────────────────────────────────────

// Minimum universe size expected per dex. Default perps have ~200; xyz builder
// dex has ~60. Each threshold sits well below the observed size so we still
// reject genuinely broken payloads without false-positives on a thinner dex.
const MIN_UNIVERSE_DEFAULT = 50;
const MIN_UNIVERSE_XYZ = 30;

/**
 * POST /info {type:'metaAndAssetCtxs', [dex]}. Returns raw [meta, assetCtxs].
 * @param {string|undefined} dex
 * @param {typeof fetch} [fetchImpl]
 */
export async function fetchHyperliquidMetaAndCtxs(dex = undefined, fetchImpl = fetch) {
  const body = dex ? { type: 'metaAndAssetCtxs', dex } : { type: 'metaAndAssetCtxs' };
  const resp = await fetchImpl(HYPERLIQUID_URL, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Accept: 'application/json',
      'User-Agent': 'WorldMonitor/1.0 (+https://worldmonitor.app)',
    },
    body: JSON.stringify(body),
    signal: AbortSignal.timeout(REQUEST_TIMEOUT_MS),
  });
  if (!resp.ok) throw new Error(`Hyperliquid HTTP ${resp.status}${dex ? ` (dex=${dex})` : ''}`);
  const ct = resp.headers?.get?.('content-type') || '';
  if (!ct.toLowerCase().includes('application/json')) {
    throw new Error(`Hyperliquid wrong content-type: ${ct || '<missing>'}${dex ? ` (dex=${dex})` : ''}`);
  }
  return resp.json();
}

/**
 * Fetch both the default perp dex (BTC/ETH/SOL/PAXG...) and the xyz builder
 * dex (commodities + FX perps) in parallel, validate each payload, and merge
 * into a single `{universe, assetCtxs}`.
 *
 * xyz: asset names already carry the `xyz:` prefix in their universe entries,
 * so no rewriting is needed — just concatenate.
 */
export async function fetchAllMetaAndCtxs(fetchImpl = fetch) {
  const [defaultRaw, xyzRaw] = await Promise.all([
    fetchHyperliquidMetaAndCtxs(undefined, fetchImpl),
    fetchHyperliquidMetaAndCtxs('xyz', fetchImpl),
  ]);
  const def = validateDexPayload(defaultRaw, 'default', MIN_UNIVERSE_DEFAULT);
  const xyz = validateDexPayload(xyzRaw, 'xyz', MIN_UNIVERSE_XYZ);
  return {
    universe: [...def.universe, ...xyz.universe],
    assetCtxs: [...def.assetCtxs, ...xyz.assetCtxs],
  };
}

/**
 * Strict shape validation for ONE dex payload. Accepts a raw `[meta, assetCtxs]`
 * tuple where
 *   meta = { universe: [{ name, ... }, ...] }
 *   assetCtxs = [{ funding, openInterest, markPx, oraclePx, dayNtlVlm, ... }, ...]
 * with assetCtxs[i] aligned to universe[i], and returns `{ universe, assetCtxs }`.
 *
 * Throws on any mismatch — never persist a partial / malformed payload.
 *
 * @param {unknown} raw
 * @param {string} dexLabel
 * @param {number} minUniverse
 */
export function validateDexPayload(raw, dexLabel, minUniverse) {
  if (!Array.isArray(raw) || raw.length < 2) {
    throw new Error(`Hyperliquid ${dexLabel} payload not a [meta, assetCtxs] tuple`);
  }
  const [meta, assetCtxs] = raw;
  if (!meta || !Array.isArray(meta.universe)) {
    throw new Error(`Hyperliquid ${dexLabel} meta.universe missing or not array`);
  }
  if (meta.universe.length < minUniverse) {
    throw new Error(`Hyperliquid ${dexLabel} universe suspiciously small: ${meta.universe.length} < ${minUniverse}`);
  }
  if (meta.universe.length > MAX_UPSTREAM_UNIVERSE) {
    throw new Error(`Hyperliquid ${dexLabel} universe over cap: ${meta.universe.length} > ${MAX_UPSTREAM_UNIVERSE}`);
  }
  if (!Array.isArray(assetCtxs) || assetCtxs.length !== meta.universe.length) {
    throw new Error(`Hyperliquid ${dexLabel} assetCtxs length does not match universe`);
  }
  for (const m of meta.universe) {
    if (typeof m?.name !== 'string') throw new Error(`Hyperliquid ${dexLabel} universe entry missing name`);
  }
  return { universe: meta.universe, assetCtxs };
}

/**
 * Back-compat wrapper used by buildSnapshot. Accepts either a single-dex raw
 * `[meta, assetCtxs]` tuple (tests) or the merged `{universe, assetCtxs}` shape
 * produced by fetchAllMetaAndCtxs. Returns the merged shape.
 */
export function validateUpstream(raw) {
  // Merged shape from fetchAllMetaAndCtxs: already validated per-dex.
  if (raw && !Array.isArray(raw) && Array.isArray(raw.universe) && Array.isArray(raw.assetCtxs)) {
    return { universe: raw.universe, assetCtxs: raw.assetCtxs };
  }
  // Single-dex tuple (legacy / tests): validate as default dex.
  return validateDexPayload(raw, 'default', MIN_UNIVERSE_DEFAULT);
}

export function indexBySymbol({ universe, assetCtxs }) {
  const out = new Map();
  for (let i = 0; i < universe.length; i++) {
    out.set(universe[i].name, assetCtxs[i] || {});
  }
  return out;
}

// ── Main build path ──────────────────────────────────────────────────────────

/**
 * Build a fresh snapshot from the upstream payload + the previous Redis snapshot.
 * Pure function — caller passes both inputs.
 */
export function buildSnapshot(upstream, prevSnapshot, opts = {}) {
  const validated = validateUpstream(upstream);
  const ctxBySymbol = indexBySymbol(validated);
  const now = opts.now || Date.now();
  const prevByName = new Map();
  if (prevSnapshot?.assets && Array.isArray(prevSnapshot.assets)) {
    for (const a of prevSnapshot.assets) prevByName.set(a.symbol, a);
  }
  const prevAgeMs = prevSnapshot?.ts ? now - prevSnapshot.ts : Infinity;
  // Treat stale prior snapshot (>3× cadence = 900s) as cold start.
  const coldStart = !prevSnapshot || prevAgeMs > 900_000;

  // Info-log unseen xyz: perps once per run so ops sees when Hyperliquid adds
  // commodity/FX markets we could add to the whitelist.
  const whitelisted = new Set(ASSETS.map((a) => a.symbol));
  const unknownXyz = validated.universe
    .map((/** @type {{ name: string }} */ u) => u.name)
    .filter((name) => typeof name === 'string' && name.startsWith('xyz:') && !whitelisted.has(name));
  if (unknownXyz.length > 0) {
    console.log(`  Unknown xyz: perps upstream (not whitelisted): ${unknownXyz.slice(0, 20).join(', ')}${unknownXyz.length > 20 ? ` (+${unknownXyz.length - 20} more)` : ''}`);
  }

  const assets = [];
  for (const meta of ASSETS) {
    const ctx = ctxBySymbol.get(meta.symbol);
    if (!ctx) {
      // Whitelisted symbol absent from upstream — carry forward prior with stale flag.
      const prev = prevByName.get(meta.symbol);
      if (!prev) continue; // never seen, skip silently (don't synthesize)
      const missing = (prev.missingPolls || 0) + 1;
      if (missing >= STALE_SYMBOL_DROP_AFTER_POLLS) {
        console.warn(`  Dropping ${meta.symbol} — missing for ${missing} consecutive polls`);
        continue;
      }
      assets.push({
        ...prev,
        stale: true,
        staleSince: prev.staleSince || now,
        missingPolls: missing,
      });
      continue;
    }
    const prev = coldStart ? null : prevByName.get(meta.symbol);
    const asset = computeAsset(meta, ctx, prev, { coldStart });
    assets.push(asset);
  }

  // Snapshot warmup = any asset still building a baseline. Reflects real
  // component-score readiness, not just the first poll after cold start.
  const warmup = assets.some((a) => a.warmup === true);

  return {
    ts: now,
    fetchedAt: new Date(now).toISOString(),
    warmup,
    assetCount: assets.length,
    assets,
  };
}

export function validateFn(snapshot) {
  return !!snapshot && Array.isArray(snapshot.assets) && snapshot.assets.length >= 12;
}

export function declareRecords(data) {
  return Array.isArray(data?.assets) ? data.assets.length : 0;
}

// ── Entry point ──────────────────────────────────────────────────────────────

const isMain = process.argv[1]?.endsWith('seed-hyperliquid-flow.mjs');
if (isMain) {
  const prevSnapshot = await readSeedSnapshot(CANONICAL_KEY);
  await runSeed('market', 'hyperliquid-flow', CANONICAL_KEY, async () => {
    // Commodity + FX perps live on the xyz builder dex, NOT the default dex.
    // Must fetch both and merge before scoring (see fetchAllMetaAndCtxs).
    const upstream = await fetchAllMetaAndCtxs();
    return buildSnapshot(upstream, prevSnapshot);
  }, {
    ttlSeconds: CACHE_TTL_SECONDS,
    validateFn,
    sourceVersion: 'hyperliquid-info-metaAndAssetCtxs-v1',
    recordCount: (snap) => snap?.assets?.length || 0,
    declareRecords,
    schemaVersion: 1,
    maxStaleMin: 15,
  }).catch((err) => {
    const cause = err.cause ? ` (cause: ${err.cause.message || err.cause.code || err.cause})` : '';
    console.error('FATAL:', (err.message || err) + cause);
    process.exit(1);
  });
}