mirror of
https://github.com/koala73/worldmonitor.git
synced 2026-04-25 17:14:57 +02:00
* feat(seeds): BIS DSR + property prices (2 of 7)

  Ships 2 of 7 BIS dataflows flagged as genuinely new signals in #3026 — the rest are redundant with IMF/WB or are low-fit global aggregates.

  New seeder: scripts/seed-bis-extended.mjs
  - WS_DSR household debt service ratio (% income, quarterly)
  - WS_SPP residential property prices (real index, quarterly)
  - WS_CPP commercial property prices (real index, quarterly)

  Gold-standard pattern: atomic publish + writeExtraKey for extras, retry on missing startPeriod, TTL = 3 days (3× 12h cron), runSeed drives seed-meta:economic:bis-extended. Series selection scores dimension matches (PP_VALUATION=R / UNIT_MEASURE=628 for property, DSR_BORROWERS=P / DSR_ADJUST=A for DSR), then falls back to observation count.

  Wired into:
  - bootstrap (slow tier) + cache-keys.ts
  - api/health.js (STANDALONE_KEYS + SEED_META, maxStaleMin = 24h)
  - api/mcp.ts get_economic_data tool (_cacheKeys + _freshnessChecks)
  - resilience macroFiscal: new householdDebtService sub-metric (weight 0.05, currentAccountPct rebalanced 0.3 → 0.25)
  - Housing Cycle tile on CountryDeepDivePanel (Economic Indicators card) with euro-area (XM) fallback for EU member states
  - seed-bundle-macro Railway cron (BIS-Extended, 12h interval)

  Tests: tests/bis-extended-seed.test.mjs covers CSV parsing, series selection, quarter math + YoY. Updated resilience golden-value tests for the macroFiscal weight rebalance.

  Closes #3026

  https://claude.ai/code/session_01DDo39mPD9N2fNHtUntHDqN

* fix(resilience): unblock PR #3048 on #3046 stack

  - rebase onto #3046; final macroFiscal weights: govRevenue 0.40, currentAccount 0.20, debtGrowth 0.20, unemployment 0.15, householdDebtService 0.05 (= 1.00)
  - add updateHousingCycle? stub to CountryBriefPanel interface so country-intel dispatch typechecks
  - add HR to EURO_AREA fallback set (joined euro 2023-01-01)
  - seed-bis-extended: extend SPP/CPP TTLs when DSR fetch returns empty so the rejected publish does not silently expire the still-good property keys
  - update resilience goldens for the 5-sub-metric macroFiscal blend

* fix(country-brief): housing tile renders em-dash for null change values

  The new Housing Cycle tile used `?? 0` to default qoqChange/yoyChange/change when missing, fabricating a flat "0.0%" label (with positive-trend styling) for countries with no prior comparable period. The fetch path and builders correctly return null; the panel was coercing it. formatPctTrend now accepts null|undefined and returns an em-dash, matching how other cards surface unavailable metrics. Drop the `?? 0` fallbacks at the three housing call sites.

* fix(seed-health): register economic:bis-extended seed-meta monitoring

  The 12h Railway cron writes seed-meta:economic:bis-extended but it was missing from SEED_DOMAINS, so /api/seed-health never reported its freshness. intervalMin=720 matches maxStaleMin/2 (1440/2) from api/health.js.

* fix(seed-bis-extended): decouple DSR/SPP/CPP so one fetch failure doesn't block the others

  Previously validate() required data.entries.length > 0 on the DSR slice after publishTransform pulled it out of the aggregate payload. If the WS_DSR fetch failed but WS_SPP / WS_CPP succeeded, validate() rejected the publish → afterPublish() never ran → fresh SPP/CPP data was silently discarded and only the old snapshots got a TTL bump.

  This treats the three datasets as independent:
  - SPP and CPP are now published (or have their existing TTLs extended) as side effects of fetchAll(), per dataset. A failure in one never affects the others.
  - DSR continues to flow through runSeed's canonical-key path. When DSR is empty, publishTransform yields { entries: [] } so atomicPublish skips the canonical write (preserving the old DSR snapshot); runSeed's skipped branch extends its TTL and refreshes seed-meta.

  Shape B (one runSeed call, semantics changed) was chosen over Shape A (three sequential runSeed calls) because runSeed owns the lock + process.exit lifecycle and can't safely be called three times in a row, and Shape B keeps the single aggregate seed-meta:economic:bis-extended key that health.js already monitors.

  Tests cover both failure modes:
  - DSR empty + SPP/CPP healthy → SPP/CPP written, DSR TTL extended
  - DSR healthy + SPP/CPP empty → DSR written, SPP/CPP TTLs extended

* fix(health): per-dataset seed-meta for BIS DSR/SPP/CPP

  Health was pointing bisDsr / bisPropertyResidential / bisPropertyCommercial at the shared seed-meta:economic:bis-extended key, which runSeed refreshes on every run (including its validation-failed "skipped" branch). A DSR-only outage therefore left bisDsr reporting fresh in api/health.js while the resilience scorer consumed stale or missing economic:bis:dsr:v1 data.

  Write a dedicated seed-meta key per dataset ONLY when that dataset actually published fresh entries. The aggregate bis-extended key stays as a "seeder ran" signal in api/seed-health.js.

* fix(seed-bis-extended): write DSR seed-meta only after atomicPublish succeeds

  Previously fetchAll() wrote seed-meta:economic:bis-dsr inline before runSeed/atomicPublish ran. If atomicPublish then failed (Redis hiccup, validate rejection, etc.), seed-meta was already bumped — health would report DSR fresh while the canonical key was stale.

  Move the DSR seed-meta write into a dsrAfterPublish callback passed to runSeed via the existing afterPublish hook, which fires only after a successful canonical publish. The SPP/CPP paths already used this ordering inside publishDatasetIndependently; this brings DSR in line.

  Adds a regression test exercising dsrAfterPublish with mocked Upstash: populated DSR → single SET on the seed-meta key; null/empty DSR → zero Redis calls.

* fix(resilience): per-dataset BIS seed-meta keys in freshness overrides

  SOURCE_KEY_META_OVERRIDES previously collapsed economic:bis:dsr:v1 and both property-* sourceKeys onto the aggregate seed-meta:economic:bis-extended key. api/health.js (SEED_META) writes per-dataset keys (seed-meta:economic:bis-dsr / bis-property-residential / bis-property-commercial), so a DSR-only outage showed stale in /api/health while the resilience dimension-freshness code still reported the macroFiscal inputs as fresh.

  Map each BIS sourceKey to its dedicated seed-meta key to match health.js. The aggregate bis-extended key is still written by the seeder and read by api/seed-health.js as a "seeder ran" signal, so it is retained upstream.

* fix(bis): prefer households in DSR + per-dataset freshness in MCP

  Greptile review catches on #3048:

  1. buildDsr() was selecting DSR_BORROWERS='P' (private non-financial) while the UI labels it "Household DSR" and resilience scoring uses it as `householdDebtService`. Changed to 'H' (households). Countries without an H series are now dropped rather than silently mislabeled.
  2. api/mcp.ts get_economic_data still read only the aggregate seed-meta:economic:bis-extended for freshness. If DSR goes stale while SPP/CPP keep publishing, MCP would report the BIS block as fresh even though one of its returned keys is stale. Swapped to the three per-dataset seed-meta keys (bis-dsr, bis-property-residential, bis-property-commercial), matching the fix already applied to /api/health and the resilience dimension-freshness pipeline.

---------

Co-authored-by: Claude <noreply@anthropic.com>
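The write-vs-extend classification at the heart of the decoupling fix can be sketched in a few lines. This is an illustrative reimplementation, not the exported helper — the real planDatasetAction lives in scripts/seed-bis-extended.mjs and the test file below pins its observable behavior:

```javascript
// Illustrative sketch (name suffixed to mark it hypothetical): classify a
// fetched dataset slice the way the decoupling fix describes.
function planDatasetActionSketch(slice) {
  // A slice is "fresh" only when it exists and carries at least one entry.
  if (slice && Array.isArray(slice.entries) && slice.entries.length > 0) {
    return 'write'; // publish fresh data and bump the per-dataset seed-meta
  }
  // null / undefined / { entries: [] } → keep the previous snapshot alive by
  // extending its TTL, and do NOT touch seed-meta (health must see it stale).
  return 'extend';
}
```

Because each slice is classified independently, a WS_DSR outage can never discard fresh WS_SPP/WS_CPP data, and vice versa.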
261 lines
12 KiB
JavaScript
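The seed-meta ordering invariant those commits establish — the freshness marker is bumped only after a successful canonical publish — can be sketched as follows. Names here are hypothetical; the real hook is dsrAfterPublish, exercised by the tests in this file, and it runs asynchronously against Upstash rather than via an injected callback:

```javascript
// Illustrative sketch of the afterPublish ordering invariant. writeMeta is
// an injected stand-in for the Redis SET on the per-dataset seed-meta key.
function afterPublishSketch(data, writeMeta) {
  // Only a populated DSR slice may refresh seed-meta; a failed or skipped
  // canonical publish must leave the marker untouched so health goes stale.
  if (data && data.dsr && data.dsr.entries && data.dsr.entries.length > 0) {
    writeMeta('seed-meta:economic:bis-dsr', Date.now());
    return true;  // meta bumped
  }
  return false;   // no Redis calls at all
}
```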
import { describe, it } from 'node:test';
import { strict as assert } from 'node:assert';

import {
  parseBisCSV,
  selectBestSeriesByCountry,
  buildDsr,
  buildPropertyPrices,
  quarterToDate,
  validate,
  publishTransform,
  planDatasetAction,
  publishDatasetIndependently,
  dsrAfterPublish,
  KEYS,
  META_KEYS,
} from '../scripts/seed-bis-extended.mjs';

// Minimal BIS-style SDMX CSV fixture covering:
// - Two DSR series per country (one private/adjusted → preferred, one
//   households/unadjusted → deprioritised) so selectBestSeriesByCountry
//   has to use dimension prefs to pick.
// - A real-index SPP series plus a YoY-pct series — the real-index
//   variant (UNIT_MEASURE=628, PP_VALUATION=R) must win.
// - Missing values (`.`) and empty rows — must be discarded.
const DSR_CSV = [
  'FREQ,BORROWERS_CTY,DSR_BORROWERS,DSR_ADJUST,TIME_PERIOD,OBS_VALUE',
  'Q,US,P,A,2023-Q2,9.8',
  'Q,US,P,A,2023-Q3,10.1',
  'Q,US,P,A,2023-Q4,10.4',
  'Q,US,H,U,2023-Q2,7.5',
  'Q,US,H,U,2023-Q3,7.6',
  'Q,GB,P,A,2023-Q3,8.2',
  'Q,GB,P,A,2023-Q4,.',
  'Q,GB,P,A,2023-Q4,8.5',
  '',
].join('\n');

const SPP_CSV = [
  'FREQ,REF_AREA,UNIT_MEASURE,PP_VALUATION,TIME_PERIOD,OBS_VALUE',
  'Q,US,628,R,2022-Q4,100.0',
  'Q,US,628,R,2023-Q1,101.2',
  'Q,US,628,R,2023-Q2,102.5',
  'Q,US,628,R,2023-Q3,103.0',
  'Q,US,628,R,2023-Q4,104.1',
  'Q,US,628,R,2024-Q4,108.5',
  'Q,US,771,R,2023-Q3,5.4', // YoY-change variant — must not be chosen
  'Q,XM,628,R,2023-Q4,99.0',
  'Q,XM,628,R,2024-Q4,100.5',
].join('\n');

describe('seed-bis-extended parser', () => {
  it('exports the canonical Redis keys', () => {
    assert.equal(KEYS.dsr, 'economic:bis:dsr:v1');
    assert.equal(KEYS.spp, 'economic:bis:property-residential:v1');
    assert.equal(KEYS.cpp, 'economic:bis:property-commercial:v1');
  });

  it('exports per-dataset seed-meta keys distinct from the aggregate', () => {
    // Health monitoring (api/health.js bisDsr / bisPropertyResidential /
    // bisPropertyCommercial) points at these keys — the whole point of the
    // P1 fix is that a DSR-only outage stales ONLY bisDsr, not all three.
    assert.equal(META_KEYS.dsr, 'seed-meta:economic:bis-dsr');
    assert.equal(META_KEYS.spp, 'seed-meta:economic:bis-property-residential');
    assert.equal(META_KEYS.cpp, 'seed-meta:economic:bis-property-commercial');
    // Must not collide with the aggregate "seeder ran" marker.
    assert.notEqual(META_KEYS.dsr, 'seed-meta:economic:bis-extended');
    assert.notEqual(META_KEYS.spp, 'seed-meta:economic:bis-extended');
    assert.notEqual(META_KEYS.cpp, 'seed-meta:economic:bis-extended');
  });

  it('maps BIS quarter strings to first day of the quarter', () => {
    assert.equal(quarterToDate('2023-Q3'), '2023-07-01');
    assert.equal(quarterToDate('2024-Q1'), '2024-01-01');
    assert.equal(quarterToDate('2024-Q4'), '2024-10-01');
    // Non-quarterly strings pass through unchanged (monthly or daily BIS periods).
    assert.equal(quarterToDate('2024-06'), '2024-06');
  });

  it('parses CSV rows and drops blank lines', () => {
    const rows = parseBisCSV(DSR_CSV);
    assert.ok(rows.length >= 7, 'expected at least 7 non-empty rows');
    assert.equal(rows[0].TIME_PERIOD, '2023-Q2');
    assert.equal(rows[0].BORROWERS_CTY, 'US');
  });

  it('buildDsr prefers DSR_BORROWERS=P / DSR_ADJUST=A and returns latest+QoQ', () => {
    const rows = parseBisCSV(DSR_CSV);
    const entries = buildDsr(rows);
    const us = entries.find(e => e.countryCode === 'US');
    assert.ok(us, 'expected US entry');
    // The adjusted-private series wins, so latest must be 10.4 not 7.6.
    assert.equal(us.dsrPct, 10.4);
    assert.equal(us.previousDsrPct, 10.1);
    assert.equal(us.period, '2023-Q4');
    assert.equal(us.date, '2023-10-01');
    assert.ok(us.change !== null);
  });

  it('buildPropertyPrices picks the real-index series (628/R) and computes YoY', () => {
    const rows = parseBisCSV(SPP_CSV);
    const entries = buildPropertyPrices(rows, 'residential');
    const us = entries.find(e => e.countryCode === 'US');
    assert.ok(us, 'expected US entry');
    assert.equal(us.indexValue, 108.5); // latest observation
    assert.equal(us.period, '2024-Q4');
    assert.equal(us.kind, 'residential');
    // YoY: 108.5 / 104.1 − 1 ≈ 4.2%.
    assert.ok(us.yoyChange !== null && Math.abs(us.yoyChange - 4.2) < 0.2, `yoyChange=${us.yoyChange}`);
    // Euro Area (XM) should also come through.
    const xm = entries.find(e => e.countryCode === 'XM');
    assert.ok(xm, 'expected XM entry');
    assert.equal(xm.kind, 'residential');
  });

  it('decouples DSR / SPP / CPP: DSR empty + SPP+CPP healthy → SPP+CPP written, DSR TTL extended', () => {
    // Simulated fetchAll() output when the WS_DSR fetch failed but WS_SPP / WS_CPP
    // succeeded. The previous code hard-gated everything on DSR: publishTransform
    // would yield { entries: [] }, validate() would fail on the full object, and
    // afterPublish() never ran → fresh SPP/CPP data silently dropped. The fix
    // must classify each dataset independently.
    const data = {
      dsr: null,
      spp: { entries: [{ countryCode: 'US', indexValue: 108.5 }], fetchedAt: 't' },
      cpp: { entries: [{ countryCode: 'US', indexValue: 95.2 }], fetchedAt: 't' },
    };
    // SPP/CPP must be WRITTEN (fresh data).
    assert.equal(planDatasetAction(data.spp), 'write');
    assert.equal(planDatasetAction(data.cpp), 'write');
    // DSR must have its EXISTING TTL extended (no canonical overwrite).
    assert.equal(planDatasetAction(data.dsr), 'extend');
    // publishTransform yields an empty DSR payload → validate() returns false
    // → atomicPublish skips the canonical DSR write and extends its TTL via
    // runSeed's own skipped branch (preserving the previous DSR snapshot).
    const publishData = publishTransform(data);
    assert.deepEqual(publishData, { entries: [] });
    assert.equal(validate(publishData), false);
  });

  it('decouples DSR / SPP / CPP: DSR healthy + SPP+CPP empty → DSR written, SPP+CPP TTLs extended', () => {
    // Reverse failure mode: the DSR fetch succeeded, SPP/CPP both returned empty
    // (e.g. BIS property-price endpoint hiccup). DSR must still publish fresh
    // data; the old SPP/CPP snapshots must survive via TTL extension.
    const data = {
      dsr: { entries: [{ countryCode: 'US', dsrPct: 10.4 }], fetchedAt: 't' },
      spp: null,
      cpp: null,
    };
    assert.equal(planDatasetAction(data.dsr), 'write');
    assert.equal(planDatasetAction(data.spp), 'extend');
    assert.equal(planDatasetAction(data.cpp), 'extend');
    const publishData = publishTransform(data);
    assert.equal(publishData, data.dsr); // passes the DSR slice straight through
    assert.equal(validate(publishData), true); // canonical DSR write proceeds
  });

  it('planDatasetAction treats a {entries:[]} slice as extend-TTL (not write)', () => {
    assert.equal(planDatasetAction({ entries: [] }), 'extend');
    assert.equal(planDatasetAction(null), 'extend');
    assert.equal(planDatasetAction(undefined), 'extend');
  });

  it('publishDatasetIndependently writes per-dataset seed-meta ONLY on fresh write, not on extend-TTL', async () => {
    // Capture every Upstash REST call so we can assert which keys were touched.
    const origUrl = process.env.UPSTASH_REDIS_REST_URL;
    const origTok = process.env.UPSTASH_REDIS_REST_TOKEN;
    const origFetch = globalThis.fetch;
    process.env.UPSTASH_REDIS_REST_URL = 'https://mock.upstash.invalid';
    process.env.UPSTASH_REDIS_REST_TOKEN = 'mock-token';
    const calls = [];
    globalThis.fetch = async (_url, opts) => {
      const body = JSON.parse(opts.body);
      calls.push(body); // e.g. ['SET', 'key', 'value', 'EX', 123] or ['EXPIRE', ...]
      return { ok: true, status: 200, json: async () => ({ result: 'OK' }) };
    };
    try {
      // 1. Fresh payload → canonical key written + per-dataset seed-meta written.
      calls.length = 0;
      await publishDatasetIndependently(
        KEYS.spp,
        { entries: [{ countryCode: 'US', indexValue: 108.5 }], fetchedAt: 't' },
        META_KEYS.spp,
      );
      const sets = calls.filter(c => c[0] === 'SET').map(c => c[1]);
      assert.ok(sets.includes(KEYS.spp), `expected SET on canonical key ${KEYS.spp}, got ${JSON.stringify(sets)}`);
      assert.ok(sets.includes(META_KEYS.spp), `expected SET on seed-meta key ${META_KEYS.spp}, got ${JSON.stringify(sets)}`);

      // 2. Empty payload → canonical key TTL extended, seed-meta NOT written.
      //    (This is the core P1 invariant: a DSR outage must not refresh
      //    seed-meta:economic:bis-dsr, otherwise health lies "fresh".)
      calls.length = 0;
      await publishDatasetIndependently(KEYS.dsr, null, META_KEYS.dsr);
      const metaSets = calls.filter(c => c[0] === 'SET' && c[1] === META_KEYS.dsr);
      assert.equal(metaSets.length, 0, `seed-meta must NOT be written on extend-TTL path, got ${JSON.stringify(metaSets)}`);
      // Any SET at all on the extend path is wrong — only EXPIRE-style calls expected.
      const canonicalSets = calls.filter(c => c[0] === 'SET' && c[1] === KEYS.dsr);
      assert.equal(canonicalSets.length, 0, 'canonical key must NOT be re-written on extend-TTL path');
    } finally {
      globalThis.fetch = origFetch;
      if (origUrl === undefined) delete process.env.UPSTASH_REDIS_REST_URL; else process.env.UPSTASH_REDIS_REST_URL = origUrl;
      if (origTok === undefined) delete process.env.UPSTASH_REDIS_REST_TOKEN; else process.env.UPSTASH_REDIS_REST_TOKEN = origTok;
    }
  });

  it('dsrAfterPublish writes seed-meta:economic:bis-dsr only after a successful canonical DSR publish', async () => {
    // Regression for the ordering bug: previously seed-meta was written
    // INSIDE fetchAll() before runSeed/atomicPublish ran. If atomicPublish
    // then failed (Redis hiccup), seed-meta would already be bumped → health
    // reports DSR fresh while the canonical key is stale. The fix moves the
    // write into an afterPublish callback that fires only on a successful
    // canonical publish.
    const origUrl = process.env.UPSTASH_REDIS_REST_URL;
    const origTok = process.env.UPSTASH_REDIS_REST_TOKEN;
    const origFetch = globalThis.fetch;
    process.env.UPSTASH_REDIS_REST_URL = 'https://mock.upstash.invalid';
    process.env.UPSTASH_REDIS_REST_TOKEN = 'mock-token';
    const calls = [];
    globalThis.fetch = async (_url, opts) => {
      const body = JSON.parse(opts.body);
      calls.push(body);
      return { ok: true, status: 200, json: async () => ({ result: 'OK' }) };
    };
    try {
      // 1. DSR populated → seed-meta IS written (this is the post-publish path).
      calls.length = 0;
      await dsrAfterPublish({
        dsr: { entries: [{ countryCode: 'US', dsrPct: 10.4 }], fetchedAt: 't' },
        spp: null,
        cpp: null,
      });
      const metaSets = calls.filter(c => c[0] === 'SET' && c[1] === META_KEYS.dsr);
      assert.equal(metaSets.length, 1, `expected SET on ${META_KEYS.dsr} after successful publish, got ${JSON.stringify(calls)}`);

      // 2. DSR null/empty → seed-meta NOT written. atomicPublish would have
      //    skipped the canonical write in this case anyway (validate=false),
      //    but this guards against a future caller invoking the hook with an
      //    empty slice.
      calls.length = 0;
      await dsrAfterPublish({ dsr: null, spp: null, cpp: null });
      assert.equal(calls.length, 0, `expected no Redis calls when DSR slice is empty, got ${JSON.stringify(calls)}`);

      calls.length = 0;
      await dsrAfterPublish({ dsr: { entries: [] }, spp: null, cpp: null });
      assert.equal(calls.length, 0, 'expected no Redis calls when DSR slice has zero entries');
    } finally {
      globalThis.fetch = origFetch;
      if (origUrl === undefined) delete process.env.UPSTASH_REDIS_REST_URL; else process.env.UPSTASH_REDIS_REST_URL = origUrl;
      if (origTok === undefined) delete process.env.UPSTASH_REDIS_REST_TOKEN; else process.env.UPSTASH_REDIS_REST_TOKEN = origTok;
    }
  });

  it('selectBestSeriesByCountry ignores series with no usable observations', () => {
    const rows = [
      { FREQ: 'Q', REF_AREA: 'US', UNIT_MEASURE: '628', PP_VALUATION: 'R', TIME_PERIOD: '2023-Q1', OBS_VALUE: '.' },
      { FREQ: 'Q', REF_AREA: 'US', UNIT_MEASURE: '628', PP_VALUATION: 'R', TIME_PERIOD: '2023-Q2', OBS_VALUE: '' },
    ];
    const out = selectBestSeriesByCountry(rows, { countryColumns: ['REF_AREA'], prefs: { PP_VALUATION: 'R' } });
    assert.equal(out.size, 0);
  });
});
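For reference, the quarter math these tests pin down ('2023-Q3' → '2023-07-01', non-quarterly periods passed through) can be sketched as follows. This is an illustrative reimplementation, not the exported quarterToDate from scripts/seed-bis-extended.mjs, which remains the source of truth:

```javascript
// Illustrative sketch of the BIS quarter-string → ISO-date mapping asserted
// above (name suffixed to mark it hypothetical).
function quarterToDateSketch(period) {
  const m = /^(\d{4})-Q([1-4])$/.exec(period);
  if (!m) return period; // monthly or daily BIS periods pass through unchanged
  const month = (Number(m[2]) - 1) * 3 + 1; // Q1→1, Q2→4, Q3→7, Q4→10
  return `${m[1]}-${String(month).padStart(2, '0')}-01`;
}
```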