* fix(portwatch): per-country timeout + SIGTERM progress flush
Diagnosed from Railway log 2026-04-20T04:00-04:07: Port-Activity section hit
the 420s section cap with only batch 1/15 logged. Gap between batch 1 (67.3s)
and SIGTERM was 352s of silence — batch 2 stalled because Promise.allSettled
waits for the slowest country and processCountry had no per-country budget.
One slow country (USA/CHN with many ports × many pages under ArcGIS EP3
throttling) blocked the whole batch and cascaded to the section timeout,
leaving batches 2..15 unattempted.
Two changes, both stabilisers ahead of the proper fix (globalising EP3):
1. Wrap processCountry in Promise.race against a 90s PER_COUNTRY_TIMEOUT_MS.
Bounds worst-case batch time at ~90s regardless of ArcGIS behaviour.
Orphan fetches keep running until their own AbortSignal.timeout(45s)
fires — acceptable since the process exits soon after either way.
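The per-country bound in change 1 can be sketched roughly like this (PER_COUNTRY_TIMEOUT_MS and withPerCountryTimeout are the names used in these commits; the body is illustrative, not the repo's exact code):

```javascript
// Minimal sketch of change 1 — illustrative, not the repo's exact code.
const PER_COUNTRY_TIMEOUT_MS = 90_000;

function withPerCountryTimeout(work, timeoutMs = PER_COUNTRY_TIMEOUT_MS, label = '') {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`per-country timeout after ${timeoutMs}ms: ${label}`)),
      timeoutMs,
    );
  });
  // Whichever settles first wins. The loser is simply abandoned: orphan
  // fetches keep running until their own AbortSignal.timeout fires.
  return Promise.race([Promise.resolve(work()), deadline])
    .finally(() => clearTimeout(timer));
}

// Usage sketch: Promise.allSettled can no longer wait longer than
// ~timeoutMs on the slowest country in a batch.
// await Promise.allSettled(batch.map((c) =>
//   withPerCountryTimeout(() => processCountry(c), PER_COUNTRY_TIMEOUT_MS, c)));
```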
2. Share a `progress` object between fetchAll() and the SIGTERM handler so
the kill path flushes batch index, seeded count, and the first 10 error
messages. Past timeout kills discarded the errors array entirely,
making every regression undiagnosable.
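The shared-progress idea in change 2, as a sketch (field names are assumptions inferred from the commit text, not the actual implementation):

```javascript
// Sketch only — field names are assumptions based on the commit text.
const progress = { batch: 0, totalBatches: 15, seeded: 0, errors: [] };

// fetchAll() mutates `progress` as it goes; the SIGTERM handler reads the
// same object, so a platform kill still flushes partial state instead of
// discarding the errors array.
process.on('SIGTERM', () => {
  console.error(
    `[portwatch] SIGTERM at batch ${progress.batch}/${progress.totalBatches}, ` +
      `seeded=${progress.seeded}, ` +
      `errors=${JSON.stringify(progress.errors.slice(0, 10))}`,
  );
  process.exit(1);
});
```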
* fix(portwatch): address PR #3222 P1+P2 (propagate abort, eager error flush)
Review feedback on #3222:
P1 — The 90s per-country timeout did not actually stop the timed-out
country's work; Promise.race rejected but processCountry kept paginating
with fresh 45s fetch timeouts per page, violating the CONCURRENCY=12 cap
and amplifying ArcGIS throttling instead of containing it.
Fix: thread an AbortController signal from withPerCountryTimeout through
processCountry → fetchActivityRows → fetchWithTimeout. fetchWithTimeout
combines the caller signal with AbortSignal.timeout(FETCH_TIMEOUT) via
AbortSignal.any so the per-country abort propagates into the in-flight
fetch. fetchActivityRows also checks signal.aborted between pages so a
cancel lands on the next iteration boundary even if the current page
has already resolved. Node 24 runtime supports AbortSignal.any.
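The signal-combining step might look like this (FETCH_TIMEOUT matches the commit; the function body is an illustrative shape, not the repo's exact fetchWithTimeout):

```javascript
// Sketch of the P1 fix — illustrative, not the repo's exact code.
const FETCH_TIMEOUT = 45_000;

async function fetchWithTimeout(url, { signal } = {}) {
  // AbortSignal.any (Node 20.3+ / 24) aborts as soon as EITHER source
  // aborts, so a per-country abort propagates into the in-flight fetch
  // instead of waiting out the per-page 45s timer.
  const combined = signal
    ? AbortSignal.any([signal, AbortSignal.timeout(FETCH_TIMEOUT)])
    : AbortSignal.timeout(FETCH_TIMEOUT);
  return fetch(url, { signal: combined });
}

// Between pages, the pagination loop can also bail at the iteration boundary:
//   if (signal?.aborted) throw signal.reason ?? new Error('aborted');
```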
P2 — SIGTERM diagnostics missed failures from the currently-stuck batch
because progress.errors was only populated after Promise.allSettled
returned. A kill during the pending await left progress.errors empty.
Fix: attach p.catch(err => errors.push(...)) to each wrapped promise
before Promise.allSettled. Rejections land in the shared errors array
at the moment they fire, so a SIGTERM mid-batch sees every rejection
that has already occurred (including per-country timeouts that have
already aborted their controllers). The settled loop skips rejected
outcomes to avoid double-counting.
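A sketch of the eager-flush pattern (the shared `errors` array stands in for progress.errors; other names are illustrative):

```javascript
// Sketch of the P2 fix — illustrative shape, not the repo's exact code.
const errors = [];

async function runBatch(promises) {
  // Eager flush: each rejection lands in `errors` the moment it fires,
  // long before allSettled resolves, so a SIGTERM mid-batch sees it.
  for (const p of promises) {
    p.catch((err) => errors.push(String(err?.message ?? err)));
  }
  const settled = await Promise.allSettled(promises);
  // Rejections were already counted eagerly; keep only fulfilled values
  // here to avoid double-counting.
  return settled
    .filter((s) => s.status === 'fulfilled')
    .map((s) => s.value);
}
```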
Also exports withPerCountryTimeout with an injectable timeoutMs so the
new runtime tests can exercise the abort path at 40ms. Runtime tests
verify: (a) timer fires → underlying signal aborted + work rejects with
the per-country message, (b) work-resolves-first returns the value,
(c) work-rejects-first surfaces the real error, (d) eager .catch flush
populates a shared errors array before allSettled resolves.
Tests: 45 pass (was 38, +7 — 4 runtime + 3 source-regex).
Full test:data: 5867 pass. Typecheck + lint clean.
* fix(portwatch): abort also cancels 429 proxy fallback (PR #3222 P1 follow-up)
Second review iteration on #3222: the per-country AbortController fix
from b2f4a2626 stopped at the direct fetch() and did not reach the 429
proxy fallback. httpsProxyFetchRaw only accepted timeoutMs, so a
timed-out country could keep a CONNECT tunnel + request alive for up
to another FETCH_TIMEOUT (45s) after the batch moved on — the exact
throttling scenario the PR is meant to contain. The concurrency cap
was still violated on the slow path.
Threads `signal` all the way through:
- scripts/_proxy-utils.cjs: proxyConnectTunnel + proxyFetch accept an
optional signal option. Early-reject if `signal.aborted` before
opening the socket. Otherwise addEventListener('abort') destroys the
in-flight proxy socket + TLS tunnel and rejects with signal.reason.
Listener removed in cleanup() on all terminal paths. Refactored both
functions around resolveOnce/rejectOnce guards so the abort path
races cleanly with timeout and network errors without double-settle.
- scripts/_seed-utils.mjs: httpsProxyFetchRaw accepts + forwards
`signal` to proxyFetch.
- scripts/seed-portwatch-port-activity.mjs: fetchWithTimeout's 429
branch passes its caller signal to httpsProxyFetchRaw.
Backward compatible: signal is optional in every layer, so the many
other callers of proxyFetch / httpsProxyFetchRaw across the repo are
unaffected.
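The resolveOnce/rejectOnce + abort-listener wiring might look roughly like this (a minimal shape, not the actual _proxy-utils.cjs code; `openSocket` is a hypothetical stand-in for the socket/tunnel setup):

```javascript
// Minimal sketch of the abort wiring — not the actual _proxy-utils.cjs code.
function connectWithAbort(openSocket, { signal } = {}) {
  // Early-reject before opening the socket at all.
  if (signal?.aborted) {
    return Promise.reject(signal.reason ?? new Error('aborted'));
  }
  return new Promise((resolve, reject) => {
    let settled = false;
    let socket;
    const cleanup = () => signal?.removeEventListener('abort', onAbort);
    // Single-settle guards: abort, timeout, and network errors can race,
    // but only the first outcome wins; cleanup runs on every terminal path.
    const resolveOnce = (v) => { if (!settled) { settled = true; cleanup(); resolve(v); } };
    const rejectOnce = (e) => { if (!settled) { settled = true; cleanup(); reject(e); } };
    const onAbort = () => {
      socket?.destroy(); // tear down the in-flight proxy socket / TLS tunnel
      rejectOnce(signal.reason ?? new Error('aborted'));
    };
    signal?.addEventListener('abort', onAbort, { once: true });
    socket = openSocket(resolveOnce, rejectOnce);
  });
}
```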
Tests: 49 pass (was 45, +4). New runtime test proves pre-aborted
signals reject proxyFetch synchronously without touching the network.
Source-regex tests assert signal threading at each layer. Full
test:data 5871 pass. Typecheck + lint clean.
* refactor: consolidate 5 proxy tunnel implementations into _proxy-utils.cjs
5 near-identical HTTP CONNECT proxy tunnel implementations (3 in
ais-relay.cjs, 1 in _seed-utils.mjs, 1 in seed-military-flights.mjs)
consolidated into two shared functions in _proxy-utils.cjs:
- proxyConnectTunnel(): low-level CONNECT + TLS wrapping, returns socket
- proxyFetch(): high-level fetch with decompression, custom headers,
POST support, timeout
All consumers now call the shared implementation:
- _seed-utils.mjs httpsProxyFetchRaw: 75 lines → 6 lines
- ais-relay.cjs ytFetchViaProxy: 40 lines → 5 lines
- ais-relay.cjs _openskyProxyConnect: 35 lines → 8 lines
- ais-relay.cjs inline Dodo CONNECT: 25 lines → 10 lines
- seed-military-flights.mjs proxyFetchJson: 70 lines → 14 lines
Also wires weather alerts proxy fallback (fixes STALE_SEED health crit).
Net: -104 lines. Resolves the TODO at _seed-utils.mjs:311.
* fix(proxy): default tls=true for bare proxy strings
parseProxyConfig returned no tls field for bare-format proxies
(user:pass@host:port and host:port:user:pass). proxyConnectTunnel
checked proxyConfig.tls and used plain TCP when it was undefined,
breaking connections to Decodo which requires TLS. Only http:// URLs
should use plain TCP.
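The decision reduces to one predicate (a sketch; the real parseProxyConfig does full parsing of both bare formats):

```javascript
// Sketch: only an explicit http:// scheme means plain TCP. Bare formats
// (user:pass@host:port, host:port:user:pass) and https:// default to TLS,
// which Decodo requires.
function proxyNeedsTls(proxyString) {
  return !proxyString.startsWith('http://');
}
```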
* fix(proxy): timeout covers full response, pass targetPort through
- Move clearTimeout from header arrival to stream end, so a server
that stalls after 200 OK headers still hits the timeout
- Make targetPort configurable in proxyConnectTunnel (was hardcoded
443), pass through from _openskyProxyConnect
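The timer move can be sketched against any readable stream (illustrative names, not the repo's code):

```javascript
// Sketch: the timer now covers the full body, not just header arrival.
function readBodyWithTimeout(res, timeoutMs) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`response timeout after ${timeoutMs}ms`)),
      timeoutMs,
    );
    const chunks = [];
    res.on('data', (c) => chunks.push(c));
    // clearTimeout moved from header arrival to stream end: a server that
    // stalls after "200 OK" headers still trips the timer.
    res.on('end', () => { clearTimeout(timer); resolve(Buffer.concat(chunks)); });
    res.on('error', (e) => { clearTimeout(timer); reject(e); });
  });
}
```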
* fix(relay): proxy fallback for Yahoo/Crypto, isolate OREF proxy, fix Dockerfile
Yahoo Finance and CoinPaprika fail from Railway datacenter IPs (rate
limiting). Added PROXY_URL fallback to fetchYahooChartDirect (used by
5 seeders) and relay chart proxy endpoint. Added shared
_fetchCoinPaprikaTickers with proxy fallback + 5min cache (3 crypto
seeders share one fetch). Added CoinPaprika fallback to CryptoSectors
(previously had none).
Isolated OREF_PROXY_AUTH exclusively for OREF alerts. OpenSky,
seed-military-flights, and _proxy-utils now fall back to PROXY_URL
instead of the expensive IL-exit proxy.
Added seed-climate-news.mjs + _seed-utils.mjs COPY to Dockerfile.relay
(missing since PR #2532). Added pizzint bootstrap hydration to
cache-keys.ts, bootstrap.js, and src/services/pizzint.ts.
* fix(relay): address review — remove unused reverseMap, guard double proxy
- Remove dead reverseMap identity map in CryptoSectors Paprika fallback
- Add _proxied flag to handleYahooChartRequest._tryProxy to prevent
double proxy call on timeout→destroy→error sequence
* fix(seeder): use TLS for proxy CONNECT tunnel to fix FRED fetch failures
Decodo gate.decodo.com:10001 requires TLS. Previous code used http.request
(plain TCP) which received SOCKS5 rejection bytes instead of HTTP 200.
Two issues fixed:
1. Replace http.request CONNECT with tls.connect + manual CONNECT handshake.
Node.js http.request also auto-sets Host to the proxy hostname; Decodo
rejects this and responds with SOCKS5 bytes (0x05 0xff). Manual CONNECT
over a raw TLS socket avoids both issues.
2. Handle both the https:// and bare "user:pass@host:port" PROXY_URL formats;
TLS is used in either case, regardless of prefix.
_proxy-utils.cjs: resolveProxyStringConnect now preserves https:// prefix
from PROXY_URL so callers can detect TLS proxies explicitly.
All 24 FRED series (BAMLH0A0HYM2, FEDFUNDS, DGS10, etc.) confirmed working
locally via gate.decodo.com:10001.
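The hand-written preamble from fix 1 can be sketched as a pure helper (the real code opens the proxy hop with tls.connect and writes this once the socket is up; the helper name and wiring here are illustrative):

```javascript
// Sketch: the CONNECT preamble is written by hand over a raw tls.connect()
// socket precisely so the Host header names the TARGET. http.request would
// set Host to the proxy hostname, which Decodo answers with SOCKS5 bytes
// (0x05 0xff) instead of an HTTP 200.
function buildConnectRequest(targetHost, targetPort, proxyAuth) {
  return (
    `CONNECT ${targetHost}:${targetPort} HTTP/1.1\r\n` +
    `Host: ${targetHost}:${targetPort}\r\n` +
    `Proxy-Authorization: Basic ${Buffer.from(proxyAuth).toString('base64')}\r\n` +
    `\r\n`
  );
}

// Wiring (illustrative): open a TLS socket to the proxy, send the preamble,
// then wrap the tunnel again with tls.connect({ socket }) for the target:
//   const sock = tls.connect({ host: proxyHost, port: proxyPort }, () => {
//     sock.write(buildConnectRequest('api.stlouisfed.org', 443, 'user:pass'));
//   });
```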
* fix(seeder): respect http:// proxy scheme + buffer full CONNECT response
Two protocol-correctness fixes:
1. http:// proxies used plain TCP before; always-TLS regressed them.
Now: bare/undeclared format → TLS (Decodo requires it), explicit
http:// → plain net.connect, explicit https:// → TLS.
2. CONNECT response buffered until \r\n\r\n instead of acting on the
first data chunk. Fragmented proxy responses (headers split across
packets) could corrupt the TLS handshake by leaving header bytes
on the wire when tls.connect() was called too early.
Verified locally: BAMLH0A0HYM2 → { date: 2026-03-26, value: 3.21 }
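The buffering from fix 2 can be sketched as a small accumulator (illustrative; the real code feeds socket 'data' events into it and re-emits any leftover bytes before handing the socket to tls.connect()):

```javascript
// Sketch of fix 2: accumulate until the blank line, then hand back both the
// status line and any bytes that arrived AFTER the headers — those belong
// to the upcoming TLS handshake and must not be dropped.
function makeConnectResponseBuffer() {
  let buf = Buffer.alloc(0);
  return function feed(chunk) {
    buf = Buffer.concat([buf, chunk]);
    const end = buf.indexOf('\r\n\r\n');
    if (end === -1) return null; // headers still incomplete; keep waiting
    return {
      statusLine: buf.subarray(0, buf.indexOf('\r\n')).toString(),
      leftover: buf.subarray(end + 4), // re-emit before tls.connect()
    };
  };
}
```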
* chore(seeder): remove unused http import, fix stale JSDoc
- Drop `import * as http from 'node:http'` — no longer used after
replacing http.request CONNECT with tls.connect + manual handshake
- Update resolveProxyStringConnect() JSDoc: https.request → tls.connect
* fix(seed-economy): use gate.decodo.com for FRED CONNECT proxy, add fallback logging
resolveProxyString() rewrites gate.decodo.com to us.decodo.com for curl
compatibility, but httpsProxyFetchJson uses HTTP CONNECT tunneling, which
requires gate.decodo.com.
All FRED series were silently failing with "Parse Error: Expected HTTP/, RTSP/ or ICE/"
because us.decodo.com doesn't respond to CONNECT with valid HTTP.
- Add resolveProxyStringConnect() in _proxy-utils.cjs (no host replacement)
- Export resolveProxyForConnect() from _seed-utils.mjs for CONNECT-based proxy
- seed-economy: _proxyAuth uses resolveProxyForConnect() (FRED), _curlProxyAuth uses resolveProxy() (Yahoo)
- fredFetchJson now logs when direct fails and proxy is tried, labels proxy errors as "proxy: ..."
* fix(seed-economy): roll out resolveProxyForConnect to all FRED seeders
seed-economic-calendar, seed-supply-chain-trade, and seed-bls-series were
still passing the curl-oriented us.decodo.com proxy string to fredFetchJson,
which uses HTTP CONNECT tunneling and requires gate.decodo.com.
* fix(proxy): add shared _proxy-utils.cjs for PROXY_URL parsing
Previously each consumer (ais-relay.cjs, _seed-utils.mjs, seed-fear-greed.mjs,
seed-disease-outbreaks.mjs) had its own inline resolveProxy() with slightly
different implementations. This caused USNI seeding to fail because
parseProxyUrl() only handled URL format while PROXY_URL uses Decodo
host:port:user:pass format.
- Add scripts/_proxy-utils.cjs with parseProxyConfig(), resolveProxyConfig(),
resolveProxyString() handling both http://user:pass@host:port and
host:port:user:pass formats
- ais-relay.cjs: require _proxy-utils.cjs, alias parseProxyUrl = parseProxyConfig
- _seed-utils.mjs: import resolveProxyString via createRequire, delegate resolveProxy()
- seed-fear-greed.mjs, seed-disease-outbreaks.mjs: remove inline resolveProxy(),
import from _seed-utils.mjs instead