Mirror of https://github.com/koala73/worldmonitor.git, synced 2026-04-25 17:14:57 +02:00
* feat(supply-chain): Sprint E — scenario visual completion + service parity
  - E1: fetchSectorDependency exported from supply-chain service index
  - E2: PRO gate + all-renderer dispatch in MapContainer.activateScenario
  - E3: scenario summary banner in SupplyChainPanel (dismiss wired)
  - E4: "Simulate Closure" trigger button in expanded chokepoint cards
  - E5: affectedIso2s heat layer in DeckGLMap (GeoJsonLayer, red tint)
  - E6: SVG renderer setScenarioState (best-effort iso2 fill)
  - E7: Globe renderer scenario polygons via flushPolygons
  - E8: integration tests for scenario run/status endpoints
* fix(supply-chain): address PR #2910 review findings (P1 + P2 + P3)
  - Wire setOnScenarioActivate + setOnDismissScenario in panel-layout.ts (todo #155)
  - Rename shadow variable t→tmpl in SCENARIO_TEMPLATES.find (todo #152)
  - Add statusResp.ok guard in scenario polling loop (todo #153)
  - Replace status.result! non-null assertion with shape guard (todo #154)
  - Add AbortController to prevent concurrent polling races (todo #162)
  - Add polygonStrokeColor scenario branch (transparent) in GlobeMap (todo #156)
  - Re-export SCENARIO_TEMPLATES via src/config/scenario-templates.ts (todo #157)
  - Cache affectedIso2Set in DeckGLMap.setScenarioState (todo #158)
  - Add scenario paths to PREMIUM_RPC_PATHS for auth injection (todo #160)
  - Show template name in scenario banner instead of raw ID (todo #163)
* fix(supply-chain): address PR #2910 review findings
  - Add auth headers to scenario fetch calls in SupplyChainPanel
  - Reset button state on scenario dismiss
  - Poll status immediately on first iteration (no 2s delay)
  - Pre-compute scenario polygons in GlobeMap.setScenarioState
  - Use scenarioId for DeckGL updateTriggers precision
* fix(supply-chain): wire panel instance to MapContainer, stop button click propagation
  - Call setSupplyChainPanel() in panel-layout.ts so scenario banner renders
  - Add stopPropagation() to Simulate Closure button to prevent card collapse
| status | priority | issue_id | tags | dependencies |
|---|---|---|---|---|
| pending | p3 | 165 | | |
# Scenario Rate Limiter Keys Off IP — Shared Egress Customers Share Rate Bucket
## Problem Statement

`api/scenario/v1/run.ts` rate-limits requests by client IP. In enterprise or office environments where many users share a single egress IP (NAT, VPN), all of those users share one rate bucket, so a single heavy user can exhaust the quota for every colleague on the same IP. The correct key for a PRO-gated endpoint is the authenticated API key identity.
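To make the failure mode concrete, here is a minimal sketch of a fixed-window limiter keyed by IP. All names (`WindowLimiter`, `LIMIT`) are illustrative, not taken from the codebase; the point is only that two users behind one NAT egress collide in the same bucket:

```typescript
// Illustrative sketch, not the codebase's actual limiter.
const LIMIT = 3; // requests allowed per window

class WindowLimiter {
  private counts = new Map<string, number>();
  // Increment the counter for this key and report whether it is still under the limit.
  allow(key: string): boolean {
    const n = (this.counts.get(key) ?? 0) + 1;
    this.counts.set(key, n);
    return n <= LIMIT;
  }
}

const limiter = new WindowLimiter();
const officeIp = '203.0.113.7'; // every colleague egresses via this one IP

// Heavy user Alice burns the whole IP bucket...
for (let i = 0; i < LIMIT; i++) limiter.allow(officeIp);

// ...so Bob, on the same egress IP, is throttled before sending a single prior request.
const bobAllowed = limiter.allow(officeIp);
console.log(bobAllowed); // false
```

Keying by API key instead of IP gives each authenticated identity its own bucket, which is what the proposal below does.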
## Findings

- File: `api/scenario/v1/run.ts`
- Rate limit key: likely `getClientIp(req)` (the standard pattern in this codebase)
- PRO endpoints that serve multiple users per IP should key rate limits by API key identity
- See MEMORY: `feedback_is_caller_premium_trusted_origin.md`; the API key is extractable from `X-WorldMonitor-Key`
- Severity: minor; the scenario endpoint is PRO-only and low-traffic, so this is not urgent
- Identified by security-sentinel during PR #2910 review
## Proposed Solutions

### Option A: Key the rate limit by API key when present, fall back to IP

```ts
const apiKey = req.headers.get('x-worldmonitor-key');
const rateLimitKey = apiKey
  ? `scenario:key:${apiKey}`
  : `scenario:ip:${getClientIp(req)}`;
```

Pros: per-identity limiting for authenticated users; IP fallback for unauthenticated requests.
Cons: unauthenticated requests are still IP-keyed (acceptable).
Effort: Small | Risk: None
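Wired together, Option A might look like the following self-contained sketch. The `getClientIp` stub and the Fetch-style `Headers` request shape are assumptions for illustration, not the codebase's actual helpers or types:

```typescript
// Assumption: stand-in for the codebase's getClientIp helper, which likely
// reads X-Forwarded-For or the socket address.
function getClientIp(req: { headers: Headers }): string {
  return req.headers.get('x-forwarded-for') ?? '0.0.0.0';
}

// Option A's derivation: per-identity bucket for PRO users, IP fallback otherwise.
function rateLimitKeyFor(req: { headers: Headers }): string {
  const apiKey = req.headers.get('x-worldmonitor-key');
  return apiKey
    ? `scenario:key:${apiKey}`
    : `scenario:ip:${getClientIp(req)}`;
}

// Two PRO users behind the same office NAT now get distinct buckets:
const alice = { headers: new Headers({ 'x-worldmonitor-key': 'wm_alice', 'x-forwarded-for': '203.0.113.7' }) };
const bob = { headers: new Headers({ 'x-worldmonitor-key': 'wm_bob', 'x-forwarded-for': '203.0.113.7' }) };
console.log(rateLimitKeyFor(alice)); // scenario:key:wm_alice
console.log(rateLimitKeyFor(bob));   // scenario:key:wm_bob
```

Note that `Headers.get` is case-insensitive, so reading `'x-worldmonitor-key'` matches a client-sent `X-WorldMonitor-Key` header.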
## Recommended Action

Apply Option A in a follow-up. Not blocking: the scenario endpoint is PRO-only and low-traffic.
## Technical Details

- Affected files: `api/scenario/v1/run.ts`
## Acceptance Criteria

- Rate limit key uses `X-WorldMonitor-Key` when present
- Falls back to IP for requests without a key
- Existing rate limit tests pass
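As a hedged illustration of the first two criteria, a framework-free, test-shaped sketch; `rateLimitKeyFor` is a hypothetical helper redefined inline here to mirror Option A's derivation, not an existing export:

```typescript
// Assumption: mirrors Option A's key derivation, kept inline so the sketch is self-contained.
function rateLimitKeyFor(headers: Headers, clientIp: string): string {
  const apiKey = headers.get('x-worldmonitor-key');
  return apiKey ? `scenario:key:${apiKey}` : `scenario:ip:${clientIp}`;
}

// Criterion 1: key uses X-WorldMonitor-Key when present (Headers lookup is case-insensitive).
const keyed = rateLimitKeyFor(new Headers({ 'X-WorldMonitor-Key': 'wm_123' }), '198.51.100.4');
if (keyed !== 'scenario:key:wm_123') throw new Error('criterion 1 failed');

// Criterion 2: falls back to IP for requests without a key.
const fallback = rateLimitKeyFor(new Headers(), '198.51.100.4');
if (fallback !== 'scenario:ip:198.51.100.4') throw new Error('criterion 2 failed');
```

The third criterion (existing rate limit tests pass) is covered by the project's own suite and is not sketched here.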
## Work Log

- 2026-04-10: Identified by security-sentinel during PR #2910 review
## Resources

- PR: #2910
- MEMORY: `feedback_is_caller_premium_trusted_origin.md`