Mirror of https://github.com/koala73/worldmonitor.git (synced 2026-04-25 17:14:57 +02:00)
* fix(intelligence): include framework/systemAppend hash in cache keys (todos 041, 045, 051)
* fix(intelligence): gate framework/systemAppend on server-side PRO check (todo 042)
* fix(skills): exact hostname allowlist + redirect:manual to prevent SSRF (todos 043, 054)
* fix(intelligence): sanitize systemAppend against prompt injection before LLM (todo 044)
* fix(intelligence): use framework field in DeductionPanel, fix InsightsPanel double increment (todos 046, 047)
* fix(intelligence): settings export, hot-path cache, country-brief debounce (todos 048, 049, 050)
* fix(intelligence): i18n, FrameworkSelector note, stripThinkingTags dedup, UUID IDs (todos 052, 055, 056, 057)
  - i18n Analysis Frameworks settings section (en + fr locales; replace all hardcoded English strings with t() calls)
  - FrameworkSelector: replace panelId==='insights' hardcode with a note? option; both InsightsPanel and DailyMarketBriefPanel pass note
  - stripThinkingTags: remove inline duplicate in summarize-article.ts, import from _shared/llm; add "Strip unterminated" comment so tests can locate the section
  - Replace Date.now() IDs for imported frameworks with crypto.randomUUID()
  - Change 'not supported in phase 1' phrasing to 'not supported'
  - test: fix summarize-reasoning Fix 2 suite to read from llm.ts
  - test: add premium-check stub and wire it into the redis-caching country intel brief importPatchedTsModule so the test can resolve the new import
* fix(security): address P1 review findings from PR #2386
  - premium-check: require `required: true` from validateApiKey so trusted browser origins (worldmonitor.app, Vercel previews, localhost) are not treated as PRO callers; fixes free-user bypass of the framework/systemAppend gate
  - llm: replace weak sanitizeSystemAppend with sanitizeForPrompt from llm-sanitize.js; all callLlm callers now get model-delimiter and control-char stripping, not just a phrase blocklist
  - get-country-intel-brief: apply sanitizeForPrompt to contextSnapshot before injecting into the user prompt; fixes unsanitized query-param injection
  - Closes todos 060, 061, 062 (P1: blocked merge of #2386)
* chore(todos): mark P1 todos 060-062 complete
* fix(agentskills): address Greptile P2 review comments
  - hoist ALLOWED_AGENTSKILLS_HOSTS Set to module scope (was reallocated per-request)
  - add res.type === 'opaqueredirect' check alongside the 3xx guard; Edge Runtime returns status=0 for opaque redirects, so the status range check alone is dead code
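For context, the SSRF hardening described in the fix(skills) and fix(agentskills) bullets roughly amounts to the following. This is a minimal sketch; the host name, function name, and error messages are assumptions for illustration, not the repo's actual identifiers.

```ts
// Hypothetical sketch of the exact-hostname allowlist + manual-redirect guard.
// Hoisted to module scope so the Set is not reallocated on every request.
const ALLOWED_AGENTSKILLS_HOSTS = new Set(['skills.example.com']);

async function fetchSkillManifest(rawUrl: string): Promise<Response> {
  const url = new URL(rawUrl);
  // Exact hostname membership, not a suffix/substring check that a host like
  // skills.example.com.attacker.net could satisfy.
  if (url.protocol !== 'https:' || !ALLOWED_AGENTSKILLS_HOSTS.has(url.hostname)) {
    throw new Error(`Host not allowed: ${url.hostname}`);
  }
  // redirect: 'manual' keeps fetch from silently following a redirect to an
  // internal address after the allowlist check has already passed.
  const res = await fetch(url.toString(), { redirect: 'manual' });
  // Edge runtimes surface blocked redirects as an 'opaqueredirect' response with
  // status 0, so a 3xx status-range check alone would never fire.
  if (res.type === 'opaqueredirect' || (res.status >= 300 && res.status < 400)) {
    throw new Error('Redirects are not allowed');
  }
  return res;
}
```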
| status | priority | issue_id | tags | dependencies |
|---|---|---|---|---|
| pending | p2 | 063 | | |
# `as any` on LLM response in summarize-article.ts bypasses type safety
## Problem Statement
`server/worldmonitor/news/v1/summarize-article.ts` line 155 casts the JSON response to `any`:

```ts
const data = await response.json() as any;
```

This bypasses TypeScript type checking for `data.usage`, `data.choices`, etc. Every other LLM response handler in the codebase uses a typed inline cast; the inline type already exists in `llm.ts` and can be reused.
## Proposed Solution

Replace it with a typed inline cast:

```ts
const data = await response.json() as {
  choices?: Array<{ message?: { content?: string } }>;
  usage?: { total_tokens?: number };
};
```
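With that cast in place, downstream reads are checked by the compiler instead of falling through `any`. A minimal usage sketch (variable names are illustrative, not taken from the handler):

```ts
// Optional chaining is now verified against the declared shape; a typo such as
// data.usage?.totalTokens would fail type-checking rather than silently yield undefined.
const content = data.choices?.[0]?.message?.content ?? '';
const totalTokens = data.usage?.total_tokens ?? 0;
```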
## Technical Details

- File: `server/worldmonitor/news/v1/summarize-article.ts:155`
- Effort: Small | Risk: Low
## Acceptance Criteria

- `as any` removed from `summarize-article.ts` LLM response handling
- TypeScript passes with no new `any` usages
## Work Log
- 2026-03-28: Identified by kieran-typescript-reviewer during PR #2386 review