* fix(intelligence): include framework/systemAppend hash in cache keys (todos 041, 045, 051)
* fix(intelligence): gate framework/systemAppend on server-side PRO check (todo 042)
* fix(skills): exact hostname allowlist + `redirect: 'manual'` to prevent SSRF (todos 043, 054)
* fix(intelligence): sanitize systemAppend against prompt injection before LLM (todo 044)
* fix(intelligence): use framework field in DeductionPanel, fix InsightsPanel double increment (todos 046, 047)
* fix(intelligence): settings export, hot-path cache, country-brief debounce (todos 048, 049, 050)
* fix(intelligence): i18n, FrameworkSelector note, stripThinkingTags dedup, UUID IDs (todos 052, 055, 056, 057)
  - i18n Analysis Frameworks settings section (en + fr locales; replace all hardcoded English strings with `t()` calls)
  - FrameworkSelector: replace the `panelId === 'insights'` hardcode with a `note?` option; both InsightsPanel and DailyMarketBriefPanel pass `note`
  - stripThinkingTags: remove the inline duplicate in summarize-article.ts and import from `_shared/llm`; add a "Strip unterminated" comment so tests can locate the section
  - replace `Date.now()` IDs for imported frameworks with `crypto.randomUUID()`
  - change "not supported in phase 1" phrasing to "not supported"
  - test: fix the summarize-reasoning "Fix 2" suite to read from llm.ts
  - test: add a premium-check stub and wire it into the redis-caching country intel brief `importPatchedTsModule` so the test can resolve the new import
* fix(security): address P1 review findings from PR #2386
  - premium-check: require `required: true` from validateApiKey so trusted browser origins (worldmonitor.app, Vercel previews, localhost) are not treated as PRO callers; fixes free-user bypass of the framework/systemAppend gate
  - llm: replace the weak sanitizeSystemAppend with sanitizeForPrompt from llm-sanitize.js; all callLlm callers now get model-delimiter and control-char stripping, not just a phrase blocklist
  - get-country-intel-brief: apply sanitizeForPrompt to contextSnapshot before injecting it into the user prompt; fixes unsanitized query-param injection
  - Closes todos 060, 061, 062 (P1 — blocked merge of #2386)
* chore(todos): mark P1 todos 060-062 complete
* fix(agentskills): address Greptile P2 review comments
  - hoist the ALLOWED_AGENTSKILLS_HOSTS Set to module scope (it was reallocated per request)
  - add a `res.type === 'opaqueredirect'` check alongside the 3xx guard; Edge Runtime returns `status = 0` for opaque redirects, so the status-range check alone is dead code
| status | priority | issue_id | tags | dependencies |
|---|---|---|---|---|
| complete | p1 | 061 | | |
`sanitizeSystemAppend` in `llm.ts` is weaker than `sanitizeForPrompt` — intel handlers exposed
Problem Statement
PR #2386 introduced `sanitizeSystemAppend()` as a private function in `server/_shared/llm.ts` to filter prompt-injection phrases before text reaches the LLM. However, it is weaker than the existing `sanitizeForPrompt()` in `server/_shared/llm-sanitize.js`:
- `sanitizeSystemAppend`: 12 hardcoded string-contains phrases; no regex, no control-char stripping, no model delimiter tokens
- `sanitizeForPrompt`: compiled regex patterns covering model delimiters (`<|im_start|>`, `[INST]`, `<system>`), role prefix injection, Unicode separators, control chars U+0000-U+001F
`deduct-situation.ts` and `get-country-intel-brief.ts` pass `framework` directly to `callLlm({ systemAppend: frameworkRaw })`, which applies only `sanitizeSystemAppend`. A PRO user (or any user, if todo 060 is not fixed) can inject model delimiter tokens that bypass the weaker filter. Only `summarize-article.ts` calls `sanitizeForPrompt` explicitly — creating an inconsistent defense surface.
Additionally, `sanitizeSystemAppend` strips the phrase `system:` anywhere in the text, which mangles legitimate PMESII-PT framework content like "Political system: governance legitimacy".
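To make that over-stripping concrete, here is a minimal illustration. The `naiveStrip` helper is hypothetical: it stands in for the string-contains blocklist behavior described above, not the actual `sanitizeSystemAppend` code.

```ts
// Hypothetical stand-in for a "remove the phrase anywhere" blocklist;
// the real sanitizeSystemAppend lives in server/_shared/llm.ts and is
// not reproduced here.
const naiveStrip = (text: string): string => text.split('system:').join('');

// Legitimate PMESII-PT framework content loses its label:
naiveStrip('Political system: governance legitimacy');
// → 'Political  governance legitimacy'
```

A context-blind substring match cannot distinguish an injected role prefix from ordinary prose containing the same characters, which is why a position-aware regex filter is needed.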
Findings
- `server/_shared/llm.ts:125-140` — `sanitizeSystemAppend` blocklist; misses delimiter tokens
- `server/worldmonitor/intelligence/v1/deduct-situation.ts:52` — `systemAppend: framework || undefined` → goes through the weak filter
- `server/worldmonitor/intelligence/v1/get-country-intel-brief.ts:99` — same issue
- `server/worldmonitor/news/v1/summarize-article.ts:127` — correctly uses `sanitizeForPrompt(systemAppend)` before prompt build
- Confirmed by: security-sentinel, agent-native-reviewer, architecture-strategist, code-simplicity-reviewer
Proposed Solutions
Option A: Use sanitizeForPrompt inside callLlm() (Recommended)
In `server/_shared/llm.ts`, import `sanitizeForPrompt` from `llm-sanitize.js` and replace the `sanitizeSystemAppend` call in `callLlm()` with it:

```ts
// @ts-expect-error — JS module
import { sanitizeForPrompt } from './llm-sanitize.js';

// ... inside callLlm, where systemAppend is appended:
const sanitized = sanitizeForPrompt(systemAppend);
```
Pros: All callLlm callers get the stronger filter automatically | Effort: Small | Risk: Low
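A fuller sketch of Option A's choke-point wiring, under stated assumptions: the real `callLlm` option shape in `server/_shared/llm.ts` may differ, and `sanitizeForPrompt` is stubbed here (delimiter stripping only) so the snippet is self-contained.

```ts
// Stub — the real sanitizeForPrompt (server/_shared/llm-sanitize.js)
// also strips role prefixes, Unicode separators, and control chars.
const sanitizeForPrompt = (s: string): string =>
  s.replace(/<\|im_(start|end)\|>/gi, '').replace(/\[\/?INST\]/gi, '');

// Assumed option shape; the actual callLlm options may carry more fields.
interface CallLlmOptions {
  system: string;
  userPrompt: string;
  systemAppend?: string;
}

// Sanitizing once at the point where systemAppend is folded into the
// system message means every callLlm caller inherits the stronger
// filter automatically, with no per-handler discipline required.
function buildSystemMessage(opts: CallLlmOptions): string {
  if (!opts.systemAppend) return opts.system;
  return `${opts.system}\n\n${sanitizeForPrompt(opts.systemAppend)}`;
}
```

With this in place, handlers like `deduct-situation.ts` could keep passing `systemAppend: framework` unchanged.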
Option B: Pre-sanitize in each intel handler before calling callLlm
In `deduct-situation.ts` and `get-country-intel-brief.ts`, call `sanitizeForPrompt(frameworkRaw)` before passing to `callLlm`:

```ts
// @ts-expect-error — JS module
import { sanitizeForPrompt } from '../../../_shared/llm-sanitize.js';

const framework = sanitizeForPrompt(frameworkRaw);
await callLlm({ ..., systemAppend: framework });
```
Pros: Explicit, mirrors summarize-article.ts pattern | Cons: Must be repeated in every new handler | Effort: Small | Risk: Low
Technical Details
- Files: `server/_shared/llm.ts`, `server/_shared/llm-sanitize.js`, `server/worldmonitor/intelligence/v1/deduct-situation.ts:52`, `server/worldmonitor/intelligence/v1/get-country-intel-brief.ts:99`
- PR: koala73/worldmonitor#2386
Acceptance Criteria
- All three LLM handlers apply `sanitizeForPrompt`-level sanitization to `framework`/`systemAppend`
- Model delimiter tokens (`<|im_start|>`, `[INST]`, etc.) are stripped from framework text
- `sanitizeSystemAppend` removed or merged into `sanitizeForPrompt` — no parallel paths
- The word "system:" alone does NOT get stripped from legitimate framework content
Work Log
- 2026-03-28: Identified during PR #2386 review by 4 independent agents