Commit Graph

2 Commits

24f23ba67a  Elie Habib  2026-04-12 12:00:05 +04:00
fix(digest): skip Groq + fix Telegram 400 from oversized messages (#3002)
* fix(digest): skip Groq (always 429) and fix Telegram 400 from oversized messages

Groq consistently rate-limits on digest runs, adding ~1s of latency
before falling through to OpenRouter. Skip it via the new callLLM
skipProviders option.
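
A minimal sketch of how a skipProviders option could thread through the
shared provider chain (the chain order is from the commits below; the
exact callLLM signature here is an assumption):

    type Provider = 'ollama' | 'groq' | 'openrouter';

    declare function callProvider(p: Provider, prompt: string): Promise<string>;

    // Sketch: walk the Ollama -> Groq -> OpenRouter chain, skipping any
    // provider the caller opted out of (e.g. Groq while it always 429s),
    // and falling through to the next provider on failure.
    async function callLLM(opts: {
      prompt: string;
      skipProviders?: Provider[]; // hypothetical option from this commit
    }): Promise<string | null> {
      const chain: Provider[] = ['ollama', 'groq', 'openrouter'];
      for (const provider of chain) {
        if (opts.skipProviders?.includes(provider)) continue;
        try {
          return await callProvider(provider, opts.prompt);
        } catch {
          // 429 / timeout: try the next provider in the chain.
        }
      }
      return null;
    }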

Telegram sendMessage rejects with a 400 when the digest text exceeds the
4096-character limit (30 stories + AI summary = ~5600 chars). Truncate
at the last newline before the limit and close any unclosed HTML tags so
that truncating mid-tag doesn't also cause a parse error. Log the
Telegram error response body so future 400s are diagnosable. (The
truncation logic is sketched after the third bullet below.)

* fix: strip partial HTML tag before rebalancing in sanitizeTelegramHtml

The previous order appended the closing tags first and only then
stripped the trailing partial tag, so truncation mid-tag (e.g.
'x <b>hello</') still produced malformed HTML. Reverse the order: strip
the partial tag, then close unclosed tags.
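
A sketch of the corrected ordering (the regexes and tag tracking here
are assumptions, not the repo's actual implementation):

    // Sketch: strip a trailing partial tag FIRST, then rebalance, so
    // 'x <b>hello</' becomes 'x <b>hello' and finally 'x <b>hello</b>'.
    function sanitizeTelegramHtml(text: string): string {
      // 1. Drop an unterminated tag fragment at the end ('</', '<b', ...).
      let out = text.replace(/<[^>]*$/, '');
      // 2. Track open tags and append whatever closing tags are missing.
      const open: string[] = [];
      const tagRe = /<(\/?)([a-z]+)[^>]*>/g;
      let m: RegExpExecArray | null;
      while ((m = tagRe.exec(out)) !== null) {
        if (m[1] === '/') {
          if (open[open.length - 1] === m[2]) open.pop();
        } else {
          open.push(m[2]);
        }
      }
      while (open.length > 0) out += `</${open.pop()}>`;
      return out;
    }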

* fix: re-check length after sanitize in truncateTelegramHtml

Closing tags appended by sanitize can push a near-limit message over 4096.
Recurse into truncation if sanitized output exceeds the limit.
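
Combining the three bullets, a truncateTelegramHtml sketch (4096 is
Telegram's documented sendMessage cap; the 64-char headroom fallback for
messages without newlines is an assumption):

    const TELEGRAM_LIMIT = 4096;

    // Sketch: cut at the last newline before the limit, sanitize (see
    // the sanitizeTelegramHtml sketch above), then re-check, because the
    // appended closing tags can push a near-limit message back over 4096.
    function truncateTelegramHtml(text: string, limit = TELEGRAM_LIMIT): string {
      if (text.length <= limit) return text;
      const slice = text.slice(0, limit);
      const cut = slice.lastIndexOf('\n');
      // No newline to cut at: hard-cut, leaving headroom for closing tags.
      const truncated = cut > 0 ? slice.slice(0, cut) : slice.slice(0, limit - 64);
      const sanitized = sanitizeTelegramHtml(truncated);
      return sanitized.length > limit
        ? truncateTelegramHtml(sanitized, limit) // re-check after sanitize
        : sanitized;
    }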

fa64e2f61f  Elie Habib  2026-04-09 21:35:26 +04:00
feat(notifications): AI-enriched digest delivery (#2876)
* feat(notifications): AI-enriched digest delivery (Phase 1)

Add personalized LLM-generated executive summaries to digest
notifications. When AI_DIGEST_ENABLED=1 (default), the digest cron
fetches user preferences (watchlist, panels, frameworks), generates a
tailored intelligence brief via Groq/OpenRouter, and prepends it to the
story list in both text and HTML formats.

New infrastructure:
- convex/userPreferences: internalQuery for service-to-service access
- convex/http: /relay/user-preferences endpoint (RELAY_SHARED_SECRET auth)
- scripts/lib/llm-chain.cjs: shared Ollama->Groq->OpenRouter provider chain
- scripts/lib/user-context.cjs: user preference extraction + LLM prompt formatting

The AI summary is cached (1h TTL) per stories+userContext hash. On LLM
failure the digest falls back to the raw story list (no regression). The
subject line changes to "Intelligence Brief" when an AI summary is
present.

* feat(notifications): per-user AI digest opt-out toggle

AI executive summary in digests is now optional per user via
alertRules.aiDigestEnabled (default true). Users can toggle it off in
Settings > Notifications > Digest > "AI executive summary".

Schema: added aiDigestEnabled to alertRules table
Backend: Convex mutations, HTTP relay, edge function all forward the field
Frontend: toggle in digest settings section with descriptive copy
Digest cron: skips LLM call when rule.aiDigestEnabled === false
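
In Convex the schema addition might look like this (only aiDigestEnabled
is from the commit; the other column is a placeholder):

    import { defineSchema, defineTable } from 'convex/server';
    import { v } from 'convex/values';

    export default defineSchema({
      alertRules: defineTable({
        userId: v.string(), // placeholder for the existing fields
        // New: per-user AI digest opt-out. Optional so pre-existing rows
        // stay valid; undefined is treated as enabled (default true).
        aiDigestEnabled: v.optional(v.boolean()),
      }),
    });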

* fix(notifications): address PR review — cache key, HTML replacement, UA

1. Add the variant to the AI summary cache key to prevent cross-variant poisoning
2. Use a replacer function in html.replace() to avoid $-pattern corruption
   from LLM output containing dollar amounts ($500M, $1T); see the sketch
   after this list
3. Use a service UA (worldmonitor-llm/1.0) instead of a Chrome UA for LLM calls
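
Point 2 is the standard fix for JavaScript's String.prototype.replace,
which reinterprets $-sequences ($1, $&, ...) inside a string
replacement; a replacer function's return value is inserted verbatim. A
small illustration (the selector regex is made up):

    const html = '<div class="digest">stories...</div>';
    const summary = 'Pipeline: $1T tracked, $500M closed this week.';

    // BAD: with a capture group in the pattern, '$1' inside the summary
    // is re-read as "capture group 1", so '$1T' is corrupted on insert.
    const corrupted = html.replace(/(<div class="digest">)/, `$1${summary}`);

    // GOOD: a replacer function returns the text literally.
    const safe = html.replace(/(<div class="digest">)/, (m) => `${m}${summary}`);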

* fix(notifications): skip AI summary without prefs + fix HTML regex

1. Return null from generateAISummary() when fetchUserPreferences()
   returns null, so users without saved preferences get the raw digest
   instead of a generic LLM summary (guard sketched below)
2. Fix the HTML replace regex to match the actual padding value (40px 32px 0)
   so the executive summary block is actually inserted into the email HTML
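
A sketch of the guard from point 1 (the function names are from the
commit; their signatures are assumed):

    declare function fetchUserPreferences(userId: string): Promise<object | null>;
    declare function callLLM(opts: { prompt: string }): Promise<string | null>;
    declare function buildPromptFromPrefs(prefs: object): string;

    // Sketch: no saved preferences means no personalized brief; return
    // null so the caller falls back to the raw digest instead of
    // generating a generic summary.
    async function generateAISummary(userId: string): Promise<string | null> {
      const prefs = await fetchUserPreferences(userId);
      if (prefs === null) return null;
      return callLLM({ prompt: buildPromptFromPrefs(prefs) });
    }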

* fix(notifications): channel check before LLM, omission-safe aiDigest, richer cache key

1. Move the channel fetch + deliverability check BEFORE AI summary generation
   so users with no verified channels don't burn LLM calls on every cron run
2. Only patch aiDigestEnabled when it is explicitly provided (not undefined),
   preventing stale frontend tabs from silently clearing an opt-out (see the
   patch sketch below)
3. Include severity, phase, and sources in the story hash for the cache key
   so the summary is invalidated when those fields change
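
Points 2 and 3 might look like the following sketch (the Story shape and
helper names are assumptions based on the commit text):

    import { createHash } from 'node:crypto';

    // 2. Omission-safe update: only forward aiDigestEnabled when the
    //    client explicitly sent it, so an omitted field never clears an
    //    existing opt-out. The result is passed to Convex's ctx.db.patch.
    function buildRulePatch(args: { aiDigestEnabled?: boolean }) {
      const patch: { aiDigestEnabled?: boolean } = {};
      if (args.aiDigestEnabled !== undefined) {
        patch.aiDigestEnabled = args.aiDigestEnabled;
      }
      return patch;
    }

    interface Story {
      id: string;
      severity: string;
      phase: string;
      sources: string[];
    }

    // 3. Include severity, phase, and sources in the hash so the cached
    //    summary is invalidated whenever any of those fields change.
    function storyHash(stories: Story[]): string {
      const material = stories
        .map((s) => [s.id, s.severity, s.phase, s.sources.join(',')].join('|'))
        .join('\n');
      return createHash('sha256').update(material).digest('hex');
    }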