Commit Graph

6 Commits

Elie Habib
24f23ba67a fix(digest): skip Groq + fix Telegram 400 from oversized messages (#3002)
* fix(digest): skip Groq (always 429) and fix Telegram 400 from oversized messages

Groq consistently rate-limits on digest runs, adding ~1s latency before
falling through to OpenRouter. Skip it via the new callLLM skipProviders option.

Telegram sendMessage rejects with 400 when digest text exceeds the 4096
char limit (30 stories + AI summary = ~5600 chars). Truncate at last
newline before the limit and close any unclosed HTML tags so truncation
mid-tag doesn't also cause a parse error. Log the Telegram error response
body so future 400s are diagnosable.

* fix: strip partial HTML tag before rebalancing in sanitizeTelegramHtml

The previous order appended closing tags first, then stripped the trailing
partial tag, so truncation mid-tag (e.g. 'x <b>hello</') still produced
malformed HTML. Reverse the order: strip partial tag, then close unclosed.

* fix: re-check length after sanitize in truncateTelegramHtml

Closing tags appended by sanitize can push a near-limit message over 4096.
Recurse into truncation if sanitized output exceeds the limit.
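Taken together, the three fixes above can be sketched roughly as follows. The function names come from the commits; the tag whitelist, the no-newline fallback cut, and all internals are assumptions, not the actual implementation:

```javascript
const TELEGRAM_LIMIT = 4096; // Telegram sendMessage hard limit

// Subset of tags Telegram's HTML parse mode accepts (assumed list).
const TAGS = ['b', 'i', 'u', 's', 'code', 'pre', 'a'];

function sanitizeTelegramHtml(text) {
  // 1) Strip a trailing partial tag FIRST ('x <b>hello</' -> 'x <b>hello'),
  //    so the rebalancing step only ever sees complete tags.
  let out = text.replace(/<[^>]*$/, '');
  // 2) Close any tags left open by the cut, innermost first.
  const open = [];
  const re = /<\/?([a-z]+)[^>]*>/g;
  let m;
  while ((m = re.exec(out)) !== null) {
    if (!TAGS.includes(m[1])) continue;
    if (m[0][1] === '/') {
      if (open[open.length - 1] === m[1]) open.pop();
    } else {
      open.push(m[1]);
    }
  }
  while (open.length) out += `</${open.pop()}>`;
  return out;
}

function truncateTelegramHtml(text) {
  if (text.length <= TELEGRAM_LIMIT) return text;
  const head = text.slice(0, TELEGRAM_LIMIT - 1);
  // Cut at the last newline before the limit; if there is none, fall back
  // to a hard cut that leaves room for appended closing tags (assumption).
  const cut = head.lastIndexOf('\n');
  const sanitized = sanitizeTelegramHtml(
    head.slice(0, cut > 0 ? cut : TELEGRAM_LIMIT - 64)
  );
  // Closing tags appended by sanitize can push a near-limit message back
  // over 4096, so re-check and recurse.
  return sanitized.length > TELEGRAM_LIMIT
    ? truncateTelegramHtml(sanitized)
    : sanitized;
}
```

The ordering matters: stripping the partial tag after rebalancing (the original bug) would leave `'x <b>hello</'` malformed even after closing tags were appended.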
2026-04-12 12:00:05 +04:00
Elie Habib
00320c26cf feat(notifications): proactive intelligence agent (Phase 4) (#2889)
* feat(notifications): proactive intelligence agent (Phase 4)

New Railway cron (every 6 hours) that detects signal landscape changes
and generates proactive intelligence briefs before events break.

Reads ~8 Redis signal keys (CII risk, GPS interference, unrest, sanctions,
cyber threats, thermal anomalies, weather, commodities), computes a
landscape snapshot, diffs against the previous run, and generates an
LLM brief when the diff score exceeds threshold.

Key features:
- Signal landscape diff with weighted scoring (new risk countries = 2pts,
  GPS zone changes = 1pt per zone, commodity movers >3% = 1pt)
- Server-side convergence detection: countries with 3+ signal types flagged
- First run stores baseline only (no false-positive brief)
- Delivers via all 5 channels (Telegram, Slack, Discord, Email, Webhook)
- PROACTIVE_INTEL_ENABLED=0 env var to disable
- Skips users without saved preferences or deliverable channels

Requires: Railway cron service configuration (every 6 hours)
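The weighted diff scoring above can be sketched as follows. The weights (2 points per new risk country, 1 per GPS zone change, 1 per >3% commodity mover) are from the commit message; the snapshot shape and threshold value are assumptions:

```javascript
const DIFF_THRESHOLD = 3; // hypothetical trigger threshold

// Score the change between the previous and current landscape snapshots.
function scoreLandscapeDiff(prev, curr) {
  let score = 0;
  // New risk countries: 2 points each.
  for (const c of curr.riskCountries) {
    if (!prev.riskCountries.includes(c)) score += 2;
  }
  // GPS interference zones: 1 point per zone added or removed.
  const prevZones = new Set(prev.gpsZones);
  const currZones = new Set(curr.gpsZones);
  for (const z of currZones) if (!prevZones.has(z)) score += 1;
  for (const z of prevZones) if (!currZones.has(z)) score += 1;
  // Commodity movers beyond +/-3%: 1 point each.
  for (const pctMove of Object.values(curr.commodityMoves)) {
    if (Math.abs(pctMove) > 3) score += 1;
  }
  return score;
}
```

A brief would then be generated only when `scoreLandscapeDiff(prev, curr) >= DIFF_THRESHOLD`.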

* fix(proactive): fetch all enabled rules + expand convergence to all signal types

1. Replace /relay/digest-rules (digest-mode only) with ConvexHttpClient
   query alertRules:getByEnabled to include ALL enabled rules, not just
   digest-mode users. Proactive briefs now reach real-time users too.
2. Expand convergence detection from 3 signal families (risk, unrest,
   sanctions) to all 7 (add GPS interference, cyber, thermal, weather).
   Track signal TYPES per country (Set), not event counts, so convergence
   means 3+ distinct signal categories, not 3+ events from one category.
3. Include signal type names in convergence zone output for LLM context
   and webhook payload.
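The corrected convergence rule (distinct signal types per country, not event counts) can be sketched like this; the event shape is an assumption:

```javascript
// Flag countries where 3+ DISTINCT signal categories converge. Using a Set
// per country means repeated events of one type count once.
function detectConvergence(events, minTypes = 3) {
  const byCountry = new Map();
  for (const { country, signalType } of events) {
    if (!byCountry.has(country)) byCountry.set(country, new Set());
    byCountry.get(country).add(signalType);
  }
  const zones = [];
  for (const [country, types] of byCountry) {
    if (types.size >= minTypes) {
      // Carry the type names through for LLM context / webhook payload.
      zones.push({ country, signalTypes: [...types].sort() });
    }
  }
  return zones;
}
```

This is exactly the distinction the fix makes: a country with five unrest events alone never converges, while one with risk + unrest + cyber does.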

* fix(proactive): check channels before LLM + deactivate stale channels

1. Move channel fetch + deliverability check BEFORE user prefs and LLM
   call to avoid wasting LLM calls on users with no verified channels
2. Add deactivateChannel() calls on 404/410/403 responses in all delivery
   helpers (Telegram, Slack, Discord, Webhook), matching the pattern in
   notification-relay.cjs and seed-digest-notifications.mjs
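A minimal sketch of the status classification behind that pattern (the status semantics are from the commit; the helper shape is an assumption):

```javascript
// Statuses that mean the channel is permanently dead: bot blocked (403),
// chat/webhook gone (404), or explicitly revoked (410).
const DEAD_STATUSES = new Set([403, 404, 410]);

// Decide what a delivery helper should do with an HTTP response status:
// succeed, deactivate the channel, or leave it for a retry next run.
function classifyDeliveryStatus(status) {
  if (status >= 200 && status < 300) return 'ok';
  if (DEAD_STATUSES.has(status)) return 'deactivate';
  return 'retry'; // e.g. 429 / 5xx: transient, keep the channel active
}
```

Each delivery helper would then call `deactivateChannel()` only on the `'deactivate'` outcome, so transient outages never disable a working channel.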

* fix(proactive): preserve landscape on transient failures + drop Telegram Markdown

1. Don't advance landscape baseline when channel fetch or LLM fails,
   so the brief retries on the next run instead of permanently suppressing
   the change window
2. Remove parse_mode: 'Markdown' from Telegram sendMessage to avoid 400
   errors from unescaped characters in LLM output (matches digest pattern)

* fix(proactive): only advance landscape baseline after successful delivery

* fix(proactive): abort on degraded signals + don't advance on prefs failure

1. Track loaded signal key count. Abort run if <60% of keys loaded
   to prevent false diffs from degraded Redis snapshots becoming
   the new baseline.
2. Don't advance landscape when fetchUserPreferences() returns null
   (could be transient failure, not just "no saved prefs"). Retries
   next run instead of permanently suppressing the brief.
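The degraded-snapshot guard in point 1 might look roughly like this (the 60% floor is from the commit; the key list and loader shape are assumptions):

```javascript
const MIN_LOAD_RATIO = 0.6;

// Count how many of the expected Redis signal keys actually loaded and
// abort below the floor, so a partial snapshot never becomes the baseline.
function checkSnapshotHealth(expectedKeys, loaded) {
  const loadedCount = expectedKeys.filter((k) => loaded[k] != null).length;
  const ratio = loadedCount / expectedKeys.length;
  // Aborting (rather than diffing around missing keys) prevents false
  // "new risk" diffs when an entire signal family failed to load.
  return { ok: ratio >= MIN_LOAD_RATIO, loadedCount, ratio };
}
```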

* fix(notifications): distinguish no-prefs from fetch-error in user-context

fetchUserPreferences() now returns { data, error } instead of bare null.
error=true means transient failure (retry next run, don't advance baseline).
data=null + error=false means user has no saved preferences (skip + advance).

Proactive script: retry on error, skip+advance on no-prefs.
Digest script: updated to destructure new return shape (behavior unchanged,
  both cases skip AI summary).
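The `{ data, error }` contract and how the proactive caller branches on it might be sketched as follows; the fetch internals (including the 404 mapping) are assumptions beyond what the commit states:

```javascript
// Produce the { data, error } shape: error=true means transient failure,
// data=null + error=false means the user genuinely has no saved prefs.
async function fetchUserPreferences(userId, fetchFn) {
  try {
    const res = await fetchFn(userId);
    if (res.status === 404) return { data: null, error: false }; // no prefs (assumed mapping)
    if (!res.ok) return { data: null, error: true };             // transient failure
    return { data: await res.json(), error: false };
  } catch {
    return { data: null, error: true }; // network failure: retry next run
  }
}

// Proactive caller: retry on error (don't advance baseline),
// skip-and-advance on a genuine no-prefs result.
function prefsOutcome({ data, error }) {
  if (error) return 'retry';
  if (data === null) return 'skip-and-advance';
  return 'use-prefs';
}
```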

* fix(proactive): address all Greptile review comments

P1: Add link-local (169.254) and 0.0.0.0 to isPrivateIP SSRF check
P1: Log channel-fetch failures (was silent catch{})
P2: Remove unused createHash import and BRIEF_TTL constant
P2: Remove dead ?? 'full' fallback (rule.variant validated above)
P2: Add HTTPS enforcement to sendSlack/sendDiscord (matching sendWebhook)
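The P1 SSRF fix can be sketched as an IPv4-only check with the two review additions; everything beyond the named ranges (the conservative non-IPv4 fallback, the surrounding URL validation) is an assumption:

```javascript
// Reject IPs that should never be reachable from a webhook delivery.
function isPrivateIP(ip) {
  const parts = ip.split('.').map(Number);
  if (parts.length !== 4 || parts.some((n) => Number.isNaN(n) || n < 0 || n > 255)) {
    return true; // not a plain IPv4 literal: treat as unsafe (assumption)
  }
  const [a, b] = parts;
  return (
    a === 10 ||                          // 10.0.0.0/8
    a === 127 ||                         // loopback
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168) ||          // 192.168.0.0/16
    (a === 169 && b === 254) ||          // link-local (P1 review fix)
    ip === '0.0.0.0'                     // unspecified (P1 review fix)
  );
}
```

Link-local matters for SSRF because 169.254.169.254 is the de facto cloud metadata endpoint.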
2026-04-10 08:08:27 +04:00
Elie Habib
fa64e2f61f feat(notifications): AI-enriched digest delivery (#2876)
* feat(notifications): AI-enriched digest delivery (Phase 1)

Add personalized LLM-generated executive summaries to digest
notifications. When AI_DIGEST_ENABLED=1 (default), the digest cron
fetches user preferences (watchlist, panels, frameworks), generates a
tailored intelligence brief via Groq/OpenRouter, and prepends it to the
story list in both text and HTML formats.

New infrastructure:
- convex/userPreferences: internalQuery for service-to-service access
- convex/http: /relay/user-preferences endpoint (RELAY_SHARED_SECRET auth)
- scripts/lib/llm-chain.cjs: shared Ollama->Groq->OpenRouter provider chain
- scripts/lib/user-context.cjs: user preference extraction + LLM prompt formatting

AI summary is cached (1h TTL) per stories+userContext hash. Falls back
to raw digest on LLM failure (no regression). Subject line changes to
"Intelligence Brief" when AI summary is present.

* feat(notifications): per-user AI digest opt-out toggle

AI executive summary in digests is now optional per user via
alertRules.aiDigestEnabled (default true). Users can toggle it off in
Settings > Notifications > Digest > "AI executive summary".

Schema: added aiDigestEnabled to alertRules table
Backend: Convex mutations, HTTP relay, edge function all forward the field
Frontend: toggle in digest settings section with descriptive copy
Digest cron: skips LLM call when rule.aiDigestEnabled === false

* fix(notifications): address PR review — cache key, HTML replacement, UA

1. Add variant to AI summary cache key to prevent cross-variant poisoning
2. Use replacer function in html.replace() to avoid $-pattern corruption
   from LLM output containing dollar amounts ($500M, $1T)
3. Use service UA (worldmonitor-llm/1.0) instead of Chrome UA for LLM calls
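Point 2 is worth illustrating, since the failure mode is subtle: `String.prototype.replace` interprets `$&`, `$'`, and similar sequences in a *string* replacement as patterns, so LLM output containing dollar signs can silently corrupt the email HTML. A replacer function's return value is inserted verbatim. The slot markup below is hypothetical:

```javascript
// Demonstration of the hazard: "$'" in a string replacement expands to the
// text AFTER the match, garbling the result.
const garbled = 'abc'.replace('b', "x$'y"); // → 'axcyc', not "ax$'yc"

// The fix: return the summary from a function so no $-patterns are expanded.
const html = '<div class="summary-slot"></div>'; // hypothetical email slot
const summary = "Funding rose to $500M (here $' and $& stay literal).";
const out = html.replace(
  /(<div class="summary-slot">)(<\/div>)/,
  (_, open, close) => open + summary + close
);
```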

* fix(notifications): skip AI summary without prefs + fix HTML regex

1. Return null from generateAISummary() when fetchUserPreferences()
   returns null, so users without saved preferences get raw digest
   instead of a generic LLM summary
2. Fix HTML replace regex to match actual padding value (40px 32px 0)
   so the executive summary block is inserted in email HTML

* fix(notifications): channel check before LLM, omission-safe aiDigest, richer cache key

1. Move channel fetch + deliverability check BEFORE AI summary generation
   so users with no verified channels don't burn LLM calls every cron run
2. Only patch aiDigestEnabled when explicitly provided (not undefined),
   preventing stale frontend tabs from silently clearing an opt-out
3. Include severity, phase, and sources in story hash for cache key
   so the summary invalidates when those fields change
2026-04-09 21:35:26 +04:00
Elie Habib
751820c1cc feat(prefs): Phase 0 + 1 — sync primitives, Convex schema & preferences API (#2505) 2026-03-29 16:02:56 +04:00
Elie Habib
79ec6e601b feat(prefs): Phase 0 — sync primitives and notification scaffolding (#2503) 2026-03-29 13:57:34 +04:00
Elie Habib
3702463321 Add thermal escalation seeded service (#1747)
* feat(thermal): add thermal escalation seeded service

Cherry-picked from codex/thermal-escalation-phase1 and retargeted
to main. Includes thermal escalation seed script, RPC handler,
proto definitions, bootstrap/health/seed-health wiring, gateway
cache tier, client service, and tests.

* fix(thermal): wire data-loader, fix typing, recalculate summary

Wire fetchThermalEscalations into data-loader.ts with panel forwarding,
freshness tracking, and variant gating. Fix seed-health intervalMin from
90 to 180 to match 3h TTL. Replace 8 as-any casts with typed interface.
Recalculate summary counts after maxItems slice.

* fix(thermal): enforce maxItems on hydrated data + fix bootstrap keys

Codex P2: hydration branch now slices clusters to maxItems before
mapping, matching the RPC fallback behavior.

Also add thermalEscalation to bootstrap.js BOOTSTRAP_CACHE_KEYS and
SLOW_KEYS (was lost during conflict resolution).

* fix(thermal): recalculate summary on sliced hydrated clusters

When maxItems truncates the cluster array from bootstrap hydration,
the summary was still using the original full-set counts. Now
recalculates clusterCount, elevatedCount, spikeCount, etc. on the
sliced array, matching the handler's behavior.
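The slice-then-recalculate behavior can be sketched as follows; the cluster shape and count fields are assumptions beyond the names the commit uses:

```javascript
// Slice clusters to maxItems FIRST, then derive the summary from the sliced
// array, so counts never reflect clusters the panel won't show.
function summarizeClusters(clusters, maxItems) {
  const sliced = clusters.slice(0, maxItems);
  return {
    clusters: sliced,
    summary: {
      clusterCount: sliced.length,
      elevatedCount: sliced.filter((c) => c.level === 'elevated').length,
      spikeCount: sliced.filter((c) => c.level === 'spike').length,
    },
  };
}
```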
2026-03-17 14:24:26 +04:00