- #962: WebSocket auth now URL-decodes the token before comparison. API keys
  containing `+`, `/`, or `=` characters (base64-derived) now work correctly
  for WS streaming.
- #939: Clippy bool_comparison lint fixed in web_fetch.rs test.
- #983: Dockerfile adds perl and make for openssl-sys compilation on slim-bookworm.
- #987: Nextcloud chat poll endpoint corrected from `api/v4/room/{token}/chat` to
  `api/v1/chat/{token}/`, matching the send endpoint.
- #970: Moonshot Kimi K2/K2.5 models now redirect to api.moonshot.cn/v1 instead of
api.moonshot.ai/v1. The .ai domain only serves legacy moonshot-v1-* models.
- #882: Closed as resolved by v0.5.7 custom hand persistence fix (#984).
- #926: Verified already fixed (rmcp builder API from previous session).
All tests passing. 8 files changed, 75 insertions.
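The #962 decode step can be sketched as follows. This is a minimal sketch with illustrative function names (`percent_decode`, `ws_token_matches` are not the actual identifiers), not the real implementation:

```rust
// Percent-decode the token from the WS query string before comparing it to
// the configured key, so base64-derived keys containing '+', '/', or '='
// survive client percent-encoding.
fn hex_val(b: u8) -> Option<u8> {
    match b {
        b'0'..=b'9' => Some(b - b'0'),
        b'a'..=b'f' => Some(b - b'a' + 10),
        b'A'..=b'F' => Some(b - b'A' + 10),
        _ => None,
    }
}

fn percent_decode(s: &str) -> String {
    let bytes = s.as_bytes();
    let mut out = Vec::with_capacity(bytes.len());
    let mut i = 0;
    while i < bytes.len() {
        // A '%' followed by two hex digits decodes to one byte.
        if bytes[i] == b'%' && i + 2 < bytes.len() {
            if let (Some(hi), Some(lo)) = (hex_val(bytes[i + 1]), hex_val(bytes[i + 2])) {
                out.push(hi * 16 + lo);
                i += 3;
                continue;
            }
        }
        out.push(bytes[i]);
        i += 1;
    }
    String::from_utf8_lossy(&out).into_owned()
}

fn ws_token_matches(received: &str, expected_key: &str) -> bool {
    percent_decode(received) == expected_key
}
```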
Allow certain Discord channels to receive responses without requiring an @mention,
similar to the Hermes gateway's free_response_channels.
- Add free_response_channels field to DiscordConfig
- Add free_response_channels method to ChannelBridgeHandle trait
- Implement free_response_channels in KernelBridgeAdapter
- Modify dispatch_message to bypass mention_only policy for free channels
- Add test for free_response_channels deserialization
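The dispatch change can be sketched roughly as below; the struct and field shapes are assumptions for illustration, not the actual `DiscordConfig` definition:

```rust
// mention_only is bypassed for channel IDs listed in free_response_channels.
struct DiscordConfig {
    mention_only: bool,
    free_response_channels: Vec<u64>,
}

fn should_dispatch(cfg: &DiscordConfig, channel_id: u64, was_mentioned: bool) -> bool {
    if !cfg.mention_only || was_mentioned {
        return true;
    }
    // Free-response channels answer even without an @mention.
    cfg.free_response_channels.contains(&channel_id)
}
```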
The model-aware assistant strip caused infinite agent loops for Claude.
Reverted to empty-only strip which is safe for all models. The
Telegram prefill issue needs to be fixed in the agent loop, not the
driver.
Remaining openai.rs changes:
- strip_trailing_empty_assistant: strips truly empty trailing messages
- Skip tool calls with empty ID or name from streaming responses
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The Copilot proxy for Claude enforces Anthropic's rule that conversations
must end with a user message. For Claude models, strip any trailing
assistant message without tool_calls (including non-empty ones). For
other models, only strip truly empty assistant messages.
This fixes the 'assistant message prefill not supported' error seen
in Telegram and other channel adapters when using Claude via Copilot,
without causing infinite agent loops for other models.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The aggressive strip (all trailing assistant messages) caused infinite
agent loops by removing non-empty responses the agent loop needs.
Reverted to only strip truly empty assistant messages (no content,
no tool_calls). The Telegram prefill issue needs a different fix.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Strengthens the strip to remove any trailing assistant message (not
just empty ones) when it has no tool_calls. The Copilot proxy for
Claude rejects conversations ending with any assistant message as
unsupported prefill. This fixes the Telegram bot channel where the
agent loop appends an assistant message with content.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The Copilot API proxy can sometimes deliver streaming tool call chunks
without a function name, resulting in empty-name tool calls stored in
conversation history. When replayed to the API, these cause
'tool call must have a tool call ID and function name' errors.
Skip malformed tool calls (empty ID or name) during streaming response
finalization and log a warning.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
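A sketch of the guard described above; the `ToolCall` shape is an assumption and the real finalization path differs:

```rust
// During streaming finalization, drop calls missing an ID or function name
// rather than persisting them to conversation history, and log a warning.
#[derive(Debug, PartialEq)]
struct ToolCall {
    id: String,
    name: String,
    arguments: String,
}

fn finalize_tool_calls(calls: Vec<ToolCall>) -> Vec<ToolCall> {
    calls
        .into_iter()
        .filter(|c| {
            let well_formed = !c.id.is_empty() && !c.name.is_empty();
            if !well_formed {
                eprintln!("warning: skipping malformed tool call: {:?}", c);
            }
            well_formed
        })
        .collect()
}
```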
The Copilot API proxy rejects conversations ending with an empty
assistant message as unsupported 'assistant message prefill' when
proxying Claude and Gemini models. GPT models are unaffected.
Strips trailing empty assistant messages (no content, no tool calls)
before sending the request. Applied in both complete() and stream()
paths.
Also reverts unused fixup_request method from copilot.rs since the
fix belongs in the OpenAI driver layer.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
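The empty-only strip can be sketched like this; the `Message` shape is an assumption, not the driver's actual type:

```rust
// Pop trailing assistant messages that have no content and no tool calls
// before sending the request, leaving non-empty responses untouched.
struct Message {
    role: &'static str,
    content: String,
    tool_calls: Vec<String>,
}

fn strip_trailing_empty_assistant(messages: &mut Vec<Message>) {
    while matches!(
        messages.last(),
        Some(m) if m.role == "assistant" && m.content.is_empty() && m.tool_calls.is_empty()
    ) {
        messages.pop();
    }
}
```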
Bug: in activate_hand(), kill_agent() is called on the existing agent
BEFORE the new agent is spawned. kill_agent() invokes
cron_scheduler.remove_agent_jobs() which deletes all cron jobs from memory
AND persists [] to cron_jobs.json. The reassign_agent_jobs() call further
down was meant to migrate jobs from old to new (per #461), but it always
runs as a no-op because the jobs are already gone — the order of
operations defeats the fix.
Symptom: every daemon restart silently destroys cron jobs for hand-style
agents. cron_jobs.json is rewritten as []. /api/cron/jobs returns empty.
No error message.
Fix: snapshot the cron jobs into a local Vec BEFORE kill_agent (same
pattern as saved_triggers above), then re-add them under the new agent_id
AFTER spawn_agent_with_parent. Runtime state (next_run, last_run) is
reset so jobs get a fresh start. The existing reassign_agent_jobs()
block is kept as a defensive safety net but is now redundant in the
common path.
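The corrected ordering can be sketched as follows; the types and method bodies here are illustrative stand-ins, not the kernel's actual scheduler:

```rust
// Snapshot the agent's cron jobs before kill_agent() wipes them, then
// re-add them under the new agent id with runtime state reset.
#[derive(Clone)]
struct CronJob {
    agent_id: String,
    spec: String,
    next_run: Option<u64>,
}

struct Scheduler {
    jobs: Vec<CronJob>,
}

impl Scheduler {
    fn jobs_for(&self, agent_id: &str) -> Vec<CronJob> {
        self.jobs.iter().filter(|j| j.agent_id == agent_id).cloned().collect()
    }
    // Stand-in for what kill_agent() triggers: removes jobs from memory
    // (the real method also persists the shorter list to cron_jobs.json).
    fn remove_agent_jobs(&mut self, agent_id: &str) {
        self.jobs.retain(|j| j.agent_id != agent_id);
    }
    fn add_job(&mut self, job: CronJob) {
        self.jobs.push(job);
    }
}

fn activate_hand(sched: &mut Scheduler, old_id: &str, new_id: &str) {
    let saved_jobs = sched.jobs_for(old_id); // snapshot BEFORE the kill
    sched.remove_agent_jobs(old_id);         // kill_agent() side effect
    // ... spawn_agent_with_parent(new_id) would run here ...
    for mut job in saved_jobs {
        job.agent_id = new_id.to_string();
        job.next_run = None; // fresh start for runtime state
        sched.add_job(job);
    }
}
```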
Verified with cargo check -p openfang-kernel --lib (clean compile, no
warnings).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Copilot LLM driver was broken: it expected users to provide a
GITHUB_TOKEN env var, but no standard token type (PAT, gh CLI token)
works with the Copilot token exchange endpoint.
Changes:
- Full rewrite of copilot.rs with OAuth device flow using Copilot's
client ID (Iv1.b507a08c87ecfe98)
- Three-layer token chain: OAuth device grant -> ghu_ user token (8h) ->
  Copilot API token (30min), with automatic caching and refresh
- Dynamic model fetching from Copilot API on daemon startup and on
model_not_supported error
- Init wizard: TUI auth screen with device code display, live model
picker after authentication
- set-key command: interactive device flow for github-copilot provider
- Doctor: detects Copilot auth via persisted token file
- Removed static Copilot model entries (now fetched dynamically)
- Simplified driver instantiation (no env vars needed)
Tested end-to-end with Copilot Enterprise: auth, token exchange,
43 models fetched, completions working with claude-opus-4.6-1m.
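The caching behavior in the token chain can be sketched as below; the shapes and expiry handling are assumptions, and the same pattern would apply at each layer of the chain:

```rust
// A cached token is reused until it expires, then re-fetched via the
// exchange endpoint. `fetch` returns (token, unix-seconds expiry).
struct CachedToken {
    value: String,
    expires_at: u64, // unix seconds
}

fn get_token(
    cache: &mut Option<CachedToken>,
    now: u64,
    fetch: impl FnOnce() -> (String, u64),
) -> String {
    match cache {
        Some(t) if now < t.expires_at => t.value.clone(),
        _ => {
            // Cache miss or expired: hit the exchange endpoint and store.
            let (value, expires_at) = fetch();
            *cache = Some(CachedToken { value: value.clone(), expires_at });
            value
        }
    }
}
```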
Closes #1014
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Replace `×` close icons with SVG X icons across all modals and panels
- Replace `✓` checkmarks with SVG check icons in setup checklist, step indicators, and success states
- Replace `•` and `·` separators with small SVG dot icons for better visual consistency
- Improve icon sizing, alignment, and stroke properties for crisp rendering
Signed-off-by: 诺墨 <normal@normalcoder.com>
- Add `--text-on-accent` CSS variable (white) for text on accent backgrounds
- Replace hardcoded `var(--bg-primary)` with `--text-on-accent` across components
- Update dark theme surface, border, and text color tokens for better contrast
- Adjust dark mode surface colors (`#1F1D1C` → `#242221`) for improved depth
- Refine border and text-muted colors for better visual hierarchy
Signed-off-by: 诺墨 <normal@normalcoder.com>
- Move Overview to first navigation item for better UX
- Consolidate `Chat` into `Agents` section
- Increase chevron and section title font sizes for improved readability
- Adjust section title padding for better visual spacing
Signed-off-by: 诺墨 <normal@normalcoder.com>
When an LLM produces text alongside tool_use blocks (e.g., a chat
message followed by memory_store calls), the text was lost if the
final EndTurn iteration returned empty text. The empty-response guard
would activate and return "[Task completed — the agent executed tools
but did not produce a text summary.]" even though the agent DID
produce text in an earlier iteration.
This is a common pattern when agents are instructed to respond to
users AND persist state via memory_store in the same turn.
Fix: accumulate text content from all ToolUse iterations. When the
final EndTurn has empty text, use accumulated text as fallback before
triggering the empty-response guard.
Applied to both sync and streaming agent loop paths.
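The fallback can be sketched like this; the iteration type is a hypothetical stand-in for the agent loop's actual state, and the guard string is the one quoted above:

```rust
// Collect text emitted alongside tool_use iterations; if the final
// EndTurn text is empty, fall back to the accumulated text before
// triggering the empty-response guard.
enum Step {
    ToolUse { text: String },
    EndTurn { text: String },
}

fn final_text(steps: &[Step]) -> String {
    let mut accumulated = String::new();
    for step in steps {
        match step {
            Step::ToolUse { text } if !text.is_empty() => {
                if !accumulated.is_empty() {
                    accumulated.push('\n');
                }
                accumulated.push_str(text);
            }
            // A non-empty final response wins outright.
            Step::EndTurn { text } if !text.is_empty() => return text.clone(),
            _ => {}
        }
    }
    if accumulated.is_empty() {
        "[Task completed — the agent executed tools but did not produce a text summary.]".to_string()
    } else {
        accumulated
    }
}
```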
Built-in templates and the spawn wizard both hardcoded
provider = "groq" / model = "llama-3.3-70b-versatile" in the
manifest TOML sent to the API. The kernel's default_model overlay
only activates when provider/model are empty or "default", so
hardcoded values bypassed the user's configured default entirely.
Fixes #967
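The overlay rule described above can be sketched per-field; the function name is illustrative, not the kernel's actual code:

```rust
// The kernel's default only wins when the manifest leaves the field empty
// or set to "default", so templates must not hardcode provider/model.
fn apply_default(manifest_value: &str, kernel_default: &str) -> String {
    if manifest_value.is_empty() || manifest_value == "default" {
        kernel_default.to_string()
    } else {
        // Hardcoded values like "groq" / "llama-3.3-70b-versatile" win,
        // which is exactly the bug: the user's default never applies.
        manifest_value.to_string()
    }
}
```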
Since v0.5.4, native-tls uses features = ["vendored"], which compiles OpenSSL from source via openssl-sys. This requires perl for the OpenSSL Configure script, but perl was missing from the flake's buildInputs.
Mirrors the Dockerfile fix in #952. Fixes #894.
GHCR defaults new packages to private, so docker pull
ghcr.io/rightnow-ai/openfang:... returned 401 for unauthenticated
users despite the repo being public.
Two changes to the docker job in release.yml:
1. Add OCI labels to the build — links the package to the repo so
GHCR associates it correctly, and is standard practice for
container images.
2. After each push, call the GitHub Packages API (PATCH
/orgs/RightNow-AI/packages/container/openfang) to set visibility
to public. The workflow already holds packages: write, which is
the required scope. This runs on every release tag so visibility
cannot regress if the package is ever reset.
Fixes #961
- Add NOVITA_BASE_URL constant to model_catalog.rs
- Register novita provider in provider_defaults() with OpenAI-compatible endpoint
- Add novita to known_providers() list
- Add novita to detect_available_provider() auto-detection probe
- Document NOVITA_API_KEY in .env.example
Novita AI uses an OpenAI-compatible API at https://api.novita.ai/openai/v1
Default model: moonshotai/kimi-k2.5
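An illustrative shape for the registration; the struct and field names are assumptions, while the URL, env var, and default model come from the text above:

```rust
// One entry in provider_defaults() for an OpenAI-compatible provider.
struct ProviderDefault {
    name: &'static str,
    base_url: &'static str,
    default_model: &'static str,
    api_key_env: &'static str,
}

const NOVITA: ProviderDefault = ProviderDefault {
    name: "novita",
    base_url: "https://api.novita.ai/openai/v1",
    default_model: "moonshotai/kimi-k2.5",
    api_key_env: "NOVITA_API_KEY",
};
```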