## Summary
Three HTML attribute-quoting typos in `README.md`:
- Line 2 — trailing `""` on the light-mode `<source srcset=...>` in the
top logo `<picture>`.
- Line 9 — same trailing `""` on the light-mode `<source srcset=...>`
for the hero image.
- Line 33 — missing closing `"` on `<img width="4 height="1" alt="">`
between the Twitter and Discord badges (was: `width="4 height="1"`, now:
`width="4" height="1"`).
Diff is 3 insertions / 3 deletions, one file, no code paths touched.
### Why
GitHub's markdown renderer tolerates these today, but:
- Strict HTML parsers / README mirrors (pypi page, docs sites, some
RSS/embed previews) can choke on the stray quotes or run attributes
together.
- `width="4 height="1"` is parsed as `width="4 height="` with a stray
`1"` token — the `height` attribute is effectively silently dropped.
- Costs nothing to have the raw source be valid HTML.
Pure docs fix, no behavior change, no new deps.
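The attribute-swallowing claim is easy to verify with a quick stdlib check (illustrative snippet, not part of the PR — Python's tolerant parser behaves like browsers here):

```python
from html.parser import HTMLParser

# Feed the malformed tag to Python's stdlib HTML parser and dump the
# attributes it actually recovers.
class AttrDump(HTMLParser):
    def __init__(self):
        super().__init__()
        self.attrs = []

    def handle_starttag(self, tag, attrs):
        self.attrs = attrs

p = AttrDump()
p.feed('<img width="4 height="1" alt="">')
names = [name for name, _ in p.attrs]
# `width` absorbs the text up to the next quote; no `height` attribute survives.
```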
### Test plan
- [x] `git diff --stat` → 1 file changed, 3 insertions(+), 3 deletions(-).
- [x] Visual inspection of rendered README on fork still looks
identical.
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Fix three malformed HTML attributes in `README.md` to make the HTML
valid across strict renderers and mirrors. Removed trailing quotes from
two light‑mode `<source srcset>` tags and fixed a missing quote in an
`<img>` tag’s `width`/`height` attributes; no behavior change.
<sup>Written for commit eb12fb9df6.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
---
_Part of open-source blockchain work from
[kcolbchain.com](https://kcolbchain.com) — maintained by [Abhishek
Krishna](https://abhishekkrishna.com). PR opened via [kcolbchain
contrib-bot](https://github.com/kcolbchain/kcolbchain.github.io/blob/master/deploy/contrib-bot/README.md)._
## Summary
- Bumps pinned `aiohttp` from `3.13.3` to `3.13.4` to patch
[CVE-2026-34515](https://github.com/aio-libs/aiohttp/security/advisories/GHSA-p998-jp59-783m)
(NTLMv2 credential leak via aiohttp's Windows static resource handler).
- **Not exploitable in this codebase.** aiohttp is used only as a client
(CDP polling in
`browser_use/browser/watchdogs/local_browser_watchdog.py:410`, plus two
examples). There is no `aiohttp.web` server, no `add_static`, no
Windows-only server code — none of the vulnerable surface is reached.
- Bumping purely to clear Dependabot alert #29. Safe patch-level
upgrade.
## Test plan
- [x] `uv sync --frozen` resolves cleanly
- [x] `uv run python -c "import aiohttp; import aiohttp.web"` on 3.13.4
- [ ] CI green
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Bumps `aiohttp` from 3.13.3 to 3.13.4 to patch CVE-2026-34515 in the
Windows static resource handler. We only use `aiohttp` as a client, so
the vuln isn’t reachable; this is a safe patch to clear the security
alert.
<sup>Written for commit 8e9c3488de.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
GHSA-p998-jp59-783m: aiohttp's static resource handler on Windows
can leak NTLMv2 credentials via UNC path traversal. Fixed in 3.13.4.
browser-use only uses aiohttp as a client (local CDP polling in
watchdogs/local_browser_watchdog.py, plus examples) — no web.Application
or add_static — so the vuln is not reachable here. Bump is prophylactic
to clear the Dependabot alert.
## Summary
- Bumps pinned `pypdf` from `6.9.1` to `6.10.2` to patch
[CVE-2026-40260](https://github.com/py-pdf/pypdf/security/advisories)
(XMP metadata XML entity expansion / billion-laughs RAM exhaustion).
- pypdf < 6.10.0 did not restrict recursive entity declarations in XMP
metadata DTDs, so a small crafted PDF could allocate gigabytes of memory
when opened.
- Call site: `browser_use/filesystem/file_system.py:549` uses
`pypdf.PdfReader` on PDFs the agent has downloaded — i.e. reachable from
attacker-controlled content, which makes this more than cosmetic.
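For scale, the classic "billion laughs" arithmetic (back-of-envelope only; the exact multiplier in CVE-2026-40260 may differ):

```python
# Each entity level in a billion-laughs DTD references the previous level
# ~10 times, so n levels of a small payload multiply its size by 10**n.
def expanded_size(payload_bytes: int, levels: int, refs_per_level: int = 10) -> int:
    return payload_bytes * refs_per_level ** levels

# nine levels of a 3-byte "lol" payload expand to ~3 GB in memory,
# from a PDF that is only a few hundred bytes on disk
bomb_bytes = expanded_size(3, 9)
```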
## Test plan
- [x] `uv sync --frozen` resolves cleanly
- [x] `uv run python -c "import pypdf; pypdf.PdfReader"` on 6.10.2
- [x] `tests/ci/test_file_system_*.py` (24 passed)
- [ ] CI green
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Update `pypdf` to 6.10.2 to fix CVE-2026-40260 (XMP XML entity expansion
“billion laughs” DoS). This secures our `PdfReader` usage in
`browser_use/filesystem/file_system.py` when opening agent-downloaded
PDFs.
- **Dependencies**
- Bump `pypdf` from `6.9.1` to `6.10.2`.
<sup>Written for commit 74ccf0ebd6.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
pypdf < 6.10.0 did not restrict recursive XML entity expansion when
parsing XMP metadata, allowing a crafted PDF to trigger a "billion
laughs"-style RAM exhaustion via PdfReader. Fixed upstream in 6.10.0.
Bumps to latest patch (6.10.2).
Relevant call site: browser_use/filesystem/file_system.py uses
pypdf.PdfReader on agent-downloaded PDFs, which is reachable from
attacker-controlled content.
## Summary
- Bumps pinned `pillow` from `12.1.1` to `12.2.0` to patch
[CVE-2026-40192](https://github.com/python-pillow/Pillow/security/advisories/GHSA-whj4-6x5x-4v2j)
(FITS GZIP decompression bomb).
- Pillow 10.3.0-12.1.1 did not bound GZIP-compressed reads when decoding
FITS images, enabling a memory-exhaustion DoS via a crafted FITS file.
Fixed upstream in 12.2.0.
- Practical exposure in this repo is minimal — all `Image.open` call
sites operate on PNG screenshot bytes from CDP or bundled static assets,
no FITS input path — but the pinned version was flagged by Dependabot
and the bump is a safe minor-version upgrade.
## Test plan
- [x] `uv sync --frozen` resolves cleanly
- [x] `uv run python -c "import PIL; from PIL import Image, ImageDraw,
ImageFont"` succeeds on 12.2.0
- [ ] CI (`tests/ci`) green
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Bumps `pillow` from 12.1.1 to 12.2.0 to patch CVE-2026-40192 (FITS GZIP
decompression bomb) and clear the security alert. No functional changes;
the app does not process FITS images.
<sup>Written for commit e7b0caac9f.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
GHSA-whj4-6x5x-4v2j: FITS GZIP decompression bomb in Pillow < 12.2.0.
Pillow 10.3.0-12.1.1 did not bound GZIP-compressed reads when decoding
FITS images, enabling a memory-exhaustion DoS via a crafted FITS file.
Fixed upstream in 12.2.0.
## Summary
- Wraps `registry.execute_action` in `asyncio.wait_for` with a
configurable per-action wall-clock cap (default 90s, override via
`BROWSER_USE_ACTION_TIMEOUT_S` env var or
`tools.act(action_timeout=...)`)
- On timeout, the action returns `ActionResult(error=...)` so the agent
records the step and recovers, instead of hanging silently
## The bug
Individual CDP calls like `Page.navigate()` have their own 20s timeouts
(`session.py:988`), but the surrounding event-bus plumbing — `await
event` and `await event.event_result(...)` — has none. When a cloud
browser's CDP WebSocket goes silent mid-session (common failure mode
against remote browser services: handshake completes, `/json/version`
returns 200, then the WebSocket stalls), every subsequent action
dispatched through `event_bus` hangs indefinitely.
Flow when this happens:
1. `agent._execute_initial_actions()` → `multi_act()` → navigate handler
dispatches `NavigateToUrlEvent`
2. `await event` / `event_result()` wait on a handler that can never
complete because CDP is silent
3. `tools.act()` doesn't raise, doesn't return — it hangs
4. The outer agent watchdog eventually calls `agent.stop()`
5. `_execute_initial_actions` exits via `InterruptedError` (silently
swallowed at `service.py:2557-2559`)
6. Main loop sees `state.stopped=True`, exits with 0 history entries →
**empty trace**
## Real-world evidence
A 170k-task collector run produced:
- **1,090 empty-history traces** (21% of all outputs)
- Of the 927 with duration timing, **100% hit the 240s outer watchdog**
— median 582s, max 2214s (~37 min)
- Cloud HTTP layer was clean throughout: all 200/201, 0 non-2xx
responses
- Concurrency was not the cause — fail rate actually increased at lower
parallelism, confirming the hang is per-session not back-pressure
Sample timeline from one of these failures (cloud session `78cc5c3e`):
```
14:48:47 Cloud browser created successfully
14:48:47 HTTP GET /json/version 200 OK
14:48:47 Connecting to wss://78cc5c3e...cdp4.browser-use.com/devtools/browser/...
(5 minutes of total silence)
14:53:53 Stopping cloud browser session (triggered by outer watchdog)
```
## The fix
Default cap is **90s**, which sits comfortably above `Page.navigate`'s
20s + lifecycle wait's 8s + internal CDP calls, but well below any
reasonable outer agent watchdog.
Configurable via:
- `BROWSER_USE_ACTION_TIMEOUT_S` env var (process-wide default)
- `tools.act(action_timeout=...)` parameter (per-call override)
## Test plan
- [x] New `tests/ci/test_action_timeout.py` — stubs
`registry.execute_action` with a coroutine that sleeps 30s, asserts
`tools.act()` returns within the 0.5s cap with an
`ActionResult(error=...)` containing `"timed out"`
- [x] New test also covers the fast-handler path — ensures no regression
when actions complete normally
- [x] Existing `test_multi_act_guards.py` and
`test_action_blank_page.py` tests still pass (all 15 green locally)
- [ ] Full CI suite
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Add per-action, per-step, and per-CDP-request timeouts to prevent hangs
and empty traces. Defaults are 180s for actions/steps and 60s for CDP;
both are configurable.
- **Bug Fixes**
- Tools: wrap `registry.execute_action` with `asyncio.wait_for`; default
180s via `BROWSER_USE_ACTION_TIMEOUT_S` or
`tools.act(action_timeout=...)`. Validate env and per-call override
values (empty/non-numeric/non-finite/<=0) with fallback; on timeout
return `ActionResult(error=...)`.
- Agent: wrap `_execute_initial_actions()` with `asyncio.wait_for` using
`settings.step_timeout`; on timeout log, set `state.last_result`,
increment `state.consecutive_failures`, and continue to the main loop.
- Browser: add `TimeoutWrappedCDPClient` around
`cdp_use.CDPClient.send_raw` with `asyncio.wait_for` (default 60s via
`BROWSER_USE_CDP_TIMEOUT_S` or constructor). Validate env/constructor
values (non-finite/<=0). Use in connect/reconnect; raise `TimeoutError`
on unresponsive CDP calls.
- Tests: add `tests/ci/test_action_timeout.py` (hung/fast paths and
invalid `action_timeout` override) and `tests/ci/test_cdp_timeout.py`
(real `send_raw` path). Update `tests/ci/browser/test_cdp_headers.py` to
patch `TimeoutWrappedCDPClient`.
<sup>Written for commit 32416bb48c.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
Three attribute quoting typos in README.md:
- Line 2: trailing `""` on the light-mode `<source srcset=...>` in the
top logo picture block.
- Line 9: same trailing `""` on the light-mode `<source srcset=...>`
for the hero image.
- Line 33: missing closing `"` on `<img width="4 height="1" alt="">`
between the Twitter and Discord badges.
GitHub tolerates these today, but they break stricter HTML parsers and
look off in raw source / mirrors. Pure docs fix, no behavior change.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
P2 codex comment on 9a09c4d7: the public `action_timeout` parameter on
tools.act() skipped the same defensive validation that the env-var path
already had. Passing nan made every action time out instantly; inf /
<=0 disabled the guard entirely. Either mode silently defeats the safety
this module exists to provide, especially for callers sourcing timeouts
from runtime config.
Extracted _coerce_valid_action_timeout() (pairs with
_parse_env_action_timeout) and routed the override through it. None / nan / inf /
non-positive all fall back to the env-derived default with a warning.
New test_act_rejects_invalid_action_timeout_override asserts the
fallback by passing bad values and verifying the fast handler actually
executes to completion (which wouldn't happen if nan → immediate
timeout or if inf → hang would leak through).
Two P2 comments from cubic on 9a09c4d7:
1. TimeoutWrappedCDPClient.__init__ trusted its cdp_request_timeout_s arg
blindly. nan / inf / <=0 would either make every CDP call time out
immediately (nan) or disable the guard (inf / <=0) — same defensive
gap we already fixed for the env-var path. Extracted _coerce_valid_timeout()
that mirrors _parse_env_cdp_timeout's validation; the constructor
now routes through it, so both entry points are equally safe.
2. test_send_raw_times_out_on_silent_server used an inline copy of the
wrapper logic rather than the real TimeoutWrappedCDPClient.send_raw.
A regression in the production method — e.g. accidentally removing
the asyncio.wait_for — would not fail the test. Rewrote to:
- Construct via __new__ (skip CDPClient.__init__'s WebSocket setup)
- unittest.mock.patch the parent CDPClient.send_raw with a hanging
coroutine
- Call the real TimeoutWrappedCDPClient.send_raw, which does
super().send_raw(...) → our patched stub
- Assert it raises TimeoutError within the cap
Also added test_send_raw_passes_through_when_fast (fast-path regression
guard) and test_constructor_rejects_invalid_timeout (validation for
fix #1). All 14 tests in the timeout suite pass locally.
Earlier commit 9a09c4d7 swapped the CDPClient construction in
browser_use/browser/session.py from the raw cdp_use.CDPClient to our
TimeoutWrappedCDPClient subclass. test_cdp_headers.py patches the
CDPClient symbol in session.py's namespace to assert headers/User-Agent
propagate — but since the code now instantiates TimeoutWrappedCDPClient,
the patch no longer intercepts the call and the mock.assert_called_once
check fails with 'Called 0 times'.
Point the patches at session.TimeoutWrappedCDPClient instead so the
assertions match what the code actually constructs. Header propagation
still works end-to-end because TimeoutWrappedCDPClient forwards
*args/**kwargs to super().__init__.
cdp_use.CDPClient.send_raw awaits a future that only resolves when the
browser sends a response with a matching message id. There is no timeout
on that await. Against the cloud browser service, the failure mode we
observed is: WebSocket stays alive at the TCP/keepalive layer (proxy
keeps pong-ing our pings), but the browser upstream is dead / unhealthy
and never sends any CDP response. send_raw's future never resolves, and
every higher-level timeout in browser-use (session.start's 15s connect
guard, agent.step_timeout, tools.act's action timeout) relies on
eventually getting a response — so they all wait forever too.
Evidence from a 170k-task collector run: 1,090 empty-history traces,
100% hit the 240s outer watchdog, median duration 582s, max 2214s, with
cloud HTTP layer clean throughout (all 200/201). One sample showed
/json/version returning 200 OK and then 5 minutes of total silence on
the WebSocket before forced stop — classic silent-hang.
Fix: add TimeoutWrappedCDPClient, a thin subclass of cdp_use.CDPClient
that wraps send_raw in asyncio.wait_for(timeout=cdp_request_timeout_s).
Any CDP method that doesn't respond within the cap raises plain
TimeoutError, which propagates through existing `except TimeoutError`
handlers in session.py / tools/service.py. Uses the same defensive env
parse pattern as BROWSER_USE_ACTION_TIMEOUT_S — rejects empty /
non-numeric / nan / inf / non-positive values with a warning fallback.
Default is 60s: generous for slow operations like Page.captureScreenshot
or Page.printToPDF on heavy pages, but well below the 180s step timeout
and any typical outer watchdog. Override via BROWSER_USE_CDP_TIMEOUT_S.
Wired into both CDPClient construction sites in session.py (initial
connect + reconnect path). All 17 existing real-browser tests
(test_action_blank_page, test_multi_act_guards) still pass.
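The subclass-and-wrap shape described above, sketched with a stand-in base class so it runs without the `cdp_use` dependency (the real class forwards to `cdp_use.CDPClient` and may differ in detail):

```python
import asyncio

class SilentCDPClient:
    """Stand-in for cdp_use.CDPClient whose browser never answers."""
    async def send_raw(self, method, params=None):
        await asyncio.sleep(3600)  # a future that never resolves, as in the bug

class TimeoutWrappedCDPClient(SilentCDPClient):
    def __init__(self, cdp_request_timeout_s: float = 60.0):
        self._cdp_timeout = cdp_request_timeout_s

    async def send_raw(self, method, params=None):
        try:
            return await asyncio.wait_for(
                super().send_raw(method, params), timeout=self._cdp_timeout
            )
        except asyncio.TimeoutError:
            # Plain TimeoutError propagates through existing handlers.
            raise TimeoutError(
                f"CDP call {method!r} got no response within {self._cdp_timeout}s"
            ) from None
```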
The main execution loop already wraps _execute_step with asyncio.wait_for
using settings.step_timeout (default 180s). But _execute_initial_actions,
which runs before the main loop, is unwrapped — if it hangs (e.g. the
first navigate stalls on a silent CDP WebSocket before the per-action
timeout can catch it), the agent blocks indefinitely without ever
entering the main loop. No step gets recorded, history stays empty, and
any outer watchdog eventually kills the run with zero diagnostic data.
Wrap _execute_initial_actions with the same step_timeout. On timeout,
record the failure in state.last_result / consecutive_failures and fall
through to the main execution loop so the agent can still attempt to
recover. InterruptedError (from an interrupting callback) is still
swallowed silently — same contract as before.
Paired with the per-action asyncio.wait_for added in tools/service.py,
this closes the last unprotected path in the pre-main-loop flow.
Closes #4533.
## Summary
Adds `browser-use record start <path>` / `record stop` / `record status`
to capture the current session as an MP4 via CDP screencasting — all the
underlying machinery (`Page.startScreencast`, `VideoRecorderService`)
already existed in the repo; this just exposes it on the CLI.
- `RecordingWatchdog` gains a public `start_recording(path, size?,
framerate?)` / `stop_recording() -> Path` / `is_recording` API. The
existing `BrowserConnectedEvent`/`BrowserStopEvent` handler is
refactored to use it, so profile-driven recording
(`record_video_dir=...`) is unchanged.
- New `record` subcommand wired through argparse (`skill_cli/main.py`),
the daemon dispatch allowlist, and `skill_cli/commands/browser.py`.
Works with `--session NAME` via the existing named-daemon
infrastructure. `record stop` prints the saved file path so it can be
captured programmatically (as requested in the issue).
- `CLIBrowserSession` intentionally skips watchdogs; the handler lazily
attaches `RecordingWatchdog` on first `record start` so non-recording
sessions pay no cost.
- Output is `.mp4` (libx264) — matches the existing encoder. Gated
behind the existing `browser-use[video]` optional extra; the CLI returns
a helpful error if deps are missing.
## Example
```bash
browser-use --session demo record start /tmp/demo.mp4
browser-use --session demo open https://example.com
browser-use --session demo click 3
browser-use --session demo record stop
# /tmp/demo.mp4
```
## Test plan
- [x] `uv run pytest -vxs tests/ci/test_action_record.py` — 6 new tests,
all pass (~23s). Covers: full start/stop cycle against a real headless
browser (produces a decodable MP4), double-start rejection,
stop-without-start returns None, profile-driven flow unchanged, argparse
parsing, dispatch registration.
- [x] `uv run pyright` on changed files — clean.
- [x] `uv run ruff check` / `ruff format` — clean.
- [x] Live end-to-end CLI smoke test: `record start` → `open` → `record
stop` produced a valid ~11 KB MP4.
- [ ] CI green.
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Adds `record start/stop/status` to the `browser-use` CLI to capture the
current session as an `.mp4` via CDP screencasting, with simple
start/stop APIs on `RecordingWatchdog` and reliable shutdown that
finalizes recordings.
- **New Features**
- `browser-use record start <path>`, `stop`, and `status`; `start`
supports `--framerate`, `stop` prints the saved path, and `status`
returns path, framerate, and size.
- Works with `--session NAME`; lazily attaches `RecordingWatchdog` so
non-recording sessions have no overhead.
- Outputs `.mp4` (libx264) via the existing encoder; gated behind
`browser-use[video]` with a clear error if missing.
- **Bug Fixes**
- `on_BrowserConnectedEvent` degrades gracefully when recording cannot
start (e.g., missing `browser-use[video]` or undetectable viewport) so
sessions still launch with `record_video_dir` set.
- Daemon shutdown now awaits `stop_recording()` (no timeout) and
finalizes any in-progress recording, preventing truncated MP4s.
<sup>Written for commit 44f7ead5cd.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
Two more issues from automated review on #4711:
1. (P2, Codex) float() accepts 'nan' and 'inf' — both parse successfully
and bypass the fallback path. 'nan' makes asyncio.wait_for time out
immediately for every action; 'inf' effectively disables the hang
guard. Extracted the parse into _parse_env_action_timeout() which
rejects non-finite and non-positive values (including 0 and negatives)
with a warning + fallback.
2. (P2, Cubic) The previous reload test left browser_use.tools.service
pinned at _DEFAULT_ACTION_TIMEOUT_S=45.0 (the last monkeypatch value),
which would leak into any later test in the same worker. Added a
_restore_service_module fixture that pops the env var and reloads
cleanly on teardown.
Expanded test coverage to include 'nan', 'NaN', 'inf', '-inf', '0', '-5'
alongside the existing '' / 'abc' cases — all fall back to 180s.
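A sketch of the defensive parse this describes (behavior mirrors _parse_env_action_timeout as summarized above; the actual helper in browser_use/tools/service.py may differ):

```python
import math
import os
import warnings

_FALLBACK_S = 180.0

def parse_env_action_timeout(var: str = "BROWSER_USE_ACTION_TIMEOUT_S") -> float:
    """Defensively parse a timeout env var; never raise at import time."""
    raw = os.environ.get(var, "")
    if not raw.strip():
        return _FALLBACK_S
    try:
        value = float(raw)
    except ValueError:
        warnings.warn(f"{var}={raw!r} is not numeric; using {_FALLBACK_S}s")
        return _FALLBACK_S
    # float() happily accepts 'nan' and 'inf', so reject non-finite and
    # non-positive values explicitly.
    if not math.isfinite(value) or value <= 0:
        warnings.warn(f"{var}={raw!r} must be finite and positive; using {_FALLBACK_S}s")
        return _FALLBACK_S
    return value
```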
`asyncio.wait_for(stop_recording(), timeout=5.0)` could expire while the
ffmpeg encoder was still flushing, leading the daemon's subsequent
`os._exit(0)` to kill the executor thread mid-write and leave the exact
truncated MP4 this hook was meant to prevent. `stop_recording()` already
offloads the blocking close to an executor, so awaiting it directly is
safe — and if it genuinely hangs, a stuck daemon is a clearer failure
signal than silent video corruption.
Verified end-to-end: start recording → `open` → `close` (no explicit
`record stop`) now produces a decodable MP4 with the captured frames.
Two issues flagged by automated review on #4711:
1. (P1, Codex) The 90s default was *below* the extract action's intentional
120s page_extraction_llm.ainvoke timeout (tools/service.py:1096,1172).
Slow-but-valid extractions against large pages would be truncated into
timeout errors — a regression. Raised default to 180s, which sits above
that 120s inner cap with grace.
2. (P2, Cubic + Codex) float(os.getenv('BROWSER_USE_ACTION_TIMEOUT_S', '90'))
ran at import time. An empty or non-numeric value (common with env
templating) raised ValueError and prevented browser_use.tools.service
from importing at all — turning a config typo into a process-wide
startup failure. Wrapped in try/except with a warning and fallback to
the hardcoded 180s default.
Tests:
- test_default_action_timeout_accommodates_extract_action — pins the
default >= 150s so future edits can't silently regress extract.
- test_malformed_env_timeout_does_not_break_import — reloads the module
with empty / non-numeric env values and asserts it falls back cleanly,
plus verifies a valid numeric env value still takes effect.
Individual CDP calls like Page.navigate() have their own 20s timeouts, but
the surrounding event-bus plumbing (await event, event_result()) does not.
When a cloud browser's CDP WebSocket goes silent mid-session, agent handlers
hang indefinitely — agents never emit a step, any outer watchdog eventually
fires, and the run returns with zero history.
Observed in practice: a 170k-task collector run produced 1,090 empty-history
traces (21% of output). 100% hit the 240s outer watchdog; median 582s, max
2214s. Cloud HTTP layer was clean (all 200/201) — hang was entirely in CDP.
Wrap registry.execute_action in asyncio.wait_for with a configurable
per-action cap (default 90s, BROWSER_USE_ACTION_TIMEOUT_S env var or
tools.act(action_timeout=...)). On timeout, the action returns
ActionResult(error=...) so the agent can record the step and recover.
New tests/ci/test_action_timeout.py covers both hung and fast handlers.
Existing tools.act tests (test_multi_act_guards, test_action_blank_page)
still pass.
- `on_BrowserConnectedEvent` now catches `RuntimeError` from
`start_recording()` so sessions with `record_video_dir` configured but
missing `[video]` extras (or a viewport that can't be sized) keep
starting — prior graceful-degradation behavior is restored.
- Lazy `RecordingWatchdog` in the CLI handler now calls
`attach_to_session()`, so `AgentFocusChangedEvent` / `BrowserStopEvent`
handlers are wired correctly if the session dispatches them.
- Daemon shutdown finalizes any in-progress recording before tearing the
browser down, preventing truncated MP4s on `close`, idle timeout, or
signal-driven exit.
- Added regression test that monkeypatches `start_recording` to raise and
asserts `on_BrowserConnectedEvent` swallows it without breaking startup.
Closes #4533.
- `RecordingWatchdog` gains public `start_recording(path, size?, framerate?)`,
`stop_recording() -> Path`, and `is_recording`; the existing
`BrowserConnectedEvent`/`BrowserStopEvent` path is refactored to use them,
so profile-driven recording behavior is unchanged.
- `browser-use record start <path>` / `record stop` / `record status`
subcommands wired through argparse, daemon dispatch, and the browser
command handler. `record stop` prints the saved file path so it can be
captured programmatically, matching the issue's requested UX. Works with
`--session NAME` via the existing named-daemon infrastructure.
- The CLI's `CLIBrowserSession` intentionally skips watchdogs; the handler
lazily instantiates `RecordingWatchdog` on first `record start` so CLI
recording doesn't pay the watchdog-setup cost for non-recording sessions.
- Output format is `.mp4` (libx264) since that's what the existing
`VideoRecorderService` encodes; optional dependency gate is unchanged
(`pip install "browser-use[video]"`).
- New `tests/ci/test_action_record.py` exercises the full stack against a
real headless browser + `pytest-httpserver`, verifying decodable MP4
output, double-start rejection, stop-without-start no-op, that the
existing `profile.record_video_dir` flow still works, and the argparse /
dispatch wiring.
## Summary
Fixes #4046
The skill CLI crashes on startup when `lmnr` is installed but internally
broken (e.g., Python 3.13 with certain package states). The import
raises `TypeError` instead of `ImportError`, which escapes the existing
handler and kills the entire application.
## Root Cause
`browser_use/observability.py` line 52 only catches `ImportError`, but a
broken `lmnr` installation can raise `TypeError` during its internal
initialization.
## Fix
Broadened `except ImportError` to `except (ImportError, TypeError)` so
the no-op fallback decorator is used in both failure modes. Chose
specific exceptions over `except Exception` to avoid masking unrelated
errors.
## Tests Added
New file: `tests/ci/test_observability.py` with 4 tests:
- `test_fallback_when_lmnr_not_installed` ImportError fallback
- `test_fallback_when_lmnr_raises_type_error` TypeError fallback
(regression for #4046)
- `test_observe_noop_decorator_works_on_sync_function` sync decorator
verification
- `test_observe_noop_decorator_works_on_async_function` async decorator
verification
All pass. Happy to adjust based on feedback!
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Fixes a startup crash in the skill CLI when `lmnr` is installed but
broken by catching `TypeError` during import and falling back to the
no-op observe decorator. Keeps the CLI running even if observability is
unavailable.
- **Bug Fixes**
- Catch `(ImportError, TypeError)` in `browser_use/observability.py` and
disable observability when `lmnr` fails to import.
<sup>Written for commit 80f798bc17.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
Resolves #4683
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Fixes input clearing so we don’t stop on JS clear failures, and
clarifies “clear-then-type” as the default. Adds a clear-only option via
text="" and a way to append via clear=False. Resolves #4683.
- **Bug Fixes**
- Removed premature returns in JS clear to enable fallback strategies.
- Aligned docs and help to the default behavior: clear-then-type;
`text=""` clears only; `clear=False` appends (`browser_type` tool,
`InputTextAction` schema, CLI `input`, SKILL.md).
<sup>Written for commit 4476f6e16e.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
## Problem
`list_chrome_profiles()` in `browser_use/skill_cli/utils.py` opens
Chrome's `Local State` JSON file without specifying an encoding:
```python
with open(local_state_path) as f:
```
On Windows with a non-UTF-8 default locale (e.g. Chinese GBK/CP936),
Python's `open()` uses the system code page. Chrome's `Local State` is
always UTF-8, so profile names containing non-ASCII characters (e.g.
Chinese `用户1`) are decoded as mojibake (`鐢ㄦ埛1`).
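The decode mismatch in isolation (illustrative; the issue's reported mojibake string is reproduced here, assuming Python's `gbk` codec matches the Windows CP936 behavior):

```python
raw = "用户1".encode("utf-8")   # what Chrome's Local State actually contains
garbled = raw.decode("gbk")     # what open() without encoding= yields on a GBK locale
correct = raw.decode("utf-8")   # what the fixed open(..., encoding='utf-8') yields
```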
## Fix
Add `encoding='utf-8'` to the `open()` call, consistent with how
`browser_use/browser/profile.py` already handles file reads (e.g. lines
949, 1062, 1108).
## Reproduction
On a Windows machine with Chinese system locale:
```python
profiles = Browser.list_chrome_profiles()
for p in profiles:
print(p["name"]) # Before fix: 鐢ㄦ埛1 (mojibake)
# After fix: 用户1 (correct)
```
Fixes #4673
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Read Chrome’s `Local State` in `list_chrome_profiles()` using UTF-8 to
prevent garbled profile names on Windows with non-UTF-8 locales.
Non-ASCII names (e.g., Chinese) now display correctly.
<sup>Written for commit 9c314e626e.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
On Windows with a non-UTF-8 default locale (e.g. Chinese GBK/CP936),
open() without an explicit encoding uses the system code page. Chrome's
Local State file is always UTF-8, so profile names containing non-ASCII
characters (e.g. Chinese '用户1') are decoded as mojibake.
Fixes #4673
…k guidance
When `browser-use connect` fails to discover a running Chrome, the error
now points to the correct `chrome://inspect/#remote-debugging` URL. The
SKILL.md also guides agents to prompt users with two options: enable
remote debugging or use managed Chromium with a Chrome profile.
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Fixes the connect failure UX by pointing to the correct Chrome remote
debugging page and adding clear fallback steps. The error now links to
`chrome://inspect/#remote-debugging`, and SKILL.md guides users to
either enable remote debugging or use managed Chromium with their Chrome
profile.
<sup>Written for commit d0fbf4c580.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
When `browser-use connect` fails to discover a running Chrome, the error
now points to the correct `chrome://inspect/#remote-debugging` URL. The
SKILL.md also guides agents to prompt users with two options: enable
remote debugging or use managed Chromium with a Chrome profile.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary
- pin the privileged stale workflow action to the immutable commit
behind `actions/stale@v9`
- preserve current behavior by keeping the same major tag target and
only removing tag-retarget drift
- leave existing workflow permissions and stale policy settings
unchanged
## Why
This scheduled workflow runs with `issues: write` and `pull-requests:
write`, so pinning the marketplace action to a commit reduces
supply-chain drift without changing workflow behavior.
## Validation
- `git diff --check`
- `python -c "from pathlib import Path; import yaml;
yaml.safe_load(Path('.github/workflows/stale-bot.yml').read_text(encoding='utf-8'));
print('yaml-parse-ok')"`
- a local PC Control review-coder run was queued against the staged diff;
it was still running at push time, so I treated it as degraded evidence
and completed a bounded manual preflight of the one-line workflow change
## Notes
- This intentionally stays scoped to one workflow line.
- I did not change permissions, timing, or stale policy behavior.
- `uv` / `pre-commit` and `actionlint` were not available on this
Windows shell, so I did not claim those checks ran locally.
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Pin `actions/stale` in the scheduled workflow to the immutable v9 commit
to eliminate tag drift and reduce supply-chain risk. Behavior,
permissions, schedule, and stale policy remain unchanged.
<sup>Written for commit b1f755d509.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
Thousands of users have attempted to call `close()`, so why not add it.
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Adds `close()` to `BrowserSession` as an alias for `stop()`, so
`session.close()` cleanly stops the session and matches common APIs.
<sup>Written for commit 76604913ad.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
Resolves #4610
<!-- This is an auto-generated description by cubic. -->
## Summary by cubic
Prefer the `playwright`-bundled Chromium over system Chrome by default
to make local launches consistent across machines and CI. Also switches
installation to `uvx playwright install chromium`.
- **Refactors**
- Reordered search priority: channel-specific (non-default) ->
Playwright Chromium -> system Chrome -> other native browsers ->
Playwright headless-shell.
- Unified pattern ordering to always prioritize the target browser
group, then fall back to others.
- Switched install command and error messages from `chrome` to
`chromium`.
<sup>Written for commit 03e2bc4da8.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
<!-- This is an auto-generated description by cubic. -->
## Summary by cubic
Gracefully handle MCP stdio disconnections by catching BrokenPipeError,
preventing crashes and shutting down the server cleanly.
- **Bug Fixes**
- Wrap `server.run` in a `try/except BrokenPipeError`.
- Log a warning and exit cleanly when the MCP client disconnects.
<sup>Written for commit df4e2f9f15.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
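The guard described above, as a minimal sketch (the `run_server` callable stands in for the real `server.run`; exact logging and exit handling in the MCP entry point may differ):

```python
import logging

def serve(run_server) -> int:
    """Run the MCP server loop; treat a closed stdio pipe as a clean shutdown."""
    try:
        run_server()
        return 0
    except BrokenPipeError:
        # The client went away; log and exit cleanly instead of crashing.
        logging.warning("MCP client disconnected (stdio pipe closed); shutting down")
        return 0
```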
Resolves #4620
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Fix pagination button detection to prefer semantic labels ("first",
"last") over shared glyphs ("«", "»") across sites. Removed those glyphs
from first/last patterns and reordered checks so first/last win before
next/prev, treating "«" and "»" only as prev/next fallbacks.
<sup>Written for commit 9ad4c63cdb.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
Resolves #4609
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Fixes #4609 by preventing substring leaks during sensitive data
redaction. Redaction now replaces longer secrets first and uses shared
utils for consistent behavior.
- **Bug Fixes**
- Redact longest matches first to avoid partial/substring leaks.
- Support both legacy flat and domain-scoped `sensitive_data` formats.
- Apply consistent redaction across message manager and views.
- **Refactors**
- Added `collect_sensitive_data_values` and `redact_sensitive_string` in
`browser_use/utils.py`.
- Replaced inline redaction logic in
`browser_use/agent/message_manager/service.py` and
`browser_use/agent/views.py`.
<sup>Written for commit 65f87b7fca.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
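The longest-first ordering is the crux of the substring-leak fix; a simplified sketch of the behavior (the real helper is `redact_sensitive_string` in `browser_use/utils.py` and may differ):

```python
def redact_longest_first(text: str, secrets: set[str], mask: str = "<secret>") -> str:
    """Replace longer secrets first so a secret that is a substring of
    another cannot leave a partial leak behind."""
    for secret in sorted(secrets, key=len, reverse=True):
        if secret:  # skip empty strings, which would loop uselessly
            text = text.replace(secret, mask)
    return text
```

With shortest-first ordering, redacting `{"hunter", "hunter2"}` in `pw=hunter2` would leave the trailing `2` exposed as `<secret>2`.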
Fixes #4626
<!-- This is an auto-generated description by cubic. -->
---
## Summary by cubic
Replace deprecated asyncio.get_event_loop/run_until_complete with
asyncio.run() in the CLI to restore Python 3.14 compatibility. Fixes
#4626 and prevents runtime errors in the `doctor` and `tunnel` commands.
<sup>Written for commit 99a8674214.
Summary will update on new commits.</sup>
<!-- End of auto-generated description by cubic. -->
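The migration in minimal form (the `doctor_checks` coroutine is a hypothetical stand-in for the async work the real `doctor` command performs):

```python
import asyncio

async def doctor_checks():
    # stand-in for the CLI's async diagnostics
    return "all checks passed"

# Deprecated pattern removed by this PR — emits DeprecationWarning and, on
# Python 3.14, fails when no event loop is running:
#   loop = asyncio.get_event_loop()
#   result = loop.run_until_complete(doctor_checks())

# Replacement: asyncio.run() creates and tears down the loop itself.
result = asyncio.run(doctor_checks())
```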